Ann R. Thryft

June 20, 2012

Video: Robots Recognize Human Gestures

Researchers at the A*Star Institute for Infocomm Research in Singapore have created software that lets robots recognize human gestures quickly and accurately, with minimal training.

The research team, headed by Rui Yan, adapted a localist attractor network (LAN), a cognitive memory model, to develop a new system for communicating with social robots. Training robots to recognize and respond to human gestures is difficult because, to a robot, simple gestures such as waving a hand or pointing may look very different when made by different individuals. Yet this ability will be a key feature of robots that interact with humans.
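The localist attractor idea can be pictured with a short sketch: each known gesture is stored as an attractor (a prototype feature vector), and an incoming observation is iteratively pulled toward the attractor it most resembles until the network settles on one class. The Python code below is a minimal illustration of that general settling scheme, not the team's implementation; the parameter values and annealing schedule are assumptions.

```python
import numpy as np

def lan_settle(observation, prototypes, sigma=1.0, alpha=1.0,
               alpha_decay=0.9, sigma_decay=0.95, steps=50):
    """Settle a localist attractor network onto one stored prototype.

    observation : (d,) feature vector for the incoming gesture
    prototypes  : (k, d) array, one attractor per known gesture class
    Returns the index of the winning attractor (the gesture class).
    """
    y = observation.astype(float).copy()
    for _ in range(steps):
        # Responsibility of each attractor: Gaussian similarity to the
        # current state (shifted by the minimum distance for stability).
        d2 = np.sum((prototypes - y) ** 2, axis=1)
        q = np.exp(-(d2 - d2.min()) / (2.0 * sigma ** 2))
        q /= q.sum()
        # Pull the state toward the responsibility-weighted mix of
        # attractors, blended with the external observation.
        y = alpha * observation + (1.0 - alpha) * (q @ prototypes)
        # Anneal: weaken the observation's pull and sharpen the basins
        # so the state commits to a single attractor.
        alpha *= alpha_decay
        sigma *= sigma_decay
    return int(np.argmin(np.sum((prototypes - y) ** 2, axis=1)))
```

Because each class is a single stored prototype rather than a trained decision boundary, classifying a new user's gesture needs no retraining, which is consistent with the minimal-training claim above.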

Many social robots will be operated by people who are not experts in robotics, or even comfortable with machines, so the robots need interfaces that humans can use easily. The most obvious, natural way for humans to communicate is through eye contact and gestures. The team's LAN gesture recognition system requires very little training data and avoids a tedious training process.

To test the system, Yan and his team integrated it with a jacket made of ShapeTape, a fiber-optic 3D bend-and-twist sensor, to monitor the bending and twisting of a person's hands and arms. The tape provides accurate position and orientation information all along its length, and it is typically used in virtual reality, motion tracking, and robotic control applications.

The team programmed the ShapeTape to provide real-time sensory data on the 3D orientation of the wearer's shoulders, elbows, and wrists. The system extracted streams of feature vectors from this data, recognized the gestures they encoded, and instructed a virtual robot to execute predefined commands such as moving in different directions, changing speed, or stopping.
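In this setup a feature vector is essentially one sensor sample's joint orientations flattened into a single array, and the recognized gesture indexes a predefined command. The sketch below assumes a hypothetical sample format (a dict of per-joint roll/pitch/yaw angles) and an assumed command ordering, and it reuses the lan_settle sketch above; the article does not specify the actual encoding.

```python
import numpy as np

# Joints named in the article; the left/right split is an assumption.
JOINTS = ["l_shoulder", "r_shoulder", "l_elbow", "r_elbow",
          "l_wrist", "r_wrist"]

# Predefined commands mentioned in the article, in an assumed order
# matching the rows of the prototype array.
COMMANDS = ["forward", "backward", "faster", "slower", "stop"]

def frame_to_feature(sample):
    """Flatten one sensor sample into a feature vector.

    sample: dict mapping joint name -> (roll, pitch, yaw) in radians.
    Returns an 18-dim vector (6 joints x 3 orientation angles).
    """
    return np.concatenate([np.asarray(sample[j], dtype=float)
                           for j in JOINTS])

def gesture_to_command(feature, prototypes):
    """Map a recognized gesture to its predefined robot command."""
    return COMMANDS[lan_settle(feature, prototypes)]  # see sketch above
```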

Five different users wore the ShapeTape jacket, each using simple arm motions to steer a virtual robot with the predefined commands -- for example, faster, slower, forward, or backward. The system correctly interpreted 99.15 percent of the users' gestures. It was also easy to teach the system new commands, merely by demonstrating a new gesture a few times.
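Teaching a new command by demonstration could be as simple as averaging a handful of demonstrated feature vectors into a new attractor. The averaging scheme below is an assumption for illustration, not the paper's documented method:

```python
import numpy as np

def enroll_gesture(prototypes, labels, demos, new_label):
    """Add a new gesture class from a few demonstrations.

    prototypes : (k, d) existing attractors, one per known gesture
    labels     : list of k command names
    demos      : list of (d,) feature vectors from repeated demonstrations
    """
    # Assumed enrollment scheme: the new attractor is the mean of the demos.
    new_proto = np.mean(np.stack(demos), axis=0)
    return np.vstack([prototypes, new_proto]), labels + [new_label]
```

Because the existing attractors are untouched, enrolling a gesture adds a class without retraining, which matches the few-demonstration behavior the article describes.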

Yan and his team are now tackling the next step: letting people control robots without wearing the jacket or any other device. They expect to do this by replacing the ShapeTape jacket with motion-sensitive cameras. The next-generation system will incorporate a Microsoft Kinect camera and will be implemented on an autonomous robot to test its usability in actual service tasks.

The Kinect camera is becoming popular in robot research and development, as shown by the 3D navigation and mapping work being done by the Massachusetts Institute of Technology's Computer Science and Artificial Intelligence Laboratory (CSAIL).

About the Author

Ann R. Thryft

Ann R. Thryft has written about manufacturing- and electronics-related technologies for Design News, EE Times, Test & Measurement World, EDN, RTC Magazine, COTS Journal, Nikkei Electronics Asia, Computer Design, and Electronic Buyers' News (EBN). She's introduced readers to several emerging trends: industrial cybersecurity for operational technology, industrial-strength metals 3D printing, RFID, software-defined radio, early mobile phone architectures, open network server and switch/router architectures, and set-top box system design. At EBN Ann won two independently judged Editorial Excellence awards for Best Technology Feature. She holds a BA in Cultural Anthropology from Stanford University and a Certified Business Communicator certificate from the Business Marketing Association (formerly B/PAA).
