The research team, headed by Rui Yan, adapted a localist attractor network (LAN), which is a cognitive memory model, to develop a new system for communicating with social robots. Training robots to recognize and respond to human gestures is difficult because to a robot, simple gestures such as waving a hand or pointing may appear very different when made by different individuals. But this ability will be a key feature of robots that interact with humans.
A*Star Institute for Infocomm Research has created gesture recognition software that lets robots recognize human gestures. (Source: Measurand Inc.)
Many social robots will be operated by people who are not experts in robotics, or who are not even comfortable with machines. That means social robots need interfaces that let humans interact with them easily, and the most obvious, natural ways for humans to communicate are eye contact and gestures. The team's LAN gesture recognition system requires very little training data and avoids tedious training procedures.
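In a localist attractor network, each stored pattern (here, a known gesture) gets its own attractor, and recognizing an input means letting the network state settle onto one of them. The sketch below is a deliberately minimal reading of that idea, assuming a Gaussian similarity kernel, uniform priors, and function names of our own invention; it is not the team's actual implementation.

    import numpy as np

    def lan_settle(x, attractors, sigma=1.0, steps=50):
        """Settle the network state onto one of the stored attractors.

        x          : observed feature vector (the gesture to classify)
        attractors : (K, D) array, one prototype per known gesture
        sigma      : width of the Gaussian similarity kernel (our assumption)
        """
        y = np.asarray(x, float).copy()      # state starts at the observation
        for _ in range(steps):
            # soft responsibility of each attractor for the current state
            d2 = np.sum((attractors - y) ** 2, axis=1)
            q = np.exp(-d2 / (2.0 * sigma ** 2))
            q /= q.sum()
            # pull the state toward the responsibility-weighted attractors
            y = q @ attractors
        return int(np.argmax(q)), q          # winning gesture and confidences

Because each gesture is a single localist unit rather than a pattern distributed across many weights, adding or removing a gesture doesn't require retraining the whole network, which is one plausible reason the approach needs so little training data.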
To test the system, Yan and his team integrated it with a jacket made of ShapeTape, a fiber-optic 3D bend-and-twist sensor, to monitor the bending and twisting of a person's hands and arms. The "tape" provides accurate position and orientation information all along its length. It is typically used in virtual reality, motion tracking, and robotic control applications.
The team programmed the ShapeTape to stream real-time sensory data on the 3D orientation of the wearer's shoulders, elbows, and wrists; the system extracted feature vectors from these streams and recognized gestures from them. The gesture recognition system then instructed a virtual robot to execute predefined commands such as moving in different directions, changing speed, or stopping.
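The article doesn't spell out the feature encoding, but the pipeline it describes -- orientation streams in, command out -- might look roughly like the following. Every name here is illustrative; the joint list and windowing are our assumptions, not the published design.

    import numpy as np

    # Hypothetical joint order; the actual ShapeTape encoding is not given here.
    JOINTS = ["l_shoulder", "l_elbow", "l_wrist",
              "r_shoulder", "r_elbow", "r_wrist"]

    def frame_to_features(frame):
        """Flatten one frame of per-joint 3D orientations into one vector.

        frame: dict mapping joint name -> length-3 orientation reading
        """
        return np.concatenate([np.asarray(frame[j], float) for j in JOINTS])

    def recognize(frames, attractors, commands):
        """Average a short window of frames and settle it on the LAN."""
        window = np.mean([frame_to_features(f) for f in frames], axis=0)
        idx, _ = lan_settle(window, attractors)   # from the earlier sketch
        return commands[idx]                      # e.g. "forward" or "stop"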
Five different users wore the ShapeTape jacket, each controlling a virtual robot with simple arm motions that represented the predefined commands -- for example, faster, slower, forward, or backward. The system correctly interpreted 99.15 percent of the users' gestures. It was also easy to teach the system new commands, merely by demonstrating a new gesture a few times.
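That teach-by-demonstration property falls naturally out of the localist design: adding a command just means storing one more attractor. A hedged sketch of one plausible reading of "demonstrating a new gesture a few times," reusing the names from the earlier snippets:

    import numpy as np

    def add_command(attractors, commands, demo_frames_list, name):
        """Teach a new gesture: store the mean of a few demos as a new attractor.

        demo_frames_list: a few demonstrations, each a short list of frames
        """
        demos = [np.mean([frame_to_features(f) for f in frames], axis=0)
                 for frames in demo_frames_list]
        attractors = np.vstack([attractors, np.mean(demos, axis=0)])
        return attractors, commands + [name]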
Yan and his team are now addressing the next step: making it possible for people to control robots without wearing the jacket or any other device. They expect to do this by replacing the ShapeTape jacket with motion-sensitive cameras. The next-generation system will incorporate a Microsoft Kinect camera and be implemented on an autonomous robot to test its usability in actual service tasks.
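In principle, swapping ShapeTape for a Kinect changes only the front end; the recognizer behind it can stay the same. One hedged way to bridge the two is to derive orientation-like features from skeleton joint positions. The sketch below is pure geometry with invented joint names; acquiring the skeleton from the Kinect is outside its scope, and no Kinect SDK calls are shown.

    import numpy as np

    def limb_direction(parent_xyz, child_xyz):
        """Unit vector from one skeleton joint toward the next; a crude
        stand-in for the per-joint orientations the jacket reported."""
        v = np.asarray(child_xyz, float) - np.asarray(parent_xyz, float)
        return v / np.linalg.norm(v)

    def skeleton_to_features(joints):
        """joints: dict of joint name -> (x, y, z) from any skeleton tracker."""
        pairs = [("l_shoulder", "l_elbow"), ("l_elbow", "l_wrist"),
                 ("r_shoulder", "r_elbow"), ("r_elbow", "r_wrist")]
        return np.concatenate([limb_direction(joints[a], joints[b])
                               for a, b in pairs])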
The Kinect camera is becoming popular in robot research and development, as shown by the 3D navigation and mapping work being done by the Massachusetts Institute of Technology's Computer Science and Artificial Intelligence Laboratory (CSAIL).
Very cool project. It's really interesting how widespread an impact gaming technology is having on so-called "serious" development, from robotics to CAD software. Kinect-like interfaces are popping up in a variety of different platforms and will push the envelope in terms of helping people interact with previously pretty inaccessible technologies.
The Kinect approach is definitely an important one for machine control. It is also most like human vision. I have seen, over many years (decades), attempts to create autonomous vehicles and machines. They often use exotic sensors. Lately, though, there have been articles about using a Kinect system to drive these. The vision system is often coupled with a database or model of the scenario. This is much like what we humans do. Factory robots are starting to use some of this technology as well. This is a lot like the small robots that mimic insects or other creatures. Mimicking humans may be the way to go here as well.
I think the key here is the Kinect visual-based motion sensor--a picture is worth 1000 lines of code? It's analogous to talking to your computer. They are both much more natural ways of interacting with machines, at least from the human perspective.
Nice to see gesture recognition is getting up to speed and developing some traction in public awareness. Given the several mentions of various Kinect sensor implementations, it seems fair to mention another "disruptively innovative" technology that handles all the tasks this article describes. Look for and check out the threads of commentary, info, etc., which were started when a company named Leap Motion made an announcement on May 21st.
Key elements of their announcement: an inexpensive sensor device which enables position-detection, motion-detection, and gesture recognition -- with a reproducible position-detection accuracy of 0.01mm (i.e., ten micrometers, one wavelength of long-wavelength-range IR), anywhere within a "recognition space" volume of eight cubic feet. And a movement detect-and-report latency below the threshold for human perception -- USB comm latency and your monitor's refresh rate are the bottlenecks there (I'm still hoping to hear a stat for maximum trackable position rate-of-change, re effective point-measurement-rate). And an API which uses perhaps 5% of the CPU time on a nothing-to-write-home-about generic PC. ...Hey, my jaw dropped too.
I am just one of many hopeful entries in their (still open) pool of developer applicants, with thousands scheduled to be selected to receive an SDK and a free Leap device in the next three months or so. Their obvious intention is to "crowd-source" a base of usable applications by the time the device is commercially available in the first part of 2013. Devices can be pre-ordered now, for the impatient.
Look for their website, their Facebook page, their YouTube videos, and their forums. Because of patents pending, complete specs and technique info have not yet been released, but there has been some fairly credible guessing going on.
Important to note: The Leap technology will be making OUR reality "machine readable" -- If you can SEE something, you can use it as an input for consideration. No tape required. Anticipate interesting times.
Ann, I think this is a great achievement and a revolutionary idea, where robots can be used in a very human-friendly way. It may also be able to detect remote motions, so we could use such technologies in disaster areas.
flared0ne, I did see the Leap announcement, but so far it's not a real product yet. If they can do what they say they want to do, it may leave Kinect technology in the dust. Also, as I stated in the article, ShapeTape was used only to test the A*Star system; it will not be required in the final version. That's what the Kinect is for.