The research team, headed by Rui Yan, adapted a localist attractor network (LAN), which is a cognitive memory model, to develop a new system for communicating with social robots. Training robots to recognize and respond to human gestures is difficult because to a robot, simple gestures such as waving a hand or pointing may appear very different when made by different individuals. But this ability will be a key feature of robots that interact with humans.
A*Star Institute for Infocomm Research has created gesture recognition software that lets robots recognize human gestures. (Source: Measurand Inc.)
Many social robots will be operated by people who are not experts in robotics or even comfortable with machines. That means social robots need interfaces humans can use easily. The most obvious, natural way for humans to communicate is through eye contact and gestures. The team's LAN gesture recognition system requires very little training data and avoids tedious training processes.
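The article doesn't detail how the localist attractor network works internally. As a rough illustration only (not the team's implementation, and with made-up numbers), a LAN can be sketched as dynamics that repeatedly pull the network state toward stored attractors (here, gesture prototypes), weighted by how well each attractor explains the current state:

```python
import numpy as np

def lan_settle(x, attractors, sigma=1.0, steps=20):
    """Pull an input pattern toward the best-matching stored attractor.

    x          -- input feature vector (e.g., joint orientations)
    attractors -- array of stored prototype vectors, one per gesture
    """
    z = x.astype(float).copy()
    for _ in range(steps):
        # responsibility of each attractor for the current state
        d = np.sum((attractors - z) ** 2, axis=1)
        q = np.exp(-d / (2 * sigma ** 2))
        q /= q.sum()
        # move the state toward the attractor-weighted mean
        z = q @ attractors
    return int(np.argmax(q)), z
```

Because an ambiguous input settles onto whichever prototype it most resembles, a scheme like this can tolerate the person-to-person variation in gestures that the article describes.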
To test the system, Yan and his team integrated it with a jacket made of ShapeTape, which is a 3D bend-and-twist sensor based on fiber optics, to monitor the bending and twisting of a person's hands and arms. The "tape" provides accurate positioning and orientation information all along its length. It is typically used in virtual reality, motion tracking, and robotic control applications.
The team programmed the ShapeTape to provide real-time sensory data on the 3D orientation of shoulders, elbows, and wrists, which the system recognized as streams of feature vectors extracted from the data. The gesture recognition system instructed a virtual robot to execute predefined commands such as moving in different directions, changing speed, or stopping.
Five different users wore the ShapeTape jacket. Each used it to control a virtual robot with simple arm motions representing the predefined commands -- for example, faster, slower, forward, or backward. The system correctly interpreted 99.15 percent of the different users' gestures. It was also easy to program the system with new commands, merely by demonstrating a new gesture a few times.
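The team's actual training procedure isn't described in the article, but "teach a new command by demonstrating it a few times" can be sketched with a simple nearest-prototype scheme (a hypothetical stand-in for the LAN, with invented feature values): average a few demonstration feature vectors into a prototype, then classify incoming vectors by distance to the nearest prototype.

```python
import numpy as np

# Command prototypes learned from demonstrations
# (feature vectors here are invented 2-D examples; the real system
# used 3D orientations of shoulders, elbows, and wrists).
prototypes = {}

def enroll(command, demos):
    """Learn a new command as the mean of a few demonstration vectors."""
    prototypes[command] = np.mean(demos, axis=0)

def classify(feature_vector):
    """Return the enrolled command nearest to the incoming feature vector."""
    return min(prototypes,
               key=lambda c: np.linalg.norm(prototypes[c] - feature_vector))

# Demonstrate two commands a few times each, then classify a new sample.
enroll("forward", [np.array([0.9, 0.1]),
                   np.array([1.0, 0.0]),
                   np.array([0.8, 0.2])])
enroll("stop",    [np.array([0.0, 1.0]),
                   np.array([0.1, 0.9])])
```

A call such as `classify(np.array([0.85, 0.1]))` would then return `"forward"`, since that sample sits closest to the averaged forward demonstrations.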
Yan and his team are addressing the next step in improving the system, which is to make it possible for people to control robots without wearing the jacket or other devices. They expect to do this by replacing the ShapeTape jacket with motion-sensitive cameras. The next-generation system will incorporate a Microsoft Kinect camera and be implemented on an autonomous robot to test the system's usability in actual service tasks.
The Kinect camera is becoming popular in robot research and development, as shown by the 3D navigation and mapping work being done by the Massachusetts Institute of Technology's Computer Science and Artificial Intelligence Laboratory (CSAIL).
Very cool project. It's really interesting how widespread an impact gaming technology is having on so-called "serious" development, from robotics to CAD software. Kinect-like interfaces are popping up in a variety of different platforms and will push the envelope in terms of helping people interact with previously pretty inaccessible technologies.
The Kinect approach is definitely an important one for machine control. It is also most like human vision. I have seen, over many years (decades), attempts to create autonomous vehicles and machines. They often use exotic sensors. Lately, though, there have been articles about using a Kinect system to drive these. The vision system is often coupled with a database or model of the scenario. This is much like what we humans do. Factory robots are starting to use some of this technology as well. This is a lot like the small robots that mimic insects, or other creatures. Mimicking humans may be the way to go here as well.
I think the key here is the Kinect visual-based motion sensor--a picture is worth 1000 lines of code? It's analogous to talking to your computer. They are both much more natural ways of interacting with machines, at least from the human perspective.
Nice to see gesture recognition is getting up to speed and developing some traction in public awareness. Given the several mentions of various Kinect sensor implementations, it seems fair to mention another "disruptively innovative" technology which handles all the tasks this article describes. Look for and check out the threads of commentary, info etc which were started when a company named Leap Motion made an announcement on May 21st.
Key elements of their announcement: an inexpensive sensor device which enables position-detection, motion-detection, and gesture recognition -- with a reproducible position-detection accuracy of 0.01mm (i.e., ten micrometers, one wavelength of long-wavelength-range IR), anywhere within a "recognition space" volume of eight cubic feet. And a movement detect-and-report latency below the threshold for human perception -- USB comm latency and your monitor's refresh rate are the bottlenecks there (I'm still hoping to hear a stat for maximum trackable position rate-of-change, re effective point-measurement-rate). And an API which uses perhaps 5% of the CPU time on a nothing-to-write-home-about generic PC. ...Hey, my jaw dropped too.
I am just one of many hopeful entries in their (still open) pool of developer applicants, with thousands scheduled to be selected to receive an SDK and a free Leap device in the next three months or so. Their obvious intention is to "crowd-source" a base of useable applications by the time the device is commercially available in the first part of 2013. Devices can be pre-ordered now, for the impatient.
Look for their website, their facebook page, their YouTube videos, and their forums. Because of patents pending, complete specs and technique info have not yet been released, but there has been some fairly credible guessing going on.
Important to note: The Leap technology will be making OUR reality "machine readable" -- If you can SEE something, you can use it as an input for consideration. No tape required. Anticipate interesting times.
flared0ne, I did see the Leap announcement, but so far it's not a real product yet. If they can do what they say they want to do, it may leave Kinect technology in the dust. Also, as I stated in my article: ShapeTape was used only to test the A*Star system. It won't be required in the final system: that's what the Kinect is for.
Chuck, I think the ShapeTape almost deserves its own story, although it's not really used in apps we cover. Those include motion-capture techniques for animated movies: I've seen two that use a similar (if not the same) technology, and both were considered ground-breaking. One is the animated film based on Beowulf with Angelina Jolie playing Grendel's mom, and the other was A Scanner Darkly, based on a Philip K. Dick novel.
Ann, I think this is a great achievement and a revolutionary idea, where robots can be used in a very human-friendly way. I think it may be able to detect remote motions also, and we could use such technologies in disaster areas.
Truly amazing! I am also amazed at the speed with which the robotic system duplicates the movement of the ShapeTape and the degrees of freedom exhibited by the arm. If the robots get much more sophisticated, we will have to make sure the designers employ Asimov's Three Laws of Robotics:
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
I suspect we have a long way to go but it seems to me the progress is consistent and steady. Great article Ann.
Thanks, Bob. Glad you like my articles on robots. Some truly amazing things are being done in robotics. I think you're right: we may need Asimov's 3 laws sooner than we realize: I just submitted a story on a swimming robot. Of course, if you think the future is going to go more along the lines of the Terminator story-line, then it may be already too late, lol.