By focusing their attention on patterns created by flickering lights on a PC screen, which are associated with specific actions, users can control which actions they want a robot to perform, where the robot moves, and how it interacts with its environment. (Source: CNRS-AIST Joint Robotics Laboratory)
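The flicker-pattern scheme in the caption is essentially an SSVEP-style brain-computer interface: each on-screen pattern flickers at a distinct frequency, and whichever frequency dominates the user's EEG response selects the command. A minimal sketch of the dispatch step (the frequencies and action names here are illustrative, not from the actual CNRS-AIST system):

```python
# Toy sketch of SSVEP-style command dispatch: each on-screen target
# flickers at its own frequency; the frequency with the most power in
# the user's (already band-filtered) EEG spectrum picks the action.
# Frequencies and action names are made up for illustration.

FLICKER_COMMANDS = {
    6.0: "walk_forward",   # 6 Hz target
    8.0: "turn_left",      # 8 Hz target
    10.0: "turn_right",    # 10 Hz target
    12.0: "grasp_object",  # 12 Hz target
}

def pick_command(power_by_freq):
    """Return the action whose flicker frequency shows the most
    spectral power in the user's EEG."""
    freq = max(FLICKER_COMMANDS, key=lambda f: power_by_freq.get(f, 0.0))
    return FLICKER_COMMANDS[freq]

# The user attends to the 10 Hz target, so that band dominates:
print(pick_command({6.0: 0.2, 8.0: 0.3, 10.0: 1.4, 12.0: 0.1}))  # → turn_right
```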
That's right, Rob: paraplegics and quadriplegics (tetraplegics) are one target group for this technology, for actual physical robotic embodiment a la Avatar. Under the VERE project aegis, the other target group is people temporarily confined to a bed or wheelchair, who would use it for rehab training.
The algorithms are far simpler than you'd think, because you have a "man in the loop" who can unconsciously compensate for fairly large errors. For example, take two systems that respond with a 30-degree difference in angular movement for the same input: with one of them you'll just push a little harder until you get the desired result, and you won't even notice doing it. With fully autonomous systems, output must match input exactly or there will be trouble.
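That compensation effect can be shown with a toy closed-loop simulation: even when the system's gain is off by a large factor, a user acting like a simple proportional controller just keeps pushing in proportion to the remaining error until it disappears. The gains and step counts below are illustrative, not from any real system:

```python
def human_in_the_loop(gain, target=30.0, steps=50):
    """Simulate a user steering toward a target angle (degrees).
    The user doesn't know the system's gain; they simply keep
    pushing in proportion to the remaining error."""
    angle = 0.0
    for _ in range(steps):
        error = target - angle
        push = 0.5 * error          # user's instinctive correction
        angle += gain * push        # system responds with its own gain
    return angle

# Two systems whose responses differ substantially per unit input
# both settle at the target, because the user compensates:
print(round(human_in_the_loop(gain=1.0), 2))  # → 30.0
print(round(human_in_the_loop(gain=0.5), 2))  # → 30.0
```

The point of the sketch is that the same instinctive "push harder" policy converges for both gains; a fully autonomous system with a hard-coded input-output mapping would miss the target by the full gain mismatch.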
Chuck, I wish we had more info on the project's engineering details, which are still under development. Considering how much work has already been done toward similar goals, such as various methods of motion capture, I suspect it won't take all that long to write the algorithms. Battar, thanks for the response on this subject, too. FWIW, Fujitsu started working on turning the electrical impulses from a person's thoughts into electronically controlled actions back in the late '80s and early '90s.
Yes, if paraplegics and quadriplegics could benefit from this technology, that would indeed be wonderful. I wonder if other groups, like the elderly, could also benefit. (Or would the technology's learning curve be a little too steep?)
Greg, the elderly could certainly benefit if they fall into either target group, such as people confined to a bed or wheelchair. Since the technology is still being developed, most of the current learning curve is on the experimenters' side, as they learn which thoughts produce which actions. Ideally, there won't be much of one left for users.
Interesting link, Jim_E. Thanks for posting. I would think the "bionic limb" idea would actually be easier, since they are trying to use the biological processes already in place to do essentially what they were designed to do: think about moving the hand that used to be at the end of your arm, and the new hand at the end of your arm moves as the original once did. Controlling a separate robot seems like a whole other ballgame.
Jim_E, thanks for the link to that Wired article (and I agree about print editions: Rolling Stone in the hand is very different from Rolling Stone online, for example). But trying to control the incredibly complex movements of a hand and its fingers has got to be a few orders of magnitude more complicated than controlling legs enough to make them walk. So I'm not surprised there's been little progress in that area.