By focusing their attention on flickering patterns on a PC screen, each associated with a specific action, users can control which actions they want a robot to perform, where the robot moves, and how it interacts with its environment. (Source: CNRS-AIST Joint Robotics Laboratory)
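Roughly, the idea is that each on-screen pattern flickers at its own rate, and the rate that dominates the user's brain response selects the command. Here's a minimal hypothetical sketch in Python of that last mapping step; the frequencies, command names, and tolerance are illustrative assumptions, not details from the lab.

    # Hypothetical mapping of flicker frequencies (Hz) to robot commands.
    ACTIONS = {
        6.0: "move_forward",
        7.5: "turn_left",
        8.6: "turn_right",
        10.0: "grasp_object",
    }

    def select_action(detected_hz, tolerance=0.3):
        """Return the command whose flicker frequency best matches the detected one."""
        best_hz = min(ACTIONS, key=lambda hz: abs(hz - detected_hz))
        if abs(best_hz - detected_hz) <= tolerance:
            return ACTIONS[best_hz]
        return None  # no confident match -> do nothing

    # Example: a classifier reports the dominant frequency in the EEG signal.
    print(select_action(7.4))  # -> "turn_left"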
That's right, Rob: paraplegics and quadriplegics (tetraplegics) are one target group for this technology, for actual physical robotic embodiment à la Avatar. Under the aegis of the VERE project, the other target group is people temporarily confined to a bed or wheelchair, who would use it for rehab training.
Yes, if paraplegics and quadriplegics could benefit from this technology, that would indeed be wonderful. I wonder whether other groups, such as the elderly, could also benefit. (Or would the technology learning curve be a little too steep?)
The algorithms are far simpler than you think because you have a man in the loop who can unconsciously compensate for fairly large errors. For example, given two systems that react with a 30-degree difference in angular movement for the same input, with the mis-calibrated one you'll just push a little harder until you get the desired result; you wouldn't even notice it. With fully autonomous systems, output must match input exactly or there will be trouble.
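To make that compensation concrete, here's a minimal hypothetical sketch in Python of a user nudging a mis-calibrated system toward a target angle; the 30-degree error, the gain model, and the function name are illustrative assumptions, not anything from the article.

    def reach_target(target_deg, gain_error_deg, tolerance=1.0):
        """Simulate a user repeatedly correcting a system that under-responds."""
        angle = 0.0
        steps = 0
        while abs(target_deg - angle) > tolerance:
            command = target_deg - angle                       # user's intended correction
            response = command * (1 - gain_error_deg / 90.0)   # system under-responds
            angle += response
            steps += 1
        return angle, steps

    # An ideal system gets there in one step; a system off by the equivalent of
    # 30 degrees still converges, the user just issues a few more corrections.
    print(reach_target(90.0, 0.0))    # -> (90.0, 1)
    print(reach_target(90.0, 30.0))   # -> (~89.6, 5)

An autonomous system has no such feedback loop, so the same calibration error would simply leave it 30 degrees off target.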
Greg, the elderly could certainly benefit if they fall into either target group, such as people confined to bed or wheelchairs. Since the technology is still being developed, most of the current learning curve is being climbed by the experimenters as they learn which thoughts produce which actions. Ideally, there won't be much of one left for users.