By focusing their attention on patterns created by flickering lights on a PC screen, which are associated with specific actions, users can control which actions they want a robot to perform, where the robot moves, and how it interacts with its environment. (Source: CNRS-AIST Joint Robotics Laboratory)
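For anyone wondering how the flicker-based selection described above works in principle, here's a rough sketch in Python. It's illustrative only; the frequencies, action names, and signal processing are my own assumptions, not details from the CNRS-AIST system. The idea: each on-screen pattern flickers at its own frequency, and the action is chosen by finding which flicker frequency dominates the user's EEG while they stare at a pattern.

    import numpy as np

    # Hypothetical mapping of flicker frequencies (Hz) to robot actions.
    ACTIONS = {6.0: "walk forward", 8.0: "turn left", 10.0: "turn right", 15.0: "grasp object"}

    def pick_action(eeg, fs):
        """Return the action whose flicker frequency has the most spectral power."""
        spectrum = np.abs(np.fft.rfft(eeg))
        freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
        # Power at the FFT bin nearest each candidate flicker frequency.
        power = {f: spectrum[np.argmin(np.abs(freqs - f))] for f in ACTIONS}
        return ACTIONS[max(power, key=power.get)]

    # Demo: a noisy 10 Hz "brain response" over 4 seconds at 250 samples/s.
    fs = 250.0
    t = np.arange(0, 4, 1 / fs)
    eeg = np.sin(2 * np.pi * 10.0 * t) + 0.5 * np.random.randn(t.size)
    print(pick_action(eeg, fs))  # expected: "turn right"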
That's right, Rob, paraplegics and quadriplegics (tetraplegics) are one target group for this technology, for actual physical robotic embodiment a la Avatar. Under the aegis of the VERE project, the other target group is people temporarily confined to a bed or wheelchair, who would use it for rehab training.
The algorithms are far simpler than you think because you have a "man-in-the-loop" who can unconsciously compensate for fairly large errors. For example, given two systems that react with a 30-degree difference in angular movement for the same input, with one you'll just push a little harder until you get the desired result. You wouldn't even notice it. With fully autonomous systems, output must match input exactly or there will be trouble.
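To illustrate the point, here's a minimal toy model (my own sketch, nothing from the project): a human acting as a feedback controller converges on the target angle even when the system responds with a badly mis-scaled gain, while a one-shot open-loop command misses by exactly that error.

    def simulate(gain_error, target=30.0, steps=50):
        """Human-in-the-loop: keep pushing in proportion to the remaining error."""
        angle = 0.0
        for _ in range(steps):
            command = 0.5 * (target - angle)  # the human "pushes harder" toward the goal
            angle += gain_error * command     # the system responds with a mis-scaled gain
        return angle

    # Open loop: a single command sized for a nominal gain of 1.0 misses when gain != 1.0.
    open_loop = 0.7 * 30.0  # a system with a 30% weaker response stops at 21 degrees

    print(f"closed loop, gain 0.7: {simulate(0.7):.1f} deg")  # ~30.0, error compensated away
    print(f"open loop,   gain 0.7: {open_loop:.1f} deg")      # 21.0, error persists

The closed-loop run lands on the target because each push shrinks the remaining error, regardless of the exact gain; the open-loop command has no such correction, which is why autonomous systems need much tighter calibration.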
Chuck, I wish we had more info on the project's engineering details, which are still under development. Considering how much work has already been done toward similar goals, such as various methods of motion capture, I suspect it won't take all that long to write the algorithms. Battar, thanks for the response on this subject, too. FWIW, Fujitsu started working on turning the electrical impulses of a person's thoughts into electronically controlled actions back in the late '80s to early '90s.
Yes, if paraplegics and quadriplegics could benefit from this technology, that would indeed be wonderful. I wonder whether other groups, like the elderly, could also benefit. (Or would the technology's learning curve be a little too steep?)
Greg, the elderly could certainly benefit if they fall into either target group, such as people confined to a bed or wheelchair. Since the technology is still being developed, most of the current learning curve is on the experimenters' side as they learn which thoughts produce which actions. Ideally, there won't be much of a learning curve left for users.
Interesting link, Jim_E. Thanks for posting. I would think the "bionic limb" idea would actually be easier, since it tries to use the biological processes already in place to do essentially what they were designed to do: think about moving the hand that used to be at the end of your arm, and the new hand at the end of your arm moves as the original once did. Controlling a separate robot seems like a whole other ballgame.
Jim_E, thanks for the link to that Wired article (and I agree about print editions: Rolling Stone in hand is very different from Rolling Stone online, for example). But trying to control the incredibly complex movements of a hand and its fingers has got to be a few orders of magnitude harder than controlling legs well enough to make them walk. So I'm not surprised there's been little progress in that area.