Software that will let people and robots communicate to plan difficult and complex tasks, such as dismantling a nuclear power plant, is being developed at a Scottish university. (Source: Wikimedia Commons/Stefan Kühn)
Tekochip, do you mean shutting down all the operating systems and software before dismantling the entire system? In that case you could control only the machinery, without touching the dangerous nuclear fission parts.
What it sounds like is that the robot will be deciding what it will do, or what it wants to do, and telling the human. That will take a whole lot more brains than robots presently have. The problem seems to be that the humans in the situation would not have enough understanding of it to make correct judgments. That condition of inadequate operator understanding and insight is traceable to not having an adequate operator, usually because the skills the task requires weren't respected.
The concept of robots communicating to do some task is quite interesting, but there is a need for caution, since communication between robots could also be a step toward robot self-awareness. So we need to be aware of what is being done in the field of autonomous robotics, to avoid creating the situations that have been the subject of science fiction for many years. It has the potential to be far worse than those stories ever predicted.
I've read the same statistic you mention, Chuck, but I'd like to know more about the specific situations. Driving a car mostly consists of understandable, easily repeatable motions. Making decisions about what to do if a truck suddenly turns around in your lane and comes back at you is a very different set of problems and decision-making. I'm giving that example because it's something completely unexpected (something similar happened to me once at 60 mph in the fast lane). In any case, something completely unexpected that the remote human can't see very well--i.e., inside a Fukushima reactor--and that needs to be done right the first time requires complex, highly sophisticated decision-making skills, and very good communication between robot and remote human. The researchers think that the ability to communicate thoroughly before and during complex, dangerous tasks, like two people would, is a good idea.
I don't really get the point of this either. If I'm understanding the article, it sounds like they expect the robot to do things the observers would have trouble figuring out. If the algorithms are that complex, you'd think the programmers would implement logging, or some trail of breadcrumbs, to show why the robot is doing what it's doing.
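To make the "trail of breadcrumbs" idea concrete, here is a minimal sketch (all names hypothetical, not from the article) of the kind of decision log a programmer could attach to an autonomous robot: every time the robot picks an action, it records what it observed, what it chose, and why, so observers can reconstruct its reasoning afterward.

```python
import json
import time


class DecisionLog:
    """Append-only trail of a robot's decisions for later human review."""

    def __init__(self):
        self.entries = []

    def record(self, observation, action, rationale):
        # Each breadcrumb ties the chosen action to what the robot
        # saw and the rule or reason that triggered the choice.
        self.entries.append({
            "timestamp": time.time(),
            "observation": observation,
            "action": action,
            "rationale": rationale,
        })

    def dump(self):
        # Human-readable JSON that an observer can scan after the fact.
        return json.dumps(self.entries, indent=2)


log = DecisionLog()
log.record(
    observation="valve_3 reads 180 psi, above the 150 psi limit",
    action="close valve_3",
    rationale="pressure exceeds safe threshold",
)
print(log.dump())
```

This is just logging, not the symbolic translation the article describes, but it shows how little code it takes to leave a reviewable trail.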
I'm a little surprised to hear that the robot's creators would be anticipating so much difficulty and confusion. The robotic driving systems developed by Google have been nearly flawless, despite the fact that they have to deal with unpredictable humans. I recently read that Google cars have had only one accident after logging 250,000 miles, and that happened when a human driver decided to take the wheel.
Beth, this is text to logic symbols and back: no audio. As we mentioned, humans communicate with the robot via a keyboard (at least during the remote operation). Although the sources didn't specify, my guess is the humans see the robot's translated symbols-to-text on a screen. The big deal is being able to communicate in detail to a remote robot at a much more sophisticated level than was possible before. So instead of just being the humans' eyes and perhaps hands--or bomb zappers--like many of the military and rescue robots we've covered, this can let the humans stay at a distance. At Fukushima, all they could do was check and report back. Humans still had to go in to the high-rad area and decommission it. With this, they won't have to.
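For anyone curious what a text-to-symbols-and-back round trip might look like, here is a toy illustration (vocabulary and symbol names are my own invention, not from the sources): the operator's typed phrase is parsed into symbolic terms a robot could act on, and the robot's symbols are rendered back into readable text for the operator's screen.

```python
# Tiny phrase <-> symbol dictionary; a real system would use a far
# richer grammar and logic representation.
TEXT_TO_SYMBOL = {
    "move to": "GOTO",
    "pick up": "GRASP",
    "the valve": "OBJ_VALVE",
    "the reactor door": "OBJ_DOOR",
}
SYMBOL_TO_TEXT = {v: k for k, v in TEXT_TO_SYMBOL.items()}


def parse(command):
    """Translate an operator's typed phrase into logic symbols."""
    symbols = []
    rest = command.lower()
    while rest:
        for phrase, symbol in TEXT_TO_SYMBOL.items():
            if rest.startswith(phrase):
                symbols.append(symbol)
                rest = rest[len(phrase):].strip()
                break
        else:
            raise ValueError(f"unrecognized phrase: {rest!r}")
    return symbols


def render(symbols):
    """Translate the robot's symbols back into text for the screen."""
    return " ".join(SYMBOL_TO_TEXT[s] for s in symbols)


syms = parse("pick up the valve")
print(syms)          # ['GRASP', 'OBJ_VALVE']
print(render(syms))  # pick up the valve
```

Obviously the real research involves much deeper reasoning than a lookup table, but the round trip itself is the "big deal" being described: detail goes out as symbols and comes back as text.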
Mydesign, the software is not used to dismantle the power plant. The software is used to help humans and robots communicate ahead of time and during such a delicate operation, to make sure everything goes right. What other kinds of developments did you have in mind?
I agree with Tekochip; it seems like another way of translating machine algorithms into human-friendly text so data regarding the environment or instructional information can be passed back and forth. There's no spoken component to these systems, is there? Not to say this isn't valuable or interesting, BTW.