Software that will let people and robots communicate to plan difficult and complex tasks, such as dismantling a nuclear power plant, is being developed at a Scottish university. (Source: Wikimedia Commons/Stefan Kühn)
Ann, what's the need for software to dismantle a nuclear power plant? I understand the importance of human-robot communication, but I think development should happen in other directions, like disaster management and rescue operations.
I agree with Tekochip; it seems like another way of translating machine algorithms into human-friendly text so data about the environment or instructional information can be passed back and forth. There's no spoken component to these systems, is there? Not to say this isn't valuable or interesting, BTW.
Beth, this is text to logic symbols and back: no audio. As we mentioned, humans communicate with the robot via a keyboard (at least during the remote operation). Although the sources didn't specify, my guess is the humans see the robot's translated symbols-to-text on a screen. The big deal is being able to communicate in detail to a remote robot at a much more sophisticated level than was possible before. So instead of just being the humans' eyes and perhaps hands--or bomb zappers--like many of the military and rescue robots we've covered, this can let the humans stay at a distance. At Fukushima, all they could do was check and report back. Humans still had to go in to the high-rad area and decommission it. With this, they won't have to.
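To make the idea concrete, here's a toy sketch of what a text-to-logic-and-back translation could look like. This is purely illustrative of the concept (a constrained command mapped to a predicate-style symbol); it is not the researchers' actual system, and the phrasing rules are my own invention:

```python
# Toy text <-> logic-symbol translator (illustrative only).
# Assumes commands of the restricted form "verb the object".

def text_to_logic(sentence):
    """Turn 'inspect the reactor vessel' into 'inspect(reactor_vessel)'."""
    verb, obj = sentence.lower().rstrip(".").split(" the ")
    return f"{verb.replace(' ', '_')}({obj.replace(' ', '_')})"

def logic_to_text(symbol):
    """Turn 'inspect(reactor_vessel)' back into readable text."""
    verb, obj = symbol.rstrip(")").split("(")
    return f"{verb.replace('_', ' ')} the {obj.replace('_', ' ')}."

sym = text_to_logic("inspect the reactor vessel")
# sym == "inspect(reactor_vessel)"
```

A real system would of course need a full parser and a much richer logic, but the round trip (operator text in, robot symbols out, and back again on a screen) is the core idea.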
Teknochip, you mean to shut down all the operating systems and software before dismantling the entire system? In that case you can control only the machinery, with nothing to handle the dangerous nuclear fission/fusion parts.
Mydesign, I think the intent here regarding nuclear power plants refers to the robots used in dealing with the most radioactive parts. You might recall that in the Fukushima situation some robots from the US were sent in to check the affected areas so that humans would not have to. These robots carried cameras and sensors for that task.
Typically researchers will mention high value situations like this. If it works, though, the real money is always in high volume. The real payday on something like this is the cell phone market.
Mydesign, the software is not used to dismantle the power plant. The software is used to help humans and robots communicate ahead of time and during such a delicate operation, to make sure everything goes right. What other kinds of developments did you have in mind?
Ann, I do have another idea for disaster management, where humans interact with robots via WiFi or another communication channel. This would enable remote operation from a master facility to control every wing of the nuclear station and perform a safe shutdown in case of disaster.
Mydesign, wireless communication with remote-controlled robots is already used in military, nautical and rescue robots, among many other types, as we've mentioned before:
http://www.designnews.com/author.asp?section_id=1386&doc_id=247687
http://www.designnews.com/author.asp?section_id=1386&doc_id=242527
http://www.designnews.com/author.asp?section_id=1386&doc_id=246206
But that does not solve the communication problem. Most robots can only report back very limited types of data. And the communication is half-duplex: messages go one way, then the other, never allowing a full-duplex two-way conversation. Plus, the robots are not intelligent enough, or autonomous enough, to perform the delicate operations of decommissioning a nuclear power plant.
I'm a little surprised to hear that the robot's creators would be anticipating so much difficulty and confusion. The robotic driving systems developed by Google have been nearly flawless, despite the fact that they have to deal with unpredictable humans. I recently read that Google cars have had only one accident after logging 250,000 miles, and that happened when a human driver decided to take the wheel.
I don't really get the point of this either. If I am understanding the article, it sounds like they expect the robot to do things that the observers would have trouble figuring out. If the algorithms are that complex, it looks like the programmers would implement logging, or some trail of breadcrumbs to discern why the robot is doing what it is doing.
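A breadcrumb trail like that might look like the following minimal sketch. The sensor names, thresholds, and actions are my own invention for illustration, not anything from the article:

```python
import logging

# Log every decision along with the sensor reading that triggered it,
# so an observer can reconstruct why the robot did what it did.
logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("robot")

def choose_action(sensors):
    """Pick an action and leave a breadcrumb explaining the choice."""
    if sensors["radiation"] > 100:
        log.info("radiation=%s exceeds limit 100 -> RETREAT", sensors["radiation"])
        return "retreat"
    if sensors["obstacle_distance"] < 0.5:
        log.info("obstacle at %s m (< 0.5 m) -> STOP", sensors["obstacle_distance"])
        return "stop"
    log.info("all readings nominal -> PROCEED")
    return "proceed"

action = choose_action({"radiation": 12, "obstacle_distance": 2.0})
```

With a trail like this, "why is the robot doing that?" becomes a matter of reading the log rather than interrogating the robot.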
I've read the same statistic you mention, Chuck, but I'd like to know more about the specific situations. Driving a car mostly consists of understandable, easily repeatable motions. Making decisions about what to do if a truck suddenly turns around in your lane and comes back at you is a very different set of problems and decision-making. I'm giving that example because it's something completely unexpected (something similar happened to me once at 60 mph in the fast lane). In any case, something completely unexpected that the remote human can't see very well--e.g., inside a Fukushima reactor--and that needs to be done right the first time requires complex, highly sophisticated decision-making skills, and very good communication between robot and remote human. The researchers think that the ability to communicate thoroughly before and during complex, dangerous tasks, like two people would, is a good idea.
What it sounds like is that the robot will be deciding what it will do, or what it wants to do, and telling the human. That will take a whole lot more brains than robots presently have. The problem seems to be that the humans in the situation would not have enough understanding of it to make correct judgements. That condition of inadequate operator understanding and insight is traceable to not having an adequate operator, usually because the skills needed for the task were not respected.
The concept of robots communicating to accomplish some task is quite interesting, but there is a need for caution, since an understanding of the separation between robots may also lead to robot self-awareness. So we need to be aware of what is being done in the field of autonomous robotics, to avoid creating the situations that have been the subject of science fiction for many years. It does have the potential to be far worse than those stories ever predicted.
Thanks, William, I think you captured the point of this research in your comments about autonomy. It is aimed at more autonomy in robots, which is why communication has to be much more detailed, and accurate, than it has to date. But inadequacy of the human operator is not the issue: inadequacy and incompleteness of information about why the robot makes the decisions it makes was one of the main spurs to this research. The two-way logic-to-text and text-to-logic communication will also let the human make informed suggestions and provide more data once it understands the situation as reported by the robot.
Yes, Ann, the robots have only their sensor information to base decisions on, and that is often not enough to make the very best choice. That was part of the basis for my comments about the value of experienced humans in the loop. Robots lack insight and understanding; they can only make the decisions that they are programmed to make, which may well be safe, but probably not optimum.
Giving the robots more data by allowing accurate communication will certainly offer the potential for better choices, and the concept of communicating that basis for the choices to a human is a good idea that should have been put into practice about 25 years ago.
William, thanks for clarifying. I agree, when I read the initial report, I thought why the heck hadn't somebody already figured this out and implemented it ages ago? OTOH, I don't think the state of hardware--sensors and processors--and comms tech were available for robots that could take advantage of this "translation" program.
The robot is following some program that some human loaded into it. The robot can only do what the programmer told it to do.
So, the human tells the robot what to do (via the program), and then the human says "why are you doing that?". The answer is always the same - "because you told me to".
I would think the obvious solution to this supposed problem is to send all sensor data to a computer that is running the same decision making software as the robot, and watch what the program is doing. (It will be doing what you told it to do - which may or may not be what you thought you told it to do.)
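As a rough sketch of that mirror idea (the decision function and telemetry values here are made up for illustration, not from any real system): run the identical decision code on the operator's console and replay the robot's sensor stream through it.

```python
def decide(sensors):
    """Same decision function deployed on the robot and on the console."""
    return "retreat" if sensors["radiation"] > 100 else "proceed"

# Replay the robot's telemetry through the identical logic to see,
# step by step, what the program is doing with the same inputs.
telemetry = [{"radiation": 12}, {"radiation": 150}, {"radiation": 80}]
shadow = [decide(s) for s in telemetry]
print(shadow)  # ['proceed', 'retreat', 'proceed']
```

Whatever the shadow copy does with the same sensor data is, by construction, what the robot is doing, which may or may not be what you thought you told it to do.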
This article somehow makes it sound like the robot has a mind of its own. It doesn't. It can only do what some human told it to do, so why ask it why? The answer is, I'm doing what you told me to do given these sensor values.
ttemple, everything you said is correct about non-autonomous robots. This research team, like several others, is developing intelligent, autonomous robots, something very different. William's comment below, "Human-Robot communications", captures this difference.
Absolutely, William K. That is a very good description of the issue. Even routine functions can quickly change to ones that require past experience. That is why a lot of experts can operate on "gut feel." They can't explain their correct actions because those actions are based on experience of similar occurrences. This simply cannot be captured in a program.
I have programmed industrial robots, and the closest those robots came to "insight" was knowing that they had to slow down in order to make a turn accurately. This presents a quandary of sorts when the robot is doing something like laying a sealant along a seal surface, where a large-radius rounded corner is not what is needed. The solution was to bring the robot to a point, make a separate move from that point to the change-in-direction point, and then start off in the new direction. A simple workaround. But if the robot had been able to tell us that it needed to do something in order to change direction, the problem might have been easier to figure out. Instead, it was necessary to read the 4,000-page instruction manual.
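That workaround can be sketched as waypoint generation: instead of one blended move through the corner (which the controller rounds off), insert an explicit point just before the corner so the robot decelerates, hits the corner exactly, and then heads off in the new direction. The helper below is hypothetical, not any real robot controller's API:

```python
import math

def corner_waypoints(start, corner, end, approach=1.0):
    """Split a corner into discrete point-to-point moves so the
    controller cannot blend (round) it: decelerate to a point just
    before the corner, move exactly to the corner, then continue.
    Illustrative only; real controllers expose this differently."""
    dx, dy = corner[0] - start[0], corner[1] - start[1]
    d = math.hypot(dx, dy)
    pre = (corner[0] - dx / d * approach, corner[1] - dy / d * approach)
    return [pre, corner, end]

# A right-angle seam: approach point at (9, 0), exact corner at (10, 0),
# then the new direction toward (10, 10).
path = corner_waypoints((0, 0), (10, 0), (10, 10))
```

The discrete stop at the corner trades cycle time for corner accuracy, which is exactly the trade the sealant application needed.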
The problems that will come with attempting to give robots insight is that it may easily lead to giving the robots self-awareness, which would probably lead to robots having emotions, and that could be VERY BAD. That is because robot source code is written by programmers, and programmers are not normal people. We need to always remember that, and beware.