Software that will let people and robots communicate to plan difficult and complex tasks, such as dismantling a nuclear power plant, is being developed at a Scottish university. (Source: Wikimedia Commons/Stefan Kühn)
It sounds like the robot will be deciding what it will do, or what it wants to do, and telling the human. That will take a whole lot more brains than robots presently have. The problem seems to be that the humans involved would not understand the situation well enough to make correct judgements. Inadequate operator understanding and insight is usually traceable to not having an adequate operator, which in turn comes from not respecting the skills the task requires.
The concept of robots communicating to accomplish some task is quite interesting, but there is a need for caution, since robots that understand their separation from one another may be a step toward robot self-awareness. So we need to be aware of what is being done in the field of autonomous robotics, to avoid creating the situations that have been the subject of science fiction for many years. It has the potential to be far worse than those stories ever predicted.
Teknochip, do you mean to shut down all the operating systems and software before dismantling the entire system? In that case you could control only the machinery, with nothing left to manage the dangerous nuclear fission/fusion parts.
Ann, I have another idea for disaster management, where humans interact with robots via WiFi or some other communication channel. That would allow remote operation from a master facility, controlling each and every wing of the nuclear station and performing a safe shutdown in case of disaster.
Thanks, William, I think you captured the point of this research in your comments about autonomy. It is aimed at more autonomy in robots, which is why communication has to be much more detailed, and accurate, than it has been to date. But inadequacy of the human operator is not the issue: inadequacy and incompleteness of information about why the robot makes the decisions it makes was one of the main spurs to this research. The two-way logic-to-text and text-to-logic communication will also let the human make informed suggestions and provide more data once he or she understands the situation as reported by the robot.
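To make the two-way idea concrete, here is a minimal sketch of what logic-to-text and text-to-logic translation could look like. The predicate names, templates, and parsing format are purely illustrative assumptions, not the actual system described in the research.

```python
# Hypothetical sketch: translate a robot's logical decision terms into
# English, and parse a human's predicate-style input back into logic.
# All predicate names and templates here are made up for illustration.

# Templates mapping decision predicates to English explanations.
TEMPLATES = {
    "avoid": "I am avoiding {0} because {1}.",
    "grasp": "I am grasping {0} using {1}.",
}

def logic_to_text(predicate, *args):
    """Render a logical decision term as an English sentence."""
    return TEMPLATES[predicate].format(*args)

def text_to_logic(expression):
    """Naive inverse: parse 'avoid(pipe_3, radiation_high)' style input."""
    head, _, rest = expression.partition("(")
    args = [a.strip() for a in rest.rstrip(")").split(",")]
    return head.strip(), args

print(logic_to_text("avoid", "pipe_3", "the radiation reading is high"))
print(text_to_logic("grasp(valve_7, left_gripper)"))
```

A real system would of course need a far richer grammar on both sides, but the round trip is the point: the human reads why the robot acted, and can reply in a form the robot's planner can consume.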
Mydesign, wireless communication with remote-controlled robots is already used in military, nautical and rescue robots, among many other types, as we've mentioned before: http://www.designnews.com/author.asp?section_id=1386&doc_id=247687 http://www.designnews.com/author.asp?section_id=1386&doc_id=242527 http://www.designnews.com/author.asp?section_id=1386&doc_id=246206 But that does not solve the communication problem. Most robots can only report back very limited types of data. And communication is half-duplex: one way in one direction, then one way in the other. It does not allow full-duplex two-way conversations. Plus, the robots are not intelligent enough, or autonomous enough, to perform the delicate operations of decommissioning a nuclear power plant.
Yes, Ann, the robots have only their sensor information to base decisions on, and that is often not enough to make the very best choice. That was part of the basis for my comments about the value of experienced humans in the loop. Robots lack insight and understanding; they can only make the decisions they are programmed to make, which may well be safe, but probably not optimal.
Giving the robots more data by allowing accurate communication will certainly offer the potential for better choices, and the concept of communicating the basis for those choices to a human is a good idea that should have been put into practice about 25 years ago.
William, thanks for clarifying. I agree; when I read the initial report, I thought, why the heck hadn't somebody already figured this out and implemented it ages ago? OTOH, I don't think the hardware--sensors and processors--and comms tech were mature enough back then for robots that could take advantage of this "translation" program.
The robot is following some program that some human loaded into it. The robot can only do what the programmer told it to do.
So, the human tells the robot what to do (via the program), and then the human says "why are you doing that?". The answer is always the same - "because you told me to".
I would think the obvious solution to this supposed problem is to send all the sensor data to a computer running the same decision-making software as the robot, and watch what the program is doing. (It will be doing what you told it to do, which may or may not be what you thought you told it to do.)
This article somehow makes it sound like the robot has a mind of its own. It doesn't. It can only do what some human told it to do, so why ask it why? The answer is, I'm doing what you told me to do given these sensor values.
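The "shadow computer" idea in the comment above can be sketched in a few lines: replay the robot's telemetry through a copy of its own decision policy and log what it decides and why. The policy, thresholds, and field names below are invented stand-ins, not any real robot's software.

```python
# Hedged sketch of shadow execution: run the same decision function the
# robot runs, on the same sensor data, and record each decision with its
# reason. The policy and sensor fields are illustrative assumptions.

def decide(sensors):
    """Stand-in for the robot's onboard decision policy."""
    if sensors["radiation"] > 100:
        return "retreat", "radiation above safe threshold"
    if sensors["obstacle_dist"] < 0.5:
        return "stop", "obstacle closer than 0.5 m"
    return "advance", "path clear"

def shadow_monitor(telemetry):
    """Replay telemetry through the same policy to see what it will do."""
    return [(frame["t"], *decide(frame)) for frame in telemetry]

telemetry = [
    {"t": 0, "radiation": 12, "obstacle_dist": 2.0},
    {"t": 1, "radiation": 12, "obstacle_dist": 0.3},
    {"t": 2, "radiation": 150, "obstacle_dist": 2.0},
]
for t, action, reason in shadow_monitor(telemetry):
    print(f"t={t}: {action} ({reason})")
```

Because both copies run identical code on identical inputs, the log answers "why did you do that?" without needing the robot itself to explain anything.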