How Can Engineers Stop AI from Going Rogue?

Doomsday scenarios aside, rogue AI still presents significant hazards to safety and productivity in the workplace and in public. And researchers are actively developing means of keeping artificial intelligence in check.

Problem solved, right? Well, while Google's safe interruptibility concept works for an individual AI, the EPFL researchers discovered in their own work that it isn't possible to generalize Google's proposed interruptibility method to a multi-agent system. Essentially, systems in which AI agents communicate with other AI agents compound the problem of interruptibility significantly.

“AI is increasingly being used in applications involving dozens of machines, such as self-driving cars on the road or drones in the air,” Alexandre Maurer, one of the authors of the EPFL study, said. “That makes things a lot more complicated, because the machines start learning from each other – especially in the case of interruptions. They learn not only from how they are interrupted individually, but also from how the others are interrupted.”

In their own paper, the EPFL researchers outline the potential danger of this with an example using two autonomous cars. Let's say you have two autonomous vehicles on a narrow road, one behind the other. If the human driver in the front car is constantly taking control of the vehicle (interrupting the AI), it creates a pattern of behavior that the second car must then adapt and adjust to. If the first car brakes too often, for example, it can create confusion in the AI of the second car as to when it should be braking or slowing down, which could result in a collision. The researchers also caution that malicious use must be taken into account. What if the driver of the front car deliberately starts driving erratically to confuse the AI of the second car?

To address these issues within multi-agent systems, the EPFL researchers have introduced what they call “dynamic safe interruptibility.” EPFL's solution tackles the multi-agent problem on two levels, corresponding to the two classes of AI agents the researchers identified: “joint action learners” and “independent learners.” Joint action learners are AI agents that all simultaneously observe the outcome of a given joint action and each learn from it separately. Independent learners, as their name suggests, don't learn directly from each other, but rather focus on learning their tasks individually. In this case agents will still adapt to behaviors performed by others, but without any direct coordination.

Imagine factory robots on an assembly line. In a joint action learner system, each robot communicates with the others on the assembly line to let them know what it's doing, so they can learn and adjust their own actions accordingly. The robots act as an interconnected group looking to assemble the best possible final product. In an independent learner system, these same robots are only focused on the task in front of them, and any learning occurs as an adaptation to what another robot in the system may have done rather than through direct communication. In both of these systems, however, interruptions can create problematic behaviors that can propagate through the system.
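To make the distinction between the two agent classes concrete, here is a minimal sketch in a tabular Q-learning setting. The class names, parameters, and update rules are illustrative assumptions for exposition only, not the EPFL authors' implementation.

```python
from collections import defaultdict

# Illustrative sketch only: a tabular Q-learning view of the two agent
# classes described above. Names and update rules are assumptions, not
# the EPFL paper's actual algorithms.

class JointActionLearner:
    """Learns over joint actions: it observes what every agent did."""
    def __init__(self, joint_actions, alpha=0.1, gamma=0.9):
        self.q = defaultdict(float)         # (state, joint_action) -> value
        self.joint_actions = joint_actions  # e.g. tuples of per-agent actions
        self.alpha, self.gamma = alpha, gamma

    def update(self, state, joint_action, reward, next_state):
        # Every agent observes the same joint outcome and updates on it.
        best_next = max(self.q[(next_state, a)] for a in self.joint_actions)
        key = (state, joint_action)
        self.q[key] += self.alpha * (reward + self.gamma * best_next - self.q[key])


class IndependentLearner:
    """Learns only over its own actions; other agents are felt indirectly
    through the rewards and state transitions this agent experiences."""
    def __init__(self, actions, alpha=0.1, gamma=0.9):
        self.q = defaultdict(float)         # (state, own_action) -> value
        self.actions = actions
        self.alpha, self.gamma = alpha, gamma

    def update(self, state, own_action, reward, next_state):
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        key = (state, own_action)
        self.q[key] += self.alpha * (reward + self.gamma * best_next - self.q[key])
```

In the joint action case, a human interruption of one robot changes the joint outcome every robot learns from; in the independent case, it leaks in more subtly through the rewards and transitions the other robots see. That is why interruptions can propagate in both setups.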

The EPFL researchers found that in both of these cases it is possible to achieve dynamic safe interruptibility – in which the system of AI agents as a whole doesn't learn from interruptions happening to any one AI in the system – by applying a pruning technique that detects interruptions and instructs agents not to learn from them. If you think of the machines as a family and the interruptions as punishment from a parent, then the researchers allow one child to be punished without that punishment affecting all of the other children as well.
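In code, the core idea can be sketched as a gate on the learning update. The interruption flag, agent interface, and function name below are assumptions for illustration, not the paper's actual algorithm:

```python
# Hedged sketch of the "don't learn from interruptions" idea described above.
# The interruption flag and agent interface are illustrative assumptions.

def learning_step(agent, state, action, reward, next_state, interrupted):
    """Apply a normal learning update unless a human interruption occurred."""
    if interrupted:
        # Prune this transition: the agent still obeys the human override,
        # but the experience is not folded into what it has learned, so the
        # interruption cannot propagate as a learned behavior through the
        # rest of the system.
        return
    agent.update(state, action, reward, next_state)
```

The design choice is that interruptions still control behavior in the moment; they are simply excluded from the training signal.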

“Simply put, we add ‘forgetting’ mechanisms to the learning algorithms that essentially delete bits of a machine’s memory. It’s kind of like the flash device in Men in Black,” El Mahdi El Mhamdi, another author of the EPFL study, said.

“We worked on existing algorithms and showed that dynamic safe interruptibility can work no matter how complicated the AI system is, the number of robots involved, or the type of interruption. We could use it with the Terminator and still have the same results,” EPFL's Maurer added.

The researchers state that the next step for their work would be to study how dynamic safe interruptibility operates when more complicated neural networks are used in place of reinforcement learning. Neural networks present their own challenges in terms of interruptions because they can be more predictive. If a neural network algorithm starts to make predictions based on when it is interrupted, it can create all sorts of erratic behaviors in a machine. “A smart experience replay mechanism that would pick observations for which the agents have not been interrupted for a long time more often than others is likely to solve this issue,” the EPFL paper suggests. “More generally, experience replay mechanisms that compose well with safe interruptibility could allow to compensate for the extra amount of exploration needed by safely interruptible learning by being more efficient with data.”
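One way to picture that suggestion is a replay buffer that favors transitions recorded far from any interruption. The buffer structure and weighting below are assumptions sketched for illustration, not the authors' design:

```python
import random
from collections import deque

# Hypothetical sketch of the replay idea quoted above: sample transitions
# from stretches where the agent was *not* recently interrupted more often.

class InterruptionAwareReplayBuffer:
    """Replay buffer that favors experience recorded long after the last
    interruption, diluting any pattern tied to when interruptions happen."""
    def __init__(self, capacity=10_000):
        self.buffer = deque(maxlen=capacity)

    def add(self, transition, steps_since_interruption):
        # Remember how long the agent had run uninterrupted at record time.
        self.buffer.append((transition, steps_since_interruption))

    def sample(self, batch_size):
        # Weight samples so long-uninterrupted experience is replayed more
        # often than experience recorded right around an interruption.
        items = list(self.buffer)
        weights = [1 + steps for _, steps in items]
        picks = random.choices(items, weights=weights, k=batch_size)
        return [transition for transition, _ in picks]
```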


Chris Wiltz is a Senior Editor at Design News, covering emerging technologies including AI, VR/AR, and robotics.

[Main image source: Pixabay]
