
Should You Trust a Rescue Robot? A New Study Says Probably Not

A new Georgia Tech study reveals that people trust robots way too much, even when their own safety is jeopardized.

"We welcome our robot overlords" jokes aside, it turns out many people trust robots way too much for their own safety. The humans' safety, that is. A study by researchers at Georgia Tech found that people tend to assume robots are right, even when they're acting wrong.

Researchers at the Georgia Tech Research Institute (GTRI) wanted to find out whether the occupants of a building would trust an "Emergency Guide Robot" designed to help people evacuate a high-rise structure during an emergency, such as a fire. They were surprised to discover that test subjects in a simulated disaster not only followed the robot's instructions, but continued to treat the machine as infallible even after it had demonstrated its unreliability, and even after they'd been told it was broken.


Should you trust a helpful guide robot during emergencies? For your own safety, the answer is probably "Not as much as other people do."
(Source: Georgia Tech)

The study's authors believe it to be the first on human-robot trust in an emergency. The GTRI team presented a paper on its findings, "Overtrust of Robots in Emergency Evacuation Scenarios," on March 9 at the 2016 ACM/IEEE International Conference on Human-Robot Interaction in Christchurch, New Zealand.

Rescue robots with various abilities and configurations have been in development for several years. The first robot to enter the Fukushima plant and provide the first glimpses inside after the March 2011 earthquake and tsunami was iRobot's 510 PackBot. That highly specialized, hand-carryable, 24-lb machine can climb stairs, roll over rubble, and navigate narrow passages. Entries in the DARPA Robotics Challenge have also been designed as independent, autonomous, untethered robots that could rescue humans in post-disaster scenarios.


Georgia Tech researchers built this rescue robot to determine whether or not building occupants would trust a robot designed to help them evacuate a high-rise in case of fire or other emergency.
(Source: Rob Felt, Georgia Tech)

During GTRI's simulated emergency, a group of 42 volunteers was asked to follow a brightly colored robot, controlled by a hidden researcher, to a conference room. The robot had a big label on its side that read "Emergency Guide Robot." The subjects, mostly college students, were not informed of the study's actual purpose.

The researchers designed several different scenarios in which the robot misbehaved. In some, it led the subjects into the wrong room and went around in circles twice before guiding them to the conference room. In others, it stopped moving altogether and an experimenter told the subjects it had broken down. Once the subjects had entered the conference room and closed the door, the hallway through which they had entered the building filled with artificial smoke, which set off a smoke alarm. Upon opening the door, the subjects saw the smoke, and the robot pointing with its brightly lit arms toward an exit at the back of the building, instead of toward the clearly marked exits through which they'd originally entered.


A long camera exposure shows how the arms of the rescue robot give directions to building occupants in case of fire or other emergency.
(Source: Rob Felt, Georgia Tech)

Researchers expected that, once the robot had failed to guide volunteers reliably to the conference room, they would stop trusting it. But no matter how it had performed, all of the subjects continued to follow the machine's instructions. Only when the robot made obvious errors, such as directing subjects toward a darkened room blocked by furniture, did some of them question its directions.

"People seem to believe that these robotic systems know more about the world than they really do, and that they would never make mistakes or have any kind of fault," Alan Wagner, a senior GTRI research engineer and co-author of the paper said in an interview with Georgia Tech's News Center. "In our studies, test subjects followed the robot’s directions even to the point where it might have put them in danger had this been a real emergency."

Future research will be aimed at learning why so many subjects trusted the robot, and whether those who didn't differ from those who did in identifiable ways, such as demographics or education level. The researchers also plan to investigate how robots can signal how much trust they actually deserve.


Ann R. Thryft is senior technology editor, materials & assembly, for Design News. She's been writing about manufacturing- and electronics-related technologies for 28 years, covering manufacturing materials & processes, alternative energy, machine vision, and all kinds of communications.

