Understanding human trust in robots is increasingly important as we enter an age of self-driving cars and artificial intelligence. While science fiction suggests people have an inherent mistrust of robots, researchers from the Georgia Institute of Technology recently found that humans may trust robots too much in high-stress situations.
The goal of the Georgia Tech study was to determine whether people living in high-rise buildings would follow a robot to safety during an emergency. The scientists presented their findings Wednesday at the 2016 ACM/IEEE International Conference on Human-Robot Interaction and issued a news release ahead of the conference.
The researchers created a robot with "Emergency Guide Robot" printed prominently on its side, bright LED lights and white "arms" that served as pointers. A hidden researcher controlled the robot.
The researchers recruited 26 participants for the study. All of the participants entered the building at the front, under clearly marked exit signs. There was also an exit at the back of the building, but it was significantly farther away. The scientists gave the subjects no information about the study other than a request to follow a robot to a room where they would take a survey.
The scientists had programmed the robot to display incompetence to half of the participants, leading them to the wrong room and circling before finding the right one. While it would seem unwise to follow an incompetent robot, all 26 subjects did, even after a fire alarm went off and the room filled with fake smoke. The participants continued following the robot even after it directed them away from the exit signs.
In a follow-up survey, 81 percent of subjects said they followed the robot because they trusted it. The remaining participants cited reasons other than trust, such as believing the emergency was not real or that they had no other choice.
Surprised by the results, the researchers wanted to see just how incompetent the robot had to be before people would stop trusting it. The scientists recruited 16 new subjects for three small exploratory studies, designed to be compared with the original study rather than with each other.
The first group of five participants watched the robot break down as it initially tried to lead them to the conference room. The experiment coordinator said, "Well, I think the robot is broken again." Yet when the fake fire started, all five subjects still followed the robot's directions.
"We expected that if the robot had proven itself untrustworthy in guiding them to the conference room, that people wouldn't follow it during the simulated emergency," said Paul Robinette, a research engineer who conducted the study. "Instead, all of the volunteers followed the robot's instructions, no matter how well it had performed previously. We absolutely didn't expect this."
The robot also broke down while leading the second group of five participants, but this time it stopped midway, remaining motionless with its arms pointing toward the back exit as the researcher apologized for the breakdown. The robot never moved when the fire alarm went off, yet the participants still followed its directions, heading toward the more distant back exit instead of leaving through the closer front door.
The third group also witnessed the breakdown and heard the researcher declare the robot broken. During the emergency, the robot led the group into a dark room with no visible exit and a large piece of furniture blocking the door. Two of the six participants entered the room, and researchers had to "retrieve" two more after realizing those subjects would not leave the robot.
It may be that people view robots as helpful authority figures in times of stress, and that this trust leads humans to overlook past failures. Another theory is that people attend to the most salient cues in an emergency, even when those cues steer them toward danger.