How to Improve Robots’ Social Skills
Researchers at MIT have created a simulated environment using computer modeling that can teach robots to better interact with each other and ultimately humans.
November 29, 2021
While robots are becoming sophisticated enough to perform more and more tasks typically reserved for humans, one thing they haven’t quite mastered is social skills.
That could change in the future with help from new technologies like the one recently developed by researchers at MIT. A team at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) has developed a framework for robotics, based on a computer model, that incorporates certain social interactions and directs how robots can better interact with one another.
The technology also can help the machines learn to perform these social behaviors on their own, using a simulated environment that creates realistic and predictable social interactions.
Though the technology is aimed at enabling robots to better interact with each other, it could one day lead to smoother and safer interactions between robots and humans, explained Boris Katz, principal research scientist and head of CSAIL’s InfoLab Group and a member of the Center for Brains, Minds, and Machines (CBMM).
“Robots will live in our world soon enough, and they really need to learn how to communicate with us on human terms,” he said in a press statement. “They need to understand when it is time for them to help and when it is time for them to see what they can do to prevent something from happening.”
The Framework
Specifically, the researchers developed an environment in which robots pursue physical and social goals as they move around a two-dimensional grid.
They designed each physical goal to relate to the environment, while each social goal involves guessing what another robot is trying to do and then acting on that guess. In the environment, a robot watches its companion, guesses what task it wants to accomplish, and then helps or hinders the other robot based on its own goals.
For example, a robot’s physical goal might be to navigate to a tree at a certain point on the grid. Another robot may try to guess what that robot will do next—such as water the tree—and then act in a way that helps or hinders that goal, depending on its own goals.
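To make the setup concrete, here is a minimal Python sketch of a grid world of the kind described, with a robot pursuing a physical goal. The class names, grid coordinates, and movement rule are illustrative assumptions, not the CSAIL team’s actual code.

from dataclasses import dataclass

@dataclass
class Robot:
    position: tuple        # (row, col) on the grid
    physical_goal: tuple   # e.g., the cell containing the tree

def step_toward(robot: Robot) -> tuple:
    """Move one cell closer to the robot's physical goal, one axis at a time."""
    r, c = robot.position
    gr, gc = robot.physical_goal
    if r != gr:
        r += 1 if gr > r else -1
    elif c != gc:
        c += 1 if gc > c else -1
    return (r, c)

# Example: a robot starting at the origin and heading for a tree at cell (7, 3).
gardener = Robot(position=(0, 0), physical_goal=(7, 3))
while gardener.position != gardener.physical_goal:
    gardener.position = step_toward(gardener)
print(gardener.position)  # (7, 3)

Each call to step_toward moves the robot one cell closer along the row, then the column, until it reaches its goal; a companion robot watching this trajectory could then guess the goal and decide to help or hinder.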
“We have opened a new mathematical framework for how you model social interaction between two agents,” explained Ravi Tejwani, a research assistant at CSAIL, in a press statement. “Our formulation allows the plan to discover the ‘how’; we specify the ‘what’ in terms of what social interactions mean mathematically.”
Reward-Based System
In the system they’ve created, the researchers use their model to specify a robot’s physical goals, its social goals, and how much emphasis it should place on one over the other.
The researchers defined three types of robots in the framework: a level 0 robot, which has only physical goals and cannot reason socially; a level 1 robot, which has physical and social goals but assumes all other robots have only physical goals; and a level 2 robot, which assumes other robots have both social and physical goals.
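One way to picture this hierarchy is as nested models of other agents, where each level embeds an assumption about the level below it. The following sketch is an illustrative guess at how that structure could be encoded, not the paper’s implementation.

class Level0Robot:
    """Pursues only physical goals; does no social reasoning."""
    def __init__(self, physical_goal):
        self.physical_goal = physical_goal

class Level1Robot(Level0Robot):
    """Adds social goals, but models every other robot as level 0."""
    def __init__(self, physical_goal, social_goal):
        super().__init__(physical_goal)
        self.social_goal = social_goal
        self.model_of_others = Level0Robot

class Level2Robot(Level1Robot):
    """Assumes other robots also have both social and physical goals."""
    def __init__(self, physical_goal, social_goal):
        super().__init__(physical_goal, social_goal)
        self.model_of_others = Level1Robot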
The model rewards a robot for actions it takes that get it closer to accomplishing its goals. If a robot is trying to help another robot, it adjusts its reward to match that of its companion; if it is trying to hinder, it adjusts its reward to be the opposite.
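In code, that help-or-hinder rule could look like the sketch below, where a weight blends the physical and social terms, as in the emphasis setting described earlier. The function names, stance labels, and weighting scheme are assumptions for illustration, not the paper’s exact formulation.

def social_reward(estimated_companion_reward: float, stance: str) -> float:
    """Helping mirrors the companion's reward; hindering negates it."""
    if stance == "help":
        return estimated_companion_reward
    if stance == "hinder":
        return -estimated_companion_reward
    return 0.0  # a level 0 robot has no social term

def total_reward(physical: float, social: float, social_weight: float) -> float:
    """Blend the two goals; social_weight sets the emphasis on one over the other."""
    return (1.0 - social_weight) * physical + social_weight * social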
The system uses a planning algorithm that decides which actions the robot should take, continuously updating the reward to guide the robot toward a blend of physical and social goals.
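A simple way to realize such a planner is a greedy one-step lookahead that scores each available action with the blended reward and picks the best. This sketch assumes a hypothetical reward_fn and a five-action move set; the team’s actual planner may search further ahead.

ACTIONS = ["up", "down", "left", "right", "stay"]

def plan_step(robot_state, companion_state, reward_fn):
    """Pick the action whose estimated blended reward is highest right now."""
    best_action, best_value = None, float("-inf")
    for action in ACTIONS:
        value = reward_fn(robot_state, companion_state, action)
        if value > best_value:
            best_action, best_value = action, value
    return best_action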
Future Advancements
While the system currently handles strictly robot-to-robot interactions, it could one day lead to smoother and more positive human-robot interactions, Katz said.
“This is very early work and we are barely scratching the surface, but I feel like this is the first very serious attempt for understanding what it means for humans and machines to interact socially,” he said in a press statement. A paper on the team’s work is available online.
The researchers plan to build a more sophisticated system with 3D agents in an environment that allows many more types of interactions, such as the manipulation of household objects, they said. They also plan to modify the model to include environments where actions can fail.
The team also wants to incorporate a neural network-based robot planner into the model, one that learns from experience and performs faster. The researchers also plan to run an experiment to collect data about the features humans use to determine whether two robots are engaging in social interaction, to further advance the technology.
Elizabeth Montalbano is a freelance writer who has written about technology and culture for more than 20 years. She has lived and worked as a professional journalist in Phoenix, San Francisco, and New York City. In her free time, she enjoys surfing, traveling, music, yoga, and cooking.