Robots are becoming more useful and ubiquitous in industrial settings that demand they interact with a variety of objects in ways that suit their purposes. Now researchers at MIT have developed a new system that can improve how robots do this by mimicking how humans learn spatial relations.
The system—known as a learning-based particle simulator and developed at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL)—improves robots’ abilities to mold materials into target shapes and make predictions about interacting with solid objects and liquids, according to researchers.
|A new “particle simulator” developed by MIT researchers improves robots’ abilities to mold materials into simulated target shapes and interact with solid objects and liquids. This learning model gives robots a refined touch for industrial applications or for personal robotics, such as shaping clay or rolling sticky sushi rice. (Image source: MIT Computer Science and Artificial Intelligence Laboratory)|
Refining the Touch
The invention could give industrial robots a more refined touch, especially when interacting with delicate objects, researchers said. It also could have creative and novel applications in the development of personal robots, such as giving them the ability to model clay or roll sticky rice for sushi.
“Humans have an intuitive physics model in our heads, where we can imagine how an object will behave if we push or squeeze it,” explained Yunzhu Li, a graduate student in CSAIL who worked on the project. Based on this human intuition, we can learn to manipulate objects and engage in interactive tasks with them far beyond what’s currently possible with robots, he said.
The technology the MIT CSAIL team developed is aimed at giving robots a similar capability, Li said. “We want to build this type of intuitive model for robots to enable them to do what humans can do,” he said.
Learning From the Uncertain
The model works by learning to capture how small portions of different materials, or “particles,” interact when they’re poked and prodded. It does this by learning from data in cases in which the underlying physics of the movements are uncertain or unknown, not unlike how children begin to learn how to interact with their environment when they are just a few months old, said Jiajun Wu, a CSAIL graduate student who also worked on the technology.
Robots can then use the model as a guide to predict how liquids, as well as other types of materials, will react to the force of their touch, he said. The model also adapts the more a robot handles an object, helping to refine the machine’s control.
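The actual MIT system is a learned, graph-based particle simulator; as a rough illustration of the predict-then-adapt loop described above, here is a deliberately simplified toy sketch. All names and the one-parameter "stiffness" model are hypothetical stand-ins, not the researchers' method.

```python
# Toy sketch of a predict-then-refine loop for particle manipulation.
# A real learned simulator would use a neural network over particle
# interactions; here a single "stiffness" parameter stands in for it.

class ParticleModel:
    def __init__(self, stiffness=1.0):
        # stiffness: the model's current guess at how the material resists a poke
        self.stiffness = stiffness

    def predict(self, positions, force):
        # Predict each particle's new position under an applied force.
        return [p + force / self.stiffness for p in positions]

    def refine(self, positions, force, observed, lr=0.5):
        # After touching the object, compare prediction with observation and
        # adapt, mirroring how the model improves with more handling.
        predicted = self.predict(positions, force)
        error = sum(o - p for o, p in zip(observed, predicted)) / len(positions)
        # Particles moved more than predicted -> material is softer than assumed.
        self.stiffness /= (1.0 + lr * error)

model = ParticleModel(stiffness=2.0)
particles = [0.0, 0.1, 0.2]
prediction = model.predict(particles, force=1.0)  # guess before any contact
model.refine(particles, force=1.0, observed=[0.7, 0.8, 0.9])
```

After `refine`, the model's stiffness estimate drops because the observed displacements exceeded the prediction, so the next prediction anticipates a softer material.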
In experiments, a robotic hand with two fingers, called “RiceGrip,” worked with deformable foam meant to act as a stand-in for rice. The hand accurately shaped the foam to a desired configuration, such as a “T” shape, successfully showing how the model can work as a type of “brain” for robots to interact with objects similarly to humans, Li said.
The team plans to present a paper on its work at the International Conference on Learning Representations in May.
Researchers plan to continue their work on the model by expanding it to help robots better predict interactions in particular scenarios, such as how a pile of boxes will move when pushed, they said.
They also are exploring ways to combine the model with an end-to-end perception module by operating directly on images, which will allow robots to better interact with objects when they can only “see” a small portion of them, researchers said.
Elizabeth Montalbano is a freelance writer who has written about technology and culture for more than 20 years. She has lived and worked as a professional journalist in Phoenix, San Francisco and New York City. In her free time she enjoys surfing, traveling, music, yoga and cooking. She currently resides in a village on the southwest coast of Portugal.