The same MIT researchers who are helping the US military create robots that can autonomously generate 3D maps of their surroundings have developed similar technology that humans can wear to navigate new and potentially dangerous environments.
Researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) have built a wearable system that senses its wearer's environment and builds a digital map of the area as the person moves through it. The ultimate goal of the technology's development -- funded by the Air Force and the Office of Naval Research -- is to help emergency responders safely find their way through an unfamiliar area after a disaster, and possibly locate survivors, according to MIT.
MIT researchers have created a prototype of a wearable sensor that can create maps of a person's environment on the fly as they move through it. Researchers from the university's Computer Science and Artificial Intelligence Laboratory, which based the technology on a previously designed robotic platform, envision emergency responders using the device to navigate disaster sites. (Source: MIT)
The prototype sensor platform consists of several small devices affixed to an iPad-sized sheet of hard plastic, which the wearer straps to the chest like a backpack worn backward.
Researchers tested the system on a graduate student who wandered through MIT hallways while the system's sensors sent data over a wireless connection to a laptop in a conference room away from the scene. As the student walked, the system drew a map of his route on the laptop, allowing the people in the room to track his progress. The technology is based on a navigation system CSAIL engineers have been developing to let robots autonomously move through new and changing environments.
That system uses a low-cost camera -- such as the one in Microsoft's Kinect motion-sensing input device -- to create images of the environment. It also uses algorithms based on simultaneous localization and mapping (SLAM), which let the robots constantly update the map and keep track of their own location within it as they take in new information. In fact, the Kinect has become a driver of artificial-intelligence work aimed at helping robots interact with their environments more effectively and autonomously. Engineers have even started a crowdsourcing project that lets people use their Kinect cameras to easily create 3D scans of anything around them, in the hope of making it easier to program environment-sensing AI into robots.
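The mapping half of that SLAM loop is easy to sketch. The fragment below is a minimal illustration, not CSAIL's implementation: it assumes the sensor's pose is already known (estimating the pose is the "localization" half of SLAM) and simply marks the grid cells struck by one sweep of range readings. All names are illustrative.

```python
import math

def update_map(grid, pose, ranges, cell_size=0.1):
    """Mark the grid cells hit by one sweep of range readings.

    grid      -- dict mapping (ix, iy) cell indices to hit counts
    pose      -- (x, y, heading) of the sensor in the world frame
    ranges    -- list of (bearing, distance) readings, radians/meters
    cell_size -- grid resolution in meters
    """
    x, y, heading = pose
    for bearing, dist in ranges:
        # Project each reading from the sensor frame into world coordinates.
        wx = x + dist * math.cos(heading + bearing)
        wy = y + dist * math.sin(heading + bearing)
        cell = (int(wx // cell_size), int(wy // cell_size))
        grid[cell] = grid.get(cell, 0) + 1
    return grid

# One sweep taken from the origin, facing along the x axis:
# three readings of a wall about 2 m ahead.
grid = update_map({}, (0.0, 0.0, 0.0),
                  [(-0.1, 2.0), (0.0, 2.0), (0.1, 2.1)])
print(sorted(grid))
```

A real SLAM system closes the loop by feeding the growing map back into the pose estimate, so that both improve together as new readings arrive.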
To turn the camera-based robotic sensing platform into a wearable device, so humans, too, could map their environment on the fly, MIT researchers made a number of modifications. For example, one of the system's sensors is a laser rangefinder that sweeps a laser beam in an arc and measures the time it takes the light pulses to return, from which it calculates the distance to walls. However, a human -- particularly one moving through the rubble left by a disaster -- jostles the sensor far more than a robot does, degrading the accuracy of the readings.
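The rangefinder's distance measurement itself is simple time-of-flight arithmetic; the jostling problem lies in knowing which way the beam was pointing when each pulse fired. A minimal sketch of the distance step:

```python
# Speed of light in a vacuum, meters per second (air is close enough here).
C = 299_792_458.0

def tof_distance(round_trip_seconds):
    """Distance to a surface from a laser pulse's round-trip time.

    The pulse travels out and back, so the one-way distance is
    half the total path the light covers.
    """
    return C * round_trip_seconds / 2.0

# A wall 5 m away returns the pulse in roughly 33 nanoseconds.
print(tof_distance(33.36e-9))  # ~5.0 m
```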
The robot also has sensors in its wheels that provide distance information, which a person can't supply. And someone responding to a disaster might have to traverse several levels of a building, so a map-generating sensor must recognize changes in altitude.
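On the robot, those wheel sensors reduce to a simple odometry computation, which is the signal the wearable version has to replace with inertial sensing. A hypothetical sketch of the wheel side (tick count and wheel size are made-up values):

```python
import math

def wheel_distance(ticks, ticks_per_rev=360, wheel_radius=0.05):
    """Distance rolled by one wheel, computed from encoder ticks.

    ticks_per_rev and wheel_radius (meters) depend on the robot;
    the values here are illustrative only.
    """
    revolutions = ticks / ticks_per_rev
    return revolutions * 2.0 * math.pi * wheel_radius

# 1800 ticks = 5 revolutions of a 5 cm-radius wheel, about 1.57 m traveled.
print(wheel_distance(1800))
```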
To adapt the robot system into a wearable one, researchers added a series of accelerometers and gyroscopes and a camera, and also experimented with a barometer, since changes in air pressure help indicate a change in floor level, the researchers said.
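The barometer idea can be illustrated with the international barometric formula, which converts pressure into altitude; dividing the altitude change by a typical floor height gives a floor-change estimate. This is a sketch of the principle, not the researchers' code, and the 3 m floor height is an assumption:

```python
def altitude_m(pressure_hpa, sea_level_hpa=1013.25):
    """Altitude in meters from pressure, via the international
    barometric formula for the standard atmosphere."""
    return 44330.0 * (1.0 - (pressure_hpa / sea_level_hpa) ** (1.0 / 5.255))

def floors_changed(p_before_hpa, p_after_hpa, floor_height_m=3.0):
    """Estimate how many building floors were climbed (positive)
    or descended (negative) between two pressure readings."""
    delta = altitude_m(p_after_hpa) - altitude_m(p_before_hpa)
    return round(delta / floor_height_m)

# Pressure drops roughly 0.36 hPa per 3 m floor near sea level.
print(floors_changed(1013.25, 1012.89))  # 1
```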
One key commonality between the two systems is the camera. It snaps photos of the environment every few meters, and software extracts about 200 visual features from each image, associating them with particular locations on the map.
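MIT hasn't said which feature detector the system uses, so the sketch below stands in OpenCV's ORB detector, capped at the article's figure of roughly 200 features per image (the file name is a placeholder):

```python
import cv2  # OpenCV: pip install opencv-python

def extract_features(image_path, max_features=200):
    """Detect up to max_features keypoints and compute their descriptors."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        raise FileNotFoundError(image_path)
    orb = cv2.ORB_create(nfeatures=max_features)
    keypoints, descriptors = orb.detectAndCompute(img, None)
    return keypoints, descriptors

# "hallway.jpg" is a placeholder for one snapshot taken along the walk.
kps, descs = extract_features("hallway.jpg")
print(f"{len(kps)} features extracted")
```

Matching those descriptors against features seen earlier is what lets the system tie a new image back to a known spot on the map.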
Given what we've all witnessed these last few days with Hurricane Sandy, it really turns a spotlight on the utility of this kind of technology in disaster relief efforts. Whether it's hurricanes or earthquakes or even mine disasters, anything that helps rescuers zero in on potential victims faster and save lives is a bonus.
Beth, how true. I could imagine expanding and scaling this technology to other "labyrinthine" environments. This could include mining operations, automated warehousing applications, cave mapping, outdoor search and rescue operations, firefighting, maybe even archaeological sites as well. Pretty cool development.
I agree, Beth, I find this type of technology particularly interesting. Not only will it advance robotics development, allowing machines to sense their environment and become more intuitive, it will also help people in these types of disaster-relief scenarios. It's always exciting to see technology that can actually make a difference in very real-world situations, particularly after something like Hurricane Sandy happens.
Beth, you are right. Such remote camera devices are very useful in disaster areas where human intervention is not possible. I think during the tsunami disaster in Japan, they used similar technology (camera-equipped robots) for monitoring the atomic reactors. Such technologies have wide application in space, too.
Elizabeth, this is much like a project I saw many years ago at an IBM facility. The researchers were highlighting a message queueing mechanism. They used a Lego robotics kit, which had some basic sensors and actuators. The idea was to send one robot into a maze first. It would report on blockages and try different paths. This data was fed back to a computer which built up a map with the information. Other robots which followed would "subscribe" to the queue with the map information and would be able to navigate the maze without running into anything.
Back in college I interviewed for a job where a bunch of computer geniuses were working on programs, and one of their fun little projects was a learning program that could find its way through a maze. Kind of neat to see how something like this could combine with something like that to save lives.
Elizabeth, I think this is a modified version of the "eagle eye" approach, where cameras are attached to moving objects to track their motion. Images from the camera are transmitted through a small transmitter to a receiver at the base station, and the signals are then plotted on a map for path analysis.