From mowing the lawn to tracking hurricanes and providing therapeutic treatments, robots are undoubtedly becoming an integral part of the way we do things. But to help these machines interact effectively with their environment, engineers need to provide them with artificial intelligence tools that can help them recognize the things they may encounter.
Enter a new crowdsourcing project that makes it easy for anyone with a Microsoft Kinect motion-sensing device to make 3D scans of anything and everything around them. Robotics engineers plan to use the resulting catalog of scans to help them build object-recognition intelligence into robots.
The Website for the Kinect@Home project asks volunteers to "help the robots." Using their Kinect, volunteers produce 3D models that can be shared on the site or with friends, embedded into other Websites, or imported into 3D modeling software to be used in other applications.
The project provides instructions, a plug-in, drivers, and installation software. Once the software is installed on a local computer, people can point their Kinect at whatever they want to scan and begin creating their models.
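To give a sense of what happens under the hood of a scan like this, here is a minimal sketch of grabbing a single depth frame from a Kinect and back-projecting it into a 3D point cloud. This is not the Kinect@Home plug-in itself; it assumes the open-source libfreenect Python bindings and the Open3D library, and it uses a commonly cited community calibration to convert raw Kinect depth readings to meters.

```python
import freenect          # Python bindings for the open-source libfreenect driver
import numpy as np
import open3d as o3d

# Grab one raw 11-bit depth frame (480x640) from the first attached Kinect.
raw, _timestamp = freenect.sync_get_depth()

# Convert raw disparity values to meters using a widely circulated community
# calibration for the Kinect v1 (approximate; per-device calibration is better).
depth_m = 1.0 / (raw.astype(np.float32) * -0.0030711016 + 3.3309495161)
depth_mm = np.clip(depth_m * 1000.0, 0, 10000).astype(np.uint16)

# Back-project the depth image into a point cloud. The PrimeSenseDefault
# intrinsics (640x480, focal length ~525 px) match the Kinect v1 sensor.
intrinsic = o3d.camera.PinholeCameraIntrinsic(
    o3d.camera.PinholeCameraIntrinsicParameters.PrimeSenseDefault)
pcd = o3d.geometry.PointCloud.create_from_depth_image(
    o3d.geometry.Image(depth_mm), intrinsic, depth_scale=1000.0)

o3d.io.write_point_cloud("scan.pcd", pcd)  # save the frame for later use
```

A full 3D model fuses many such frames from different viewpoints; this sketch only shows the single-frame capture step.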
Microsoft released the first Kinect for its Xbox 360 video game console in late 2010. A version for Windows came out early this year. Microsoft has sold more than 18 million units of the device, which lets people interact with the Xbox or a Windows PC using gestures and spoken commands.
Although storing image data isn't exactly small or cheap in terms of memory, I think the basic idea here is analogous to that of machine vision image libraries, where the machine vision user builds up a database of images of the objects to be inspected on the line, such as PC boards and the components on them.
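To make that analogy concrete, here is a toy sketch of such an image library in Python: reference images are enrolled under part labels, and a new camera frame is classified by nearest cosine similarity. The function names and the flattened-image "feature" are illustrative only; real machine vision systems use far more robust descriptors.

```python
import numpy as np

# Toy "image library": label -> list of reference feature vectors.
library = {}

def to_feature(image):
    """Flatten and L2-normalize an image; assumes all images share one size."""
    f = image.astype(np.float32).ravel()
    return f / (np.linalg.norm(f) + 1e-9)

def add_reference(label, image):
    """Enroll a golden-sample image under a part label (e.g., 'pcb_rev_b')."""
    library.setdefault(label, []).append(to_feature(image))

def classify(image):
    """Return the library label whose references best match the image."""
    q = to_feature(image)
    best_label, best_score = None, -1.0
    for label, refs in library.items():
        score = max(float(q @ r) for r in refs)  # cosine similarity
        if score > best_score:
            best_label, best_score = label, score
    return best_label, best_score

# Hypothetical usage:
#   add_reference("pcb_rev_b", board_image)   # enroll golden samples
#   classify(camera_frame)                    # -> ("pcb_rev_b", 0.97)
```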
The idea is to create 3D scans of various objects to help teach robots about their environment and the objects in it, so they can navigate that environment and manipulate those objects, from refrigerators to people. An example given in this IEEE Spectrum article (http://spectrum.ieee.org/automaton/robotics/robotics-hardware/kinecthome-wants-to-start-3d-scanning-the-world) is teaching a robot to open a fridge door. First, the robot has to have a map of the fridge door and how it operates. If the robot is Kinect-equipped, as many now are (in R&D, anyway), it can use 3D images for those maps. But fridges aren't all the same size, don't have the same kind of door, and the doors aren't always on the same side of the box. So the robot needs an image library for each object: lots and lots of images, as in the sketch below.
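Here's a hedged sketch of the matching step that example implies: given a small library of reference fridge scans, align the robot's new scan against each one with ICP (via Open3D, my choice of tool, not necessarily the project's) and keep the best-fitting model, which would carry the door geometry and hinge side. The file names are hypothetical.

```python
import numpy as np
import open3d as o3d

def best_matching_model(query_path, model_paths, threshold=0.02):
    """Align a new scan against each reference scan with point-to-point ICP
    and return the model with the highest inlier fitness. In practice a
    global registration step would seed the ICP with a rough alignment."""
    query = o3d.io.read_point_cloud(query_path)
    best_path, best_fitness = None, -1.0
    for path in model_paths:
        model = o3d.io.read_point_cloud(path)
        reg = o3d.pipelines.registration.registration_icp(
            query, model, threshold, np.eye(4),
            o3d.pipelines.registration.TransformationEstimationPointToPoint())
        if reg.fitness > best_fitness:
            best_path, best_fitness = path, reg.fitness
    return best_path, best_fitness

# Hypothetical usage:
#   best_matching_model("new_fridge.pcd",
#                       ["fridge_left_hinge.pcd", "fridge_right_hinge.pcd"])
```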
This is an interesting project. The Kinect is a versatile device with many uses, and building up a database this way is an outstanding way to gather a large mass of information in a short time. In AI it is very beneficial to have a large training set. Frankly, this is true of us humans as well.
Looks pretty cool, and I like the crowdsourcing angle a ton, but I'm not really sure what kinds of scans are being collected with the Kinect. Is it scans of people, physical objects, movements? I'm also curious how this data is being fed back to robotics designers for future use. My guess is through the site community, but I just wanted to confirm.