Al, thanks for those additional details on the Kinect sensor's limitations. The fact that it doesn't detect objects less than two feet away should not be a deterrent to its use in checking out a new environment for military tasks, such as in advance of first responders. But I'm surprised that it doesn't work well in sunlight--that seems like a major limitation for these applications, and for helping the elderly or disabled, two applications the MIT team mentions, both of which occur at least partly in sunlight. I would not be surprised if this research team is also working on methods for overcoming that problem.
Ann, Used as a tool to aid in developing mobile robots, the Kinect sensor provides a unique type of feedback that can be used in conjunction with flexible I/O, software algorithms, and real-time controllers to quickly and easily prototype, test, and deploy robotic applications. The development tools already available make it great for prototyping. But while the Kinect is useful for common robot tasks such as obstacle avoidance, like most sensors it also has limitations. For example, the Kinect cannot detect obstacles closer than two feet and does not work well in sunlight. Still, great technology at a mind-boggling cost.
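One practical consequence of that two-foot near limit: depth readings below the minimum range (and dropped readings, which the sensor reports as zero) have to be discarded before any obstacle-avoidance logic runs on them. A minimal sketch, assuming the depth frame arrives as a NumPy array in millimeters (the array, threshold name, and values here are illustrative, not from any particular Kinect SDK):

```python
import numpy as np

# Assumed near limit: ~2 ft, roughly 600 mm. Readings below this are
# unreliable; a reading of 0 means the sensor got no return at all.
NEAR_LIMIT_MM = 600

def valid_depths(frame_mm: np.ndarray) -> np.ndarray:
    """Replace missing (0) and too-close readings with NaN."""
    mask = frame_mm >= NEAR_LIMIT_MM  # also excludes the 0 "no reading" case
    return np.where(mask, frame_mm, np.nan)

# Hypothetical 2x3 depth frame in millimeters:
frame = np.array([[0.0, 450.0, 800.0],
                  [1200.0, 300.0, 2500.0]])
cleaned = valid_depths(frame)
# Readings at 0, 450, and 300 mm become NaN; the rest pass through.
```

Downstream planning code can then treat NaN cells as "unknown" rather than as free space, which is the safer default near the sensor's blind zone.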
Al, I agree that obstacle-avoidance and map-making software is a big deal. Specifically, I mean the map-making/obstacle-avoidance algorithms based on Simultaneous Localization and Mapping (SLAM) techniques mentioned here, which may also be what's behind the tiny swarming robots' map-making ability:
One additional area of software innovation for mobile robots is algorithms for obstacle avoidance, especially in systems where the mobile robot will encounter humans. In the tire-warehousing application, for example, where the robot is "delivering" a completed tire to a storage/retrieval system, the robot can encounter workers during that delivery process. The software to control those interactions is interesting and also critical to the success of the application.
Ann, The key technology in the mobile robots I've seen is software enhancements and intelligent algorithms. Enhancements in vision systems, for example, provide the mechanism to visualize and ultimately "map" the factory environment, but in the end the most difficult task is the mass of intelligent software required. It ranges from becoming an expert system (gathering information to make more informed decisions) to advanced databases for storing information. Lots of software!
ChasChas, that's an interesting question you pose. But some of these newer robots will function autonomously, like this one--i.e., not under direct human control. So if they are designed as soldiers, not merely as explorers, the ethical situation changes somewhat.
was my first encounter with what are called autonomous robots, and the one in this story is my second. Both made me wonder where else that idea is being used, and what different technologies make them possible, in particular, the navigation and map-making abilities. The industrial environment is certainly an obvious choice.
For industrial control applications, or even a simple assembly line, such a machine can run almost 24/7 without a break. But what happens when the task is a little more complex? That’s where the “smart” machine comes in. The smart machine is one that has some simple (or, in some cases, complex) processing capability that lets it adapt to changing conditions. Such machines are suited to a host of applications, including automotive, aerospace, defense, medical, computers and electronics, telecommunications, consumer goods, and so on. This discussion will examine what’s possible with smart machines, and what tradeoffs need to be made to implement such a solution.