Al, thanks for those additional details on the Kinect sensor's limitations. The fact that it doesn't detect objects less than two feet away shouldn't be a deterrent to using it to check out a new environment for military tasks, such as scouting in advance of first responders. But I'm surprised that it doesn't work well in sunlight--that seems like a major limitation for these applications, and for helping the elderly or disabled, two applications the MIT team mentions that occur at least partly in sunlight. I would not be surprised if this research team is working on methods for overcoming that problem as well.
Ann, used as a tool to aid in developing mobile robots, the Kinect sensor provides a unique type of feedback that can be used in conjunction with flexible I/O, software algorithms, and real-time controllers to quickly and easily prototype, test, and deploy robotic applications. The development tools already available make it great for prototyping. But while the Kinect is useful for common robot tasks such as obstacle avoidance, like most sensors it also has limitations. For example, the Kinect cannot detect obstacles that are closer than two feet and does not work well in sunlight. Still, it's great technology at a mind-boggling cost.
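To make those limitations concrete, here's a minimal sketch of how a prototype might screen a Kinect-style depth frame before driving forward. Everything here is illustrative, not from the thread: the frame is a flat list of depth readings in millimeters, a reading of 0 stands in for "no valid range" (too close, too far, or washed out by sunlight), and the stopping threshold is made up.

```python
# Sketch: screening one Kinect-style depth frame for a naive
# obstacle-avoidance loop. Assumptions (hypothetical, for illustration):
# depths are in millimeters, and the sensor reports 0 for pixels it
# cannot range -- which includes anything inside its roughly two-foot
# (~610 mm) blind zone or washed out by direct sunlight.

NEAR_LIMIT_MM = 610       # approx. two feet, the sensor's blind zone
STOP_DISTANCE_MM = 1200   # hypothetical stopping threshold for the robot

def classify_pixel(depth_mm):
    """Classify a single depth reading."""
    if depth_mm == 0:
        # No reading: could be too close, too far, or sunlight washout.
        return "unknown"
    if depth_mm < STOP_DISTANCE_MM:
        return "obstacle"
    return "clear"

def frame_is_safe(frame, max_unknown_ratio=0.2):
    """A frame is safe to drive into only if nothing is inside the stop
    distance and not too many pixels are unreadable."""
    labels = [classify_pixel(d) for d in frame]
    if "obstacle" in labels:
        return False
    return labels.count("unknown") / len(labels) <= max_unknown_ratio
```

The `max_unknown_ratio` guard is the interesting design choice: because the Kinect's failure modes (blind zone, sunlight) both show up as missing data rather than wrong data, a cautious controller treats too much "unknown" as a reason to stop, not as clear space.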
Al, I agree that obstacle-avoidance and mapmaking software is a big deal. Specifically, the map-making/obstacle avoidance algorithms based on Simultaneous Localization and Mapping (SLAM) techniques mentioned here, which may also be what's behind the tiny swarming robots' mapmaking ability:
One additional area of software innovation for mobile robots is algorithms for obstacle avoidance. This matters especially in systems where the mobile robot will encounter humans. In the tire warehousing application, for example, the robot can encounter workers while "delivering" a completed tire to a storage/retrieval system. The software to control those interactions is interesting and also critical to the success of the application.
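The mapmaking half of a SLAM pipeline can be sketched very compactly. The toy below fuses range readings into a 1-D occupancy grid using log-odds updates; a real SLAM system would simultaneously estimate the robot's pose, but here the pose is assumed known, so only the "mapping" step is shown. The increment values are illustrative, not from any particular system.

```python
# Sketch of occupancy-grid mapping, the "M" in SLAM. Cells along a range
# beam are marked more likely free; the cell where the beam hit is marked
# more likely occupied. Log-odds make repeated evidence additive.
import math

L_OCC, L_FREE = 0.85, -0.4   # log-odds increments (illustrative values)

def update_grid(log_odds, robot_cell, hit_cell):
    """Apply one range reading: cells between the robot and the hit are
    observed free; the hit cell itself is observed occupied."""
    for c in range(robot_cell, hit_cell):
        log_odds[c] += L_FREE
    if hit_cell < len(log_odds):
        log_odds[hit_cell] += L_OCC
    return log_odds

def occupancy_probability(l):
    """Convert a cell's log-odds back to a probability in [0, 1]."""
    return 1.0 - 1.0 / (1.0 + math.exp(l))
```

Each new reading simply adds to the grid, which is why these maps sharpen as the robot re-observes the same area -- and why a swarm of small robots can merge their evidence into one shared map.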
Ann, the key technology with the mobile robots I've seen is software enhancements and intelligent algorithms. Enhancements in vision systems, for example, provide the mechanism to visualize and ultimately "map" the factory environment, but in the end the most difficult task is the mass of intelligent software required. It ranges from expert systems (gathering information to make more informed decisions) to advanced databases for storing information. Lots of software!
ChasChas, that's an interesting question you pose. But some of these newer robots will be functioning autonomously, like this one, i.e., not under direct human control. So if these are designed as soldiers, not as merely explorers, the ethical situation changes somewhat.
What should be the perception of a product’s real-world performance with regard to the published spec sheet? While it is easy to assume that the product will operate according to spec, what variables should be considered, and is that a designer obligation or a customer responsibility? Or both?
Biomimicry has already found its way into the development of robots and new materials, with researchers studying animals and nature to come up with new innovations. Now thanks to researchers in Boston, biomimicry could even inform the future of electrical networks for next-generation displays.