Al, thanks for those additional details on the Kinect sensor's limitations. The fact that it doesn't detect objects less than two feet away shouldn't be a deterrent to using it to check out a new environment for military tasks, such as in advance of first responders. But I'm surprised that it doesn't work well in sunlight--that seems like a major limitation for these applications, and for helping the elderly or disabled, two applications the MIT team mentions that occur at least partly in sunlight. I wouldn't be surprised if this research team is also working on ways to overcome that problem.
Ann, used as a tool to aid in developing mobile robots, the Kinect sensor provides a unique type of feedback that can be used in conjunction with flexible I/O, software algorithms, and real-time controllers to quickly and easily prototype, test, and deploy robotic applications. The development tools already available make it great for prototyping. But while the Kinect is useful for common robot tasks such as obstacle avoidance, like most sensors it also has limitations: it cannot detect obstacles closer than two feet and does not work well in sunlight. Still, it's great technology at a mind-boggling cost.
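To make the range limitation concrete, here's a minimal sketch of masking out Kinect depth readings that fall outside a usable window before feeding them to an avoidance routine. The exact limits below are assumptions for illustration (roughly the two-foot near limit mentioned above and an assumed far limit), not official specifications.

```python
# Sketch: filter a row of depth readings to the sensor's usable window.
# MIN_RANGE_M approximates the two-foot near limit; MAX_RANGE_M is an
# assumed far limit beyond which readings degrade. Both are illustrative.

MIN_RANGE_M = 0.6
MAX_RANGE_M = 4.5

def valid_depths(depth_row):
    """Keep only readings inside the usable window. A 0.0 reading marks
    'no return' -- common for surfaces that are too close or washed out
    by sunlight."""
    return [d for d in depth_row if MIN_RANGE_M <= d <= MAX_RANGE_M]

row = [0.0, 0.3, 1.2, 2.5, 5.1, 0.9]   # meters; a hypothetical scan line
print(valid_depths(row))                # -> [1.2, 2.5, 0.9]
```

In practice a robot would apply this mask to every row of the depth image, treating the dropped cells as "unknown" rather than "clear."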
Al, I agree that obstacle-avoidance and mapmaking software is a big deal. Specifically, the mapmaking/obstacle-avoidance algorithms based on Simultaneous Localization and Mapping (SLAM) techniques mentioned here, which may also be what's behind the tiny swarming robots' mapmaking ability:
One additional area of software innovation for mobile robots is algorithms for obstacle avoidance. This matters especially in systems where the mobile robot will encounter humans, such as the tire-warehousing application, where the robot "delivering" a completed tire to a storage/retrieval system can cross paths with workers along the way. The software to control those interactions is interesting and also critical to the success of the application.
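The worker-encounter case above can be sketched as a tiny reactive policy: stop if anything enters a safety bubble, otherwise steer toward the side with more clearance. The threshold and action names here are made up for illustration; a real system layers much more on top (speed scaling, predicted worker paths, and so on).

```python
# Minimal reactive obstacle-avoidance sketch. STOP_DIST_M is an assumed
# safety bubble; left_min/right_min are the nearest range readings (in
# meters) from the left and right halves of a depth scan.

STOP_DIST_M = 0.8

def choose_action(left_min, right_min):
    """Return a coarse action for one scan of range data."""
    if min(left_min, right_min) < STOP_DIST_M:
        return "stop"               # a worker or rack is too close: halt
    # Otherwise veer away from the side with the nearer obstacle.
    return "veer_right" if left_min < right_min else "veer_left"

print(choose_action(2.0, 0.5))      # -> stop
print(choose_action(1.5, 3.0))      # -> veer_right
```

Even this toy version shows why the interaction logic is critical: the "stop" branch has to win over every other behavior whenever a person is near.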
Ann, the key technology in the mobile robots I've seen is software enhancements and intelligent algorithms. Enhancements in vision systems, for example, provide the mechanism to visualize and ultimately "map" the factory environment, but in the end the most difficult task is the mass of intelligent software required. It ranges from expert systems (gathering information to make more informed decisions) to advanced databases for storing that information. Lots of software.
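The "mapping" piece Al describes is often built on an occupancy grid, the mapping half of SLAM in miniature: each range reading marks the cells the beam passed through as probably free and the cell where it stopped as probably occupied, accumulated in log-odds form. This is a toy 1-D sketch with illustrative sensor probabilities, not any particular product's algorithm.

```python
import math

# Toy 1-D occupancy-grid update. Each cell holds a log-odds value;
# 0.0 means unknown (probability 0.5). The 0.7/0.3 sensor model is an
# assumed, illustrative confidence for a single reading.
L_OCC = math.log(0.7 / 0.3)    # evidence that a cell is occupied
L_FREE = math.log(0.3 / 0.7)   # evidence that a cell is free

def update_grid(grid, hit_index):
    """Fold one range reading into the grid: the beam traversed cells
    0..hit_index-1 (likely free) and stopped at hit_index (likely occupied)."""
    for i in range(hit_index):
        grid[i] += L_FREE
    grid[hit_index] += L_OCC
    return grid

def occupancy(logodds):
    """Convert log-odds back to an occupancy probability."""
    return 1.0 - 1.0 / (1.0 + math.exp(logodds))

grid = [0.0] * 5                   # five unknown cells (p = 0.5 each)
update_grid(grid, 3)
update_grid(grid, 3)               # a second, agreeing reading
probs = [round(occupancy(v), 2) for v in grid]
print(probs)                       # -> [0.16, 0.16, 0.16, 0.84, 0.5]
```

The appeal of the log-odds form is exactly the "expert system" behavior mentioned above: repeated, agreeing readings push cells toward certainty, while conflicting readings cancel out.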
ChasChas, that's an interesting question you pose. But some of these newer robots will function autonomously, like this one, i.e., not under direct human control. So if they are designed as soldiers, not merely as explorers, the ethical situation changes somewhat.
Altair has released an update of its HyperWorks computer-aided engineering simulation suite that includes new features focusing on four key areas of product design: performance optimization, lightweight design, lead-time reduction, and new technologies.
At IMTS last week, Stratasys introduced two new multi-material PolyJet 3D printers, plus a new UV-resistant material for its FDM production 3D printers. These can be used to make jigs and fixtures, as well as prototypes and small production runs.
In a line of ultra-futuristic projects, DARPA is developing a brain microchip that will help heal the bodies and minds of soldiers. A final product is far off, but preliminary chips are already being tested.
Focus on Fundamentals consists of 45-minute online classes that cover a host of technologies. You learn without leaving the comfort of your desk. All classes are taught by subject-matter experts, and all are archived, so if you can't attend live, you can attend at your convenience.