Ann, this might mark me out as a bit weird, but I think about this a lot. Whenever I put the silverware away I think to myself, how would I program a robot to do this?
What really strikes me about this, and some other situations I have seen, is that people are programming robots to do things using a fairly simple vision system along with memory (a database) and an algorithm. This contrasts with robotics approaches that use all kinds of complex sensors. In many cases they are trying to automate something we do naturally with our own simple sensors. Interesting.
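To make that "vision plus memory plus algorithm" idea concrete, here is a minimal, purely hypothetical sketch: objects seen before are stored as feature vectors paired with a grasp that worked, and a new camera observation is matched to the nearest stored object so its remembered grasp can be reused. The feature values and grasp parameters are invented for illustration and are not from the Cornell team's actual system.

```python
import math

# Memory: (feature vector, grasp parameters) pairs.
# All names and numbers are made up for illustration.
grasp_memory = [
    ((0.9, 0.1), {"approach": "top", "squeeze": 0.3}),   # e.g. a spoon
    ((0.2, 0.8), {"approach": "side", "squeeze": 0.7}),  # e.g. a cup
]

def recall_grasp(observed_features):
    """Return the remembered grasp for the most similar stored object."""
    def distance(entry):
        stored_features, _ = entry
        return math.dist(stored_features, observed_features)
    return min(grasp_memory, key=distance)[1]

# A new observation close to the "spoon" features reuses the spoon grasp.
print(recall_grasp((0.85, 0.15)))
```

The point of the sketch is that the "sensor" can stay simple: the intelligence lives in the stored experience and the matching rule, not in the hardware.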
Picking up an object is only part of the problem. The picture shows a gripper spilling a glass of water. After the object is grasped, some purpose must be accomplished. If the water were wine and needed to go from a pitcher into a glass, it would be important not to spill it onto the floor or table, and to keep the robot's 'fingers' out of the wine. While this is an interesting line of research, I can't see it replacing purpose-built grippers yet.
naperlou, not everyone thinks about how a robot would do things they themselves are doing. But that does sound like how engineers think. Thanks for the observation about the lack of sensors here--I think that's a good point, and it's interesting to know this isn't the only research team taking that approach.
Glenn, thanks for that observation about the photo. I should have pointed out in the caption that this universal gripper can pick up objects without the algorithm, but does so in a non-optimal manner, making this a "before" picture.
Ann R Thryft: Yes, optimal vs. non-optimal is the clarification. For some applications the optimal gripper is vacuum cup(s). The human hand is a very versatile end effector; duplicating it is not easy. There could be applications where this gripper would be optimal, but I don't think the water glass is one of them.
Right now this looks like a technology in search of an application. As robots develop, applications will appear. I've seen this notion of robots learning how to do things by trial and error. That's impressive.
The point here is that, with a less expensive universal gripper, such as Cornell's, plus the algorithm the team invented, a robotic assembly line can quickly adapt to optimally picking up all kinds of new objects, with different sizes and shapes, that it has never encountered before. The alternative, which we've heard a lot about in DN articles and comments, is lengthy and expensive programming in 4D, presumably with highly specialized grippers. This would be a big benefit on assembly lines, especially those of EMS providers, whose products change continually.
I have had a bit of experience with assembly lines, and I can't think of any application for this gripper. Printed circuit board assembly needs very fast placement of small parts with vision compensation, or fast, very fine placement of large parts with many leads, again using vision compensation. I have only seen vacuum nozzles used, and I can't see this gripper working in a high-speed vision application. In automotive, speed, accuracy, and payload are all important, and I don't think this gripper has any of the three. Even where I have seen off-line programming using 3-D modeling, an actual human had to step through the program to touch up positions and movements. Robots, aka Flexible Automation, vs. 'hard automation', were the answer to changing products. The gripper, or 'end effector', is always customized to the application. The part must be both 'picked' and 'placed'.
To belabor the point: I don't think this gripper could pick up a 1mm x 2mm chip, take a vision shot, place it into a solder-screened location, and do it again 1/10 second later. I also doubt that it could pick up a 50 lb bag of flour and place it on a pallet.
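The vision-compensation step naperlou describes can be sketched roughly like this: after the nozzle picks a chip, a camera measures how far the part sits off-center and off-angle on the nozzle, and the placement pose is corrected before the part is set down. The function name and all numbers below are illustrative assumptions, not any real machine's API.

```python
import math

def compensate(nominal_x, nominal_y, nominal_theta,
               offset_x, offset_y, offset_theta):
    """Apply measured pick offsets (mm, degrees) to a nominal placement pose.

    The measured translational offset is rotated into the board frame,
    then subtracted so the part lands on the nominal location.
    """
    theta = math.radians(nominal_theta)
    dx = offset_x * math.cos(theta) - offset_y * math.sin(theta)
    dy = offset_x * math.sin(theta) + offset_y * math.cos(theta)
    return (nominal_x - dx, nominal_y - dy, nominal_theta - offset_theta)

# Chip picked 0.05 mm off-center in x and rotated 1.2 degrees:
x, y, t = compensate(10.0, 20.0, 0.0, 0.05, 0.0, 1.2)
print(round(x, 3), round(y, 3), round(t, 3))
```

Doing this correction, plus the camera exposure and image processing, every tenth of a second is what makes the speed requirement so demanding.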
The gripper and the algorithm are interesting research, without a current practical application.
I really liked the article. I don't know if I completely understand the inner workings of the pressure adaptation inside the big blue ball, but the success statistics for picking up parts are pretty cool.
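For anyone else puzzling over the ball's inner workings: granular-jamming grippers of this kind are usually described as pressing the soft, grain-filled membrane onto the part, pulling a vacuum so the grains jam rigid around it, and then releasing with a pulse of positive pressure. The little state sequence below is a hedged sketch of that typical cycle, not the team's actual control code.

```python
def jamming_cycle():
    """Return the grip/release sequence typically described for
    granular-jamming grippers (illustrative, not a real controller)."""
    return [
        ("press", "membrane conforms around the object at ambient pressure"),
        ("evacuate", "vacuum jams the grains; membrane hardens and grips"),
        ("move", "rigid grip holds while the arm transports the part"),
        ("pressurize", "positive pulse unjams the grains and releases the part"),
    ]

for name, effect in jamming_cycle():
    print(f"{name}: {effect}")
```

The neat part is that the same four steps work for almost any shape, because the grains mold to the object before they are locked in place.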