The five most important robotics trends of 2012, like the top five of 2011, will enable volume manufacturing and greater integration of robotics with machine vision and automated systems. Some trends discussed in the slideshow below outline very targeted applications. Yet, once again, the developments in each are relevant to other, often very different, types of applications that concern both robot design and the design of the systems in which robots work.
Click the image below to start the slideshow:
Just like people, robots do things better with two hands. More dexterous robots will be valuable in several applications, from surgery to materials handling, or even picking up samples as they walk across the surface of Mars. A step -- perhaps a grasp -- in the right direction is the small robot with two arms, two hands, and opposable thumbs described in Dual-Armed Robot Making In-Roads.
At Automate 2011, the SDA5D lifted spherical objects from a nearby table. It's being adopted in industrial applications from logistics and palletizing to automated assembly and distribution. A larger model is deployed in automotive assembly plants and by the National Aeronautics and Space Administration (NASA) for space simulation operations.
Very informative wrap-up on what to watch for in terms of robotics trends in 2012. I would definitely agree with the last point--the idea that the software needs to catch up with--and is catching up with--the hardware. Just like all the embedded software being added to cars to enable all the new gadgetry, I imagine it will be the software that will ultimately drive the utility of these new robots, especially the ability for manufacturing engineers to more easily configure and program the robots to do their stuff.
In comments on an earlier robotics article, one engineer told us that programming by hand was excruciating. So the point-and-click interface described in the article I linked to was definitely an improvement. But the big problem it solved, along with the entire package, was making it easier to program smaller robots in smaller cells doing fewer, lower-volume jobs.
Nice slide show, Ann. I love to look at pictures of robots. As for the software, I was under the impression there are fewer and fewer instances where robots need to be programmed by hand. Maybe I'm wrong about this, but I thought more of the robots were now plug and play -- or at least as plug-and-play as possible.
Thanks, Rob. For industrial use, whether it's welding, assembly, or some other function, robots have to be programmed, since their complex movements must interact with other machines in 4-D. That said, the programming itself can be either hands-on code crunching or a simpler point-and-click GUI, which is one of the big changes in the ABB story I linked to below.
Yes, Ann, I remember that article, including the bit about software for non-programmers. I'm seeing that more and more with sensors and other devices in automation. The complex programming is pre-packaged, and all the control staff has to do is point and click. Maybe it's not quite that simple, but original programming is no longer a must.
I wonder if we are starting to see more applications for two-armed robots. I know that single-armed robots can't do some simple operations, such as lifting and manipulating non-rigid objects. Is the manufacturing world starting to find applications for these two-armed units?
Good point, Chuck. It's interesting to observe that two-armed robots are in a way a mashup of industrial robots and the newer humanoid robots you explored so well in your piece, "Humanoid Robots Take Shape."
Chuck, the only places I know for sure where two-armed robots are being used are automotive assembly plants and in aerospace by NASA for space simulation operations. The company says its two-armed SDA5D is being adopted in all kinds of industrial applications, from automated assembly and distribution to logistics and palletizing. I'll bet surgery might be a big app, too.
The most interesting part of the application of two-armed robots will undoubtedly be the programming, even more so if they are programmed point-by-point from a pendant in the manner of one-armed robots. Synchronizing the motions of two arms will add a whole additional dimension to the task. Of course, there may be programming methods available that take that into account, which would be a valuable addition. I certainly hope that robot programming has advanced past the manual point-by-point path entry that I had to use, which was "a few years back". I have not seen any description of other programming methods mentioned in any detail in any Design News writeups, so I wonder what does exist currently.
William, thanks for the points on the programming aspects of two-armed robots. The synchronization problems to solve will be pretty complex. Perhaps we'll get some comments from those with experience in that area.
And Rob, I've noticed a similar trend in machine vision--more point-and-click interfaces where operators can select pre-determined functions.
That's a great point, Bill, about the programming of two-armed robots constituting a big challenge. Indeed, it's the only programming exercise I can think of that rivals real-time programming. The solution is somewhat similar (except that in the case of RTOSes the timing is handled implicitly, though you have to test explicitly for the ability to respond to real-time interrupts). Anyway, for two-armed robot programming, I think what the programmer needs to do is set up a timing diagram prior to programming, and then verify both the accuracy of this model and compliance with it throughout all stages, including programming, test, and integration.
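The timing-diagram idea above could be checked mechanically before anything runs. Here is a minimal Python sketch (all names, move tuples, and sync labels are invented for illustration, not any vendor's API): each arm's program is a list of timed moves and labeled rendezvous points, and the checker flags any rendezvous the two arms would reach at different times.

```python
# Hypothetical sketch of verifying two arm programs against a shared
# timing diagram. Step formats and labels are invented for illustration.

def sync_points(program):
    """Return {label: cumulative_time} for each sync step in a program.

    A program is a list of steps:
      ("move", duration_s)  -- an ordinary timed motion
      ("sync", label)       -- a rendezvous point both arms must reach
    """
    t, points = 0.0, {}
    for step in program:
        if step[0] == "move":
            t += step[1]
        else:  # ("sync", label)
            points[step[1]] = t
    return points

def check_sync(left, right, tolerance_s=0.05):
    """Report sync labels where the two arms arrive at different times."""
    lp, rp = sync_points(left), sync_points(right)
    mismatches = {}
    for label in lp.keys() & rp.keys():
        skew = abs(lp[label] - rp[label])
        if skew > tolerance_s:
            mismatches[label] = skew
    return mismatches

left_arm = [("move", 1.0), ("sync", "grasp"), ("move", 2.0), ("sync", "place")]
right_arm = [("move", 1.0), ("sync", "grasp"), ("move", 2.5), ("sync", "place")]
print(check_sync(left_arm, right_arm))  # the "place" rendezvous is off by 0.5 s
```

A real dual-arm controller would enforce the rendezvous at run time as well, but a pre-check like this catches timing-model errors at the programming stage, which is where the comment above suggests they belong.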
I don't know what sort of programming in a machine control system could be reduced to point and click, unless it would be the creation of the operator interface portion. Machine controls are mostly about "when this and this and that, then do this, unless those," and that is about as simple as logic can get. Of course each different controller (PLC) has a different dialect, as it were, but many of them are close enough that picking up another one would take less than an hour. Some systems, such as Siemens, are totally different and have no similarity to the other languages, which makes choosing them a very large commitment, in that the new programming language is completely different in grammar, syntax, and spelling.
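The "when this and this and that, then do this, unless those" rung logic described above can be rendered in any language; here is a toy Python version with made-up signal names (a real PLC would express the same thing in ladder logic or structured text):

```python
# Toy rendition of PLC permissive logic: run only when all permissives
# are true, unless an overriding condition blocks it. Signal names are
# hypothetical.

def conveyor_should_run(part_present, guard_closed, motor_ready, estop_pressed):
    # when part_present AND guard_closed AND motor_ready ... unless e-stop
    return part_present and guard_closed and motor_ready and not estop_pressed

print(conveyor_should_run(True, True, True, False))  # True: all permissives met
print(conveyor_should_run(True, True, True, True))   # False: e-stop overrides
```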
Robot programming as "point and click" is even harder to imagine, at least as far as Nachi and Motoman robots are concerned. But there may be something new that I am not aware of. Robot programs are mostly moves from point to point, with each point being described by three coordinates and three angles, at least in rectangular-format programming. The alternative is to set up a value for each robot axis, recalling that there are six non-orthogonal axes. That method could easily become quite tedious, it would seem.
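The two ways of storing a taught point described above (a rectangular pose of three coordinates plus three angles, versus one value per axis) can be sketched as two small data records. This is a Python illustration only; field names and units are assumptions, not any controller's format:

```python
# Sketch of the two taught-point representations: Cartesian pose vs.
# joint-space record for a six-axis arm. Names and units are invented.
from dataclasses import dataclass

@dataclass
class CartesianPoint:
    x: float; y: float; z: float     # position, mm
    rx: float; ry: float; rz: float  # orientation angles, degrees

@dataclass
class JointPoint:
    axes: tuple  # six joint angles, degrees, one per (non-orthogonal) axis
    def __post_init__(self):
        if len(self.axes) != 6:
            raise ValueError("expected six axis values")

p = CartesianPoint(350.0, -120.0, 80.0, 0.0, 180.0, 45.0)
j = JointPoint((0.0, -45.0, 90.0, 0.0, 45.0, 0.0))
```

The tedium the comment mentions comes from entering six numbers per point either way; the Cartesian form at least reads in terms of where the tool tip is, while the joint form requires knowing where every axis is.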
Alex, thanks for the programming feedback on two-armed robots. I would imagine it must be similar to programming any real-time system, such as machine vision, except probably a lot more complicated than MV. William, the point-and-click reference is to my story on ABB's smaller robot packages with simplified programming interfaces:
Ann, I am trying to imagine what part of robot programming point and click would work for, and I can see that I am going to have to chase that subject quite a bit more in order to see what new things are being used. The point by point programming could be called real slow time, since the motion is usually much slower than normal operation.
Of course, for anything beyond the very simplest program we always need a sequence-of-motions chart, which not only defines all of the moves but also lists all of the qualifying conditions, both for the move to begin and for when the move is done. That allows us to verify that one thing is complete before starting the next thing. When things must happen at the same time it becomes more complex, particularly if they must be synchronized. Of course a robot controller already does that, in that six axes may move in unison to carry the arm smoothly from one position to another. Consider the math required to make the six non-orthogonal axes work that well.
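The sequence-of-motions chart idea can be made executable: each step carries the condition that must hold before it starts and the condition that proves it finished, so nothing begins until the prior step is verifiably complete. A minimal Python sketch, with invented step names and state flags:

```python
# Sketch of a sequence-of-motions chart as code: every step has a
# precondition (qualifying condition to begin) and a done check
# (qualifying condition that the move completed). All names invented.

def run_sequence(steps, state):
    """steps: list of (name, precondition, action, done_check).
    precondition/done_check are functions of the state dict."""
    for name, pre, action, done in steps:
        if not pre(state):
            raise RuntimeError(f"step '{name}': precondition not met")
        action(state)
        if not done(state):
            raise RuntimeError(f"step '{name}': did not complete")
    return state

steps = [
    ("close_gripper",
     lambda s: s["at_pick"],              # must be at pick position
     lambda s: s.update(gripped=True),
     lambda s: s["gripped"]),
    ("lift",
     lambda s: s["gripped"],              # never lift before grip confirmed
     lambda s: s.update(at_pick=False, lifted=True),
     lambda s: s["lifted"]),
]
final = run_sequence(steps, {"at_pick": True, "gripped": False, "lifted": False})
print(final["lifted"])  # True
```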
To make things work along with a robot move we can put in an intermediate point where an I/O point is switched on during a move, as the motion passes a specific point. That function is not new, but it certainly can be very useful.
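The mid-move I/O trigger just described can be illustrated with a toy interpolated move: once the motion passes a set fraction of the path, an output flips on. This is a stand-in only; a real controller's fly-by output function works on taught points, not the hypothetical fraction used here:

```python
# Toy illustration of switching an output as the motion passes a
# trigger point mid-move. The linear interpolation and io dict stand in
# for a real controller's fly-by output function.

def move_with_trigger(start, end, trigger_fraction, io, output_name, samples=10):
    """Interpolate from start to end; set io[output_name] = True once
    the motion passes trigger_fraction of the path (0.0 .. 1.0)."""
    path = []
    for i in range(samples + 1):
        f = i / samples
        pos = tuple(a + f * (b - a) for a, b in zip(start, end))
        if f >= trigger_fraction and not io.get(output_name):
            io[output_name] = True  # e.g. start a glue gun mid-move
        path.append(pos)
    return path

io = {}
path = move_with_trigger((0, 0, 0), (100, 0, 0), 0.4, io, "glue_on")
print(io["glue_on"])  # True: output fired partway through the move
```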
William, those are good questions. Please do let us know what you find out about, such as which robotic functions/programming steps have become objects or modules, or automated in some way. If it's anything like machine vision, my guess is that those are low-level function clusters of some kind. Or perhaps it's something entirely different.
Thanks, William. Let us know what you find out. Meanwhile, I checked ABB's website, and I found two things that may be relevant. First, the RAPID programming language is mentioned in the press release discussing the controller used for the package described in the Little Robots story that mentions point and click programming. Second, the software itself, RobotStudio for the PC, mentioned to me during the interview for that story, is described on their website as using simulation for offline programming: http://www.abb.com/product/seitp327/78fb236cae7e605dc1256f1e002a892c.aspx?productLanguage=us&country=US
@Ann, I did visit the ABB site and read through a large portion of the programming manual. What they are describing is offline programming, in which the programmer first builds a virtual robot cell and then puts in a virtual robot with virtual tools. The tricky part that I see with that is building the virtual work cell.
What becomes clear is that offline programming requires very accurate dimensions and spatial reference information about the elements in the work cell. Of course it should be possible to do that for a cell if nothing can move relative to anything else. Now I understand how offline programming is done: it would be similar to real-time programming, except that things would not break, and it would require some very good visualization skills. And I can see where point and click would fit in.
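The dependence on accurate cell dimensions that William describes can be shown with a very small stand-in: model the fixed cell elements with explicit geometry, then check a taught point for reach and collision before any code touches a real robot. All numbers and names below are hypothetical, and a real package like RobotStudio does vastly more than this sketch:

```python
# Tiny stand-in for offline-programming checks: is a taught point
# within reach, and does it sit inside a modelled obstacle? Cell
# geometry, reach, and names are invented for illustration.
import math

ROBOT_BASE = (0.0, 0.0, 0.0)
REACH_MM = 900.0
# Fixed obstacles modelled as axis-aligned boxes: (min_xyz, max_xyz)
OBSTACLES = {"fixture": ((300, -50, 0), (500, 50, 200))}

def in_reach(point):
    return math.dist(ROBOT_BASE, point) <= REACH_MM

def collides(point):
    return any(all(lo <= c <= hi for c, lo, hi in zip(point, bmin, bmax))
               for bmin, bmax in OBSTACLES.values())

target = (400.0, 0.0, 100.0)
print(in_reach(target), collides(target))  # reachable, but inside the fixture
```

The point of the exercise is the one William makes: if the modelled fixture is even slightly off from the real one, these checks pass in simulation and fail on the floor, which is why the virtual cell must be dimensionally accurate.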
William, thanks for taking the time to check out what kind of programming is actually being discussed. When I saw the mentions of simulation and virtual robot cells, it looked like offline programming to me, but I was not about to jump to that conclusion. My understanding of this whole shift to point and click, which I've also encountered in machine vision, is that it's aimed at simplifying programming so that operators can do it instead of programmers, to save money. Obviously, this can only be aimed at less complex tasks that can be modularized in some way.