Researchers in the University of Arizona's Department of Electrical and Computer Engineering have developed a pair of robotic legs that walk with a biomechanically accurate gait. Modeled after human walking mechanisms, the legs and an attached pelvis are part of an effort to create human-like service robots.
The robot's movements emulate the neuromuscular architecture of human walking, which arises from interactions among the musculoskeletal system, the nervous system, and the environment. The robot emulates those three elements with its mechanics, a central pattern generator (CPG) that serves as its neural network, and feedback from sensors.
The researchers, Anthony Lewis and Theresa Klein, say that combining all three elements makes up a complete physical model of the human walking system, making it much more accurate. In an article published in the Journal of Neural Engineering, they claim: "We believe that this is the first robot which fully models walking in a biologically accurate manner." Another walking robot, Boston Dynamics' Petman, has learned how to walk up stairs.
The robot uses artificial leg muscles -- Kevlar straps that move up and down as actuators -- mimicking the natural agonist/antagonist muscle action of human legs. Each muscle consists of a servo motor attached to a bracket; the motor rotates to pull on the strap, emulating muscle contraction. The Golgi tendon organs of human legs are modeled by load sensors in the straps, while load sensors in the feet help a computer adjust the half-size legs' motion to the surface they are walking on. The CPG is a half-center oscillator plus phase-modulated reflexes, simulated with a spiking neural network. The robot incorporates "positive force feedback from load sensors as well as other afferent signals to entrain the CPG and drive the step cycle." This neural architecture is enough to produce a propulsive, stabilized walking pattern.
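The half-center idea can be illustrated with a classic rate-based model (the Matsuoka oscillator), rather than the spiking network the researchers actually used. In this sketch, two simulated neurons inhibit each other, and an adaptation term makes the active side fatigue, so activity alternates rhythmically, like the flexor/extensor alternation of a step cycle. All parameter values below are illustrative assumptions.

```python
# Minimal half-center oscillator (Matsuoka model) -- an illustrative
# sketch, NOT the researchers' spiking-network controller.
# Two neurons mutually inhibit; adaptation fatigues the active side,
# producing alternating flexor/extensor bursts.

def step(state, dt=0.01, tau=0.25, tau_a=0.5, beta=2.5, w=2.5, drive=1.0):
    x1, x2, a1, a2 = state                  # membrane states, adaptation levels
    y1, y2 = max(0.0, x1), max(0.0, x2)     # rectified firing rates
    dx1 = (-x1 - w * y2 - beta * a1 + drive) / tau   # inhibited by neuron 2
    dx2 = (-x2 - w * y1 - beta * a2 + drive) / tau   # inhibited by neuron 1
    da1 = (y1 - a1) / tau_a                 # self-adaptation (fatigue)
    da2 = (y2 - a2) / tau_a
    return (x1 + dt * dx1, x2 + dt * dx2, a1 + dt * da1, a2 + dt * da2)

state = (0.1, 0.0, 0.0, 0.0)                # small asymmetry breaks the tie
signal = []                                 # flexor minus extensor activity
for _ in range(3000):                       # 30 s of simulated time
    state = step(state)
    signal.append(max(0.0, state[0]) - max(0.0, state[1]))
```

In the robot described above, afferent signals such as load-sensor feedback would additionally be injected into the oscillator (e.g., modulating the drive or phase) to entrain the rhythm to the actual step cycle; that coupling is omitted here for brevity.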
This research serves two purposes: using biology as an inspiration for robotics, and using robotics as a method for investigating biological systems. This type of robotics research is sometimes called "soft" robotics. It aims at developing humanoid service robots for use around people, such as the elderly, instead of robots used in an industrial context. For example, the legs developed by Lewis and Klein are constructed so that they give somewhat when pushed, like organic legs, instead of being rigid and inflexible like an industrial robot.
Although the research is basic and aimed at robotics, it could also help people with spinal cord injuries learn to walk again, by giving medical professionals a deeper understanding of the biomechanics of human walking.
What I'm thinking, Ann, is that out in the real world, you would want to avoid all of the wires connected to this device. I would think wireless connectivity could free up the device for greater flexibility. I know that can be an issue, as with the fire fighting robot on the ship, where you needed the power tether even though the wire could inhibit movement.
That's a great video, TJ. It says a lot about the difficulty of walking. However, we have the advantage of sight (most of us do) when we walk, so we can make adjustments for uneven surfaces because we can see them. Adding sight to the process makes walking that much more complex to replicate.
When I watch the U of Michigan video, I see what looks like a rigid, fragile leg easily getting broken. Considering how much research has gone into reproducing human (and other critters') gaits, I'm surprised this team's research is still at such a basic level.
The MIT robotic legs seem much more sophisticated. But when it comes to tripping, the challenge may be as great for the MIT legs as it is for the UofM legs. Ultimately, some sort of vision needs to accompany the leg movement. I think we're finding out just how sophisticated our natural world is. I was astounded watching my kids when they were little. They didn't have to be taught how to walk, just encouraged. Their legs knew exactly what to do.
Rob, that's a really good point. We featured crawling robots in the Bugs and Worms robot slideshow:
and some of them, as well as other, snakelike robots, do workarounds and learn. I think the problem with the legged versions is that they're more likely to tip over because of a much higher center of gravity.
Yes, the crawlers may make more sense when it comes to movement. I still keep thinking there is a bias toward robots with human attributes -- like legs. Replicating human movement may not make the most sense.
I think it's an understandable bias, Ann. If we're trying to get a machine to replicate human movement -- as with the fire fighting robot -- it makes sense that we work with the solutions we already know and understand, our own movements.
I wasn't visualizing the robot walking in the real world just yet, since this is still very much an R&D project. But remote control makes a lot of sense. Most mobile robots are either remote controlled or autonomous, so no wires either way. I was surprised the firefighting ship robot had wires, but maybe that had to do with its size. Maybe the Navy should talk to the Army or DARPA, which have both solved the wires problem already.
That makes sense, Ann. On the Navy robot, there was a comment about the wire. It had to do with the distance the robot could travel (and the obstacles it would move through) while still receiving power. The person who commented suggested that even with the power cord, the robot would have greater ability to move than with a wireless system.
Rob, the problem I'm having with that explanation is that the Army's BEAR is autonomous and can lift 500 lbs and, I believe, go a lot farther than a shipboard robot. So why can't the Navy's 'bot work by remote control?
Ann, I found the comment about the power cord. It was from GlennA:
Rob Spiegel; I agree that a tether could be a serious restriction. But if the battery pack is only good for 1/2 hour or so, and it only carried 25 to 50 lbs or so of fire extinguisher, it is really worth the cost to develop ? If this robot can drag a fire hose behind it, it should be able to drag a tether also. Someone is doing the cost justification between an autonomous unit vs. a tethered tele-operated unit. And they may decide to build both types for further evaluation, or for different applications. Or they may continue with a tethered unit (as it is now) until the battery pack version is viable.
I guess in a confined space like onboard a ship the tether isn't such a big deal, compared to the BEAR which has to roam all over a messy post-disaster scene. But, as I noted, BEAR can lift and carry 500 lbs without a tether, so it must have some awesome hardware, including batteries.
Yes, it sounds like a much different robot from the Navy firefighter. GlennA has a good point about the Navy robot in that it has to carry a fire hose. As long as it's carrying the fire hose, a power cord is not an additional hindrance.
The Navy robot probably has a Human/Machine Interface, since once the robot has entered the fire zone, the decision process would be handled by the human supervisor. Since the robot is working locally, it makes sense to have an umbilical cord carrying all the relevant data back to a central control... a fully autonomous robot is still down the pike a bit.
ScotCan, can you define what you mean by "fully autonomous"? Autonomous robots already exist. Some of them have the option of being controlled remotely, and many can send back data to a remote human, using various forms of communication.
It depends on the interpretation of autonomous. My understanding is that fully autonomous means a robot's capacity to learn from its environment and act accordingly. The human factor processes much more information than any computer because of the wide-ranging human response to the environment... most times to the benefit of circumstances, but sometimes in error. That's why HMI makes more sense than attempting to build fully autonomous robots: use HMI to confirm the robot's feedback and implement corrective action as a Human/Machine team, rather than going fully automated. As for robots that send back info from far away (e.g., the drones), there's a time lapse in there that could affect decisions adversely. The physical distance between the controller and the robot needs to be reduced, since there are at least 8 time-dependent signal "journeys" between sending info, receiving it, deciding a course of action, transmitting it back, and the robot getting the instruction to trigger the action. By that time, however small, the circumstances could have changed, even in the case of a drone that has locked on to a specific target.
Thanks for that definition, ScotCan. If that's the accepted definition of "fully autonomous," we're definitely not there yet in robotics. I agree, HMI plus partially autonomous robots makes a lot more sense.
Looks to me like someone in robotics finally figured out that the simple act of a human walking involves more than just the legs -- torso twisting, arms swinging, and occasionally a hand reaching out to a rail or other nearby object for stability. What looks so simple is impressively complex.
That robot running at 28 mph was very impressive. It was more like a gallop than a run, though. Not quite horse style yet, and it looks like "horse style" would have a better ability to balance. A robot as stable as a horse would certainly have a whole lot of applications. It might even be useful in getting around city traffic jams. And the military uses would be totally demoralizing to the enemy. Just imagine, if the robots were dressed as soldiers, running in a charge, firing automatic weapons with both "hands". That would make almost everybody drop and run.