With Isaac, Nvidia Trains Robots in Virtual Environments

It's like training robots using The Matrix. Nvidia has announced Isaac, a virtual simulator that it says will allow robots to learn significantly faster than in the physical world.

Remember how the heroes in The Matrix were able to learn new skills almost instantly by training in a virtual environment? It might not be a realistic way for humans to learn (yet), but Nvidia believes it can already be done for robots.

On Wednesday, during his keynote at Nvidia's GPU Technology Conference (GTC), Nvidia CEO Jensen Huang called for an “alternate universe” where robots could learn without being beholden to the laws of time. “This universe needs to train at warp speed,” he said.

The virtual environment, which Nvidia is calling Isaac (named after Newton and Asimov, according to Huang), is an alternative to real-world reinforcement learning, in which robots and AI learn the same way humans do – through trial and error. At the low end, reinforcement learning can teach a robot a simple assembly task; at the high end, it produces an AI that can beat champion players at Go.
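For readers unfamiliar with the technique, the short sketch below shows the bare trial-and-error loop at the heart of reinforcement learning: an agent tries actions, observes rewards, and gradually updates its estimate of what works. The toy task, the Q-learning update rule, and every parameter here are illustrative choices, not anything drawn from Nvidia's tooling.

```python
# Minimal sketch of trial-and-error (reinforcement) learning: a tabular
# Q-learning agent on a toy 1-D "reach the goal" task. The task and all
# parameters are illustrative, not part of Nvidia's Isaac tooling.
import random

N_STATES = 5          # positions 0..4; the goal is state 4
ACTIONS = [-1, +1]    # step left or right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

# Q-table: estimated return for each (state, action) pair
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Explore occasionally, otherwise act on what has been learned so far
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])

        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0

        # Update the estimate from the observed outcome (the "error" in trial and error)
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state

print("Learned action in each state:",
      [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)])
```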

Isaac is able to run multiple simulations of a task simultaneously and build upon the smartest virtual brain. (Image source: Nvidia)

The problem with doing reinforcement learning in the physical world is that it can be time consuming and expensive to wait while a machine figures out how to do a task perfectly. It can also be dangerous when larger robots are involved. By doing all of the training virtually, Nvidia says, robotics companies can save significant amounts of development time. “The virtual brain gets trained, then transferred into a real robot, and the robot does its last bit of adaptation in the physical world,” Huang said. Essentially, the robot would be ready to go about its task immediately after installation.
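As a rough illustration of that two-phase workflow, the sketch below (written against PyTorch purely as a stand-in framework) trains a small policy network on simulated experience, saves it, and then runs a much shorter adaptation pass that would correspond to the robot's final tuning on real hardware. The network, data sources, filenames, and hyperparameters are all hypothetical.

```python
# Sketch of the train-in-simulation, fine-tune-on-hardware workflow Huang
# describes, using PyTorch as a stand-in framework. The data loaders and
# network are hypothetical; the point is only the two-phase structure.
import torch
import torch.nn as nn

policy = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 2))
loss_fn = nn.MSELoss()

def train(model, batches, lr, steps):
    """Run a few supervised-style updates on whatever data source is given."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        obs, target_action = next(batches)
        loss = loss_fn(model(obs), target_action)
        opt.zero_grad()
        loss.backward()
        opt.step()

def fake_batches():
    """Stand-in for simulated (or later, real-robot) experience."""
    while True:
        yield torch.randn(32, 8), torch.randn(32, 2)

# Phase 1: long training run in the virtual environment
train(policy, fake_batches(), lr=1e-3, steps=1000)
torch.save(policy.state_dict(), "policy_from_sim.pt")

# Phase 2: short adaptation pass once the policy is on the physical robot
policy.load_state_dict(torch.load("policy_from_sim.pt"))
train(policy, fake_batches(), lr=1e-4, steps=50)   # smaller learning rate, fewer steps
```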

As a major player in the computer graphics card market, Nvidia is no stranger to creating high-end simulations with real-world physics. In fact, Isaac will leverage a modified version of the popular Unreal Engine 4 (the engine behind video games such as Gears of War, Street Fighter V, and Robo Recall) to create its virtual environments. Bob Pette, VP of Visualizations at Nvidia, told Design News that part of the advantage of using Unreal is that developers familiar with the engine may also be able to modify it and create their own environments the same way that video game creators do. The system will also interface with OpenAI Gym, an open-source toolkit of reinforcement learning algorithms, giving developers access to a library of already-existing training tools.
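For a sense of what the OpenAI Gym interface looks like on its own, here is a minimal loop against one of Gym's built-in environments, using the classic Gym API of the time (reset returning an observation; step returning observation, reward, done, and info). How Isaac actually plugs into Gym is not something Nvidia has detailed, so the random policy below is only a placeholder.

```python
# Minimal OpenAI Gym loop using the classic API of this era:
# env.reset() -> observation, env.step(action) -> (obs, reward, done, info).
# The environment and random policy are illustrative; Isaac's own hooks
# into Gym are not described in the article.
import gym

env = gym.make("CartPole-v1")

for episode in range(3):
    observation = env.reset()
    total_reward, done = 0.0, False
    while not done:
        action = env.action_space.sample()   # random policy stand-in
        observation, reward, done, info = env.step(action)
        total_reward += reward
    print(f"episode {episode}: reward = {total_reward}")

env.close()
```

Swapping the random policy for a learned one is the point where a reinforcement learning algorithm, or a simulator feeding it experience, would slot in.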

A demo video shown during the keynote featured a robot learning how to hit a hockey puck into a goal and how to sink a golf putt. Huang said the system can even run multiple simulations at once, take the smartest one, replicate it, and create a new test group, iterating continuously and cutting the overall learning time by many factors. And because it is built on deep learning, there is no programming involved: “the system tries until it figures out the task,” Huang said.
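The “take the smartest one and replicate it” loop Huang describes resembles a simple population-based search. The sketch below shows that pattern in generic form, with a made-up scoring function standing in for a full physics simulation; nothing here reflects Isaac's actual internals.

```python
# Generic sketch of the "run many copies, keep the best, replicate, repeat"
# loop described in the keynote. The objective function is a stand-in for
# a full physics simulation; nothing here reflects Isaac's internals.
import random

def simulate(params):
    """Hypothetical fitness: how close the parameters get to a target skill."""
    target = [0.3, -0.7, 0.5]
    return -sum((p - t) ** 2 for p, t in zip(params, target))

POP_SIZE, NOISE = 16, 0.1
best = [random.uniform(-1, 1) for _ in range(3)]

for generation in range(50):
    # Spawn a population of perturbed copies of the current best "brain"
    population = [
        [p + random.gauss(0, NOISE) for p in best]
        for _ in range(POP_SIZE)
    ]
    # Evaluate all of them (in Isaac, these runs would happen in parallel simulation)
    best = max(population + [best], key=simulate)

print("best parameters found:", best)
```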

While a simple hockey demo is nice, Jesse Clayton, Senior Manager of Product
