How an Autonomous Drone Flies With Deep Learning

A team of engineers at Nvidia share how they created a drone capable of fully autonomous flight in the forest.

Autonomous cars haven't even fully hit the roads yet, and companies are already touting the potential benefits of autonomous drones in the sky – from package delivery and industrial inspection, all the way to modern warfare. But a drone presents challenges beyond those of a car. While a self-driving car or land-based autonomous robot at least has the ground underneath it to use as a baseline, a drone moves through a full 360-degree space and must avoid all of the obstacles and pitfalls that come with it.

Speaking at the 2017 GPU Technology Conference (GTC), a team of Nvidia engineers said they believe the solution to fully autonomous drones lies in deep learning. Their research has already yielded a fully autonomous flight along a 1 km forest trail at 3 m/s, the first flight of its kind according to Nvidia.

“We decided to pick the forest because it's the most complex use case and it applies to search and rescue and military applications,” Nikolai Smolyanskiy, a principal software engineer at Nvidia, told the GTC audience. With their challenging lighting, constantly shifting environments, and heavy occlusion, forests are an absolute nightmare for autonomous flight. Smolyanskiy and his team reason that if they can get an autonomous drone to fly through a forest, they can get one to fly almost anywhere.

The drone was a commercially available 3DR Iris+ quadcopter modified with a 3D-printed mount on its underside to hold an Nvidia Jetson TX1 development board, which handled all of the computation. The board was connected to a small, downward-facing PX4Flow smart camera used in conjunction with a lidar sensor for visual-inertial stabilization.

Over the course of nine months of testing, the team initially tried to use GPS navigation to guide the drone, but quickly discovered it was prone to crashes. It also didn't solve the larger question of how you could deploy these drones in remote areas where GPS might not be available (such as in search and rescue applications). “In areas where GPS is not available, you need to navigate visually,” Smolyanskiy said.

From left: Nvidia engineers Nikolai Smolyanskiy, Alexey Kamenev, and Jeffrey Smith discuss their autonomous drone project at the 2017 GPU Technology Conference. (Image source: Design News)

The Nvidia team opted to solve the problem with a deep neural network (DNN) they called TrailNet, training it to predict the drone's orientation and lateral offset relative to the trail so it could follow a path through the forest. For obstacle avoidance they employed Simultaneous Localization and Mapping (SLAM), the same algorithmic technology being used to help autonomous cars avoid collisions. SLAM gives the drone a sense of itself in 3D physical space as well as where it is in relation to the obstacles around it (in this case trees, branches, and other foliage).
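The talk did not spell out the control law, but the general idea of turning orientation and lateral-offset predictions into a steering command can be sketched. The snippet below is a minimal illustration, assuming the network emits softmax probabilities over three orientation classes and three lateral-offset classes; the function name, class ordering, sign convention, and gain values are illustrative assumptions, not the team's actual controller.

```python
import numpy as np

def steering_from_trailnet(p_view, p_offset, beta_view=0.5, beta_offset=0.5):
    """Blend TrailNet-style class probabilities into one steering value.

    p_view:   softmax over (rotated left, facing center, rotated right)
    p_offset: softmax over (offset left, centered, offset right)
    Positive output = yaw right; the beta gains are hypothetical tuning knobs.
    """
    # If the drone is rotated left of the trail direction, yaw right.
    view_correction = p_view[0] - p_view[2]
    # If the drone has drifted left of the trail center, also steer right.
    offset_correction = p_offset[0] - p_offset[2]
    return beta_view * view_correction + beta_offset * offset_correction

# Drone slightly rotated left and drifted left -> positive value (steer right)
print(steering_from_trailnet(np.array([0.6, 0.3, 0.1]),
                             np.array([0.5, 0.4, 0.1])))
```

The appeal of this formulation is that the network never has to regress an exact heading; it only has to classify which way the trail lies, and the controller smoothly blends those beliefs into a correction.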

“We tried several neural network topologies. So far we've found that the best performing is based on S-ResNet-18 with some modifications,” Alexey Kamenev, a senior deep learning and computer vision engineer at Nvidia, told the GTC audience.
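Kamenev did not detail those modifications, and the team's actual software stack is not specified in the talk, but a ResNet-18 backbone repurposed for this task is easy to sketch. Below is a hypothetical PyTorch version for illustration only: a stock torchvision ResNet-18 (torchvision 0.13 or later) whose final fully connected layer is replaced to emit the six class scores described above, three for orientation and three for lateral offset.

```python
import torch
import torch.nn as nn
from torchvision import models

class TrailNetSketch(nn.Module):
    """Hypothetical TrailNet-like model: ResNet-18 backbone, 6-way head."""

    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)  # train from scratch
        # Replace the 1000-class ImageNet head with 3 orientation
        # classes plus 3 lateral-offset classes.
        backbone.fc = nn.Linear(backbone.fc.in_features, 6)
        self.backbone = backbone

    def forward(self, x):
        logits = self.backbone(x)                      # shape (N, 6)
        view = torch.softmax(logits[:, :3], dim=1)     # rotated left/center/right
        offset = torch.softmax(logits[:, 3:], dim=1)   # offset left/center/right
        return view, offset

# Run one dummy RGB frame through the network (frame size is arbitrary here).
model = TrailNetSketch().eval()
with torch.no_grad():
    view, offset = model(torch.rand(1, 3, 180, 320))
print(view, offset)
```

A shallow backbone like ResNet-18 is a sensible fit for this kind of project, since inference has to run onboard an embedded module like the Jetson TX1 rather than on datacenter hardware.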
