Design News is part of the Informa Markets Division of Informa PLC


The Autonomous Car’s Big Challenge: Using the Hyperscale Server Fleet to Train AI Neural Networks

DesignCon 2019 keynoter Gloria Lau, Uber
Gloria Lau, head of hardware engineering for Uber Technologies, Inc., explains how hyperscale hardware technology accelerates the training of the neural networks behind self-driving vehicles.

The world knows that artificial intelligence (AI) plays a big role in autonomous cars. At the upcoming DesignCon 2019 conference, keynoter Gloria Lau will explain that artificial intelligence consists of two aspects: inference and training. Inference enables self-driving vehicles to interpret and react to sensory data. Training the AI model, by contrast, is computationally intensive because the neural networks must learn from very large data sets. The complexity of these networks requires hyperscale hardware technology to accelerate the training.

“As the amount of data explodes, the current machine learning techniques are inadequate. This is where deep learning is needed,” said Lau, head of hardware engineering for Uber Technologies, Inc.

At DesignCon 2019, Gloria Lau will talk about Uber’s use of hyperscale server fleets to train AI neural networks. (Image source: Gloria Lau)

After the AI models are coded and trained at the data center, the trained AI model enables autonomous vehicles to make inferences about the world around them. “AI allows the vehicles to see where they’re going, define the best route to take, determine how to react to pedestrians, watch for road signs, and recognize obstacles,” Lau told us. “It involves computer vision, classification of objects, and much more.”

The creation of that “trained AI model,” however, is one of the great, unappreciated stories underlying the development of autonomous vehicles. “Uber is at a turning point in the evolution of deep learning. The complexity of neural networks requires hyperscale hardware technology to accelerate the training. Here, you have challenges where the data storage and AI compute servers could be spread across the country at different data center locations,” Lau said. “Uber needs to define how distributed parallel training will work in this hyperscale environment.”
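The "distributed parallel training" Lau mentions is often implemented as data parallelism: each server holds a shard of the data set, computes gradients locally, and the gradients are averaged (an all-reduce) before every weight update. The sketch below illustrates that idea in plain NumPy with simulated workers and a toy linear model; it is an assumption-laden illustration of the general technique, not Uber's actual training stack.

```python
# Minimal data-parallel training sketch: four simulated "workers" each
# compute a gradient on their own data shard; averaging the gradients
# (all-reduce style) yields the same update a single machine would make.
import numpy as np

def gradient(w, X, y):
    # Gradient of mean squared error for a linear model y ≈ X @ w.
    return 2 * X.T @ (X @ w - y) / len(y)

def data_parallel_step(w, shards, lr=0.1):
    grads = [gradient(w, X, y) for X, y in shards]  # local compute
    avg_grad = np.mean(grads, axis=0)               # all-reduce (average)
    return w - lr * avg_grad                        # synchronized update

rng = np.random.default_rng(0)
true_w = np.array([2.0, -3.0])
X = rng.normal(size=(400, 2))
y = X @ true_w                       # noiseless toy targets

# Split the data set evenly across four simulated workers.
shards = [(X[i::4], y[i::4]) for i in range(4)]

w = np.zeros(2)
for _ in range(200):
    w = data_parallel_step(w, shards)
print(np.round(w, 2))  # → [ 2. -3.]
```

In a real hyperscale deployment, the averaging step is where the communication, synchronization, and reduction challenges Lau describes arise, because the gradients must cross network links between servers, possibly in different data centers.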

Intelligent, on-board AI technology must first be trained to do complex tasks, such as recognizing pedestrians. To do that, the training process consists of showing the on-board AI model tens of thousands of images until it “learns” what a pedestrian looks like. “At the data center, we use highly optimized compute, storage, and GPU servers to train the models, run simulations, run regressions, and test new software releases,” she said.
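The repeated-exposure training Lau describes can be boiled down to a supervised learning loop: show the model labeled examples, measure its error, and nudge its weights until the error shrinks. Real perception models are deep networks trained on images; the stand-in below uses a tiny logistic classifier on made-up 2-D feature vectors (labels and distributions are invented for illustration).

```python
# Hedged sketch of supervised training: a logistic classifier "learns"
# to separate synthetic "pedestrian" (1) from "background" (0) features
# through repeated gradient updates on labeled examples.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
# Two synthetic clusters standing in for labeled image features.
X = np.vstack([rng.normal( 1.0, 0.5, size=(500, 2)),   # "pedestrian"
               rng.normal(-1.0, 0.5, size=(500, 2))])  # "background"
y = np.concatenate([np.ones(500), np.zeros(500)])

w, b, lr = np.zeros(2), 0.0, 0.5
for _ in range(300):                      # repeated exposure = training
    p = sigmoid(X @ w + b)                # model's current predictions
    w -= lr * X.T @ (p - y) / len(y)      # cross-entropy gradient step
    b -= lr * np.mean(p - y)

accuracy = np.mean((sigmoid(X @ w + b) > 0.5) == y)
print(f"training accuracy: {accuracy:.2f}")
```

The same loop structure scales up to the data-center setting Lau describes: the model and data simply become large enough that the loop must run on optimized GPU servers rather than a laptop.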

To promote the advancement of AI for autonomous cars, Lau points to three key areas of focus. First and foremost, she said, hardware innovation in server architecture and design across compute, networking, and storage is critical to support the AI workload. Second, Uber is focusing on the refinement of the software stack to ensure that distributed deep learning can run in an HPC (high-performance compute) environment. Third, investment in R&D is essential. “We need to drive AI R&D to help advance this important field,” she said.

Lau said that a big part of Uber’s effort involves driving hardware innovation and server technology for compute, networking, and storage in order to support its AI workload. In her keynote speech, “Hyperscale Artificial Intelligence: A Glimpse at the Hardware Innovation Powering Uber's Self-Driving and Flying Cars,” Lau will explain how Uber takes on the challenges of architecting and designing hardware servers for a hyperscale data center to optimize the communication, synchronization, reduction, and distribution for the AI workload. An electrical engineer with a BSEE and MSEE from MIT and professional stops at Intel, Nvidia, Facebook, SGI, and Sun Microsystems, Lau will examine the challenge from a hardware perspective.

She will also call on the DesignCon audience to aid in that effort. “The audience at DesignCon are chip designers, board designers, and system designers,” she told us. “My message is we need them to help us innovate—to create better, faster, and higher-performing computers to sustain this compute-intensive AI deployment at the hyperscale data centers around the world.”

Lau believes that Uber’s autonomous cars will transform many lives, restoring mobility to elderly people who have given up the privilege of driving. Uber's advantage, she noted, is that it has both the ride-sharing and autonomous technology under one roof.

DesignCon 2019: By Engineers, For Engineers. Join our in-depth conference program with over 100 technical paper sessions, panels, and tutorials spanning 15 tracks. DesignCon runs Jan. 29-31, 2019, in Santa Clara, CA. Register to attend; the event is hosted by Design News’ parent company UBM.