Google Is Bringing AI to the Edge for Everyone in 2019

The 2019 Google I/O developer conference brought a wave of artificial intelligence announcements as the company touted a shift toward edge-based AI and a new focus on improved privacy.

Chris Wiltz

May 7, 2019


At I/O 2019, Google CEO Sundar Pichai unveiled new AI technologies and discussed the company's commitment to maintaining privacy with AI. (Image source: Google)

Last year, when Google debuted Duplex, its AI assistant capable of making voice calls on behalf of users, it turned a lot of heads and stirred its fair share of controversy. What it also did was make a firm statement that the search giant had big plans for artificial intelligence and the role it can play in users' daily lives.

This year, at its 2019 I/O developer conference, Google doubled (even tripled) down on its AI ambitions with a slew of announcements aimed at making AI more accessible, more powerful, and more respectful of users' privacy. It's a company-wide effort that Google CEO Sundar Pichai called “AI for Everyone.”

Is Google an Edge Company Now?

If there is one major overall takeaway from the various product and services announcements at I/O 2019, it's that Google is all about edge computing now. That's right, one of the biggest names in cloud computing is making a heavy investment into bringing artificial intelligence and machine learning to the edge.

Traditionally, hardware limitations have made this prohibitive. But during the conference's opening keynote, Pichai told the I/O 2019 audience that Google has been able to take its deep learning models, which were previously up to 100GB in size, and scale them down to 0.5GB.
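
Google didn't detail exactly how it compressed those models, but a common first step for fitting a model onto a phone is post-training quantization, which stores weights at lower precision. Below is a minimal sketch using TensorFlow Lite; the "speech_model/" path is a hypothetical placeholder, and this illustrates the general technique rather than Google's actual Assistant pipeline.

```python
# Minimal sketch: post-training quantization with TensorFlow Lite.
# Illustrates shrinking a model for on-device use; not Google's actual
# Assistant compression pipeline. "speech_model/" is a placeholder path.
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("speech_model/")
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enable weight quantization
tflite_model = converter.convert()

with open("speech_model_quantized.tflite", "wb") as f:
    f.write(tflite_model)
```

Dynamic-range quantization of this kind typically cuts a 32-bit float model to roughly a quarter of its size; reaching reductions like the 100GB-to-0.5GB figure Pichai cited would likely require more aggressive techniques such as pruning and distillation as well.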

“What if we could bring the AI that powers [Google] Assistant right onto your phone?” Scott Huffman, VP of Engineering for Google Assistant, asked the audience. He said that Google wants to bring Assistant to the edge in such a manner that will “make tapping on your phone seem slow by comparison.”

The next-generation Google Assistant will run on-device rather than via the cloud. Huffman explained that this will allow Assistant to process requests in real time and deliver answers up to 10 times faster. In a live demonstration, Google Assistant was able to quickly multitask through a series of apps (such as maps, Lyft, and text messaging) all with a series of voice commands, and without the need to utter “Hey Google” for each respective command.

Asked to pick out photos from a specific trip to text to a friend, Assistant was able to bring up a selection of these photos, then further narrow down its results when told, “The photos with animals.”

Huffman announced that the newest version of Google's Pixel brand of phones – the Pixel 3a and 3a XL – will come with the new Assistant built in. It's designed to pair nicely with updates to Google's machine learning models that will be incorporated into the next version of the Android OS, called Android Q. 

Little was given in the way of technical details of the new Pixel phones, but specs reveal both phones run on a Qualcomm Snapdragon 670 processor. While not the latest version of the Snapdragon, the 670 does feature Qualcomm's proprietary “Multicore AI Engine” designed to run AI algorithms on the edge. Google likely opted for the 670 as a means of handling its new edge-based AI while also keeping the phone at a mid-range price point of $399.

According to Google, the addition of AI on the edge allows the Pixel to achieve things with software and machine learning that other phones have tried (and sometimes failed) to achieve with hardware alone.

More Duplex

Google is also continuing its development of Duplex and announced it is extending Duplex into handling web-based tasks, with test programs being rolled out to handle movie ticketing and car rentals.

When implemented, “Duplex on the web” will handle all of the legwork for users with a simple voice command. By just telling Duplex to book you a rental car, it can automatically go to a rental agency's website, fill in your personal information, pick the dates you need the car based on your travel itinerary (which can be culled from Gmail), and even select the type of car you prefer. All of this would be handled on Google's end and require no action on the part of businesses for implementation.


Privacy on the Edge

In order for AI to do its job, however, it needs insight into a good amount of user data – something that naturally brings up questions around privacy and potential biases in the data.

To Google's credit, Pichai was quick to address these concerns during his keynote. He explained that part of the aim of moving its AI to the edge is to actually make it less intrusive into users' personal and sensitive data and also help eliminate the potential for unexpected biases.

“We want to ensure that our AI models don't reinforce biases that exist in the real world,” Pichai said, adding that Google is doing fundamental research to improve the transparency of its ML models and reduce bias.

In a demonstration, Pichai noted that most models operate on low-level features. An image recognition model, for example, looks at things like edges and lines rather than the higher-level details that humans do. A human identifies a zebra by its stripes; an algorithm does it at a much more granular level.

Google is looking to address this with a new method called Testing with Concept Activation Vectors (TCAV). The idea behind TCAV is to allow AI to work with higher-level concepts in image recognition.

For example, an algorithm can pick out different details of an image and weigh them based on their importance. In Pichai's example, an algorithm could take a picture of a male wearing a lab coat and stethoscope and weigh these different factors to determine it is a picture of a doctor. The danger in this is that it can teach bias to the algorithm. For instance, it may come to associate maleness as a key factor in identifying a doctor and exclude females.

Errors like this can also carry more serious stakes. An algorithm for recognizing skin cancer, for example, needs to be able to make its determination across a wide variety of skin tones. Any bias toward finding or ignoring cancer in different skin tones can have a very serious negative impact.

Google is implementing a new method called TCAV to help guard against unintended biases in AI algorithms. (Image source: Google)
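
For readers curious how TCAV works under the hood, the core recipe is to train a simple linear classifier that separates a network layer's activations for “concept” images (say, lab coats) from activations for random images; the concept activation vector is the direction orthogonal to that decision boundary, and the TCAV score is the fraction of class examples whose prediction is positively sensitive to that direction. Below is a rough NumPy/scikit-learn sketch of that idea – the activation arrays and gradient function are hypothetical inputs, and this is not Google's released TCAV library.

```python
# Rough sketch of the TCAV idea (not Google's released tcav library).
# Assumes you already have layer activations: `concept_acts` for images that
# show the concept (e.g., "lab coat"), `random_acts` for random images, and a
# function `logit_grad(act)` returning the gradient of the target class logit
# (e.g., "doctor") with respect to that activation -- all hypothetical inputs.
import numpy as np
from sklearn.linear_model import LogisticRegression

def concept_activation_vector(concept_acts, random_acts):
    """Fit a linear classifier between concept and random activations;
    the CAV is the unit vector orthogonal to its decision boundary."""
    X = np.vstack([concept_acts, random_acts])
    y = np.array([1] * len(concept_acts) + [0] * len(random_acts))
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    cav = clf.coef_[0]
    return cav / np.linalg.norm(cav)

def tcav_score(class_acts, cav, logit_grad):
    """Fraction of class examples whose prediction is positively
    sensitive to the concept direction (directional derivative > 0)."""
    sensitivities = [np.dot(logit_grad(act), cav) for act in class_acts]
    return np.mean(np.array(sensitivities) > 0)
```

A high TCAV score for a concept like “male” on the “doctor” class would flag exactly the kind of unintended bias Pichai described.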

Federated Learning

On the privacy end, moving AI to the edge also allows Google to implement a new process it calls Federated Learning in order to keep algorithms from sharing users' data out into the cloud.

Federated Learning allows Google's AI products to work without collecting raw data from devices and processing it in the cloud. It actually works in reverse of how these sorts of services are implemented today. Rather than sending your data to the cloud to be processed by a machine learning model, Federated Learning sends the machine learning model directly to your device so that the data can be processed on-device.

The system maintains a global model that is sent out to devices. As each on-device copy gets smarter based on its user's input, the resulting updates are aggregated back into the global model to make it smarter for all users.

Using Federated Learning, devices personalize a machine learning model locally based on user input (A). The personalized models are then aggregated across users (B) and a consensus change is formed (C) to the shared, global model. (Image source: Google AI)

With the Google digital keyboard (Gboard) for example, the keyboard can learn new words from the millions of people typing on Google devices. It improves by looking at how all of the models across devices are improved, not by picking up personal data from the phone.
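
As a rough illustration of that loop, the sketch below implements a toy version of federated averaging: each simulated “device” refines the shared weights on its own private data, and only the resulting models – never the data – are averaged back into the global model. The linear-regression task and device data here are invented for demonstration; Google's production system adds refinements such as secure aggregation.

```python
# Toy sketch of federated averaging, the basic idea behind Federated
# Learning: devices train on local data, and only model weights, never
# the raw data, are aggregated into the shared global model.
# Illustrative only -- not Google's production Gboard pipeline.
import numpy as np

def local_update(global_weights, local_data, local_labels, lr=0.1, epochs=5):
    """(A) Each device refines the global model on its own data (here, a
    simple linear regression trained with gradient descent)."""
    w = global_weights.copy()
    for _ in range(epochs):
        preds = local_data @ w
        grad = local_data.T @ (preds - local_labels) / len(local_labels)
        w -= lr * grad
    return w

def federated_round(global_weights, devices):
    """(B, C) Aggregate the personalized models from many devices into a
    consensus update to the shared global model (a simple average here)."""
    updates = [local_update(global_weights, X, y) for X, y in devices]
    return np.mean(updates, axis=0)

# Toy usage: three "devices", each with private data that never leaves it.
rng = np.random.default_rng(0)
devices = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(3)]
global_w = np.zeros(3)
for _ in range(10):
    global_w = federated_round(global_w, devices)
```

Because only the averaged weights travel between devices and the server, the raw keystrokes, photos, or audio stay on the phone.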

This model also holds wider implications for improving user experiences. The latest call screening feature on the upcoming Pixel phones, for example, will use speech recognition and natural language processing to identify and screen robocalls and spam.

Google has said Federated Learning is still in early development, but it will be rolling out across many of the company's products. Android Q will use Federated Learning in its text and audio features so that none of this type of data actually leaves the phone. The capability will be distributed throughout the OS, so users should expect to see the first examples of Federated Learning as Android Q devices begin to roll out.

Chris Wiltz is a Senior Editor at Design News covering emerging technologies including AI, VR/AR, blockchain, and robotics.

