Emotional AI Makes Your Car Really Know How You Feel

Imagine if your car knew how you felt and adjusted accordingly. Affectiva's Automotive AI is a system capable of recognizing the emotional states of drivers and passengers in real time.

Chris Wiltz

May 2, 2018

5 Min Read
Affectiva's goal is to add an emotional component to your vehicle. The company's Emotional AI system is capable of recognizing a range of human emotions and states, including anger and drowsiness, via a video camera system mounted in the car's interior. The ultimate aim is to allow cars to respond in a variety of ways based on the emotions of the driver and passengers. A vehicle could automatically adjust the temperature or play soothing music depending on a driver's mood. And in extreme situations, an autonomous vehicle could someday pull over to the side of the road and alert EMTs if it detects an emergency with the driver, such as a heart attack or fainting. (Image source: Affectiva)

Imagine if your car could pull itself over when you're drowsy or nauseous, or adjust the temperature and music when gridlock is stressing you out. Maybe it could even refuse to start if it knows you're intoxicated.

With advanced driver-assistance systems (ADAS) already in place and autonomous vehicles on the horizon, a lot of work is being done around sensing and machine learning to help vehicles better understand the roads and the world around them. But Boston-based startup Affectiva thinks more needs to be done around the car's internal world, specifically the emotional state of the driver.

This Automotive AI system can recognize seven emotional metrics and as many as 20 facial expression metrics in drivers and passengers. (Image source: Affectiva)

Affectiva has built its business model around creating “emotional AI,” algorithms capable of recognizing human emotional states. The company recently rolled out its first product, Affectiva Automotive AI—a system capable of real-time analysis of the emotional states of drivers and passengers via cameras and voice recorders mounted into the cabin.

Speaking with Design News, Abdelrahman Mahmoud, product manager at Affectiva, said that over the past year, the company's technology has garnered a lot of interest from Tier 1 suppliers and OEMs in the automotive space. “[They] were surprised about what we could do to understand what's happening in the cabin, whether that was in-cabin activities or motions, and how we can use those metrics to actually have the systems in the car adapt to that, whether it was for entertainment or for safety,” Mahmoud said.

He explained that Affectiva only supplies the software end, leaving suppliers and automakers to customize the system as they see fit in terms of the hardware needed. “We did a lot of training our models to recognize emotion from different head positions and head views, so the OEM has control over the design and where to place the camera,” Mahmoud said. “We also worked a lot on making sure the platform can run robustly in real time on end devices. There's no GPU required. We've even had our models run on dual-core CPUs for mobile devices.” He noted that the company has added support for near-infrared (NIR) cameras so that the driver and occupants can be monitored under all lighting conditions, including at night.

A driving simulator at GTC 2018 demonstrated Affectiva's emotional AI's real-time emotion recognition capabilities. (Image source: Design News) 

Affectiva's emotional AI can currently recognize seven emotional metrics (anger, contempt, disgust, fear, joy, sadness, and surprise) and as many as 20 facial expression metrics. Mahmoud said that automakers are particularly interested in measuring joy, anger, surprise, drowsiness, frustration, intoxication, and nausea—with a particular emphasis on drowsiness, distraction, and intoxication. Ultimately, it will be up to the OEMs to decide what metrics they want to measure and how the vehicle will respond. “We see different levels of control depending on things like the level of drowsiness,” Mahmoud explained. “You can first have auditory alerts, followed by visual alerts, then things that could suggest, like if the car has semi-autonomous capability, why not engage those capabilities when [the system] detects drowsiness.”
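
As a rough illustration of how an OEM might wire up that kind of escalating response, here is a minimal Python sketch. The drowsiness scale, thresholds, and action names are hypothetical; the actual metric ranges and interventions would be chosen by the automaker, not Affectiva.

```python
from enum import IntEnum

class DrowsinessLevel(IntEnum):
    ALERT = 0
    MILD = 1
    MODERATE = 2
    SEVERE = 3

# Hypothetical cutoffs on an assumed 0-100 drowsiness score.
def classify_drowsiness(score: float) -> DrowsinessLevel:
    if score < 25:
        return DrowsinessLevel.ALERT
    elif score < 50:
        return DrowsinessLevel.MILD
    elif score < 75:
        return DrowsinessLevel.MODERATE
    return DrowsinessLevel.SEVERE

def respond(level: DrowsinessLevel, has_semi_autonomy: bool) -> list[str]:
    """Escalating interventions in the order Mahmoud describes:
    auditory alerts, then visual alerts, then semi-autonomous takeover."""
    actions = []
    if level >= DrowsinessLevel.MILD:
        actions.append("play_auditory_alert")
    if level >= DrowsinessLevel.MODERATE:
        actions.append("show_visual_alert")
    if level >= DrowsinessLevel.SEVERE and has_semi_autonomy:
        actions.append("engage_lane_keeping_and_speed_control")
    return actions

# A severely drowsy driver in a semi-autonomous car triggers all three tiers.
print(respond(classify_drowsiness(80.0), has_semi_autonomy=True))
```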

The challenge in this scenario becomes apparent: How can you standardize this across all drivers? Even something as seemingly simple as adjusting music according to mood can get very complex once human factors are taken into account. The same Led Zeppelin song that might make one driver happy and relaxed might send another driver's stress levels through the roof.

Mahmoud said the solution to this has been developing AI capable of building a long-term emotion profile. “We've developed a model that can do long-term emotional tracking, not just in the course of one interaction, but over time, to build an emotional profile and baseline as well as detecting anomalies and major events.”
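
Affectiva has not published how this profile model works, but a common way to sketch baseline-plus-anomaly tracking is an online running mean and variance (Welford's algorithm) with a z-score test. The Python below is a minimal illustration under that assumption, not Affectiva's actual model.

```python
import math

class EmotionBaseline:
    """Per-driver running baseline for one emotion metric,
    using Welford's online mean/variance algorithm."""

    def __init__(self, z_threshold: float = 3.0):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations
        self.z_threshold = z_threshold

    def update(self, value: float) -> bool:
        """Fold one reading into the baseline. Returns True if the
        reading deviates anomalously from the driver's history."""
        anomalous = False
        if self.n > 1:
            std = math.sqrt(self.m2 / (self.n - 1))
            if std > 0 and abs(value - self.mean) / std > self.z_threshold:
                anomalous = True
        self.n += 1
        delta = value - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (value - self.mean)
        return anomalous

# e.g., an anger score sampled across many drives; the spike stands out
baseline = EmotionBaseline()
for reading in [0.10, 0.15, 0.12, 0.11, 0.13, 0.90]:
    if baseline.update(reading):
        print(f"anomaly detected: {reading}")
```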

In a talk at the recent 2018 GPU Technology Conference, Ashutosh Sanan, a computer vision scientist at Affectiva, explained that the challenge in sensing emotion lies in temporal modeling: the AI has to discern emotions from sequences of images (i.e., video footage) rather than from a single image. “It's a tough problem because facial muscles can generate hundreds of expressions and emotions,” Sanan said. Such a process involves analyzing many complex expressions and performing many multi-attribute classifications. And it all has to run fast enough for embedded systems and mobile devices.

To overcome this, Sanan said the team at Affectiva used a combination of a convolutional neural network (CNN) and a long short-term memory (LSTM) network. CNNs are typically used in image recognition to teach an AI to recognize specific objects or properties of images. An LSTM is a type of recurrent neural network that allows an AI to learn patterns in sequential data. Combine the two and you have a model capable of recognizing and retaining patterns across sequences of images.
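
A minimal PyTorch sketch of that CNN-plus-LSTM pattern is shown below: the CNN produces a feature vector for each video frame, and the LSTM integrates those vectors over time before a final classification. The layer sizes, input shape, and seven-class output are illustrative assumptions, not Affectiva's actual architecture.

```python
import torch
import torch.nn as nn

class CnnLstmClassifier(nn.Module):
    """CNN extracts per-frame features; LSTM models them over time.
    Illustrative sizes only -- not Affectiva's production model."""

    def __init__(self, num_classes: int = 7):
        super().__init__()
        self.cnn = nn.Sequential(                    # per-frame feature extractor
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),   # -> (batch, 32)
        )
        self.lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, num_classes)       # e.g., 7 emotion classes

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, time, channels, height, width)
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1))       # (b*t, 32)
        feats = feats.view(b, t, -1)                 # (b, t, 32)
        out, _ = self.lstm(feats)                    # (b, t, 64)
        return self.head(out[:, -1])                 # classify the last time step

# A batch of two 30-frame grayscale face-crop clips.
logits = CnnLstmClassifier()(torch.randn(2, 30, 1, 64, 64))
print(logits.shape)  # torch.Size([2, 7])
```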

“Your emotional state is a continuously evolving process,” Sanan said. “Leveraging [temporal information] makes our predictions more robust and accurate. Adding temporal information makes it easier to detect highly subtle changes in facial state.”

There has been no official word on when the first vehicles implementing Affectiva's technology will be released. But should the idea of emotional AI for autos catch on, we may even see autonomous taxis and fleet vehicles adjusting their behavior based on their passengers' personalities. While Affectiva did not specifically set out to become an auto-centric company, Mahmoud said that automotive is becoming the company's core focus. The next steps, he said, are to become more engaged in the productization of its technology for cars. “On the research side, we're also doing more around in-cabin sensing for more nuanced emotional states. Nausea and stress are active areas of research for us.” According to the company, the database used to train its emotional AI currently consists of over six million faces from 87 countries.

Chris Wiltz is a Senior Editor at Design News covering emerging technologies including AI, VR/AR, and robotics.
