"It's a very bright light, so when it turns on in your peripheral vision, you'll have a reflex action to look back in that direction," said Ron Szabo, director of forward engineering for Delphi's infotainment and driver interface product line.
If the LED warnings don't get the driver's attention, MyFi can escalate to other mitigating actions, such as sounding an alarm or shutting down selected features. At a higher level, it can incorporate information such as vehicle speed or sensor data from a forward-looking radar array to detect obstacles, pedestrians, and lane departures. It can also connect to the cloud and factor in traffic conditions.
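The escalation behavior described above can be sketched as a simple decision policy. This is purely illustrative: the class names, thresholds, and countermeasure labels below are assumptions, not Delphi's actual MyFi interface, which is not public.

```python
# Hypothetical sketch of a distraction-mitigation escalation policy.
# All names and thresholds are illustrative assumptions, not MyFi's real API.

from dataclasses import dataclass


@dataclass
class VehicleState:
    distraction_score: float  # 0.0 (attentive) .. 1.0 (fully distracted)
    leds_working: bool        # are the peripheral-vision LEDs available?
    speed_kph: float
    obstacle_ahead: bool      # e.g. from a forward-looking radar array


def choose_countermeasure(state: VehicleState) -> str:
    """Pick an escalating response, roughly as the article describes."""
    if state.distraction_score < 0.3:
        return "none"
    # Immediate hazard at speed: escalate straight to an alarm.
    if state.obstacle_ahead and state.speed_kph > 30:
        return "audible_alarm"
    # Preferred cue: flash the LEDs in the driver's peripheral vision.
    if state.leds_working:
        return "flash_leds"
    # Fallbacks when the LEDs are unavailable.
    if state.distraction_score > 0.7:
        return "disable_infotainment"
    return "audible_alarm"
```

The key design point is graduated response: a mild visual cue first, with louder or more restrictive measures reserved for higher distraction levels or imminent hazards.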
"Texting and cellphone usage tend to dominate the discussion today," DeVos said. "But they're really just the latest technologies. We're trying to create a system that can comprehend any form of distraction and provide countermeasures to it."
Toyota's NS4 concept car uses a human-machine interface (HMI) with the look and feel of a smartphone. Toyota is teaming with Intel, Microsoft, and Salesforce.com on the interface.
Detecting distraction is only part of the battle. Automotive engineers are also trying to build better in-dash systems, so that the distraction doesn't happen in the first place.
One way to do that already exists but needs improvement. Voice recognition combined with Bluetooth headsets can help drivers direct the operation of many of the car's features. Ford's Sync lets users operate phones and radios and navigate music libraries with voice commands. And Delphi has shown how speech-to-text systems can help drivers send text messages, and text-to-speech technology can read incoming texts.
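The hands-free message flow described above amounts to two simple paths: incoming texts are spoken aloud, and spoken replies are transcribed for sending. The sketch below is a minimal illustration of that flow; `speech_to_text` and `text_to_speech` are stand-in stubs, not the actual (non-public) interfaces of Ford Sync or Delphi's systems.

```python
# Illustrative sketch of hands-free text handling. The two engine
# functions are stubs standing in for real speech models.

def speech_to_text(audio: bytes) -> str:
    """Stub transcriber; a real system would run a speech-recognition model."""
    return audio.decode("utf-8")  # pretend the audio is already a transcript


def text_to_speech(text: str) -> str:
    """Stub synthesizer; returns the utterance a real TTS engine would speak."""
    return text


def handle_incoming_text(sender: str, body: str) -> str:
    # Read the message aloud so the driver never has to look at a screen.
    return text_to_speech(f"New message from {sender}: {body}")


def compose_reply(audio: bytes) -> str:
    # Transcribe the driver's dictated reply so it can be sent as a text.
    return speech_to_text(audio)
```

Either direction keeps the driver's eyes on the road, which is the whole point of pairing speech-to-text with text-to-speech in the dash.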
However, engineers say that voice recognition systems need to become more intuitive than they are today. "We'd like to get to the point where we have no-look controls," Szabo said. "Natural voice commands will be important in helping us get there."