To be sure, TRW's system is not the first script-recognition device for automobiles. Audi already has similar systems in production, and other manufacturers are said to be working on the technology.
But by employing a less complex sensor array, TRW hopes to make its system appeal to manufacturers of entry- and mid-level vehicles. The company contends its technology is less costly than traditional indium-tin-oxide (ITO) capacitive arrays and would therefore enable automakers to use simple 8-bit and 16-bit MCUs to control it, instead of complex application-specific integrated circuits (ASICs). TRW representatives would not say how large the cost reduction for automakers might be.
Studies performed by TRW reportedly suggest that the technology could reduce driver distraction by as much as 78 percent compared with the use of alphanumeric keyboards. That would be a major step forward for automakers, many of which are searching for new ways to deal with the myriad distractions in today's vehicles. In December, the National Transportation Safety Board called for the "first-ever nationwide ban on portable electronic devices" in vehicles, saying that driver distraction accounts for about 3,000 fatalities per year.
TRW's technology is likely to compete not only with other script recognition systems, but with voice recognition systems, as well. Today, many vehicles use voice commands to access dashboard features. TRW representatives said they see their technology as a step up from voice recognition.
"In a number of situations, touchpad controls can be better than voice commands as you do not have the issues of a noisy cabin or road environment that could obscure or cause the commands to be misinterpreted," TRW spokesman John Wilkerson wrote in an email to Design News.
TRW said it doesn't yet know when the technology will reach a production vehicle, but added that it is "working on capacitive sensing projects with automakers."
My husband has always had cars that have voice recognition (he trades a car every three years for work) and it's been a constant source of entertainment for the family. He's somewhat of a techie so he'd get everyone all ready to see how accomplished his voice recognition system would be--how the car would automatically dial grandma or find a cool restaurant along the route. The things that came back during the interaction were literally hilarious and never even close to the command he was issuing. I have to say, over the last few years, even though the voice systems have gotten better, those experiences have made him lay off using the capabilities pretty much altogether. Perhaps something like Apple's Siri can change the technology's bad rap.
I think there's a lot of potential in using scripts. Much like old-fashioned shorthand, you could have a long list of options and cut to the chase with commands pretty quickly. I'm definitely very interested to see where this technology can take us.
I think the script idea is a good one, and it's been around for a long time. Back in 1983 I used a CAD system from Applicon that employed user-created symbols that you could define to do whatever commands you chose.
If you could choose what symbol you wanted to create (a Z for the radio, a C for the cruise, N for navigation, etc.), then you could customize your experience, and have hands-free customized access to everything.
That's what we did with the Applicon CAD system, and it was the most productive system in the department.
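The customizable symbol-to-command idea described above boils down to a simple lookup table: each user-chosen script character maps to a vehicle feature. A minimal sketch in Python, assuming a hypothetical `dispatch` helper and illustrative feature names (none of this reflects TRW's actual software):

```python
# Hypothetical mapping of user-defined script characters to dashboard
# features, as the commenter suggests (Z for radio, C for cruise, etc.).
# Names and structure are illustrative only.
COMMAND_MAP = {
    "Z": "radio",
    "C": "cruise_control",
    "N": "navigation",
}

def dispatch(symbol: str) -> str:
    """Return the feature bound to a recognized script character,
    or 'unrecognized' if the user never defined that symbol."""
    return COMMAND_MAP.get(symbol.upper(), "unrecognized")

print(dispatch("z"))  # → radio
print(dispatch("x"))  # → unrecognized
```

Because the table is user-defined, each driver could bind whatever symbols they find natural, which is essentially how the Applicon symbol macros worked.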
Beth, I fully agree with you. I think this is why voice recognition software for PCs has not really taken off. I am sitting in a Starbucks (the office of choice for many, I notice) and I would not want to have to speak to enter this post. I notice, by the way, that there are very few people with iPads (or other tablets) here. Just about everyone has a laptop open.
Formula 1 drivers interact with their cars via the steering wheel. These wheels have become fantastically expensive (primarily because of the low volumes). In fact, they stay with the driver. A system like this one from TRW in a steering wheel could be an interesting twist on that idea.
I'd trade voice commands for script commands any day. This technology seems pretty promising in that it seems, on the surface, pretty simple in terms of usability. The problem with voice is that there are so many openings for the system to misinterpret what you're asking of it that it's almost a joke. This seems much more straightforward, especially if the commands are simple.