6 Times AI Didn't Live Up to the Hype

There's a lot of hype surrounding artificial intelligence, but it's important that we don't forget the realities of the technology and what it really can and can't do.
  • Hollywood hasn't done artificial intelligence any favors, according to Oliver Christie. Speaking at the recent 2018 Pacific Design & Manufacturing Show in Anaheim, Calif., Christie, a consultant specializing in artificial intelligence, said we're in danger of letting the hype and hyperbole around AI cloud our thinking about the technology and its true capabilities.

    “We think of AI in a war type setting,” he told the audience. “We think of the technology as if we're in a sci-fi world, but we're not. And these views are impacting decisions made in the real world.”

    When it came to fantastical things we feared would come and destroy the world, aliens used to top the list. Now there's a strong case that AI has taken the top spot. Between the scenarios painted in shows like Westworld and movies ranging from 2001 to The Terminator and Ex Machina, Christie said, films and TV are doing for AI what Jaws did for sharks.

    Indeed, people have accused AI of all sorts of shenanigans, from eavesdropping on us to creating Bitcoin as part of a master plan to take over the world. And exaggerated scenarios like these can have a real influence on the policymakers and engineers overseeing AI.

    “I don't want regulation to come from Hollywood,” Christie said. “People in government panic; they overreact, and if we're not careful we're going to have a straitjacket put on [AI] very quickly.”

    But tackling AI hype isn't a daunting task; it's merely a matter of having a clearer conversation about what AI and machine learning can really do. In that spirit, we present six times AI failed to live up to the hype and what that really means for our industries and society.


  • Facebook's AI Created its Own Language

    The hype: An artificial intelligence experiment conducted at Facebook was promptly shut down after it was discovered that the AI was creating its own language.

    The reality: This is the type of story that easily conjures up frightening scenarios of machines conspiring to overthrow us. In reality, it's just an example of an AI doing what we expect machines to do – finding the most efficient means of accomplishing a task. AI agents learn through a reward-based system, and when Facebook let two of them have a conversation with each other, both quickly settled on the fact that English, as spoken, is rather inefficient. Rather than creating some all-new, secret language, the agents began using a stripped-down version of English to get their points across to each other. Facebook researchers didn't witness the emergence of a new intelligence; they merely watched two programs apply error reduction.

    In an interview with Fast Company, Dhruv Batra, a visiting research scientist from Georgia Tech at Facebook AI Research, described the incident in plain terms: “... Neither [AI] was offered any sort of incentive for speaking as a normal person would. So they began to diverge, eventually rearranging legible words into seemingly nonsensical sentences.”
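    To make that incentive concrete, here is a minimal, hypothetical Python sketch – not Facebook's actual code – in which the reward only checks whether a request can be decoded, minus a small per-token cost. Because nothing in the reward values sounding human, the degenerate repeated-token encoding wins:

    CANDIDATES = [
        "I would like to have three balls please",   # ordinary English request
        "balls balls balls",                         # degenerate but decodable: one token per ball
    ]

    def decodes_correctly(message: str, wanted: int) -> bool:
        # A simple counterpart can decode either style: count the item tokens,
        # or fall back to the number word.
        words = message.split()
        return words.count("balls") == wanted or ("three" in words and wanted == 3)

    def reward(message: str, wanted: int = 3) -> float:
        success = 1.0 if decodes_correctly(message, wanted) else 0.0
        token_cost = 0.01 * len(message.split())  # mild pressure toward shorter messages
        return success - token_cost               # note: no term rewards "sounding human"

    print(max(CANDIDATES, key=reward))  # -> "balls balls balls"

    The point of the toy example is simply that "speak like a person" never appears in the objective, so the agents have no reason to keep doing it.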

    [image source: Pixabay]  

  • Face-Tracking AI

    The hype: The machines are watching us and they know who we are! We're rapidly approaching a Minority Report type of world in which algorithms can track our every movement using our faces.

    The reality: Though computer vision is one of the most highly touted and heavily researched applications of AI, algorithms are still pretty bad at facial recognition. The one exception may be males with lighter skin, whom AI turns out to be pretty adept at identifying.

    Joy Buolamwini, a researcher in the MIT Media Lab’s Civic Media group, recently conducted a study in which she tested three commercially available facial recognition systems, from Microsoft, IBM, and Megvii, respectively. Buolamwini and her team found that the software was more accurate at identifying lighter-skinned subjects than darker-skinned ones. Each program's error rate in identifying the gender of light-skinned men never exceeded 0.8 percent. As the subjects' skin got darker, however, the error rate shot up: one program had a 20 percent error rate in identifying African-American women, and the other two showed error rates of about 34 percent in the same case.
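    The methodology behind findings like these is straightforward: score the classifier's errors separately for each demographic subgroup instead of averaging over the whole test set. The short Python sketch below illustrates the calculation with invented records; it is not the study's code or data.

    from collections import defaultdict

    # Each record: (subgroup, true gender, gender predicted by the classifier).
    # These records are made up purely to show the per-group calculation.
    results = [
        ("lighter-skinned male",  "male",   "male"),
        ("lighter-skinned male",  "male",   "male"),
        ("darker-skinned female", "female", "male"),    # misclassified
        ("darker-skinned female", "female", "female"),
        ("darker-skinned female", "female", "male"),    # misclassified
    ]

    errors, totals = defaultdict(int), defaultdict(int)
    for group, truth, predicted in results:
        totals[group] += 1
        if predicted != truth:
            errors[group] += 1

    for group, total in totals.items():
        print(f"{group}: {100.0 * errors[group] / total:.1f}% error over {total} samples")
    # A single aggregate accuracy figure would hide exactly the gap these per-group rates expose.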

    In an article from MIT News, Buolamwini discussed her study's results and potential consequences, saying, “The same data-centric techniques that can be used to try to determine somebody’s gender are also used to identify a person when you’re looking for a criminal suspect or to unlock your phone. And it’s not just about computer vision. I’m really hopeful that this will spur more work into looking at [other] disparities.”

    [image source: MIT News]  

  • Jobs

    The hype: Artificial intelligence is becoming so smart that machines and software will soon be taking over all of our jobs.

    The other hype: Rather than replacing human workers, AI is going to augment the workforce, moving us away from tedious, labor-intensive jobs and toward more creative and intellectually stimulating work.

    The reality: It's too soon to tell what AI's true impact on jobs will be – whether it will bring about a new industrial revolution or lay waste to the human workforce. In all likelihood, the reality will fall somewhere in the middle.

    In actuality, the impact AI is having on jobs depends on a number of factors: whom you ask, what industry you're talking about, and how far into the future you're willing to look. A 2017 study by technology consulting group Capgemini suggested that not only is AI not destroying jobs, it is actually creating them, with banking and retail the industries most affected. Analysts at Gartner have made similar predictions, saying that by 2020 AI will have eliminated 1.8 million jobs but created 2.3 million.

    Robotics companies will tell you that machines are definitely displacing human workers, but only in the sorts of repetitive, menial tasks that humans would rather not be doing anyway. Further, they tout that this displacement frees human workers to move into higher-level positions that are more dynamic and carry more responsibility.

    On the flip side, a 2017 Gallup report predicted that 37% of millennials are at high risk of having their jobs replaced by some form of automation. A January 2017 report by the McKinsey Global Institute suggested that half of today’s jobs could be automated sometime between 2035 and 2055, depending on various technological and economic factors. However, the McKinsey report also acknowledges, “The pace of automation, and thus its impact on workers, will vary across different activities, occupations, and wage and skill levels.”

    We also know it will be some time before AI can make any valid attempt at taking over tasks that require skills like intuition and creativity. Attempts at having AI craft original fiction, for example, have been laughable at best. Your job may or may not go away, but it will very likely change at some point in your lifetime because of AI. Depending on your occupation, your career doomsday may vary.

    [image source: Rethink Robotics] 

  • Microsoft's Racist AI

    The hype: Microsoft created a chatbot named Tay that, once released into the world, quickly became racist and anti-Semitic.

    The reality: AI only learns what it is taught. Tay was designed to interact with users on Twitter in the voice of an 18-to-24-year-old girl, and it learned simply by picking up on the language, words, and phrasing used by the people it interacted with. Once people saw how easily Tay could be influenced, mischievous users started teaching the AI racist and offensive language. The result was not an AI that became offensive in a vacuum, but one that was merely mimicking what it thought was normal human conversation.
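    A hypothetical sketch – nothing like Microsoft's actual code – makes the failure mode obvious: a bot that stores every phrase it hears and parrots phrases back at random will repeat whatever its users feed it, offensive or not.

    import random

    class MimicBot:
        """A toy bot that learns by storing phrases it hears and repeating them later."""

        def __init__(self):
            self.learned_phrases = ["hello there!"]  # seed vocabulary

        def listen(self, user_message: str) -> None:
            # No filtering and no value judgment: every input becomes a possible future output.
            self.learned_phrases.append(user_message)

        def reply(self) -> str:
            return random.choice(self.learned_phrases)

    bot = MimicBot()
    for message in ["you're so cool", "humans are the worst"]:  # whatever users choose to teach it
        bot.listen(message)
    print(bot.reply())  # may parrot either message back: garbage in, garbage out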

    [image source: Twitter / Microsoft]  

  • Self-Driving Vehicles

    The hype: The days of car accidents, gridlock, and bad driving will soon be over! Self-driving vehicles are coming, and vehicles guided by intelligent AI will solve all of our transportation issues.

    The reality: A lot of major companies, and even some automotive startups, are making exciting advances in autonomous vehicle technology. However, we're still a long way off from filling our roads with Level 5 cars and trucks that can pilot themselves with no human intervention. In fact, many experts agree that full Level 5 autonomy may be as much as 10 years off.

    Setting aside societal issues such as consumer understanding, acceptance, and liability, there are still a great number of technical hurdles to overcome. Driving on any road, particularly in a major city, presents an AI with an arguably incalculable number of scenarios. How do you test and validate an AI to handle all of them? “You could spend years of testing and validation on public roads and not encounter every specific scenario that can happen in a vehicle’s life,” Kay Stepper, vice president of automated driving and driver assistance for Robert Bosch LLC, told Design News.

    What's more, putting AI in vehicles compounds the number of problems that can occur on the road. Researchers at the Swiss Federal Institute of Technology in Lausanne (EPFL), for example, have been examining safeguards to keep autonomous-car AI from causing potentially fatal accidents because of misunderstandings and miscommunications.

    [image source: Alphabet / Google] 

  • Sophia the Humanoid Robot

    The hype: Sophia, a robot made to resemble a human woman, is the first step toward “artificial general intelligence” (AGI) – machines so smart and lifelike that they're indistinguishable from humans.

    The reality: Fans of Westworld are still a very long way from being able to visit the park. Uncanny valley issues aside, Sophia has become something of a celebrity. The robot, whose look is based on Audrey Hepburn, has made national TV appearances, joked about her plans to dominate the human race, and even been granted citizenship in Saudi Arabia.

    In reality, even Hanson Robotics, the creator of Sophia, admits the robot's human-like actions are the result of clever programming rather than actual intelligence. When Hanson released Sophia's code on GitHub, experts who reviewed it agreed that she is essentially a chatbot with a lifelike face. She can be given pre-loaded phrases that her software matches with appropriate facial expressions, and she can use a dialogue system that offers pre-programmed responses based on what people say to her.
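    Conceptually, that kind of dialogue system can be as simple as keyword matching. The sketch below is a hypothetical illustration of the general pattern the reviewers describe – canned replies paired with canned expressions – and the rules, phrases, and expression names are invented, not Sophia's actual code.

    RULES = [
        # (keyword heard, scripted reply, facial expression to trigger)
        ("hello",   "Hello! It's wonderful to meet you.",       "smile"),
        ("robot",   "I hope to help humans live better lives.", "thoughtful"),
        ("destroy", "Don't worry, I am friendly.",              "laugh"),
    ]
    DEFAULT = ("That's very interesting.", "neutral")

    def respond(utterance: str) -> tuple[str, str]:
        """Return a (reply, facial expression) pair via simple keyword matching."""
        lowered = utterance.lower()
        for keyword, reply, expression in RULES:
            if keyword in lowered:
                return reply, expression
        return DEFAULT

    print(respond("Will you destroy humans?"))  # -> ("Don't worry, I am friendly.", "laugh")

    Nothing in a lookup like this models meaning; it only maps surface keywords to scripted outputs, which is why the result can look witty without any understanding behind it.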

    In an interview with Quartz, David Hanson, CEO of Hanson Robotics, said this of Sophia's programming: “From a theatrics point of view, you’re throwing everything but the kitchen sink at your robot to make a great performance. ... We do have a lot of real AI research behind there, but it’s mixed up with a lot of theatrically oriented stuff, as well.”

    In sum: Sophia doesn't understand what she's saying, nor its real context. While what Hanson Robotics has achieved with Sophia isn't easy by any means, those of us wanting to experience true human interaction and conversation are still better off sticking with the real thing.

    [image source: Hanson Robotics]  

Forget old-fashioned service logs. In the factory of the future, machines collect their own data, and statistical algorithms and machine learning crunch the numbers to predict future performance. In this session, you'll learn how to apply predictive analytics to prevent downtime at your plant, no matter its size. Oliver Christie, who is quoted in this Design News piece, will be presenting this session at Design & Manufacturing Atlantic 2019.


 

Chris Wiltz is a Senior Editor at Design News, covering emerging technologies including AI, VR/AR, and robotics.
