Why Is the US Losing the AI Race?

Why is the US, once an assumed leader in artificial intelligence, losing its grip on the technology? And how can it regain its position?
(Image source: Tim Gouw on Unsplash)

Two years ago, the emergence of autonomous driving technologies led Intel CEO Brian Krzanich to declare that data is the new oil. If that's true, then artificial intelligence is the new oil refinery.

AI is rapidly becoming a globally valued commodity. And nations that lead in AI will likely be the ones that guide the global economy in the near future.

That's according to a recent report released by the U.S. House of Representatives Oversight and Government Reform Committee's Subcommittee on Information Technology. The report—the result of a series of subcommittee hearings on AI with members of academia, the technology industry, and government—outlines a need for the US to secure its position in AI research and make further investments into the technology.

“As AI technology continues to advance, its progress has the potential to dramatically reshape the nation’s economic growth and welfare. It is critical the federal government build upon, and increase, its capacity to understand, develop, and manage the risks associated with this technology’s increased use,” the report stated.

While the US has traditionally led the world in developing and applying AI technologies, the new report finds it's no longer a given that the nation will remain number one in AI. Witnesses interviewed by the House Subcommittee said that federal funding levels for AI research are not keeping pace with the rest of the industrialized world, with one witness stating: “[W]hile other governments are aggressively raising their research funding, US government research has been relatively flat.”

The Chinese Connection

Perhaps not surprisingly, China is the biggest competitor to the US in the AI space. “Notably, China’s commitment to funding R&D has been growing sharply, up 200 percent from 2000 to 2015,” the report said. “On February 7, 2018, the National Science Board (Board) and the National Science Foundation’s (NSF) Director, who jointly head NSF, said in a statement that if current trends continue, the Board expects 'China to pass the United States in R&D investments' by the end of 2018.”

“While nations such as Russia and China are more restrictive in many ways, the clear state-sponsored mandates to be the leader in AI technologies unleashes efficiency and urgency and paves the way for speedy execution in AI research and development,” Aman Khanna, VP of products at ThumbSignIn, told Design News. “Another advantage that China enjoys is the availability of massive sets of data available to the research organizations simply due to the higher population and the (several times higher) number of Internet-connected devices (1.02 billion in China versus 0.319 billion in the US).”

ThumbSignIn develops biometrics-based security solutions that use features such as thumb print and facial recognition to authenticate users. Khanna said abundant data is the staple resource for AI models to become more accurate, and this is where China has a distinct advantage over the US. “The fact that individual privacy laws may be less strict in these countries, coupled with the openness of research organizations to collaboratively share data, actually works out to the advantage of organizations developing AI technologies.”

The Cybersecurity Concern

AI's potential threat to national security was cited as a key reason to ramp up R&D efforts. While there has yet to be a major hack or data breach involving AI, many security experts believe it is only a matter of time. Cybersecurity companies are already leveraging AI to assist in tasks such as monitoring network traffic for suspicious activity and even for simulating cyberattacks on systems. It would be foolish to assume that malicious parties aren't looking to take advantage of AI for their own gain as well.

“The US should be very concerned about being a frontrunner in AI as far as cybersecurity is concerned,” Khanna said. “It is clear that cybersecurity will become an increasingly important frontier in national security, and AI is the technology with the potential to give unsurpassed power to the leader in this field.”

Khanna said using smart AI algorithms capable of learning from massive amounts of data will offer countries a distinct strategic advantage when it comes to both launching and defending against cyberattacks. “Several countries in the world have clearly declared the intent to be the AI superpowers and are working aggressively toward this goal through rapid technological development,” he said.

While AI-powered cyberattacks are a relatively new development, Khanna noted that the lowest hanging fruit for attackers would be to dramatically increase the effectiveness of phishing attacks (impersonating a trusted entity to steal valuable information) with highly accurate mass customization generated using AI.

“Traditionally, the effectiveness of a phishing attack has been dependent on the skill and experience of the hacker who has designed the attack, which has put a natural ceiling on the number and sophistication of phishing attacks,” Khanna explained. “However, with AI algorithms working overtime to analyze vast amounts of data stolen from social networks, hackers are able to generate phishing attacks that are far more effective than the best hackers in the world today—giving this power abundantly to the multitudes of malicious actors who did not have the resources to hire these experts.”

AI could also be used in social engineering tactics to steal personal information. A hacker could create fake social media accounts that pose as humans and reach out to real people in an attempt to steal their personal information. Algorithms could also generate customized, highly targeted social media posts that spread false and misleading information. The more the algorithms interact with real people, the better they could become at tailoring content to draw shares and reactions across platforms.

It's a scenario that sounds more and more likely, with rampant concerns over Russian hackers manipulating social media to spread false information and “fake news” during the 2016 Presidential election. And the future possibilities become more disconcerting when considering technologies such as Google's Duplex AI assistant, which sounds human enough to make real phone calls.

Is AI Even a Good Thing?

Amid all the calls for further investment in AI, however, is a growing concern over the ethics and impact of admitting more algorithms into our lives and workplaces. On the question of jobs alone, the evidence points in both directions. A 2017 study by consulting firm Capgemini found that implementing AI into workflows had actually created jobs at 83 percent of the companies it surveyed. Meanwhile, a 2013 study from Oxford University, which estimated the likelihood of various occupations being computerized in the future, found that 47 percent of total US employment is at risk.

“The common thread from all of these studies is that our economic policies must take into account the uncertain future of work faced by Americans as AI takes hold, and the need for increased investments in education and worker retraining,” the House Subcommittee report read. The report also called for federal, state, and local agencies to engage with educators, employers, employees, unions, and other stakeholders on “the development of effective strategies for improving the education, training, and re-skilling of American workers to be more competitive in an AI-driven economy.” The House Subcommittee believes there is a need for the federal government to “lead by example by investing more in education and training programs that would allow for its current and future workforce to gain the necessary AI skills.”

Then, there is the question of bias. One of the most significant challenges facing AI experts today is how to create algorithms that do not succumb to the same biases and prejudices as their human counterparts or the humans that create them. After all, an algorithm can only learn based on what data it receives. How can it tell, then, if data is skewed in favor of one group or another? There have been several cases of this over the years, but the House Subcommittee report cited a particular investigation done by ProPublica.

In 2016, ProPublica conducted an investigation into American judges' use of computerized “risk prediction” tools in criminal sentencing and bail hearings. The idea is to use algorithms to estimate the risk that a defendant will commit another crime in the future, automating a form of risk assessment already common in the criminal justice system.

The problem was that the prediction system was shown to be “racially biased and inaccurate,” according to ProPublica. African-Americans were almost twice as likely as whites to be labeled “higher risk.” In one case outlined in ProPublica's report, an African-American woman charged with a burglary of $80 worth of goods was rated a higher risk than a white man who had shoplifted items of comparable value, despite his longer criminal history:

Prater [the white male] was the more seasoned criminal. He had already been convicted of armed robbery and attempted armed robbery, for which he served five years in prison, in addition to another armed robbery charge. Borden [the African-American female] had a record, too, but it was for misdemeanors committed when she was a juvenile.

“Federal, state, and local agencies that use AI-type systems to make consequential decisions about people should ensure the algorithms supporting these systems are accountable and inspectable,” the House subcommittee report read. “In addition, federal, state, and local governments are encouraged to more actively engage with academic institutions, non-profit organizations, and the private sector in discussions on how to identify bias in the use of AI systems, how best to eliminate bias through technology, and how to account for bias.”
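One common way auditors probe for the kind of disparity ProPublica reported is to compare false positive rates across groups: among people who did not reoffend, how many were nonetheless flagged as high risk? The sketch below is a minimal, hypothetical illustration of that check; the data and group labels are invented for demonstration and are not ProPublica's dataset or code.

```python
# Hypothetical sketch of a group-wise false positive rate audit.
# All records below are invented; "A" and "B" are placeholder groups.

def false_positive_rate(records, group):
    """Among defendants in `group` who did NOT reoffend, return the
    fraction that the tool nonetheless labeled high risk."""
    negatives = [r for r in records if r["group"] == group and not r["reoffended"]]
    if not negatives:
        return 0.0
    flagged = sum(1 for r in negatives if r["high_risk"])
    return flagged / len(negatives)

# Toy data mirroring the shape of the disparity described in the article:
records = [
    {"group": "A", "high_risk": True,  "reoffended": False},
    {"group": "A", "high_risk": True,  "reoffended": False},
    {"group": "A", "high_risk": False, "reoffended": False},
    {"group": "A", "high_risk": True,  "reoffended": True},
    {"group": "B", "high_risk": True,  "reoffended": False},
    {"group": "B", "high_risk": False, "reoffended": False},
    {"group": "B", "high_risk": False, "reoffended": False},
    {"group": "B", "high_risk": False, "reoffended": True},
]

for g in ("A", "B"):
    print(f"group {g}: FPR = {false_positive_rate(records, g):.2f}")
```

In this toy data, group A's false positive rate is twice group B's, which is exactly the kind of imbalance an accountability review would surface even when the tool's overall accuracy looks acceptable.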

A Little Isn't Enough

For its part, the US government hasn't rested on its laurels in terms of backing AI research. This year has seen some significant investment in AI—particularly in the defense sector, with attempts to partner with major Silicon Valley companies such as Google, Amazon, and Microsoft to develop AI technologies for national security purposes. The National Defense Authorization Act (NDAA) signed by President Trump for the 2019 fiscal year includes Department of Defense (DoD) funding specifically allocated for new AI projects, as well as for establishing a Joint Artificial Intelligence Center (JAIC) under the DoD to oversee about 600 active AI projects. A National Security Commission on Artificial Intelligence will be given a $10 million budget to examine how AI can be leveraged for national security. Additionally, in May 2018, the White House Office of Science and Technology Policy established a National Science and Technology Council Select Committee on AI, whose purpose is to advise the White House on priority areas of AI research and to establish structures around federal AI R&D.

But is a bigger AI budget sufficient to keep the US at the forefront? ThumbSignIn's Khanna is not convinced. “Increased government funding for AI R&D is necessary, but not sufficient for the US to continue to be the front-runner in AI,” he said. “Maintaining such a leadership position will require a clear state mandate to be a leader and a series of important policy decisions that will enable research organizations to have access to the top talent globally as well as to abundant data within and outside the country. Such policies should be aimed at positioning the US as an attractive destination for the best minds within the country and internationally to work on the development of AI technologies as well as making it easier for research organizations to access (and collaboratively share) the required data without compromising the privacy of individuals.”

Chris Wiltz is a Senior Editor at Design News covering emerging technologies including AI, VR/AR, and robotics.
