Deepfakes: The Looming Threat Of 2020

Deepfakes have been lurking on the internet for years now. But in 2020 the AI technology will become a powerful weapon for misinformation, fraud, and other crimes.

Deepfakes Vs. 2020

Some of the first research to really sound the alarm about the political dangers of deepfakes came out of the University of Washington in 2017. Using video footage of then-President Barack Obama, along with audio from one of his speeches, the researchers were able to create a realistic, synthesized video of Obama giving that speech.

“Trained on many hours of his weekly address footage, a recurrent neural network learns the mapping from raw audio features to mouth shapes,” the researchers wrote. “Given the mouth shape at each time instant, we synthesize high quality mouth texture, and composite it with proper 3D pose matching to change what [Obama] appears to be saying in a target video to match the input audio track. Our approach produces photorealistic results.”
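The first stage the researchers describe, mapping raw audio features to mouth shapes with a recurrent network, can be illustrated with a heavily simplified sketch. This is a toy with fixed placeholder weights, not the actual trained UW model: each audio frame updates a recurrent hidden state, which is read out as a pair of hypothetical mouth-shape parameters.

```python
import math

def rnn_mouth_shapes(audio_frames, hidden_size=4, out_size=2):
    """Toy recurrent mapping from per-frame audio features to
    mouth-shape parameters. Weights are illustrative placeholders,
    not values learned from hours of address footage."""
    w_in = 0.5   # input -> hidden contribution
    w_rec = 0.3  # hidden -> hidden (recurrent) contribution
    w_out = 0.8  # hidden -> output contribution
    hidden = [0.0] * hidden_size
    shapes = []
    for frame in audio_frames:            # frame: list of audio features
        feat = sum(frame) / len(frame)    # crude pooling of the features
        hidden = [math.tanh(w_in * feat + w_rec * h) for h in hidden]
        pooled = sum(hidden) / hidden_size
        # out_size mouth-shape parameters per frame (e.g., openness, width)
        shapes.append([math.tanh(w_out * pooled)] * out_size)
    return shapes

frames = [[0.1, 0.4], [0.9, 0.7], [0.2, 0.2]]
params = rnn_mouth_shapes(frames)  # one mouth-shape vector per audio frame
```

In the real pipeline these predicted mouth shapes then drive texture synthesis and 3D pose compositing; the sketch only covers the audio-to-shape step.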

For anyone paying attention, the implications were clear. The University of Washington researchers had used audio from actual speeches. But what would happen if someone used doctored or fake audio? In 2018, filmmaker and comedian Jordan Peele (who does a great impression of Obama's voice) demonstrated just that in a PSA video created in collaboration with Buzzfeed to warn against the dangers of deepfakes:

“Deepfakes are going to be a greater challenge in 2020, and it will have a menacing effect on everything -- be it entertainment, or politics, or business,” Sandeep Dutta, chief practice officer, APAC, at Fractal Analytics, an AI analytics company, told Design News. “When we wake up in the morning, the first thing we do is scan through digital media to understand what is going around the world through apps like Facebook, etc. Over time, this has become habit forming and we seem to trust what we see and let it lead our thoughts and beliefs without knowing if it is real or malicious.”

Ben Lamm, CEO of Hypergiant Industries, a provider of AI products and consulting services, told Design News that the election year makes the impact of deepfakes particularly concerning. “In a time when facts and truth are at war with propaganda, groupthink, and disinformation, deepfake technologies pose a huge threat to the legitimacy of content,” he said. “At any given moment, over 30 countries are engaging in cyberwarfare. With this technology on-hand, false accusations and misrepresentations are provided with added ammunition beyond simply 'he said, she said.' They’re given a face and a personality.”

Lamm said he has no doubt that in 2020, deepfake videos will pose a very real threat and be used to attempt to divide the American people and influence voting behavior.

“It’s imperative that the government understands deepfake technology because of the outside impact on our political system, but also because it fundamentally impacts the fabric of our society,” Lamm said. “Concurrently, there is a strong need for more technology policy that regulates against the misuse of and creation of technologies that harm our society.”

While legislation can play a supporting role in combating deepfakes, for Dutta the solution will lie in more self-governance by the tech and media industry.

“Legislation will help. But, in the tech category, regulations lag behind,” Dutta said. “That is why we should not depend upon it. I don’t think that legislation, with the sophistication needed, will come in to tackle the problem. Regulators are not known to be tech-savvy or to work quickly. Even where there is regulation, enforcement will be a challenge.”

Fighting Fire With Fire

Companies are already developing algorithms capable of detecting deepfakes and other malicious content. Rochester, NY-based Blackbird.AI has made it its mission to use “machine learning and interdisciplinary human intelligence to combat disinformation.” Blackbird.AI’s technology detects misinformation in real time and at scale.

Brice Chambraud, managing director of Blackbird.AI in APAC, explained to Design News:

“'Deepfake' is starting to be used as a blanket term for altered video, images, and text. We built our architecture to first check for content that is synthetically amplified, for example, whether it is spread as part of a coordinated misinformation campaign, along with harmful content, then analyze that content for deepfakes.

“As deepfake detection is computationally heavy and requires significant resources, the benefit of our approach is a significant reduction in scale and the ability to capture all types of harmful content, including deepfake videos, text-based propaganda, and other doctored content.”

For Chambraud, deepfakes will be less of a concern in 2020 than disinformation from text and memes. He stresses the importance of tools to fight these things as well. “There are many other types of manipulated content already causing harm, more so text-based, AI-generated misinformation flooding the internet at scale, causing mass confusion,” he said. “Deepfake technology has yet to reach a stage used by threat actors at scale, but it remains one of the fastest progressing technologies and a major threat in the near future.”

“The same deep learning technology that is responsible for creating deepfakes can be used to cure it,” Dutta said. “Algorithms need to be robust and better designed to detect a real video versus a fake video or image at the source. For example, before somebody uploads a video on YouTube or Facebook, there needs to be a check to identify whether the video is real or fake. Deep learning tools can be made more sophisticated to identify inconsistencies at the pixel level with high precision. Research shows that three of the most common deepfake techniques are known as 'lip sync,' 'face swap,' and 'puppet master.' These techniques, however, can create a disconnect that may be uncovered by a clever algorithm as a way to combat deepfakes.”
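Dutta's idea of catching pixel-level inconsistencies can be sketched in miniature. The example below is a hypothetical toy, not Fractal Analytics' method or a production detector: it flags frames whose grayscale pixel values jump abruptly from the previous frame, the kind of temporal discontinuity a face-swap splice can leave behind.

```python
def flag_inconsistent_frames(frames, threshold=0.25):
    """Flag frame indices whose pixels change abruptly from the
    previous frame. `frames` is a list of equal-length rows of
    grayscale pixel values in the range 0.0-1.0. The threshold is
    an arbitrary illustrative cutoff."""
    flagged = []
    for i in range(1, len(frames)):
        diff = sum(abs(a - b) for a, b in zip(frames[i], frames[i - 1]))
        mean_diff = diff / len(frames[i])
        if mean_diff > threshold:
            flagged.append(i)
    return flagged

video = [
    [0.50, 0.52, 0.51],  # natural, slowly varying frames
    [0.51, 0.53, 0.50],
    [0.90, 0.10, 0.95],  # abrupt pixel jump: possible splice artifact
    [0.52, 0.51, 0.52],
]
suspect = flag_inconsistent_frames(video)  # -> [2, 3]
```

A real detector would of course use trained deep networks over full-resolution video rather than a hand-set threshold, but the principle of hunting for low-level discontinuities is the same.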

That's not to say that developing anti-deepfake technologies is a quick and easy fix. “The problem here is one of iterations,” Lamm said. “Yes, GANs [generative adversarial networks] can help to combat deepfake tech. However, over time both systems will become better and better. The end result is a race to fight a technology in iterations. It’s not a magic bullet. It’s a Band-Aid.”
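Lamm's "race in iterations" can be caricatured in a few lines. This is a toy simulation with made-up numbers, not a real GAN: each round, whichever side is losing improves, so detection success keeps flipping rather than converging on a permanent win.

```python
def arms_race(rounds=5):
    """Toy model of the generator-vs-detector arms race. Numbers are
    purely illustrative: `artifact_level` is how detectable the fakes
    are, `detector_floor` is the smallest artifact the detector can
    catch. The losing side halves its weakness each round."""
    artifact_level = 1.0
    detector_floor = 0.9
    history = []
    for _ in range(rounds):
        caught = artifact_level >= detector_floor
        history.append(caught)
        if caught:
            artifact_level *= 0.5  # generator learns to hide artifacts
        else:
            detector_floor *= 0.5  # detector retrains on the new fakes
    return history

history = arms_race()  # detection flips between success and failure
```

The alternating outcome is the point: each detection advance is answered by a generation advance, which is why Lamm calls detectors a Band-Aid rather than a magic bullet.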

The Public (Deep) Face

Then what about the general public? Surely something that can create more funny videos of Nic Cage can't be all bad.

“Deepfakes can possibly add value in entertainment, helping creators enhance visual storytelling and provide immersive gaming experiences. However, this needs to be stewarded responsibly,” Blackbird.AI's Chambraud said.

“Of course there are potential upsides to deepfake tech, just as there is with any technology. No technology is inherently bad, but any technology can be used to do bad if applied in unethical ways. That said, we have to weigh the positives against the negatives and come up with unique, creative solutions from both technology and policy perspectives to address the negatives without hindering the positives,” Hypergiant's Lamm said.

“Generally, I do believe this is one tech development where the negatives outweigh the positives,” Fractal Analytics' Dutta said. “However, deepfakes can potentially offer an upside in personalization. For example, deepfake technology can be used for translating many different languages – allowing people to connect more across languages. On the business front, company CEOs and celebrity endorsers could speak directly to individuals with customized messages, even referring to a consumer by name. The third aspect is possibly recreating famous historical figures who have inspired us or left positive impacts on society. There may also be applications in education and running social programs.”

But even with those potential upsides, many experts agree that deepfakes could have much more of a downside than any lasting beneficial impact.

In an October 2019 blog post, Jeff Pollard, VP and principal analyst at Forrester, predicted that costs associated with deepfake scams – ranging from fraud to ransom attacks – will exceed $250 million in 2020. In August 2019, The Wall Street Journal reported on an incident in which cybercriminals tricked the CEO of a UK-based energy company into wiring them $243,000 by using deepfake software to mimic the voice of his boss over the phone.

“One of the reasons deepfake works so well is because a large portion of people are willing to believe them and spread them,” Dutta said. “Our society needs to be educated on what deepfakes can do as a technology and what one can do at their level to mitigate it.” He outlined some tell-tale signs of a deepfake video such as face discoloration, lighting changes, poor audio synchronization, and blurriness that people should watch out for.

“People need to understand this and learn to not trust everything that they see. They should be made aware of some of the basic tools that can be used for verification – like a reverse image search by uploading a video into the Google search bar.”

“The moment deepfake technologies can run efficiently and inexpensively in combination with text, audio, and images, extremely compelling websites can be developed and adversaries can create a huge amount of deceptive content and confusion,” Chambraud said. “With key insights on content from tools such as credibility labels, users can better understand if specific content is harmful and make informed decisions on the information they consume and share within their network. Over time we see this working like spam detection, whereby misinformation detection technologies will improve and people will be more comfortable accepting suggestions to ignore harmful manipulated content.”

From Lamm's perspective, however, the challenge is getting awareness to everyone – not just those with a vested interest in verifying that the information they consume online is real and factual. “Understanding what a deepfake is and how to spot one would be a great way to minimize the influence that they can have on people who specifically are interested in finding factual content, and have an active concern for deepfake influence themselves,” Lamm said. “Those who are not as proactive with their research into this and other technologies, as well as people who remain loyal to opinion over fact may not be as easy to educate. These are the people that would need the education the most, but you can’t teach someone who doesn’t want to be taught.”

The Reaction From Big Tech

For their part, the big tech companies have stepped up their fight against deepfakes. Google has made its own library of synthetic videos freely available for researchers and companies to use in building deepfake-identification tools.

Facebook and Microsoft, meanwhile, have partnered with universities to conduct the Deepfake Detection Challenge (DFDC) with the aim of developing deepfake detection solutions to provide to the media industry. The DFDC gives participants a dataset consisting of real videos made with actors and doctored videos generated by AI. Facebook itself has pledged $10 million to fund the challenge.

Adobe has been working with researchers at UC Berkeley to develop AI that can distinguish real facial photos from fakes. As the company behind Photoshop, Adobe obviously has a vested interest in ensuring its software isn't being used with malicious intent.

“Technology companies need to start from a position of understanding intent: What are we developing, why and how might people use this to harm society?” Lamm said. “We are no longer willing to accept a 'do no evil' policy by technology companies – instead we must hold people to a standard to do good.”

Chambraud encourages a focus away from hardware and toward the social platforms and channels that could be distributing deepfakes. “At the very least, they need to be held accountable to measure and label the creation of disinformation on their platforms,” he said. “As a start, platforms that are currently utilizing video altering technologies should consider providing metadata and watermark content to make it easier to authenticate.”

“Big tech has to own the responsibility to counter [deepfakes] with better AI tech,” Dutta said. The floodgates are already open and as computer hardware continues to grow more powerful it's only going to enable more and more people to create deepfakes. Five to 10 years ago it would have been a pain to take video from your smartphone, put an animated graphic over it, and upload it online. Now it can all be done in seconds within apps themselves.

The same thing will happen for deepfakes. And when it does it's going to become very hard for any of us to believe anything we see or hear online. Snapchat is currently rolling out a new feature, Cameo, that uses deepfake-like technology to put a user's face into videos.

“The technology is already advancing by leaps and bounds and will further accelerate [in 2020], making the deepfakes even more sophisticated and difficult to catch,” Dutta said. “The discerning may be cautious, but the problem will be with the majority, where deepfakes will play to echo-chambers of their biases and deepen them. There is no turning back this tide.”

Chris Wiltz is a Senior Editor at Design News covering emerging technologies including AI, VR/AR, blockchain, and robotics.
