Associated Press
This image, made from a fake video featuring former President Barack Obama, shows elements of facial mapping used in new technology that lets anyone make videos of real people appearing to say things they've never said. There is rising concern that U.S. adversaries will use such technology to make authentic-looking videos to influence political campaigns or jeopardize national security.

SALT LAKE CITY — Rep. Chris Stewart isn't sure there's a way to control the proliferation of digital technology that makes it easier to create synthetic video, images, audio or text known as "deepfakes."

The Utah Republican said that some of the suggestions industry experts offered during a congressional hearing Thursday might be helpful in an ideal world, but in the real world they would be nearly impossible to implement.

Policies could be adopted to control government and, to some extent, legitimate businesses, he said.

"But we can't control everyone. This is going to be so pervasive and so available that virtually anyone could create this. It’s easy to control the U.S. government and say you can’t use it, you can’t create it for political manipulations or whatever it might be," Stewart said.

"But you can’t control the other 6 billion people on the earth."

A House Intelligence Committee hearing in Washington examined the dangers that deepfakes — including the use of emerging artificial intelligence — pose to national security, upcoming elections, public trust and journalism. Experts also proposed recommendations on what Congress could do to combat digital misinformation.

Stewart said the sheer volume of deepfakes makes them difficult to track.

"It's like trying to monitor every bumblebee that’s flying around America," he said.

'This is a race'

David Doermann, director of the Artificial Intelligence Institute at the University at Buffalo, told the committee that combating synthetic and manipulated media is not only a technical challenge but a social one as well.

"There’s no easy solution and it’s likely to get much worse before it gets much better," he said.

People need to use tools and processes to detect fake media rather than relying on government and social media platforms to police content, Doermann said.

"If individuals can perform a sniff test and the media smells of misuse, they should have ways to verify it or prove it or easily report it," he said.

Detection also needs to be on the front end, not just after the images appear, he said. If that doesn't work, there should be warning labels on content that's not real or authentic, whether that's determined by humans, machines or both. Pressure must be put on social media companies to realize that the way their platforms are being misused is unacceptable, Doermann said.

"Let there be no question that this is a race — the better that manipulators get the better detectors need to be," he said.

Software using artificial intelligence, or AI, can now synthesize voices, impersonate people in videos and create a virtual person, said Jack Clark, policy director at San Francisco-based OpenAI.

"I don’t think AI is the cause of this. It think AI is an accelerant to an issue that has been with us for some time. We do need to take steps to deal with this problem because the pace of this is challenging," he told the panel.

Instilling conflict

Clint Watts, a former FBI agent on the Joint Terrorism Task Force, identified Russia and China as using AI to mount disinformation campaigns that instill fear and conflict in Western democracies and distort reality for Americans and their allies.

Over the long term, deepfakes will target U.S. officials, institutions and agencies to subvert democracy and demoralize Americans, he said. In the short term, synthetic media may incite physical mobilizations under false pretenses, initiate public safety crises and spark the outbreak of violence.

J. Scott Applewhite, Associated Press
Rep. Devin Nunes, R-Calif., ranking member of the House Intelligence Committee, center, is joined by, from left, Rep. Chris Stewart, R-Utah, Rep. Brad Wenstrup, R-Ohio, Chairman Adam Schiff, D-Calif., and Rep. Jim Himes, D-Conn., during a hearing on politically motivated fake videos and manipulated media, on Capitol Hill in Washington on Thursday, June 13, 2019.

The U.S. government should maintain intelligence on adversaries capable of launching deepfake content or the proxies they use to spread disinformation, Watts said.

Watts also said Congress should pass laws prohibiting U.S. officials, elected representatives and agencies from creating and distributing false and manipulated content.

Policymakers should work with social media companies to develop standards for content accountability, and with the private sector to implement digital verification signatures that record the date, time and origin of content, he said.
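The witnesses did not specify how such verification signatures would work. As a hedged illustration of the general idea, the Python sketch below hashes a piece of content, bundles the hash with a timestamp and an origin label, and signs the bundle with an Ed25519 key so anyone holding the matching public key can detect tampering. The field names, the choice of Ed25519 and the use of the open-source "cryptography" package are assumptions made for this example, not anything proposed at the hearing.

    # Illustrative provenance-signature sketch (assumed scheme, not a standard).
    import hashlib
    import json
    from datetime import datetime, timezone

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    def make_manifest(content: bytes, origin: str) -> dict:
        # Bundle the content hash with its date, time and origin.
        return {
            "sha256": hashlib.sha256(content).hexdigest(),
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "origin": origin,
        }

    def sign_manifest(key: Ed25519PrivateKey, manifest: dict) -> bytes:
        # Canonical JSON so signer and verifier serialize identically.
        return key.sign(json.dumps(manifest, sort_keys=True).encode())

    def verify_manifest(public_key, manifest: dict, signature: bytes) -> bool:
        try:
            public_key.verify(signature, json.dumps(manifest, sort_keys=True).encode())
            return True
        except InvalidSignature:
            return False

    key = Ed25519PrivateKey.generate()
    manifest = make_manifest(b"...raw video bytes...", origin="example-news-org")
    sig = sign_manifest(key, manifest)
    print(verify_manifest(key.public_key(), manifest, sig))  # True
    manifest["timestamp"] = "1999-01-01T00:00:00+00:00"      # simulated tampering
    print(verify_manifest(key.public_key(), manifest, sig))  # False

In a deployed system the private key would be held by the publisher or capture device and the signed manifest distributed alongside the content; the sketch shows only the signing and checking mechanics.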

A crudely altered video of House Speaker Nancy Pelosi, D-Calif., which was viewed more than 3 million times on social media, gave only a glimpse of what the technology can do. Experts dismissed the clip, which was slowed down to make it appear that Pelosi was slurring her words, as nothing more than a "cheap fake."

The Pelosi video "demonstrates the scale of the challenge we face," said Rep. Adam Schiff, D-Calif., the committee chairman. But he said he fears a more "nightmarish scenario" in which such videos spread disinformation about a political candidate and the public struggles to separate fact from fiction.

Schiff said the technology has "the capacity to disrupt entire campaigns, including that for the presidency."

Dueling AIs

Utah Valley University cybersecurity program director Robert Jorgensen explained how the deployment of AI-driven tools turbo-charged the rate at which deepfake videos have improved over the last few years.

Jorgensen said that just a short time ago, computer-generated or manipulated images were relatively easy to identify by "quirks" apparent to even casual viewers. What has since evolved is a technique that pits AI-based tools against each other in a game of one-upmanship, yielding methods for creating wholly fabricated videos that are extremely difficult to detect.

"Generative adversarial networks are a kind of machine learning system where you basically have two AIs that are essentially fighting each other," Jorgensen said. "One AI tries to create a deepfake or something of that nature and the other tries to detect it. One machine is learning to make them better and another is learning to detect them better."

Jorgensen said the software tools to create deepfakes are also widely available, citing a project two UVU cybersecurity graduate students completed last year. Although neither student had any expertise in videography or AI, Jorgensen said they were able to turn research, experimentation and software they found on the internet into some very convincing deepfake videos of themselves.

Jorgensen noted that while deepfake videos are having a moment right now, thanks to the recent Pelosi video and a doctored clip casting Facebook founder Mark Zuckerberg as a super-villain, the ability to manipulate images is not new.

"This kind of manipulation and propagation isn’t new, it's just being taken to the next level," Jorgensen said. "We’ve doubted the veracity of photos for a long time, and if we see something unlikely in a photograph, our response is, 'It must be Photoshopped.'

"Until fairly recently, manipulated videos looked manipulated … but it’s gotten to the point were it's very convincing."

Jorgensen, who spent years working on cybersecurity issues in the private sector before joining UVU, said the efforts to develop deepfake detection software are active, but lagging behind advances in producing the videos.

He noted the U.S. military's research division, the Defense Advanced Research Projects Agency, or DARPA, is spending tens of millions of dollars through its Media Forensics program, which aims to keep pace with deepfake technology and other synthesized content creation.

'Plays to our fears'

University of Utah S.J. Quinney College of Law professor Amos Guiora explored potential impacts of digital security issues in his book "Cybersecurity: Geopolitics, Law and Policy." He noted that behavior patterns typical of internet users, coupled with advanced techniques for producing fake or manipulated videos, have laid the groundwork for wide dissemination of deepfake content.

"What’s so troubling is, if you’re not a keen and careful observer, these things are so realistic, you can get sucked into it," Guiora said. "For most of us, our attention span when we’re surfing is quick, it’s brief. You see something that’s pretty cool or interesting, you send it on to your circle.

"The dissemination rate, the speed that things go viral is so incredibly fast, by the time people catch on to the darkness, the nefariousness, it’s too late."

Guiora said he believes there are real dangers associated with the production and dissemination of deepfake videos that extend beyond attempts to malign someone's character based on political ideologies.


"What this doctoring does is play to our worst instincts, plays to our fears, plays to our pre-existing biases and prejudices," Guiora said. "The distance from that to … physical harm really isn't very far."

Guiora, who teaches criminal law and national security law among other topics, believes the spread of deepfake videos via social media is further evidence that the time has come to consider how to regulate those arenas, even while acknowledging the challenges of such an effort, including balancing First Amendment protections.

"It’s a conversation that must be had," Guiora said.