AI expert David Doermann to testify Thursday before Congress on deepfakes

Deepfakes pose a disinformation threat capable of disrupting the 2020 election, Doermann says

Release Date: June 7, 2019



BUFFALO, N.Y. — University at Buffalo artificial intelligence expert David Doermann will testify before Congress on Thursday concerning national security challenges posed by deepfake videos and other manipulated forms of digital media.

The hearing, before the House Intelligence Committee in Washington, D.C., is among the first to focus on deepfakes, which are manipulated videos and other digital content produced by AI that yield seemingly realistic but ultimately fabricated images and sounds.

This type of misinformation wasn’t an issue during the 2016 election; instead, Russia relied on false social media accounts and other tactics, according to U.S. intelligence reports.

But officials and lawmakers worry that deepfakes and other manipulated videos, such as a recent video altered to make it appear that House Speaker Nancy Pelosi was slurring her words, will be used to influence upcoming elections and spread false information.

“The idea of seeing is believing is, unfortunately, no longer always true. The technology behind these videos is getting so sophisticated, yet simple to use, that it poses an increasingly serious national security threat,” says Doermann, who previously oversaw a Defense Advanced Research Projects Agency (DARPA) effort to combat evolving image and video manipulation technology.

Doermann, director of UB’s Artificial Intelligence Institute, will appear before lawmakers Thursday to discuss how recent advancements in a subset of AI called “deep learning” are making it easier to manipulate video, audio and images in ways that are more convincing than ever before.

He is expected to discuss, among other topics, future advances in deep learning; how deepfakes can be detected; and counterintelligence and national security risks, including efforts to manipulate the American public and democratic processes such as future elections.

Media Contact Information

Cory Nealon
Director of Media Relations
Engineering, Computer Science
Tel: 716-645-4614
cmnealon@buffalo.edu