The Real Dangers Behind ‘Deepfakes’

[Image: digital reproduction of a human face]

Videos manipulated by AI weaponize disinformation. A UB expert is fighting back with the facts.

On June 13, University at Buffalo artificial intelligence (AI) expert David Doermann testified before Congress on the national security threat posed by deepfakes: manipulated videos and other digital content produced by AI that are so realistic that the deception is practically undetectable to the untrained eye, and on its way to confounding even the trained eye.

AI arms race

It’s not that manipulated or synthetic content is new, wrote Doermann in his statement to the House Permanent Select Committee on Intelligence, invoking Hollywood movies as an obvious example. But it used to be a time-consuming, specialized process. Machine learning has made it widely available—and much, much more sophisticated. Warned Doermann, “We can make people dance in ways they have never danced, and put words into people’s mouths that they have never said.”

And although Russia did not deploy deepfakes during its campaign to influence the 2016 presidential election, the potential for manipulators to use them to spread disinformation and disrupt future elections is enormous.

It is precisely this possibility that worries Doermann, who led a program combating media manipulation technology at the Defense Advanced Research Projects Agency (DARPA) before becoming the inaugural director of UB’s Artificial Intelligence Institute in 2018. “One thing that kept me up at night was the concern that someday our adversaries would be able to synthesize entire events with minimal effort,” he recalled of his time at DARPA. “If the past five years are any indication, that someday is not very far in the future.”

Countering the threat

According to Doermann, a large share of the responsibility for the spread of deepfakes falls on social media platforms, which have been slow to recognize and respond to misinformation shared on their sites. One possible step: flagging suspicious photos or videos with warning labels that encourage users to consider their veracity.

But, he emphasized, individuals must be empowered too. “We need to get tools and processes in the hands of individuals,” he said. “If individuals perform the ‘sniff test’ and the [video] fails, they should have ways to verify or prove it.”

Ultimately, he acknowledged, the fight against deepfakes will continue. “There is no easy solution, and it will likely get much worse before it gets better.”