Demystifying the Deep Fake


This past week, Fordham Law alumna Danielle Citron ’94 was one of 26 recipients of a “genius grant,” the prestigious fellowship awarded annually by the MacArthur Foundation. Citron, currently a law professor at Boston University, was a visiting professor at Fordham Law in fall 2018. The MacArthur Foundation recognized her trailblazing work on cyber harassment and online abuse. The fellowship’s generous awards go each year to a select handful of individuals in recognition of their outstanding accomplishments, creativity, and potential to continue valuable work.

Only a couple of days before the MacArthur recognition, on September 23, Professor Citron returned to Fordham with Bobby Chesney, law professor and associate dean for academic affairs at the University of Texas, to discuss their groundbreaking joint work on “deep fake” videos as part of the Digital Rights and Civil Liberties Colloquium hosted by Fordham Law School Professor Catherine Powell. Students can take the colloquium for academic credit and participate in weekly conversations with leading technology law experts such as Citron and Chesney. A similar seminar Citron taught during her visit last fall inspired Powell to raise funds from Microsoft to cover travel for guest speakers, who share their work with students in this fall’s colloquium, which focuses more broadly on human rights, Powell’s area of expertise.

This year’s colloquium explores both the freedom-enhancing and freedom-limiting dimensions of the digital age, focusing on how emerging technologies are changing our relationships with the state, with each other, and even with ourselves (for example, in how we express and perform online). In addition to weekly sessions with top scholars and practitioners, students facilitate discussions on topics “torn from the headlines” concerning privacy, equality, free speech, and other human rights implicated by the digital space.

Citron and Chesney spoke with students in the colloquium this past week about their forthcoming paper, “Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security.” The co-authors raised an alarm about how “deep fake” videos not only pose threats to democracy and national security but can also be used to harass and exploit celebrities, leaders in business and government, and just about anyone whose likeness can be appropriated and used without consent.

In April of last year, a video surfaced of former President Barack Obama cautioning the public about the prevalence of technology capable of creating videos in which anyone can be shown saying or doing things that person would never actually say or do in a public forum. Thirty-six seconds in, the screen splits, and filmmaker Jordan Peele appears, speaking, gesturing, and even blinking in time with Obama. The video, a collaboration between Peele, delivering his uncanny impression of Obama’s voice, and Buzzfeed, is known as a “deep fake”: a digitally manipulated video that uses a machine learning technique called a generative adversarial network (GAN) to superimpose one person’s face and voice onto existing footage of another. Peele’s public service announcement demonstrates just how advanced GAN technology has become (and it is safe to assume it has grown even more advanced in the year and a half since the video aired), and it warns viewers that they can no longer blindly trust everything they see. Many deep fakes are harmless, intended as humor or satire, like Peele and Buzzfeed’s Obama video, but the technology has limitless applications and can easily be used to cause great harm, to individuals and to the general public.
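For readers curious about the underlying mechanics: a GAN pits two neural networks against each other. A generator produces fakes, and a discriminator tries to distinguish them from real data; each round of the contest pushes the generator’s output closer to the real thing. The sketch below is purely illustrative, not the authors’ work or an actual deep-fake system. It trains a toy PyTorch generator to mimic a simple one-dimensional distribution; real face-swapping systems apply the same adversarial loop to images and video at vastly greater scale, and every name and parameter here is invented for the example.

```python
# Minimal GAN sketch (illustrative only): a generator learns to mimic
# a simple 1-D Gaussian distribution. Deep-fake systems apply the same
# adversarial idea to images and video at far larger scale.
import torch
import torch.nn as nn

torch.manual_seed(0)

# "Real" data: samples from N(4, 1.25), standing in for genuine footage.
def real_batch(n=128):
    return torch.randn(n, 1) * 1.25 + 4.0

generator = nn.Sequential(      # maps random noise -> fake samples
    nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
discriminator = nn.Sequential(  # scores samples: real (1) vs. fake (0)
    nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(5000):
    # 1) Train the discriminator to tell real samples from fakes.
    real = real_batch()
    fake = generator(torch.randn(128, 8)).detach()
    d_loss = (loss_fn(discriminator(real), torch.ones(128, 1)) +
              loss_fn(discriminator(fake), torch.zeros(128, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to fool the discriminator.
    fake = generator(torch.randn(128, 8))
    g_loss = loss_fn(discriminator(fake), torch.ones(128, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

with torch.no_grad():
    samples = generator(torch.randn(1000, 8))
print(f"fake mean={samples.mean().item():.2f} "
      f"std={samples.std().item():.2f} (target: 4.00, 1.25)")
```

After enough rounds, the generator’s output becomes statistically difficult for the discriminator to reject, which is precisely what makes GAN-generated footage so hard to spot by eye.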

During their visit to Fordham this past week, Citron and Chesney spoke to the assembled students and faculty about their research on the possible risks of GAN technology and how to mitigate the potential harm posed by deep fakes. 

“The GAN technique, because it does not depend on scarce technical resources that are hard to reproduce, is already spilling out and diffusing in a way that will put it into all of our hands,” warned Chesney. The technique is based on knowledge that can be gleaned from YouTube tutorials, giving anyone with an internet connection and a little bit of time the ability to generate relatively convincing footage of nearly anyone, doing anything. Currently, most deep fakes are not a perfect simulacrum of real footage (even Peele’s Obama video feels slightly off), but as the technology to create these manipulated videos improves, fakes will become more and more difficult to spot. Chesney and Citron also noted that low-quality fakes may not necessarily discredit themselves entirely, as “audiences inclined to believe their message might disregard the red flags.”

Citron also pointed out that the victims of deep fakes are “disproportionately women and people from marginalized communities.”

In 2017, an anonymous Reddit user began posting faked pornographic videos featuring various female celebrities. Some dismissed such videos as juvenile titillation, since they were quickly debunked and did little to affect the reputations of the actresses whose likenesses were used. These simulated pornographic videos are far from harmless, however. The following year, Indian journalist Rana Ayyub was victimized by a deep-fake porn video, likely in retaliation for taking a stand against a gang that raped and murdered an 8-year-old girl in Kathua, Jammu and Kashmir. The video went viral, and Ayyub experienced an onslaught of harassment across social media platforms. “She withdrew from all of her online activity, which is a hard thing to do when you make a living as a journalist,” Citron recounted. “She was afraid to go outside.” Law enforcement in India did nothing to help her; the UN Human Rights Council issued a statement saying it was worried about her safety.

Individual harm, however, is not the only risk deep fakes pose. A fabricated video or image meant to discredit a political leader or candidate, especially if deployed close to a major election, could easily swing the results and thereby undermine democratic stability.

These stories make it easy to envision the destructive potential of videos depicting events that never took place, but Citron and Chesney caution against underestimating the power of denial that deep fake technology affords. As the videos become more and more lifelike, it becomes much easier for people recorded engaging in incriminating behavior or making inflammatory comments to claim that the footage was fabricated. Crucially, as we become more accustomed to the concept of “fake news” and credibly faked audio and video, we become more inclined to believe such assertions. Chesney and Citron call this risk the “liar’s dividend.”

The anonymity of the deep fake creators poses another problem. As Citron noted in her recent TED Talk on deep fakes, “You can’t leverage law against people you can’t identify and find. And if a perpetrator lives outside the country where a victim lives, you may not be able to insist that the perpetrator come into local courts to face justice.”

Citron and Chesney admitted their work brings to light more problems than solutions. In a world where salacious fabrications have been shown to spread ten times more quickly than actual news, the popularity of deep fakes will likely only rise, and detection technologies and law enforcement are likely to be perpetually one step behind.

“As the [GAN] technology improves, Citron and Chesney show, policymakers and judges will have to think much harder about how to impose obligations on the intermediaries that distribute third-party deep fake videos – never mind the anonymous charlatans and bad actors who create them,” said Professor Olivier Sylvain, professor of law at Fordham and director of the McGannon Communication Research Center. “The consequences are dire for democracies whose consumers and voters depend on the robust circulation of truthful information and ideas.”

Education is key to preventing harm from deep fakes: educating the public on how to check the veracity of the news they encounter online and educating law enforcement on the damage deep fakes can cause. So is technology itself, which must continually adapt to identify faked footage.
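As a concrete, if simplified, illustration of what such detection technology can look like: a common baseline in deep-fake detection research is to fine-tune a pretrained image classifier to label individual video frames as real or fake. The sketch below, using PyTorch and torchvision, is a hypothetical example rather than any deployed detector; the “frames” directory of labeled real and fake frames is invented for the example.

```python
# Illustrative deep-fake detection baseline (not a production system):
# fine-tune a pretrained CNN to classify video frames as real or fake.
# The "frames/" layout (frames/real, frames/fake) is hypothetical.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    # Normalize with the ImageNet statistics the pretrained weights expect.
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical dataset of extracted frames, labeled by subdirectory.
data = datasets.ImageFolder("frames", transform=transform)
loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: real, fake

loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(3):
    for frames, labels in loader:
        opt.zero_grad()
        loss = loss_fn(model(frames), labels)
        loss.backward()
        opt.step()
```

In practice this is an arms race: each improvement in generation erodes the telltale artifacts such classifiers rely on, which is why the authors expect detection to remain perpetually a step behind.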

Said Linda Sugin, Fordham Law School’s associate dean for academic affairs, “Danielle’s work has been focusing on this important issue since before anyone really understood the problem. She has been tireless in advocating for better law to match more advanced technologies.” 
