
Deepfake technology has us asking: Is it real or fake?

Deepfake technology used to alter video and other media is becoming more sophisticated. University of Miami experts weigh in on the threat it poses.
Advancements in facial mapping and artificial intelligence are making it easy for just about anyone to produce videos of real people appearing to say something they never said.

The video footage certainly looked authentic. Recorded at a Center for American Progress event last May, it showed House Speaker Nancy Pelosi slurring her words while delivering remarks on how President Trump’s refusal to cooperate with congressional investigations was equivalent to a “cover-up.” 

But it turns out the video was distorted—altered to make Pelosi sound as if she were intoxicated. Equally bad, the video was circulated widely across social media, picking up more than 2.5 million views on Facebook. 

Once the stuff of Hollywood special effects studios, doctored videos, known as “deepfakes,” are now on the rise, as advancements in facial mapping and artificial intelligence are making it easy for just about anyone to produce videos of real people appearing to say something they never said. 

When such content is spread via social media, the damage can be irreparable. 

“Anyone who is targeted by this abuse suffers harm, harm that is practically impossible to undo,” said Mary Anne Franks, a professor of law at the University of Miami School of Law and the legislative and tech policy director of the Cyber Civil Rights Initiative. “Even when manipulated videos are criticized or debunked, there is no real way to undo the initial impact of being portrayed as saying or doing something you have never said or done. The more salacious the content—for instance, pornographic depictions of women—the more harm is done.” 

While public figures like Pelosi are especially vulnerable because of their visibility and status, average citizens are also in peril because they have fewer means of fighting back, said Franks, who drafted the first model criminal statute on nonconsensual pornography, or revenge porn, which has been used as the template for multiple state laws. 

“Every target of this abuse will have to struggle with the so-called ‘Streisand effect’—that is, the fact that seeking recourse or correction for the harm that is done will inevitably bring more attention to the harmful content itself,” said Franks. 

“Deepfakes” can be even more convincing than fake news “because we’ve all grown up with the idea that seeing is believing,” said Joseph Treaster, a professor in the School of Communication and a former New York Times reporter. “In the digital age, video gets far more attention than written material.” 

With the 2020 elections approaching, both major political parties are fearful that “deepfakes” will increasingly become weapons used to interfere in the democratic process and, perhaps worse, threaten national security. 

And it is the proliferation of technology available to the general public that is making it harder to rein in “deepfakes.” One app developer even created an algorithm that could remove clothing from images of women to make them look realistically nude; it was shut down after four days. 

“From face swapping to lip movements and even merging two locations—those are just some of the technologies available for anyone to use,” said Alexis Morales Rivera, broadcast operations manager at the School of Communication.

UM’s experts discuss some of the other important elements of “deepfakes.” 

What laws exist to punish those who create “deepfakes”?

There are multiple laws that could apply in theory, including laws against defamation, false light, fraud, harassment, and regulations concerning deceptive business practices. In practice, however, it is often difficult to identify the creators of such content, and this undermines any potential legal action from the start. What is more, there is no law that squarely targets the creation or distribution of imagery manipulated to seem authentic. New laws, carefully and narrowly drafted to avoid infringing upon the First Amendment right to free expression, will be necessary to address the problem of misinformation.

—Mary Anne Franks, professor of law at the School of Law 

Is there any way to hold social networks like Facebook responsible for posting such misinformation?

At the moment, it is almost impossible to hold intermediaries such as Facebook responsible for the role they play in spreading misinformation, whether it’s deepfakes, anti-vaccine ideology, or more traditional forms of defamation. This is due to the overzealous interpretation and application of a federal law, Section 230 of the Communications Decency Act. While the act on its face provides legal immunity to online intermediaries for engaging in practices aimed at eliminating harmful content on their platforms, it has also been successfully invoked to protect intermediaries who do nothing about such content, as well as intermediaries who profit from or encourage it. This too must change in order to effectively address the threat of misinformation.

—Mary Anne Franks, professor of law at the School of Law 

How can people tell if a video is real or fake?

We’re all on notice now. We know that fake videos are out there. So we have to be on guard. Now when we see a video, we have to ask ourselves: Could it be true? Before you accept it as true, try Googling the people and places in the video. You may find original, true video that will contradict the fake. One of the fundamental techniques of the fake video makers is to take a real, true video and doctor it.

—Joseph Treaster, professor in the School of Communication 

We have known about altered photos for a while. Now we have altered videos. Common sense is the best way to detect the fakes: if someone is saying something that makes you think twice about what you’re watching, chances are it’s fake. Several institutions, including the U.S. Department of Defense, are also working on algorithms to detect deepfakes.

—Kim Grinfeder, associate professor and director of the Interactive Media Program in the School of Communication 

It is [our] responsibility to examine the content and, using common sense, be discerning and wise about what to share. Anybody can post videos on the internet without being held accountable. A news station will be more careful about posting content than a website with random feeds, where there is no accountability for what’s posted.

—Alexis Morales Rivera, broadcast operations manager at the School of Communication 
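The detection algorithms the experts mention are an active research area, and the article does not describe any particular method. As a hedged illustration only, one long-standing image-forensics technique is error level analysis (ELA): re-save an image as JPEG at a known quality and compare it to the original, since edited regions often compress differently and stand out in the difference image. The sketch below uses the Pillow library; the function name and parameters are illustrative, not from the article or any cited research program.

```python
from PIL import Image, ImageChops
import io

def error_level_analysis(img, quality=90):
    """Re-save the image as JPEG at a known quality and diff it
    against the original. Regions that were pasted in or heavily
    edited often show a different error level than their
    surroundings. (Illustrative sketch, not a production detector.)"""
    buf = io.BytesIO()
    img.convert("RGB").save(buf, "JPEG", quality=quality)
    buf.seek(0)
    resaved = Image.open(buf)
    return ImageChops.difference(img.convert("RGB"), resaved)

# A flat synthetic image compresses almost losslessly, so its ELA
# difference is near zero everywhere; a tampered photo typically
# shows bright patches in the edited regions.
original = Image.new("RGB", (64, 64), (128, 128, 128))
diff = error_level_analysis(original)

# getextrema() returns one (min, max) pair per color channel.
max_diff = max(diff.getextrema(), key=lambda ch: ch[1])[1]
print(max_diff)
```

ELA is only one weak signal, and modern deepfake detectors instead train neural networks on known real and fake video frames; no single check is conclusive, which is why the experts above emphasize corroborating sources as well.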

What are the implications of fake videos on society and democracy, especially as we head into an election season?

There already is a deep mistrust of the media today. Fake videos, like any other fake media, only increase this distrust. I think reliable news sources that can verify the legitimacy of the news they distribute are more important than ever. 

—Kim Grinfeder, associate professor and director of the Interactive Media Program in the School of Communication 

We are all harmed by deepfakes, whether or not we are ever targeted personally. Deepfakes and other forms of misinformation pollute the marketplace of ideas and destabilize the idea of truth itself. We are all worse off in a world in which people not only believe false things are true but also that true things are false. This is a threat to democracy as well as to personal dignity and autonomy.

—Mary Anne Franks, professor of law at the School of Law