
Can you spot a deepfake?

University of Miami researchers familiar with artificial intelligence technology offer their insights on a phenomenon that is becoming more prevalent online: the posting of images or videos that have been manipulated.

If you surf the internet—and who doesn’t—you have likely seen a video that caught your attention.

Tom Cruise performing magic tricks and eating a lollipop.

Former President Barack Obama spouting offensive words.

Kim Kardashian rapping.

These videos were not real. They were deepfakes.

A deepfake is an image or video of someone’s likeness that looks realistic when, in fact, it has been altered or fabricated, usually with artificial intelligence (AI). These phony representations have been around for quite a while, but the technology to create them is becoming easier to access and use.

According to experts, the new AI-driven synthetic media technology offers countless opportunities for marketers, advertisers, and influencers to create such content. It also offers chances for those bent on promoting disinformation to mislead in dangerous ways, leaving viewers to question reality.

For example, deepfakes have been created that superimpose innocent victims into pornographic videos. 

“Deepfakes could convince some people of things that are not true,” said Joseph Uscinski, professor of political science at the University of Miami and an expert on conspiracy theories.

“They could also damage reputations by making people think that someone did something that they did not actually do,” he added. “If people aren’t aware of the technology and what it can do, they may not be on the lookout for it.”

Lindsay Grace, associate professor at the School of Communication and Knight Chair in Interactive Media, said deepfakes should be subjected to fact-checking. Consumers should be savvy enough to question and dig into the videos they view.

“As the quality of deepfakes increases, it’s going to be hard to tell the real from the artificial,” Grace pointed out. “It’s important to practice a few proven strategies, [such as] verifying the source. For deepfakes of political figures, newsworthy accounts, or popular content, there are also online resources to check such content.”

These are the skills Grace and his colleagues in the communication literacy task force have tried to develop in an introductory course and a massive online open course about misinformation in the digital age.

Approaching the internet with a critical eye and questioning the broader intention behind a video, photo, or audio clip is crucial in today’s world, said Ching-Hua Chuan, assistant professor of interactive media. “In the early days of deepfakes, you could spot a counterfeit subject because it did not blink, had strangely messy hair, or a blurred ear. Those days are gone,” she added.

“Deepfakes are becoming more and more realistic, so it is getting harder and harder to differentiate deepfakes from authentic content,” Chuan said.

Often, the social manipulators behind deepfakes share old content out of context and alter images to suit their message, she noted. Photoshopped images are shared widely across the internet, Chuan pointed out. The experts agreed that it is important to scrutinize such images very carefully.

In addition, viewers have to be aware of shallowfakes, in which genuine content is altered using simple editing techniques rather than AI. Such was the case with a widely circulated 2020 social media video of former House Speaker Nancy Pelosi appearing to be drunk. The video’s audio track had simply been slowed to make Pelosi appear to slur her words.

“The way deepfakes are used for misinformation can create political divides and this can affect society,” Chuan explained. “People lose trust in what they see and what they hear.” 

False audio can also be used to defraud people out of money, she warned. She related a well-documented case in which the chief executive of a British energy firm was tricked into sending 220,000 British pounds to a Hungarian supplier after a faked voice recording convinced him that he was speaking with an executive from his own company.

However, there is an emerging industry of software that aims to detect anomalies common to deepfakes. Large companies like Intel—which offers a deepfake detector—have developed such products. Facebook and Microsoft are looking into technologies that can spot deepfakes and alert consumers when they are viewing them.

Chuan recommended that those who want to learn more about how to spot deepfakes start by taking this interactive quiz.

She also noted that one cannot think of AI only in negative terms, because deepfake technology can be used in many creative ways. Artists use it to generate innovative images, and AI can help people who have lost their voices speak again by restoring their capability to speak using their own voice data.

“The technology itself does not necessarily cause damage,” Chuan said. “It is how it is used that may cause damage.”