While fake videos and other manipulated media have many lawmakers worried ahead of the 2020 U.S. presidential election, plans to moderate such content must be weighed against the protection of First Amendment rights.
“All of this is so contextual, so I don’t think we can have a one-size-fits-all rule for synthetic video, even as to impersonations, because you could have satire and parody,” said Danielle Citron, professor of law at the University of Maryland Francis King Carey School of Law, at a Thursday morning House Intelligence Committee hearing.
“At the same time, we have to bring context to the fore, and say there are times when these falsehoods, the deepfake, causes real harm.”
Leading up to the hearing, a video of Mark Zuckerberg drew attention on Instagram. It showed the Facebook CEO sitting at a desk, looking toward a camera and saying, “Imagine this for a second: One man, with total control of billions of people’s stolen data, all their secrets, their lives, their futures.”
Except it wasn’t Zuckerberg saying this. It was a deepfake: existing footage of him had been modified to make him appear to say things he never actually said.
Artists create Mark Zuckerberg video to demonstrate the potential power of ‘deepfake’ videos and Facebook’s policies on hosting them pic.twitter.com/MTg2JnaxgV
— Reuters Top News (@Reuters) June 13, 2019
Intelligence Committee Chairman Adam Schiff said that, looking ahead to the 2020 elections and beyond, he could envision “nightmarish scenarios” in which manipulated media leaves the government, the media and the public struggling to discern what is real and what is fake.
As an example, he cited the possibility of a deepfake video, released shortly before an election, showing a politician accepting a bribe.
“What enables deepfakes and other modes of disinformation to become truly pernicious is the ubiquity of social media and the velocity at which false information can spread,” Schiff said. “We got a preview of what that might look like recently when a doctored video of Speaker Nancy Pelosi went viral on Facebook.”
Facebook, which owns Instagram, decided not to remove the Zuckerberg video. The company also opted not to remove the video of Pelosi, which was altered to make it sound like she was slurring her words. YouTube chose to remove that video.
“Now is the time for social media companies to put in place policies to protect users from this kind of misinformation,” Schiff said during the hearing. “Not in 2021, after viral deepfakes have polluted the 2020 elections.”
Citron said she believed Facebook should have removed the altered Pelosi video. But she also said Facebook made the right decision in not removing the deepfake Zuckerberg video, which, given the context, she considered satire and parody.
Citron also said the immunity social media platforms receive from liability for the content they host, provided by Section 230 of the Communications Decency Act, should be conditioned on “reasonable practices.”
Clint Watts, a research fellow with the Foreign Policy Research Institute, said at the hearing that he didn’t believe all synthetic content should be removed.
“If we went to that extreme, we would have a country where everything that’s ever been changed or modified for any reason would have to be policed,” Watts said.
Instead, Watts proposed a “triage” system in which certain content is flagged, pulled down and checked. When the content is restored to public view, it would be accompanied by context about its authenticity.
“I see it as the public needs to be given context, so we’re not really suppressing all freedom of speech,” he said, pointing to synthetic media that is used for entertainment, comedy and other visualizations.