LONDON (AP) — Facebook said it is banning “deepfake” videos, the false but realistic clips created with artificial intelligence and sophisticated tools, as it steps up efforts to fight online manipulation. However, the policy leaves plenty of loopholes.
The social network said late Monday it's beefing up its policies for removing videos edited or synthesized in ways that aren't apparent to the average person and that could dupe someone into thinking the video's subject said something he or she didn't actually say.
Created with artificial intelligence or machine learning, deepfakes combine or replace content to produce footage that can be almost impossible to distinguish from the authentic original.
“While these videos are still rare on the internet, they present a significant challenge for our industry and society as their use increases,” Facebook’s vice president of global policy management, Monika Bickert, said in a blog post.
However, she said the new rules won't apply to parody or satire, or to clips edited merely to change the order of words. The exceptions underscore the balancing act Facebook and other social media services face in their struggle to stop the spread of online misinformation and "fake news" while also respecting free speech and fending off allegations of censorship.
The U.S. tech company has been grappling with how to handle the rise of deepfakes after facing criticism last year for refusing to remove a doctored video of House Speaker Nancy Pelosi slurring her words, which was viewed more than 3 million times. Experts said the crudely edited clip was more of a “cheap fake” than a deepfake.
Then, a pair of artists posted fake footage of Facebook CEO Mark Zuckerberg showing him gloating over his one-man domination of the world. Facebook also left that clip online. The company said at the time neither video violated its policies.
The problem of altered videos is taking on increasing urgency as experts and lawmakers try to figure out how to prevent deepfakes from being used to interfere with the U.S. presidential election in November.
The new policy is a “strong starting point,” but doesn’t address broader problems, said Sam Gregory, program director at Witness, a nonprofit working on using video technology for human rights.
The bigger problem is videos that are shown without context or lightly edited, which some have dubbed “shallow fakes,” Gregory said. These include the Pelosi clip or one that made the rounds last week of Democratic presidential candidate Joe Biden that was selectively edited to make it appear he made racist remarks.
Gregory, whose group was among those that gave feedback to Facebook for the policy, said while the new rules look strong on paper, there are questions around how effective the company will be at uncovering synthetic videos.
Facebook has built deepfake-detecting algorithms and can also look at an account's behavior to get an idea of whether its intention is to spread disinformation. That will give the company an edge over users or journalists in sniffing out such videos, Gregory said.