Facebook has said it will not remove a manipulated video of its chief Mark Zuckerberg from Instagram, in which he appears to credit a secretive organisation for his success.
The clip is a “deepfake”, made by AI software that uses photos of a person to create a video of them in action.
Facebook had previously been criticised for not removing a doctored clip of US House Speaker Nancy Pelosi.
The latest decision coincided with the announcement of 500 new jobs in London.
The social network has said many of the new recruits will work on building its own machine-learning-based software to automatically detect and remove malicious content posted to its platforms.
In addition, they will build tools to help human reviewers assess potentially harmful material.
The company said it would bring its tally to more than 3,000 jobs in the capital by the end of 2019.
The deepfake video of Mark Zuckerberg was created for an art installation on display in Sheffield called Spectre.
It is designed to draw attention to how people can be monitored and manipulated via social media in light of the Cambridge Analytica affair – among other scandals.
It features a computer-generated image of the chief executive’s face merged with footage of his body, sourced from a video presentation given in 2017 at an office in Facebook’s Silicon Valley headquarters. An actor provided the audio recording to which it is synced.
The 16-second clip – which plays on a loop – was uploaded to Instagram on Saturday. However, it only gained prominence after Motherboard reported on its existence on Tuesday.
“The result is fairly realistic – if you leave the video muted,” commented the news site.
“The voice superimposed on the video is clearly not Zuckerberg, but someone attempting an impression.”

The account involved had labelled the video with a #deepfake hashtag.
The Instagram post has now been viewed more than 25,000 times. Copies have also been shared via Facebook itself.
“We will treat this content the same way we treat all misinformation on Instagram,” said a spokesman for the app’s parent company Facebook. “If third-party fact-checkers mark it as false, we will filter it from Instagram’s recommendation surfaces like Explore and hashtag pages.”
The artists involved said they “welcomed” Facebook’s decision but still questioned the company’s ethics.
“We feel that by using art to engage and critically explore this kind of technology, we are attempting to interrogate the power of these new forms of computational propaganda and as a result would not like to see our artwork censored by Facebook,” they told the BBC.
“We would however welcome meaningful regulation and oversight of the digital influence industry.”
Had Facebook opted to block the post, it could have faced accusations of hypocrisy after refusing to remove a manipulated clip of Ms Pelosi three weeks ago.
That video was not a deepfake, but had been slowed down in parts to make the Democratic leader’s speech appear garbled.
The tech firm said at the time that information posted to its site did not need to be “true”. But it said it would limit how often the video appeared in members’ news feeds, and provide a link to fact-checking sites.
Ms Pelosi subsequently criticised the firm, saying: “Right now they are putting up something they know is false.
“I can take it … But [Facebook is] lying to the public.”
The Washington Post has since reported that Mr Zuckerberg tried to personally contact Ms Pelosi to discuss the matter, but she had not responded.