Twitter to label deepfakes and other deceptive media


(Reuters) - Twitter said it would start applying a label to tweets containing synthetic or deceptively edited forms of media, as social media platforms brace for a potential onslaught of misinformation ahead of the 2020 presidential election.

The company also said it would remove any deliberately misleading manipulated media likely to cause harm, including content that could result in threats to physical safety, widespread civil unrest, voter suppression or privacy risks.

Social media companies have been under pressure to tackle the emerging threat of “deepfake” videos, which use artificial intelligence to create hyper-realistic but fabricated videos in which a person appears to say or do something they did not.

Alphabet Inc’s YouTube said earlier this week it would remove any content that has been technically manipulated or doctored and may pose a “serious risk of egregious harm,” while TikTok, owned by China’s ByteDance, issued a broad ban on “misleading information” last month.

Facebook Inc said last month it would remove deepfakes and some other manipulated videos from its websites, but would leave up satirical content, as well as videos edited “solely to omit or change the order of words.”

The company sparked outrage among lawmakers when it said the new policy would not be applied to a heavily edited video that circulated widely online and attempted to make House of Representatives Speaker Nancy Pelosi seem incoherent by making her speech appear slurred.

Facebook said it would label the video as false, but that it would continue to be allowed on the platform as “only videos generated by artificial intelligence to depict people saying fictional things will be taken down.”

Under its new policy, Twitter will similarly apply a “false” warning label to any photos or videos that have been “significantly and deceptively altered or fabricated,” although it will not differentiate between the technologies used to manipulate a piece of media.

“Our focus under this policy is to look at the outcome, not how it was achieved,” Twitter’s head of site integrity, Yoel Roth, said on a phone call with reporters along with Del Harvey, the company’s vice president of trust and safety.

Roth said Twitter would generally apply a warning label to the Pelosi video under the new approach, but added that the content could be removed if the text in the tweet or other contextual signals suggested it was likely to cause harm.

The two executives declined to answer questions about the resources the company would put toward spotting manipulated media, saying Twitter would consider user reports and build relationships with “third party experts” to identify content.