AI-generated content should be labelled, EU Commissioner Jourova says
BRUSSELS (Reuters) - Companies deploying generative AI tools such as ChatGPT and Bard with the potential to generate disinformation should label such content as part of their efforts to combat fake news, European Commission deputy head Vera Jourova said on Monday.
Unveiled late last year, Microsoft-backed (MSFT.O) OpenAI's ChatGPT has become the fastest-growing consumer application in history and set off a race among tech companies to bring generative AI products to market.
Concerns, however, are mounting about potential abuse of the technology and the possibility that bad actors, and even governments, may use it to produce far more disinformation than before.
"Signatories who integrate generative AI into their services like Bingchat for Microsoft, Bard for Google (GOOGL.O) should build in necessary safeguards that these services cannot be used by malicious actors to generate disinformation," Jourova told a press conference.
"Signatories who have services with a potential to disseminate AI generated disinformation should in turn put in place technology to recognise such content and clearly label this to users," she said.
Companies such as Google, Microsoft and Meta Platforms (META.O) that have signed up to the EU Code of Practice to tackle disinformation should report in July on the safeguards they have put in place, Jourova said.
She warned Twitter, which quit the Code last week, to expect more regulatory scrutiny.
"By leaving the Code, Twitter has attracted a lot of attention and its actions and compliance with EU law will be scrutinised vigorously and urgently," Jourova said.