OpenAI sees continued attempts by threat actors to use its models for election influence
(Reuters) - OpenAI has seen several attempts where its AI models have been used to generate fake content, including long-form articles and social media comments, aimed at influencing elections, the ChatGPT maker said in a report on Wednesday.
Cybercriminals are increasingly using AI tools, including ChatGPT, to aid in their malicious activities such as creating and debugging malware, and generating fake content for websites and social media platforms, the startup said.
So far this year it neutralized more than 20 such attempts, including a set of ChatGPT accounts in August that were used to produce articles on topics that included the US elections, the company said.
It also banned several accounts from Rwanda in July that were used to generate comments about the elections in that country for posting on social media site X.
There is growing concern about the use of AI tools and social media platforms to generate and spread fake content related to elections, especially as the US gears up for its presidential election.
According to the US Department of Homeland Security, Russia, Iran, and China pose a growing threat of attempting to influence the Nov. 5 elections, including by using AI to disseminate fake or divisive information.
OpenAI cemented its position as one of the world's most valuable private companies last week after a $6.6 billion funding round.
ChatGPT has amassed 250 million weekly active users since its launch in November 2022.