OpenAI says AI tools can be effective in content moderation
Startup does not train its AI models on user-generated data
(Reuters) - ChatGPT creator OpenAI made a strong case for the use of AI in content moderation, saying it can unlock efficiencies at social media firms by shortening the time it takes to handle some of their most grueling tasks.
Despite the hype around generative AI, companies such as Microsoft (MSFT.O) and Google-owner Alphabet (GOOGL.O) have yet to monetize the technology into which they have been pumping billions of dollars in the hope that it will have a big impact across industries.
OpenAI, which is backed by Microsoft, said its latest GPT-4 AI model can cut the time content moderation takes to a few hours from months and ensure more consistent labeling.
Content moderation can be a grueling task for social media firms such as Facebook-parent Meta (META.O), which works with thousands of moderators around the world to block users from seeing harmful content such as child pornography and images of extreme violence.
"The process (of content moderation) is inherently slow and can lead to mental stress on human moderators," OpenAI said. "With this system, the process of developing and customizing content policies is trimmed down from months to hours."
Separately, OpenAI CEO Sam Altman said on Tuesday that the startup does not train its AI models on user-generated data.