World is not ready for AGI, warns OpenAI former leader

Technology

Organisations are not sufficiently equipped to handle the societal impacts of AGI.

(Web Desk) - Miles Brundage, a former OpenAI advisor who spent six years helping to shape the company’s AI safety initiatives, has warned about the risks of artificial general intelligence (AGI).

“Neither OpenAI nor any other frontier lab is ready [for AGI], and the world is also not ready,” said Brundage. “To be clear, I don’t think this is a controversial statement among OpenAI’s leadership, and notably, that’s a different question from whether the company and the world are on track to be ready at the relevant time.”

Brundage, a leading voice in AI safety, has raised serious concerns about the lack of preparedness for the challenges and risks posed by AGI. He argues that OpenAI and similar organisations are not sufficiently equipped to handle the societal impacts of AGI.

This sentiment reflects a broader trend in which prominent figures in AI safety voice growing frustration with commercial pressures that appear to undermine the original goal of developing AI safely.

OpenAI's shift from a non-profit to a public benefit corporation with commercial goals has significantly heightened internal tensions. This transition has arguably diminished the focus on AI safety, with resources being redirected toward profit-driven initiatives.

The restructuring of safety-focused teams, such as the disbanded "AGI Readiness" and "Superalignment" teams, indicates a potential shift in priorities.

These changes raise concerns about whether OpenAI is prioritising short-term commercial gains over long-term safety considerations, calling into question the company’s commitment to responsible AI development.