DUNYA NEWS

Experts warn AI could cause 'human extinction'

We should understand serious risks posed by these technologies

(Web Desk) - A group of current and former employees at top Silicon Valley firms developing artificial intelligence warned in an open letter that without additional safeguards, AI could pose a threat of “human extinction.”

The letter, signed by 13 mostly former employees of firms like OpenAI, Anthropic, and Google’s DeepMind, argues top AI researchers need more protections to air criticisms of new developments and seek input from the public and policymakers over the direction of AI innovation.

“We believe in the potential of AI technology to deliver unprecedented benefits to humanity,” the Tuesday letter reads.

“We also understand the serious risks posed by these technologies.

These risks range from the further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction.”

The letter argues that the companies developing powerful AI technologies, including artificial general intelligence (AGI), a theorised AI system as smart as or smarter than humans, “have strong financial incentives to avoid effective oversight” from both their own employees and the public at large.

Neel Nanda of DeepMind is the only signatory currently affiliated with one of the companies.

“This was NOT because I currently have anything I want to warn about at my current or former employers, or specific critiques of their attitudes towards whistleblowers,” he wrote on X.

“But I believe AGI will be incredibly consequential and, as all labs acknowledge, could pose an existential threat. Any lab seeking to make AGI must prove itself worthy of public trust, and employees having a robust and protected right to whistleblow is a key first step.”

The message calls for companies to refrain from punishing or silencing current or former employees who speak out about the risks of AI, a likely reference to a scandal this month at OpenAI, where departing employees were told to choose between losing vested equity or signing a non-disparagement agreement about the company that never expired. (OpenAI later lifted the requirement, saying, “It doesn’t reflect our values or the company we want to be.”)

“We’re proud of our track record providing the most capable and safest AI systems and believe in our scientific approach to addressing risk,” an OpenAI spokesperson told The Independent. “We agree that rigorous debate is crucial given the significance of this technology and we’ll continue to engage with governments, civil society and other communities around the world.”

The company added that it takes a number of steps to ensure its employees are heard and its products are developed responsibly, including an anonymous hotline for workers and a Safety and Security Committee scrutinizing the company’s developments. OpenAI also pointed to its support for increased AI regulation and voluntary commitments around AI safety.
