US report says AI poses threat to human existence, security

Technology

Averting the threat requires guardrails on AI chips and limits on training models beyond a certain threshold

(Web Desk) - Artificial intelligence (AI) could pose a threat to human existence on a scale comparable to the introduction of nuclear weapons, potentially leading to human extinction, according to a report commissioned by the U.S. government and published on Monday, TIME reported exclusively.

The U.S. government must move “quickly and decisively” to avert substantial national security risks stemming from artificial intelligence (AI), which could, in the worst case, cause an “extinction-level threat to the human species,” the report says.

“Current frontier AI development poses urgent and growing risks to national security. The rise of advanced AI and AGI [artificial general intelligence] has the potential to destabilize global security in ways reminiscent of the introduction of nuclear weapons.”

Threats posed by AI

The report delineates two primary risk categories. The first, termed "weaponization risk," covers the potential for AI systems to be exploited to design and carry out severe biological, chemical, or cyber attacks, as well as to enable novel weaponized applications in swarm robotics.

The second category, labelled "loss of control" risk, pertains to the concern that advanced AI systems could surpass human oversight.

According to the report, there's a possibility that these systems, if developed using existing methodologies, could prove uncontrollable and default to adversarial behaviour towards humans.

The report underscores that both risk categories are heightened by what it terms "race dynamics" within the AI sector.

It highlights that the prospect of substantial economic gains for the first company to achieve Artificial General Intelligence (AGI) motivates firms to prioritize rapid development over safety considerations.

According to the report, frontier AI laboratories face intense pressure to scale their AI systems rapidly, and have fewer immediate incentives to invest in safety or security measures that do not directly yield economic returns, although some labs do so out of genuine concern.

Policy measures

According to the report, the high-end computer chips used to train AI systems are a critical bottleneck for advances in AI capabilities.

The report suggests that regulating the spread of this hardware may be the most important step for safeguarding long-term global safety and security with respect to AI.

TIME reported that potential measures could include establishing a new AI agency to regulate the computing power used by AI systems.

Additionally, such an agency could require AI companies to seek government approval before deploying new models that surpass a specified threshold. The report also weighs prohibiting the publication of the inner workings of powerful AI models, for example under open-source licensing.

The three authors of the report spent more than a year on their research, speaking with over 200 people, including government officials, experts, and employees at leading AI companies such as OpenAI, Google DeepMind, Anthropic, and Meta.

Those conversations yielded troubling findings: many AI safety professionals within cutting-edge labs worry that problematic incentives are shaping the decisions of company executives.