90 scientists sign letter aimed at preventing AI bioweapons

AI technologies could be misused to cause harm

(Web Desk) – Dario Amodei, chief executive of the high-profile AI start-up Anthropic, told Congress last year that new AI technology could soon help unskilled but malevolent people create large-scale biological attacks, such as the release of viruses or toxic substances that cause widespread disease and death.

Senators from both parties were alarmed, while AI researchers in industry and academia debated how serious the threat might be.

Now, more than 90 biologists and other scientists who specialize in AI technologies used to design new proteins, the molecular machines that drive processes throughout biology, have signed an agreement that seeks to ensure their AI-aided research moves forward without exposing the world to serious harm.

The biologists, who include the Nobel laureate Frances Arnold and represent labs in the United States and other countries, also argued that the latest technologies would bring far more benefits than harms, including new vaccines and medicines.

The prospect of AI-designed bioweapons is a haunting thought, and the letter outlines principles for the responsible use of AI in designing new proteins, the building blocks of life.

The fear is that AI could be used to generate new viruses or toxins. That said, the letter does not seek to ban the use of AI altogether; in fact, the scientists say the benefits outweigh the harms. Rather, they hope to regulate the actual equipment used to synthesize new genetic material. The letter reads, in part:

Advances in artificial intelligence (AI) are creating unprecedented opportunities for life science research, including by enabling the design of functional biological molecules, especially proteins.

This application of AI for protein design holds immense potential to enhance our understanding of the world and help address some of humanity’s most pressing challenges by enabling rapid responses to infectious disease outbreaks, curing numerous diseases, unlocking sustainable sources of energy, helping to mitigate climate change, and more.

As scientists engaged in this work, we believe the benefits of current AI technologies for protein design far outweigh the potential for harm and we would like to ensure our research remains beneficial for all going forward.

Given anticipated advances in this field, a new proactive risk management approach may be required to mitigate the potential of developing AI technologies that could be misused, intentionally or otherwise, to cause harm.

We are therefore motivated as a community to articulate a set of values and principles to guide the responsible development of AI technologies in the field of protein design.

These values include safety, security, equity, international collaboration, openness, responsibility, and pursuing research for the benefit of society.

Furthermore, we as signatories voluntarily agree to a set of specific, actionable commitments informed by these values and principles and outlined here.

We will work together with global stakeholders across academia, governments, civil society, and the private sector to ensure that this technology develops in a responsible and trustworthy manner and that it is safe, secure, and beneficial for all.
