US says leading AI companies join safety consortium to address risks

The group is tasked with working on priority actions outlined in President Biden’s October order

WASHINGTON (Reuters) - The Biden administration on Thursday said leading artificial intelligence companies are among more than 200 entities joining a new U.S. consortium to support the safe development and deployment of generative AI.

Commerce Secretary Gina Raimondo announced the U.S. AI Safety Institute Consortium (AISIC), which includes OpenAI, Alphabet's Google (GOOGL.O), Anthropic and Microsoft (MSFT.O), along with Facebook-parent Meta Platforms (META.O), Apple (AAPL.O), Amazon.com (AMZN.O), Nvidia (NVDA.O), Palantir (PLTR.N), Intel, JPMorgan Chase (JPM.N) and Bank of America (BAC.N).

"The U.S. government has a significant role to play in setting the standards and developing the tools we need to mitigate the risks and harness the immense potential of artificial intelligence," Raimondo said in a statement.

The consortium, which also includes BP (BP.L), Cisco Systems (CSCO.O), IBM (IBM.N), Hewlett Packard Enterprise (HPE.N), Northrop Grumman (NOC.N), Mastercard (MA.N), Qualcomm (QCOM.O), Visa (V.N) and major academic institutions and government agencies, will be housed under the U.S. AI Safety Institute (USAISI).

The group is tasked with working on priority actions outlined in President Biden’s October AI executive order "including developing guidelines for red-teaming, capability evaluations, risk management, safety and security, and watermarking synthetic content."

Major AI companies last year pledged to watermark AI-generated content to make the technology safer. Red-teaming has been used for years in cybersecurity to identify new risks, with the term referring to U.S. Cold War simulations where the enemy was termed the "red team."

Biden's order directed agencies to set standards for that testing and to address related chemical, biological, radiological, nuclear, and cybersecurity risks.

In December, the Commerce Department said it was taking the first step toward writing key standards and guidance for the safe deployment and testing of AI.

The consortium represents the largest collection of test and evaluation teams and will focus on creating foundations for a "new measurement science in AI safety," Commerce said.

Generative AI - which can create text, photos and videos in response to open-ended prompts - has spurred excitement as well as fears that it could make some jobs obsolete, upend elections and potentially overpower humans, with catastrophic effects.
While the Biden administration is pursuing safeguards, efforts in Congress to pass legislation addressing AI have stalled despite numerous high-level forums and legislative proposals.