Governments race to regulate AI tools
(Reuters) - Rapid advances in artificial intelligence (AI) such as Microsoft-backed OpenAI's ChatGPT are complicating governments' efforts to agree on laws governing the use of the technology.
Here are the latest steps national and international governing bodies are taking to regulate AI tools:
AUSTRALIA
Planning regulations
Australia will make search engines draft new codes to prevent the sharing of child sexual abuse material created by AI and the production of deepfake versions of the same material, the country's internet regulator said on Sept. 8.
BRITAIN
Planning regulations
Britain's data watchdog said on Oct. 10 it had issued Snap Inc's (SNAP.N) Snapchat with a preliminary enforcement notice over a possible failure to properly assess the privacy risks of its generative AI chatbot to users, particularly children.
The country's competition authority set out seven principles on Sept. 18 designed to make developers accountable, prevent Big Tech from tying up the tech in their walled platforms, and stop anti-competitive conduct like bundling.
The proposed principles, which come six weeks before Britain hosts a global AI safety summit, will underpin its approach to AI when it assumes new powers in the coming months to oversee digital markets.
CHINA
Implemented temporary regulations
China issued a set of temporary measures effective Aug. 15, requiring service providers to submit security assessments and receive clearance before releasing mass-market AI products.
EUROPEAN UNION
Planning regulations
EU lawmaker Brando Benifei, who is leading negotiations on the bloc's AI Act, on Sept. 21 urged member countries to compromise in key areas to reach an agreement by the end of the year. Lawmakers are thrashing out details with EU countries before the draft rules can become legislation.
European Commission President Ursula von der Leyen on Sept. 13 called for a global panel to assess the risks and benefits of AI.
FRANCE
Investigating possible breaches
France's privacy watchdog said in April it was investigating complaints about ChatGPT.
G7
Seeking input on regulations
G7 leaders in May called for the development and adoption of technical standards to keep AI "trustworthy".
ITALY
Investigating possible breaches
Italy's data protection authority plans to review AI platforms and hire experts in the field, a top official said in May. ChatGPT was temporarily banned in the country in March, but it was made available again in April.
JAPAN
Investigating possible breaches
Japan expects to introduce by the end of 2023 regulations that are likely closer to the U.S. attitude than the stringent ones planned in the EU, an official close to deliberations said in July.
The country's privacy watchdog has warned OpenAI not to collect sensitive data without people's permission.
POLAND
Investigating possible breaches
Poland's Personal Data Protection Office said on Sept. 21 that it was investigating OpenAI over a complaint that ChatGPT breaks EU data protection laws.
SPAIN
Investigating possible breaches
Spain's data protection agency in April launched a preliminary investigation into potential data breaches by ChatGPT.
UNITED NATIONS
Planning regulations
The U.N. Security Council held its first formal discussion on AI in July, addressing military and non-military applications of AI, which "could have very serious consequences for global peace and security", Secretary-General Antonio Guterres said.
Guterres has also backed a proposal by some AI executives for the creation of an AI watchdog and announced plans to start work on a high-level AI advisory body by the end of the year.
U.S.
Seeking input on regulations
The U.S. Congress held hearings on AI from Sept. 11 to 13 and hosted an AI forum featuring Meta (META.O) CEO Mark Zuckerberg and Tesla CEO Elon Musk.
More than 60 senators took part in the talks, during which Musk called for a U.S. "referee" for AI. Lawmakers said there was universal agreement about the need for government regulation of the technology.
On Sept. 12, the White House said Adobe (ADBE.O), IBM (IBM.N), Nvidia (NVDA.O) and five other firms had signed President Joe Biden's voluntary commitments governing AI, which require steps such as watermarking AI-generated content.
A Washington, D.C., district judge ruled on Aug. 21 that a work of art created by AI without any human input cannot be copyrighted under U.S. law.
The U.S. Federal Trade Commission in July opened an investigation into OpenAI over claims that it has run afoul of consumer protection laws.