US FTC leaders will target AI that violates civil rights or is deceptive
WASHINGTON (Reuters) - Leaders of the U.S. Federal Trade Commission said on Tuesday the agency would pursue companies that misuse artificial intelligence to violate anti-discrimination laws or engage in deceptive practices.
The sudden popularity of Microsoft-backed (MSFT.O) OpenAI's ChatGPT this year has prompted calls for regulation, amid concerns around the world that the technology could be used for wrongdoing even as companies seek ways to use it to enhance efficiency.
In a congressional hearing, FTC Chair Lina Khan and Commissioners Rebecca Slaughter and Alvaro Bedoya were asked about concerns that recent innovation in artificial intelligence, which can be used to produce high-quality deep fakes, could be used to make more effective scams or otherwise violate laws.
Bedoya said companies using algorithms or artificial intelligence were not allowed to violate civil rights laws or break rules against unfair and deceptive acts.
"It's not okay to say that your algorithm is a black box" and you can't explain it, he said.
Khan agreed the newest versions of AI could be used to turbocharge fraud and scams, and said any wrongdoing "should put them on the hook for FTC action."
Slaughter noted that throughout its 100-year history the agency has had to adapt to changing technologies, and said that adapting to ChatGPT and other artificial intelligence tools was no different.
The commission is organized to have five members but currently has three, all of whom are Democrats.