(Reuters) - An artificial intelligence safety forum launched by companies including OpenAI, Microsoft and Google named its first director on Wednesday, and said it would create an advisory board in the coming months to help guide its strategy.
The Frontier Model Forum also said it created a fund to back research into the technology, with initial funding commitments of more than $10 million from its backers and partners.
It said its first director would be Chris Meserole, who most recently served as director of the artificial intelligence and emerging technology initiative at the Brookings Institution, a Washington-based think tank.
He joins the forum, which was launched in July with a focus on "frontier AI models" that exceed the capabilities of the most advanced existing models. Industry leaders have warned that such models could have dangerous capabilities sufficient to pose severe risks to public safety.
Many countries are planning AI regulation, and Britain is hosting a global AI safety summit in November focused on understanding the risks posed by the technology and how national and international frameworks could support its safe development.
The Frontier Model Forum is backed by ChatGPT-owner OpenAI, Microsoft (MSFT.O), Google's parent Alphabet (GOOGL.O) and AI startup Anthropic.