New benchmark tests speed of systems training ChatGPT-like chatbots

San Francisco (Reuters) - MLCommons, a group that develops benchmark tests for artificial intelligence (AI) technology, on Tuesday unveiled results for a new test that measures how quickly systems can train the algorithms behind chatbots like ChatGPT - and Nvidia (NVDA.O) came out on top.

The MLPerf benchmark is based on GPT-3, the large AI model that underpins ChatGPT, the viral chatbot developed by OpenAI and backed by Microsoft (MSFT.O). Because the full model is so large, the benchmark trains only a representative portion of it.

"This was our most expensive benchmark so far," MLCommons Executive Director David Kanter told Reuters. "We spent over 600K hours of accelerator compute time to develop it, plus some fantastically talented engineers."

Kanter declined to disclose the development cost, saying only that it ran into the millions of dollars.

Only two chip firms - Nvidia and Intel's (INTC.O) Habana Labs - submitted results for the benchmark, with the fastest time coming from systems using the latest H100 chip from Nvidia, the uncontested leader in hardware for training AI.

Nvidia's largest submitted system, built in partnership with AI cloud startup CoreWeave, used 3,584 H100 chips and completed the training task in 10.94 minutes. Habana Labs, an AI chip company acquired by Intel, ran the benchmark in 311.945 minutes on a much smaller system equipped with 384 Gaudi2 chips.

Generally, more chips and a bigger system mean faster training.

Intel's Jordan Plawner, senior director of AI products, said the results demonstrated the potential of Gaudi2, which is set to receive a software update in September to boost its speed.

"You will get a 1.5X to 2X speed up on the Habana results. So that's when we see Habana Gaudi2 being really competitive and lower priced than H100," Plawner told Reuters.

Plawner declined to say how much a Gaudi2 chip costs but said the industry needs a second supplier of chips for AI training, and the MLPerf results show Intel can fill that need.