Nvidia CEO Jensen Huang speaks during the Nvidia GTC artificial intelligence conference at the SAP Center on March 18, 2024 in San Jose, California.

Justin Sullivan | Getty Images

Nvidia on Monday announced a new generation of AI chips and AI model management software. The announcement, made at Nvidia’s developer conference in San Jose, comes as the chipmaker looks to solidify its position as a leading supplier for AI companies.

Nvidia’s stock price has risen fivefold and total sales have more than tripled since OpenAI’s ChatGPT kicked off the AI boom in late 2022. Nvidia’s high-end server GPUs are essential for training and deploying large AI models. Companies like Microsoft and Meta have spent billions of dollars to buy the chips.

The new generation of AI GPUs is called Blackwell. Blackwell’s first chip is called the GB200 and will ship later this year. Nvidia is enticing customers with more powerful chips to spur new orders, even as companies and software makers are still scrambling to get their hands on the current generation of “Hopper” H100s and similar chips.

“Hopper is fantastic, but we need bigger GPUs,” Nvidia CEO Jensen Huang said Monday at the company’s developer conference in California.

Shares of Nvidia fell more than 1% in extended trading on Monday.

The company also unveiled revenue-generating software called NIM that will make it easier to deploy AI models, giving customers another reason to stick with Nvidia’s chips over a growing field of competitors.

Nvidia executives say the company is becoming less of a chip supplier for hire and more of a platform provider, like Microsoft or Apple, on which other companies can build software.

“Blackwell is not a chip, it’s the name of a platform,” Huang said.

“The commercial product that could be sold was the GPU, and the software was meant to help people use the GPU in different ways,” Nvidia enterprise vice president Manuvir Das said in an interview. “Of course we still do that. But what’s really changed is that now we really have a commercial software business.”

Das said Nvidia’s new software will make it easier to run programs on any of Nvidia’s GPUs, even older ones that may be better suited for deploying, rather than training, AI.

“If you’re a developer, you have an interesting model that you want people to adopt, if you put it in NIM, we’ll make sure it can run on all our GPUs, so you reach a lot of people,” Das said.

Meet Blackwell, Hopper’s successor

Nvidia’s GB200 Grace Blackwell Superchip, with two B200 GPUs and one Arm-based CPU.

Every two years, Nvidia updates its GPU architecture, unlocking a big leap in performance. Many of the AI models released this past year were trained on the company’s Hopper architecture — used by chips like the H100 — which was announced in 2022.

Nvidia says Blackwell-based processors like the GB200 offer a huge performance upgrade for AI companies, with 20 petaflops in AI performance versus 4 petaflops for the H100. The extra processing power will allow AI companies to train larger and more complex models, Nvidia said.

The chip includes what Nvidia calls a “transformer engine” specifically built to run transformer-based AI, one of the core technologies behind ChatGPT.

The Blackwell GPU is large and combines two separately manufactured dies into one chip manufactured by TSMC. It will also be available as an entire server called the GB200 NVLink 2, combining 72 Blackwell GPUs and other Nvidia parts designed to train AI models.

Nvidia CEO Jensen Huang compared the size of the new Blackwell chip to the current Hopper H100 chip at the company’s developer conference in San Jose, California.

Nvidia

Nvidia will also sell B200 GPUs as part of a complete system that takes up an entire server rack.

Nvidia Inference Microservice

Nvidia also announced that it is adding a new product called NIM, which stands for Nvidia Inference Microservice, to its Nvidia Enterprise Software subscription.

NIM makes it easier to use older Nvidia GPUs for inference, or the process of running AI software, and will allow companies to continue using the hundreds of millions of Nvidia GPUs they already own. Inference requires less computing power than initially training a new AI model. NIM lets companies run their own AI models, instead of buying access to AI results as a service from companies like OpenAI.

The strategy is to get customers who buy Nvidia-based servers to sign up for Nvidia AI Enterprise, which costs $4,500 per GPU per year for a license.

Nvidia will work with AI companies like Microsoft or Hugging Face to ensure their AI models are set up to run on all compatible Nvidia chips. Then, using NIM, developers can efficiently run the model on their own servers or cloud-based Nvidia servers without a lengthy configuration process.

“In my code where I was calling OpenAI, I’m going to replace one line of code to point it to this NIM that I got from Nvidia instead,” Das said.
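The one-line swap Das describes works because a NIM exposes an OpenAI-style HTTP API. The sketch below illustrates the idea with Python's standard library; the endpoint URLs and model name are illustrative assumptions, not documented NIM addresses.

```python
import json

# Illustrative endpoints: the hosted OpenAI API vs. a hypothetical
# NIM container running locally. Only this URL needs to change.
OPENAI_URL = "https://api.openai.com/v1/chat/completions"
NIM_URL = "http://localhost:8000/v1/chat/completions"  # assumed local NIM address

def build_chat_request(url: str, model: str, prompt: str):
    """Build an OpenAI-style chat-completion request; the body is identical
    regardless of which provider the URL points at."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, json.dumps(payload)

# Pointing the same call at a NIM instead of OpenAI is the one-line change:
url, body = build_chat_request(NIM_URL, "example-model", "Hello")
```

Because the request format stays the same, existing application code keeps working; only the destination of the call moves from a hosted service to Nvidia-run infrastructure.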

Nvidia says the software will also help AI run on GPU-equipped laptops instead of servers in the cloud.

https://www.cnbc.com/2024/03/18/nvidia-announces-gb200-blackwell-ai-chip-launching-later-this-year.html