Nvidia CEO defends his moat as AI labs change how they develop their AI models

Nvidia earned more than $19 billion in revenue last quarter, the company reported Wednesday, but that did little to reassure investors that its rapid growth will continue. On its earnings call, analysts questioned CEO Jensen Huang about how Nvidia would fare if tech companies start using new methods to improve their AI models.
One such method, which underpins OpenAI's o1 model, is known as "test-time scaling": the idea that AI models will give better answers if you give them more time and computing power to "think" through a question. Specifically, it shifts more computing to the AI inference phase, which is everything that happens after a user hits enter.
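To make the idea concrete, here is a minimal, hypothetical sketch of one common form of test-time scaling, self-consistency sampling: spend more compute at inference by drawing more candidate answers and returning the majority vote. The `noisy_model` function is a stand-in for a language model, not anything from OpenAI or Nvidia; the specifics are illustrative assumptions.

```python
import random
from collections import Counter

def noisy_model(question: str, rng: random.Random) -> int:
    # Hypothetical stand-in for a language model: returns the
    # correct answer (42) only 40% of the time, otherwise a
    # random wrong guess between 0 and 99.
    if rng.random() < 0.4:
        return 42
    return rng.randrange(100)

def answer(question: str, samples: int, seed: int = 0) -> int:
    # Test-time scaling via self-consistency: draw `samples`
    # independent answers (more samples = more inference compute)
    # and return the most frequent one.
    rng = random.Random(seed)
    votes = Counter(noisy_model(question, rng) for _ in range(samples))
    return votes.most_common(1)[0][0]
```

With a single sample the model is wrong most of the time, but as the sample count grows the majority vote converges on the correct answer, which is the sense in which extra inference compute buys accuracy.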
Nvidia's CEO was asked whether he sees AI model developers shifting to these new approaches, and how Nvidia's older chips will hold up for AI inference.
Huang told investors that o1, and test-time scaling more broadly, could play a big role in Nvidia's business going forward, calling it "one of the most exciting developments" and "a new scaling law." Huang went out of his way to reassure investors that Nvidia is well positioned for the change.
The Nvidia CEO's comments echoed what Microsoft CEO Satya Nadella said onstage at a Microsoft event on Tuesday: o1 represents a new way for the AI industry to improve its models.
This is a big deal for the chip industry because it places greater emphasis on AI inference. While Nvidia's chips are the gold standard for training AI models, a broad set of well-funded startups, such as Groq and Cerebras, is building fast AI inference chips. Inference could be a more competitive space for Nvidia to operate in.
Despite recent reports that improvements in generative models are slowing, Huang told analysts that AI model developers are still improving their models by adding more computing and data during the pretraining phase.
Anthropic CEO Dario Amodei also said Wednesday during an interview at Cerebral Valley in San Francisco that he doesn’t see a slowdown in model development.
"Foundation model pretraining scaling is intact and it's continuing," Huang said on Wednesday. "As you know, this is an empirical law, not a fundamental physical law, but the evidence is that it continues to scale. What we're learning, however, is that it's not enough," said Huang.
That's exactly what Nvidia investors wanted to hear, as the chipmaker's stock is up more than 180% in 2024 on sales of the AI chips that OpenAI, Google, and Meta use to train their models. However, Andreessen Horowitz partners and several other AI executives have previously said that these methods are starting to show diminishing returns.
Huang noted that most of Nvidia's computing workloads today involve pre-training AI models, not inference, but he attributed that to where the AI world is today. He said that one day there will simply be more people running AI models, meaning more inference will happen. Huang noted that Nvidia is the largest inference platform in the world today, and that the company's scale and reliability give it a huge advantage over startups.
"Our hopes and dreams are that someday, the world does a ton of inference, and that's when AI has really succeeded," Huang said. "Everybody knows that if they innovate on top of CUDA and Nvidia's architecture, they can innovate more quickly, and they know that everything should work."