Nvidia dunks on the doubters
CEO Jensen Huang and CFO Colette Kress dismantled most of the recent arguments and bear cases put forward by their naysayers.
Nvidia’s Q3 results and Q4 outlook provided an emphatic statement that speaks for itself: it’s still boom times for the company at the heart of AI.
And while actions (and numbers!) may speak louder than words, the Q3 conference call offered plenty to chew on. During both the prepared remarks and the Q&A, CEO Jensen Huang and CFO Colette Kress systematically addressed and dissected most of the recent arguments and bear cases put forward by their naysayers, whether or not those arguments came up in a question.
There were flexes galore.
Huang spoke not only as the CEO of the world’s largest company, but also as an ambassador for AI, justifying the immense spending that benefits his firm by pointing to the rewards he believes his customers will reap.
This is the kind of conference call that will either have people revisiting some of these quotes and going, “I should have known this would be the first $6 trillion company,” or, “All this hubris was such a big tell that the AI trade was doomed.”
Or, depending on how much of a sense of humor the market gods have, both!
AI bubble?
“There’s been a lot of talk about an AI bubble. From our vantage point, we see something very different… The transition to accelerated computing is foundational and necessary, essential in a post-Moore’s Law era. The transition to generative AI is transformational and necessary, supercharging existing applications and business models. And the transition to agentic and physical AI will be revolutionary, giving rise to new applications, companies, products and services.” –Jensen Huang
Nvidia putting up massive sales numbers is not, in and of itself, evidence for or against an AI bubble.
A bubble needs irrationality, whether in valuations or in earnings. Nvidia came into this report trading at its lowest valuation relative to the S&P 500 since June (a forward price-to-earnings premium of less than 13%). So the valuation bubble argument isn’t of particular relevance to Nvidia at this juncture. Of greater concern is the potential for an “earnings bubble”: that is, that Nvidia is benefiting from spending that ultimately won’t make much sense from its customers’ perspective, spending that’s poised to retrench sharply once they figure that out.
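For readers who want the arithmetic behind that premium figure, here’s a minimal sketch of how a forward P/E premium is computed. All the inputs are hypothetical placeholders chosen to land near a 12% premium, not the actual multiples at the time of the report.

```python
# Minimal sketch: computing a forward P/E premium vs. the S&P 500.
# All inputs are hypothetical placeholders, not actual market data.

nvda_price = 180.00     # Nvidia share price (hypothetical)
nvda_fwd_eps = 7.30     # consensus forward EPS (hypothetical)

spx_level = 6700.0      # S&P 500 index level (hypothetical)
spx_fwd_eps = 305.0     # aggregate forward EPS for the index (hypothetical)

nvda_fwd_pe = nvda_price / nvda_fwd_eps   # ~24.7x
spx_fwd_pe = spx_level / spx_fwd_eps      # ~22.0x

# Premium = how much richer Nvidia's multiple is than the index's.
premium = nvda_fwd_pe / spx_fwd_pe - 1    # ~0.12, i.e., about a 12% premium

print(f"Nvidia forward P/E: {nvda_fwd_pe:.1f}x")
print(f"S&P 500 forward P/E: {spx_fwd_pe:.1f}x")
print(f"Premium to the index: {premium:.0%}")
```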
Huang makes the argument that there’s no irrationality here because of the applications AI already has as well as the fresh opportunities it unlocks. In short, he’s saying the big spenders have many reasons to spend big.
Success stories
And to justify that, management talked up their customers’ wins.
“RBC is leveraging agentic AI to drive significant analyst productivity, slashing report generation time from hours to minutes. AI and digital twins are helping Unilever accelerate content creation by 2x and cut costs by 50%. And Salesforce’s engineering team has seen at least 30% productivity increase in new code development after adopting Cursor.” –Colette Kress
What really jumped out, however, was the CEO’s shout-outs to Meta. Mark Zuckerberg’s company has been a major laggard in the AI space as of late. Like the other hyperscalers, it has a massive capital expenditure budget, but unlike that group, it doesn’t have a cloud business. That is, its spending is more “downstream” in nature than its peers’: it relies on AI itself to make money, not on someone else wanting AI compute to make money.
“Meta’s GEM, a foundation model for ad recommendations trained on large-scale GPU clusters, exemplifies this shift. In Q2, Meta reported over a 5% increase in ad conversions on Instagram and 3% gain on Facebook feed, driven by generative AI-based GEM.” –Jensen Huang
His underlying message: everyone’s AI spending pays dividends, even if the market isn’t rewarding it at this moment.
Burry buried
Michael Burry of “The Big Short” fame recently raised concerns about whether Nvidia’s customers are understating depreciation, arguing that the GPUs they’ve bought should be losing value faster than the balance sheets of these buyers suggest.
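To see why the useful-life assumption matters so much, here’s a minimal sketch of straight-line depreciation under two different life assumptions. The dollar figure is hypothetical, not any company’s actual spend.

```python
# Minimal sketch: straight-line depreciation under two useful-life assumptions.
# The capex figure is a hypothetical placeholder, not any company's actual spend.

gpu_capex = 30e9  # $30 billion of GPU purchases (hypothetical)

for useful_life_years in (3, 6):
    # Straight-line: annual expense = cost / assumed useful life.
    annual_depreciation = gpu_capex / useful_life_years
    print(f"{useful_life_years}-year life: "
          f"${annual_depreciation / 1e9:.1f}B annual depreciation expense")

# Output: $10.0B/year at a 3-year life vs. $5.0B/year at a 6-year life.
# Doubling the assumed life halves the annual expense hitting reported
# earnings, which is the crux of Burry's understated-depreciation argument.
```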
Nvidia’s CFO came not to praise Burry, but to bury him:
“The long useful life of Nvidia’s CUDA GPUs is a significant TCO [total cost of ownership] advantage over accelerators. CUDA’s compatibility and our massive installed base extend the life of NVIDIA systems well beyond their original estimated useful life.
For more than two decades, we have optimized the CUDA ecosystem, improving existing workloads, accelerating new ones, and increasing throughput with every software release. Most accelerators without CUDA and Nvidia’s time-tested and versatile architecture became obsolete within a few years as model technologies evolve. Thanks to CUDA, the A100 GPUs we shipped six years ago are still running at full utilization today, powered by vastly improved software stack.” –Colette Kress
As an aside, the idea that Nvidia’s chips remain very useful for a long time is something that, on the surface, seems much better for Nvidia’s customers than for Nvidia’s own sales outlook. But it’s really hard to nitpick that given how much demand is in the pipeline.
Growth runway
One nagging fear about this reporting period was that Nvidia had already given investors the good news: in late October, Huang said the company had more than $500 billion in orders for its flagship chips through 2026.
When a company is growing this much, this fast, it’s reasonable to ask questions about how much more of an appetite there is out there to be sated.
And Nvidia has answers, both on how big it expects the AI market to get and how much demand it thinks it’ll realize in the near term.
“We believe Nvidia will be the superior choice for the $3 trillion to $4 trillion in annual AI infrastructure build we estimate by the end of the decade.” –Colette Kress
“For example, just even today, our announcements with KSA, and that agreement in itself is 400,000 to 600,000 more GPUs over three years. Anthropic is also net new. So there’s definitely an opportunity for us to have more on top of the $500 billion that we announced.” –Colette Kress
Off the chain
Having all this demand is one thing; meeting it is another. Such worries have been on the rise, with memory chip prices spiking and Huang recently asking TSMC to boost production.
Nvidia’s Blackwell ramp was not exactly seamless. But...
“Our ecosystem will be ready for a fast Rubin ramp.” –Colette Kress
Nvidia’s answer, in short, is that its supply chains are decades in the making, and everyone wants to work with the leader in the space.
“Our supply chain has been working with us for a very long time. And so in many cases, we’ve secured a lot of supply for ourselves, because obviously, they’re working with the largest company in the world in doing so.” –Jensen Huang
“The supply chain, we have much better visibility and control over it, because obviously we’re incredibly good at managing our supply chain. We have great partners that we’ve worked with for 33 years. And so the supply chain part of it, we’re quite confident. Now looking down our supply chain, we’ve now established partnerships with so many players in land and power and shell, and of course financing. These things — none of these things are easy, but they’re all tractable and they’re all solvable things. And the most important thing that we have to do is do a good job planning. We plan up the supply chain, down the supply chain. We’ve established a whole lot of partners. And so we have a lot of routes to market.” –Jensen Huang
Of marginal concern
Meeting high demand at a time of increasing pressures up and down the supply chain had some analysts worried about the outlook for Nvidia’s profitability.
Expect more of the same, management said.
“Earlier this year, we indicated that through cost improvements and mix that we would exit the year in our gross margins in the mid-70s. We achieved that and getting ready to also execute that in Q4. So now it’s time for us to communicate where are we working right now in terms of next year. Next year, there are input prices that are well known in industries that we need to work through... So we’re taking all of that into account, but we do believe if we look at working again on cost improvements, cycle time, and mix, that we will work to try and hold at our gross margins in the mid-70s.” –Colette Kress
First among unequals
The last question Nvidia faced on the conference call related to the competitive threat posed by custom chips (or ASICs). Google’s recently released Gemini 3 model, for instance, was trained using its in-house TPU chips, which offer some cost advantages, particularly on the price of input tokens, as well as power efficiencies.
While Huang spent much of the call serving as an ambassador for AI, he is, of course, first and foremost an advocate for Nvidia’s AI solutions.
He didn’t take on the question of GPUs vs. ASICs directly. Instead, he offered the following arguments in favor of Nvidia-centric systems.
“Back in the Hopper day and the Ampere days, we would build one GPU. That’s the definition of an accelerated AI system. But today, we’ve got to build entire racks, entire — three different types of switches, a scale-up, a scale-out, and a scale-across switch. And it takes a lot more than one chip to build a compute node anymore.” –Jensen Huang
Translation: we’re not in Kansas anymore. We’re not just focused on making the best chip, but also the best total package.
And when wrapping up his list of the five things that make Nvidia special, Huang said:
“The most important thing, the fifth thing, is if you are a cloud service provider, if you’re a new company like Humain, if you’re a new company like CoreWeave or Nscale or Nebius, or OCI for that matter, the reason why Nvidia is the best platform for you is because our offtake is so diverse. We can help you with offtake. It’s not about just putting a random ASIC into a data center.”
Simply put, it’s easier to sell capacity built on Nvidia’s architecture because its CUDA software is ubiquitous in high-performance computing.
