(I-Hwa Cheng/Getty Images)
top of the flops

Hoppers, Blackwells, and Rubins: A field guide to the complicated world of Nvidia’s AI hardware

It’s common knowledge that Nvidia is at the core of the AI boom, but understanding what makes a “superchip” or why a NVL72 rack costs millions takes a bit of work.

Jon Keegan

No company has played a more central role in the current AI boom than Nvidia. It designed the chips, networking gear, and software that helped train today’s large language models and scale generative-AI products like ChatGPT to billions of users.

Understanding Nvidia’s AI hardware offerings, even for the tech savvy, can be challenging. While many of the biggest tech companies are hard at work building their own custom silicon to give them an edge in the ultracompetitive AI market, you will find Nvidia’s AI hardware powering pretty much every big AI data center out there today.

Some estimates have Nvidia owning as much as 98% of the data center GPU market. This has fueled the company’s meteoric rise to become one of the world’s largest companies. 

A chip by any other name...

To start understanding the landscape of Nvidia’s chips, it helps to know what each generation is called and which chips were released under it. Going all the way back to 1999, Nvidia has named its chip architectures after famous figures from science and mathematics.

Earlier generations of Nvidia’s chip architecture powered the rise of advanced video graphics cards (in case you didn’t know, GPU stands for graphics processing unit) that helped propel the video game industry to new heights, but GPUs’ ability to run massively parallel vector math turned out to make them perfectly suited for AI.

The hot H100

The breakout star of Nvidia’s hardware lineup was undoubtedly the most powerful Hopper-series chip, the H100 Tensor Core GPU. Announced in April 2022, this GPU was a breakthrough that featured the new “Transformer Engine,” a dedicated accelerator for the kinds of processing large language models rely on for both training and “inference” (running a trained model). It delivered up to a 30x improvement over the previous generation’s fastest chip, the A100.

After OpenAI’s ChatGPT exploded onto the scene, demand for the H100 led tech companies to stockpile hundreds of thousands of the GPUs to build bigger and faster large language models.

The H100s are estimated to cost between $20,000 and $40,000 each.

An Nvidia H100 GPU (Nvidia)

Blackwell “superchip”

In the fast-moving AI industry, while the H100 is still a hot item, the latest chip everyone is turning to is the GB200 — what Nvidia calls the “Grace Blackwell superchip.” This chip combines two Blackwell series B200 GPUs and a “Grace” CPU in one package.

Nvidia CEO Jensen Huang holding a GB200 superchip at the Computex expo (Nvidia)

But if you’re in the market for such powerful AI hardware, it’s likely you want dozens, hundreds, or even thousands of these chips wired up with the fastest interconnects you can get. That’s where the “GB200 NVL72” comes in. The NVL72 rack packs 36 of the GB200 superchips, meaning 36 Grace CPUs and 72 B200 GPUs. Confused yet?

And if you’re going on a GPU shopping spree, you’d better have lined up some VCs with deep pockets. Each GB200 superchip is estimated to cost between $60,000 and $70,000, while a fully equipped NVL72 rack is estimated to cost roughly $3 million, as it requires not only the pricey superchips but also expensive networking and liquid cooling.

If that’s too rich for you, you can always turn to AI investor darling CoreWeave, which advertises access to its batch of GB200 NVL72s starting at $42 per hour. CoreWeave says it has over 250,000 Nvidia GPUs in its data centers.
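The figures above invite some quick back-of-the-envelope math. A minimal sketch, using only the estimates cited in this article (36 superchips per rack, two GPUs per superchip, roughly $3 million per rack, $42 per rack-hour); none of these are official Nvidia or CoreWeave pricing:

```python
# Back-of-the-envelope math on the GB200 NVL72 figures above.
# All numbers are the article's estimates, not official pricing.

SUPERCHIPS_PER_RACK = 36    # GB200 superchips per NVL72 rack
GPUS_PER_SUPERCHIP = 2      # each GB200 pairs two B200 GPUs with one Grace CPU
RACK_COST_USD = 3_000_000   # estimated cost of a fully equipped NVL72 rack
RENTAL_USD_PER_HOUR = 42    # CoreWeave's advertised starting rate per rack-hour

gpus_per_rack = SUPERCHIPS_PER_RACK * GPUS_PER_SUPERCHIP
print(gpus_per_rack)                  # 72 GPUs, hence "NVL72"

rental_per_gpu_hour = RENTAL_USD_PER_HOUR / gpus_per_rack
print(round(rental_per_gpu_hour, 2))  # 0.58 dollars per GPU-hour

# Hours of rental that would equal the rack's estimated purchase price
breakeven_hours = RACK_COST_USD / RENTAL_USD_PER_HOUR
print(round(breakeven_hours))         # 71429 hours, roughly eight years
```

At that rate, renting a rack around the clock for roughly eight years would approach the estimated purchase price of the hardware alone, which is one way to see why the biggest players buy rather than rent.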

Chips within chips

According to Bloomberg, the “Stargate” mega data center project backed by OpenAI, SoftBank, and Oracle is planning on installing 400,000 of the GB200 superchips.

And Meta CEO Mark Zuckerberg has stated that he expects the company to have over 1.3 million GPUs by the end of 2025.
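To put those deployment numbers in perspective, here is a quick sketch multiplying out the figures reported above (400,000 GB200 superchips for Stargate, two B200 GPUs per superchip, $60,000 to $70,000 per superchip); these are the article’s cited estimates, not confirmed pricing:

```python
# Rough scale of the reported Stargate order, using only figures cited above.
STARGATE_SUPERCHIPS = 400_000  # planned GB200 superchips, per Bloomberg
GPUS_PER_SUPERCHIP = 2         # each GB200 pairs two B200 GPUs with a Grace CPU
COST_PER_SUPERCHIP_USD = (60_000, 70_000)  # estimated price range

stargate_gpus = STARGATE_SUPERCHIPS * GPUS_PER_SUPERCHIP
print(f"{stargate_gpus:,} B200 GPUs")  # 800,000 B200 GPUs

low_usd = STARGATE_SUPERCHIPS * COST_PER_SUPERCHIP_USD[0]
high_usd = STARGATE_SUPERCHIPS * COST_PER_SUPERCHIP_USD[1]
print(f"${low_usd / 1e9:.0f}B to ${high_usd / 1e9:.0f}B")  # $24B to $28B
```

And that rough total covers only the superchips themselves, before networking, cooling, buildings, and power.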

Leaps in performance

When you’re talking about leaps forward in AI, it’s important to remember that rather than slow, incremental bumps, each generation of chips makes exponential gains in a metric known as FLOPS (floating-point operations per second), which measures raw compute performance.
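As a toy illustration of how those generational gains compound: the 4x-per-generation factor below is a made-up assumption for the arithmetic, not an Nvidia benchmark figure.

```python
# FLOPS = floating-point operations per second, the standard raw-compute metric.
# Generational multipliers compound geometrically rather than adding up.

def compounded_gain(per_gen_factor: float, generations: int) -> float:
    """Total speedup after a number of generational jumps."""
    return per_gen_factor ** generations

# A hypothetical 4x jump per generation yields 64x after three
# generations (4 * 4 * 4), not 12x (4 + 4 + 4).
print(compounded_gain(4, 3))  # 64.0
```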

Rubin matters

All this Nvidia jargon aside, there’s one model name you should pay attention to: Rubin, which will be the next leap forward in compute power.

Next year we’ll see the first of the Rubin architecture chips, the “Vera Rubin” superchip, named after the American astronomer whose measurements of galaxy rotation provided key evidence for dark matter.

Following the Vera Rubin chip release will be the Vera Rubin NVL144 (144 GPUs) and then Vera Rubin Ultra NVL576 (576 GPUs) in the second half of 2027.

Phew. Got all that?


WSJ: Anduril’s weapons systems have failed during several tests

Autonomous drones by sea, land, and air. Futuristic AI-powered support fighter jets and swarms of networked drones controlled by sophisticated software. These are some of the visions for the future of warfare pitched by defense tech startup Anduril. Cofounded by Oculus founder Palmer Luckey, the Peter Thiel-backed startup has landed some major national security contracts based on this futuristic outlook for battlefield AI.

But according to a report from The Wall Street Journal, the company’s tech is failing key tests in the real world, raising concerns within the military about the viability and safety of Anduril’s systems.

Anduril’s Altius drones proved vulnerable to Russian jamming while deployed in Ukraine and have been pulled from the battlefield, per the report.

More than a dozen sea-based drone ships powered by Anduril’s Lattice command and control software recently shut down during a Navy test, creating a hazard for other vessels in the exercise.

And this summer, during a drone intercept test, Anduril’s counter-drone system crashed and caused a 22-acre fire at a California airport, the report found.

Anduril told the WSJ that the failures are just part of its rapid iterative development process:

“We recognize that our highly iterative model of technology development — moving fast, testing constantly, failing often, refining our work, and doing it all over again — can make the job of our critics easier. That is a risk we accept. We do fail… a lot.”

OpenAI’s partners shouldering $100 billion of debt, taking on all the risk

OpenAI’s ambitious plans for global AI infrastructure projects — like its series of massive Stargate AI data centers — will require tens of billions of dollars funded by debt, but you won’t find much of that on OpenAI’s balance sheet.

According to a new analysis by the Financial Times, OpenAI has somehow convinced its many partners to shoulder at least $100 billion in debt on its behalf, as well as the risks that come with it.

Partners Oracle, SoftBank, CoreWeave, Crusoe, and Blue Owl Capital are all taking on debt in the form of bonds, loans, and credit deals to meet their obligations with OpenAI for infrastructure and computing resources.

Having close ties with OpenAI has weighed on many publicly traded companies in recent weeks. OpenAI’s cash burn and the rise of Google’s Gemini 3 have seemingly darkened its outlook and created guilt by association for many of its close partners and investors. Most notably, Oracle’s aggressive capital expenditure plans to support demand from OpenAI have sparked a sell-off in its stock while widening its credit default swap spreads.

A senior OpenAI executive told the FT: “That’s been kind of the strategy. How does [OpenAI] leverage other people’s balance sheets?”

Chinese tech giants are training their models offshore to sidestep US curbs on Nvidia’s chips

Nvidia can’t sell its best AI chips in the world’s second-largest economy. That’s an Nvidia problem. But it’s also a China problem — and it’s one that the region’s tech giants have resorted to solving by training their AI models overseas, according to a new report from the Financial Times.

Citing two people with direct knowledge of the matter, the FT reported that “Alibaba and ByteDance are among the tech groups training their latest large language models in data centers across south-east Asia.” Clusters of data centers have particularly boomed in Singapore and Malaysia, with many of the sites kitted out with Nvidia’s latest architecture.

One exception, per the FT, is DeepSeek, which continues to be trained domestically, having reportedly built up a stockpile of Nvidia chips before the US export ban came into effect.

Last week, Nvidia spiked on the news that the Trump administration was reportedly considering letting the tech giant sell its best Hopper chips — the generation of chips that preceded Blackwell — to China.

Millie Giles

Alibaba unveils its first AI glasses, taking on Meta directly in the wearables race

Retail and tech giant Alibaba launched its first consumer-ready, AI-powered smart glasses on Thursday, marking its entrance into the growing wearables market.

Announced back in July, the Quark AI glasses just went on sale in the Chinese retailer’s home market, with two versions currently available: the S1, starting at 3,799 Chinese yuan (~$536), and the G1, at 1,899 yuan (~$268) — a considerably lower price than Meta’s $799 Ray-Ban Display glasses, released in September.

Jon Keegan

Musk: Tesla’s Austin Robotaxi fleet to “roughly double” next month, but falls well short of earlier goals

Yesterday, Elon Musk jumped into the replies of a frustrated X user who was complaining that they were unable to book a Robotaxi ride in Austin. Musk aimed to reassure the would-be customer that the company was expanding service in the city:

“The Tesla Robotaxi fleet in Austin should roughly double next month,” Musk wrote.

While that sounds impressive, there are reports that Austin has only 29 vehicles in service.

But last month, Musk said the Robotaxi goal was to have “probably 500 or more in the greater Austin area” by the end of the year.

Meanwhile, Google’s Waymo has more than 100 autonomous taxis running in Austin, and 1,000 more in the San Francisco Bay Area.

Sherwood Media, LLC produces fresh and unique perspectives on topical financial news and is a fully owned subsidiary of Robinhood Markets, Inc., and any views expressed here do not necessarily reflect the views of any other Robinhood affiliate, including Robinhood Markets, Inc., Robinhood Financial LLC, Robinhood Securities, LLC, Robinhood Crypto, LLC, or Robinhood Money, LLC.