The trillion-dollar mystery surrounding DeepSeek’s Nvidia GPUs

There’s a cloud of suspicion hanging over the type and number of Nvidia GPUs DeepSeek used to train its R1 models.

At the center of the story of DeepSeek’s breakthrough with its R1 models lies the Nvidia hardware that powered the servers used to train them.

In December 2024, DeepSeek researchers released a paper outlining the development and capabilities of the new DeepSeek-V3 large language model. In the paper, the researchers said they trained their powerful, efficient model using 2.78 million GPU hours of computing time on a cluster of just 2,048 Nvidia H800 GPUs. That is a very small number of GPUs for a model that matched or beat OpenAI’s state-of-the-art o1 model on some benchmarks.

For comparison, Meta trained its Llama 3.1 models on two clusters, using a total of 39.3 million GPU hours across 49,152 Nvidia H100 GPUs. Last week, Mark Zuckerberg said Meta plans to end 2025 with over 1.3 million GPUs.
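To put those two disclosed training runs side by side, here’s a back-of-the-envelope sketch (a hypothetical calculation using only the figures reported above, and assuming every GPU ran continuously, which real training runs never quite achieve):

```python
# Rough comparison of the two disclosed training runs, using the
# GPU-hour and cluster-size figures reported above. Assumes 100%
# continuous utilization, so the wall-clock numbers are lower bounds.
# Note: H800 and H100 hours are not equivalent chip-for-chip; this
# compares raw GPU-hours only.

deepseek_gpu_hours = 2.78e6   # DeepSeek-V3 (December 2024 paper)
deepseek_gpus = 2_048         # Nvidia H800s

llama_gpu_hours = 39.3e6      # Llama 3.1 (per Meta)
llama_gpus = 49_152           # Nvidia H100s

# How many times more total compute Meta's run consumed.
print(f"Compute ratio: {llama_gpu_hours / deepseek_gpu_hours:.1f}x")  # ~14.1x

# Implied wall-clock training time for each cluster.
for name, hours, gpus in [("DeepSeek-V3", deepseek_gpu_hours, deepseek_gpus),
                          ("Llama 3.1", llama_gpu_hours, llama_gpus)]:
    days = hours / gpus / 24
    print(f"{name}: ~{days:.0f} days on {gpus:,} GPUs")
```

By that crude math, DeepSeek’s disclosed run used roughly one-fourteenth the total compute of Llama 3.1’s, even though its much smaller cluster implies a longer wall-clock run (about 57 days versus 33).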

Released in 2023, the H800 is a GPU that’s similar to the H100 but tailored for the Chinese market to comply with the national-security export controls the Biden administration rolled out in 2022. Reuters reported that the main change Nvidia made in the H800 was that it “reduced the chip-to-chip data transfer rate to about half the rate” of the H100.

But The Wall Street Journal reported that government officials found the H800 exploited a technical loophole: it met the letter of the ban’s requirements while still giving Chinese buyers very powerful AI chips. To close the loophole, the US government banned the export of H800s as well in October 2023.

It appears that DeepSeek was able to acquire its H800s during that short window of availability.

DeepSeek’s claims are drawing suspicion from some observers in the AI industry, though so far that suspicion amounts to speculation. Scale AI CEO Alexandr Wang told CNBC that he suspected DeepSeek has “about 50,000 H100s, which they can’t talk about obviously because it is against the export controls that the United States has put in place,” and in a tweet, Elon Musk replied, “Obviously.” Musk, meanwhile, has bragged about xAI’s “Colossus” supercluster, which is powered by 100,000 H100 GPUs, and says he plans to scale it up to 1 million of the expensive Nvidia chips.

There have been reports of H100s being smuggled into China through a series of intermediaries on the black market, but there is no evidence that DeepSeek acquired chips that way.

Adding to the confusion, DeepSeek cofounder Liang Wenfeng said that the company does own a cluster of 10,000 Nvidia A100 GPUs, a cheaper and less powerful AI chip.

The H100 has earned status as one of the most coveted pieces of computer hardware of the AI age. Even when other chips are used, computing power is sometimes expressed as a number of “H100-equivalent” GPUs.

Nvidia is in the process of rolling out its next-gen Blackwell GPUs, and last year CEO Jensen Huang hand-delivered the first DGX H200 server to OpenAI’s headquarters.

More Tech

Meta announces new Texas data center, partnership with Arm

Meta announced today it’s breaking ground on a new “AI-optimized” data center in El Paso, Texas, that will scale to 1GW. That’s not to be confused with the city-sized AI data center it’s building in Louisiana, which is expected to scale to 5GW.

In other Meta AI data center news, Reuters reports that Meta is also partnering with chip tech provider Arm Holdings for “data center platforms to power its AI ranking and recommendation systems, which are key to discovery and personalization across its apps.” The partnership also likely represents an effort to diversify away from Nvidia chips.

Meta is expected to spend up to $72 billion in capex this year as it amps up AI-related infrastructure projects.

Report: OpenAI scrambles to find new revenue in its 5-year business plan

After a flurry of enormous (and confusing) deals, OpenAI has committed to spending more than $1 trillion with various partners in the AI ecosystem. Now it has to figure out how to pay for it all.

The Financial Times has some details of OpenAI’s five-year business plan and how it’s exploring “creative” ideas to secure more capital.

Among the elements of the plan:

OpenAI is currently pulling in $13 billion in annual recurring revenue, with 70% of that coming from consumer ChatGPT subscriptions, according to the report. But it also expects to burn $115 billion through 2029.

Google’s Waymo plans to launch autonomous rides in London next year

This marks the company’s second international expansion after Tokyo.
