Tech
Meta’s chief AI scientist, Yann LeCun (Julien De Rosa/Getty Images)
GP-who?

Just four companies are hoarding tens of billions of dollars worth of Nvidia GPU chips

Each Nvidia H100 can cost up to $40,000, and one big tech company has 350,000 of them.

Jon Keegan

Meta just announced the release of Llama 3.1, the latest iteration of its open source large language model. The long-awaited, jumbo-sized model scores highly on the industry's standard benchmarks, and the company said it beats OpenAI's GPT-4o on some tests.

According to the research paper that accompanies the model release, the 405-billion-parameter version of the model (the largest flavor) was trained using up to 16,000 of Nvidia's popular H100 GPUs. The Nvidia H100 is one of the most expensive, and most coveted, pieces of technology powering the current AI boom. Meta appears to have one of the largest hoards of the powerful GPUs.

Of course, the list of companies seeking such powerful chips for AI training is long, and likely includes most large technology companies today, but only a few companies have publicly crowed about how many H100s they have.  

The H100 is estimated to cost between $20,000 and $40,000, meaning that Meta used up to $640 million worth of hardware to train the model. And that's just a small slice of the Nvidia hardware Meta has been stockpiling. Earlier this year, Meta said that it was aiming to have a stash of 350,000 H100s in its AI training infrastructure — which adds up to over $10 billion worth of the specialized Nvidia chips.
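The back-of-the-envelope math behind those figures is simple to check. A minimal sketch, using only the per-unit price range and GPU counts cited above:

```python
# Rough cost estimates for Nvidia H100 stockpiles, using the
# $20,000-$40,000 per-unit price range cited in the reporting.

H100_PRICE_LOW = 20_000   # USD, low-end estimate per GPU
H100_PRICE_HIGH = 40_000  # USD, high-end estimate per GPU

def cost_range(gpu_count: int) -> tuple[int, int]:
    """Return the (low, high) total hardware cost in USD for a GPU count."""
    return gpu_count * H100_PRICE_LOW, gpu_count * H100_PRICE_HIGH

# Llama 3.1 405B training cluster: up to 16,000 H100s
low, high = cost_range(16_000)
print(f"Training cluster: ${low / 1e6:.0f}M-${high / 1e6:.0f}M")  # $320M-$640M

# Meta's stated target of 350,000 H100s
low, high = cost_range(350_000)
print(f"Meta stockpile: ${low / 1e9:.1f}B-${high / 1e9:.1f}B")  # $7.0B-$14.0B
```

The midpoint of that stockpile range is about $10.5 billion, consistent with the "over $10 billion" figure above.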

Venture capital firm Andreessen Horowitz is reportedly hoarding more than 20,000 of the pricey GPUs, which it is renting out to AI startups in exchange for equity, according to The Information.

Tesla has also been collecting H100s. CEO Elon Musk said on an earnings call in April that Tesla wants to have between 35,000 and 85,000 H100s by the end of the year.

But Musk also needs H100s for X and his AI company xAI. This week, Musk boasted on X that xAI's training cluster is made up of 100,000 H100s.

A tweet from Elon Musk stating that xAI has 100,000 H100 GPUs.
Source: X @elonmusk https://x.com/elonmusk/status/1815325410667749760


Musk was recently sued by Tesla shareholders for allegedly redirecting 12,000 of the H100s intended for the carmaker's AI training infrastructure to xAI instead. When asked about this diversion on yesterday's Tesla Q2 earnings call, Musk said that the GPUs were sent to xAI because "the Tesla data centers were full. There was no place to actually put them."

The H100s are in such demand that people are being paid to sneak them into China to bypass U.S. export controls. You can watch unboxing videos of these graphics cards, and there are even a few for sale on Amazon — including one for $34,749.95 (with free delivery).

OpenAI hasn’t said how many H100s it is sitting on, but The Information reports that the company rents a cluster of processors dedicated to training from Microsoft at a steep discount as part of Microsoft’s $10 billion investment in OpenAI. The training cluster reportedly has the power of 120,000 of Nvidia’s previous-generation A100 GPUs, and OpenAI will spend $5 billion to rent more training clusters from Oracle over the next two years, according to The Information’s report. OpenAI does appear to have a special relationship with Nvidia — in April, Nvidia CEO Jensen Huang “hand-delivered” the first cluster of the company’s next-generation H200 GPUs to co-founders Sam Altman and Greg Brockman.

A tweet by OpenAI’s Greg Brockman with a photo featuring Brockman, OpenAI CEO Sam Altman and Nvidia CEO Jensen Huang
Source: X @gdb https://x.com/gdb/status/1783234941842518414

Nvidia declined to comment for this story, and Meta, X, OpenAI, Tesla, and Andreessen Horowitz did not respond to requests for comment. 

$100B

Each day of the Musk v. Altman trial in Oakland, California, more details of Microsoft’s complicated $13 billion partnership with OpenAI emerge from the courtroom.

Yesterday, Microsoft executive Michael Wetter said that the company has spent over $100 billion on the OpenAI partnership. A big chunk of that came from the fact that Microsoft needed to build the costly infrastructure before OpenAI could use it, according to Wetter.

Microsoft’s investment looks like it was worth it, as OpenAI is currently valued at $852 billion, making Microsoft’s stake worth about $135 billion. OpenAI is planning for an IPO later this year.

Tech

Alphabet’s Waymo to add 200 square miles of coverage area to existing markets

Waymo, a subsidiary of Alphabet, announced today that it’s expanding its coverage area by 200 square miles in several existing markets, including Miami, the San Francisco Bay Area, Houston, Austin, and Atlanta. That will bring its total coverage area to more than 1,400 square miles. The autonomous car service is currently offering public rides in 11 markets, after expanding to Nashville last month.

25%

AI companies are amping up their spending in Washington as they push for federal approval for more data centers and industry-friendly rules regarding their use of copyrighted material, among other asks, The New York Times reports, citing data from nonprofit watchdog Public Citizen. 25% of currently registered federal lobbyists are now involved in pushing AI interests. That’s more than double what it was — 11% — in 2023. Meta, Nvidia, and Alphabet spent $47.8 million combined last year, up 22% from 2024.


Sherwood Media, LLC and Chartr Limited produce fresh and unique perspectives on topical financial news and are fully owned subsidiaries of Robinhood Markets, Inc., and any views expressed here do not necessarily reflect the views of any other Robinhood affiliate, including Robinhood Markets, Inc., Robinhood Financial LLC, Robinhood Securities, LLC, Robinhood Crypto, LLC, Robinhood Money, LLC, Robinhood U.K. Ltd, Robinhood Derivatives, LLC, Robinhood Gold, LLC, Robinhood Asset Management, LLC, Robinhood Credit, Inc., Robinhood Ventures DE, LLC and, where applicable, its managed investment vehicles.