Tech
Man and Girl Walking Through Aisle of Computers
(CSA-Printstock/Getty Images)
great scott!

Clash of the titans: Here are the biggest AI data center projects

Hyperion, Colossus, Prometheus, and Stargate. Our guide to the GPUs and gigawatts that make up the largest AI infrastructure projects in the industry.

Jon Keegan
Updated 9/24/25 1:02PM

Nvidia’s $100 billion megadeal with OpenAI to build massive AI data centers filled with Nvidia GPUs sets a new threshold for computing power: the plan describes a partnership to build a staggering 10 gigawatts of capacity.

A gigawatt — a million kilowatts — is increasingly used as a yardstick for the scale of AI infrastructure and the energy consumed by projects like these. According to the Department of Energy, the average US residential home used 10,791 kilowatt-hours (kWh) of electricity in 2022. Based on that figure, 10 gigawatts is roughly enough power to supply 8 million homes for a day.
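The arithmetic behind that comparison can be checked in a few lines of Python. (Strictly speaking, gigawatts measure power and kilowatt-hours measure energy, so the comparison assumes the full 10 gigawatts runs for all 24 hours of a day.)

```python
# Sanity check: how many homes can 10 GW supply for one day?
# Uses the DOE figure cited above: 10,791 kWh per home per year.
GW_TO_KW = 1_000_000                           # a gigawatt is a million kilowatts

power_gw = 10                                   # the Nvidia-OpenAI buildout
energy_per_day_kwh = power_gw * GW_TO_KW * 24   # kWh delivered over 24 hours

home_kwh_per_year = 10_791                      # DOE 2022 average residential usage
home_kwh_per_day = home_kwh_per_year / 365      # ~29.6 kWh per home per day

homes = energy_per_day_kwh / home_kwh_per_day
print(f"{homes / 1e6:.1f} million homes")       # roughly 8 million
```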

The other key attribute of these titanic data centers is how many thousands of pricey GPUs are filling their racks to the ceiling. Here’s how the highest-profile projects from the top players stack up:



Nvidia and OpenAI partnership

Nvidia says it will invest as much as $100 billion into OpenAI as part of an unprecedented 10-gigawatt data center buildout. Nvidia CEO Jensen Huang told CNBC that the project would be “the biggest AI infrastructure project in history.” Reuters reports that the deal gives Nvidia nonvoting shares for its staged investment in OpenAI, and OpenAI in turn uses that money to purchase the GPUs from Nvidia. Final details of the plan are yet to be announced.

  • Location: ???

  • Power: 10 gigawatts

  • GPUs: 4 million to 5 million

  • Value: $100 billion


OpenAI, Oracle, SoftBank - “Stargate”

Stargate I (OpenAI)

Stargate is the $500 billion OpenAI partnership with Oracle and SoftBank to build a series of massive AI data centers totaling 5.5 gigawatts. The latest announcement expanded the number of locations for the facilities and puts the project ahead of schedule, after earlier doubts had surfaced about the timeline.

  • Locations: Abilene, Texas; Shackelford County, Texas; Doña Ana County, New Mexico; Lordstown, Ohio; Milam County, Texas; and an undisclosed site in the Midwest.

  • Power: 5.5 gigawatts

  • GPUs: 2 million

  • Value: $500 billion (over four years)


Meta - “Hyperion”

(Meta)

Meta’s massive city-sized data center in Richland Parish, Louisiana, is known as Hyperion. CEO Mark Zuckerberg said it will scale up to 5 gigawatts over several years.

  • Location: Richland Parish, Louisiana

  • Power: Scaling up to 5 gigawatts

  • GPUs: ???

  • Value: $10 billion... or is it $50 billion?


Meta - “Prometheus”

Meta is also building Prometheus, a data center in New Albany, Ohio, due to come online next year. It’s the first of the “titan clusters” that Meta has planned to power its mission to achieve “superintelligence.”

  • Location: New Albany, Ohio

  • Power: 1 gigawatt “plus”

  • GPUs: 500,000 (estimated)

  • Value: ???


xAI - “Colossus”

xAI “Colossus” data center
xAI’s Colossus data center (Steve Jones/Southwings)

Built in a record 122 days, Colossus, a data center in South Memphis, Tennessee, is used to train and run xAI CEO Elon Musk’s Grok AI. Watchdog groups have taken issue with pollution from 35 portable methane gas turbines, operated without permits through a regulatory loophole.

  • Location: South Memphis, Tennessee

  • Power: 300 megawatts

  • GPUs: 230,000

  • Value: $3 billion (estimated)


xAI - “Colossus II”

Inside the Colossus II data center (@elonmusk on X)

Colossus II is xAI’s second Memphis-area data center, currently under construction. Musk said it will use 550,000 GPUs.

  • Location: South Memphis, Tennessee

  • Power: 1 gigawatt

  • GPUs: 550,000

  • Value: ???


Amazon, Anthropic - “Project Rainier”

Amazon Project Rainier
Amazon’s Project Rainier (Amazon)

Running on Amazon’s custom Trainium2 chips, Project Rainier is a cluster of 30 data centers at one site in Indiana. It’s used to power Amazon partner Anthropic’s AI services.

  • Location: St. Joseph County, Indiana

  • Power: 2.2 gigawatts

  • GPUs: “Hundreds of thousands”

  • Value: $11 billion


Microsoft - “Fairwater”

Microsoft CEO Satya Nadella called Fairwater, a new Wisconsin data center, “a seamless cluster of hundreds of thousands of NVIDIA GB200s, connected by enough fiber to circle the Earth 4.5 times.” Ninety percent of the facility will use a “state-of-the-art closed-loop liquid cooling system” that greatly reduces its water use.

Microsoft’s “Fairwater” data center (Microsoft)

  • Location: Mount Pleasant, Wisconsin

  • Power: ???

  • GPUs: “Hundreds of thousands”

  • Value: Initial $3.3 billion; adding a $4 billion expansion

Update (September 23): Corrected definition of gigawatts as a million kilowatts, not a thousand kilowatts.

Update (September 24): Updated details about the Stargate project following a new announcement from OpenAI.


Tesla and SpaceX to jointly run “most epic chip-building exercise in history by far”

In the latest instance that Elon Musk views Tesla and SpaceX as effectively one company, the CEO of both announced Saturday that the two firms will join forces on his Terafab project — what Musk says will be “the most epic chip-building exercise in history by far.”

Many of the details mirror what we reported last week, with one major addition: SpaceX will play a leading role.

Terafab, planned for the north campus of Tesla’s Gigafactory Texas, aims to vertically integrate the entire chipmaking process, from design and fabrication to testing and packaging. The goal is to supply AI chips to Tesla, SpaceX, and its subsidiary xAI, Musk’s AI company, whose chip suppliers Musk said will be unable to meet demand in “three or four years.” While Tesla has designed its own chips, it has never manufactured them.

Musk said the facility is intended to produce up to 1 terawatt of compute annually. The plant will manufacture two types of chips: inference chips for Tesla’s robotaxis and Optimus robots, and custom AI chips intended for space-based applications like solar-powered AI satellites. According to Musk, roughly 80% of the compute will be allocated to space-related uses, with the remaining 20% supporting projects on Earth.

Morgan Stanley has estimated the project could cost Tesla an additional $35 billion to $45 billion in capital expenditures, though some of that capex might now be shared with SpaceX. Like many of Musk’s ambitions, the project is enormous in scale and will likely take years to complete, potentially stretching to the end of the decade or beyond.

Jon Keegan

White House releases AI legislative framework

The White House has released its policy wish list for AI legislation — and what it wants excluded.

Still, the odds of any actual AI regulation getting passed in Congress right now are very slim.

The “National Policy Framework” for AI lays out seven issues that the Trump administration wants to see reflected in any congressional action around AI.

The items listed in the framework include:

  • Child safety protections, age verification, and parental controls for AI.

  • Data center projects should voluntarily pay their own way when it comes to power, though incentives should still be encouraged.

  • Copyright laws should allow for training models on copyrighted works, while protecting individuals’ voice and likeness.

  • Free speech should be defended for AI systems, preventing the government from pressuring companies to ban or alter content based on partisan agendas.

  • A light regulatory touch to encourage innovation, and no new federal agency to regulate AI.

  • American workers vulnerable to AI job replacement should be retrained and supported.

  • Federal AI rules should preempt any state AI legislation to prevent a patchwork of laws that companies would hate.

The policy list is the latest in a series of proposals from the AI-friendly Trump administration.


Jon Keegan

WSJ: OpenAI rolling everything into one desktop “superapp”

OpenAI is trying to eliminate distractions and focus on building AI that helps with enterprise productivity tasks like coding and organizing spreadsheets.

As part of that effort, the startup is consolidating some of its side quests into one superapp, according to a report from The Wall Street Journal.

The plan is to merge ChatGPT, Codex, and the Atlas browser as OpenAI sharpens its focus in the race with Anthropic and Google for lucrative enterprise customers.

OpenAI Head of Apps Fidji Simo told staffers in an internal memo that “we realized we were spreading our efforts across too many apps and stacks, and that we need to simplify our efforts. That fragmentation has been slowing us down and making it harder to hit the quality bar we want,” per the report.



Sherwood Media, LLC produces fresh and unique perspectives on topical financial news and is a fully owned subsidiary of Robinhood Markets, Inc., and any views expressed here do not necessarily reflect the views of any other Robinhood affiliate, including Robinhood Markets, Inc., Robinhood Financial LLC, Robinhood Securities, LLC, Robinhood Crypto, LLC, Robinhood Derivatives, LLC, or Robinhood Money, LLC. Futures and event contracts are offered through Robinhood Derivatives, LLC.