Michael Burry has some concerns about AI accounting
Not enough appreciation for depreciation, per the “Big Short” investor.
Michael Burry thinks there’s not enough appreciation for depreciation.
The investor of “Big Short” fame posted on X on Thursday, taking aim at the way Oracle and Meta handle accounting for their GPUs.
Understating depreciation by extending useful life of assets artificially boosts earnings -one of the more common frauds of the modern era.
Massively ramping capex through purchase of Nvidia chips/servers on a 2-3 yr product cycle should not result in the extension of useful… pic.twitter.com/h0QkktMeUB
— Cassandra Unchained (@michaeljburry) November 10, 2025
First, for some background and context:
A capex boom is a big reason behind the surging S&P 500 profit growth. Companies spending hundreds of billions on data centers don’t count that money as an expense immediately; rather, they record the cost over time as the equipment is used (the depreciation to which Burry refers). Meanwhile, that spending immediately becomes revenue for other companies. Ergo, any capex binge is a nitrous oxide boost for Corporate America’s bottom line.
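To make the mechanics concrete, here is a minimal sketch of straight-line depreciation with hypothetical round numbers (the $20 billion fleet and $6 billion of pre-depreciation operating income are assumptions for illustration, not any company’s actual figures): stretching the assumed useful life of the same spend shrinks the annual charge and lifts reported earnings.

```python
# Hypothetical illustration of straight-line depreciation; not any company's actual figures.
CAPEX = 20_000_000_000                     # assumed $20B spent on GPUs/servers
OPERATING_INCOME_PRE_DEP = 6_000_000_000   # assumed earnings before the depreciation charge

def annual_depreciation(capex: float, useful_life_years: int) -> float:
    """Straight-line depreciation: spread the cost evenly over the assumed useful life."""
    return capex / useful_life_years

for life in (3, 5, 6):
    dep = annual_depreciation(CAPEX, life)
    reported = OPERATING_INCOME_PRE_DEP - dep
    print(f"{life}-year life: depreciation ${dep/1e9:.1f}B -> reported operating income ${reported/1e9:.1f}B")

# 3-year life: depreciation $6.7B -> reported operating income $-0.7B
# 5-year life: depreciation $4.0B -> reported operating income $2.0B
# 6-year life: depreciation $3.3B -> reported operating income $2.7B
```

The cash out the door is identical in every row; only the assumed life, and with it the reported profit, changes. That is the lever Burry is pointing at.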
The estimated “useful life” of AI servers for the publicly-traded hyperscalers is about 5 to 6 years. Their useful economic life — how long they’re actually being used to help make money — may be longer or shorter than that.
Not all chip usage is created equal: training imposes a much larger strain than inference. Tech companies have argued that their chips effectively get a second life by being repurposed from training to inference, a handoff meant to coincide with new flagship models being introduced and put toward the rigors of training. (This line of thinking makes you nod along when you see that Microsoft contracted out some of its AI training needs to GPUs owned by Nebius.)
Let’s evaluate Burry’s argument using the evidence available, and note what’s not available.
Yes, the depreciation schedule for servers does not align with the product cycle for flagship chips. But also... there’s no hard-and-fast reason why they should. In sports parlance, your third-best wide receiver this year may have been your best wide receiver four years ago. That’s not stopping him from contributing to the team’s success, albeit in a diminished role. The key dynamic to track here is whether improvements in power efficiency, as newer models get introduced, are what drive obsolescence.
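For a rough sense of that dynamic, here’s a back-of-the-envelope sketch in which every figure is assumed for illustration (the $0.08/kWh power price, 700W draw, and $1.50/hr rent are placeholders, not data from this article): an older chip keeps earning its keep as long as the rent it can command clears its marginal running cost, and obsolescence bites only once newer, more power-efficient chips push that rent below the power bill.

```python
# Back-of-the-envelope sketch; every number below is an assumption for illustration.
ELECTRICITY_PER_KWH = 0.08   # assumed all-in power price, $/kWh
HOURS_PER_MONTH = 730

def monthly_power_cost(watts: float) -> float:
    """Rough marginal cost of keeping one GPU powered on for a month."""
    return watts / 1000 * HOURS_PER_MONTH * ELECTRICITY_PER_KWH

old_gpu_watts = 700            # assumed draw for an older-generation card
old_gpu_rent_per_hour = 1.50   # assumed market rental rate

power_bill = monthly_power_cost(old_gpu_watts)
rent = old_gpu_rent_per_hour * HOURS_PER_MONTH
print(f"monthly power cost ~${power_bill:.0f} vs monthly rent ~${rent:.0f}")
# As long as the achievable rent clears the running cost, the aging chip still
# contributes; it becomes obsolete only when efficiency gains elsewhere push
# its market rent below that floor.
```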
Chips seem to command less money as they age. Silicon Data’s indexes that track rental rates for Nvidia’s Hopper and Ampere GPUs are trending downwards.
On the other hand, company-specific reports from industry bellwethers muddy the above waters, and suggest older chips are still very much in demand:
From Nebius chief revenue officer Marc Boroditsky during today’s earnings call: “An interesting set of dynamics that we're experiencing is that as customers come to their renewal for Hoppers, or if they're looking to upgrade to, say, Blackwells, in both cases we're typically selling them immediately, and often at better pricing than they were previously priced, as we're actually in tandem rolling out the Blackwell.”
From CoreWeave CEO Michael Intrator during Monday’s conference call: “In Q3, we saw our first 10,000-plus H100 contract approaching expiration. Two quarters in advance, the customer proactively recontracted for the infrastructure at a price within 5% of the original agreement. This is a powerful indicator of customer satisfaction as well as the long-term utility and differentiated value of the GPUs run on CoreWeave's platform.”
Heck, even The Information’s report on Oracle’s tiny margins renting out access to Nvidia’s chips (which briefly shook the stock) included this tidbit: “One silver lining in Oracle’s GPU business is the amount of revenue it is generating from older generations of Nvidia chips, such as the Ampere chips that came out in 2020. Those chips appear to be helping Oracle’s margins, while newer versions of Nvidia chips strain them.”
Just because A100s have been able to stand the test of time doesn’t mean future generations of chips will. Recall, for instance, how Nvidia’s Blackwell ramp was delayed because of overheating issues. Perhaps that’s something that impacts the longevity of these chips in the field. Or not. We really don’t know.
The proof, ultimately, will be in the cash flows over time, or the lack thereof, and in how the answers to these questions play out.
Are consumers and businesses willing to pay for a non-flagship level of AI compute for certain tasks? Early evidence suggests yes.
Are chips physically able to hold up to their workloads for a five-plus-year period? Early evidence also hints at yes.
Do changes in which tasks GPUs are being asked to perform radically alter the overall ROI on all this spending? It’s too early to tell.
If you’re looking for a more pointed and cutting critique than Burry’s broad hand-wave in the direction of accounting shenanigans, fellow short seller Jim Chanos has you covered:
As the AI DC bulls now try to convince you to extend depreciable lives on GPU’s today, consider this: $CRWV’s 3Q annualized Adj EBITDA was $3.4B, and annualized interest was $1.2B. Using 10-year life(!) on their $20.0B of est. GPU’s ($2.0B) means they are still barely profitable.
— James Chanos (@RealJimChanos) November 11, 2025
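Chanos’s arithmetic is easy to replay. In the sketch below, the dollar figures come straight from his post; the shorter-life scenarios are added here for illustration, using the 5-to-6-year range cited above for the hyperscalers.

```python
# Dollar figures from Chanos's post (annualized Q3 Adj EBITDA, annualized interest,
# estimated $20B of GPUs). The useful-life scenarios are illustrative.
EBITDA = 3.4      # $B, annualized Q3 Adj EBITDA
INTEREST = 1.2    # $B, annualized interest expense
GPU_BASE = 20.0   # $B, estimated GPU fleet

for life in (10, 6, 5):
    depreciation = GPU_BASE / life
    pretax = EBITDA - INTEREST - depreciation   # rough pretax income, ignoring other items
    print(f"{life}-yr life: depreciation ${depreciation:.1f}B -> pretax ${pretax:+.1f}B")

# 10-yr life: depreciation $2.0B -> pretax $+0.2B
# 6-yr life:  depreciation $3.3B -> pretax $-1.1B
# 5-yr life:  depreciation $4.0B -> pretax $-1.8B
```

At a 10-year life the numbers squeak out a profit; at the 5-to-6-year lives discussed above, the same figures swing well into the red.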
