“Did 2025 end badly for OpenAI?” is the wrong question. Here are the two questions that do matter.
Whether OpenAI learned from the mistakes that punished all of its competitors in late 2025 — and whether those missteps were actually advantages — will determine where the OG LLM kingpin goes from here.
Stop me if you’ve heard this before, but OpenAI was really scuffling in the back half of 2025.
First it was Gemini 3’s release taking the wind out of OpenAI’s sails, then Anthropic’s coding and agentic tools threatening the ChatGPT maker’s enterprise sales.
That and more (but not much more) is confirmed in a recent Wall Street Journal article, which, citing people familiar with OpenAI’s financial situation, details how the company missed internal user and revenue targets in 2025 as well as “multiple” monthly revenue targets this year.
“Has OpenAI been doing less well than it hoped?” is a question with an easy answer. Which likely makes it the wrong question to ask.
To borrow from an infamous Reddit post: “Don’t even ask the question. The answer is yes, it’s priced in.”
We spent the final two months of 2025 punishing stocks for being close to OpenAI, and those names remained in the penalty box until April 2026, lagging the Nasdaq 100 and significantly trailing Google-linked stocks. On Hyperliquid, OpenAI perpetual futures were flat from late November through late February, when the company announced its long-awaited $110 billion funding round at a pre-money valuation of $730 billion.
Some better questions whose answers might illuminate the path forward:
1) Does OpenAI understand why it lost market share among enterprises, and has it done anything about it?

2) Can OpenAI compete on quality, and does it even need to?
For 1), the answer largely appears to be “yes.”
Most of OpenAI’s internal and external communiqués in 2026 have taken care to spotlight the growth of Codex (its AI coding tool) and how enterprise revenues are gaining ground on consumer sales within the firm. Notwithstanding the bizarre foray into purchasing TBPN, this appears to be a company doing a better job of balancing enterprise depth with its consumer breadth. In other words, it’s offering more robust competition for the ground it had previously been ceding.
Add to that one cliché often bandied about on sportsball talk radio when discussing injury-riddled teams: the best ability is availability. OpenAI has sought to make this a key differentiating and selling point relative to Anthropic. The Claude developer has been bedeviled by complaints about use limits and is in the midst of a mad scramble for compute that’s seen it strike or expand deals with CoreWeave, Amazon, Google, and Broadcom over the past month.
In its response to the WSJ article, OpenAI called its compute strategy “the great enabler,” saying that “the moves we made (and got criticized for) to lock up massive supply has been proven right and are giving us the ability to deliver a better product experience to our customers.”
Which brings us to 2). Just because something is in supply does not mean there will be demand. I’m not qualified to judge how good or bad AI tools are, but SemiAnalysis (famous, or infamous, for shilling Claude) certainly is. From their recent report:

“We’ve been testing GPT-5.5 as part of an alpha program with OpenAI the past few weeks. We think GPT-5.5 is a significant improvement within Codex specifically. Previously, ~all our engineers used Claude exclusively, and use of ChatGPT models for coding was restricted to wrappers like Cursor. Now, most of our engineers switch between Codex and Claude models depending on the task and IDE preference.”
Gemini 3 and Claude Code/Cowork received rave reviews — by my subjective temperature check on public opinion, better than anything OpenAI’s garnered in years.
But all OpenAI really needs to show is that its tools, like theirs, are powerful enough to be counted on to help solve business problems.
Commoditization might sound like a bit of a dirty word, or devaluing the impact of a potentially revolutionary technology. But at its essence, all we’re describing here is the ability of AI tools to produce a (roughly) standardized and reliable output: you don’t think twice about whether the gas you’re putting in your car at Exxon Mobil will be any better or worse than Shell’s. Both get you where you want to be.
To tie these two points together: if Exxon Mobil is closed and Shell is open, well, then, there’s really no choice for whose fuel you’ll be using.
In “The Lion King,” Rafiki tells an adolescent Simba, “The past can hurt. But the way I see it, you can either run from it, or learn from it.”
The past is hurting today, with OpenAI perpetuals down about 5% over the prior 24 hours. But if the company’s product development and compute accumulation strategy have put it in a position to capitalize on seemingly voracious end-user demand, then it’ll be a lesson well worth learning — and in fact, one it already has.
But if OpenAI’s inability to hit revenue targets is the latest in a series of proof points about perceived product shortcomings, then all the AI compute in the world won’t fix it, and the cash burn used to put that compute in place will raise pertinent and pointed new questions about the viability of its operations.
