
OpenAI backtracks on plan to mothball old AI models after user outcry

OpenAI CEO Sam Altman spent the weekend on social media trying to quell an uproar from dedicated users.

After a splashy (but rocky) rollout last week of OpenAI’s latest flagship model, GPT-5, users pushed back loudly on the company’s plan to deprecate older models in favor of the new release.

CEO Sam Altman’s weekend was a little hectic.

During the livestream announcing the release, the plan to mothball the old models was mentioned in passing, as an employee ran a demo showing how GPT-5 could write a “eulogy” to the models that were marked for death.

A spokesperson from OpenAI confirmed to Sherwood News that the plan was to replace the prior models with GPT-5, sparing users a confusing choice over which model is best suited for their task. The problem is that people have very strong feelings for the previous leading model, 4o.

Users pushed back online, and in a series of social media posts following Thursday’s launch, Altman folded. In addition to keeping 4o around for ChatGPT Plus users, OpenAI will double the “rate limits” (the maximum number of requests you can make in a period of time) for GPT-5.

Damage control

To quell the uproar, at 11 a.m. PT on Friday, Altman jumped on Reddit with eight other OpenAI employees to hold an “AMA” (ask me anything) on the ChatGPT subreddit. Altman addressed the embarrassing charts that made it into the livestream. In response to the question, “What was up with those graphs? It looked misleading,” Altman wrote:

“the numbers here were accurate but we screwed up the bar charts in the livestream overnight; on another slide we screwed up numbers. the blog post and system card were accurate though. people were working late and were very tired, and human error got in the way. a lot comes together for a livestream in the last hours.”

An hour after the AMA started, and just over 24 hours after the launch, Altman acknowledged the rollout’s “bumpiness” and announced the reversal along with some significant changes to the ChatGPT service.

A few hours later on Friday night, Altman had more to say in a mea culpa, acknowledging that the company “for sure underestimated how much some of the things that people like in GPT-4o matter to them, even if GPT-5 performs better in most ways.”

On Sunday afternoon, Altman posted about the increased rate limits and some upcoming user interface changes.

Altman indicated the company would post an update Monday or Tuesday to “share our thinking on how we are going to make capacity tradeoffs over the coming months.”

tech

Apple stock takes a hit on report it’s pushing back AI Siri features — again

Apple customers may have to wait even longer for the company’s long-awaited AI Siri, Bloomberg reports.

The iPhone maker had been planning to include a number of upgrades to Siri in a March operating system update, but the company is now planning to spread those out over future versions. That means some features first announced in June 2024 — an AI Siri that can tap into personal data and on-screen content — might not arrive until September with iOS 27.

The postponements happened after “testing uncovered fresh problems with the software,” Bloomberg said, including instances where Siri didn’t properly process queries or took too long to respond.

The stock, which had been trading up more than 2% today, has pared some of those gains on the news.

For what it’s worth, Apple’s iPhone sales — a record last quarter — don’t appear to be suffering for lack of AI.


tech

Meta breaks ground on massive $10 billion AI data center — and the costs won’t stop there

Meta announced today that it broke ground on a new, giant AI data center: This one is located in Indiana, has 1GW of capacity, and will cost more than $10 billion.

In a press release, the company touted the 4,000 construction jobs and 300 operational positions Meta expects to bring to the area. It did not disclose any tax incentives tied to the project.

But much like with the company’s Hyperion data center in Louisiana, where we calculated incentives in the billions, the number of long-term jobs is likely small relative to any public subsidies the company ultimately receives.

The $10 billion build represents a notable chunk of Meta’s planned $115-$135 billion in capital expenditures this year. And operating costs will add substantially to that total over time.

Earlier this year, Trump warned tech giants to “pay their own way” when it comes to energy, as data centers have driven up electricity costs in some regions. Meta’s announcement appears to anticipate that criticism, dedicating significant space to explaining how it will mitigate the energy and water impact of the facility:

“With all our data centers, we strive to be good neighbors. We pay the full costs for energy used by our data centers and work closely with utilities to plan for our energy needs years in advance to ensure residents aren’t negatively impacted. To help support local families in need, we’re providing $1 million each year for 20 years to the Boone REMC Community Fund to provide direct assistance with energy bills, and funding emergency water utility assistance through The Caring Center. We also pay the full cost of water and wastewater service required to support our data centers. Over the course of this project, Meta will make investments of more than $120 million, toward critical water infrastructure in Lebanon, as well as other public infrastructure improvements including roads, transmission lines and utility upgrades.”

Unlike hyperscalers such as Google and Microsoft, which can offset infrastructure costs by selling cloud capacity to customers, Meta bears those expenses largely on its own. That dynamic could make the economics of AI infrastructure more challenging for the company as its AI spending continues to accelerate.


tech

Humanoid robot maker Apptronik raises $520 million

Apptronik, an Austin, Texas-based robot manufacturer, said it has closed out its Series A fundraising round, raising $520 million. The fundraising is an extension of a $415 million round raised last February, and included investments from Google, Mercedes-Benz, AT&T, and John Deere. Qatar’s state investment firm, QIA, also participated in the fundraising round.

Apptronik makes Apollo, a humanoid robot targeted for warehouse and manufacturing work. The company is one of several US robotics companies that are racing to apply generative-AI breakthroughs to humanoid robots, in anticipation of a new market for robots in homes and workplaces.

tech

Ives: Microsoft and Google’s giant capex plans are worth it

Don’t mind the AI sell-off, says Wedbush Securities analyst Dan Ives, who thinks fears around seemingly unfettered Big Tech capex budgets are unfounded, especially in the case of Microsoft and Google. Together, the two hyperscalers are slated to spend around $300 billion on purchases of property and equipment this year as they double down on AI infrastructure, but he says both have already shown that they can turn the spending into revenue and growth.

“They are reshaping cloud economics around AI-first workloads that carry higher switching costs, deeper customer lock-in, and longer contract durations than before,” Ives wrote, adding that these giant costs will be spread out over time and set the companies up for success in the long run. Per Ives:

“While near-term free cash flow optics remain noisy, the platforms that invest early and at scale are best positioned to capture durable share, pricing power, and ecosystem control as AI workloads mature. Over time, we expect utilization leverage to turn today’s elevated investment into a meaningful driver of long-term value creation.”

Sherwood Media, LLC produces fresh and unique perspectives on topical financial news and is a fully owned subsidiary of Robinhood Markets, Inc., and any views expressed here do not necessarily reflect the views of any other Robinhood affiliate, including Robinhood Markets, Inc., Robinhood Financial LLC, Robinhood Securities, LLC, Robinhood Crypto, LLC, or Robinhood Money, LLC.