OpenAI CEO Sam Altman (Kevin Dietsch/Getty Images)
Eye on AI

OpenAI releases its long-awaited flagship model, GPT-5

Researchers said the new AI model is “significantly less deceptive” than prior models, as the company tries to shift expectations away from the giant leaps in performance seen earlier in the AI boom.

Jon Keegan
Updated 8/7/25 4:55PM

Looking back to November 2022, when ChatGPT was released to the world as a technical preview, it’s dizzying to think of the incredible progress AI startups and Big Tech companies have made, and the hundreds of billions of dollars they have spent, rapidly training and improving their ever-larger language models in a furious race to the top of benchmark leaderboards.

Today, OpenAI released its latest flagship large language model, GPT-5, and it might mark the end of the first explosive wave of generative AI and the start of a new era where gains are measured in different ways.

Details leaked overnight, but today in a webcast, OpenAI cofounder and CEO Sam Altman debuted the new model, which comes in four flavors:

  • GPT-5: “Designed for logic and multi-step tasks.”

  • GPT-5-mini: “A lightweight version for cost-sensitive applications.”

  • GPT-5-nano: “Optimized for speed and ideal for applications requiring low latency.”

  • GPT-5-chat: “Designed for advanced, natural, multimodal, and context-aware conversations for enterprise applications.”

The new models will be available to free, Plus, Pro, and Team users today. ChatGPT Enterprise and Edu users will gain access on August 14, according to OpenAI’s website. Executives highlighted GPT-5’s strengths in code generation, math, physics, healthcare, and the ability to dive deeper into a problem when needed without the user having to choose that ahead of time. OpenAI said that “improving factuality” was a priority for GPT-5, and hallucinations have been reduced.

For safety, a new technique called “safe completions” has been introduced to ensure GPT-5 can decline potentially harmful requests while still being helpful. Researchers also highlighted work to reduce ChatGPT’s ability to deceive its users, saying the new model is “significantly less deceptive” than prior models.

On the livestream, an OpenAI researcher announced that with the release of GPT-5, the company will deprecate all of its previous GPT models. Taya Christianson, a spokesperson for OpenAI, told Sherwood News that GPT-5 will become the default so users no longer have to pick the right model themselves. “Models will remain available for Pro, Team, Enterprise, and Edu tiers for the next 60 days, after which GPT-5 will be the default,” Christianson said.

New features for ChatGPT include customizable chat-interface colors and an early “research preview” of chat personalities, to be tailored to a user’s needs.

OpenAI’s website described the major release:

“GPT-5 is OpenAI’s most advanced model, offering major improvements in reasoning, code quality, and user experience. It handles complex coding tasks with minimal prompting, provides clear explanations, and introduces enhanced agentic capabilities, making it a powerful coding collaborator and intelligent assistant for all users.”

End of the old playbook?

The original ChatGPT release (powered by GPT-3.5) in 2022 prompted other tech companies to follow a now-familiar playbook for competing in a brand-new industry with few rules: to win, you needed a bigger, smarter, more capable model that could notch gains on AI leaderboards and achieve record scores on benchmark tests.

The formula was deceptively simple: more training data plus more Nvidia GPUs equals a smarter model. And it worked, at least for a while.

The horse race that resulted saw OpenAI go up against Anthropic’s Claude, xAI’s Grok, Google’s Gemini, Meta’s Llama, and dozens of other models. Novel generative-AI features, like Midjourney’s text-to-image generation, ChatGPT’s conversational voice mode, and code-writing assistants such as Microsoft’s GitHub Copilot, were dropping daily, and the sky appeared to be the limit.

But last year, researchers started seeing smaller and smaller gains when training their giant models, and talk of an AI “plateau” started to emerge. Tech companies had already pledged hundreds of billions in capital expenditures to build jumbo and mega-super-jumbo data centers, in anticipation of training bigger and bigger models.

If generative AI was hitting a plateau, investors might have some questions about the rush to build out all of this extremely expensive infrastructure, especially without a profitable business model in sight.

DeepSeek disrupts

In January 2025, Chinese AI startup DeepSeek released its R1 “reasoning” model, which matched or beat the top state-of-the-art AI models in some areas. What caught everyone’s attention was that DeepSeek researchers had used older, slower chips (due to US export controls) and reported training costs of only about $5.6 million for the underlying base model, a fraction of what Western tech companies were shoveling into their gargantuan data centers at breakneck speed.

News of the open-source model throttled tech stocks as investors reconsidered Nvidia’s place in the white-hot center of the AI universe.

DeepSeek’s entrance into the field did appear to mark a big shift in strategy for the industry. Meta dedicated a team to analyze DeepSeek’s R1 model, and later offered similar reasoning capabilities with its Llama 4 models.

While OpenAI’s o1, released last September, was the first major model to tout “chain-of-thought” reasoning, DeepSeek ran with the technique. DeepSeek also used a “mixture of experts” architecture, in which each input is routed to a small subset of specialized expert sub-networks, making it more efficient than one huge, monolithic model like the ones the industry was racing to build. OpenAI’s odd GPT-4.5 release in February came with lowered expectations; Altman posted at the time that “This isn’t a reasoning model and won’t crush benchmarks,” and the company announced it would be the last “non-reasoning” model it would make.

Earlier this week, OpenAI released its new open-weight “gpt-oss” models in a large and a small size. Releasing the model weights lets anyone run the models locally for free and fine-tune them for specialized use cases.
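The mixture-of-experts idea mentioned above can be sketched in a few lines of code. This is a deliberately simplified toy: real MoE layers route individual tokens inside a neural network, and the experts, router function, and numbers here are invented purely for illustration, not taken from DeepSeek’s actual implementation.

```python
# Toy sketch of top-k "mixture of experts" routing (illustrative only).
# A router scores every expert, but only the top-scoring few actually run,
# so most of the model's capacity sits idle on any given input.
import math

def softmax(scores):
    """Convert raw scores into weights that sum to 1."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Each "expert" is just a function standing in for a specialized sub-network.
experts = [
    lambda x: x * 2.0,   # expert 0
    lambda x: x + 10.0,  # expert 1
    lambda x: x ** 2,    # expert 2
    lambda x: -x,        # expert 3
]

def route(x, top_k=2):
    """Score all experts, run only the top_k, and combine their outputs
    weighted by the renormalized router scores."""
    scores = [math.sin(i + x) for i in range(len(experts))]  # stand-in router
    weights = softmax(scores)
    top = sorted(range(len(experts)), key=lambda i: weights[i], reverse=True)[:top_k]
    norm = sum(weights[i] for i in top)
    # Only top_k experts do any compute; the rest are skipped entirely.
    return sum(weights[i] / norm * experts[i](x) for i in top)

print(route(3.0))
```

The efficiency gain is the point: with, say, 4 of 64 experts active per input, only a small fraction of the model’s parameters are exercised on each forward pass, which is why this design is cheaper to run than a monolithic model of the same total size.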

High school, college, Ph.D.... What’s next?

OpenAI stressed in its GPT-5 demonstrations that while benchmarks have been helpful to measure specific improvements, they might be reaching their limits. OpenAI President Greg Brockman said of GPT-5’s high benchmark scores:

“They’re exciting numbers, but we’re starting to saturate them. When you’re moving between 98% and 99% of some benchmark, it means you need something else to really capture how great the model is. And one thing we’ve done very differently with this model is really focus on not just these numbers, but really on real-world application, it being really useful to you in your daily workflow.”

On the livestream, Altman positioned GPT-5 as a significant evolution from the company’s previous GPT models, and said it was far along on its academic journey:

“GPT-3 was sort of like talking to a high school student. There were flashes of brilliance, lots of annoyance, but people started to use it and get some value out of it. With GPT-4, maybe it was like talking to a college student — real intelligence, real utility. But with GPT-5, now it’s like talking to an expert, a legitimate Ph.D.-level expert in anything, any area you need, on demand that can help you with whatever your goals are.”

Updated at 2:45 p.m. ET on August 7 to include comment from OpenAI.

More Tech


Tesla’s European sales rise for the first time in more than a year but still lag BYD

New Tesla registrations jumped 12% in February from a year earlier to 17,664 units across the European Union, the United Kingdom, and the European Free Trade Association, according to new data from the European Automobile Manufacturers’ Association. China’s BYD once again beat out the American EV maker, posting 17,954 registrations in February, up 162% from a year earlier. BYD and Tesla each represented 1.8% of the European new car market last month.

The February data is a notable shift for Tesla, which saw its first monthly jump in the region since December 2024. Tesla has struggled in Europe since Elon Musk joined the Trump administration and made forays into European politics in support of far-right parties. Tesla also posted gains in China in February, a much larger market for the carmaker.


Jensen Huang: We have achieved AGI now... sort of

Lots of AI leaders are thinking about a big moment looming over the current AI boom: when will we have achieved artificial general intelligence?

There’s no shortage of predictions, but we haven’t yet seen a full-throated declaration that this slippery milestone has been achieved.

Until now. On Lex Fridman’s podcast Monday, Nvidia CEO Jensen Huang was asked what he thought the timeline looked like for “an AI system that’s able to essentially do your job. So, run — no, start, grow, and run a successful technology company.”

Huang confidently answered: “I think it’s now. I think we’ve achieved AGI.”

Huang then hedged, noting that Fridman was talking about running a $1 billion company, but he didn’t specify for how long. Huang elaborated: “It is not out of the question that a Claude was able to create a web service, some interesting little app that all of a sudden, you know, a few billion people used for $0.50, and then it went out of business again shortly after.”

So maybe it will be a while before Jensen Huang can get help running Nvidia by eating his own dog food.

17.5%

OpenAI is trying to woo private equity investors with a sweet offer: a guaranteed minimum return of 17.5% on their investments, which is “significantly higher than typical preferred instruments,” as well as early access to new models, according to a report from Reuters.

The offer is part of joint ventures OpenAI is building to raise capital as it competes for a bigger slice of the enterprise AI market. The minimum-return guarantee is something competitor Anthropic is not currently offering, per Reuters.


Big Tech’s strategy for selling AI: Dogfooding

I’m not only the AI CEO, but I’m also a client.

Elon Musk at Terafab keynote

Musk’s Terafab might be his most technically difficult challenge yet

One does not simply start fabricating semiconductors.


Sherwood Media, LLC produces fresh and unique perspectives on topical financial news and is a fully owned subsidiary of Robinhood Markets, Inc., and any views expressed here do not necessarily reflect the views of any other Robinhood affiliate, including Robinhood Markets, Inc., Robinhood Financial LLC, Robinhood Securities, LLC, Robinhood Crypto, LLC, Robinhood Derivatives, LLC, or Robinhood Money, LLC. Futures and event contracts are offered through Robinhood Derivatives, LLC.