OpenAI CEO Sam Altman (Kevin Dietsch/Getty Images)
Eye on AI

OpenAI releases its long-awaited flagship model, GPT-5

Researchers said the new AI model is “significantly less deceptive” than prior models as the company tries to shift expectations from the giant leaps in performance seen earlier in the AI boom.

Jon Keegan
Updated 8/7/25 3:55PM

Looking back to November 2022, when ChatGPT was released to the world as a technical preview, it’s dizzying to think of the incredible progress AI startups and Big Tech companies have made, and the hundreds of billions of dollars they’ve spent, rapidly training and improving their ever-larger language models in a furious race to the top of benchmark leaderboards.

Today, OpenAI released its latest flagship large language model, GPT-5, and it might mark the end of the first explosive wave of generative AI and the start of a new era where gains are measured in different ways.

Details leaked overnight, but today in a webcast, OpenAI cofounder and CEO Sam Altman debuted the new model, which comes in four flavors:

  • GPT-5: “Designed for logic and multi-step tasks.”

  • GPT-5-mini: “A lightweight version for cost-sensitive applications.”

  • GPT-5-nano: “Optimized for speed and ideal for applications requiring low latency.”

  • GPT-5-chat: “Designed for advanced, natural, multimodal, and context-aware conversations for enterprise applications.”

The new models will be available to free, Plus, Pro, and Team users today; ChatGPT Enterprise and Edu users will gain access on August 14, according to OpenAI’s website. Executives highlighted GPT-5’s strengths in code generation, math, physics, and healthcare, as well as its ability to dive deeper into a problem when needed without the user having to choose that ahead of time. OpenAI said that “improving factuality” was a priority for GPT-5, and that hallucinations have been reduced.
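For developers, choosing among the variants listed above presumably comes down to swapping the model name in an ordinary API call. Here is a minimal sketch using the standard OpenAI Python client; the model names come from OpenAI’s announcement, while the routing logic and everything else in it are illustrative assumptions, not confirmed details.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def ask(prompt: str, latency_sensitive: bool = False) -> str:
        # Trade capability for speed and cost by swapping the model name.
        model = "gpt-5-nano" if latency_sensitive else "gpt-5"
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    print(ask("Walk me through a multi-step physics problem."))

The point of the sketch is the design: one family of models, differentiated by cost and latency, behind an identical call.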

For safety, a new technique called “safe completions” is meant to ensure GPT-5 can decline potentially harmful requests while still being helpful. Researchers also highlighted work to reduce ChatGPT’s ability to deceive its users, saying that the new model is “significantly less deceptive” than prior models.

On the livestream, an OpenAI researcher announced that with the release of GPT-5, the company will deprecate all of its previous GPT models. Taya Christianson, a spokesperson for OpenAI, told Sherwood News that GPT-5 will become the default so that users no longer have to pick the right model themselves. “Models will remain available for Pro, Team, Enterprise, and Edu tiers for the next 60 days, after which GPT-5 will be the default,” Christianson said.

New features in ChatGPT include customizable chat-interface colors and an early “research preview” of chat personalities that can be tailored to a user’s needs.

OpenAI’s website described the major release:

“GPT-5 is OpenAI’s most advanced model, offering major improvements in reasoning, code quality, and user experience. It handles complex coding tasks with minimal prompting, provides clear explanations, and introduces enhanced agentic capabilities, making it a powerful coding collaborator and intelligent assistant for all users.”

End of the old playbook?

The original ChatGPT release (powered by GPT-3.5) in 2022 spurred other tech companies to follow what became a proven playbook for competing in a brand-new industry with few rules: to win, you needed a bigger, smarter, more capable model that could notch gains on AI leaderboards and achieve record scores on benchmark tests.

The formula seemed deceptively simple: more training data, plus more Nvidia GPUs, equals a smarter model. And it worked, at least for a while.

The horse race that resulted saw OpenAI go up against Anthropic’s Claude, xAI’s Grok, Google’s Gemini, Meta’s Llama, and dozens of other models. Novel generative-AI features like Midjourney’s text-to-image generation, conversational speech mode, and code-writing assistants like Microsoft’s GitHub Copilot were dropping daily, and the sky appeared to be the limit.

But last year, researchers started seeing smaller and smaller gains when training their giant models, and talk of an AI “plateau” started to emerge. Tech companies had already pledged hundreds of billions in capital expenditures to build jumbo and mega-super-jumbo data centers, in anticipation of training bigger and bigger models.

If generative AI was hitting a plateau, investors might have some questions about the rush to build out all of this extremely expensive infrastructure before a profitable business model had emerged.

DeepSeek disrupts

In January, Chinese AI startup DeepSeek released its R1 “reasoning” model, which matched or beat the top state-of-the-art AI models in some areas. What caught everyone’s attention was that DeepSeek’s researchers, limited by US export controls to older, slower chips, reported training the model for about $5.6 million, a fraction of what Western tech companies were shoveling into their gargantuan data centers at breakneck speed.

News of the open-source model throttled tech stocks as investors reconsidered Nvidia’s place in the white-hot center of the AI universe.

DeepSeek’s entrance into the field did appear to mark a big shift in strategy for the industry. Meta dedicated a team to analyze DeepSeek’s R1 model, and later offered similar reasoning capabilities with its Llama 4 models.

OpenAI’s o1, released last September, was the first major model to tout “chain of thought” reasoning, but DeepSeek ran with the technique. R1 also used a “mixture of experts” design, in which each query activates only a subset of smaller, specialized expert networks rather than one huge, monolithic model like the ones the industry was racing to build, which made it far more efficient; the toy sketch below illustrates the idea.
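As a rough illustration (a toy sketch, not DeepSeek’s implementation): a small gating function scores each expert for a given input, and only the top-scoring few actually run, so most of the network’s parameters sit idle on any single query.

    import numpy as np

    rng = np.random.default_rng(0)
    NUM_EXPERTS, TOP_K, DIM = 8, 2, 16

    # Each "expert" is a random linear map standing in for a feed-forward block;
    # the gate is the router that scores experts for a given input.
    experts = [rng.standard_normal((DIM, DIM)) * 0.1 for _ in range(NUM_EXPERTS)]
    gate = rng.standard_normal((DIM, NUM_EXPERTS)) * 0.1

    def moe_layer(x):
        """Send one input vector to its top-k experts and mix their outputs."""
        scores = x @ gate                      # one score per expert
        top = np.argsort(scores)[-TOP_K:]      # indices of the k best experts
        weights = np.exp(scores[top])
        weights /= weights.sum()               # softmax over just the chosen experts
        # Only TOP_K of the NUM_EXPERTS experts do any work for this input.
        return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

    print(moe_layer(rng.standard_normal(DIM)).shape)  # (16,)

Real systems route per token rather than per query, balance load across experts, and train the gate jointly with the experts, but the principle is the same: each input pays for only a slice of a very large model.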

OpenAI’s odd GPT-4.5 release in February came with lowered expectations and the announcement that it would be the last “non-reasoning” model the company would make. At the time, Altman posted of GPT-4.5: “This isn’t a reasoning model and won’t crush benchmarks.” And earlier this week, OpenAI released its new open-weight “gpt-oss” models in a large and a small size. Publishing the weights lets anyone run the models locally for free, and fine-tune them for specialized use cases; a rough sketch of what that might look like follows.
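If the weights are published on Hugging Face, running the smaller model locally could be as simple as the sketch below. The repo id “openai/gpt-oss-20b” is an assumption about where the weights live, not a confirmed detail, and hardware requirements will vary.

    from transformers import pipeline

    # Assumed repo id; substitute whatever identifier OpenAI actually published.
    generator = pipeline("text-generation", model="openai/gpt-oss-20b")
    print(generator("Open-weight models matter because",
                    max_new_tokens=40)[0]["generated_text"])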

High school, college, Ph.D.... What’s next?

OpenAI stressed in its GPT-5 demonstrations that while benchmarks have been helpful to measure specific improvements, they might be reaching their limits. OpenAI President Greg Brockman said of GPT-5’s high benchmark scores:

“They’re exciting numbers, but we’re starting to saturate them. When you’re moving between 98% and 99% of some benchmark, it means you need something else to really capture how great the model is. And one thing we’ve done very differently with this model is really focus on not just these numbers, but really on real-world application, it being really useful to you in your daily workflow.”

On the livestream, Altman positioned GPT-5 as a significant evolution from the company’s previous GPT models, and said it was far along on its academic journey:

“GPT-3 was sort of like talking to a high school student. There were flashes of brilliance, lots of annoyance, but people started to use it and get some value out of it. With GPT-4, maybe it was like talking to a college student — real intelligence, real utility. But with GPT-5, now it’s like talking to an expert, a legitimate Ph.D.-level expert in anything, any area you need, on demand that can help you with whatever your goals are.”

Updated at 2:45 p.m. ET on August 7 to include comment from OpenAI.

More Tech


Pentagon adds xAI’s Grok to its AI platform

Grok is going to war.

Today the Pentagon announced that xAI’s controversial Grok chatbot will be added to GenAI.mil, the Department of Defense’s “bespoke AI platform.”

Grok joins Google’s Gemini on GenAI.mil, which launched earlier this month and which the Pentagon says will usher in an “AI-driven culture change” at the agency.

Federal workers have had access to Grok since the White House ordered the chatbot added to the GSA’s approved AI vendor list in August.

xAI has had some embarrassing episodes as it scrambles to monetize Grok, after spending billions on its Colossus data centers. Just this summer, several examples emerged of Grok responding to user queries with antisemitic tropes, and even praising Hitler.



Alphabet acquires data center company Intersect for $4.75 billion

Google parent Alphabet announced a deal to acquire data center and energy infrastructure builder Intersect for $4.75 billion in cash. Alphabet already held a minority stake in the company and a partnership with it.

According to Alphabet CEO Sundar Pichai: “Intersect will help us expand capacity, operate more nimbly in building new power generation in lockstep with new data center load, and reimagine energy solutions to drive US innovation and leadership. We look forward to welcoming Sheldon and the Intersect team.”

The deal is expected to close in the first half of 2026.


Tesla might get to 1,000 Robotaxis in the Bay Area this year after all

Tesla has registered 1,655 ride-hailing vehicles in California, up from just 28 when it launched the service in August, according to California Public Utilities Commission data cited by Business Insider. That growth suggests Tesla — which currently has about 130 Robotaxis operating with a driver using Full Self-Driving in the Bay Area — could realistically hit CEO Elon Musk’s target of 1,000 vehicles in the region by the end of the year.

Registered vehicles aren’t the same as an active fleet, but the increase signals that Tesla is gearing up for significant expansion.

Alphabet’s Waymo remains in the lead, with nearly 2,000 driverless vehicles registered across its two California markets, including more than 1,000 operating in the Bay Area and 700 in Los Angeles.

It’s less clear whether Tesla can meet Musk’s other goals, including deploying 500 Robotaxis in Austin, where just 32 vehicles are currently operating, or removing safety monitors by year’s end. Only two of those Austin vehicles are currently testing without drivers.



Activists claim to have scraped most of Spotify, planning release

An activist archiving group claims to have scraped a large part of Spotify’s library of music.

Anna’s Archive, a self-described “open source search engine for shadow libraries,” announced in a blog post that the pirated Spotify files will form a “preservation archive,” meant to preserve a snapshot of the music for future generations.

Anna’s Archive says it has scraped 86 million tracks; Spotify says its platform hosts over 100 million. The group has already released a database of metadata from Spotify’s collections, having reportedly scraped 256 million rows’ worth, per Billboard, with plans to release the music files themselves later.

Such a large corpus of publicly available music data would be a goldmine for AI companies looking for fresh data to train their models. Spotify told Billboard that it is actively investigating the incident.


15

In the absence of official statistics, Bloomberg attempted to tally the number of US deaths linked to crashes in which Tesla’s door functionality may have impeded escape or rescue. The analysis identified “at least 15 deaths in a dozen incidents over the past decade in which occupants or rescuers were unable to open the doors of a Tesla that had crashed and caught fire.”

In September, the National Highway Traffic Safety Administration opened an investigation into whether door issues in certain Tesla vehicles can prevent emergency access, following a separate Bloomberg report.

