
OpenAI releases GPT-5.1, which can be “Professional,” “Candid,” or “Quirky”

The new “more conversational” model follows instructions better, but backslides on some safety tests.

Today OpenAI released GPT-5.1, an update that aims to make ChatGPT “more conversational.” The model comes in two versions: GPT-5.1 Instant (“now warmer, more intelligent, and better at following your instructions”) and GPT-5.1 Thinking (“now easier to understand and faster on simple tasks, more persistent on complex ones”).

Despite an earlier update this year being rolled back for excessive sycophancy, the new model adopts a chummier conversational tone that the company says “surprises people with its playfulness” in testing.

Users now have finer control over ChatGPT’s “personality,” with new settings for “Professional,” “Candid,” and “Quirky.”

In the model’s system card, OpenAI details how the new 5.1 models compare with the earlier 5.0 models on internal benchmarks for disallowed content.

The company has said it is prioritizing the addition of new checks to help users who may be suffering a mental health crisis, after a series of alarming incidents where ChatGPT encouraged self-harm and reinforced delusional behavior.

This release adds two testing categories for the first time: “mental health” and “emotional reliance.” GPT-5.1 Thinking actually scored slightly lower than its predecessor, GPT-5 Thinking, on 9 of 13 testing categories, and GPT-5.1 Instant scored lower than GPT-5 Instant on 5 of 13.

More thinking, more tokens

OpenAI says that GPT-5.1 Thinking now spends less time on simple tasks and more time on difficult problems, as measured by the number of model-generated tokens (tiny chunks of text). Based on a chart in the announcement, the very toughest queries will cost GPT-5.1 Thinking 71% more tokens than before. That’s a lot more tokens, and a lot more computing!

All those tokens can add up. Every time OpenAI’s customer-facing models gobble up more computing cycles, the company spends more on “inference,” or running the models (as opposed to the more resource-intensive training process that happens while building them). Enterprise customers who access the models through OpenAI’s API pay per token; free users of the chat interface do not.
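To make the per-token billing concrete, here is a minimal sketch of how an API customer’s inference bill scales with output length. The prices below are made-up placeholders, not OpenAI’s actual rates, and the function name is hypothetical:

```python
# Hypothetical illustration of per-token API billing.
# The per-million-token prices here are placeholders, NOT real OpenAI rates.
def inference_cost(input_tokens, output_tokens,
                   price_in_per_m=1.25, price_out_per_m=10.00):
    """Dollar cost of one API call, billed per million tokens."""
    return (input_tokens / 1_000_000) * price_in_per_m \
         + (output_tokens / 1_000_000) * price_out_per_m

# A "thinking" response that emits 71% more output tokens costs
# proportionally more on the output side of the bill:
base = inference_cost(input_tokens=1_000, output_tokens=2_000)
longer = inference_cost(input_tokens=1_000, output_tokens=int(2_000 * 1.71))
print(f"cost ratio: {longer / base:.2f}x")
```

The point of the sketch is the scaling, not the dollar figures: with per-token pricing, longer chains of “thinking” tokens translate directly into higher inference bills for API customers, while the cost of free chat users lands on OpenAI itself.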

OpenAI is a private company, so its finances aren’t public, but a new report from the Financial Times raises the question of how much all these “thinking” models are costing it. While The Information recently reported that OpenAI spent $2.5 billion in the first half of 2025, AI skeptic, podcaster, and writer Ed Zitron told the FT he has seen internal OpenAI figures showing that the company’s cash burn for the first half of the year was much higher, close to $5 billion.

To satisfy the $1 trillion in recent deals it has signed on to, OpenAI will need to find a way to generate more revenue.

tech

AI agent fatigue may be hitting enterprise customers

You may have noticed that recently, every piece of business or productivity software seems to have an “AI agent” feature that keeps getting pushed in front of you, whether you want it or not.

That’s leading to AI agent fatigue among enterprise customers, according to The Information.

Companies like Salesforce, Microsoft, and Oracle have been pushing their AI agent features to help with tasks such as customer service, IT support, and hiring. But many of those features are all powered by AI services from OpenAI and Anthropic, leading to a similar set of functions, according to the report.

As companies race to tack on AI agents to their legacy products, it remains to be seen which functions will become the “killer app” for enterprise AI.


tech

Google’s Waymo has started letting passengers take the freeway

Waymo’s approach to robotaxi expansion has been slow and steady, which is why the Google-owned autonomous ride-hailing service, launched to the public in 2020, is only just now taking riders on freeways.

On Wednesday, Waymo announced that “a growing number of public riders” in the San Francisco Bay Area, Phoenix, and Los Angeles can take the highway and are no longer confined to local routes. The company said it will soon expand freeway capabilities to Austin and Atlanta. It also noted that its service in San Jose is now available, meaning Waymos can traverse the entire San Francisco Peninsula.

Waymo’s main competitor, Tesla, so far operates an autonomous service in Austin as well as a more traditional ride-hailing service across the Bay Area, where a driver uses Full Self-Driving (Supervised). On the company’s last earnings call, CEO Elon Musk said Tesla would expand its robotaxi service to 8 to 10 markets this year.

tech
Jon Keegan

Google rolls out Private AI Compute, matching Apple’s AI privacy scheme

One of the barriers to people embracing AI in their daily lives is trust — making sure that the company that built the AI isn’t going to just spill your most sensitive info to advertisers and data brokers.

Google is announcing a new feature called Private AI Compute that takes a page from Apple to help assure users that Google will keep their AI data private.

In June 2024, Apple announced its Private Cloud Compute scheme, which ensures only the user can access data sent to the cloud to enable AI features.

While Apple’s AI tools have yet to fully materialize, Google’s new offering looks a lot like Apple’s. AI models on its phones process data in a secure environment, and when more computing power is needed, that secure environment extends to the cloud, where the data is processed on Google’s custom TPU chips.

A press release said: “This ensures sensitive data processed by Private AI Compute remains accessible only to you and no one else, not even Google.”



Sherwood Media, LLC produces fresh and unique perspectives on topical financial news and is a fully owned subsidiary of Robinhood Markets, Inc., and any views expressed here do not necessarily reflect the views of any other Robinhood affiliate, including Robinhood Markets, Inc., Robinhood Financial LLC, Robinhood Securities, LLC, Robinhood Crypto, LLC, or Robinhood Money, LLC.