A screenshot of OpenAI’s “Operator” agent (OpenAI)
SMOOTH OPERATOR

OpenAI’s “Operator” is here to slowly take over your computer and mess up your life

Operator made a consequential mistake 13% of the time in early testing, such as emailing the wrong person or messing up a reminder for a person to take medication.

Jon Keegan

OpenAI released a “research preview” of its AI agent that can control your web browser. Called “Operator,” it has the ability to control your mouse and keyboard and analyze things it “sees” on your computer — very, very slowly. Currently it’s only available to ChatGPT Pro users in the US.

Operator makes use of the multistep “reasoning” found in OpenAI’s o1 model and the multimodal “vision” capabilities of GPT-4o. This reasoning process achieves better (but slower) performance by breaking tasks into steps. Lots and lots of steps.

In the video demonstrations shared on the product page, you can watch Operator break tasks down into dozens of distinct actions like “clicking,” “typing,” and “scrolling.” One example took 152 steps to complete a grammar quiz; another took 146 steps to determine the amount of a refund from a canceled online order.

Screenshot from demo of OpenAI Operator
(OpenAI)

OpenAI positions this kind of freewheeling, on-demand AI web browsing as an agent that can save you the drudgery of ordering groceries, researching holidays, making restaurant reservations, or buying concert tickets.

Operator makes high-stakes mistakes

It’s one thing when ChatGPT spits out an incorrect answer, but if your chatbot is actually spending your money and triggering things in the real world, the stakes are much, much higher.

In one internal test of 100 sample tasks, OpenAI found that Operator made a consequential mistake 13% of the time, like emailing the wrong person, incorrectly bulk-removing email labels, setting the wrong date for a reminder to take the user’s medication, and ordering the wrong food item. Some of the other mistakes were easily reversible “nuisances.” OpenAI noted that, after mitigations, it reduced this error rate by approximately 90%.

OpenAI stresses that you can grab the wheel from the AI at any time and approve any action before it’s executed, but in this early evaluation version, you’ll probably spend more time babysitting the agent than it would take to just do the task yourself.

For now, OpenAI limits the tasks you can use it for, prohibiting uses like banking and job applications.

OpenAI shared a list of example tasks that some hypothetical user might want an AI to do for them. Ten out of ten times, Operator was able to research bear habitats, create a grocery list, and make a ’90s playlist on Spotify.

Medium persuasion

The system card for the model behind Operator — Computer-Using Agent (CUA) — describes the process OpenAI used to assess the risks of letting a prerelease, novel AI agent go hog wild with your computer.

Like other model releases, OpenAI tested the model by using red teams with expertise in social engineering, CBRN (chemical, biological, radiological, and nuclear) threats, and cybersecurity. OpenAI gave itself a “low” risk for everything except “persuasion,” which got a “medium” risk score and is considered safe enough for public release.

High consequence

But there are some important restrictions on how you can use Operator. Because there is a slightly elevated risk of Operator being used to influence people, the usage policy prohibits impersonating people or organizations, concealing the role of AI in tasks, or using it to spread disinformation or fake interactions, like fake reviews or fake profiles.

OpenAI prohibits people from using Operator to commit any crimes, and you are also prohibited from using it to bully, harass, defame, or discriminate against others based on protected attributes.

Under a heading titled “high consequence domains,” it notes that you can’t use Operator to make “high-stakes decisions” that might affect your safety or well-being, automate stock trading, or use it for political campaigning or lobbying.

OpenAI’s announcement follows competitor Anthropic’s October release of a similar feature that can control your computer. There is widespread hype that “agentic AI” like Operator will be a breakthrough for how people use these tools.

OpenAI CEO Sam Altman said in an announcement video that Operator is expected to roll out to international ChatGPT Pro and ChatGPT Plus users “soon,” but noted that the European rollout “will unfortunately take a while.”

More Tech

tech

Apple closes at record high for first time in 2025

After spending the day at intraday highs, Apple set an all-time closing high of $262.24 Monday, following reports of increased iPhone 17 sales and an analyst upgrade. Loop Capital raised its price target to a Street high of $315.

The stock’s previous all-time closing high was in December 2024.

Apple reports its fiscal year 2025 results later this month; analysts expect the company’s all-important iPhone sales to return to growth.

two faces

A tale of two Teslas from two analyst notes by guys named Dan

Ahead of Tesla’s third-quarter earnings, Barclays’ Dan Levy and Wedbush Securities’ Dan Ives weigh in.

tech

Data center frenzy taxes natural resources, sparks anger around the globe

The race to build ever-larger, power-hungry data centers isn’t limited to the US. In Ireland, more than 20% (!!!) of the country’s electricity is consumed by data centers. In Mexico, poor communities near data center sites are seeing water supplies dry up and their fragile power grids falter.

A New York Times report examines what these data center projects look like around the world and tracks the local opposition mounted by environmental groups seeking to block future projects.

The report notes that despite growing local opposition, countries are still bending over backward to lure the billions of dollars in investment that come with these data center projects, offering rich tax incentives to the companies developing the projects, in exchange for a relatively small number of jobs and promises of various, if vague, local benefits.

Much like in the US, the data center deals are shrouded in secrecy, with elected officials required to sign NDAs and the extensive use of shell companies masking the identity of the massive tech companies behind the projects.


OpenAI claimed a math breakthrough this weekend, only to be smacked down

The embarrassing episode sprouted from a misunderstood post, amplified by an OpenAI executive as proof of GPT-5’s mathematical prowess, but turned out not to be what it seemed.


Sherwood Media, LLC produces fresh and unique perspectives on topical financial news and is a fully owned subsidiary of Robinhood Markets, Inc., and any views expressed here do not necessarily reflect the views of any other Robinhood affiliate, including Robinhood Markets, Inc., Robinhood Financial LLC, Robinhood Securities, LLC, Robinhood Crypto, LLC, or Robinhood Money, LLC.