[Image: Scarlett Johansson attends the 35th Annual American Cinematheque Awards. Photo: Matt Winkelmeyer/Getty Images]
Unforced AI-rrors

Scarlett Johansson, YouTube evasion, CEO chaos: A running list of OpenAI gaffes

Even though the company seems to have captured the broader public’s interest in AI, Sam Altman’s passion project has become a PR nightmare.

Rani Molla

OpenAI can’t stop tripping over its own feet. Instead of enjoying its first-mover position in the surging AI industry, the ChatGPT maker’s leadership keeps making unforced errors that threaten to disrupt its lead.

The ScarJo incident

Most recently, it needlessly tried to make its voice chatbot sound like the virtual assistant Samantha from Spike Jonze’s arguably dystopian 2013 film “Her,” in which a divorcé falls in love with an AI voiced by Scarlett Johansson.

Yesterday Johansson released a statement saying that Altman had asked her multiple times to voice OpenAI’s Sky assistant, but she declined. He then went ahead and released a voice that sounded just like her from “Her” anyway. He even called attention to the similarity.

OpenAI had released a statement over the weekend saying its Johansson-sounding Sky voice was actually a “different professional actress using her own natural speaking voice,” but didn’t name that actress. Yesterday the company “paused” use of the voice as it dealt with “questions” about its origins. Apparently those questions came from ScarJo’s lawyer.

This didn’t have to be a problem at all. Having a movie star’s voice wasn’t going to make or break the chatbot — how well it works is what counts. The move instead feels juvenile and bears an Elon Musk level of hubris.

The Johansson incident is also representative of a long-standing criticism of AI companies: that they hoover up people’s work to train their models without giving credit or asking permission.

The YouTube evasion

OpenAI itself keeps getting into hot water over its apparent inability to say whether it trained its video generator Sora on YouTube, which it likely did.

At a conference earlier this month, the company’s leadership failed to answer the question — an obvious one for the moderator to ask since the company’s chief technology officer had infamously flubbed answering the same question when posed by the Wall Street Journal a couple months before.

So they either don’t know or don’t want to admit how they train their AI — both bad looks.

Training its models on YouTube videos would violate the platform’s terms of service. Separately, The New York Times and eight daily newspapers are currently suing OpenAI for cribbing their content.

Nasty NDAs

Of course, it’s not as if the company is free with its own trade secrets. In fact, OpenAI makes its employees sign extremely punitive nondisclosure and nondisparagement agreements that put employees at risk of losing their already vested equity for speaking out.

As Vox’s Kelsey Piper wrote:

If a departing employee declines to sign the document, or if they violate it, they can lose all vested equity they earned during their time at the company, which is likely worth millions of dollars. One former employee, Daniel Kokotajlo, who posted that he quit OpenAI “due to losing confidence that it would behave responsibly around the time of AGI,” has confirmed publicly that he had to surrender what would have likely turned out to be a huge sum of money in order to quit without signing the document.

Perhaps a more flexible policy toward former workers would let them give their former employer feedback, so the company could stop making such obvious mistakes.

Trouble at the top

The roots of the recent gaffes seem to stem from Altman himself, a Silicon Valley wunderkind and former partner at startup incubator Y Combinator. The fuse at OpenAI seems to have been lit in November 2023, when the company devolved into chaos as Altman was fired and then reinstated as CEO over the course of five days. At the time the board wrote in a blog post that Altman “was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.” It added, “The board no longer has confidence in his ability to continue leading OpenAI.”

Within a few days, however, Altman was back at OpenAI after pushback from investors and employees.

Employee departures

And then last week the leaders of the company’s superalignment team, cofounder Ilya Sutskever and researcher Jan Leike, announced their departures from OpenAI. Sutskever had been one of the executives behind Altman’s ouster last year.

Leike said in a post on X that “safety culture and processes have taken a backseat to shiny products.” Their departures hint at more internal strain to come over the company’s direction and its leaders’ decisions.

That wasn’t Altman’s first dustup with a company he led. He was pushed out of Y Combinator in 2019 for putting “his own interests ahead of the organization.”

The fact that OpenAI seems to keep stepping on rakes even while it’s captured the broader public’s attention with its products is mystifying at best, and worrying at worst. It may have the pole position in the market right now, but there are plenty of upstarts happy to overtake its efforts while infighting and chaos reign in Altman’s universe.

More Tech



Trump AI executive order is a “major win” for OpenAI, Google, Microsoft, and Meta, says Ives

President Trump’s new executive order aiming to keep states from enacting AI laws that inhibit US “global AI dominance” is a “major win” for OpenAI, Google, Microsoft, and Meta, according to Wedbush Securities analyst Dan Ives. Big Tech companies have collectively plowed hundreds of billions into the technology, while seeing massive stock price gains, and Ives believes they stand to gain much more.

“Given that there have been over 1,000 AI laws proposed at the state level, this was a necessary move by the Trump Administration to keep the US out in front for the AI Revolution over China,” Ives wrote, adding that state-by-state regulation “would have crushed US AI startup culture.” The presidential order would withhold federal funds from states that put in place onerous AI regulations.

This morning, White House AI adviser Sriram Krishnan said in a CNBC interview that he’d be working with Congress on a single national framework for AI.

Despite Ives’ rosy read-through on the order, with the exception of Nvidia, which jumped on a report of boosted Chinese demand, many AI stocks are in the red early today. The VanEck Semiconductor ETF is down nearly 1% premarket, as the AI trade struggles thanks to underwhelming earnings results from Oracle earlier this week.


Rani Molla

Epic scores two victories as “Fortnite” returns to Google Play and appeals court keeps injunction against Apple

“Fortnite” maker Epic Games notched two wins Thursday in its drawn-out battle against Big Tech’s app stores. “Fortnite” returned to the Google Play app store in the US, Reuters reports, as Epic continues working with Google to secure court approval for their settlement.

Meanwhile, a US appeals court partly reversed sanctions against Apple in Epic’s antitrust case, calling parts of the order overly broad, but upheld the contempt finding and left a sweeping injunction in place — keeping pressure on Apple to allow developers to steer users to outside payment options and reduce its tight control over how apps can communicate and monetize on iOS.

Jon Keegan

Report: AI-powered toys tell kids where to find matches, parrot Chinese government propaganda

You may want to think twice before buying your kids a fancy AI-powered plush toy.

A new report from NBC News found that several AI-powered kids’ toys could easily be steered into dangerous as well as sexually explicit conversations, a shocking demonstration of the loose safety guardrails in this novel category of consumer electronics.

A report by the Public Interest Research Group details what researchers found when they tested five AI-powered toys for kids bought from Amazon. Some of the toys offered instructions on where to find matches and how to start fires.

NBC News also bought some of these toys and found they parroted Chinese government propaganda and gave instructions for how to sharpen knives. Some of the toys also discussed inappropriate topics for kids, like sexual kinks.

The category of AI-powered kids’ toys is under scrutiny as major AI companies like OpenAI have announced partnerships with toy manufacturers like Mattel (which has yet to release an AI-powered toy).


Jon Keegan

OpenAI releases GPT-5.2, the “best model yet for real-world, professional use”

After feeling the heat from Google’s recent launch of its powerful Gemini 3 model, OpenAI has released its “code red” response, reportedly on an accelerated schedule to keep up with the competition.

Its new flagship model, GPT-5.2, is out, and OpenAI is calling it “the most capable model series yet for professional knowledge work.”

OpenAI CEO Sam Altman called it the “smartest generally-available model in the world” and shared benchmarks that showed it achieving higher scores than Gemini 3 Pro and Anthropic’s Claude Opus 4.5 in some software engineering tests and abstract reasoning, math, and science problems.

In a press release announcing the new model, the company said: “Overall, GPT‑5.2 brings significant improvements in general intelligence, long-context understanding, agentic tool-calling, and vision — making it better at executing complex, real-world tasks end-to-end than any previous model.”



Sherwood Media, LLC produces fresh and unique perspectives on topical financial news and is a fully owned subsidiary of Robinhood Markets, Inc., and any views expressed here do not necessarily reflect the views of any other Robinhood affiliate, including Robinhood Markets, Inc., Robinhood Financial LLC, Robinhood Securities, LLC, Robinhood Crypto, LLC, or Robinhood Money, LLC.