
What aren’t the OpenAI whistleblowers saying?

A group of current and former employees say the company has been reckless — but aren’t offering many details

Casey Newton

Eleven current and former employees of OpenAI, along with two more from Google DeepMind, posted an open letter today stating that they are unable to voice concerns about risks created by their employers due to confidentiality agreements. Today let’s talk about what they said, what they left out, and why lately the AI safety conversation feels like it’s going nowhere.

Here’s a dynamic we’ve seen play out a few times now at companies including Meta, Google, and Twitter. First, in a bid to address potential harms created by their platforms, companies hire idealistic workers and charge them with building safeguards into their systems. For a while, the work of these teams gets prioritized. But over time, executives’ enthusiasm wanes, commercial incentives take over, and the team is gradually de-funded.

When those roadblocks go up, some of the idealistic employees speak out, either to a reporter like me, or via the sort of open letter that the AI workers published today. And the company responds by reorganizing the team out of existence, while putting out a statement saying that whatever that team used to work on is now everyone’s responsibility.

At Meta, this process gave us the whistleblower Frances Haugen. On Google’s AI ethics team, a slightly different version of the story played out after the firing of researcher Timnit Gebru. And in 2024, the story came to the AI industry.

OpenAI arguably set itself up for this moment more than those other tech giants. After all, it was established not as a traditional for-profit enterprise, but as a nonprofit research lab devoted to safely building an artificial general intelligence.

OpenAI’s status as a relatively obscure nonprofit changed forever in November 2022. That’s when it released ChatGPT, a chatbot based on the latest version of its large language model, which by some estimates soon became the fastest-growing consumer product in history.

ChatGPT took a technology that had been exclusively the province of nerds and put it in the hands of everyone from elementary school children to state-backed foreign influence operations. And OpenAI soon barely resembled the nonprofit that was founded out of a fear that AI poses an existential risk to humanity. 

This OpenAI placed a premium on speed. It pushed the frontier forward with tools like plugins, which connected ChatGPT to the wider internet. It aggressively courted developers. Less than a year after ChatGPT’s release, the company — a for-profit subsidiary of its nonprofit parent — was valued at $90 billion.

That transformation, led by CEO Sam Altman, gave many in the company whiplash. And it was at the heart of the tensions that led the nonprofit board to fire Altman last year, saying only that he had not been “consistently candid” in his communications with the board.

The five-day interregnum between Altman’s firing and his return marked a pivotal moment for the company. The board could have recommitted to its original vision of slow, cautious development of powerful AI systems. Or it could have endorsed the post-ChatGPT version of OpenAI, which closely resembled a traditional Silicon Valley venture-backed startup.

Almost immediately, it became clear that the vast majority of employees preferred working at a more traditional startup. Among other things, that startup’s commercial prospects meant that their (unusual) equity in the company would be worth millions of dollars. Nearly all of OpenAI’s employees threatened to quit if Altman didn’t return.

And so Altman returned. Most of the old board left. New, more business-minded board members replaced them. And that board has stood by Altman in the months that followed, even as questions mount about his complex business dealings and conflicts of interest.

Most employees seem content under the new regime; positions at OpenAI are still highly sought after. But like Meta and Google before it, OpenAI had its share of conscientious objectors. And increasingly, we’re hearing what they think. 

The latest wave began last month when OpenAI co-founder Ilya Sutskever, who initially backed Altman’s firing and who had focused on AI safety efforts, quit the company. He was followed out the door by Jan Leike, who led the superalignment team, and a handful of other employees who worked on safety.

Then on Tuesday a new group of whistleblowers came forward to complain. Here’s handsome podcaster Kevin Roose in the New York Times:

The members say OpenAI, which started as a nonprofit research lab and burst into public view with the 2022 release of ChatGPT, is putting a priority on profits and growth as it tries to build artificial general intelligence, or A.G.I., the industry term for a computer program capable of doing anything a human can.

They also claim that OpenAI has used hardball tactics to prevent workers from voicing their concerns about the technology, including restrictive nondisparagement agreements that departing employees were asked to sign.

“OpenAI is really excited about building A.G.I., and they are recklessly racing to be the first there,” said Daniel Kokotajlo, a former researcher in OpenAI’s governance division and one of the group’s organizers.

Anyone looking for jaw-dropping allegations from the whistleblowers will likely leave disappointed. Kokotajlo’s sole specific complaint in the article is that “some employees believed” Microsoft had released a new version of GPT-4 in Bing without proper testing; Microsoft denies that this happened.

But the accompanying letter offers one possible explanation for why the charges feel so thin: employees are forbidden from saying more by various agreements they signed as a condition of working at the company. (The company has said it is removing some of the more onerous language from its agreements, after Vox reported on them last month.)

“We’re proud of our track record providing the most capable and safest A.I. systems and believe in our scientific approach to addressing risk,” an OpenAI spokeswoman told the Times. “We agree that rigorous debate is crucial given the significance of this technology, and we’ll continue to engage with governments, civil society and other communities around the world.”

The company also created a whistleblower hotline for employees to anonymously voice their concerns. 

So how should we think about this letter? 

I imagine that it will be a Rorschach test for whoever reads it, and what they see will depend on what they think of the AI safety movement in general.

For those who believe that AI poses existential risk, I imagine this letter will provide welcome evidence that at least some employees inside the big AI makers are taking those risks seriously. And for those who don’t, I imagine it will provide more ammunition for the argument that the AI doomers are once again warning about dire outcomes without providing any compelling evidence for their beliefs.  

As a journalist, I find myself naturally sympathetic to people inside companies who warn about problems that haven’t happened yet. Journalism often serves a similar purpose, and every once in a while, it can help prevent those problems from occurring. (This can often make the reporter look foolish, since they spent all that time warning about a scenario that never unfolded, but that’s a subject for another day.)

At the same time, there’s no doubt that the AI safety argument has begun to feel a bit tedious over the past year, when the harms caused by large language models have been funnier than they have been terrifying. Last week, when OpenAI put out the first account of how its products are being used in covert influence operations, there simply wasn’t much there to report. 

We’ve seen plenty of problematic misuse of AI, particularly deepfakes in elections and in schools. (And of women in general.) And yet people who sign letters like the one released today fail to connect their high-level hand-wringing to the actual products and policy decisions their companies make. Instead, they speak through opaque open letters that have surprisingly little to say about what safe development might actually look like in practice.

For a more complete view of the problem, I preferred another (and much longer) piece of writing that came out Tuesday. Leopold Aschenbrenner, who worked on OpenAI’s superalignment team and was reportedly fired for leaking in April, published a 165-page paper laying out a path from GPT-4 to superintelligence, the dangers it would pose, and the challenge of aligning that intelligence with human intentions.

We’ve heard a lot of this before, and the hypotheses remain as untestable (for now) as they always have. But I find it difficult to read the paper and not come away believing that AI companies ought to prioritize alignment research, and that current and former employees ought to be able to talk about the risks they are seeing.

“Navigating these perils will require good people bringing a level of seriousness to the table that has not yet been offered,” Aschenbrenner concludes. “As the acceleration intensifies, I only expect the discourse to get more shrill. But my greatest hope is that there will be those who feel the weight of what is coming, and take it as a solemn call to duty.”

And if those who feel the weight of what is coming work for an AI company, it seems important that they be able to talk about what they’re seeing now, and in the open.


Casey Newton writes Platformer, a daily guide to understanding social networks and their relationships with the world. This piece was originally published on Platformer.

More Tech

Rani Molla

Amazon to lay off thousands more office workers on path to 30,000 cuts

Amazon plans to axe thousands of corporate workers next week, after laying off 14,000 back in October, according to Reuters. The new cuts could be “roughly the same” number as last time and may hit Amazon Web Services, retail, Prime Video, and human resources, the report said, citing people familiar with the matter.

The company plans to cut a total of 30,000 corporate positions as part of an effort to “streamline operations and reset its culture,” Business Insider reported separately, noting comments from CEO Andy Jassy, who said the earlier layoffs were “about culture” rather than AI-related cost cutting.

There are now more than 1 million registered “.ai” domains, contributing an estimated $70 million to Anguilla’s government revenue last year

Data from Domain Name Stat reveals that the top-level domain originally assigned to the British Overseas Territory of Anguilla passed the milestone in early January.

TikTok closes deal to operate in the US

TikTok has finally sealed its deal to establish a majority American-owned joint venture to manage its US operations.

On Friday, the social media company announced that its US arm will now be led by three “managing investors” — Silver Lake, Oracle, and MGX, each with a 15% holding — while ByteDance retains 19.9% of the business, and a swath of other investors, including Michael Dell’s family office, round out the cap table.

The joint venture will be operated by a seven-person, majority-American board of directors that includes TikTok CEO Shou Chew, with Adam Presser, previously TikTok’s head of operations and trust and safety, serving as the venture’s CEO.

Though the valuation of the new venture has not been shared, Vice President JD Vance has previously pegged the market value of TikTok’s US operations at about $14 billion, just above Snap’s market value and below Pinterest’s.

The deal ends the platform’s yearslong battle to keep operating in the US, which kicked off in earnest in August 2020 when President Donald Trump first tried to ban TikTok over national security concerns. The announcement notes that the new TikTok USDS Joint Venture LLC will “secure U.S. user data, apps and the algorithm.” Trump celebrated the deal, which has been signed off by both the US and Chinese governments, per Reuters, in a Truth Social post, saying TikTok “will now be owned by a group of Great American Patriots and Investors, the Biggest in the World.”

Rani Molla

Elon Musk says Tesla Robotaxis are operating without drivers, sending stock higher

Tesla CEO Elon Musk said that Tesla’s Robotaxis are now operating in Austin without a safety monitor. Tesla has been testing driverless cars in the area for about a month, and Musk had previously said the company would remove safety drivers by the end of 2025.

It’s unclear exactly how many of the roughly 50 Robotaxis the company operates in the area are driverless. Tesla is “starting with a few unsupervised vehicles mixed in with the broader robotaxi fleet with safety monitors, and the ratio will increase over time,” Ashok Elluswamy, Tesla’s head of AI, posted shortly after Musk. Ethan McKenna, the person behind Robotaxi Tracker, estimates it’s two or three vehicles.

What is clear is that the move is good for Tesla’s stock, which is currently up 3.5%, extending its gains after Musk’s post. Morgan Stanley said yesterday that it considers the removal of safety drivers a “precursor to personal unsupervised FSD rollout.” Unsupervised Full Self-Driving is widely considered integral to the value proposition of Tesla as a would-be autonomy company.

At the World Economic Forum earlier on Thursday, Musk said, “Self-driving cars is essentially a solved problem at this point.”
