Kill Bill character “The Bride” holds her sword (Anton_Ivanov/Shutterstock)

Kill AI Bill: Controversial “kill switch” rule could become AI law

California’s SB-1047 is on a path to the governor’s desk and could threaten the AI boom.

Casey Newton

California's controversial bill to regulate the artificial intelligence industry, SB-1047, passed out of the Assembly Appropriations Committee on Thursday. If it passes the full Assembly by the end of the month, it will head to Gov. Gavin Newsom for his signature. Today let’s talk about what it could mean for Meta, Google, Anthropic, and the other leading AI companies that call California home.

If an AI causes harm, should we blame the AI — or the person who used the AI? That’s the question that runs through the debate over SB-1047, and the larger question of how to regulate the technology. 

We saw a practical example of the debate this week when X released the second generation of its AI model, Grok, which has an image generation feature similar to OpenAI’s DALL-E. X is known for its laissez-faire approach to content moderation, and the new Grok is no exception. 

Users quickly put the text-to-image generator through its paces — and, as Adi Robertson found out at The Verge, Grok will make just about anything. “Subscribers to X Premium, which grants access to Grok, have been posting everything from Barack Obama doing cocaine to Donald Trump with a pregnant woman who (vaguely) resembles Kamala Harris to Trump and Harris pointing guns,” she writes, before citing several more examples of violent or edgy images that Grok created. (“Bill Gates sniffing a line of cocaine from a table with a Microsoft logo,” for example.)

One possible response to this is to get mad at Grok for creating the image. Another, conveyed with some deft sarcasm by this X user, is to suggest we should instead get mad at the person who created the image.

This kind of question is almost as old as the web. In the 1990s, internet service providers like Prodigy and Compuserve faced lawsuits related to potentially libelous material that their users had posted. Congress included Section 230 in the Communications Decency Act to specify that tech companies in most cases cannot be held legally liable for what their users post. 

In this case, Congress ruled that we should get mad at the person rather than the technology. And we’ve been fighting about it ever since.

Tech companies would love to see a kind of Section 230 for AI, making them immune to prosecution for what their users do with their AI tools. But California’s bill takes the opposite approach, putting the onus on tech companies to assure the government that their products won’t be used to create harm.

SB-1047 has some widely accepted provisions, such as adding legal protections for whistleblowers at AI companies, and studying the feasibility of building a public AI cloud that startups and researchers could use. 

More controversially, it requires makers of large AI models to notify the government when they train a model that exceeds a certain computing threshold and costs more than $100 million. It allows the California attorney general to seek an injunction against companies that release models that the AG considers unsafe. And it requires that large models have a “kill switch” that allows developers to stop them in the case of danger.   

SB-1047 was introduced in February by Sen. Scott Wiener, D-San Francisco. Wiener had released an outline of the bill last September and says he has gathered feedback from the industry and other stakeholders ever since. The bill passed out of the Senate’s privacy committee in June, and since then tech companies have become increasingly vocal about the risks that they argue the bill presents to the nascent AI industry.

On Thursday, before the bill passed out of the Assembly’s appropriations committee, the industry won some significant concessions. The bill no longer enables the AG to sue companies for negligent safety practices before a catastrophic event occurs; it no longer creates a new state agency to monitor compliance; and it no longer requires AI labs to certify their safety testing under penalty of perjury. (AI companies had been warning loudly that the bill would result in startup founders being thrown in jail.)

The bill also no longer requires “reasonable assurance” from developers that their models won’t create harm. (Instead, they must only take “reasonable care.”) And amid widespread fears that the bill would chill the development of open-source models, the bill was amended to exempt anyone who spends less than $10 million to fine-tune an open-source AI model from the bill’s other requirements.

“We accepted a number of very reasonable amendments proposed, and I believe we’ve addressed the core concerns expressed by Anthropic and many others in the industry,” Wiener told TechCrunch. “These amendments build on significant changes to SB 1047 I made previously to accommodate the unique needs of the open source community, which is an important source of innovation.” 

Despite those changes, the bill still faces significant criticism — and not all of it comes from the tech industry. Shortly before the bill’s passage out of committee on Thursday, a group of eight Democratic members of Congress from California wrote a letter to California Gov. Gavin Newsom urging him to veto the bill in its then-current form. The lawmakers, led by Rep. Zoe Lofgren, write that they support a wide variety of AI regulations — but that the bill goes too far in asking tech companies to predict how people use their models.

“Not only is it unreasonable to expect developers to completely control what end users do with their products, but it is difficult if not impossible to certify certain outcomes without undermining the rights of end users, including their privacy rights,” they write. 

Moreover, they write, the bill could prompt AI companies to move out of California or stop releasing their AI models here. (Meta recently decided not to release multimodal AI models in Europe over similar rules, they note.)

Wiener’s bill also has some prominent backers, including two of the godfathers of AI — Geoffrey Hinton and Yoshua Bengio. Hinton and Bengio are among those who believe that we must put strong safeguards into place now before next-generation AI models arrive and potentially wreak havoc. 

But they have been countered by dozens of other academics who published a letter arguing that the bill will interfere with their academic freedom and hamper research efforts.

Ultimately, I suspect lawmakers will regulate both AI and the people who use it. But I’m sympathetic to the members of Congress who find SB-1047 to be — if nothing else — premature. Today’s models have shown no risk of creating catastrophic harm, and President Biden’s executive order from last year should provide at least some defense against worst-case scenarios in the near term if next-generation models prove to be much more capable than today’s.

And in any case, it seems preferable to regulate AI once at the national level than to encourage 50 states to experiment with their own risk models.

In the meantime, Lofgren notes, California is considering more than 30 other AI bills this term, including much more urgent and focused efforts to restrict the creation of synthetic, nonconsensual porn and to require disclosures when AI is used to create election ads.

“These bills have a firmer evidentiary basis than SB 1047,” Lofgren writes. And given the continued opposition to Wiener’s bill, I suspect they may also have higher odds of Newsom signing them into law. 


Casey Newton writes Platformer, a daily guide to understanding social networks and their relationships with the world. This piece was originally published on Platformer.

More Tech

Jon Keegan

OpenAI reportedly poaching key Apple designers, using Apple manufacturing partners for AI gadgets

New details are emerging about the mysterious AI gadgets being designed by former Apple design chief Jony Ive since OpenAI purchased his startup “io” in May.

According to a report by The Information, Ive’s team has recruited several key Apple design and hardware employees to work on the gadgets. The Information reported some details of the devices:

“One of the products OpenAI has talked to suppliers about making resembles a smart speaker without a display, the people said. OpenAI has also considered building glasses, a digital voice recorder and a wearable pin, and is targeting late 2026 or early 2027 for the release of its first devices, one of the people said.”

OpenAI is also turning to Apple’s Chinese manufacturing partners to build the products, having signed contracts with Luxshare, and has been in talks with Goertek, per the report.

Rani Molla

Zuckerberg: AI might be a bubble but “misspending a couple of hundred billion” is worth it to achieve superintelligence

“It’s quite possible” that AI is a bubble, Meta CEO Mark Zuckerberg told tech journalist Alex Heath, formerly of The Verge, on his new podcast, “Access,” and for his newsletter, Sources. That isn’t stopping Zuckerberg’s social media company from going all in on AI in hopes of achieving superintelligence, aka AI that’s smarter than humans.

“If we end up misspending a couple of hundred billion dollars, I think that that is going to be very unfortunate, obviously,” said Zuckerberg, who’s shelling out $600 billion on US data centers and infrastructure through 2028. “But what I’d say is I actually think the risk is higher on the other side.”

“The risk, at least for a company like Meta, is probably in not being aggressive enough rather than being somewhat too aggressive,” he added.


Grok has 64 million monthly users while ChatGPT has 700 million weekly users

Daddy, it seems, is very much not home.

CEO Elon Musk spent the majority of his time at xAI this summer rather than at Tesla, where he recently claimed to have shifted his focus, The New York Times reports. The piece is full of other great details on his AI startup — read it all — but here are some notable tidbits from the story and from one of its reporters, Kate Conger, who shared extras on social media:

  • xAI’s Grok has 64 million monthly users, compared with OpenAI’s ChatGPT, which has about 700 million weekly users. Musk is currently suing OpenAI and Apple over what he says is unfavorable positioning on the iOS App Store.

  • Musk wanted Grok to be less woke and more popular, a command that led it to post antisemitic remarks and call itself “MechaHitler.”

  • Musk plans on building a Microsoft competitor called “Macrohard,” something he said he’s painting on the roof of xAI’s new Memphis data center.

  • xAI’s execs said after Grok 4, the next model will be called Grok 420.

UPDATE (September 19): Corrected headline of piece to reflect ChatGPT has 700 million weekly users, not daily.

Sherwood Media, LLC produces fresh and unique perspectives on topical financial news and is a fully owned subsidiary of Robinhood Markets, Inc., and any views expressed here do not necessarily reflect the views of any other Robinhood affiliate, including Robinhood Markets, Inc., Robinhood Financial LLC, Robinhood Securities, LLC, Robinhood Crypto, LLC, or Robinhood Money, LLC.