Newsom’s veto means AI industry free-for-all continues... for now
California’s ambitious bill aimed to put safeguards around fast-moving AI development, but Gov. Newsom vetoed it, saying the bill as written would give a “false sense of security.”
The unregulated free-for-all in AI development will continue for the foreseeable future.
Yesterday, California Governor Gavin Newsom vetoed SB 1047 (the “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act”). The bill was an ambitious attempt to place some safeguards around an incredibly fast-moving industry. It called for the creation of a “Board of Frontier Models” that would decide which AI models were covered and issue regulations.
The US Congress has failed to pass any major legislation regulating AI, so a broad California AI law would create a de facto standard for the rest of the country. California is also home to a large number of the biggest AI companies, including OpenAI, Meta, and Anthropic.
As the technology evolves at a blazing speed that far outpaces the slow lawmaking process, even defining the large, powerful (and potentially dangerous) models California seeks to regulate has proven difficult.
The bill tried to use specific computing power and cost measurements for its definition of a “covered model”:
“An artificial intelligence model trained using a quantity of computing power greater than 10^26 integer or floating-point operations, the cost of which exceeds one hundred million dollars ($100,000,000) when calculated using the average market prices of cloud compute at the start of training as reasonably assessed by the developer.”
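The quoted definition reduces to a two-part numeric test: a compute threshold and a cost threshold, both of which must be exceeded. A minimal sketch of that test (the function name, constants, and example figures below are illustrative assumptions, not language from the bill):

```python
# Sketch of SB 1047's "covered model" definition as a two-part test.
# Thresholds come from the bill text quoted above; everything else
# (names, example numbers) is hypothetical.

COMPUTE_THRESHOLD_OPS = 1e26          # > 10^26 integer or floating-point operations
COST_THRESHOLD_USD = 100_000_000      # > $100 million in training compute

def is_covered_model(training_ops: float, training_cost_usd: float) -> bool:
    """True only if a model exceeds BOTH thresholds in the quoted definition."""
    return (training_ops > COMPUTE_THRESHOLD_OPS
            and training_cost_usd > COST_THRESHOLD_USD)

# A hypothetical frontier model well past both thresholds would be covered:
print(is_covered_model(3e26, 150_000_000))   # True
# A smaller, cheaper specialized model would not be, regardless of its risk:
print(is_covered_model(5e24, 20_000_000))    # False
```

The second example illustrates the critics’ objection described below: a model can fall under both thresholds and still be deployed in critical use cases, escaping the bill’s coverage entirely.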
Critics said such an approach would allow smaller models, which are used in many critical applications, to evade regulation entirely.
In his letter explaining his veto of the bill, Newsom cited this argument:
“By focusing only on the most expensive and large-scale models, SB 1047 establishes a regulatory framework that could give the public a false sense of security about controlling this fast-moving technology. Smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB 1047 - at the potential expense of curtailing the very innovation that fuels advancement in favor of the public good.”
The AI industry knows that some form of regulation is coming, and all the biggest players are racing to position themselves to gain advantage.
Newsom made clear that the safety concerns around AI are serious and urgent, and signaled in his letter that he was open to revised legislation:
“We cannot afford to wait for a major catastrophe to occur before taking action to protect the public. California will not abandon its responsibility. Safety protocols must be adopted. Proactive guardrails should be implemented, and severe consequences for bad actors must be clear and enforceable.”