SB 1047 Vetoed: What Newsom's Decision Means for AI Governance

Governor Newsom vetoed SB 1047 on Sunday, September 29 — one day before the legislative deadline. The veto message is short, three pages, and worth reading in full. It is also more nuanced than either side of the debate has so far acknowledged.

What the veto message actually says

Newsom does not reject frontier-model regulation in principle. The opposite, in fact: the message acknowledges "the urgency to address the potential risks of advanced artificial intelligence," and notes that Newsom signed sixteen other AI bills this session, covering deepfakes, healthcare AI, training-data disclosure, and AI-generated child sexual abuse material, among other topics. (We will cover several of these in coming months.)

The veto rests on three specific objections to SB 1047 as drafted:

  1. Compute-threshold scoping is the wrong design choice. The message argues that limiting coverage to the largest, most expensive models could give "the public a false sense of security" by ignoring smaller specialized systems used in high-risk contexts. A model making decisions about critical infrastructure should be regulated regardless of whether it cost $100M to train.
  2. Application context, not technological threshold. Related but distinct: the message endorses regulating AI based on the risk of the deployment, not the size of the underlying model.
  3. Empirical analysis should come first. The message commits the state to working with experts including Fei-Fei Li, Mariano-Florentino (Tino) Cuéllar, and Jennifer Tour Chayes to develop "workable guardrails" based on empirical analysis. That is a process commitment, not a substantive one.

Notably absent from the veto message: any endorsement of the industry's stronger preemption arguments, any suggestion that AI safety regulation belongs only at the federal level, and any commitment to oppose a future bill of similar scope.

The political read

This was the safest available veto. Newsom is keeping his options open. By vetoing on design grounds rather than on principle, he avoids alienating either the safety-focused AI community or the developer/VC community. The expert working group is a face-saving structure: it can quietly endorse a 2025 bill that the Governor can then sign.

That said, the design objections are not all face-saving. The compute-threshold critique is one that thoughtful observers across the spectrum have made — Stuart Russell, for one, has long argued that capability matters more than training compute. If the 2025 successor to SB 1047 abandons compute thresholds entirely and instead regulates based on application context, it will be a meaningfully different bill.

What this means for state-law fragmentation

For practitioners, the immediate consequence of the veto is that California will not be the first state with a frontier-model regulatory regime. Colorado's SB 24-205 (effective February 2026) covers algorithmic discrimination but not frontier-model safety. New York's pending RAISE Act, which closely mirrors SB 1047, has been moving more slowly but could now find renewed political space. Several other states have AI safety bills in early stages that will react to the vacuum.

The risk is the obvious one: a fragmented patchwork of state frontier-model regimes, each with slightly different definitions, thresholds, and obligations. The federal government has the institutional structure to head this off, but the political will is uncertain — and may shrink further depending on the November election.

For developers: where this leaves you

If you advise frontier-model developers, the SB 1047 veto changes the immediate compliance posture but not the strategic one. The relevant frameworks for now are:

  1. The EU AI Act's obligations for general-purpose models, which apply to any developer serving the EU market regardless of what California does.
  2. The federal Executive Order on AI, whose reporting requirements for large training runs already reach the class of models SB 1047 targeted.
  3. The NIST AI Risk Management Framework, which is voluntary but increasingly referenced in procurement contracts.

In practice this means that the safety measures SB 1047 would have required (pre-deployment evaluations, red-teaming, incident reporting) should still be built. They will be required by the EU AI Act, by the next California bill, by procurement contracts, and probably by tort doctrine within a few years. The veto is a delay, not a reversal.

What we are watching

Three things between now and the end of the year:

  1. The composition and charge of the expert working group, and whether it is built to produce a concrete legislative proposal or simply to buy time.
  2. Whether a successor bill surfaces for the 2025 session, and whether it keeps compute thresholds or shifts to application-context scoping.
  3. The November election, which will shape both the odds of federal action and the incentive for other states to move first.

The frontier-model regulation question is not closed. It is paused.