Frontier AI Safety Frameworks Become Law: A Read on the New Federal Statute

The Frontier Artificial Intelligence Safety and Innovation Act (FAISIA) was signed into law on April 1. It is the first federal statute to impose substantive obligations on developers of frontier-scale AI models. Compute thresholds, third-party evaluations, and mandatory pre-deployment notifications all make an appearance. The bill we have been tracking since late 2025 has landed, with several substantive changes from the version that left the Senate Commerce Committee.

This post unpacks what is actually new in FAISIA, what is borrowed from existing frameworks (state, EU, and voluntary), and where the inevitable preemption fights will land.

Scope: who is covered

FAISIA applies to "covered AI developers" — entities that train models exceeding either of two thresholds:

The thresholds are disjunctive — meeting either one triggers coverage — and are calibrated above both the EU AI Act's 10^25 systemic-risk threshold and the original SB 1047 thresholds. As of the date of enactment, fewer than ten models in production worldwide cross either threshold. The thresholds are statutorily indexed, and the Department of Commerce may adjust them via rulemaking subject to congressional review.

Two notable scope features:

Substantive obligations

FAISIA imposes five categories of obligation on covered developers:

  1. Safety determination. Before training a covered model, the developer must complete a safety determination addressing identified high-risk capabilities (CBRN risk, cybersecurity risk, autonomous-replication risk, and a general "catastrophic risk" category to be defined by NIST). The determination must be documented and submitted to the Department of Commerce. This is roughly analogous to the Article 55 systemic-risk evaluation framework under the EU AI Act, but with more specific risk categories and pre-training rather than pre-deployment timing.
  2. Third-party evaluation. Before deployment, the developer must arrange for a qualified third-party evaluator to assess the model against the safety determination's identified risks. Qualifications for evaluators will be established by NIST in implementing regulations. The evaluation results are submitted confidentially to the Department of Commerce.
  3. Pre-deployment notification. The developer must notify the Department of Commerce at least 30 days before deploying a covered model. The notification triggers a review window during which Commerce can request additional information; deployment can proceed at the end of the window unless Commerce issues a specific stay, which it may issue only on grounds of unaddressed catastrophic-risk concerns and which is subject to expedited judicial review.
  4. Incident reporting. Covered developers must report "serious safety incidents" (defined as actual or near-miss events meeting specified thresholds) to the Department of Commerce within 72 hours of identification.
  5. Whistleblower protections. Covered developers must establish and maintain whistleblower processes for employees and contractors to report safety concerns internally and externally without retaliation. This is one of the more substantive ground-up provisions of the statute.
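The two statutory clocks above — the 30-day pre-deployment notification window and the 72-hour incident-reporting window — are simple to state but easy to get wrong in compliance tooling. A minimal sketch follows; the function names are mine, and I am assuming both windows run in continuous calendar time rather than business days, which the statute's actual counting rules (and any implementing regulations) would control.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical helpers for the two FAISIA clocks described above.
# Assumes calendar time, not business days -- the statute's actual
# counting rules would govern in practice.

PRE_DEPLOYMENT_WINDOW = timedelta(days=30)    # pre-deployment notification window
INCIDENT_REPORT_WINDOW = timedelta(hours=72)  # serious-incident reporting window

def earliest_deployment(notified_at: datetime) -> datetime:
    """Earliest permissible deployment time, absent a Commerce stay."""
    return notified_at + PRE_DEPLOYMENT_WINDOW

def incident_report_deadline(identified_at: datetime) -> datetime:
    """Deadline to report a serious safety incident to Commerce."""
    return identified_at + INCIDENT_REPORT_WINDOW

# Example: notify on March 1, identify an incident on March 5 at 09:00 UTC.
notice = datetime(2026, 3, 1, tzinfo=timezone.utc)
print(earliest_deployment(notice).date())  # 2026-03-31

incident = datetime(2026, 3, 5, 9, 0, tzinfo=timezone.utc)
print(incident_report_deadline(incident))  # 2026-03-08 09:00 UTC
```

The point of even this trivial sketch: anchor every clock to a timezone-aware timestamp of the triggering event ("identification," not occurrence, for incidents), since the 72-hour window runs from when the developer identifies the incident.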

Notably absent: a "kill switch" obligation analogous to SB 1047. The Senate stripped this provision in markup on the grounds that it was operationally infeasible for open-weights models and otherwise duplicative of the safety-determination obligations.

What's borrowed

FAISIA's structure draws heavily from three sources:

What's genuinely new

Three pieces of FAISIA are genuinely new at the federal level:

  1. The 30-day pre-deployment review window. No prior U.S. federal AI statute has imposed pre-deployment review of any kind. The Commerce stay authority is narrow and judicially reviewable, but this is the first time the federal government has had formal authority to delay an AI model deployment.
  2. The 72-hour serious-incident reporting timeline. This is faster than the EU AI Act's analogous obligation, which has a 15-day baseline. Building reporting infrastructure to meet this timeline will be a meaningful 2026 project for covered developers.
  3. The whistleblower regime. The provisions go beyond Sarbanes-Oxley-style protections by including specific safety-relevant disclosure rights and by mandating that covered developers maintain internal channels with specified procedural features. This is the kind of provision that can have outsized effect even at small scale.

Preemption: what FAISIA does and does not displace

The preemption provision is narrower than industry advocates wanted and broader than state-rights advocates wanted. As enacted:

For practitioners, the preemption analysis becomes a per-state, per-statute question rather than a clean displacement.

Implementation timeline

FAISIA's effective date is staggered:

For covered developers, the relevant near-term work is the whistleblower-program implementation (the operational complexity is real) and the incident-reporting infrastructure build. The substantive obligations are far enough out that careful design work is feasible.

What this means for the international landscape

FAISIA changes the U.S./EU coordination story. With both jurisdictions now having operational frontier-model safety regimes — broadly compatible in shape, though differing in details — the international diplomatic architecture has more material to work with than it did at Paris in February 2025. Whether that produces meaningful coordination is a different question; the political will remains uncertain. But the legal scaffolding is now in place for U.S./EU mutual recognition arrangements that could considerably reduce duplicative compliance burden for frontier developers.

For non-frontier developers, FAISIA changes nothing. The thresholds are calibrated to capture only the largest models. The vast majority of AI development continues to be governed by sectoral and state-level regulation. The regulatory load is not broadly federalized; only a narrow slice of it is concentrated at the federal level.

Bottom line

FAISIA is a meaningful federal statute, but a much smaller deal in practical scope than the headlines suggest. For frontier developers, the operational implementation work is significant. For everyone else, FAISIA is mostly background — a structural piece of the U.S. AI legal landscape that will affect them indirectly through standard-setting and through the model-evaluation infrastructure that the Act will produce. We will return to FAISIA implementation periodically; the next milestone is the NIST rulemaking.