Frontier AI Safety Frameworks Become Law: A Read on the New Federal Statute
The Frontier Artificial Intelligence Safety and Innovation Act (FAISIA) was signed into law on April 1. It is the first federal statute to impose substantive obligations on developers of frontier-scale AI models. Compute thresholds, third-party evaluations, and mandatory pre-deployment notifications all make an appearance. The bill we have been tracking since late 2025 has landed, with several substantive changes from the version that left the Senate Commerce Committee.
This post unpacks what is actually new in FAISIA, what is borrowed from existing frameworks (state, EU, and voluntary), and where the inevitable preemption fights will land.
Scope: who is covered
FAISIA applies to "covered AI developers" — entities that train models exceeding either of two thresholds:
- 10^26 integer or floating-point operations of training compute (the IOFLOP threshold), or
- $500 million in training cost (the dollar threshold).
The thresholds are disjunctive — meeting either one triggers coverage — and are calibrated above both the EU AI Act's 10^25 systemic-risk threshold and the original SB 1047 thresholds. As of the date of enactment, fewer than ten models in production worldwide cross either threshold. The thresholds are statutorily indexed, and the Department of Commerce can adjust them via rulemaking subject to congressional review.
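For concreteness, here is a minimal sketch of the disjunctive coverage test in Python. The function name and constants are ours, not the statute's — the actual analysis turns on statutory definitions of training compute and training cost, and the thresholds themselves can move via Commerce rulemaking.

```python
# Minimal sketch of FAISIA's disjunctive coverage test. Names are
# illustrative, not statutory; treat the constants as the
# enactment-date values, subject to adjustment by rulemaking.

IOFLOP_THRESHOLD = 1e26      # integer or floating-point operations of training compute
COST_THRESHOLD_USD = 500e6   # training-cost threshold in dollars

def is_covered(training_ops: float, training_cost_usd: float) -> bool:
    """Coverage triggers if EITHER threshold is exceeded."""
    return training_ops > IOFLOP_THRESHOLD or training_cost_usd > COST_THRESHOLD_USD

# A run at 3e25 ops but $600M in training cost is covered on the
# dollar prong alone; a 2e26-op run is covered regardless of cost.
assert is_covered(3e25, 600e6)
assert is_covered(2e26, 100e6)
assert not is_covered(5e25, 200e6)
```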
Two notable scope features:
- Substantial fine-tunes that exceed either threshold for the fine-tuning step alone are themselves covered models. The fine-tuner becomes the covered developer for that model.
- Open-weights distribution does not affect coverage, but the obligations attach only to the original training and deployment events. Downstream redistribution does not trigger fresh obligations.
Substantive obligations
FAISIA imposes five categories of obligation on covered developers:
- Safety determination. Before training a covered model, the developer must complete a safety determination addressing identified high-risk capabilities (CBRN risk, cybersecurity risk, autonomous-replication risk, and a general "catastrophic risk" category to be defined by NIST). The determination must be documented and submitted to the Department of Commerce. This is roughly analogous to the Article 55 systemic-risk evaluation framework under the EU AI Act, but with more specific risk categories and pre-training rather than pre-deployment timing.
- Third-party evaluation. Before deployment, the developer must arrange for a qualified third-party evaluator to assess the model against the safety determination's identified risks. Qualifications for evaluators will be established by NIST in implementing regulations. The evaluation results are submitted confidentially to the Department of Commerce.
- Pre-deployment notification. The developer must notify the Department of Commerce at least 30 days before deploying a covered model. The notification triggers a review window during which Commerce can request additional information; deployment can proceed at the end of the window unless Commerce issues a specific stay (which it can do only on grounds of unaddressed catastrophic-risk concerns and is subject to expedited judicial review). A sketch of the window arithmetic appears after this list.
- Incident reporting. Covered developers must report "serious safety incidents" (defined as actual or near-miss events meeting specified thresholds) to the Department of Commerce within 72 hours of identification.
- Whistleblower protections. Covered developers must establish and maintain whistleblower processes for employees and contractors to report safety concerns internally and externally without retaliation. This is one of the more substantive ground-up provisions of the statute.
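To make the notification mechanics concrete, here is a minimal sketch of the 30-day clock, with hypothetical function and parameter names. On the statutory text as described above, an information request does not itself extend the window; only a formal stay blocks deployment.

```python
from datetime import datetime, timedelta

# Minimal sketch of the 30-day pre-deployment clock. Names are
# hypothetical; the statute requires notice at least 30 days before
# deployment, and deployment may proceed at the close of the window
# absent a Commerce stay.

REVIEW_WINDOW = timedelta(days=30)

def earliest_deployment(notified_at: datetime, stay_issued: bool = False) -> datetime | None:
    """Return the earliest permissible deployment time, or None if a stay blocks it."""
    if stay_issued:
        # Deployment blocked pending resolution; expedited judicial
        # review of the stay is available.
        return None
    return notified_at + REVIEW_WINDOW
```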
Notably absent: a "kill switch" obligation analogous to SB 1047's full-shutdown requirement. The Senate stripped this provision in markup on the grounds that it was operationally infeasible for open-weights models and otherwise duplicative of the safety-determination obligations.
What's borrowed
FAISIA's structure draws heavily from three sources:
- The EU AI Act's GPAI systemic-risk regime. The third-party evaluation, the structured documentation, and the incident-reporting framework all parallel Article 55. This is intentional; one of the bill's stated goals is to support international coordination on frontier-model governance.
- Voluntary lab commitments. The pre-training safety determination tracks the responsible-scaling-policy framework that several major labs adopted voluntarily in 2023-25. The statute essentially codifies a version of these voluntary commitments.
- NIST AI RMF and the GAI Profile. The risk-category enumeration in the safety determination is anchored in the GAI Profile we covered in August 2024. Compliance with the GAI Profile is one path to satisfying the safety-determination obligation, though not the only one.
What's genuinely new
Three pieces of FAISIA are genuinely new at the federal level:
- The 30-day pre-deployment review window. No prior U.S. federal AI statute has imposed pre-deployment review of any kind. The Commerce stay authority is narrow and judicially reviewable, but this is the first time the federal government has held formal authority to delay an AI model deployment.
- The 72-hour serious-incident reporting timeline. This is faster than the EU AI Act's analogous obligation, which has a 15-day baseline. Standing up reporting infrastructure to meet this timeline will be a meaningful 2026 project for covered developers; a sketch of what an incident record might need to track follows this list.
- The whistleblower regime. The provisions go beyond Sarbanes-Oxley-style protections by including specific safety-relevant disclosure rights and by mandating that covered developers maintain internal channels with specified procedural features. This is the kind of provision that can have outsized effect even at small scale.
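As a thought experiment on that infrastructure build, here is a minimal sketch of an internal incident record sized to the 72-hour clock. Every field name here is hypothetical; the implementing regulations will define what a report must actually contain.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

# Minimal sketch of an internal serious-safety-incident record. All
# field names are hypothetical, pending implementing regulations. Note
# that the 72-hour clock runs from identification of the incident, not
# from its occurrence.

@dataclass
class IncidentRecord:
    model_id: str
    identified_at: datetime    # starts the 72-hour reporting clock
    description: str
    is_near_miss: bool         # the statutory definition covers actual and near-miss events
    risk_categories: list[str] = field(default_factory=list)  # e.g. CBRN, cyber

    @property
    def report_deadline(self) -> datetime:
        """Deadline for the report to the Department of Commerce."""
        return self.identified_at + timedelta(hours=72)
```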
Preemption: what FAISIA does and does not displace
The preemption provision is narrower than industry advocates wanted and broader than state-rights advocates wanted. As enacted:
- FAISIA preempts state laws that impose safety-determination, third-party-evaluation, or pre-deployment-notification requirements on covered models. This effectively forecloses an SB 1047-shaped state successor, though California's expert working group has been working on something else.
- FAISIA does not preempt state algorithmic-discrimination laws (Colorado SB 24-205, Texas TRAIGA), state disclosure laws (California AB 2013, SB 942), or state common-law tort theories.
- FAISIA does not preempt sectoral federal regulation (FDA AI/ML, CFPB ADM, EEOC employment AI, etc.).
- FAISIA explicitly preserves state insurance regulation and state consumer-protection enforcement under existing UDAP authority.
For practitioners, the preemption analysis becomes a per-state, per-statute question rather than a clean displacement.
Implementation timeline
FAISIA's effective dates are staggered:
- Whistleblower provisions: effective immediately upon enactment.
- Incident-reporting obligations: effective 180 days after enactment.
- Substantive obligations (safety determination, third-party evaluation, pre-deployment notification): effective 18 months after enactment, with NIST implementing regulations to be finalized within 12 months.
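For planning purposes, the staggered dates can be laid out against the enactment date. This is a back-of-the-envelope sketch with day-count approximations; the statutory periods likely run in calendar months, so actual deadline computation will follow the statute's own rules.

```python
from datetime import date, timedelta

# Back-of-the-envelope sketch of FAISIA's staggered effective dates,
# parameterized on the enactment date. Day counts approximate the
# statutory periods and are illustrative only.

def faisia_milestones(enactment: date) -> dict[str, date]:
    return {
        "whistleblower_provisions": enactment,                       # immediate on enactment
        "incident_reporting": enactment + timedelta(days=180),       # 180 days
        "nist_regulations_due": enactment + timedelta(days=365),     # ~12 months
        "substantive_obligations": enactment + timedelta(days=548),  # ~18 months
    }
```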
For covered developers, the relevant near-term work is the whistleblower-program implementation (the operational complexity is real) and the incident-reporting infrastructure build. The substantive obligations are far enough out that careful design work is feasible.
What this means for the international landscape
FAISIA changes the U.S./EU coordination story. With both jurisdictions now having operational frontier-model safety regimes — broadly compatible in shape, though differing in details — the international diplomatic architecture has more material to work with than it did at Paris in February 2025. Whether that produces meaningful coordination is a different question; the political will remains uncertain. But the legal scaffolding is now in place for U.S./EU mutual recognition arrangements that could considerably reduce duplicative compliance burden for frontier developers.
For non-frontier developers, FAISIA changes nothing. The thresholds are calibrated to capture only the largest models. The vast majority of AI development continues to be governed by sectoral and state-level regulation. The regulatory load has not been federalized across the board; a particular slice of it is now concentrated at the federal level.
Bottom line
FAISIA is a meaningful federal statute and a much smaller deal, in scope of impact, than the headlines suggest. For frontier developers, the operational implementation work is significant. For everyone else, FAISIA is mostly background — a structural piece of the U.S. AI legal landscape that will affect them indirectly through standard-setting and through the model-evaluation infrastructure that the Act will produce. We will return to FAISIA implementation periodically; the next milestone is the NIST implementing rulemaking, due within 12 months of enactment.