Post-Election AI Policy Outlook: Federal Pivot, State Acceleration
The 2024 federal election produced a unified Republican government taking office in January. We are not in the business of political prediction here, but we are in the business of legal prediction, and the implications for AI policy are clear enough to flag now while clients are doing 2025 planning.
This post is structured around three theses: the Biden Executive Order is going to be rescinded; federal AI rulemakings already in flight will mostly stall or be redirected; and state-level AI regulation will accelerate, both in volume and in litigation. None of these are particularly bold predictions individually. The combination matters because it changes the practical question of where the binding constraints on AI development and deployment will come from in 2025-26.
The Biden EO is going to be rescinded
Executive Order 14110, signed October 30, 2023, is the centerpiece of the current federal AI policy posture. It directs more than fifty agency actions, establishes the dual-use foundation model reporting regime under the Defense Production Act, and creates the AI Safety Institute at NIST.
The President-elect's transition team and 2024 platform have been explicit that EO 14110 will be rescinded "on day one." That is plausible. The rescission itself takes a stroke of a pen; the harder question is what happens to the agency work product that has already been produced.
Our read of how that lands:
- The dual-use foundation model reporting regime under DPA authority is rescindable, but the reporting it requires is already operating. The incoming administration will likely retain some version of compute-threshold reporting, reframed in national security terms.
- The AI Safety Institute at NIST will probably survive in name but lose its safety-evaluation focus and pivot toward standards work. Its statutory authority (the AI in Government Act and the National AI Initiative Act) is independent of the EO.
- OMB M-24-10 governing federal agency AI use is on shakier ground but provides operational structure that the new administration may find easier to keep than to rebuild.
- The voluntary commitments the labs made in 2023 are likely to be allowed to lapse quietly, without replacement.
Federal rulemakings: stall or redirect
Several federal rulemakings touching on AI are mid-flight:
- The FTC's commercial surveillance and data security ANPR has AI-relevant elements. Under a new chair, this will probably be narrowed or withdrawn.
- HHS OCR's Section 1557 nondiscrimination rule applies to AI decision-support tools used in healthcare settings. The rule is being challenged in court, and the new administration will probably defend it weakly, letting it be struck down or narrowed.
- The CFPB's proposed automated valuation model rule under the Dodd-Frank Act is essentially complete. It may be allowed to finalize but will probably not be aggressively enforced.
- The DOL's overtime rule carried AI-adjacent implications for workplace monitoring; with the rule vacated by a federal court in November 2024, those are now moot.
The exception worth flagging: defense and national-security AI work will almost certainly accelerate. The DoD's AI initiatives, the IC's AI investments, and CISA's AI-related cybersecurity work all align with the incoming administration's stated priorities. If you advise clients in those sectors, expect more activity, not less.
State acceleration is the bigger story
The federal pivot creates a vacuum that state legislators and AGs are already preparing to fill. Three things to watch:
- The 2025 legislative session. Twenty-plus states have AI legislation drafted or pre-filed. Texas (TRAIGA), Connecticut, Virginia, New York (RAISE Act), and Illinois are the ones we are watching most closely. The Colorado model — risk-based, AG-enforced, NIST-anchored — is going to be the dominant template. California will pass another frontier-model bill, probably in a different shape than SB 1047 given the working group's likely recommendations.
- State AG enforcement of existing law. Multiple state AGs have already signaled that consumer-protection (UDAP) statutes and existing anti-discrimination statutes reach AI deployments. The Texas AG, for example, has been particularly active on AI healthcare claims. Expect more enforcement of general-purpose statutes with AI-specific allegations, even in states that have not yet passed AI-specific legislation.
- The patchwork problem becomes acute. Multistate compliance is going to get noticeably harder in 2025. The differences between Colorado, the eventual Texas TRAIGA, and California's expected successor to SB 1047 will be more than cosmetic. Practitioners advising national operators should start preparing for parallel-compliance work.
EU divergence
Without federal coordination, U.S. policy will drift further from the EU AI Act as it phases in. We do not see this as a near-term enforcement risk for U.S. companies — the EU AI Act applies based on EU market presence, not on home-country regulation — but it does mean that the U.S./EU diplomatic architecture around AI policy will be less productive in 2025-26 than it was in 2023-24. The TTC has already lost momentum; we expect that to continue.
A practical implication: the EU's code of practice for general-purpose AI models, including those classified as systemic-risk under Article 51, is being negotiated this fall and winter and will have to land in a context where U.S. labs face less domestic pressure than they did six months ago. That changes the negotiating dynamic and could push the code in either direction, depending on which labs choose to engage.
What this means for compliance programs
Our advice to clients planning for 2025:
- Do not let go of the AI governance build-out. The drivers will shift from federal to state and contractual, but they will not disappear.
- Treat NIST AI RMF compliance as the durable baseline. It is the point of consensus that will survive the political pivot.
- If you operate in multiple states, build a state-by-state regulatory tracker now. The compliance overhead is real and growing.
- Keep your EU AI Act compliance work on schedule. The 2025 deadlines do not move.
The next eighteen months are going to look messier than the last eighteen, but the messiness will be dispersed across more jurisdictions, not concentrated in fewer.