Two Years In: How AI Law Has Reorganized

When we started this blog in May 2024, the EU AI Act had just cleared its final political hurdle, California's SB 1047 was a curiosity, and "AI agent" was a term you mostly heard from venture capitalists. Two years on, the legal terrain looks very different. This is the twenty-fifth post on this site, and it seemed like a useful moment to step back and ask: what have we actually learned about the shape of AI law as a field?

This post is more reflective than usual. It is also less authoritative — the field is still in motion, and any synthesis offered now will be partly wrong by the end of 2026. But the patterns are clear enough to write down.

What has solidified

The risk-tier structure for substantive obligations. The EU AI Act's prohibited / high-risk / limited-risk / minimal-risk pyramid is now functionally the operating model for substantive AI regulation everywhere. Colorado SB 24-205, Texas TRAIGA, and FAISIA all use risk-stratified obligation structures. The specifics differ; the architectural pattern does not. Practitioners who learned the EU AI Act framework two years ago can read the new statutes by analogy.

NIST AI RMF as the durable U.S. governance baseline. What we predicted in August 2024 has held. The RMF and its Profiles are cited as the operational expectation in state laws, federal statutes, federal sectoral guidance, and major contracts. They are voluntary in form and load-bearing in fact. Compliance programs anchored on the RMF have proved durable across political transitions; programs that were not have struggled.

Documentation as the central compliance modality. Two years ago, AI compliance was largely about restraining what models would do. Today, it is largely about documenting what models do, why, and with what evidence. The field's center of gravity has moved to documentation: training-data summaries, technical documentation packages, downstream-deployer information, system cards, evaluation reports, impact assessments, conformity assessments. This is partly because regulators have decided documentation is what they can actually inspect, and partly because it is what reasonable risk management requires.

The U.S./EU split, with the EU on offense. The Brussels effect we predicted in February 2025 has largely played out. The EU AI Act's GPAI obligations, the Article 5 prohibitions, and the high-risk-system regime have shaped product design choices globally for the largest providers. The U.S. federal regime, after the early-2025 reset and the early-2026 FAISIA enactment, exists but is thinner than the EU regime in scope and reach.

State-level fragmentation as a permanent feature. The patchwork of state AI statutes is not going to consolidate. FAISIA's narrow preemption confirmed this. The compliance load of operating across multiple states with different AI obligation regimes is now part of the cost of doing business, not a transitional inconvenience.

What has fragmented

Frontier-model regulation. The federal government has its FAISIA framework. The EU has its Article 55 framework. California is finalizing its post-SB-1047 successor. The U.K., France, and Japan each have their own approach. International coordination, after the Paris diplomatic stall, is happening only in narrow technical pockets. Frontier-model developers face four to six distinct regulatory regimes, none aligned in detail.

Copyright doctrine. Two years ago, the operative question was "does training on copyrighted material constitute infringement?" Today, the question has split into a dozen sub-questions, each receiving partial and inconsistent answers from different courts. NYT v. OpenAI may produce some clarity at summary judgment this summer, but the overall landscape is more, not less, complicated than two years ago. The licensing market that has developed in parallel is doing some of the work the litigation was supposed to do.

The "agent" question. We covered the emerging fiduciary doctrine in February. The agent question is itself fragmenting into sub-questions: agency-law characterization, products-liability treatment, fiduciary-duty doctrine, securities-law application, employment-law substitution, healthcare-licensing. None of these are converging on a unified theory. The "agent layer" of AI is going to be the messiest doctrinal terrain of the next five years.

Enforcement intensity by jurisdiction. Some authorities are aggressive (CNIL, Texas AG, SEC). Some are lighter-touch (the AI Office on substance, most state AGs in non-flagship states, much of the federal regulatory state under current leadership). Multinational practice has to navigate not just substantive variation but enforcement-intensity variation.

What we still do not know

Five questions on which we would not confidently bet in either direction:

  1. Will fair use survive in the AI training context? NYT v. OpenAI at summary judgment is going to set the tone, but a single decision will not settle the question. The factor-four market-harm analysis cuts strongly against fair use; the transformativeness analysis cuts variably. We expect a circuit split, eventual Supreme Court resolution, and meaningful uncertainty for at least three more years.
  2. How will the U.S. agent-liability question resolve? Fiduciary duties? Products liability? A sui generis framework? All three are live possibilities. We are still waiting for the first appellate decisions, and they could push the doctrine in dramatically different directions.
  3. Will the EU AI Act's high-risk-system regime work? The regime takes effect August 2 of this year. The conformity-assessment infrastructure is still incomplete, and the compliance burden is heavy. Whether the regime produces meaningful safety improvements or mostly compliance theater is a 2026-27 question.
  4. Will FAISIA's pre-deployment review actually delay any deployments? The Commerce stay authority is narrow and judicially reviewable. Whether it ever gets exercised, and whether it survives first contact with a serious frontier developer, will determine whether the whole pre-deployment notification structure has practical effect.
  5. Will common-law torts catch up to AI? Negligence, products liability, professional malpractice — all of these are slow-evolving doctrines that have absorbed AI cases adequately so far but have not yet been tested by the most challenging fact patterns. The 2026-28 case law in these areas will determine whether the common law remains a usable framework or whether AI-specific statutory frameworks displace it.

What this blog got wrong

One of the disciplines we have tried to maintain is reading our old posts back and noting where we got things wrong. Three notable ones:

What's next for this blog

We plan to continue at roughly the same cadence, with the same scope and the same contributors. The world is more complicated than it was two years ago, and the demand for careful synthesis has grown rather than shrunk.

For our readers: thank you for being part of this. Specific topic suggestions and corrections are always welcome. The next post, in early June, will take up the EU AI Act high-risk-system regime as it approaches its August 2 effective date, a topic we have written about repeatedly but one that deserves a final pre-deadline treatment.

Two years on, AI law is no longer a curiosity practice. It is a normal part of how technology lawyers, employment lawyers, IP lawyers, and litigators work. The infrastructure is built; the unsettled questions have narrowed; the shape of compliance is mostly visible. There is more work, not less, but the work is now more recognizable as work.