The DEEPFAKES Accountability Act, Take Three
After two failed runs in prior Congresses, a slimmed-down DEEPFAKES Accountability Act passed the House in February and is now in Senate markup. The 2026 version drops the criminal provisions that doomed earlier iterations and instead leans on disclosure obligations and a private right of action. This post compares the three versions side by side and assesses what is likely to make it through.
A short legislative history
The DEEPFAKES Accountability Act has been introduced by Rep. Yvette Clarke in the 116th, 118th, and 119th Congresses, with predecessor versions going back to 2018. The 2019 version (introduced as H.R. 3230) was a comprehensive disclosure-and-criminal regime. It did not advance. The 2023 version (H.R. 5586) retained the criminal provisions but introduced more granular distinctions across deepfake types. It also did not advance. The 2026 version (H.R. 1244) is the most successful iteration yet: it cleared the House on a 271-160 vote with substantial bipartisan support, and it is the one we have to take seriously.
What the 2026 bill does
Three substantive provisions:
Disclosure obligations. The bill requires that "advanced technological false personations" — defined as audiovisual records that have been substantially edited or generated by AI in ways that depict an identifiable person doing something they did not do — bear an "irremovable visual disclosure" and "embedded digital watermark" identifying them as such. The technical specifications would be set by NIST in implementing regulations. Disclosures must be present from the point of creation; downstream removal is prohibited.
Sex-related deepfake provisions. The bill makes it unlawful to create or distribute non-consensual intimate-imagery deepfakes. This provision survives largely intact from the 2019 version but is narrower in some respects (clearer scienter requirements) and broader in others (it covers attempts).
Private right of action. The bill creates a federal private right of action for individuals depicted in non-consensual deepfakes, allowing recovery of actual damages, statutory damages of up to $150,000 per work, attorney's fees, and injunctive relief. This is the major affirmative addition over earlier versions.
What is gone: the criminal provisions. The 2019 version included felony criminal penalties for production and distribution of certain deepfake categories. The 2023 version retained criminal penalties only for non-consensual intimate-imagery deepfakes. The 2026 version retains no criminal penalties; the sex-deepfake prohibition is enforced civilly, through the Attorney General and private litigants.
Why this version is moving
Three reasons:
- The criminal-provision drag is gone. The criminal provisions in earlier versions drew opposition from civil-liberties advocates concerned about prosecutorial overreach and from the platforms concerned about the secondary-liability implications. The disclosure-and-civil-liability structure draws much narrower opposition.
- The state-law landscape has matured. By February 2026, all 50 states have some form of deepfake legislation in place. The patchwork has become difficult enough to navigate that industry support for federal preemption has grown. The 2026 bill includes a partial-preemption provision that displaces state criminal regimes (mostly redundant under the federal civil regime) while leaving state civil regimes in place.
- The technical infrastructure now exists. The C2PA Content Credentials standard, in widespread use by major content platforms by 2025, makes the embedded-watermark requirement operationally feasible in a way it was not in 2019.
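For readers curious about the mechanics, the core idea behind content credentials is binding a disclosure claim to the exact bytes of a file, so that the claim cannot be quietly stripped or the content quietly altered. Here is a deliberately simplified Python sketch of that binding. The real C2PA format is a signed JUMBF/CBOR structure with certificate chains, not plain JSON, and every field name below is my own hypothetical illustration:

```python
import hashlib
import json

def make_provenance_manifest(media_bytes: bytes, generator: str) -> str:
    """Build a simplified provenance record for a generated media file.

    Illustrative sketch only: real C2PA Content Credentials are
    cryptographically signed; this uses a bare content hash and
    hypothetical field names.
    """
    manifest = {
        "claim": "ai_generated",  # the disclosure flag itself
        "generator": generator,   # tool that produced the content
        "content_hash": hashlib.sha256(media_bytes).hexdigest(),
    }
    return json.dumps(manifest, sort_keys=True)

def verify_manifest(media_bytes: bytes, manifest_json: str) -> bool:
    """Check that the manifest's hash still matches the media bytes,
    i.e. that the content was not altered after the disclosure was
    attached."""
    manifest = json.loads(manifest_json)
    return manifest.get("content_hash") == hashlib.sha256(media_bytes).hexdigest()
```

Any edit to the underlying bytes breaks the hash, which is the property a "downstream removal is prohibited" rule relies on; C2PA adds signing so the manifest itself cannot be forged.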
Comparing the three versions
The 2019 version (H.R. 3230, 116th Congress):
- Disclosure obligations on producers of "advanced technological false personations."
- Felony criminal penalties for failure to disclose, with enhanced penalties for distributions intended to incite violence, harm a political process, or facilitate fraud.
- Limited civil cause of action for individuals depicted.
- Did not advance.
The 2023 version (H.R. 5586, 118th Congress):
- Tightened scienter requirements on the criminal provisions.
- Distinguished between political, sex-related, and other deepfakes for differential treatment.
- Added safe harbor for platforms that implement specified disclosure-detection systems.
- Did not advance.
The 2026 version (H.R. 1244, 119th Congress):
- Disclosure obligations only — no criminal penalties for non-disclosure.
- Sex-deepfake prohibitions retained, civilly enforced.
- Robust private right of action with statutory damages.
- Partial preemption of state law.
- Platform safe harbor for compliance with NIST technical specifications.
- Passed House, in Senate markup.
Senate markup: what is likely to change
The Senate Commerce Committee has scheduled markup for early April. Based on conversations with people close to the process, these are the changes most likely to land:
- Narrowing of "advanced technological false personation." The House definition is broad enough to capture some legitimate satire, parody, and journalistic uses. Senate Republicans in particular have flagged this; expect explicit safe harbors for clearly identified satire and journalistic context.
- Strengthening of the platform safe harbor. The current safe harbor is tied to compliance with NIST specifications that don't yet exist. Expect either a more concrete safe harbor or a longer effective-date runway to allow specifications to develop.
- Expansion of the preemption provision. Industry stakeholders are pressing for broader preemption of state private-right-of-action regimes; this will be contested, but at least some expansion may get through.
- Clarification of the platform / generator distinction. The current bill imposes obligations on the creator of deepfake content but not clearly on the AI tool that produced it. Senate amendments may impose additional disclosure obligations on the AI tool providers, which the major labs are mostly receptive to.
I expect the bill to clear Senate Commerce by late spring and reach the Senate floor in summer. Final passage is plausible but not certain. If it passes, the operational effective date will probably be twelve to eighteen months after enactment to allow the NIST rulemaking and platform implementation work.
How it interacts with the state regimes
The partial-preemption framework matters. As drafted, the bill preempts:
- State criminal regimes addressing the same conduct (with carve-outs for specific narrow categories).
- State labeling regimes that differ from the federal labeling specifications.
The bill does not preempt:
- State civil causes of action, including state private rights of action.
- State election-deepfake regimes (at least not directly; the carve-out language is currently muddled).
- State right-of-publicity / right-of-personality claims, which several states have used as a vehicle for deepfake liability.
For practitioners, the multi-layer compliance question is going to remain. The federal regime, if it lands, will be one layer; state regimes will continue to be another. The tactical implication: a single federal compliance posture will satisfy most of the labeling-related requirements, but state private-right-of-action exposure will remain. Plan accordingly.
For platforms and AI providers
The bill's NIST-anchored safe harbor will be the practical center of gravity for compliance. Platforms that implement C2PA-compatible content credentials and track the implementing regulations will have a clear operational path; platforms that have not built that infrastructure will face harder choices. AI tool providers — particularly providers of image and video generation tools — face an emerging expectation that they will provide watermarking or other provenance signals that downstream platforms can rely on.
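To make the obligation structure concrete, here is a minimal Python sketch of the disclosure logic as this post summarizes it: a record triggers the requirement when it is AI-generated or substantially edited and depicts an identifiable real person, and it complies only when it carries both disclosure mechanisms. The class and function names are my own illustration, not anything in the bill text:

```python
from dataclasses import dataclass

@dataclass
class MediaRecord:
    ai_generated: bool          # substantially edited or generated by AI
    depicts_real_person: bool   # identifiable person shown doing something they did not do
    has_visual_disclosure: bool
    has_embedded_watermark: bool

def needs_disclosure(r: MediaRecord) -> bool:
    """An 'advanced technological false personation', as the post
    paraphrases the House definition, must carry both disclosure
    mechanisms."""
    return r.ai_generated and r.depicts_real_person

def is_compliant(r: MediaRecord) -> bool:
    """True when the record falls outside the definition, or carries
    both the visual disclosure and the embedded watermark."""
    if not needs_disclosure(r):
        return True
    return r.has_visual_disclosure and r.has_embedded_watermark
```

The expected Senate amendments would complicate this picture (satire and journalism safe harbors would add exemption branches), but the two-part trigger-then-label structure is the skeleton.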
This is a story we will return to as the Senate markup process produces clearer text.