Election Deepfakes and the Limits of Section 230
Three weeks until the U.S. election, and the predicted wave of generative-AI political deepfakes has been more of a wash than a tsunami. The most notable incident so far was the January New Hampshire robocall using a synthetic Biden voice to discourage Democratic primary voting, which produced a $6 million FCC enforcement action and state-level criminal charges against the consultant who arranged it. There have been a handful of fake-image incidents and a much larger number of obvious satire posts, but no decisive synthetic-media political event.
The relative quiet is partly the result of legal infrastructure that did not exist a year ago. As of this month, twenty-five states have enacted statutes specifically targeting election-related deepfakes; many were passed in the spring or summer of 2024. The interaction of these statutes with Section 230 of the Communications Decency Act is going to be one of the more important questions of 2025, even if it does not get decisively litigated before November 5.
The shape of the state statutes
State election-deepfake laws fall into roughly three patterns:
- Disclosure regimes. Require synthetic political media to be labeled as such within a defined window before an election. California AB 730 and Texas SB 751 are the prototypes.
- Outright prohibitions with exceptions. Prohibit distribution of materially deceptive synthetic political media within a window, with exceptions for satire, parody, journalistic reporting, and labeled content. Michigan, Minnesota, and Washington follow this pattern.
- Private right of action regimes. Authorize the depicted candidate to sue for injunctive relief and damages. Some states layer this on top of either of the prior structures.
The constitutional questions for any of these are familiar — content-based speech regulation, strict scrutiny — but courts so far have been receptive on the theory that materially deceptive election content is the kind of false speech the Supreme Court has indicated is constitutionally regulable in narrow circumstances. The Eleventh Circuit's denial of preliminary injunction in Kohls v. Bonta earlier this summer was a useful indicator, though that case is far from final.
Where Section 230 enters
Section 230 immunity is structured around a simple distinction: a platform is not liable for content provided by another information content provider, even if it moderates that content. Generative AI scrambles this distinction in two directions.
First, when a platform's own AI tools generate content in response to user prompts, the platform may itself be a content provider. The Ninth Circuit's recent decision in Anderson v. TikTok (3d Cir.) — though about algorithmic recommendation, not generation — leaned in this direction by treating algorithmic curation as first-party expressive activity for Section 230 purposes. If that approach is extended to generative outputs, platforms will lose Section 230 immunity for AI-generated content they themselves produce.
Second, even pure user-generated synthetic content puts platforms in a tighter spot. State election-deepfake laws often impose obligations on the platform — labeling, takedown, sometimes proactive screening. Where a state statute imposes a duty independent of who created the content (e.g., a label-or-block obligation), Section 230's "treated as a publisher" formulation may not bar the claim. The Ninth Circuit's HomeAway and Lemmon v. Snap lines support this: duties grounded in the platform's own conduct, rather than the third-party content, are not preempted.
Three specific questions to watch
- Is generative AI output "another information content provider's" content? The clean answer would be: it depends on whether the user's prompt or the model's output supplied the content's substance. That clean answer will not survive contact with reality. We expect the first wave of cases to land on something like a "material contribution" test, with platforms that integrate generative AI tools into their products bearing higher exposure than those that merely host third-party AI-generated content.
- Are state labeling obligations preempted? Probably not, on a clean read of Section 230(e)(3) — labeling is a duty independent of treating the platform as a publisher. But platforms will argue, and have argued, that any obligation tied to a platform's hosting choices is a publisher duty in disguise.
- Does the model developer have separate exposure? This is the more interesting question. A model that generates a deepfake on demand is doing something the platform did not directly do. State statutes generally do not address upstream model developers, leaving liability to common-law tort doctrines. We expect the first negligence-design lawsuit against a frontier model developer for failing to refuse a clearly unlawful request will land within the next year.
Practical posture
For a platform that does not generate AI content itself, the safer near-term posture is robust labeling and takedown infrastructure scoped to the relevant state statutes. The compliance cost is real but predictable, and it materially reduces the litigation surface for the post-election period when state attorneys general will be looking for cases.
For a platform that integrates generative AI tools, especially ones that produce political content, the calculus is harder. The defensive value of input-side filtering, output-side watermarking, and proactive refusal of clearly election-related synthetic-likeness requests is high. Section 230 may eventually apply, but planning around the assumption that it will not is the prudent call until at least one circuit weighs in clearly.
For frontier model developers, the planning horizon is longer but the exposure is real. Building, documenting, and red-teaming election-deepfake refusal behaviors is no longer optional. The doctrine has not yet caught up to the technology, but the gap is narrowing fast.