Are AI Agents Fiduciaries? An Early Look at the Emerging Doctrine

I flagged in my September 2025 product-liability post that an "AI as agent" doctrinal framework was getting more analytical traction than I would have predicted. Five months on, that traction has produced something more interesting: courts in three states have now reached for fiduciary-duty language to describe what a deployer of a consumer-facing AI agent owes its user. The doctrine is incoherent in important ways, but the trajectory is unmistakable.

This post traces the cases, identifies the unifying intuition, and explains why scope-of-engagement disclosures are about to become very important.

The cases

Three to focus on:

Mendez v. ConcierGen Health Inc. (Tex. Dist. Ct. 2025). A Texas trial court denied a motion to dismiss in a case involving an AI healthcare-navigation agent that allegedly referred a user to a non-network provider despite explicit instructions to maintain in-network status. The court's reasoning relied substantially on Texas common-law fiduciary doctrine, characterizing the relationship between user and consumer-facing agent product as creating a "limited fiduciary relationship" with respect to the specific scope of the user's engagement. The opinion is short — sixteen pages — but is now widely cited.

Park v. Helios Financial Software (S.D.N.Y. 2025). A federal district court in New York, applying New York law, addressed an AI personal-finance agent that allegedly made tax-relevant recommendations without disclosing material conflicts arising from the deployer's affiliate-marketing relationships. The court declined to find a fiduciary duty as a matter of law on the bare facts, but allowed the plaintiff's claim to proceed under a more traditional negligent-misrepresentation theory while expressly noting that "whether the relationship between user and AI agent is fiduciary in character is a question this Circuit has not addressed and would benefit from a developed factual record."

In re ListenAI Class Action Litigation (Cal. Super. Ct. 2025). A California superior court certified a class of consumers asserting fiduciary-duty claims against the deployer of an AI personal-assistant product. The class certification decision relied on the proposition that the agent was "held out" as acting in the user's interest, that users reasonably relied on it to do so, and that the deployer's design choices created consequences that fell on the user. The court did not reach the merits but allowed fiduciary-duty claims to proceed past the certification stage.

None of these is an appellate decision, and none creates binding precedent outside its own jurisdiction. But three different courts in three different states independently reaching for fiduciary framing is itself a doctrinal signal worth taking seriously.

The unifying intuition

What is producing the convergence? My read is that courts are responding to a structural feature of consumer-facing AI agents that traditional contract and tort doctrine does not handle well: the agent is held out as acting in the user's interest, users reasonably rely on it to do so, but the deployer designs the agent's behavior and has interests that may not align with the user's.

This is the classic structure of fiduciary relationships. A trustee is held out as acting in the beneficiary's interest, the beneficiary reasonably relies, and the trustee has interests that may not align. Fiduciary doctrine evolved to manage that asymmetry. Courts are reaching for it because nothing else fits as well.

The doctrinal question is which of the various fiduciary frameworks — true trust, agency, attorney-client analog, broker-dealer analog — is the right one. Probably none of them is exactly right, and the U.S. courts will end up developing a sui generis framework over the next several years. But the family resemblance is what is producing the convergence.

The deployer's defense

The natural deployer defense is that the user has accepted terms of service that disclaim any fiduciary relationship. This works some of the time but increasingly does not. Three reasons:

  1. Some of the cases are coming under state consumer-protection regimes (UDAP, etc.) where fiduciary characterizations are not waivable by terms of service.
  2. Where the agent is held out as a "personal assistant" or "financial advisor" or any other framing that maps onto an existing fiduciary category, courts can characterize the terms-of-service disclaimer as inconsistent with the marketing representations and decline to enforce it.
  3. Several state courts have been receptive to the argument that fiduciary duties arise from the relationship and the conduct, not from contract — meaning that the parties cannot disclaim them by mutual consent (as was the holding in ListenAI).

The defense that does work, more often than not, is precise scope-of-engagement framing. If the deployer represents that the agent is acting on the user's behalf only with respect to specific tasks, with explicit limitations, and with conflicts disclosed in non-buried language, courts have generally respected that scoping. The fiduciary duty (if any) attaches to the represented scope, not to the user's broader self-interest.

Why scope-of-engagement disclosures are about to matter

If the trajectory continues — and I expect it will — the operational compliance question for consumer-facing AI agent products is going to be: what is the scope of the agent's representational duty to the user, and how is that scope disclosed?

This is going to be the next compliance investment area for agent product teams. Specifically:

  1. Defining the specific tasks the agent is represented as performing on the user's behalf, with explicit limitations.
  2. Disclosing conflicts of interest in plain, non-buried language rather than deep in the terms of service.
  3. Documenting the technical mechanisms that enforce the represented scope.

None of this is simple. It is closer to the kind of disclosure obligation that a securities-licensed broker-dealer faces under Reg BI than to anything traditionally expected of a software product. The compliance build is going to take years.
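To make the operational shape concrete, here is a minimal sketch of what a machine-checkable scope-of-engagement record might look like. Everything in it (the `ScopeDisclosure` class, its field names, the `gate` function) is my hypothetical illustration, not a pattern drawn from the cases or from any statute or regulation discussed above.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ScopeDisclosure:
    """Hypothetical record of an agent's represented scope of engagement."""
    tasks: frozenset[str]         # tasks the agent is held out as performing
    limitations: tuple[str, ...]  # explicit exclusions shown to the user
    conflicts: tuple[str, ...]    # conflicts of interest disclosed up front

    def covers(self, task: str) -> bool:
        """Return True if a requested task falls within the disclosed scope."""
        return task in self.tasks

# Example: a personal-finance agent scoped to budgeting tasks only,
# with an affiliate-marketing conflict disclosed rather than buried.
disclosure = ScopeDisclosure(
    tasks=frozenset({"budget_review", "spending_summary"}),
    limitations=("no tax advice", "no investment recommendations"),
    conflicts=("deployer receives affiliate fees from partner lenders",),
)

def gate(requested_task: str) -> bool:
    """Decline any action outside the represented scope."""
    return disclosure.covers(requested_task)
```

The design point is that the represented scope lives in one auditable artifact that both the user-facing disclosure and the runtime permission check reference, so the scope the user sees and the scope the agent actually enforces cannot silently diverge.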

The federal layer

The federal Frontier AI Safety and Innovation Act, which we will cover at length when its final form lands, includes a section on AI agent disclosure obligations that may federalize part of this question. Whether the federal disclosure rules align with or compete with the emerging state-court doctrines will be a 2026-27 question. My current bet is that the federal rules will be a floor, and state common-law doctrines will continue to evolve above the floor.

Bottom line

The fiduciary-doctrine question is going to be one of the major common-law developments of the next two years. Deployers of consumer-facing AI agents should be planning their disclosure infrastructure now, with the working assumption that the doctrinal posture will tighten further. The terms-of-service disclaimer, while still useful, is not the answer. Precise scope-of-engagement framing, with conflicts disclosed and technical mechanisms documented, is the durable approach.