Colorado's AI Discrimination Law: The First Comprehensive State Statute
On May 17, 2024, Governor Jared Polis signed Colorado SB 24-205, the Colorado Artificial Intelligence Act. Calling it the first comprehensive state AI statute is fair: New York City's Local Law 144 covers only automated employment-decision tools, and Illinois's BIPA-adjacent AI rules are narrower still. Colorado is the first state to put a generally applicable, EU-style anti-discrimination regime for AI on the books.
It is also worth reading the Governor's signing letter. Polis signed the bill while simultaneously calling on the legislature to revise it before its February 1, 2026 effective date. He highlighted concerns about the compliance burden on small developers and the law's deviation from a "regulate the harm, not the technology" approach. Read literally, the Governor signed a law he wants substantially rewritten before it takes effect. That tells you something about where this is going.
What the Act covers
The Act regulates "high-risk artificial intelligence systems," which it defines as systems that, when deployed, make or are a substantial factor in making a "consequential decision." The latter term is doing a lot of work; it covers decisions affecting access to employment, education, financial services, government services, healthcare, housing, insurance, and legal services. The list will look familiar to anyone who has read EU AI Act Annex III. It is narrower in some places, broader in others.
The Act recognizes two categories of regulated party: developers (who build a high-risk system) and deployers (who use one to make consequential decisions about Colorado consumers). Notably, the law defines "consumer" as a Colorado individual acting in a personal capacity, which excludes most B2B contexts.
Developer obligations
Developers must, in summary:
- Use reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination.
- Provide deployers with a documentation package describing the system's intended uses, known limitations, training-data characteristics, and evaluations performed.
- Disclose to the Colorado Attorney General any known or reasonably foreseeable risks of algorithmic discrimination.
The "reasonable care" formulation creates a duty-of-care framework familiar from common-law tort doctrine. Practitioners will recognize that this opens the door to discovery battles over what was "reasonably foreseeable" at the time of deployment.
Deployer obligations
Deployers carry the heavier load:
- Implement a risk management policy addressing algorithmic discrimination.
- Complete an impact assessment before deploying a high-risk system, and annually thereafter, addressing the system's purpose, expected use, data, mitigations, and post-deployment monitoring.
- Notify consumers when a high-risk system is used to make a consequential decision about them.
- Provide affected consumers with an explanation of the decision and an opportunity to correct the underlying data and appeal.
- Notify the AG within 90 days of discovering that the system has caused algorithmic discrimination.
The notification, explanation, and appeal triad will look familiar from GDPR Article 22, which gives individuals the right to obtain human intervention in solely automated decisions. The execution risk for U.S. deployers will be the appeal infrastructure; most do not have one today.
Affirmative defense and AG enforcement
Two structural features matter most. First, enforcement is exclusively by the AG; there is no private right of action. Second, there is an affirmative defense for entities that adopt and adhere to a recognized risk management framework — NIST AI RMF and ISO/IEC 42001 are the two most likely candidates, though the AG can recognize others by rule.
The AG-only enforcement model deliberately mirrors the Colorado Privacy Act and is a softer structure than, say, BIPA's private-right-of-action regime. Expect aggressive industry advocacy to keep it that way during the rewrite that everyone now expects.
What's likely to change before February 2026
Predictions, which will probably age poorly:
- Small-business carve-outs. The Polis signing letter explicitly flagged this. Expect a headcount or revenue threshold for full compliance, perhaps with a lighter regime for smaller deployers.
- Tighter "consequential decision" definition. The current list is broad enough that some applications (insurance underwriting, financial services credit decisions) overlap with existing federal regimes in awkward ways.
- Clarification on the "substantial factor" standard. When is an AI system a substantial factor in a decision? The current text leaves this to be litigated, and that is no fun for anyone.
- Possibly a sunset or deference clause that would yield to a federal AI anti-discrimination law if one materializes.
Practical takeaway
If your client deploys algorithmic decision systems for Colorado consumers in any of the listed categories, the planning horizon is now February 2026. That is closer than it sounds, especially given the documentation and impact-assessment work the law contemplates. We suggest two near-term actions: (1) inventory which deployments would qualify as "high-risk" under the current text, and (2) start mapping NIST AI RMF or ISO/IEC 42001 to your governance program if you are not there already. The affirmative defense will be the most valuable tool in this statute, and it is the work you can start now without waiting for the inevitable rewrite.