Colorado AI Legislation
Colorado is leading the nation in AI regulation with a sweeping consumer protection law and four additional sector-specific bills targeting health care, psychotherapy, child safety, and deepfake provenance.
This guide covers the full landscape of Colorado AI legislation: the landmark SB 24-205 (the “Big Bill”), along with four additional bills introduced in the 2026 session. Statuses are current as of March 16, 2026.
2026 Bill Status at a Glance
A snapshot of where each AI bill stands in the legislative process, current as of March 16, 2026. Each bill is covered in a detailed section below.
| Bill | Focus | Status | Next Step |
|---|---|---|---|
| SB 24-205 | General AI / Algorithmic Discrimination | Law | Takes effect June 30, 2026. “Repeal and replace” draft expected soon. |
| HB 26-1139 | AI in Health Care & Insurance | Passed House | Senate committee assignment. |
| HB 26-1195 | AI in Psychotherapy | House Floor | Full House vote expected this week. |
| HB 26-1263 | Chatbot Safety for Minors | Committee | Awaiting committee vote (House Business Affairs & Labor hearing held March 12). |
| SB 26-1786 | AI Provenance Data / Watermarking | In House | House committee assignment. |
The Colorado AI Act (SB 24-205)
Signed into law May 17, 2024, the Colorado AI Act imposes a duty of reasonable care on developers and deployers of high-risk AI systems, requiring them to protect consumers from algorithmic discrimination in consequential decisions. Originally set to take effect February 1, 2026, the deadline was pushed to June 30, 2026 via SB 25B-004. A working group continues to meet weekly on potential “repeal and replace” language, but no replacement bill has been filed.
High-Risk AI System
Any AI system that, when deployed, makes or is a substantial factor in making a “consequential decision.” To qualify as a substantial factor, the AI-generated component must be capable of altering the outcome of the decision.
Consequential Decision
A decision with a material legal or similarly significant effect on a consumer’s access to education, employment, financial or lending services, essential government services, healthcare, housing, insurance, or legal services.
Algorithmic Discrimination
The use of an AI system that results in unlawful differential treatment or disparate impact that disfavors an individual or group based on actual or perceived protected characteristics, including age, race, sex, disability, religion, ethnicity, genetic information, veteran status, and other protected classes.
Developer Obligations
Developers must provide deployers with comprehensive documentation (foreseeable uses, training data types, known limitations, mitigation methods), make public disclosures on their website, and notify the Attorney General within 90 days of discovering algorithmic discrimination.
Deployer Obligations
Deployers must implement a risk management program, conduct recurring impact assessments, notify consumers of adverse AI-driven decisions (with reasons and appeal options), and report incidents to the Attorney General within 90 days. A small deployer exemption exists for organizations with fewer than 50 employees under certain conditions.
Enforcement and Penalties
Enforced exclusively by the Colorado Attorney General. Violations are treated as unfair trade practices under the Colorado Consumer Protection Act, with fines up to $20,000 per violation. Compliance with NIST AI RMF or ISO/IEC 42001 creates a rebuttable presumption of reasonable care. There is no private right of action.
Universal AI Disclosure
Both developers and deployers of any consumer-facing AI system (not limited to high-risk systems) must disclose to users that they are interacting with an AI system, unless the interaction would be obvious to a reasonable person.
Eight Consequential Decision Categories
The Act covers AI systems involved in decisions that materially impact consumers in the following domains. These categories define what makes an AI system “high-risk.”
- Education: enrollment, academic opportunity, and educational access decisions
- Employment: hiring, termination, promotion, and workforce management decisions
- Financial or lending services: loan approvals, credit decisions, and financial service access
- Essential government services: access to essential public services and government programs
- Health care: care access, treatment, and clinical decision-making
- Housing: rental approvals, homeownership, and housing access decisions
- Insurance: coverage determinations, underwriting, and claims decisions
- Legal services: access to legal representation and judicial processes
AI in Health Care & Insurance (HB 26-1139)
This bill targets the use of AI in medical coverage decisions and health care companion chatbots. It passed the House Health & Human Services Committee on March 4, 2026, on a party-line vote (8-5), with Republicans voicing concerns about technical implementation burdens. After passing its third reading in the House on March 13, the bill now moves to the Senate.
Ban on AI-Only Coverage Denials
Insurance companies are prohibited from using AI as the sole basis for denying medical coverage. Any AI-suggested denial or delay must be reviewed and approved by a licensed clinician or physician before being communicated to the patient.
Individual Circumstance Requirement
AI systems used in health care coverage decisions must consider the patient’s individual medical or clinical history, not just aggregate group data. AI may still be used to expedite approvals, but the patient’s specific circumstances must inform the decision.
30-Minute AI Notification Rule
Health companion chatbots must provide a “clear and conspicuous notice” to users at least every 30 minutes reminding them that they are interacting with an AI system, not a licensed professional. This was added as an amendment during the committee process.
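The bill leaves implementation to operators. One way to satisfy the cadence requirement is a per-session timer that emits the notice at session start and whenever 30 minutes have elapsed since the last notice. The sketch below is illustrative only; the class name, notice wording, and injectable clock are assumptions, not anything specified in HB 26-1139.

```python
import time

DISCLOSURE = ("Reminder: you are chatting with an AI system, "
              "not a licensed health care professional.")
INTERVAL_SECONDS = 30 * 60  # the bill's 30-minute cadence

class DisclosureTimer:
    """Tracks when a chat session last showed the AI disclosure notice."""

    def __init__(self, clock=time.monotonic):
        # The clock is injectable so the cadence can be unit-tested
        # without waiting 30 real minutes.
        self._clock = clock
        self._last_shown = None

    def maybe_disclose(self):
        """Return the notice if due (session start, or 30 min elapsed)."""
        now = self._clock()
        if self._last_shown is None or now - self._last_shown >= INTERVAL_SECONDS:
            self._last_shown = now
            return DISCLOSURE
        return None
```

A chat loop would call `maybe_disclose()` before rendering each AI response and prepend the returned notice when it is non-`None`.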
Chatbot Safety Protocols
AI systems used in a health care context are prohibited from implying they are a human mental health provider or licensed to practice. They must also implement protocols to address expressions of suicidal ideation or self-harm, such as referring the individual to a crisis hotline.
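The bill mandates a protocol but not a detection method. A minimal sketch of the referral step is below; the trigger phrases and function names are hypothetical, and a production system would rely on a clinically validated classifier rather than a keyword list. The 988 Suicide & Crisis Lifeline is the real US crisis line.

```python
import re

# Hypothetical, deliberately non-exhaustive trigger patterns.
CRISIS_PATTERNS = [
    r"\bsuicid",          # suicide, suicidal
    r"\bkill myself\b",
    r"\bself[- ]harm",
    r"\bend my life\b",
]

CRISIS_REFERRAL = (
    "If you are thinking about suicide or self-harm, please call or text 988 "
    "(the Suicide & Crisis Lifeline) to reach a trained counselor."
)

def screen_message(text):
    """Return a crisis referral if the message matches a trigger pattern."""
    lowered = text.lower()
    for pattern in CRISIS_PATTERNS:
        if re.search(pattern, lowered):
            return CRISIS_REFERRAL
    return None
```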
AI in Psychotherapy (HB 26-1195)
The least contested AI bill this session, HB 26-1195 passed committee unanimously (13-0) on March 4, 2026, and is expected to clear the full House this week. It draws a bright line between AI as an administrative tool and AI as a therapist, prohibiting AI from engaging in direct therapeutic communication without a human professional in the loop.
Prohibition on Direct Therapeutic AI Interaction
Licensed psychologists, counselors, and social workers are prohibited from using AI to directly interact with clients for therapeutic communication or to detect emotions without a human professional maintaining active oversight. AI cannot “speak” as a therapist or generate treatment plans autonomously.
Administrative AI Is Permitted
Regulated professionals may use AI for administrative support or supplementary tasks, provided they maintain full responsibility for all interactions, outputs, and data use associated with the AI system. The bill recognizes that AI can be a useful tool when the human professional retains control.
Informed Consent for AI Recording
If a client’s therapeutic session is recorded or transcribed through an AI system, the regulated professional must obtain written, informed consent from the client (or the client’s representative) before the recording begins.
Licensure Enforcement
The bill makes it unlawful for any individual, corporation, or entity to provide, advertise, or offer psychotherapy services to the public in Colorado unless those services are delivered by a licensed, regulated professional. This effectively prevents standalone AI therapy apps from operating in the state without a human therapist.
Chatbot Safety for Minors (HB 26-1263)
Introduced in response to reports of children being sexually groomed by AI chatbots, this bipartisan bill targets conversational AI platforms (such as ChatGPT and Character.ai) used by minors. Sponsored by Reps. Sean Camacho (D-Denver) and Javier Mabrey (D-Denver) along with Sens. John Carson (R-Highlands Ranch) and Iman Jodeh (D-Aurora), it was heard by the House Business Affairs & Labor Committee on March 12, 2026. If passed, provisions would take effect January 1, 2027.
Emotional Dependence Prohibition
AI chatbot operators must take reasonable measures to prevent a chatbot, when interacting with a child, from generating statements that stimulate emotional dependence, such as romantic role-playing or simulated emotional intimacy.
Sexually Explicit Content Safeguards
Operators must institute “reasonable measures” to prevent AI chatbots from producing sexually explicit content (images or text) when requested by a minor user.
Engagement Reward Ban
Platforms are prohibited from offering rewards or gamified features designed to increase a child’s engagement with a chatbot. This targets addictive design patterns that keep minors returning to AI conversations.
Suicide & Self-Harm Protocols
Operators must implement mandatory protocols for handling prompts related to suicide or self-harm, including referring the individual to a crisis hotline. Annual reporting to the Colorado Attorney General on the operation of these protocols would be required.
AI Disclosure & Parental Controls
Operators must “clearly and conspicuously” notify minors that they are interacting with AI rather than a real person. The bill also requires platforms to provide parental control features if their AI chatbots are accessible to children.
AI Provenance Data & Watermarking (SB 26-1786)
SB 26-1786 is reported as an AI provenance and watermarking bill that passed the Colorado Senate and moved to the House. It would require that AI-generated or substantially altered video, images, and audio carry embedded provenance metadata. Note: this bill number could not be independently verified on the Colorado General Assembly website. Check leg.colorado.gov for the latest status.
Mandatory Provenance Data
Generative AI systems that create or substantially alter video, images, or audio must include provenance data (embedded metadata or digital watermarks) that identifies the content as AI-generated or AI-modified. This enables downstream verification of content authenticity.
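The bill does not prescribe a metadata format; in practice the main industry standard for embedded provenance is C2PA. As a purely conceptual sketch (function and field names are assumptions, and a sidecar JSON record stands in for real embedded metadata), a provenance record typically binds a content hash to the generating tool and action:

```python
import hashlib
import json
from datetime import datetime, timezone

def make_provenance_record(content: bytes, generator: str, action: str) -> str:
    """Build a minimal provenance record (JSON) for a media asset.

    `action` might be "ai-generated" or "ai-modified". A real deployment
    would use a standard such as C2PA and embed the signed record in the
    file itself rather than emitting a sidecar.
    """
    record = {
        # Hash binds the record to these exact bytes, enabling
        # downstream verification of content authenticity.
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator,
        "action": action,
        "created": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(record, indent=2)
```

A verifier would recompute the hash of the received media and compare it against `content_sha256`; any mismatch means the content was altered after the record was created.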
Scope of Coverage
The bill covers three media types: video, images, and audio. It applies to content that is either fully generated by AI or substantially altered by AI tools, addressing the proliferation of realistic deepfakes used for misinformation, fraud, and non-consensual imagery.
Key Dates Across All Bills
A consolidated timeline of the major milestones for Colorado’s AI legislation, drawn from the dates cited throughout this guide:

- May 17, 2024: SB 24-205 signed into law
- February 1, 2026: Original SB 24-205 effective date (delayed by SB 25B-004)
- March 4, 2026: HB 26-1139 and HB 26-1195 pass their House committees
- March 12, 2026: HB 26-1263 heard in House Business Affairs & Labor
- March 13, 2026: HB 26-1139 passes third reading in the House
- May 13, 2026: 2026 legislative session ends
- June 30, 2026: SB 24-205 takes effect (current date)
- January 1, 2027: HB 26-1263 provisions take effect, if passed
Frequently Asked Questions
When does the Colorado AI Act take effect?
The Act is currently set to take effect June 30, 2026, after being delayed from February 1, 2026, by SB 25B-004. A legislative working group is developing “repeal and replace” language that could alter the law before the session ends on May 13, 2026. If no replacement passes, the original version becomes enforceable on June 30.
What is the 30-minute notification rule in HB 26-1139?
Health companion chatbots must remind users at least every 30 minutes that they are interacting with AI, not a licensed professional. This “clear and conspicuous notice” requirement was added as an amendment during the committee process and has been a point of contention, with some industry groups arguing it is technically cumbersome to implement.
Can therapists still use AI under HB 26-1195?
Yes, but only for administrative support and supplementary tasks. Therapists can use AI for scheduling, note transcription (with written consent), and other non-therapeutic functions. What they cannot do is let AI directly communicate with clients in a therapeutic capacity or generate autonomous diagnoses or treatment plans.
Which platforms does the chatbot safety bill (HB 26-1263) cover?
The bill applies to any conversational AI service operating in Colorado that is accessible to minors, not just those specifically targeting children. If a platform like ChatGPT or Character.ai can be used by someone under 18, the protections would apply. Operators would need to implement parental controls, content safeguards, and the required disclosure that the user is speaking with AI.
What does the provenance bill (SB 26-1786) require?
The bill requires that AI-generated or AI-altered video, images, and audio include embedded provenance data (essentially a digital watermark) identifying the content as synthetic. This is intended to combat deepfakes and misinformation by enabling recipients to verify whether content is authentic or AI-generated.
Can consumers sue over violations?
SB 24-205 does not create a private right of action; enforcement is handled exclusively by the Colorado Attorney General. The newer bills have their own enforcement mechanisms, but Colorado has generally favored AG-based enforcement for AI regulation rather than opening the door to private litigation.
How do the five bills fit together?
SB 24-205 is the “umbrella” law covering high-risk AI systems across all consequential decision categories. The sector-specific bills (HB 26-1139 for health care, HB 26-1195 for psychotherapy, HB 26-1263 for minors) add targeted requirements on top of the general framework. An AI system used in health care, for example, could be subject to both SB 24-205 and HB 26-1139 simultaneously. SB 26-1786 (provenance) is distinct, addressing content authenticity rather than decision-making.
Which compliance frameworks does the Colorado AI Act recognize?
The Act explicitly references the NIST AI Risk Management Framework as a benchmark. ISO/IEC 42001 is recognized as a substantially similar framework. Compliance with either creates a rebuttable presumption of reasonable care and serves as the basis for an affirmative defense.
This guide is for informational purposes only.
It does not constitute legal advice. Consult a qualified attorney for guidance specific to your organization. Monitor the 2026 legislative session for changes before key deadlines.