Colorado AI Legislation: A Complete Guide to the 2026 Landscape

A comprehensive guide to Colorado’s 2026 AI regulatory landscape.

Colorado is leading the nation in AI regulation with a sweeping consumer protection law and four additional sector-specific bills targeting health care, psychotherapy, child safety, and deepfake provenance.

5 active bills · SB 24-205 effective June 30, 2026 · Session ends May 13, 2026 · 4 sectors covered
Colorado’s 2026 legislative session has become a proving ground for AI regulation in the United States, with lawmakers tackling everything from algorithmic discrimination to therapy chatbots, insurance denials, child safety, and deepfake watermarking.

This guide covers the full landscape of Colorado AI legislation: the landmark SB 24-205 (the “Big Bill”), along with four additional bills introduced in the 2026 session. Statuses are current as of March 16, 2026.

Legislative Tracker

2026 Bill Status at a Glance

A snapshot of where each AI bill stands in the legislative process as of March 16, 2026. Each bill is covered in detail in its own section below.

Bill       | Focus                                   | Status                  | Next Step
SB 24-205  | General AI / Algorithmic Discrimination | Law; goes live June 30  | "Repeal and replace" draft expected soon
HB 26-1139 | AI in Health Care & Insurance           | Passed House            | Senate committee assignment
HB 26-1195 | AI in Psychotherapy                     | House floor             | Full House vote expected this week
HB 26-1263 | Chatbot Safety for Minors               | In committee            | House Business Affairs & Labor hearing held March 12
SB 26-1786 | AI Provenance Data / Watermarking       | In House                | House committee assignment
SB 24-205 · The “Big Bill”

The Colorado AI Act

Signed into law May 17, 2024, the Colorado AI Act imposes a duty of reasonable care on developers and deployers of high-risk AI systems, requiring them to protect consumers from algorithmic discrimination in consequential decisions. Originally set to take effect February 1, 2026, the deadline was pushed to June 30, 2026 via SB 25B-004. A working group continues to meet weekly on potential “repeal and replace” language, but no replacement bill has been filed.

Key Definitions

High-Risk AI System

Any AI system that, when deployed, makes or is a substantial factor in making a “consequential decision.” To qualify as a substantial factor, the AI-generated component must be capable of altering the outcome of the decision.

Consequential Decision

A decision with a material legal or similarly significant effect on a consumer’s access to education, employment, financial or lending services, essential government services, healthcare, housing, insurance, or legal services.

Algorithmic Discrimination

The use of an AI system that results in unlawful differential treatment or disparate impact that disfavors an individual or group based on actual or perceived protected characteristics, including age, race, sex, disability, religion, ethnicity, genetic information, veteran status, and other protected classes.

Developer Obligations

Developers must provide deployers with comprehensive documentation (foreseeable uses, training data types, known limitations, mitigation methods), make public disclosures on their website, and notify the Attorney General within 90 days of discovering algorithmic discrimination.

Deployer Obligations

Deployers must implement a risk management program, conduct recurring impact assessments, notify consumers of adverse AI-driven decisions (with reasons and appeal options), and report incidents to the Attorney General within 90 days. A small deployer exemption exists for organizations with fewer than 50 employees under certain conditions.

Enforcement and Penalties

Enforced exclusively by the Colorado Attorney General. Violations are treated as unfair trade practices under the Colorado Consumer Protection Act, with fines up to $20,000 per violation. Compliance with NIST AI RMF or ISO/IEC 42001 creates a rebuttable presumption of reasonable care. There is no private right of action.

Universal AI Disclosure

Both developers and deployers of any consumer-facing AI system (not limited to high-risk systems) must disclose to users that they are interacting with an AI system, unless the interaction would be obvious to a reasonable person.

SB 24-205 · Protected Domains

Eight Consequential Decision Categories

The Act covers AI systems involved in decisions that materially impact consumers in the following domains. These categories define what makes an AI system “high-risk.”

Education

Enrollment, academic opportunity, and educational access decisions

Employment

Hiring, termination, promotion, and workforce management decisions

Finance & Lending

Loan approvals, credit decisions, and financial service access

Government Services

Access to essential public services and government programs

Healthcare

Care access, treatment, and clinical decision-making

Housing

Rental approvals, homeownership, and housing access decisions

Insurance

Coverage determinations, underwriting, and claims decisions

Legal Services

Access to legal representation and judicial processes

HB 26-1139 · Passed House

AI in Health Care & Insurance

This bill targets the use of AI in medical coverage decisions and health care companion chatbots. It passed the House Health & Human Services Committee on March 4, 2026, on a party-line vote (8-5), with Republicans voicing concerns about technical implementation burdens. After passing its third reading in the House on March 13, the bill now moves to the Senate.

Key Provisions

Ban on AI-Only Coverage Denials

Insurance companies are prohibited from using AI as the sole basis for denying medical coverage. Any AI-suggested denial or delay must be reviewed and approved by a licensed clinician or physician before being communicated to the patient.

Individual Circumstance Requirement

AI systems used in health care coverage decisions must consider the patient’s individual medical or clinical history, not just aggregate group data. AI may still be used to expedite approvals, but the patient’s specific circumstances must inform the decision.

30-Minute AI Notification Rule

Health companion chatbots must provide a “clear and conspicuous notice” to users at least every 30 minutes reminding them that they are interacting with an AI system, not a licensed professional. This was added as an amendment during the committee process.
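The bill prescribes the notice, not any particular implementation. As a purely illustrative sketch (the class, function names, and notice wording below are hypothetical, not statutory language), a deployer could enforce the 30-minute cadence with a simple elapsed-time check on each conversational turn:

```python
import time

NOTICE_INTERVAL_SECONDS = 30 * 60  # the bill's proposed 30-minute cadence

AI_NOTICE = (
    "Reminder: you are interacting with an AI system, "
    "not a licensed health care professional."
)

class NoticeScheduler:
    """Tracks elapsed conversation time and decides when the
    'clear and conspicuous notice' is due again (illustrative only)."""

    def __init__(self, clock=time.monotonic):
        self._clock = clock          # injectable clock, so the cadence is testable
        self._last_notice = None     # no notice shown yet this session

    def messages_for_turn(self, reply: str) -> list[str]:
        """Return the chatbot's reply, prepended with the AI notice
        whenever 30 minutes have passed since the last one."""
        now = self._clock()
        out = []
        if self._last_notice is None or now - self._last_notice >= NOTICE_INTERVAL_SECONDS:
            out.append(AI_NOTICE)
            self._last_notice = now
        out.append(reply)
        return out
```

Injecting the clock keeps the interval logic verifiable without waiting real time; in production the scheduler would hook into whatever session state the platform already maintains.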

Chatbot Safety Protocols

AI systems used in a health care context are prohibited from implying they are a human mental health provider or licensed to practice. They must also implement protocols to address expressions of suicidal ideation or self-harm, such as referring the individual to a crisis hotline.

HB 26-1195 · House Floor Vote

AI in Psychotherapy

The most popular AI bill this session, HB 26-1195 passed committee unanimously (13-0) on March 4, 2026, and is expected to clear the full House this week. It draws a bright line between AI as an administrative tool and AI as a therapist, prohibiting AI from engaging in direct therapeutic communication without a human professional in the loop.

Key Provisions

Prohibition on Direct Therapeutic AI Interaction

Licensed psychologists, counselors, and social workers are prohibited from using AI to directly interact with clients for therapeutic communication or to detect emotions without a human professional maintaining active oversight. AI cannot “speak” as a therapist or generate treatment plans autonomously.

Administrative AI Is Permitted

Regulated professionals may use AI for administrative support or supplementary tasks, provided they maintain full responsibility for all interactions, outputs, and data use associated with the AI system. The bill recognizes that AI can be a useful tool when the human professional retains control.

Informed Consent for AI Recording

If a client’s therapeutic session is recorded or transcribed through an AI system, the regulated professional must obtain written, informed consent from the client (or the client’s representative) before the recording begins.

Licensure Enforcement

The bill makes it unlawful for any individual, corporation, or entity to provide, advertise, or offer psychotherapy services to the public in Colorado unless those services are delivered by a licensed, regulated professional. This effectively prevents standalone AI therapy apps from operating in the state without a human therapist.

HB 26-1263 · In Committee

Chatbot Safety for Minors

Introduced in response to reports of children being sexually groomed by AI chatbots, this bipartisan bill targets conversational AI platforms (such as ChatGPT and Character.ai) used by minors. Sponsored by Reps. Sean Camacho (D-Denver) and Javier Mabrey (D-Denver) along with Sens. John Carson (R-Highlands Ranch) and Iman Jodeh (D-Aurora), it was heard by the House Business Affairs & Labor Committee on March 12, 2026. If passed, provisions would take effect January 1, 2027.

Key Provisions

Emotional Dependence Prohibition

AI chatbot operators must take reasonable measures to prevent a chatbot, when interacting with a child, from generating statements that stimulate emotional dependence, such as romantic role-playing or simulated emotional intimacy.

Sexually Explicit Content Safeguards

Operators must institute “reasonable measures” to prevent AI chatbots from producing sexually explicit content (images or text) when requested by a minor user.

Engagement Reward Ban

Platforms are prohibited from offering rewards or gamified features designed to increase a child’s engagement with a chatbot. This targets addictive design patterns that keep minors returning to AI conversations.

Suicide & Self-Harm Protocols

Operators must implement mandatory protocols for handling prompts related to suicide or self-harm, including referring the individual to a crisis hotline. Annual reporting to the Colorado Attorney General on the operation of these protocols would be required.

AI Disclosure & Parental Controls

Operators must “clearly and conspicuously” notify minors that they are interacting with AI rather than a real person. The bill also requires platforms to provide parental control features if their AI chatbots are accessible to children.

SB 26-1786 · In House · Bill number unverified

AI Provenance Data & Watermarking

Reported as an AI provenance and watermarking bill that passed the Colorado Senate and moved to the House. The bill would require that AI-generated or substantially altered video, images, and audio carry embedded provenance metadata. Note: this bill number could not be independently verified on the Colorado General Assembly website. Check leg.colorado.gov for the latest status.

Key Provisions

Mandatory Provenance Data

Generative AI systems that create or substantially alter video, images, or audio must include provenance data (embedded metadata or digital watermarks) that identifies the content as AI-generated or AI-modified. This enables downstream verification of content authenticity.
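The bill does not prescribe a metadata format; industry efforts such as the C2PA specification define manifests for exactly this purpose. As a toy illustration only (field names and functions here are invented for this guide, not drawn from the bill or any standard), a minimal provenance record can bind a disclosure to specific media bytes via a hash, enabling the downstream verification the bill envisions:

```python
import hashlib
from dataclasses import dataclass, asdict

@dataclass
class ProvenanceRecord:
    """Toy provenance manifest; field names are illustrative."""
    generator: str       # tool that created or altered the media
    media_type: str      # "video" | "image" | "audio" (the bill's three types)
    ai_generated: bool   # fully synthetic vs. substantially altered
    content_sha256: str  # binds the record to these exact bytes

def attach_provenance(media: bytes, generator: str, media_type: str,
                      ai_generated: bool) -> dict:
    """Build a record whose hash ties it to the media bytes, ready to be
    embedded as metadata or shipped as a sidecar file."""
    record = ProvenanceRecord(
        generator=generator,
        media_type=media_type,
        ai_generated=ai_generated,
        content_sha256=hashlib.sha256(media).hexdigest(),
    )
    return asdict(record)

def verify_provenance(media: bytes, record: dict) -> bool:
    """Downstream check: does the embedded record match these bytes?"""
    return record.get("content_sha256") == hashlib.sha256(media).hexdigest()
```

A real implementation would use cryptographic signatures rather than a bare hash (a bare hash can be recomputed by anyone who strips and replaces the record), which is the approach the C2PA ecosystem takes.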

Scope of Coverage

The bill covers three media types: video, images, and audio. It applies to content that is either fully generated by AI or substantially altered by AI tools, addressing the proliferation of realistic deepfakes used for misinformation, fraud, and non-consensual imagery.

Legislative Timeline

Key Dates Across All Bills

A consolidated timeline tracking the major milestones for Colorado’s AI legislation, from the original SB 24-205 signing through the 2026 session and beyond.

May 17, 2024
Governor Polis Signs SB 24-205
Colorado becomes one of the first U.S. states to enact comprehensive AI consumer protection legislation, originally set to take effect February 1, 2026.
August 28, 2025
SB 25B-004 Signed: Effective Date Delayed
After a special session that produced four competing bills (ranging from rewrite to repeal), only SB 25B-004 passes, pushing the effective date to June 30, 2026.
January 2026
2026 Session Opens
The regular legislative session begins with an unprecedented focus on AI regulation. Multiple sector-specific bills are introduced alongside the ongoing SB 24-205 working group.
February 25, 2026
HB 26-1263 Introduced (Chatbot Safety for Minors)
Bipartisan bill targeting AI chatbot platforms used by children, proposing bans on emotional dependence features, explicit content, and engagement rewards for minors.
March 3, 2026
SB 26-1786 Passes Senate (AI Provenance)
The AI watermarking bill passes the full Senate and moves to the House for committee assignment.
March 4, 2026
HB 26-1195 and HB 26-1139 Pass Committee
The psychotherapy AI bill passes unanimously (13-0) while the health insurance AI bill passes on a party-line vote (8-5). Both move to the House floor.
March 13, 2026
HB 26-1139 Passes House
The AI in health care bill passes its third reading in the House with amendments (including the 30-minute notification rule) and moves to the Senate.
March 16, 2026
HB 26-1195 House Floor Vote Expected
The psychotherapy AI bill is on the House calendar for a full floor vote today. Given its unanimous committee support, it is widely expected to pass.
Late March 2026
SB 24-205 “Repeal and Replace” Language Expected
The legislative working group is expected to release consensus language for a potential rewrite of the original AI Act. The primary sticking point remains joint liability allocation between developers and deployers.
May 13, 2026
Legislative Session Ends
Deadline for passing any amendments to SB 24-205. If no replacement bill passes by this date, the original law takes effect as written on June 30.
June 30, 2026
SB 24-205 Takes Effect
Barring further legislative action, the Colorado AI Act becomes enforceable and the Attorney General begins enforcement.
January 1, 2027
HB 26-1263 Proposed Effective Date
If passed, the chatbot safety for minors bill would take effect at the start of 2027.
Common Questions

Frequently Asked Questions

When does SB 24-205 take effect, and could it still change?

The Act is currently set to take effect June 30, 2026, after being delayed from February 1, 2026, by SB 25B-004. A legislative working group is developing “repeal and replace” language that could alter the law before the session ends on May 13, 2026. If no replacement passes, the original version becomes enforceable on June 30.

What is the “30-minute rule” in HB 26-1139?

Health companion chatbots must remind users at least every 30 minutes that they are interacting with AI, not a licensed professional. This “clear and conspicuous notice” requirement was added as an amendment during the committee process and has been a point of contention, with some industry groups arguing it is technically cumbersome to implement.

Can licensed therapists still use AI tools under HB 26-1195?

Yes, but only for administrative support and supplementary tasks. Therapists can use AI for scheduling, note transcription (with written consent), and other non-therapeutic functions. What they cannot do is let AI directly communicate with clients in a therapeutic capacity or generate autonomous diagnoses or treatment plans.

Does HB 26-1263 apply to all AI platforms or just those targeting children?

The bill applies to any conversational AI service operating in Colorado that is accessible to minors, not just those specifically targeting children. If a platform like ChatGPT or Character.ai can be used by someone under 18, the protections would apply. Operators would need to implement parental controls, content safeguards, and the required disclosure that the user is speaking with AI.

What does SB 26-1786 require for AI-generated content?

The bill requires that AI-generated or AI-altered video, images, and audio include embedded provenance data (essentially a digital watermark) identifying the content as synthetic. This is intended to combat deepfakes and misinformation by enabling recipients to verify whether content is authentic or AI-generated.

Is there a private right of action under any of these bills?

SB 24-205 does not create a private right of action; enforcement is handled exclusively by the Colorado Attorney General. The newer bills have their own enforcement mechanisms, but Colorado has generally favored AG-based enforcement for AI regulation rather than opening the door to private litigation.

How do these bills interact with each other?

SB 24-205 is the “umbrella” law covering high-risk AI systems across all consequential decision categories. The sector-specific bills (HB 26-1139 for health care, HB 26-1195 for psychotherapy, HB 26-1263 for minors) add targeted requirements on top of the general framework. An AI system used in health care, for example, could be subject to both SB 24-205 and HB 26-1139 simultaneously. SB 26-1786 (provenance) is distinct, addressing content authenticity rather than decision-making.

What frameworks satisfy SB 24-205’s “reasonable care” standard?

The Act explicitly references the NIST AI Risk Management Framework as a benchmark. ISO/IEC 42001 is recognized as a substantially similar framework. Compliance with either creates a rebuttable presumption of reasonable care and serves as the basis for an affirmative defense.

This guide is for informational purposes only.

It does not constitute legal advice. Consult a qualified attorney for guidance specific to your organization. Monitor the 2026 legislative session for changes before key deadlines.