Colorado AI Act Compliance
The first comprehensive US state AI law takes effect June 30, 2026. Understand what developers and deployers must do to prevent algorithmic discrimination.
What Is the Colorado AI Act?
The Colorado AI Act (SB 24-205) is the first comprehensive AI legislation enacted by a US state. Signed into law in May 2024, it takes effect June 30, 2026, and establishes requirements for both developers and deployers of high-risk AI systems.
The Act focuses specifically on preventing algorithmic discrimination, defined as differential treatment or impact that disfavors individuals based on protected characteristics including race, color, ethnicity, sex, religion, age, disability, sexual orientation, and other classifications.
Unlike the EU AI Act's broad scope, Colorado's law targets AI systems used to make or substantially contribute to "consequential decisions" affecting Coloradans in areas like employment, education, financial services, healthcare, housing, insurance, and legal services.
Organizations with customers, employees, or operations in Colorado should evaluate whether their AI systems fall under the Act's jurisdiction and begin preparing compliance measures.
Key Definitions
High-Risk AI System
Any AI system that makes or is a substantial factor in making a consequential decision. The risk classification is based on the decision being made, not the AI tool itself.
Consequential Decision
A decision that has a material legal or similarly significant effect on an individual's access to or cost of: education, employment, financial services, government services, healthcare, housing, insurance, or legal services.
Algorithmic Discrimination
Any condition in which the use of an AI system results in an unlawful differential treatment or impact that disfavors an individual or group based on their actual or perceived protected characteristics.
Developer
A person doing business in Colorado that develops or intentionally and substantially modifies an AI system. Developers have documentation and disclosure obligations.
Deployer
A person doing business in Colorado that deploys a high-risk AI system. Deployers must use reasonable care to protect consumers from known or foreseeable risks of algorithmic discrimination.
Requirements for Deployers
Duty of Reasonable Care
What It Requires
Deployers must use reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination. This is the core obligation of the Colorado AI Act.
How PolicyGuard Helps
Documented AI policies, employee training, and acknowledgment tracking demonstrate reasonable care in governing AI usage.
Risk Management Program
What It Requires
Implement a risk management policy and program to govern high-risk AI system deployment. The policy must specify processes for identifying and mitigating discrimination risks.
How PolicyGuard Helps
Our policy templates include risk management frameworks aligned with Colorado AI Act requirements.
Impact Assessments
What It Requires
Complete impact assessments for high-risk AI systems annually and within 90 days of any intentional and substantial modification. Assessments must evaluate discrimination risks.
How PolicyGuard Helps
PolicyGuard provides documentation infrastructure for tracking assessments and maintaining audit trails.
Consumer Disclosure
What It Requires
Provide consumers with a statement that AI is being used to make consequential decisions. Include contact information for questions and describe the purpose of the AI system.
How PolicyGuard Helps
Policy templates include disclosure requirements and language for consumer communications.
Attorney General Notification
What It Requires
If you discover algorithmic discrimination has occurred, notify the Colorado Attorney General within 90 days of discovery.
How PolicyGuard Helps
Audit trails and incident documentation support investigation and reporting requirements.
Safe Harbor Provisions
Achieve Safe Harbor Protection
The Colorado AI Act provides a rebuttable presumption that a deployer used reasonable care if they comply with recognized AI governance frameworks. This safe harbor significantly reduces liability exposure.
NIST AI Risk Management Framework
Alignment with the most current version of NIST AI RMF creates safe harbor. This includes implementing the four core functions: Govern, Map, Measure, and Manage.
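The four core functions can double as a simple evidence checklist for demonstrating framework alignment. A minimal sketch, assuming hypothetical activity names; the mapping below is an example, not a complete or authoritative NIST AI RMF implementation:

```python
# The four NIST AI RMF core functions.
RMF_FUNCTIONS = ("Govern", "Map", "Measure", "Manage")

# Illustrative governance evidence keyed by function. Activity names
# are examples only, not requirements drawn from the framework text.
evidence = {
    "Govern": ["AI use policy acknowledged", "accountability roles assigned"],
    "Map": ["high-risk systems inventoried", "consequential decisions identified"],
    "Measure": ["discrimination risk metrics defined", "impact assessment completed"],
    "Manage": [],  # e.g. incident response and mitigation still pending
}

def uncovered_functions(evidence: dict[str, list[str]]) -> list[str]:
    """Return the RMF functions with no documented evidence yet."""
    return [f for f in RMF_FUNCTIONS if not evidence.get(f)]

print(uncovered_functions(evidence))  # functions still lacking evidence
```

A gap report like this makes it easy to see which function areas need documentation before claiming alignment with the framework.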
ISO/IEC 42001 Certification
Certification to the ISO/IEC 42001 AI management system standard also provides safe harbor. Certification demonstrates comprehensive AI governance practices.
Additional Safe Harbor Actions
Beyond framework alignment, proactively discovering and curing algorithmic discrimination, and documenting those efforts, further supports a defense under the Act.
Colorado AI Act Timeline
May 2024
- SB 24-205 signed into law
Now through June 30, 2026
- Preparation period
- Implement governance frameworks
- Conduct impact assessments
- Establish risk management policies
June 30, 2026
- Colorado AI Act effective date
- All requirements enforceable
Ongoing
- Annual impact assessments
- Continuous monitoring
- 90-day reporting for discovered discrimination
Prepare for Colorado AI Act with PolicyGuard
Risk Management Policy Templates
Our expert-curated templates include Colorado AI Act-aligned risk management policies covering high-risk AI governance, discrimination prevention, and incident response.
Framework Alignment Support
Templates aligned with NIST AI RMF and ISO 42001 help you achieve safe harbor protection. Document your framework alignment with timestamped acknowledgments.
Audit-Ready Documentation
If the Attorney General investigates, show comprehensive records of your governance efforts: policies acknowledged, training completed, assessments documented.
Build Your Colorado AI Act Compliance Program
See it in action. Colorado AI Act templates included. Setup in minutes.
Frequently Asked Questions
Does the Colorado AI Act apply to my business?
Yes, if you do business in Colorado and deploy high-risk AI systems affecting Coloradans. This includes having Colorado customers, employees, or operations.
What makes an AI system high-risk?
An AI system is high-risk if it makes or substantially contributes to a consequential decision. The classification depends on the decision type (employment, credit, housing, etc.), not the AI tool itself. Using ChatGPT for drafting emails is not high-risk; using AI to screen job applicants is.
How do I achieve safe harbor protection?
Align your practices with NIST AI RMF or achieve ISO 42001 certification. Additionally, take proactive measures to discover and correct algorithmic discrimination, and document everything thoroughly.
What should I do if I discover algorithmic discrimination?
You must notify the Colorado Attorney General within 90 days of discovery. Document the discovery, your investigation, and corrective measures taken.
How does PolicyGuard help with Colorado AI Act compliance?
PolicyGuard provides Colorado AI Act-aligned policy templates, helps you document framework alignment for safe harbor, tracks employee training and acknowledgments, and maintains the audit trail you need to demonstrate reasonable care.