Colorado recently enacted its Artificial Intelligence Act, launching a new era of state AI laws.
What do you need to know?
- The bill is effective February 1, 2026 and enforceable by the Attorney General.
- This is a comprehensive AI bill that applies directly to the private sector.
- Like the EU AI Act, the gating item is “high-risk AI,” defined as: an AI system that has been specifically developed and marketed, or intentionally and substantially modified, to make, or to be a substantial factor in making, a consequential decision.
- Like U.S. state privacy laws, a consequential decision is a decision that has a material legal or similarly significant effect on a consumer’s access to, or the availability, cost, or terms of, things like education, employment, essential goods or services, financial or lending services, healthcare, housing, insurance, or legal services.
- Like the EU AI Act, it allocates responsibility to “developers” and “deployers.” This means service providers are directly implicated.
- The focus (and key violation): reasonable care to avoid algorithmic discrimination. Completing an extensive list of to-dos creates a presumption of reasonable care.
Developers need to:
- Provide deployers the information and documentation necessary to complete an impact assessment (evaluation, data governance, mitigation).
- Provide deployers and the public information on the types of high-risk systems the developer has developed, or intentionally and substantially modified, and makes available.
- Report to the Attorney General’s Office any known or reasonably foreseeable risk of algorithmic discrimination within 90 days after discovery or receipt of a credible report from a deployer.
Deployers need to:
- Implement a risk management policy and program (NIST AI RMF recommended).
- Complete an impact assessment at least annually and within 90 days of substantial change.
- Notify a consumer of specified items if the system makes a consequential decision concerning that consumer (including a description of the system, its purpose, the decision, human involvement, the data used, and the right to opt out of profiling).
- Make a publicly available statement summarizing the types of high-risk systems currently deployed.
- Disclose to the AG the discovery of algorithmic discrimination within 90 days.
The impact assessment is similar to the DPIA required under U.S. state privacy laws and needs to include:
- Purpose, intended use, context & benefits.
- Known or reasonably foreseeable risks of algorithmic discrimination & mitigation steps.
- Categories of data processed & outputs used to customize the high-risk AI system.
- Metrics used to evaluate performance and limitations.
- Description of transparency measures taken.
- Post-deployment monitoring and safeguards, including oversight.