{"id":322,"date":"2025-08-15T18:33:48","date_gmt":"2025-08-15T18:33:48","guid":{"rendered":"http:\/\/www.greenville-nc.com\/?p=322"},"modified":"2025-08-19T10:26:12","modified_gmt":"2025-08-19T10:26:12","slug":"colorados-ai-law-wont-work-but-a-smarter-one-can-opinion","status":"publish","type":"post","link":"http:\/\/www.greenville-nc.com\/index.php\/2025\/08\/15\/colorados-ai-law-wont-work-but-a-smarter-one-can-opinion\/","title":{"rendered":"Colorado\u2019s AI law won\u2019t work \u2013 but a smarter one can\u00a0(Opinion)"},"content":{"rendered":"
When I think about artificial intelligence gone wrong, the first image that comes to mind is the unblinking red eye of HAL 9000, the AI from \u201c2001: A Space Odyssey.\u201d HAL was designed to assist humans, but when his logic conflicted with his programming, he made a chilling decision: eliminate the crew. \u201cI\u2019m sorry, Dave,\u201d he says coldly, \u201cI\u2019m afraid I can\u2019t do that.\u201d\u00a0<\/span><\/p>\n That fictional warning from 1968 is starting to feel real here in 2025. AI can now make decisions today about who gets a mortgage, which job candidates are interviewed, what content our kids are exposed to <\/span>online —<\/span> and who might fall through the cracks. The risks are no longer science fiction.\u00a0<\/span><\/p>\n That\u2019s why Colorado passed the Colorado Artificial Intelligence Act (CAIA), aiming to prevent algorithmic discrimination and ensure more transparency in high-risk AI systems. It was an ambitious <\/span>step —<\/span> but like HAL itself, it now risks becoming a system built with good intentions but poor oversight.\u00a0<\/span><\/p>\n The law is replete with buggy programming, including vague definitions and technical assumptions that <\/span>don\u2019t<\/span> reflect how AI <\/span>actually<\/span> works.<\/span> It expects companies — many of them small or mid-sized \u2014 to explain exactly how complex machine learning models make decisions. But these models are often statistical black boxes, trained by third-party developers. The result? A regulatory system that\u2019s difficult to comply with, hard to enforce, and likely to fail in holding the worst actors accountable.\u00a0<\/span><\/p>\n And just like in \u201c<\/span>2001,\u201d when HAL\u2019s creators gave him an impossible set of instructions, we\u2019ve now put businesses in the same position: follow unclear rules, or risk being shut out of Colorado\u2019s market. <\/span>Some companies doing good work in <\/span>health care<\/span>, education <\/span>and<\/span> clean energy innovation are already choosing not to relocate <\/span>here —<\/span> or worse, planning their exit.<\/span> Not because they fear regulation, but because they can\u2019t survive the ambiguity.\u00a0<\/span><\/p>\n Meanwhile, the very harms this law was meant to prevent — algorithmic bias, price gouging, discrimination in lending or housing \u2014 may still occur.\u00a0<\/span><\/p>\n But there\u2019s a way forward. As Colorado lawmakers prepare for a Special Legislative Session, we urge our lawmakers in Boulder and across the state to help reboot the system — this time with the right inputs.\u00a0<\/span><\/p>\n Core ideas must include a stronger, smarter foundation, including:\u00a0<\/span><\/p>\n \u2022 Clear disclosure when someone is interacting with AI instead of a human, particularly in sensitive services like health care, education, or housing;<\/p>\n \u2022 Explicit prohibition of discriminatory uses of generative AI under Colorado\u2019s civil rights laws;<\/p>\n \u2022 Making it clear that \u201cthe AI did it\u201d is not a valid excuse when harm is caused;<\/p>\n \u2022 Giving the Attorney General the tools to pursue bad actors using AI to deceive or exploit consumers.<\/p>\n Improvements like this don\u2019t reject regulation — they upgrade it. They create clarity for innovators, protect consumers from real harms and ensure we don\u2019t end up locked in a battle with the very tools we hoped would help us.\u00a0<\/span><\/p>\n