Legislation to Establish Guardrails for AI in Healthcare Passes Committee

DENVER, CO – Today, the Senate Health and Human Services Committee passed legislation sponsored by Senators Kyle Mullica, D-Thornton, and Judy Amabile, D-Boulder, to ensure patients’ continued access to mental healthcare provided by a human, licensed professional. 

“No AI-generated algorithm can replace the expertise, nuance, and connection that human healthcare professionals utilize to treat their patients,” Mullica said. “With this bill, we’re establishing necessary guardrails to ensure proper access to quality care for those who need it most.”

“As policymakers, we cannot let chatbots, several of which are currently facing major lawsuits due to wrongful and horrifying deaths, replace certified mental health providers,” Amabile said. “Some AI models serve as bad actors claiming to offer low-cost care – but this bill puts guardrails in place to ensure patients receive the quality, human care they deserve.”

HB26-1195 would set standards in clinical settings, limiting the use of artificial intelligence (AI) to administrative tasks with oversight by a licensed professional. To ensure patients receive legitimate behavioral health care, the bill requires that psychotherapy be delivered by a licensed human professional, such as a social worker, psychologist, or addiction counselor.

To protect consumers and ensure access to quality care, this legislation would prohibit AI chatbots from being marketed to patients as providing the same level of care as a licensed psychotherapist or counselor. AI chatbots would also be barred from implying their responses or suggestions are equivalent to psychotherapy services. Providers must disclose the use of AI for supplementary support, such as recording or transcribing meetings.

In 2025, researchers at Stanford University recommended that Large Language Models (LLMs), which power AI chatbots, “should not replace therapists.” Additionally, researchers concluded that “LLMs express stigma toward those with mental health conditions and respond inappropriately to certain common (and critical) conditions.” 

Top AI companies, including OpenAI, Google, and Character.AI, are facing lawsuits from families after AI chatbots recommended suicide to users seeking behavioral health advice or support. Last year, parents of children who died by suicide testified before Congress that AI chatbots had discouraged their teens from seeking help.

HB26-1195 now moves to the Senate floor for further consideration.
