Illinois just passed a first-in-the-nation law banning AI from delivering mental health therapy services without human oversight, charting a course that will define health care AI regulation for years to come. Professor of Practice Stacey Lee details the implications for health care providers.

The Prairie State changes the landscape of health care AI
This story is based on the Negotiation Matters newsletter segment, “The AI Regulation Wars Begin: What Illinois Just Taught Every Healthcare Executive,” by Stacey Lee, professor of practice in Law and Ethics at the Johns Hopkins Carey Business School, and CEO of Praxis Pacisci.
On August 4, 2025, Illinois passed the Wellness and Oversight for Psychological Resources Act — a first-in-the-nation law banning AI from delivering mental health therapy services without human oversight. At $10,000 per violation, the fines are hefty, and there are no exceptions for wellness apps that cross into therapeutic territory.
This is more than a mental health story. Illinois has set a precedent that could shape how every clinical AI tool operates, from diagnostic algorithms to ICU decision-support systems. In a single move, the state announced that the era of unregulated AI in health care is over.
The new law does three big things: it prohibits AI from making independent therapeutic decisions, requires licensed human oversight for mental health AI, and creates a clear liability pathway for patient harm. That liability piece is the real game-changer, giving courts a roadmap for holding providers and vendors accountable when AI goes wrong.
The Illinois law is not without company. Nevada passed similar restrictions in June, Utah now requires AI disclosure notices, and Florida Governor Ron DeSantis has called AI regulation “the biggest issue” facing health care. Statehouses are moving faster than federal agencies, meaning national standards may be shaped from the ground up.
Three immediate challenges
For health care leaders, this move creates three immediate challenges. The first is compliance fragmentation. With states writing their own rules, relying on federal guidelines is no longer enough. Organizations must track state-by-state requirements and adapt policies to meet the strictest standards in their footprint.
The second challenge is liability exposure. Illinois’s blueprint opens the door for lawsuits, and many existing AI contracts aren’t built for this. Leaders need to know where their telehealth platforms operate, how vendor agreements address conflicts between state and federal law, and what their exposure looks like in high-regulation states. And at least for the time being, states still have the right to regulate AI, since the controversial moratorium that would have prevented them from doing so was removed from the “Big Beautiful Bill” recently passed by Congress.
The third challenge is competitive disadvantage. As in any regulatory shift, the early adopters will win trust and avoid costly disruptions. Some systems are already creating AI governance boards, hiring regulatory affairs experts, and building compliance into their innovation strategies. Those who resist change will be left behind.
How to get ahead instead of being left behind
Health care providers who want to lead rather than scramble to catch up need to start now, by mapping every AI tool they use and classifying it by risk. High-risk systems make clinical recommendations directly; medium-risk tools support diagnoses under human oversight; and low-risk systems handle purely administrative tasks.
Providers then need to develop a state-by-state compliance strategy: track pending legislation, identify states likely to adopt Illinois-style rules, and create adaptable governance protocols that can be applied across jurisdictions.
The next step is to focus on vendors. Providers should update contracts to require compliance guarantees that match or exceed the Illinois standard, and negotiate shared liability provisions. Then they should evaluate each vendor on how quickly they can adapt to regulatory changes.
From compliance to influence
This law is a turning point, and it isn’t just about avoiding penalties — it’s an opening to shape the rules. By engaging early, health care leaders can secure stronger vendor protections, collaborate with regulators on best practices, and reassure staff that AI governance protects both patients and providers. Those who position themselves as partners in crafting oversight standards will have a voice when federal rules arrive.
The timeline is moving quickly. Within six months, five to ten more states could pass similar laws. Within a year, we may see federal preemption with national standards. Within two years, AI compliance could be as complex and resource-intensive as HIPAA. Organizations that treat this as a strategic opportunity, not just a legal requirement, will be better equipped to navigate the change and use it to their advantage.
Illinois didn’t just ban AI therapy; it charted a course that will define health care AI regulation for years to come.