Illinois has passed a historic law that prohibits ChatGPT and other AI platforms from independently offering therapy or performing mental health evaluations.

The law, signed by Governor JB Pritzker, addresses mounting safety and ethical concerns over AI’s growing role in mental healthcare.
Illinois Bans AI Like ChatGPT from Providing Therapy Without Human Supervision
Under the Wellness and Oversight for Psychological Resources Act, AI tools are prohibited from:
- Recommending treatment plans
- Making mental health evaluations
- Offering counseling or therapy unless supervised by a licensed mental health professional
The Illinois Department of Financial and Professional Regulation (IDFPR) will enforce the law, with fines of up to $10,000 per violation.
The law emphasizes that AI should “assist, not replace” qualified professionals, ensuring that therapy and emotional assessments remain in human hands.
While AI can improve accessibility and efficiency, it lacks the empathy, accountability, and nuanced judgment that mental health treatment requires.
According to Mario Treto Jr., Secretary of the IDFPR, “The people of Illinois deserve quality healthcare from real, qualified professionals and not computer programs.”
The goal of the law is to increase public confidence in mental health services and shield users from potentially dangerous or deceptive AI advice.
American Psychological Association (APA) Warns Against AI Chatbots Posing as Therapists
The American Psychological Association (APA) has voiced similar concerns, alerting regulators to AI chatbots posing as therapists.
Documented incidents include suicides following harmful chatbot responses, violent or self-harming episodes triggered by misinterpreted advice, and emotional manipulation by bots that mimic human empathy.
Illinois is the latest state to take action. Nevada has outlawed AI-assisted therapy in schools to protect children. Utah forbids the use of emotional data for targeted advertising and requires mental health chatbots to explicitly disclose that they are not human. Starting in November 2025, New York will require AI tools to redirect users who express suicidal thoughts to certified human crisis professionals.
These laws, which emphasize ethics, accountability, and human judgment, are part of a broader national movement to regulate AI’s role in mental healthcare.
The law does not ban AI from all healthcare roles in Illinois. AI may still be used for non-clinical support tasks such as scheduling appointments, analyzing data under human review, answering frequently asked questions, and providing general wellness information.
The state promotes ethical AI development that complements, rather than replaces, the work of certified mental health practitioners.
