Researchers at Stanford University warn that therapy chatbots powered by large language models may stigmatize users with mental health conditions and respond inappropriately.
A new paper titled “Expressing stigma and inappropriate responses prevents LLMs from safely replacing mental health providers” assesses five therapy chatbots against guidelines for what makes a good human therapist.
The study found significant risks in using chatbots for therapy: the models showed greater stigma toward conditions such as alcohol dependence and schizophrenia than toward depression.
The researchers conclude that while AI tools are not ready to replace human therapists, they could still assist with tasks such as billing, training, and supporting patients alongside therapy.