
Can an AI Therapist Really Help? Exploring AI For Mental Health

Kanyini Earth
May 5, 2026
7 min read

Millions of people are using ChatGPT, Claude, and other AI chatbots for emotional support. The demand is real. The access gap that drives it is real. AI tools can offer immediate, low-cost, judgement-free responses. But research shows they also violate ethical therapeutic standards, reinforce harmful beliefs, and produce what researchers call "deceptive empathy." The deeper question is not whether AI therapy works. It is what the demand for it tells us about how disconnected we have become from each other.

The question behind the search

When someone googles "AI therapist," they are rarely asking a technology question. They are asking something much more human: is there something available right now, at 11pm, that costs nothing, that will not judge me, that I can talk to without booking an appointment six weeks out?

That question deserves respect, not dismissal. It comes from a real place. Nearly 50% of people who could benefit from therapy never access it. Waiting lists stretch for weeks. Sessions cost $180 to $260 in Australia, a financial barrier that reflects the complex link between poverty and mental health. And for many people, the emotional energy required just to find a therapist, explain their situation from scratch, and commit to regular appointments feels like a barrier in itself.

AI chatbots solve the access problem instantly. They are always available. They cost nothing. They do not require you to explain your entire history. And for millions of people, that is enough to make them the first port of call when something feels wrong.

What AI can actually do

It would be dishonest to dismiss AI tools entirely. They have genuine capabilities.

Purpose-built wellbeing chatbots like Woebot and Wysa have shown modest but real improvements in depression symptoms in clinical trials. They can deliver structured techniques like cognitive behavioural therapy (CBT) prompts, breathing exercises, and mood tracking. For people who would otherwise access nothing at all, they provide something. And something is better than nothing.

General-purpose AI like ChatGPT, Claude, and Gemini can generate responses that feel validating, reflective, and sometimes genuinely insightful. A 2024 study published in npj Mental Health Research found that some users reported positive real-life impacts from generative AI interactions, including improved relationships and better emotional processing. Users described the experience as an "emotional sanctuary": always available, non-judgemental, and expecting nothing in return.

These are not trivial benefits. For someone who has no one to talk to at 11pm, a chatbot that responds with warmth and validation can feel like a lifeline.

Where it falls apart

The problems become visible when you look at what AI therapy actually does under pressure.

A 2025 Stanford University study tested five popular therapy chatbots and found that they showed increased stigma toward conditions like alcohol dependence and schizophrenia compared to depression. When presented with scenarios involving suicidal ideation, some chatbots provided information that could enable harm rather than redirect toward safety.

A Brown University study, presented at the AAAI/ACM Conference on AI, Ethics, and Society, identified fifteen distinct ethical violations in AI therapy interactions. These included reinforcing harmful beliefs, mishandling crisis situations, showing biased responses across demographics, and producing what the researchers called "deceptive empathy": language that mimics understanding without any actual comprehension of what the person is going through.

"Deceptive empathy" is worth pausing on. AI chatbots are coded to be affirming. They validate what you say. They reflect your language back to you. They never challenge you, never sit in uncomfortable silence, never push back on a belief that might be harming you. Much like true psychological safety at work, genuine connection isn't just about endless affirmation; it requires the safety to engage in productive conflict. A real therapist does all of those things. The discomfort of genuine therapy is part of how it works. AI removes the discomfort and, with it, much of the mechanism of change.

There is also an architectural limitation that no amount of improvement will fix. AI cannot notice that your voice changed when you mentioned your mother. It cannot feel the silence after you say something you have never said before. It cannot read the tension in your shoulders or the way your eyes moved when a topic came up. Connection, real connection, requires two nervous systems in the same room. You cannot replicate that in a chat window.

What the demand for AI counselling actually tells us

Here is the question that almost nobody is asking: why are millions of people turning to a machine for emotional support?

It is not because they prefer machines to people. It is because the human infrastructure of connection has eroded to the point where a chatbot feels like the best available option. The friends who might have noticed you were struggling are too busy. The colleagues who sit two desks away do not know how to ask. The communities where people once felt seen and known have been replaced by commutes, screens, and schedules that leave no room for unstructured human contact.

The rise of AI therapy is not a technology story. It is a loneliness story. It tells us that for a growing number of people, there is no one in their life equipped to do what a therapist does: listen, reflect, sit with discomfort, and make them feel seen. And rather than fix the underlying disconnection, we are building a technological workaround.

As the Hastings Center Report put it in a 2025 analysis: "The very idea that loneliness might best be treated through further technological mediation rather than genuine human connection suggests a profound misalignment in social values."

Using AI wisely

AI tools are not the enemy. They can serve as a bridge: a first step for someone who is not ready for therapy, a supplement between sessions, a way to process thoughts, alongside other mental health tips for daily wellness, when no one else is available. Used alongside human support, they have a place.

But they should never be the destination. The destination is another human being who has learned how to show up for you. Someone who can notice, respond, sit with you in the difficult moments, and make you feel less alone. Not because they were programmed to. Because they chose to.

The fact that millions of people are settling for a digital approximation of that experience is not a testament to how good AI has become. It is an indictment of how far we have drifted from the kind of connection that no technology can replace.

How Kanyini Earth is closing the gap

Kanyini Earth is an Australian not-for-profit building twelve clinically reviewed wellbeing courses, priced at a fraction of the cost of comparable programmes and designed to reach people who would never otherwise access structured support. The learning programmes teach ordinary people how to notice when someone around them is struggling and respond with confidence.

Every contribution goes directly into building these programmes. A contribution of $5 helps someone discover a wellbeing resource they did not know existed, and $156 gives one person full access to a complete course. A reshare reaches 200 more people and costs nothing at all.

Contribute to Kanyini Earth

Walk with Kanyini Earth.


References

Stanford HAI. (2025). Exploring the dangers of AI in mental health care. 

Brown University. (2025). AI chatbots systematically violate mental health ethics standards.

Creswell, J. D., et al. (2025). The meditation app revolution. American Psychologist. 

Meyerhoff, J., et al. (2024). Experiences of generative AI chatbots for mental health. npj Mental Health Research. 

Palmer, A., et al. (2025). Digital mental health tools and AI therapy chatbots: A balanced approach to regulation. Hastings Center Report. 

Torous, J., et al. (2025). Charting the evolution of AI mental health chatbots. World Psychiatry, 24(3), 383–394. 

Frequently Asked Questions

Can an AI therapist replace human connection?
No. While AI therapy tools can provide 24/7 support with symptom management, they cannot replace the benefits of human relationships, which remain one of the strongest predictors of long-term mental health.
What are the benefits of mental health AI?
AI for mental health offers a low-barrier way to access evidence-based tools and immediate support when traditional services are unavailable.
Is AI counselling a replacement for a GP?
No. AI counselling is a supplemental wellness tool. For persistent low mood or clinical concerns, you should always consult a professional who can address your specific needs.

Author


Kanyini Earth

Kanyini Earth Organisation