(2025-04-01) Doctorow Anyone Who Trusts An AI Therapist Needs Their Head Examined

Cory Doctorow: Anyone who trusts an AI therapist needs their head examined. There's a debate to be had about whether AI chatbots make good psychotherapists. This is not an area of my expertise, so I'm not going to weigh in on that debate. Nevertheless, I think that if you use an AI therapist, you need your head examined:

I'm not an expert on psychotherapy, but I am an expert on privacy and corporate misconduct, and holy shit is a chatbot psychotherapist running on some Big Tech cloud a terrible idea. Because while I'm no expert on therapy, I have benefited from therapy, and I know this for certain: therapy requires confidentiality.

Shrinks are incredibly careful about privacy.

Now consider the chatbot therapist: what are its privacy safeguards? Well, the companies may make some promises about what they will and won't do with the transcripts of your AI sessions, but they are lying.
There is no subject on which AI companies have been more consistently, flagrantly, grotesquely dishonest than training data.

This isn't just any data; it's data that isn't replicated elsewhere on the internet. It's rare – it's unique. It's a competitive advantage. AI companies will 100%, without exception, totally use your private therapy data as training data.

What's more, they will leak your therapy sessions. They will leak them because they can't figure out how to prevent models from vomiting up their training data verbatim.

But they'll also leak because tech companies leak like hell. They are crawling with insider threats. If the AI company sticks around long enough, it'll leak your secrets. And if it goes bankrupt? That's even worse!

Now, maybe you're thinking, "OK, but that's a small price to pay if we can finally get therapy for everyone." After all, the country – the world – is in the midst of a terrible mental health crisis and there's a dire shortage of therapists.

Now, let's stipulate, for the moment, that chatbots are substitutes for human therapists – that, at the very least, they're better than nothing. I don't think that's true, but let's say it is. Even so, this is a bad tradeoff.

Here, try this thought-experiment: someone figures out a great business model to pay for therapy for poor people. "We turned therapy into a livestreamed reality TV show."

This gambit is called "predatory inclusion." Think of Spike Lee shilling cryptocurrency scams as a way to "build Black wealth" or Mary Kay promising to "empower women" by embroiling them in a bank-account-draining, multi-level marketing (MLM) cult.

But it's not just people struggling with their mental health who shouldn't be sharing sensitive data with chatbots – it's everyone.

