As TikTok fills with AI therapy hacks and ChatGPT becomes the go-to for 2am breakdowns, Unglossed explores the rise of artificial emotional support.
“It was 2am and I was crying in my room about my breakup, but instead of messaging my friend, I decided to ask ChatGPT for advice,” says Maya Blackwell, 24. “It just felt a lot easier.”
And she’s not alone. On TikTok, videos tagged #ChatGPTtherapy have racked up millions of views, with people documenting their midnight meltdowns and existential spirals. It sounds silly, but in a world where BeReal is how we “check in” with each other, and Spotify doubles as a therapy session, maybe it’s not so strange that AI has become Gen Z’s universal therapist. At the end of the day, it’s free, non-judgemental, and, unlike the NHS, always available.
For Maya, it started as a low-stakes experiment. “I saw a TikTok of someone using it for journaling and thought I’d try. It started off really light, like ‘give me some journaling prompts’, and then spiralled into me writing full-on monologues to it about my anxiety.”
That anonymity and accessibility are what make it so appealing, says Professor Kate Devlin, an expert in AI and Society at King’s College London. “To be able to talk via text to something you feel won’t judge you, that’s powerful. With another human, you might hesitate and be more reluctant to talk, but with AI, there’s a sense of impartiality that can feel a lot more comforting and safer.”
And in many ways, that’s the point. Traditional therapy is expensive, often difficult to access and, for many, intimidating. NHS waiting lists can stretch over six months, and private sessions can cost upwards of £60 an hour, well out of reach for most young people juggling rent, bills, and the emotional chaos of their twenties.
ChatGPT, on the other hand, is free, available 24/7, and, as Maya puts it, “doesn’t judge your 2am intrusive thoughts.”
“I’ve actually been to therapy before and it helped, but it was expensive and really intense. This time, I just wanted something low-effort, because I couldn’t afford to dive in at £100 a session,” says Maya. “Therapy feels like a big thing sometimes: you’ve got to research, book in, explain your whole life from scratch, whereas ChatGPT is just there. I could be crying at 2am and it was still ‘available’. I think the biggest thing was that I didn’t feel like I had to perform or explain myself the same way I did with an actual therapist. I could just type exactly what I was feeling, however messy it was, and it instantly gave me something back. That helped way more than I expected.”
Professor Devlin agrees, saying that the rise of AI for emotional support is, in part, a reaction to a broken system. “We know that mental health services are under a great deal of pressure. There are people on waiting lists for months, even years. I’ve experienced this myself and other family members have too. So if you can have something to fill that gap until you get help, it’s no wonder people turn to it,” she says. But Professor Devlin is also clear about the limits.
“A big concern is that they’re unregulated. These platforms aren’t trained in clinical methods, and there are no guardrails. You’re also handing over deeply personal data to big tech companies that don’t have your best interests at heart.”
And that lack of regulation isn’t just an ethical issue; it’s a safety one. “People often forget that generative AI like ChatGPT isn’t a search engine. It can make things up, and it’s designed to do that. That’s a big risk when someone is vulnerable.”
But beyond the privacy and safety concerns, there’s a more human question to consider: is this trend eroding our ability to connect with real people?
It’s a valid fear, but Devlin’s research offers a more optimistic view. “There’s always a knee-jerk reaction that this tech will isolate us, but that’s not what the evidence shows,” she says. “In fact, my research suggests that AI companions might actually encourage more human interaction, because people who have AI companions often end up forming connections with others who do the same. We’ve seen whole subreddits where users discuss their AI partners, and they build friendships with each other because they have something in common.
“There are also people who report that they feel less socially anxious because they’ve been able to rehearse conversations or get advice from their AI companions, and then take that into the real world. So I think there’s a really positive aspect to evolving technology that often gets overlooked: it’s not replacing human interaction, it’s actually sometimes helping enable it.”
That said, it’s still not a substitute for real care. As Professor Devlin puts it, “The gold standard would be AI integrated into properly regulated, clinically supported apps, with human oversight and clear ways of flagging distress. On its own, it would be too risky. So we do need more investment into mental health services, because these apps may help, but they’re not a replacement, and the danger is that they’ll be treated as one.”
And that’s where the bigger concern lies: not just in how individuals use AI for support, but in how institutions might come to rely on it too. The real danger isn’t just emotional over-reliance or questionable advice; it’s that AI could become a convenient excuse to slash already strained mental health services… and that’s the opposite of what we need. Tools like ChatGPT might offer comfort at 2am, but they shouldn’t become a stand-in for real care, proper funding, or the human connection that therapy is built on. Tech might help fill the gap, but it shouldn’t be allowed to replace the safety net altogether.

To find out more about Professor Kate Devlin, please visit https://www.kcl.ac.uk/people/kate-devlin