The Hidden AI Problem in Radiology: Why More Detection Creates More Risk

Guest Article in Onco’Zine – The International Oncology Network

June 13, 2025 | More detection isn’t always better. Angela Adams explores how AI in radiology may increase risk without smarter implementation and outcome-driven strategies.

AI is revolutionizing radiology with faster, more sensitive detection, but this progress comes with hidden risks. Angela Adams’ article explores how over-diagnosis, increased false positives, and workflow burdens can negatively impact care. Without smarter implementation and a focus on clinical outcomes, AI may create more noise than value. The takeaway? More detection doesn’t always mean better outcomes.

Artificial intelligence (AI) is rapidly transforming radiology by enhancing image interpretation and increasing detection of abnormalities, including subtle or early-stage findings. However, Inflo Health CEO Angela Adams’ article in Onco’Zine spotlights a critical and often overlooked consequence of this technological leap: the unintended risks of over-detection.

While AI tools improve sensitivity, they also identify many incidental findings—some clinically irrelevant, others ambiguous—which can lead to a surge in follow-up tests, unnecessary procedures, patient anxiety, and escalating healthcare costs. This raises a paradox: detection without context can result in more harm than good.

One major concern is the increase in false positives and overdiagnosis, particularly for findings that wouldn’t have progressed to harm if left alone. Radiologists and care teams are placed in a challenging position—navigating a sea of flagged results without a clear sense of clinical urgency or impact.

The article also emphasizes workflow strain. Radiologists may feel increased cognitive load from having to assess more flagged studies, leading to potential burnout and increased risk of human error. Moreover, attention may shift disproportionately toward AI-highlighted scans, leaving other critical reports under-prioritized.

Adams argues for a more nuanced approach to AI implementation, grounded in clinical relevance rather than volume of detection. This includes deploying AI systems that support workflow prioritization, incorporating high-reliability processes that amplify the expert voice of the clinician, and integrating tools that help contextualize findings.

Ultimately, the goal of AI in radiology should not be more detection; it should be better outcomes. To achieve this, health systems must keep their focus where it belongs: on better health for patients and populations.

Read the full article here.