Detection tells you a finding exists. Follow-through ensures the patient actually gets care. Those are not the same thing, and in most health systems the gap between them is where clinically significant incidental findings quietly disappear. If your operational measure of success is “the AI flagged the nodule,” you are measuring the first ten percent of the workflow and calling it done.
This post is about the other ninety percent — why it breaks, what it costs, and how to tell whether your current stack is actually closing the loop or just finding the problem and walking away.
What does detection mean in radiology AI?
Detection is identification. An AI model, an NLP pipeline, or a radiologist flags a potentially significant finding in a radiology report: a lung nodule, an adrenal mass, a thyroid nodule, an incidental aortic abnormality. A recommendation for follow-up is generated. That is where most “AI for incidental findings” tools stop. They produce a list, a queue, or an alert, and hand the rest of the job to humans.
Detection has gotten very good. FDA-cleared models routinely claim 95%+ sensitivity on narrow indications. Large NLP systems have been trained on hundreds of millions of reports. The number of findings entering the workflow has gone up. That is real progress, and it is not the problem.
What does follow-through mean?
Follow-through is everything after the finding is named. At a minimum, it includes:
- The ordering provider being notified in a way they actually see and act on
- The recommendation being accepted, routed, or revised by a responsible clinician
- The patient being contacted and the conversation documented
- A specialist referral being completed, including when the specialist works outside the health system’s EHR
- A follow-up appointment being scheduled and kept
- The result of that appointment being recorded, closed out, and reconciled with the original finding
Every one of those steps is a handoff. Every handoff is a place where patients get lost.
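To make those handoffs concrete, here is a minimal sketch of follow-through as an ordered checklist of stages. Everything in it is hypothetical: the stage names, the `Finding` record, and the `first_open_handoff` helper are illustrations, not a reference to any EHR schema or product.

```python
from dataclasses import dataclass, field

# Hypothetical ordered stages of follow-through; names are illustrative only.
STAGES = [
    "provider_notified",        # ordering provider saw and acknowledged it
    "recommendation_actioned",  # accepted, routed, or revised by a clinician
    "patient_contacted",        # outreach made and documented
    "referral_completed",       # including referrals outside the EHR
    "appointment_kept",         # follow-up visit actually happened
    "result_reconciled",        # outcome closed out against the index report
]

@dataclass
class Finding:
    finding_id: str
    completed: set = field(default_factory=set)  # stages confirmed done

def first_open_handoff(finding: Finding):
    """Return the earliest unconfirmed stage, i.e. the handoff where the
    patient is currently at risk of being lost; None means the loop closed."""
    for stage in STAGES:
        if stage not in finding.completed:
            return stage
    return None
```

Here, `first_open_handoff(Finding("f-001", {"provider_notified"}))` returns `"recommendation_actioned"`: the provider saw the recommendation, and nothing happened next.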
Where do incidental findings actually get lost?
In our work with health systems, the breakdown points cluster in four predictable places:
- The ordering provider never sees the follow-up recommendation — it is buried in the report, routed to an inbox nobody monitors, or sent to a physician who has since rotated out.
- The follow-up provider is outside the health system’s EHR. The referral becomes a fax, a phone call, or a handwritten note. No one can see whether it was ever completed.
- The patient never knows. Outreach is attempted once, goes to voicemail, and is never logged as an open task.
- The appointment happens, but nothing updates the original record. The finding is still “open” months later because no system reconciled the visit back to the index report.
Detection tools cannot fix any of this. They were not built to. They were built to find the needle. The needle falls back into the haystack between steps.
What does the gap cost?
Clinical cost first. Incidental pathways find cancers that screening programs miss: partner data from large systems shows that incidental findings account for roughly half of cancers diagnosed in patients who do not qualify for screening, and that for some cancer types the yield is several times higher than screening alone. When the loop fails, those early-stage diagnoses are lost, stage at diagnosis migrates upward, and outcomes get worse.
Legal cost second. Failure to communicate or act on a radiology finding is one of the most frequently cited causes of radiology malpractice. Juries do not distinguish between “the radiologist missed it” and “the finding was noted but nobody followed up.” The system is liable either way.
Operational cost third. A finding that sits in an unreconciled queue is not a neutral event. It consumes coordinator time, triggers repeat outreach cycles, creates compliance exposure, and erodes the ROI story behind whatever detection tool surfaced it in the first place.
How do I know if my team is measuring detection or completion?
A short audit. Pull a recent month of incidental findings flagged as requiring follow-up. For each one, answer four questions:
- Was the ordering provider’s receipt of the recommendation confirmed?
- Is there a scheduled follow-up event tied to the finding?
- Did the follow-up event actually happen?
- Is the result of that event reconciled against the original report?
If you cannot answer those four questions from a single system of record, you are not measuring completion. You are measuring detection and hoping.
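If that single system of record does exist, the audit is almost mechanical. A minimal sketch, assuming a hypothetical one-month export in which each flagged finding carries one boolean per question (all field names here are illustrative, not any real schema):

```python
# Audit sketch over a hypothetical one-month export of flagged findings.
findings = [
    {"id": "f-001", "receipt_confirmed": True,  "followup_scheduled": True,
     "followup_happened": True,  "result_reconciled": True},
    {"id": "f-002", "receipt_confirmed": True,  "followup_scheduled": True,
     "followup_happened": False, "result_reconciled": False},
    {"id": "f-003", "receipt_confirmed": False, "followup_scheduled": False,
     "followup_happened": False, "result_reconciled": False},
]

QUESTIONS = ["receipt_confirmed", "followup_scheduled",
             "followup_happened", "result_reconciled"]

# Answer each of the four questions across the cohort.
for q in QUESTIONS:
    n = sum(f[q] for f in findings)
    print(f"{q}: {n}/{len(findings)}")

# A loop is closed only when all four answers are yes.
closed = sum(all(f[q] for q in QUESTIONS) for f in findings)
print(f"fully closed loops: {closed}/{len(findings)}")
```

In this toy cohort, one of three loops is closed. The point is not the code; it is that the code is only possible when all four answers live in one place.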
What should buyers ask vendors in 2026?
“When your platform flags a finding, what percentage result in a completed follow-up visit reconciled back to the index report — and how do you know?”
That one question separates detection tools from completion platforms. Vendors who can show completion data — not detection accuracy, not extraction precision, but rate of reconciled completed care — are operating in a different category. Most cannot.
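In metric terms, the question reduces to a single ratio. A hedged sketch of how it might be computed, as an illustration rather than a standard definition or Inflo’s published formula:

```python
def completion_rate(flagged: int, reconciled_complete: int) -> float:
    """Completed follow-up visits reconciled to the index report,
    as a share of all findings flagged for follow-up."""
    return reconciled_complete / flagged if flagged else 0.0

# e.g. 120 flagged findings, 42 with a completed, reconciled follow-up
print(f"{completion_rate(120, 42):.0%}")  # -> 35%
```

Vendors who can quote that number, with the denominator and the reconciliation rule spelled out, are measuring completion. Everyone else is quoting detection accuracy.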
How Inflo thinks about this
Inflo is built around completion, not detection. We treat the finding as the starting line. Our job begins when the report is signed and ends when the loop is closed and documented — across providers, across specialties, and across the EHRs and non-EHR channels a real patient journey actually touches.
The market is converging on the language of “follow-up” and “closed loop.” Most of it is still describing detection with a queue on top. The operational question we ask every health system we talk to is the one your malpractice carrier is going to ask in three years: what percentage of your incidental findings reach completed care — and can you prove it?