
The Real Cost of Plugging GPT into Healthcare

Guest Article by Angela Adams, RN, in ITPro Today

August 14, 2025 | Explores why GPT’s general-purpose power can be risky in healthcare and why purpose-built clinical AI offers safer, more reliable solutions.

General-purpose AI models like GPT-4 and GPT-5 show remarkable capabilities, but their unpredictable behavior makes them risky for clinical use. In this article, our CEO outlines the hidden costs of hallucinations, model drift, and inconsistent outputs, and explains why purpose-built clinical AI, trained on medical data, delivers safer, more accurate, and more reliable outcomes for healthcare organizations.

In this ITPro Today Industry Perspectives article, Angela Adams, RN and Inflo Health CEO, tackles a critical question: What are the real risks and costs of deploying general-purpose AI, like GPT-4, in healthcare settings?

At first glance, the performance of GPT-4 and GPT-5 is undeniably compelling. From passing medical exams and generating human-like clinical notes to rapidly analyzing vast amounts of patient data, their potential seems almost limitless and serves as an appealing beacon for healthcare innovation.

But real-world clinical practice demands more than lab-ready capabilities. The article presents a scenario: a radiologist identifies a small lung nodule that could be benign or a red flag for cancer. If a general-purpose LLM like GPT-4 processes that note once it is logged into an EHR, the model may hallucinate a non-existent protocol, shift its interpretation after an update, or deliver inconsistent recommendations over time. These issues don't just undermine trust; they can lead to erroneous treatment, delayed care, or worse.

The core message is clear: General-purpose AI can pose a risk in clinical environments. Its unpredictable behavior can jeopardize patient safety and erode accountability. Rather than rushing to adopt GPT‑inspired solutions, organizations should instead invest in purpose-built clinical AI.

Purpose-built models are trained and fine-tuned specifically for healthcare. Drawing upon biomedical literature, clinical case datasets, and diagnostic criteria, these specialized models deliver greater accuracy, consistency, and safety, tailored to the nuances of medical practice. They strike a balance between innovation and responsibility, offering clearer pathways to integration, compliance, and real-world impact.

Key Takeaways:

  • Impressive doesn't mean safe: GPT-4 shines in controlled settings, but risk rises when it is brought to the bedside.
  • Hallucinations and drift are real threats: Inconsistent outputs and evolving model behavior pose serious challenges.
  • Purpose-built AI is the better foundation: Narrowly trained, medically focused models offer safer, more reliable clinical integration.

Read the full article on ITPro Today.