
Healthcare AI Efficiency: Are We Risking the Human Touch?

  • Dr Rich Greenhill
  • Dec 18
  • 13 min read

The Paradox of Healthcare AI - When the Cure Threatens the Care


AI and Human Hand

There's an uncomfortable truth lurking beneath the glossy promises of healthcare AI: in our race to automate, optimize, and streamline, we may be engineering the very humanity out of medicine.


Walk into any hospital today and you'll witness a peculiar irony. Nurses stare at screens instead of patients. Physicians type furiously during appointments, their backs turned to the very people they trained years to help. And now, we're deploying AI to help us... do more of the same, just faster.


But faster isn't always better. And efficiency without empathy isn't healthcare—it's just data processing with human consequences.


The Healthcare Efficiency Arms Race: How We Got Here

Let's be honest about how we arrived at this moment. Healthcare didn't become inefficient by accident—we built it this way.


We created:

  • Electronic health records that promised liberation but delivered digital shackles

  • Quality metrics that measure everything except what matters most to patients

  • Prior authorization systems that prioritize bureaucracy over urgency

  • Documentation requirements that transform healers into scribes

  • Administrative complexity that demands two hours of desk work for every hour of direct patient care


And now, faced with a system groaning under its own weight, we're reaching for AI as if it were a magic wand. "Just automate it," we say. "Make it more efficient."


But here's the question no one wants to ask: What if efficiency is the wrong goal?


What We're Really Chasing—And What We're Losing

The healthcare AI pitch is seductive: ambient scribes that document visits, chatbots that answer patient questions, algorithms that predict deterioration, systems that automate everything from scheduling to discharge summaries.


The promise? Free up clinicians to focus on patients.


But that promise rests on a dangerous assumption: that the problem is simply time management, that if we could just shave minutes off documentation, reclaim hours from the inbox, eliminate administrative friction, then—finally—we could return to the bedside with full hearts and clear minds.


The reality is more complex and more troubling.


We're Focused on Solving the Wrong Problem

The crisis in healthcare isn't fundamentally about efficiency. It's about meaning.

Research shows that provider burnout affects 81% of clinicians, with high stress driven primarily by increased workloads and administrative burdens, and that fewer than 45% trust their organization's leadership to prioritize patient care.[1]

Clinicians aren't burning out because they spend too much time with patients. They're burning out because they spend too little. They entered medicine to heal, to comfort, to bear witness to human suffering and resilience. Instead, they've become data entry clerks with medical degrees.


But here's the trap: If we use AI merely to accelerate an already broken system, we haven't solved anything.


We've just made the treadmill spin faster.


The Human Touch: More Than a Platitude

Let's talk about what actually happens in healthcare encounters that AI cannot replicate.


  • The elderly woman with heart failure doesn't just need her medication list updated. She needs someone to notice that she's stopped cooking for herself, that her husband died six months ago, that the loneliness is killing her as surely as her ejection fraction.

  • The teenager with diabetes doesn't just need insulin dose calculations. She needs someone to understand that managing blood sugars feels impossible when your parents are divorcing and your friends don't understand why you can't just "eat normally."

  • The middle-aged man with chest pain doesn't just need a troponin level. He needs someone to sense the terror beneath his stoic facade, the unspoken fear that he might not see his daughter graduate.


These aren't edge cases. This is healthcare.


And no algorithm, no matter how sophisticated, can read the room. AI doesn't notice the trembling hands, the averted eyes, the forced smile that masks desperation. It doesn't feel the weight of silence when asking about depression. It doesn't intuitively know when to push and when to back off.


A comprehensive review of generative AI in healthcare identified risks such as misinformation and the undermining of the patient-physician relationship, with case studies highlighting both positive and negative outcomes.[2]


The therapeutic relationship isn't a luxury—it's a clinical intervention. 


Study after study shows that patients who trust their physicians have better outcomes: higher medication adherence, improved chronic disease management, greater satisfaction, fewer lawsuits, and yes, even better biomarkers.

You can't automate trust. You can't algorithmically generate empathy. And you can't outsource the sacred duty of bearing witness to human suffering.


The Unintended Consequences of Technological "Solutions"

Here's what keeps me up at night: We're creating systems that are technically brilliant but humanly bankrupt.


The Ambient Scribe Paradox

Ambient AI scribes promise to restore eye contact during visits. Finally, physicians can focus on patients instead of computers!


But what happens when the AI mishears, misinterprets, or produces errors? Research on AI-generated discharge summaries documented that 18 of 100 reviews noted safety concerns, mostly involving omissions but also several inaccurate statements termed "hallucinations."[3]

Now the physician must carefully review every AI-generated note, essentially doing the documentation twice. Or worse, they don't review carefully, and errors propagate into the medical record, silently compromising patient safety.


And there's a subtler danger: When we're freed from the cognitive work of synthesizing a clinical encounter into prose, do we actually listen more deeply? Or do we listen differently, knowing the AI is "handling it"?


The Chatbot Comfort Trap


Recent studies show that AI chatbots achieve engagement rates over 90% for enrolled patients and care plan adherence rates as high as 97%,[4] which sounds fantastic.


Until you ask:

  • What are we measuring?

  • Clicks?

  • Responses?

  • Checkbox completion?

  • Are patients more engaged—or just more compliant?


There's a profound difference between a patient who understands their disease because they've had repeated, patient-tailored conversations with a skilled educator, and one who dutifully responds to automated text prompts. The first has been empowered. The second has been processed.


And when patients do have real questions, nuanced concerns, emotional needs—the chatbot hits its limits. It refers them to "speak with your provider." Except the provider is buried in inbox messages, many of which the AI already tried (and failed) to address.


The Efficiency Treadmill

Perhaps most insidiously, efficiency gains rarely translate to more time with patients. They translate to more patients per day.


Health systems see that AI has saved clinicians 30 minutes daily and think: "Great! Now they can see two more patients." The administrative burden decreases, but the cognitive and emotional burden increases. And we're right back where we started—exhausted, depleted, and disconnected.


A quality improvement study of AI-generated draft replies to patient messages showed a mean utilization rate of 20% across clinicians, with strong adoption and usability and improvements in assessed burden and burnout—but, notably, no reduction in time spent.[5] This finding should give us serious pause.


Efficiency without purpose is just exploitation wearing a tech bro hoodie.


The Systems We Built: A Reckoning

Let's acknowledge the uncomfortable truth: We created this mess.


Healthcare didn't become complex by necessity—it became complex by design. We built systems optimized for billing, for legal protection, for regulatory compliance, for corporate profit. We layered process upon process until the actual care of actual humans became an afterthought.


And now we're deploying AI to navigate the labyrinth we constructed.


It's like building a house with 47 locks on every door, then celebrating when we invent a robot that can unlock them all slightly faster.


Wouldn't it make more sense to remove some of the locks?


The Real Questions We Should Be Asking

Before we deploy another AI tool, we should ask:


1. Does this technology reconnect or further disconnect clinicians and patients?

Not "does it theoretically free up time," but: Does it create space for genuine human connection, or does it just create a different kind of barrier?


2. Are we solving the root cause or medicating the symptom?

If prior authorizations are crushing physicians, maybe the answer isn't AI that completes them faster—maybe the answer is eliminating most prior authorizations entirely.


3. Who benefits from this efficiency?

When we save time, who captures that value? Patients who get more attention? Clinicians who have breathing room? Or administrators who can pack more appointments into the day?


4. What are we optimizing for?

Throughput? Patient satisfaction scores? Clinician wellbeing? Population health outcomes? These aren't the same thing. And optimizing for the wrong metric can make everything worse.


5. What gets lost in translation?

Every technology mediates human interaction. The telephone changed how we communicate. Email changed how we relate. What changes when AI mediates the clinical encounter?


A Different Vision: Technology in Service of Humanity


Of course I'm not a Luddite.

I don't think we should abandon AI or reject technological progress. But I believe we need a radically different framework for how we think about healthcare technology.


Principle 1: Efficiency Must Serve Connection, Not Replace It

AI should eliminate the barriers between clinicians and patients—not become another barrier.


  • Good use of AI: An ambient scribe that reliably captures the visit, freeing the physician to maintain eye contact and truly listen.


  • Bad use of AI: A chatbot that fields patient questions so physicians "don't have to be bothered," creating the illusion of access while actually distancing patients from their care team.


The test: Does this technology make it easier or harder for a patient to connect with a human who cares about them?


Principle 2: Automate the Inhumane, Not the Human

There are genuinely soul-crushing tasks in healthcare that no one should have to do: Insurance company fax tag. Manually searching for missing lab results. Reconciling medication lists across seventeen different systems.


Automate those. Please!


But don't automate the moments that give healthcare meaning: The difficult conversation about goals of care. The careful history-taking that reveals domestic violence. The teaching moment where a patient finally understands their diagnosis.

Research has shown that the clinician-patient relationship is integral to the efficacy of care, especially in behavior change, where a patient's motivation is often linked to a sense of accountability to their clinician—an element that might diminish with technology-based programs.[6]


The test: Does this task require human judgment, empathy, or relational trust? If yes, be extremely cautious about automation.


Principle 3: Measure What Matters

Healthcare has become obsessed with measuring everything that's easy to measure (clicks, times, throughput) and ignoring everything that's hard to measure (trust, healing, meaning, connection).


AI will only amplify this tendency. Algorithms optimize for whatever metric we feed them. So we'd better be damn sure we're measuring the right things.


We need new metrics:

  • How much uninterrupted time did the clinician spend making eye contact with the patient?

  • Did the patient feel heard, respected, understood?

  • Did the clinician feel they practiced medicine consistent with their values today?

  • Was this encounter healing for both parties?


The test: If we optimize for this metric, would healthcare become more human or less?


Principle 4: Patient Agency Over Patient Processing

The shift toward patient-centered care isn't just marketing—it's a fundamental rethinking of power dynamics in medicine. Research demonstrates that the digital revolution in healthcare is paving the way for person-centric healthcare models that turn patients from passive recipients to active participants.[7]

But AI can either empower patients or process them more efficiently. The difference is enormous.


  • Empowerment: AI-generated patient education materials that translate complex medical jargon into understandable language, reviewed by clinicians and tailored to individual literacy levels.


  • Processing: Automated appointment reminders that don't allow for questions or flexibility, chatbots that funnel patients through decision trees regardless of their actual needs.


The test: Does this technology expand or constrain patient autonomy?


The Path Forward: A Balanced Approach

So what do we do? How do we harness AI's genuine potential without sacrificing healthcare's soul?


1. Start with "Why," Not "How"

Before implementing any AI tool, ask: What problem are we truly trying to solve?

If the answer is "documentation takes too long," dig deeper. Why? Is it because our documentation requirements are excessive? Because the EHR is poorly designed? Because we're documenting for billing rather than clinical care?


Maybe AI can help. But maybe we need to fix the underlying problem first.


2. Clinician-Led, Patient-Informed Design

Evidence shows that strategic adoption based on implementation science, incremental deployment, and balanced messaging around opportunities versus limitations helps promote safe, ethical generative AI integration.[8]

No more top-down mandates from administrators who haven't seen a patient in years. No more vendor demos that look great on PowerPoint but collapse in clinical reality.


Every AI tool should be:


  • Designed with input from frontline clinicians

  • Tested with actual patients, not focus groups

  • Implemented incrementally with real feedback loops

  • Evaluated honestly for net benefit—not just theoretical efficiency


3. The "Net Human Benefit" Test

I propose a simple test for any healthcare AI:


(Human Connection Enabled) - (Human Connection Lost) - (New Barriers Created) > 0


If we can't demonstrate net human benefit, we shouldn't implement it. Full stop.
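
To make the test concrete, here is a minimal sketch of how an evaluation team might turn it into a scoring rubric. Everything below (the dimensions, the units, the example numbers) is a hypothetical illustration, not a validated instrument; a real rubric would be built with frontline clinicians and patients.

```python
# Hypothetical sketch of the "Net Human Benefit" test as a scoring rubric.
# Dimensions, units, and example values are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class NetHumanBenefit:
    connection_enabled: float  # e.g., minutes per visit of restored face-to-face time
    connection_lost: float     # e.g., minutes per visit shifted away from humans
    new_barriers: float        # e.g., minutes per visit of added review or friction

    def score(self) -> float:
        # (Human Connection Enabled) - (Human Connection Lost) - (New Barriers Created)
        return self.connection_enabled - self.connection_lost - self.new_barriers

    def passes(self) -> bool:
        # Implement only if the net effect on human connection is positive.
        return self.score() > 0

# Example: an ambient scribe that restores eye contact but adds review burden.
scribe = NetHumanBenefit(connection_enabled=8.0, connection_lost=0.0, new_barriers=3.0)
print(f"score={scribe.score():.1f}, implement={scribe.passes()}")  # score=5.0, implement=True
```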


4. Protect Sacred Time

Whatever time AI saves should be ring-fenced for patient care and clinician restoration.


Not more patients. Not more meetings. Not more quality improvement projects.

More time to sit with the patient who's crying. More time to think through a complex case. More time to teach a student. More time to breathe.


Health systems must commit: Efficiency gains will not be captured by increasing volume.


5. Transparency and Consent

Patients deserve to know when they're interacting with AI. They deserve to opt out. They deserve to request human contact without penalty.


As one review of virtual health assistants noted, patients must be explicitly told they are interacting with an AI, not a human expert, to avoid deception and ensure autonomous decision-making.[6]


And clinicians deserve to say "no" to AI tools that undermine their ability to practice good medicine—without being labeled as resistant to change or technologically backward.


6. Continuous Ethical Evaluation

Technology isn't neutral. Every tool embodies values and makes assumptions about what matters.


We need ongoing ethics review that asks:

  • Who is being harmed?

  • What is being lost?

  • Who benefits and who pays the cost?

  • Are we creating the healthcare system we actually want?


The Healthcare We Actually Want

Let me tell you what I think patients want when they seek medical care:


  1. They want someone who will listen without interrupting.

  2. Someone who treats them as a whole person, not a collection of symptoms.

  3. Someone who remembers their story, who sees them as an individual, who cares whether they get better.

  4. They want compassion. Expertise. Time. Attention. Respect.


Not one patient that I know of has ever said: "I wish my doctor would see me faster so they could squeeze in more appointments." "I wish a chatbot would handle my questions instead of talking to an actual human." "I wish healthcare felt more like interacting with Amazon customer service."


And let me tell you what I have learned clinicians want:

  1. They want to practice medicine the way they dreamed about in medical school.

  2. They want time to think, to teach, to heal.

  3. They want to go home feeling like they made a difference, not like they survived another shift on the hamster wheel.

  4. They want to remember why they chose this profession. They want their work to be meaningful again.


AI should serve these goals. If it doesn't, we're building the wrong thing.


The Choice Before Us


We're at a crossroads. The decisions we make in the next few years will shape healthcare for generations.


  • One path leads to maximum efficiency: Healthcare as a frictionless transaction. Patients as data points. Clinicians as high-level automation engineers. Fast, cheap, optimized—and utterly soulless.


  • The other path leads to something harder but infinitely more valuable: Technology that serves humanity rather than replacing it. Efficiency in service of connection, not instead of it. AI as a tool wielded by humans who remember that healthcare is, fundamentally, one person caring for another.


The AI arms race toward efficiency is real. 


Billions of dollars are pouring into healthcare AI. Competitive pressure is intense. Early adopters tout their technological sophistication. Laggards are shamed as backward.


But we must resist the siren song of optimization at all costs.


Because there's something worse than an inefficient healthcare system: An efficient healthcare system that has forgotten how to care.


A Call to Courage

  • To healthcare leaders: Have the courage to ask whether each AI implementation actually serves patients and clinicians, or just makes your metrics look better. Have the courage to say "no" to tools that increase throughput at the expense of humanity.


  • To clinicians: Have the courage to demand technology that supports rather than supplants your ability to care. Have the courage to insist that efficiency gains be protected for patient care, not captured by the system.


  • To technologists: Have the courage to build tools that honor the profound intimacy and complexity of the clinical encounter. Have the courage to acknowledge what AI cannot do, and to celebrate the irreplaceable value of human connection.


  • To patients: Have the courage to demand care from humans, not just algorithms. Have the courage to insist on relationships, not just transactions.


  • And to all of us: Have the courage to imagine healthcare that is both efficient AND human. Both technologically sophisticated AND deeply relational. Both data-driven AND story-honoring.


It's possible. But only if we refuse to accept false choices between progress and humanity.


Efficiency is a means, not an end. The end is healing. The end is connection. The end is one human being helping another navigate suffering, find hope, and reclaim health.


Let us not sacrifice that on the altar of automation.


The SmartSigma AI Way


At SmartSigma AI, we're not interested in building faster hamster wheels. We're committed to helping healthcare organizations implement AI in ways that honor both efficiency and humanity.


Our Approach Is Different

We don't start with the technology—we start with the people. Our consulting process begins with understanding:


  • What's actually burning out your clinicians? (Hint: It's usually not what administrators think)

  • Where are the genuine opportunities for automation? (The soul-crushing tasks, not the meaningful ones)

  • What does patient-centered care mean in your organization? (Beyond the mission statement)

  • How can technology amplify rather than replace human connection? (The hard questions most vendors avoid)


Let's Have a Conversation

If you're a healthcare leader who believes that technology should serve humanity rather than replace it...


If you're tired of vendor pitches that promise efficiency but deliver more clicks and less connection...


If you want to harness AI's power while protecting what makes healthcare human...


We should talk.


Here's What Happens Next:


1. Free Consultation (30 minutes)
No sales pitch. No obligation. Just an honest conversation about your challenges and whether we can help.


2. Discovery Session (If It Makes Sense)
We'll dig deeper into your specific context, workflows, and goals. We'll tell you if AI is the right solution—and if it's not, we'll tell you that too.


3. Customized Roadmap
If we move forward together, we'll create a strategic plan that balances efficiency with humanity, grounded in evidence and tailored to your organization.


Ready to Build Healthcare Technology That Actually Supports Caregivers and Patients?

📧 Email us: admin@smartsigmaai.com

🌐 Visit our website: www.smartsigmaai.com

📞 Schedule a consultation: Book a 30-minute call here


💬 Connect on LinkedIn: Follow SmartSigma AI for thoughtful perspectives on healthcare technology & implementation.



SmartSigma AI:

Intelligent Healthcare Solutions - Human-Centered Results.



References

  1. Kaur M, Singh H, Mahmood K, et al. Generative artificial intelligence use in healthcare: opportunities for clinical excellence and administrative efficiency. J Med Syst. 2025;49(1):14. doi:10.1007/s10916-024-02136-1

  2. Shokrollahi Y, Yarmohammadtoosky S, Nikahd MM, et al. Beyond the screen: the impact of generative artificial intelligence (AI) on patient learning and the patient-physician relationship. Cureus. 2025;17(1):e76268. doi:10.7759/cureus.76268

  3. Gallo K, Yan W, Kellogg M, Zhu JM. Generative artificial intelligence to transform inpatient discharge summaries to patient-friendly language and format. JAMA Netw Open. 2024;7(3):e242585. doi:10.1001/jamanetworkopen.2024.2585

  4. AI chatbots boost patient engagement and reduce clinician workload, study shows. Healthcare IT News. Published January 2025. Accessed December 18, 2025. https://www.healthcareitnews.com/news/ai-chatbots-boost-patient-engagement-and-reduce-clinician-workload-study-shows

  5. Gao CA, Howard FM, Markov NS, et al. Artificial intelligence-generated draft replies to patient inbox messages. JAMA Netw Open. 2024;7(3):e243201. doi:10.1001/jamanetworkopen.2024.3201

  6. Van Name MA, Yardley JE, Bebu I, et al. Virtual health assistants: a grand challenge in health communications and behavior change. Front Digit Health. 2024;6:1406684. doi:10.3389/fdgth.2024.1406684

  7. Sharma M, Savage C, Nair M, Larsson I, Svedberg P, Nygren JM. Artificial intelligence applications in health care practice: scoping review. J Med Internet Res. 2022;24(10):e40238. doi:10.2196/40238

  8. Bajwa J, Munir U, Nori A, Williams B. Generative AI in healthcare: an implementation science informed translational path on application, integration and governance. Implement Sci Commun. 2024;5(1):28. doi:10.1186/s43058-024-00559-0


Copyright © 2025 Improve Healthcare LLC. All Rights Reserved.
