Agentic AI Orchestration: The New Essential Leadership Competency for Healthcare's Next Era
- Dr Rich Greenhill
- Feb 25
- 8 min read
Updated: Feb 26
Healthcare has never had a shortage of leaders willing to implement the next great thing. What it has always struggled with is leaders willing to do the harder work of governing it once it's embedded. As thousands of the world's most committed healthcare executives prepare to gather in Houston this week for the 2026 American College of Healthcare Executives (ACHE) Annual Congress — under the theme Where Purpose Matters — I want to make the case that this distinction has never mattered more than it does right now.

The professionals who attend the ACHE Congress are, by definition, lifelong learners. That is not a credential — it is a disposition. They understand that healthcare leadership is never finished, that the body of knowledge required to lead well is always expanding, and that gathering once a year with peers and thought leaders isn’t a luxury — it’s a professional obligation. That same disposition is exactly what the emergence of agentic AI demands of us today. We are on the eve of a fundamental shift. Healthcare organizations are moving rapidly beyond isolated AI pilots to enterprise-wide agentic systems — autonomous agents that don’t just suggest actions, but execute multi-step workflows across clinical and operational environments. The critical question isn’t whether your organization will adopt these systems. Most already are.
The question I keep returning to — and the one I believe deserves far more attention in our field — is this:
Are our leaders prepared to lead once AI is embedded — not just to implement it?
Lessons from the Past: Implementation Is the Beginning, Not the Achievement
As a Six Sigma Master Black Belt, I have spent much of my career working at the intersection of quality science, leadership development, and organizational change. One of the most enduring lessons from that work is this: implementing a quality system and leading an organization that has one embedded are two entirely different competencies.
When Six Sigma arrived in healthcare, organizations invested heavily in training, deployment, and toolkits. Many achieved early wins. But the organizations that sustained those gains — that actually transformed their quality culture — were the ones whose leaders developed the competency to govern the system over time. They understood the methodology deeply enough to ask the right questions, challenge the outputs, and hold the organization accountable to the principles even when pressure mounted to cut corners. The organizations that faltered treated Six Sigma as a project. The ones that thrived treated it as a leadership discipline.
Agentic AI is following the same arc — and the stakes are higher. We are not talking about process improvement tools. We are talking about autonomous systems that interact directly with patients, clinical staff, and high-stakes decision workflows. The margin for underprepared leadership is razor thin.
What Agentic AI Actually Means for Healthcare Leaders
Generative AI creates content. Agentic AI acts. It executes multi-step workflows, coordinates across systems, and makes decisions — often without waiting for human input at each step. When deployed well, this is genuinely transformative. Leading academic medical centers are piloting agentic orchestration to address one of the most persistent constraints in oncology care: clinicians currently spend 1.5 to 2.5 hours per patient preparing for tumor board meetings, manually synthesizing imaging, pathology, clinical notes, and genomic data. Early implementations are compressing that preparation from hours to minutes.
But here is what the efficiency narrative obscures: those gains only materialize when the underlying orchestration is governed well. And governing it well is a leadership competency — not a technical one. This is the distinction I want every CEO, CMO, COO, and quality leader reading this to sit with: your technical teams can build and deploy agentic systems. Only your leadership can ensure those systems serve your patients safely, reliably, and in alignment with your organizational values.
Leading Agents That Interact with People
This is where I believe the conversation in our field needs to go further. Much of the current discourse around healthcare AI focuses on implementation: which tools to deploy, how to integrate them, what the ROI looks like. These are legitimate questions. But they are not leadership questions — they are project management questions.
The leadership question is different: How do you lead an organization where autonomous agents are active participants in patient care?
This is genuinely new territory. When an AI agent surfaces a treatment recommendation, communicates with a care team, or initiates a prior authorization workflow, it is not simply executing a task — it is acting as a proxy for your organization’s clinical judgment and values. The accountability for that action does not disappear into the algorithm. It flows upstream to the leaders who designed, deployed, and govern the system.
Just as we hold clinical leaders accountable for the cultures and protocols that shape how physicians practice, we must hold executive and quality leaders accountable for the frameworks that govern how agents act. This is not a technology governance problem. It is a leadership competency problem — one that belongs squarely in the domain of every leader in that Houston convention center this March.
Quality at Risk: The Hidden Cost of Poor Orchestration
Without competent orchestration leadership, organizations face a danger I find deeply concerning: quality degradation disguised as efficiency gains. Agents that optimize for speed over accuracy. Workflows that quietly bypass safety checks. Decisions made on incomplete clinical context — not because anyone intended harm, but because no one was leading the system with quality as the primary lens.
In Six Sigma terms, this is process drift — except instead of drifting product specifications, we are drifting clinical standards. And unlike a manufacturing defect, the consequences land on patients.
The quality leader’s role in an agentic AI environment is not to slow innovation. It is to ensure that automation amplifies clinical excellence rather than systematizing mediocrity. That requires understanding that orchestration architecture is a quality framework — and treating it accordingly.

The Governance Imperative: Where Purpose Lives or Dies
The theme of this year’s ACHE Congress — Where Purpose Matters — is not aspirational language. In the context of agentic AI, it is a governance mandate. Purpose in healthcare has always meant that every decision, every system, every protocol exists to serve the health and dignity of the patient. Agentic AI does not change that purpose. But it does demand that we are far more intentional about encoding it.
Here is where most organizations stumble: they deploy agents without lifecycle management. The result is what analysts are calling “agent sprawl” — duplicated agents, unclear accountability, inconsistent controls, and permissions that persist long after their original use case has passed. Agent sprawl doesn’t just create operational chaos. It creates quality gaps: conflicting clinical recommendations, patient information lost in automated handoffs, and safety protocols quietly overridden in the name of efficiency.
The Unified Agent Lifecycle Management (UALM) framework has emerged specifically to address this, mapping governance onto five control layers: an identity and persona registry so every agent has defined credentials and scope of practice; orchestration and cross-domain mediation to prevent conflicting actions; PHI-bounded context controls that respect patient privacy; runtime policy enforcement with kill-switch triggers for immediate intervention when quality thresholds are breached; and full lifecycle management linked to audit logging and credential revocation.
These are not technical specifications. They are leadership accountability structures — the agentic AI equivalent of credentialing, privileging, and peer review. Leaders who understand that parallel will be far better equipped to govern these systems than those who delegate governance entirely to their IT departments.
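For leaders who want a concrete mental model of how those accountability structures could operate in practice, the five UALM layers can be sketched in code. This is a purely illustrative sketch, not a reference implementation; every class, field, and method name here is hypothetical, and a production system would be far more elaborate.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Agent:
    """Layer 1 — identity and persona registry entry:
    defined credentials and scope of practice."""
    agent_id: str
    persona: str
    allowed_actions: set   # the agent's "privileging" list
    phi_scope: set         # Layer 3 — PHI-bounded data domains it may access
    active: bool = True    # Layer 5 — lifecycle flag; revocation flips this off

class Orchestrator:
    """Layer 2 — cross-domain mediator, plus Layer 4 runtime policy
    enforcement with a kill-switch, all linked to audit logging."""

    def __init__(self):
        self.registry = {}   # agent_id -> Agent
        self.audit_log = []  # Layer 5 — full audit trail

    def register(self, agent: Agent):
        self.registry[agent.agent_id] = agent

    def revoke(self, agent_id: str):
        """Lifecycle management: credential revocation, logged."""
        self.registry[agent_id].active = False
        self._log(agent_id, "REVOKED", "credentials revoked")

    def request_action(self, agent_id: str, action: str, phi_domain: str):
        """Every agent action passes through runtime policy checks."""
        agent = self.registry.get(agent_id)
        if agent is None or not agent.active:
            return self._log(agent_id, action, "denied: unknown or revoked agent")
        if action not in agent.allowed_actions:
            return self._log(agent_id, action, "denied: outside scope of practice")
        if phi_domain not in agent.phi_scope:
            # Kill-switch trigger: a PHI boundary breach halts the agent at once
            agent.active = False
            return self._log(agent_id, action, "KILL-SWITCH: PHI boundary breach")
        return self._log(agent_id, action, "permitted")

    def _log(self, agent_id, action, outcome):
        entry = {"ts": datetime.now(timezone.utc).isoformat(),
                 "agent": agent_id, "action": action, "outcome": outcome}
        self.audit_log.append(entry)
        return entry
```

The leadership parallel is visible even in this toy: the registry is credentialing, `allowed_actions` is privileging, and the audit log is the raw material for peer review of agent behavior.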
Real-World Impact: From Tumor Boards to Prior Auth
Oxford University’s Department of Oncology, in collaboration with Microsoft, has built and is currently piloting TrustedMDT — a multi-agent system using Microsoft’s Healthcare Agent Orchestrator that coordinates three specialized agents to summarize patient charts, determine cancer staging, and draft guideline-compliant treatment plans for tumor board review. The clinical evaluation at Oxford University Hospitals NHS Foundation Trust began in early 2026. Human clinicians retain final decision authority — but the orchestration ensures they arrive at that decision with complete, synthesized information rather than fragmented data that increases cognitive load and diagnostic error risk.
Stanford Medicine processes 4,000 tumor board patients annually and is exploring how multi-agent approaches could reduce fragmentation and surface insights from previously disconnected data elements. The applications extend well beyond oncology — navigation and triage, documentation support, medication safety, prior authorization orchestration, and capacity management are all active implementation domains.
What’s notable about the most successful implementations is not the sophistication of the technology. It is the maturity of the governance. Leaders at these organizations understood early that every orchestration decision — which agents have access to what data, how conflicts between agent recommendations are resolved, who is accountable when an automated system contributes to a patient outcome — is a quality decision.
A Lifelong Learning Imperative
The leaders who gather each year at ACHE Congress understand something that separates excellent executives from adequate ones: the work of leadership development never ends. Every year brings new evidence, new challenges, new frameworks that require updating how we think and how we lead. That is not a weakness — it is the entire point of the gathering.
Agentic AI orchestration is this year’s imperative. It will not be the last. But it may be the most consequential one our field has faced in a generation, precisely because the systems we are building now will interact directly with patients, make decisions on behalf of clinicians, and carry the imprint of our governance choices — or the absence of them — for years to come.
Organizations that excel at agentic orchestration don’t begin with the most complex workflows. Like any mature quality discipline, they start with lower-risk administrative applications, demonstrate value, build organizational confidence, and establish governance patterns before advancing to clinical decision support. The phased approach isn’t timidity — it is the disciplined methodology of leaders who understand that sustainable transformation requires a foundation.
The Bottom Line
Agentic AI represents a fundamental shift from reactive to proactive healthcare delivery. The organizations that thrive will not be those with the most sophisticated algorithms. They will be the ones whose leaders — CEOs, CMOs, COOs, and quality officers — understand orchestration as a strategic competency requiring the same investment in governance, accountability, and continuous learning that every other dimension of high-reliability care demands.
Here is what I believe and what the evidence is beginning to confirm: the greatest risk in healthcare AI is not a rogue algorithm. It is a leadership team that deployed one without knowing how to govern it. When that happens — and it will happen — the algorithm won't be accountable. You will be.
Frequently Asked Questions
Q: What is agentic AI orchestration in healthcare?
A: Agentic AI orchestration coordinates multiple autonomous AI agents to work together on complex healthcare workflows while maintaining governance, safety, and accountability. Unlike single AI tools, orchestration enables multi-agent systems to handle tasks like tumor board preparation, prior authorization, and clinical documentation across multiple systems.
Q: What is AI agent sprawl and why is it dangerous?
A: Agent sprawl occurs when organizations deploy multiple AI agents without centralized governance, leading to duplicated agents, unclear accountability, and inconsistent controls. In healthcare, this creates quality gaps, conflicting clinical recommendations, and compliance risks that can directly affect patient safety.
Q: What is the UALM framework?
A: Unified Agent Lifecycle Management (UALM) is a governance framework mapping AI agent oversight onto five control layers: identity registry, orchestration, PHI-bounded context, runtime policy enforcement, and lifecycle management. It helps healthcare organizations prevent agent sprawl while scaling AI safely.
Q: How long does it take to implement agentic AI orchestration?
A: Healthcare organizations typically begin with non-critical administrative tasks and progress to clinical applications over 12–24 months. This phased approach allows leaders to build governance patterns and demonstrate value before advancing to higher-stakes workflows — a methodology consistent with any mature quality discipline.