Static Models, Dynamic Risk: What AI Governance Can Learn from the Joint Commission's Biggest Shift in Decades
- Dr Rich Greenhill
- Mar 13
There is a pattern hiding in plain sight across two of the most pressing conversations in healthcare leadership right now — and almost no one is connecting them.
The first conversation is about the Joint Commission. On January 1, 2026, the Joint Commission retired the National Patient Safety Goals (NPSGs) — a framework that shaped hospital accreditation for more than two decades — and replaced them with 14 National Performance Goals (NPGs), alongside the launch of its Accreditation 360: Continuous Engagement model. The transition marks what has been described as the most comprehensive overhaul of healthcare accreditation since Medicare's creation in 1965.
The second conversation is about artificial intelligence. Boards and C-suites across the country are grappling with how to govern AI responsibly in clinical and operational environments — how to deploy it safely, monitor it continuously, and ensure it remains trustworthy over time.
These two conversations look unrelated. They are not.
The Problem with Static Models
In the world of artificial intelligence, there is a well-documented phenomenon called model drift. It describes what happens when an AI model — trained on historical data, optimized for known conditions, and deployed into a production environment — begins to lose accuracy over time as the world around it changes.
The model doesn't break. It doesn't send an error message. It continues producing outputs that appear confident and coherent. But the environment it was trained on and the environment it is now operating in have diverged. Patient demographics shift. Treatment protocols evolve. Regulatory changes alter what quality metrics mean. And the model, trained on yesterday's patterns, quietly becomes less and less reliable — often without any obvious signal that something is wrong.
As researchers in AI governance have documented, model drift is "one of the most common and least visible risks of AI systems." Performance can decline even though the system appears to be functioning normally. A static model deployed in a dynamic environment will drift. The question is not whether — it is when, and whether your governance structure will catch it before it causes harm.
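To make the mechanism concrete, here is a minimal sketch in Python of one standard drift signal: the Population Stability Index (PSI), which measures how far the population a model now sees has moved from the population it was trained on. The data and variable names below are invented for illustration; PSI is one of many possible drift metrics, not a prescribed implementation.

```python
import numpy as np

def psi(baseline: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index: how far `current` has drifted from
    `baseline`. A common rule of thumb treats PSI > 0.2 as meaningful drift."""
    # Fix bin edges from the baseline so the comparison stays stable.
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    curr_pct = np.histogram(current, bins=edges)[0] / len(current)
    # Clip empty bins to avoid division by zero and log(0).
    base_pct = np.clip(base_pct, 1e-6, None)
    curr_pct = np.clip(curr_pct, 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

# Hypothetical example: patient ages at training time vs. in production today.
rng = np.random.default_rng(0)
training_ages = rng.normal(62, 10, 5000)  # population the model learned from
todays_ages = rng.normal(55, 14, 5000)    # younger, more varied population now
score = psi(training_ages, todays_ages)
print(f"PSI = {score:.2f} -> {'drift flagged' if score > 0.2 else 'stable'}")
```

Note that nothing in this check requires the model to misbehave visibly: the inputs shift, the metric crosses a threshold, and governance gets a signal before an outcome-level failure surfaces. A real monitoring program would track many such signals continuously, not one metric once.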
This is a well-understood technical challenge. What is less widely recognized is that the Joint Commission's transition away from the NPSG framework reflects a strikingly similar insight — applied to healthcare quality and patient safety.
The NPSG Framework and the Limits of Static Inspection
The NPSGs were introduced in 2002 to address a critical gap: hospitals lacked a common framework for preventing the most frequent, high-harm patient safety failures. Wrong-patient identification. Medication errors. Surgical site confusion. Infection transmission. The goals established baseline safeguards, drove real, measurable improvements, and built a common language for patient safety across thousands of hospitals. They were genuinely consequential.
Over time, however, research and field observation documented a pattern in how many organizations responded to the framework. Survey preparation became a distinct organizational activity — one that intensified in the months before a surveyor arrived. The question many teams focused on was: Can we demonstrate that our processes exist? For many institutions, compliance came to be defined by documentation rather than by demonstrated outcomes.
The Joint Commission itself recognized this drift between the framework's intent and its real-world effect. The transition to National Performance Goals reflects that recognition directly. Where the NPSGs emphasized process safeguards, the NPGs are explicitly outcomes-oriented — requiring hospitals to demonstrate that processes actually produce measurable results, not merely that processes are documented. As the Joint Commission has framed it, the new framework brings "a sharper focus to pressing issues in healthcare" through "14 measurable topics with clearly defined goals."
The Continuous Engagement model extends this shift. The Joint Commission has described Accreditation 360 as "a shift away from the traditional episodic 'survey-every-three-years' approach toward a continuous partnership for quality improvement." Continuous monitoring. Ongoing accountability. Adaptive oversight.
In other words: the Joint Commission identified a structural vulnerability in point-in-time inspection of a dynamic system — and redesigned the framework accordingly.
Sound familiar?
AI Governance Is Making the Same Mistake
Here is the uncomfortable parallel that every health system board and C-suite needs to sit with.
Governance researchers studying AI deployment across regulated industries have documented a consistent and concerning pattern: the AI governance frameworks currently taking shape at many major institutions closely resemble the compliance model the Joint Commission just moved beyond.
Policy documents. Ethics checklists. Validation reports generated at the point of deployment. Periodic reviews scheduled at fixed intervals. Compliance defined as having the right documentation in place.
Static oversight. Dynamic environment. The drift accumulates invisibly.
AI governance researchers have been sounding this alarm with increasing urgency. Traditional risk management frameworks, they note, were "built for static environments — where risks could be identified, assessed, and monitored periodically. But AI operates in a completely different realm. It's dynamic and adaptive, constantly evolving as it processes new data and adjusts its behavior."
Current approaches rely on "static evaluations, rigid error categories, and infrequent updates" — failing to catch model degradation or performance drift that emerges over time.
The International Association of Privacy Professionals has documented the same gap directly: AI governance can no longer be a "write up a set of rules and walk away" function. Risks like model drift "don't appear on a schedule, so the systems monitoring them can't be static. Governance must be as flexible and responsive as the AI it manages."
That is precisely the argument the Joint Commission just made about patient safety accreditation. And health system leaders have a unique advantage here: they just watched a major accreditation body work through this exact transition, in their own domain, in real time.
The Concept the Field Needs: Silent Performance Failure™
There is a term that deserves a place in the vocabulary of every health system board and executive team navigating both of these challenges: Silent Performance Failure™.
Silent Performance Failure™ describes the condition in which an organization — or a model, or a system — appears to be functioning acceptably by conventional measures, while actual performance has quietly deteriorated in ways those measures cannot detect. The dashboard shows green. The survey preparation is complete. The AI validation report was filed at deployment. But the gap between apparent performance and actual performance is widening, invisibly, until something fails visibly.
Silent Performance Failure™ is not a malfunction. It is what happens when measurement systems don't keep pace with operational reality. It is the organizational pattern the Joint Commission's transition is designed to address in patient safety. And it is what point-in-time AI governance frameworks are producing right now in organizations that believe a deployment checklist constitutes ongoing oversight.
The question every health system leader should be asking is: Where in our organization is Silent Performance Failure™ accumulating right now — and do our current measurement systems have any way to detect it?
For most institutions, the honest answer includes their AI governance infrastructure.
What This Means for Health System Boards and C-Suites
The parallel between the NPSG transition and AI governance is not just intellectually interesting. It is operationally actionable.
Deployment is not governance. That an AI model was validated at deployment does not mean it is performing as intended today. The environment has changed. Governance that stops at deployment is the structural equivalent of a triennial survey with no continuous monitoring between visits — exactly the model the Joint Commission just retired.
Measurement systems must detect drift, not just existence. The NPG framework asks: is your performance actually where it needs to be, continuously? The equivalent question for AI governance is: do you have monitoring infrastructure that can detect when a model's real-world performance has diverged from its validated performance — before a patient is harmed or a clinical decision is corrupted? Most health systems currently cannot answer yes.
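What would a "yes" look like at its smallest scale? Below is a deliberately simplified Python sketch of the basic pattern: record adjudicated outcomes as ground truth arrives, and alert when rolling performance falls below the accuracy recorded at validation. The class name, window size, and tolerance are invented for illustration; this is a sketch of the monitoring concept, not a reference implementation.

```python
from collections import deque

class PerformanceDriftMonitor:
    """Illustrative only: compare a deployed model's rolling accuracy
    against its validated baseline and flag meaningful divergence."""

    def __init__(self, validated_accuracy: float, window: int = 500,
                 tolerance: float = 0.05):
        self.validated_accuracy = validated_accuracy
        self.tolerance = tolerance
        # Most recent `window` outcomes: 1 = correct, 0 = incorrect.
        self.outcomes = deque(maxlen=window)

    def record(self, prediction, ground_truth) -> None:
        """Call for each case once the true outcome is adjudicated."""
        self.outcomes.append(1 if prediction == ground_truth else 0)

    def drift_detected(self) -> bool:
        """True once rolling accuracy falls more than `tolerance`
        below the validated baseline."""
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # not enough recent evidence yet
        rolling = sum(self.outcomes) / len(self.outcomes)
        return rolling < self.validated_accuracy - self.tolerance

# Usage sketch: feed adjudicated cases in as they close;
# escalate to the governance committee when the check trips.
monitor = PerformanceDriftMonitor(validated_accuracy=0.91)
monitor.record(prediction=1, ground_truth=1)  # toy data point
print(monitor.drift_detected())               # False: window not yet full
```

The design choice worth noticing is that the baseline is the validated number, not last month's number: the monitor measures divergence from the performance the board actually approved.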
Board-level accountability must extend to AI. NPG Goal 2 places the governing body and leadership team explicitly on the hook for culture of safety outcomes. The same accountability logic applies to AI: boards that approve AI deployment without approving continuous governance infrastructure are accepting risk they cannot currently see or measure.
The transition you just went through is a template. Health systems that treated the NPSG-to-NPG transition as a compliance update missed the strategic signal. The AI governance transition is happening faster, with less institutional guidance and higher stakes. Leaders who recognize the structural parallel have a meaningful head start.
The Lesson
The National Patient Safety Goals were not retired because they failed. They were retired because the field advanced beyond what a periodic inspection model could reliably support in an increasingly complex and dynamic care environment.
The same progression is coming for AI governance. Organizations that build continuous, adaptive oversight infrastructure now — that treat AI performance as an ongoing operational variable rather than a deployment-time compliance event — will be positioned well ahead of the regulatory and accreditation curve.
The Joint Commission already showed us what that transition looks like, and what it costs to delay it. In patient safety, it took two decades. In AI governance, healthcare leaders may not have that kind of time.
The question is whether health system boards and C-suites will recognize the pattern early enough to act — or whether they will spend years preparing for the AI governance equivalent of a triennial survey while Silent Performance Failure™ quietly accumulates beneath the surface.
Is Your Organization Operating with Silent Performance Failure™?
The Silent Performance Failure™ framework is a proprietary diagnostic tool developed by SmartSigma AI, designed to help health system leaders identify where apparent performance and actual performance have diverged — in patient safety, in AI governance, and across operational domains where static measurement systems may no longer be sufficient.

The framework gives boards and executive teams a structured approach to identifying governance blind spots before they become patient safety events.
Learn more at SmartSigma AI and schedule a discussion.
Sources:
- Joint Commission, National Performance Goals (effective January 1, 2026), jointcommission.org
- Joint Commission, Accreditation 360: Continuous Engagement Model, jointcommission.org
- Wolters Kluwer, "Joint Commission Accreditation 360 Implications for Nurse Leaders" (2025)
- EC-Council Cybersecurity Exchange, "Bias, Model Drift, Hallucination: Mapping AI Risks to Governance Controls" (2026)
- Censinet, "AI Risk Management: Why Traditional Frameworks Are Failing" (2026)
- International Association of Privacy Professionals, "Model Drift, Data Leaks and Deepfakes: Rethinking AI Governance" (2025)
- KPMG, "How AI Is Changing Model Risk Management" (2026)
- Springer Nature, "Navigating Healthcare AI Governance: The CAOS Framework" (2025)
