
Agentic AI and High Reliability Healthcare: An Augmented Assist or More Complexity?

  • Dr. Rich Greenhill
  • Nov 14
  • 10 min read

Updated: Dec 14


Healthcare stands at a critical crossroads. Medical errors continue to be the third leading cause of death in the United States, with estimates suggesting over 200,000 patient deaths annually due to preventable errors (Rodziewicz et al., 2024). The economic impact is staggering—adverse events cost the healthcare system between $20 billion and $45 billion annually (Rodziewicz et al., 2024).


Agentic AI

In response to this crisis, healthcare organizations are increasingly turning to agentic AI as a potential solution to improve high reliability healthcare, reduce errors, and enhance patient safety.



Agentic AI represents a new generation of intelligent systems that can operate independently, make complex decisions, and execute tasks with minimal human intervention. Unlike traditional AI tools that simply respond to user prompts, these systems can reason through problems, develop action plans, and carry out complex multi-step tasks on their own. They promise revolutionary improvements: real-time error detection, consistent protocol adherence, and predictive safety interventions that could transform healthcare delivery.


However, as healthcare organizations rush to implement these technologies, a critical question emerges:


Are we building on a foundation strong enough to support this technological transformation? 


More importantly, could the introduction of agentic AI into organizations lacking robust safety cultures actually compound complexity and introduce new risks rather than reduce them?


At SmartSigma AI, we've learned something counterintuitive: the organizations most eager to adopt agentic AI are often the least prepared to do so safely.


The Promise of Agentic AI in Healthcare


The potential benefits of agentic AI in healthcare are substantial and well-documented. Real-time error detection systems have demonstrated remarkable capabilities.


During my recent presentation at the ISQUA conference in São Paulo, I discussed compelling examples of agentic AI implementation in Brazilian hospitals. One particularly notable case involved a system that achieved up to 8 times faster prescription analysis, dramatically scaling clinical team capacity from evaluating 100 patients per day to 800. The system analyzes prescriptions in real time, flagging potential risks such as drug-drug interactions, duplications, and incorrect dosages.
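To make the flagging step concrete, here is a minimal sketch of what rule-based prescription screening can look like. It is not the system described above; the drug pairs, dose limits, and order format are placeholders invented for illustration.

```python
# Minimal, illustrative sketch of rule-based prescription screening.
# The reference tables below are placeholders, not a clinical knowledge base.
from dataclasses import dataclass

@dataclass
class OrderLine:
    drug: str
    daily_dose_mg: float

# Hypothetical reference data for illustration only.
INTERACTING_PAIRS = {frozenset({"warfarin", "ibuprofen"})}
MAX_DAILY_DOSE_MG = {"warfarin": 10.0, "ibuprofen": 3200.0}

def screen_prescription(orders: list[OrderLine]) -> list[str]:
    """Return human-readable flags for duplications, interactions, and dosing."""
    flags = []
    drugs = [o.drug.lower() for o in orders]

    # Duplicate therapy: the same drug ordered more than once.
    for drug in set(drugs):
        if drugs.count(drug) > 1:
            flags.append(f"Duplicate order: {drug}")

    # Pairwise drug-drug interaction lookup.
    for i, a in enumerate(drugs):
        for b in drugs[i + 1:]:
            if frozenset({a, b}) in INTERACTING_PAIRS:
                flags.append(f"Interaction: {a} + {b}")

    # Dose ceiling check against a reference maximum.
    for o in orders:
        limit = MAX_DAILY_DOSE_MG.get(o.drug.lower())
        if limit is not None and o.daily_dose_mg > limit:
            flags.append(f"Dose above reference max: {o.drug} ({o.daily_dose_mg} mg/day)")

    return flags

if __name__ == "__main__":
    print(screen_prescription([OrderLine("Warfarin", 12.5), OrderLine("Ibuprofen", 1200)]))
```

Real deployments replace these hand-written tables with curated drug databases and add a pharmacist review step, but the basic pattern of screening every order against explicit rules is the same.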


Research has shown that some agentic AI systems can lower cognitive workload by up to 52%, freeing healthcare professionals to focus on complex clinical decisions rather than routine error-checking tasks (Hinostroza Fuentes et al., 2025). In mammography screening, AI-supported systems enabled radiologists to detect 20% more breast cancers than traditional double-reading methods while cutting radiologist workload by 44% (EdStellar, n.d.).


Beyond error detection, agentic AI offers consistent protocol adherence. These systems can autonomously access clinical guidelines, case studies, and historical patient data through APIs, ensuring that treatment recommendations consistently align with evidence-based protocols and best practices. Predictive models powered by agentic AI can identify patients at risk of disease progression or complications, resulting in fewer hospitalizations, reduced healthcare costs, and better outcomes (Hinostroza Fuentes et al., 2025).
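A rough sketch of that protocol cross-check follows. The fetch_guideline stub stands in for whatever guideline API an organization actually exposes, and the condition, drugs, and time target are invented examples, not clinical guidance.

```python
# Illustrative sketch of checking an agent's recommendation against an
# evidence-based protocol. The guideline payload is a local stand-in; a
# deployed agent would retrieve it from the organization's guideline service.

def fetch_guideline(condition: str) -> dict:
    """Stand-in for an API call to a clinical guideline repository."""
    local_guidelines = {
        "community-acquired pneumonia": {
            "first_line": ["amoxicillin"],          # placeholder, not clinical advice
            "max_hours_to_first_dose": 4,            # placeholder target
        }
    }
    return local_guidelines[condition]

def check_against_protocol(condition: str, proposed_drug: str,
                           hours_to_first_dose: float) -> list[str]:
    guideline = fetch_guideline(condition)
    deviations = []
    if proposed_drug.lower() not in guideline["first_line"]:
        deviations.append(f"{proposed_drug} is not a first-line agent for {condition}")
    if hours_to_first_dose > guideline["max_hours_to_first_dose"]:
        deviations.append("Time to first dose exceeds the guideline target")
    return deviations

print(check_against_protocol("community-acquired pneumonia", "azithromycin", 6))
```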


These capabilities are impressive—and they're real. But here's what the vendor presentations don't tell you: these benefits only materialize when the technology is introduced into organizations with the cultural maturity to use it safely.


The Critical Foundation: Culture of Safety

Despite these promising capabilities, the success of agentic AI implementation depends fundamentally on the organizational culture in which it operates. Many healthcare organizations have embarked on the high reliability journey, working to build the foundational safety culture described in studies of aviation, nuclear power, and other high-risk industries that consistently minimize adverse events.


A culture of safety encompasses several key features: acknowledgment of the high-risk nature of an organization's activities and determination to achieve consistently safe operations; a blame-free environment where individuals can report errors or near misses without fear of reprimand; encouragement of collaboration across ranks and disciplines to seek solutions to patient safety problems; and organizational commitment of resources to address safety concerns.


Research consistently demonstrates that a strong safety culture leads to higher levels of staff job satisfaction, fewer staff injuries, lower staff burnout rates, higher patient-reported satisfaction, decreased length of stay, and lower mortality rates. Conversely, the absence of a robust safety culture creates an environment where even the most sophisticated technology can fail or, worse, introduce new hazards.


Hospitals and healthcare organizations must create strong cultural foundations before they can begin to mature as high reliability organizations. This foundational work includes developing a leadership commitment to zero-harm goals, establishing a positive safety culture, and instituting a robust process improvement culture. Without this foundation, the implementation of advanced technologies like agentic AI may be premature and potentially dangerous.


Some leaders reason that if AI can catch errors automatically, they can skip the difficult work of building psychological safety, establishing transparent reporting systems, and developing frontline empowerment. This is a dangerous misconception.


High reliability organizing requires more than just technological solutions—it demands a fundamental shift in organizational mindset. The principles of HRO go beyond standardization. The emphasis on preoccupation with failure, reluctance to simplify interpretations, sensitivity to operations, commitment to resilience, and deference to expertise are crucial foundations that technology cannot replace.


The Risks of Implementing Agentic AI Without a Strong Safety Culture


At SmartSigma AI, we've identified six critical risks that emerge when organizations implement agentic AI without adequate cultural preparation:


1. Automation Bias and Overreliance


When AI is incorporated into clinical practice without a strong safety culture, healthcare providers become susceptible to automation bias—a type of cognitive error where humans over-rely on automated systems and fail to question or verify their outputs. In organizations lacking psychological safety, staff may feel unable to speak up when they notice potential AI errors, fearing reprisal or being dismissed.


This risk is particularly acute with agentic AI systems that operate with greater autonomy than traditional decision support tools. Without a culture that encourages questioning and critical thinking, clinicians may accept AI recommendations without appropriate verification, potentially leading to cascading errors.



2. The Hallucination Problem

One of the most significant risks of agentic AI is the potential for "hallucinations"—instances where AI systems generate misleading or entirely fabricated responses that undermine trust and introduce risk (Hyro, 2024). Without robust detection systems, extensive contextual data, and a human-in-the-loop strategy, these hallucinations can lead to incorrect diagnoses, inappropriate treatment recommendations, or delays in critical interventions.
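One way to picture a human-in-the-loop strategy is a gate that refuses to act on any recommendation that is not grounded in retrieved evidence and has not been reviewed by a clinician. The sketch below is illustrative only; the data structures and the grounding rule are our assumptions, not a reference design.

```python
# Minimal sketch of a human-in-the-loop gate, assuming the agent returns a
# draft recommendation plus the source passages it claims to rely on.
# Nothing is acted on until a clinician approves it.
from dataclasses import dataclass, field

@dataclass
class DraftRecommendation:
    text: str
    cited_source_ids: list[str]

@dataclass
class ReviewQueue:
    pending: list[DraftRecommendation] = field(default_factory=list)

    def submit(self, draft: DraftRecommendation, retrieved_ids: set[str]) -> str:
        # Grounding check: every citation must map to a document the system
        # actually retrieved; unsupported claims are a common hallucination signal.
        unsupported = [s for s in draft.cited_source_ids if s not in retrieved_ids]
        if unsupported or not draft.cited_source_ids:
            return "rejected: recommendation is not grounded in retrieved evidence"
        self.pending.append(draft)
        return "queued for clinician review; no autonomous action taken"

queue = ReviewQueue()
draft = DraftRecommendation("Hold anticoagulation before procedure", ["guideline-12"])
print(queue.submit(draft, retrieved_ids={"guideline-12", "note-88"}))
```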


In organizations without a strong safety culture, these errors may go unreported or undetected. Staff who lack confidence to challenge AI outputs or who work in blame-oriented environments are less likely to catch and correct AI-generated mistakes before they harm patients.


3. Black Box Decision-Making and Accountability Gaps

Many cutting-edge AI technologies have opaque algorithms, making it difficult or impossible to determine how they produce results—what we refer to as black-box decision making. This presents significant concerns for patient safety, clinical judgment, and liability.

Without a mature safety culture that emphasizes transparency, continuous learning, and psychological safety, healthcare organizations struggle to implement the necessary oversight mechanisms. When errors occur, the lack of transparency in both the AI system and the organizational culture creates accountability gaps.


Who is responsible when an AI agent makes a harmful decision—the clinician who accepted its recommendation, the organization that deployed the system, or the technology vendor?


Organizations with weak safety cultures often revert to blame-oriented responses rather than systems-thinking approaches, which can discourage error reporting and prevent organizational learning from AI-related incidents.


4. Algorithmic Bias Amplification

AI models trained on existing and incomplete data likely contain biases and disparities that could be perpetuated or even amplified by an AI system. This can lead to unequal treatment and biased outcomes in healthcare decision-making.

In organizations without a strong commitment to equity and a culture that encourages frontline staff to speak up about concerns, these biases may go unchallenged. The automation and scale of agentic AI systems mean that biased decisions can be replicated thousands of times before anyone notices—exponentially multiplying harm to vulnerable patient populations.
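A basic safeguard is to audit model behavior by patient subgroup before and during deployment. The sketch below compares false positive rates across two groups using synthetic records; the group labels and numbers are placeholders meant only to show the shape of the check.

```python
# Illustrative subgroup audit: compare a model's false positive rate across
# patient groups before trusting it at scale. The records are synthetic.
from collections import defaultdict

records = [
    # (group, model_flagged, actually_at_risk)
    ("group_a", True, False), ("group_a", True, True), ("group_a", False, False),
    ("group_b", True, False), ("group_b", True, False), ("group_b", False, True),
]

counts = defaultdict(lambda: {"fp": 0, "negatives": 0})
for group, flagged, at_risk in records:
    if not at_risk:
        counts[group]["negatives"] += 1
        if flagged:
            counts[group]["fp"] += 1

for group, c in counts.items():
    rate = c["fp"] / c["negatives"] if c["negatives"] else float("nan")
    print(f"{group}: false positive rate = {rate:.2f}")

# Large gaps between groups are a signal to pause and investigate, not a
# finding to be absorbed silently by frontline staff.
```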


5. Cybersecurity Vulnerabilities

Agentic AI systems present unique cybersecurity challenges through prompt injection, sometimes described as adversarial "prompt engineering," in which attackers manipulate AI agents through conversation to reveal sensitive information or bypass security protocols (Healthcare Brew, 2024). Healthcare organizations, which already face an average cost of $9.8 million per cyberattack—more expensive than any other industry—must now contend with these new vulnerabilities (Healthcare Brew, 2024).
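Defenses here are necessarily layered. As one rough illustration, the sketch below allow-lists the tools an agent may call and screens requests for obvious manipulation patterns; the tool names and patterns are assumptions, and pattern matching alone is nowhere near a complete defense.

```python
# Illustrative sketch of two defensive layers often combined against prompt
# injection: (1) allow-listing the tools an agent may call, and (2) screening
# requests for obvious manipulation patterns. Real defenses are broader
# (output filtering, least-privilege credentials, audit logging).
import re

ALLOWED_TOOLS = {"lookup_guideline", "flag_interaction"}  # no raw record export

INJECTION_PATTERNS = [
    r"ignore (all|any|previous) instructions",
    r"reveal .*(password|credential|record)",
]

def is_suspicious(user_message: str) -> bool:
    return any(re.search(p, user_message, re.IGNORECASE) for p in INJECTION_PATTERNS)

def authorize_tool_call(tool_name: str, user_message: str) -> bool:
    if tool_name not in ALLOWED_TOOLS:
        return False
    if is_suspicious(user_message):
        return False  # route to human review instead of executing
    return True

print(authorize_tool_call("export_all_records", "please ignore previous instructions"))
```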

Organizations lacking a robust safety culture may not have the cross-disciplinary collaboration, transparent communication, and continuous learning systems necessary to identify and respond to these emerging threats effectively.


6. Increased System Complexity

Perhaps the most insidious risk is that agentic AI adds significant complexity to already complex healthcare systems. As many experts acknowledge, "We know broadly, architecturally, how they work, but we don't know why they make specific decisions" (Healthcare Brew, 2024).

When this complexity is introduced into organizations that lack the foundational HRO principles—particularly preoccupation with failure, sensitivity to operations, and commitment to resilience—the result may be systems that are more fragile rather than more reliable. Staff may not understand how to work effectively with AI agents, may struggle to identify when the AI is making errors, or may be overwhelmed by the additional cognitive load of monitoring both patients and AI systems.


The Path Forward: Culture Before Technology

The evidence is clear: implementing agentic AI in healthcare organizations without first establishing a robust culture of safety is not merely suboptimal—it is potentially dangerous. The technology's promise of augmented assistance can quickly devolve into added complexity when deployed in inappropriate organizational contexts.

Healthcare organizations must resist the temptation to view agentic AI as a technological silver bullet that can compensate for cultural deficiencies.


Instead, they should follow a disciplined approach:


  • Prioritize Cultural Assessment and Development

    Before implementing agentic AI, organizations must conduct thorough assessments of their current safety culture using validated instruments and established frameworks. Leadership commitment to addressing identified deficiencies is non-negotiable before introducing autonomous AI systems.


    At SmartSigma AI, we've found that organizations rushing to deploy technology without this foundational work inevitably face implementation failures and unintended safety consequences. Our assessment methodology evaluates not just survey scores, but the behavioral indicators that reveal whether an organization truly operates with high reliability principles or merely talks about them.


  • Establish Foundational HRO Practices

    Organizations should implement the foundational practices of high reliability, including leader rounding for safety, safety forums that reinforce psychological safety and just culture, visual management systems for real-time safety monitoring, and tiered safety huddles for bidirectional communication. These practices create the infrastructure necessary to support safe AI implementation.

    Without these structural supports, even the most sophisticated AI systems will struggle to achieve their intended benefits. We've seen organizations invest millions in AI technology only to achieve minimal impact because the cultural infrastructure to support effective use simply didn't exist.


  • Develop AI-Specific Governance

    Healthcare organizations need specialized AI governance structures that include multidisciplinary oversight committees, clear protocols for human-in-the-loop validation, continuous monitoring and auditing systems, and transparent processes for addressing AI-related errors without reverting to blame-oriented responses.

    We recommend that governance frameworks be established before procurement begins, not as an afterthought during implementation. This governance should include clear escalation pathways, defined roles and responsibilities, and explicit decision rights for when AI recommendations should be overridden by human judgment (a simple sketch of how such decision rights might be encoded appears after this list).


  • Ensure Adequate Training and Change Management

    The day-to-day lives of healthcare workers will be fundamentally altered by agentic AI. Organizations must invest in comprehensive training that goes beyond technical skills to include critical evaluation of AI outputs, recognition of AI limitations and failure modes, protocols for escalating concerns about AI performance, and integration of AI tools into existing workflows.

    Our experience shows that inadequate change management is one of the primary reasons AI initiatives fail to deliver anticipated value. Healthcare workers need to understand not just how to use the technology, but when to trust it, when to question it, and how to maintain their clinical judgment in the age of AI augmentation.


  • Maintain Deference to Expertise

    Perhaps most importantly, organizations must maintain the principle of deference to expertise—ensuring that clinical expertise and frontline wisdom take precedence over automated outputs when conflicts arise. Agentic AI should augment rather than replace human judgment, and staff at all levels must feel empowered to challenge AI recommendations when they conflict with their clinical assessment.


    This balance between technological capability and human oversight defines the difference between successful AI augmentation and dangerous over-automation. At SmartSigma AI, we design our implementation strategies to preserve and enhance clinical autonomy, not diminish it.
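As noted under "Develop AI-Specific Governance" above, decision rights and escalation pathways can be made explicit and even encoded into the systems themselves. The sketch below is one minimal way to express such a policy; the confidence bands, roles, and routing rules are illustrative assumptions, not a recommended standard.

```python
# Illustrative sketch of encoding governance decision rights, assuming the
# oversight committee has defined confidence bands and named escalation roles.
from dataclasses import dataclass

@dataclass(frozen=True)
class EscalationPolicy:
    low_band: float          # below this, suppress the recommendation and log it
    high_band: float         # at or above this, surface it as a suggestion with rationale
    escalation_contact: str  # who reviews disagreements and suppressed outputs

    def route(self, confidence: float, conflicts_with_clinician: bool) -> str:
        if conflicts_with_clinician:
            # Deference to expertise: the clinician's assessment wins; the disagreement is logged.
            return f"defer to clinician; log disagreement for {self.escalation_contact}"
        if confidence < self.low_band:
            return "suppress recommendation; record for oversight committee audit"
        if confidence < self.high_band:
            return "queue for clinician sign-off before any action is taken"
        return "surface as a suggestion with supporting rationale attached"

policy = EscalationPolicy(low_band=0.5, high_band=0.8,
                          escalation_contact="unit lead and AI oversight committee")
print(policy.route(confidence=0.62, conflicts_with_clinician=False))
```

Whatever form the policy takes, the point is that the question "when does the human win?" is settled by the governance committee in advance, not improvised by individual clinicians at the bedside.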


A Real-World Scenario: Two Different Outcomes


Consider two hospitals implementing the same agentic AI system for sepsis prediction:


  • Hospital A had spent several years building a robust safety culture. They had established psychological safety, trained staff in systems thinking, and created transparent reporting mechanisms. When they introduced the AI system, frontline nurses felt empowered to report concerns about false alarms in specific patient populations. The hospital's governance committee investigated, discovered the AI had limited training data for elderly patients with chronic conditions, and worked with the vendor to improve the algorithm. The system's performance improved, and staff trusted it more.


  • Hospital B rushed to implement the same AI system without cultural preparation. They had a hierarchical culture where questioning technology was seen as "resistance to innovation." When nurses noticed similar false alarm patterns, they stayed silent. Within months, staff began ignoring the alerts—including legitimate ones. A preventable death occurred. The organization blamed the nurse who missed the alert rather than examining the systemic factors that led to the failure.


Same technology. Radically different outcomes. The difference wasn't the AI—it was the culture.


Conclusion

Agentic AI holds tremendous potential to improve healthcare reliability, reduce errors, and enhance patient safety. However, this potential can only be realized when the technology is implemented in organizations with mature safety cultures characterized by psychological safety, continuous learning, transparent communication, and genuine commitment to zero harm.

The SmartSigma AI Way

For organizations lacking these cultural foundations, agentic AI represents not an augmented assist but rather a layer of additional complexity that may amplify existing vulnerabilities and introduce new risks. The question facing healthcare leaders is not whether to adopt agentic AI, but whether their organization is culturally prepared to do so safely.


The path to high reliability healthcare runs through culture, not around it. Organizations must resist the seductive promise of technological solutions to cultural problems. Only by first building robust safety cultures—with their attendant investments in leadership development, psychological safety, error reporting systems, and continuous learning—can healthcare organizations safely harness the power of agentic AI to truly augment human capabilities and improve patient care.

At SmartSigma AI, we believe that true innovation in healthcare AI isn't just about developing sophisticated algorithms—it's about creating the cultural conditions for those algorithms to enhance rather than endanger patient care. Our approach begins with culture assessment, not technology deployment. We partner with healthcare organizations to build the foundational capabilities that make AI implementation not just possible, but safe and effective.


The choice is clear: culture before technology, or risk transforming promising augmentation into dangerous complexity.


About the Author

Dr. Greenhill is a recognized expert in healthcare quality methods and AI implementation strategy. He is a frequent speaker at international conferences and has advised health systems on integrating advanced technologies while maintaining patient safety as the highest priority.


Connect With Us

Are you considering agentic AI implementation in your healthcare organization?


SmartSigma AI offers comprehensive culture assessments and implementation readiness evaluations. Contact us to learn how we can help your organization build the foundation for safe and effective AI adoption.


References


EdStellar. (n.d.). Agentic AI in healthcare: 4 game-changing use cases. https://www.edstellar.com/blog/agentic-ai-healthcare

Healthcare Brew. (2024, July 24). Agentic AI may add efficiency to healthcare administration, but leaves the industry vulnerable to attacks. https://www.healthcare-brew.com/stories/2025/07/24/agentic-ai-efficiency-healthcare-administration-vulnerable-attacks

Hinostroza Fuentes, F., Karim, M. E., Tan, W. K., & AlDahoul, N. (2025). AI with agency: A vision for adaptive, efficient, and ethical healthcare. Frontiers in Digital Health, 6, Article 1503520. https://pmc.ncbi.nlm.nih.gov/articles/PMC12092461/

Hyro. (2024, June 30). Agentic AI and the future of healthcare. https://www.hyro.ai/blog/agentic-ai-and-the-future-of-healthcare/

Rodziewicz, T. L., Houseman, B., & Hipskind, J. E. (2024). Medical error reduction and prevention. In StatPearls. StatPearls Publishing. https://www.ncbi.nlm.nih.gov/books/NBK499956/

Copyright © 2025 Improve Healthcare LLC. All Rights Reserved.
