Fault lines in health care AI – Part five: The future of AI in medicine, next steps for policy and practice
As assistive AI tools grow in popularity in clinical settings—from diagnostics to decision support—they promise a future of fewer errors, faster workflows, and reduced cognitive burdens for health care providers. But behind these promises lie high-stakes trade-offs.
“Fault lines and power struggles in health care AI” is a five-part series examining the complex impact of artificial intelligence on medicine. This series explores the ethical, legal, and organizational tensions surfacing in the new AI-enabled medical landscape.
In parts one through four, we traced the fault lines emerging as AI becomes embedded in clinical decision-making. We explored how physicians are being held legally accountable for AI-driven errors, how this pressure contributes to emotional exhaustion, and how trust—between patients, providers, and technology—is both fragile and foundational. In this final installment, we look ahead: What can we do to make AI in medicine safe, fair, and sustainable? What policies and practices will ensure AI relieves burdens instead of creating new ones?
Part five: The future of AI in medicine, next steps for policy and practice
AI is often framed as a cure for many of medicine’s chronic ailments like decision fatigue, administrative overload, and medical error. But as we’ve seen throughout this series, that is not always the case. Doctors are being asked to act as both stewards and skeptics of AI—validating outputs they didn’t design, and bearing the blame when things go wrong. This is not just inefficient; it's unrealistic.
But the solution is not to scale back AI adoption. It’s to scale up support. To move forward, we need a shared agenda for safer implementation that balances innovation with oversight and pairs technical sophistication with human-centered design.
Building from the inside out
Responsible AI (RAI) development must go beyond aspirational ethics. Health systems and developers should embed RAI principles like fairness, explainability, and safety into every step of design and deployment. That includes adopting transparency tools like “model cards”—short, plain-language summaries that describe an AI model’s intended use, limitations, performance characteristics, and any known biases. These cards allow clinicians to make informed decisions about when and how to rely on AI without needing to decode black-box algorithms.
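To make the model-card idea concrete, the sketch below shows one way such a summary could be captured as structured data that a clinical team can read at a glance. It is a minimal illustration only: the field names, the ModelCard class, and the sepsis-risk example are assumptions for this sketch, not an official model-card schema or a real deployed model.

```python
# Minimal sketch of a "model card" as structured data (illustrative only).
# The fields and the hypothetical sepsis-risk example are assumptions,
# not an official schema or a real deployed model.
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    name: str
    intended_use: str
    limitations: list[str] = field(default_factory=list)
    performance: dict[str, float] = field(default_factory=dict)
    known_biases: list[str] = field(default_factory=list)

    def summary(self) -> str:
        """Render a short, plain-language summary a clinician can scan."""
        return "\n".join([
            f"Model: {self.name}",
            f"Intended use: {self.intended_use}",
            "Limitations: " + "; ".join(self.limitations),
            "Performance: " + ", ".join(f"{k} = {v}" for k, v in self.performance.items()),
            "Known biases: " + "; ".join(self.known_biases),
        ])

# Hypothetical example: an early-warning score framed as decision support.
card = ModelCard(
    name="Sepsis early-warning score (hypothetical)",
    intended_use="Flag adult inpatients at elevated sepsis risk for clinician review; not a diagnosis.",
    limitations=[
        "Not validated for pediatric patients",
        "Trained on records from a single health system",
    ],
    performance={"AUROC": 0.82, "sensitivity at alert threshold": 0.70},
    known_biases=["Lower sensitivity for patients with sparse lab histories"],
)
print(card.summary())
```

Keeping the card in a structured, versioned form like this makes it easier to surface the same plain-language summary wherever the model appears in the clinical workflow.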
RAI isn’t just a technical checklist. It’s a cultural commitment to protecting both patients and providers from the unforeseen risks that come with powerful new tools.
“Human-in-the-loop” is a collaborative model that keeps human expertise involved in how machine learning systems are built and used. At its core, it mirrors the apprenticeship model on which health care is built: residents learn under supervision, gradually gaining autonomy. AI implementation should follow the same path. Clinicians must be involved not just as end users, but as co-creators. Engaging “superusers” early in the design process ensures that tools are tailored to clinical realities and that developers understand frontline needs.
That same principle should extend to rollouts. Rather than launching AI systems all at once, organizations should adopt incremental implementation strategies. Pilot programs with expert oversight allow for real-time feedback, error correction, and confidence-building. This “training wheels” approach supports collective learning, enabling clinicians to build trust in AI through guided experience.
Sharing the load
Earlier in this series, we reflected on the regulatory and legal frameworks emerging for health care AI. Undoubtedly, innovation in health care AI is proceeding at a pace that far outstrips legislative action and guidance. What can we do in the meantime to bolster clinician confidence and reduce the stress associated with using AI in medical settings?
Hospitals and health networks should establish AI governance committees tasked with setting policies, reviewing use cases, ensuring fairness, and monitoring safety. These bodies must be interdisciplinary—bringing together ethicists, physicians, data scientists, and legal experts—and must have real authority over AI adoption and oversight.
AI governance can help shift responsibility from individual physicians to organizational systems. In doing so, it reframes AI as a collective endeavor with important outcomes: reducing fear, improving safety, and fostering trust.
Fostering a culture of safe disclosure
Borrowing from aviation, where pilots can report near-misses without fear of punishment, health care should establish safe reporting environments for AI-related incidents. If clinicians fear retribution for mistakes—especially those tied to AI—they will hide them. That hinders learning, erodes trust, and perpetuates avoidable harm.
Creating safe spaces for transparency not only protects clinicians; it strengthens the entire learning system. Over time, this kind of disclosure culture can help turn individual errors into shared knowledge—and ultimately, into better care.
What’s next?
The path to an AI-powered future in medicine is not just technical but also political, cultural, and deeply human. To harness AI’s potential without overburdening clinicians or compromising care, we must align our tools with the lived realities of medical practice. That means designing AI systems that are transparent, collaborative, and governed by shared accountability. It also means embedding AI into a professional ecosystem where doctors are empowered—not isolated—and where technology serves to elevate human judgment, not replace it.
As we conclude this series, the adage holds: “With great power comes great responsibility.” It is our duty to shape the future of health care AI through policies that support responsible AI, human-in-the-loop design, institutional governance, and a culture of care.
Fault lines in health care AI
Part one: The high stakes of AI in medicine
Part two: Who’s responsible when AI gets it wrong?
Part three: AI, the Hippocratic Oath, and burnout
Part four: Mapping the web of trust in AI-assisted health care
Contributors:
Ritu Agarwal is the Wm. Polk Carey Distinguished Professor of Information Systems and Health at the Johns Hopkins Carey Business School. She is also the founding co-director of the Center for Digital Health and Artificial Intelligence.
Gordon Gao is a professor at the Johns Hopkins Carey Business School in the fields of information systems, health, and economics. He is also a co-director of the Center for Digital Health and Artificial Intelligence.