
Fault lines in health care AI – Part two: Who’s responsible when AI gets it wrong?
As assistive AI tools grow in popularity in clinical settings—from diagnostics to decision support—they promise a future of fewer errors, faster workflows, and reduced cognitive burdens for health care providers. But behind these promises lie high-stakes trade-offs.
“Fault lines and power struggles in health care AI” is a five-part series examining the complex impact of artificial intelligence on medicine. This series explores the ethical, legal, and organizational tensions surfacing in the new AI-enabled medical landscape.
We’ll ask: Who is liable when AI gets it wrong? What are the unintended consequences for physician well-being? How is trust formed or broken? And what policies can ensure that AI supports, rather than undermines, the human side of care?
In part one, we explored the paradox of assistive AI in health care—tools meant to ease physician burdens may actually increase them by adding new legal and cognitive stressors. Now, in part two, we dig deeper into the question of liability: when AI contributes to a medical error, who takes the blame?
Part two: Who’s responsible when AI gets it wrong?
Judgment on trial
Artificial intelligence is increasingly integrated into clinical decision-making, from triage to diagnostics and treatment recommendations. But when AI gets it wrong, the legal system isn’t asking whether the algorithm failed. It’s asking what the physician did, and typically holding them solely responsible.
Under current U.S. malpractice law, liability rests on the “reasonable physician under similar circumstances” standard.1 Whether AI was used or not, courts judge the physician’s conduct. There is no doctrine assigning shared responsibility to AI systems, even when their recommendations directly influence patient care.
Diagnostic delays, flawed outputs, and reliance on digital tools have all been litigated, but the outcomes remain consistent: physicians bear the responsibility, regardless of how complex or opaque the technology may be.
As AI becomes more entrenched, malpractice claims involving it are inevitable. Without reform, courts may continue to assign full liability to physicians as the sole human actors in AI-assisted care.
Other fields offer alternatives. In aviation, fault is distributed across pilots, systems, and manufacturers when automation fails.3 Health care could benefit from a similar approach that recognizes the shared nature of decision-making.
Legal scholars such as W. Nicholson Price have proposed frameworks to distribute responsibility more equitably.4 The European Union’s proposed AI Liability Directive is a step in that direction, easing the burden of proof for claims involving high-risk AI systems.5 U.S. law has not yet followed suit, but pressure is building.
The moral blame game
Considering the question of blame from a different perspective—one that focuses on people’s moral intuitions—yields a very similar conclusion about these cases.
When mistakes happen, the human instinct to cast blame shapes how we judge wrongness, intent, and moral responsibility. Research shows that the greater the harm inflicted on human victims, whether intentional or accidental, the greater the motivation to hold someone or something responsible.6
The question is who gets blamed. And here, behavioral research is quite clear: we judge humans much more strongly than AI for wrongness, intentionality, and blameworthiness.7,8 These judgments may also extend to other human agents involved in building the AI system, like companies and developers.8 But we generally do not attribute sufficient agency to AI to warrant moral condemnation. Physicians, not the tools they use, are still seen as the ones at fault and will be held responsible most of the time.
Whether more sophisticated AI will eventually shift these perceptions is an open question. When given a choice between blaming a human or a machine, particularly in a high-stakes medical setting, people rarely let the human off the hook.
As health systems adopt AI into routine practice, expectations around its use are rising. In the future, physicians could be held accountable not only for errors when using AI, but also for failing to use AI at all. This puts clinicians in a double bind: they are expected to rely on tools they may not fully understand, and are blamed whether they do or don’t follow algorithmic advice.
What’s next
"Expecting physicians to perfectly understand and apply AI alone when making clinical decisions is like expecting pilots to also design their own aircraft – while they’re flying it,” said co-author Christopher Myers, associate professor and faculty director of the Center for Innovative Leadership at the Carey Business School.
Rather than expecting doctors to single-handedly reconcile the complexities of AI integration, the authors advocate for health care organizations to provide infrastructure that supports clinical-AI collaboration: checklists, decision aids, documentation protocols, and ongoing simulation-based training.
“To ensure AI empowers rather than exhausts physicians, health care organizations must develop support systems that help physicians calibrate when and how to use AI, so they don’t need to second-guess the tools they’re using to make key decisions,” said Myers.
Just as pilots train in flight simulators, doctors need practice environments where they can test AI’s boundaries safely. It’s also imperative that doctors be able to provide and incorporate feedback, so that when they report an error or override a recommendation, that input improves future AI performance. Without such structures, the promise of assistive AI risks becoming a burden that physicians are asked to carry alone.
For now, from both the legal and moral psychology perspectives, the answer to “who’s to blame” is straightforward: AI isn’t to blame even if it produced faulty information; the physician using it is. As technology, human-machine interactions, and the law evolve, that answer may shift, but for the time being, doctors—not AI—carry the responsibility to “first, do no harm.”
In part three of the “Fault lines in health care AI” series, we look at the human toll and how AI may be quietly chipping away at well-being.
Contributors:
Erik Helzer is the MBA program director and an associate professor of Management and Organization and Law and Ethics at the Johns Hopkins Carey Business School. He develops and applies psychological, organizational, and behavioral science insights to understand the cultivation of practical wisdom for leading in organizations. His research focuses on three facets of practical wisdom: ethical behavior and moral judgment, self-knowledge, and personal agency and adjustment. These topics have been the basis of his empirical research and serve as the foundation for his teaching in both MBA and Executive Education programs.
Stacey Lee is a professor of practice in Law and Ethics at the Johns Hopkins Carey Business School. Her expertise lies in business law, health law, and negotiations. She holds a joint appointment at the Bloomberg School of Public Health, and her research focuses on pharmaceutical manufacturers’ international and domestic influence on access to medicines and on transformative health care negotiations.
References
[1] Mello, M. M., & Studdert, D. M. (2008). Professional Liability: Malpractice and Other Causes of Action. The New England Journal of Medicine, 359(19), 1936–1942.
[2] U.S. Food & Drug Administration. Clinical Decision Support Software: Guidance for Industry and FDA Staff (Sept. 2022).
[3] NTSB. Asiana Airlines Flight 214 Accident Report (2014).
[4] Price, W. Nicholson. “Black-Box Medicine.” Harvard Journal of Law & Technology, Vol. 28 (2015).
[5] European Commission. Proposal for a Directive on adapting non-contractual civil liability rules to artificial intelligence (AI Liability Directive), 2022.
[6] Schein, C., & Gray, K. (2018). The theory of dyadic morality: Reinventing moral judgment by redefining harm. Personality and Social Psychology Review, 22, 32–70.
[7] Wilson, A., Stefanik, C., & Shank, D. B. (2022). How do people judge the immorality of artificial intelligence versus humans committing moral wrongs in real-world situations? Computers in Human Behavior Reports, 8, 100229.
[8] Sullivan, Y. W., & Wamba, S. F. (2022). Moral judgments in the age of artificial intelligence. Journal of Business Ethics, 178, 917–943.