Fault lines in health care AI – Part one: The high stakes of AI in medicine

Why it matters:

This five-part series examines the complex impact of artificial intelligence on medicine by exploring the ethical, legal, and organizational tensions surfacing in the new AI-enabled medical landscape. Part one covers why AI might exacerbate the very problems it was intended to reduce, and why physicians are left to bear the consequences.

As assistive AI tools grow in popularity in clinical settings—from diagnostics to decision support—they promise a future of fewer errors, faster workflows, and reduced cognitive burdens for health care providers. But behind these promises lie high-stakes trade-offs.

“Fault lines and power struggles in health care AI” is a five-part series examining the complex impact of artificial intelligence on medicine. This series explores the ethical, legal, and organizational tensions surfacing in the new AI-enabled medical landscape.

We’ll ask: Who is liable when AI gets it wrong? What are the unintended consequences for physician well-being? How is trust formed or broken? And what policies can ensure that AI supports, rather than undermines, the human side of care?

In this opening piece, we examine why assistive AI, which is intended to reduce error and stress, may actually increase both, and how physicians are bearing the brunt of this technological transition because of inadequate legal and organizational protections.

Part one: The high stakes of AI in medicine

Pressure wrapped in promise

Assistive artificial intelligence has captivated the health care sector with its potential to revolutionize patient care. In fields like radiology, neonatology, and cardiology, AI systems have been praised for spotting patterns human eyes might miss, offering early warnings, and generating data-informed recommendations on a scale previously unimaginable. In theory, AI can enhance accuracy, reduce medical errors, and give physicians the breathing room they need to focus on the human dimensions of care.

But the reality is more complicated. A recent brief published in JAMA Health Forum by researchers from the Johns Hopkins Carey Business School, Johns Hopkins Medicine, and the University of Texas at Austin’s McCombs School of Business outlines a worrisome paradox: instead of simply assisting physicians, AI tools are creating new stressors, due in large part to the unresolved legal and organizational frameworks surrounding their use.

Physicians are asked to integrate AI into their decision-making processes without clear guidance on how or when to rely on it. And if the treatment leads to a poor outcome, patients often hold the physician, not the software, accountable. This expectation of flawless “calibration” between human judgment and machine output is not only unrealistic, it’s unsustainable.

Superhumanization

Medical professionals have long been subjected to “superhumanization”—the notion that they must be infallible in their diagnoses and treatments. AI intensifies this expectation. As the JAMA Health Forum brief notes, physicians now face the dual challenge of avoiding both over-reliance on AI—which could mean blindly following flawed recommendations—and under-reliance, which could lead to rejecting helpful suggestions. Finding the right balance takes vigilance, careful judgment, deep knowledge of the field, and constant second-guessing—all under conditions of urgency, legal risk, and emotional strain.

"AI was meant to ease the burden, but instead, it’s shifting liability onto physicians—forcing them to flawlessly interpret technology even its creators can't fully explain,” said co-author Shefali Patil, visiting associate professor at the Carey Business School and associate professor at the University of Texas McCombs School of Business. “This unrealistic expectation creates hesitation and poses a direct threat to patient care."

The black-box nature of many AI tools compounds the problem. If even the engineers who designed the algorithm can’t explain why it produced a particular output, how can physicians be expected to vet that output under clinical conditions? And yet, courts and public opinion often place that burden squarely on the physician’s shoulders.

Making AI smarter and safer

"Expecting physicians to perfectly understand and apply AI alone when making clinical decisions is like expecting pilots to also design their own aircraft – while they’re flying it,” said co-author Christopher Myers, associate professor and faculty director of the Center for Innovative Leadership at the Carey Business School.

Rather than expecting doctors to single-handedly reconcile the complexities of AI integration, the authors advocate for health care organizations to provide infrastructure that supports clinician-AI collaboration: checklists, decision aids, documentation protocols, and ongoing simulation-based training.

“To ensure AI empowers rather than exhausts physicians, health care organizations must develop support systems that help physicians calibrate when and how to use AI, so they don’t need to second-guess the tools they’re using to make key decisions,” said Myers.

Just as pilots train in flight simulators, doctors need practice environments where they can safely test AI’s boundaries. It’s also imperative that doctors be able to provide and incorporate feedback, so that when they report an error or override a recommendation, that feedback improves future AI performance. Without such structures, the promise of assistive AI risks becoming a burden that physicians are asked to carry alone.

What’s next

As we launch this series, one thing is clear: AI is not just a tool—it’s a force that reshapes roles, responsibilities, and risks in medicine. If we fail to align technology with the realities of clinical practice, we risk replacing old burdens with new ones. In part two, we’ll examine what happens when AI gets it wrong—and why physicians are the ones who pay the price.


Contributors:

Christopher Myers is an associate professor of Management & Organization and the founding faculty director of the Center for Innovative Leadership at the Johns Hopkins Carey Business School, and holds joint faculty appointments in Anesthesiology & Critical Care Medicine at the Johns Hopkins School of Medicine and in Health Policy and Management at the Johns Hopkins Bloomberg School of Public Health. His research and teaching focus on individual learning, leadership development, and innovation, with particular attention to how people learn vicariously and share knowledge in health care organizations and other knowledge-intensive work environments.

Shefali Patil is an associate professor of Management at the McCombs School of Business, The University of Texas at Austin, and visiting associate professor of Management & Organizations at the Johns Hopkins Carey Business School. Her research focuses on accountability risks, as well as image and value conflicts between employees and stakeholders arising from emerging technologies and broader organizational dynamics. She teaches undergraduate and graduate courses in technology and accountability risk, leadership, and organizational behavior.
