
Fault lines in health care AI – Part three: AI, the Hippocratic Oath, and burnout
As assistive AI tools grow in popularity in clinical settings—from diagnostics to decision support—they promise a future of fewer errors, faster workflows, and reduced cognitive burdens for health care providers. But behind these promises lie high-stakes trade-offs.
“Fault lines and power struggles in health care AI” is a five-part series examining the complex impact of artificial intelligence on medicine. This series explores the ethical, legal, and organizational tensions surfacing in the new AI-enabled medical landscape.
We’ll ask: Who is liable when AI gets it wrong? What are the unintended consequences for physician well-being? How is trust formed or broken? And what policies can ensure that AI supports, rather than undermines, the human side of care?
In parts one and two, we explored how AI tools meant to ease physician burdens may actually increase them by adding new implementation hurdles and legal ramifications. Now, in part three, we take a closer look at how those challenges contribute to burnout, and what can be done about it.
Part three: AI, the Hippocratic Oath, and burnout
The heavy load of high-tech medicine
Burnout is not new in medicine, but AI brings it new dimensions. Defined by emotional exhaustion, depersonalization, and a reduced sense of efficacy, burnout has reached alarming levels among physicians. AI, paradoxically introduced as a solution, is often worsening the problem. Instead of offloading tasks, it introduces ambiguity. Instead of simplifying decisions, it adds layers of complexity. Doctors are now not just decision-makers, but also interpreters, validators, and watchdogs of a technology they may not fully understand.
One major culprit is the “calibration burden”: the pressure to know when to trust AI and when to override it. That ongoing calculation is mentally taxing. It pulls physicians out of intuitive, patient-centered care and into constant risk-benefit analysis. Because physicians can be held liable for AI’s mistakes, they may expend considerable mental energy deciding whether to act on an AI recommendation. Over time, this cognitive load doesn’t lighten; it accumulates.
Isolation in automation
Beyond the cognitive burden, AI can isolate rather than connect. Medicine has traditionally been a team sport: clinicians confer, consult, and collaborate. But AI can sideline these interactions. If the algorithm offers a diagnosis or treatment plan, there may be less incentive—or less time—to seek out a second opinion. The result? Fewer conversations, fewer collaborative moments, and fewer opportunities to feel supported.
This erosion of workplace social fabric has real consequences. Research in organizational psychology shows that shared struggle builds resilience. When teams face problems together, they emerge more cohesive and mentally strong. Conversely, when individuals face complex challenges alone—especially ones they can’t easily explain to others—stress multiplies.
AI also reshapes the doctor-patient relationship. When clinicians spend more time validating machine output or navigating data dashboards, they may have less time and focus for listening, empathizing, or making eye contact. These micro-moments matter. Patients feel the difference—and so do physicians.
The modern Hippocratic challenge
The Hippocratic Oath’s directive to “do no harm” isn’t just about clinical decisions—it’s also about the ethos of care. When AI subtly shifts the clinician’s role from caregiver to systems operator, it may alter that ethos. The question becomes not just “Is this treatment correct?” but also “Am I still the kind of doctor I aspired to be?”
The good news is that this erosion is not inevitable. AI doesn’t have to be isolating. With thoughtful implementation, it can enhance collaboration, reduce burden, and improve both care and well-being. But that requires intentionality: embedding AI within teams, designing systems that augment rather than replace human connection, and prioritizing mental health as a core metric of success.
What’s next
AI’s effects on physician burnout are not just technical—they’re deeply human. The systems we build reflect the values we prioritize. If we want AI to heal more than it harms, we must design with empathy, support, and social connection at the forefront. Once AI systems become highly accurate and physicians find them dependable, we can expect physician burnout to ease and patient outcomes to improve.
In part four, we’ll explore the role of trust: how it’s built, broken, and repaired in an AI-driven health care system.
Fault lines in health care AI
Part one: The high stakes of AI in medicine
Part two: Who’s responsible when AI gets it wrong?
Contributors:
Michelle Barton is an associate professor of practice at the Johns Hopkins Carey Business School with expertise in organizational and team resilience, managing uncertainty, and interpersonal effectiveness during adversity.
Shubhranshu Singh is an associate professor in the research track at the Johns Hopkins Carey Business School. His expertise is in competitive marketing strategy, with a specific interest in developing markets. He recently contributed “Artificial Intelligence on Call: The Physician’s Decision of Whether to Use AI in Clinical Practice” (with Tinglong Dai) to the Journal of Marketing Research.