Associate Professor Christopher Myers offers recommendations to reduce the burden doctors face from new assistive AI products.

Is AI in medical decision-making creating a superhuman burden on doctors?
In health care, as in so many fields, advances in artificial intelligence promise to transform the delivery of care, reducing medical errors and improving diagnostic accuracy while alleviating physician fatigue by freeing doctors' time from mundane tasks.
In practice, however, the implementation of assistive AI technology risks having the opposite impact on doctors, according to Christopher Myers, associate professor of Management and Organization at the Johns Hopkins Carey Business School.
Because health care organizations are adopting AI technology at a much faster pace than laws governing its use can be put in place, the regulatory gap “imposes an immense, almost superhuman, burden on physicians,” says Myers.
“They are expected to rely on AI to minimize medical errors yet bear responsibility for determining when to override or defer to these systems,” write Myers and his colleagues Shefali V. Patil of the University of Texas at Austin and Yemeng Lu-Myers of Johns Hopkins Medicine in a Viewpoint article published recently in JAMA Health Forum.
“The narrative is that AI will lighten the burden on doctors,” says Myers. “But the reality is that AI-generated guidance is just one more thing for physicians to consider and it adds pressure to always reach the right answer.”
Caught in a bind
Complicating the picture for physicians is the “black box” nature of many AI systems, which obscures how recommendations are generated. Myers notes that doctors frequently “express concerns about the lack of interpretability and transparency in AI outputs. ‘Sometimes it’s right and sometimes it’s wrong, and I don’t know when or why.’”
Such concerns are valid, he adds, given ongoing challenges involving AI “hallucinations”—information generated by AI that is inaccurate or nonsensical.
So, if AI spits out a diagnosis or recommendation that is surprising to a doctor, “that’s a real bind to be put in,” Myers says. Is AI recognizing something truly rare and unexpected the doctor would have overlooked? Or is it a hallucination? Sussing out an accurate conclusion requires considerable extra work on the part of doctors, he says, which can exacerbate issues of burnout.
In doing that extra work, they must navigate two opposing error risks: false positives, in which they rely on erroneous AI guidance, and false negatives, in which they under-rely on accurate AI guidance.
Such decision-making places a disproportionate moral responsibility on physicians, say Myers and his co-authors. They point to research showing that when it comes to adverse outcomes involving AI inputs, physicians are consistently viewed as the most liable party — more than health care organizations, AI vendors, or regulatory bodies.
Doctors have long been held to an almost superhuman standard, expected to exhibit extraordinary mental, physical, and moral attributes, says Myers. “AI risks intensifying this super-humanization, heightening the potential for increased burnout and errors, and ultimately undermining the very goals assistive AI seeks to achieve,” the authors write.
Shifting the burden
Myers and his colleagues offer recommendations for alleviating this burden on physicians. While attention until now has focused on improving the trustworthiness of AI systems, which they acknowledge is important, they believe a fundamental shift is needed. In “the next frontier,” they argue, health care organizations must step up and shoulder more of the burden by supporting doctors’ calibration efforts, so physicians know how and when to leverage AI and can avoid issues of liability.
“It’s not enough to roll out these AI systems to doctors and just say, ‘Good luck!’” Myers says.
Organizations could begin by implementing standard practices, such as checklists and guidelines, for doctors to evaluate AI inputs, then use the data gathered to track clinical outcomes and identify patterns of effective and ineffective AI use. The insights gained could then be shared with doctors and further refined through input from interdisciplinary teams of physicians, administrators, data scientists, AI engineers, and legal experts.
“Right now, we’re not aware of systematic efforts to gather feedback from doctors on when — and whether — AI tools are trustworthy and when they are not,” says Myers. “We’re calling for more attention and deliberate efforts toward getting that kind of data and systemwide calibration, rather than leaving it up to the individual physician.”
The authors also suggest integrating AI training into medical education and on-site programs through simulation training, providing “a low-stakes environment for experimentation” that would allow doctors to build their confidence and familiarity with AI systems while reducing the risks involved with treating actual patients.
Myers says the ultimate goal should be to create an environment “where physicians are supported, not super-humanized,” when incorporating AI into their decision-making.