
Malpractice concerns impact physician decisions to consult AI

Why it matters:

Artificial intelligence-assisted technologies are making inroads into nearly every aspect of life, but Tinglong Dai and Shubhranshu Singh are exploring why some doctors are resisting.

Rapid advances in artificial intelligence-powered tools are driving change in countless career fields — from higher education to legal services to financial analysis. In health care, where AI systems increasingly hold the potential to aid doctors in their medical decision-making, technology’s promise comes with a cautionary note: How might concerns around malpractice liability come into play?

Johns Hopkins Carey Business School faculty members Tinglong Dai and Shubhranshu Singh are exploring that central question in their latest research paper, “Artificial Intelligence on Call: The Physician’s Decision of Whether to Use AI in Clinical Practice.”


“Liability concerns have always been instrumental in influencing how physicians make decisions,” says Dai, a professor of Operations Management and Business Analytics, who serves on the leadership team of the Hopkins Business of Health Initiative and holds a joint appointment at the Johns Hopkins University School of Nursing. “It’s not an exaggeration to say the health care industry has been shaped by the legal industry, and with AI coming on the scene, things get really heated.”

Until now, the two researchers say, scholars have paid little attention to when and how doctors use medical AI in their treatment decisions. In this latest project, the duo aimed to better understand a physician’s decision to use — or not use — AI in their daily practice, particularly in view of potential legal liability.

“We’re interested not just in whether doctors will turn to AI for information, but also how they might use that information in deciding how to treat patients,” says Singh, an associate professor of marketing, who also holds a joint appointment at Johns Hopkins’ Krieger School of Arts & Sciences.

Better safe than sorry?

To zero in on this question, the researchers developed a theoretical model of physician decision-making. They model the assistive AI system as an informational device that provides predictive information that is informative but imperfect. In the model, this information guides the doctor in deciding whether to “play it safe” and provide the standard care or to offer a more personalized, nonstandard treatment plan.

Given the capabilities of AI, one might expect physicians to use the technology most often when they are uncertain about the optimal treatment plan for a patient. But in their modeling, the researchers found the opposite to be true.

“Somewhat surprisingly,” says Dai, “we find the physician may use AI technology the most in ‘low uncertainty’ cases, when they are pretty sure of a prospective treatment plan but avoid using it in higher-uncertainty cases.”

That’s largely due to concerns about legal liability, Dai and Singh found. Under the current medical liability system, if physicians take the step to consult an AI tool in a case of high uncertainty, then decide to deviate from the AI recommendation, they are exposed to legal risks in the event that something goes wrong with the patient, the researchers say.
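To make the intuition concrete, here is a minimal numerical sketch. It is not the authors’ actual model: the prior, the 80% accuracy figure, and the deviation rule below are illustrative assumptions. It treats the AI, as in the paper’s setup, as an informative but imperfect signal about whether a nonstandard plan beats standard care, and asks two questions: how often would the AI contradict the physician’s own lean, and how defensible would deviating be after such a contradiction?

    # Illustrative sketch only, not the authors' model. Assumptions: a binary
    # choice (standard vs. nonstandard care), a physician prior p_nonstandard
    # that the nonstandard plan is best, and an AI recommendation that matches
    # the truly better plan with probability ai_accuracy.

    def consult_exposure(p_nonstandard, ai_accuracy):
        """Return (P(AI disagrees with the physician's lean),
                   P(the lean is still right, given that disagreement))."""
        p, q = p_nonstandard, ai_accuracy
        if p >= 0.5:  # physician leans toward nonstandard care
            p_disagree = p * (1 - q) + (1 - p) * q   # AI says "standard"
            p_lean_right = p * (1 - q) / p_disagree  # Bayes update
        else:         # physician leans toward standard care
            p_disagree = (1 - p) * (1 - q) + p * q   # AI says "nonstandard"
            p_lean_right = (1 - p) * (1 - q) / p_disagree
        return p_disagree, p_lean_right

    # Compare a low-uncertainty case (prior 0.95) with a high-uncertainty
    # case (prior 0.60), holding the assumed AI accuracy at 80%.
    for p in (0.95, 0.60):
        d, r = consult_exposure(p, ai_accuracy=0.80)
        print(f"prior={p:.2f}: P(AI disagrees)={d:.2f}, "
              f"P(lean right | disagreement)={r:.2f}")

Under these assumed numbers, the confident physician hears a contradiction only 23% of the time, and even then remains 83% likely to be right, so a deviation is defensible. The uncertain physician hears a contradiction 44% of the time, and deviating after one would mean backing a call that is 73% likely to be wrong. If deviating from a consulted AI carries liability exposure, the high-uncertainty physician has the most to lose from asking.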


Monetary incentives could also come into play around the use of AI in surefire cases, if physicians can bill more for the use of AI technology “as an extra item,” Singh says. “Or perhaps if the doctor wants to defend a more aggressive treatment that is more expensive, he or she would consult AI and use the information to justify that decision.”

Either way, says Dai, “Instead of saying, ‘I am going to use AI to tell me something different,’ our model shows that physicians would use it — in fact overuse it — when they expect it to agree with their assessment. This means that AI is not being used to its fullest potential.”

Too ‘smart’ to ignore

The first part of their study examined physician decision-making under the current medical liability legal framework, known as the standard of care. But Dai and Singh are aware this standard may soon change in response to advances in technology.

In the second part of their paper, they model a proposed new standard of care, not yet in practice. This “new rulebook” is based on the premise that AI has advanced to the point that it is highly accurate. But here again, concerns around medical malpractice liability come into play.

“As AI systems become more accurate, physicians may have a stronger tendency to avoid using AI for certain patients,” the researchers note in their paper.

Singh explains the finding this way: “Once AI becomes very precise in telling what needs to be done for a patient, and the information is very likely accurate, it becomes very difficult for a doctor to consult AI and then discard the information,” since taking that step could come back to haunt the physician in a subsequent malpractice case, he says. “So to proactively protect themselves from legal liability, they may opt not to consult AI in the first place.”
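The stylized numbers from the earlier sketch illustrate the point: raising the assumed AI accuracy from 80% to 95% leaves even a confident physician (prior 0.95) at a coin flip after a disagreement, and an uncertain physician (prior 0.60) with only about a 7% chance of being right after one. Consulting a near-perfect AI effectively binds the physician to its recommendation, so the self-protective move is to avoid consulting at all.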

This “play it safe” approach, notes Dai, will slow progress in improving health care. “Advances in AI have the potential to change the way we practice medicine, for the better,” he says. “In an ideal world, physicians would use AI more in high-uncertainty cases, not just to confirm what they already know.”


Changing malpractice liability

Changing physician behavior around the use of AI will require nothing short of a sea change in how medical liability is defined, the researchers say. Until now, liability has been shaped almost entirely by the legal community. Singh says current liability laws take a one-size-fits-all approach, and that approach must shift to accommodate the complexities arising from the increased use of AI.

“We need to bring doctors, lawyers, patient advocates and AI developers together to shape a new legal environment that supports improved patient care,” says Dai. “That’s the way we will truly be able to add value to health care.”
