Top doctors limit number of tests they order to signal diagnostic prowess to peers
Some expert medical diagnosticians may limit the number of patient tests they order, as a way to signal a high level of competence to their peers, according to a Johns Hopkins University study.
These “high-type experts” engage in “undertesting” despite the growing availability of diagnostic techniques, including artificial intelligence tools that can assess a patient’s condition more accurately than past methods, say co-authors Tinglong Dai and Shubhranshu Singh, associate professors at the Johns Hopkins Carey Business School.
The doctors do care about their patients’ welfare, and they may genuinely believe that tests can burden patients with steep financial, physical, and emotional costs. But they are also concerned with maintaining their reputations as top-flight diagnosticians, Dai and Singh note.
“These doctors believe that they know what’s best and that testing isn’t always necessary. Testing has implications and may lead some of their peers to have less regard for their inherent diagnostic skills,” explains Dai.
Singh adds, “Past studies have shown an industry bias against doctors who perform numerous tests.”
“Low-type experts” – less experienced and not as well trained as high-type doctors – tend not to undertest, out of concern that they would miss important information. They recognize that the loss to patients’ welfare would be much larger if they skipped blood analyses, X-rays, ultrasound scans, and other tests, so they choose not to sacrifice patients’ welfare for the reputational benefit of being perceived as high-type experts.
The researchers’ findings are contained in the paper “Conspicuous by Its Absence: Diagnostic Expert Testing Under Uncertainty,” published in Marketing Science. The conclusions were based on a game-theoretic model created by the authors and supported by in-depth interviews with medical professionals.
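The separating logic behind the model can be illustrated with a toy numerical sketch. This is not the authors’ model; the payoff values below are hypothetical and chosen only to make the intuition concrete: skipping a test is worthwhile for an expert only if the expected cost to patient welfare is smaller than the reputational gain from being seen to undertest.

```python
# Toy sketch (hypothetical numbers, not the authors' model) of why
# undertesting can credibly signal diagnostic skill to peers.

def expert_payoff(skill_accuracy, orders_test, reputation_bonus=0.15,
                  test_cost_to_patient=0.05):
    """Expert utility = patient welfare - test burden + peer reputation.

    skill_accuracy:    probability the expert diagnoses correctly unaided.
    orders_test:       if True, diagnosis is assumed correct, but the patient
                       bears the test's financial/physical/emotional cost.
    reputation_bonus:  assumed peer-esteem gain from visibly undertesting.
    """
    if orders_test:
        patient_welfare = 1.0 - test_cost_to_patient  # test settles the question
        reputation = 0.0                               # no signal is sent
    else:
        patient_welfare = skill_accuracy               # rely on unaided judgment
        reputation = reputation_bonus                  # peers infer high skill
    return patient_welfare + reputation

HIGH, LOW = 0.95, 0.75  # hypothetical unaided accuracies

for label, acc in [("high-type", HIGH), ("low-type", LOW)]:
    u_skip = expert_payoff(acc, orders_test=False)
    u_test = expert_payoff(acc, orders_test=True)
    choice = "skips the test" if u_skip > u_test else "orders the test"
    print(f"{label}: skip={u_skip:.2f}  test={u_test:.2f}  -> {choice}")

# With these numbers the high-type expert prefers to skip (0.95 + 0.15 > 0.95),
# while the low-type prefers to test (0.75 + 0.15 < 0.95), so undertesting
# separates the two types -- the intuition described in the article.
```

In this illustration, the low-type expert loses more patient welfare by guessing than the reputational bonus is worth, so only the high-type expert finds undertesting attractive, which is what makes the signal informative to peers.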
Most investigations of medical diagnostics have focused on high levels of testing, particularly in the context of financial incentives linked to testing. Dai and Singh’s paper differs by examining the motives behind low levels of testing. As the authors write, “Undertesting has emerged as an important source of misdiagnosis but has not received due attention from either the public or the health care community.”
Experts’ reluctance to perform tests becomes more problematic as AI-enabled diagnostic tools “are set to transform much of the health care sector [by] leveraging big data and deep learning to aid physicians in reaching more precise diagnosis,” the authors write.
Yet there are situations in which highly skilled doctors cannot signal their diagnostic ability through undertesting. These occur when the reputational payoff is either very large (for example, in a specialty in which doctors depend heavily on peer referrals) or very small (as in a specialty in which doctors rely less on peer referrals). In the first case, low-type experts are tempted to undertest as well, so skipping tests no longer distinguishes the truly skilled; in the second, high-type experts have little reason to forgo testing in the first place.
One possible way to discourage undertesting would be to offer physicians financial incentives to order tests, the authors suggest. They add, however, that top doctors would likely view accepting such incentives as a poor reflection on their diagnostic skills and would decline them.
Undertesting has at least one positive effect, Dai says.
“You could argue that it helps by making various doctors’ skill levels known among their peers,” he adds. “But given what’s at stake in health care, maybe we should worry more about patients and their outcomes rather than how doctors make their reputations known to one another.”