Professor Tinglong Dai explores whether AI tools serve patients best as gatekeepers or as second opinions for doctors. The answer depends on patient risk.

AI in health care: When it should lead—and when it should follow
Artificial intelligence tools are rapidly gaining traction in health care, with 1,247 AI systems cleared by the U.S. Food and Drug Administration as of May 2025, notes Johns Hopkins Carey Business School Professor Tinglong Dai.
Most of these AI systems are for screening or diagnostic purposes. In practice, they can be used as either “gatekeepers” or second opinions. “There are two contrasting narratives, and we really need to figure out the optimal sequencing—about how best to integrate AI into health care workflows,” says Dai, the Bernard T. Ferrari Professor of Operations Management and Business Analytics.
It's a question with important ramifications for all patient populations, including people living in low-resource settings where access to health care—particularly specialty care—is extremely limited and optimizing health resources is essential, he notes.
To help address the workflow question, Dai and collaborator Simrita Singh, of Santa Clara University, modeled a health care system in which patients can consult a specialist, an AI system, or both. Using a two-step decision-making process, they zeroed in on whether the patient should first consult AI or the specialist. Their findings, presented in a working paper, show that the answer differs depending on patients’ health issues.
“In general, the gatekeeper approach is preferable for low-risk settings, whereas the second opinion approach is better suited for high-risk patients for whom avoiding missed diagnoses is a primary concern,” says Dai.
Perhaps most surprisingly, the researchers found that AI should not be used for “intermediate risk” patients—those for whom the uncertainty of their condition is highest. “This challenges the premise that AI is most useful for reducing uncertainty,” Dai notes.

The impact of ‘anchoring’
Key to the team’s modeling is the “anchoring effect,” in which an initial signal or reference point influences subsequent estimates and decisions. “This is relevant because the order in which the specialist and AI see the patient affects the physician’s initial impression,” the team notes: the first diagnosis becomes an anchor that can bias later assessments. “Thus, if AI screens first, the specialist may anchor to that initial result, even if it is inaccurate. Conversely, if the specialist examines the patient first, his initial diagnosis may anchor AI’s algorithmic assessment.”
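As a rough illustration of the mechanism (a simplification with hypothetical weights, not the paper’s formal model), anchoring can be sketched as a weighted pull of the second reader’s estimate toward the first reading:

```python
# Illustrative sketch of anchoring bias (hypothetical weights; not the
# paper's formal model): the second reader's probability estimate is
# pulled toward whatever the first reader concluded.

def anchored_estimate(own_signal: float, anchor: float, anchor_weight: float = 0.3) -> float:
    """Blend a reader's own disease-probability estimate with the prior
    reading; anchor_weight = 0 means no anchoring, 1 means copying it."""
    return anchor_weight * anchor + (1 - anchor_weight) * own_signal

# If AI screens first and (wrongly) reports a 0.9 probability of disease,
# a specialist whose independent read would be 0.2 drifts upward:
print(anchored_estimate(own_signal=0.2, anchor=0.9))  # 0.41
```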
Their modeling found that “because AI systems are generally more sensitive than humans, they are less likely to miss a positive case, but they are more likely to flag something as positive even if nothing is wrong,” Dai says. Thus, using AI as a gatekeeper could lead to false-positive diagnoses and unnecessary treatments.
Second, the duo showed that using AI as a second opinion leads to fewer false-negative diagnoses than using AI as a gatekeeper or not using AI at all. “This idea—that the specialist can use AI as a second opinion to fill in gaps and reduce human oversight—is a key insight of our work,” Dai says.
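Both effects can be seen in a minimal Monte Carlo sketch of the two sequencings. Every number below (prevalence, sensitivity, specificity, anchoring weight) is invented for illustration; the actual decision model is in the working paper.

```python
# Minimal simulation of the two sequencings (all parameters hypothetical;
# the anchoring rule is a simplification, not the paper's formal setup).
import random

random.seed(0)
PREVALENCE = 0.05
AI_SENS, AI_SPEC = 0.95, 0.80    # AI: more sensitive, less specific
DOC_SENS, DOC_SPEC = 0.85, 0.95  # specialist: the reverse
ANCHOR_W = 0.3                   # pull of the first reading on the second

def call_prob(is_sick, sens, spec, anchor=None):
    """Probability this reader calls 'positive'; optionally anchored
    toward a prior positive (anchor=1.0) or negative (anchor=0.0) call."""
    p = sens if is_sick else 1 - spec
    if anchor is not None:
        p = ANCHOR_W * anchor + (1 - ANCHOR_W) * p
    return p

def simulate(n=200_000):
    fn = dict.fromkeys(("gatekeeper", "second_opinion", "no_ai"), 0)
    fp = dict.fromkeys(("gatekeeper", "second_opinion", "no_ai"), 0)
    for _ in range(n):
        sick = random.random() < PREVALENCE

        # Gatekeeper: AI screens everyone; the specialist, anchored to
        # the AI result, sees only the AI-positive cases.
        ai_first = random.random() < call_prob(sick, AI_SENS, AI_SPEC)
        gate = ai_first and (random.random() < call_prob(sick, DOC_SENS, DOC_SPEC, anchor=1.0))

        # Second opinion: the specialist reads everyone; AI re-checks the
        # specialist's negatives, anchored to that negative call.
        doc_first = random.random() < call_prob(sick, DOC_SENS, DOC_SPEC)
        second = doc_first or (random.random() < call_prob(sick, AI_SENS, AI_SPEC, anchor=0.0))

        no_ai = random.random() < call_prob(sick, DOC_SENS, DOC_SPEC)

        for path, positive in (("gatekeeper", gate),
                               ("second_opinion", second),
                               ("no_ai", no_ai)):
            fn[path] += int(sick and not positive)
            fp[path] += int((not sick) and positive)
    return fn, fp

fn, fp = simulate()
print("missed cases (false negatives):", fn)  # fewest under second opinion
print("false alarms (false positives):", fp)  # both AI pathways exceed no-AI
```

With these made-up parameters, the second-opinion pathway misses the fewest sick patients, while both AI pathways flag more healthy patients than the specialist working alone.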
Clinical context matters
Taken together, Dai says these findings suggest that whether AI should be used as a gatekeeper or as a second opinion “depends on whether avoiding false negatives or false positives is more important in a given clinical context.”
In general, he notes, using AI as a gatekeeper is most beneficial for low-risk patients, for whom the downside of false negatives is least severe. Into this camp might fall large-scale screening efforts for diabetic retinopathy, for example, where AI can screen out the healthy majority of patients so that human specialists can focus on the small proportion likely to have this eye condition, which is a complication of diabetes and typically develops slowly. In contrast, the duo found that using AI as a second opinion is optimal for high-risk patients, for whom missing a diagnosis has serious consequences, such as in the case of cancer.
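To make the gatekeeper logic concrete, a back-of-envelope calculation with invented numbers (not from the study) shows how AI screening shrinks the specialist workload while missing only a handful of slow-developing cases:

```python
# Back-of-envelope workload math for AI-first screening (all numbers
# hypothetical): how many of 10,000 screened patients reach a specialist?
patients, prevalence = 10_000, 0.01
ai_sens, ai_spec = 0.95, 0.80

sick = patients * prevalence
ai_positives = sick * ai_sens + (patients - sick) * (1 - ai_spec)
print(ai_positives)          # 2075.0 reads for specialists, down from 10,000
print(sick * (1 - ai_sens))  # 5.0 slow-developing cases missed at this pass
```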
When it comes to patients who fall into the “intermediate” range, with conditions that are the most uncertain, the team’s theoretical result found that physician-AI collaboration may sometimes yield worse outcomes than physician decision-making alone. That’s because physicians in uncertain settings are more apt to dismiss the AI system’s findings if they differ from their own, Dai explains.
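The intuition can be stated with a toy calculation (invented accuracy figures; this is not the paper’s analysis): if the physician overrides the AI whenever the two disagree, the joint decision collapses to the physician’s own call, so the AI adds nothing; and if an error-prone AI anchor still shifts the physician’s read, the pair can end up worse than the physician alone.

```python
# Toy calculation (invented numbers, crude linear model of anchoring):
# in the uncertain middle range, both readers are only modestly accurate.
doc_acc, ai_acc, anchor_w = 0.65, 0.60, 0.2

# If the physician dismisses the AI on every disagreement, the final
# call is always the physician's own, so collaboration adds nothing:
print(doc_acc)                                       # 0.65

# If anchoring nonetheless pulls the physician toward the less-accurate
# AI read, effective accuracy drops below the physician working alone:
print((1 - anchor_w) * doc_acc + anchor_w * ai_acc)  # 0.64
```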
Defining the best pathway
As the use of AI in health care continues to rise dramatically, Dai says their research, which aims to reduce the cost of access and improve the quality of health care, provides “practical pathways” for integrating AI into routine medical decision-making, particularly in places where resources are extremely limited.
“Our work also advances the understanding of the optimal sequencing of human and AI interventions in service operations, drawing a parallel to optimal design problems in supply chain and service operations,” the researchers note in their paper.
As technology advances, Dai says he sees a role for AI to additionally act as a “pathfinder” for the optimal patient pathway, and his work includes a prototype for just that. “Given a patient’s demographic group, medical history, and other factors, an AI program could be written to figure out which pathway—AI first, clinician first, or no AI—is best,” he says.
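A sketch of what such a pathfinder might look like appears below; the risk score, thresholds, and patient fields are all invented for illustration and do not come from Dai’s prototype.

```python
# Hypothetical sketch of a "pathfinder" router (not Dai's prototype):
# map an estimated pre-test risk to one of the three pathways the
# research identifies. The risk model and thresholds are invented.
from dataclasses import dataclass

@dataclass
class Patient:
    age: int
    has_family_history: bool
    prior_abnormal_result: bool

def estimated_risk(p: Patient) -> float:
    """Stand-in risk score; a real system would use a validated model."""
    score = 0.02
    score += 0.10 if p.age >= 60 else 0.0
    score += 0.15 if p.has_family_history else 0.0
    score += 0.30 if p.prior_abnormal_result else 0.0
    return min(score, 1.0)

def recommend_pathway(p: Patient, low=0.10, high=0.40) -> str:
    """Low risk -> AI gatekeeper; high risk -> AI second opinion;
    intermediate risk -> clinician alone (no AI)."""
    r = estimated_risk(p)
    if r < low:
        return "ai_first"         # AI screens; specialist confirms positives
    if r >= high:
        return "clinician_first"  # specialist leads; AI gives a second opinion
    return "no_ai"                # intermediate risk: clinician alone

print(recommend_pathway(Patient(age=45, has_family_history=False, prior_abnormal_result=False)))  # ai_first
print(recommend_pathway(Patient(age=67, has_family_history=True, prior_abnormal_result=True)))    # clinician_first
```

The routing rule simply encodes the research finding: AI leads for low-risk patients, follows for high-risk ones, and stays out of the loop in between.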