Fault lines in health care AI – Part four: Mapping the web of trust in AI-assisted health care

Why it matters:

This five-part series examines the complex impact of artificial intelligence on medicine by exploring the ethical, legal, and organizational tensions surfacing in the new AI-enabled medical landscape. Part four focuses on the trust that must hold among patients, doctors, and AI tools.

As assistive AI tools grow in popularity in clinical settings—from diagnostics to decision support—they promise a future of fewer errors, faster workflows, and reduced cognitive burdens for health care providers. But behind these promises lie high-stakes trade-offs.

“Fault lines and power struggles in health care AI” is a five-part series examining the complex impact of artificial intelligence on medicine. This series explores the ethical, legal, and organizational tensions surfacing in the new AI-enabled medical landscape.

We’ll ask: Who is liable when AI gets it wrong? What are the unintended consequences for physician well-being? How is trust formed or broken? And what policies can ensure that AI supports, rather than undermines, the human side of care?

In the first three parts, we explored how AI tools intended to ease physician burdens may actually increase them by introducing new implementation hurdles and legal ramifications, and how those challenges contribute to burnout. Part four examines the essential avenues of trust required for a well-functioning system of AI in health care.

Part four: Mapping the web of trust in AI-assisted health care

Artificial intelligence is no longer on the margins of medicine. More than a thousand AI-enabled tools have earned FDA clearance, and hospitals across the country are deploying them to help diagnose diseases, guide treatments, and streamline operations. But for all the technical progress, one human factor still determines whether AI succeeds or fails: trust.

In a recent perspective, “Trust in AI-assisted health systems and AI’s trust in humans,” published in npj Health Systems, Johns Hopkins Carey Business School Professors Tinglong Dai, Michael Darden, and Mario Macis, along with Hopkins Business of Health Initiative Graduate Academy mentee Madeline Sagona, argue that trust shapes every step of AI adoption.

“It runs both ways,” says Dai, the Bernard T. Ferrari Professor at Carey. “Patients and clinicians need to trust AI tools, but those tools also depend on the quality of human inputs. If one link in the trust chain breaks, the entire system suffers.”

Trust is tricky

When patients trust their clinicians, they’re more likely to initiate care early, follow through on treatments, and report better outcomes. AI can help strengthen that bond, especially when it frees doctors from screens and lets them focus more on face-to-face care. Patients respond well when physicians are present and attentive rather than toggling between windows on their computers.

But that same technology can also raise concerns. If patients feel that doctors are relying more on machine output than their clinical judgment, trust may erode. For historically underserved communities—especially Black, Hispanic, and Native American patients—this skepticism runs deeper. Long-standing inequities have left many starting from a place of mistrust. If a Black patient receives a misclassified risk score from an opaque algorithm, the error causes medical harm and deepens systemic doubt.

Clinicians, too, have reasons to hesitate. They want AI that is accurate, validated on the right populations, and clear about who’s responsible if something goes wrong. But legal and ethical uncertainty creates tension. U.S. malpractice law encourages clinicians to follow the “standard of care.” As AI adoption grows, that standard is shifting. Ignoring a high-performing model could expose a doctor to liability—yet blindly following a flawed recommendation could do the same. This legal ambiguity discourages uptake and fuels burnout. 

Clinicians worry that the system is quick to push AI tools but slow to support the people using them. Health systems that include providers in selecting, monitoring, and refining AI tools—not just mandating them from above—tend to earn more confidence. Trust grows when clinicians believe not only in the tool’s performance, but also in the motivations behind its use.

Trust in the broader health system is equally fragile. Most people engage with that system not through doctors, but through bills, insurance denials, and long wait times. If AI is seen mainly as a way to speed up claims denials or maximize revenue, people will assume it serves insurers, not patients. Systems that want to rebuild faith should focus on transparency. Publishing independent audits, sharing clinical performance data, and demonstrating that AI is improving patient outcomes—not just financial ones—sends a far stronger and more credible signal.

Supporting judgment, not replacing it

Clinicians are at the center of the web of trust. They must trust the AI tools they use, but also the organizations that deploy them. If AI is introduced without consultation, or if its implementation feels like surveillance rather than support, trust crumbles. Doctors begin to see AI not as a partner, but as a threat to autonomy and judgment.

Hospitals and health systems that involve clinicians in AI selection, training, and evaluation tend to fare better. When clinicians understand how an algorithm was developed, what populations it was trained on, and how it will be monitored, they’re more likely to use it confidently and appropriately. The key is alignment. Clinicians need to see that leadership is using AI to support judgment, not to monitor or replace it.

Even AI systems themselves rely on trust—just in reverse. Algorithms don’t operate in a vacuum. They learn from the data and feedback they’re given. If that information is biased, incomplete, or ignored, performance degrades. 

Consider the problems that come with poor training data and a lack of feedback. If care for certain populations has historically been underfunded, the data gathered about them will be sparse, and the resulting datasets will be unbalanced and may encode racial bias. Then, if clinicians override AI alerts but those corrections aren’t captured, the model can’t learn. These are two distinct issues: the quality of the training data and the structure of the feedback loop. Addressing both matters. We need systems where clinician corrections update the model, and where clinicians get clear, timely information about how the AI is learning. When that feedback loop works, AI becomes a better and more trusted partner.
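To make the feedback-loop half of that concrete, here is a minimal sketch in Python of how clinician overrides might be captured and folded back into a model. Everything in it is illustrative rather than drawn from the paper: the alert model, its `update` method, and the override log are assumptions.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class OverrideRecord:
    """One case where the clinician's action diverged from the AI's recommendation."""
    patient_features: dict       # inputs the model saw
    model_recommendation: str    # what the AI suggested
    clinician_action: str        # what the clinician actually did
    timestamp: datetime = field(default_factory=datetime.now)

class FeedbackLoop:
    """Hypothetical wrapper that treats overrides as labeled training signal."""

    def __init__(self, model):
        self.model = model
        self.pending: list[OverrideRecord] = []

    def record_decision(self, features: dict, recommendation: str, action: str):
        # Log every disagreement instead of silently discarding it.
        if action != recommendation:
            self.pending.append(OverrideRecord(features, recommendation, action))

    def retrain_if_needed(self, min_examples: int = 100):
        # Once enough corrections accumulate, fold them back into training
        # and return a small report that can be surfaced to clinicians.
        if len(self.pending) >= min_examples:
            X = [r.patient_features for r in self.pending]
            y = [r.clinician_action for r in self.pending]
            self.model.update(X, y)   # hypothetical incremental-update method
            report = {"corrections_used": len(self.pending)}
            self.pending.clear()
            return report
        return None

if __name__ == "__main__":
    class StubModel:
        def update(self, X, y):
            print(f"retraining on {len(X)} corrections")

    loop = FeedbackLoop(StubModel())
    loop.record_decision({"age": 67}, recommendation="treat", action="watch")
    print(loop.retrain_if_needed(min_examples=1))
```

The design choice worth noting is that overrides are treated as labeled examples rather than noise, and the retraining step reports back what it used, so clinicians can see that their corrections actually reached the model.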

So, how do we assess trust within this system? We look at behavior. Are patients showing up for care and sticking with it? Are doctors routinely overriding AI recommendations, or are they increasingly relying on them? Are patients staying with a provider, or switching, complaining, or opting out entirely? Are clinicians burning out or quitting, feeling that the technology meant to help them is dragging them down? Is the AI itself improving over time, incorporating feedback, and adjusting to the real world?

These trends are measurable. And they should be stratified—not just overall, but by race, gender, socioeconomic status, and geography. That’s how we turn trust from an abstract goal into a concrete design principle.
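As a minimal sketch of what “measurable and stratified” could look like in practice, the snippet below computes one such behavioral signal, the clinician override rate, both overall and by patient group, using pandas. The toy data and column names are assumptions for illustration, not measures proposed in the paper.

```python
import pandas as pd

# Illustrative only: one behavioral trust signal (how often clinicians
# override AI recommendations), stratified by patient group.
decisions = pd.DataFrame({
    "race":                 ["Black", "White", "Hispanic", "Black", "White"],
    "model_recommendation": ["treat", "watch", "treat",    "watch", "treat"],
    "clinician_action":     ["watch", "watch", "treat",    "watch", "treat"],
})

decisions["overridden"] = (
    decisions["model_recommendation"] != decisions["clinician_action"]
)

# Large gaps between groups are a warning sign worth investigating,
# not proof of bias on their own.
overall_rate = decisions["overridden"].mean()
by_group = decisions.groupby("race")["overridden"].mean()

print(f"Overall override rate: {overall_rate:.2f}")
print(by_group)
```

The same pattern extends to the other signals above, such as appointment adherence, provider switching, and clinician attrition, each broken out by the same strata.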

What’s next?

Trust is not a feature—it’s a design principle. It must be engineered into every aspect of AI implementation, from patient interfaces to clinical workflows to algorithmic training cycles. Without trust, AI adoption will falter. With it, the technology can become a transformative ally in delivering care.

In the final part of our series, we’ll consider what guiding policies and best practices can shape a future where AI in medicine truly serves everyone: safely, fairly, and compassionately.

Fault lines in health care AI

Part one: The high stakes of AI in medicine
Part two: Who’s responsible when AI gets it wrong?
Part three: AI, the Hippocratic Oath, and burnout

Contributors:

Tinglong Dai is the Bernard T. Ferrari Professor at the Johns Hopkins Carey Business School, specializing in Operations Management and Business Analytics. He holds a joint faculty appointment at the Johns Hopkins School of Nursing. He is a member of the Johns Hopkins University Council and serves on the leadership team of the Hopkins Business of Health Initiative. He also co-leads the University’s Bloomberg Distinguished Professorship Cluster on Global Advances in Medical Artificial Intelligence.


Michael Darden is an associate professor at the Carey Business School. He is also a research faculty fellow at the National Bureau of Economic Research and a co-editor at the Journal of Human Resources. Darden is the Academic Program Director of the Health Care Management program.


Mario Macis is a professor of Economics at the Johns Hopkins Carey Business School. He is also core faculty at the Hopkins Business of Health Initiative, affiliate faculty at the JHU Berman Institute of Bioethics, and faculty research fellow at the National Bureau of Economic Research and the Institute of Labor Economics.
