

What does it take to lead the implementation of reliable AI in health care delivery?

Why it matters:

Between ethics and outcomes is implementation—an element of artificial intelligence few people are talking about.

There is plenty of conversation about what AI is capable of doing in health care settings, and the pros and cons of using it. But how to implement it—when, why, and for whom—is a topic the C-suite hasn’t explored as much, according to two Johns Hopkins Carey Business School faculty.

Professors Chris Myers and Tinglong Dai joined the Washington Post Live for its Post Next: AI & Health event in Washington, D.C., an opportunity for health care leaders to hear from experts on growth, governance, and operations in the field. Despite the early hour, there was a buzz in the room at the Washington Post headquarters. 

The gap 

Balancing the innovation of rapidly evolving AI systems with the responsibility of safe, ethical treatment requires users in health care settings to place some faith not just in the AI system itself, but in the people implementing it. Myers and Dai examined that trust factor from all angles, exploring the complexities of rolling out health care AI systems ethically, strategically, and carefully.

Myers, director of Carey’s Center for Innovative Leadership, identified the gap in the AI conversation.

“We talk a lot about what goes into the development of these AI tools—‘Will frontline providers make use of these tools? Will patients be receptive to them?’” he said. “But then there’s this massive gap in the middle of how we get from one end to the other, and that's all about how these tools get procured, how they get implemented, how they get calibrated.”

Myers and Dai discussed how someone must be responsible for that procurement, implementation, and calibration, and how it falls to us to trust that the person overseeing the process, often the chief technology officer, is doing so carefully.

“In terms of CTO … the T really stands for trust, because trust is just front and center in health care,” said Dai, Carey’s Bernard T. Ferrari Professor of Business.

Children’s National Hospital Executive Vice President and CIO Matt MacVey underscored this when he shared that AI has changed the focus of his role.

“My job is less technology, more change management,” MacVey said. “It is incredibly sophisticated in the work we need to do from a process change perspective. We're going into an environment that does exist, and we're adapting workflows for this new reality and new possibility that AI can bring to us.”

Responsible implementation

Myers and Dai shared how their research explores concrete topics like recalls of AI-assisted medical devices and nebulous concepts like peer perception among doctors who use AI in decision-making. 

Referring to randomized trials at Johns Hopkins, Dai said, “We found that the vast majority of physicians believe AI will help them do a better job at diagnosing patients, treating patients. However, when they see a doctor use AI, they tend to think that doctor is less competent and provides worse service than doctors who don't use AI as much.”

It all points to the need for strategic, evidence-based implementation as opposed to a rush to roll out AI tools based on promise alone. Dai emphasized the need for real-world evidence. 

“If AI is not tested on real people, then real people like us will become the test,” he said. “So we need to do more clinical trials about AI. There's no substitute for real-world evidence.”

Myers discussed how organizations can take on liability in the implementation of AI systems through training models and feedback, so that providers aren’t left bearing a “superhuman” burden.

“Are there policies, are there practices that we can roll out as organizations that are starting to proactively take on more of that risk?” Myers posited. He explained that we need to incorporate user feedback more regularly “so that we're not just introducing [AI] as a shiny new toy and leaving it to the providers to figure out how they're going to implement it and what risks they're taking.”

Addressing the pain points

Amir Dan Rubin, CEO and founding managing partner of Healthier Capital, discussed that “shiny new toy” excitement as it relates to investing in innovation. Rubin said investors have already poured $11 billion into health care AI systems. He explained that funders are responding to frustrations on all sides of the health care equation: consumers struggling with access and experience; providers feeling burned out; payers overwhelmed by costs; and systems that are inefficient for everyone.

“Being a patient here is a full-time job,” said Rubin. But he pointed out that AI is already making a difference in addressing these problems by automating patient communication, improving care coordination, reducing administrative burdens, and optimizing treatment planning. 

People have to be incentivized to make a change, he said, and in the case of adopting AI tools, it is imperative that developers answer these mission-driven questions: Does it improve health outcomes? Does it solve a problem? And is it economical in the long run?

Beyond those concerns, Rubin added, is fear. Silence hung over the room as he recalled a teenager’s recent suicide after the teen had consulted AI software multiple times about his plans. Moments like that, Rubin said, are why AI regulation matters. AI in mental health treatment is already under scrutiny following several notable state decisions and news events.

Establishing guardrails

Representative Ami Bera (D-California) agreed emphatically that we need to govern AI in health care. He laid out why safety regulations are a concern for Congress and how we need to balance innovation and responsibility.

“We are extremely vulnerable, as a world,” said Bera.

He gave the example of how social media didn’t start out with governance and oversight.

“We didn’t do that in social media, right? We just let social media kind of evolve. That was a mistake, and now we’re trying to go back and put up some of those guardrails, and it’s really hard. AI is evolving so quickly that we really do have to put up guardrails that protect from bad things happening, but also don't stifle innovation. And that's the sweet spot of legislation.”
