Experts explore challenges and policy implications for bringing AI to the health care sector
Will AI be the prescription for better health?
The TV series Star Trek envisioned a futuristic world where a holographic doctor driven by artificial intelligence could diagnose and treat any ailment on command. We are unlikely to replace doctors with computer programs, but AI will inevitably make doctors more insightful and productive. Soon, we could have AI algorithms that assist or even augment many levels of clinical decision-making. In fact, AI-powered assistants have helped doctors interpret mammograms and other medical imaging for the last 20 years.
Aside from the technical challenges of developing AI-based health care tools, fundamental questions remain as to how these technologies will be regulated and adopted in the real world. Questions such as: What problems will AI solve? Will AI applications improve outcomes, reduce costs, or create efficiencies? Will they enhance human decision-making or replace it? Will these AI technologies be applicable across multiple settings, and will they be integrated with other systems or work alone?
With new technologies on the horizon, Johns Hopkins Carey Business School’s Alonzo and Virginia Decker Professor Phillip Phan, and Johns Hopkins School of Medicine Professor of Anesthesia and Critical Care Medicine James Fackler, set out to explore some of the challenges and policy implications for bringing AI to the health care sector. The result was the day-long Symposium on the Regulatory Future of AI Medical Devices, which brought together an impressive group of experts from government, academia, and industry. Their discussions often raised as many questions as answers.
Big data challenge
The health care system is built on virtually limitless amounts of data from lab tests, diagnostic tools, wearable devices, medical records, and patients themselves. It is challenging for AI developers and regulators to make use of the volumes of information.
“The data exists. It's out there, it's just not integrated, it's not correlated, it's not in a usable form for really anyone right now,” said Jen Kuskowski, who heads government affairs in the Americas for Siemens Healthineers. “It's segregated, it's siloed, and that's what creates the challenge. I think AI has a huge potential and opportunity to help, but there have to be guardrails.”
Accuracy and applicability remain open questions. Several symposium speakers expressed concerns that longstanding racial and socioeconomic biases could influence data. For example, research demonstrates that darker-skinned patients receive higher rates of false readings from pulse oximeters, the devices that measure blood oxygen levels.
AI in the doctor’s office
If and when more AI-assistive technologies become available, it will be up to clinicians to put them into practice.
In his keynote remarks, American Medical Association Vice President of Strategic Insights Christopher Khoury said the health care community faces record levels of burnout, in part from what he calls a “collection failure,” which he describes as a failure to generate quality and productivity improvements compared to other business sectors.
According to Khoury, physicians want to know whether new AI technologies will work and understand the evidence supporting these algorithms. They also want to know their liability from using an AI tool, and how they will be compensated for using AI. Plus, AI-using hospitals and health systems have their own organizational cultures and hierarchies.
“We can't think of AI systems as just technologies and tools out there,” he said. “We have to think of them broadly as sociotechnical systems. We know the design, deployment, and development of AI needs to be done with physicians, not to physicians.”
But getting physicians and health care systems squared away won’t eliminate all the barriers. Khoury noted that new technologies won’t be helpful if patients and physicians don’t trust them, or if they become another burden for health professionals to carry. That’s a problem because patients struggle to adopt new technologies.
“I don't know how many of you or [your] elderly parents or grandparents that have struggled to log into a patient portal. It's really hard,” he said.
Insights from innovators
Siemens Healthineers Vice President and General Manager for North American Digital and Automation Peter Shen shared perspectives on how security shapes the development and scaling of AI-based tools, from data management to accuracy to protection.
“Within Siemens Healthineers, we've created a best practice where we've established an office of big data, and their sole function is to make sure that all this governance and stewardship is in place before we actually interact with any of that data in terms of algorithm training,” Shen explained. “They also have the ability to essentially police who has access to that data.” Shen said it is also essential that clinicians understand how an AI tool works and how it reaches its conclusions.
Of course, this is the business of health care, so there’s a question of billing. AI companies and clinicians will need to be compensated. Shen’s team has been advocating for a temporary payment pathway for algorithm-related services based on manufacturer’s costs.
“If we're able to create some sort of predictable and consistent reimbursement around this specific type of artificial intelligence, we see this as an opportunity to really drive adoption and also spread that adoption,” said Shen, “not just in large academic medical centers, but quite frankly to where health systems might benefit the most, which is in the rural and underserved populations as well.”
The regulation question
Regulation is another challenge. Carey alumna Therese Canares (MD/MBA ’21) is a pediatrician with Johns Hopkins Medicine and co-founder of Curie DX, a startup working on ways to predict strep throat using smartphone images. She says the current regulatory environment makes it difficult for small companies and “bootstrap” innovators to develop useful projects and services. She wants more clarity from the FDA.
“It determines what approach we take with our limited set of resources, how we position our product, and where we strategize with designing our AI and our outputs so that it can meet the criteria and go through a regulated pathway that is feasible for our business model,” she said.
The Johns Hopkins Nexus Awards initiative funded the Shaping the Future of AI Medical Devices conference. The Nexus Awards support collaborative projects and programs for the Johns Hopkins University Bloomberg Center in Washington, D.C.