
Human values are key to creating AI that will benefit humanity

Why it matters:

While the future of artificial superintelligence remains speculative, a Hopkins Business of Health Initiative webinar addresses the need to head off dangerous possibilities by building human-centered foundations for advancing technology, especially in health care-related AI.

The speculative nature of artificial intelligence can be a projection of imagination and hope, painted with what-ifs and wishcasting. Science fiction imagines superintelligent AI in the form of Stanley Kubrick’s mission-cursed HAL 9000, Iron Man’s helpful sidekick Jarvis, or the power-hungry, cloud-based Entity of Mission: Impossible, while OpenAI CEO Sam Altman imagines a world covered with data centers—or perhaps data centers orbiting the planet—and culture and tech writer Max Read describes an endless river of AI slop already copying itself.

AI experts do not yet understand when, if, or how an artificial superintelligence may emerge, but investment in and use of AI across the workplace is a reality of doing business today, and daily use of task-based AI support is commonplace. Whether or not a true ASI ever arrives, current iterations of AI are already changing the face of business, and the future may look nothing like the version we imagine. What is clear is that the potential for superintelligent AI demands pragmatic, proactive approaches that keep humanity in mind as the foundations of AI and new technology are laid.

To explore the need for human-centered approaches to AI, the Hopkins Business of Health Initiative brought together experts in the field for “Artificial Superintelligence and the Future of Healthcare,” part of the initiative’s monthly webinar series Conversations on the Business of Health. The series brings together national leaders and HBHI’s resident experts to advance emerging business of health topics.

Carey’s Bernard T. Ferrari Professor Tinglong Dai and Johns Hopkins Medicine Associate Professor Risa Wolf moderated the discussion around the current state of AI in health care and the preparations necessary to ensure a healthy and positive path toward artificial superintelligence. Guest speakers included Professor Bryant Y. Lin, director of Medical Humanities and Arts at Stanford’s Center for Biomedical Ethics, and Mount Sinai’s Girish N. Nadkarni, director of the Hasso Plattner Institute for Digital Health and the hospital’s chief AI officer.

The panelists discussed how advanced technology and AI are already supporting health care workers, while addressing the need for structural, human-centered oversight and the ways reliance on AI can create a gap between knowledge and experience. Throughout the conversation, a clear consensus emerged: preparing for the advent of ASI requires early planning and work to make humanity and morality the foundation of all AI systems, especially those that will drive health care.

Installing guardrails

Both Nadkarni and Lin stressed the importance of health care providers getting AI right, though Nadkarni argued that providers have so far failed to hardcode human values into their systems. Lin agreed with this concern, claiming that current systems are not built for AI readiness. The speed at which AI develops does not reflect the kind of deliberate thinking both Nadkarni and Lin argue will be necessary for beneficial AI.

“We don’t know how society will change as we have advanced more in AI,” Lin said. “I think we need to be consciously embedding human values and human-centered care in whatever new technology is really important.”

According to the speakers, identifying and implementing the structural changes necessary for the advancement of AI requires tech leaders and policymakers to take responsibility for building guardrails that hardcode humanity into the rules and applications of AI. That shift also requires users to see AI as a tool rather than a replacement: people must learn to use it responsibly in ways that build and maintain critical thinking muscles, muscle memory, and personal knowledge.

Nadkarni and Lin caution that many people already depend on AI and accept the technology in their everyday lives; in prioritizing efficiency, AI systems have already created potential gaps through de-skilling or mis-skilling. For example, an AI trained on the most common illnesses, diagnoses, and treatments could free up physicians to focus on more complex cases. Lin argues, however, that this seemingly beneficial outcome could also create a gap in experience, and that learned skills could eventually be forgotten. Although technological support may make those skills unnecessary day to day, they remain foundational for physicians.

Lin argues that health systems should address the importance of maintaining the skills and knowledge that may vanish under over-reliance on AI, especially for diagnosis. Physicians must trust their own critical thinking and evaluation processes when engaging with an AI recommendation and be able to recognize when that recommendation is wrong; the physician should serve as a filter before a diagnosis or treatment reaches the patient. Nadkarni’s concerns mirror this sentiment, especially regarding health care education.

“We haven’t changed medical education in a while, but I think we now need to incorporate decision sciences, AI literacy, [and] human-centered design empathy all into medical education because the role of the doctor is going to evolve,” Nadkarni said.
