Gordon Gao’s research seeks to quantify the relationship between online ratings and the underlying quality of physicians, and to understand how online ratings shape consumer choice.
Making online physician reviews more effective
When Gordon Gao moved to College Park, Maryland, from Philadelphia in 2005, the newly minted Wharton School PhD needed to find a pediatrician and primary care physician for his young family. With no family or close friends in the area to ask for advice, Gao turned instead to his health insurance company’s website.
“There were hundreds of physicians I could choose from, but they all looked the same. Other than providing their names, educational background and languages, the listing didn’t provide any helpful information,” Gao recalls. “But we all know that doctors are not equal and there is a huge variation in quality. So how do you find a good doctor?”
To answer that question, Gao turned to the then nascent tool of online doctor reviews — ultimately setting him on a path of academic research that is yielding key insights for patients, physicians, health insurance providers, and online physician rating platforms.
The impact of online ratings
Gao joined the Johns Hopkins Carey Business School in August 2022 and is co-director of the school’s new Center for Digital Health and AI (CDHAI) with colleague Ritu Agarwal, the William Polk Carey Distinguished Professor. Together they have published wide-ranging analyses that explore crucial questions: How useful are online ratings in predicting physician quality of care? How do online ratings, positive and negative, shape consumer decision-making? And how can online review systems be designed to guide patient decision-making most effectively?
The answers he and his Carey Business School colleagues are finding could have critical implications for the nation’s health care system, which is projected to reach $6.2 trillion in spending by 2028, according to the Centers for Medicare and Medicaid Services.
When Gao embarked upon his first major study in 2010, “online ratings for physicians were fairly new,” he says, “and research into online consumer reviews focused mostly on goods and services — from movies to video games to beer — rather than professional services. There were significant gaps in our understanding of the nature of these ratings.”
In a 2015 study published in JAMA Internal Medicine, one of the first efforts to quantify the relationship between online ratings and the underlying quality of physicians, Gao and co-authors analyzed a sample of 1,299 primary care physicians. The researchers examined quality measures drawn from American Board of Internal Medicine practice improvement modules and patient survey responses, and compared them with physician ratings drawn from eight free, publicly available ratings websites.
“We found literally no correlation at all between physician website ratings and clinical quality,” Gao says. They did, however, find a small correlation between online ratings and measures of “patient experience.” “Patient experience looks at how well a doctor communicates with you; how warm they might be. So, if you want to find a ‘nice’ doctor, our study found that online reviews could be helpful — but whether or not a physician is nice is probably not always a good reflection of his or her clinical competence.”
In a subsequent study published in 2021 in Management Science, Gao, Agarwal, and Aishwary Deep Shukla of Simon Fraser University in British Columbia sought insights into consumer decision-making by examining the effects of online “word-of-mouth” data. The study, which analyzed clickstream data from a leading online doctor-appointment booking platform in India, has important implications for both doctors and the creators of online platforms.
“We found that when the abundance of word-of-mouth data is high, consumers consider fewer doctors, browse for a shorter duration, and focus on doctors who are geographically more proximate,” the researchers conclude. In contrast, when word-of-mouth data is scarce, prospective patients browse for longer and consider doctors who are farther away.
“This suggests that in order to give customers a better experience and possibly to promote social welfare, platforms planning to launch should aim to create a market that is as word-of-mouth abundant as possible from the start,” Gao says.
Creating a bottleneck, stacking the deck
Interestingly, he and his colleagues also uncovered what they describe as a “cannibalization effect.” That is, doctors with the highest word-of-mouth ratings—those physicians ranking in the top 33 percent—gain a disproportionate boost in new appointment bookings. But this comes at the expense of those physicians who are new to the market and not yet rated.
“This could mean that doctors new to the area or new to medicine don’t get a chance to grow their practice because patients are instead going to the highly rated, more senior doctors,” Gao says. “But that creates a traffic jam for those doctors and longer wait times for patients to be seen.” To protect both patients and physicians, Gao concludes, online rating platforms should consider developing strategies to support or promote unrated practitioners.
How far back in time should online physician ratings go, and what happens when ratings change over time? Those questions informed Gao’s most recent cross-sectional study, published as a research letter in JAMA Internal Medicine (August 2022), which tracked ratings for 1,141,176 physicians. The researchers found that, on average, patient ratings were 5.3 years old, and that in 41.54 percent of cases the ratings from the most recent three years were “meaningfully different from the displayed rating based on all available years.”
That’s problematic, says Gao. “Giving old and new ratings the same weight is clearly wrong. A doctor’s current performance is what matters most to patients, but most online rating platforms don’t take ‘recency’ into consideration.” As a result, it can be difficult for doctors to shed past negative reviews, which weakens their incentive to improve their performance.
But simply “retiring” older reviews probably isn’t the answer either, Gao notes. “There’s a trade-off. Discarding outdated information reduces the volume of reviews, which reduces the overall reliability of the data.”
One solution suggested by Gao and colleagues: Incorporate both older and more recent information, but increase the weight given to the more recent ratings, especially for doctors whose ratings change over time.
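As a rough illustration of that idea, here is a minimal sketch (hypothetical, not the weighting scheme from Gao's study) that keeps every review in the average but discounts older ones exponentially, so a review's influence fades as it ages; the three-year half-life is an assumed parameter.

```python
from datetime import date

def recency_weighted_rating(reviews, as_of=None, half_life_years=3.0):
    """Average a doctor's star ratings, giving newer reviews more weight.

    reviews: list of (rating, review_date) pairs.
    half_life_years: a review's weight halves every `half_life_years`
                     (an assumed value for illustration only).
    """
    as_of = as_of or date.today()
    weighted_sum = total_weight = 0.0
    for rating, review_date in reviews:
        age_years = (as_of - review_date).days / 365.25
        weight = 0.5 ** (age_years / half_life_years)  # exponential decay with age
        weighted_sum += rating * weight
        total_weight += weight
    return weighted_sum / total_weight if total_weight else None

# Example: two old 2-star reviews and one recent 5-star review.
reviews = [(2, date(2016, 3, 1)), (2, date(2017, 6, 1)), (5, date(2024, 9, 1))]
print(round(recency_weighted_rating(reviews, as_of=date(2025, 1, 1)), 2))
```

In this hypothetical example, a plain all-time average would display 3.0 stars, while the recency-weighted score lands above 4, closer to the doctor's more recent performance, without discarding the older reviews entirely.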
What’s next
Looking ahead, Gao hopes to build on this work to examine how doctors can use online reviews to better connect with their patients. And together with Agarwal, Gao aims to pursue new lines of inquiry into how health insurance plans and health care organizations can better leverage the power of big data to make online physician reviews more effective.
“The current online review system treats doctors as if they were a book or a movie,” says Gao. “It’s becoming easier than ever to collect and quantify patient feedback. We need to be much more systematic in how we leverage that data, with the ultimate goal of improving patient satisfaction and physician performance.”