Bias in a common health care algorithm disproportionately hurts black patients
Simple tweaks to the machine-learning program could eliminate the disparity, researchers say
By Sujata Gupta
A widely used algorithm that helps hospitals identify high-risk patients who could benefit most from access to special health care programs is racially biased, a study finds.
Eliminating racial bias in that algorithm could more than double the percentage of black patients automatically eligible for specialized programs aimed at reducing complications from chronic health problems, such as diabetes, anemia and high blood pressure, researchers report in the Oct. 25 Science.
This research “shows how once you crack open the algorithm and understand the sources of bias and the mechanisms through which it’s working, you can correct for it,” says Stanford University bioethicist David Magnus, who wasn’t involved in the study.
To identify which patients should receive extra care, health care systems in the last decade have come to rely on machine-learning algorithms, which study past examples and identify patterns to learn how to complete a task.
The top 10 health care algorithms on the market — including Impact Pro, the one analyzed in the study — use patients’ past medical costs to predict future costs. Predicted costs are used as a proxy for health care needs, but spending may not be the most accurate metric. Research shows that even when black patients are as sick as or sicker than white patients, they spend less on health care, including doctor visits and prescription drugs.
That disparity exists for many reasons, the researchers say, including unequal access to medical services and a historical distrust of health care providers among black people. That distrust stems in part from events such as the Tuskegee experiment (SN: 3/1/75), in which hundreds of black men with syphilis were denied treatment for decades.
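In rough terms, the problem can be illustrated with a toy simulation: two groups that are equally sick, one of which spends less on care for the same illness burden. The numbers below are invented for illustration and are not the study’s data or Optum’s model; they simply show how a spending-based cutoff can under-select the lower-spending group.

```python
# Toy illustration (invented numbers, not the study's data): when two groups
# are equally sick but one spends less on care, a score that tracks spending
# ranks that group lower for the same level of need.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Two equally sick groups: same distribution of chronic conditions.
conditions_a = rng.poisson(lam=1.5, size=n)
conditions_b = rng.poisson(lam=1.5, size=n)

# Group B spends about 30 percent less per condition (assumed gap).
cost_a = 1_000 * conditions_a + rng.normal(0, 500, n)
cost_b = 700 * conditions_b + rng.normal(0, 500, n)

# A cost-based "risk score" is, in effect, predicted spending; here we use
# observed spending directly.
scores = np.concatenate([cost_a, cost_b])
group_b = np.concatenate([np.zeros(n, bool), np.ones(n, bool)])

# Program cutoff at the 97th percentile of risk scores, as in the article.
cutoff = np.percentile(scores, 97)
flagged = scores >= cutoff
print("Group B makes up half the population but only",
      round(100 * flagged[group_b].sum() / flagged.sum()),
      "percent of flagged patients")
```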
As a result of this faulty metric, “the wrong people are being prioritized for these [health care] programs,” says study coauthor Ziad Obermeyer, a machine-learning and health policy expert at the University of California, Berkeley.
Concerns about bias in machine-learning algorithms — which are now helping diagnose diseases and predict criminal activity, among other tasks — are not new (SN: 9/6/17). But isolating sources of bias has proved challenging, because researchers seldom have access to the data used to train the algorithms.
Obermeyer and colleagues, however, were already working on another project with an academic hospital (which the researchers decline to name) that used Impact Pro and realized that the data used to get that algorithm up and running were available on the hospital’s servers.
So the team analyzed data on patients with primary care doctors at that hospital from 2013 to 2015 and zoomed in on 43,539 patients who self-identified as white and 6,079 who identified as black. The algorithm had given all of these patients, who were covered by private insurance or Medicare, a risk score based on past health care costs.
Patients with the same risk scores should, in theory, be equally sick. But the researchers found that, in their sample of black and white patients, black patients with the same risk scores as white patients had, on average, more chronic diseases. At risk scores above the 97th percentile, for example, the cutoff at which patients would be automatically identified for enrollment in specialized programs, black patients had 26.3 percent more chronic illnesses than white patients: an average of 4.8 chronic illnesses, compared with white patients’ 3.8. Less than a fifth of patients above the 97th percentile were black.
Obermeyer likens the algorithm’s biased assessment to patients waiting in line to get into specialized programs. Everyone lines up according to their risk score. But “because of the bias,” he says, “healthier white patients get to cut in line ahead of black patients, even though those black patients go on to be sicker.”
When Obermeyer’s team ranked patients by number of chronic illnesses instead of health care spending, black patients went from 17.7 percent of patients above the 97th percentile to 46.5 percent.
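The audit itself can be sketched in a few lines of Python on simulated data. The sample sizes below are taken from the article, but the spending gap and illness rates are assumptions, so the printed figures only roughly echo the study’s.

```python
# Sketch of the audit on simulated data (assumed spending gap and illness
# rates, not the hospital's records): compare chronic-condition counts among
# patients above the 97th-percentile cutoff, then re-rank by condition count.
import numpy as np

rng = np.random.default_rng(1)
n_white, n_black = 43_539, 6_079  # sample sizes reported in the article

# Equal underlying illness burden; the black group spends less per condition.
cond_w = rng.poisson(2.0, n_white)
cond_b = rng.poisson(2.0, n_black)
cost_w = 1_000 * cond_w + rng.normal(0, 800, n_white)
cost_b = 700 * cond_b + rng.normal(0, 800, n_black)

conditions = np.concatenate([cond_w, cond_b])
cost_score = np.concatenate([cost_w, cost_b])  # the spending-based proxy
is_black = np.concatenate([np.zeros(n_white, bool), np.ones(n_black, bool)])

# Who clears the 97th-percentile cutoff when patients are ranked by cost?
top_by_cost = cost_score >= np.percentile(cost_score, 97)
print("Mean conditions above cutoff, black vs. white:",
      conditions[top_by_cost & is_black].mean().round(1),
      conditions[top_by_cost & ~is_black].mean().round(1))
print("Black share of top group, ranked by cost:",
      round((top_by_cost & is_black).sum() / top_by_cost.sum(), 3))

# Re-rank by a direct measure of need: chronic-condition count.
top_by_need = conditions >= np.percentile(conditions, 97)
print("Black share of top group, ranked by conditions:",
      round((top_by_need & is_black).sum() / top_by_need.sum(), 3))
```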
Obermeyer’s team is partnering with Optum, the maker of Impact Pro, to improve the algorithm. The company independently replicated the new analysis and compared chronic health problems among black and white patients in a national dataset of almost 3.7 million insured people. Across risk scores, black patients had almost 50,000 more chronic conditions than white patients, evidence of the racial bias. Retraining the algorithm to rely on both past health care costs and other metrics, including preexisting conditions, reduced the disparity in chronic health conditions between black and white patients at each risk score by 84 percent.
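One way to picture that retraining, as a sketch rather than Optum’s actual model: blend the spending-based ranking with a direct measure of health, such as chronic-condition count, and check whether black and white patients with the same score are now comparably sick. The data and the 50/50 blending weights below are assumptions for illustration.

```python
# Sketch of a blended risk score (assumed weights and simulated data, not
# Optum's retrained model): mix the spending rank with a chronic-condition
# rank, then compare illness burden at the program cutoff under each score.
import numpy as np

rng = np.random.default_rng(2)
n = 50_000
is_black = rng.random(n) < 0.12

# Equal illness burden across groups; lower spending per condition for black patients.
conditions = rng.poisson(2.0, n)
cost = np.where(is_black, 700, 1_000) * conditions + rng.normal(0, 800, n)

def pct_rank(x):
    """Percentile rank of each value within the sample, from 0 to 1."""
    return np.argsort(np.argsort(x)) / (len(x) - 1)

score_cost = pct_rank(cost)                                      # spending only
score_blend = 0.5 * pct_rank(cost) + 0.5 * pct_rank(conditions)  # blended

def gap_at_cutoff(score):
    """Black-minus-white gap in mean chronic conditions above the 97th percentile."""
    top = score >= np.percentile(score, 97)
    return conditions[top & is_black].mean() - conditions[top & ~is_black].mean()

print("Condition gap at cutoff, spending-only score:", round(gap_at_cutoff(score_cost), 2))
print("Condition gap at cutoff, blended score:      ", round(gap_at_cutoff(score_blend), 2))
```

In this toy setup, the gap at the cutoff shrinks sharply once the score stops leaning on spending alone, which is the qualitative pattern behind the 84 percent figure reported by the company.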
Because the infrastructure for specialized programs is already in place, this research demonstrates that fixing health care algorithms could quickly connect the neediest patients to programs, says Suchi Saria, a machine-learning and health care researcher at Johns Hopkins University. “In a short span of time, you can eliminate this disparity.”