How racial bias can sink an algorithm’s effectiveness

Researchers have detected racial bias in an algorithm commonly used by health systems to make decisions about patient care, according to a new study published in Science.

The algorithm, the study’s authors explained, is deployed throughout the United States to evaluate patient needs.

“Large health systems and payers rely on this algorithm to target patients for ‘high-risk care management’ programs,” wrote Ziad Obermeyer, MD, of the School of Public Health at the University of California, Berkeley, and colleagues. “These programs seek to improve the care of patients with complex health needs by providing additional resources, including greater attention from trained providers, to help ensure that care is well coordinated. Most health systems use these programs as the cornerstone of population health management efforts, and they are widely considered effective at improving outcomes and satisfaction while reducing costs.”

While studying the algorithm, which the team noted does not explicitly track race, Obermeyer et al. found that its predictions are driven by healthcare costs, such as insurance claims, rather than by health needs. Because black patients generate “lesser medical expenses, conditional on health, even when we account for specific comorbidities,” even accurate cost predictions will automatically contain a certain amount of racial bias.

Correcting this unintended bias, the authors noted, would increase the percentage of black patients identified by the algorithm for additional help from 17.7% to 46.5%. The researchers then worked toward a solution: by retraining the algorithm to predict a combination of health and cost measures rather than future costs alone, they achieved an 84% reduction in bias. They are continuing this work, “establishing an ongoing (unpaid) collaboration” to make the algorithm even more effective.

“These results suggest that label biases are fixable,” the authors wrote. “Changing the procedures by which we fit algorithms (for instance, by using a new statistical technique for decorrelating predictors with race or other similar solutions) is not required. Rather, we must change the data we feed the algorithm—specifically, the labels we give it.”
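The label-swap idea can be illustrated with a small simulation. The sketch below is purely hypothetical and does not use the study's data, features or model; it builds two groups with identical underlying health needs but lower observed costs for one group, then compares who lands in the top of the risk scores when a simple regression is trained on a cost label versus a combined health-and-cost label. All variable names and numbers are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 50_000

# Hypothetical population: true health need is identically distributed in both groups.
need = rng.gamma(shape=2.0, scale=2.0, size=n)
group = rng.integers(0, 2, size=n)            # 1 = group facing barriers to care

# At the same level of need, group 1 receives less care and so generates lower costs.
access = np.where(group == 1, 0.7, 1.0)
chronic_conditions = need + rng.normal(0, 0.5, size=n)            # health-based signal
utilization = need * access + rng.normal(0, 0.5, size=n)          # cost-driven signal

# Race/group is NOT a feature, mirroring the article's point about the algorithm.
X = np.column_stack([chronic_conditions, utilization])

future_cost = need * access + rng.normal(0, 1.0, size=n)          # biased proxy label
future_health = need + rng.normal(0, 1.0, size=n)                 # needs-based label

def share_of_flagged_in_group1(label, top_frac=0.03):
    """Train a risk model on the given label and report what share of the
    top-scoring patients (those who would get extra help) belong to group 1."""
    scores = LinearRegression().fit(X, label).predict(X)
    cutoff = np.quantile(scores, 1 - top_frac)
    return group[scores >= cutoff].mean()

print("cost label      :", round(share_of_flagged_in_group1(future_cost), 3))
print("composite label :", round(share_of_flagged_in_group1(
    0.5 * future_health + 0.5 * future_cost), 3))
```

In this toy setup, a model trained on the cost label should flag the lower-cost group well below its 50% population share, while the combined health-and-cost label moves the shares closer to parity. The mechanism, not the specific numbers, is what parallels the study's finding.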

Michael Walter, Managing Editor

Michael has more than 18 years of experience as a professional writer and editor. He has written at length about cardiology, radiology, artificial intelligence and other key healthcare topics.
