Healthcare Prediction Algorithm Found to be Biased Against Blacks

A new study from UC Berkeley and Chicago Booth reveals significant racial bias in a widely used healthcare software program.

From predicting who will be a repeat offender to who's the best candidate for a job, computer algorithms are now making complex decisions in lieu of humans. But increasingly, many of these algorithms are being found to replicate the same racial, socioeconomic, or gender-based biases they were built to overcome.

This racial bias extends to software widely used in the healthcare industry, potentially affecting access to care for millions of Americans, according to a new study by researchers at the University of California, Berkeley, the University of Chicago Booth School of Business, and Partners HealthCare in Boston.

The new study, published October 25 in the journal Science, found that a type of software program that determines who gets access to high-risk healthcare management programs routinely lets healthier whites into the programs ahead of blacks who are less healthy. Fixing this bias in the algorithm could more than double the number of black patients automatically admitted to these programs, the study reveals.

"We found that a category of algorithms that influences healthcare decisions for over 100 million Americans shows significant racial bias," says Sendhil Mullainathan, the Roman Family University Professor of Computation and Behavioral Science at Chicago Booth and senior author of the study. 

"The algorithms encode racial bias by using healthcare costs to determine patient 'risk,' or who was mostly likely to benefit from care management programs," says Ziad Obermeyer, acting associate professor of health policy and management at UC Berkeley and lead author of the paper.

"Because of the structural inequalities in our healthcare system, blacks at a given level of health end up generating lower costs than whites," Obermeyer says. "As a result, black patients were much sicker at a given level of the algorithm's predicted risk."

By tweaking the algorithm to use other variables to predict patient risk, such as costs that could be avoided by preventative care, researchers were able to correct much of the bias that was initially built into the algorithm.
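
In modeling terms, that fix amounts to changing the label the algorithm is trained on rather than the model itself. The sketch below is only a minimal illustration of the idea, not the study's actual code; the data, column names, and model choice are all hypothetical.

    import numpy as np
    import pandas as pd
    from sklearn.ensemble import GradientBoostingRegressor

    # Hypothetical training table; every column name is illustrative only.
    rng = np.random.default_rng(0)
    n = 5_000
    train = pd.DataFrame({
        "age": rng.integers(18, 90, n),
        "prior_visits": rng.poisson(2, n),
        "active_chronic_conditions": rng.poisson(3, n),
    })
    train["total_cost"] = 500 * train["prior_visits"] + rng.gamma(2.0, 1000.0, n)

    features = ["age", "prior_visits"]

    # Original formulation: total healthcare cost as the label, which acts
    # as a biased proxy for health need.
    cost_model = GradientBoostingRegressor().fit(train[features], train["total_cost"])

    # Re-labeled formulation along the lines the researchers describe:
    # predict a more direct measure of need, such as the number of chronic
    # conditions treated in a year (avoidable cost would work analogously).
    need_model = GradientBoostingRegressor().fit(
        train[features], train["active_chronic_conditions"])

The point is only that the features and the model can stay the same; what changes is which outcome the algorithm is asked to predict.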

"Algorithms by themselves are neither good nor bad," Mullainathan says. "It is merely a question of taking care in how they are built. In this case, the problem is eminently fixable––and at least one manufacturer appears to be working on a fix. We would encourage others to do so." 

More generally, Obermeyer says, incorporating routine audits into algorithm developers' workflows would help. "For algorithms, just as for medicine, we'd prefer to prevent problems instead of curing them."
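
One simple form such an audit could take is to bin patients by predicted risk and compare a direct health measure across groups within each bin; large gaps suggest the score means different things for different patients. A rough sketch, using synthetic data and hypothetical column names rather than anything from the study:

    import numpy as np
    import pandas as pd

    # Hypothetical audit table: the algorithm's score plus a direct health
    # measure and the patient's group membership.
    rng = np.random.default_rng(1)
    n = 20_000
    audit_df = pd.DataFrame({
        "risk_score": rng.uniform(0, 1, n),
        "chronic_conditions": rng.poisson(3, n),
        "race": rng.choice(["black", "white"], size=n, p=[0.12, 0.88]),
    })

    # Within each decile of predicted risk, compare average measured health
    # need by group. For an unbiased score, the group columns should track
    # each other closely.
    audit_df["risk_decile"] = pd.qcut(audit_df["risk_score"], q=10, labels=False)
    report = (audit_df
              .groupby(["risk_decile", "race"])["chronic_conditions"]
              .mean()
              .unstack("race"))
    print(report)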

Digging up the roots of algorithmic bias

Uncovering algorithmic bias, be it in the criminal justice system, in hiring decisions, or in healthcare, is often hindered by the fact that many of the prediction algorithms in use today are designed by private companies and are proprietary, making it difficult for data scientists and researchers to analyze them.

To tackle this problem, Mullainathan and Obermeyer partnered with researchers at an academic hospital that was using a risk-based algorithm to determine which patients were getting preferential access to a high-risk care management program. Programs like this are designed to improve care for patients with complex medical needs by providing them with additional attention and resources.

For 43,539 white patients and 6,079 black patients at the hospital, the researchers obtained the algorithm's predicted risk score and compared it with more direct measures of each patient's health, including the number of chronic illnesses and other biomarkers.

They found that, for a given risk score, blacks had significantly poorer health than their white counterparts. 

"Instead of being trained to find the sickest, in a physiological sense, (these algorithms) ended up being trained to find the sickest in the sense of those whom we spend the most money on," Mullainathan says. "And there are systemic racial differences in healthcare in who we spend money on." 

Patients whose risk scores landed at or above the 97th percentile were automatically identified for enrollment in the care management program. By correcting for the health disparities between blacks and whites, researchers found that the percentage of black patients in the automatic enrollee group jumped from 18% to 47%.
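
As a back-of-the-envelope illustration of that kind of calculation (again with synthetic data and hypothetical column names, not the study's), one could compare who lands above the enrollment cutoff when patients are ranked by a cost-based score versus by measured health need:

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(2)
    n = 50_000
    df = pd.DataFrame({
        "race": rng.choice(["black", "white"], size=n, p=[0.12, 0.88]),
        "chronic_conditions": rng.poisson(3.0, n),
    })
    # Simulate lower observed cost (and therefore a lower cost-based score)
    # for black patients at the same level of health need.
    penalty = np.where(df["race"] == "black", 0.7, 1.0)
    df["cost_score"] = df["chronic_conditions"] * penalty + rng.normal(0, 0.5, n)

    def black_share_above_cutoff(scores, race, pct=0.97):
        """Share of patients at or above the given percentile who are black."""
        cutoff = scores.quantile(pct)
        return float((race[scores >= cutoff] == "black").mean())

    print("cost-based ranking:  ", black_share_above_cutoff(df["cost_score"], df["race"]))
    print("health-based ranking:", black_share_above_cutoff(df["chronic_conditions"], df["race"]))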

But there is room for hope, Obermeyer says. Training the algorithm to determine risk based on other measurable variables, such as avoidable cost, or the number of chronic conditions that needed treatment in a year, significantly reduced the racial bias.

And when alerted to the bias built into its algorithm, the software manufacturer was very motivated to address the issue, Obermeyer adds.

"Algorithms can do terrible things, or algorithms can do wonderful things. Which one of those things they do is basically up to us," Obermeyer says. "We make so many choices when we train an algorithm that feel technical and small. But these choices make the difference between an algorithm that's good or bad, biased or unbiased. So, it's often very understandable when we end up with algorithms that don't do what we want them to do, because those choices are hard."
