FDA-Approved AI Algorithm More Likely to Flag False-Positive Breast Cancer Cases in Black Women


False-positive results were also more likely in women with dense breasts.

Breast Cancer AI © Adin - stock.adobe.com

The FDA-approved AI algorithm ProFound AI was more likely to produce a false-positive cancer result on the mammograms of Black women than on those of White, Hispanic, or Asian women, new research shows. Results from the study were published Tuesday in Radiology.

A team of researchers from the Duke University School of Medicine, led by Yinhao Ren, Ph.D., and Derek L. Nguyen, looked at a diverse set of breast cancer screenings performed at Duke University Medical Center between 2016 and 2019. The patients were 27% White, 26% Black, 28% Asian, and 19% Hispanic, with an average age of 54, and all of their scans were read as negative at the time of testing. When the researchers ran the scans through the algorithm, it flagged 17% of cases (816 of 4,855) as positive for cancer; because every scan in the cohort was negative, each of these flags was a false positive. In addition, Black patients were 45% more likely to receive a false-positive score than White patients.
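For readers who want to check the numbers, the sketch below reproduces the 17% figure from the reported counts and illustrates what "45% more likely" means when read as a relative risk. The per-group rates in the second half are hypothetical placeholders, not figures from the study.

```python
# Minimal sketch of the arithmetic above. The 816 and 4,855 counts
# come from the article; the per-group rates further down are
# hypothetical, used only to illustrate "45% more likely".

flagged = 816    # scans the algorithm flagged as positive
total = 4_855    # all scans in the cohort, previously read as negative

# Every scan in the cohort was negative, so each flag is a false
# positive and this ratio is the overall false-positive rate.
fp_rate = flagged / total
print(f"Overall false-positive rate: {fp_rate:.1%}")  # about 16.8%, i.e. ~17%

# "45% more likely," read here as a relative risk: with a hypothetical
# 15% baseline rate for White patients, the corresponding rate for
# Black patients would be 15% * 1.45 = 21.75%.
white_rate = 0.15               # hypothetical baseline, not from the study
black_rate = white_rate * 1.45  # 45% higher
print(f"Illustrative rates: White {white_rate:.2%}, Black {black_rate:.2%}")
```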

Amid radiologist burnout and staffing shortages, the use of artificial intelligence looks increasingly promising, even though the impact of patient characteristics such as race on its performance has not been thoroughly studied, the authors write. Faster turnaround of test results is especially attractive because many breast clinics are switching from traditional digital mammography to digital breast tomosynthesis, or 3D mammography, which is more comprehensive but takes radiologists twice as long to read.

Although this study focused on only one AI vendor, these inaccuracies are likely widespread, the authors note. The FDA does not currently require AI algorithms to be trained on diverse data sets, which contributes to the inaccuracies. AI software companies are also reluctant to reveal their training methods, which they classify as intellectual property.

If future AI models are trained on images with little diversity, these issues will persist, leading to overdiagnosis of cancer and unnecessary fear in patients. This may ultimately widen healthcare disparities in communities and breed distrust in AI, a technology that could be valuable if used correctly.

“The Food and Drug Administration should provide clear guidance on the demographic characteristics of samples used to develop algorithms, and vendors should be transparent about how their algorithms were developed,” the researchers write. “Continued efforts to train future AI algorithms on diverse data sets are needed to ensure standard performance across all patient populations.”
