The Promise and Peril of AI in Healthcare

February 25, 2020

In a February 13 webinar, Casey Ross, national technology correspondent for STAT, explained the benefits of AI in healthcare and why it must demonstrably improve outcomes for patients.

Artificial intelligence (AI) can be used to identify outbreaks such as that of the coronavirus, which, to date, has resulted in nearly 1,800 reported deaths and more than 71,000 reported infections.

During the webinar, Ross pointed to efforts by John Brownstein, PhD, chief innovation officer at Boston Children’s Hospital, to use machine learning to review social media posts, reports by physicians, news outlets, and information released by official public health entities to assess the virus’s spread beyond China’s borders.

Brownstein’s work is proof that AI is showing its value in tracking the outbreak of the disease, says Ross.

Closer to home, healthcare systems around the country use AI to inform operational tasks such as scheduling. Some healthcare organizations use AI to pinpoint patients who need additional care, says Ross. For example, it’s used in sepsis detection and prediction, the assessment of readmission risk, and the identification of patients who are deteriorating.

Health systems have not yet embraced the direct use of AI to diagnose and treat patients, he added. The FDA has approved products for these purposes, but they are still in the early stages of implementation.


According to Ross, there are many barriers to clinical adoption of AI. Three of these barriers are:

  • A lack of universally accepted standards. This makes the adoption of AI systems to diagnose and treat patients challenging. The absence of clarity on the required evidence threshold for the use of AI also makes implementation more difficult.

  • The expense of testing and implementing AI algorithms. Testing AI algorithms on external datasets in different geographies is time-consuming and costly. And while frameworks have been developed to assess data quality, most studies don’t address data quality in a clear or consistent way.
    “It’s a barrier to building trust in the technology and the quality of data it’s relying on,” says Ross.

  • Lack of interoperability. The scalability of implementing AI is made difficult by the lack of interoperability between EHRs and AI solutions. This also hurts the economic feasibility of adopting AI for clinical use, he added.

A cautionary tale
Ross pointed to computer-aided detection (CAD) software, approved by the FDA in 1998 for screening mammography, as a cautionary tale. The software worked well in testing, in that it helped radiologists “zero in on” areas of interest within scans, says Ross. In 2002, CMS increased reimbursement for CAD, which then attained widespread adoption.

In a 2015 study published in JAMA Internal Medicine, Constance Lehman, MD, PhD, professor of radiology at Harvard Medical School, revealed that the use of CAD wasn’t associated with an improvement in patient outcomes for the metrics she and her co-authors studied. In some cases, the patients did worse, pointed out Ross, who describes Lehman as “not an AI naysayer by any measure.”

Lehman, who’s also director of breast imaging and co-director of the Avon Comprehensive Breast Evaluation Center at Massachusetts General Hospital, is collaborating with colleagues at MIT on a follow-up AI tool for classifying breast density and predicting breast cancer risk. Those efforts are now undergoing testing, and Ross plans to follow efforts to commercialize the technology.

“It took 20 years to uncover [CAD’s lack of impact on patient outcomes],” says Ross. “Who knows how much waste was incurred in that time period? And if that mistake is not to be repeated...I think medicine and developers of AI have to learn from these kinds of examples.”

A best practice
An AI algorithm’s value is determined by its usefulness to a physician at the point of care, says Ross. That’s why Rochester, Minn.-based Mayo Clinic embeds machine-learning engineers with physicians to observe them as they practice medicine. For example, machine-learning engineers follow physicians on rounds, witness procedures, and attend clinical meetings. This interaction also helps physicians understand how AI works, he said.

The health system is currently studying how an algorithm that detects low ejection fraction affects clinical decision-making in primary-care practices.

Two of the questions researchers are trying to answer:

  • Are clinicians ordering more echocardiograms based on the algorithm?

  • Is the algorithm catching cases where doctors feel that step is warranted?

The takeaway for the meaningful use of AI in healthcare? AI must demonstrably improve outcomes for patients, said Ross. 

Aine Cryts is a writer based in Boston.