AI For Fraud Detection

MHE Publication, April 2022, Volume 32, Issue 4

Humans still have a role, but in a recent survey, nearly half of the largest insurers said they are using AI to help cut down on fraud, waste and abuse.

The era of big data has led to a transformation of healthcare, including the ability for providers to leverage massive amounts of real-world data to help inform high-stakes clinical decisions. Yet, when it comes to fraud, waste and abuse (FWA), big data can also make it easier to bury the needles of fraudulent claims in an ever-growing haystack of data.

But maybe the “needle” metaphor does not do justice to the scale of FWA. The National Health Care Anti-Fraud Association conservatively estimates that approximately 3% of annual healthcare spending is lost to fraudulent claims; other estimates put the figure closer to 10% of total healthcare spending, which would translate to more than $300 billion annually. In fact, a single company, Highmark Inc., reported savings of $245 million as a result of its efforts to combat FWA. The Pittsburgh-based insurer credited the work of its Financial Investigations and Provider Review department, and in particular the department’s artificial intelligence (AI) software.

“AI allows Highmark to detect and prevent suspicious activity more quickly, update insurance policies and guidelines, and stay ahead of new schemes and bad actors,” says Melissa Anderson, Highmark’s chief auditor and compliance officer.

Highmark is not alone. A host of software providers are now offering AI products designed to identify errors or unusual activity that may be indicative of fraud. A report released in July 2021 by PYMNTS.com and Brighterion AI, a Mastercard company, found 44% of the largest insurers it surveyed were using AI to detect FWA. The report was based on a survey of 100 healthcare executives with responsibilities for or direct knowledge of FWA.

Jodi G. Daniel, a partner at Crowell & Moring LLP, says AI can be a powerful tool in a data-heavy industry like healthcare. “When you’re talking about massive amounts of data, technology can help to detect patterns and can flag things that may be out of the ordinary or suspicious, so a human being can look at it,” she notes. Daniel leads the law firm’s digital health practice and previously headed the Office of Policy in the Office of the National Coordinator for Health Information Technology.
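In practice, the kind of pattern-flagging Daniel describes is often built on unsupervised anomaly detection: a model scores every claim, and only the outliers are routed to a human investigator rather than denied automatically. A minimal sketch of that idea, using scikit-learn’s IsolationForest on synthetic, hypothetical claim features (the features, data and cutoffs here are illustrative assumptions, not any insurer’s actual system):

```python
# Sketch: score claims with an unsupervised model, route only the
# most anomalous ones to a human review queue. All data is synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical claim features: billed amount, units billed, and days
# between service and submission. Most claims cluster; a few do not.
normal = rng.normal(loc=[200, 2, 14], scale=[50, 1, 5], size=(1000, 3))
odd = rng.normal(loc=[5000, 40, 1], scale=[500, 5, 1], size=(10, 3))
claims = np.vstack([normal, odd])

model = IsolationForest(contamination=0.01, random_state=0).fit(claims)
scores = model.decision_function(claims)  # lower = more anomalous

# Flag the most anomalous claims for an investigator; the model only
# prioritizes the review queue, a human makes the actual call.
flagged = np.argsort(scores)[:10]
print(f"{len(flagged)} claims routed to manual review")
```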

Worry about false positives

Health insurers are also proceeding with some caution. The insurance executives indicated in the Brighterion AI survey that cost savings, regulatory pressures and a high level of adaptability were important factors when choosing an AI provider, but they also cited accuracy as a key concern (95%). That’s because false positives — cases that look like fraud at first glance but are legitimate — are a major hurdle in terms of managing FWA, the report found.

In fact, among the largest companies surveyed, 66% said reducing false positives was “extremely important” when choosing an AI provider. Increasing detection of FWA with AI was labeled extremely important by only 25% of executives from the largest insurers. Among smaller insurers, reducing false positives was less likely to be extremely important (30%), and increasing FWA detection was more likely to be labeled extremely important (53%).
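The tension in those survey numbers is, at bottom, a thresholding decision: the same fraud scores can be cut at a stricter or looser level, trading detection for false positives. A hypothetical illustration with synthetic scores (the 3% fraud rate echoes the NHCAA estimate above; everything else is assumed for the sketch):

```python
# Toy demonstration of the detection / false-positive tradeoff:
# identical scores, three different flagging thresholds.
import numpy as np

rng = np.random.default_rng(1)
is_fraud = rng.random(10_000) < 0.03  # ~3% of claims fraudulent
# Fraudulent claims tend to score higher, but distributions overlap.
score = np.where(is_fraud,
                 rng.normal(0.7, 0.15, 10_000),
                 rng.normal(0.4, 0.15, 10_000))

for threshold in (0.55, 0.65, 0.75):
    flagged = score >= threshold
    caught = (flagged & is_fraud).sum()
    false_pos = (flagged & ~is_fraud).sum()
    print(f"threshold {threshold}: caught {caught}/{is_fraud.sum()} "
          f"fraudulent claims, {false_pos} false positives")
```

Raising the cutoff shrinks the pile of legitimate claims flagged in error, which matters most to large insurers, but it also lets more genuine fraud slip through, the priority smaller insurers reported.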

Daniel says that accuracy is important with any AI tool, which is why regulators prefer to see human involvement, not just software. She also says there is great risk associated with full automation, particularly in cases involving treatment decisions. “If you look at [FDA] oversight of clinical decision support tools, they treat tools that have a physician intermediary or clinician intermediary very differently than those that are fully automated,” says Daniel.

Jared Kaltwasser is a freelance writer in Iowa and a regular contributor to Managed Healthcare Executive®.
