A survey by Vizient of CMS preview reports from member hospitals. Here are the findings.
The Internet has ushered in a wave of consumerism that has transformed how people shop for virtually everything, including their healthcare. Consumers now expect to find detailed information about a product or service online, and they rely heavily on online user reviews and comparison ratings to inform their purchase decisions. Given the seriousness of shopping for healthcare as opposed to, say, a dishwasher, the reliability of this information is paramount. When patients are sick, they should have absolute confidence in the information they use to evaluate where to seek treatment.
The latest example of consumerism in healthcare came on July 27 in the form of the Overall Hospital Star Ratings issued by CMS. While CMS should be commended for its efforts to give consumers a simpler way to evaluate hospital care quality, there are many opportunities to improve the data and methodology behind its new ratings system.
The Star Ratings offer a simplified view of 64 measures of hospital care quality across domains including safety, mortality, and patient satisfaction, all rolled up into a single rating of 1 to 5 stars. Hospitals were given preview reports of their ratings in February, which at the time were set to be published in April. That is when the backlash began.
A survey of CMS preview reports from member hospitals by Vizient, Inc. showed that the statistical methodology used by CMS inappropriately impacted the ratings for hospitals, most significantly for academic medical centers (AMCs). The main findings from Vizient’s analysis included:
· AMCs were potentially disproportionately represented as “poorly performing hospitals” (1 or 2 stars out of 5)
· Unique AMC patient population characteristics were not considered. Characteristics such as multiple co-morbidities, acute patient transfers to a higher level of care and low socioeconomic status were not adequately accounted for or risk adjusted in key measures such as readmissions and mortality
· Improvements hospitals made as of April 2016 in key measure groups (roughly 64% of all groups) would not be reflected to the public for two more years because of data latency
In addition to potentially skewing ratings against AMCs, these issues also made it extremely challenging for hospitals to effectively use the ratings for performance improvement initiatives.
Despite objections voiced by hospital leaders, industry groups and nearly 300 members of Congress, CMS released the ratings in late July while pledging to provide more transparency with data and methodology.
Next: Ongoing methodology issues
Proponents of the ratings say that hospitals should use the information to “up their game” by improving staff training and addressing the flaws in care delivery that are impacting their rating. The result would be healthier, happier patients and healthier hospital margins. Hospitals agree. If only the Star Ratings enabled them to do that.
Vizient’s review of the latest Star Ratings methodology has identified ongoing issues that have the potential to inappropriately impact ratings and hinder a hospital’s ability to use the information to improve.
The statistical methodology of latent variable modeling does not appear analytically appropriate for the data. Through analysis of the data and methodology, Vizient determined the latent variable modeling approach does not add analytic rigor to the scoring used to calculate Star Ratings. Instead, it adds unnecessary variability or ‘swings’ due to quarterly changes in weighted measures.
As a result, hospitals performing close to Star Rating cut point values, and hospitals whose performance is more affected by changes in weighted measures, are likely to seesaw between rating categories with each quarterly release. Imagine that you are shopping for a car and its ratings changed every quarter for the same make and model. How confident would you feel about that purchase? Moreover, from the automobile manufacturer's point of view, how would you prioritize future improvements when the measures used to evaluate quality are weighted differently every three months?
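The cut-point problem described above can be sketched in a few lines of code. This is an illustrative model only: the measure-group names, weights, and cut points below are hypothetical, not CMS's actual values or methodology. The point is that a hospital whose composite score sits near a category boundary can change stars purely because the weights were revised, with no change in underlying performance.

```python
# Hypothetical sketch: hospital performance is fixed, but a quarterly
# revision of measure-group weights moves the composite score across a
# star-rating cut point. All numbers here are illustrative assumptions.

def composite(scores, weights):
    """Weighted average of measure-group scores (higher is better)."""
    total = sum(weights.values())
    return sum(scores[m] * weights[m] / total for m in scores)

def star(score, cut_points=(0.2, 0.4, 0.6, 0.8)):
    """Map a composite score to 1-5 stars using fixed cut points."""
    return 1 + sum(score >= c for c in cut_points)

# The hospital's underlying performance does not change between quarters.
scores = {"mortality": 0.65, "safety": 0.63, "readmission": 0.55}

# Only the weights assigned to each measure group change.
q1_weights = {"mortality": 0.22, "safety": 0.22, "readmission": 0.56}
q2_weights = {"mortality": 0.30, "safety": 0.30, "readmission": 0.40}

q1_stars = star(composite(scores, q1_weights))  # composite ~0.59 -> 3 stars
q2_stars = star(composite(scores, q2_weights))  # composite ~0.60 -> 4 stars
```

With identical performance, the hospital lands on opposite sides of the 0.6 cut point in consecutive quarters, which is the "seesaw" behavior Vizient's analysis flags.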
The methodology includes duplicative measures, so certain measures inappropriately influence the overall rating. The aggregate PSI 90 score includes many hospital-acquired harm measures, including the PSI-7 central line-associated bloodstream infection (CLABSI) measure, which counts many of the same patients as the National Healthcare Safety Network (NHSN) CLABSI measure also included in the scoring. Many of the hospital-wide and condition-specific readmission scores are also counted multiple times.
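The double-counting effect can be illustrated with simple arithmetic. Again, the measure names, weights, and scores below are hypothetical assumptions for illustration, not the actual CMS scoring model: when one infection event feeds both a composite safety index and a standalone measure, that event's effective weight in the overall score is inflated.

```python
# Illustrative sketch (hypothetical measures and equal weights): the same
# poor CLABSI performance is counted twice, once inside a PSI-style
# composite and once as a standalone NHSN-style measure.

def overall(measures, weights):
    """Weighted average of measure scores (higher is better)."""
    total = sum(weights.values())
    return sum(measures[m] * weights[m] / total for m in measures)

clabsi = 0.40  # poor performance on this single measure

# Double-counted: the composite is partly driven by the same CLABSI events.
psi_composite = 0.5 * clabsi + 0.5 * 0.80
measures = {"psi_composite": psi_composite, "clabsi": clabsi, "mortality": 0.80}
weights = {"psi_composite": 1.0, "clabsi": 1.0, "mortality": 1.0}
score_double = overall(measures, weights)        # 0.60

# Counted once: the same underlying performance, with CLABSI removed from
# the composite, yields a higher overall score.
measures_once = {"psi_composite": 0.80, "clabsi": clabsi, "mortality": 0.80}
score_once = overall(measures_once, weights)     # ~0.67
```

One weak measure drags the overall score down more than its nominal weight implies, which is the distortion the duplicative-measures critique describes.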
The methodology lacks appropriate adjustment for socioeconomic status. When looking at 30-day all-cause readmission rates, ratings need to adjust for socioeconomic status and health literacy. Hospitals should absolutely ensure appropriate and accessible follow up after inpatient care, including access to outpatient pharmaceuticals, but there are many other complicating factors that unfairly impact hospitals serving challenging patient populations.
The risk adjustment methodology does not consider the transfer status of patients. Transferred patients tend to have the highest acuity and the highest risk for poor outcomes, including mortality and readmissions. By the time some of these patients arrive at the next level of care, even with perfect care, their condition has progressed so significantly that their outcomes are seriously jeopardized.
Next: Take action
While CMS’s methodology isn’t perfect, there are some takeaways from Vizient’s analysis of the Star Rating preview reports that hospital leaders should consider to help guide performance improvement efforts and positively influence future ratings.
Start by reviewing the April and July reports and the recently issued October preview report from CMS. Look for areas where performance is consistently influencing the rating, such as patient satisfaction, mortality, or safety. Use internal data to determine whether improvement in an area is not yet being reflected because of the older data CMS uses to generate ratings.
Hospital executives should also focus improvement efforts and resources on a few key areas that will make the most impact on their rating. For example, bedside multi-disciplinary rounding improves patient satisfaction, decreases medical errors, and increases efficiency, all of which will improve ratings. Early ambulation decreases occurrences of pressure ulcers, deep vein thromboses, and can reduce length of stay, which will also improve ratings. Also, given the issue of data latency, leaders should look at their areas of strength in current CMS reports and be sure internal data shows the positive trend is continuing.
Improving quality is a team sport, and hospital staff need to understand their roles and see that quality is a top priority for the administration. There are a number of ways to develop a performance-oriented culture within your organization:
· Make data available so that everyone understands how their unit and the institution are performing
· Align goals to some key metrics in the CMS reports
· Encourage ideas for performance improvement from all staff
· Trial performance improvements in one service line or in one nursing unit to make implementation more manageable
· Report out on progress regularly and celebrate success
There is still wide and strong bipartisan support for changing the methodology and addressing overall concerns with the Star Ratings.
The House of Representatives introduced legislation, H.R. 5927, the Hospital Quality Rating Transparency Act of 2016. Since the legislation was introduced when the ratings release was imminent, the language requires that CMS remove the ratings from Hospital Compare if they are posted prior to the bill’s enactment (which they were). The legislation also calls for the methodology and data to be validated by a third party.
While the House bill is a step in the right direction, the Senate has not introduced companion legislation and Congress is in recess until Labor Day, which casts doubt over the future of this legislation.
Regardless, Vizient strongly supports meaningful information to help the public make informed healthcare decisions. However, the need for a simple system must be weighed against the risk of oversimplifying complicated decisions. Factors that determine quality of care for a cardiac condition, for example, can be very different from those for an orthopedic condition. Blending too many metrics into one rating could send the wrong message to patients evaluating their care options. It is equally important for hospitals to understand their scores so that the quality of healthcare can be improved across the nation. CMS's current methodology can be improved in many areas to achieve these goals.
David Levine, MD, FACEP, is vice president of advanced analytics/informatics and medical director at Vizient, Inc.