Medicaid Saves Lives, But Policymakers Must Look at the Full Evidence

Recent analysis reveals Medicaid's impact on mortality, highlighting the need for diverse research methods to inform policy changes effectively.

Recent studies have provided evidence that Medicaid reduces mortality, but debate continues over how policymakers should weigh different types of research when considering changes to coverage, according to an analysis in JAMA Health Forum.

The debate gained urgency after passage of the “One Big Beautiful Bill” Act, which the Congressional Budget Office (CBO) estimated would reduce Medicaid coverage by about 10 million people. The key policy question is how such cuts could affect mortality.

Understanding why mortality is so difficult to measure helps explain the challenges policymakers face.

Mortality is the ultimate health outcome, but also one of the most difficult to study, according to the JAMA analysis. Deaths are relatively rare over short periods, particularly among non-elderly populations, which means detecting significant effects requires very large samples and long follow-up.

This challenge has influenced how researchers understand Medicaid.

Some observers pointed to the RAND Health Insurance Experiment of the 1970s and 1980s and the 2008 Oregon Health Insurance Experiment as evidence that Medicaid doesn’t save lives. These randomized controlled trials (RCTs) remain influential because of their ability to establish causal relationships, but they were not designed to measure mortality.

The RAND study varied cost-sharing levels among insured individuals but did not include a group without insurance. The Oregon experiment assigned Medicaid enrollment through a lottery, but the number of participants was relatively small—only about 10,000 gained insurance, with about 70 deaths over the follow-up period. As a result, the study lacked statistical power to detect effects on mortality. As the authors noted, “This is absence of evidence, not evidence of absence.”
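The scale of the power problem becomes clearer with a rough calculation. The sketch below is illustrative and not drawn from the JAMA analysis: it assumes a comparison group of similar size to the roughly 10,000 people who gained coverage, a baseline mortality of about 0.8% over the follow-up window (consistent with deaths on the order of 70), and a hypothetical true 15% relative reduction in mortality, then computes the power of a standard two-proportion test.

```python
# Illustrative power calculation for an Oregon-sized experiment.
# All numbers below are assumptions for illustration, not figures
# reported in the JAMA Health Forum analysis.
from scipy.stats import norm

n_treat = 10_000            # approx. number who gained coverage via the lottery
n_control = 10_000          # assumed comparison group of similar size
p_control = 0.008           # assumed baseline mortality over follow-up (~0.8%)
relative_reduction = 0.15   # hypothetical true effect of coverage on mortality
p_treat = p_control * (1 - relative_reduction)

# Power of a two-sided two-proportion z-test at alpha = 0.05
alpha = 0.05
z_crit = norm.ppf(1 - alpha / 2)
se = (p_control * (1 - p_control) / n_control
      + p_treat * (1 - p_treat) / n_treat) ** 0.5
diff = p_control - p_treat
power = norm.cdf(diff / se - z_crit) + norm.cdf(-diff / se - z_crit)

print(f"Absolute mortality difference to detect: {diff:.3%}")
print(f"Power to detect it: {power:.0%}")
```

Under these assumptions, the power comes out in the range of roughly 15% to 20%, far below the conventional 80% target, which is why a null finding from a study of this size cannot rule out a meaningful mortality effect.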

Looking beyond RCTs to larger natural experiments provides a more complete picture.

Because RCTs tend to be poorly suited to studying rare outcomes, researchers have turned to quasi-experimental approaches, according to the analysis. These studies exploit differences in the timing of Medicaid eligibility expansions across states and years, allowing for much larger sample sizes and longer time frames.

“Several recent studies have found robust evidence that Medicaid reduces mortality,” the JAMA Health Forum authors wrote.

These studies were particularly valuable not only because they had the statistical power to detect effects, but also because they reflected the context of large, geographically diverse Medicaid expansions.

The Medicaid mortality debate illustrates broader lessons for evidence-based policymaking. For instance, a null result is informative only if the study is precise enough to rule out effects large enough to matter. In the Oregon experiment, the mortality estimate was consistent with both substantial reductions and increases in mortality. Its findings did not show that Medicaid had no effect; they were simply too imprecise to support conclusions about this outcome.

Additionally, while RCTs are useful for establishing cause and effect, they don’t always apply well to real-world policy situations. Natural experiments, though less controlled, can capture larger populations and provide the power needed to study rare but consequential outcomes such as mortality.

Finally, the authors stressed the need to piece together evidence across multiple study designs.

“Credible conclusions about infrequent outcomes like mortality emerge from integrating evidence across multiple approaches,” they argued.

Overall, the authors found that no single study is sufficient for evaluating a major policy change such as expanding or cutting Medicaid, but taken together, the body of research can guide decision-making.

For health leaders, the evidence highlights the real-world stakes of these policy decisions. For payers, providers, and policymakers, the priority is to interpret evidence with caution and in context. Relying solely on RCTs or on underpowered studies risks overlooking the broader picture.
