Cost-Effectiveness Analysis: How Should We Deal With Skeptics?

Issue 3

 
 

SCOTT RAMSEY, MD, PHD

Senior Partner and Chief Medical Officer, Curta
Adjunct Professor at the University of Washington, School of Pharmacy, CHOICE Institute
Professor at the University of Washington, School of Medicine

 

I recently received a blistering critique from a clinician who reviewed a cost-effectiveness analysis that I submitted to a medical specialty journal. Our study compared two treatments for patients with leukemia and lymphoma. The comments started with questions about the validity of the randomized controlled trial in which our cost-effectiveness analysis was embedded. They then escalated into a broad attack on economic evaluation in general, questioning its legitimacy as a research field and rejecting the idea that anyone should make a decision about the technology based on our findings.

Wow. Readers who have published cost-effectiveness analyses have probably had a similar experience. As someone who has now received enough of these types of comments to fill a small book, I found that this latest tirade got me thinking: perhaps sharing a few thoughts on how I deal with criticism of our cost-effectiveness studies might be helpful to others. With that in mind, here’s a summary of my approach to dealing with critics and skeptics:

1.     Is it possible that they have a valid point?

“Always consider the possibility that you may be wrong. Especially when you are absolutely certain you are right.”

The author William Brinkley once famously said: “Always consider the possibility that you may be wrong. Especially when you are absolutely certain you are right.” It can be difficult not to be defensive when receiving criticism of a project that you have spent months or years working on and feel you know intimately. When faced with a challenge to a study, it’s worthwhile stepping back and imagining that you are an expert in something other than cost-effectiveness analysis who has just read the paper. Our studies are by nature highly complex syntheses of multiple studies and data elements from epidemiology, clinical medicine, economics, and policy, to name a few. Someone who spends their time in one of those fields might have a useful insight, even if they choose to deliver it caustically.

2.     What is the core of their argument?

Sometimes, critics have an issue with a single parameter or assumption that, in their view, threatens the validity of the study. If that seems to be the case, consider how to respond to that issue specifically. If the critique includes multiple complaints, try to find a theme or a cross-cutting issue. Sometimes laundry lists of posited shortcomings are valid, but often they point to the fact that the reviewer has an issue with the base-case incremental cost-effectiveness ratio (ICER)[1]. If the critic is a peer reviewer for a journal where the paper has been submitted, consider whether it is possible to acknowledge their issue by modifying one or more model parameters. Does it fundamentally change the result? If not, it is useful to walk the reviewer through the process and share the findings. If modifying the suggested value(s) strongly influences the main finding, consider the likelihood that the reviewer’s estimates are more accurate (see #1). In that case, it becomes a matter of either defending the original choices or making the changes and revising the primary result, and possibly the conclusion.
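As a concrete illustration of that kind of check, here is a minimal sketch in Python, using entirely hypothetical costs and QALYs, of recomputing an ICER after swapping in a reviewer’s suggested value; none of the numbers below come from the study discussed in this piece.

    # Minimal sketch (hypothetical numbers): does a reviewer's suggested
    # value fundamentally change the base-case ICER?

    def icer(cost_new, cost_old, qaly_new, qaly_old):
        """Incremental cost-effectiveness ratio: extra cost per extra QALY."""
        return (cost_new - cost_old) / (qaly_new - qaly_old)

    # Base-case inputs (illustrative only)
    base = dict(cost_new=180_000, cost_old=95_000, qaly_new=3.2, qaly_old=2.4)

    # Suppose the reviewer believes the QALY gain is smaller
    reviewer = dict(base, qaly_new=3.0)

    print(f"Base-case ICER:  ${icer(**base):,.0f} per QALY")
    print(f"Reviewer's ICER: ${icer(**reviewer):,.0f} per QALY")

In this toy example the base-case ratio is about $106,000 per QALY and the reviewer’s variant is about $142,000 per QALY; whether that “fundamentally changes the result” depends on the threshold the audience cares about, which is exactly the conversation worth having with the reviewer.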

If the concern is broader (for example, that the wrong comparators were used, or something that would affect the model structure), it is worth consulting with others before responding. I recall an example where I was told that I had picked the wrong comparator for an evaluation of a new cancer therapy, but when I asked experts, they assured me that the one suggested by the critic was an emerging therapy that didn’t yet have enough evidence and experience behind it to be part of standard practice. This points to another frequent issue behind a criticism: is there a bigger agenda behind the argument?

3.     Is there an advocacy angle?

“Maybe we should all just die and not call 911.”

This was the criticism I received from a surgeon who read a cost-effectiveness study I published that showed an unfavorable ICER for a new surgical procedure for patients with emphysema. A little googling allowed me to find out that he was one of the early developers of the technique and had an equity stake in a technology that was integral to the procedure. I had just gored this individual’s ox, metaphorically speaking.

Over the years, I have found that the most frequent critics of cost-effectiveness studies are groups who have a professional or personal stake in the intervention of interest. Do they have a right to be upset? Yes and no. Many advocates view cost-effectiveness studies as threatening to their world view or perhaps their very livelihoods. That might be the case if insurers and healthcare authorities used these studies to broadly deny access to therapies; in practice, at least in the United States, this is rarely the case. Yes, emerging therapies often are problematic: the techniques haven’t been perfected, the right patient population hasn’t been identified, and managing adverse effects is a work in progress. Perhaps a cost-effectiveness study might modestly influence initial access, but is that a bad thing? I don’t personally have a problem pointing out that advocates often want their technologies to be used earlier and more widely than the evidence supports. Reimbursement restrictions fall away as the evidence of safety and effectiveness improves. In these cases, being a little generous with the commenter’s perspective and acknowledging the importance of their expertise or product can go a long way toward defusing an argument.

I am much more sympathetic to concerns raised by patients, particularly when the technology in question holds promise for substantially improving or even curing a disease that afflicts them. Sometimes the problem is simply that the price is too high for the benefit. Most insured patients will pay only a fraction of the price, unless of course it isn’t covered by their plan, so it’s easy to relate to their concern about access. As noted, broad restrictions rarely happen in the United States, and perhaps aggressive negotiation on price is warranted. Of course, when the benefit is highly uncertain, the best point of access for emerging technologies is clinical trials, but this may not be an option for many. In some cases, when an expensive technology works well, cost-effectiveness analysis can show that a high-priced technology is still high value because it offers so much benefit. Often the problem is that we cannot be very certain this is the case, because of limitations in the evidence. This is worth pointing out (with as little jargon as possible): uncertainty is a big reason for caution when the risks and benefits are not yet clear and the price is high.

4.     Is the problem that there is a lot of uncertainty?

Those of us who live in a world of estimating probabilities sometimes forget that (most) others see the world a little differently. Saying that there is a 30% chance that an intervention will be cost-effective at a willingness-to-pay threshold of $100,000 per quality-adjusted life year (QALY) is not always heard the way we hope it will be. In short, people often use heuristics to simplify a complex question: “You’ve just said that the technology is low value and should be restricted.” Evaluations of technologies, particularly those in very early phases of clinical use, are by nature exercises in uncertainty. That doesn’t take away from the point of the exercise: a cost-effectiveness study is designed to synthesize an array of evidence that is by definition imperfect in order to estimate the potential value of the technology for patients and society. This is the reality that gets lost: we have to make a decision about what to do with a new technology, and sometimes the best decision doesn’t feel satisfying at all. I acknowledge this in my responses: there is a chance that we will make a wrong decision. We can always do more studies, but they are costly, sometimes not feasible, and will delay the decision even further (see Value of Information analysis [2]).
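For readers who wonder where a number like “a 30% chance of being cost-effective at $100,000 per QALY” comes from, here is a minimal Python sketch of the usual calculation from a probabilistic sensitivity analysis: the share of simulation draws in which the health gain, valued at the threshold, exceeds the extra cost. The distributions below are hypothetical placeholders, not values from any study of mine.

    # Minimal sketch (hypothetical inputs): probability of cost-effectiveness
    # at a willingness-to-pay threshold, from probabilistic sensitivity analysis.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000                    # number of simulation draws
    wtp = 100_000                 # willingness-to-pay threshold, $ per QALY

    # Simulated incremental costs and QALYs (placeholder distributions)
    delta_cost = rng.normal(90_000, 25_000, n)   # incremental cost, $
    delta_qaly = rng.normal(0.7, 0.3, n)         # incremental QALYs

    # Net monetary benefit: value of the health gain minus the extra cost
    nmb = wtp * delta_qaly - delta_cost

    prob_ce = (nmb > 0).mean()
    print(f"Probability cost-effective at ${wtp:,}/QALY: {prob_ce:.0%}")

With these placeholder inputs the answer lands near 30%, exactly the kind of number a reader may round off to “low value.” The point of the paragraph above is that the number describes uncertainty, not a verdict.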

5.     Not responding might be the best strategy

Critiques often come with statements or positions that are frankly ridiculous. Sure, it can be easy (and satisfying!) to push back against clearly wrong or heavily biased comments. But over the years, I have learned the value of Upton Sinclair’s famous comment: “It is difficult to get a man to understand something, when his salary depends on his not understanding it” [3]. Better just to let it go.

In summary, those who work in cost-effectiveness analysis for any amount of time will be criticized, sometimes fiercely. Working under the expectation that one will be challenged can be a good thing: it makes us work harder to represent the problem we are studying as accurately and thoroughly as possible. When we receive a strong criticism, it’s good practice to walk a bit in the critic’s shoes, so to speak, before responding.

As for the reviewer I mentioned at the beginning? We acknowledged their concerns but chose not to respond specifically to their comments. The paper is now in press.

 
 

Scott Ramsey, MD, PhD

Senior Partner and Chief Medical Officer, Curta

 
 

FOOTNOTES:

  1. ICER – incremental cost-effectiveness ratio, AKA the “bottom line”.

  2. Value of Information analysis (VOI) is an approach that estimates the economic return on reducing the chance of making the wrong decision; i.e., covering a technology that doesn’t work, or denying access to a technology that is beneficial. To reduce that risk, one needs to run more clinical trials, but those trials are costly and delay broad access to the technology. (A rough numerical sketch of one VOI quantity follows these footnotes.)

  3. I acknowledge the dated gendering of this quote. It, of course, applies to everyone.
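To make the idea in footnote 2 a bit more concrete, here is a minimal Python sketch of one common VOI quantity, the expected value of perfect information (EVPI), computed from the same kind of simulation output as the sketch above; the inputs are again hypothetical.

    # Minimal sketch (hypothetical inputs): expected value of perfect
    # information (EVPI) per decision, a common Value of Information quantity.
    import numpy as np

    rng = np.random.default_rng(1)
    n, wtp = 10_000, 100_000

    delta_cost = rng.normal(90_000, 25_000, n)
    delta_qaly = rng.normal(0.7, 0.3, n)
    nmb = wtp * delta_qaly - delta_cost      # incremental net monetary benefit

    # Deciding now with current evidence: adopt only if the expected
    # incremental benefit is positive (the comparator's benefit is 0 here).
    value_current_info = max(nmb.mean(), 0.0)

    # Deciding with uncertainty fully resolved: make the right call in
    # every simulated state of the world.
    value_perfect_info = np.maximum(nmb, 0.0).mean()

    evpi = value_perfect_info - value_current_info
    print(f"EVPI per patient decision: ${evpi:,.0f}")

If the EVPI, scaled up to the affected population, is small relative to the cost of another trial, the additional study is hard to justify; if it is large, the trial may be worth the delay.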
