How to spot when drug companies spin clinical trial results

Investors, analysts, doctors, and even patients face an avalanche of news from biotech companies about their human trials of experimental drugs, and wading through all that data to draw reasonable conclusions is a challenging task. This week, STAT has published a 2023 update of its Guide to Interpreting Clinical Trial Results, which can help consumers of company readouts navigate the process.

The update offers new examples to illustrate the terms, metrics, and numerous red flags included in the original 2020 report, which was authored by senior biotech writer Adam Feuerstein and the late Sharon Begley, who was STAT’s senior science writer until her death in 2021. The new version retains “all the lessons and pithy advice” of the original report, writes clinical trials expert and physician Frank David in his introduction, noting that they are “more useful than ever today.” 


Updating the report was a solo assignment for Adam. We asked him about some of the key elements of the report, as well as about the genesis of the original report and the experience of working with Sharon on it.

STAT’s Guide to Interpreting Clinical Trial Results makes it clear that all too often the readouts offered publicly by biotech companies are anything but straightforward. Just how bad — or perhaps we should ask how creative — is “spin” today compared to when the STAT guide was first issued?

Spin is eternal. As long as companies (and scientists) conduct clinical trials, there will be efforts to cast negative results in a more favorable light. The tactics used to spin bad data also evolve, but probably more slowly than you'd expect. I'm more amazed at how often companies stick to the standard spin playbook in the mistaken belief that, maybe, this time, it will work. It doesn't.


What is the most common example of spin in readouts coming out now?

Post hoc subgroup analyses will always be popular. Trying to salvage a negative study by identifying, after the fact, a smaller group of patients in which the results look more favorable is such a common spin tactic that STAT commissioned an entire report on the subject. In case you were wondering, no, subgroup analyses rarely work.
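The statistical reason subgroup rescues rarely work is multiple comparisons: slice a null result enough ways and some slices will look "significant" by chance alone. Here is a minimal, hypothetical simulation of that effect in Python; the trial size, the 20 subgroups, and the zero true treatment effect are all illustrative assumptions, not taken from any real study.

```python
# Illustrative sketch: why post hoc subgroup analyses mislead.
# We simulate a trial in which the drug truly does NOTHING, then
# slice the patients into many after-the-fact subgroups and test
# each one. All numbers below are made up for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_per_arm = 200      # hypothetical patients per arm
n_subgroups = 20     # post hoc slices (age bands, biomarkers, etc.)

# Drug and placebo outcomes drawn from the SAME distribution,
# so the true treatment effect is zero by construction.
drug = rng.normal(0.0, 1.0, n_per_arm)
placebo = rng.normal(0.0, 1.0, n_per_arm)

# Assign each patient to one of the post hoc subgroups at random.
drug_groups = rng.integers(0, n_subgroups, n_per_arm)
placebo_groups = rng.integers(0, n_subgroups, n_per_arm)

false_positives = 0
for g in range(n_subgroups):
    d = drug[drug_groups == g]
    p = placebo[placebo_groups == g]
    if len(d) > 1 and len(p) > 1:
        _, pval = stats.ttest_ind(d, p)
        if pval < 0.05:  # conventional significance threshold
            false_positives += 1

print(f"'Significant' subgroups despite zero true effect: "
      f"{false_positives}/{n_subgroups}")
```

With 20 slices each tested at p < 0.05, roughly one spurious "winning" subgroup is expected even when the drug does nothing at all, which is why subgroups identified after the fact so rarely hold up in confirmatory trials.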

I’m also seeing more companies trying to circumvent scrutiny by selectively disclosing study results, or omitting relevant information that would undermine the rosy conclusions they want to communicate. I’m referring to tactics like burying bad news at the very bottom of press releases, or offering a less-than-complete accounting of side effects. Properly evaluating the results of a clinical trial requires paying as much attention to what isn’t disclosed as to what is.

Can you give us the most egregious recent example of spin?

My recent favorite was a company’s presentation of study results claiming to show that its experimental drug not only slowed cognitive decline in people with Alzheimer’s, but in some cases improved cognition. That would have been a big win, except that digging deep into the company’s data showed clearly that a majority of the people enrolled in its study didn’t even have Alzheimer’s. Duh! The truth: The drug did nothing.

Controlled trials with patients “blinded” and randomized to different treatment arms remain the gold standard of study design, but you note that such studies are not always possible — for instance, new cancer drugs are most commonly approved based on single-arm clinical trials. How have study designs evolved in recent years, and how might that affect the ability to assess study results?

Data from randomized studies are best, but you’re right, there are some instances where single-arm studies are appropriate and accepted, such as in rare, life-threatening diseases where the use of a placebo may not be ethical. Even so, it’s more challenging to assess the benefit of a drug tested in a single-arm study. And while the FDA continues to approve some cancer drugs based on tumor-response data collected in single-arm studies, the agency is increasingly asking for hybrid studies, where response-rate data might be collected in the first part of a study and more stringent survival data are collected in a second part.

Tell us a little bit about how the 2020 guide came about, your experience in collaborating with Sharon on it, and how it felt to return to it to write the 2023 update on your own. 

The idea for writing the 2020 guide emerged from a webinar (I guess we’d call them Zoom meetings these days) on the same topic that Sharon Begley and I did for STAT readers in December 2018. I remember the day well, and fondly, because Boston was buried under a blizzard, but we still somehow managed to trek into STAT’s office. Working with Sharon was a thrill. She not only knew more than anyone about analyzing clinical trials and detecting spin, but was also a gifted and generous teacher.

Revisiting the guide for 2023 gave me an opportunity to look back at Sharon’s work. To no one’s surprise, her contributions needed almost no revisions. That’s Sharon. Timeless. I miss her so much, as does the entire STAT family.