Data from many drug trials for stroke go unpublished
Clinically useful data may lie buried, researchers say.
By Janet Raloff
Each year, some 15 million people suffer a stroke, a sudden interruption of the blood supply to the brain that is usually due to ischemia, the blockage of arteries. Yet important details from roughly one in five drug trials for the acute treatment of ischemic stroke (far and away the most common type) have never entered the public domain, a new study finds.
The masked data come from 125 trials that together involved more than 16,000 participants and tests of 89 different drugs.
Burying trial data – whether intentional or not – “sets aside the altruism of participants” to become guinea pigs for the greater good, says study author Peter Sandercock, a neurologist specializing in stroke. At least as importantly, he maintains, it “potentially biases the assessment of the effects of therapies and may lead to premature discontinuation of research into promising treatments.”
His team at the University of Edinburgh, Scotland, investigated many hundreds of trials. Although some date back to the 1950s, Sandercock observes that trials of therapies for this type of stroke didn’t really take off until the 1980s. (Research on the less-common hemorrhagic stroke still awaits its heyday, he says.) That means most trials surveyed in the new study took place in recent decades.
After compiling a list of all known trials, the researchers scoured databases and journal archives for evidence that details and outcomes had been made publicly available. And indeed, most trials were eventually described in peer-reviewed journals. Although these reports might have had deficiencies, Sandercock acknowledges, such citations were deemed to indicate that a trial had been “fully” published.
But many drug trials that were mentioned in press clippings or conference abstracts appeared to have no corresponding journal article. Other trials with similarly untraceable data had been briefly alluded to only in press releases, perhaps by participating hospitals or funding sources. In all of these instances, the Edinburgh team pored over the literature to find possible investigators associated with the trials, and then contacted them, sometimes repeatedly, asking whether their data had ever been formally published.
Unless they learned otherwise, the researchers now assume a trial was never formally published. And that its raw data or report to funding agencies probably languishes within the files of some hospital – or maybe sits boxed in some researcher’s attic.
The nondisclosure of data from such long and costly trials likely traces to a host of reasons, including a medical-publishing system that prizes novel and/or dramatic data, even if they might not represent the best or statistically strongest findings. That said, even data from small, weak or poorly designed trials should be made publicly available, Sandercock argues. If journals pass on publishing these or the authors lose interest when hypotheses don’t pan out, there should be alternative outlets for the data, he says.
Some findings might hint at ineffective therapies that should be retired from service even if they are inexpensive and widely used.
Other stats or observations might point to possible risks deserving further follow-up. For instance, the new study uncovered mortality data associated with 22 unpublished trials that turned up 636 deaths. “No information was available on whether the experimental drug had contributed to any of those deaths,” Sandercock allows. But most physicians would like to know whether there were any common features that tended to distinguish those who succumbed, such as age, gender, weight, severity of initial stroke, or accompanying illnesses such as diabetes.
Details of the Edinburgh analysis were published online April 22 in Trials.
Responding to an editorial
It’s hardly the first study to find that the data from many clinical trials languish or become buried. It’s not even the first such study focusing on stroke trials. But it is the most comprehensive.
I asked Sandercock what prompted his delving into the tedious morass. And he pointed to a 2008 commentary in Practical Neurology by Kameshwar Prasad on the public health implications of delays in the publication of findings from clinical trials. In it, Prasad pointed to questions about trials involving two drugs for acute ischemic stroke: piracetam and fraxiparine. Early trials of each drug suggested they worked (although in piracetam’s case, possibly only in a subgroup of treated individuals).
But a follow-up trial of piracetam began in 1998. And as of the publication date of Prasad’s editorial, no journal had published clues to whether the new trial confirmed the initial one’s findings. And that, argued the New Delhi-based neurologist, “raises the suspicion that the results may be neutral or unfavourable to piracetam. If this is true, patients with stroke are receiving an ineffective and potentially harmful treatment.”
Moreover, he asserted, because this drug is used primarily in developing countries, like India, where people don’t tend to have insurance, poor patients may be needlessly shelling out what little money they have for no medical gain. Which would certainly compound the human tragedy, he says.
In the case of fraxiparine, the initial trial that suggested it worked involved 312 patients and was published in the New England Journal of Medicine. Which means it got a lot of attention. A far larger trial failed to confirm the drug’s promise, Prasad said. But sketchy details of its findings were published in abstract form only – in 1998 – and to this day remain off the radar screen of most neurologists. “Hence,” Prasad concluded, “thousands of patients with acute ischaemic stroke continue to incur the risks and costs associated with the use of this drug without any certainty at all that it improves their outcome.”
Sandercock had read Prasad’s commentary shortly before medical student Lorna Gibson approached him looking for a research project to tackle. Sandercock suggested she comb through the Cochrane Stroke Group website, an archived repository of publicly available information on stroke treatments – including press releases or newspaper clippings mentioning trials planned or in progress.
Gibson accepted the challenge, and the new study was off and running. The Edinburgh group extended its investigation to other databases as well. And in the end, Gibson identified 940 clinical trials that seemed to fit the bill. Further checking would indicate that some didn’t. They might have involved the wrong type of stroke or had other problems that took them out of contention.
Despite all that, her group showed that some 20 percent of all trials seemed to remain unpublished.
Not just lost data, but a huge “waste”
And such a finding points to a massive waste. Not only of time, but also of money and patients’ good will, argue Iain Chalmers of the James Lind Initiative in Oxford, England, and Paul Glasziou of the University of Oxford’s Centre for Evidence-Based Medicine. Last year, the pair published their own commentary in the Lancet that examined the causes and degree of waste that can occur at every stage in a clinical trial.
It starts when researchers decide what to study. And that often is not the most pressing issue affecting patients with a particular disease, they note. Nor does it necessarily focus on diseases affecting the most people. Which might not be a big deal if financing for such trials were not tight, restricting how many will ultimately be conducted. Some studies also suffer from serious design problems – ones that will compromise the value of or ability to interpret whatever they turn up. Then there’s the issue of whether a trial’s findings will get published. And if they do, whether they are presented in a way that does not seriously bias their interpretation, rendering them “unusable.”
The bottom line: Together, these problems compound the magnitude of waste. Immensely. Chalmers and Glasziou conclude that “the dividends from tens of billions of dollars of investment in research are lost every year because of correctable problems.” And while their analysis focused on problems with clinical trials, they said “we believe it is reasonable to assume that the problems also apply to other types of research.”
Even climate impacts…
As an interesting addendum, Chalmers sent me an email yesterday indicating that the issue of waste extends well beyond dollars, pounds sterling, or even rupees. He pointed to a pollution estimate published last year in the British Medical Journal that suggested carbon-dioxide emissions associated with the average randomized medical trial come to 78.4 metric tons.
Responding to the implications of that assessment, Chalmers and Glasziou fired back a letter pointing out that “This carbon cost occurs whether or not a trial is published. Every year an estimated 12,000 trials which should have been fully reported are not. Hence under-reporting of trials wastes just under a million tonnes of carbon dioxide annually (the equivalent of carbon emissions from about 800,000 round trip flights between London and New York).”
Okay, that number is a back-of-the-envelope calculation. But it does reinforce that there are many ways to tally the costs of not publishing trial data. And they’re all large.
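For readers who want to retrace it, the arithmetic behind that letter is simple to reconstruct. A quick sketch, using only the figures quoted above (78.4 metric tons per trial from the BMJ estimate, and the letter’s own assumptions of roughly 12,000 unreported trials and 800,000 flights per year):

```python
# Back-of-the-envelope check of the carbon figures quoted above.
TONNES_PER_TRIAL = 78.4      # BMJ estimate: CO2 per average randomized trial
UNREPORTED_TRIALS = 12_000   # estimated trials per year never fully reported
FLIGHTS = 800_000            # London-New York round trips cited in the letter

wasted = TONNES_PER_TRIAL * UNREPORTED_TRIALS
print(f"CO2 wasted annually: {wasted:,.0f} tonnes")  # 940,800 -- "just under a million"

# The flight comparison implies this much CO2 per round trip:
per_flight = wasted / FLIGHTS
print(f"Implied CO2 per round-trip flight: {per_flight:.2f} tonnes")  # 1.18
```

The totals check out: 12,000 trials at 78.4 tonnes apiece is about 940,800 tonnes, and dividing by 800,000 flights implies roughly 1.2 tonnes of carbon dioxide per transatlantic round trip, consistent with the letter’s comparison.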