Monday, September 3, 2007
Article on cnn.com -- Study: Drug-Coated Heart Stents May Not Be So Bad After All, from Dow Jones. Here's an excerpt:
On Sunday, James and colleagues from Sweden presented follow-up results from data reported in December at the FDA safety hearing, and later published in the New England Journal of Medicine. They said that at that time, based on three years of data, patients with drug-eluting stents had an 18% increased chance of dying compared to patients with bare metal stents.
With more patients included and an extra year of data, the numbers now tell a different story.
After four years of tracking patients with drug-lined stents, James said that there was now no significant difference between patients who received the drug stents versus those who received the bare metal ones: patients with drug stents had only a 1% increased chance of dying. Newer drug stents are also better than earlier versions, some of which had to be recalled.
Experts are not entirely sure what might explain the research reversal, but more selective stent use might help explain the change, they said.
In the last year, use of drug stents has dropped dramatically. James said that in Sweden, only about 15% of eligible patients were now receiving them, compared to nearly 60% in previous years. And in the U.S., use has dropped from more than 90% of eligible heart patients to about 70%.
The article also notes that the drop in stent use following the initial study results led manufacturer Johnson & Johnson to cut 5,000 jobs.
The change in the stent study results is a cautionary reminder of the need to make informed policy judgments based on a complete scientific record. The most egregious example in mass tort litigation is the silicone breast implant litigation, which led to the bankruptcy of a major company, Dow Corning, based on an incomplete scientific record. When the final scientific verdict was in -- too late for Dow Corning and the other defendants -- the Institute of Medicine issued a report rejecting the medical causation theory that underlay plaintiffs' claims.
But how do we know whether a new study finding danger is accurate or inaccurate (the result of confounding, chance, or bias)? Phenylpropanolamine (PPA), an ingredient in cough-cold medications and appetite suppressants, was one of the most widely used drugs in the United States for decades, but a single epidemiological study (based primarily on findings involving a subset of only a handful of stroke patients who had taken PPA in appetite suppressants) led to the FDA's requesting that the drug be voluntarily withdrawn from the market. (For a critique of the study, see stier_hennekens_ppa_and_hs_in_the_hsp_annals_of_epidemiology_2006.pdf.) What's worse, removing a drug from the market based on an incomplete record likely forecloses further research, which leaves litigants with a necessarily incomplete scientific record on which to base their claims.
So what's a judge to do? I would argue that judges should continue to adhere strictly to Daubert's dictate that only expert opinions based on reliable science pass through the gates -- even if further studies do not appear to be forthcoming. Justice requires that liability be fixed and payments required only where there is a solid basis for the claim, including medical causation. But where the underlying science meets Daubert evidentiary standards yet may be incomplete, defendants need to be ready not only to present any cogent criticisms of the study, but also to point to examples where the preliminary science was wrong. A glaring one: in 1981 the New England Journal of Medicine published an article finding that coffee drinking might be producing more than half of the cases of pancreatic cancer in the U.S.; later studies failed to confirm those findings. Starbucks, I'm sure, is happy the scientific record didn't remain incomplete.