The institution of peer-reviewed, published research is not a magic wand that ensures that all findings are disseminated to all relevant parties. I wrote in Genesis and Genes about an important paper that appeared in Nature in March 2012:
During a decade as head of global cancer research at Amgen, [Glenn] Begley identified 53 “landmark” publications – papers in top journals, from reputable labs – for his team to reproduce. Begley sought to double-check the findings before trying to build on them for drug development. Result: 47 of the 53 studies (89%) could not be replicated. He described his findings in a commentary piece published in the journal Nature in March 2012. In a Reuters report, Begley said “It was shocking. These are the studies the pharmaceutical industry relies on to identify new targets for drug development… As we tried to reproduce these papers we became convinced you can’t take anything at face value.”

Begley’s experience echoes a report from scientists at Bayer AG. In a 2011 paper titled Believe it or not, they analyzed in-house projects that built on “exciting published data” from basic science studies. “Often, key data could not be reproduced,” wrote Khusru Asadullah, vice president and head of target discovery at Bayer HealthCare in Berlin, and colleagues. Of 47 cancer projects at Bayer during 2011, less than one-quarter could reproduce previously reported findings, despite the efforts of three or four scientists working full time for up to a year. Bayer dropped the projects.
Bayer and Amgen found that the prestige of a journal was no guarantee a paper would be solid. “The scientific community assumes that the claims in a preclinical study can be taken at face value,” Begley and Lee Ellis of MD Anderson Cancer Center wrote in Nature. They and others fear the phenomenon is the product of a skewed system of incentives that has academics cutting corners to further their careers. Part way through his project to reproduce promising studies, Begley met for breakfast at a cancer conference with the lead scientist of one of the problematic studies. “We went through the paper line by line, figure by figure,” said Begley. “I explained that we re-did their experiment 50 times and never got their result. He said they’d done it six times and got this result once, but put it in the paper because it made the best story. It’s very disillusioning.”
A reader has now kindly drawn my attention to a TED talk by Dr. Ben Goldacre. The talk is entitled What Doctors Don’t Know about the Drugs they Prescribe. [Goldacre refers to the Nature paper mentioned above at about 02:00]. The points made by Dr. Goldacre are relevant to all branches of science.
Early on in the talk, Dr. Goldacre refers to a case study of the drug Lorcainide. This drug was designed to suppress arrhythmia, the irregular beating of the heart. It was thought that since patients often suffer from arrhythmia after a heart attack, suppressing the irregular heartbeat might be salubrious. In a preliminary study, fifty heart-attack patients were given Lorcainide, while another fifty patients were given a placebo. In the first group, ten patients died; in the second, only one patient died. The researchers concluded that Lorcainide was dangerous. But because this trial was deemed a failure – the drug had no commercial prospects – the study was never published. This had tragic consequences. Not knowing the results of the unpublished study, other research groups in the following years, who also thought that arrhythmia-suppressing drugs had medical potential, brought similar medicines to market. According to Goldacre, this led to the unnecessary deaths of more than 100,000 patients.
A little further into his talk, Dr. Goldacre discusses the drug Reboxetine, manufactured by Pfizer. With some asperity, Dr. Goldacre announces that he was misled by the published results regarding this drug, as indeed he was. Seven clinical trials were conducted to test the effectiveness of Reboxetine. One trial produced a positive result, i.e., the drug performed better than a placebo, and was published. Six trials produced negative results – and were not published. Goldacre says that he – and presumably thousands of other doctors – freely prescribed this drug on the basis of the published results.
Next, Dr. Goldacre turns to all the antidepressant trials submitted to the FDA for approval over a 15-year period. There were 38 positive trials and 36 negative trials submitted to the FDA. But when one looks at the publication record of these studies in the peer-reviewed academic literature, one finds that of the 38 positive trials, 37 were published; of the 36 negative trials, only 3 were published.
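The distortion these figures produce can be made concrete with a back-of-the-envelope calculation. The trial counts below are the ones Goldacre cites; the arithmetic comparing the two vantage points is my own illustration:

```python
# Figures cited by Goldacre: antidepressant trials submitted to the FDA
positive_submitted, negative_submitted = 38, 36
positive_published, negative_published = 37, 3

# The evidence as the FDA saw it: roughly an even split
fda_positive_share = positive_submitted / (positive_submitted + negative_submitted)

# The evidence as a reader of the journals saw it: overwhelmingly positive
literature_positive_share = positive_published / (positive_published + negative_published)

print(f"Positive share of trials at the FDA:      {fda_positive_share:.1%}")
print(f"Positive share of trials in the journals: {literature_positive_share:.1%}")
```

The same body of research reads as a coin flip at the regulator (38 of 74 trials positive, about 51%) and as a resounding success in the literature (37 of 40 published trials positive, about 92%).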
Dr. Goldacre peppers his talk with pungent comments about the rotten state of things. He says that publication bias is so prevalent that it cuts to the core of evidence-based medicine; the state of affairs “is a systematic flaw in the core of medicine”; he uses phrases like “This is a disaster” and terms like “cancer” to describe the situation.
The issues discussed by Dr. Goldacre in his TED talks and in his books are by no means limited to medical science. Publication bias is a systemic problem within science. One manifestation of this is that research that does not conform to basic tenets of sundry paradigms is simply not published, thus distorting the impression that scientists and the public form about important topics.
I discussed this at some length in Genesis and Genes. I pointed out that the problems begin right at the outset of one’s university training. Laboratory exercises, in which students ostensibly investigate some phenomenon, are actually exercises in reproducing textbook results. I quoted the distinguished evolutionary biologist Massimo Pigliucci (who has doctorates in genetics, botany, and the philosophy of science):
Things are often only marginally better in college or university classes… Worse yet, most of these exercises are “prepackaged” labs designed to obtain a predetermined outcome, which often enough does not occur because of the carelessness of both students and teaching assistants. The latter are then tempted to do the worst thing they could possibly do in teaching science: tell the students that they should have gotten result X instead, and to write up their reports as if they had. Is it a surprise, then, that the whole enterprise becomes meaningless and that most students think science is either too difficult for them to grasp or, worse, is actually done by cooking the results to come out according to a priori expectations…
I pointed out that in the vast majority of science programs, no modules are offered in the psychology of research or the history/philosophy of science. The result is that science undergraduates who are unaware of the myriad biases which affect research become scientists who are often unaware of the biases affecting the publication of research results.
In this context, I referred to a talk given by Professor Michael Merrifield, an astronomer at the University of Nottingham. Merrifield was discussing a purely technical issue – measuring the distance between our Sun and the centre of the Milky Way galaxy. This is not anthropology or sociology, but the research is nonetheless subject to psychological forces:
And, more worrying, is something that scientists like to push under the carpet… there’s psychology in this as well. If, in 1985, I made a measurement of the distance [from the Sun] to the centre of the galaxy when everyone said it was ten kilo-parsecs, and I got an answer that said it was seven kilo-parsecs, I would have thought, “Well, I must have done something wrong” and I would have stuck it in some filing cabinet and forgot about it; whereas if I had got an answer that agreed with the consensus, I’d probably have published it… In this error process, there’s also psychology. As I say, scientists are very uncomfortable about this, because we have this idea that what we are doing is objective and above such things. But actually, there is a lot of human interaction and psychology in the way we do science.
This phenomenon – sticking your results into a filing cabinet because they stray uncomfortably far from the consensus – is common in contemporary science. The failure to publish anomalous or “wrong” results is as prevalent in astronomy, physics or biology as it is in medical science.
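The filing-cabinet effect Merrifield describes can be sketched numerically. Every value below – the “true” distance, the consensus figure, the measurement scatter, and the publication cutoff – is hypothetical, chosen only to illustrate the mechanism:

```python
import random

random.seed(0)

TRUE_DISTANCE = 7.0   # hypothetical true value, in kiloparsecs
CONSENSUS = 10.0      # hypothetical prevailing consensus value
NOISE = 1.5           # hypothetical measurement scatter

# Simulate 1000 independent measurements of the same quantity.
measurements = [random.gauss(TRUE_DISTANCE, NOISE) for _ in range(1000)]

# "Filing-cabinet" rule: only results within 2 kpc of the consensus
# are deemed publishable; the rest go in the drawer.
published = [m for m in measurements if abs(m - CONSENSUS) <= 2.0]

mean_all = sum(measurements) / len(measurements)
mean_published = sum(published) / len(published)

print(f"Mean of all measurements:       {mean_all:.2f} kpc")
print(f"Mean of published measurements: {mean_published:.2f} kpc")
```

Because only results that land near the consensus survive the filter, the published average drifts toward the consensus regardless of the true value – no individual dishonesty required, just a thousand small decisions about what is worth writing up.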
Publication bias is bad science (One of the books authored by Dr. Goldacre is entitled Bad Science, and one of his TED talks is called Battling Bad Science). And awareness of this aspect of modern science is crucial in the process of becoming an informed consumer of science. The picture conveyed to the public – whether the topic is climate change, evolutionary biology, cosmology or a host of other areas – is by no means one that reflects all the research being done. It is distorted by numerous factors. Sometimes, there are financial factors (as in the case of Lorcainide); sometimes, there are political factors (an example is climate science); sometimes, there are paradigm considerations (read about Daniel Shechtman, Robin Warren and Alfred Wegener). The bottom line is that the final picture is often significantly incomplete.
A reader of Genesis and Genes who trained as an aeronautical engineer wrote to me to say that, upon reading this passage in the book, he was struck by the realization that this had happened to him repeatedly throughout his university career, without his being consciously aware of the phenomenon.