Archive for May, 2013

Peer Review

May 27, 2013

One factor that clearly distinguishes informed consumers of science from the general public is the attitude these two groups have towards the process of peer review. The general public entertains unrealistic, highly idealised visions of the process by which scientific research is assessed by peers. In theory, peer review is supposed to act as a filter, weeding out the crackpots; in practice, it often turns out to be a way to enforce orthodoxy.

Copernicus’s heliocentric cosmology, Galileo’s mechanics, Newton’s gravity and equations of motion – these ideas never appeared in journal articles. They appeared in books that were reviewed, if at all, by associates of the author. The peer-review process as we know it was instituted after the Second World War, largely due to the huge growth of the scientific enterprise and the enormous pressure on academics to publish ever more papers.

Since the 1950s, peer review has worked as follows: a scientist wishing to publish a paper submits a copy to the editor of a journal. The editor forwards the paper to several academics whom he considers experts on the subject, asking whether it is worthy of publication. These experts – who usually remain anonymous – submit comments about the paper that constitute the “peer review”. The referees judge the paper on criteria such as the validity of its claims, the originality of the work, and whether the work, even if correct and original, is important enough to merit publication. Often, the journal editor will require the author to amend his paper in accordance with the referees’ recommendations.

Prior to the War, university professors were mainly teachers, carrying a teaching load of five or six courses per semester (a typical load nowadays is one or two courses). Professors with this onerous teaching burden were not expected to write papers. The famous philosopher of science Sir Karl Popper wrote in his autobiography that the dean of the New Zealand university where Popper taught during World War II said that he regarded Popper’s production of articles and books as a theft of time from the university.

But at some point, universities came to realise that their prestige – and with it the grants they received from governments and corporations – depended not so much on the teaching skills of their professors as on the scholarly reputation of these professors. And this reputation could only be enhanced through publications. Teaching loads were reduced to allow professors more time for research and the production of papers; salaries began to depend on one’s publication record. Before the War, salaries of professors of the same rank (assistant professor, associate professor, adjunct professor, full professor, etc.) were the same (except for an age differential, which reflected experience). Nowadays, salaries of professors in the same department of the same age and rank can differ by more than a factor of two.

One consequence of all this is that the production of papers has increased by a factor of more than one thousand over the past fifty years. The price paid for this fecundity is a precipitous decline in quality. Before the War, when there was no financial incentive to publish papers, scientists wrote them as a labour of love. These days, papers are written mostly to further one’s career. One thus finds that nowadays, most papers are never cited by anyone except their author(s).

Philip Anderson, who won a Nobel Prize for physics, writes that

In the early part of the postwar period [a scientist’s] career was science-driven, motivated mostly by absorption with the great enterprise of discovery, and by genuine curiosity as to how nature operates. By the last decade of the century far too many, especially of the young people, were seeing science as a competitive interpersonal game, in which the winner was not the one who was objectively right as [to] the nature of scientific reality, but the one who was successful at getting grants, publishing in Physical Review Letters, and being noticed in the news pages of Nature, Science, or Physics Today… [A] general deterioration in quality, which came primarily from excessive specialization and careerist sociology, meant quite literally that more was worse.[1]

More is worse. As Nature puts it, “With more than a million papers per year and rising, nobody has time to read every paper in any but the narrowest fields, so some selection is essential. Authors naturally want visibility for their own work, but time spent reading their papers will be time taken away from reading someone else’s.” The number of physicists has increased by a factor of one thousand since the year 1900. Back then, ten percent of all physicists in the world had either won a Nobel Prize or been nominated for it. Things are much the same in chemistry. The American Chemical Society compiled a list of the most significant advances in chemistry over the last 100 years; it shows no change in the rate at which breakthroughs are made, despite the thousand-fold increase in the number of chemists. In the 1960s, US citizens were awarded about 50,000 patents in chemistry-related areas per year. By the 1980s, the number had dropped to 40,000. But the number of papers has exploded.

One result of this publish-or-perish mentality is that groundbreaking papers are often rejected because they land on the desks of referees who are unable or unwilling to recognise novel ideas. Consider these examples.

Rosalyn Yalow won the Nobel Prize in Physiology or Medicine in 1977. She describes how her Nobel-winning paper was received: “In 1955 we submitted the paper to Science… the paper was held there for eight months before it was reviewed. It was finally rejected. We submitted it to the Journal of Clinical Investigations, which also rejected it.”[2]

Günter Blobel also won the Nobel Prize in Physiology or Medicine, in 1999. In a news conference given just after he was awarded the prize, Blobel said that the main problem one encounters in one’s research is “when your grants and papers are rejected because some stupid reviewer rejected them for dogmatic adherence to old ideas.” According to the New York Times, these comments “drew thunderous applause from the hundreds of sympathetic colleagues and younger scientists in the auditorium.”[3]

Mitchell J. Feigenbaum described the reception of his revolutionary papers on chaos theory as follows: “Both papers were rejected, the first after a half-year delay. By then, in 1977, over a thousand copies of the first preprint had been shipped. This has been my full experience. Papers on established subjects are immediately accepted. Every novel paper of mine, without exception, has been rejected by the refereeing process. The reader can easily gather that I regard this entire process as a false guardian and wastefully dishonest.”[4]

Theodore Maiman invented the laser, an achievement whose importance is not doubted by anyone. The leading American physics journal, Physical Review Letters, rejected Maiman’s paper on constructing a laser.[5]

John Bardeen, the only person to have ever won two Nobel Prizes in physics, had difficulty publishing a theory in low-temperature solid-state physics that went against the paradigm.[6]

Stephen Hawking needs no introduction. According to his first wife Jane, when Hawking submitted to Nature what is generally regarded as his most important paper on black hole evaporation, the paper was initially rejected.[7] The physicist Frank J. Tipler writes that “I have heard from colleagues who must remain nameless that when Hawking submitted to Physical Review what I personally regard as his most important paper, his paper showing that a most fundamental law of physics called ‘unitarity’ would be violated in black hole evaporation, it, too, was initially rejected.”

Conventional wisdom in contemporary geophysics holds that the Hawaiian Islands were formed sequentially as the Pacific Plate moved over a hot spot deep inside the Earth. This idea was first developed in a paper by the Canadian geophysicist J. Tuzo Wilson. Wilson writes: “I… sent [my paper] to the Journal of Geophysical Research. They turned it down… They said my paper had no mathematics in it, no new data, and that it didn’t agree with the current views. Therefore, it must be no good. Apparently, whether one gets turned down or not depends largely on the reviewer. The editors, too, if they don’t see it your way, or if they think it’s something unusual, may turn it down. Well, this annoyed me…”[8]

There is not much incentive for referees to carefully adjudicate their fellow-scientists’ papers. As Nature puts it: “How much time do referees expend on peer review? Although referees may derive benefits from reviewing, it still represents time taken away from other activities (research, teaching and so forth) that they would have otherwise prioritized. Referees are normally unpaid but presumably their time has some monetary value, as reflected in their salaries.”

In 2006, Nature published an essay by Charles G. Jennings, a former editor with the Nature journals and former executive director of the Harvard Stem Cell Institute. As an editor, Jennings became intimately familiar with the peer-review system, and he knows full well how badly the public misunderstands the process:

Whether there is any such thing as a paper so bad that it cannot be published in any peer reviewed journal is debatable. Nevertheless, scientists understand that peer review per se provides only a minimal assurance of quality, and that the public conception of peer review as a stamp of authentication is far from the truth.

Jennings writes that “many papers are never cited (and one suspects seldom read)”. These papers are written, to a large extent, because “To succeed in science, one must climb this pyramid [of journals]: in academia at least, publication in the more prestigious journals is the key to professional advancement.” Advancement, in this context, is measured by career rewards such as recruitment and promotion, grant funding, invitations to speak at conferences, establishment of collaborations and media coverage.

***

Many in the scientific community recognise the ills that plague the peer-review process, and experiments are being conducted to improve – or sidestep – the current dispensation. For example, some journals no longer grant referees the protection of anonymity. Instead, reviewers are identified and their critiques of papers are made available to the author of the paper being reviewed. The author is then able to defend his paper. This may ameliorate the problem of reviewers who hamper the publication of a paper for less than noble reasons (such as professional jealousy).

At any rate, informed consumers of science understand that peer review is far from perfect. Too often it is an efficient way of strangling new ideas rather than a vehicle for promoting them; it allows the reigning paradigm to squash competition unfairly. This is especially true in controversial areas like biological evolution.

***

References:

My two main references for this post are:

  1. An essay by the physicist Frank J. Tipler entitled Refereed Journals: do they insure quality or enforce orthodoxy? The essay appeared in the volume Uncommon Dissent: Intellectuals who find Darwinism Unconvincing, William A. Dembski (editor), ISI Books, 2004.
  2. A 2006 editorial in Nature, available here: http://www.nature.com/nature/peerreview/debate/nature05032.html. Retrieved 26th May 2013.

[1] Philip Anderson, in Brown, Pais and Pippard, editors, Twentieth Century Physics, American Institute of Physics Press, 1995, page 2029.

[2] Walter Shropshire Jr., editor, The Joys of Research, Smithsonian Institution Press, 1981, page 109.

[3] New York Times, 12th October 1999, page A29.

[4] Mitchell J. Feigenbaum, in Brown, Pais and Pippard, editors, Twentieth Century Physics, American Institute of Physics Press, 1995, page 1850.

[5] Ibid., page 1426.

[6] Lillian Hoddeson, True Genius: The Life and Science of John Bardeen, Joseph Henry Press, 2002, page 300.

[7] Jane Hawking, Music to Move the Stars: A Life with Stephen Hawking, Trans-Atlantic Publications, 1999, page 239.

[8] Walter Shropshire Jr., editor, The Joys of Research, Smithsonian Institution Press, 1981, page 130.

The Science Mystique

May 20, 2013

A reader has kindly drawn my attention to an article by a physician, Jalees Rehman, which treads territory that will be familiar to readers of TorahExplorer. In this post, I reproduce some of Dr. Rehman’s points, interspersed with my comments.[1]

***

Dr. Rehman begins by discussing what he terms the doctor mystique – “Doctors had previously been seen as infallible saviors who devoted all their time to heroically saving lives and whose actions did not need to be questioned” – a notion now rapidly crumbling. Informed patients have access to an immense amount of information with which to question the decisions of their physicians – “Instead of blindly following doctors’ orders, they want to engage their doctor in a discussion and become an integral part of the decision-making process.” In addition, patients nowadays are more aware of various factors that can skew doctors’ judgement:

The recognition that gifts, free dinners and honoraria paid by pharmaceutical companies strongly influence what medications doctors prescribe has led to the establishment of important new rules at universities and academic journals to curb this influence…

I discussed related issues in posts such as Dr. John Ioannidis and the Reality of Research and Dr. Ben Goldacre and the Reproducibility of Research.

Dr. Rehman’s essay, however, is devoted to another myth, one that he calls The Science Mystique. He correctly notes that it still persists where similar notions – the feminine mystique and the doctor mystique – have disappeared or are disintegrating. But Dr. Rehman is clear that the science mystique is vulnerable:

As with other mystiques, it [i.e. The Science Mystique] consists of a collage of falsely idealized and idolized notions of what science constitutes. This mystique has many different manifestations, such as the firm belief that reported scientific findings are absolutely true beyond any doubt, scientific results obtained today are likely to remain true for all eternity and scientific research will be able to definitively solve all the major problems facing humankind.

Quite right. Readers of Genesis and Genes will be familiar with a comment made by the physicist and philosopher Sir John Polkinghorne:

Many people have in their minds a picture of how science proceeds which is altogether too simple. This misleading caricature portrays scientific discovery as resulting from the confrontation of clear and inescapable theoretical predictions by the results of unambiguous and decisive experiments… In actual fact… the reality is more complex and more interesting than that.

Science is a human – read fallible – endeavour. Informed consumers of science understand that a host of factors influence research. Beyond the technical aspects of research, there are societal factors, political factors, ideological factors, financial factors and dozens more, some of which I discussed in the first chapter of Genesis and Genes. One consequence of this is that scientific findings come in a spectrum of credibility, ranging from solid to hopelessly speculative and ideological.

Dr. Rehman:

This science mystique is often paired with an over-simplified and reductionist view of science. Some popular science books, press releases or newspaper articles refer to scientists having discovered the single gene or the molecule that is responsible for highly complex phenomena, such as a disease like cancer or philosophical constructs such as morality.

Indeed. Most members of the public are not informed consumers of science, and are easily swayed by simplistic or exaggerated claims. A common example of exaggerated claims swallowed by the public comes from palaeontology. A fossil is unearthed and proclaimed as the latest earliest ancestor of human beings. After the media frenzy subsides and the public’s attention is diverted, the claims inevitably prove to be hollow. [For several excellent examples of the genre, see the chapter entitled Human Origins and the Fossil Record in Science and Human Origins.][2] This is true with respect to complicated concepts and phenomena like cancer or morality, as Dr. Rehman writes, but it is all the more true with respect to over-arching theories that purport to explain ultimate questions about the universe or life. The gullible public is unaware of the tremendous superstructure of assumptions, ideological commitments and technical difficulties that go into scientists’ absolutist statements about such subjects.

Dr. Rehman continues:

As flattering as it may be, few scientists see science as encapsulating perfection. Even though I am a physician, most of my time is devoted to working as a cell biologist. My laboratory currently studies the biology of stem cells and the role of mitochondrial metabolism in stem cells. In the rather antiquated division of science into “hard” and “soft” sciences, where physics is considered a “hard” science and psychology or sociology are considered “soft” sciences, my field of work would be considered a middle-of-the-road, “firm” science. As cell biologists, we are able to conduct well-defined experiments, falsify hypotheses and directly test cause-effect relationships. Nevertheless, my experience with scientific results is that they are far from perfect and most good scientific work usually raises more questions than it provides answers. We scientists are motivated by our passion for exploration, and we know that even when we are able to successfully obtain definitive results, these findings usually point out even greater deficiencies and uncertainties in our knowledge.

An important qualification is needed here. Researchers like Dr. Rehman are usually aware that in their field, perfection is elusive. But they are often largely ignorant of other fields, and may harbour unrealistic views of the reliability of research in those fields.

Readers of Genesis and Genes will recall chapter 3, in which I described how scientists from half-a-dozen different disciplines attempted to determine the age of the Earth in the latter part of the 19th century. It was frequently the case that practitioners of one discipline, aware of the limitations of their own field, failed to understand that other fields were just as vulnerable, but for different reasons. The result was a mirage of independent confirmation of the age of the Earth, seemingly arising from several different disciplines – a confirmation that turned out to be completely illusory.

Dr. Rehman now turns to reproducibility of research:

One key problem of science is the issue of reproducibility. Psychology is currently undergoing a soul-searching process[3] because many questions have been raised about why published scientific findings have such poor reproducibility when other psychologists perform the same experiments. One might attribute this to the “soft” nature of psychology, because it deals with variables such as emotions that are difficult to quantify and with heterogeneous humans as their test subjects. Nevertheless, in my work as a cell biologist, I have encountered very similar problems regarding reproducibility of published scientific findings. My experience in recent years is that roughly only half the published findings in stem cell biology can be reproduced when we conduct experiments according to the scientific methods and protocols of the published paper.

Recall that earlier, Dr. Rehman characterised his field, cell biology, as a ‘firm’ science, somewhere between physics and psychology on a spectrum similar to the ‘proof continuum’ I discussed in Genesis and Genes. As he says, cell biology is an area of science where ostensibly objective parameters exist that should ensure the reproducibility of research. Alas, to a significant degree, reproducibility is elusive. Cell biology is not sociology or anthropology; nor are we talking about drug trials here (where as much as 90% of published studies may be wrong). Nonetheless, upwards of 50% of the research in cell biology is not reproducible. One is reminded of this passage in Genesis and Genes:

[Glenn] Begley [who served, for a decade, as head of global cancer research at Amgen] met for breakfast at a cancer conference with the lead scientist of one of the problematic studies. “We went through the paper line by line, figure by figure,” said Begley. “I explained that we re-did their experiment 50 times and never got their result. He said they’d done it six times and got this result once, but put it in the paper because it made the best story. It’s very disillusioning.”

Dr. Rehman:

On the other hand, we devote a limited amount of time and resources to replicating results, because there is no funding available for replication experiments. It is possible that if we devoted enough time and resources to replicate a published study, tinkering with the different methods, trying out different batches of stem cells and reagents, we might have a higher likelihood of being able to replicate the results. Since negative studies are difficult to publish, these failed attempts at replication are buried and the published papers that cannot be replicated are rarely retracted. When scientists meet at conferences, they often informally share their respective experiences with attempts to replicate research findings. These casual exchanges can be very helpful, because they help us ensure that we do not waste resources to build new scientific work on the shaky foundations of scientific papers that cannot be replicated.

The difficulty of publishing negative results and the lack of incentive to verify other researchers’ results are recognised as major contributors to systemic problems within contemporary science. The average member of the public labours under the illusion that mechanisms such as peer review suffice to ensure that whatever is published in a mainstream journal is infallible. This, of course, constitutes child-like naivety. As Nature put it in a 2006 editorial, “Scientists understand that peer review per se provides only a minimal assurance of quality, and that the public conception of peer review as a stamp of authentication is far from the truth.”

Dr. Rehman:

Most scientists are currently struggling to keep up with the new scientific knowledge in their own field, let alone put it in context with the existing literature. As I have previously pointed out,[4] more than 30-40 scientific papers are published on average on any given day in the field of stem cell biology. This overwhelming wealth of scientific information inevitably leads to a short half-life of scientific knowledge… What is considered a scientific fact today may be obsolete within five years.

Quite true. As I wrote in Genesis and Genes,

A paper published in the Proceedings of the National Academy of Sciences in 2006 noted that “More than 5 million biomedical research and review articles have been published in the last 10 years.” That’s an average of 1,370 papers per day. And this is just biomedical research.

This deluge of information, and the fact that “What is considered a scientific fact today may be obsolete within five years”, has important repercussions for informed consumers of science. Those who follow the evolution debate are aware of how the ephemeral nature of scientific knowledge can affect what was only recently considered absolute. Whether it is Tree of Life research, Junk DNA or the discovery of numerous instances of Lamarckian heredity, there have been breathtaking turnarounds in recent years. Basic prudence dictates that when evolutionary biologists invoke ‘overwhelming evidence’ for one claim or another, their pronouncements be taken with a sack of salt.

Dr. Rehman:

One aspect of science that receives comparatively little attention in popular science discussions is the human factor. Scientific experiments are conducted by scientists who have human failings, and thus scientific fallibility is entwined with human frailty. Some degree of limited scientific replicability is intrinsic to the subject matter itself… At other times, researchers may make unintentional mistakes in interpreting their data or may unknowingly use contaminated samples… However, there are far more egregious errors made by scientists that have a major impact on how science is conducted. There are cases of outright fraud… [but] Such overt fraud tends to be unusual… However, what occurs far more frequently than gross fraud is the gentle fudging of scientific data, consciously or subconsciously, so that desired scientific results are obtained. Statistical outliers are excluded, especially if excluding them helps direct the data in the desired direction. Like most humans, scientists also have biases and would like to interpret their data in a manner that fits with their existing concepts and ideas.

Bravo. This is a major theme of Genesis and Genes, and it is crucial in becoming an informed consumer of science. In this short essay, Rehman obviously cannot describe all the influences that have an impact on scientific research. One of Rehman’s more important omissions is the enormous amount of conditioning which influences scientists – like everyone else – long before they step into the laboratory. Take evolution. If you grew up in the West any time in the last fifty years, you will have encountered innumerable instances in which the claims of evolutionary biology were seared into your consciousness, from David Attenborough documentaries to museum dioramas to advertising campaigns named The evolution of the office to countless articles in New Scientist. Scientists do not enter their research careers with a tabula rasa. As Professor John Polkinghorne puts it,

Scientists do not look at the world with a blank gaze; they view it from a chosen perspective and bring principles of interpretation and prior expectations… to bear upon what they observe. Scientists wear (theoretical) “spectacles behind the eyes”.

Dr. Rehman:

Human fallibility not only affects how scientists interpret and present their data, but can also have a far-reaching impact on which scientific projects receive research funding or the publication of scientific results. When manuscripts are submitted to scientific journals or when grant proposals are submitted to funding agencies, they usually undergo a review by a panel of scientists who work in the same field and can ultimately decide whether or not a paper should be published or a grant funded. One would hope that these decisions are primarily based on the scientific merit of the manuscripts or the grant proposals, but anyone who has been involved in these forms of peer review knows that, unfortunately, personal connections or personal grudges can often be decisive factors.

Correct. If you happen to be conducting climate research that produces unpopular results, for example, you can be almost sure that your findings will not be published in the most prestigious journals. If you happen to suspect that the brilliant mathematician Irving Segal was right, and that the linear relationship that Edwin Hubble saw between the redshift and apparent brightness of galaxies is perhaps illusory, you are almost certain to receive very little telescope time. Exploring the natural world to your heart’s content, following your curiosity wherever it leads you – that picture of how science was done was fairly accurate up to about the middle of the 19th century. Affluent gentleman scientists could indulge their curiosity about how nature operates. These days, the confines within which research is done will be dictated, to a significant extent, by whatever is considered acceptable by the majority of the community.

***

The science mystique will eventually topple, and that will be a liberating moment for science. It will usher in an age in which scientists and the public alike will be informed consumers of science, able to accurately assess various findings of scientists and assign to them appropriate levels of credibility.

***

See also:

The post Dr. John Ioannidis and the Reality of Research:

https://torahexplorer.com/2013/05/05/dr-john-ioannidis-and-the-reality-of-research/

The post Dr. Ben Goldacre and the Reproducibility of Research:

https://torahexplorer.com/2013/04/10/dr-ben-goldacre-and-the-reproducibility-of-research/

References:

[1] See http://www.3quarksdaily.com/3quarksdaily/2013/02/the-science-mystique-.html.

Retrieved 17th May 2013.

[2] Discovery Institute Press, 2012.

[3] Dr. Rehman cites this paper at this point:

http://pps.sagepub.com/content/7/6/537.full.

Retrieved 19th May 2013.

[4] Dr. Rehman cites the following article:

http://www.scilogs.com/next_regeneration/science-journalism-and-the-inner-swine-dog/.

Retrieved 19th May 2013.

Darwinism and Morality

May 13, 2013

William Provine is a biologist and historian of biology at Cornell University. He is forthright about biological evolution and its implications, writing, for example, that evolution is the greatest engine of atheism ever invented. Provine summarises the consequences of the belief in evolution as follows:

Naturalistic evolution has clear consequences that Charles Darwin understood perfectly. 1) No gods worth having exist; 2) no life after death exists; 3) no ultimate foundation for ethics exists; 4) no ultimate meaning in life exists; and 5) human free will is nonexistent.[1]

In this post, we will concentrate on points 3 and 5 above.

***

The evolutionary view is that moral law is something humans create as an evolved adaptation – a conviction that something is right or wrong arises out of the struggle for survival. All the notions we associate with moral and ethical principles are merely adaptations, foisted upon us by evolutionary mechanisms in order to maximise survival.

Provine’s logic is unassailable, if you grant his premises. His point of departure is that nothing exists beyond matter and energy. Matter and energy may manifest themselves in relatively simple forms – a hydrogen molecule, perhaps – and in complex forms, as in a butterfly or human being. But in the end, it all boils down to quarks, electrons and other denizens of the subatomic world. It follows that there cannot be an objective foundation for morality, and that human free will is an illusion, the result of complex neuronal interactions.

This is a popular (inevitable, really) notion among contemporary evolutionists. In 1985, the entomologist E.O. Wilson and the philosopher of science Michael Ruse co-authored an article in which they wrote that “Ethics as we understand it is an illusion fobbed off on us by our genes to get us to co-operate.” In his 1998 book Consilience, Wilson argued that “Either ethical precepts, such as justice and human rights, are independent of human experience or else they are human inventions.” He rejected the former explanation, which he called transcendentalist ethics, in favour of the latter, which he named empiricist ethics.[2]

Indeed, the whole field of sociobiology, founded by Wilson in the 1970s, presupposes that morality is the product of evolutionary processes and tries to explain most human behaviours by discovering their alleged reproductive advantage in the evolutionary struggle for existence. (Stephen Jay Gould is among numerous evolutionists who ridiculed the field for its proclivity to invent just-so stories).

One implication of the belief that human beings do not possess moral freedom is that criminals cannot be held responsible for their deeds. University of Chicago biologist Jerry Coyne thus writes – in a post entitled Is There Moral Responsibility? – that he does not believe in moral responsibility:

I favor the notion of holding people responsible for good and bad actions, but not morally responsible. That is, people are held accountable for, say, committing a crime, because punishing them simultaneously acts as a deterrent, a device for removing them from society, and a way to get them rehabilitated – if that’s possible. To me, the notion of moral responsibility adds nothing to this idea.  In fact, the idea of moral responsibility implies that a person had the ability to choose whether to act well or badly, and (in this case) took the bad choice. But I don’t believe such alternative “choices” are open to people, so although they may be acting in an “immoral” way, depending on whether society decides to retain the concept of morality (this is something I’m open about), they are not morally responsible.  That is, they can’t be held responsible for making a choice with bad consequences on the grounds that they could have chosen otherwise.[3]

David Baggett describes how this notion manifests itself in contemporary academia:

I have found a recent trend among a number of naturalistic ethicists and thinkers to be both interesting and mildly exasperating, but most of all telling. Both one like John Shook, Senior Research Fellow at the Center for Inquiry in Amherst, New York… and Frans de Waal, author most recently of The Bonobo and the Atheist (to adduce but a few examples) seem to be gravitating toward functional categories of morality. Talk of belief and practice replaces talk of truth; references to moral rules exceed those of moral obligations; and prosocial instincts supplant moral authority.[4]

But these notions are hardly recent. As the historian Richard Weikart puts it, “The idea that evolution undermines objective moral standards is hardly a recent discovery of sociobiology, however. In the Descent of Man, Charles Darwin devoted many pages to discussing the evolutionary origin of morality, and he recognized what this meant: morality is not objective, is not universal, and can change over time. Darwin certainly believed that evolution had ethical implications.” Ever since then, evolutionists have been arguing that human free will is a mirage and that morality is subjective. Here are representative examples of a vast genre.[5]

***

Cesare Lombroso (1835–1909) was a leading criminologist who authored the landmark study Criminal Man in 1876. According to Lombroso, infanticide, parricide, theft, cannibalism, kidnapping and antisocial actions could be explained largely as a throwback to earlier stages of Darwinian evolution. In earlier stages of development such behaviours aided survival and were therefore bred into biological organisms by natural selection. William Noyes, one of Lombroso’s American disciples, explained that “In the process of evolution, crime has been one of the necessary accompaniments of the struggle for existence.” Invoking modern science in general and Charles Darwin’s work in particular, Italian jurist Enrico Ferri (1856–1929), one of Lombroso’s top disciples, argued that it was no longer reasonable to believe that human beings could make choices outside the realm of material cause and effect. Ferri applauded Darwin for showing “that man is not the king of creation, but merely the last link of the zoological chain, that nature is endowed with eternal energies by which animal and plant life… are transformed from the invisible microbe to the highest form, man.” Ferri looked forward to the day when crime would be treated as a “disease”.

Ludwig Büchner (1824–1899) was a German medical doctor who became president of the Congress of the International Federation of Freethinkers. He was an outspoken atheist and authored Force and Matter, a materialist tract that went through fifteen editions in German and four in English. He was one of the most energetic popularisers of Darwin’s work in the German-speaking world. Büchner wrote that “the vast majority of those who offend against the laws of the State and of Society ought to be looked upon rather as unfortunates who deserve pity than as objects of execration.” Büchner argued that the [alleged] brain abnormalities in many criminals showed that they were throwbacks to “the brains of pre-historic men.”

***

Born into wealth and privilege, Nathan Leopold and Richard Loeb were graduate students in Chicago who decided to commit the perfect crime. In the spring of 1924, they abducted and murdered 14-year-old Bobby Franks. They were eventually apprehended and confessed to their crime.

Clarence Darrow was hired to save Leopold and Loeb from the gallows. Yes – Clarence Darrow of the famous Monkey Trial in Tennessee. Darrow was a true believer in evolution. According to him, the question before the court was whether it would embrace “the old theory” that “a man does something… because he wilfully, purposely, maliciously and with a malignant heart sees fit to do it” or the new theory of modern science that “every human being is the product of the endless heredity back of him and the infinite environment around him.” According to Darrow, Leopold and Loeb murdered Franks “… because they were made that way…”

Robert Crowe, the state’s chief prosecutor in the case, challenged “Darrow’s dangerous philosophy of life.” He read to the court a speech Darrow had delivered to prisoners at a county jail more than twenty years earlier. Darrow had told the prisoners that there was no moral difference between themselves and those who were outside jail. “I do not believe people are in jail because they deserve to be. They are in jail simply because they cannot avoid it, on account of circumstances which are entirely beyond their control, and for which they are in no way responsible.” “There ought to be no jails”, he told the prisoners.

***

In his book Crime: Criminals and Criminal Justice (1932), University of Buffalo criminologist Nathaniel Cantor ridiculed “the grotesque notion of a private entity, spirit, soul, will, conscience or consciousness interfering with the orderly processes of body mechanisms.” Because we humans are no different in principle from any other biological organism, “man is no more ‘responsible’ for becoming wilful and committing a crime than the flower for becoming red and fragrant. In both cases the end products are predetermined by the nature of protoplasm and the chance of circumstances.” The sociologist J.P. Shalloo wrote in the 1940s that it was the “world-shaking impact of Darwinian biology, with its emphasis upon the long history of man and the importance of heredity for a clear understanding of man’s biological constitution” that finally opened the door to a truer understanding of crime than traditional views had afforded.

***

Evolution is not only a scientifically untenable theory, but also a morally bankrupt, corrosive spiritual poison that undermines the foundations of human society.

***

See also: the post Random and Undirected:

https://torahexplorer.com/2013/04/29/random-and-undirected/

References:

[1] Abstract of Dr. William Provine’s 1998 Darwin Day Keynote Address, Evolution: Free will and punishment and meaning in life. This used to be available at http://fp.bio.utk.edu/darwin/frmain.html. I was not able to retrieve it.

[2] See the article by the historian Richard Weikart here:

http://www.evolutionnews.org/2012/05/at_emory_univer_1059491.html.

Retrieved 12th May 2013.

[3] See http://whyevolutionistrue.wordpress.com/2013/05/03/is-there-moral-responsibility/.

Retrieved 12th May 2013.

[4] See http://www.firstthings.com/blogs/firstthoughts/2013/04/26/watering-down-the-categories/#comments.

Retrieved 12th May 2013.

[5] Much of the material in the rest of this post is from the superb Darwin Day in America by John G. West, ISI Books, 2007.

Dr. John Ioannidis and the Reality of Research

May 5, 2013

I mentioned Dr. John Ioannidis a number of times in Genesis and Genes, as well as in several posts. A reader has kindly referred me to an excellent article about Dr. Ioannidis that appeared in The Atlantic.[1] Below are some pertinent points from the article, interspersed with my comments.

David H. Freedman, who wrote the article in The Atlantic, notes that “Medical research is not especially plagued with wrongness. Other meta-research experts[2] have confirmed that similar issues distort research in all fields of science, from physics to economics (where the highly regarded economists J. Bradford DeLong and Kevin Lang once showed how a remarkably consistent paucity of strong evidence in published economics studies made it unlikely that any of them were right).”

Understanding the factors that can distort research is a crucial step in becoming an informed consumer of science. Below, we look at some issues that are raised in the Atlantic article, and suggest how they may be relevant to other fields of science.

***

John Ioannidis may be one of the most influential – and popular – scientists today. In 2005, he published a paper in PLoS [Public Library of Science] Medicine that remains the most downloaded in the journal’s history. He has published papers with 1,328 different co-authors at 538 institutions in 43 countries. In 2009 he received, by his estimate, invitations to speak at 1,000 conferences and institutions around the world. Ioannidis is one of the world’s foremost experts on the credibility of medical research. He and his team have shown, again and again, that much of what biomedical researchers conclude in peer-reviewed published studies – conclusions that doctors keep in mind when they prescribe antibiotics or blood-pressure medication, or when they advise us to consume more fibre or less meat, or when they recommend surgery for heart disease or back pain – is misleading, exaggerated, and often just wrong. Ioannidis charges that as much as 90 percent of the published medical information that doctors rely on is flawed.

In the PLoS Medicine paper, Ioannidis laid out a detailed mathematical proof that, assuming modest levels of researcher bias, typically imperfect research techniques, and the tendency to focus on exciting rather than plausible theories, medical researchers will come up with wrong findings most of the time. His model predicted, in different fields of medical research, rates of wrongness roughly corresponding to the observed rates at which findings were later convincingly refuted: 80 percent of non-randomized studies (by far the most common type) turn out to be wrong, as do 25 percent of supposedly gold-standard randomized trials, and as much as 10 percent of the platinum-standard large randomized trials. [Vioxx, Zelnorm, and Baycol were among the widely prescribed drugs found to be safe and effective in large randomized controlled trials before the drugs were yanked from the market as unsafe or not so effective, or both.] The article articulated Ioannidis’ conclusion that researchers were frequently manipulating data analyses, chasing career-advancing findings rather than good science, and using the peer-review process to suppress unpopular views. These are all phenomena that are well-known to informed consumers of science, but still invisible, to a significant extent, to the general public.
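Ioannidis’s argument is easier to appreciate with the underlying arithmetic in view. The following is a minimal sketch, in Python, of the positive-predictive-value calculation at the heart of the PLoS Medicine paper. The formula follows the paper; the particular values chosen below for the prior odds, statistical power and bias are my own illustrative assumptions, not figures taken from the article.

```python
# Sketch of the calculation behind Ioannidis (2005), "Why Most Published
# Research Findings Are False". The formula is from the paper; the example
# inputs below are illustrative assumptions only.

def ppv(R, alpha=0.05, power=0.8, bias=0.0):
    """Positive predictive value: the probability that a claimed finding
    is true, given prior odds R that the tested relationship is real,
    significance threshold alpha, statistical power, and bias (the
    fraction of would-be negative analyses reported as positive)."""
    beta = 1.0 - power
    true_positives = power * R + bias * beta * R
    all_positives = R + alpha - beta * R + bias * (1 - alpha + beta * R)
    return true_positives / all_positives

# A well-powered trial of a plausible hypothesis (1:1 prior odds):
print(round(ppv(R=1.0, power=0.8, bias=0.1), 2))  # 0.85 -- usually right
# An underpowered, exploratory study (1:10 prior odds):
print(round(ppv(R=0.1, power=0.2, bias=0.1), 2))  # 0.16 -- usually wrong
```

The two examples display the crux of the model: even modest bias leaves well-powered tests of plausible hypotheses mostly trustworthy, while underpowered, exploratory studies are wrong far more often than they are right.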

In a seminal paper that was published in the Journal of the American Medical Association, Ioannidis zoomed in on 49 of the most highly regarded research findings in medicine over the previous 13 years, as judged by the science community’s two standard measures: the papers had appeared in the journals most widely cited in research articles, and the 49 articles themselves were the most widely cited articles in these journals. These were articles that helped lead to the widespread popularity of treatments such as the use of hormone-replacement therapy for menopausal women, vitamin E to reduce the risk of heart disease, coronary stents to ward off heart attacks, and daily low-dose aspirin to control blood pressure and prevent heart attacks and strokes. Of the 49 articles, 45 claimed to have uncovered effective interventions. Thirty-four of these claims had been retested, and 14 of these, or 41 percent, had been convincingly shown to be wrong or significantly exaggerated. So a large fraction of the most acclaimed research in medicine is untrustworthy.

***

There are many reasons for the dismal record of medical research, and we shall only consider a few factors. Ioannidis suggests that the desperate quest for research grants has gone a long way toward weakening the reliability of medical research. Readers of Genesis and Genes will recall the passage from Seed:

Cash-for-science practices between the nutrition and drug companies and the academics that conduct their research may also be playing a role. A survey of published results on beverages earlier this year found that research sponsored by industry is much more likely to report favorable findings than papers with other sources of funding. Although not a direct indication of bias, findings like these feed suspicion that the cherry-picking of data, hindrance of negative results, or adjustment of research is surreptitiously corrupting accuracy. In his essay, Ioannidis wrote, “The greater the financial and other interest and prejudices in a scientific field, the less likely the research findings are to be true.”[3]

In The Atlantic article, Ioannidis is blunt about one important factor in this situation. “The studies were biased,” he says. “Sometimes they were overtly biased. Sometimes it was difficult to see the bias, but it was there.” Researchers headed into their studies wanting certain results – and, lo and behold, they were getting them. We think of the scientific process as being objective and rigorous, but in fact it’s easy to manipulate results, sometimes unintentionally or unconsciously. “At every step in the process, there is room to distort results, a way to make a stronger claim or to select what is going to be concluded,” says Ioannidis. “There is an intellectual conflict of interest that pressures researchers to find whatever it is that is most likely to get them funded.” The fact that financial conflicts of interest are a feature of contemporary science is familiar to readers of Genesis and Genes:

I randomly pulled out from my shelf an issue of Scientific American. It happened to be the September 23, 2004 issue. It carried this announcement, made by the Center for Science in the Public Interest: “Some scientists and consumer advocates have called for a re-evaluation of studies that led to lower cholesterol guidelines. Among other concerns: eight of nine authors of the recommendations had ties to firms that make cholesterol-lowering statin drugs.” This is a thoroughly typical news item in science magazines. This particular note was so ordinary that it warranted all of a tiny mention on page 17. Anyone who reads science publications will periodically come across such items.

Ioannidis says that perhaps only a minority of researchers were succumbing to this type of bias, but their distorted findings were having an outsize effect on published research. To get funding and tenured positions, and often merely to stay afloat, researchers have to get their work published in well-regarded journals, where rejection rates can climb above 90 percent. Not surprisingly, the studies that tend to make the grade are those with eye-catching findings. But while coming up with eye-catching theories is relatively easy, getting reality to bear them out is another matter. The great majority collapse under the weight of contradictory data when studied rigorously. Imagine, though, that five different research teams test an interesting theory that’s making the rounds, and four of the groups correctly prove the idea false, while the single less cautious group incorrectly “proves” it true through some combination of error, fluke, and clever selection of data. Guess whose findings your doctor ends up reading about in the journal?
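The five-teams scenario is easy to simulate. Here is a toy sketch; the false-positive rate and the publication probabilities are made-up assumptions, chosen only to illustrate how a journal’s preference for positive results inflates the share of spurious findings in print.

```python
import random

# Toy model: many labs test hypotheses that are in fact false. A few get
# lucky 'positive' results, and journals publish positives far more
# readily than negatives. All rates are illustrative assumptions.

random.seed(0)

def published_fluke_share(tests=50_000, false_positive_rate=0.05,
                          p_publish_positive=0.9, p_publish_negative=0.1):
    flukes = sound_negatives = 0
    for _ in range(tests):
        looks_positive = random.random() < false_positive_rate
        p = p_publish_positive if looks_positive else p_publish_negative
        if random.random() < p:
            if looks_positive:
                flukes += 1
            else:
                sound_negatives += 1
    return flukes / (flukes + sound_negatives)

# Only 5% of results are flukes, but they dominate what gets printed:
print(f"{published_fluke_share():.0%} of published results are flukes")
```

With these (hypothetical) numbers, roughly a third of what appears in print is a fluke, even though only one result in twenty is.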

***

Another issue discussed by Ioannidis is the process of peer review. The average member of the public (who is, needless to say, not an informed consumer of science) considers peer review to be a magic pill. Peer review is supposed to be an objective process, manned by referees who have no personal stake in the research they are reviewing, and who have all the time in the world to devote to carefully checking other people’s results. The real world, alas, is a little less rosy. Biased, erroneous, and even blatantly fraudulent studies easily slip through peer review.

Furthermore, the peer-review process often pressures researchers to shy away from striking out in genuinely new directions, and instead to build on the findings of their colleagues – that is, their potential reviewers – in ways that only seem like breakthroughs. One example is the glut of hyped papers touting gene linkages (autism genes identified!) and nutritional findings (olive oil lowers blood pressure!) that are plainly dubious.

***

Here is one example of a point made by Ioannidis in the context of medical research that applies equally to palaeontology. Ioannidis says, “Even when the evidence shows that a particular research idea is wrong, if you have thousands of scientists who have invested their careers in it, they’ll continue to publish papers on it. It’s like an epidemic, in the sense that they’re infected with these wrong ideas, and they’re spreading it to other researchers through journals.”

This phenomenon will be familiar to readers of Genesis and Genes. In the section on the alleged evolution of dinosaurs to birds, I discussed the work of researchers like Professor John A. Ruben of Oregon State University, whose work casts heavy doubt on the reigning paradigm. I wrote:

The Science Daily report from which these quotations are taken continues: “The conclusions [of the Oregon State University researchers] add to other… evidence that may finally force many palaeontologists to reconsider their long-held belief that modern birds are the direct descendants of ancient, meat-eating dinosaurs…” Professor Ruben adds, “But old theories die hard, especially when it comes to some of the most distinctive and romanticized animal species in world history.” He continues, “Frankly, there’s a lot of museum politics involved in this, a lot of careers committed to a particular point of view even if new scientific evidence raises questions.”

Furthermore, Ioannidis found that even when a research error is publicised, it typically persists for years or even decades. He looked at three prominent health studies from the 1980s and 1990s that were each later soundly refuted, and discovered that researchers continued to cite the original results as correct more often than as flawed – in one case for at least 12 years after the results were discredited.

***

Early in his career, Ioannidis was disabused of the notion that mechanisms like randomized trials and double-blind studies were magic wands that ensured infallibility. In poring over medical journals, Ioannidis was struck by how many findings of all types were refuted by later findings. This is particularly visible in medical research. One month ago, TIME Magazine published an article entitled Spin Doctors.[4] The article states:

Mammograms help you live longer. Or wait; they may not… In the medical world, this kind of uncertainty is increasingly common… Enter the US Preventive Services Task Force (USPSTF), a panel of independent experts charged by Congress with sifting through all the studies about health procedures…

In a side-bar entitled Four Surprising Recommendations, TIME highlights four prominent turnabouts:

  • What you may have heard: Taking estrogen and progestin after menopause can lower the risk of heart disease and bone fractures. What you may not have: The USPSTF says supplemental estrogen can increase the risk of breast cancer and does not protect against heart disease, as earlier studies suggested.
  • What you may have heard: All men over age 50 should get regular blood tests for prostate cancer. What you may not have: Those blood tests, which detect many growths that are not cancerous, can lead to risky interventions. Plus, many prostate tumors are slow-growing and don’t need to be removed, even if they are cancerous.
  • What you may have heard: Women should start annual screening for breast cancer at age 40. What you may not have: Women in their 40s have lower cancer rates than older women and higher rates of false positives that lead to additional tests and procedures that may come with complications.
  • What you may have heard: Vitamin D and calcium can strengthen bones and lower the risk of fractures in postmenopausal women. What you may not have: They may slow bone loss, but recommended doses may not be high enough to lower the risk of fractures. And too much calcium can increase the risk of heart disease.

The article in The Atlantic makes much the same point: mammograms, colonoscopies, and PSA tests are far less useful cancer-detection tools than we had been told; widely prescribed antidepressants such as Prozac, Zoloft, and Paxil have been revealed to be no more effective than a placebo for most cases of depression; staying out of the sun entirely can actually increase cancer risks; taking fish oil, exercising, and doing puzzles doesn’t really help fend off Alzheimer’s disease; and peer-reviewed studies have come to opposite conclusions on whether taking aspirin every day is more likely to save your life or cut it short, and whether routine angioplasty works better than pills to unclog heart arteries.

One important reason for this see-sawing is that most studies involve a relatively small number of participants and run for a relatively short time, perhaps five years. The reason for this is straightforward – it’s expensive and cumbersome to run experiments for thirty or forty years. But the price paid for these short-term savings is that the results of clinical trials are more often than not incorrect. Let’s see why.

Randomized controlled trials constitute the gold standard in medical research. These studies compare how one group responds to a treatment against how an identical group fares without the treatment. Various checks and balances are used to try to shield the researchers from bias, and, consequently, these trials had long been considered nearly unshakable evidence. But these trials, too, are sometimes wrong. “I realized even our gold-standard research had a lot of problems,” Ioannidis says. Before long he discovered that the range of errors being committed was astonishing: from what questions researchers posed, to how they set up the studies, to which patients they recruited for the studies, to which measurements they took, to how they analyzed the data, to how they presented their results, to how particular studies came to be published in medical journals.

In a typical nutrition or drug study, researchers follow a few thousand people for a number of years, tracking what they eat and what supplements they take, and how their health changes over the course of the study. Then they ask, ‘What did vitamin E do? What did vitamin C or D or A do? What changed with calorie intake, or protein or fat intake? What happened to cholesterol levels? Who got what type of cancer?’

After this, complex statistical models are used to find all sorts of correlations between, say, Vitamin X and cancer Y. When a five-year study of 10,000 people finds that those who take more vitamin X are less likely to get cancer Y, you’d think you have good reason to take more vitamin X, and physicians routinely pass these recommendations on to patients. But these studies often sharply conflict with one another. Studies have gone back and forth on the cancer-preventing powers of vitamins A, D, and E; on the heart-health benefits of eating fat and carbohydrates; and even on the question of whether being overweight is more likely to extend or shorten your life. Ioannidis suggests a simple approach to these studies: ignore them all.

For starters, he explains, the odds are that in any large database of many nutritional and health factors, there will be a few apparent connections that are in fact merely flukes, not real health effects. But even if a study managed to highlight a genuine health connection to some nutrient, a given individual is unlikely to benefit much from taking more of it, because we consume thousands of nutrients that act in concert, and changing the intake of any one nutrient is bound to cause ripples throughout the network that are far too complex for these studies to detect, and that may be as likely to harm you as help you [this is why I explained in Genesis and Genes that science is strongest when it deals with observable, repeatable and limited phenomena.] Even if changing that one factor does bring on the claimed improvement, there’s still a good chance that it won’t do you much good in the long run, because these studies rarely go on long enough to track the decades-long course of disease and ultimately death. Instead, they track easily measurable health ‘markers’ such as cholesterol levels, blood pressure, and blood-sugar levels, and meta-experts have shown that changes in these markers often don’t correlate as well with long-term health as we have been led to believe.
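The point about flukes in large databases is a standard multiple-comparisons effect, and it can be demonstrated in a few lines. In the sketch below all the data are synthetic: both ‘groups’ are drawn from the same distribution, so no real effect exists anywhere, yet about one comparison in twenty still clears the conventional p < 0.05 bar.

```python
import random
import statistics

# Multiple-comparisons demo: run many tests where NO real effect exists
# and count how many look 'significant' anyway. Data are synthetic.

random.seed(1)

def null_comparison(n=200):
    """Compare two groups drawn from the SAME distribution using a
    crude two-sample z-test; returns True if the difference looks
    'significant' at roughly p < 0.05."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    se = (statistics.pvariance(a) / n + statistics.pvariance(b) / n) ** 0.5
    z = (statistics.mean(a) - statistics.mean(b)) / se
    return abs(z) > 1.96

hits = sum(null_comparison() for _ in range(500))
print(f"{hits} of 500 no-effect comparisons looked 'significant'")
# Expect about 25 -- one in twenty -- purely by chance.
```

Scan a database of hundreds of nutrient-and-outcome pairs, and a handful of ‘connections’ of exactly this kind are guaranteed to surface.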

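The ‘fluke’ problem is easy to demonstrate. In the sketch below – a toy database with invented dimensions and no real effects whatsoever – merely testing many nutrient–outcome pairs produces dozens of ‘significant’ correlations by chance:

```python
import numpy as np
from scipy import stats

# Toy database: 10,000 people, 40 nutrient intakes, 25 health outcomes.
# Every column is pure noise, so there are NO real effects to find.
rng = np.random.default_rng(0)
PEOPLE, NUTRIENTS, OUTCOMES = 10_000, 40, 25

intakes = rng.normal(size=(PEOPLE, NUTRIENTS))
outcomes = rng.normal(size=(PEOPLE, OUTCOMES))

# Test every nutrient against every outcome: 1,000 hypothesis tests in all.
flukes = 0
for i in range(NUTRIENTS):
    for j in range(OUTCOMES):
        _, p_value = stats.pearsonr(intakes[:, i], outcomes[:, j])
        if p_value < 0.05:
            flukes += 1

print(f"{flukes} of {NUTRIENTS * OUTCOMES} tests are 'significant' at p < 0.05")
```

At the conventional 5% significance threshold, roughly fifty of the thousand tests clear the bar even though every association here is a fluke by construction.
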
On the relatively rare occasions when a study does go on long enough to track mortality, the findings frequently upend those of the shorter studies. (For example, though the vast majority of studies of overweight individuals link excess weight to ill health, the longest of them have not convincingly shown that overweight people are likely to die sooner, and a few have seemingly demonstrated that moderately overweight people are likely to live longer.) Now add to the above the ubiquitous measurement errors (people habitually misreport their diets in studies, for example) and routine misanalysis (researchers rely on complex software capable of juggling results in ways they do not always understand).

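The effect of misreporting, too, can be sketched in a few lines. Here – with invented effect sizes and noise levels – a real, modest link between intake and a health marker is measured once with true intakes and once with sloppy self-reports:

```python
import numpy as np

# A real but modest effect: a health marker that genuinely depends on intake.
# Effect size and noise levels are invented for illustration.
rng = np.random.default_rng(42)
PEOPLE = 10_000

true_intake = rng.normal(size=PEOPLE)                 # what people actually eat
marker = 0.3 * true_intake + rng.normal(size=PEOPLE)  # marker truly linked to intake

# What the study actually records: noisy self-reported diets.
reported_intake = true_intake + rng.normal(scale=2.0, size=PEOPLE)

print(f"correlation using true intakes:     {np.corrcoef(true_intake, marker)[0, 1]:.2f}")
print(f"correlation using reported intakes: {np.corrcoef(reported_intake, marker)[0, 1]:.2f}")
```

Measurement error of this kind drags the observed association toward zero, so a study built on misreported data can bury a real effect – or, combined with the fluke problem above, report a spurious one.
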
If a study somehow avoids every one of these pitfalls and finds a real connection to long-term changes in health, you’re still not guaranteed to benefit, because studies report average results that typically represent a vast range of individual outcomes. Should you be among the lucky minority that stands to benefit, don’t expect a noticeable improvement in your health, because studies usually detect only modest effects that merely tend to whittle your chances of succumbing to a particular disease from small to somewhat smaller. “The odds that anything useful will survive from any of these studies are poor,” says Ioannidis – dismissing in a breath a good chunk of the research into which $100 billion a year in the United States is sunk.

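To see how modest these ‘modest effects’ are for an individual, consider a back-of-the-envelope calculation with invented but plausible numbers: a treatment that cuts the risk of a disease by a headline-grabbing 20%, applied to a baseline lifetime risk of 2%:

```python
# Invented but plausible numbers for a single illustration.
baseline_risk = 0.02        # 2% lifetime risk of the disease without treatment
relative_reduction = 0.20   # the treatment cuts that risk by 20%

treated_risk = baseline_risk * (1 - relative_reduction)
absolute_reduction = baseline_risk - treated_risk    # 0.004, i.e. 0.4 percentage points
number_needed_to_treat = 1 / absolute_reduction      # ~250 people treated per beneficiary

print(f"risk without treatment:  {baseline_risk:.1%}")
print(f"risk with treatment:     {treated_risk:.1%}")
print(f"absolute risk reduction: {absolute_reduction:.2%}")
print(f"number needed to treat:  {number_needed_to_treat:.0f}")
```

The 20% relative reduction shrinks an individual’s risk by just 0.4 percentage points; about 250 people must take the treatment for one of them to benefit.
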
I have pointed out before (see the post Blowing Hot and Cold, for example) that the problem of tackling research that is diffuse – the opposite of limited – is by no means restricted to medical research. Take the climate. It is affected by many dozens, perhaps hundreds, of factors. In the context of human health, we know that there can be a huge difference between what is detected over a five-year study and what ultimately transpires when subjects die fifty years later. In climate studies, too, there may be enormous differences between what is measured over a few decades and what happens over millennia.

Furthermore, as we saw above, most medical studies do not actually track the individual’s health as a whole; rather, they measure ‘markers’, which are taken as proxies for overall health. The assumption that markers are good proxies for overall health is, at best, dubious. In climate science, too, it is often ‘markers’ that are used to indicate the overall ‘health’ of the climate, and this may well lead to erroneous conclusions. Consider glaciers.[5]

In 1895, geologists thought the world was freezing up due to the ‘great masses of ice’ that were frequently seen farther south than before. The New York Times reported that icebergs were so severe, and decreased the temperature of Iceland so much, that inhabitants fearing a famine were ‘emigrating to North America.’ But in 1902 the Los Angeles Times, in a story on disappearing glaciers in the Alps, said the glaciers were not ‘running away,’ but rather ‘deteriorating slowly, with a persistency that means their final annihilation.’ The melting left alpine hotel owners struggling to keep their patrons. It was declared a ‘scientific fact’ that the glaciers were ‘surely disappearing.’ But the glaciers instead grew once more.

The Boston Daily Globe reported in 1923 that the purpose of an Arctic expedition it was covering was to determine the beginning of the next ice age, ‘as the advance of glaciers in the last 70 years would indicate.’ When that era of ice-age reports melted away, retreating glaciers were again highlighted. In his 1953 book Today’s Revolution in Weather, William Baxter wrote that ‘the recession of glaciers over the whole earth affords the best proof that climate is warming’, giving examples of melting glaciers in Lapland, the Alps, and Antarctica. In 1952, the New York Times reported on the global-warming studies of climatologist Dr. Hans W. Ahlmann, whose ‘trump card’ ‘has been the melting glaciers.’ The next year the paper said that ‘nearly all the great ice sheets are in retreat.’ U.S. News and World Report agreed, noting on January 8, 1954, that ‘winters are getting milder, summers drier. Glaciers are receding, deserts growing.’

But in the 1970s, glaciers did an about-face. Lowell Ponte, in his 1976 book The Cooling, warned that ‘The rapid advance of some glaciers has threatened human settlements in Alaska, Iceland, Canada, China, and the Soviet Union.’

In 1951, TIME magazine noted that permafrost in Russia was receding northward at up to 100 yards per year. But in a June 24, 1974, article, TIME stated that the cooling trend was here to stay. The report was based on ‘telltale signs’ such as the ‘unexpected persistence and thickness of pack ice in the waters around Iceland.’ The Christian Science Monitor noted that same year that ‘glaciers which had been retreating until 1940 have begun to advance.’ The article continued, ‘the North Atlantic is cooling down about as fast as an ocean can cool.’ And the New York Times noted that in 1972 the ‘mantle of polar ice increased by 12 percent’ and had not returned to ‘normal’ size; North Atlantic sea temperatures declined, and shipping routes were ‘cluttered with abnormal amounts of ice.’ Furthermore, the permafrost in Russia and Canada was advancing southward, according to the December 29 article that closed out 1974.

Two points are crucial. First, markers for ultra-complex systems such as human health or the climate may or may not be useful indicators of the overall state of the system. Secondly, studies of ‘markers’ – whether of human health or of the climate – may require a lifetime (in the case of humans) or several centuries (in the case of the global climate) to teach us anything significant. Shorter studies may well be misleading, as is certainly the case in many clinical studies.

***

In a nutshell, becoming an informed consumer of science involves the realisation that science is a human endeavour. It is subject to a galaxy of factors beyond the nuts and bolts of laboratory work, from the political considerations that determine how much funding is funnelled to particular fields, to the interpretation of complex statistical analyses of murky results. As the physicist and philosopher John Polkinghorne has written,

Many people have in their minds a picture of how science proceeds which is altogether too simple. This misleading caricature portrays scientific discovery as resulting from the confrontation of clear and inescapable theoretical predictions by the results of unambiguous and decisive experiments… In actual fact… the reality is more complex and more interesting than that.

To its credit, the medical community seems to have embraced the work done by Ioannidis and its implications. The Atlantic reports that:

Ioannidis initially thought the community might come out fighting. Instead, it seemed relieved, as if it had been guiltily waiting for someone to blow the whistle, and eager to hear more. David Gorski, a surgeon and researcher at Detroit’s Barbara Ann Karmanos Cancer Institute, noted in his prominent medical blog that when he presented Ioannidis’ paper on highly cited research at a professional meeting, “not a single one of my surgical colleagues was the least bit surprised or disturbed by its findings.”

But Ioannidis is pessimistic about anything changing soon:

His bigger worry, he says, is that while his fellow researchers seem to be getting the message, he hasn’t necessarily forced anyone to do a better job. He fears he won’t in the end have done much to improve anyone’s health. “There may not be fierce objections to what I’m saying,” he explains. “But it’s difficult to change the way that everyday doctors, patients, and healthy people think and behave.”

***

Dr. John Ioannidis’ work deals with medical research, which is – at least in principle – readily amenable to the tools of science. Even here, it is obvious that science consumers should ration out credibility carefully. The fact that you read about evidence-based medicine or peer-reviewed studies or randomized trials is by no means a guarantee that you’ve been touched by Truth. And all this is in the realm of the here and now. Contemporary science is vastly overrated when it deals with issues beyond those that affect medical research – issues involving huge extrapolations, long chains of reasoning and assumptions, and numerous ideological commitments.

***

See also: the post Dr. Ben Goldacre and the Reproducibility of Research:

https://torahexplorer.com/2013/04/10/dr-ben-goldacre-and-the-reproducibility-of-research/

The post Blowing Hot and Cold:

https://torahexplorer.com/2013/03/11/blowing-hot-and-cold-2/

References:

[1] See http://www.theatlantic.com/magazine/archive/2010/11/lies-damned-lies-and-medical-science/308269/. Retrieved 5th May 2013.

[2] Meta-research involves the analysis – often with advanced statistical tools – of a large number of primary studies performed by other researchers.

[3] See http://seedmagazine.com/content/article/dirty_little_secret/. Retrieved 5th June 2011.

[4] See http://www.time.com/time/magazine/article/0,9171,2139710,00.html. Retrieved 4th May 2013.

[5] The information on the media coverage of glaciers comes from a report by the Media Research Center entitled Fire and Ice: http://www.mrc.org/special-reports/fire-and-ice. Retrieved 5th May 2013.