Origin of Life and Philosophical Outlook

June 28, 2013

In Signature in the Cell, Dr. Stephen Meyer presented a comprehensive and accessible history of research into the origin of life. In this post, we take a bird’s eye view of research into this area over the past three-quarters of a century. We shall also digress in order to get a snapshot of how ideological commitments shape the views of many scientists.

***

Let’s begin with Dr. Ernst Chain. Chain won a Nobel Prize for his contribution to the development of penicillin. I mentioned him in Genesis and Genes, in the context of the discussion about whether evolutionary theory is relevant to nuts-and-bolts research in biology. I cited an article by Philip Skell (1918-2010), who was a distinguished professor of chemistry, a member of the National Academy of Sciences in the USA, and a prominent Darwin sceptic. In a 2009 article on Forbes.com entitled The Dangers of Overselling Evolution, he made the point that evolutionary theory makes no contribution to actual research:

In 1942, Nobel Laureate Ernst Chain wrote that his discovery of penicillin (with Howard Florey and Alexander Fleming) and the development of bacterial resistance to that antibiotic owed nothing to Darwin’s and Alfred Russel Wallace’s evolutionary theories.[1]

Chain understood the immensity of the task of trying to explain life in naturalistic terms. In The Life of Ernst Chain: Penicillin and Beyond, we read that:

I have said for years that speculations about the origin of life lead to no useful purpose as even the simplest living system is far too complex to be understood in terms of the extremely primitive chemistry scientists have used in their attempts to explain the unexplainable that happened billions of years ago.[2]

In August 1954, Dr. George Wald, another Nobel Laureate, wrote in Scientific American:

There are only two possibilities as to how life arose. One is spontaneous generation arising to evolution; the other is a supernatural creative act of God. There is no third possibility… a supernatural creative act of God. I will not accept that philosophically because I do not want to believe in God, therefore I choose to believe that which I know is scientifically impossible; spontaneous generation arising to Evolution.

 This statement may seem astonishingly frank to many members of the public. Informed consumers of science, in contrast, are aware that much of the debate around the origin of life and biological evolution has precious little to do with drawing inevitable conclusions from straightforward evidence. It is far more about worldviews and ideologies, and only extremely naive observers assume that this does not apply to scientists who participate in the debate. Wald makes it perfectly clear that his direction was dictated by his philosophical leanings, and that is true of many scientists and Western intellectuals. Consider the views of Thomas Nagel. Nagel is a courageous thinker whose latest book, Mind and Cosmos, is a fierce demolition of Darwinian evolution.[3] But Nagel will only go so far. In The Last Word, which appeared in 1997, he offered a candid account of his philosophical inclinations:

I am talking about something much deeper—namely, the fear of religion itself. I speak from experience, being strongly subject to this fear myself: I want atheism to be true and am made uneasy by the fact that some of the most intelligent and well-informed people I know are religious believers… It isn’t just that I don’t believe in God and, naturally, hope that I’m right in my belief. It’s that I hope there is no God! I don’t want there to be a God; I don’t want the universe to be like that.[4]

The fact that faith – the faith of many scientists in the ability of unguided matter and energy to create life – drives much of the discussion about evolution was underscored by Dr. Gerald Kerkut, Professor Emeritus of Neuroscience at the University of Southampton, who wrote in 1960 that:

The first assumption was that non-living things gave rise to living material. This is still just an assumption… There is, however, little evidence in favor of abiogenesis and as yet we have no indication that it can be performed… it is therefore a matter of faith on the part of the biologist that abiogenesis did occur and he can choose whatever method… happens to suit him personally; the evidence for what did happen is not available.

Harold Urey won a Nobel Prize for chemistry, but is probably more famous for participating, with his graduate student Stanley Miller, in what became known as the Miller-Urey experiment. In The Christian Science Monitor of 4th January 1962, Urey wrote:

All of us who study the origin of life find that the more we look into it, the more we feel it is too complex to have evolved anywhere. We all believe as an article of faith that life evolved from dead matter on this planet. It is just that its complexity is so great, it is hard for us to imagine that it did.

 Hubert Yockey, the renowned information theorist, wrote in the Journal of Theoretical Biology in 1977 that:

One must conclude that… a scenario describing the genesis of life on earth by chance and natural causes which can be accepted on the basis of fact and not faith has not yet been written.

Richard Dickerson, a molecular biologist at UCLA, wrote in 1978 in Scientific American that: 

The evolution of the genetic machinery is the step for which there are no laboratory models; hence one can speculate endlessly, unfettered by inconvenient facts. The complex genetic apparatus in present-day organisms is so universal that one has few clues as to what the apparatus may have looked like in its most primitive form.[5]

 Francis Crick needs no introduction. In Life Itself, published in 1981, he wrote that: 

Every time I write a paper on the origin of life, I determine I will never write another one, because there is too much speculation running after too few facts.

 Crick’s conclusion is that:

The origin of life seems almost to be a miracle, so many are the conditions which would have had to have been satisfied to get it going.[6]

 Prominent origin-of-life researcher Leslie Orgel wrote in New Scientist in 1982 that:

Prebiotic soup is easy to obtain. We must next explain how a prebiotic soup of organic molecules, including amino acids and the organic constituents of nucleotides evolved into a self-replicating organism. While some suggestive evidence has been obtained, I must admit that attempts to reconstruct the evolutionary process are extremely tentative.[7]

The views of the famed astrophysicist Fred Hoyle are particularly interesting. He struggled with the conflict between his ardent atheism and his knowledge of the excruciating difficulty of positing a naturalistic start to life. Writing in 1984, Hoyle stated that:

From my earliest training as a scientist I was very strongly brain-washed to believe that science cannot be consistent with any kind of deliberate creation. That notion has had to be very painfully shed. I am quite uncomfortable in the situation, the state of mind I now find myself in. But there is no logical way out of it; it is just not possible that life could have originated from a chemical accident.[8]

 The writer Andrew Scott hit the nail on the head when he wrote, in 1986, that most scientists’ adherence to naturalistic accounts of the origin of life owed little to the evidence and much to ideological commitments:

But what if the vast majority of scientists all have faith in the one unverified idea? The modern ‘standard’ scientific version of the origin of life on earth is one such idea, and we would be wise to check its real merit with great care. Has the cold blade of reason been applied with sufficient vigor in this case? Most scientists want to believe that life could have emerged spontaneously from the primeval waters, because it would confirm their belief in the explicability of Nature – the belief that all could be explained in terms of particles and energy and forces if only we had the time and the necessary intellect.[9]

This conclusion is mirrored in the words of Paul Davies, a theoretical physicist and authority on origin-of-life studies. Writing in 2002, Davies affirmed that it is scientists’ adherence to methodological naturalism that drives their agenda and conclusions:

First, I should like to say that the scientific attempt to explain the origin of life proceeds from the assumption that whatever it was that happened was a natural process: no miracles, no supernatural intervention. It was by ordinary atoms doing extraordinary things that life was brought into existence. Scientists have to start with that assumption.[10]

 In 1988, Klaus Dose, another prominent origin-of-life theorist, summed up the situation nicely when he wrote that: 

More than 30 years of experimentation on the origin of life in the fields of chemical and molecular evolution have led to a better perception of the immensity of the problem of the origin of life on Earth rather than to its solution. At present all discussions on principal theories and experiments in the field either end in stalemate or in a confession of ignorance.[11]

 Carl Woese was a pioneer in taxonomy, and one of the major figures in 20th century microbiology. His view of the origin of life: 

In one sense the origin of life remains what it was in the time of Darwin – one of the great unsolved riddles of science. Yet we have made progress…many of the early naïve assumptions have fallen or have fallen aside…while we do not have a solution, we now have an inkling of the magnitude of the problem.[12]

Paul Davies, too, maintains that no substantive progress has been made in this area since Darwin’s time. In a recent short paper suggesting that life be viewed as a software package, Davies writes:

Darwin pointedly left out an account of how life first emerged, “One might as well speculate about the origin of matter,” he quipped. A century and a half later, scientists still remain largely in the dark about life’s origins. It would not be an exaggeration to say that the origin of life is one of the greatest unanswered questions in science.[13]

 Readers of Genesis and Genes will recall Richard Lewontin’s admission that his mathematical models of evolutionary mechanisms are a sham – they do not correspond to reality. The biologist Lynn Margulis reminisced:

Population geneticist Richard Lewontin gave a talk here at UMass [University of Massachusetts] Amherst about six years ago, and he mathematized all of it – changes in the population, random mutation, sexual selection, cost and benefit. At the end of his talk he said, “You know, we’ve tried to test these ideas in the field and the lab, and there are really no measurements that match the quantities I’ve told you about.” This just appalled me. So I said, “Richard Lewontin, you are a great lecturer to have the courage to say it’s gotten you nowhere. But then why do you continue to do this work?” And he looked around and said, “It’s the only thing I know how to do, and if I don’t do it I won’t get grant money.” So he’s an honest man, and that’s an honest answer.

 Lewontin, who is one of the most prominent geneticists in the world and a protégé of one of the founders of neo-Darwinism, Theodosius Dobzhansky, was equally forthright about the role that faith plays in moulding scientists’ approach to important issues. In his review of a book by Carl Sagan, Lewontin wrote in 1997 that:

We take the side of science in spite of the patent absurdity of some of its constructs, in spite of its failure to fulfill many of its extravagant promises of health and life, in spite of the tolerance of the scientific community for unsubstantiated just-so stories, because we have a prior commitment, a commitment to materialism. It is not that the methods and institutions of science somehow compel us to accept a material explanation of the phenomenal world, but, on the contrary, that we are forced by our a priori adherence to material causes to create an apparatus of investigation and a set of concepts that produce material explanations, no matter how counter-intuitive, no matter how mystifying to the uninitiated. Moreover, that materialism is absolute, for we cannot allow a Divine Foot in the door.[14]

Stuart Kauffman of the Santa Fe Institute is one of the world’s foremost origin-of-life researchers and a leading expert on self-organising systems. He writes:

Anyone who tells you that he or she knows how life started on the earth some 3.45 billion years ago is a fool or a knave. Nobody knows.[15]

 In Genesis and Genes, I also quoted the biochemist Franklin Harold. In his book The Way of the Cell, Harold frankly acknowledged that “We must concede that there are presently no detailed Darwinian accounts of the evolution of any biochemical or cellular system, only a variety of wishful speculations.”[16] Regarding the origin of life, Harold writes that:

It would be agreeable to conclude this book with a cheery fanfare about science closing in, slowly but surely, on the ultimate mystery; but the time for rosy rhetoric is not yet at hand. The origin of life appears to me as incomprehensible as ever, a matter for wonder but not for explication.[17]

Massimo Pigliucci was formerly a professor of evolutionary biology and philosophy at the State University of New York at Stony Brook, and holds doctorates in genetics, botany, and the philosophy of science. He is currently the chairman of the department of philosophy at City University of New York. He is a prominent international proponent of evolution and the author of several books. In 2003, Pigliucci wrote that “[I]t has to be true that we really don’t have a clue how life originated on Earth by natural means.”[18]

In 2007, we find science writer Gregg Easterbrook writing in Wired: “What creates life out of the inanimate compounds that make up living things? No one knows. How were the first organisms assembled? Nature hasn’t given us the slightest hint. If anything, the mystery has deepened over time.”[19]

 Also in 2007, Harvard chemist George M. Whitesides, in accepting the highest award of the American Chemical Society, wrote: “The Origin of Life. This problem is one of the big ones in science. It begins to place life, and us, in the universe. Most chemists believe, as do I, that life emerged spontaneously from mixtures of molecules in the prebiotic Earth. How? I have no idea… On the basis of all the chemistry that I know, it seems to me astonishingly improbable.”[20] 

As recently as 2011, Scientific American acknowledged that origin-of-life research has made no real headway in the last century. In an article by John Horgan, we read that:

Dennis Overbye just wrote a status report for the New York Times on research into life’s origin, based on a conference on the topic at Arizona State University. Geologists, chemists, astronomers, and biologists are as stumped as ever by the riddle of life.[21]

 Also writing in 2011, Dr. Eugene Koonin provided a neat summary of the utter failure of this endeavour: 

The origin of life is one of the hardest problems in all of science… Origin of Life research has evolved into a lively, interdisciplinary field, but other scientists often view it with skepticism and even derision. This attitude is understandable and, in a sense, perhaps justified, given the “dirty” rarely mentioned secret: Despite many interesting results to its credit, when judged by the straightforward criterion of reaching (or even approaching) the ultimate goal, the origin of life field is a failure – we still do not have even a plausible coherent model, let alone a validated scenario, for the emergence of life on Earth. Certainly, this is due not to a lack of experimental and theoretical effort, but to the extraordinary intrinsic difficulty and complexity of the problem. A succession of exceedingly unlikely steps is essential for the origin of life… these make the final outcome seem almost like a miracle.[22]

***

The area of origin-of-life research is fascinating not only for its own sake, but also in the way that it exposes what many uninformed members of the public take for granted, namely, that scientists are driven by data, and data alone. I elaborated on this misconception in Genesis and Genes, demonstrating that the commitment of many scientists to methodological naturalism is a far more important factor than the scientific evidence in reaching conclusions about life on Earth.

***

 See Also:

The post Certitude and Bluff:

https://torahexplorer.com/2013/01/15/certitude-and-bluff/

References:

Some of the quotations in this post come from an article by Rabbi Moshe Averick, published in The Algemeiner. The article can be read here:

http://www.algemeiner.com/2012/09/27/speculation-faith-and-unproven-assumptions-the-history-of-origin-of-life-research-in-scientists-own-words/

Retrieved 26th June 2013.

[1] The article can be read here:

http://www.forbes.com/2009/02/23/evolution-creation-debate-biology-opinions-contributors_darwin.html.

Retrieved 2nd November 2010.

[2] R.W. Clark, The Life of Ernst Chain: Penicillin and Beyond, Weidenfeld and Nicolson, London (1985), page 148.

[3] To read more about Nagel and his latest book, see these reviews:

http://www.newrepublic.com/article/112481/darwinist-mob-goes-after-serious-philosopher

http://www.weeklystandard.com/articles/heretic_707692.html

[4] See http://www.jidaily.com/914e2?utm_source=Jewish+Ideas+Daily+Insider

Retrieved 27th June 2013.

[5] Richard E. Dickerson, “Chemical Evolution and the Origin of Life”, Scientific American, Vol. 239, No. 3, September 1978, page 77.

[6] Francis Crick, Life Itself, New York, Simon and Schuster, 1981, page 88.

[7] Leslie E. Orgel, “Darwinism at the very beginning of life”, New Scientist, Vol. 94, 15 April 1982, page 150.

[8] Fred Hoyle, Evolution from Space, New York, Simon and Schuster, 1984, page 53.

[9] Andrew Scott, “The Creation of Life: Past, Future, Alien”, Basil Blackwell, 1986, page 111.

[10] Paul Davies, “In Search of Eden: Conversations with Paul Davies and Phillip Adams”.

[11] Klaus Dose, “The Origin of Life: More Questions Than Answers”, Interdisciplinary Science Reviews, Vol. 13, No. 4, 1988, page 348.

[12] Carl Woese and Günter Wächtershäuser, “Origin of Life”, in Paleobiology: A Synthesis, Briggs and Crowther, editors (Oxford: Blackwell Scientific Publications, 1989).

[13] See: http://arxiv.org/abs/1207.4803.

Retrieved 27th June 2013.

[14] Richard Lewontin, “Billions and Billions of Demons”, New York Review of Books, 9th January 1997.

[15] Stuart Kauffman, At Home in the Universe, London, Viking, 1995, page 31.

[16] Franklin Harold, The Way of the Cell: Molecules, Organisms and the Order of Life, Oxford University Press, 2001, page 205.

[17] Ibid. page 251.

[18] Massimo Pigliucci, “Where Do We Come From? A Humbling Look at the Biology of Life’s Origin,” in Darwin, Design and Public Education, eds. John Angus Campbell and Stephen C. Meyer (East Lansing, MI: Michigan State University Press, 2003), page 196.

[19] Gregg Easterbrook, “Where did life come from?”, Wired, February 2007, page 108.

[20] George M. Whitesides, “Revolutions in Chemistry: Priestley Medalist George M. Whitesides’ Address”, Chemical and Engineering News, 85 (March 26, 2007), pages 12-17. See http://ismagilovlab.uchicago.edu/GMW_address_priestley_medal.pdf.

Retrieved 22nd April 2012.

[21] John Horgan, Scientific American, 28th February 2011.

[22] Eugene Koonin, The Logic of Chance: The Nature and Origin of Biological Evolution (Upper Saddle River, NJ: FT Press, 2011), page 391.

Genesis and Genes on Television

June 15, 2013

A local television station, SABC 2, recently featured Genesis and Genes. The segment, which is about seven minutes long, is now available on YouTube. Here is the link:

http://www.youtube.com/watch?v=SiqEDnN0aM8

Science as a Self-Correcting Mechanism

June 9, 2013

Writing in the Huffington Post recently, Karl Giberson, a prominent proponent of theistic evolution, appealed to the well-known argument that science is a self-correcting mechanism.[1] He writes:

Science – and this includes evolution – is a self-correcting enterprise. I know little of psychiatry, but I am not shocked to discover that critical voices have emerged and are being heard. This is the norm for science. Seemingly secure science is often modified – think Newtonian physics – and entire fields even disappear, like phrenology (studying personality via bumps on the skull). Anyone who understands the scientific community knows it to be full of renegade individualists only too eager to overturn the status quo. This aggressive self-examination is the reason why we now understand the world so well…

The reality is different from this idyllic description, and informed consumers of science know that, public relations aside, there are serious doubts as to the extent to which science is a self-correcting enterprise. For example, the epidemiologist John Ioannidis wrote a paper in 2012 entitled Why Science Is Not Necessarily Self-Correcting.[2] The abstract begins as follows:

The ability to self-correct is considered a hallmark of science. However, self-correction does not always happen to scientific evidence by default. The trajectory of scientific credibility can fluctuate over time, both for defined scientific fields and for science at-large. History suggests that major catastrophes in scientific credibility are unfortunately possible and the argument that “it is obvious that progress is made” is weak.

Ioannidis proceeds to mention one mechanism that renders self-correction less than perfect:

 Efficient and unbiased replication mechanisms are essential for maintaining high levels of scientific credibility. Depending on the types of results obtained in the discovery and replication phases, there are different paradigms of research: optimal, self-correcting, false nonreplication, and perpetuated fallacy. In the absence of replication efforts, one is left with unconfirmed (genuine) discoveries and unchallenged fallacies.

 What the last sentence means is that, if replication of research results is not a ubiquitous feature of science, there will be unchallenged fallacies. They will not be corrected. And, as we have discussed several times in this forum, replicability of research is a major weakness in contemporary science.
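
Ioannidis’s point can be made quantitative. In his earlier and widely cited 2005 paper, Why Most Published Research Findings Are False, he expressed the credibility of a literature through the positive predictive value (PPV) – the probability that a claimed “positive” finding is actually true – as a function of the prior odds R that a tested relationship is real, the significance threshold α, and the statistical power 1 − β:

$$\mathrm{PPV} = \frac{(1-\beta)\,R}{(1-\beta)\,R + \alpha}$$

To plug in illustrative numbers (mine, not Ioannidis’s): with the conventional α = 0.05 and power 1 − β = 0.8, and one true relationship for every ten hypotheses tested (R = 0.1), PPV = 0.08/(0.08 + 0.05) ≈ 0.62. Nearly four in ten published “discoveries” would then be false before any bias is even factored in – and in the absence of replication, these are precisely the unchallenged fallacies that never get corrected.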

 Ioannidis is too savvy about problems with contemporary science to swallow Karl Giberson-type propaganda:

The self-correction principle does not mean that all science is correct and credible. A more interesting issue than this eschatological promise is to understand what proportion of scientific findings are correct (i.e., the credibility of available scientific results).

 And:

 Even if we believe that properly conducted science will asymptotically trend towards perfect credibility, there is no guarantee that scientific credibility continuously improves and that there are no gap periods during which scientific credibility drops or sinks (slightly or dramatically). The credibility of new findings and the total evidence is in continuous flux. It may get better or worse.

 The paper by Ioannidis is enlightening. I was particularly pleased to discover that arguments I made in Genesis and Genes mirrored those made by Ioannidis. So I reproduce here the section of the book which deals with the issue of science as a self-correcting mechanism:

Jonathan: I’ve heard it said that the fact that new theories replace old theories only proves that science is a self-correcting enterprise. Do you agree?

YB: That’s a nice way to put a happy face on it. But there are two serious problems with this suggestion. Firstly, even if science were this gigantic super-tanker that eventually turns around, it might be too slow for the individual who lived while the old paradigm prevailed. Let’s consider the demise of the eternal universe paradigm. Until 1965, most scientists believed that the universe had never been created – it was eternal. This stood in total contrast to the Torah view that the universe was created at a specific point. By 1965, the old paradigm had collapsed, and was replaced by the Big Bang model, according to which the universe came into existence, apparently ex nihilo. Now imagine a person who died in 1950. Does it help him that science is a self-correcting mechanism? His entire life was spent in the shadow of the monolithic scientific consensus that the universe is eternal. Since he, like all of us, was not a prophet, he could not foresee that some time after his death, the scientific paradigm that dominated his life would crumble and be replaced with a radically different picture. If this person had been a Jew, he would have lived his entire life with unresolved tension between the scientific paradigm that the universe is eternal, and Jewish belief in the creation of the universe. So this business of self-correction, even if it were true, is only good for historians. It won’t help your average individual struggling with a particular issue and having only one lifetime.

 Jonathan: I see. But you mentioned that there were two problems with this suggestion.

YB: Yes. The second problem is this: Why do you believe that science is a self-correcting mechanism? It is because we know that in specific cases, certain beliefs that the scientific community subscribed to turned out to be wrong and were discarded. But there is no way to estimate in what percentage of all cases science indeed reverses its course. Oh, I know the party line about how scientists constantly scrutinise the evidence, compare their hypotheses to experimental results and the rest of it. But we saw enough in the previous chapters to appreciate that in real life, it hardly ever reaches this ideal. I described some stories that had happy endings, like the one involving Dr. Robin Warren, who established that bacteria cause some ulcers. But do you know how many stories had a sad ending? Can you estimate how often in the past a researcher had a hunch but abandoned his line of research when he was subjected to ridicule? Do you have any way of estimating which ending happens more frequently, the sad or the happy? What if for every case like Dr. Warren’s, there were a hundred scientists who had a promising insight or idea, but were deterred by the initial rejection they experienced? We only hear the stories with a happy ending. But scientists are human beings, and most human beings don’t have a thick skin.

See Also:

The post Dr. John Ioannidis and the Reality of Research:

https://torahexplorer.com/2013/05/05/dr-john-ioannidis-and-the-reality-of-research/

The post Dr. Ben Goldacre and the Reproducibility of Research:

https://torahexplorer.com/2013/04/10/dr-ben-goldacre-and-the-reproducibility-of-research/

References:

[1] See http://www.huffingtonpost.com/karl-giberson-phd/evolutions-refusal-to-die_b_3292734.html.

Retrieved 9th June 2013.

[2] See http://pps.sagepub.com/content/7/6/645.

Retrieved 9th June 2013.

Brain Scam

June 3, 2013

Imagine that Tom is analysing a work of literature – The Grapes of Wrath, say. He looks at the plot, characterisation, historical context, and uses the various tools of literary analysis. But now Tom takes the study further, and begins to examine the type of paper that the book was printed on. Next, he looks at the ink used, employing gas chromatography to elucidate the chemical makeup of its ingredients. What if at some point Tom insists that the book can be fully understood through this latter, scientific methodology, and that the novel is nothing more than the sum of its parts – the molecular interactions between ink droplets and the cellulose in the paper?

Science has made great progress in the last three centuries by pressing the cause of reductionism. The idea is that underneath complex phenomena and entities are simpler, more fundamental layers that can be studied in order to fully elucidate the complex conglomerate. For example, biology has benefited by exploiting the reductionist tools of biochemistry – reducing complex biological phenomena to the level of chemistry. But, as in our example above, the process can go haywire, as when claims are made that human beings are no more than a collection of biochemical responses to stimuli and neuronal interactions. [Chris Mooney’s The Republican Brain: the Science of Why They Deny Science – and Reality (2012) disavows “reductionism” yet encourages readers to treat people with whom they disagree more as pathological specimens of brain biology than as rational interlocutors.]

Informed consumers of science need to be aware of reductio ad absurdum in the realm of brain scans. The idea that a neurological explanation could exhaust the meaning of experience was already being mocked as “medical materialism” by the psychologist William James a century ago. And in The Invisible Gorilla (2010), Christopher Chabris and Daniel Simons advise readers to be wary of such “brain porn”. But popular magazines, science websites and books are frenzied consumers of, and proselytisers for, these scans. “This is your brain on music”, announces a caption to a set of fMRI images, and we are invited to conclude that we now understand more about the experience of listening to music. The genre is inexhaustible: “This is your brain on poker”, “This is your brain on metaphor”, “This is your brain on diet soda”, “This is your brain on God” and so on. The attempt to explain, through snazzy brain-imaging studies, not only how thoughts and emotions function, but how politics and religion work, and what the correct answers are to age-old philosophical controversies, is nothing less than an intellectual pestilence, a plague of neuroscientism, also known as neurobabkes. For years, the uninformed public has been deluged by references to innumerable studies that “explain” the most complex, subtle and ethereal phenomena on the basis of some colour-drenched picture of a sliced brain. The accompanying report, which purports to explain why human beings love, or envy, or believe in God, or prefer Coke to Pepsi, is heavy on neuro-babble. This is reductionist science run amok. The ubiquity of headlines containing phrases like “brain scans show” is matched only by the confusion they create in the minds of the public, uninformed about science as it is. So let’s revise some basics.

The human brain is, so far as we know, the most complex object in the universe. That a part of it “lights up” on a functional magnetic resonance imaging (fMRI) scan does not mean that the rest is inactive; it means that certain areas in the brain have an elevated oxygen consumption when a subject performs a task such as reading or reacting to stimuli such as pictures or sounds. The significance of this is not necessarily obvious. Technicolor brain scans are not anything remotely like photographs of the brain in action in real time. Scientists cannot “read” minds. Paul Fletcher, Professor of health neuroscience at Cambridge University, says that he gets “exasperated” by much popular coverage of neuroimaging research, which assumes that “activity in a brain region is the answer to some profound question about psychological processes. This is very hard to justify given how little we currently know about what different regions of the brain actually do.” Too often, he says, a popular writer will “opt for some sort of neuro-flapdoodle in which a highly simplistic and questionable point is accompanied by a suitably grand-sounding neural term and thus acquires a weightiness that it really doesn’t deserve. In my view, this is no different to some mountebank selling quacksalve by talking about the physics of water molecules’ memories, or a beautician talking about action liposomes.”

In fact, a new branch of the neuroscience-explains-everything genre may be created at any time by simply attaching the prefix “neuro” to whatever. So “neuroeconomics” is the latest in a line of rhetorical attempts to sell the dismal science as a hard one; “molecular gastronomy” has now been trumped in the gluttony stakes by “neurogastronomy”; students of Republican and Democratic brains are doing “neuropolitics”; literature academics practise “neurocriticism”, and there is “neurotheology”, “neuromarketing” and other assorted neurononsense.

When the media conjure up stories with titles like “Brain Scans Show Vegetarians and Vegans More Empathic than Omnivores”, the content is almost entirely fictitious. It would be hilarious if not for the fact that the masses out there take this as Science – magisterial, peremptory, authoritative. Examples of this pop-science abound. Marketing consultant Martin Lindstrom tells us that people “love” their iPhones. This conclusion is based on the fact that brain scans of telephone users listening to their personal ring tones showed a “flurry of activation” in the insula, a prune-sized area of the brain. But researchers at UCLA claimed that photos of former presidential candidate John Edwards provoked feelings of “disgust” in subjects because they “lit up” the… insula. Is dopamine “the molecule of intuition”, as Jonah Lehrer suggested in The Decisive Moment (2009), or is it the basis of “the neural highway that’s responsible for generating the pleasurable emotions”, as he wrote in Imagine (2012)? Susan Cain’s Quiet: the Power of Introverts in a World That Can’t Stop Talking (2012), meanwhile, calls dopamine the “reward chemical” and postulates that extroverts are more responsive to it. Other stars of the pop literature are the hormone oxytocin (the “love chemical”) and mirror neurons, which allegedly explain empathy.

***

Informed consumers of science are aware that just about any conclusion in science – but especially in psychiatry, neurology and psychology – is possible, if you pick your evidence carefully. “Having outlined your theory,” says Professor Fletcher, “you can then cite a finding from a neuroimaging study identifying, for example, activity in a brain region such as the insula… You then select from among the many theories of insula function, choosing the one that best fits with your overall hypothesis, but neglecting to mention that nobody really knows what the insula does or that there are many ideas about its possible function.” The insula plays a role in a broad range of psychological experiences, including empathy and disgust, but also sudden insight, uncertainty, and the awareness of bodily sensations, such as pain, hunger, and thirst. With such a broad physiological portfolio, it is no surprise that the insula is activated in many fMRI studies.

Even more versatile than the insula is the infamous amygdala. Invariably described as “primitive” or even “reptilian”, the amygdala shows increased activation when one experiences fear, but it also springs to life when one encounters novel or unexpected stimuli. The multi-functionality of most brain areas renders reasoning backwards from neural activation depicted by a scan to the subjective experience of the brain’s owner a dubious strategy. This approach – formally referred to as “reverse inference” – is nothing but a high-tech and expensive Rorschach test, inviting interpreters to read whatever they wish into ambiguous findings. There is strong evidence for the amygdala’s role in fear, but then fear is one of the most heavily studied emotions; popularisers downplay or ignore the amygdala’s associations with the cuddlier emotions and memory. (In The Republican Brain, Mooney suggests that “conservatives and authoritarians” might be the nasty way they are because they have a “more active amygdala”.)
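
A back-of-the-envelope Bayesian calculation shows just how weak reverse inference is. (The following sketch, with purely illustrative numbers of my own, is not drawn from any of the studies mentioned above.) What the scan enthusiast needs is the conditional probability running in the opposite direction to the one the experiments measure:

$$P(\text{fear} \mid \text{amygdala active}) = \frac{P(\text{amygdala active} \mid \text{fear})\; P(\text{fear})}{P(\text{amygdala active})}$$

Suppose the amygdala activates in 90 per cent of fear conditions, that fear is actually present in 10 per cent of the experimental conditions sampled, and that – given its broad portfolio – the amygdala activates in 50 per cent of all conditions. Then P(fear | activation) = (0.9 × 0.1)/0.5 = 0.18: fewer than one “lit-up” amygdala in five would actually signal fear, despite the impressive-sounding 90 per cent hit rate.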

Brain imaging is ubiquitous in pop science mostly because the images are mediagenic. The technology lulls the hoi polloi into thinking that the most complex entities and phenomena are reducible to simple images on a screen, a perfect fit for a generation hooked on iGadgets. Pretty pictures of the brain can seduce us into drawing simplistic conclusions, leading us to ask more of these images than they can possibly deliver. And the pictures inspire uncritical devotion: a 2008 study, notes Fletcher, showed that “people – even neuroscience undergrads – are more likely to believe a brain scan than a bar graph”.

Even if brain scans were reliable indicators of brain activity, it is not straightforward to infer general lessons about life from experiments conducted under highly artificial conditions. Furthermore, let’s remember that we do not have the faintest clue about the biggest mystery of all – how a lump of grey matter produces the conscious experience we take for granted.

***

Brain scams are not the only area where scientists and science reporters overreach. The same is true of gene studies that purport to pin down the most intricate human characteristics and behaviours to this or that gene, reducing human beings to nothing but a collection of amino acids.

And the same is true of evolutionary biology, which purports to reduce human beings to the sum total of random mutations. Any claim about diffuse phenomena that is made on the basis of reductionism should be treated with suspicion.

***

References:

See the following two articles:

http://ideas.time.com/2013/05/30/dont-read-too-much-into-brain-scans/

Retrieved 3rd June 2013.

http://www.newstatesman.com/culture/books/2012/09/your-brain-pseudoscience

Retrieved 3rd June 2013.

Peer Review

May 27, 2013

One factor that clearly distinguishes informed consumers of science from the general public is the attitude these groups have towards the process of peer-review. The general public entertains unrealistic, highly idealised visions of the process by which scientific research is assessed by peers. In theory, peer review is supposed to act as a filter, weeding out the crackpots; in practice, it often turns out to be a way to enforce orthodoxy.

Copernicus’s heliocentric cosmology, Galileo’s mechanics, Newton’s gravity and equations of motion – these ideas never appeared in journal articles. They appeared in books that were reviewed, if at all, by associates of the author. The peer-review process as we know it was instituted after the Second World War, largely due to the huge growth of the scientific enterprise and the enormous pressure on academics to publish ever more papers.

Since the 1950s, peer-review has worked as follows: a scientist wishing to publish a paper in a journal submits a copy of the paper to the editor of a journal. The editor forwards the paper to several academics whom he considers to be experts on the matter, asking whether the paper is worthy of publication. These experts – who usually remain anonymous – submit comments about the paper that constitute the “peer review”. The referees judge the content of the paper on criteria such as the validity of the claims made in the paper, the originality of the work, and whether the work, even if correct and original, is important enough to be worthy of publication. Often, the journal editor will require the author to amend his paper in accordance with the recommendations of the referees.

Prior to the War, university professors were mainly teachers, carrying a teaching load of five or six courses per semester (a typical course load nowadays is one or two courses). Professors with this onerous teaching burden were not expected to write papers. The famous philosopher of science Sir Karl Popper wrote in his autobiography that the dean of the New Zealand university where Popper taught during World War II said that he regarded Popper’s production of articles and books as a theft of time from the university.

But at some point, universities came to realise that their prestige – and with it the grants they received from governments and corporations – depended not so much on the teaching skills of their professors but rather on the scholarly reputation of these professors. And this reputation could only be enhanced through publications. Teaching loads were reduced to allow professors more time for research and the production of papers; salaries began to depend on one’s publication record. Before the War, salaries of professors of the same rank (associate professor, assistant professor, adjunct professor, full professor etc.) were the same (except for an age differential, which reflected experience). Nowadays, salaries of professors in the same department of the same age and rank can differ by more than a factor of two.

One consequence of all this is that the production of papers has increased by a factor of more than one thousand over the past fifty years. The price paid for this fecundity is a precipitous decline in quality. Before the War, when there was no financial incentive to publish papers, scientists wrote them as a labour of love. These days, papers are written mostly to further one’s career. One thus finds that nowadays, most papers are never cited by anyone except their author(s).

Philip Anderson, who won a Nobel Prize for physics, writes that

In the early part of the postwar period [a scientist’s] career was science-driven, motivated mostly by absorption with the great enterprise of discovery, and by genuine curiosity as to how nature operates. By the last decade of the century far too many, especially of the young people, were seeing science as a competitive interpersonal game, in which the winner was not the one who was objectively right as [to] the nature of scientific reality, but the one who was successful at getting grants, publishing in Physical Review Letters, and being noticed in the news pages of Nature, Science, or Physics Today… [A] general deterioration in quality, which came primarily from excessive specialization and careerist sociology, meant quite literally that more was worse.[1]

More is worse. As Nature puts it, “With more than a million papers per year and rising, nobody has time to read every paper in any but the narrowest fields, so some selection is essential. Authors naturally want visibility for their own work, but time spent reading their papers will be time taken away from reading someone else’s.”

The number of physicists has increased by a factor of one thousand since the year 1900. Back then, ten percent of all physicists in the world had either won a Nobel Prize or had been nominated for it. Things are much the same in chemistry. The American Chemical Society made a list of the most significant advances in chemistry over the last 100 years. There has been no change in the rate at which breakthroughs in chemistry have been made in spite of the thousand-fold increase in the number of chemists. In the 1960s, US citizens were awarded about 50,000 patents in chemistry-related areas per year. By the 1980s, the number had dropped to 40,000. But the number of papers has exploded.

One result of this publish-or-perish mentality is that groundbreaking papers are often rejected because they are submitted to referees who are incapable or unwilling to recognise novel ideas. Consider these examples.

Rosalyn Yalow won the Nobel Prize in Physiology or Medicine in 1977. She describes how her Nobel-winning paper was received: “In 1955 we submitted the paper to Science… the paper was held there for eight months before it was reviewed. It was finally rejected. We submitted it to the Journal of Clinical Investigations, which also rejected it.”[2]

Günter Blobel also won the Nobel Prize in Physiology or Medicine, in 1999. In a news conference given just after he was awarded the prize, Blobel said that the main problem one encounters in one’s research is “when your grants and papers are rejected because some stupid reviewer rejected them for dogmatic adherence to old ideas.” According to the New York Times, these comments “drew thunderous applause from the hundreds of sympathetic colleagues and younger scientists in the auditorium.”[3]

Mitchell J. Feigenbaum thus described the reception that his revolutionary papers on chaos theory received: “Both papers were rejected, the first after a half-year delay. By then, in 1977, over a thousand copies of the first preprint had been shipped. This has been my full experience. Papers on established subjects are immediately accepted. Every novel paper of mine, without exception, has been rejected by the refereeing process. The reader can easily gather that I regard this entire process as a false guardian and wastefully dishonest.”[4]

Theodore Maiman invented the laser, an achievement whose importance is not doubted by anyone. The leading American physics journal, Physical Review Letters, rejected Maiman’s paper on constructing a laser.[5]

John Bardeen, the only person to have ever won two Nobel Prizes in physics, had difficulty publishing a theory in low-temperature solid-state physics that went against the paradigm.[6]

Stephen Hawking needs no introduction. According to his first wife Jane, when Hawking submitted to Nature what is generally regarded as his most important paper on black hole evaporation, the paper was initially rejected.[7] The physicist Frank J. Tipler writes that “I have heard from colleagues who must remain nameless that when Hawking submitted to Physical Review what I personally regard as his most important paper, his paper showing that a most fundamental law of physics called ‘unitarity’ would be violated in black hole evaporation, it, too, was initially rejected.”

Conventional wisdom in contemporary geophysics holds that the Hawaiian Islands were formed sequentially as the Pacific Plate moved over a hot spot deep inside the Earth. This idea was first developed in a paper by the Canadian geophysicist J. Tuzo Wilson. Wilson writes: “I… sent [my paper] to the Journal of Geophysical Research. They turned it down… They said my paper had no mathematics in it, no new data, and that it didn’t agree with the current views. Therefore, it must be no good. Apparently, whether one gets turned down or not depends largely on the reviewer. The editors, too, if they don’t see it your way, or if they think it’s something unusual, may turn it down. Well, this annoyed me…”[8]

There is not much incentive for referees to carefully adjudicate their fellow-scientists’ papers. As Nature puts it: “How much time do referees expend on peer review? Although referees may derive benefits from reviewing, it still represents time taken away from other activities (research, teaching and so forth) that they would have otherwise prioritized. Referees are normally unpaid but presumably their time has some monetary value, as reflected in their salaries.”

In 2006, Nature published an essay by Charles G. Jennings, a former editor with the Nature journals and former executive director of the Harvard Stem Cell Institute. As an editor, Jennings was intimately familiar with the peer-review system, and knows full well how badly misunderstood this process is by the public:

Whether there is any such thing as a paper so bad that it cannot be published in any peer reviewed journal is debatable. Nevertheless, scientists understand that peer review per se provides only a minimal assurance of quality, and that the public conception of peer review as a stamp of authentication is far from the truth.

Jennings writes that “many papers are never cited (and one suspects seldom read)”. These papers are written, to a large extent, because “To succeed in science, one must climb this pyramid [of journals]: in academia at least, publication in the more prestigious journals is the key to professional advancement.” Advancement, in this context, is measured by career rewards such as recruitment and promotion, grant funding, invitations to speak at conferences, establishment of collaborations and media coverage.

***

Many in the scientific community recognise the ills that plague the peer-review process, and experiments are being conducted to improve – or sidestep – the current dispensation. For example, some journals no longer grant referees the protection of anonymity. Instead, reviewers are identified and their critiques of papers are made available to the author of the paper being reviewed. The author is then able to defend his paper. This may ameliorate the problem of reviewers who hamper the publication of a paper for less than noble reasons (such as professional jealousy).

At any rate, informed consumers of science understand that peer-review is far from perfect. Too often it is an efficient way of strangling new ideas rather than a vehicle for promoting them. The peer-review system often stifles true innovation, allowing the reigning paradigm to squash competition unfairly. This is especially true in controversial areas like biological evolution.

***

References:

My two main references for this post are:

  1. An essay by the physicist Frank J. Tipler entitled Refereed Journals: do they insure quality or enforce orthodoxy? The essay appeared in the volume Uncommon Dissent: Intellectuals who find Darwinism Unconvincing, William A. Dembski (editor), ISI Books, 2004.
  2. A 2006 editorial in Nature, available here: http://www.nature.com/nature/peerreview/debate/nature05032.html. Retrieved 26th May 2013.

[1] Philip Anderson, in Brown, Pais and Pippard, editors, Twentieth Century Physics, American Institute of Physics Press, 1995, page 2029.

[2] Walter Shropshire Jr., editor, The Joys of Research, Smithsonian Institution Press, 1981, page 109.

[3] New York Times, 12th October 1999, page A29.

[4] Mitchell J. Feigenbaum, in Brown, Pais and Pippard, editors, Twentieth Century Physics, American Institute of Physics Press, 1995, page 1850.

[5] Ibid. page 1426.

[6] Lillian Hoddeson, True Genius: The Life and Science of John Bardeen, Joseph Henry Press, 2002, page 300.

[7] Jane Hawking, Music to Move the Stars: A Life with Stephen Hawking, Trans-Atlantic Publications, 1999, page 239.

[8] Walter Shropshire Jr., editor, The Joys of Research, Smithsonian Institution Press, 1981, page 130.

The Science Mystique

May 20, 2013

A reader has kindly drawn my attention to an article by a physician, Jalees Rehman, which treads territory that will be familiar to readers of TorahExplorer. In this post, I reproduce some of Dr. Rehman’s points, interspersed with my comments.[1]

***

Dr. Rehman begins by discussing what he terms the doctor mystique – “Doctors had previously been seen as infallible saviors who devoted all their time to heroically saving lives and whose actions did not need to be questioned” – a notion now rapidly crumbling. Informed patients have access to an immense amount of information with which to question the decisions of their physicians – “Instead of blindly following doctors’ orders, they want to engage their doctor in a discussion and become an integral part of the decision-making process.” In addition, patients nowadays are more aware of various factors that can skew doctors’ judgement:

The recognition that gifts, free dinners and honoraria paid by pharmaceutical companies strongly influence what medications doctors prescribe has led to the establishment of important new rules at universities and academic journals to curb this influence…

I discussed related issues in posts such as Dr. John Ioannidis and the Reality of Research and Dr. Ben Goldacre and the Reproducibility of Research.

Dr. Rehman’s essay, however, is devoted to another myth, one that he calls The Science Mystique. He correctly notes that it still persists where similar notions – the feminine mystique and the doctor mystique – have disappeared or are disintegrating. But Dr. Rehman is clear that the science mystique is vulnerable:

As with other mystiques, it [i.e. The Science Mystique] consists of a collage of falsely idealized and idolized notions of what science constitutes. This mystique has many different manifestations, such as the firm belief that reported scientific findings are absolutely true beyond any doubt, scientific results obtained today are likely to remain true for all eternity and scientific research will be able to definitively solve all the major problems facing humankind.

Quite right. Readers of Genesis and Genes will be familiar with a comment made by the physicist and philosopher Sir John Polkinghorne:

Many people have in their minds a picture of how science proceeds which is altogether too simple. This misleading caricature portrays scientific discovery as resulting from the confrontation of clear and inescapable theoretical predictions by the results of unambiguous and decisive experiments… In actual fact… the reality is more complex and more interesting than that.

Science is a human – read fallible – endeavour. Informed consumers of science understand that a host of factors influence research. Beyond the technical aspects of research, there are societal factors, political factors, ideological factors, financial factors and dozens more, some of which I discussed in the first chapter of Genesis and Genes. One consequence of this is that scientific findings come in a spectrum of credibility, ranging from solid to hopelessly speculative and ideological.

Dr. Rehman:

This science mystique is often paired with an over-simplified and reductionist view of science. Some popular science books, press releases or newspaper articles refer to scientists having discovered the single gene or the molecule that is responsible for highly complex phenomena, such as a disease like cancer or philosophical constructs such as morality.

Indeed. Most members of the public are not informed consumers of science, and are easily swayed by simplistic or exaggerated claims. A common example of exaggerated claims swallowed by the public comes from palaeontology. A fossil is unearthed and proclaimed as the latest earliest ancestor of human beings. After the media frenzy subsides and the public’s attention is diverted, the claims inevitably prove to be hollow. [For several excellent examples of the genre, see the chapter entitled Human Origins and the Fossil Record in Science and Human Origins.][2] This is true with respect to complicated concepts and phenomena like cancer or morality, as Dr. Rehman writes, but it is all the more true with respect to over-arching theories that purport to explain ultimate questions about the universe or life. The gullible public is unaware of the tremendous superstructure of assumptions, ideological commitments and technical difficulties that go into scientists’ absolutist statements about such subjects.

Dr. Rehman continues:

As flattering as it may be, few scientists see science as encapsulating perfection. Even though I am a physician, most of my time is devoted to working as a cell biologist. My laboratory currently studies the biology of stem cells and the role of mitochondrial metabolism in stem cells. In the rather antiquated division of science into “hard” and “soft” sciences, where physics is considered a “hard” science and psychology or sociology are considered “soft” sciences, my field of work would be considered a middle-of-the-road, “firm” science. As cell biologists, we are able to conduct well-defined experiments, falsify hypotheses and directly test cause-effect relationships. Nevertheless, my experience with scientific results is that they are far from perfect and most good scientific work usually raises more questions than it provides answers. We scientists are motivated by our passion for exploration, and we know that even when we are able to successfully obtain definitive results, these findings usually point out even greater deficiencies and uncertainties in our knowledge.

An important qualification is needed here. Researchers like Dr. Rehman are usually aware that in their field, perfection is elusive. But they are often largely ignorant of other fields, and may harbour unrealistic views of the reliability of research in those fields.

Readers of Genesis and Genes will recall chapter 3, in which I described how scientists from half-a-dozen different disciplines were attempting to determine the age of the Earth in the latter part of the 19th century. It was frequently the case that practitioners of one discipline, aware of the limitations of their own field, failed to understand that other fields were just as vulnerable, but for different reasons. This led to a situation in which a mirage was created that there was independent confirmation, arising from several different disciplines, regarding the age of the Earth. This turned out to be completely illusory.

Dr. Rehman now turns to reproducibility of research:

One key problem of science is the issue of reproducibility. Psychology is currently undergoing a soul-searching process[3] because many questions have been raised about why published scientific findings have such poor reproducibility when other psychologists perform the same experiments. One might attribute this to the “soft” nature of psychology, because it deals with variables such as emotions that are difficult to quantify and with heterogeneous humans as their test subjects. Nevertheless, in my work as a cell biologist, I have encountered very similar problems regarding reproducibility of published scientific findings. My experience in recent years is that roughly only half the published findings in stem cell biology can be reproduced when we conduct experiments according to the scientific methods and protocols of the published paper.

Recall that earlier, Dr. Rehman characterised his field, cell biology, as a ‘firm’ science, somewhere between physics and psychology on a spectrum similar to the ‘proof continuum’ I discussed in Genesis and Genes. As he says, cell biology is an area of science where ostensibly objective parameters exist that should ensure the reproducibility of research. Alas, to a significant degree, reproducibility is elusive. Cell biology is not sociology or anthropology; nor are we talking about drug trials here (where as much as 90% of published studies may be wrong). Nonetheless, upwards of 50% of the research in cell biology is not reproducible. One is reminded of this passage in Genesis and Genes:

[Glenn] Begley [who served, for a decade, as head of global cancer research at Amgen] met for breakfast at a cancer conference with the lead scientist of one of the problematic studies. “We went through the paper line by line, figure by figure,” said Begley. “I explained that we re-did their experiment 50 times and never got their result. He said they’d done it six times and got this result once, but put it in the paper because it made the best story. It’s very disillusioning.”

Dr. Rehman:

On the other hand, we devote a limited amount of time and resources to replicating results, because there is no funding available for replication experiments. It is possible that if we devoted enough time and resources to replicate a published study, tinkering with the different methods, trying out different batches of stem cells and reagents, we might have a higher likelihood of being able to replicate the results. Since negative studies are difficult to publish, these failed attempts at replication are buried and the published papers that cannot be replicated are rarely retracted. When scientists meet at conferences, they often informally share their respective experiences with attempts to replicate research findings. These casual exchanges can be very helpful, because they help us ensure that we do not waste resources to build new scientific work on the shaky foundations of scientific papers that cannot be replicated.

The difficulty of publishing negative results and the lack of incentive to verify other researchers’ results are recognised as major contributors to systemic problems within contemporary science. The average member of the public labours under the illusion that mechanisms such as peer-review suffice to ensure that whatever is published in a mainstream journal is infallible. This, of course, constitutes child-like naivety. As Nature put it in a 2006 editorial, “Scientists understand that peer review per se provides only a minimal assurance of quality, and that the public conception of peer review as a stamp of authentication is far from the truth.”

Dr. Rehman:

Most scientists are currently struggling to keep up with the new scientific knowledge in their own field, let alone put it in context with the existing literature. As I have previously pointed out,[4] more than 30-40 scientific papers are published on average on any given day in the field of stem cell biology. This overwhelming wealth of scientific information inevitably leads to a short half-life of scientific knowledge… What is considered a scientific fact today may be obsolete within five years.

Quite true. As I wrote in Genesis and Genes,

A paper published in the Proceedings of the National Academy of Sciences in 2006 noted that “More than 5 million biomedical research and review articles have been published in the last 10 years.” That’s an average of 1370 papers per day. And this is just biomedical research.

This deluge of information, and the fact that “What is considered a scientific fact today may be obsolete within five years”, has important repercussions for informed consumers of science. Those who follow the evolution debate are aware of how the ephemeral nature of scientific knowledge can have an impact on what was only recently considered absolute. Whether it is Tree of Life research, Junk DNA or the discovery of numerous instances of Lamarckian heredity, there have been breathtaking turnarounds in recent years. Basic prudence dictates that when evolutionary biologists invoke ‘overwhelming evidence’ for this or that claim, their pronouncements be taken with a sack of salt.

Dr. Rehman:

One aspect of science that receives comparatively little attention in popular science discussions is the human factor. Scientific experiments are conducted by scientists who have human failings, and thus scientific fallibility is entwined with human frailty. Some degree of limited scientific replicability is intrinsic to the subject matter itself… At other times, researchers may make unintentional mistakes in interpreting their data or may unknowingly use contaminated samples… However, there are far more egregious errors made by scientists that have a major impact on how science is conducted. There are cases of outright fraud… [but] Such overt fraud tends to be unusual… However, what occurs far more frequently than gross fraud is the gentle fudging of scientific data, consciously or subconsciously, so that desired scientific results are obtained. Statistical outliers are excluded, especially if excluding them helps direct the data in the desired direction. Like most humans, scientists also have biases and would like to interpret their data in a manner that fits with their existing concepts and ideas.

Bravo. This is a major theme of Genesis and Genes, and it is crucial in becoming an informed consumer of science. In this short essay, Rehman obviously cannot describe all the influences that have an impact on scientific research. One of Rehman’s more important omissions is the enormous amount of conditioning which influences scientists – like everyone else – long before they step into the laboratory. Take evolution. If you grew up in the West any time in the last fifty years, you will have encountered innumerable instances in which the claims of evolutionary biology were seared into your consciousness, from David Attenborough documentaries to museum dioramas to advertising campaigns named The evolution of the office to countless articles in New Scientist. Scientists do not enter their research careers with a tabula rasa. As Professor John Polkinghorne puts it,

Scientists do not look at the world with a blank gaze; they view it from a chosen perspective and bring principles of interpretation and prior expectations… to bear upon what they observe. Scientists wear (theoretical) “spectacles behind the eyes”.

Dr. Rehman:

Human fallibility not only affects how scientists interpret and present their data, but can also have a far-reaching impact on which scientific projects receive research funding or the publication of scientific results. When manuscripts are submitted to scientific journals or when grant proposals are submitted to funding agencies, they usually undergo a review by a panel of scientists who work in the same field and can ultimately decide whether or not a paper should be published or a grant funded. One would hope that these decisions are primarily based on the scientific merit of the manuscripts or the grant proposals, but anyone who has been involved in these forms of peer review knows that, unfortunately, personal connections or personal grudges can often be decisive factors.

Correct. If you happen to be conducting climate research that produces unpopular results, for example, you can be almost sure that your findings will not be published in the most prestigious journals. If you happen to suspect that the brilliant mathematician Irving Segal was right, and that the linear relationship that Edwin Hubble saw between the redshift and apparent brightness of galaxies is perhaps illusory, you are almost certain to receive very little telescope time. Exploring the natural world to your heart’s content, following your curiosity wherever it leads you – that picture of how science was done was fairly accurate up to about the middle of the 19th century. Affluent gentleman scientists could indulge their curiosity about how nature operates. These days, the confines within which research is done will be dictated, to a significant extent, by whatever is considered acceptable by the majority of the community.

***

The science mystique will eventually topple, and that will be a liberating moment for science. It will usher in an age in which scientists and the public alike will be informed consumers of science, able to accurately assess various findings of scientists and assign to them appropriate levels of credibility.

***

See also:

The post Dr. John Ioannidis and the Reality of Research:

https://torahexplorer.com/2013/05/05/dr-john-ioannidis-and-the-reality-of-research/

The post Dr. Ben Goldacre and the Reproducibility of Research:

https://torahexplorer.com/2013/04/10/dr-ben-goldacre-and-the-reproducibility-of-research/

References:

[1] See http://www.3quarksdaily.com/3quarksdaily/2013/02/the-science-mystique-.html.

Retrieved 17th May 2013.

[2] Discovery Institute Press, 2012.

[3] Dr. Rehman cites this paper at this point:

http://pps.sagepub.com/content/7/6/537.full.

Retrieved 19th May 2013.

[4] Dr. Rehman cites the following article:

http://www.scilogs.com/next_regeneration/science-journalism-and-the-inner-swine-dog/.

Retrieved 19th May 2013.

Darwinism and Morality

May 13, 2013

William Provine is a biologist and historian of biology at Cornell University. He is forthright about biological evolution and its implications, writing, for example, that evolution is the greatest engine of atheism ever invented. Provine summarises the consequences of the belief in evolution as follows:

Naturalistic evolution has clear consequences that Charles Darwin understood perfectly. 1) No gods worth having exist; 2) no life after death exists; 3) no ultimate foundation for ethics exists; 4) no ultimate meaning in life exists; and 5) human free will is nonexistent.[1]

In this post, we will concentrate on points 3 and 5 above.

***

The evolutionary view is that moral law is something humans create as an evolved adaptation – a conviction that something is right or wrong arises out of the struggle for survival. All the notions we associate with moral and ethical principles are merely adaptations, foisted upon us by evolutionary mechanisms in order to maximize survival.

Provine’s logic is unassailable, if you grant his premises. His point of departure is that nothing exists beyond matter and energy. Matter and energy may manifest themselves in relatively simple forms – a hydrogen molecule, perhaps – and in complex forms, as in a butterfly or human being. But in the end, it all boils down to quarks, electrons and other denizens of the subatomic world. It follows that there cannot be an objective foundation for morality, and that human free will is an illusion, the result of complex neuronal interactions.

This is a popular (inevitable, really) notion among contemporary evolutionists. In 1985, the entomologist E.O. Wilson and the philosopher of science Michael Ruse co-authored an article in which they wrote that “Ethics as we understand it is an illusion fobbed off on us by our genes to get us to co-operate.” In his 1998 book Consilience, Wilson argued that “Either ethical precepts, such as justice and human rights, are independent of human experience or else they are human inventions.” He rejected the former explanation, which he called transcendentalist ethics, in favour of the latter, which he named empiricist ethics.[2]

Indeed, the whole field of sociobiology, founded by Wilson in the 1970s, presupposes that morality is the product of evolutionary processes and tries to explain most human behaviours by discovering their alleged reproductive advantage in the evolutionary struggle for existence. (Stephen Jay Gould is among numerous evolutionists who ridiculed the field for its proclivity to invent just-so stories).

One implication of the belief that human beings do not possess moral freedom is that criminals cannot be held responsible for their deeds. University of Chicago biologist Jerry Coyne thus writes – in a post entitled Is There Moral Responsibility? – that he does not believe in moral responsibility:

I favor the notion of holding people responsible for good and bad actions, but not morally responsible. That is, people are held accountable for, say, committing a crime, because punishing them simultaneously acts as a deterrent, a device for removing them from society, and a way to get them rehabilitated – if that’s possible. To me, the notion of moral responsibility adds nothing to this idea.  In fact, the idea of moral responsibility implies that a person had the ability to choose whether to act well or badly, and (in this case) took the bad choice. But I don’t believe such alternative “choices” are open to people, so although they may be acting in an “immoral” way, depending on whether society decides to retain the concept of morality (this is something I’m open about), they are not morally responsible.  That is, they can’t be held responsible for making a choice with bad consequences on the grounds that they could have chosen otherwise.[3]

David Baggett describes how this notion manifests itself in contemporary academia:

I have found a recent trend among a number of naturalistic ethicists and thinkers to be both interesting and mildly exasperating, but most of all telling. Both one like John Shook, Senior Research Fellow at the Center for Inquiry in Amherst, New York… and Frans de Waal, author most recently of The Bonobo and the Atheist (to adduce but a few examples) seem to be gravitating toward functional categories of morality. Talk of belief and practice replaces talk of truth; references to moral rules exceed those of moral obligations; and prosocial instincts supplant moral authority.[4]

But these notions are hardly recent. As the historian Richard Weikart puts it, “The idea that evolution undermines objective moral standards is hardly a recent discovery of sociobiology, however. In the Descent of Man, Charles Darwin devoted many pages to discussing the evolutionary origin of morality, and he recognized what this meant: morality is not objective, is not universal, and can change over time. Darwin certainly believed that evolution had ethical implications.” Ever since then, evolutionists have been arguing that human free will is a mirage and that morality is subjective. Here are representative examples of a vast genre.[5]

***

Cesare Lombroso (1835-1909) was a leading criminologist who authored the landmark study Criminal Man in 1876. According to Lombroso, infanticide, parricide, theft, cannibalism, kidnapping and antisocial actions could be explained largely as throwbacks to earlier stages of Darwinian evolution. In earlier stages of development such behaviours aided survival and were therefore bred into biological organisms by natural selection. William Noyes, one of Lombroso’s American disciples, explained that “In the process of evolution, crime has been one of the necessary accompaniments of the struggle for existence.” Invoking modern science in general and Charles Darwin’s work in particular, Italian jurist Enrico Ferri (1856-1929), one of Lombroso’s top disciples, argued that it was no longer reasonable to believe that human beings could make choices outside the realm of material cause and effect. Ferri applauded Darwin for showing “that man is not the king of creation, but merely the last link of the zoological chain, that nature is endowed with eternal energies by which animal and plant life… are transformed from the invisible microbe to the highest form, man.” Ferri looked forward to the day when crime would be treated as a “disease”.

Ludwig Büchner (1824–1899) was a German medical doctor who became president of the Congress of the International Federation of Freethinkers. He was an outspoken atheist and authored Force and Matter, a materialist tract that went through fifteen editions in German and four in English. He was one of the most energetic popularisers of Darwin’s work in the German-speaking world. Büchner wrote that “the vast majority of those who offend against the laws of the State and of Society ought to be looked upon rather as unfortunates who deserve pity than as objects of execration.” Büchner argued that the [alleged] brain abnormalities in many criminals showed that they were throwbacks to “the brains of pre-historic men.”

***

Born into wealth and privilege, Nathan Leopold and Richard Loeb were Chicagoan graduate students who decided to commit the perfect crime. In the spring of 1924, they abducted and murdered 14-year-old Bobby Franks. They were eventually apprehended and confessed to their crime.

Clarence Darrow was hired to save Leopold and Loeb from the gallows. Yes – Clarence Darrow of the famous Monkey Trial in Tennessee. Darrow was a true believer in evolution. According to him, the question before the court was whether it would embrace “the old theory” that “a man does something… because he wilfully, purposely, maliciously and with a malignant heart sees fit to do it” or the new theory of modern science that “every human being is the product of the endless heredity back of him and the infinite environment around him.” According to Darrow, Leopold and Loeb murdered Franks “… because they were made that way…”

Robert Crowe, the state’s chief prosecutor in the case, challenged “Darrow’s dangerous philosophy of life.” He read to the court a speech Darrow had delivered to prisoners at a county jail more than twenty years earlier. Darrow had told the prisoners that there was no moral difference between themselves and those who were outside jail. “I do not believe people are in jail because they deserve to be. They are in jail simply because they cannot avoid it, on account of circumstances which are entirely beyond their control, and for which they are in no way responsible.” “There ought to be no jails”, he told the prisoners.

***

In his book Crime: Criminals and Criminal Justice (1932), University of Buffalo criminologist Nathaniel Cantor ridiculed “the grotesque notion of a private entity, spirit, soul, will, conscience or consciousness interfering with the orderly processes of body mechanisms.” Because we humans are no different in principle to any other biological organism, “man is no more ‘responsible’ for becoming wilful and committing a crime than the flower for becoming red and fragrant. In both cases the end products are predetermined by the nature of protoplasm and the chance of circumstances.” The sociologist J.P. Shalloo wrote in the 1940s that it was the “world-shaking impact of Darwinian biology, with its emphasis upon the long history of man and the importance of heredity for a clear understanding of man’s biological constitution” that finally opened the door to a truer understanding of crime than traditional views.

***

Evolution is not only a scientifically untenable theory, but also a morally bankrupt, corrosive spiritual poison that undermines the foundations of human society.

***

See also: the post Random and Undirected:

https://torahexplorer.com/2013/04/29/random-and-undirected/

References:

[1] Abstract of Dr. William Provine’s 1998 Darwin Day Keynote Address, Evolution: Free will and punishment and meaning in life. This used to be available at http://fp.bio.utk.edu/darwin/frmain.html. I was not able to retrieve it.

[2] See the article by the historian Richard Weikart here:

http://www.evolutionnews.org/2012/05/at_emory_univer_1059491.html.

Retrieved 12th May 2013.

[3] See http://whyevolutionistrue.wordpress.com/2013/05/03/is-there-moral-responsibility/.

Retrieved 12th May 2013.

[4] See http://www.firstthings.com/blogs/firstthoughts/2013/04/26/watering-down-the-categories/#comments.

Retrieved 12th May 2013.

[5] Much of the material in the rest of this post is from the superb Darwin Day in America by John G. West, ISI Books, 2007.

Dr. John Ioannidis and the Reality of Research

May 5, 2013

I mentioned Dr. John Ioannidis a number of times in Genesis and Genes, as well as in several posts. A reader has kindly referred me to an excellent article about Dr. Ioannidis that appeared in The Atlantic.[1] Below are some pertinent points from the article, interspersed with my comments.

David H. Freedman, who wrote the article in The Atlantic, notes that “Medical research is not especially plagued with wrongness. Other meta-research experts[2] have confirmed that similar issues distort research in all fields of science, from physics to economics (where the highly regarded economists J. Bradford DeLong and Kevin Lang once showed how a remarkably consistent paucity of strong evidence in published economics studies made it unlikely that any of them were right).”

Understanding the factors that can distort research is a crucial step in becoming an informed consumer of science. Below, we look at some issues that are raised in the Atlantic article, and suggest how they may be relevant to other fields of science.

***

John Ioannidis may be one of the most influential – and popular – scientists today. In 2005, he published a paper in PLoS [Public Library of Science] Medicine that remains the most downloaded in the journal’s history. He has published papers with 1,328 different co-authors at 538 institutions in 43 countries. In 2009 he received, by his estimate, invitations to speak at 1,000 conferences and institutions around the world. Ioannidis is one of the world’s foremost experts on the credibility of medical research. He and his team have shown, again and again, that much of what biomedical researchers conclude in peer-reviewed published studies – conclusions that doctors keep in mind when they prescribe antibiotics or blood-pressure medication, or when they advise us to consume more fibre or less meat, or when they recommend surgery for heart disease or back pain – is misleading, exaggerated, and often just wrong. Ioannidis charges that as much as 90 percent of the published medical information that doctors rely on is flawed.

In the PLoS Medicine paper, Ioannidis laid out a detailed mathematical proof that, assuming modest levels of researcher bias, typically imperfect research techniques, and the tendency to focus on exciting rather than plausible theories, medical researchers will come up with wrong findings most of the time. His model predicted, in different fields of medical research, rates of wrongness roughly corresponding to the observed rates at which findings were later convincingly refuted: 80 percent of non-randomized studies (by far the most common type) turn out to be wrong, as do 25 percent of supposedly gold-standard randomized trials, and as much as 10 percent of the platinum-standard large randomized trials. [Vioxx, Zelnorm, and Baycol were among the widely prescribed drugs found to be safe and effective in large randomized controlled trials before the drugs were yanked from the market as unsafe or not so effective, or both.] The article articulated Ioannidis’ conclusion that researchers were frequently manipulating data analyses, chasing career-advancing findings rather than good science, and using the peer-review process to suppress unpopular views. These are all phenomena that are well-known to informed consumers of science, but still invisible, to a significant extent, to the general public.
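
To get a feel for the kind of calculation involved, here is a minimal sketch, in Python, of the positive-predictive-value reasoning at the heart of Ioannidis’ paper. The parameter values below (pre-study odds, error rates, degree of bias) are illustrative assumptions of my own, not figures taken from the paper.

```python
# A minimal sketch of the calculation underlying Ioannidis' argument.
# R: pre-study odds that a tested relationship is real;
# alpha: false-positive rate; beta: false-negative rate;
# u: fraction of analyses distorted by bias.
# All example values are illustrative assumptions.

def positive_predictive_value(R, alpha=0.05, beta=0.20, u=0.0):
    """Probability that a claimed 'significant' finding is actually true."""
    true_positives = (1 - beta) * R + u * beta * R   # real effects reported (honestly or via bias)
    false_positives = alpha + u * (1 - alpha)        # null effects reported as real
    return true_positives / (true_positives + false_positives)

# An exploratory field where only 1 in 10 tested hypotheses is true:
print(positive_predictive_value(R=0.1))         # ~0.62 with no bias
print(positive_predictive_value(R=0.1, u=0.2))  # ~0.26 with modest bias
```

Notice that no fraud is required: low pre-study odds combined with modest bias are enough, on this arithmetic, to make most published ‘positive’ findings false.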

In a seminal paper that was published in the Journal of the American Medical Association, Ioannidis zoomed in on 49 of the most highly regarded research findings in medicine over the previous 13 years, as judged by the science community’s two standard measures: the papers had appeared in the journals most widely cited in research articles, and the 49 articles themselves were the most widely cited articles in these journals. These were articles that helped lead to the widespread popularity of treatments such as the use of hormone-replacement therapy for menopausal women, vitamin E to reduce the risk of heart disease, coronary stents to ward off heart attacks, and daily low-dose aspirin to control blood pressure and prevent heart attacks and strokes. Of the 49 articles, 45 claimed to have uncovered effective interventions. Thirty-four of these claims had been retested, and 14 of these, or 41 percent, had been convincingly shown to be wrong or significantly exaggerated. So a large fraction of the most acclaimed research in medicine is untrustworthy.

***

There are many reasons for the dismal record of medical research, and we shall only consider a few factors. Ioannidis suggests that the desperate quest for research grants has gone a long way toward weakening the reliability of medical research. Readers of Genesis and Genes will recall the passage from Seed:

Cash-for-science practices between the nutrition and drug companies and the academics that conduct their research may also be playing a role. A survey of published results on beverages earlier this year found that research sponsored by industry is much more likely to report favorable findings than papers with other sources of funding. Although not a direct indication of bias, findings like these feed suspicion that the cherry-picking of data, hindrance of negative results, or adjustment of research is surreptitiously corrupting accuracy. In his essay, Ioannidis wrote, “The greater the financial and other interest and prejudices in a scientific field, the less likely the research findings are to be true.”[3]

In The Atlantic article, Ioannidis is blunt about one important factor in this situation. “The studies were biased,” he says. “Sometimes they were overtly biased. Sometimes it was difficult to see the bias, but it was there.” Researchers headed into their studies wanting certain results – and, lo and behold, they were getting them. We think of the scientific process as being objective and rigorous, but in fact it’s easy to manipulate results, sometimes unintentionally or unconsciously. “At every step in the process, there is room to distort results, a way to make a stronger claim or to select what is going to be concluded,” says Ioannidis. “There is an intellectual conflict of interest that pressures researchers to find whatever it is that is most likely to get them funded.” The fact that financial conflicts of interest are a feature of contemporary science is familiar to readers of Genesis and Genes:

I randomly pulled out from my shelf an issue of Scientific American. It happened to be the September 23, 2004 issue. It carried this announcement, made by the Center for Science in the Public Interest: “Some scientists and consumer advocates have called for a re-evaluation of studies that led to lower cholesterol guidelines. Among other concerns: eight of nine authors of the recommendations had ties to firms that make cholesterol-lowering statin drugs.” This is a thoroughly typical news item in science magazines. This particular note was so ordinary that it warranted all of a tiny mention on page 17. Anyone who reads science publications will periodically come across such items.

Ioannidis says that perhaps only a minority of researchers were succumbing to this type of bias, but their distorted findings were having an outsize effect on published research. To get funding and tenured positions, and often merely to stay afloat, researchers have to get their work published in well-regarded journals, where rejection rates can climb above 90 percent. Not surprisingly, the studies that tend to make the grade are those with eye-catching findings. But while coming up with eye-catching theories is relatively easy, getting reality to bear them out is another matter. The great majority collapse under the weight of contradictory data when studied rigorously. Imagine, though, that five different research teams test an interesting theory that’s making the rounds, and four of the groups correctly prove the idea false, while the single less cautious group incorrectly “proves” it true through some combination of error, fluke, and clever selection of data. Guess whose findings your doctor ends up reading about in the journal?
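
The arithmetic of this publication filter is easy to simulate. The sketch below is a toy model under invented assumptions – a false theory, five independent teams, a 5% false-positive rate, and journals that publish only ‘confirmations’ – meant to illustrate the selection effect, not to model any real field.

```python
# Toy model of the publication filter: every theory tested here is false,
# but journals only see the occasional spurious "confirmation".
import random

random.seed(0)

def confirmed_false_theories(n_theories=10_000, teams=5, p_false_positive=0.05):
    confirmed = 0
    for _ in range(n_theories):
        # Each team independently tests the (false) theory; a false
        # positive slips through with probability p_false_positive.
        positives = sum(random.random() < p_false_positive for _ in range(teams))
        if positives > 0:
            confirmed += 1  # at least one "confirmation" reaches a journal
    return confirmed / n_theories

# With five teams and a 5% false-positive rate, roughly a quarter of
# false theories (1 - 0.95**5 ~ 0.23) pick up a published "confirmation",
# while the sound refutations stay in the file drawer.
print(confirmed_false_theories())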

***

Another issue discussed by Ioannidis is the process of peer-review. The average member of the public (who is, needless to say, not an informed consumer of science) considers peer-review to be a magic pill. Peer-review is supposed to be an objective process, manned by referees who have no personal stake in the research they are reviewing, and who have all the time in the world to devote to carefully checking other peoples’ results. The real world, alas, is a little less rosy. Biased, erroneous, and even blatantly fraudulent studies easily slip through peer-review. In a 2006 editorial, Nature stated that “Scientists understand that peer review per se provides only a minimal assurance of quality, and that the public conception of peer review as a stamp of authentication is far from the truth.”

Furthermore, the peer-review process often pressures researchers to shy away from striking out in genuinely new directions, and instead to build on the findings of their colleagues – that is, their potential reviewers – in ways that only seem like breakthroughs. One example is the glut of hyped papers touting gene linkages (autism genes identified!) and nutritional findings (olive oil lowers blood pressure!) that are plain dubious.

***

Here is one example of a point made by Ioannidis in the context of medical research that is equally applicable to palaeontology. Ioannidis says, “Even when the evidence shows that a particular research idea is wrong, if you have thousands of scientists who have invested their careers in it, they’ll continue to publish papers on it. It’s like an epidemic, in the sense that they’re infected with these wrong ideas, and they’re spreading it to other researchers through journals.”

This phenomenon will be familiar to readers of Genesis and Genes. In the section on the alleged evolution of dinosaurs to birds, I discussed the work of researchers like Professor John A. Ruben of Oregon State University, whose work casts heavy doubt on the reigning paradigm. I wrote:

The Science Daily report from which these quotations are taken continues: “The conclusions [of the Oregon State University researchers] add to other… evidence that may finally force many palaeontologists to reconsider their long-held belief that modern birds are the direct descendants of ancient, meat-eating dinosaurs…” Professor Ruben adds, “But old theories die hard, especially when it comes to some of the most distinctive and romanticized animal species in world history.” He continues, “Frankly, there’s a lot of museum politics involved in this, a lot of careers committed to a particular point of view even if new scientific evidence raises questions.”

Furthermore, Ioannidis found that even when a research error is publicised, it typically persists for years or even decades. He looked at three prominent health studies from the 1980s and 1990s that were each later soundly refuted, and discovered that researchers continued to cite the original results as correct more often than as flawed – in one case for at least 12 years after the results were discredited.

***

Early in his career, Ioannidis was disabused of the notion that mechanisms like randomized trials and double-blind studies are magic wands that ensure infallibility. In poring over medical journals, Ioannidis was struck by how many findings of all types were refuted by later findings. This is particularly visible in medical research. One month ago, TIME Magazine published an article entitled Spin Doctors.[4] The article states:

Mammograms help you live longer. Or wait; they may not… In the medical world, this kind of uncertainty is increasingly common… Enter the US Preventive Services Task Force (USPSTF), a panel of independent experts charged by Congress with sifting through all the studies about health procedures…

In a side-bar entitled Four Surprising Recommendations, TIME highlights four prominent turnabouts:

  • What you may have heard: Taking estrogen and progestin after menopause can lower the risk of heart disease and bone fractures. What you may not have: The USPSTF says supplemental estrogen can increase the risk of breast cancer and does not protect against heart disease, as earlier studies suggested.
  • What you may have heard: All men over age 50 should get regular blood tests for prostate cancer. What you may not have: Those blood tests, which detect many growths that are not cancerous, can lead to risky interventions. Plus, many prostate tumors are slow-growing and don’t need to be removed, even if they are cancerous.
  • What you may have heard: Women should start annual screening for breast cancer at age 40. What you may not have: Women in their 40s have lower cancer rates than older women and higher rates of false positives that lead to additional tests and procedures that may come with complications.
  • What you may have heard: Vitamin D and calcium can strengthen bones and lower the risk of fractures in postmenopausal women. What you may not have: They may slow bone loss, but recommended doses may not be high enough to lower the risk of fractures. And too much calcium can increase the risk of heart disease.

The article in The Atlantic makes much the same point: mammograms, colonoscopies, and PSA tests are far less useful cancer-detection tools than we had been told; widely prescribed antidepressants such as Prozac, Zoloft, and Paxil have been revealed to be no more effective than a placebo for most cases of depression; staying out of the sun entirely can actually increase cancer risks; taking fish oil, exercising, and doing puzzles doesn’t really help fend off Alzheimer’s disease; and peer-reviewed studies have come to opposite conclusions on whether taking aspirin every day is more likely to save your life or cut it short, and whether routine angioplasty works better than pills to unclog heart arteries.

One important reason for this see-sawing is that most studies involve a relatively small number of participants and run for a relatively short time, perhaps five years. The reason for this is straightforward – it’s expensive and cumbersome to run experiments for thirty or forty years. But the price paid for these short-term savings is that the results of clinical trials are more often than not incorrect. Let’s see why.

Randomized controlled trials constitute the gold standard in medical research. These studies compare how one group responds to a treatment against how an identical group fares without the treatment. Various checks and balances are used to try to shield the researchers from bias, and, consequently, these trials had long been considered nearly unshakable evidence. But these trials, too, are sometimes wrong. “I realized even our gold-standard research had a lot of problems,” Ioannidis says. Before long he discovered that the range of errors being committed was astonishing: from what questions researchers posed, to how they set up the studies, to which patients they recruited for the studies, to which measurements they took, to how they analyzed the data, to how they presented their results, to how particular studies came to be published in medical journals.

In a typical nutrition or drug study, researchers follow a few thousand people for a number of years, tracking what they eat and what supplements they take, and how their health changes over the course of the study. Then they ask, ‘What did vitamin E do? What did vitamin C or D or A do? What changed with calorie intake, or protein or fat intake? What happened to cholesterol levels? Who got what type of cancer?’

After this, complex statistical models are used to find all sorts of correlations between, say, Vitamin X and cancer Y. When a five-year study of 10,000 people finds that those who take more vitamin X are less likely to get cancer Y, you’d think you have good reason to take more vitamin X, and physicians routinely pass these recommendations on to patients. But these studies often sharply conflict with one another. Studies have gone back and forth on the cancer-preventing powers of vitamins A, D, and E; on the heart-health benefits of eating fat and carbohydrates; and even on the question of whether being overweight is more likely to extend or shorten your life. Ioannidis suggests a simple approach to these studies: ignore them all.

For starters, he explains, the odds are that in any large database of many nutritional and health factors, there will be a few apparent connections that are in fact merely flukes, not real health effects. But even if a study managed to highlight a genuine health connection to some nutrient, a given individual is unlikely to benefit much from taking more of it, because we consume thousands of nutrients that act in concert, and changing the intake of any one nutrient is bound to cause ripples throughout the network that are far too complex for these studies to detect, and that may be as likely to harm you as help you [this is why I explained in Genesis and Genes that science is strongest when it deals with observable, repeatable and limited phenomena.] Even if changing that one factor does bring on the claimed improvement, there’s still a good chance that it won’t do you much good in the long run, because these studies rarely go on long enough to track the decades-long course of disease and ultimately death. Instead, they track easily measurable health ‘markers’ such as cholesterol levels, blood pressure, and blood-sugar levels, and meta-experts have shown that changes in these markers often don’t correlate as well with long-term health as we have been led to believe.
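
The ‘flukes’ point can be made concrete with a short simulation. In the sketch below – a toy example with made-up parameters, not a model of any actual nutritional database – every nutrient-outcome pair is pure noise, yet at the conventional 5% significance level a steady trickle of ‘connections’ emerges anyway.

```python
# Toy illustration of fluke correlations: many nutrient-outcome pairs,
# none of them real, screened at the conventional p < 0.05 threshold.
import random

random.seed(1)

def spurious_links(n_pairs=200, n_subjects=1000, z_cutoff=1.96):
    """Count 'significant' links when nutrient and outcome are independent noise."""
    hits = 0
    for _ in range(n_pairs):
        xs = [random.gauss(0, 1) for _ in range(n_subjects)]  # nutrient intake
        ys = [random.gauss(0, 1) for _ in range(n_subjects)]  # health outcome
        mx = sum(xs) / n_subjects
        my = sum(ys) / n_subjects
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n_subjects
        sx = (sum((x - mx) ** 2 for x in xs) / n_subjects) ** 0.5
        sy = (sum((y - my) ** 2 for y in ys) / n_subjects) ** 0.5
        r = cov / (sx * sy)                        # sample correlation
        if abs(r) * n_subjects ** 0.5 > z_cutoff:  # ~p < 0.05 under the null
            hits += 1
    return hits

# Expect roughly 10 of the 200 purely random pairs to clear the bar:
# ten publishable "findings" with nothing behind them.
print(spurious_links())
```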

On the relatively rare occasions when a study does go on long enough to track mortality, the findings frequently upend those of the shorter studies. (For example, though the vast majority of studies of overweight individuals link excess weight to ill health, the longest of them have not convincingly shown that overweight people are likely to die sooner, and a few of them have seemingly demonstrated that moderately overweight people are likely to live longer.) Now add to the above ubiquitous measurement errors (for example, people habitually misreport their diets in studies) and routine misanalysis (researchers rely on complex software capable of juggling results in ways they do not always understand).

If a study somehow avoids every one of these pitfalls and finds a real connection to long-term changes in health, you’re still not guaranteed to benefit, because studies report average results that typically represent a vast range of individual outcomes. Should you be among the lucky minority that stands to benefit, don’t expect a noticeable improvement in your health, because studies usually detect only modest effects that merely tend to whittle your chances of succumbing to a particular disease from small to somewhat smaller. “The odds that anything useful will survive from any of these studies are poor,” says Ioannidis – dismissing in a breath a good chunk of the research into which $100 billion a year in the United States is sunk.

I have pointed out before (see the post Blowing Hot and Cold, for example) that the problem of tackling research that is diffuse – the opposite of limited – is by no means restricted to medical research. Take the climate. It is affected by many dozens, perhaps hundreds, of factors. In the context of human health, we know that there can be a huge difference between what is detected over a 5-year study and what ultimately transpires when subjects die fifty years later. In climate studies, too, there may be enormous differences between what is measured over a few decades and what happens over millennia.

Furthermore, as we saw above, most medical studies do not actually track the individual’s health as a whole; rather, they measure ‘markers’ which are taken as proxies for overall health. The assumption that markers are good proxies for overall health is, at best, dubious. In climate science too, it is often ‘markers’ that are used to indicate the overall ‘health’ of the climate, and this may well lead to erroneous conclusions. Consider glaciers.[5]

In 1895, geologists thought the world was freezing up due to the ‘great masses of ice’ that were frequently seen farther south than before. The New York Times reported that icebergs were so bad, and that they decreased the temperature of Iceland so much, that inhabitants fearing a famine were ‘emigrating to North America.’ But in 1902 the Los Angeles Times, in a story on disappearing glaciers in the Alps, said the glaciers were not ‘running away,’ but rather ‘deteriorating slowly, with a persistency that means their final annihilation.’ The melting left alpine hotel owners struggling to keep patrons. It was declared a ‘scientific fact’ that the glaciers were ‘surely disappearing.’ But the glaciers instead grew once more.

The Boston Daily Globe reported in 1923 that the purpose of an Arctic expedition it was covering was to determine the beginning of the next ice age, ‘as the advance of glaciers in the last 70 years would indicate.’ When that era of ice-age reports melted away, retreating glaciers were again highlighted. In 1953’s Today’s Revolution in Weather, William Baxter wrote that ‘the recession of glaciers over the whole earth affords the best proof that climate is warming’. He gave examples of glaciers melting in Lapland, the Alps, and Antarctica. In 1952, the New York Times reported on the global warming studies of climatologist Dr. Hans W. Ahlmann, whose ‘trump card’ ‘has been the melting glaciers.’ The next year the paper said that ‘nearly all the great ice sheets are in retreat.’ U.S. News and World Report agreed, noting on January 8, 1954 that ‘winters are getting milder, summers drier. Glaciers are receding, deserts growing.’

But in the 1970s, glaciers did an about face. Lowell Ponte, in his 1976 book The Cooling, warned that ‘The rapid advance of some glaciers has threatened human settlements in Alaska, Iceland, Canada, China, and the Soviet Union.’

In 1951, TIME magazine noted that permafrost in Russia was receding northward at up to 100 yards per year. But in a June 24, 1974, article, TIME stated that the cooling trend was here to stay. The report was based on ‘telltale signs’ such as the ‘unexpected persistence and thickness of pack ice in the waters around Iceland.’ The Christian Science Monitor in the same year noted ‘glaciers which had been retreating until 1940 have begun to advance.’ The article continued, ‘the North Atlantic is cooling down about as fast as an ocean can cool.’ And the New York Times noted that in 1972 the ‘mantle of polar ice increased by 12 percent’ and had not returned to ‘normal’ size. North Atlantic sea temperatures declined, and shipping routes were ‘cluttered with abnormal amounts of ice.’ Furthermore, the permafrost in Russia and Canada was advancing southward, according to the December 29 article that closed out 1974.

Two points are crucial. Firstly, markers for ultra-complex systems such as human health or the global climate may or may not be reliable indicators of the state of the system as a whole. Secondly, studies of ‘markers’ – whether of human health or the climate – may require a lifetime (in the case of humans) or several centuries (in the case of global climate) to teach us anything significant. Shorter studies may well be misleading, as is certainly the case in many clinical studies.

***

In a nutshell, becoming an informed consumer of science involves the realisation that science is a human endeavour. It is subject to a galaxy of factors beyond the nuts and bolts of laboratory work, from political considerations that determine how much funding is funnelled to particular fields to the interpretation of complex statistical analyses of murky results. As the physicist and philosopher John Polkinghorne has written,

Many people have in their minds a picture of how science proceeds which is altogether too simple. This misleading caricature portrays scientific discovery as resulting from the confrontation of clear and inescapable theoretical predictions by the results of unambiguous and decisive experiments… In actual fact… the reality is more complex and more interesting than that.

To its credit, the medical community seems to have embraced the work done by Ioannidis and its implications. The Atlantic reports that:

Ioannidis initially thought the community might come out fighting. Instead, it seemed relieved, as if it had been guiltily waiting for someone to blow the whistle, and eager to hear more. David Gorski, a surgeon and researcher at Detroit’s Barbara Ann Karmanos Cancer Institute, noted in his prominent medical blog that when he presented Ioannidis’ paper on highly cited research at a professional meeting, “not a single one of my surgical colleagues was the least bit surprised or disturbed by its findings.”

But Ioannidis is pessimistic about anything changing soon:

His bigger worry, he says, is that while his fellow researchers seem to be getting the message, he hasn’t necessarily forced anyone to do a better job. He fears he won’t in the end have done much to improve anyone’s health. “There may not be fierce objections to what I’m saying,” he explains. “But it’s difficult to change the way that everyday doctors, patients, and healthy people think and behave.”

***

Dr. John Ioannidis’ work deals with medical research, which is – at least theoretically – readily amenable to the tools of science. Even here, it is obvious that science consumers should ration out credibility carefully. The fact that you read about evidence-based medicine or peer-reviewed studies or randomized trials is by no means a guarantee that you’ve been touched by Truth. And this is all in the realm of the here and now. Contemporary science is vastly overrated when it deals with issues that go beyond those that affect medical research, and involve huge extrapolations, chains of reasoning and assumptions and numerous ideological commitments.

 ***

See also: the post Dr. Ben Goldacre and the Reproducibility of Research:

https://torahexplorer.com/2013/04/10/dr-ben-goldacre-and-the-reproducibility-of-research/

The post Blowing Hot and Cold:

https://torahexplorer.com/2013/03/11/blowing-hot-and-cold-2/

References:

[1] See http://www.theatlantic.com/magazine/archive/2010/11/lies-damned-lies-and-medical-science/308269/.

Retrieved 5th May 2013.

[2] Meta-research involves the analysis – often with advanced statistical tools – of a large number of primary studies performed by other researchers.

[3] See http://seedmagazine.com/content/article/dirty_little_secret/. Retrieved 5th June 2011.

[4] See http://www.time.com/time/magazine/article/0,9171,2139710,00.html.

Retrieved 4th May 2013.

[5] The information on the media coverage of glaciers comes from a report by the Media Research Center entitled Fire and Ice:

http://www.mrc.org/special-reports/fire-and-ice

Retrieved 5th May 2013.

Random and Undirected

April 29, 2013

The Origin of Species is essentially about eliminating the need to invoke God to explain life. And Charles Darwin made no bones about it. In a letter to his mentor, the geologist Charles Lyell, on 11th October 1859, Darwin wrote,

But I entirely reject as in my judgment quite unnecessary any subsequent addition “of new powers, & attributes & forces”; or of any “principle of improvement”… If I were convinced that I required such additions to the theory of natural selection, I would reject it as rubbish. I would give absolutely nothing for the theory of Natural Selection, if it requires miraculous additions at any one stage of descent.[1]

This has consistently been the position of Darwin’s heirs and the vast majority of evolutionary biologists. Ernst Mayr, widely considered to be one of the most important and influential biologists of the twentieth century, wrote:

The Darwinian revolution was not merely the replacement of one scientific theory by another, but rather the replacement of a worldview, in which the supernatural was accepted as a normal and relevant explanatory principle, by a new worldview in which there was no room for supernatural forces.[2]

 Julian Huxley was the grandson of Darwin’s Bulldog, Thomas Huxley, and a prominent biologist in his own right. He wrote:

Darwinism removed the whole idea of God as the Creator of organisms from the sphere of rational discussion. Darwin pointed out that no supernatural designer was needed; since natural selection could account for any new form of life, there is no room for a supernatural agency in its evolution.[3]

George Gaylord Simpson, one of the leading palaeontologists of the twentieth century, wrote that “Man is the result of a purposeless and natural process that did not have him in mind.”[4]

In Genesis and Genes, I quoted the late Stephen Jay Gould, one of the most famous scientists and popularisers of science in the latter part of the twentieth century. Gould  frequently discussed the “radical philosophical content of Darwin’s message” and its denial of purpose in the universe:

First, Darwin argues that evolution has no purpose… Second, Darwin maintained that evolution has no direction… Third, Darwin applied a consistent philosophy of materialism to his interpretation of nature. Matter is the ground of all existence; mind, spirit, and God as well, are just words that express the wondrous results of neuronal complexity.[5]

Contemporary biology textbooks are adamant that Darwinian evolution is unguided. A popular college biology textbook by Douglas Futuyma declares that “[B]y coupling undirected, purposeless variation to the blind, uncaring process of natural selection, Darwin made theological or spiritual explanations of the life processes superfluous.”[6]

This is what you will find in Invitation to Biology:

Now the new biology asked us to accept the proposition that, like all other organisms, we too are the products of a random process that, as far as science can show, we are not created for any special purpose or as part of any universal design.[7]

 And Evolution (by Strickberger) has this to say:

The advent of Darwinism posed even greater threats to religion by suggesting that biological relationships, including the origin of humans and of all species, could be explained by natural selection without the intervention of a god… In this scheme a god of design and purpose is not necessary…[8]

 And Evolution (by Barton) explains that evolution involves “random genetic drift,” “random mutation,” “random variation,” “random … individual fitness,” and “random reproduction”.

 In 1997, the National Association of Biology Teachers in the USA removed from its description of the evolution of life an assertion that it was an “unsupervised, impersonal, unpredictable and natural process.” Ninety-nine academics, including over 70 evolutionary biologists, sent a letter of protest to the NABT asserting that evolution indeed is “an impersonal and unsupervised process… The NABT leaves open the possibility that evolution is in fact supervised in a personal manner. This is a prospect that every evolutionary biologist should vigorously and positively deny.”[9]

Evolutionary biologists and authors are often at pains to emphasise this point. University of Chicago evolutionary biologist and author Jerry Coyne makes the point concisely:

But any injection of teleology into evolutionary biology violates precisely the great advance of Darwin’s theory: to explain the appearance of design by a purely materialistic process — no deity required.[10]

And the biochemist Larry Moran of the University of Toronto writes that:

The main mechanisms are natural selection and random genetic drift and those two mechanisms act on populations containing variation. The variation is due to the presence of mutations and mutations arise “randomly” with respect to ultimate purpose or goal.[11]

 This sentiment is often encountered in academic papers:

 Mutation is the central player in the Darwinian theory of evolution – it is the ultimate source of heritable variation, providing the necessary raw material for natural selection. In general, mutation is assumed to create heritable variation that is random and undirected.[12]

Francisco Ayala is a former Roman Catholic priest and world-famous evolutionary biologist [See the post Tactics and Deceit to read more about Ayala]. In a 2007 paper Ayala wrote:

Chance is, nevertheless, an integral part of the evolutionary process. The mutations that yield the hereditary variations available to natural selection arise at random. Mutations are random or chance events because (i) they are rare exceptions to the fidelity of the process of DNA replication and because (ii) there is no way of knowing which gene will mutate in a particular cell or in a particular individual. However, the meaning of “random” that is most significant for understanding the evolutionary process is (iii) that mutations are unoriented with respect to adaptation; they occur independently of whether or not they are beneficial or harmful to the organisms. Some are beneficial, most are not, and only the beneficial ones become incorporated in the organisms through natural selection.[13]

What Professor Ayala means by point (iii) can be economically expressed as follows: the Darwinian process is bereft of foresight. This has direct and obvious bearing on the philosophical content of biological evolution, as Ayala points out:

It was Darwin’s greatest accomplishment to show that the complex organization and functionality of living beings can be explained as the result of a natural process – natural selection – without any need to resort to a Creator or other external agent… The scientific account of these events does not necessitate recourse to a preordained plan, whether imprinted from the beginning [this is sometimes referred to as front-loading – YB] or through successive interventions by an omniscient and almighty Designer. Biological evolution differs from a painting or an artifact in that it is not the outcome of preconceived design.

Ayala’s conclusion is concisely expressed:

This is Darwin’s fundamental discovery, that there is a process that is creative although not conscious. And this is the conceptual revolution that Darwin completed: the idea that the design of living organisms can be accounted for as the result of natural processes governed by natural laws. This is nothing if not a fundamental vision that has forever changed how mankind perceives itself and its place in the universe.

 ***

Notwithstanding the above – and we could go on and on demonstrating that the community of evolutionary biologists, as a whole, conceives of evolution as a non-teleological process – there are those who style themselves theistic evolutionists. This position often results in confusion, as we shall presently see.

In July 2005, Christoph Cardinal Schönborn wrote an op-ed in the New York Times in which he stated that “evolution in the neo-Darwinian sense – an unguided, unplanned process of random variation and natural selection – is not [true].”[14]

Ken Miller, a biologist, textbook-writer and prominent exponent of theistic evolution, responded:

But the Cardinal is wrong in asserting that the neo-Darwinian theory of evolution is inherently atheistic. Neo-Darwinism, he tells us, is an ideology proposing that an “unguided, unplanned process of random variation and natural selection” gave rise to all life on earth, including our own species. To be sure, many evolutionists have made such assertions in their popular writings on the “meaning” of evolutionary theory. But are such assertions truly part of evolution as it is understood by the “mainstream biologists” of which the Cardinal speaks? Not at all… This means that biological evolution, correctly understood, does not make the claim of purposelessness.[15]

Huh? Is Miller serious about “mainstream biologists” believing anything except that evolution is unguided and unplanned? Besides everything we said above, consider the following. In 2005, the Kansas State Board of Education sought to introduce changes to the biology syllabus in order to foster critical thinking among students. This involved allowing teachers to introduce scientific criticisms of evolutionary biology. In response, no fewer than 38 Nobel laureates (!) under the auspices of – wait for it – the Elie Wiesel Foundation for Humanity signed a joint statement to the KSBE informing them that,

… evolution is understood to be the result of an unguided, unplanned process of random variation and natural selection.[16]

Perhaps Professor Miller does not consider these Nobel Prize-winners representative of the mainstream. But besides his capacity to ignore the obvious, Miller also contradicts himself. Five editions of Miller’s textbook, Biology, stated that “evolution works without either plan or purpose… Evolution is random and undirected.”[17] In his book Finding Darwin’s God, we find the following:

  • Random, undirected process of mutation had produced the ‘right’ kind of variation for natural selection to act upon (Page 51).
  • A random, undirected process like evolution (Page 102).
  • Blind, random, undirected evolution [could] have produced such an intricate set of structures and organs… (Page 137).
  • The random, undirected processes of mutation and natural selection (Page 145).
  • Evolution is a natural process, and natural processes are undirected (Page 244).

Both the 1991 and 1994 editions of Miller & Levine’s Biology: The Living Science contain the following passage:

Darwin knew that accepting his theory required believing in philosophical materialism, the conviction that matter is the stuff of all existence and that all mental and spiritual phenomena are its by-products. Darwinian evolution was not only purposeless but also heartless – a process in which the rigors of nature ruthlessly eliminate the unfit. Suddenly, humanity was reduced to just one more species in a world that cared nothing for us. The great human mind was no more than a mass of evolving neurons. Worst of all, there was no divine plan to guide us.[18] [Italics in the original.]

***

The confusion generated by Professor Miller’s apparently schizophrenic writings is, unfortunately, not limited to the Gentile community. I wrote in Genesis and Genes that,

Evolution is inherently indifferent to religion; deities need not apply. But there will always be those who wish to reconcile the irreconcilable. They want to take the world’s most efficient engine for atheism, slap on a veneer of verses, and recast it as a Torah ideal. The result is about as appetising as frosting on a bar of soap. There cannot be a rapprochement between mutually-exclusive concepts. The attempt to apply a layer of religious respectability to evolution is vacuous.

See also: The post Tactics and Deceit

https://torahexplorer.com/2013/01/17/423/

References:

 [1] Letter from Darwin to Charles Lyell, 11th October 1859. See Darwin Correspondence Database,

http://www.darwinproject.ac.uk/entry-2503.

Retrieved 28th April 2013.

[2] Ernst Mayr, book review of Evolution and God, Nature 248 (March 22, 1974): 285.

[3] Tax, S. and Callender, C. (Eds.), Evolution after Darwin, Issues in Evolution (volume III), The University of Chicago Press, Chicago, USA, page 45, 1960.

[4] George Gaylord Simpson, The Meaning of Evolution, revised edition (New Haven: Yale University Press, 1967), page 345.

[5] Stephen Jay Gould, Ever Since Darwin: Reflections in Natural History, pg. 12–13 (W.W. Norton & Co. 1977).

[6] Douglas J. Futuyma, Evolutionary Biology, 3rd edition, Sinauer Associates, 1998, page 5.

[7] Helena Curtis and N. Sue Barnes, Invitation to Biology, 3rd edition. New York: Worth Publishers, 1981:474-75.

[8] Monroe W. Strickberger, Evolution, 3rd edition. Sudbury: Jones and Bartlett Publishers, 2000:70-71.

[9] The Nature of Nature, Bruce L. Gordon and William A. Dembski, editors. ISI Books, Wilmington, Delaware, 2011, page 41.

[10] See http://whyevolutionistrue.wordpress.com/2009/04/22/truckling-to-the-faithful-a-spoonful-of-jesus-helps-darwin-go-down/.

Retrieved 28th April 2013.

[11] See http://sandwalk.blogspot.co.uk/2012/08/is-unguided-part-of-modern-evolutionary.html.

Retrieved 28th April 2013.

[12] An environmentally induced adaptive (?) insertion event in flax, Yiming Chen, Robin Lowenfeld and Christopher A. Cullis, International Journal of Genetics and Molecular Biology Vol. 1 (3), pages 038-047, June 2009. The paper can be read here: http://www.acadjourn.org/IJGMB/PDF/pdf2009/June/Chen%20et%20al..pdf. Retrieved 11th July 2011.

[13] Francisco J. Ayala, “Darwin’s greatest discovery: Design without designer” Proceedings of the National Academy of Sciences USA, 104 (May 15, 2007): 8567-8573. I saw this in an article by Casey Luskin dated 11th August 2012 on the website Evolution News and Views.

[14] See http://www.millerandlevine.com/km/evol/catholic/schonborn-NYTimes.html.

Retrieved 28th April 2013.

[15] See http://www.millerandlevine.com/km/evol/catholic/op-ed-krm.html.

Retrieved 28th April 2013.

[16] The letter used to be available at:

http://media.ljworld.com/pdf/2005/09/15/nobel_letter.pdf

I was not able to retrieve it.

[17] Kenneth Miller and Joseph Levine, Biology (1st ed., 1991), p. 658; (2nd ed., 1993), p. 658; (3rd ed., 1995), p. 658; (4th ed., 1998), p. 658; (5th ed. 2000), p. 658. See article by Casey Luskin here:

http://www.evolutionnews.org/2009/11/truth_or_dare_with_dr_ken_mill027891.html

Retrieved 28th April 2013.

[18] Joseph Levine & Kenneth Miller, Biology: Discovering Life (1st ed., D.C. Heath and Co., 1992), pg. 152; (2nd ed. D.C. Heath and Co., 1994), p. 161.

OPERA or Soap Opera?

April 22, 2013

In the post Dr. Ben Goldacre and the Reproducibility of Research, I discussed a systemic problem within contemporary science, viz. publication bias. Not all results of scientific research are published; results that stray uncomfortably far from prevailing paradigms are sometimes not even submitted by their authors to journals.

A reader objected to this, citing the OPERA experiment as an example of negative results being fearlessly published. Matt wrote:

But I can provide you with countless examples of the researchers deciding to put their results out to the larger community anyway… even at the risk of humiliation if they are found to have messed up. For a recent example, look up the “superluminal neutrino” results from the OPERA experiment.

I didn’t need to look up the OPERA episode, being very familiar with it. But that sorry affair has little in common with the theme of that post, as will be seen below.

***

OPERA stands for Oscillation Project with Emulsion tRacking Apparatus. In September 2011, the experiment electrified the world with the announcement that superluminal neutrinos – subatomic particles travelling faster than light – had been detected. Physicists usually respond to such grand claims with a laconic “Important, if true.” In this case, had the results been correct, they would not just have been important; they would “kill modern physics as we know it”, as Laura Patrizii, leader of OPERA’s Bologna group, put it. The story ended ignominiously, if predictably: the results were found to be incorrect, owing in large part to a loose fibre-optic cable. But let’s begin at the beginning.

Modern physics is often done by large groups of scientists working together. In collaborations as large as OPERA, it is prudent to seek consensus before making announcements about the research. Dmitri Denisov, a physicist at Fermilab in Batavia, Illinois, says it is standard procedure to wait to publish a paper until everyone in the collaboration has signed on. “We really strive to have full agreement,” he says. “In some cases it takes months, sometimes up to a year, to verify that everyone in the collaboration is happy.”

In the case of OPERA, 15 of the 160 members refused to add their names to the original paper because they felt the announcement and submission of the results for publication were premature. “I didn’t sign because I thought the estimated error was not correct,” said team member Luca Stanco of the National Institute of Nuclear Physics in Italy. He told New Scientist: “We should have been more cautious, more careful, presented the result in not such a strong way, more preliminarily. Experimentalists in physics can make mistakes. But the way in which we handle them, the way we present them – we have some responsibility for that.” Physics World mentioned Caren Hagner, leader of the OPERA group at Hamburg University and one of those whose names did not appear on the pre-print. She, too, argued that the collaboration should have carried out extra checks before submitting the paper for peer review.

The OPERA operatives were in such a rush to announce their scoop to the world that they failed to apply basic prudence. Janet Conrad, a particle physicist at MIT, said that much of the physics establishment’s negative reaction stemmed from the insufficient experimental checks carried out before the announcement. “A [paper in] Physical Review Letters is four pages long. An experiment is vastly more complicated than that,” she says. “So we have to rely on our colleagues having done all of their cross checks. We don’t expect to make a retraction within a year.” Fermilab’s Joseph Lykken concurred. “Precisely because these are big, complicated experiments, the collaborations have a responsibility to both the scientific community and to the taxpayers to perform due diligence checks of the validity of their results,” he said. “The more surprising the result, the more time one must spend on validation. Anyone can make a mistake, but scientific collaborations are supposed to catch the vast majority of mistakes through internal vetting long before a new result sees the light of day.”

CERN physicist Alvaro De Rujula also had strong words in this regard: “The theory of relativity is exquisitely well-tested and consistent. Superluminal neutrinos were far, far too much in violation of the rules to be believed, even for a nanosecond. That ought to have made the OPERA management have everything checked even more carefully. Alas, it turned out not to be a subtle error, but mainly a bad connection, the very first thing one checks when anything misbehaves.”

Then, again in violation of good practice, the OPERA results were announced to the press rather than first presented to peers through the usual scientific channels. The physicist Lawrence M. Krauss, director of the Origins Project at Arizona State University (and an ardent atheist), authored an op-ed in the Los Angeles Times entitled Colliding Theories, with the subtitle Findings that showed faster-than-light travel were released too soon. He wrote,

What is inappropriate, however, is the publicity fanfare coming before the paper has even been examined by referees. Too often today, science is done by news release rather than waiting for refereed publication.

What makes all of this even more surprising is that the OPERA collaboration did not have a direct competitor from which a scoop had to be snatched. The physicists were in a position to carefully check and re-check their results before rushing off to make their announcement.

As a result of the fiasco, OPERA spokesman Antonio Ereditato of the University of Bern in Switzerland and experimental coordinator Dario Autiero of the Institute of Nuclear Physics in Lyon, France, resigned following a 16-13 no-confidence vote by the collaboration’s other leaders. An indication of just how embarrassing this episode was for physics is that CERN (the European Organization for Nuclear Research), which supplied the neutrinos for the OPERA experiment, had no official comment on the resignations, distancing itself from OPERA despite its central role in publicising the original results. Physics World reported that a press officer for CERN refused to be identified and emphasised that OPERA was “not a CERN collaboration” since CERN “only sends [OPERA] a beam of neutrinos.”

To some extent, the OPERA debacle was about grabbing headlines. As one report put it,

If faster than light neutrinos do exist, there need to be many rounds of testing, independent analyses and rigorous peer review before we can start announcing dents in Einstein’s bedrock theories. But, as is abundantly clear in this world of fierce media competition, social media and science transparency, any theory is a good theory so long as it makes a good story — as long as the scientific method has been followed and the science is correctly represented by the writer, that is. [Italics in the original.]

***

Let us digress for a moment to discuss a few points that are relevant to material discussed in Genesis and Genes, before returning to the topic of publication bias.

I explained in Genesis and Genes that the public almost always misunderstands what is meant by “measurement” in the context of contemporary science. Measurements in cosmology and physics do not mean that someone is doing something as prosaic and straightforward as reading a temperature off a thermometer. The procedure is far more complicated, resting on long chains of modelling, inference and assumption. This is something Professor Krauss stressed in his LA Times article:

The claim that neutrinos arrived at the Gran Sasso National Laboratory in Italy from CERN’s Large Hadron Collider in Switzerland on average 60 billionths of a second before they would have if they were traveling at light speed relies on complicated statistical analysis. It must take into account the modeling of the detectors and how long their response time is, careful synchronization of clocks and a determination of the distance between the CERN accelerator and the Gran Sasso detector accurate to a distance of a few meters. Each of these factors has intrinsic uncertainties that, if misestimated, could lead to an erroneous conclusion.
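
It is worth pausing to appreciate just how delicate the claimed effect was. Here is a back-of-envelope check (a rough sketch in Python; the ~730 km baseline and ~60 ns early arrival are rounded, publicly reported figures, not OPERA’s precise values):

```python
C = 299_792_458.0        # speed of light (m/s)
BASELINE = 730_000.0     # approximate CERN-to-Gran-Sasso distance (~730 km)
EARLY = 60e-9            # reported early arrival: ~60 nanoseconds

light_time = BASELINE / C                  # ~2.4 milliseconds
neutrino_time = light_time - EARLY         # claimed neutrino flight time
excess = light_time / neutrino_time - 1    # fractional speed excess, (v - c)/c

print(f"light travel time: {light_time * 1e3:.3f} ms")
print(f"(v - c)/c        : {excess:.1e}")  # roughly 2.5e-05
```

A timing discrepancy of roughly one part in forty thousand – from cables, clocks or the baseline survey – was all it took to “overturn” relativity, which is precisely Krauss’s point about misestimated uncertainties.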

Informed consumers of science realise that words like measure – which convey a high degree of certainty to the public – in reality reflect something far murkier. This is why, in the post Missing Mass, I pointed out that cosmology is much more theory than observation. The public, inasmuch as it knows anything about the expansion of the universe, for example, entertains fantasies about astronomers watching galaxies flying off into the cosmic sunset, like an airplane slowly moving across the distant horizon. That’s nonsense. To “measure” the expansion of the universe, inferences are made on the basis of complex statistical analyses which depend on layer upon layer of assumption and analysis. In Genesis and Genes, I discussed the work of the brilliant mathematician and member of the National Academy of Sciences Irving Segal. I wrote that,

The most recent study by Segal and his colleagues contained a detailed analysis of Hubble’s law based on data from the de Vaucouleurs survey of bright cluster galaxies, which includes more than 10 000 galaxies. (It is worthwhile noting that Edwin Hubble’s own analysis was based on a puny sample of twenty galaxies.) The results are astounding. The linear relationship that Hubble saw between redshift and apparent brightness could not be seen by Segal and his collaborators. “By normal standards of scientific due process,” Segal wrote, “the results of [Big Bang] cosmology are illusory.”

The debate between Segal and his detractors was not about who had more acute eyesight; it was about ultra-complex models and statistical analysis. This should give informed consumers of science pause when they encounter reports of “measurements” in cutting-edge science.
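
For readers who want to see what such a “measurement” actually involves, here is the bare skeleton of the standard redshift–magnitude test (a minimal sketch using textbook small-redshift formulas; the quadratic alternative is included because Segal’s chronometric cosmology is usually described as predicting a quadratic redshift–distance relation at small distances – treat the details as a reader’s reconstruction, not Segal’s own derivation):

```latex
% Linear Hubble law at small redshift:
\[
  cz = H_0\, d \qquad\Longrightarrow\qquad d \propto z .
\]
% Apparent magnitude as a function of distance:
\[
  m = M + 5\log_{10}\!\frac{d}{10\,\mathrm{pc}}
  \qquad\Longrightarrow\qquad
  m = 5\log_{10} z + \mathrm{const}.
\]
% A quadratic law, z \propto d^2, gives d \propto z^{1/2} and hence
\[
  m = 2.5\log_{10} z + \mathrm{const}.
\]
% Linear expansion predicts slope 5 in the (log z, m) plane; the quadratic
% alternative predicts slope 2.5. Deciding between them is a statistical fit
% over noisy, selection-biased galaxy samples -- not direct observation.
```

Everything hangs on fitting a slope through scattered, selection-biased data – which is how Segal and his critics could examine overlapping samples and reach opposite conclusions.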

***

I pointed out in Genesis and Genes that there exists a misconception of science as the ultimate cosmopolitan pursuit, devoid of any nationalistic flavour which might influence research. The truth is that, being a human endeavour, such factors do influence scientific research. Remember the part about acupuncture? I wrote,

Between 1966 and 1995, there were forty-seven studies of acupuncture in China, Taiwan, and Japan, and every trial concluded that acupuncture was an effective medical treatment for certain conditions. During the same period, there were ninety-four clinical trials of acupuncture in the United States, Sweden, and the United Kingdom, and only fifty-six per cent of these studies found any therapeutic benefits.

Controlled, double-blind clinical trials are not magic bullets. One’s cultural background influences research, and this was a factor in OPERA also. One news item states that “The large international collaboration has had to contend not just with the usual personality conflicts, but also with cultural differences between Italian, Northern European, and Japanese scientists. The added scrutiny from the controversial result exacerbated those tensions.”

***

At any rate, the OPERA experience has little to do with ordinary, day-to-day publication bias. Once OPERA produced a tsunami of publicity with its premature announcement of superluminal neutrinos, its leaders had no choice but to come clean about the various failures that plagued their experiment. Back at the ranch, far from the limelight, uncomfortable results are often simply ignored, exiled to distant directories on one’s hard drive. They don’t make headlines; they don’t provoke resignations; they just don’t get reported and published. And that produces a distorted picture, in the minds of scientists and the public alike, of important issues. In his LA Times article, Krauss – an enthusiastic adherent of scientism – writes:

What is inappropriate, however, is the publicity fanfare coming before the paper has even been examined by referees. Too often today, science is done by news release rather than waiting for refereed publication. Because a significant fraction of experimental results ultimately never get published or are not later confirmed, providing unfiltered results to a largely untutored public is irresponsible. [Emphasis added.]

One can quibble with Krauss over how much filtering – a polite synonym for prejudice – must be done to protect the public from unorthodox research findings. But the fact remains that a significant portion of research is never published. One reason is that researchers are trapped in paradigms that brand certain research results as wrong. As Nottingham University astronomer Michael Merrifield explains,

And, more worrying, is something that scientists like to push under the carpet… there’s psychology in this as well. If, in 1985, I made a measurement of the distance [from the Sun] to the centre of the galaxy when everyone said it was ten kilo-parsecs, and I got an answer that said it was seven kilo-parsecs, I would have thought, “Well, I must have done something wrong” and I would have stuck it in some filing cabinet and forgot about it; whereas if I had got an answer that agreed with the consensus, I’d probably have published it… In this error process, there’s also psychology. As I say, scientists are very uncomfortable about this, because we have this idea that what we are doing is objective and above such things. But actually, there is a lot of human interaction and psychology in the way we do science.
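
The “filing cabinet” effect Merrifield describes is easy to simulate. In the sketch below (Python; all the numbers – true value, consensus value, scatter, tolerance – are invented purely for illustration), every individual measurement is statistically unbiased, but only those landing near the consensus get published:

```python
import random

random.seed(42)

TRUE_VALUE = 7.0    # the quantity's actual value (say, 7 kiloparsecs)
CONSENSUS = 10.0    # what "everyone says" it is
SIGMA = 2.0         # honest measurement scatter
TOLERANCE = 2.0     # results farther than this from consensus go in the filing cabinet

measurements = [random.gauss(TRUE_VALUE, SIGMA) for _ in range(100_000)]
published = [m for m in measurements if abs(m - CONSENSUS) <= TOLERANCE]

print(f"mean of all measurements : {sum(measurements) / len(measurements):.2f}")  # ~7.0
print(f"mean of published results: {sum(published) / len(published):.2f}")        # ~9.2, dragged toward consensus
print(f"fraction ever published  : {len(published) / len(measurements):.1%}")     # ~30%
```

No individual scientist need be dishonest; the self-censorship Merrifield describes is enough to drag the published record toward the consensus and away from the truth.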

Some in the science establishment try to avoid confronting this reality by invoking ideal worlds, in which various safeguards eliminate any residual doubt from experiments. But scientific research – like virtually all human activity – is more ambiguous than these scientists would have you believe. In Genesis and Genes I quoted the physicist and philosopher Sir John Polkinghorne:

Many people have in their minds a picture of how science proceeds which is altogether too simple. This misleading caricature portrays scientific discovery as resulting from the confrontation of clear and inescapable theoretical predictions by the results of unambiguous and decisive experiments… In actual fact… the reality is more complex and more interesting than that.

***

Nobody doubts that there are many sincere politicians out there. And nobody denies that there is a gaping gulf between election-season promises and post-election reality. After the debris of elections is cleared and the votes tallied, the real, gritty, grey world of horse-trading, budgetary constraints, political alliances and a host of other factors intervene to make politicians, well, politicians.

Science – including the realm of the hard sciences – is a human endeavour. Scientific research is subject to a galaxy of factors beyond the nuts and bolts of the laboratory. It is affected by every condition related to human nature. OPERA is a good example of an experiment going awry because of mundane weaknesses such as impulsivity, the pursuit of glory and bad judgment. But the fact that scientific research happens in the real world and not in some idealized version thereof is just as true in the day-to-day research that never makes headlines.

Informed consumers of science recognise this, and recognise the limitations that these weaknesses impose upon the credibility of scientific research. Science is strong – though never infallible – when it explores phenomena that are repeatable, observable and limited. Its credibility diminishes rapidly as it meanders from these parameters. And when science makes absolute statements about the history of the universe or life, you should take them with a sack of salt.

***

See also:

The post Dr. Ben Goldacre and the Reproducibility of Research:

https://torahexplorer.com/2013/04/10/dr-ben-goldacre-and-the-reproducibility-of-research/

The post Missing Mass:

https://torahexplorer.com/2013/03/07/missing-mass/

References:

The quotations about OPERA in this post come from the following sources:

http://news.discovery.com/space/opera-leaders-resign-after-no-confidence-vote-120404.htm

http://news.discovery.com/space/faster-than-light-neutrino-theory-almost-certainly-wrong-111012.htm

http://www.newscientist.com/article/dn21093-fasterthanlight-neutrino-result-to-get-extra-checks.html

http://www.newscientist.com/article/dn21656-leaders-of-controversial-neutrino-experiment-step-down.html

http://articles.latimes.com/2011/oct/04/opinion/la-oe-krauss-neutrino-20111004

http://physicsworld.com/cws/article/news/2011/oct/07/tension-emerges-within-opera-collaboration

Retrieved 21st April 2013.

Professor Merrifield can be watched here:

http://www.youtube.com/watch?v=gzvPH6A5CmQ