OPERA or Soap Opera?

In the post Dr. Ben Goldacre and the Reproducibility of Research, I discussed a systemic problem within contemporary science, viz. publication bias. Not all results of scientific research are published; results that stray uncomfortably far from sundry paradigms are sometimes not even submitted by their authors to journals.

A reader objected to this, citing the OPERA experiment as an example of negative results being fearlessly published. Matt wrote:

But I can provide you with countless examples of the researchers deciding to put their results out to the larger community anyway… even at the risk of humiliation if they are found to have messed up. For a recent example, look up the “superluminal neutrino” results from the OPERA experiment.

I didn’t need to look up the OPERA episode, being very familiar with it. But that sorry affair has little in common with the theme of Dr. Ben Goldacre and the Reproducibility of Research, as this post will show.

***

OPERA stands for Oscillation Project with Emulsion tRacking Apparatus. In September 2011, the experiment electrified the world with the announcement that superluminal neutrinos – subatomic particles travelling faster than light – had been detected. Physicists usually respond to such grand claims with a laconic “Important, if true.” In this case, had the results been correct, they would not just be important; they would “kill modern physics as we know it”, as Laura Patrizii, leader of OPERA’s Bologna group, put it. The story ended ignominiously, if predictably. The results were found to be incorrect, due in large part to a loose fibre-optic cable. But let’s begin at the beginning.

Modern physics is often done by large groups of scientists working together. When you have large collaborations like the OPERA group, it’s prudent to seek consensus before making announcements about the research. Dmitri Denisov, a physicist at Fermilab in Batavia, Illinois, says it is standard procedure to wait to publish a paper until everyone in the collaboration has signed on. “We really strive to have full agreement,” he says. “In some cases it takes months, sometimes up to a year, to verify that everyone in the collaboration is happy.” In the case of OPERA, 15 of the 160 members refused to add their names to the original paper because they felt the announcement and submission of the results for publication were premature. “I didn’t sign because I thought the estimated error was not correct,” said team member Luca Stanco of the National Institute of Nuclear Physics in Italy. In a New Scientist article, Stanco was quoted as saying, “We should have been more cautious, more careful, presented the result in not such a strong way, more preliminarily. Experimentalists in physics can make mistakes. But the way in which we handle them, the way we present them – we have some responsibility for that.” Physics World mentioned Caren Hagner, leader of the OPERA group at Hamburg University and one of the people whose names did not appear on the pre-print. She, too, argued that the collaboration should have carried out extra checks before submitting the paper for peer review.

The OPERA operatives were in such a rush to announce their scoop to the world that they failed to apply basic prudence. Janet Conrad, a particle physicist at MIT, said that much of the negative reaction from the physics establishment to the announcement stemmed from the fact that there were insufficient experimental checks carried out prior to the announcement. “A [paper in] Physical Review Letters is four pages long. An experiment is vastly more complicated than that,” she says. “So we have to rely on our colleagues having done all of their cross checks. We don’t expect to make a retraction within a year.” Fermilab’s Joseph Lykken concurred. “Precisely because these are big, complicated experiments, the collaborations have a responsibility to both the scientific community and to the taxpayers to perform due diligence checks of the validity of their results,” he said. “The more surprising the result, the more time one must spend on validation. Anyone can make a mistake, but scientific collaborations are supposed to catch the vast majority of mistakes through internal vetting long before a new result sees the light of day.” CERN physicist Alvaro De Rujula also had strong words in this regard. “The theory of relativity is exquisitely well-tested and consistent. Superluminal neutrinos were far, far too much in violation of the rules to be believed, even for a nanosecond. That ought to have made the OPERA management have everything checked even more carefully. Alas, it turned out not to be a subtle error, but mainly a bad connection, the very first thing one checks when anything misbehaves.”

Then, again in violation of good practice, the OPERA results were announced to the press rather than first presented to peers through the usual scientific channels. The physicist Lawrence M. Krauss, director of the Origins Project at Arizona State University (and an ardent atheist), authored an op-ed in the Los Angeles Times entitled “Colliding Theories”, with the subtitle “Findings that showed faster-than-light travel were released too soon”. He wrote,

What is inappropriate, however, is the publicity fanfare coming before the paper has even been examined by referees. Too often today, science is done by news release rather than waiting for refereed publication.

What makes all of this even more surprising is that the OPERA collaboration did not have a direct competitor from which a scoop had to be snatched. The physicists were in a position to carefully check and re-check their results before rushing off to make their announcement.

As a result of the fiasco, OPERA spokesman Antonio Ereditato of the University of Bern in Switzerland and experimental coordinator Dario Autiero of the Institute of Nuclear Physics in Lyon, France, resigned following a 16-13 no-confidence vote by the collaboration’s other leaders. An indication of just how embarrassing this episode was for physics is that CERN, the European Organization for Nuclear Research – the laboratory that supplied the neutrinos to the OPERA experiment – had no official comment on the resignations, distancing itself from OPERA despite its central role in publicizing the original results. Physics World reported that a CERN press officer refused to be identified and emphasised that OPERA was “not a CERN collaboration” since CERN “only sends [OPERA] a beam of neutrinos.”

To some extent, the OPERA debacle was about grabbing headlines. As one report put it,

If faster than light neutrinos do exist, there need to be many rounds of testing, independent analyses and rigorous peer review before we can start announcing dents in Einstein’s bedrock theories. But, as is abundantly clear in this world of fierce media competition, social media and science transparency, any theory is a good theory so long as it makes a good story — as long as the scientific method has been followed and the science is correctly represented by the writer, that is. [Italics in the original.]

***

Let us digress for a moment to discuss a few points that are relevant to material discussed in Genesis and Genes, before returning to the topic of publication bias.

I explained in Genesis and Genes that the public almost always misunderstands what is meant by “measurement” in the context of contemporary science. Measurements in cosmology and physics do not mean that someone is doing something as prosaic and straightforward as reading a temperature off a thermometer. The procedure is far more complicated, and introduces enormous complexity into the endeavour. This is something that Professor Krauss stressed in the article he penned for the LA Times:

The claim that neutrinos arrived at the Gran Sasso National Laboratory in Italy from CERN’s Large Hadron Collider in Switzerland on average 60 billionths of a second before they would have if they were traveling at light speed relies on complicated statistical analysis. It must take into account the modeling of the detectors and how long their response time is, careful synchronization of clocks and a determination of the distance between the CERN accelerator and the Gran Sasso detector accurate to a distance of a few meters. Each of these factors has intrinsic uncertainties that, if misestimated, could lead to an erroneous conclusion.
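
To get a feel for the numbers in that quotation, here is a back-of-the-envelope sketch in Python. It is an illustration only, not the collaboration’s analysis; the roughly 730-kilometre CERN–Gran Sasso baseline is the widely reported figure, and everything else follows from the quoted 60 nanoseconds:

    # Back-of-the-envelope look at the OPERA claim (illustration only;
    # the real analysis involved detailed statistical modelling of
    # detectors, clock synchronisation and geodesy).
    C = 299_792_458.0        # speed of light, in m/s
    BASELINE_M = 730_000.0   # approximate CERN-Gran Sasso distance (~730 km)
    EARLY_S = 60e-9          # reported early arrival: ~60 nanoseconds

    light_time = BASELINE_M / C       # about 2.4 milliseconds
    excess = EARLY_S / light_time     # fractional speed excess

    print(f"Light travel time: {light_time * 1e3:.2f} ms")
    print(f"Implied (v - c)/c: {excess:.1e}")   # roughly 2.5e-5

An effect of a few parts in a hundred thousand is precisely why the baseline had to be known to a few metres and the clocks synchronised to nanoseconds – which is Krauss’s point.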

Informed consumers of science realise that words like measure – which convey a high degree of certainty to the public – in reality reflect something far murkier. This is why, in the post Missing Mass, I pointed out that cosmology is much more theory than observation. The public, inasmuch as it knows anything about the expansion of the universe, for example, entertains fantasies about astronomers watching galaxies flying off into the cosmic sunset, like an airplane slowly moving across the distant horizon. That’s nonsense. To “measure” the expansion of the universe, inferences are made on the basis of complex statistical analyses which depend on layer upon layer of assumption and analysis. In Genesis and Genes, I discussed the work of the brilliant mathematician and member of the National Academy of Sciences Irving Segal. I wrote that,

The most recent study by Segal and his colleagues contained a detailed analysis of Hubble’s law based on data from the de Vaucouleurs survey of bright cluster galaxies, which includes more than 10 000 galaxies. (It is worthwhile noting that Edwin Hubble’s own analysis was based on a puny sample of twenty galaxies.) The results are astounding. The linear relationship that Hubble saw between redshift and apparent brightness could not be seen by Segal and his collaborators. “By normal standards of scientific due process,” Segal wrote, “the results of [Big Bang] cosmology are illusory.”

The debate between Segal and his detractors was not about who had more acute eyesight; it was about ultra-complex models and statistical analysis. This should give informed consumers of science pause when they encounter reports of “measurements” in cutting-edge science.
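
To make that concrete, here is a toy sketch in Python – synthetic data and an invented “true” expansion rate, not any real survey – of the sense in which an expansion rate is “measured”: the number reported is the slope recovered by a statistical fit, and it inherits every assumption baked into that fit:

    # Toy illustration: a "measured" Hubble constant is the output of a
    # statistical fit, not a direct instrument reading. All numbers here
    # are synthetic.
    import numpy as np

    rng = np.random.default_rng(0)
    TRUE_H0 = 70.0                              # km/s per Mpc (invented)
    distances = rng.uniform(10, 400, size=200)  # Mpc, simulated galaxies
    velocities = TRUE_H0 * distances + rng.normal(0, 500, size=200)

    slope, intercept = np.polyfit(distances, velocities, 1)
    print(f"Fitted H0 = {slope:.1f} km/s/Mpc")
    # Change the sample, the noise model, or the assumed linear form,
    # and the "measurement" changes with them.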

***

I pointed out in Genesis and Genes that there exists a misconception of science as the ultimate cosmopolitan pursuit, devoid of any nationalistic flavour which might influence research. The truth is that, being a human endeavour, such factors do influence scientific research. Remember the part about acupuncture? I wrote,

Between 1966 and 1995, there were forty-seven studies of acupuncture in China, Taiwan, and Japan, and every trial concluded that acupuncture was an effective medical treatment for certain conditions. During the same period, there were ninety-four clinical trials of acupuncture in the United States, Sweden, and the United Kingdom, and only fifty-six per cent of these studies found any therapeutic benefits.
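
Just how lopsided are those numbers? A crude sketch in Python – treating each trial as an independent coin flip with the 56% Western success rate, which is obviously a simplification – makes the point:

    # If the "true" positive rate were the 56% seen in Western trials,
    # how likely is a 47-for-47 run of positive results? (A crude model;
    # it ignores differences in protocols, conditions and populations.)
    p_positive = 0.56
    n_trials = 47
    print(f"P(all {n_trials} positive) = {p_positive ** n_trials:.1e}")
    # ~1.5e-12: far too improbable to be a statistical fluke.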

Controlled, double-blind clinical trials are not magic bullets. One’s cultural background influences research, and this was a factor in OPERA also. One news item states that “The large international collaboration has had to contend not just with the usual personality conflicts, but also with cultural differences between Italian, Northern European, and Japanese scientists. The added scrutiny from the controversial result exacerbated those tensions.”

***

At any rate, the OPERA experience has little to do with ordinary, day-to-day publication bias. Once OPERA produced a tsunami of publicity with its premature announcement of superluminal neutrinos, its leaders had no choice but to come clean about the various failures that plagued their experiment. Back at the ranch, far from the limelight, the fact is that uncomfortable results are often just ignored, exiled to distant directories in one’s hard-drive. They don’t make headlines; they don’t provoke resignations; they just don’t get reported and published. And that produces a distorted picture in the minds of scientists and the public with regard to important issues. In his LA Times article, Krauss – an enthusiastic adherent of scientism – writes:

What is inappropriate, however, is the publicity fanfare coming before the paper has even been examined by referees. Too often today, science is done by news release rather than waiting for refereed publication. Because a significant fraction of experimental results ultimately never get published or are not later confirmed, providing unfiltered results to a largely untutored public is irresponsible. [Emphasis added.]

One can quibble with Krauss regarding how much filtering – a synonym for prejudice – must be done to protect the public from unorthodox research findings. But the fact is that a significant portion of research is never published. One reason for this is that researchers are trapped in paradigms that brand certain research results as wrong. As Nottingham University astronomer Michael Merrifield explains,

And, more worrying, is something that scientists like to push under the carpet… there’s psychology in this as well. If, in 1985, I made a measurement of the distance [from the Sun] to the centre of the galaxy when everyone said it was ten kilo-parsecs, and I got an answer that said it was seven kilo-parsecs, I would have thought, “Well, I must have done something wrong” and I would have stuck it in some filing cabinet and forgot about it; whereas if I had got an answer that agreed with the consensus, I’d probably have published it… In this error process, there’s also psychology. As I say, scientists are very uncomfortable about this, because we have this idea that what we are doing is objective and above such things. But actually, there is a lot of human interaction and psychology in the way we do science.

Some in the science establishment try to avoid confronting this reality by invoking ideal worlds, in which various safeguards eliminate any residual doubt from experiments. But scientific research – like virtually all human activity – is more ambiguous than these scientists would have you believe. In Genesis and Genes I quoted the physicist and philosopher Sir John Polkinghorne:

Many people have in their minds a picture of how science proceeds which is altogether too simple. This misleading caricature portrays scientific discovery as resulting from the confrontation of clear and inescapable theoretical predictions by the results of unambiguous and decisive experiments… In actual fact… the reality is more complex and more interesting than that.

***

Nobody doubts that there are many sincere politicians out there. And nobody denies that there is a gaping gulf between election-season promises and post-election reality. After the debris of elections is cleared and the votes tallied, the real, gritty, grey world of horse-trading, budgetary constraints, political alliances and a host of other factors intervene to make politicians, well, politicians.

Science – including the realm of the hard sciences – is a human endeavour. Scientific research is subject to a galaxy of factors beyond the nuts and bolts of the laboratory. It is affected by every condition related to human nature. OPERA is a good example of an experiment going awry because of mundane weaknesses such as impulsivity, the pursuit of glory and bad judgment. But the fact that scientific research happens in the real world and not in some idealized version thereof is just as true in the day-to-day research that never makes headlines.

Informed consumers of science recognise this, and recognise the limitations that these weaknesses impose upon the credibility of scientific research. Science is strong – though never infallible – when it explores phenomena that are repeatable, observable and limited. Its credibility diminishes rapidly as it meanders from these parameters. And when science makes absolute statements about the history of the universe or life, you should take them with a sack of salt.

***

See also:

The post Dr. Ben Goldacre and the Reproducibility of Research:

https://torahexplorer.com/2013/04/10/dr-ben-goldacre-and-the-reproducibility-of-research/

The post Missing Mass:

https://torahexplorer.com/2013/03/07/missing-mass/

References:

The quotations about OPERA in this post come from the following sources:

http://news.discovery.com/space/opera-leaders-resign-after-no-confidence-vote-120404.htm

http://news.discovery.com/space/faster-than-light-neutrino-theory-almost-certainly-wrong-111012.htm

http://www.newscientist.com/article/dn21093-fasterthanlight-neutrino-result-to-get-extra-checks.html

http://www.newscientist.com/article/dn21656-leaders-of-controversial-neutrino-experiment-step-down.html

http://articles.latimes.com/2011/oct/04/opinion/la-oe-krauss-neutrino-20111004

http://physicsworld.com/cws/article/news/2011/oct/07/tension-emerges-within-opera-collaboration

Retrieved 21st April 2013.

Professor Merrifield can be watched here:

http://www.youtube.com/watch?v=gzvPH6A5CmQ


6 Responses to “OPERA or Soap Opera?”

  1. marcandetty Says:

    Yashar cochacho!

  2. Matt Says:

    As a neutrino physicist and the commenter whom Yoram has addressed, I offer this response:

    I consider the Opera result to be an excellent teaching opportunity and an example of how the scientific process is *supposed* to work. This is why I find it sad that Yoram has chosen to present such a disparaging and incomplete portrayal of the controversy.

    The Opera “superluminal neutrino” result was indeed very controversial in particle physics. A good number of my colleagues felt that it was premature for Opera to release their findings. Others felt that the researchers had not performed sufficient cross checks. What Yoram fails to mention is that an equally large number of people in the community – myself included – felt that the results were strong enough to warrant an open discussion in the larger community. We supported Opera’s decision to “go public” (perhaps with some caveats) and many of us still stand by that position. Among those who disagree, very few feel so strongly as to call it a “fiasco” or “debacle” as Yoram does (see my reference [1], particularly the conclusion).

    The claims that Opera “did not do enough cross checks” or that they “rushed their results” are highly subjective and difficult judgement calls to make. They are not at all straightforward. When my colleagues and I use these phrases, we do not mean them in a colloquial sense. We hold them to the very high standards by which scientific research is conducted. In my field, it is very common for teams of many individuals to single-mindedly dedicate many months and years to a single analysis or measurement. The neutrino velocity measurements made by Opera took the team of investigators over two years (ref [2], p. 1), with 6 additional months spent meticulously on cross-checks [4]. Few people in their lives will ever experience such rigor as was exhibited by the Opera investigators. There is little question that they put considerable effort and rigor into their measurement. The concern was merely that Opera still wasn’t careful enough. Such a judgement is more easily made in hindsight than in foresight.

    An important point about the Opera measurement was that the investigators conducted the measurement “blindly” (see my ref [3], slide #4). This means that nobody (not even the researchers themselves) could look at the final result until they demonstrated all of their cross-checks to the satisfaction of an independent group of their peers (for example, see my ref [5]). Only after earning peer approval of their measurement were they allowed to see what the result was. The very point of a blind measurement is that there aren’t supposed to be “take-backs”. Once a blind analysis is deemed fit for publication, it is wrong to retract it merely because the unblinded results happen to go against convention. In a previous article, Yoram accused the field of “sweeping anomalous results under the carpet”. In other articles he accuses scientists of being dogmatic defenders of “the paradigm”. I cited the Opera measurement as one of very many examples where the experimenters placed methodological protections against such biases. I am thus baffled why Yoram criticizes Opera for doing exactly what he accuses most scientists of not doing! He has yet to address this.

    When it comes to a blind analysis, some scientists believe that there is a point at which a blinded result is “so obviously wrong” that it is OK to take it back. Others feel that once you have done your best and decide to unblind it, you should just present the result to the public and leave it to others to repeat the measurement. This is the line along which the Opera controversy played out.
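
    To make the blinding idea concrete, here is a minimal sketch in Python of the “hidden offset” style of blind analysis that is common in my field. This is my own illustration of the general technique – the names and numbers are invented, and it is not Opera’s actual analysis chain:

        # Sketch of a hidden-offset blind analysis (illustration only).
        import random

        # At the start, a secret offset is generated and locked away;
        # nobody doing the analysis knows its value.
        HIDDEN_OFFSET_NS = random.Random(42).uniform(-100.0, 100.0)

        def blinded_result(raw_timing_ns: float) -> float:
            """What the analysers see while developing cross-checks."""
            return raw_timing_ns + HIDDEN_OFFSET_NS

        def unblind(blinded_ns: float) -> float:
            """Run once, only after peers approve the full analysis."""
            return blinded_ns - HIDDEN_OFFSET_NS

        # All cuts and systematic checks are tuned on blinded values, so
        # the decision to publish cannot depend on whether the final
        # answer happens to look "right" or "wrong".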

    Yoram accuses the Opera researchers of releasing their results merely to “grab headlines”. I strongly doubt this. The results were so contradictory to the most basic physics principles that everyone felt they were probably wrong. By going public, Opera took a much bigger risk than if they had hidden their measurement away. What Yoram also fails to mention is that the results were leaked to the press before Opera held their press conference [1]. They felt, given the media hype, that they had to come out and address the questions… which is why the media circus preceded the final publication.

    I urge readers of this blog to watch the original presentation, the day the results were made officially public (http://indico.cern.ch/conferenceDisplay.py?confId=155620). Listen to and make note of all of the careful cross-checks performed. Note the cautious conclusion slide, asking the community to scrutinize and replicate the results. Watch the detailed Q&A session. Judge for yourselves…

    In the end, basic physics won out. Not only were the Opera results found to be in error; several experiments also repeated the measurements, definitively showing that neutrinos did NOT go faster than light [7],[8]. When Opera released their results, three previous measurements of neutrino speeds had already been made (ref [1], p. 2), but they were not very precise. The Opera results did exactly what they were supposed to do: they pushed the issue to the forefront of the physics community, and several other experiments responded to the challenge. This is how science works.

    An informed consumer of science should always withhold judgement when a brand new result hits the presses, especially if that result seems to completely reverse a century of scientific progress. In this world of ubiquitous media and real-time blogging, the public is presented with the live play-by-play of new scientific results unfolding before their very eyes. Now more than ever, it is important that “informed scientific consumers” are able to place new scientific findings in proper context. As a scientist and educator I place great importance on this. I think the most critical issue in science literacy is teaching the public how to discriminate between the mature, robust findings of science and those findings which are controversial or “too soon to say for sure”. The Opera experiment is a great teaching lesson in this respect, but in this story we also see an excellent example of the self-correcting nature of the scientific process. Looking at the community response to Opera, we see all of the hallmarks of good science: attention to biasing factors, blinding, repeatability, and careful public scrutiny.

    [1] http://www.nature.com/nature/journal/v484/n7394/full/484287b.html
    [2] The original Opera paper: http://arxiv.org/pdf/1109.4897v4.pdf
    [3] The official presentation of the Opera results on 9/23/11: http://indico.cern.ch/conferenceDisplay.py?confId=155620
    [4] http://www.nature.com/news/2011/110927/full/477520a.html
    [5] http://www-cdf.fnal.gov/physics/statistics/notes/cdf6576_blind.pdf
    [6] http://www.fnal.gov/pub/today/archive/archive_2012/today12-06-08.html
    [7] http://arxiv.org/abs/1208.2629

  3. mwetstein Says:

    After a lengthy and very negative narrative against Opera, Yoram offhandedly remarks that “At any rate, the OPERA experience has little to do with ordinary, day-to-day publication bias”. He asserts: “Back at the ranch, far from the limelight, the fact is that uncomfortable results are often just ignored, exiled to distant directories in one’s hard-drive. They don’t make headlines; they don’t provoke resignations; they just don’t get reported and published. And that produces a distorted picture in the minds of scientists and the public with regard to important issues.”

    I have provided a detailed picture of the standard blinding procedures used in my field and exactly how they protect against such a bias. I used the Opera example because it is a well-known one. If Yoram wants to disregard my Opera example merely because it was “in the limelight”, here is a list of other examples of anomalies that were not so flashy or newsworthy:

    http://member.ipmu.jp/sourav.mandal/anomalies.html

    I recently saw a great talk by the grad student at the CDF experiment who found the problems in their “W+jets bump” (see the linked list). I asked him about it afterwards, and even his careful counter-analysis was performed blindly! There are many more examples than shown on the above list. As I’ve said, it is common in my field for experiments to risk publishing anomalous results, rather than “exile them to distant directories in one’s hard-drive” as Yoram insinuates. If Yoram wants to argue against this point, he needs to support that claim and not just assert it.

    Towards the end of his posting, Yoram suggests that measurements in physics are often not as easy or straightforward as people might think: “The procedure is far more complicated, and introduces enormous complexity into the endeavour,” he says. I certainly agree that new measurement techniques are difficult and complex. The single measurement in my dissertation took 5 years of my life. But Yoram falls into the trap of “argument from incredulity”. He states:

    “Measurements in cosmology and physics do not mean that someone is doing something as prosaic and straightforward as reading a temperature off a thermometer.”

    In this example, Yoram regards reading a thermometer as “prosaic and straightforward”. What he fails to note is that the underlying assumptions and theoretical premises behind thermometers were not always straightforward. It was only after decades of scientific advancement in thermodynamics and ideal gas theory that thermometer measurements earned the credibility and reliability to be considered “prosaic”. There was a time that nuclear magnetic resonance was highly experimental. Now MRI measurements are standard practice in medicine. The underlying machinery behind an MRI is every bit as complicated as the machinery astronomers use. Yet, that does not stop it from being reliable. Going back to our neutrino example: the MINOS experiment has greatly improved on the measurement techniques used by Opera to measure neutrino speeds. Such measurements, once fraught with peril and unknowns, are indeed becoming “prosaic” in the field. This is how science progresses.

    I highly encourage Yoram to pursue basic literacy on these matters before he feels compelled to make sweeping critiques and present himself as an expert.

    As always, I would like to point out that there is a substantial difference between skepticism as a rational approach to science, and merely presenting laundry lists of rationalizations meant to cast doubt upon even well-supported, well-matured scientific findings. The former approach (skepticism) is a rational strategy; the latter approach (doubt) is decidedly irrational. And, as Rabban Gamliel says in Pirke Avos, “Provide yourself with a teacher and remove yourself from doubt”. Words to live by in science as well as Torah.

  4. mwetstein Says:

    BTW, for a fun history of the “prosaic” thermometer: http://inventors.about.com/od/tstartinventions/a/History-Of-The-Thermometer.htm

    Took a few hundred years to become accurate and reliable.

  5. Yoram Bogacz Says:

    The post OPERA or Soap Opera makes two primary points. Firstly, publication bias is a feature of contemporary science. Secondly, OPERA is not a typical case of scientists publishing uncomfortable results.
    Publication bias is a reality, which is why such a nifty name exists for the phenomenon – I didn’t invent it. I quoted a typical statement about publication bias, penned by Professor Lawrence Krauss: “Because a significant fraction of experimental results ultimately never get published or are not later confirmed…” Notice that Krauss refers to publication bias in a decidedly casual fashion, the way you or I might mention that occasionally, we exceed the speed limit. It’s a fact of life.
    Publication bias often stems from uncomfortable results, and failure to publish those results leads to a distorted picture – both in the minds of other scientists and of the public at large. How often does it happen? It is virtually impossible to quantify publication bias, but it happens, and the result is that the overall picture is distorted.
    OPERA was not a typical case in which scientists published uncomfortable results. Having unleashed a wave of publicity through their premature announcement of superluminal neutrinos, OPERA leaders had no choice but to come clean.
    ***
    Now to the meat of Matt’s statements.
    The condemnation of OPERA originated within the community of physicists; I merely reported it, drawing on standard sources such as New Scientist, Physics World and Discovery News. Matt’s response, in essence, consists of ignoring the fact that a majority of physicists were critical of the entire OPERA affair, and implying that I invented these charges.
    Here is one blatant example of this strategy. Matt writes: “Yoram accuses the Opera researchers of releasing their results merely to ‘grab headlines’. I strongly doubt this…”
    Yoram accuses? Not at all! I quoted Discovery News. Readers can peruse the original post, and verify that the relevant paragraph began with “As one report put it.” I then quoted (not paraphrased) the report, which stated that “If faster than light neutrinos do exist, there need to be many rounds of testing, independent analyses and rigorous peer review before we can start announcing dents in Einstein’s bedrock theories. But, as is abundantly clear in this world of fierce media competition, social media and science transparency, any theory is a good theory so long as it makes a good story — as long as the scientific method has been followed and the science is correctly represented by the writer, that is. [Italics in the original.]”
    Nor did I invent the words penned by Professor Lawrence Krauss, who wrote in the LA Times that “What is inappropriate, however, is the publicity fanfare coming before the paper has even been examined by referees. Too often today, science is done by news release rather than waiting for refereed publication.”
    If you now go back to the post and Matt’s response to it, you will see how vacuous Matt’s criticism is. Again and again Matt resorts to shifting the criticism that emanated from the physics community to make it appear as if I invented it. Matt’s response is peppered with “Yoram this” or “Yoram that”. Babkes! It’s the physics community that shredded OPERA. Readers can easily verify this by reading the post, by following the links I provided at the end of it, and by consulting the general news reports from that time and since. OPERA was roundly condemned for the reasons outlined in the post, and blaming the messenger is as bankrupt a strategy here as it always is.
    Ask yourself, dear reader, the following questions:
    1. If OPERA is the way science is supposed to work, why did 10% of the team object to the initial publication of results as premature? Why did the OPERA leadership nonetheless proceed, even though it is standard practice in this type of investigation to reach consensus before publication?
    2. If OPERA is the way science is supposed to work, why were the preliminary results released to the public, rather than to peers as standard practice requires?
    3. If OPERA is the way science is supposed to work, why was a vote of no-confidence held regarding the leadership of the experiment, resulting in a 16-13 vote?
    4. If OPERA is the way science is supposed to work, why did the leaders of OPERA resign?
    5. If OPERA is the way science is supposed to work, why did world-famous physicists feel compelled to write scathing op-ed pieces in outlets such as the LA Times?
    6. If OPERA is the way science is supposed to work, why did a CERN official refuse to be identified for an interview about CERN’s involvement with OPERA?
    Matt considers OPERA to be the way science should be done; mazel tov! But he should not blame the rational public for imagining him standing next to George Bush, surveying the desolation visited upon New Orleans by Hurricane Katrina, and nodding vigorously as the President cheerfully proclaims, “Heckuva job, Brownie!”

  6. Matt Says:

    The problem with Yoram’s article is not that his sources are bad. It is not that his quotes are wrong. His problem is that (1) he only selectively quotes his sources, (2) he makes false assertions about the “majority” of the community, and (3) he lacks a sufficiently deep understanding of the issue.

    ********
    “Firstly, publication bias is a feature of contemporary science.”

    This statement is imprecise. Yoram and I both agree that publication bias exists. It is a perennial problem in science. But, what does he mean by “a feature”? How big of a “feature”? Are the researchers aware of this “feature” and do they take steps to address it? What are those steps and are they effective? Yoram cannot answer these questions because he has zero expertise or experience in my field. His blog-post above consists of cherry-picked quotes from articles written by other people (secondary, not primary sources). I should point out that the news stories are fine and I recommend reading them directly to get a more balanced picture.

    The Opera result has nothing to do with publication bias. I keep pointing out that the decision to publish was made while the results of the analysis were blinded. The decision to publish could not have been biased by the outcome, because the investigators did not know the outcome! Yoram still has not addressed this point. Knowing full well that the Opera results were not subject to publication bias, Yoram goes on to say that:

    “OPERA was not a typical case in which scientists published uncomfortable results.”

    He has not at any point supported this assertion with examples in particle physics. I have already pointed out that there are many other, more “typical” examples where scientists published uncomfortable results, and I provided a list of them. Yoram has not addressed these other examples. Again, regardless of whether OPERA is “typical” or “atypical”, they followed standard protocol designed to guard against the types of publication bias Yoram indiscriminately accuses scientists of.

    ******
    “The condemnation of OPERA originated within the community of physicists; I merely reported it, drawing on standard sources such as New Scientist, Physics World and Discovery News.”

    Yoram has cherry-picked his quotes from the articles that he “merely reported on”. Most of those articles were well balanced and did not demonstrate the universal “condemnation” Yoram would like to imply. For example, he pulls only the quote from Janet Conrad that casts OPERA in a negative light and then disregards her other statements, which largely defend the collaboration. Janet is a thoughtful person and her position was not fully captured by Yoram’s post.

    ******
    “Matt’s response, in essence, consists of ignoring the fact that a majority of physicists were critical of the entire OPERA affair, and implying that I invented these charges.”

    Yoram *did indeed* invent these charges. They are fictional. None of Yoram’s sources claim that “a majority” of us were critical of the “entire OPERA affair”. The words “majority” and “entire” are Yoram’s own overgeneralizations. A majority of us were *skeptical* of the results. A majority of us were uncomfortable with *aspects* of the media circus. But we were split on the decision to publish and present them. Most of Yoram’s own sources make it very clear that the community was split. I explained this point and I also clarified the philosophical basis for the disagreement. But what would I know? I only have first-hand experience and expertise on this matter.

    *******
    “If faster than light neutrinos do exist, there need to be many rounds of testing, independent analyses and rigorous peer review before we can start announcing dents in Einstein’s bedrock theories…”

    Yoram misinterprets this quote as being a critique of Opera. It is merely a disclaimer for readers not to draw definitive conclusions from news of the Opera results alone. I opened my response with a similar remark. The experiment needed to be repeated, and it was. This *exact* sentiment was expressed by the co-spokesperson of Opera at the public presentation: “And now I have to add many words of caution from all of us. Despite the large significance of this measurement…it has a potentially great impact on physics. This motivates the continuation of our studies in order to identify any still unknown systematic effects, and we look also forward to independent measurements from other experiments. For the same reasons, we do not attempt any theoretical or phenomenological interpretation of the results.”

    *********
    “Nor did I invent the words penned by Professor Lawrence Krauss, who wrote in the LA Times that ‘What is inappropriate, however, is the publicity fanfare coming before the paper has even been examined by referees. Too often today, science is done by news release rather than waiting for refereed publication.’ ”

    Krauss expressed concerns about the news release preceding the publication. But, it was Yoram who described the release as an attempt to “grab headlines” (Yoram’s words). Krauss made no statement on the motives of the researchers.

    Yoram did not address my point that the story leaked to the media *before* Opera went public. At the time Krauss wrote his op-ed, this was not yet well known. Opera’s decision is much harder to cast judgment upon in light of that fact.

    *********
    “Matt considers OPERA to be the way science should be done;”

    To clarify my statement, which Yoram has misread: I said that the “community response” to the OPERA result is an example of “how science is supposed to work”. I’m talking about the bigger picture. I did not mean to imply that OPERA’s internal squabbles are examples of model behavior. I do feel that OPERA mismanaged aspects of the situation. On the other hand, neither I nor most of my colleagues consider it a scandal. Yoram goes on to provide a laundry list of rhetorical questions, focused mostly on rifts within the OPERA collaboration. I think it is perfectly fair to say that OPERA suffered from deep internal political disagreements. But one should not confuse political conflicts with scientific mismanagement. The subset of the collaboration who did the velocity measurement were very thorough in their work, following all of the standard scientific protocol. I see value in OPERA deciding to present those results to the larger community for comment, even though they seemed so unbelievable. Most of the community would agree with my first point, but we are split on the second point. Both of these points are independent of whether OPERA was politically well-managed.

    Yoram claims to be “just reporting the news”. But when he compares OPERA to Bush’s handling of Hurricane Katrina, his heavy-handed editorial agenda becomes eminently clear. I urge Yoram to try to engage me and listen to what I have to say rather than trying to argue “at” me. I also urge him to reach out to other particle physicists. He needs to do more research before he comes out with such strong posts.

