A prime example of how science should work: the “Faster-Than-Light” neutrinos

In the fall of last year, Italian scientists went public with puzzling data: they had detected neutrinos emitted from CERN in Geneva that arrived 60 nanoseconds earlier than they should have if they had behaved according to theory, i.e. travelled at the speed of light. This result was puzzling because it contradicted a theory, often tested and never found wanting, of probably the most famous scientist of all time: Albert Einstein. No wonder the scientists of the OPERA collaboration were extremely cautious in their announcement: they never branded it a “discovery”; rather, they referred to it as an “anomaly”. After months of trying, and failing, to find a flaw in their measurement, they went public with it to ask fellow physicists to challenge it. On the 23rd of February 2012, they announced the following:

The OPERA Collaboration, by continuing its campaign of verifications on the neutrino velocity measurement, has identified two issues that could significantly affect the reported result. The first one is linked to the oscillator used to produce the events time-stamps in between the GPS synchronizations. The second point is related to the connection of the optical fiber bringing the external GPS signal to the OPERA master clock. These two issues can modify the neutrino time of flight in opposite directions. While continuing our investigations, in order to unambiguously quantify the effect on the observed result, the Collaboration is looking forward to performing a new measurement of the neutrino velocity as soon as a new bunched beam will be available in 2012. An extensive report on the above mentioned verifications and results will be shortly made available to the scientific committees and agencies.

This deserves to be noted: after months of testing, the scientists finally decided to put their measurement to the test of other, sometimes competing, physicists. In the end, the same people who had published the result found where the problem lay and made it public. Granted, the scientists of the OPERA collaboration never really believed in the reality of their measurement: it flew in the face of very well established, very solid science. Yet their result, if true, would have earned them a Nobel Prize, and better yet, a place in history books around the world. By going public with a puzzling result and treating it with skepticism themselves, they set an example for all scientists to follow.
Yet the two leaders of the collaboration had to face a “no-confidence” vote by their colleagues, after which they both resigned. When the news of this vote reached me, I did not really understand why their colleagues decided to remove them from the head of OPERA: from what is public, both scientists acted ethically, if not better than that. According to an editorial in Nature, one of the problems was the fear of some OPERA members that funding might be affected by the admission that a loose cable was probably at the origin of the freak measurement.

There is a big problem here, in my opinion. Science, all of it, is bound to make mistakes. The sheer complexity of some of the tasks undertaken by scientists, the lack of knowledge (which is exactly why research is done in the first place: to fill gaps in knowledge), and the pressure scientists are under to deliver data swiftly all increase that risk. What happened in the OPERA collaboration is a useful reminder of that fact. People outside of science may have a hard time understanding it, but making mistakes, and understanding them, is integral to the scientific method. The fear of seeing one’s funding dry up because of such mistakes is not.

Researchers get money from agencies that fund them on the strength of their projects, the data they already have, and so on. But when the economy is shrinking, money becomes less available for anything that will not translate into short-term benefits. Governments would rather fund “growth-driving” initiatives or reduce deficits than give money to an activity whose outcome is, by definition, uncertain. This leads to increased pressure on scientists to deliver data and results fast, and, one can expect, with lower quality standards. Indeed, the New York Times ran an op-ed claiming that retraction rates have never been higher. These retractions seem to be due to fraud, of course, but also to honest mistakes that might have been avoided.

It is harder and harder to get funded, and the criteria are tough, sometimes Kafkaesque: reviewers on the panel would like you to do something innovative, yet they will deny you grant money if you choose to step out of your comfort zone. Not to mention that one usually applies with a project for which substantial data is already available, the newer projects being the ones actually funded on that grant money. All is well, except when you have no old projects that came to fruition. A British medical doctor in the U.S. solved this conundrum by forging data. Obviously, he got caught. However, The Guardian raises a good question: what prompted this doctor to do such a thing? He surely knew that he risked being caught, with rather grim career prospects if that happened, as it did. The problem is that, without any funding, his career prospects were equally grim. No money means you cannot conduct research, and to get money, you need to publish papers, in the highest-profile journals possible. One immediately sees the vicious circle: without money, you cannot do the research that should bring you money. Faced with such a dilemma, Peter Francis chose the easiest path: presenting fake data to the panelists. He probably hoped to redeem himself by actually doing the experiments once funded. The issue here isn’t what he thought, though. It isn’t even (or not so much) what he did, but why he did it.

Were he alone in this, he would undoubtedly be branded a “black sheep” or a “rotten apple”. The problem is that, as in the broader society, there are reasons for deviant behaviour that go beyond a specific individual’s character. And Francis isn’t alone. From a post-doctoral researcher caught poisoning a PhD student’s culture media to high-profile unethical behaviour by researchers who have since been reinstated, there is something rotten in the realm of science.

This whole background might explain the reaction of some members of the OPERA collaboration who, according to the Nature editorial linked above, feared that admitting such a “trivial” mistake might lead to some funds drying up. But by admitting this mistake to the broader public, the scientists of the collaboration stood tall and, above all, stood by the principles of good conduct that every scientist should follow. If, for that reason, the funders of the OPERA collaboration decide to reduce its funding, shame on them!

I fully agree that the work of scientists needs to be monitored. This is actually what scientists already do among themselves: it is a way to ensure that published results are sound. But the funding system seems to be the dream of a neoliberal economist gone wrong: the “publish or perish” motto that characterizes the way laboratories are evaluated is bound to produce such deviant behaviours. There is simply too much at stake for the laboratories and for the scientists, both professionally and in their private lives. There has to be a middle way, where performance is rewarded but where scientists, young or established, also stand on solid ground.

People are leaving academia after years of toil, first as PhD students, then as post-docs. There must be ways to avoid this waste of energy, money and, in the end, knowledge.

About ravingscientist01

Trained as a molecular geneticist, I did a PhD in biochemistry and molecular biology. I am interested in science, its communication, the impact it can have on policy, and the impact that various science-related policies may have on science itself.