
Examining, Improving Reliability of Scientific Research


Sci-2

Recommended Posts

Let the Light Shine In: Two big recent scientific results are looking shaky—and it is open peer review on the internet that has been doing the shaking

SCIENTISTS make much of the fact that their work is scrutinised anonymously by some of their peers before it is published. This “peer review” is supposed to spot mistakes and thus keep the whole process honest. The peers in question, though, are necessarily few in number, are busy with their own work, are expected to act unpaid—and are often the rivals of those whose work they are scrutinising. And so, by a mixture of deliberation and technological pressure, the system is starting to change. The internet means anyone can appoint himself a peer and criticise work that has entered the public domain. And two recent incidents have shown how valuable this can be.

The second claim came from cosmology. On March 17th researchers from the Harvard-Smithsonian Centre for Astrophysics, led by John Kovac, held a press conference at which they announced that they had discovered interesting patterns in the cosmic microwave background, a type of weak radiation left over from the universe’s earliest moments. They said they had spotted the signatures of primordial gravitational waves, ripples in space formed just after the Big Bang.

Once again, it was big news (including in The Economist). The existence of such waves would give strong support for the theory of inflation, which holds that the early universe underwent a brief burst of faster-than-light expansion. Inflation was put forward in the 1980s by theorists as a way to resolve various knotty problems with the standard theory of the Big Bang. But although it is widely assumed to be true, direct evidence that it happened had been lacking.

Dr Kovac and his colleagues made much of their data available online at the time, prompting hundreds of physicists to check their work. Doubts soon surfaced. The team’s claim to have spotted the waves relies on them having diligently scrubbed out every possible source of false positives. But doing that is hard, because the most likely culprit—interstellar dust—is poorly understood. Such diligence is made doubly difficult by the fact that, although several teams are hunting for primordial gravity waves, the glory of being the first to spot them means none is willing to share its data with the others.

The various online arguments culminated with the publication of an online paper by researchers from New York University, Princeton University and the Institute for Advanced Study (also in Princeton). This concluded that Dr Kovac’s data, which came from an Antarctic telescope called BICEP-2, may well have been contaminated by space dust, and that the purported gravitational waves may be much weaker than the team first claimed—if they exist at all.


They miss the point of peer review, which is meant to decide whether a paper meets the basic requirements to be published in the intended journal; that is the lowest bar. The vetting and disagreements (after claims are made, either in publication or otherwise) are what make science.



When did the peer review process really start? Did they have a similar model in Newton's, Archimedes', or even Einstein's day? I always hear "is it peer reviewed?" as some mark of credibility, but I have a hard time believing that all great discoveries/research passed through the current peer review model.


Absolutely during Newton's time. Newton fought with everyone. The Royal Society's journal was peer reviewed and published some of Newton's papers.



Einstein was NEVER comfortable with the theory of quantum mechanics.



Peer review is the backbone of the scientific process.



Nothing in science is ever true, nor does it claim to be, or even want to be.



Performing the experiment, writing down the proof, presenting the result at an internal seminar, finishing the manuscript, submitting it for peer review, getting the revision published, getting others to actually read and cite your work, getting others to verify it, getting others to establish predictions made by your model, or to find a neater proof for your result, receiving a best-paper award, getting your model or result into a survey paper, encyclopaedia or research monograph written by somebody else, getting it into the undergraduate curriculum and a bunch of textbooks, receiving a major scientific prize: these are some of the gradual steps of affirmation for your work.



If your result makes it all the way: great. If not: it was irrelevant or buggy, but hopefully still played a role in the process. In particular, having a result attacked and refuted at any point of the process is part of the deal. It’s how it’s supposed to work. The internet is part of this, just as much as face-to-face meetings.



Mathematicians beg to differ. ;)

Yeah, I myself am of that ilk. And still, even a paper written by me can and will include mistakes, and a statement achieves theoremhood not by me putting a square box at the end of a text starting with Proof:, but by a social process that requires others to read, verify, and accept (or refute) my proof.

(It is true that we mathematicians start with a much higher claim to correctness than the experimental sciences, so if we plot “plausibility of correctness” versus “availability of result”, then ours starts with a wall, followed by a gently sloping plateau, while everybody else's has a much more gradual incline.)
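To make that cartoon concrete, here is a minimal, purely illustrative sketch in Python with matplotlib; the curve shapes and numbers are invented assumptions of mine, not data, just the "wall then gentle plateau" versus "gradual incline" picture described above.

```python
# Purely illustrative cartoon of the metaphor above: no real data,
# the curve shapes are invented assumptions.
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 10, 500)  # "availability of result" (arbitrary units)

# Mathematics: a near-vertical "wall" at publication, then a gently sloping plateau.
maths = np.where(t < 1, 0.0, 0.90 + 0.008 * (t - 1))

# Experimental sciences: a much more gradual incline toward confidence.
experimental = 1.0 - np.exp(-0.3 * t)

plt.plot(t, maths, label="mathematics (wall, then gentle plateau)")
plt.plot(t, experimental, label="experimental sciences (gradual incline)")
plt.xlabel("availability of result (arbitrary)")
plt.ylabel("plausibility of correctness")
plt.ylim(0, 1.05)
plt.legend()
plt.show()
```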


I believe mathematics and physics also have a thriving pre-print culture, which can head off some of the problems of clearly (in contrast to subtly) wrong work being published.


This is sadly mostly absent in other branches, such as chemistry and biology. I guess patent and industry issues are involved.



We should also note that the BICEP results weren't even submitted for conventional peer review; they were announced at a news conference, correct? Of course, scraping data from a PowerPoint presentation would make their peers batty. Not sure you can cite it as a failure of the current peer review system.



What I am more worried about is the modern trend of using the popular media as a vehicle for announcing results, creating unnecessary hype and contaminating the process. By the time the BICEP results actually land at a referee's desk, there is so much history associated with them, creating all sorts of bias (think of it as a high-profile jury trial, where jurors are supposed to be unaware of all the fluff surrounding the case).



I am a peer reviewer, so I have a fondness for the system, but I am also aware of its faults. For experimental work, some things have to be taken on faith. There is also 'reputation bias', where institutions and big shots get undue respect. Sometimes the latter will collude with editors to override referee suggestions. Referee sloppiness is also an issue, since it is unpaid work. These are all things for which the internet is an appropriate corrective (in part).



Peer review is really an interesting process in that your rivals are usually the ones selected to review the work. Once published, peer review never really ends, as even now people still go back to Einstein's work to try to poke holes in it.



Science is an aesthetic: you looked at the world and concluded that it was good, rather than true, like YHWH in Genesis, the first scientist.

Also the first abusive scientist: conducting questionable experiments on unknowing sentient subjects, rejecting peer review to the extent of stamping out competing pantheons, and getting a lowly intern to do all that boring categorization of animals.


Mathematicians beg to differ. ;)

Maths is just a language used to try to express science.

Also the first abusive scientist: conducting questionable experiments on unknowing sentient subjects, rejecting peer review to the extent of stamping out competing pantheons, and getting a lowly intern to do all that boring categorization of animals.

As is right and proper. What are interns FOR, if not to get stuff done that scientists are too busy to take care of? Besides, as experiments go, it's dead interesting.


They miss the point of peer review.

No kidding...

This “peer review” is supposed to spot mistakes and thus keep the whole process honest.

I am actually surprised to find such a sentence in The Economist. Just completely wrong.


What I am more worried about is the modern trend of using the popular media as a vehicle for announcing results, creating unnecessary hype and contaminating the process. By the time the BICEP results actually land at a referee's desk, there is so much history associated with them, creating all sorts of bias (think of it as a high-profile jury trial, where jurors are supposed to be unaware of all the fluff surrounding the case).

I was listening to a debate on the radio on Tuesday, and this point was one of those that remained with me after the debate.


