
Examining, Improving Reliability of Scientific Research


Sci-2


Peer review does far more than just enforce minimum standards of quality (and it's pretty terrible even at that). It determines the journal that a paper will be published in and therefore how much attention it will receive. It can also make or break a scientist's career. Given that the internet has made it extremely easy for papers to be quality checked post-publication, the entire system of peer review and publishing in journals (rather than through open access databases) is antiquated and unnecessary. It probably only survives because it's much easier to estimate the quality of someone's work by glancing at the titles of the journals that they've published in rather than actually digging into their research.



It probably only survives because it's much easier to estimate the quality of someone's work by glancing at the titles of the journals that they've published in rather than actually digging into their research.

But that’s a very strong reason. Nobody has time to dig into other people’s research. We need journals (or, in my field, peer-reviewed conferences) in order to provide a first approximation of a quality seal.


But that’s a very strong reason. Nobody has time to dig into other people’s research. We need journals (or, in my field, peer-reviewed conferences) in order to provide a first approximation of a quality seal.

If scientists as a community have time to volunteer as unpaid reviewers for journals, then why would they not have time to review papers of interest post-publication? A similar system (arXiv) seems to be working in physics, although since I'm not a physicist I can't say from personal experience. The system I would like to see for my own field would be something like a combination of arXiv and PLOS ONE, where there is a minimal review before a paper is put online, followed by open post-publication review.


The journals I review for have acceptance rates of roughly 25-50%. So at worst, the discerning reader knows that about 1 in 4 submitted papers was judged worthy of publication by peer review. If we have minimal review up front and everything reviewed post-publication, I have to go through up to four times as many papers to find the things of importance. More chaff than wheat.



So I like the fact someone else is doing the filtering for me before I spend time reading the literature.



Do the papers that you reject not get published though? Or are they just being published by other (possibly less prestigious) journals? I'd like to get a physicist's view on how arXiv works since there is no peer review filter before things are posted. And with modern search technology, is it really that hard to find the important papers? It seems like the internet and search engines have made it remarkably easy to sort through the hundreds of thousands of papers that are published every year.



Do the papers that you reject not get published though? Or are they just being published by other (possibly less prestigious) journals? I'd like to get a physicist's view on how arXiv works since there is no peer review filter before things are posted. And with modern search technology, is it really that hard to find the important papers? It seems like the internet and search engines have made it remarkably easy to sort through the hundreds of thousands of papers that are published every year.

All that money paid to journals and search services for papers exists to pay people to read, file, and rank those papers.


Do the papers that you reject not get published though? Or are they just being published by other (possibly less prestigious) journals? I'd like to get a physicist's view on how arXiv works since there is no peer review filter before things are posted. And with modern search technology, is it really that hard to find the important papers? It seems like the internet and search engines have made it remarkably easy to sort through the hundreds of thousands of papers that are published every year.

No (speaking from a chemistry background), it is still a horrible task to sort through all the information, and in many fields you really need specialized databases (which means expensive maintenance). Of course, review papers help.

The problem is that not all information is created equal. And, despite all their issues, the journal system acts as a filter, sorting articles into their subfields and enforcing a minimum level of information.


I've used arXiv, by which I mean I've uploaded material to it that was going to be published in a peer-reviewed journal anyway.



I have also downloaded articles from arXiv, though very sparingly. My impression is that it hosts a lot of string-theory articles, and a fair amount of speculative material that serves as a launching pad for ideas rather than fully fleshed-out work (I saw something there about circular time, for instance). It can certainly be useful in specific circumstances, but it is not a panacea for all that ails peer review (IMHO).



I have also downloaded articles from arXiv, though very sparingly. My impression is that it hosts a lot of string-theory articles, and a fair amount of speculative material that serves as a launching pad for ideas rather than fully fleshed-out work (I saw something there about circular time, for instance). It can certainly be useful in specific circumstances, but it is not a panacea for all that ails peer review (IMHO).

Perhaps you're right. Still, when Nature can blithely put its stamp of approval on two STAP papers, only for the internet to almost instantly turn up evidence of image manipulation and plagiarism, and Science can do the same with the arsenic-bacterium paper, only for readers to instantly identify numerous technical flaws, something should probably change (of course, in both of these cases much of the blame rests with the editors, not just the reviewers).


arXiv isn't peer reviewed. Most material there consists of copies of papers that have been submitted to conventional journals (but not yet accepted at the time of upload). People upload preliminary versions of their papers to get around journal paywalls (and to make their work public immediately).



I am a physicist, but not in a field that is well represented on arXiv (optics).



I don't like arXiv, and I feel that it is mostly a tool for getting around peer review, especially with regard to publication time. You can claim you did something first if it appeared on arXiv first, even though it was not reviewed at all and can potentially be bollocks. Plus, as people have said before me, it offers no quality differentiation, which is very important in our day and age with so many papers. There is a lot wrong with peer review as it stands, but, like democracy, it is far better than any alternative.



Also, as I saw from my condensed-matter colleagues, I think arXiv counterintuitively serves to keep people narrowed within their own field and less exposed to other work. It happens because the common routine for a cond-mat physicist is to read the latest arXiv cond-mat papers every day and treat them as the new advances in the field. This alone takes quite some time, and together with a certain "distrust" of standard peer-reviewed journals, it leads to those people not reading general physics journals as much as they probably should. This may sound petty and insignificant, but I think it is a very big issue. Being diverse and open to other fields is very important in science, even in today's highly specialized world.



I have never seen a complaint about the arXiv in the mathematical community. Then again, the peer review process is usually more of a formality for us, since we usually know that what we are attempting to publish is correct. Monitoring posts to the arXiv is a really great way to keep up with your specialized subsubsubfield. As a graduate student, I've learned that journal selection and peer review are much more about gaining recognition for the work than about communicating the work itself. A proof will be announced via the arXiv, followed by "ooooh, that's being published in the Annals!" Also, papers are usually withdrawn once a mistake has been found.



From a mailing list:

"Johns Hopkins is running a free web class on reproducible research techniques in a few days; after looking at the tools involved I thought it might be of some interest to people here.

Essentially, there are a few people in academia who are pushing for reports to be written in such a way that you can easily grab the paper and verify the statistics using automated tools. So, for instance, if you wanted to check how someone arrived at a given effect size, you would simply grab the source file for the journal paper and it would spit out the R code used to calculate that result. This might pull the actual test statistics from a web server to a local file, and you could prod at it to your heart's content.

I think this is a great idea for all of the sciences, personally.
"
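The workflow described in the quoted message can be sketched in miniature. The class apparently uses R, but here is a hypothetical Python stand-in: the effect-size calculation ships alongside the paper's data, so a reader reruns the analysis instead of trusting the reported number. All names and values below are invented for illustration.

```python
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Effect size (Cohen's d) using a pooled standard deviation."""
    na, nb = len(group_a), len(group_b)
    pooled_var = ((na - 1) * stdev(group_a) ** 2 +
                  (nb - 1) * stdev(group_b) ** 2) / (na + nb - 2)
    return (mean(group_a) - mean(group_b)) / pooled_var ** 0.5

# In a genuinely reproducible paper these values would be pulled from the
# archived dataset on a server; they are pasted inline here only to keep
# the sketch self-contained.
treatment = [5.1, 4.8, 5.6, 5.3, 4.9]
control = [4.2, 4.5, 4.1, 4.6, 4.4]

print(round(cohens_d(treatment, control), 2))
```

Anyone who doubts the reported effect size reruns this one file; a mismatch between the printed value and the paper's claim is immediately visible.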


Peer review does far more than just enforce minimum standards of quality (and it's pretty terrible even at that). It determines the journal that a paper will be published in and therefore how much attention it will receive. It can also make or break a scientist's career. Given that the internet has made it extremely easy for papers to be quality checked post-publication, the entire system of peer review and publishing in journals (rather than through open access databases) is antiquated and unnecessary. It probably only survives because it's much easier to estimate the quality of someone's work by glancing at the titles of the journals that they've published in rather than actually digging into their research.

This doesn't sound right to me. Most researchers have a set of journals that they check regularly, because it's much easier that way. The same is fundamental to media consumption: the NY Times carries more weight than the NY Daily News. Scientists have only limited time to read the literature, both new and old. There needs to be a filter.

The peer review process is problematic. I have had papers reviewed by extremely incompetent people, and I have seen papers published with obvious unaddressed flaws. But the process of having your work criticized by a third party genuinely improves the quality of publications. The problem is not that it doesn't work, but that it doesn't work consistently enough.

I think the anonymity of reviewers should be removed. It does more harm than good. Scientists are often petty. I've seen reviewer reports that boil down to "this contradicts the premise of my funding; screw you." And I think that would happen less if reviewers signed their reviews.


I have never seen a complaint about the arXiv in the mathematical community. Then again, the peer review process is usually more of a formality for us, since we usually know that what we are attempting to publish is correct. Monitoring posts to the arXiv is a really great way to keep up with your specialized subsubsubfield. As a graduate student, I've learned that journal selection and peer review are much more about gaining recognition for the work than about communicating the work itself. A proof will be announced via the arXiv, followed by "ooooh, that's being published in the Annals!" Also, papers are usually withdrawn once a mistake has been found.

In materials science this is harder. "Is it relevant?" and "Is it correct?" are much harder to answer. How do you tell whether the author calibrated their instruments?

Perhaps you're right. Still, when Nature can blithely put its stamp of approval on two STAP papers, only for the internet to almost instantly turn up evidence of image manipulation and plagiarism, and Science can do the same with the arsenic-bacterium paper, only for readers to instantly identify numerous technical flaws, something should probably change (of course, in both of these cases much of the blame rests with the editors, not just the reviewers).

Yes. The editors of Nature and Science love controversial results.

Science media is still a media business. Hype sells.

The arsenic paper is really egregious. Another issue is that there is never a retraction. I've seen papers that I know to be wrong, and that the author knows to be wrong, cited over and over.


I think the anonymity of reviewers should be removed. It does more harm than good. Scientists are often petty. I've seen reviewer reports that boil down to "this contradicts the premise of my funding; screw you." And I think that would happen less if reviewers signed their reviews.

But if you remove reviewer anonymity you get things like this.

Though maybe that only happens when the papers in question are about witchcraft and alchemy.


  • 4 weeks later...

It's been a while since I did anything related to stats or research, so I'm hoping the more science-y people on this forum can better evaluate how well this paper holds up:

Why Most Published Research Findings Are False

There is increasing concern that most current published research findings are false. The probability that a research claim is true may depend on study power and bias, the number of other studies on the same question, and, importantly, the ratio of true to no relationships among the relationships probed in each scientific field. In this framework, a research finding is less likely to be true when the studies conducted in a field are smaller; when effect sizes are smaller; when there is a greater number and lesser preselection of tested relationships; where there is greater flexibility in designs, definitions, outcomes, and analytical modes; when there is greater financial and other interest and prejudice; and when more teams are involved in a scientific field in chase of statistical significance. Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true. Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias. In this essay, I discuss the implications of these problems for the conduct and interpretation of research.
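For anyone who wants the gist of the argument without reading the full paper: its central quantity is the positive predictive value (PPV), the post-study probability that a claimed finding is true. A minimal numerical sketch, with the paper's bias term omitted for simplicity:

```python
def ppv(R, alpha=0.05, power=0.8):
    """Post-study probability that a significant finding is true.

    R: pre-study odds that a probed relationship is real
    alpha: significance threshold (type I error rate)
    power: 1 - beta, where beta is the type II error rate

    The paper's formula with bias set to zero:
    PPV = (1 - beta) * R / (R - beta * R + alpha)
    which simplifies to the expression below.
    """
    return (power * R) / (power * R + alpha)

# Well-powered test of a plausible hypothesis (1:1 prior odds):
print(round(ppv(R=1.0), 3))

# Exploratory field probing many candidates (1 true in 20) with low power:
print(round(ppv(R=0.05, power=0.2), 3))
```

In the second scenario the PPV falls below one half, which is the paper's headline claim: whenever power * R < alpha, a "significant" finding is more likely to be false than true.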

