Monday, May 16, 2016

Last post, I talked about the benefits a manuscript enjoys in the process of scientific publication. To me, the main benefits are that an editor and some number of peer reviewers read it and offer edits. Somehow, even though this labor is volunteered, publication still manages to cost $1500 an article.
And yet, as researchers, we can't afford to do without the journals. When the paper appears with a sagepub.com URL on it, readers now assume it to be broadly correct. The journal publication is part of the scientific canon in a way the preprint was not.
Since the peer reviews are what really elevates the research from preprint to publication, I think the peer reviews should be made public, as part of the article's record. This will open the black box and encourage readers to consider: Who thinks this article is sound? What do they think are the strengths and weaknesses of the research? Why?
By comparison, the current system provides only the stamp of approval. But we readers and researchers know that the stamp of approval is imperfect. The process is capricious. Sometimes duds get published. Sometimes worthy studies are discarded. If we're going to place our trust in the journals, we need to be able to check up on the content and process of peer review.
Neuroskeptic points out that, peer review being what it is, perhaps there should be fewer journals and more blogs. The only difference between the two, in Neuro's view, is that a journal implies peer review, which implies the assent of the community. If journal publication implies peer approval, shouldn't journals show the peer reviews to back that up? And if peer approval is all it takes to make something scientific canon, couldn't a blogpost supported by peer reviews and revisions be equivalent to a journal publication?
Since peer review is all that separates blogging from journal publishing, I often fantasize about sidestepping the journals and self-publishing my science. Ideally, I would just upload a preprint to OSF. Alongside the preprint there would be the traditional 2-5 uploaded peer reviews.
Arguably, this would provide an even higher standard of peer review, in that readers could see the reviews. This would compare favorably with the current system, in which howlers are met with unanswerable questions like "Who the heck reviewed this thing?" and "Did nobody ask about this serious flaw?"
Maybe one day we'll get there. For now, so long as hiring committees, tenure committees, and granting agencies accept only journal publications as legitimate, scientists will remain powerless to self-publish. In the meantime, the peer reviews should really be open. The peer reviews are what separate preprint from article, and we pay millions of dollars a year to maintain that boundary, so we might as well make that piece of the product more prominent and more transparent.
Saturday, May 14, 2016
Be Your Own Publisher?
@PLOS's financials reveal that they are merely trying to maximize their personal and corporate profit, like any company 30/40— Andrew Kern (@pastramimachine) March 15, 2016
They are merely another Nature or Science that aims to maximize profits while cloaking itself in the white robes of OA. 32/40— Andrew Kern (@pastramimachine) March 15, 2016
.@pastramimachine i agree - I've always want to just post shit on the Internet, but the rest of you fuckers wanted journals so we made @PLOS— Michⓐel Eisen (@mbeisen) March 15, 2016

The problem with paying any 3rd party for academic publishing is that these 3rd parties are corporations. Corporations have the defining goal of making as much profit as possible by providing a service.
This goal is often at odds with what is best for science. Under the traditional publishing model, financial considerations favor the strategy of hoarding all the most exciting research and leasing it out for incredible subscription fees. Researchers stretch their data to try to get the most extraordinary story so that they can get published in the most exclusive journal. Under the Open Access publishing model, financial considerations favor the strategy of publishing as many papers as possible so long as the average paper quality is not so poor that it causes the journal's reputation to collapse.
Subscription journals apparently cost the educational system billions of dollars a year. Article processing fees at open-access journals tend to sit at a cool $1500. How can it be so expensive to throw a PDF up on the internet?
Let's consider the advantages a published article has over a preprint on my GitHub page. Relative to the preprint, the science in a published article gains value from:
1) Peer reviewers, who provide needed criticism and skepticism. (Cost: $0)
2) Editors, who provide needed criticism, skepticism and curation. (Cost: $0)
3) Publicity and dissemination for accepted articles (Cost: Marketing budget)
4) Typesetting and file hosting (Cost: $1500 an article, apparently)
The value-added to researchers comes from the following sources:
1) The perceived increase in legitimacy associated with making it past peer review (Value: Priceless)
2) Prestige associated with being picked out for curation. (Value: Priceless)
It leads me to wonder: What would be so wrong with universities, laboratories, and researchers simply self-publishing? Websites like arXiv, SSRN, OSF, and GitHub provide free hosting for PDFs and supplementary files.
If the main thing that distinguishes a preprint from an article is that between two and five people have read it and okayed it, and if that part costs nothing, why not save a heap of money and just have people post peer reviews on your preprint? (Consider Tal Yarkoni's suggestion of a Reddit-like interface for discussion, curation, and ranking.)
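I have no idea what the right platform would look like under the hood, but as a toy sketch (Python; every field, name, and URL here is invented for illustration, not a description of any existing system), the record behind a self-published article might be as simple as:

```python
# Toy sketch of what a self-published article's record might contain if peer
# reviews and votes lived alongside the preprint. All fields are hypothetical.
from dataclasses import dataclass, field


@dataclass
class Review:
    reviewer: str          # signed or pseudonymous
    verdict: str           # e.g. "sound" or "major concerns"
    text_url: str          # link to the full review (OSF, GitHub, etc.)
    upvotes: int = 0       # Reddit-style endorsement of the review itself


@dataclass
class SelfPublishedArticle:
    title: str
    preprint_url: str                      # e.g. an OSF or arXiv link
    reviews: list[Review] = field(default_factory=list)

    def endorsement_score(self) -> float:
        """Crude curation signal: share of reviews calling the work sound."""
        if not self.reviews:
            return 0.0
        return sum(r.verdict == "sound" for r in self.reviews) / len(self.reviews)


article = SelfPublishedArticle(
    title="An example preprint",
    preprint_url="https://osf.io/xxxxx/",
    reviews=[Review("Reviewer 2", "sound", "https://osf.io/yyyyy/", upvotes=4)],
)
print(article.endorsement_score())   # 1.0: the single attached review called it sound
```

Nothing about a record like that requires a publisher, and the "who vouches for this, and why" information that the journal stamp currently hides would sit right there next to the PDF.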
Is it possible that we might one day cut out the middleman and allow ourselves to enjoy the benefits of peer review without the enormous financial burden? Or does institutional inertia make it impossible?
Maybe this fall my CV can have a section for "Peer-reviewed manuscripts not published in journals."
Wednesday, May 4, 2016
Post-pub peer review should be transparent too
A few weeks ago, I did a little post-publication peer review. It was a novel experience for me, and it led me to consider the broader purpose of post-pub peer review.
In particular, I was reminded of the quarrel between Simone Schnall and Brent Donnellan (and others) back in 2014. Schnall et al. suggested an embodied cognition phenomenon wherein incidental cues of cleanliness influenced participants' ratings of moral disgust. Donnellan et al. ran replications and failed to detect the effect. An uproar ensued, goaded on by some vehement language by high-profile individuals on either side of the debate.

One thing about Schnall's experience stays with me today. In a blogpost, she summarizes her responses to a number of frequently asked questions. One answer is particularly important for anybody interested in post-publication peer review.
Question 10: “What has been your experience with replication attempts?”
My work has been targeted for multiple replication attempts; by now I have received so many such requests that I stopped counting. Further, data detectives have demanded the raw data of some of my studies, as they have done with other researchers in the area of embodied cognition because somehow this research area has been declared “suspect.” I stand by my methods and my findings and have nothing to hide and have always promptly complied with such requests. Unfortunately, there has been little reciprocation on the part of those who voiced the suspicions; replicators have not allowed me input on their data, nor have data detectives exonerated my analyses when they turned out to be accurate.
I invite the data detectives to publicly state that my findings lived up to their scrutiny, and more generally, share all their findings of secondary data analyses. Otherwise only errors get reported and highly publicized, when in fact the majority of research is solid and unproblematic.

[Note: Donnellan and colleagues were not among these data detectives. They did only the commendable job of performing replications and reporting the null results. I mention Donnellan et al. only to provide context -- it's my understanding that the failure to replicate led to third-party detectives' attempts to detect wrongdoing through analysis of the original Schnall et al. dataset. It is these attempts to detect wrongdoing that I refer to below.]
It is only fair that these data detectives report their analyses and how they failed to detect wrongdoing. I don't believe Schnall's phenomenon for a second, but the post-publication reviewers could at least report that they don't find evidence of fraud.
Data detectives themselves run the risk of p-hacking and selective reporting. Imagine ten detectives each running ten tests. If all the tests are independent and every null hypothesis is true, the chance that at least one of those 100 tests turns up p < .05 is over 99%. If anyone is going to make accusations according to "trial by p-value," then we had damn well better consider the problems of multiple comparisons and the garden of forking paths.
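To make that concrete, here is a minimal simulation sketch in Python (purely illustrative, not anything an actual detective has run): ten detectives each run ten t-tests on pure noise, and we count how often at least one of them comes out "significant."

```python
# Minimal simulation: 10 "data detectives" each run 10 independent t-tests on
# data where the null hypothesis is true everywhere. How often does at least
# one test come out "significant" at p < .05?
import numpy as np
from scipy import stats

rng = np.random.default_rng(2016)
n_sims, n_detectives, n_tests, n_obs = 10_000, 10, 10, 30

false_alarms = 0
for _ in range(n_sims):
    # Each test: a one-sample t-test on pure noise (true effect = 0).
    samples = rng.normal(size=(n_detectives * n_tests, n_obs))
    _, p_values = stats.ttest_1samp(samples, popmean=0, axis=1)
    if (p_values < .05).any():
        false_alarms += 1

print(f"At least one p < .05 in {false_alarms / n_sims:.1%} of simulations")
# Analytic check: 1 - 0.95**100 ≈ 0.994, so "trial by p-value" with no
# multiple-comparisons correction convicts almost every innocent dataset.
print(f"Analytic familywise error rate: {1 - 0.95**100:.1%}")
```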
Post-publication peer review is often viewed as a threat, but it can and should be a boon, when appropriate. A post-pub review that finds no serious problems is encouraging, and should be reported and shared.* By contrast, if every data request is a prelude to accusations of error (or worse), then it becomes upsetting to learn that somebody is looking at your data. But data inspection should not imply that there are suspicions or serious concerns. Data requests and data sharing should be the norm -- they cannot be a once-in-a-career disaster.
[Image caption: Post-pub peer review is too important to be just a form of witch-hunting.]
I do post-publication peer review because I generally don't trust the literature. I don't believe results until I can crack them open and run my fingers through the goop. I'm a tremendous pain in the ass. But I also want to be fair. My credibility, and the value of my peer reviews, depend on it.
[Image caption: The Court of Salem reels in terror at the perfect linearity of Jens Forster's sample means.]
* Sakaluk, Williams, and Biernat (2014) suggest that, during pre-publication peer review, one reviewer run the authors' code to make sure it reproduces the reported statistics. This would cut down on the number of misreported statistics. Until that check becomes a common part of pre-publication peer review, it will remain a beneficial byproduct of post-publication peer review.
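As a concrete but entirely hypothetical example of what such a check could look like, here is a short Python sketch; the file name, column names, and "reported" value are all invented, not taken from any real paper:

```python
# Hypothetical reproduction check: recompute a reported statistic from the
# authors' posted data and compare it against the value printed in the paper.
# The file name, column names, and "reported" value are invented.
import pandas as pd
from scipy import stats

data = pd.read_csv("posted_dataset.csv")
clean = data.loc[data["condition"] == "clean", "disgust_rating"]
control = data.loc[data["condition"] == "control", "disgust_rating"]

t_stat, p_value = stats.ttest_ind(clean, control)

reported_t = 2.14   # placeholder for the value claimed in the manuscript
if abs(t_stat - reported_t) < 0.01:
    print(f"Reproduced: t = {t_stat:.2f} matches the reported t = {reported_t}")
else:
    print(f"Mismatch: recomputed t = {t_stat:.2f}, reported t = {reported_t}")
```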
** Simonsohn, Simmons, and Nelson suggest specification-curve analysis, which takes the brute-force approach by reporting every p-value from every defensible combination of analytic choices. It's cool, but I haven't yet tried to implement it.
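For the curious, here is a bare-bones Python sketch of the brute-force idea. It is a simplification of Simonsohn, Simmons, and Nelson's method (which also includes inference across the full set of specifications), and the dataset, variables, and analytic choices below are all invented:

```python
# Bare-bones specification curve: run every combination of a few analytic
# choices on the same (hypothetical) dataset and collect every p-value.
# The file, columns, and choices are invented; 'condition' is assumed coded 0/1.
from itertools import product

import pandas as pd
import statsmodels.formula.api as smf

data = pd.read_csv("posted_dataset.csv")

outcomes = ["disgust_rating", "disgust_rating_log"]   # alternative DVs
covariate_sets = ["", " + age", " + age + gender"]    # optional covariates
exclusion_rules = [
    lambda d: d,                                      # keep everyone
    lambda d: d[d["attention_check"] == 1],           # drop inattentive subjects
]

results = []
for outcome, covariates, exclude in product(outcomes, covariate_sets, exclusion_rules):
    subset = exclude(data)
    fit = smf.ols(f"{outcome} ~ condition{covariates}", data=subset).fit()
    results.append({
        "specification": f"{outcome} ~ condition{covariates}, n = {int(fit.nobs)}",
        "p_value": fit.pvalues["condition"],
    })

# The "curve" is just the full, sorted set of p-values across specifications.
curve = pd.DataFrame(results).sort_values("p_value")
print(curve)
```

Sorting the full set of p-values across specifications shows at a glance whether a result depends on one particular analytic path.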