Open Access

Open Evaluation: 11 sure steps – and 2 maybes – towards a new approach to peer review

Open Evaluation will improve science. Researchers constantly evaluate each other — when we submit our results for publication, when we apply for grants, and when we apply for new jobs or promotions. Peer evaluation is our quality assurance strategy. And it needs to be better.

Open access provides a context to radically reform scientific publishing. The way we evaluate scientific papers must be part of this. Our current system falls short on both quality and transparency.

One creative approach to reconceptualizing evaluation comes from Nikolaus Kriegeskorte, Alexander Walther and Diana Deca. These three scholars invited eighteen papers by authors ready to look beyond open access, and they summarized the nascent suggestions in their recent paper, An emerging consensus for open evaluation: 18 visions for the future of scientific publishing, which appeared in Frontiers in Computational Neuroscience.

The importance of the Open Evaluation (OE) project is characterized in that article as follows.

Evaluation is at the heart of the entire endeavor of science. As the number of scientific publications explodes, evaluation and selection will only gain importance. A grand challenge of our time is to design the future system by which we evaluate papers and decide which ones deserve broad attention and deep reading.

The eighteen papers they solicited are distilled into thirteen suggested features for OE. Eleven of the thirteen suggestions were “overwhelmingly endorsed” by the authors of the independent papers. Two of the suggestions were supported by some but doubted by others. Together, these give us a glimpse of a better system that is just around the corner.

13 measures for changing evaluation
The boldface phrases below are the words used by Kriegeskorte, Walther and Deca to describe the thirteen measures.

Traditionally, scientific articles are published but reviews are not. When reviews are published alongside articles, we move towards a situation in which the evaluation process is totally transparent. It is increasingly difficult for scientists to quickly identify the most important papers for their work; journals can assist them when evaluations are used to produce paper priority scores. Indeed, the needs of scientists differ, and it should therefore be possible for anyone to define a formula for prioritizing papers.
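
To make the idea concrete, a reader-defined priority formula could be as simple as a weighted combination of a paper's evaluation signals. The sketch below is purely illustrative; the field names, weights and scoring rule are my own assumptions, not something prescribed by any of the eighteen papers.

```python
# Hypothetical sketch of a reader-defined paper priority formula.
# Field names, weights and the scoring rule are illustrative assumptions only.

def priority_score(paper, w_rating=0.5, w_citations=0.3, w_usage=0.2):
    """Combine a few evaluation signals into a single priority score."""
    return (w_rating * paper["mean_rating"]              # average numerical rating (0-10)
            + w_citations * paper["citations"] ** 0.5    # dampen raw citation counts
            + w_usage * paper["downloads"] / 1000)       # usage statistics

papers = [
    {"title": "Paper A", "mean_rating": 8.2, "citations": 40, "downloads": 3200},
    {"title": "Paper B", "mean_rating": 6.5, "citations": 400, "downloads": 900},
]

# Each reader could plug in their own weights to build a personal reading list.
for p in sorted(papers, key=priority_score, reverse=True):
    print(p["title"], round(priority_score(p), 2))
```

The point is not this particular formula, but that the formula sits in the reader's hands rather than being fixed by the journal.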

The two measures lacking consensus reflect holdovers from the traditional approach to scholarly publishing. One reveals itself when we ask: should evaluation begin with a closed, pre-publication stage? Without this stage, the websites of journals may be flooded with weak work. On the other hand, the sorting and prioritizing schemes could easily move us past this concern, and the loss represented by errors in the initial screening process could be significant. A milder way to address the same issue asks: should the open evaluation begin with a distinct stage, in which the paper is not yet considered approved?

Connecting information about articles to various metrics on the web will supplement traditional approaches to evaluation. In a rich OE system, evaluations include written reviews, numerical ratings, usage statistics, social web information and citations. Some of these components are immediately available while others require subsequent updating, e.g., as citations appear.
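
As a rough picture of what such a multi-component evaluation record might contain, consider the sketch below. The structure and field names are my own assumptions, intended only to show that some components are fixed once reviews are in while others, like citations, must be updated later.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a paper's evaluation record; all field names are assumptions.
@dataclass
class EvaluationRecord:
    written_reviews: list = field(default_factory=list)    # available once reviews are in
    numerical_ratings: list = field(default_factory=list)  # available once reviews are in
    usage_statistics: dict = field(default_factory=dict)   # e.g. downloads and page views
    social_web: dict = field(default_factory=dict)         # e.g. shares and bookmarks
    citations: int = 0                                      # accumulates after publication

    def update_citations(self, count: int) -> None:
        """Citations appear over time, so this component is revised long after publication."""
        self.citations = count
```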

The anonymity of reviews may reduce hesitation and thereby keep participation levels in the reviewing process high. On the other hand, signed reviews can help build a career when scientists make the effort to do high-quality work in this domain as well. When both approaches are available, the system utilizes signed (along with unsigned) evaluations.

Both anonymous and signed reviews can be done in a way such that evaluators’ identities are authenticated. Authentication provides a strategy for relating multiple reviews or for limiting the reviewing process to people with certain qualifications, e.g., research experience or affiliation with a research institution.

As we work towards improving the quality of evaluation, it may become possible to review individual reviewers and the reviews they write. If reviews and ratings are meta-evaluated, the quality of papers can be partially determined by the quality of the reviews. Hence, we imagine a system in which participating scientists are evaluated in terms of scientific or reviewing performance in order to weight paper evaluations. This could build on the meta-evaluation system — bringing together multiple reviews written by the same scientist — but it could also be built on a scientist's independently established status in the field.
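
One simple, purely hypothetical way to picture such weighting is a reputation-weighted average of ratings, with weights derived from the meta-evaluation of each reviewer; the numbers below are invented for illustration.

```python
# Hypothetical sketch: weight each rating by a reviewer's meta-evaluation score.
# Ratings and reviewer weights are invented for illustration.

def weighted_paper_score(reviews):
    """reviews: list of (rating, reviewer_weight) pairs; returns the weighted mean rating."""
    total_weight = sum(weight for _, weight in reviews)
    if total_weight == 0:
        return None  # no usable reviews yet
    return sum(rating * weight for rating, weight in reviews) / total_weight

reviews = [
    (9.0, 0.9),  # reviewer whose past reviews were rated highly
    (4.0, 0.2),  # reviewer whose past reviews were judged weak
    (7.5, 0.6),
]
print(round(weighted_paper_score(reviews), 2))  # result leans towards the trusted reviewers
```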

The fluidity of digital publishing and the ease of adding content at any time make it possible to imagine a system in which evaluation is perpetually ongoing. Our understanding of the results in a paper can change over time, either enhancing or downplaying its importance. Open Evaluation will give these changes a role, independent of when they emerge.

Many of the changes proposed here can be made more precise when formal statistical inference is a key component of evaluation. A major advantage of all of these proposals is that the new system can evolve from the present one, requiring no sudden revolutionary change. Journals are free to begin implementing new approaches to reviewing scientific papers, although overall confidence in the system of scientific publication will undoubtedly be enhanced if a generalized approach to Open Evaluation can be agreed upon.
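
As one purely illustrative reading of “formal statistical inference”, we could treat numerical ratings as noisy measurements and test whether a paper's underlying quality credibly exceeds a benchmark, rather than comparing raw averages. The ratings and benchmark below are invented, and the one-sample t-test is only one of many possible inferential tools.

```python
# Hypothetical sketch: treat ratings as noisy samples and test whether the paper's
# underlying quality exceeds an (invented) benchmark rating of 7.
from scipy import stats

ratings = [8.5, 7.0, 9.0, 6.5, 8.0]  # invented ratings from five reviewers
benchmark = 7.0

t_stat, p_value = stats.ttest_1samp(ratings, popmean=benchmark, alternative="greater")
print(f"t = {t_stat:.2f}, one-sided p = {p_value:.3f}")
# A small p-value suggests the ratings would be unlikely if the true quality were only 7.
```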

What do you think about Open Evaluation? Do the 11 sure measures sound right to you? What about the two in which there was less agreement? Do you think OE in general is a good idea? How is your vision like or unlike the one presented here? How do you see the connection between Open Access and Open Evaluation? Leave a comment or share this article and we can find the foundation for a better system!

My interest in moving universities towards balance encompasses gender equality, the communication of scientific results, promoting research-based education and leadership development more generally.

8 Comments

  • Pål Lykkja says:

    I hope some hackers will find this problem worthwhile to work with. If the right persons take the challenge, then we could soon see a solution to both open access and open evaluation. Here is a very interesting idea for the moment:

    http://neuroconscience.com/2013/01/15/could-a-papester-button-irreversibly-break-down-the-research-paywall/

  • Peter Gray says:

    The chief culprit here is spineless university governance at all levels, which has been complicit in supporting the traditional model: publish in high-ranking journals regardless of the ethical or arithmetical consequences. Ethical because research, as Curt rightly points out, is publicly funded but privately disseminated. Arithmetical because the system can only absorb a finite amount of material, yet universities behave as though the capacity of the system to absorb ‘research results’ is infinite. We are at a stage where data processing and publishing capacity is in fact virtually unlimited, but as Riel Miller pointed out a while back, the limiting factor is attention. We need to start from first principles: who is the audience for this research? Where do they get their information? What exactly do they need to know?
    The video publishers http://facultimedia.com/ (no connection) have seen the light. It is time the rest of us did. Open evaluation might well be one of the ways forward.

  • Thomas Arildsen says:

    The open evaluation aspect of open science is very exciting. And particularly exciting are the open review platforms that have started appearing recently. Their various approaches will allow us to see in practice what works. I have listed several of these platforms here: http://thomas.arildsen.org/2013/08/01/open-review-of-scientific-literature/ – including additional ones in the comments.

    • Curt Rice says:

      Really great posting there, Thomas. I read it with great interest and tweeted it, too. These are super important issues that researchers really must face … Good work! Carry on! Keep at it!

  • Boban Arsenijević says:

    Thanks for this piece, Curt, I definitely think that open evaluation can help us solve a lot of the issues pointed out. I’m just a bit concerned about one thing. In spite of some regulatory measures (such as prioritizing and weighting reviews/reviewers), I’m afraid that this type of evaluation will strengthen the position of the mediocre view. And while this is never really a good outcome, science is probably one of the domains where it is least desirable. Imagine a paper is set for evaluation that proposes a fruitful, fundamental change in the methodological paradigm of a scientific field. What do you think the outcome would be of open evaluation of such a paper?

  • André Desrochers says:

    “Peerage of Science” does pretty much what is proposed here. Look it up on your search engine.

  • Jon Brock says:

    In the light of the Science sting operation, I’ve just posted something similar (but less well thought out), arguing that anonymous peer review is increasingly meaningless when anyone can set up a website and call it a journal. I’m not sure whether it really is a good analogy, but I was struck by the parallels with the dying art of film criticism. When anyone can become a film critic, the “reviews” that carry weight are those where the critic has invested something of their own reputation.

    http://crackingtheenigma.blogspot.com.au/2013/10/this-study-lacked-appropriate-control.html

Republish

I encourage you to republish this article online and in print, under the following conditions.

  • You have to credit the author.
  • If you’re republishing online, you must use our page view counter, link to the article’s appearance here (the link is included at the bottom of the HTML code), and include the links from the story. In short, this means you should grab the HTML code below the post and use all of it.
  • Unless otherwise noted, all my pieces here have a Creative Commons Attribution license (CC BY 4.0), and you must follow the (extremely minimal) conditions of that license.
  • Keeping all this in mind, please take this work and spread it wherever it suits you to do so!