Open Access

New approaches to quality control in publishing

There are four key components to publishing, and they’re all about to change.

Ten years from now, publishing will be done in ways that we are only beginning to envisage. Politics and profit will of course compel these changes. But the specific innovations coming our way will be driven by a generation of tweeters, bloggers, status updaters and Wikipedia editors.

Publishing starts with (i) an author who has (ii) something to say. It requires (iii) a system of quality control and then (iv) a way to produce and distribute the results.

These four core elements of publishing are the same, whether we are communicating scientific results, writing for a newspaper, telling a story in a novel, or blogging. And they are usually also the chronological stages, especially regarding quality control and distribution.

What changes are we about to experience? Where is the system soon likely to do something differently?

As we learned from Charles Darwin, one key way to identify where big changes are about to happen is to look for variation. Innovation leads to variation, and variation leads to breakthroughs.

In publishing, a current example of this is in the way editors assure the quality of research results that are to appear in academic journals.

When a paper is submitted, editors send it out to a few experts who offer their anonymous opinions. This system of peer review is traditionally the sine qua non of the quality control system in science. But it is slow, expensive, dependent on the goodwill of our colleagues, and potentially discriminatory. In my field (linguistics), it is not uncommon for three years to pass between initial submission of a paper and its appearance as an article.

Furthermore, a growing body of research suggests that peer review is not an effective means of quality control, and not only because of its costs. One major problem, among many, is that negative research results are effectively unpublishable. Some are even asking why we bother publishing in journals at all.

Maybe Open Access can offer us something better. As I’ve noted before, OA could be crucial for developing a Scientific Right of Access.

Many OA journals are concerned to demonstrate that they have the same quality control system as traditional journals. I think this is a mistake. Doing the same thing as the old, traditional, conservative system is not a good strategy for finding a competitive advantage.

You can’t win by merely doing as well as your competition. If Open Access is to become the dominant model for scientific publishing, it has to offer something better than what we already have.

And something better is emerging. There is now variation in how peer review is carried out. Various models are succinctly described in Wikipedia’s article on Open Peer Review.

One model allows authors to post articles on electronic archives. The scholarly community can then engage in discussion, which may lead to publication in a journal.

Another model has authors submit their articles — and also names of reviewers. Consider this editorial statement from WebMedCentral.

We at WebmedCentral have full faith in the honesty and integrity of the scientific community and firmly believe that most researchers and authors who have something to contribute should have an opportunity to do so. Each piece of research will then find its own place in scientific literature based on its merit.

We have introduced a novel method of post publication peer review, which is author driven.  It is the authors’ responsibility to actively solicit at least three reviews on their article. […]

[R]eaders would have full access to the entire communique.  Our intention is to generate a healthy debate on each published work.

A different approach to post-publication peer review lets the traditional journals continue their work, but then adds another layer of evaluation. This is the approach of the burgeoning Faculty of 1000, which describes its process as follows.

Faculty of 1000 (F1000) identifies and evaluates the most important articles in biology and medical research publications. Articles are selected by a peer-nominated global ‘Faculty’ of the world’s leading scientists and clinicians who then rate them and explain their importance. From the numerical ratings awarded, we have created a unique system for quantifying the importance of individual articles and, from these article ratings, journals. […]

Launched in 2002, F1000 was conceived as a collaboration of 1000 international Faculty Members. The name stuck even though the remit of the service continues to grow and the Faculty now numbers more than 10,000 experts worldwide. Their evaluations form a fully searchable database containing more than 100,000 records and identifying the best research available.

[…]

On average, 1500 new evaluations are published each month; this corresponds to approximately 2% of all published articles in the biological and medical sciences.
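To make the idea concrete, here is a minimal sketch of how numerical ratings might be rolled up into article and journal scores. This is a hypothetical illustration, not F1000's actual formula: it assumes each Faculty evaluation assigns a small numeric rating, that an article's score is the sum of its ratings, and that a journal's score is the mean of its evaluated articles' scores.

```python
from statistics import mean

# Hypothetical rating scale, for illustration only:
# 1 = good, 2 = very good, 3 = exceptional.

def article_score(ratings):
    """Aggregate individual evaluator ratings into one article score
    (here, a simple sum; the real weighting scheme may differ)."""
    return sum(ratings)

def journal_score(articles):
    """Average the article scores across every evaluated article
    in a journal. `articles` maps article IDs to rating lists."""
    return mean(article_score(r) for r in articles.values())

# Example: two evaluated articles in one journal.
ratings = {"article-a": [2, 3], "article-b": [1]}
print(article_score(ratings["article-a"]))  # 5
print(journal_score(ratings))               # 3.0
```

Any such scheme inherits the judgment calls behind it: which articles get evaluated at all, and how ratings are weighted, matter as much as the arithmetic.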

The scientists behind these approaches are motivated in part by the spirit of the Web, which tells us to make information publicly available, to eliminate filters, as Cameron Neylon would say. They are motivated in part by a conclusion that the current system both puts profit into the wrong pockets and does so without successfully assuring quality.

This is a debate. There are competing perspectives. One scientist or publisher might ask whether post-publication review adds anything at all, or whether it even works.

Quality control is only one part of the publishing system. The way we create content, the way it is distributed, even our conceptions of authorship are sure to change soon, as I noted in Publishing in the Adjacent Possible.

Where do you see variation today? Where are breakthroughs likely? What do you think they’ll include?

My interest in moving universities towards balance encompasses gender equality, the communication of scientific results, promoting research-based education and leadership development more generally.


3 Comments

  • Colin Phillips says:

    Publication in our field is abysmally slow, as you point out. This is mostly because reviewers are abysmally slow. Is there a reason to think that post-publication peer review would improve the speed or quality of input from expert reviewers/commenters? I think there’s a danger that change will be driven by the biological sciences and other areas whose publishing challenges differ from those of many other fields. Reviewing the latest salami-published bio result and reviewing a complex proof in mathematics are rather different undertakings.

  • Jim Scobbie says:

    Interesting, and prompts some thoughts.

    One worry is that whatever the model, people will continue to publish towards, be reviewed by, and read and cite a small group of like-minded academics. What sort of quality control model will begin to promote higher quality in a holistic / universalist sense, especially in an inward-looking academic field, rather than relativistically higher quality (which might in fact be the opposite)? Only one that would be rejected by most academics, I suspect. This is maybe where the funders come in. They pay the piper, so can call the tune, through promoting research and academic activities in areas that are meant to matter. So maybe this encourages interaction-competition between different academic approaches, in those areas, to take your Darwinist metaphor somewhere else.

    Another worry is the bandwagon, where it’s easy to get resources to do one type of trendy work, even if the quality is low. There are claims that there is a population explosion in neuro-imaging, for example, and that this work outcompetes other fields leading to their diminishment and perhaps extinction, before the too-large population of researchers in that one trendy area collapses.

    Finally, the waste that is part of competition. If we plan for one researcher to follow their nose, and train and support them, there is a cost to this, especially if they are not the best they could be, doing the best they could do. But the alternative is to train three, four, or more researchers, and let them compete. They spend their 20s in uncertain jobs, competing for resources, only to fail to attain the appropriate (peer-reviewed) quality level. Or maybe they are just unlucky. But they leave academia. Yes, they have transferable skills, and yes, the one(s) that remain are maybe doing the best work. But is that a good system, even for cold, heartless, pragmatic academia? Is it good value, and does it produce good work?

  • Gustaf says:

    Simply… There can be no “control” of publishable information anymore. There doesn’t even need to be. The music industry is learning that the hard way right now – and the publishing industry will start to learn it, quite soon – if it hasn’t already.

    Here’s a draft on how decentralized publishing and reviews can be made:

    https://thepiratebay.se/torrent/6734667/%5Bpaper%5D%5B1.00%5D%5BA_proposal_for_a_free__open_and_decentralized_publ


Republish

I encourage you to republish this article online and in print, under the following conditions.

  • You have to credit the author.
  • If you’re republishing online, you must use our page view counter and link to its appearance here (included at the bottom of the HTML code), and include the links from the story. In short, this means you should grab the HTML code below the post and use all of it.
  • Unless otherwise noted, all my pieces here carry a Creative Commons Attribution licence (CC BY 4.0), and you must follow the (extremely minimal) conditions of that licence.
  • Keeping all this in mind, please take this work and spread it wherever it suits you to do so!