If an infinite number of monkeys spent eternity banging away on an infinite number of typewriters, eventually they would produce the complete works of Shakespeare. So says the ubiquitous Infinite Monkey Theorem and the Theory of Accidental Excellence it inspired.
Sometimes I wonder if Émile Borel’s thought experiment is also the rationale for the Norwegian model of research funding. Are universities nothing more than buildings full of typing monkeys? Is the government with its funding model betting on accidental excellence?
Universities in Norway are rewarded for the number of publications their staff produces. The more their people publish, the more the institution is rewarded.
Is publishing more better?
The reward system, of course, communicates the idea that publishing more is better than publishing less. Publishing, after all, is how we participate in the international research community and it’s how we contribute to pushing the frontiers of knowledge forward. Publishing is important. We should do more of it, the system says.
But publishing more is better only if the quality of everything published is good; it’s better only if what gets into print somehow makes a contribution. Unfortunately, we can’t assume either of these things.
Many papers are published too early, before they’ve been thoroughly tried and tested. A biotech firm in California working on cancer treatments recently asked its scientists to confirm the results of 53 landmark papers. The Los Angeles Times reports that only six of them could be reproduced.
We also know that not every article makes a contribution. Most of them can’t because most of them are hardly read and never cited. That’s right: about 90% of the papers published in academic journals are never cited by anyone and therefore never contribute to a research program. Even if we limit our view to the top journals, fewer than half of the articles they publish are cited within five years of their appearance.
With papers appearing before they’re ready and many having no impact at all, we have to acknowledge that a system that pushes for quantity cannot be assumed to capture quality as well. And given that, what really is the point of pushing for more?
An attempt to quantify quality
In fairness, the Norwegian system is not built on quantity alone; it does give a nod to putative quality. But, unfortunately, that part of the system isn’t working and it needs to be changed.
Publications put money into universities’ coffers based on a unique point system. Points are given only to publications in approved outlets, a status roughly, though not solely, determined by whether the publisher uses peer review.
The number of points awarded varies based on the type of document and the quality of the publisher. There are three categories of documents: articles in anthologies, articles in journals, and books.
Each of the three categories is subdivided into two, with the bottom 80% (level 1) getting fewer points than the top 20% (level 2).
These levels are difficult to describe, but very roughly speaking, Norwegian scholars collectively determine what the best journals in their fields are, and a cut-off is introduced so that the outlets accounting for the top 20% of the articles those scholars publish are given more points than the remaining 80%. It’s a home-grown Impact Factor system.
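Mechanically, the system reduces to a lookup table: document type and level determine a tariff, and an institution’s tally is the sum over its publications. The sketch below illustrates that arithmetic; the point values and names are illustrative placeholders of my own, not the official tariffs.

```python
# Illustrative sketch of publication-point arithmetic.
# The tariff values below are placeholders, not the official Norwegian rates.
POINTS = {
    # (document type, level): points per publication
    ("journal_article", 1): 1.0,
    ("journal_article", 2): 3.0,
    ("anthology_article", 1): 0.7,
    ("anthology_article", 2): 1.0,
    ("book", 1): 5.0,
    ("book", 2): 8.0,
}

def publication_points(doc_type: str, level: int) -> float:
    """Points awarded for one publication in an approved outlet."""
    try:
        return POINTS[(doc_type, level)]
    except KeyError:
        raise ValueError(f"no tariff for {doc_type!r} at level {level}")

# An institution's yearly tally is simply the sum over its publications.
publications = [("journal_article", 1), ("journal_article", 2), ("book", 1)]
total = sum(publication_points(t, lvl) for t, lvl in publications)
print(total)  # 1.0 + 3.0 + 5.0 = 9.0
```

The point of the sketch is how little the table knows: nothing about the content of any paper enters the calculation, only where it appeared.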
The government is currently evaluating this system and anything short of dramatic change must be considered a failure. Below, I identify six ways this system could be better.
1. Eliminate the two-level system
As a first improvement to this system, we should eliminate the distinction between level 1 and level 2. There are at least three reasons to make this move. Indeed, there surely are more; please add yours in the comments section below.
The levels do not reflect quality
The most important reason to change to a one-level system is that there is no quality-based justification for dividing journals into two groups. While some of the journals that are promoted to level 2 may be more visible or have higher impact factors, these are not reliable predictors of the quality of an individual article in that journal.
Research has demonstrated that “impact factor is … a poor measure of merit.” It is simply untenable to claim that the level 2 journals are the best journals, because no criteria, in Norway or elsewhere, successfully deliver a list of qualitatively better journals, as argued here, here, here, here and here.
Impact factor has been discredited. On what grounds might we expect that Norway’s ad hoc alternative would fare any better?
The system is unpredictable
The second reason we should move to a one-level system is that the current system is implemented in ways that often leave researchers discouraged. The system for promoting journals to level 2 (and the concomitant demotion of others) depends largely on national committees that can be, and are, lobbied. One unfortunate result is that, because of the annual modifications, a researcher who submits a paper to a level 2 journal might well find that journal downgraded to level 1 before the paper actually appears.
When it comes to publishing books, the system doesn’t simply refer to particular publishing houses, but rather to series at the publisher. This becomes almost impossible to keep track of.
With its frequent changes and unmanageably detailed classifications, the system is not merely complicated; it is unpredictable. And an unpredictable system is not a good tool for accomplishing anything.
A one-level system has greater integrity
The third reason we should move away from a two-level system is that it is unnecessary. Researchers should be trusted to use the right criteria to find the best outlet for their work. All of us who do research have opinions about the great journals in our fields. We don’t need governmental incentives to try our luck there. The prestige that comes our way when our papers are accepted at journals like Science and Nature is enough to push us in that direction.
As a corollary to this third point, we can see that a one-level system will have greater integrity. Researchers will decide for themselves, based on the criteria they find important, whether that is visibility, conceptual appropriateness or something else. Through a one-level system, the government will show greater confidence in researchers.
The two-level system we have in Norway doesn’t lead to more high quality publications. In fact, it can’t. You simply can’t count your way to quality. Instead, it leads to speculation, frustration and micromanagement.
A system like ours can promote quantity and quantity alone. Since neither the Norwegian approach to journal ranking nor any other can promote quality, we should abandon attempts to do so.
Let’s use the current evaluation to get away from an approach that turns scholarship into gamesmanship and that risks turning professors into typing monkeys.
This is the first of a multi-part series proposing six improvements to the Norwegian system of rewarding publications. Stay tuned for the rest of my list!
I encourage you to republish this article online and in print, under the following conditions.
- You have to credit the author.
- If you’re republishing online, you must use our page view counter, link back to the post’s appearance here (the link is included at the bottom of the HTML code), and include the links from the story. In short, this means you should grab the HTML code below the post and use all of it.
- Unless otherwise noted, all my pieces here carry a Creative Commons Attribution license (CC BY 4.0), and you must follow the (extremely minimal) conditions of that license.
- Keeping all this in mind, please take this work and spread it wherever it suits you to do so!