
The shocking truth about the quality of research: it’s not getting better

That’s the conclusion of a new report studying 10 years of well-documented research activity.

Since 2004, the Norwegian government has carefully classified all scholarly publications produced by Norwegian researchers.

The system for tracking publications has just been evaluated. The review shows that the quantity of papers published has exploded.

But the quality has remained unchanged. Stable. Flat. It hasn’t gotten worse. But it hasn’t gotten any better, either. As the report puts it, “There are no indications of significant improvements.”

Counting for quality

The conclusion that the quality of research is unchanged is built on three different measurements of impact. Each assumes that quality can be measured by counting the number of times an article is cited in other research articles.

The broadest strokes are painted by comparing the average number of citations garnered by Norwegian articles to the average number of citations for all articles internationally. When we do this, we see that the Norwegian ones are cited a little more than average. That’s where we were in 2004 and that’s where we are today, giving the impression that quality is stable.
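For the curious, here is a minimal sketch of what this first indicator amounts to, written in Python. The function name and the citation counts are invented for illustration; the report itself works from the full publication database, and real indicators of this kind typically also normalize for field and publication year.

```python
# Sketch of a relative citation index: the average citations per Norwegian
# article divided by the average citations per article worldwide.
# A value above 1.0 means Norwegian articles are cited more than average.

def relative_citation_index(citations_norway, citations_world):
    """Mean citations for one country's articles relative to the world mean."""
    mean_norway = sum(citations_norway) / len(citations_norway)
    mean_world = sum(citations_world) / len(citations_world)
    return mean_norway / mean_world

# Hypothetical citation counts for a handful of articles.
norway = [3, 7, 0, 12, 5]
world = [2, 4, 0, 9, 6, 1, 3]
print(f"Relative citation index: {relative_citation_index(norway, world):.2f}")
```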

Do the numbers hide an increase in quality?

This conclusion doesn’t feel right. My impression is that the quality of research in Norway has gotten better.

Is it possible that an increase in quality is somehow hidden behind the numbers? By some measures, the number of Norwegian publications since 2004 has increased by over 80% while the number internationally has increased by about 70%. Furthermore, the number of researchers in Norway who publish anything at all has tripled during that period.

The report claims that weaker researchers have started writing. “The effects … have been greatest at the bottom of the system, where publication activity has been lowest.”

If it is true that many of the newly activated researchers are writing articles that get cited at below-average rates, then the best articles must be holding up the average by getting cited even more. In other words, the stability of the overall average must be hiding improvements at the top. The best must be getting better.
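A toy calculation, with entirely made-up numbers, shows how this could work: if new, rarely cited articles enter the pool while the most-cited articles gain citations, the overall average can stay exactly where it was.

```python
# Made-up citation counts showing how a flat average can hide movement at the top.
year_a = [10, 8, 6, 4, 2]               # five articles; mean = 6.0
year_b = [14, 12, 9, 7, 5, 3, 2, 1, 1]  # more articles, many cited below average

mean_a = sum(year_a) / len(year_a)
mean_b = sum(year_b) / len(year_b)
print(mean_a, mean_b)            # 6.0 and 6.0: the overall average is stable
print(max(year_a), max(year_b))  # 10 vs. 14: yet the best article got better
```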

Unfortunately, the other two measurements cast doubt on this view.

Comparing the best to the best

In addition to looking at the citation rates for all Norwegian articles, the evaluation also reports on developments at the top and the bottom of the heap.

We can look at the best articles in Norway and compare them to the best articles in the world. If 10% of Norway’s articles are among the 10% most cited articles in the world, then the best Norwegian researchers are doing work of the same quality as everyone else. In fact, Norway is slightly overrepresented among the world’s top 10% of articles. This is another positive sign of the quality of work being done.
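One simple way to operationalize this second indicator, again with invented numbers rather than the report’s data, is to find the citation count marking the world’s 90th percentile and ask what share of a country’s articles clear it:

```python
# What share of a country's articles fall in the world's top 10% by citations?
# A share above 0.10 means the country is overrepresented at the top.
# (Ties at the cutoff are handled naively here, for simplicity.)

def top_decile_share(country_citations, world_citations):
    cutoff = sorted(world_citations)[int(0.9 * len(world_citations))]
    hits = sum(1 for c in country_citations if c >= cutoff)
    return hits / len(country_citations)

world = list(range(100))  # 100 hypothetical articles, cited 0..99 times
norway = [95, 80, 50, 30, 99, 91, 12, 7, 60, 92]
print(top_decile_share(norway, world))  # 0.4 here; 0.10 would mean parity
```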

However, this measurement, too, has been stable since 2004. That tells us that there has not been any relative increase in the quality of the best articles produced in Norway.

Some articles never get cited

The next step when trying to determine whether the numbers are hiding an increase in quality at the top is to look at the bottom. This takes us to the third measurement, which counts how many articles published in Norway get no citations at all.

In 2000, 25% of Norwegian articles remained uncited in their first four years of life. By 2009, this had fallen to about 15%. This shows that the “bottom” isn’t pulling the average down. In fact, it’s raising it, making more room for the top to pull us even higher.
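This third indicator is the simplest of the lot: the fraction of a cohort’s articles with zero citations inside a fixed window. A minimal sketch, with invented counts:

```python
# Share of articles that remain uncited within a fixed window (e.g., four years).

def uncited_share(citation_counts):
    return sum(1 for c in citation_counts if c == 0) / len(citation_counts)

cohort = [0, 3, 0, 7, 1, 0, 5, 2, 9, 0]  # hypothetical four-year counts
print(uncited_share(cohort))  # 0.4: four of the ten articles were never cited
```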

It’s getting better all the time

Maybe the real picture in Norway is that everyone is doing better. The citation rates for all Norwegian articles and for the top 10% of Norwegian articles show that Norwegian research is stable relative to the rest of the world. The quality of research in Norway is keeping pace with quality internationally.

But the claim that the increase in quantity pulls down overall quality, and that there must therefore be a hidden improvement at the top, seems unfounded. More articles are being published, and a higher percentage of them are being cited, throughout the system. Since we only know the relative citation numbers and not the absolute ones, we can’t say where the cutoff between positive and negative effects lies, except for the articles that get no citations at all, which obviously pull the average down.

The Norwegian system is well-managed and the criteria have been stable, which makes it a good object of study. For this reason, the report on the Norwegian system is important internationally (and it is therefore a pity that it’s written in Danish, although there is a good English summary).

The report does not demonstrate that quality can be measured. But it does demonstrate what can happen if we stipulate a measurement, such as citation rates. When we do, we see that an incentive system need not shift the focus towards quantity at the expense of quality. This is an important contribution to the debate on how to make universities and researchers accountable. If policy makers insist on measuring us, tweaking the Norwegian model might be the best we can do.

My interest in moving universities towards balance encompasses gender equality, the communication of scientific results, promoting research-based education, and leadership development more generally.


6 Comments

  • I have to say, Curt: if the percentage of Norwegian articles being cited is stable, and it’s safe to presume that the number of articles available is increasing across the globe, then given that Norway’s population is but a fraction of the world’s, does that not mean more citations coming from Norway?
    If Norway’s 6 million is stable compared to the USA’s 300 million, or the world’s 6 billion, does that not hint at our contribution being more substantial?

  • Wilf Tarquin says:

    “quality can be measured by counting the number of times an article is cited”

    That’s the problem right there. Number of citations is not a measure of quality at all.

    • Curt Rice says:

      Indeed! The over-arching perspective I’m trying to get across here and in other writings on this blog is that when it comes to research, there’s just no strategy for counting your way to quality. You actually have to read the stuff and have the necessary expertise to evaluate it.

      • Bern Parent says:

        I think a better measure of quality would be to count the percentage of papers that appear in leading journals (with the latter identified through polls or some other means, such as the h5-index). Publishing there is generally considered a hallmark of quality, since the editorial boards are more competent and apply more rigorous standards.

  • Thank you for this article!
    (I had thought it was general, but the fact that it deals with Norway specifically has motivated me to make this remark.)
    Norwegian people should become aware of their international responsibility: oil has allowed fiscal responsibility, but extra efforts not to let their currency become an international football (played with external {foreign} narratives about FX hypes) are possible. To wit, continue to examine the international science scene, but:
    –>> __allow funds__ to neutrally, thoroughly, internally examine whether Norwegian research is on an international par
    (additional peer-review scenarios, outside consultants, industry-founded [self-financed!] industry-specific peer-review boards, only with a view to “practicality”).
    –>> react with __adequate funding__ to bring it higher, __regardless__ of the level already evaluated!
    –>> a little bit of government money to educate a lovely populace that might look at, and worry about, the rest of the problems in Europe, and reassure them that if they become a science powerhouse, the effect will be as beneficial as the discovery of their offshore oil!
    (In Germany they have a word, “hochnäsig” (snooty); in the USA they just say... they say... “SAY IT”!)

  • Ron Judson says:

    And then I wonder what must be done to elevate one’s research above the “quality” threshold, when the explosion in quantity seems to be drowning out one’s chances. This concerns me a bit, as an aspiring researcher who hopes to embark on a Ph.D. in the Oslo area within a year. Suddenly researchers need to become salespeople. Not only that, this post confirms what a professor of mine from BI said to me once, that priorities for research are misguided and academia is flooded as a result.


Republish

I encourage you to republish this article online and in print, under the following conditions.

  • You have to credit the author.
  • If you’re republishing online, you must use our page-view counter and link to this article’s appearance here (the counter is included at the bottom of the HTML code), and include the links from the story. In short, grab the HTML code below the post and use all of it.
  • Unless otherwise noted, all my pieces here have a Creative Commons Attribution licence (CC BY 4.0), and you must follow the (extremely minimal) conditions of that licence.
  • Keeping all this in mind, please take this work and spread it wherever it suits you to do so!