
The shocking truth about the quality of research: it’s not getting better

That’s the conclusion of a new report studying 10 years of well-documented research activity.

Since 2004, the Norwegian government has carefully classified all scholarly publications produced by Norwegian researchers.

The system for tracking publications has just been evaluated. The review shows that the quantity of papers published has exploded.

But the quality has remained unchanged. Stable. Flat. It hasn’t gotten worse. But it hasn’t gotten any better, either. “There are no indications of significant improvements.”

Counting for quality

The conclusion that the quality of research is unchanged is built on three different measurements of impact. Each assumes that quality can be measured by counting the number of times an article is cited in other research articles.

The broadest strokes are painted by comparing the average number of citations garnered by Norwegian articles to the average number of citations for all articles internationally. When we do this, we see that the Norwegian ones are cited a little more than average. That’s where we were in 2004 and that’s where we are today, giving the impression that quality is stable.
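To make that arithmetic concrete, here is a minimal sketch in Python of how such a relative citation index can be computed. The citation counts and the world average below are invented for illustration, not figures from the report.

```python
# A minimal sketch of the first measurement, assuming we have raw
# citation counts for a sample of Norwegian articles and a world
# average to compare against. All numbers here are hypothetical.

norwegian_citations = [0, 2, 5, 8, 12, 3, 7]  # citations per Norwegian article
world_mean_citations = 5.0                     # assumed world average

def relative_citation_index(national_counts, world_mean):
    """Ratio of the national citation average to the world average.

    A value above 1.0 means the country's articles are cited more
    than the international average; exactly 1.0 means average.
    """
    national_mean = sum(national_counts) / len(national_counts)
    return national_mean / world_mean

print(relative_citation_index(norwegian_citations, world_mean_citations))
# ~1.06 for these made-up numbers, i.e. "a little more than average"
```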

Do the numbers hide an increase in quality?

This conclusion doesn’t feel right. My impression is that the quality of research in Norway has gotten better.

Is it possible that an increase in quality is somehow hidden behind the numbers? By some measures, the number of Norwegian publications since 2004 has increased by over 80% while the number internationally has increased by about 70%. Furthermore, the number of researchers in Norway who publish anything at all has tripled during that period.

The report claims that weaker researchers have started writing. “The effects … have been greatest at the bottom of the system, where publication activity has been lowest.”

If it is true that many of the newly activated researchers are writing articles that get cited at below average rates, then this must mean that the best articles are holding up the average by getting cited even more. In other words, the stability of the overall average must be hiding improvements at the top. The best must be getting better.
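This is just weighted-average arithmetic. A rough sketch, again with invented numbers, of why a stable overall mean combined with a growing below-average group would force the mean of the rest upward:

```python
# Hypothetical illustration of the weighted-average argument.
# If the overall mean holds at 1.0 (the world average) while the
# share of below-average articles grows, the mean of the remaining
# articles must rise to compensate.

def required_top_mean(overall_mean, bottom_share, bottom_mean):
    """Mean the non-bottom articles must reach for the overall
    mean to hold, given the bottom group's share and mean."""
    top_share = 1.0 - bottom_share
    return (overall_mean - bottom_share * bottom_mean) / top_share

# Say 20% of articles are cited at half the world average:
print(required_top_mean(1.0, 0.20, 0.5))  # 1.125
# If that below-average group doubles to 40%:
print(required_top_mean(1.0, 0.40, 0.5))  # 1.333...
```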

Unfortunately, the other two measurements cast doubt on this view.

Comparing the best to the best

In addition to looking at the citation rates for all Norwegian articles, the evaluation also reports on developments at the top and the bottom of the heap.

We can look at the best articles in Norway and compare them to the best articles in the world. If 10% of Norway’s articles are among the 10% most cited articles in the world, then the best Norwegian researchers are doing work of the same quality as everyone else. In fact, Norway is slightly overrepresented among the world’s top 10% of articles. This is another positive sign of the quality of work being done.
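A minimal sketch of how such a top-decile share could be computed, assuming we have citation counts for both a national sample and a world sample; every number below is made up for illustration.

```python
# A sketch of the second measurement: what share of a country's
# articles fall in the world's top 10% by citations? A share of
# exactly 0.10 would mean proportional representation.

def top_decile_share(national_counts, world_counts):
    """Fraction of national articles at or above the world's
    90th-percentile citation count."""
    ranked = sorted(world_counts)
    threshold = ranked[int(0.9 * len(ranked))]  # approximate 90th percentile
    hits = sum(1 for c in national_counts if c >= threshold)
    return hits / len(national_counts)

world = list(range(100))          # made-up world citation counts: 0..99
norway = [5, 12, 88, 93, 40, 97]  # made-up Norwegian counts

print(top_decile_share(norway, world))
# ~0.33 here; anything above 0.10 means overrepresentation at the top
```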

However, this measurement, too, has been stable since 2004. That tells us that there has not been any relative increase in the quality of the best articles produced in Norway.

Some articles never get cited

The next step when trying to determine whether the numbers are hiding an increase in quality at the top is to look at the bottom. This takes us to the third measurement, which counts how many articles published in Norway get no citations at all.

In 2000, 25% of Norwegian articles remained uncited in their first four years of life. By 2009, this had fallen to about 15%. This shows that the “bottom” isn’t pulling the average down. In fact, it’s raising it, making more room for the top to pull us even higher.
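A sketch of how that uncited share might be counted, assuming we know each article's publication year and the year of its first citation; the sample data is invented.

```python
# A sketch of the third measurement: the fraction of articles that
# attract no citations within a fixed window after publication.

def uncited_share(articles, window_years=4):
    """Fraction of articles with no citation within `window_years`
    of publication. Each article is (published, first_cited), with
    first_cited set to None if the article was never cited."""
    uncited = 0
    for published, first_cited in articles:
        if first_cited is None or first_cited - published > window_years:
            uncited += 1
    return uncited / len(articles)

# (publication_year, year_of_first_citation_or_None) -- invented data
sample = [(2000, 2001), (2000, None), (2000, 2006), (2000, 2003)]
print(uncited_share(sample))  # 0.5: two of four uncited within four years
```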

It’s getting better all the time

Maybe the real picture in Norway is that everyone is doing better. The citation rates for all Norwegian articles, and for the top 10% of them, show that Norwegian research is stable relative to the world: its quality is keeping pace with quality internationally.

But the claim that the increase in quantity pulls down overall quality, and that there must therefore be a hidden improvement at the top, seems unfounded. More articles are being published and a higher percentage of them are being cited, throughout the system. But we only know the relative citation numbers, not the absolute ones, so we can't know where the cutoff between positive and negative effects lies, except for articles that get no citations at all, which obviously pull down the average.

The Norwegian system is well-managed and the criteria have been stable, which makes it a good object of study. For this reason, the report on the Norwegian system is important internationally (and it is therefore a pity that it’s written in Danish, although there is a good English summary).

The report does not demonstrate that quality can be measured. But it does demonstrate what can happen if we stipulate a measurement, such as citation rates. When we do, we see that an incentive system need not shift the focus towards quantity at the expense of quality. This is an important contribution to the debate on how to make universities and researchers accountable. If policy makers insist on measuring us, tweaking the Norwegian model might be the best we can do.
