“When you have eliminated the impossible,
whatever remains, however improbable, must be the truth.”
Sherlock Holmes in The Sign of the Four, by Sir Arthur Conan Doyle
Research fails. Almost always. Scientists discard hypotheses like so many untried drones, cast out to freeze before even getting a chance.
This is the nature of research. It’s the approach that gives us the results we have — the breakthroughs, the insights, the improvements in our quality of life. Failure in research is not a problem; in fact, it can push us forward.
Thomas Edison’s associate, the story goes, was frustrated with nearly a thousand unsuccessful experiments for a project. He was ready to throw in the towel, but Edison talked him out of it. “I cheerily assured him that we had learned something,” he is reported to have said in a 1921 interview. “We had learned for a certainty that the thing couldn’t be done that way, and that we would have to try some other way.”
But if failure in research is itself not a problem, the communication of such failures is. If we learn from negative results, why is it so hard to publish them? Is there a systemic challenge for the promulgation of research? If so, what changes might help?
Journals don’t have policies against publishing negative results. The World Association of Medical Editors states, on the contrary, that “studies with negative results … should receive equal consideration.” At the same time, there is research suggesting that statistically significant results increase the chance of publication, thereby lowering the odds that negative results get into print.
It may be difficult to publish negative results simply because reviewers find such studies less interesting. It could also be the case that editors are under pressure from publishers to increase the impact factor of their journals, and they know that studies with negative results are less cited than those with positive results.
Perhaps the attitudes of researchers are relevant, too. It may be more interesting to move on to testing a new hypothesis than to write up the results of a failed one. Maybe it’s even embarrassing to have a hypothesis that didn’t pan out.
Some offer more sinister speculation: Researchers don’t publish their own negative results because they don’t want to help the competition. This highlights a peculiar feature of the organization of universities.
University departments have two core activities, teaching and research, and each exerts different pressures on hiring decisions: teaching coverage requires breadth, while research success requires depth. Because departments are often built on the basis of teaching needs, a single institution rarely employs more than a handful of specialists in any narrow area. Hence, communities of researchers working on the same problems tend to span institutions.
While those institutions may collaborate, they are actually in competition with one another. They compete for funding. Could this provide an incentive to leave negative results unpublished? Might a team consider its negative results to be a competitive advantage, insofar as such results contribute to the context for subsequent positive results? Will we have a better chance for funding if we let the competition languish with a hypothesis that doesn’t work?
The skewing in favor of positive results follows from a lack of incentives to publish negative ones. Either researchers themselves find the results uninteresting, or they anticipate that reviewers will be less enthusiastic, or they know such studies will be cited less, or they may even believe that publishing them surrenders a competitive advantage. Why, then, should one bother?
What changes could counter these pressures? Can we imagine incentives to publish negative results? I see at least two developments that could play a role.
First, funding models that give economic rewards for publication could lead to the appearance of more negative results. Norway has recently adopted an elaborate national scheme of this type: publication records are now tied directly to funding for travel, research assistance, equipment and so on, so that publication itself generates funds. Tracking the developments triggered by this system may reveal an increase in the publishing of negative results.
Second, there is increased political pressure to connect research with innovation. The European Commission announced recently that its new funding program will be called Horizon 2020 — The Framework Programme for Research and Innovation. Pressure to demonstrate innovation may yield increased publication of negative results. Negative results can winnow the possible directions for innovative applications, and thereby demonstrate the usefulness of research — in turn increasing the chances for EC funding. Examples range from medicine to evolutionary biology to the social sciences.
There is broad — but not unanimous — consensus that publishing negative results is important for science. In many ways, the current system discourages this. We should keep an eye on the new developments I’ve mentioned to see if they lead to change.
And if they do, then we will join Sherlock Holmes in a more open pursuit of truth — not only by eliminating hypotheses, but by telling each other when we do.
This is a pre-print of a column appearing in somewhat edited form as the View from the Top commentary in Research Europe, 21 July 2011. The commentary was solicited in part on the basis of my earlier blog entry 0.01% inspiration: The failure of research, with which the careful reader may notice occasional similarities.
2 Comments
Republish
I encourage you to republish this article online and in print, under the following conditions.
- You have to credit the author.
- If you’re republishing online, you must use our page view counter and link to the article’s appearance here (the counter is included at the bottom of the HTML code), and include the links from the story. In short, this means you should grab the HTML code below the post and use all of it.
- Unless otherwise noted, all my pieces here have a Creative Commons Attribution licence (CC BY 4.0), and you must follow the (extremely minimal) conditions of that licence.
- Keeping all this in mind, please take this work and spread it wherever it suits you to do so!
I don’t disagree with this blog post.
For sure they are important! There are even journals specifically for negative results in research; everybody knows The All Results Journals, for example:
http://www.arjournals.com
The problem is that scientists do not like to submit negative results, because they think these could be bad for their careers, or something similar. Very strange…
Thanks for the post,
Lewis