
0.01% inspiration: The failure of research

Research fails. Almost always.

Sometimes I try to explain this with help from Thomas Edison. Edison’s associate, the story goes, was frustrated with nearly a thousand unsuccessful experiments to find the right approach for a project. He was ready to throw in the towel, but Edison talked him out of it. “I cheerily assured him that we had learned something,” he is reported to have said in a 1921 interview. “We had learned for a certainty that the thing couldn’t be done that way, and that we would have to try some other way.”

I need help from Edison because publicly funded universities are expected to deliver results. We are accountable to politicians and the general public; we have to explain that we deserve their support. But I have found it difficult to arouse much passion among those groups for debating erudite topics such as “What is a Result?” — it’s an interesting topic, as Edison made clear, but perhaps it’s just, well, a bit academic.

Maybe it’s time to drop the Edison anecdote and replace it with a more contemporary metaphor. Over at Science2.0, Alex “Sandy” Antunes recently blogged about The 1% Conversion Rule. He was writing about marketing, about clicks on webpages, and about statistics. He demonstrates that about 1% of those who see a link on a webpage will actually click on it. And if you want your readers to do something on that new webpage, like make a purchase, only 1% of those clickers will do as you hope.

If you get 1,000,000 visitors to a site, you can get 10,000 to the sales page. And of those 10,000 who go to the sales page, 100 may make a purchase. This conclusion is based in part on Antunes’ own research, and the numbers are clear: 1% of 1%, or 0.01%, is what you can hope for.

Do researchers have results like these? Are 1% of our ideas good enough to develop a project around them? Can we hope that 1% of those projects will actually lead to new knowledge? If that’s the best a marketer can do, maybe the arduous path to results in research is just part of the human condition.

I think we can do better, though. True, the slow path to results is partially in the nature of research; the quest for new knowledge can’t be a straightforward journey. But it’s also slow because of structural impediments: slow communication and a failure to discuss widely enough what we’ve already tried. Few scholarly journals, for example, are interested in publishing results of what didn’t work; researchers can unwittingly waste effort trying what has already been proven unsuccessful, just because they don’t know about it.

If we build smarter networks and find even better ways of exchanging information, we can inspire each other and straighten the road a bit.

Marketers succeed 0.01% of the time; Edison, with one success in nearly a thousand experiments, already beat them with a rate of about 0.1%. How can we do even better? And how can we tell our story?

UPDATE: On the basis of this entry, Research Europe solicited a column that expands on some of these ideas: Negative Results are Important.



2 Comments

  • It is a very difficult dilemma. If we start publishing every little snippet of what we do, including the failures, or tweeting intermediate results, or something, that may please some politicians because it may make us more transparent, etc.
    But the downside is: it is also going to be a massive waste of everybody’s time. Who is going to read these journals with failed research? Today I tried to see whether a certain concept of feature cooccurrence constraint can be correlated with markedness in the UPSID. It cannot, and I am never going to talk to anybody about it, and this is better for everybody. (I mention it now, as an example.)
    Managers want to understand everything that everybody does at every instant, and many politicians are managers. But leaders have something else: trust. There has to be trust in the people you are working with, that they are serious and that when they fail, this is because the job is too hard.
    I know one cannot ask people for trust; in particular one cannot ask managers for trust. But it seems to me that this is your job. Let the people trust us.

    • curt rice says:

      I do trust you! Your track record makes that an easy and rational decision. And I’m completely sick of all the counting. It’s not the way forward. But the challenge is deep. Chairs have to report to deans, who have to report to rectors, who have to report to ministers. And ministers have to justify budgetary decisions. And the engaged part of the general public wants to know if we’re doing something sensible with “their” money. That expectation doesn’t strike me as unreasonable. The challenge for people like me is to communicate the value of what we’re doing — and, indeed, the “value added” — in a way that doesn’t lead to drowning researchers in counting. You’re right; it’s difficult.


Republish

I encourage you to republish this article online and in print, under the following conditions.

  • You have to credit the author.
  • If you’re republishing online, you must use our page view counter, link back to the article’s appearance here (the counter is included at the bottom of the HTML code), and keep the links from the story. In short, this means you should grab the HTML code below the post and use all of it.
  • Unless otherwise noted, all my pieces here carry a Creative Commons Attribution license (CC BY 4.0), and you must follow the (extremely minimal) conditions of that license.
  • Keeping all this in mind, please take this work and spread it wherever it suits you to do so!