Sloppy Science? System Failure? Or…?

This week, something on the order of 15,000 to 20,000 of the world’s top cancer researchers, from PhD students all the way to Nobel laureates, are gathering in Chicago for the Annual Meeting of the American Association for Cancer Research (AACR). Although I am not attending this year, if the current program resembles those of years gone by, you can be sure that there will be hundreds, if not thousands, of scientific presentations describing potential new targets for cancer therapy. Some of these will be in huge plenary sessions, some will be oral talks in mini-symposia, and many will be presented in concurrent poster sessions.

Why? As we learn more and more about what makes cancer cells tick, we are discovering more and more pathways that are implicated in the origin or the continuation of the cancer “state”. Every molecule that gets identified as being part of one of these pathways is, at least potentially, a target for intervention – either to turn it off (if it is implicated in the initiation or progression of cancers), or to turn it on (if it is implicated in some “protective” mechanisms against cancers), and so on.

And every one of these studies could be important in its own right since each one adds to our burgeoning understanding of the molecular basis of cancers.

And some of these might turn out to be even more important if the so-called “target” can be validated as really being involved in cancer causation (as opposed to being an incidental bystander).

But the real “jackpot” comes when one of these targets is not only validated as being important and centrally involved in one cancer or another, but is also considered to be a so-called “druggable” target. By that, we mean that we expect to be able to discover or develop a drug, usually a small molecule or an antibody, that can then interfere with, or in some other way modulate, the cancer state and thus be an effective anti-cancer therapeutic.

The success of this model has been evidenced by drugs like imatinib (Gleevec), a small-molecule drug, and trastuzumab (Herceptin), a monoclonal antibody. The search for the next “druggable targets” and the subsequent discovery of the next Gleevec or the next Herceptin continues to drive preclinical laboratory-based research the world over, since those avenues are where many of the next cancer therapeutics are expected to originate.

Undoubtedly, only a small number of these putative targets will actually traverse that magic line from being a preclinical “observation” to actually being of demonstrated clinical utility. This is the realm of so-called “translational research” – to translate, or move forward, research from the lab so it can end up in the clinic for the care and treatment of real patients in the real world.

That path is often a long and arduous one, mind you, fraught with frustration, but every long journey starts with a single step, as they say. Still, there will be palpable excitement as more and more of these potential targets are described, understood and tested for clinical utility.

Last week, however, a bucket of cold water was thrown on a number of highly touted studies that had presumably shown great promise of such translation into the clinic.

In a Commentary entitled “Drug development: Raise standards for preclinical cancer research”, published in the respected journal Nature on March 29, 2012, authors C. Glenn Begley and Lee M. Ellis reported that, sadly, not only have the vast majority of such studies NOT resulted in translation into the clinic, but worse, they said, reputable scientists working at pharmaceutical or biotech companies have not been able to replicate most of the results that had been lauded at one time as potential “breakthroughs” (italics mine).

In total, they reported that the findings of at least 47 out of 53 publications – all from reputable researchers and published in reputable peer-reviewed scientific journals – could not be replicated during the time one of the authors (Dr. Begley) was head of research at the biotech company Amgen.

This rather shocking finding prompted the authors to make some specific recommendations to try to ensure that this situation did not persist.

And it prompted an Editorial in the same issue of Nature, entitled “Must Try Harder”, which opined that “too many sloppy mistakes are creeping into scientific papers. Lab heads must look more rigorously at the data — and at themselves”.

The Editorial went on to say:

[This] “Comment article … exposes one possible impact of such carelessness. Glenn Begley and Lee Ellis analyse the low number of cancer-research studies that have been converted into clinical success, and conclude that a major factor is the overall poor quality of published preclinical data. A warning sign, they say, should be the “shocking” number of research papers in the field for which the main findings could not be reproduced. To be clear, this is not fraud — and there can be legitimate technical reasons why basic research findings do not stand up in clinical work. But the overall impression the article leaves is of insufficient thoroughness in the way that too many researchers present their data.”

Please do note that, as the Editorial says, no one is suspecting, suggesting or accusing anyone of fraudulent behaviour. And indeed there are many potential legitimate explanations for why not all results can be reproduced. But the publication of this Commentary and the accompanying Editorial has certainly ignited a firestorm of subsequent comments, newspaper articles, blog posts and Twitter activity.

I found one online response to the Nature Editorial to be particularly telling, especially since it came from a friend and colleague whose opinions I respect immensely. Dr. Jim Woodgett, Director of Research at Toronto’s famed Samuel Lunenfeld Research Institute at Mount Sinai Hospital, wrote:

“The issue with inaccuracies in scientific publication seems not to be major fraud (which should be correctable) but a level of irresponsibility. When we publish our studies in mouse models, we are encouraged to extrapolate to human relevance. This is almost a requirement of some funding agencies and certainly a pressure from the press in reporting research progress. When will this enter the clinic? The problem is an obvious one. If the scientific (most notably, biomedical community) does not take ownership of the problem, then we will be held to account. If we break the “contract” with the funders (a.k.a. tax payers), we will lose not only credibility but also funding. There is no easy solution. Penalties are difficult to enforce due to the very nature of research uncertainties. But peer pressure is surely a powerful tool. We know other scientists with poor reputations (largely because their mistakes are cumulative) but we don’t challenge them. Until we realize that doing nothing makes us complicit in the poor behaviour of others, the situation will only get worse. Moreover, this is also a strong justification for fundamental research since many of the basic principles upon which our assumptions are based are incomplete, erroneous or have missing data. Building only on solid foundations was a principle understood by the ancient Greeks and Egyptians yet we are building castles on the equivalent of swampland. No wonder clinical translation fails so often.”

As someone who ran the research operations of two major Canadian national cancer research funding agencies over the past two decades, I wonder if my own organizations have inadvertently been “complicit” in this. We always tried our very best not to “over-hype” any results from investigators we funded, but there is always a need, especially in a national health charity, to “excite” the public and the prospective donor, and to be accountable to previous donors by showcasing for them any success their generosity has won.

Perhaps we all need to take a closer look at the pressures we place on researchers globally to “publish or perish”. Are our incentives and the way we measure “success” all wrong?

Perhaps, indeed, it is long overdue that we take a very hard look at how we conceive, fund, undertake, promote and analyse cancer research results, and how and what we value in cancer research and in cancer researchers.

If you enjoyed this post, please consider sharing it, leaving a comment or subscribing to the RSS feed to have future articles delivered to your feed reader.

Comments


  1. The responsibility of research lies at every link of the chain: from the trainee who is usually conducting the work, to the supervisor who usually takes credit for the work, to the institution that advertises the work, to the funders who must justify their investment. At each level there are inherent conflicts of interest: the student who wants to finish their PhD, the supervisor who needs the publication to support a grant application, the institution raising dollars to keep the lights on, and the funder who must tread a fine line in communicating what they do to their donors or government. While I don't think the problem is enormous, it only takes a few bad apples, over-eager participants or simply poor judgement to taint the whole field. This is perhaps most important in cancer, where promises of research breakthroughs and cures have populated the vernacular since Nixon's war on cancer. How do we balance the need for research with the need for dollars? If we over-promise (and we do), then at some point the public becomes saturated. Yet hope lives eternal.

    Michael – great to see you blogging and poking. Too much is left unsaid or behind closed doors. Research is too important to be hidden.

  2. Jim,

    Thanks for the cogent thoughts – a logical extension to the note you uploaded on the Nature editorial quoted above. I agree – it doesn't take too much to taint the field.

    The other part of this that concerns me, however, is public reaction. Faith in all things science is not exactly at a high – especially in conservative circles. While I don't think a lot of folks in the public read Nature per se, the conversations about this certainly hit the popular press. I don't want to see unfiltered messages taint the public's confidence. As we both know, we are at a huge tipping point in cancer research and we need more, not less, public (and government) support.

    One of the reasons I am writing this blog :)

    In that latter regard, thanks for your encouragement. Now all I need is to build up some readership! :)

  3. I thank you for sharing your expertise in the matter, but find this extremely upsetting, as we're dealing literally with life and death. I wish I could just say this inattention to detail and sloppiness remained in the cancer labs, too–but it's become a phenomenon all over the scientific community that results can't be replicated, which is quite alarming. I was disturbed enough to research a number of the areas in which this is occurring, and put them in a post (you can see "Play It Again, Sam: Trying–and Failing–To Reproduce Scientific Results" at http://wp.me/p22afJ-EC, if you've a mind to). What can a lay person do about it?
