How often have we heard people lament about the inherent contradictions in cancer research? How often do people throw up their hands and say, “I don’t know what to believe anymore”? Too many people harbour the feeling that whatever a study shows today will be contradicted by a study published next month, and so on and so on.
What too many folks fail to appreciate is that this is actually science and research working as intended. It is rare these days, especially in an area of such immense complexity as cancer research, for any answer to be black and white. It is in the nuances that our knowledge evolves, and so it is natural that today’s understanding may well be eclipsed by a different understanding days, weeks, months or even years later.
This past week, however, we saw the publication of a new breast cancer study on Ductal Carcinoma in Situ (DCIS) that appeared at first glance to be internally contradictory, and which created considerable confusion all by itself.
To be fair, the study itself, entitled Breast Cancer Mortality After a Diagnosis of Ductal Carcinoma In Situ by Dr. Stephen Narod and colleagues in Toronto (published in JAMA Oncology), was not the source of the contradiction. It was journalists, and particularly headline writers, who created the confusion by zeroing in on two different aspects of the study and making them seem like they were linked.
For example, the following two headlines are related to the SAME STUDY cited above:
Early-stage breast condition may not require cancer treatment vs. Ductal carcinoma in situ carries a higher risk of death than previously thought. [N.B. the former title was subsequently changed to read “Doubt Is Raised Over Value of Surgery for Breast Lesion at Earliest Stage”].
We could be forgiven some considerable head-scratching and wondering how these could possibly relate to the very same research study. On the one hand we see the suggestion that DCIS may not require treatment at all, whereas the other headline promotes considerably more fear about the risk of DCIS. To make matters worse, the second headline was actually taken from a press release from Dr. Narod’s own institution, and yet in the first article Dr. Narod was quoted as saying “I think the best way to treat D.C.I.S. is to do nothing.”
So which is it? Is DCIS a benign condition that doesn’t need treatment (according to Dr. Narod) or is it a serious condition that carries a higher risk of death (according to the press release from Dr. Narod’s institution)?
Many articles were subsequently written about this study that did a much better job of explaining the fact that these were not mutually exclusive outcomes. This is not a case of “either/or” but rather a case of “both/and”. The two results being reported are not as contradictory as some of the initial headlines suggested, but in linking the two together huge confusion was created.
What the study actually showed was indeed that DCIS probably needs to be rethought: it may not be simply a breast cancer precursor, or, as is commonly thought, “Stage 0” breast cancer, but may instead be a disease with its own characteristics, its own possibility of progression, and its own risks. And as such, the mortality risk from DCIS by itself may well be higher than had been previously thought.
On the other hand, the same study showed that women who had been diagnosed with DCIS and who had been treated with surgery, or surgery plus radiation, did not fare any better in terms of mortality than women who had not been treated.
These are NOT the same thing.
The problem is that headline writers did not or could not encapsulate the nuance of these two findings, and by picking one focus or the other for the headline, gave an unbalanced view. Subsequent stories such as New DCIS study, news release lead to (very) mixed messages or Study Looks at How Many Women Die From Breast Cancer After a DCIS Diagnosis did a much better job of explaining that these are separate findings, not to be confused with one another.
It is bad enough when independent studies lead patients to different conclusions. That is a necessary, albeit frustrating, aspect of knowledge generation in a field that doesn’t have all the answers. But it is incumbent upon those writing about science and research not to add to the confusion through imprecision about what studies actually do (and do not) show. And in cases like the one cited here, where there are multiple findings that are not mutually exclusive, writers must take more care in elaborating just what all of those findings are telling us.
Erroneously and misleadingly conflating the ideas of “not needing treatment” and “higher risk of death” is enough to send anybody running for the hills, screaming that the scientists don’t know what the hell they are doing.