Not enough evidence

Be cautious when lack of evidence is described as evidence of “no effect”.

Sometimes, when a study finds that the effect of a health action is uncertain, people say there is “no evidence” that the health action is better or worse than the health action to which it is compared. This lack of evidence is often taken to mean that there is “no difference” between the health actions. This is not right. Without good evidence, we cannot be sure whether there is “no difference”, or whether one health action is better or worse than the other.

Explanation

Systematic reviews sometimes conclude that there is “no difference” between the health actions that are compared. However, studies can never actually show that there is “no difference” (“no effect”).

By convention, if the probability that the results observed in a comparison could have occurred by the play of chance alone is greater than 5% (p > 0.05), the results are considered “not statistically significant”. Trials with “statistically non-significant” results are commonly referred to as “negative”. This is misleading: such results are inconclusive, not “negative”. Often those studies are not big enough either to rule in or to rule out an important difference. Misinterpreting “statistically non-significant” results and failing to recognise uncertainty in estimates of effect can sometimes put researchers off doing further research to reduce the uncertainty. It can also result in delays in the uptake of effective treatments.
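To see why a “statistically non-significant” result is not the same as “no effect”, here is a minimal Python sketch. The numbers are invented for illustration (a treatment that truly reduces the risk of death from 10% to 7%, tested in trials of 200 patients per arm); it is not an analysis of any real trial.

    # Simulation with assumed numbers: a real effect (10% vs 7% risk of death)
    # tested in small trials of 200 patients per arm. Most such trials fail to
    # reach p < 0.05 -- they are inconclusive, not "negative".
    import math
    import random

    def two_proportion_p(e1, n1, e2, n2):
        """Two-sided p-value for a two-proportion z-test (normal approximation)."""
        p1, p2 = e1 / n1, e2 / n2
        pooled = (e1 + e2) / (n1 + n2)
        se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
        if se == 0:
            return 1.0
        z = (p1 - p2) / se
        # Convert |z| to a two-sided p-value using the standard normal CDF.
        return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

    random.seed(1)
    n_per_arm = 200
    true_control_risk, true_treatment_risk = 0.10, 0.07
    n_simulated_trials = 1000
    significant = 0
    for _ in range(n_simulated_trials):
        deaths_control = sum(random.random() < true_control_risk for _ in range(n_per_arm))
        deaths_treated = sum(random.random() < true_treatment_risk for _ in range(n_per_arm))
        if two_proportion_p(deaths_treated, n_per_arm, deaths_control, n_per_arm) < 0.05:
            significant += 1

    print(f"Simulated trials reaching p < 0.05: {significant}/{n_simulated_trials}")
    # Typically well under half, even though the treatment genuinely works:
    # "not significant" here reflects a small sample, not "no effect".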

A survey of systematic reviews published in 2001-2002 found misleading claims of “no difference” or “no effect” in 21% of review abstracts (summaries). In 2017, a survey of systematic reviews found 71 examples of misleading interpretations, including, for example, “evidence for no effect”, “does not affect”, and “found no beneficial or harmful effects”. This suggests that review authors’ misinterpretation of a lack of evidence as “no difference” is an important problem. Surveys of randomized trials, and of press releases and associated media coverage based on the abstracts of randomized trials, show that misinterpretations of results are widespread and that uncertainty is rarely mentioned when conclusions are drawn.

Considering the precision of effect estimates (how much the play of chance affects the results) when making judgements about the certainty of the evidence, and not reporting effects simply as “significant” or “non-significant”, can reduce the chances of being misled by research findings.
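As an illustration of reporting precision rather than a “significant”/“non-significant” label, here is a small Python sketch with invented counts (12 deaths out of 100 with one health action versus 18 out of 100 with the other). The formulas are the standard log-scale approximation for a risk ratio and its 95% confidence interval; the numbers are purely hypothetical.

    # Report the estimate and its precision, not a significance label.
    # Counts are invented for illustration: 12/100 deaths vs 18/100 deaths.
    import math

    def risk_ratio_ci(e1, n1, e2, n2, z=1.96):
        """Risk ratio and 95% CI using the usual log-scale approximation."""
        rr = (e1 / n1) / (e2 / n2)
        se_log = math.sqrt(1 / e1 - 1 / n1 + 1 / e2 - 1 / n2)
        low = math.exp(math.log(rr) - z * se_log)
        high = math.exp(math.log(rr) + z * se_log)
        return rr, low, high

    rr, low, high = risk_ratio_ci(12, 100, 18, 100)
    print(f"Risk ratio {rr:.2f}, 95% CI {low:.2f} to {high:.2f}")
    # Prints roughly: Risk ratio 0.67, 95% CI 0.34 to 1.31
    # The interval is the informative part: it shows how much the play of chance
    # could explain, instead of collapsing the result into "non-significant".
    # The interval includes both a substantial benefit and "no difference" (1.0),
    # so the result is imprecise -- not evidence of "no effect".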

Example

A systematic review of 24 randomized trials of thrombolytic therapy (medicine that dissolves blood clots) given to patients after an acute heart attack found a reduction in deaths that was highly unlikely to have occurred by chance alone. But only five of the 24 trials had, on their own, shown a “statistically significant” effect (p < 0.05). The lack of “statistical significance” in most of the individual trials, and misinterpretation of those results, led to a long delay before the value of thrombolytic therapy was appreciated.
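The Python sketch below shows, with invented numbers rather than the actual thrombolysis data, how pooling several small trials in a fixed-effect (inverse-variance) meta-analysis can give a conclusive overall result even though each trial on its own is “statistically non-significant”.

    # Hypothetical trials: (deaths_treated, n_treated, deaths_control, n_control).
    # Each small trial has a wide confidence interval crossing "no effect" (OR = 1),
    # yet the inverse-variance pooled estimate is precise and favours treatment.
    import math

    trials = [(20, 250, 30, 250), (15, 200, 24, 200), (18, 220, 27, 220),
              (22, 300, 33, 300), (12, 150, 19, 150)]

    def log_or_and_se(a, n1, c, n2):
        """Log odds ratio and its standard error for one trial."""
        b, d = n1 - a, n2 - c
        log_or = math.log((a * d) / (b * c))
        se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
        return log_or, se

    weight_total = weighted_sum = 0.0
    for a, n1, c, n2 in trials:
        log_or, se = log_or_and_se(a, n1, c, n2)
        low, high = math.exp(log_or - 1.96 * se), math.exp(log_or + 1.96 * se)
        print(f"Trial OR {math.exp(log_or):.2f} (95% CI {low:.2f} to {high:.2f})")
        w = 1 / se ** 2                      # inverse-variance weight
        weight_total += w
        weighted_sum += w * log_or

    pooled = weighted_sum / weight_total
    pooled_se = math.sqrt(1 / weight_total)
    print(f"Pooled OR {math.exp(pooled):.2f} "
          f"(95% CI {math.exp(pooled - 1.96 * pooled_se):.2f} to "
          f"{math.exp(pooled + 1.96 * pooled_se):.2f})")
    # Each trial's interval crosses OR = 1 ("not significant"); the pooled
    # interval does not -- combining the trials reduces the uncertainty.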

Remember: Don’t confuse “no evidence” or “a lack of evidence” with “no difference” or “no effect”. And don’t be fooled when someone claims there is “no difference” or “no effect” just because a study did not find a statistically significant difference.
