We have all seen headlines about something increasing or decreasing your risk of cancer. “Two cups of coffee a day increase your chance of bowel cancer,” the headline screams. Then the next day they are reporting how coffee is an important source of antioxidants and we should be drinking it by the gallon to avoid Alzheimer’s.

It may be no surprise that many science journalists report on these primary studies, often without any scepticism. Now researchers at the University of Bordeaux, France, have shown the extent of the problem.

Using a database of thousands of studies covering psychiatry, neurology and four somatic diseases (breast cancer, glaucoma, psoriasis and rheumatoid arthritis), the researchers looked at the media coverage of each study and at the subsequent research that tested its findings.

They split the studies into two categories: lifestyle (e.g. smoking) and non-lifestyle (e.g. genetic risk). They found that about 13% of initial research on both lifestyle and non-lifestyle factors was reported in newspapers. Only 9% of subsequent research on lifestyle factors was reported, which isn’t too much less. What is more concerning is that only 1% of subsequent research on non-lifestyle factors was reported.

It is important to look at subsequent research because this is what gives scientists confidence that a result found in primary research is real.

This trend was particularly clear when looking at psychiatry. Whereas 234 newspaper articles covered the 35 initial studies that were later disconfirmed, only four press articles covered a subsequent null finding and mentioned the refutation of an initial claim.

Journalists are also far more drawn to “positive” results. Across the entire database, every initial study that received newspaper coverage had a positive finding; none of the initial research with null findings was reported on.

Subsequent studies with null results did receive some media coverage: of the 45 subsequent studies that were reported on, five had null findings, and only ten articles were written about them.

In 2003, a study in Science on how stress and genetics are linked to depression garnered 50 newspaper stories, plus another nine articles when two subsequent studies appeared to confirm the finding. But “newspapers never covered the eleven subsequent studies that failed to replicate this genetic association,” according to the authors.

66% of the subsequent studies examined were confirmed by the corresponding meta-analyses, compared with 33.3% of the initial studies. (The figures are higher for studies published in prestigious journals.) Journalists are preferentially picking initial studies, which are far more likely to be wrong.

Overall, only 48.7% of the 156 studies reported by newspapers were confirmed by the corresponding meta-analyses. Since the studies disproving them go unreported, as far as readers are concerned, all the reported studies were correct.

The study’s lead scientist, Estelle Dumas-Mallet, concludes that “When preparing a report on a scientific study, journalists should always ask scientists whether it is an initial finding and, if so, they should inform the public that this discovery is still tentative and must be validated by subsequent studies.” Of course, their findings hint that few will follow said advice: “Our study also suggests that most journalists from the general press do not know or prefer not to deal with the high degree of uncertainty inherent in early biomedical studies.”

It will be interesting to see whether the results of this study are reproduced, and how any subsequent studies are reported.

 

Source:

http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0172650
