Dr. Val Jones writes on the blog Science-Based Medicine about the utter pointlessness of trying to reason scientifically with mainstream journalists. Asked to contribute a skeptical perspective to a story on the introduction of Reiki and other energy-based healing treatments for inpatients at a Maryland hospital, she – just to see what would happen – recorded her side of the exchange, which, in essence, was that there is neither evidence nor a plausible explanation for any of these practices.
“I did my level best to be compelling, empathic, and fair,” writes Dr. Jones, “but in the final analysis, not a single word of what I said made it into her article. In fact, the final piece is free of any skepticism whatsoever.” To Dr. Jones, the take-home message from the experience is that:
“blogs like Science Based Medicine seem to offer the only guarantee of unedited rational thought on matters of health and medicine. Thank goodness we’re no longer beholden to mainstream media for all our health news and commentary. It is a shame that most consumers get their news from TV and other outlets that don’t seem to maintain a journalistic quality filter.”
To be fair, she was talking to Southern Maryland Newspapers Online, which doesn’t look like it pays its reporters more than $20–30k a year, if that. But is it reasonable to conclude from this experience that blogs are more rational than newspapers in their coverage of health and medicine – or is a frustrated Dr. Jones turning her own anecdote into data?
A recent paper on accuracy in the media by Brian Trench and Steven Knowlton, two academics at Dublin City University’s School of Communications, offers some perspective on errors in science reporting. Studies in the 1970s, for instance, managed to classify 42 kinds of error; scientists recruited to examine the media coverage “identified on average 6.2 errors per story, with only 9% of the stories held to be free of errors, a much lower rating than for general news.”
A 1990 study found “that 40% of the stories contained statements ‘substantially different’ from the source document,” while a 1995 study “tracked back from media references to studies on breast cancer and found that over two thirds of the citations contained inaccuracies, including shifts in emphasis, reporting speculation as fact and overgeneralising.”
Knowlton and Trench point out that science journalism fails most when it comes to putting data into context; reporters are prone to crib from press releases, which are themselves sources of inaccuracy, and leave it at that. As they note,
“The science-related errors reported here and in other such studies include those of significant omission and absence of qualifying statements, in other words, weak contextualisation. It is in these ‘subjective’ areas that the definitions of accuracy most obviously diverge between journalists and various types of source.”
So yes, it would appear that Dr. Jones’s assessment is more correct than not.