In a series of interviews with New York Times science writer Gary Taubes on scientificblogging, psychology professor Seth Roberts turns to the question of how to judge whether a scientist is trustworthy, especially when the topic is controversial. Taubes responds:
I’m a stickler about the use of words like “evidence” and “proof”. So if someone tells you there’s no evidence for some controversial belief, you can be fairly confident that they’re a bad scientist. There’s always evidence, or there wouldn’t be a controversy. If somebody says that “we proved that this was true” or “we set out to prove that this was true” that’s another bad sign. The point here, as [Karl] Popper noted, among others, is that you can never prove anything is true; you can only refute it. So researchers who talk about proving a hypothesis is true rather than testing it make me worried.
SETH: Yeah, I see what you’re saying. They overstate; they twist things around to make it come out the way they want. They are way too sure of what they…
TAUBES: Yes, and the really good scientists are the ones, almost by definition, who are most skeptical of evidence that seems to support their beliefs. They’re most aware of how they could have been fooled, how they could have screwed up, or how they might have missed artifacts in their experiment that could have explained what they observed. They’re very careful about what they say. If you ask them to play devil’s advocate, and tell you how they could have screwed up, then at the very least, they’ll say “Well, if I knew how I could have done it, I would have checked it before I made the claim”. So when I’m talking about discerning the difference between a good scientist and a bad scientist, I’m talking about how they speak about their research, the evidence itself, its presence or absence.
Worth bearing in mind when you hear something which appears to overturn consensus expressed in strident terms: Were all the other possible explanations for the phenomenon considered? How did the researchers test their theory and data against the best possible countervailing research? Do their conclusions offer better explanatory power?