Studies Say…

I love this quotation (attributed to one Andrew Lang, who was born in 1844): “He uses statistics as a drunken man uses lamp-posts… for support rather than illumination.”

Actually, we all do that from time to time, and political psychologists tell us it is a mark of “confirmation bias”: the very human habit of cherry-picking available information for whatever confirms our preferred worldviews.

Because that is such a common behavior, and because we can easily find ourselves citing to “authorities” that are less than authoritative (and sometimes totally bogus), I’m going to bore you today by sharing information from a very useful tutorial on assessing the credibility of “studies,” as in “studies confirm that…” or “recent studies tell us that…”

Academics who have conducted peer reviews of journal submissions are well aware that many studies are fatally flawed and should not be used as evidence for an argument or as confirmation of a theory. (If I were doing research on voter attitudes and drew my sample, the group of people I surveyed, from readers of this blog, my results would be worthless. While that might be an extreme case, many efforts at research fail because the methodology is inappropriate, the sample size is too small, the questions are posed in a confusing manner, etc.)

The tutorial suggests that journalists intending to cite to a study ask several pertinent questions before deciding whether to rely upon the research:

The first question is whether the study has been peer-reviewed; in other words, has a group of scholars familiar with the field approved the methodology? This is not foolproof (professors can be wrong), but peer review is typically blind (the reviewers don’t know who conducted the study, and the authors don’t know who is reviewing it) and tends to be a good measure of reliability. If the study has been published by a well-regarded academic journal, its conclusions deserve considerable, though not uncritical, weight.

Other important inquiries include looking to see who funded the research in question.

 It’s important to know who sponsored the research and what role, if any, a sponsor played in the design of the study and its implementation or in decisions about how findings would be presented to the public. Authors of studies published in academic journals are required to disclose funding sources. Studies funded by organizations such as the National Science Foundation tend to be trustworthy because the funding process itself is subject to an exhaustive peer-review process.

The source of funding is especially relevant to whether the authors have a conflict of interest. (Remember those “studies” exonerating tobacco from causing cancer? Surprise! They were paid for by the tobacco companies.)

Another important element in the evaluation is the age of the study, since, as the post noted, “In certain fields — for example, chemistry or public opinion — a study that is several years old may no longer be reliable.”

Sample size and the method used to select survey respondents are obviously important (a rough illustration of why sample size matters appears after the quotation below), and statistical conclusions should be presented in a way that allows readers to check the underlying calculations. It’s also worth looking closely to see whether the study’s conclusions are actually supported by the reported data. As the post notes,

Good researchers are very cautious in describing their conclusions – because they want to convey exactly what they learned. Sometimes, however, researchers might exaggerate or minimize their findings or there will be a discrepancy between what an author claims to have found and what the data suggests.
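
To put a rough number on the sample-size point above (this back-of-the-envelope illustration is mine, not the tutorial’s), the familiar margin-of-error formula for a simple random sample at a 95% confidence level is

\[
\text{margin of error} \approx 1.96\sqrt{\frac{p(1-p)}{n}}
\]

With p = 0.5, a survey of 1,000 randomly selected respondents carries a margin of roughly plus or minus 3 points, while a survey of 100 carries roughly plus or minus 10. And the formula assumes random selection in the first place; a self-selected sample, like readers of a single blog, can be badly off no matter how large it is.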

In an information environment increasingly characterized by misleading claims, spin and outright propaganda, the ability to distinguish trustworthy research findings from those that are intellectually suspect or dishonest is fast becoming an essential skill.
