Misinterpreted data and unsubstantiated conclusions plague press and social media. What can journalists do to stop them? Quill asked Rob Pyatt, who has presented workshops focused on teaching critical thinking skills, to chime in on the subject. Pyatt, an assistant professor in the New Jersey Center for Science, Technology and Mathematics at Kean University, is certified in Clinical Molecular Genetics and serves as a director of the Oxy-Gen Laboratory in Norcross, Georgia.
“World’s First Human Head Transplant a Success, Professor Says.”
The New York Post headline was certainly attention-grabbing. And, technically, correct. Sergio Canavero did, in fact, remove the head from one body and connect it to another.
But the Post headline writer conveniently left out a key element.
The bodies were corpses.
The Post was hardly the only news organization to get readers to head off in the wrong direction. Websites including The Telegraph also used misleading headlines that failed to mention that the work was done on cadavers.
Disturbing? To me, yes. And not for the obvious reasons.
I’m a scientist and I get it. Talking about science is hard. Science is the process we use to investigate our world. Science journalists are tasked with translating this complicated and nuanced practice to the general public in an interesting way.
But the rules aren’t the same as they used to be. In a 2009 piece for Nature, a science journal, Boyce Rensberger described science journalists as transitioning from “cheerleaders to watchdogs.”
In his view, contemporary science journalists must manage new technologies and new media, understand the science and the people doing it, and look ahead to the potential social impacts of that work.
All that, but with less copy space.
So, yes, I appreciate the challenges. But that doesn’t make me less appalled when I see science misrepresented and news stories misinterpreting data or drawing unsubstantiated conclusions.
It’s bad enough when your cousin forwards nonsense online articles, such as “Science Proves Cats Don’t Make You Crazy.” But journalists should hold themselves to a higher standard.
Here are just some of the common problems I’ve seen — and some tips on how to avoid them.
Overhyping the Headline

Headlines directly determine the number of people who will pause to read an article and how those people interact with and remember a piece. Headlines are typically simple and direct, but they also need to accurately reflect the content and tone of the article.
Our clickbait society is obsessed with absolutes and broad generalizations, using words like “all,” “always,” “must” or “never” to entice us to push the button. Science is rarely (if ever) absolute, and such terms need to be avoided in science writing.
A recent New York Post article announced, “Ozzy Osbourne is a Genetic Mutant, DNA Research Proves.”
My problem isn’t with Osbourne or his music. (I’m a huge fan of his “Bark at the Moon” album.) It’s the word “proves” that bothers me. Proof implies unqualified truth, and in science there is always at least some small amount of doubt. While the sequencing of The Prince of Darkness’s genome may have uncovered some interesting variation in his genetics, it hasn’t proven anything.
Extreme terms such as “Holy Grail” — as in Health News Review’s “Holy Grail Cancer Test? Hold On. Here’s What You Need to Know” — should likewise be avoided.
Discoveries of Holy Grails only happen in Indiana Jones movies. Describing important scientific breakthroughs requires more subtle language, as was demonstrated by the University of Queensland’s news bureau in a story on the same discovery. “Nano-signature Discovery Could Revolutionize Cancer Diagnosis” may not earn as many clicks, but it’s factually sound and not misleading.
Similarly, the term “cure” should be used with caution. The discovery of absolute cures for diseases is extremely rare, and a headline touting such a breakthrough could give someone with that condition unwarranted hope. That hope will likely be dashed when the nuances of the science are explained within the body text.
A Guardian article covering a new immunotherapy for cancer used the headline “The Closest Thing Yet to a Cure for Terminal Cancer?”
As you would expect from Betteridge’s Law of Headlines, the answer to that question is “no.”
Leaving Out the References

A great science article stimulates your curiosity to know more about the subject matter. As scientists, we’re trained to look to citations and references for deeper detail, so nothing is more frustrating than reading an article that lacks any notation for the scientific work it’s covering.
A recent article from MarketWatch made the bold statement: “Driverless Cars will Lead to More Sex in Cars, Study Finds.”
“Really?” I thought, and then immediately wanted to know more about how the study came to that conclusion, including how it was conducted, how the subjects were selected and what controls were used (see sidebar on controls and study population).
I excitedly dove in, only to discover that the article was a complete dead end. There was no mention of the journal where the study was published, the specific volume or issue, or the names of the scientists who authored it.
Providing references or links in electronic formats makes the foundation of science writing transparent. “Study: Porn Superfans are Less Sexist than Your Average Man,” a recent article at Jezebel.com, was equally bold. But the online publication backed up its assertions with the journal name where the study was peer reviewed and published (Sociological Forum), along with links to the journal article itself, links to previous science articles and editorials from the same research team, and articles by other scientists working on the same research topic.
Copying the Press Release
The point of a press release is promotion. Press releases are designed to be biased, small, easily digested packets of hype, crafted to spread as easily and as far as possible. All too often, time-pressed science journalists treat a press release as a CliffsNotes synopsis of their subject and use it as their primary or sole source for a story. This copy-and-paste approach eliminates the intellectual analysis integral to science journalism and reduces the practice to parroting.
A 2009 study conducted by doctors Steven Woloshin and Lisa Schwartz found that press releases originating from academic institutions often exaggerated the significance of the scientific findings they were promoting. In other words, press releases often aren’t reliable sources.
Yet, they are often treated as such.
In 2015, the University of Maryland issued a now-deleted press release that claimed a single brand of chocolate milk could aid in the recovery from a concussion. News this astounding was quickly spread by the media, culminating in the superintendent of one Maryland school system promising to spend $25,000 to make this brand of chocolate milk available to all student athletes.
In reality, the press release was overhyping bad science. The study claimed to find evidence that drinking Fifth Quarter Fresh brand chocolate milk improved test scores and concussion-related symptoms in high school football players. But the study design was flawed: only that one brand was tested, with no comparison group, alternative treatment or competing brand included (see the sidebar on the use of controls).
The study had not undergone peer review or been published (see sidebar on peer review). As if that weren’t bad enough, the lead scientist failed to disclose that he had received funding from the Fifth Quarter Fresh dairy, which further biased the study.
When science isn’t backed by evidence, it erodes public trust in both the scientific process and journalism.
Consulting Outside Experts
In an exercise to show how easy it is for bad science to be promoted by the media, John Bohannon and a team of German researchers conducted a sham study that tested chocolate as a dietary supplement.
The study design was intentionally flawed, with too few subjects and misapplied statistical analyses. The team submitted its article, “Chocolate with High Cocoa Content as a Weight-Loss Accelerator,” to a predatory journal that, in return for 600 euros, would publish it without peer review.
With the publication of the article and a corresponding press release ready, the media frenzy was off. Newspapers, magazines, TV stations and morning talk shows covered the story without seriously questioning the science behind it or consulting outside nutrition experts to gain their perspectives on the study’s validity.
According to Bohannon, the point of the exercise was to highlight journalistic laziness. Most scientists worth their salt would have identified at least some of the study’s problems, and, in fact, some readers did via online comments posted with various media stories. But none of the reports covering the story brought in an outside expert to review the work.
In an article describing his deception, Bohannon commented that science journalists “have to know how to read a scientific paper — and actually bother to do it.”
But journalists don’t need to shoulder the burden of scientific assessment alone. There are resources and experts at every writer’s disposal. In 2015, Tara Haelle at NPR was set to report on a story claiming that a childhood vaccine could prevent leukemia. She had interviewed the lead scientist behind the exciting academic press release.
With some questions still lingering in her mind, she turned to a group of outside scientists for clarification. Their unanimous opinion was that nothing in the work showed that the vaccine reduces leukemia risk or prevents it. As Haelle commented in her story on the experience, “Be wary of grand claims, get outside perspectives on new research and never, ever rely only on the press release.”
Going to the Source
In January 2008, headlines in outlets like The Telegraph blared, “Children are Scared of Hospital Clowns.”
Reuters proclaimed, “Don’t Send in the Clowns,” the Digital Journal announced, “Study Shows Children Fear Clowns,” and the chorus continued from print media to television with the Today Show and Good Morning America carrying the story.
Circuses and parties aside, working in hospital wards to cheer up sick children remains a long-standing tradition for clowns. Had parents and medical practitioners been wrong about the benefits of these clinical merrymakers?
That would seem to be the conclusion from these stories. But to begin to evaluate the claims of these reports, you have to trace them back to their original source.
That source? It was a press release from the University of Sheffield, describing a study on hospital décor.
To be clear, it was not a study on actual clowns or their effectiveness in the medical setting. It was a study of hospital decorations. The “Space to Care” study described in the press release had surveyed 250 kids, ages 4 to 16, about “improving hospital design for children.” It was published as a half-page piece in Nursing Standard magazine without undergoing peer review. The piece said nothing about the study’s methods: whether the subjects were chosen appropriately, how many survey questions asked about clown pictures, or whether those questions were written in a scientifically sound way.
But the unsubstantiated headline was an attention-getter: “No More Clowning Around — It’s Too Scary.”
That’s in marked contrast to a 2013 scientific study that underwent peer review and was subsequently published in BMC Pediatrics. It surveyed clowns, nurses and parents in Germany to see if they felt hospital clowns were beneficial. Both parents and hospital staff reported they felt the interaction with kids and clowns was positive, specifically resulting in reduced stress and boosted mood.
That study received little or no press coverage.
Of course, journalists shouldn’t shoulder all of the blame.
Scientists need to make greater efforts to free themselves from jargon. Jargon may be convenient in discussions with scientific peers, but it’s a barrier to communicating with journalists and the general public.
Scientists can also work more openly with their colleagues in journalism and communication to make themselves open and available as resources. And they can constructively point out to journalists — preferably through email or other direct contact, rather than in comment sections — when mistakes are made.
For example, scientists behind a complex genetics study published in 2019 put together an FAQ for the media in simple-to-understand terms explaining how they did their work and their interpretation of the results. The peer-reviewed scientific paper covering their work was 12 pages long, and the FAQ was 17 pages.
Overkill, perhaps, but such concentrated effort serves the scientists, the journalists and the public.
SIDEBAR: Science Warning Signs
Not all science is created equal and, unfortunately, much of it is subpar. As with cheese, you often can’t tell whether what you smell is good or bad unless you’re an expert.
Below are some warning signs to help journalists identify questionable science.
Correlation Does Not Equal Causation: Correlation refers to the use of statistics to determine whether two variables are related. Causation takes the relationship a step further: not only are the two things related, but changing one causes a corresponding change in the other. Just because two things look correlated statistically doesn’t mean they share a cause-and-effect relationship.
It’s the causation that we should get excited about, but all too often it’s the correlation that gets the headline.
Example: In 2012, Franz Messerli wrote a paper for the New England Journal of Medicine describing a correlation between a country’s chocolate consumption and the number of its citizens who win Nobel Prizes. (Yes, I am drawn to studies dealing with desserts.) While chocolate is delicious, a country’s Nobel tally is driven more by socioeconomic status and investment in research and education than by the consumption of any particular confection.
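The gap between correlation and causation is easy to see numerically. Here is a minimal sketch using made-up figures (not data from the Messerli paper): two series that merely rise together correlate almost perfectly despite having no causal link.

```python
import numpy as np

# Hypothetical, invented figures for five imaginary countries:
# per-capita chocolate consumption (kg/year) and cumulative Nobel laureates.
# Both tend to rise with national wealth, so they track each other
# without chocolate causing prizes (or vice versa).
chocolate = np.array([2.0, 4.5, 6.3, 8.8, 11.9])
nobels = np.array([1, 5, 9, 14, 25])

# Pearson correlation coefficient between the two series
r = np.corrcoef(chocolate, nobels)[0, 1]
print(f"Pearson r = {r:.2f}")  # close to 1.0, yet no cause and effect
```

The near-perfect r here reflects nothing more than a shared upward trend, which is exactly the trap the headline writers fall into.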
Sample Size is Too Small: Because conducting a study on an entire population of people is typically impossible, scientists instead use a smaller group of subjects for their research. The same applies to observations in an experiment. Using a sample that’s too small decreases statistical power, increases the margin of error and produces unreliable results.
Example: A 2018 study describing the time required to pass a Lego after it was swallowed used a sample of only six individuals ranging from 27 to 45 years old. You can’t build much with only six Legos.
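The link between sample size and margin of error follows from the standard-error formula, SE = s/√n: precision improves only with the square root of the number of subjects. A minimal sketch, using a purely hypothetical standard deviation rather than the Lego study’s actual data:

```python
import math

s = 10.0  # hypothetical standard deviation of the measurement (assumed)
for n in (6, 60, 600):
    se = s / math.sqrt(n)  # standard error of the mean shrinks as 1/sqrt(n)
    moe = 1.96 * se        # approximate 95% margin of error (normal approximation)
    print(f"n={n:>3}: standard error = {se:.2f}, margin of error = +/-{moe:.2f}")
```

Going from 6 to 600 subjects cuts the margin of error tenfold; with only six, any estimate is ten times fuzzier.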
Sample is Not Representative: The selection of a sample should be unbiased so it represents as closely as possible the greater population from which it was taken. As with sample size, if a sample isn’t truly representative, then study results can’t be extrapolated to the larger population and are unreliable.
Example: A 2011 study examining the personalities people give to their robot vacuum cleaners surveyed six participants, including two women and four men who all shared the characteristics of having a busy schedule, a background in or an affinity for technology, and all of whom were Dutch. That study group is hardly representative of anything.
Inappropriate Use of Controls: The use of controls in experiments is important so that investigators can reduce the number of variables impacting the experiment outcome to only those they are testing. Controls can be negative, where no response is expected, or positive, where a particular response is counted on.
Example: To test the Danish myth that alcohol can be absorbed through the feet, researchers had three adults submerge their feet in vodka for three hours and then observed them for signs of intoxication. No controls were used. A better design would have included a negative control, in which people submerged their feet in a liquid such as water that contained no alcohol and wasn’t expected to cause intoxication, and a positive control, in which people drank the vodka, showing it was effective via the traditional route of administration.
Unsupported Conclusions: The conclusions section of a scientific paper is where authors take the results of their experiments and generalize their importance in a wider context. Sometimes scientists get carried away, overreaching with statements that are too broad and aren’t supported by the data from the study.
Example: In the paper testing the Danish myth of alcohol absorption through the feet (noted above), the authors conclude that based on their results “Brewery workers cannot become intoxicated by ‘falling’ into a brewery vat.” While funny, this conclusion can’t be made based on the study results, as it only tested alcohol absorption through the feet and not other, more obvious routes.
Predatory Journals and Lack of Peer Review: Peer review is a method used in science to assess the validity and quality of research through examination of the work by independent scientists with similar expertise. It is far from a perfect process, but it is one of the main ways science polices itself and separates good work from bad. Some questionable scientists’ desire to avoid peer review has led to an explosion of predatory journals that will publish any study as long as the fees are paid. These pay-for-play schemes bypass peer review and corrupt the scientific process.
Example: The American Journal of Medical Sciences and Medicine sounds like a prestigious medical journal until you realize that until recently it listed as its editor-in-chief a doctor who died in 2013. Science that doesn’t go through peer review is neither evidence based nor credible.