Reading science headlines may cause cancer. Not really. But reading headlines alone — particularly misleading or mendacious ones that distort scientific findings — can cause real harm.
“There is definitely fallout from misunderstanding science,” said Ruth Etzioni, PhD, a biostatistician with Fred Hutchinson Cancer Research Center. “Policymakers who have to evaluate the evidence and find the truth in all of this noise may push for the wrong policies.” And individuals, she said, may reject beneficial interventions and embrace those that are costly and potentially harmful.
Headlines are not the takeaway, Etzioni said.
“They should only be used to decide whether to read the article or not,” she said. “They’re written to grab eyeballs and they’re often inflammatory and not scientific.”
But headlines — often written to gain clicks, not convey information — have become the takeaway in our TL;DR (too long, didn’t read) society.
And it doesn’t stop there.
Spinning Science Series: Read all about it!
- Numbers don’t lie, but sometimes our brains do
- 99 problems when n = 1? Sample size matters
- Coronavirus puts ‘open science’ under a microscope
- Correlation is not (necessarily) causation
- Oversized headlines obscure tiny findings: Types of research studies
Faulty headlines, flawed stories and even puffy PR pieces from research institutions are often picked up and repackaged by news aggregators and health and lifestyle blogs, resulting in even more overhyped headlines and misinformation crowding our social feeds and confusing the public. Those same outlets then push the misinformation out on social media to grab eyeballs. And groups with a particular agenda spread the already distorted information with their own, intentional distortions to support their ideas. It’s like the game Telephone, except instead of making your friends giggle, you accidentally hurt them by passing along bad information.
A recent study on hair dye and chemical straighteners spun so far off course that the people most at risk — black women, who use these products more than others — weren’t even mentioned in most of the coverage. Instead, headlines warned that “Hair dyes could raise risk of breast cancer” or predicted new trends about fearful women “going gray” or took wild liberties with the researchers’ findings. Fast Company even provided a few creative descriptors: “A harrowing study of 46,000 women shows hair dyes are heavily associated with cancer.” Associated, yes. Heavily — if that means strongly — not at all.
But it’s not just that science headlines are overblown or inaccurate. It’s that tiny studies with little statistical heft, or “power,” get huge media attention. Or that extremely complicated research requiring nuance and context is reported with neither. Or that the constant back-and-forth of conflicting studies confuses people so much they start getting all their health and medical advice from Dr. Dre.
It doesn’t help that scientists are often unable to explain their research in a 10-second, lay-friendly sound bite. Instead, they talk about statistically positive linear trends, risk associations, multivariable Cox regression analyses, and relative and absolute risk. It’s no surprise that earnest scientists and public servants at the National Institutes of Health fail to draw the same level of attention and glamor to their work when they’re up against some goopy actress/lifestyle entrepreneur.
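That last pair, relative and absolute risk, is where headlines most often go astray. Here is a minimal sketch with invented numbers (nothing below comes from any real study) showing how the same finding can be framed both ways:

```python
# Invented numbers, for illustration only: the same finding framed as
# a scary relative risk and as a modest absolute risk.

baseline_risk = 2 / 1000  # 2 in 1,000 unexposed people develop the disease
exposed_risk = 3 / 1000   # 3 in 1,000 exposed people develop the disease

relative_increase = (exposed_risk - baseline_risk) / baseline_risk
absolute_increase = exposed_risk - baseline_risk

print(f"Relative increase: {relative_increase:.0%}")  # 50%, i.e., "50% higher risk!"
print(f"Absolute increase: {absolute_increase:.1%}")  # 0.1%, one extra case per 1,000
```

Both lines describe the same data; only one of them makes a good scare headline.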
Epidemiological and clinical studies, usually the research that attracts the most attention from the media, are complicated, and the average reader may not have the chops to cut through the jargon and comprehend the statistics-speak. See Susan Keown’s story on understanding statistics, numeracy and risk models.
Health and science journalists can do that translation for us, turning scientific papers into understandable news stories for lay readers. But general-interest media outlets like newspapers and broadcast news have cut back on their science reporters and reporting. And while many science-focused content providers have sprouted up, they are competing for the same eyeballs and ad dollars as sports and lifestyle outlets — which can lead to shallower dives into the latest science.
Meanwhile, the volume of research papers is exploding, with roughly 3 million published in 2018 alone.
How can the public tell whether good science has fallen victim to an overblown headline or questionable science has gone viral due to clickbait-y coverage?
“The first thing people should do is read the story, not just the headline,” Etzioni said. “Then you should find the study and read that [editor’s note: if you are able — many have paywalls]. Then you may want to go online and see if you can find studies that disagree with it. You may well see that there are just as many studies that find the opposite results. At that point, you might decide to throw up your hands, but don’t. This is the process of science.”
Etzioni said other important questions to ask include: How big is the study? How long did researchers follow participants? How did they measure the things they set out to measure? And are there any conflicts of interest?
Readers, particularly patients trying to better understand medical research, can always rely on sites like NIH.gov or the National Cancer Institute’s cancer.gov for good solid information. But readers should also consider the following questions when reading health and science stories (and the studies they’re based on):
Are we talking mice or men? Scientists have cured mice of cancer hundreds of times. We’re not mice. Don’t get sucked in by breathless headlines about new cancer cures when the therapy hasn’t even been tried on a single human being. Pro tip: “pre-clinical” is another way of saying “no human subjects.” Also, “murine” means it involves mice.
What kind of study is this? Randomized clinical trials, or RCTs, are the gold standard of research and the type of study you would look to for potential new cancer therapies. But that’s only one type. There are also prospective cohort studies, observational studies, meta-analyses, case-control studies, systematic reviews and more. Each has its own place on the research continuum. Knowing the type of study can help you better understand the significance of the finding. Ditto for the various phases of clinical trials: Phase 1 trials are small, preliminary tests of safety; phase 3 trials are large tests of effectiveness; and phase 4 studies track a therapy after it has been approved for use in people. Sci-curious? Check out Kristen Woodward’s explainer on types of scientific studies.
How big is your cohort? Size matters when it comes to many studies, particularly epidemiological ones that tell us what causes disease. Big cohorts give you a broad public health perspective. Small cohorts can be valuable as well, since they provide enticing clues that can be explored through additional research. Sometimes, though, small studies with preliminary findings or even single patient case studies will be given a disproportionate amount of media attention and hype, which usually only serves to raise false hope. Case in point: After Nature Medicine published findings from an “n of one” (single patient) immunotherapy trial, the story went viral, causing many cancer patients to clamor for the exact same treatment. See Jake Siegel’s story on the importance of sample size in scientific studies.
Who are you again? Readers should always consider the source, both the place where the story appeared and the scientific journal itself. Did the story come from a reputable news outlet (think Wall Street Journal, ABC News, NPR, STAT or Kaiser Health News) or from a lifestyle site designed to sell you $600 skin cream? Also, where was the study published? Most of us know we can rely more on The New York Times than, say, the Weekly World News. Scientific journals have their own hierarchy, too, with some considered more “impactful” than others. Was this a well-regarded scientific journal? Was the study peer-reviewed? See Sabin Russell’s take on peer-reviewed journals and how they are being disrupted by open access science.
Is this fake science? Sometimes health stories will bubble up from the bowels of the internet and go viral when there’s not a speck of science behind them. Such is the case with the myth of the cancer-causing bra, which stemmed from a book published 25 years ago. This fake science story was repeated so often and produced so much anxiety, scientists at Fred Hutch finally decided to research it. Their findings, based on interviews with more than 1,000 women: Yes, women wear bras. Yes, women get breast cancer. But no, bras do not cause breast cancer, just like ice cream sales do not cause homicides, even though they both increase during summer. Correlation and causation are a common source of confusion. Sabrina Richards’ story helps clear it up.
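For readers who want to see that ice cream trap in action, here is a minimal simulation, with made-up numbers, in which a hidden third factor (summer heat) drives both ice cream sales and homicides, so the two correlate even though neither causes the other:

```python
# Made-up numbers: summer heat drives both ice cream sales and violence,
# so the two correlate even though neither causes the other.
import random

random.seed(0)
temps = [random.uniform(30, 95) for _ in range(365)]        # daily highs, deg F
ice_cream = [2.0 * t + random.gauss(0, 10) for t in temps]  # sales rise with heat
homicides = [0.05 * t + random.gauss(0, 1) for t in temps]  # so does violence

def pearson(xs, ys):
    """Plain Pearson correlation, no libraries needed."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(f"Ice cream vs. homicides: r = {pearson(ice_cream, homicides):.2f}")
# Prints a clearly positive correlation, with zero causation anywhere.
```

Temperature alone does the work here; that hidden third variable is what researchers call a confounder.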
This article was originally published on February 13, 2020, by Hutch News. It is republished with permission.