Yesterday, a breaking story reported that new research links air pollution to higher coronavirus death rates. Here's an example of the story from the New York Times.
They even go so far as to claim, in the caption under the top photo, that Atlanta is likely to suffer more deaths than adjacent Douglas County due to this effect.
I have not had time to read and digest the study. It could be a good one. It might be important. However, when I went to the web site for the paper, I noted that there is a disclaimer immediately under the title that says "This article is a preprint and has not been certified by peer review. It reports new medical research that has yet to be evaluated and so should not be used to guide clinical practice" [my italics].
https://www.medrxiv.org/content/10.1101/2020.04.05.20054502v1
Peer review has been criticized for many reasons, most commonly for the slowness of the process. This is particularly problematic during crises like the coronavirus pandemic, when we want to accelerate the race to understanding and solutions.
It is probably essential that scientists share pre-peer-reviewed results in a transparent way at this time. In fact, this is not entirely unusual. Trained scientists have the capacity to evaluate papers and determine their strengths, weaknesses, and utility. No paper is perfect, so the reality is that we are constantly doing this as part of our research efforts.
What is concerning currently, however, is the sharing of pre-peer-reviewed results and other observations publicly. This has the potential to do serious harm as people look for treatments based on anecdotal findings or understanding based on papers that may contain flaws or misinterpretations.
There was a time when I was taught that one should not publicize results prior to peer review. If you read science articles regularly, you will often see statements like, "in a paper published today in Science", which at least tells you that the paper has been through some level of peer review. It may not have been scrutinized by the broader scientific community, but it has had an initial check.
The paper above might be very good and important. It might be good, but need some changes in interpretation. Perhaps it will be rejected. I don't know. I have some capacity to evaluate portions of the paper, but I don't work with the sorts of datasets that are used for this type of research, so that's an area where I'm blind. I consider my expertise to be specialized and limited when I examine results in other fields.
As we move forward through this crisis, I encourage you to consider the status and significance of stories about scientific findings very carefully. Treat those based on anecdotal evidence with extreme caution and those based on pre-peer-reviewed results with at least some additional checking. For published results, consider that the nuances are often not reported. For information, look to experts who have domain knowledge and have sifted through the full spectrum of scientific results. The scientific process isn't perfect, but this helps you separate the wheat from the chaff and enables you to move forward with the best available information.
Yes, I've noticed people frequently linking to articles posted on Medium. Same problem there. Some of what's there is interesting, but even Medium itself makes a point of saying it hasn't been through any review process.
Thanks for this comment. I just finished my PhD at the U last year, and what you said about peer review is spot on.
I had never attempted to publish before then, so I held some simplistic and idealized views of the publishing and peer review process. I have since come to understand why it's important, because so many people want to latch onto bad papers or a single hint of evidence without seeing the process through. But peer review is also more cumbersome than I expected. Still, it's the best we've got.
I've been finding myself reading COVID19 preprints daily to get a feel for the "end game" of this pandemic. You're absolutely right that "Trained scientists have the capacity to evaluate papers and determine their strengths, weaknesses, and utility. No paper is perfect, so the reality is that we are constantly doing this as part of our research efforts. What is concerning currently, however, is the sharing of pre-peer-reviewed results and other observations publicly. This has the potential to do serious harm as people look for treatments based on anecdotal findings or understanding based on papers that may contain flaws or misinterpretations."
Even peer review, however, does not make "truth". It merely means that a paper has passed peer review. Finding "truth" requires many things: reproducibility of claimed results, a healthy peer review process/community, tolerance for skeptics attempting to prove the consensus wrong, and so on.
When I saw the above paper all over the press, I was reminded of a somewhat similar paper linking air pollution with miscarriages. While it had indeed passed peer review, the paper smacked of researchers knowing the conclusion they were after and then finding data to match it. The PI had in fact claimed to have anecdotally observed the connection prior to conducting the research - which set off alarms in my mind. How could one accurately observe such a thin connection? In that case they did find a correlation between air pollution and miscarriages, but it was a convoluted correlation involving one particular pollutant and a shift in time (if I recall correctly). And, of course, correlation is not causation - which the paper may have been careful to avoid implying, but subsequent press releases certainly were not.
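To make that last worry concrete, here is a toy sketch in Python. Everything in it is made up - the variable names, the random data, the number of pollutants and lags - and it is not the analysis from either paper. It simply shows how searching across several pollutant series and time shifts will usually turn up a respectable-looking correlation even in pure noise, which is why a "convoluted" correlation of that kind deserves extra skepticism.

import numpy as np

rng = np.random.default_rng(42)

n_months = 120        # ten years of hypothetical monthly data
n_pollutants = 8      # several hypothetical pollutant time series
max_lag = 6           # try time shifts of 0 to 6 months

# Outcome and pollutant series are pure noise: there is no real relationship.
outcome = rng.normal(size=n_months)
pollutants = rng.normal(size=(n_pollutants, n_months))

# Search over every pollutant and every lag, keeping the strongest correlation.
best_r, best_combo = 0.0, None
for p in range(n_pollutants):
    for lag in range(max_lag + 1):
        x = pollutants[p, :n_months - lag]   # pollutant shifted back by `lag` months
        y = outcome[lag:]
        r = np.corrcoef(x, y)[0, 1]
        if abs(r) > abs(best_r):
            best_r, best_combo = r, (p, lag)

print(f"strongest correlation found in pure noise: r = {best_r:.2f} "
      f"(pollutant {best_combo[0]}, lag {best_combo[1]} months)")

Run it a few times with different seeds and the "best" pollutant and lag keep changing - which is exactly the tell that such a correlation, on its own, says very little about causation.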