Friday, September 7, 2018

Not So Deep Thoughts on Reliability and Science

Spurred by a request from a commenter on my previous post, I thought I'd talk a little about evaluating the reliability of evidence and information provided through traditional media, social media, and the Internet. 

We all receive evidence and information on many topics in our day-to-day lives, but our individual knowledge of each of these topics varies.  For me personally, my knowledge of weather and climate is at an expert level in some areas, such as winter storms in complex terrain.  My mind is well tuned in these areas to quickly access knowledge and evaluate evidence and information.  I know where to look for evidence and information that I can't recall precisely.  Although not infallible due to gaps in understanding and human biases, I have a decent shot at being able to critically evaluate evidence and information in this area.  If something doesn't pass my smell test, there's probably a problem, although my nose isn't perfect and sometimes I stick it right in the manure.   

In weather and climate areas outside my specialization, I still have a decent shot at being able to critically evaluate evidence and information, but I often need to work harder to do it.  I don't have as large a knowledge base or as developed a schema in areas such as tropical cyclones or severe thunderstorms.  I do, however, have training that allows me, with effort, to evaluate evidence and information in these areas reasonably well.  Still, if something is really important, I'm probably going to want to consult someone who specializes in the topic.  I'm more easily fooled.

As I slide further outside my area, my training becomes less and less effective.  I learned this quickly many years ago when a family member was diagnosed with cancer.  I dove into the scientific literature thinking that I could quickly get an education and provide some guidance, but that simply wasn't the case.  Although I have some background in the techniques used in medical science (e.g., statistics), I have little to none in others, and it takes time to develop a structured knowledge base that enables critical evaluation.  The better option was to listen to the people who had developed that knowledge base, and fortunately, all turned out well. 

If I go even further afield into topics such as economics, psychology, and the like, the situation is even more hopeless.  Like all humans, I suffer from the Dunning–Kruger effect, which, as noted on Wikipedia, "is a cognitive bias in which people of low ability have illusory superiority and mistakenly assess their cognitive ability as greater than it is."  Thus, if you catch me on the skin track or at a party and bring up economics, international conflict, or the best diet for athletes, I will talk a good game, but in reality, I don't know jack.  I also get a nice emotional boost when I see a social media post or news article that confirms my views.  Confirmation bias is a powerful drug.  I have to work hard to keep it in check even when working in my area of specialization. 

And you should be warned.  Given that I'm now talking about the Dunning–Kruger effect, I should disclose that I have never taken a class in psychology and that my knowledge of it was obtained almost exclusively from Malcolm Gladwell books and the Internet!

Source: Darwin Eats Cake

Which brings us to the reliability of evidence and information.  The world is not black and white, yet we are bombarded daily with evidence and information outside our areas of knowledge and experience.  The human mind is wired to quickly assess this information and decide whether to accept or discard it.  It's impossible to resist!  You are doing it right now as you read this post.

I am no expert in this field (Dunning–Kruger, here we go again), but two critical questions I typically try to ask are who is providing the information and how carefully it has been vetted.   News articles on a recent science publication may contain good information, but they are not vetted at the same level as the original paper and sometimes misconstrue the key findings and significance of the paper.  Social media posts on climate by recognized experts are likely more reliable than those by non-experts, but they can still contain errors in analysis or interpretation. 

This is one of the reasons why I recommend to family and friends that they get their weather information from the National Weather Service and emergency managers.  Social media is fun, but go to Twitter today and you'll find all sorts of tweets on Tropical Storm Florence that do not add up to a collective picture with clear information about what to do.  The National Hurricane Center discussion simply says, "The risk of direct impacts associated with Florence along the U.S. east coast next week has increased.  However, there is still very large uncertainty in model forecasts of Florence's track beyond day 5, making it too soon to determine the exact location, magnitude, and timing of these impacts.  Interests near and along the U.S. East Coast should monitor the progress of Florence through the weekend and ensure they have their hurricane plans in place."

What about blogs like this one?  Well, I'd like to think that the reliability of information on this blog is higher than what you get from the newspaper, but it isn't perfect.  It is written quickly, often using graphics that were generated elsewhere.  Quality control is limited.  I can perform some expert quality control, but even experts can be fooled.  Further, there is no such thing as good writing, only good rewriting, and I do little of the latter.  Finally, experts are not infallible, and I make mistakes. 

A peer-reviewed article likely has higher reliability.  Having a paper examined by two or three experts in the field raises the bar.  It forces you to think more carefully about your data interpretation and analysis.  Hopefully your methods are examined.  In some cases, your work may be checked.  However, this initial peer review is not perfect.  Sometimes (often?) reviewers miss things.  Sometimes the review process is weak. 

The media loves to report on scientific articles when they are released, but at release, they have typically been examined by only a small number of peers.  That's better than none, but give that paper some time.  If it sparks interest in the scientific community, more and more people are going to look at it.  If there are problems, they will likely be uncovered.  If there are good ideas, they will be supported by further data analysis.   Science isn't perfect, but it is self-correcting.  Science is not the truth or a list of facts.  It is a process of seeking the truth. 

Paraphrasing Edward Abbey, "the process of science needs no defense; it only needs defenders."

2 comments:

  1. Nice one Jim! From the heart!

    FYI, I got pounded on the LCC side of Thunder Mountain by an intense thunderstorm (hail, heavy rain, wind, lightning) at 3 pm yesterday. Caught in shorts and a t-shirt with no layers thousands of feet up...the storm cell was just perched right above and not really moving. Blue skies and 90 degrees in the valley...it seemed to sneak in from the SE. Probably that monsoonal moisture you spoke of Wed... I suffered and survived ;)

    1. My students talked about that possibility in our weather discussion at 1 PM. We didn't specifically call for a thunderstorm there, but there were a few things pointing toward it as a possibility, including the higher water-vapor contents over eastern Utah and the weak easterly flow.

      Good you lived. I've only been up there in the winter. Retreat would be difficult on that rock pile.

      Jim
