Monday, October 10, 2016

Biases in Weather Forecasting

When it comes to biases, meteorologists have to deal with at least three.

The first is bias in our modeling systems.  All modeling systems have shortcomings and tendencies.  Some of these are systematic, in that they appear frequently (but not necessarily consistently).  For example, many lower resolution ensemble modeling systems underpredict the frequency of significant precipitation events over the western United States.

Let's take a look at the Global Ensemble Forecast System (GEFS) as an example.  The GEFS is based on a relatively low resolution model and it struggles to predict major precipitation events at mountain locations in the western U.S.  The plot below shows the frequency of occurrence of observed events at SNOTEL stations compared to that produced by the GEFS control and GEFS mean.  The key line is the black dotted one, which is the ratio of the frequency of events predicted by the GEFS control to the frequency observed (known as the frequency bias, scale at right).  Ideally, this dotted black line would be near 1, which would mean the model produces as many events as observed.  In the Pacific Ranges (i.e., the Cascades, Sierra Nevada, and coastal ranges), the GEFS doesn't do too badly for smaller events, but it clearly underpredicts the frequency of events larger than 25.4 mm (1 inch).  In the western U.S. interior, the situation is even worse.
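For readers who like to see the arithmetic, here is a minimal sketch of how a frequency bias is computed. The precipitation values below are made up for illustration and are not from the study; only the 25.4 mm threshold comes from the discussion above.

```python
# Frequency bias: the number of forecast events exceeding a threshold
# divided by the number of observed events exceeding it.
# Values below 1 mean the model underpredicts events of that size.

def frequency_bias(forecast, observed, threshold_mm):
    """Count of forecast events divided by count of observed events."""
    n_fcst = sum(1 for f in forecast if f >= threshold_mm)
    n_obs = sum(1 for o in observed if o >= threshold_mm)
    return n_fcst / n_obs if n_obs else float("nan")

# Hypothetical daily precipitation totals (mm) at one station:
forecast = [2.0, 12.5, 30.0, 5.1, 0.0, 27.9, 8.0]
observed = [3.0, 14.0, 33.0, 6.0, 25.6, 40.1, 9.0]

# 2 forecast events vs. 3 observed events above 25.4 mm -> bias about 0.67,
# i.e., the model produces too few large events.
print(frequency_bias(forecast, observed, 25.4))
```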

Source: Lewis et al. (2016, in prep.)
It is possible to adjust the GEFS forecasts to correct for these biases.  One approach the National Weather Service uses is known as statistical downscaling.  The basic idea is to use gridded, high-resolution analyses of climatological precipitation to add detail to the GEFS forecasts.  If one does this (I'm going to sweep the details of how this is done under the rug), one can significantly reduce these biases.
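One simple flavor of this idea can be sketched as a climatological scaling: multiply the coarse forecast at a point by the ratio of the high-resolution climatology to the coarse model's climatology there. This is only a sketch under that assumption, not the NWS's actual procedure, and all numbers are invented.

```python
# Climatology-ratio downscaling sketch: where the high-resolution
# climatology is wetter than the coarse model's climatology (e.g., a
# mountain ridge), the forecast is scaled up proportionally.

def downscale(coarse_forecast_mm, hires_climo_mm, coarse_climo_mm):
    """Scale a coarse forecast by the high-res/coarse climatological ratio."""
    if coarse_climo_mm <= 0:
        return coarse_forecast_mm  # no climatological signal; leave unchanged
    return coarse_forecast_mm * (hires_climo_mm / coarse_climo_mm)

# A mountain point whose high-res climatology is twice the coarse grid's:
print(downscale(coarse_forecast_mm=10.0, hires_climo_mm=60.0, coarse_climo_mm=30.0))  # 20.0
```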

Source: Lewis et al. (2016, in prep.)
That looks fantastic.  A near-perfect model, right?  Wrong.  There are also random errors.  Perhaps with these adjustments the GEFS gets the number of events of a given size right, but there will still be times when it forecasts an event that doesn't happen, or fails to forecast one that does.
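The distinction between frequency bias and random error can be made concrete with a toy contingency count. In the synthetic series below, the forecast and the observations each contain two events above 25.4 mm, so the frequency bias is a perfect 1, yet one event is a miss and another a false alarm.

```python
# Hits, misses, and false alarms for an event threshold: a forecast can be
# unbiased in frequency while still erring on individual events.

THRESH = 25.4  # mm

forecast = [30.0, 2.0, 26.0, 5.0]
observed = [28.0, 27.0, 3.0, 4.0]

hits = misses = false_alarms = 0
for f, o in zip(forecast, observed):
    f_evt, o_evt = f >= THRESH, o >= THRESH
    hits += f_evt and o_evt                  # event forecast and observed
    misses += (not f_evt) and o_evt          # event observed but not forecast
    false_alarms += f_evt and (not o_evt)    # event forecast but not observed

print(hits, misses, false_alarms)  # 1 1 1
```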

The second is human forecaster bias.  Some meteorologists are prone to overforecast precipitation, whereas others tend to underforecast.  Bias can also change with event size: one's biases for large events may differ from those for garden-variety events.  Meteorologists, like all scientists, need to guard against these biases.  One advantage of consensus forecasts produced by a team of meteorologists is that human biases often cancel, leading to a more objective forecast.  On the other hand, teams are not immune from bias either.

The third is public bias, which is complex and multifaceted.  In part, it is affected by prior forecasts.  People become desensitized by false alarms.  Weather "surprises" erode public confidence.  People are also affected by their experiences (or lack thereof).  For example, some people who have gone through hurricanes unscathed assume they can survive the next one.  Unfortunately, hurricane damage is highly localized and variable, so one's experience in one hurricane may not transfer to another.  Biases also creep in through our communities, TV, and social media.

Hurricane Matthew provides thousands of case studies of all of these biases and how they affect the forecast process and the societal response.
