Thursday, September 29, 2011

Red Sox Woes and Weather Forecasting

On September 3, the Boston Red Sox led the Tampa Bay Rays by 9 games for the American League wildcard.  The FiveThirtyEight blog at the New York Times estimated that they had a 99.6% chance of making the postseason on that day.

Yesterday morning, they were tied with the Rays for the wildcard spot.  Later that night, the Rays trailed the Yankees 7–0 in the 8th inning.  The odds of a Rays win (also from FiveThirtyEight)?  0.3%.  As the Rays were chipping away at the lead, the Red Sox were beating the Orioles 3–2 in the 9th inning after a rain delay.  The odds of a 9th-inning comeback by the Orioles?  2%.  But it's even worse than that.  The Rays were eventually down to their last strike: two outs and two strikes on the batter, Dan Johnson, who was hitting .108 for the season and was 1–45.  The odds of the Rays making the playoffs were long indeed.
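Just how long?  A rough back-of-the-envelope calculation, treating the two games as independent (an assumption, but a reasonable first approximation):

0.003 (Rays comeback) × 0.02 (Orioles comeback) ≈ 0.00006, or about 1 chance in 17,000

and that is before accounting for the two-out, two-strike situations within each game.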

So what happened?  The Orioles came back and won the game; the Rays came back, forced extra innings, and Evan Longoria hit a walk-off home run that will go down in Red Sox infamy alongside the home runs hit by Bucky Dent and Aaron Boone.


What does any of this have to do with weather forecasting?  Well, it is the low-probability outlier event that causes meteorologists to lose sleep.  Low probability does not mean zero probability.  How can we best provide useful information about such events to the public and other consumers of weather forecasts?  One doesn't like to sound like Chicken Little, and it is well documented that false alarms erode the effectiveness of weather forecasts and warnings.


On the other hand, a low-probability outlier event can have huge impacts, and we are dealing with an atmosphere that cannot be predicted with precision.  Lake-effect snow is a good example.  The difference between a big event and a few snow squalls depends largely on processes that can neither be resolved by our observing system nor predicted with precision at lead times of more than a few hours.  Most meteorologists go to bed on a potential lake-effect night knowing that there's a small chance we're going to get hammered, but that it is most likely we'll just see some scattered snow showers.  Repeated forecasts of such odds, however, erode public confidence in the forecast, even though such probabilistic guidance is the state of the science today.

There are at least two issues here.  The first is to quantify and improve probabilistic forecasts so that they are more reliable.  Current ensemble modeling systems are helpful, but need substantial improvements.  This is a physical-science challenge.  The second is improving forecast communication, which I see largely as an interdisciplinary challenge.  I think it is clear, however, that the latter must be addressed if we are to maximize the effectiveness of weather forecasts for the benefit of society.
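To make the "reliability" idea concrete: a set of probabilistic forecasts is reliable if events forecast at, say, 20% actually occur about 20% of the time.  Below is a minimal sketch in Python (the forecast and observation numbers are made up purely for illustration) that computes the Brier score and the bin-by-bin comparison that underlies a reliability diagram.

```python
import numpy as np

# Hypothetical verification data (made up for illustration): forecast
# probabilities of an event (e.g., a major lake-effect snowstorm) and
# whether the event actually occurred (1) or not (0).
forecast = np.array([0.05, 0.10, 0.10, 0.20, 0.20, 0.30, 0.60, 0.80, 0.90, 0.95])
observed = np.array([0, 0, 0, 0, 1, 0, 1, 1, 1, 1])

# Brier score: mean squared difference between forecast probability and
# outcome.  0 is perfect; smaller is better.
brier = np.mean((forecast - observed) ** 2)
print(f"Brier score: {brier:.3f}")

# Reliability check: within each forecast-probability bin, how often did
# the event actually occur?  For a reliable system the two numbers agree.
bin_edges = np.linspace(0.0, 1.0, 6)             # five bins of width 0.2
bin_index = np.digitize(forecast, bin_edges) - 1
for b in range(len(bin_edges) - 1):
    in_bin = bin_index == b
    if in_bin.any():
        print(f"{bin_edges[b]:.1f}-{bin_edges[b+1]:.1f}: "
              f"mean forecast {forecast[in_bin].mean():.2f}, "
              f"observed frequency {observed[in_bin].mean():.2f}, "
              f"n = {in_bin.sum()}")
```

Real verification would of course use many more cases and careful event definitions, but even a toy exercise like this shows how over- or under-forecasting at particular probability levels can be detected and, over time, corrected.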

1 comment:

  1. As a broadcaster, I share the same frustration. I could go on, but in a nutshell I think the public wants (and expects) definitive answers. (Not sure who is to blame for this.) They want words like "is coming" and "will" when the meteorologist's wording should actually be "could happen" or "looks like". We have a responsibility to remind people that there is often uncertainty. I would propose going back to a 5-day forecast, then showing another graphic revealing a 'trend' for the following days. I think KING5 in Seattle does this. In this way, we could put out a more confident forecast and hopefully not have as much egg on the face! -Grant Weyman
