Thursday, November 30, 2017

The Folly of Betting on the GFS and "DModel/Dt"

Just a quick post today following up on some themes from the past few days' posts.

The graphic below loops through GFS 228-, 204-, 180-, 156-, 132-, 108-, and 84-hour forecasts valid at 0000 UTC 4 December (5 PM MST Sunday).  Imagine trying to forecast for Sunday afternoon based on just this single forecast system.  Good luck.


The loop also illustrates the folly of forecasting based on model trends, or what forecasters call "DModel/Dt" (i.e., the rate of change of the model forecast with respect to time).  Clearly, there is no evidence of a consistent trend in those forecasts.  The pattern is too chaotic, leading to a lack of run-to-run consistency, even in the more recent forecasts.
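
To make the idea concrete, here is a minimal sketch of what a DModel/Dt extrapolation amounts to. Everything in it is hypothetical (invented temperatures for a single point, one run per 24-hour cycle); it simply shows the arithmetic behind the rule of thumb:

```python
import numpy as np

# Hypothetical example: successive GFS forecasts of a single scalar --
# say, 700-mb temperature (deg C) at one point -- all valid at the same
# time (0000 UTC 4 December), from runs initialized 24 h apart.
# The numbers are invented to mimic the jumpiness in the loop above.
lead_hours = np.array([228, 204, 180, 156, 132, 108, 84])
forecasts = np.array([-4.0, -9.5, -2.0, -7.5, -11.0, -3.5, -8.0])

# "DModel/Dt" in its simplest form: fit a linear trend to the forecast
# as a function of initialization time and extrapolate one cycle ahead.
run_age = lead_hours.max() - lead_hours        # hours since the oldest run
slope, intercept = np.polyfit(run_age, forecasts, 1)
next_value = slope * (run_age.max() + 24) + intercept

print(f"fitted trend: {slope:+.3f} deg C per hour")
print(f"extrapolated next run: {next_value:.1f} deg C")
```

With run-to-run jumps this large, the fitted slope reflects chaos rather than a real trend, which is exactly the problem.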

It's fun to talk about DModel/Dt, but studies have found that it has little forecast value for medium-range forecasts.  Hamill (2003) examined the value of DModel/Dt, and here's what the study found:
"Extrapolation of forecast trends was shown to have little forecast value. Also, there was only a small amount of information on forecast accuracy from the amount of discrepancy between short-term lagged forecasts. The lack of validity of this rule of thumb suggest that others should also be carefully scrutinized before use.
Let's put this rule of thumb to bed.

4 comments:

  1. Fascinating! What causes a model not to converge on a solution? Is it that the atmosphere's initial conditions contain a lot of instability in the thermal/dynamical structure? (i.e., lots of "positively tilted" troughs)

    1. In situations where there is a great deal of sensitivity to the initial model analysis, one will see this sort of jumpiness when examining model trends. Now that the forecast lead time is shorter, the models are beginning to converge on a large-scale solution with a trough passage late Sunday, but other aspects of the forecast, such as the timing and structure of the front and the characteristics of the post-frontal environment, are where the uncertainty lies. Think of it like a game of Yahtzee. At extended range, you roll all five dice and just about anything is possible. At medium range, you might be rolling three or four. At short range, one or two. The possibilities become constrained, but you still don't know exactly what will happen.

  2. What about DModel/Dt for ensembles? Think there is any value there?

    1. I'm not sure DModel/Dt is going to be all that useful, but the use of time-lagging to improve ensemble statistics is an ongoing area of work (see the sketch below).

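
A minimal sketch of what time-lagged averaging could look like, with invented values and an assumed per-cycle decay weight (not any operational scheme):

```python
import numpy as np

# Treat the last few runs valid at the same time as a poor-man's ensemble,
# down-weighting older cycles. All numbers are invented for illustration.
lagged_runs = np.array([-8.0, -3.5, -11.0, -7.5])   # newest cycle first
weights = 0.8 ** np.arange(lagged_runs.size)        # assumed decay per cycle
weights /= weights.sum()

mean = np.average(lagged_runs, weights=weights)
spread = np.sqrt(np.average((lagged_runs - mean) ** 2, weights=weights))

print(f"time-lagged mean:   {mean:.1f} deg C")
print(f"time-lagged spread: {spread:.1f} deg C")
```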