KUTV Channel-2 is running a promotion based on the accuracy of Sterling Poulson's weather forecasts. Whenever Sterling's 10PM forecast for the next day's high temperature is within 2F of the actual high, one "lucky winner" will receive a free umbrella. They're calling this Guaranteed Weather.
I'll use this catchy slogan to motivate some discussion about the difference between forecast accuracy and forecast value.
Forecast accuracy is the correspondence between what is forecast and what is observed. Indeed, that is what Guaranteed Weather is measuring. This correspondence can be measured with a number of different metrics, such as the absolute error [i.e., |forecast-observed|] or the squared error [i.e., (forecast-observed)**2]. KUTV is using the former, which is widely used in temperature forecast verification and is simple and straightforward for the general public. The latter is also used occasionally because it more strongly penalizes larger errors.
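As a minimal sketch, here is how those two metrics (and the Guaranteed Weather within-2F criterion) might be computed for a verification period. The forecast and observed values are made up for illustration.

```python
# Hypothetical 10 PM forecasts and next-day observed highs (degrees F)
forecasts = [68, 41, 55, 73]
observed  = [70, 38, 56, 73]

for f, o in zip(forecasts, observed):
    abs_err = abs(f - o)        # absolute error: |forecast - observed|
    sq_err = (f - o) ** 2       # squared error: penalizes big misses more
    within_2f = abs_err <= 2    # the "Guaranteed Weather" criterion
    print(f"fcst {f}F obs {o}F  AE={abs_err}  SE={sq_err}  within 2F: {within_2f}")

# Summary metrics over the whole period
n = len(forecasts)
mae = sum(abs(f - o) for f, o in zip(forecasts, observed)) / n
mse = sum((f - o) ** 2 for f, o in zip(forecasts, observed)) / n
print(f"MAE = {mae}  MSE = {mse}")
```

Note how the 3F miss contributes 3 to the mean absolute error but 9 to the mean squared error, which is why the squared metric is preferred when large busts are especially costly.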
Forecast value is what most of us care about and it measures the economic (or other) value to the customer or end user. Let me give an example. For many end users, the value of a temperature forecast that is within 2F of observed is much greater when it is precipitating and temperatures are near freezing than when it is sunny and in the 70s. When it is precipitating and near freezing, a 2F error can be the difference between bare and wet roads and snow-packed roads. On the other hand, when it is in the 70s and sunny, a 2F temperature error matters little for either commuting or what clothes you wear for the local picnic.
Although economists may argue that anything can be monetized, forecast value is often difficult to quantify and is rarely measured in verification studies. An exception is the use of weather forecasts to project and plan for future natural gas demand, which, because of its use in home heating, is strongly related to temperature. Overestimating natural gas demand typically requires selling the excess at a loss, whereas underestimating requires purchasing natural gas on the spot market.
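The key feature of this cost structure is that it is asymmetric: a shortfall bought on the spot market usually costs more per unit than is lost reselling a surplus. A toy sketch, with entirely hypothetical prices and a made-up function name, might look like:

```python
def gas_forecast_loss(forecast_demand, actual_demand,
                      resale_loss_per_unit=0.5, spot_premium_per_unit=2.0):
    """Cost of a demand-forecast error relative to a perfect forecast.

    Hypothetical illustration: overestimates force selling the surplus
    at a small loss; underestimates force buying the shortfall on the
    spot market at a larger premium. Units are arbitrary.
    """
    error = forecast_demand - actual_demand
    if error > 0:
        # Overestimate: sell the excess gas at a loss
        return error * resale_loss_per_unit
    else:
        # Underestimate: buy the shortfall on the spot market
        return -error * spot_premium_per_unit

# With these assumed prices, missing low by 10 units costs four times
# as much as missing high by 10 units.
print(gas_forecast_loss(110, 100))  # overestimate by 10
print(gas_forecast_loss(90, 100))   # underestimate by 10
```

Under an asymmetric loss like this, the best strategy is not simply to minimize absolute temperature error; a slight bias toward overforecasting demand can be economically optimal.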
For example, based on past relationships between temperature, wind, and natural gas consumption in a Midwestern city, we evaluated the value of forecasts produced by students in my forecasting class this spring for a hypothetical utility here in Salt Lake City. This helps illustrate to students the potential financial value of their forecasts. Below you can see the loss relative to a perfect forecast. The best forecasters in this exercise saved the utility about $200,000 over the automated forecasts produced by model output statistics.
Most meteorologists are content to verify forecast accuracy, and it is an essential part of any good forecast verification system. At the end of the day, however, it is forecast value that really matters to the end user.
BTW, if anyone from Questar is reading, we'd love to do this with real-world numbers in the future...