Using 2011-2014 SSEO Verification Metrics To Assess Uncertainty In Severe Weather Forecasts Pamela Eck1 and James Correia Jr.2
1Hobart and William Smith Colleges, 2OU/CIMMS/Storm Prediction Center
Introduction
Results (cont’d)
Data and Methods
(Fig. 1 panel: NSSL-WRF, init 120414/0000, F030)
Acknowledgments
Fig. 1: UH objects >= 4 contiguous pixels (vs. 2 pixels); UH maxima = proxy storm reports
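The object-identification step behind Fig. 1 — keeping only UH exceedance regions of at least 4 contiguous pixels — can be sketched with connected-component labeling. This is a minimal illustration, not the poster's actual code; the UH threshold (75 m²/s²) and the toy grid are assumptions.

```python
import numpy as np
from scipy import ndimage

def uh_objects(uh, threshold=75.0, min_pixels=4):
    """Label contiguous regions where UH exceeds `threshold` and keep
    only objects with at least `min_pixels` grid points.
    Returns a labeled array (0 = background)."""
    exceed = uh >= threshold
    labels, n = ndimage.label(exceed)  # 4-connectivity by default
    sizes = ndimage.sum(exceed, labels, index=np.arange(1, n + 1))
    keep = np.flatnonzero(sizes >= min_pixels) + 1  # object IDs to keep
    return np.where(np.isin(labels, keep), labels, 0)

# toy grid: one 4-pixel object (kept) and one 2-pixel object (dropped)
uh = np.zeros((6, 6))
uh[1, 1:5] = 100.0
uh[4, 1:3] = 100.0
objs = uh_objects(uh)
print(int((objs > 0).sum()))  # 4 pixels survive the filter
```

Lowering `min_pixels` to 2 would keep both objects, which is the comparison the figure caption draws.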
Results
• 3 years of data: August 2011 to May 2014
• Storm Scale Ensemble of Opportunity (SSEO) with 7 members
• 3 seasonal groups: spring (AMJ), summer (JAS), winter (October – March)
• Observed and forecast probabilities are compared using a dichotomous (yes/no) forecast
• Verification metrics are calculated
• 40-km ROI, 120-km Gaussian smoother
• Verified against observed probabilities >= 15%
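The probability-generation step above (mark points within the ROI of a proxy report, then apply the Gaussian smoother) can be sketched as below. This is only an illustration of the technique: the 4-km grid spacing and the interpretation of the 120-km smoother as a Gaussian sigma in grid units are assumptions, not details taken from the poster.

```python
import numpy as np
from scipy import ndimage

def smoothed_probability(hits, grid_km=4.0, roi_km=40.0, sigma_km=120.0):
    """Turn a binary proxy-report grid into a smooth probability field:
    flag every point within `roi_km` of a report, then Gaussian-smooth.
    Grid spacing and sigma conversion are illustrative assumptions."""
    # distance (in grid points) from each point to the nearest report
    dist = ndimage.distance_transform_edt(hits == 0)
    within_roi = (dist <= roi_km / grid_km).astype(float)
    return ndimage.gaussian_filter(within_roi, sigma=sigma_km / grid_km)

hits = np.zeros((101, 101))
hits[50, 50] = 1.0  # one proxy storm report at the grid center
probs = smoothed_probability(hits)
# the dichotomous comparison then thresholds the field, e.g. probs >= 0.15
```

The resulting field peaks at the report location and decays smoothly outward, which is what makes a yes/no comparison at a fixed threshold (e.g. 15%) meaningful.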
• All models display a “check mark” pattern
• SSEO01 performs the best in spring and summer
• Models 2, 3 & 7 UNDER-forecast; Model 4 OVER-forecasts
• Interquartile ranges are largest in the winter
• Models 1 & 7 are the most skillful during all three seasons
• Does not indicate the types of events that have the largest skill
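The UNDER- and OVER-forecasting noted above can be quantified in the dichotomous framework with frequency bias from a 2×2 contingency table — a sketch of the standard metric, with made-up yes/no series for illustration:

```python
import numpy as np

def frequency_bias(fcst_yes, obs_yes):
    """Frequency bias = (hits + false alarms) / (hits + misses).
    < 1 indicates UNDER-forecasting, > 1 OVER-forecasting."""
    hits = np.sum(fcst_yes & obs_yes)
    false_alarms = np.sum(fcst_yes & ~obs_yes)
    misses = np.sum(~fcst_yes & obs_yes)
    return (hits + false_alarms) / (hits + misses)

obs = np.array([1, 1, 1, 1, 0, 0, 0, 0], dtype=bool)
under = np.array([1, 1, 0, 0, 0, 0, 0, 0], dtype=bool)  # too few "yes"
over = np.array([1, 1, 1, 1, 1, 1, 0, 0], dtype=bool)   # too many "yes"
print(frequency_bias(under, obs))  # 0.5 -> under-forecast
print(frequency_bias(over, obs))   # 1.5 -> over-forecast
```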
Fig. 4 Fig. 5
• FSS median values are higher for events with > 40% maximum observed probabilities for all models except SSEO04
• Summer FSS variability is much lower relative to spring for all models
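The Fractions Skill Score (FSS) used here compares two probability fields directly: FSS = 1 − MSE / MSE_ref, where MSE_ref = mean(Pf²) + mean(Po²), giving 1 for a perfect match and 0 for no skill. A minimal sketch (the example fields are made up):

```python
import numpy as np

def fss(p_fcst, p_obs):
    """Fractions Skill Score between two probability fields:
    FSS = 1 - MSE / MSE_ref, MSE_ref = mean(Pf^2) + mean(Po^2)."""
    mse = np.mean((p_fcst - p_obs) ** 2)
    mse_ref = np.mean(p_fcst ** 2) + np.mean(p_obs ** 2)
    return 1.0 - mse / mse_ref if mse_ref > 0 else np.nan

p = np.array([[0.2, 0.4], [0.6, 0.8]])
print(fss(p, p))                 # identical fields -> 1.0
print(round(fss(p, 1 - p), 3))   # anticorrelated fields -> 0.667
```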
Fig. 6 (left)
• The range between maximum and minimum max forecast probabilities is ~30%
• The size of the observed probabilities does not impact the range of the forecast probabilities
Fig. 7 (right)
• An ideal model would have observed probabilities overlapping with the forecast probabilities
• SSEO01 has the best forecasts in spring for events > 40%
Fig. 8
Fig. 2
Figure annotations: UNDER-forecasts / (OVER-forecasts)
*Fraction of events inside the envelope: AMJ = 1 - 31% = 69%; JAS = 1 - 34% = 66%; Winter = 1 - 37% = 63%
Discussion and Summary
Fig. 3
Thank you to the National Oceanic and Atmospheric Administration for funding this research through the Ernest F. Hollings Undergraduate Scholarship Program. Thank you to the National Weather Center for hosting me this summer and to everyone at the Storm Prediction Center for all of their support and guidance. Special thanks to Sandy Sarvis, Victoria Dancy, Jimmy Correia, Patrick Marsh, Greg Carbin, Keli Pirtle, Bill Line, Spencer Rhodes, Kacie Shroud, Matt Flournoy, Matt Brothers, Kyle Chudler, Chris McCray, and Kenzie Krocak.
The previous version of the Storm Scale Ensemble of Opportunity (SSEO) comprised seven models that produce diagnostic fields used to forecast severe weather. Updraft helicity (UH) is particularly useful because it measures rotation in modeled storms. Verifying these UH fields reveals both the skill of the individual models and the uncertainty among them. Quantifying that uncertainty would give forecasters additional information about the tools they use, which could help improve forecasts.
Verification
• SSEO01 had the most skillful forecasts
• UNDER-forecasts: models 2, 3, 7
• OVER-forecasts: model 4
• Spring events and events with observed probabilities > 40% were handled the best by the models
Uncertainty
• The ensemble as a whole does well ~66% of the time, regardless of season
• Range of forecast probabilities is ~30% regardless of the size of the event
• We cannot predict the predictability
Fig. 9
Fig. 8 does not indicate the types of events falling inside the envelope
Spring  Summer  Winter
23%     13%     7%
26%     18%     14%