Transcript
  • Verification and Calibration of Simulated Reflectivity Products During DWFE

    Mark T. Stoelinga, University of Washington
    Thanks to: Steve Koch, NOAA/ESRL/GSD; Brad Ferrier, NCEP

  • Hurricane WRF (Chen 2006, WRF Workshop)

  • 2006 NOAA/SPC Spring Program

  • 2005 DTC Winter Forecast Experiment (DWFE) (Koch et al. 2005)

    [Figure panels: WRF-ARW SR; WRF-ARW 3-h Precip; Observed Composite Reflectivity; WRF-ARW 700-hPa winds/RH]

  • Variational Data Assimilation: What is the best forward operator to use as a bridge between observed radar reflectivity and the model microphysics?

    Forecaster Testimonials:

    "(We) liked the 4 km BAMEX model run and DON'T want it to go away. The reflectivity forecasts were really very helpful, and almost uncanny." - NWS Forecaster after the BAMEX field study

    "Love the reflectivity product!" - NWS Forecaster after DWFE

    However: "Before any meaning can be ascribed to the Reflectivity Product for the purpose of interpreting mesoscale model forecasts, it is important to understand how it is determined." - Koch et al. (2005)

  • Study Goals

    Using archived forecast model runs and observed reflectivity from DWFE, examine Simulated Reflectivity (SR) from two different perspectives:

    Use statistics and direct examination to see where and why different SR products resemble or differ from observed reflectivity.

    Consider the question: If it can be shown that there is a systematic error in a particular SR product, such that the SR product consistently produces too much or too little of a given reflectivity value, can the SR product be calibrated to more closely match the observed radar reflectivity?

  • Data Sources

    - Archived gridded forecast model output from DWFE
    - Archived observed and simulated composite reflectivity imagery
    - 3-D gridded observed reflectivity from the National Mosaic and Multi-Sensor Quantitative Precipitation Estimation (NMQ)

    Thanks to DTC and NSSL

  • 13 February 2005 Cyclonic Storm System

    [Figure: overview of the case, with the Stratiform Area and the Convective/Stratiform Area labeled]

  • Stratiform Area: Composite Reflectivity

    [Figure panels: Observed; NMM consistent; ARW generic; ARW consistent]

  • Stratiform Area: CFADs (Yuter and Houze 1995)

    [Figure panels: Observed; NMM consistent; ARW generic; ARW consistent]
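
    Since CFADs (contoured frequency by altitude diagrams; Yuter and Houze 1995) are used repeatedly below, here is a minimal sketch of how one could be computed from a 3-D gridded reflectivity field such as the NMQ mosaic: a reflectivity histogram at each height level, later contoured against height. The array names refl and heights, the bin spacing, and the function itself are illustrative assumptions, not code from the study.

        import numpy as np

        def cfad(refl, heights, z_bins=np.arange(-20.0, 60.0, 2.5)):
            """Frequency of reflectivity values in z_bins at each height level.

            refl    : 3-D array (nlev, ny, nx) of reflectivity in dBZ
            heights : 1-D array (nlev,) of level heights in km
            Returns (heights, bin_centers, counts[nlev, nbins]).
            """
            nlev = refl.shape[0]
            counts = np.empty((nlev, len(z_bins) - 1))
            for k in range(nlev):
                vals = refl[k][np.isfinite(refl[k])]        # drop missing data
                counts[k], _ = np.histogram(vals, bins=z_bins)
            centers = 0.5 * (z_bins[:-1] + z_bins[1:])
            return heights, centers, counts

    Normalizing the counts at each level, as Yuter and Houze do, makes CFAD panels from the different models and from the observations directly comparable.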

  • Stratiform Area: Frequency Distribution of Height of Maximum Reflectivity

    [Figure: number of occurrences vs. height above freezing level (km), for Observed, NMM consistent, ARW generic, and ARW consistent]

  • Differences in ARW Reflectivity Products

    The real-time ARW post-processor used a generic SR that assumes a constant intercept parameter for the snow size distribution. The consistent ARW SR product uses a temperature-dependent intercept, consistent with the WSM5 microphysics used in ARW.

    [Figure: (left) snow intercept parameter N0 (m^-4) vs. temperature (C); (right) snow particle size distributions N (m^-4) vs. particle size (mm) for the same mixing ratio qs = 0.1 g kg^-1]
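
    To see why the intercept parameter matters even at fixed snow mixing ratio, here is a hedged sketch of the standard exponential-size-distribution algebra (not the talk's actual SR code; the bulk snow density and the two intercept values are illustrative assumptions): for N(D) = N0 exp(-lambda*D), the snow content fixes lambda, and the Rayleigh sixth-moment reflectivity then scales as N0^(-3/4), so a larger intercept (more, smaller particles) means a lower reflectivity for the same qs.

        import numpy as np

        def snow_rayleigh_z(qs, n0, rho_air=1.0, rho_snow=100.0):
            """Rayleigh sixth-moment reflectivity factor (mm^6 m^-3) for
            exponentially distributed snow, ignoring dielectric adjustments.

            qs       : snow mixing ratio (kg/kg)
            n0       : intercept parameter (m^-4)
            rho_air  : air density (kg m^-3)
            rho_snow : assumed bulk snow density (kg m^-3)
            """
            # third moment fixes the slope:  rho_air*qs = pi*rho_snow*n0 / lam**4
            lam = (np.pi * rho_snow * n0 / (rho_air * qs)) ** 0.25   # m^-1
            # sixth moment:  Z = n0 * Gamma(7) / lam**7   (m^6 m^-3)
            return 720.0 * n0 / lam ** 7 * 1.0e18                    # -> mm^6 m^-3

        # same snow content (0.1 g/kg), two illustrative intercepts 100x apart
        z_small_n0 = snow_rayleigh_z(1.0e-4, 2.0e6)
        z_large_n0 = snow_rayleigh_z(1.0e-4, 2.0e8)
        print(10.0 * np.log10(z_large_n0 / z_small_n0))   # -15 dB, i.e. Z ~ n0**(-3/4)

    This scaling is the mechanism behind the difference between the constant-N0 generic product and the temperature-dependent consistent product wherever snow dominates the reflectivity.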

  • Differences in ARW Reflectivity Products

    The real-time ARW post-processor used a generic SR that does not account for the change in dielectric factor for wet snow (the bright band).

    The consistent ARW SR product uses the liquid-water dielectric factor for snow at T ≥ 0 °C, which increases reflectivity by ~7 dBZ in the melting layer.
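
    The ~7 dBZ bright-band boost quoted above follows directly from the ratio of dielectric factors; a quick check using the commonly quoted textbook values |Kw|^2 ≈ 0.93 for liquid water and |Ki|^2 ≈ 0.176 for ice (these constants are assumed standard values, not numbers taken from the talk):

        import math

        K2_WATER = 0.93    # |K|^2 for liquid water at weather-radar wavelengths
        K2_ICE = 0.176     # |K|^2 for ice (referenced to solid-ice density)

        # switching a snow target from the ice to the liquid-water dielectric factor
        # multiplies the equivalent reflectivity by K2_WATER / K2_ICE
        print(10.0 * math.log10(K2_WATER / K2_ICE), "dB")   # ~7.2 dB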

  • Differences in ARW Reflectivity Products

    [Figure panels: ARW generic; ARW generic + variable N0S]

  • Differences in ARW Reflectivity Products

    [Figure panels: ARW generic + variable N0S; ARW generic + variable N0S + wet snow (= ARW consistent)]

  • Differences between NMM and ARW Reflectivity Products

    [Figure panels: ARW generic; NMM consistent]

  • Stratiform Area: Composite Reflectivity Statistics

    [Statistics shown for: Observed; NMM consistent; ARW generic; ARW consistent]

  • Stratiform Area: Composite Reflectivity Frequency Distributions

    [Figure: number of grid boxes vs. reflectivity (dBZ), for ARW generic, NMM consistent, ARW consistent, and Observed]

  • Calibration of Composite Simulated Reflectivity

    Consider the question:

    If it can be shown that there is a systematic error in a particular SR product, such that the SR product consistently produces too much or too little of a given reflectivity value, can the SR product be calibrated to more closely match the observed radar reflectivity?

    How would we do this?

    Use the bias? No. SR is too high in some places, too low in others.

    Use correlation/linear regression? No. Forecast and observed precipitation are not spatially well-correlated. (Ebert and McBride 2000)

    How about matching the frequency distribution?

  • Calibration of Composite Simulated Reflectivity

    [Figure: frequency distributions (number of grid boxes vs. reflectivity, dBZ) for ARW generic, NMM consistent, ARW consistent, and Observed]

  • Calibration of Composite Simulated Reflectivity

    We seek a calibration function Znew = h(Zm) such that the cumulative frequencies match:

        ∫_{-∞}^{Zm} f(Z) dZ = ∫_{-∞}^{h(Zm)} g(Z) dZ

    where Zm is the composite SR, and f(Z) and g(Z) are the frequency distributions of the simulated and observed composite reflectivity, respectively.
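
    Writing F and G for the cumulative distributions obtained by integrating f and g, the matching condition above can be solved formally for h; a short sketch in LaTeX notation (the F/G symbols are added here, not the talk's, and G is assumed invertible):

        F(Z_m) \equiv \int_{-\infty}^{Z_m} f(Z)\,dZ ,\qquad
        G(Z) \equiv \int_{-\infty}^{Z} g(Z')\,dZ'
        \quad\Longrightarrow\quad
        F(Z_m) = G\bigl(h(Z_m)\bigr)
        \quad\Longrightarrow\quad
        h(Z_m) = G^{-1}\bigl(F(Z_m)\bigr)

    That is, h maps each simulated value to the observed value at the same cumulative rank, which is exactly what the rank-and-align procedure on the next slide does with the empirical distributions.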

  • Calibration of Composite Simulated Reflectivity

    While h(Zm) is difficult to extract mathematically, there is a practical and simple way to arrive at it:

    1. Start with a set of SR values that will be used to obtain the calibration equation (e.g., all the grid values of composite SR in a single plot).
    2. Rank all the values in order from lowest to highest value.
    3. Do the same for the corresponding observed reflectivity set. It is important that the same number of points is used for both.
    4. Align the two ranked sets (simulated and observed). The full set of pairs of reflectivity values provides the precise calibration function needed to transform the SR plot into one that has the exact same frequency distribution as the corresponding observed reflectivity plot.
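
    As a concrete illustration of the four steps above, here is a minimal Python sketch of the rank-matching calibration; the array names sim_refl and obs_refl are assumed placeholders for equally sized samples of simulated and observed composite reflectivity, and the code is an editorial sketch rather than the study's:

        import numpy as np

        def build_calibration(sim_refl, obs_refl):
            """Return a function Znew = h(Zm) built by rank-matching.

            sim_refl, obs_refl : 1-D arrays of simulated and observed composite
            reflectivity (dBZ); they must contain the same number of points.
            """
            sim_sorted = np.sort(np.ravel(sim_refl))   # step 2: rank the SR values
            obs_sorted = np.sort(np.ravel(obs_refl))   # step 3: rank the observations
            # step 4: aligned pairs (sim_sorted[i], obs_sorted[i]) define h; linear
            # interpolation between the pairs gives a usable calibration curve
            def h(zm):
                return np.interp(zm, sim_sorted, obs_sorted)
            return h

        # usage: calibrate a composite SR field so its frequency distribution
        # matches the observations used to build h
        # h = build_calibration(sim_refl, obs_refl)
        # calibrated = h(sim_refl)

    Because np.interp interpolates linearly between the aligned pairs, the resulting h can be applied to any SR value within the range spanned by the training sample.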

  • Calibration of Composite Simulated Reflectivity

    [Figure panels: Observed; NMM consistent; ARW generic; ARW consistent]

  • Calibration Curves for Stratiform Area

    [Figure: calibrated reflectivity (dBZ) vs. simulated reflectivity (dBZ), with a 1-to-1 line, for ARW generic, NMM consistent, and ARW consistent]

  • Uncalibrated Composite Simulated Reflectivity

    [Figure panels: Observed; NMM consistent; ARW generic; ARW consistent]

  • Calibrated Composite Simulated Reflectivity

    [Figure panels: Observed; NMM consistent; ARW generic; ARW consistent]

  • 13 February 2005 Cyclonic Storm System

    [Figure: overview with the Stratiform Area and the Convective/Stratiform Area labeled; the following slides examine the Convective/Stratiform Area]

  • Convective/Stratiform Area: Composite Reflectivity

    [Figure panels: Observed; NMM consistent; ARW generic; ARW consistent]

  • Convective/Stratiform Area: CFADs

    Low observed frequency of 20-30 dBZ echoes aloft (compared to all models)

    [Figure panels: Observed; NMM consistent; ARW generic; ARW consistent]

  • Convective/Stratiform Area: Frequency Distribution of Height of Maximum Reflectivity

    [Figure: number of occurrences vs. height above freezing level (km), for ARW generic, NMM consistent, ARW consistent, and Observed]

  • Convective/Stratiform Area: Composite Reflectivity Frequency Distributions

    [Figure: number of grid boxes vs. reflectivity (dBZ), for ARW generic, NMM consistent, ARW consistent, and Observed]

  • Calibration Curves for Convective/Stratiform Area

    [Figure: calibrated reflectivity (dBZ) vs. simulated reflectivity (dBZ), with a 1-to-1 line, for ARW generic, NMM consistent, and ARW consistent]

  • Uncalibrated Composite Simulated Reflectivity

  • Calibrated Composite Simulated Reflectivity

  • 4-Week Study of Calibration of Composite Simulated Reflectivity

    What about the mean behavior of the SR products over many different types and intensities of precipitation?

    4-week study: 28 February to 24 March 2005 (a sub-period of DWFE)

    Daily forecasts and observations of composite reflectivity at 18, 21, and 00 UTC (18-, 21-, and 24-h model forecasts)

    Area covering the CONUS from the Rocky Mountains eastward

    Used archived imagery only, so reflectivity is resolved only to 5 dBZ (the width of the color bands)

  • 4-Week Study of Calibration of Composite Simulated Reflectivity

  • 4-Week Study of Calibration of Composite Simulated Reflectivity: Frequency Distribution

    [Figure: number of pixels vs. reflectivity (dBZ), for ARW generic, NMM consistent, and Observed]

  • 4-Week Study of Calibration of Composite Simulated Reflectivity: Calibration Curves

    [Figure: calibrated reflectivity (dBZ) vs. simulated reflectivity (dBZ), with a 1-to-1 line, for WRF-ARW (constant N0), WRF-ARW, and WRF-NMM]

  • Caveats of SR Calibration

    Calibration of SR will not significantly improve correlation of SR and observed reflectivity.

    Calibration can only partially compensate for flaws in model microphysics or SR algorithm.

    Calibration functions should be based on sufficiently large data sets such that they are not influenced by a small number of bad forecasts, i.e., they should reflect the mean behavior of the model.

    Calibration functions are dependent on many factors, including:
    - observational data quality
    - method of cartesianizing the observed reflectivity
    - precipitation type
    - geographic location and time of year
    - model resolution, physics, and forecast hour

  • Merits of SR Calibration

    Calibration can remove systematic under- or over-prediction of various reflectivity ranges and improve the look of SR products.

    The process of determining the frequency distribution of SR vs. observed reflectivity, and deriving calibration functions, leads to insights into general flaws in model microphysics and SR algorithms.

    Calibration functions may provide a more reasonable forward operator for assimilating observed reflectivity data into models than the straight D^6 function that is used.

    There is potential to enhance the calibration functions, by training them on more limited spatio-temporal windows, or by seeking dependencies on particular types of frequency distributions.

  • Recommendations

    1. Model microphysics should be formulated not only to optimize QPF, but also to produce reasonable hydrometeor fields and size distributions that affect the model reflectivity.
    2. To the extent possible, SR algorithms should be precisely consistent with all assumptions in the associated model microphysical scheme.
    3. Ideally, SR should be calculated within the model as it runs, to take advantage of the increasingly complex and dynamic size distributions calculated by the schemes.
    4. Real-time or operational SR products should be statistically examined (using CFADs and other frequency distribution tests) to understand how they behave relative to observations.
    5. Real-time or operational SR products should be calibrated with observed reflectivity using the methods described herein.
    6. Calibration functions should be used in forward operators for assimilating reflectivity data into models.

  • Finis

