
A Study on the Scale Dependence of the Predictability of Precipitation Patterns

MADALINA SURCEL, ISZTAR ZAWADZKI, AND M. K. YAU

Department of Atmospheric and Oceanic Sciences, McGill University, Montreal, Quebec, Canada

(Manuscript received 20 March 2014, in final form 8 September 2014)

ABSTRACT

A methodology is proposed to investigate the scale dependence of the predictability of precipitation patterns at the mesoscale. By applying it to two or more precipitation fields, either modeled or observed, a decorrelation scale λ0 can be defined such that all scales smaller than λ0 are fully decorrelated. For precipitation forecasts from a radar data–assimilating storm-scale ensemble forecasting (SSEF) system, λ0 is found to increase with lead time, reaching 300 km after 30 h. That is, for λ ≤ λ0, the ensemble members are fully decorrelated. Hence, there is no predictability of the model state for these scales. For λ > λ0, the ensemble members are correlated, indicating some predictability by the ensemble. When applied to characterize the ability to predict precipitation as compared to radar observations by numerical weather prediction (NWP) as well as by Lagrangian persistence and Eulerian persistence, λ0 increases with lead time for most forecasting methods, while it is constant (300 km) for non–radar data–assimilating NWP. Comparing the different forecasting models, it is found that they are similar in the 0–6-h range and that none of them exhibit any predictive ability at meso-γ and meso-β scales after the first 2 h. On the other hand, the radar data–assimilating ensemble exhibits predictability of the model state at these scales, thus causing a systematic difference between λ0 corresponding to the ensemble and λ0 corresponding to model and radar. This suggests that either the ensemble does not have sufficient spread at these scales or that the forecasts suffer from biases.

1. Introduction

As shown by Lorenz (1963), two solutions of a system of nonlinear differential equations that differ only slightly in the specification of the initial conditions will diverge with time until they become as similar as two random states. This is an intrinsic property of nonlinear deterministic systems and leads to deterministic chaos. As atmospheric processes are considered nonlinear deterministic processes, the results of Lorenz (1963) impose a finite limit on the intrinsic predictability of the atmosphere, even if a perfect model for prediction existed and if the initial condition errors were much smaller than those currently used in atmospheric models (Lorenz 1996). Furthermore, this finite limit is highly dependent on the initial conditions, and an excellent illustration of this behavior for the Lorenz system is provided in Fig. 4 of Palmer (1993). In practice, the atmospheric state is predicted using an imperfect model initialized with initial conditions that contain considerable errors. Therefore, the practical predictability of the atmosphere (defined as the extent to which prediction is possible given current forecasting methods; Lorenz 1996) is affected by, but not equivalent to, the intrinsic predictability of the atmosphere. Moreover, estimates of the practical atmospheric predictability are highly dependent on the model used for prediction and on the initial state (just as for the Lorenz system). These considerations have made evident the need to assess the uncertainty associated with a given forecast, which can be done through ensemble forecasting. Ensemble forecasting techniques for medium- to long-range weather prediction in midlatitudes, which is mostly affected by synoptic-scale baroclinic instabilities, are well established by now (Kalnay 2003, section 6.5). On the other hand, the mesoscale details of weather, such as the evolution of precipitation systems, are affected by moist

Corresponding author address: Madalina Surcel, Department of Atmospheric and Oceanic Sciences, McGill University, 805 Sherbrooke St. W. #945, Montreal, QC H3A 2K6, Canada. E-mail: [email protected]

216 JOURNAL OF THE ATMOSPHERIC SCIENCES VOLUME 72

DOI: 10.1175/JAS-D-14-0071.1

© 2015 American Meteorological Society

convective processes. Since computational resources have become available in the early 2000s to allow the simulation of mesoscale phenomena with high horizontal resolution, significant effort has been devoted to understanding the processes responsible for error growth in such models. The various studies on error growth at the mesoscale (e.g., Walser et al. 2004; Zhang et al. 2002, 2003, 2006, 2007; Hohenegger and Schär 2007a,b; Bei and Zhang 2007; Melhauser and Zhang 2012; Wu et al. 2013) indicated that moist convection is the primary mechanism promoting the growth of small initial-condition errors. Moreover, it has been shown that small errors saturate at the convective scale and then grow upscale through geostrophic adjustment or cold pool dynamics to limit the predictability at the mesoscale within the time of interest of such forecasts (about 24 h), resulting in error growth rates for convection-allowing models being much higher than for large-scale models (Hohenegger and Schär 2007a). However, just as for the very simple Lorenz system, the exact limit of predictability is highly case dependent (Done et al. 2012; Hohenegger et al. 2006; Walser et al. 2004). Moreover, Hohenegger et al. (2006) showed that even cases with apparently similar intensity of moist convection might exhibit different predictability depending on the relation between the moist convection and the larger-scale flow. Given these considerations, the formulation of proper

ensemble techniques at the storm scale remains very difficult (Johnson et al. 2014). A significant effort in developing ensemble forecasting strategies at convection-allowing resolutions is represented by the National Oceanic and Atmospheric Administration (NOAA) Hazardous Weather Testbed (HWT) Spring Experiments, which have been taking place every spring since 2007¹ (http://hwt.nssl.noaa.gov/spring_experiment/). As part of this experiment, ensemble forecasts were produced for 30–48 h at very high resolution (Δx = 4 km) using the storm-scale ensemble forecasting (SSEF) system developed at the Center for the Analysis and Prediction of Storms (CAPS; Xue et al. 2008; Kong et al. 2008). While the ensemble configuration has changed throughout the years, each year a set of members with perturbed initial conditions (IC) and lateral boundary conditions (LBC), different model physics, and mesoscale data assimilation (DA; including radar) was produced. The ICs and LBCs for these members were derived from the operational short-range ensemble

forecast (SREF) system run at the National Centers for Environmental Prediction (NCEP; Du et al. 2009). SREF forecasts are produced with grid spacings of 32–45 km (depending on the member), and thus the IC–LBC perturbations do not include information at very small scales. A recent study by Johnson et al. (2014) compared the effect of this type of perturbation to smaller-scale IC–LBC perturbations. It was found that the relative importance of the two types of errors is case dependent, but that on average, the small-scale perturbations are less important than the larger-scale errors for precipitation forecasts at medium and large scales (64–4096 km in their study). Furthermore, they concluded that the current CAPS SSEF configuration samples the primary sources of error. However, the evaluation by Clark et al. (2011) of ensemble quantitative precipitation forecasts (QPFs) from the 2009 Spring Experiment showed that the same type of ensemble is underdispersive for lead times between 6 and 18 h. On the other hand, by investigating the filtering effect of ensemble averaging for precipitation forecasts from the 2008 Spring Experiment, Surcel et al. (2014) indicated that QPFs from the ensemble members become fully decorrelated at larger scales with increasing lead time. The objective of this paper is to further investigate the

scale dependence of precipitation predictability by the 2008 CAPS SSEF by extending the analysis of Surcel et al. (2014) to determine how the range of scales over which the ensemble QPFs are fully decorrelated evolves with forecast lead time for 22 cases during spring 2008. As mentioned by Surcel et al. (2014), the complete decorrelation of the ensemble forecasts can be regarded as a lack of predictability of precipitation patterns by the ensemble at those scales. As the purpose of ensemble forecasts is to provide information on the uncertainty in the forecast, it is desirable to compare the predictability by the ensemble to the actual ability of the ensemble members to forecast precipitation at those scales, as quantified by comparison to observations. Therefore, in this paper we will provide quantitative

estimates of the loss of precipitation predictability with spatial scale and forecast lead time by particular NWP model configurations. These estimates will be obtained in two ways: (i) by analyzing the difference between forecasts from the CAPS ensemble and (ii) by comparing forecasts from the CAPS SSEF to observations. As the CAPS SSEF has both IC–LBC perturbations derived from a regional-scale ensemble and varied model physics, both estimates correspond to the loss of the "practical predictability" of precipitation, as defined by Lorenz (1996) and discussed by Zhang et al. (2006). As explained in previous work by Zhang et al. (2006), Bei and Zhang (2007), and Melhauser and Zhang (2012), the

¹ In fact, the NOAA HWT Spring Experiments have been taking place since 2000, but ensemble forecasts have been produced as part of the experiments only since 2007.


practical predictability is influenced by the intrinsic predictability of the atmosphere, that is, by the growth of small IC errors due to the chaotic nature of the atmosphere. While understanding how the intrinsic predictability of precipitation for our dataset affects the estimates of predictability loss that we obtain is very important, it is outside the scope of the current study and is left for future work. The objective of this paper is to present quantitative estimates of the loss of practical predictability and to intercompare the estimates obtained through the two methods discussed above for a dataset consisting of 22 precipitation cases during spring 2008. Therefore, to facilitate the discussion of the comparison presented in this paper, we will refer to the loss of practical predictability as estimated from the differences among forecasts from the ensemble as the loss of "the predictability of the model state." This depends on the numerical weather prediction (NWP) model, the method of generating the ensemble forecasts, and the metric used to quantify the variability among the members. Herein, the predictability of the model state will be estimated by the decorrelation scale corresponding to QPFs from the CAPS SSEF. On the other hand, when estimates of the loss of practical predictability are obtained by comparing the outputs of NWP models or of other forecasting methods to observations, we will refer to them as estimates of "the model predictability of the atmospheric state" (in this paper, we assume that observations closely describe the atmospheric state, although in section 4d we provide an assessment of the effect of this assumption on our results). These estimates of predictability depend both on the prediction model and on the particular metric of model–observation comparison. The model predictability of the atmospheric state will be quantified by the decorrelation scale between precipitation forecasts and precipitation observations.
Previous verification studies of precipitation forecasts have reported that forecasting skill shows scale dependence (Casati et al. 2004; Gilleland et al. 2009; Roberts and Lean 2008; Surcel et al. 2014), with a loss of useful skill at larger scales with increasing forecast lead time (Germann et al. 2006; Roberts 2008). Therefore, we aim to determine how the range of scales with a complete lack of skill (no model predictability of the atmospheric state) compares to the range of scales lacking predictability of the modeled precipitation for the 2008 CAPS SSEF. Ideally, the predictability of the model state and the model predictability of the atmospheric state should be consistent with each other when estimated over sufficiently large datasets. That is, it is desirable, on average, for small ensemble spread to correspond to good forecast skill, and vice versa. Therefore, our results could provide

insight on whether the perturbations currently employed in the CAPS SSEF are sufficient to represent uncertainty as a function of scale. In addition to dynamical models for precipitation

forecasting, statistical models are still often employed for the very-short-term prediction of precipitation (nowcasting). The simplest one is Eulerian persistence (EP), which simply assumes no evolution of the current state (in this case, of the precipitation field). Evidently, this is a very poor model, as a cursory examination of radar imagery demonstrates that rainfall is neither stationary nor steady. However, the EP model remains a baseline for validating predictions from more complex models. Another more appropriate statistical method for

short-term precipitation forecasting is Lagrangian persistence (LP), which assumes persistence in the reference frame of the moving precipitation system. Therefore, to obtain a forecast, it suffices to characterize the motion of the precipitation patterns in the immediate past and to extrapolate the current precipitation field using this motion field. This method is known to work well for very-short-term (0–6 h) precipitation forecasting when used in radar-based extrapolation algorithms (Berenguer et al. 2012; Lin et al. 2005; Turner et al. 2004). Furthermore, LP precipitation forecasts were found to outperform the deterministic forecasts from the CAPS SSEF radar data–assimilating members for about 3 h (Berenguer et al. 2012). To complement the results of their analysis, we will also investigate here how the scale dependence of the lack of predictability of rainfall compares between the radar data–assimilating members and LP forecasting.

Our study will show the potential of using the decorrelation scale introduced by Surcel et al. (2014) as an evaluation metric for quantifying the scale dependence of the predictability of precipitation patterns. Furthermore, it will offer a quantitative estimate of the loss of predictability of precipitation by both dynamical and statistical forecasting methods, as a function of scale and forecast lead time for a set of 22 cases during spring 2008. By analyzing the scale dependence of precipitation predictability for a set of cases, rather than adopting a case study approach, we attempt to generalize and thus complement the results obtained by previous predictability studies. The results of our study are also relevant to forecasting applications consisting of postprocessing model output, such as some ensemble averaging and blending applications (Atencia et al. 2010; Ebert 2001; Kober et al. 2012), as they indicate which components of the very detailed two-dimensional picture provided by the model do not contain useful information and therefore should not be used in such applications.


The paper is organized as follows. Section 2 describes the precipitation forecasts used in the analysis. Section 3 explains the methodology used to derive the decorrelation scale. Section 4 presents the results. Section 5 offers a discussion on the predictability of precipitation at the mesoscale and suggestions for future work, and section 6 presents the conclusions.

2. Data

Both precipitation forecasts and precipitation observations are used in the study, and they are described next. All the forecasts and observations have been remapped onto a common grid using a nearest-neighbor interpolation method, and the analysis is performed on a domain covering most of the central and eastern United States, extending from 32° to 45°N and from 103° to 78°W, as illustrated in Fig. 1. The dataset consists of 22 precipitation cases from 18 April to 6 June 2008. Both hourly rainfall accumulation fields and instantaneous reflectivity fields (simulated or observed) were available for each case, and the entire analysis has been performed on both types of fields, with consistent results. However, in this paper, only the results corresponding to hourly rainfall accumulation fields are presented.
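The common-grid preprocessing described above can be sketched as follows. This is our own illustrative implementation, not the authors' code: the domain bounds (32°–45°N, 103°–78°W) are from the paper, while the grid spacing, array layout, and function name are assumptions.

```python
# Sketch: remap a field from an arbitrary source grid onto a common
# lat-lon analysis grid with nearest-neighbour interpolation.
import numpy as np
from scipy.interpolate import NearestNDInterpolator

def remap_nearest(src_lat, src_lon, src_field, dst_lat, dst_lon):
    """Nearest-neighbour remap of a 2D field onto the destination grid."""
    pts = np.column_stack([src_lat.ravel(), src_lon.ravel()])
    interp = NearestNDInterpolator(pts, src_field.ravel())
    return interp(dst_lat, dst_lon)

# Common analysis grid: 32-45N, 103-78W (spacing here is illustrative).
dst_lon, dst_lat = np.meshgrid(np.linspace(-103.0, -78.0, 251),
                               np.linspace(32.0, 45.0, 131))
```

Nearest-neighbor (rather than bilinear) remapping has the advantage of not smoothing the precipitation field, which matters when the subsequent analysis examines variance as a function of scale.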

a. Precipitation forecasts

1) CAPS SSEF FORECASTS

The SSEF system was developed at CAPS, and it was run during the 2008 NOAA HWT Spring Experiment (Xue et al. 2008; Kong et al. 2008). It uses the Advanced Research version of the Weather Research and Forecasting (WRF-ARW) Model (Skamarock et al. 2008), version 2.2, and consists of 10 members with different physical schemes, mesoscale data assimilation including radar in 9 out of the 10 members, and perturbed ICs and LBCs. The background ICs are interpolated from the North American Model (NAM; Janjić 2003) 12-km analysis, and the IC–LBC perturbations are directly obtained from the SREF system run operationally at NCEP (Du et al. 2009). The SREF members are based on different dynamic cores [Eta, WRF Nonhydrostatic Mesoscale Model (WRF-NMM), and WRF-ARW] and are run with grid spacings of 32 or 45 km. Therefore, the IC–LBC perturbations do not have variability at the scale at which the 4-km members are run. In addition to IC–LBC perturbations, the ensemble members have different microphysical schemes varying among Thompson (Thompson et al. 2004), WRF single-moment 6-class (WSM6; Hong and Lim 2006), and Ferrier (Ferrier et al. 2002); different planetary boundary layer (PBL) schemes varying between Mellor–Yamada–Janjić (Mellor and Yamada 1982; Janjić 2001) and Yonsei University (YSU; Noh et al. 2003); and different shortwave radiation schemes varying between Goddard (Tao et al. 2003) and Dudhia (1989). Thirty-hour forecasts on a 4-km grid were performed almost daily in April–June 2008. Two of the members (control members C0 and CN) do not have SREF-based IC–LBC perturbations and have identical model configurations. However, convective-scale observations from radar [from the Weather Surveillance Radar-1988 Doppler (WSR-88D) network] and surface stations are assimilated only within CN. The assimilation of mesoscale observations was performed using the Advanced Regional Prediction System (ARPS) three-dimensional variational data assimilation (3DVAR) and cloud analysis package (Gao et al. 2004; Hu et al. 2006a,b; Xue et al. 2003). Radar reflectivity, surface data, and visible and 10.5-μm infrared data from the Geostationary Operational

FIG. 1. Analysis domain. The red contours represent the coverage of the 2.5-km constant-altitude plan position indicator (CAPPI) maps, while the rectangle ranging from 32° to 45°N and from 103° to 78°W corresponds to the analysis domain. The precipitation field presented in this figure is typical for spring 2008.


Environmental Satellite (GOES) were processed by the cloud analysis scheme to retrieve hydrometeor information. Radar radial velocity data and data from the Oklahoma Mesonet, METAR, and wind profiler networks were assimilated with the ARPS 3DVAR (Johnson et al. 2014).

2) MAPLE LP FORECASTS

The LP forecasts analyzed in this paper were produced with the McGill Algorithm of Precipitation Forecasting by Lagrangian Extrapolation (MAPLE; Germann and Zawadzki 2002). These are very-short-term precipitation forecasts produced using an extrapolation-based technique that employs the variational echo tracking (VET) algorithm (Laroche and Zawadzki 1995) to estimate the motion field of precipitation and a modified semi-Lagrangian backward scheme for advection. MAPLE was run using the National Severe Storms Laboratory (NSSL) 2.5-km height rainfall maps described below to generate 8-h forecasts initialized every hour with a temporal resolution of 15 min. For the analysis of hourly rainfall accumulations, maps of radar reflectivity Z are converted into rain rate R according to Z = 300R^1.5, and then instantaneous reflectivity maps every 15 min are averaged to obtain radar-derived hourly rainfall accumulations.
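The reflectivity-to-rainfall processing described above can be sketched as follows. This is a minimal illustration under our own variable names: it inverts the stated Z–R relation (Z = 300R^1.5, with Z in linear units and R in mm/h) and averages four 15-min rain-rate maps into an hourly accumulation.

```python
# Sketch: dBZ -> rain rate via Z = 300 R^1.5, then hourly accumulation
# as the mean of the four 15-min instantaneous maps within the hour.
import numpy as np

A, B = 300.0, 1.5  # coefficients of Z = A * R**B, as stated in the text

def dbz_to_rainrate(dbz):
    z = 10.0 ** (np.asarray(dbz, float) / 10.0)  # dBZ -> linear Z (mm^6 m^-3)
    return (z / A) ** (1.0 / B)                  # invert Z = A R^B -> R (mm/h)

def hourly_accumulation(dbz_maps_15min):
    # Mean rain rate (mm/h) over the hour equals the hourly accumulation in mm.
    rates = dbz_to_rainrate(np.asarray(dbz_maps_15min))
    return rates.mean(axis=0)
```

For example, 40 dBZ corresponds to a rain rate of roughly 10 mm/h under this relation.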

b. Precipitation observations

The precipitation observations used in this study are U.S. radar reflectivity mosaics at 2.5-km altitude generated by NSSL (Zhang et al. 2005) every 5 min and mapped with a spatial resolution of 1 km. For the analysis of hourly rainfall accumulations, observed reflectivity maps every 15 min have been processed to obtain maps of hourly rainfall accumulations as in the case of MAPLE.

3. Methodology

Given precipitation fields X_i, X_j (i, j = 1, ..., N), and assuming their variance is defined, the following holds:

\mathrm{Var}\left(\sum_{i=1}^{N} X_i\right) = \sum_{i=1}^{N} \mathrm{Var}(X_i) + \sum_{i \neq j} \mathrm{Cov}(X_i, X_j). \quad (1)

If the fields X_i, X_j are fully decorrelated, then Cov(X_i, X_j) = 0 for all i ≠ j. It follows that

\frac{\sum_{i=1}^{N} \mathrm{Var}(X_i)}{\mathrm{Var}\left(\sum_{i=1}^{N} X_i\right)} = 1. \quad (2)
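The behavior of the variance ratio in (1)–(2) can be illustrated numerically on synthetic data (our own example, with N = 9 as for the radar data–assimilating members): for fully decorrelated fields the ratio stays near 1, while for perfectly correlated fields it drops to 1/N.

```python
# Numerical illustration of (1)-(2) on synthetic fields.
import numpy as np

rng = np.random.default_rng(0)
N = 9
indep = [rng.standard_normal(10_000) for _ in range(N)]   # decorrelated fields
r_indep = sum(x.var() for x in indep) / np.sum(indep, axis=0).var()

ident = [indep[0]] * N                                    # identical fields
r_ident = sum(x.var() for x in ident) / np.sum(ident, axis=0).var()
# r_indep is ~1 (cross covariances vanish); r_ident is exactly 1/N ~ 0.111
```

The same two limits, 1 and 1/N, bound the scale-by-scale power ratio defined next.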

Note that if (2) stands for two fields (N = 2), this also means that the variance of the error between the fields exceeds the variance of the mean of the fields. We are interested in the scale dependence of the predictability of precipitation. Hence, we will investigate whether there are scales at which various precipitation forecasts are fully decorrelated. To do so, we verify whether (2) holds for variances of the precipitation fields at certain wavelengths. That is, let P_{X_i}(λ) be the variance of field X_i at scale λ. We define the power ratio as

R(\lambda) = \frac{\sum_{i=1}^{N} P_{X_i}(\lambda)}{P_{X_T}(\lambda)}, \quad (3)

where X_T = \sum_{i=1}^{N} X_i, and search for λ such that R(λ) = 1.

To obtain the variance of a precipitation field at a given scale, we compute the power spectrum using the discrete cosine transform (DCT; Denis et al. 2002). This transform is equivalent to the fast Fourier transform (FFT), but it is preferred as it eliminates the problems associated with discontinuities at the boundaries of the domain. The values of R(λ) vary between 1, which represents complete decorrelation between the fields X_i at scale λ, and 1/N, which represents perfect resemblance between the fields at scale λ. To illustrate this, Fig. 2a shows R(λ) = Σ_{i=1}^{9} P_{X_i}(λ)/P_{X_T}(λ), where X_T = Σ_{i=1}^{9} X_i and the X_i are precipitation forecasts of hourly accumulations from the nine radar data–assimilating ensemble members at 1000 UTC 24 April 2008. While the R(λ) curve is noisy, the figure indicates that R(λ) = 1 for λ < λ0 = 76 km (red circle in Fig. 2a), indicating the complete decorrelation between the ensemble members for this range of scales. Also, for λ > λ0, the ratio decreases toward 1/9 without reaching this value. Surcel et al. (2014) mentioned that the decorrelation scale increases with forecast lead time. Therefore, Fig. 2b shows λ0 as a function of lead time for 24 April 2008. The value of λ0 was determined by finding the largest λ for which R(λ) ≥ 0.95. The threshold of 0.95 was chosen rather than 1 as it was found to eliminate some of the noise in determining the decorrelation scale, without introducing any significant bias. Alternatively, λ0 could be determined as the intersection between two linear fits: R(λ) = 1 for λ < λ0 and R(λ) = aλ^b for λ > λ0. However, this method gave very similar results while being more sensitive to the noise in the R(λ) curves. While the λ0(t) curve is somewhat noisy, it shows that the decorrelation scale increases with forecast lead time, reaching around 300 km at the end of the forecast period. The decorrelation scale can be determined for each different precipitation event, and averages of λ0 together with its standard deviation can be presented to have an estimate of the variability of the λ0 values. To


eliminate some of the noise in the λ0(t) curve, a 3-h running mean is applied to each of the plots of λ0(t) versus t presented in the paper.

This methodology can be applied to any number of precipitation fields, even though the smaller the number of precipitation fields, the noisier the λ0 estimates. We will use this methodology to examine not only the decorrelation scale between two ensemble members (predictability of the model state), but also the decorrelation between forecasts and observations (predictability of the atmospheric state by a certain model). Section 4 presents the results of applying this methodology to the 2008 dataset.

The decorrelation scale λ0 represents the upper limit of the range of scales where there is a complete lack of predictability using a certain forecasting method. This methodology does not provide any information about the degree of predictability at scales larger than λ0; it simply shows that there is some predictability at those scales. A measure of the predictability for scales λ > λ0 could be the value of R(λ) in Fig. 2a, since this value depends on the covariance term on the right-hand side of (1).

The decorrelation scale could similarly be obtained by investigating the range of scales over which two forecasts are decorrelated. However, that would involve decomposing the precipitation fields into different scale components using decomposition methods such as the DCT, the FFT, or the Haar wavelet transform (Casati et al. 2004; Germann et al. 2006; Johnson et al. 2014). DCT or FFT bandpass-filtered rainfall fields are strongly affected by the Gibbs effect and thus would make such computations impossible. Haar filtering is less prone to Gibbs effects (Turner et al. 2004), but the Haar transform imposes a coarse sampling in scale. Predictability estimates for LP precipitation forecasts were obtained in this way by Germann et al. (2006), and we will discuss the comparison to their results in section 4c.
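The methodology of this section can be sketched in code. The following is our own minimal implementation, not the authors' code: DCT variance spectra per field (Denis et al. 2002), the power ratio of (3) accumulated in wavelength bands, and the decorrelation scale as the largest scale where R(λ) ≥ 0.95. The band edges, the number of bands, and the 4-km grid spacing are illustrative assumptions.

```python
# Sketch: power ratio R(lambda) from DCT spectra and the decorrelation scale.
import numpy as np
from scipy.fft import dctn

def dct_power(field):
    """Variance spectrum from the 2D DCT; the (0,0) entry (the mean) is dropped."""
    spec = dctn(np.asarray(field, float), norm='ortho') ** 2
    spec[0, 0] = 0.0
    return spec

def power_ratio(fields, dx=4.0, nbands=30):
    """Return (lambda, R(lambda)) with R = sum_i P_Xi / P_XT, as in (3),
    accumulated over log-spaced wavelength bands (in km)."""
    ny, nx = fields[0].shape
    ky, kx = np.meshgrid(np.arange(ny) / ny, np.arange(nx) / nx, indexing='ij')
    k = np.hypot(ky, kx)                      # normalized radial wavenumber
    wavelength = np.where(k > 0, 2 * dx / np.maximum(k, 1e-12), 0.0)
    num = sum(dct_power(f) for f in fields)   # sum of member spectra
    den = dct_power(np.sum(fields, axis=0))   # spectrum of the sum X_T
    edges = np.geomspace(2 * dx, 4 * dx * max(ny, nx), nbands + 1)
    lam, R = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (wavelength >= lo) & (wavelength < hi)
        if m.any() and den[m].sum() > 0:
            lam.append(np.sqrt(lo * hi))
            R.append(num[m].sum() / den[m].sum())
    return np.array(lam), np.array(R)

def decorrelation_scale(lam, R, thresh=0.95):
    """Largest wavelength with R >= thresh (0 if the threshold is never reached)."""
    above = lam[R >= thresh]
    return above.max() if above.size else 0.0
```

As a sanity check on the two limits discussed above: nine identical fields give R(λ) = 1/9 at every scale (λ0 = 0), while nine independent noise fields give R(λ) ≈ 1 at all scales.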

4. Results

This section presents the results obtained by applying the above methodology to precipitation forecasts and observations from 22 cases during spring 2008. This set of cases has been previously analyzed by Surcel et al. (2010) and Berenguer et al. (2012), who have shown that the period was dominated by large-scale precipitation systems that nonetheless exhibited a marked diurnal cycle. Figures 3 and 4 illustrate the 22 cases in terms of the evolution of the power spectra of hourly rainfall fields derived from radar and in terms of radar-derived total accumulations for the entire time period (30 h). As shown by the power spectra (left panels), most cases show a clear diurnal cycle in the evolution of the statistical properties of the precipitation fields, with variance decreasing at all scales with time from 0000 UTC, reaching a minimum during early afternoon, and then beginning to increase again through the evening. This diurnal signal can affect evaluation metrics, as evident in Johnson et al. (2014, their Figs. 6, 8, and 11), and can make it difficult to analyze the evolution of skill with lead time. This problem is avoided in our analysis, as the decorrelation scale is computed from a power ratio, thus removing the effect of the large changes in the variance of a precipitation field with the time of day. The results presented in this section are usually averaged over the entire dataset, but the case-to-case variability is also addressed wherever relevant.

FIG. 2. (a) The power ratio for hourly accumulations from the nine-member ensemble at 1000 UTC 24 Apr 2008 (10-h lead time). The red filled circle indicates the decorrelation scale λ0 (76 km). (b) Decorrelation scale as a function of forecast lead time for 24 Apr 2008.


FIG. 3. Temporal evolution of the power spectrum of (left) observed hourly rainfall accumulations and (right) total rainfall accumulations for each case between 23 Apr and 21 May 2008. All the evaluations described in the text are performed on the domain illustrated in this figure.

222 JOURNAL OF THE ATMOSPHERIC SCIENCES VOLUME 72

a. Predictability of the model state for the CAPS SSEF

As mentioned in the introduction, by predictability of the model state, we mean the extent to which forecasts from models with slight differences in the model formulation and in the IC–LBCs resemble each other. Usually, the predictability of the model state is characterized in terms of the ensemble spread. In this sense, computing the decorrelation scale for the entire ensemble is equivalent to determining the range of scales over which the ensemble has as much spread as that of an ensemble of random precipitation fields.

Figure 5a shows the power ratios corresponding to the forecasts from all the radar data–assimilating ensemble members:

FIG. 4. As in Fig. 3, except from 22 May to 6 Jun 2008.


R(λ) = Σ_{i=1}^{9} P_{X_i}(λ) / P_{X_T}(λ) ,

where X_T = Σ_{i=1}^{9} X_i and X_i represents a 2D precipitation field from one of the nine radar data–assimilating ensemble members, for all forecast lead times (colors) and averaged over the 22 cases. After the first 3 h, there is a range of scales for which the ratios are 1, meaning that the forecasts are fully decorrelated at those scales. Furthermore, there is a clear increase in the range of scales over which R(λ) = 1 with forecast lead time. Following the methodology described in section 3, Fig. 5b shows λ0 as a function of forecast lead time. The black line and the gray shading represent the mean and standard deviation (±σ), respectively, of λ0 for all cases, while the blue line shows the decorrelation scales derived from the average power ratio curves in Fig. 5a. The value of λ0 increases with forecast lead time following a power law: λ0(t) = 19 t^0.7, with t in hours and λ0 in kilometers (red line in Fig. 5b). Figure 5b is interpreted as follows: for lead times and scales under the curve, the ensemble members are fully decorrelated and there is thus no predictability of modeled precipitation: QPFs from the ensemble members resemble each other as much as any nine random precipitation fields do. For lead times and scales above the curve, there is some predictability, although nothing can be said from this plot about the quality of it. According to this result, on average for the 22 cases under study, the sources of uncertainty considered in this ensemble are sufficient to result in a loss of

predictability at meso-γ scales (2–20 km) after the first hour and at meso-β scales (20–200 km) after the first 18 h. While this result might sound surprising from the point of view of operational forecasting, it is in agreement with other results obtained by Walser et al. (2004), Zhang et al. (2003), Bei and Zhang (2007), and Cintineo and Stensrud (2013). To illustrate the great variability between the members at those scales, Fig. 6 shows a snapshot of the precipitation fields from the ensemble members for a lead time of 24 h for one event over a subdomain of 1300 km × 1300 km (left panel) and for a subdomain of 300 km × 300 km (right panel). In the left-hand panel, the eye focuses on the large-scale patterns, which are similar among the members and with observations. On the other hand, the lack of similarity becomes evident in the right-hand panel, when we focus on the detail at scales smaller than 300 km.

The decorrelation scale varies with time following

a power law with an exponent smaller than 1, meaning that the error growth rate decreases with increasing lead time. The ensemble members have radar DA, IC–LBC perturbations, and different model physics, so the decorrelation between the members can be caused by any of these sources of error. However, there are three ensemble members that have fewer differences among each other: the control member CN; the C0 member, which has the same configuration as CN but lacks the radar DA; and N2, which has the same model configuration as the control member, but has IC–LBC perturbations. The effects of radar DA and of IC–LBC perturbations can be investigated by computing the power ratios between CN and C0 and between CN and N2 as

FIG. 5. (a) Power ratios for the nine-member ensemble averaged over the 22 cases. The colored lines represent the different lead times from the beginning (T = 0 h, blue) to the end (T = 29 h, red) of the forecast. (b) The decorrelation scale as a function of forecast lead time. The black line resulted from averaging the daily λ0 values, the blue line was obtained from the ratios in (a), and the red line represents the power-law fit to the black line. The gray shading represents the uncertainty (±σ) of the black line.


R(λ) = [P_{CN}(λ) + P_{C0}(λ)] / P_{CN+C0}(λ)  and  R(λ) = [P_{CN}(λ) + P_{N2}(λ)] / P_{CN+N2}(λ)

and then computing λ0 as a function of forecast lead time. Figure 7a shows λ0(t) for CN–C0 (orange), CN–N2 (blue), and the ensemble (black) averaged over all cases and with the variability around the mean values (shading), together with the equations for the power-law fit to each curve. This figure shows that the decorrelation scales are similar for each pair of forecasts after the first 15 h (blue and orange curves). Similar to what was obtained for the entire ensemble, λ0 increases with forecast lead time following a power law λ0(t) = 31 t^0.7 for CN–N2. The IC–LBC perturbations affect increasingly larger scales with increasing lead time, causing a complete lack of predictability at meso-β scales after the first 10 h, and the error growth rate decreases with lead time. Because of our limited dataset, we cannot at this time investigate the reasons for the variability of λ0(t).
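Power-law fits of the form λ0(t) = a t^b, such as those quoted above, reduce to ordinary linear regression in log-log space. A minimal sketch (our illustration, not the authors' fitting procedure):

```python
import numpy as np

def fit_power_law(t_hours, lam0_km):
    """Least-squares fit of lam0(t) = a * t**b in log-log space.

    Returns (a, b). Assumes t > 0 and lam0 > 0.
    """
    logt = np.log(np.asarray(t_hours, dtype=float))
    logl = np.log(np.asarray(lam0_km, dtype=float))
    b, loga = np.polyfit(logt, logl, 1)   # slope = exponent, intercept = log(a)
    return np.exp(loga), b

# Synthetic check: data generated from 19 * t**0.7 should be recovered
t = np.arange(1.0, 30.0)
a, b = fit_power_law(t, 19.0 * t ** 0.7)
```

Taking logarithms turns the power law into log λ0 = log a + b log t, so `np.polyfit` with degree 1 returns the exponent as the slope and log a as the intercept.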

However, this evolution of IC–LBC error is consistent with the conceptual model of Zhang et al. (2007), despite having differences in the structure of the IC–LBC perturbations and despite analyzing precipitation rather than temperature and wind fields. According to their multistage model for error growth, small initial-condition errors rapidly grow at convective scales, saturating on time scales of O(1) h. Once errors saturate at convective scales, being amplified by moist convective processes, they grow upscale through mechanisms such as geostrophic adjustment and cold-pool dynamics, resulting in the loss of predictability at the mesoscale after 20–30 h. The balanced component of error could further grow upscale through baroclinic instability to limit predictability at larger scales (Zhang et al. 2007). Also, both Zhang et al. (2007) and Hohenegger and Schär (2007b) report decreasing error growth rates with increasing scale, as shown as well by our results. Moreover, λ0(t) appears to evolve similarly between N2 and the other ensemble members, despite the additional physics perturbations in the other members. It seems that mixed physics as an addition

FIG. 6. Example of hourly precipitation forecasts and observations (left) on a large subdomain (1300 km × 1300 km) and (right) on a small subdomain (300 km × 300 km) to illustrate the difference between the forecasts as a function of spatial scale.


to IC–LBC perturbations does not affect the scale dependence of precipitation predictability on average for the 22 cases. However, it has been shown by Stensrud et al. (2000) that the relative importance of IC errors and mixed physics is highly case dependent, with model physics playing a larger role in cases with weaker large-scale forcing. To look at the difference in λ0 caused only by IC–LBC perturbations compared to also adding perturbed physics, Fig. 7b shows a scatterplot between λ0 corresponding to CN–N2 and λ0 corresponding to CN–N1 for different lead times. Unlike CN and N2, the N1 member uses the Ferrier microphysics scheme and the YSU PBL scheme. This figure shows that for lead times up to 6 h, adding physics perturbations results in larger λ0, while after 12 h, the decorrelation scale no longer seems to depend on the type of perturbations. This agrees with the results of Hohenegger and Schär (2007b), who reported that precipitation ensemble spread becomes similar after 11 h for ensembles using different perturbation methodologies.

On the other hand, the decorrelation scale between

CN and C0 is constant throughout the forecast period. This is particularly clear in Fig. 7c, which shows the scatterplot between λ0 values corresponding to CN–C0 and λ0 values corresponding to CN–N2 for all cases and for different lead times. For lead times of less than 6 h, the points are below the 1:1 line, λ0 for CN–C0 being the largest (purple and black symbols); for lead times of 12 and 18 h, they are situated around the 1:1 line (blue and green symbols), while after 24 h, λ0 for CN–N2 is larger than for CN–C0 (orange and red symbols). For the same forecasting system, but run during 2009 and 2010, Stratman et al. (2013) showed that the difference

between members CN and C0 decreases with forecast lead time in terms of several metrics. That is, with forecast lead time, the effect of radar DA is lost and the members become increasingly similar. However, we see that the scales at which the error saturates are never recovered. Therefore, it appears that the assimilation of radar data has a transient effect at larger scales, while its effect at the mesoscale is long lived. As discussed by Craig et al. (2012), it could be that the effect of radar DA is manifested very clearly in the precipitation fields because of the assimilation of radar reflectivity, but not as much in the dynamical fields that control the forecast evolution, and thus it would appear to wash out with forecast time. Furthermore, it is possible that the lack of upscale growth of perturbations induced by radar DA could be due to CN and C0 having identical LBCs. It is well known that apparent "enhanced" mesoscale predictability is often caused by lack of LBC perturbations (Zhang et al. 2007). Understanding the reasons for the scale variability of the effect of radar data assimilation is of great importance, but it would require the analysis of data other than precipitation. This is outside the scope of this paper and is left for further work.

This subsection only deals with the effect of certain

sources of error on the forecasts themselves, not on forecasting skill. The next section discusses the SSEF's predictability of the atmospheric state as evaluated against radar rainfall estimates.

b. CAPS SSEF’s model predictability of theatmospheric state

This methodology can also be applied to forecast–observation pairs, by computing the power ratio

FIG. 7. (a) The decorrelation scale as a function of forecast lead time for the entire ensemble (black) and for two pairs of models: CN–N2 (blue, IC–LBC perturbations) and CN–C0 (orange, radar DA) averaged over all 22 cases. The best power-law fits for each of the three thick lines are on the bottom-right side of the graph. The shading represents the variability around the mean values of λ0. (b) Scatterplot of λ0 values for CN–N1 and CN–N2 for different lead times as indicated on the figure. Each point represents a different case. (c) As in (b), except comparing CN–C0 and CN–N2.


R(λ) = [P_{Radar}(λ) + P_{X_i}(λ)] / P_{Radar+X_i}(λ) ,

where X_i is the rainfall field corresponding to an ensemble member. Since P(λ) gives the variance at each scale, the comparison of model output with observations using the equation above can be seen as a measure of model skill. The decorrelation scale curves λ0(t) derived from the power ratios for each radar data–assimilating ensemble member and averaged over all days are shown in Fig. 8a (gray lines), whereas the black line represents the average over all the ensemble members. This line will be used as the representative λ0(t) curve for the predictability of the model state by this radar data–assimilating ensemble. The decorrelation scale increases rapidly during the first six forecast hours and increases more as a power law afterward. The rapid increase at the beginning seems associated with the loss of the effect of the radar DA. This high-resolution, convection-allowing, radar data–assimilating ensemble shows no predictability of the atmospheric state at meso-β scales after the first 3 h.

To better characterize the effect of assimilating radar

observations, Fig. 8b shows λ0(t) for radar–CN (blue line), radar–C0 (orange line), and CN–C0 (black line). The figure also indicates the power-law fit for each line, even though for the radar–CN curve the fit is poor during the first 6 h, as the growth is faster than the power law. As indicated by this figure, the non–radar data–assimilating member C0 had no predictive skill at scales lower than 300 km throughout the forecast lead time. Also, after the first 15 h, the C0 and CN members are similar in terms of the range of scales they can predict. As mentioned before, radar data assimilation affects the forecast at all scales

lower than 200 km throughout the forecast lead time, as suggested by λ0 corresponding to CN–C0; but, as shown in Fig. 8b, a model without radar DA shows no predictability of the atmospheric state at scales lower than 300 km throughout the forecast period, while radar DA leads to some gain in predictive ability at scales lower than 300 km during the first 15 h, on average, for the 22 cases. However, we note that all three lines lie within the error bars of each other after the first 3 h.

Our results are comparable to Roberts (2008), who

found that even at the beginning of the forecast, useful skill is exhibited only at scales larger than 100 km for forecasts of localized rainfall. This result appears grim in terms of the ability of convection-allowing models to predict precipitation. However, this metric does not measure the actual predictive skill that these models have at scales where they exhibit some predictability. In fact, our results suggest that to properly evaluate the importance of radar data assimilation for model skill, all features in the precipitation field occurring at scales lower than λ0 should be filtered out before performing the evaluation. The recent study by Stratman et al. (2013) shows that the positive effect of radar DA in terms of skill is manifested not at small scales but at scales between 40 and 320 km. They suggest that the assimilation of radar observations with a 3DVAR cloud analysis results in gains in skill only at larger scales and out to 12 h.
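Filtering out the unpredictable scales before verification, as suggested above, can be done with a spectral low-pass filter. The sketch below is our illustration under stated assumptions (a sharp Fourier cutoff and a 4-km grid; smoother filter shapes are equally valid): it zeroes every Fourier mode whose wavelength is smaller than λ0.

```python
import numpy as np

def lowpass(field, lam0_km, dx=4.0):
    """Remove all spatial scales smaller than lam0_km from a 2D field
    by zeroing Fourier modes with wavelength < lam0_km (sharp cutoff)."""
    ny, nx = field.shape
    ky = np.fft.fftfreq(ny, d=dx)                     # cycles per km
    kx = np.fft.fftfreq(nx, d=dx)
    k = np.hypot(*np.meshgrid(ky, kx, indexing="ij"))
    keep = k <= 1.0 / lam0_km                         # keep wavelengths >= lam0
    return np.real(np.fft.ifft2(np.fft.fft2(field) * keep))
```

Applying `lowpass` to both the forecast and the verifying observations before computing a skill score restricts the comparison to the scales at which the forecast retains some predictability.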

c. Predictability by EP and LP

The simplest model of prediction is EP, which simply states that any future state of a system is identical to the current state. In the case of precipitation, as mentioned by Germann et al. (2006), an EP precipitation

FIG. 8. (a) The decorrelation scale as a function of forecast lead time for the radar and each of the ensemble members (gray lines) averaged over all cases. The black line represents the mean over all ensemble members. The gray shading shows the variability around the mean (±σ). (b) The decorrelation scale for three pairs of fields: CN–C0 (black), CN–radar (blue), and C0–radar (orange), averaged over all cases. The shading shows the variability around the mean. The best power-law fits are indicated in the bottom-left corner.


forecast can be obtained from the current radar precipitation map:

n(t0 + t) = c(t0) ,

where n(t0 + t) is the 2D rainfall forecast at lead time t and c(t0) is the observed 2D rainfall field at initial time t0. Therefore, looking at the decorrelation between EP forecasts and observations is equivalent to computing the temporal autocorrelation of observed rainfall. Figure 9a shows the power ratios for EP–radar for EP initialized at 0000 UTC for lead times up to 12 h and averaged over all cases. The clear progression of colored lines indicates that here, as well, λ0 increases with lead time. Figure 9b shows λ0(t) averaged over all cases (black line) together with the uncertainty around the mean (gray shading), the λ0(t) derived from the average power ratio curves in Fig. 9a (blue line), and the power-law fit to the average λ0(t) (red line). Indeed, λ0 increases with forecast lead time following a power law λ0(t) = 171 t^0.7, resulting in a complete loss of predictability at meso-β scales after the first forecast hour and confirming that EP is indeed a poor method of predicting rainfall. Finally, Fig. 9c shows λ0 for forecasts initialized at different times of the day (colored lines). It appears from the progression of the lines that the decorrelation scale is lower and changes more slowly with forecast lead time for the forecasts initialized around 0000 UTC (dark red to dark blue lines). For the forecasts initialized later in the day, the decorrelation scale is slightly higher, but the rate of change with time is similar to the forecasts initialized around 0000 UTC (blue and green lines). The change of λ0(t) with time is largest for the forecasts initialized around 1800 UTC. According to Surcel et al. (2010), during spring 2008, the average diurnal cycle of precipitation indicates that 1800 UTC is marked by the initiation of precipitation on the lee side of the Rockies and hence by the rapid evolution of the rainfall field.
This can explain both the poorer performance of EP and the greater variability of the performance with lead time.

A better model for precipitation nowcasting is LP.

Radar-based extrapolation algorithms using this principle are commonly used for very-short-term forecasting (0–6 h). Figure 10 shows the power ratios as a function of scale (Fig. 10a) and the decorrelation scale as a function of lead time (Fig. 10b) averaged over all cases for hourly accumulation forecasts initialized at 0000 UTC and produced by MAPLE. The decorrelation scale is increasing with lead time for LP forecasts as well, following a power law λ0(t) = 111 t^0.7, but it increases less rapidly than for EP forecasts. However, after 2 h, scales smaller than 200 km are no longer predictable.
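A minimal illustration of the LP idea (not the MAPLE algorithm, which uses variational echo tracking and semi-Lagrangian advection) estimates a single domain-wide motion vector from two consecutive radar maps via FFT cross-correlation and advects the latest map with that frozen motion. The periodic shift with `np.roll` is a deliberate simplification.

```python
import numpy as np

def estimate_shift(prev, curr):
    """Uniform motion (dy, dx) in pixels that best maps prev onto curr,
    taken from the peak of the circular FFT cross-correlation."""
    xcorr = np.real(np.fft.ifft2(np.fft.fft2(curr) * np.conj(np.fft.fft2(prev))))
    dy, dx = np.unravel_index(np.argmax(xcorr), xcorr.shape)
    ny, nx = prev.shape
    # fold shifts larger than half the domain back to negative displacements
    return (int(dy) if dy <= ny // 2 else int(dy) - ny,
            int(dx) if dx <= nx // 2 else int(dx) - nx)

def lp_forecast(prev, curr, n_steps):
    """Lagrangian persistence: advect the latest map with the diagnosed
    (frozen) motion for n_steps time steps (periodic boundaries)."""
    dy, dx = estimate_shift(prev, curr)
    return np.roll(curr, (n_steps * dy, n_steps * dx), axis=(0, 1))
```

EP, by contrast, corresponds to `n_steps = 0`: the latest map is issued unchanged, so the forecast decorrelates from the observations as fast as the rainfall field itself evolves.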

FIG. 9. (a) Power ratios for EP–radar for hourly rainfall accumulations for all different lead times (0–7 h, colors), averaged over all 22 cases for forecasts initialized at 0000 UTC. (b) The decorrelation scale averaged over all cases (black line), power-law fit to the average λ0 (red line), variability around the average λ0 (gray shading), and λ0 derived from the average power ratios in (a) (blue line) for forecasts initialized at 0000 UTC. (c) Average λ0 values for forecasts initialized at all hours of the day (see color shading for times).


Figure 10c shows λ0(t) for forecasts initialized every hour (colored lines). From the differences between the lines, it appears that λ0 increases more rapidly with time for the forecasts initialized later in the day (1600–2300 UTC), even though there is some variability around the lines as shown in Fig. 10b. It seems reasonable for the temporal evolution of λ0 to show sensitivity to initialization

time, as it was shown previously by Berenguer et al. (2012) that predictability by MAPLE is affected by the diurnal cycle of precipitation.

The predictability of precipitation by LP has been

thoroughly studied by Germann and Zawadzki (2002, 2004) and Germann et al. (2006). Despite using a different methodology, Germann et al. (2006) nonetheless obtained results similar to ours, with predictability estimates of about 3 h for scales of O(100) km, as quantified in terms of the lifetime of bandpass scales.

d. The effect of observational uncertainty on predictability estimates

The entire analysis presented in this paper is based on the comparison of forecasts to a particular set of radar-derived quantitative precipitation estimates (QPEs) described in section 2b. While this verification dataset has been chosen for its quality as mentioned by Surcel et al. (2010), it is still reasonable to question how the uncertainty of these products affects the estimates of the decorrelation scale. Radar QPE is affected by many sources of error, and a proper error characterization is complicated, as shown by Berenguer and Zawadzki (2008, and references therein). Therefore, rather than attempting to characterize the error of the radar QPEs used here, we investigate the effect of observational uncertainty simply by comparing our verification dataset to another set of rainfall estimates. The additional verification dataset is NCEP's Stage IV multisensor precipitation product (Baldwin and Mitchell 1997), available as hourly rainfall accumulations on a 4-km polar-stereographic grid. The Stage IV data were obtained from the NCEP website (www.emc.ncep.noaa.gov/mmb/ylin/pcpanl/) and were remapped on the analysis grid (section 2) using nearest-neighbor interpolation. The two precipitation datasets are compared in terms of the decorrelation scale.

Figure 11a shows the power ratios R(λ) = [P_{Radar}(λ) +

P_{StageIV}(λ)] / P_{Radar+StageIV}(λ) between radar precipitation and Stage IV precipitation for different times, averaged over the 22 cases. The average R(λ) is never 1, but there are individual cases for which R(λ) = 1. Figure 11b shows the average λ0(t) for those cases. According to this figure, it is impossible to infer anything about precipitation predictability at scales lower than 12 km because of the uncertainty in the observed precipitation fields.
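The nearest-neighbor remapping mentioned above can be sketched as follows. This is an illustration only: coordinates are treated as points in a plane, whereas a real remap from Stage IV's polar-stereographic grid would first project both grids into a common coordinate system, and the brute-force search (O(N·M) memory) would be replaced by a KD-tree for operational grid sizes.

```python
import numpy as np

def nearest_neighbor_remap(src_x, src_y, src_vals, dst_x, dst_y):
    """Remap src_vals, given at points (src_x, src_y), onto the target
    grid (dst_x, dst_y) by nearest-neighbor lookup.

    Brute force: for every target point, pick the value at the closest
    source point. Fine for an illustration; use a KD-tree in practice.
    """
    src = np.column_stack([src_x.ravel(), src_y.ravel()])
    dst = np.column_stack([dst_x.ravel(), dst_y.ravel()])
    d2 = ((dst[:, None, :] - src[None, :, :]) ** 2).sum(axis=2)
    return src_vals.ravel()[d2.argmin(axis=1)].reshape(dst_x.shape)
```

Nearest-neighbor remapping preserves the original accumulation values (no smoothing), which matters when the remapped field is subsequently used in a spectral comparison.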

5. Discussion on the comparison between various forecast methods

Figure 12a summarizes the results previously presented about the predictability of precipitation for lead

FIG. 10. As in Fig. 9, but for LP (MAPLE) forecasts.


times of 0–30 h. This figure shows λ0(t) corresponding to the ensemble (predictability of the model state; green), to the radar–radar-DA model (model predictability of the atmospheric state for a model with radar DA; black), and to the radar–non-radar-DA model (model predictability of the atmospheric state for a model without radar DA; purple). The nonshaded areas correspond to regions of a complete lack of predictability. As emphasized by this figure, after the effect of the radar DA wears out, there is no predictability of the atmospheric state by NWP at meso-γ and meso-β scales. Furthermore, after approximately 5 h, the radar data–assimilating and

the non–radar data–assimilating models become equivalent in terms of the range of scales they are unable to predict (black and purple lines are superimposed), and both types of models exhibit some predictability at scales larger than 300 km throughout the forecast period. However, as found by Stratman et al. (2013), the effect of radar DA seems to be present at larger scales for longer lead times. Future work should focus on determining the case dependence (and reasons thereof) of the effect of radar DA across scales.

The region in Fig. 12a between the two lines represents scales where there is some predictability of the

FIG. 11. (a) Power ratios for radar–Stage IV averaged over the 22 cases. The colored lines represent the different times every hour starting at 0000 UTC. (b) The decorrelation scale as a function of forecast lead time. The black line resulted from averaging the daily λ0, and the red line represents the fit λ0(t) = 12 to the black line. The gray shading represents the uncertainty (±σ) of the black line.

FIG. 12. Summary of the comparison of the different types of predictability (a) over a 30-h forecast period and (b) only for very short lead times, illustrated in terms of λ0(t) vs t. The different color curves correspond to the different types of predictability as described in the legend. The color shading represents the regions of some predictability (as compared to no predictability) of the kind described by that color (same as the curves). The thick black dashed curve corresponds to the decorrelation scale between two different sets of verification data (radar derived and Stage IV), and it is meant to illustrate the effect of the errors in the verification data on the results. All curves correspond to forecasts of hourly rainfall accumulations and are averaged over all 22 cases.


modeled state, but no predictability of the atmospheric state by NWP. Ideally, the two lines should lie together, as the truth, approximated here by observations, should be a sample of the ensemble probability density function. We have explained in the previous section that the evolution of λ0(t) for the predictability of the model state results mostly from the growth of IC–LBC errors and is consistent with the conceptual model of Zhang et al. (2007). But when it comes to λ0(t) between forecasts and observations, it is more difficult to even hypothesize what causes the evolution of λ0(t). It has been shown previously by Durran et al. (2013), Nuss and Miller (2001), and Bei and Zhang (2007) that small IC errors at the large scale significantly impact mesoscale predictability. In addition, Zhang et al. (2002) showed that simulations run with reanalyzed conventional observations resulted in an improvement in QPF skill with respect to the operational forecast. Therefore, it is possible that improving the way the analysis data are utilized could lead to improving the forecast, thus decreasing the values of λ0(t) between model and radar and bringing the black and green curves closer together. On the other hand, the assimilation of mesoscale observations does positively affect forecast skill, even though it is not clear whether this is simply the effect of assimilating reflectivity, which leads to the morphing of the model precipitation fields into the observed field at the initial time, or to radar DA actually changing the initial conditions. Our analysis results indicate that the average differences between the predictability of the model state and the model predictability of the atmospheric state are consistent from one case to another (not shown). Given this systematic behavior, and the results of Johnson et al.
(2014) on the minimal effect of adding small-scale perturbations to the IC–LBC perturbations derived from the NCEP SREF, we believe that more benefits may be gained by accounting for larger-scale IC uncertainties in the ensemble design, which hopefully would increase the values of the decorrelation scale between the ensemble members. Of course, another possible reason for the systematic difference between the predictability of the model state and the model predictability of the atmospheric state could be model bias. Therefore, our results highlight the need to perform experiments that focus on determining the reasons for this apparent lack of spread at meso-β scales.

Figure 12b focuses on the time range (0–6 h) of very-short-term precipitation forecasts and adds the decorrelation scale curves for LP and EP. This figure shows that after the first two hours, no forecasting method exhibits any predictability, as characterized with respect to radar observations, at meso-γ and meso-β scales.

Furthermore, while at the beginning of the forecast period the decorrelation scale corresponding to LP is lower than that of the model, with the crossover time being 3 h, both forecasting methods (NWP and LP) are very similar during this time. Also, both the CAPS SSEF and the MAPLE LP algorithm exhibit better predictability than simple EP in terms of the scales that they can predict. It is therefore clear that EP is no longer necessary as a baseline for evaluating precipitation forecasts from LP algorithms or from radar data–assimilating models, as these other methods consistently outperform EP.

The dashed lines in the two figures illustrate the effect

of the uncertainty in our verification data on the results, which is exhibited at scales smaller than 12 km. As this scale is very low compared to the model predictability limits, there is a large range of scales over which improvement is necessary before reaching this "observational" limit.

6. Conclusions

This paper builds on the results of Surcel et al. (2014) to propose and use a methodology for analyzing the scale dependence of the predictability of precipitation fields over the continental United States during spring 2008 by various forecasting methods. There have been many efforts to understand mesoscale predictability in the past few decades, and our study contributes to this by

(i) offering a quantitative measure of the evolution of the decorrelation scale, and hence of the range of scales at which a given method exhibits a lack of predictability, with forecast lead time;

(ii) computing this measure for precipitation forecasts and observations for a dataset of a reasonable size (22 cases during spring 2008), rather than for only a few cases, thus verifying and complementing the results obtained by previous predictability studies that used a case study approach (Walser et al. 2004; Zhang et al. 2002, 2003, 2006, 2007; Bei and Zhang 2007);

(iii) using the decorrelation scale to intercompare the predictability of the model state to the model predictability of the atmospheric state, hence providing a measure of ensemble consistency as a function of scale for a storm-scale ensemble; and

(iv) intercomparing the predictive ability of statistical and dynamical methods for short-term precipitation forecasting as a function of scale.

Our results show that for all forecasting methods there is a range of scales over which the method displays a complete lack of predictability of the atmospheric


state, the upper limit of which is the decorrelation scale λ0. For all forecasting methods, λ0 increases with forecast lead time. The rate of change of λ0 is fastest for EP, followed by LP. On the other hand, the decorrelation scale between radar and C0 has a constant value of about 300 km throughout the forecast lead time. Also, λ0 increases very rapidly during the first 5 h for radar and CN, showing that the effect of radar data assimilation in terms of improving predictability at smaller scales is rapidly washed out. On the other hand, comparison between CN and C0 shows that radar DA affects scales lower than 200 km throughout the forecast time. In agreement with previous studies (Roberts 2008; Germann et al. 2006; Surcel et al. 2014), none of the forecasting systems analyzed here show any predictability of precipitation at meso-γ and meso-β scales after the first 2 h. Therefore, to properly intercompare these methods in terms of QPF skill, the unpredictable scales should be filtered out before performing the verification.

The comparison among EP, LP forecasts, and radar DA

models, meant to complement the study of Berenguer et al. (2012), confirms that, given the better performance shown by radar DA models, a better baseline for model evaluation would be LP rather than EP.

On the other hand, we found that the uncertainties

currently accounted for in the CAPS SSEF appear not to generate sufficient spread at forecast hours less than 18 h at meso-β scales, as demonstrated by the difference in λ0 between the ensemble and the radar or model, and that this behavior was systematic for our dataset. Recent research has contributed greatly to the understanding of error growth in convection-allowing models (e.g., Zhang et al. 2002, 2003, 2007; Hohenegger et al. 2006; Walser et al. 2004), and the growth of IC–LBC perturbations in our study seems to be consistent with the error-growth model proposed by Zhang et al. (2007). However, significant case dependence of precipitation predictability has often been reported in the literature (e.g., Germann et al. 2006; Walser et al. 2004; Zhang et al. 2006; Hohenegger et al. 2006; Done et al. 2012), while our results indicate that the difference between the predictability of the model state and the model predictability of the atmospheric state (the difference between the green line and the purple and black lines in Fig. 12a) is consistent among the cases. Therefore, it is possible that this apparent lack of spread at meso-β scales could be a consequence of model bias (Clark et al. 2011) or of large-scale IC errors that are known to contribute most to forecasting skill (Durran et al. 2013).

The predictabilities discussed here correspond to

what Lorenz (1996) and later studies (e.g., Zhang et al. 2006; Bei and Zhang 2007; Melhauser and Zhang 2012) refer to as practical predictability. For example, the

value of the decorrelation scale between forecasts and observations is more likely due to large IC and model errors than to the amplification of small IC errors through nonlinear dynamics. In the case of the decorrelation scale between the ensemble members as well, the initial perturbations were derived from a regional-scale ensemble, and it can therefore be expected that the predictability limit for this ensemble might be different than that of an ensemble that samples only very small IC errors. The intrinsic predictability of the atmosphere would have an effect on the estimates of practical predictability if the model captures the appropriate nonlinear dynamics. For example, cases with intense moist convection (highly nonlinear processes) are usually more unpredictable from a practical point of view as well. On the other hand, even for cases that exhibit strong intrinsic predictability, it is possible that model deficiencies and analysis errors might lead to poor forecast results. Investigating the intrinsic predictability for our set of cases would have demanded setting up additional experiments, thus needing significant computational resources, and is therefore better suited for future work. However, the results that we have obtained qualitatively agree with those of Bei and Zhang (2007) and Melhauser and Zhang (2012), who looked at the relationship between practical and intrinsic predictability for two case studies. By reducing the IC errors considered in their ensemble simulations, they noted linear gains in predictability, but they also found that the effect of moist convective processes on error growth sets an inherent predictability limit at the mesoscale.

Our study provides a quantitative estimate of the

range of spatial scale over which the very detailed in-formation that a forecasting method can provide is infact unpredictable given the errors both in the modelingapproach and in the initial conditions. However, thisdecorrelation scale focuses on the agreement betweenentire two-dimensional precipitation maps, and there-fore it is sensitive to displacement errors. In an opera-tional setting, forecasters might still find useful theinformation provided by a model in terms of stormcharacteristics at scales lower than 200 km, and ourmethodology does not account for these cases. How-ever, our methodology is useful for the many applica-tions that use all of the information in a two-dimensionalQPF, such as blending applications, ensemble averag-ing, or hydrological modeling. For these applications, itis useful to know that scales lower than l0 are un-predictable and should therefore be treated in a sto-chastic manner.Finally, we remind the reader that the results pre-

sented in this study are dependent on the forecastingsystems under study. We are in the process of extending

232 JOURNAL OF THE ATMOSPHER IC SC IENCES VOLUME 72

the methodology herein to the Spring Experiment dataof 2009–13. Our preliminary results indicate that thesensitivity to the IC–LBC perturbations analyzed here isconsistent from year to year and that the decorrelationscale shows great sensitivity to the type of perturbations;that is, different errors propagate differently upscale. Apaper describing these new findings is in preparation.
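The decorrelation-scale diagnostic discussed above can be sketched in code. The following is a minimal illustration, not the implementation used in this study: it assumes a DCT-based variance spectrum in the spirit of Denis et al. (2002), and the band count, the 0.95 decorrelation threshold, and the 4-km grid spacing are our own illustrative choices. A power ratio near 1 in a wavenumber band indicates that two fields are fully decorrelated at that scale; λ0 is then the scale below which all bands are decorrelated.

```python
import numpy as np
from scipy.fft import dctn


def variance_spectrum(field, edges):
    """Band-integrated variance spectrum of a 2D field from its DCT
    coefficients (spectral decomposition after Denis et al. 2002)."""
    ny, nx = field.shape
    coeffs = dctn(field, norm="ortho")
    # Normalized radial wavenumber of each DCT mode (0 ... ~sqrt(2)).
    k = np.hypot(np.arange(nx)[None, :] / nx, np.arange(ny)[:, None] / ny)
    idx = np.digitize(k.ravel(), edges) - 1
    nbins = len(edges) - 1
    return np.bincount(idx, weights=(coeffs ** 2).ravel(),
                       minlength=nbins)[:nbins]


def power_ratio(x, y, nbins=30):
    """Per-band ratio of the variance of the difference field to the sum
    of the individual variances: 1 -> fully decorrelated at that scale,
    0 -> identical fields.  Empty bands are returned as NaN."""
    edges = np.linspace(0.0, np.sqrt(2.0) + 1e-9, nbins + 1)
    px = variance_spectrum(x, edges)
    py = variance_spectrum(y, edges)
    pd = variance_spectrum(x - y, edges)
    denom = px + py
    ratio = np.divide(pd, denom, out=np.full_like(pd, np.nan),
                      where=denom > 0)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return ratio, centers


def decorrelation_scale(ratio, centers, dx=4.0, threshold=0.95):
    """Wavelength (same units as dx) such that every smaller scale has
    ratio >= threshold, i.e., is effectively fully decorrelated.  The
    threshold is an assumed tuning parameter, not the paper's value."""
    correlated = np.nonzero(ratio < threshold)[0]
    if correlated.size == 0:
        return np.inf  # no correlation anywhere: decorrelated at all scales
    # Largest wavenumber (smallest scale) that still shows correlation;
    # a DCT mode of normalized wavenumber k has wavelength 2*dx/k.
    return 2.0 * dx / centers[correlated.max()]
```

Applied to pairs of ensemble members, this kind of diagnostic yields the member-to-member λ0; applied to a forecast and the radar analysis, it yields the forecast-versus-observation λ0. Scales below the estimated λ0 carry no deterministic information and, as argued above, are better handled stochastically (e.g., by filtering them out before blending or ensemble averaging).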

Acknowledgments. We are greatly indebted to Ming Xue and Fanyou Kong from CAPS for providing us the ensemble precipitation forecasts. The CAPS SSEF forecasts were produced mainly under the support of a grant from the NOAA CSTAR program, and the 2008 ensemble forecasts were produced at the Pittsburgh Supercomputing Center. Kevin Thomas, Jidong Gao, Keith Brewster, and Yunheng Wang of CAPS made significant contributions to the forecasting efforts. M. Surcel acknowledges the support received from the Fonds de Recherche du Québec–Nature et Technologies (FRQNT) in the form of a graduate scholarship. This work was also funded by the Natural Sciences and Engineering Research Council of Canada (NSERC) and Hydro-Québec through the IRC program. We acknowledge the comments and suggestions of three anonymous reviewers that helped improve the paper.

REFERENCES

Atencia, A., and Coauthors, 2010: Improving QPF by blending techniques at the Meteorological Service of Catalonia. Nat. Hazards Earth Syst. Sci., 10, 1443–1455, doi:10.5194/nhess-10-1443-2010.

Baldwin, M. E., and K. E. Mitchell, 1997: The NCEP hourly multisensor U.S. precipitation analysis for operations and GCIP research. Preprints, 13th Conf. on Hydrology, Long Beach, CA, Amer. Meteor. Soc., 54–55.

Bei, N., and F. Zhang, 2007: Impacts of initial condition errors on mesoscale predictability of heavy precipitation along the Mei-Yu front of China. Quart. J. Roy. Meteor. Soc., 133, 83–99, doi:10.1002/qj.20.

Berenguer, M., and I. Zawadzki, 2008: A study of the error covariance matrix of radar rainfall estimates in stratiform rain. Wea. Forecasting, 23, 1085–1101, doi:10.1175/2008WAF2222134.1.

——, M. Surcel, I. Zawadzki, M. Xue, and F. Kong, 2012: The diurnal cycle of precipitation from continental radar mosaics and numerical weather prediction models. Part II: Intercomparison among numerical models and with nowcasting. Mon. Wea. Rev., 140, 2689–2705, doi:10.1175/MWR-D-11-00181.1.

Casati, B., G. Ross, and D. B. Stephenson, 2004: A new intensity-scale approach for the verification of spatial precipitation forecasts. Meteor. Appl., 11, 141–154, doi:10.1017/S1350482704001239.

Cintineo, R. M., and D. J. Stensrud, 2013: On the predictability of supercell thunderstorm evolution. J. Atmos. Sci., 70, 1993–2011, doi:10.1175/JAS-D-12-0166.1.

Clark, A. J., and Coauthors, 2011: Probabilistic precipitation forecast skill as a function of ensemble size and spatial scale in a convection-allowing ensemble. Mon. Wea. Rev., 139, 1410–1418, doi:10.1175/2010MWR3624.1.

Craig, G. C., C. Keil, and D. Leuenberger, 2012: Constraints on the impact of radar rainfall data assimilation on forecasts of cumulus convection. Quart. J. Roy. Meteor. Soc., 138, 340–352, doi:10.1002/qj.929.

Denis, B., J. Côté, and R. Laprise, 2002: Spectral decomposition of two-dimensional atmospheric fields on limited-area domains using the discrete cosine transform (DCT). Mon. Wea. Rev., 130, 1812–1829, doi:10.1175/1520-0493(2002)130<1812:SDOTDA>2.0.CO;2.

Done, J. M., G. C. Craig, S. L. Gray, and P. A. Clark, 2012: Case-to-case variability of predictability of deep convection in a mesoscale model. Quart. J. Roy. Meteor. Soc., 138, 638–648, doi:10.1002/qj.943.

Du, J., and Coauthors, 2009: NCEP Short Range Ensemble Forecast (SREF) system upgrade in 2009. 23rd Conf. on Weather Analysis and Forecasting/19th Conf. on Numerical Weather Prediction, Omaha, NE, Amer. Meteor. Soc., 4A.4. [Available online at https://ams.confex.com/ams/23WAF19NWP/techprogram/paper_153264.htm.]

Dudhia, J., 1989: Numerical study of convection observed during the winter monsoon experiment using a mesoscale two-dimensional model. J. Atmos. Sci., 46, 3077–3107, doi:10.1175/1520-0469(1989)046<3077:NSOCOD>2.0.CO;2.

Durran, D. R., P. A. Reinecke, and J. D. Doyle, 2013: Large-scale errors and mesoscale predictability in Pacific Northwest snowstorms. J. Atmos. Sci., 70, 1470–1487, doi:10.1175/JAS-D-12-0202.1.

Ebert, E. E., 2001: Ability of a poor man's ensemble to predict the probability and distribution of precipitation. Mon. Wea. Rev., 129, 2461–2480, doi:10.1175/1520-0493(2001)129<2461:AOAPMS>2.0.CO;2.

Ferrier, B. S., Y. Jin, Y. Lin, T. Black, E. Rogers, and G. DiMego, 2002: Implementation of a new grid-scale cloud and rainfall scheme in the NCEP Eta Model. Preprints, 15th Conf. on Numerical Weather Prediction, San Antonio, TX, Amer. Meteor. Soc., 280–283.

Gao, J. D., M. Xue, K. Brewster, and K. K. Droegemeier, 2004: A three-dimensional variational data analysis method with recursive filter for Doppler radars. J. Atmos. Oceanic Technol., 21, 457–469, doi:10.1175/1520-0426(2004)021<0457:ATVDAM>2.0.CO;2.

Germann, U., and I. Zawadzki, 2002: Scale-dependence of the predictability of precipitation from continental radar images. Part I: Description of the methodology. Mon. Wea. Rev., 130, 2859–2873, doi:10.1175/1520-0493(2002)130<2859:SDOTPO>2.0.CO;2.

——, and ——, 2004: Scale dependence of the predictability of precipitation from continental radar images. Part II: Probability forecasts. J. Appl. Meteor., 43, 74–89, doi:10.1175/1520-0450(2004)043<0074:SDOTPO>2.0.CO;2.

——, ——, and B. Turner, 2006: Predictability of precipitation from continental radar images. Part IV: Limits to prediction. J. Atmos. Sci., 63, 2092–2108, doi:10.1175/JAS3735.1.

Gilleland, E., D. Ahijevych, B. G. Brown, B. Casati, and E. E. Ebert, 2009: Intercomparison of spatial forecast verification methods. Wea. Forecasting, 24, 1416–1430, doi:10.1175/2009WAF2222269.1.

Hohenegger, C., and C. Schär, 2007a: Atmospheric predictability at synoptic versus cloud-resolving scales. Bull. Amer. Meteor. Soc., 88, 1783–1793, doi:10.1175/BAMS-88-11-1783.

——, and ——, 2007b: Predictability and error growth dynamics in cloud-resolving models. J. Atmos. Sci., 64, 4467–4478, doi:10.1175/2007JAS2143.1.

JANUARY 2015 SURCEL ET AL. 233

——, D. Lüthi, and C. Schär, 2006: Predictability mysteries in cloud-resolving models. Mon. Wea. Rev., 134, 2095–2107, doi:10.1175/MWR3176.1.

Hong, S.-Y., and J.-O. J. Lim, 2006: The WRF Single-Moment 6-Class Microphysics Scheme (WSM6). J. Korean Meteor. Soc., 42, 129–151.

Hu, M., M. Xue, and K. Brewster, 2006a: 3DVAR and cloud analysis with WSR-88D level-II data for the prediction of the Fort Worth, Texas, tornadic thunderstorms. Part I: Cloud analysis and its impact. Mon. Wea. Rev., 134, 675–698, doi:10.1175/MWR3092.1.

——, ——, J. D. Gao, and K. Brewster, 2006b: 3DVAR and cloud analysis with WSR-88D level-II data for the prediction of the Fort Worth, Texas, tornadic thunderstorms. Part II: Impact of radial velocity analysis via 3DVAR. Mon. Wea. Rev., 134, 699–721, doi:10.1175/MWR3093.1.

Janjić, Z. I., 2001: Nonsingular implementation of the Mellor–Yamada level 2.5 scheme in the NCEP Meso model. NCEP Office Note 437, NOAA/NWS, 61 pp. [Available online at www.emc.ncep.noaa.gov/officenotes/newernotes/on437.pdf.]

——, 2003: A nonhydrostatic model based on a new approach. Meteor. Atmos. Phys., 82, 271–285, doi:10.1007/s00703-001-0587-6.

Johnson, A., and Coauthors, 2014: Multiscale characteristics and evolution of perturbations for warm season convection-allowing precipitation forecasts: Dependence on background flow and method of perturbation. Mon. Wea. Rev., 142, 1053–1073, doi:10.1175/MWR-D-13-00204.1.

Kalnay, E., 2003: Atmospheric Modeling, Data Assimilation, and Predictability. Cambridge University Press, 341 pp.

Kober, K., G. C. Craig, C. Keil, and A. Dörnbrack, 2012: Blending a probabilistic nowcasting method with a high-resolution numerical weather prediction ensemble for convective precipitation forecasts. Quart. J. Roy. Meteor. Soc., 138, 755–768, doi:10.1002/qj.939.

Kong, F., and Coauthors, 2008: Real-time storm-scale ensemble forecast experiment—Analysis of 2008 Spring Experiment data. Preprints, 24th Conf. on Severe Local Storms, Savannah, GA, Amer. Meteor. Soc., 12.3. [Available online at https://ams.confex.com/ams/24SLS/techprogram/paper_141827.htm.]

Laroche, S., and I. Zawadzki, 1995: Retrievals of horizontal winds from single-Doppler clear-air data by methods of cross correlation and variational analysis. J. Atmos. Oceanic Technol., 12, 721–738, doi:10.1175/1520-0426(1995)012<0721:ROHWFS>2.0.CO;2.

Lin, C., S. Vasic, A. Kilambi, B. Turner, and I. Zawadzki, 2005: Precipitation forecast skill of numerical weather prediction models and radar nowcasts. Geophys. Res. Lett., 32, L14801, doi:10.1029/2005GL023451.

Lorenz, E. N., 1963: Deterministic nonperiodic flow. J. Atmos. Sci., 20, 130–141, doi:10.1175/1520-0469(1963)020<0130:DNF>2.0.CO;2.

——, 1996: Predictability—A problem partly solved. Proc. Seminar on Predictability, Vol. 1, Reading, United Kingdom, ECMWF, 1–18.

Melhauser, C., and F. Zhang, 2012: Practical and intrinsic predictability of severe and convective weather at the mesoscale. J. Atmos. Sci., 69, 3350–3371, doi:10.1175/JAS-D-11-0315.1.

Mellor, G. L., and T. Yamada, 1982: Development of a turbulence closure model for geophysical fluid problems. Rev. Geophys., 20, 851–875, doi:10.1029/RG020i004p00851.

Noh, Y., W. G. Cheon, S. Y. Hong, and S. Raasch, 2003: Improvement of the K-profile model for the planetary boundary layer based on large eddy simulation data. Bound.-Layer Meteor., 107, 401–427, doi:10.1023/A:1022146015946.

Nuss, W. A., and D. K. Miller, 2001: Mesoscale predictability under various synoptic regimes. Nonlinear Processes Geophys., 8, 429–438, doi:10.5194/npg-8-429-2001.

Palmer, T. N., 1993: Extended-range atmospheric prediction and the Lorenz model. Bull. Amer. Meteor. Soc., 74, 49–65, doi:10.1175/1520-0477(1993)074<0049:ERAPAT>2.0.CO;2.

Roberts, N. M., 2008: Assessing the spatial and temporal variation in the skill of precipitation forecasts from an NWP model. Meteor. Appl., 15, 163–169, doi:10.1002/met.57.

——, and H. W. Lean, 2008: Scale-selective verification of rainfall accumulations from high-resolution forecasts of convective events. Mon. Wea. Rev., 136, 78–97, doi:10.1175/2007MWR2123.1.

Skamarock, W. C., and Coauthors, 2008: A description of the Advanced Research WRF version 3. NCAR Tech. Note NCAR/TN-475+STR, 113 pp., doi:10.5065/D68S4MVH.

Stensrud, D. J., J. W. Bao, and T. T. Warner, 2000: Using initial condition and model physics perturbations in short-range ensemble simulations of mesoscale convective systems. Mon. Wea. Rev., 128, 2077–2107, doi:10.1175/1520-0493(2000)128<2077:UICAMP>2.0.CO;2.

Stratman, D. R., M. C. Coniglio, S. E. Koch, and M. Xue, 2013: Use of multiple verification methods to evaluate forecasts of convection from hot- and cold-start convection-allowing models. Wea. Forecasting, 28, 119–138, doi:10.1175/WAF-D-12-00022.1.

Surcel, M., M. Berenguer, and I. Zawadzki, 2010: The diurnal cycle of precipitation from continental radar mosaics and numerical weather prediction models. Part I: Methodology and seasonal comparison. Mon. Wea. Rev., 138, 3084–3106, doi:10.1175/2010MWR3125.1.

——, I. Zawadzki, and M. K. Yau, 2014: On the filtering properties of ensemble averaging for storm-scale precipitation forecasts. Mon. Wea. Rev., 142, 1093–1105, doi:10.1175/MWR-D-13-00134.1.

Tao, W. K., and Coauthors, 2003: Microphysics, radiation and surface processes in the Goddard Cumulus Ensemble (GCE) model. Meteor. Atmos. Phys., 82, 97–137, doi:10.1007/s00703-001-0594-7.

Thompson, G., R. M. Rasmussen, and K. Manning, 2004: Explicit forecasts of winter precipitation using an improved bulk microphysics scheme. Part I: Description and sensitivity analysis. Mon. Wea. Rev., 132, 519–542, doi:10.1175/1520-0493(2004)132<0519:EFOWPU>2.0.CO;2.

Turner, B. J., I. Zawadzki, and U. Germann, 2004: Predictability of precipitation from continental radar images. Part III: Operational nowcasting implementation (MAPLE). J. Appl. Meteor., 43, 231–248, doi:10.1175/1520-0450(2004)043<0231:POPFCR>2.0.CO;2.

Walser, A., D. Lüthi, and C. Schär, 2004: Predictability of precipitation in a cloud-resolving model. Mon. Wea. Rev., 132, 560–577, doi:10.1175/1520-0493(2004)132<0560:POPIAC>2.0.CO;2.

Wu, D. C., Z. Y. Meng, and D. C. Yan, 2013: The predictability of a squall line in south China on 23 April 2007. Adv. Atmos. Sci., 30, 485–502, doi:10.1007/s00376-012-2076-x.

Xue, M., D. H. Wang, J. D. Gao, K. Brewster, and K. K. Droegemeier, 2003: The Advanced Regional Prediction System (ARPS), storm-scale numerical weather prediction and data assimilation. Meteor. Atmos. Phys., 82, 139–170, doi:10.1007/s00703-001-0595-6.

——, and Coauthors, 2008: CAPS realtime storm-scale ensemble and high-resolution forecasts as part of the NOAA Hazardous Weather Testbed 2008 Spring Experiment. Preprints, 24th Conf. on Severe Local Storms, Savannah, GA, Amer. Meteor. Soc., 12.2. [Available online at https://ams.confex.com/ams/24SLS/techprogram/paper_142036.htm.]

Zhang, F. Q., C. Snyder, and R. Rotunno, 2002: Mesoscale predictability of the "surprise" snowstorm of 24–25 January 2000. Mon. Wea. Rev., 130, 1617–1632, doi:10.1175/1520-0493(2002)130<1617:MPOTSS>2.0.CO;2.

——, ——, and ——, 2003: Effects of moist convection on mesoscale predictability. J. Atmos. Sci., 60, 1173–1185, doi:10.1175/1520-0469(2003)060<1173:EOMCOM>2.0.CO;2.

——, A. M. Odins, and J. W. Nielsen-Gammon, 2006: Mesoscale predictability of an extreme warm-season precipitation event. Wea. Forecasting, 21, 149–166, doi:10.1175/WAF909.1.

——, N. F. Bei, R. Rotunno, C. Snyder, and C. C. Epifanio, 2007: Mesoscale predictability of moist baroclinic waves: Convection-permitting experiments and multistage error growth dynamics. J. Atmos. Sci., 64, 3579–3594, doi:10.1175/JAS4028.1.

Zhang, J., K. Howard, and J. J. Gourley, 2005: Constructing three-dimensional multiple-radar reflectivity mosaics: Examples of convective storms and stratiform rain echoes. J. Atmos. Oceanic Technol., 22, 30–42, doi:10.1175/JTECH-1689.1.

