1
THE WINTER STORM RECONNAISSANCE PROGRAM OF THE US NATIONAL WEATHER SERVICE
Zoltan Toth
GSD/ESRL/OAR/NOAA (formerly at EMC/NCEP/NWS/NOAA)
Acknowledgements:
Yucheng Song – Plurality at EMC
Sharan Majumdar – U. Miami
Istvan Szunyogh – Texas A&M U.
Craig Bishop – NRL
Rolf Langland – NRL
THORPEX Symposium, Sept 14-18, 2009, Monterey, CA
2
OUTLINE / SUMMARY
• History
  – Outgrowth of FASTEX & NORPEX research
  – Operationally implemented at NWS in 2001
• Contributions / documentation
  – Community effort
  – Refereed and other publications; rich information on the web
• Highlights
  – Operational procedures for case selection and ETKF sensitivity calculations
  – Positive results consistent from year to year
• Open questions
  – Does operational targeting have economic benefits?
  – Can similar or better results be achieved with cheaper observing systems?
  – What are the limitations of current techniques?
3
HISTORY OF WSR
• Sensitivity calculation method
  – Ensemble Transform (ET) method developed around 1996
• Field tests
  – FASTEX – 1997, Atlantic
    • Impact from sensitive areas compared with that from non-sensitive areas (“null” cases)
  – NORPEX – 1998, Pacific
    • Comparison with adjoint methods
  – CALJET, PACJET, WC-TOST, ATReC, AMMA, T-PARC
• WSR
  – 1999 – First test in research environment
  – 2000 – Pre-implementation test
  – 2001 – Full operational implementation
4
CONTRIBUTIONS
• Craig Bishop (NASA, PSU, NRL)
  – ET & ETKF method development
• Sharan Majumdar (PSU, U. Miami)
  – ETKF method development and implementation
• Rolf Langland (NRL), Kerry Emanuel (MIT)
  – Field testing and comparisons in FASTEX, NORPEX, T-PARC
• Istvan Szunyogh (UCAR Scientist at NCEP, U. MD, Texas A&M U.)
  – Operational implementation, impact analysis, dynamics of data impact
• Yucheng Song (Plurality at EMC/NCEP/NWS/NOAA)
  – Updates, maintenance, coordination
• Observations
  – NOAA Aircraft Operations Center (G-IV)
  – US Air Force Reserve (C-130s)
• Operations
  – Case selection by NWS forecasters (NCEP/HPC, Regions)
  – Decision making by Senior Duty Meteorologists (SDM)
5
DOCUMENTATION
• Papers (refereed and non-refereed)
  – Methods
    • ET – Bishop & Toth
    • ETKF – Bishop et al., Majumdar et al.
  – Field tests
    • Langland et al. – FASTEX
    • Langland et al. – NORPEX
    • Szunyogh et al. – FASTEX
    • Szunyogh et al. – NORPEX
    • Song et al. – T-PARC (in preparation)
  – Operational implementation
    • Toth et al. – 2 papers
  – WSR results
    • Szunyogh et al.
    • Toth et al. (in preparation)
• Web
  – Details on procedures
  – Detailed documentation for each case in WSR99-09 (11 years, ~200+ cases)
    • Identification of threatening high impact forecast events
    • Sensitivity calculation results
    • Flight request
    • Data impact analysis
6
OPERATIONAL PROCEDURES
• Case selection
  – Forecaster input – time and location of high impact event
    • Based on perceived threat and forecast uncertainty
  – SDM compiles a daily prioritized list of cases for which targeted data may be collected
• Ensemble-based sensitivity calculations (see the sketch after this list)
  – Forward assessment
    • Predict impact of targeted data from predesigned flight tracks
  – Backward sensitivity
    • Statistical analysis of forward results for selected verification cases
• Decision process
  – SDM evaluates sensitivity results
    • Considers predicted impact, priority of cases, and available resources
  – Predesigned flight track number, or a no-flight decision, for the next day
  – Outlook for flight / no flight for the day after next
• Observations
  – Dropsondes from manned aircraft flying over predesigned tracks
    • Aircraft based in Alaska (Anchorage) and/or Hawaii (Honolulu)
  – Real time QC & transmission to NWP centers via GTS
• NWP
  – Assimilate all adaptively taken data along with regular data
  – Operational forecasts benefit from targeted data
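For orientation, below is a minimal, illustrative sketch (not the operational WSR code) of how an ETKF-style forward assessment could score a single predesigned flight track by its predicted impact on the verification region. The variable names, array shapes, and normalization (Xv, Yq, R, the K-1 factor) are assumptions made for this example.

```python
import numpy as np

def predicted_impact(Xv, Yq, R):
    """Predicted reduction of forecast error variance in the verification region
    if dropsondes along one candidate flight track were assimilated.

    Xv : (nv, K) ensemble forecast perturbations in the verification region,
         scaled by the chosen verification norm
    Yq : (p, K)  ensemble perturbations interpolated to the p candidate
         observation sites along the track
    R  : (p, p)  observation-error covariance for the targeted dropsondes
    """
    K = Xv.shape[1]
    # Ensemble-space analysis update, as in an ensemble transform Kalman filter:
    # Pa ~ Xv (I + Y^T R^-1 Y / (K-1))^-1 Xv^T / (K-1)
    A = np.eye(K) + Yq.T @ np.linalg.solve(R, Yq) / (K - 1)
    var_prior = np.trace(Xv @ Xv.T) / (K - 1)
    var_post = np.trace(Xv @ np.linalg.solve(A, Xv.T)) / (K - 1)
    return var_prior - var_post  # "signal variance": larger = bigger predicted impact
```

In use, each predesigned track would be scored this way and the ranking handed to the SDM's decision process alongside case priority and available resources.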
7
HIGHLIGHTS
• Case selection
  – No systematic evaluation available
  – Some errors in position / timing of threatening events in the 4-6 day forecast range
    • Affects stringent verification results
  – Need for objective, ensemble-based case selection guidance
• Sensitivity calculations
  – Predicted and observed impact from targeted data compared in a statistical sense
  – Sensitivity related to dynamics of the flow
    • Variations on daily and longer time scales (regime dependency)
• Decision process
  – Subjective due to limitations in sensitivity methods
    • Spurious correlations due to small sample size
• Observations
  – Aircraft dedicated to the operational observing program used
  – Are there lower-cost alternatives?
    • Thorough processing of satellite data
    • UAVs?
• NWP forecast improvement
  – Compare data assimilation / forecast results with and without use of targeted data
    • Cycled comparison for cumulative impact
    • One-at-a-time comparison for better tracking of impact dynamics in individual cases
8
[Figure: Predicted data impact vs. observed data impact vs. forecast improvement / degradation]
9
WHY TARGETING MAY WORK
Impact of data removal over the Pacific – Kelly et al. 2007

Figure 1. Winter Pacific forecasts: verification of mean 500 hPa geopotential rmse up to day 10 for SEAOUT (grey dotted) and SEAIN (black). Both experiments are verified against the ECMWF operational analysis. Verification regions: (a) North Pacific, (b) North America, (c) North Atlantic, and (d) Europe.
10
FORECAST EVALUATION RESULTS
Based on 10 years of experience (1999-2008)
• Error reduced in ~70% of targeted forecasts
  – Verified against observations at the preselected time / region
    • Wind & temperature profiles, surface pressure
• 10-20% rms error reduction in preselected regions
  – Verified against analysis fields
• 12-hour gain in predictability
  – 48-hr forecast with targeted data as skillful as 36-hr forecast without (see the sketch below)
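As a rough illustration of how these two headline numbers can be derived from paired verification statistics; the function names, the normalization by the control error, and the monotone error-growth assumption are mine, not from the WSR documentation.

```python
import numpy as np

def rms_error_reduction(err_with, err_without):
    """Percent rms error reduction; normalized here by the control ('without')
    error, one common convention but an assumption in this sketch."""
    return 100.0 * (err_without - err_with) / err_without

def predictability_gain(lead_hours, err_with, err_without):
    """Average hours of lead time gained: for each lead time, find when the
    control error curve first reaches the 'with targeted data' error level.
    Assumes both error curves grow monotonically with lead time."""
    gains = []
    for t, e in zip(lead_hours, err_with):
        t_ctl = np.interp(e, err_without, lead_hours)  # lead time at which control reaches error e
        gains.append(t - t_ctl)
    return float(np.mean(gains))

# Example: a 48-hr forecast with targeted data matching the control's 36-hr
# error level corresponds to a ~12-hour gain in predictability.
```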
11
WSR summary statistics for 2004-07

Variable          | # cases improved  | # cases neutral | # cases degraded
Surface pressure  | 21+20+13+25 = 79  | 0+1+0+0 = 1     | 14+9+14+12 = 49
Temperature       | 24+22+17+24 = 87  | 1+1+0+0 = 2     | 10+7+10+13 = 40
Vector wind       | 23+19+21+27 = 90  | 1+0+0+0 = 1     | 11+11+6+10 = 38
Humidity          | 22+19+13+24 = 78  | 0+0+0+0 = 0     | 13+11+14+13 = 51

Overall: 25+22+19+26 = 92 positive cases, 0+1+0+0 = 1 neutral case, 10+7+8+11 = 36 negative cases
71.3% improved, 27.9% degraded (see the arithmetic check below)
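A quick check of the arithmetic behind the headline percentages, using the per-winter overall case counts from the table above:

```python
improved = [25, 22, 19, 26]   # positive cases per winter, 2004-07
neutral  = [0, 1, 0, 0]
degraded = [10, 7, 8, 11]

total = sum(improved) + sum(neutral) + sum(degraded)    # 129 cases
print(f"{100 * sum(improved) / total:.1f}% improved")   # -> 71.3% improved
print(f"{100 * sum(degraded) / total:.1f}% degraded")   # -> 27.9% degraded
```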
[Figure: Wind vector error, 2007 – without targeted data vs. with targeted data]
12
Valentine's Day Storm, 2007

• Weather event with a large societal impact
• Each GFS run verified against its own analysis – 60 hr forecast
• Impact on surface pressure verification
• RMS error improvement: 19.7% (2.48 mb vs. 2.97 mb; see the verification sketch below)

Targeted in high impact weather area marked by the circle
[Figure: Surface pressure from analysis (hPa; solid contours); forecast improvement (hPa; red); forecast degradation (hPa; blue)]
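For reference, a minimal sketch of the per-case surface pressure verification step: a latitude-weighted rms error of a forecast field against the verifying analysis over the targeted region. The grid handling and variable names are illustrative assumptions, not the operational GFS verification code.

```python
import numpy as np

def rms_error(fcst, anal, lats):
    """Latitude-weighted rms error (same units as the fields, e.g. hPa) of a
    2-D forecast field against the verifying analysis on a regular lat/lon grid."""
    w = np.cos(np.deg2rad(lats))[:, None] * np.ones_like(fcst)   # area weights
    return float(np.sqrt(np.sum(w * (fcst - anal) ** 2) / np.sum(w)))
```

Comparing this number for runs with and without the targeted dropsondes gives the per-case improvement or degradation counted in the summary statistics.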
13
Average surface pressure forecast error reduction from WSR 2000
The average surface pressure forecast error reduction for Alaska (55°–70°N, 165°–140°W), the west coast (25°–50°N, 125°–100°W), the east coast (100°–75°W), and the lower 48 states of the United States (125°–75°W). Positive values show forecast improvement, while negative values show forecast degradation.
(From Szunyogh et al. 2002)
14
Forecast Verification for Wind (2007)
RMS error reduction vs. forecast lead time
10-20% rms error reduction in winds
Close to 12-hour gain in predictability
15
Forecast Verification for Temperature (2007)
RMS error reduction vs. forecast lead time
10-20% rms error reduction
Close to 12-hour gain in predictability
16
CONCLUSIONS
• High impact cases can be identified in advance using ensemble methods
• Data impact can be predicted in a statistical sense using ET / ETKF methods
  – Optimal observing locations / times for high impact cases can be identified
• It is possible to operationally conduct a targeted observational program
• Open questions remain
17
OPEN QUESTIONS
• Does operational targeting have economic benefits?
  – Cost-benefit analysis needs to be done for different regions – SERA research
    • Are there differences between the Pacific (North America) & the Atlantic (Europe)?
• Can similar or better results be achieved with cheaper observing systems?
  – Observing systems of opportunity
    • Targeted processing of satellite data
    • AMDAR
  – UAVs?
• Sensitivity to data assimilation techniques
  – Advanced DA methods extract more information from any data
    • Better analysis without targeted data
    • Larger impact from targeted data (relative to the improved analysis with standard data)?
• What are the limitations of current techniques?
  – What can be said beyond the linear regime?
    • Need a larger ensemble for that?
  – Can we quantify expected forecast improvement (not only impact)?
    • Distinction between predicting impact vs. predicting positive impact
  – Effect of sub-grid scales ignored so far
    • Ensemble displays more orderly dynamics than reality?
    • Overly confident signal propagation predictions?
18
DISCUSSION POINTS
How to explain the large apparent differences between various studies regarding the effectiveness of targeted observations?

• Case selection is important
  – Only about every third day is there a "good" case – targeting is not a cure for all diseases
  – If all cases are averaged, the signal is washed out by a factor of ~1/3
• Measure impact over the target area
  – The effect is expected in a specific area
  – If measured over a much larger area, the signal washes out by a factor of ~1/3
  – Together, the two dilution factors above (1/3 × 1/3 ≈ 1/10) may explain the 10-fold difference in quantitative assessments of the utility of targeting observations
• Not all cases are expected to yield positive results
  – An artifact of the statistical nature of DA methods
  – Some negative impact should be expected – current DA methods lead to forecast improvements in 70-75% of cases
• Geographical differences
  – Potentially larger impact over the larger Pacific vs. the smaller Atlantic basin?
19
BACKGROUND
Example: Impact of WSRP targeted dropsondes
1 Jan – 28 Feb 2006, 00 UTC analyses; NOAA-WSRP, 191 profiles

Binned impact:
• Beneficial (-0.01 to -0.1 J kg-1)
• Non-beneficial (0.01 to 0.1 J kg-1)
• Small impact (-0.01 to 0.01 J kg-1)

Average dropsonde observation impact is beneficial and ~2-3x greater than the average radiosonde impact
21
Composite summary maps
139.6W 59.8N, 36 hrs (7 cases) – 1422 km
92W 38.6N, 60 hrs (5 cases) – 4064 km
122W 37.5N, 49.5 hrs (8 cases) – 2034 km
80W 38.6N, 63.5 hrs (8 cases) – 5143 km

[Maps mark the verification regions; implied propagation speeds are sketched below]
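As illustrative arithmetic only (not something stated on the slide), the distances and lead times of the four composite cases imply an average propagation speed of the data impact signal toward the verification region:

```python
# (lead time in hours, distance in km) for the four composite cases above
cases = {
    "139.6W 59.8N": (36.0, 1422),
    "92W 38.6N":    (60.0, 4064),
    "122W 37.5N":   (49.5, 2034),
    "80W 38.6N":    (63.5, 5143),
}
for name, (hours, km) in cases.items():
    speed_ms = km * 1000 / (hours * 3600)               # average speed in m/s
    print(f"{name}: {km / hours:.0f} km/h (~{speed_ms:.0f} m/s)")
```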
22

[Figure: North Pacific observation impact sum – NAVDAS, 1-31 Jan 2007 (00 UTC analyses). Bars show the change in 24-h moist total energy error norm (J kg-1) for Satwind, AMSU-A, SSMI/PRH, Raob, Dropsonde, Aircraft, Land Sfc, Scatwind, Windsat, Modis, Ship Sfc, and SSMI/Wnd; negative values indicate error reduction.]
23

[Figure: North Pacific forecast error reduction per observation, 1-31 Jan 2007 (00 UTC analyses). Change in 24-h moist total energy error norm (J kg-1, x 1.0e5) for the same observation types as above, plus ship obs; negative values indicate error reduction.]

Targeted dropsondes = high impact per observation, low total impact
24
[Figure: ETKF predicted signal propagation – distance (km, 0-6000) vs. forecast hours (0-80), with curves for the 36, 49.5, 60, and 63.5 hr lead-time cases]
25
Precipitation verification
• Precipitation verification is still in a testing stage due to the lack of station observation data in some regions.
ETS (Equitable Threat Score)   | 10 mm | 5 mm
OPR                            | 20.44 | 16.50
CTL                            | 18.56 | 16.35
Positive vs. negative cases    | 3:1   | 4:1
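For readers unfamiliar with the score in the table, here is a minimal sketch of the standard Equitable Threat Score computation from a 2x2 contingency table of threshold exceedances; the counts in the example are illustrative, not from the WSR verification.

```python
def equitable_threat_score(hits, misses, false_alarms, correct_negatives):
    """ETS from a 2x2 contingency table of forecast/observed threshold exceedances."""
    total = hits + misses + false_alarms + correct_negatives
    # Hits expected by random chance for this event frequency
    hits_random = (hits + misses) * (hits + false_alarms) / total
    return (hits - hits_random) / (hits + misses + false_alarms - hits_random)

# Example: score 24-h precipitation exceeding a 5 mm or 10 mm threshold,
# counted over stations or grid points in the verification region.
print(equitable_threat_score(hits=30, misses=20, false_alarms=25, correct_negatives=425))
```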