Predictability & Prediction of Seasonal Climate over North America


23-27 Oct. 2006 NOAA 31st Annual Climate Diagnostics and Prediction Workshop

Predictability & Prediction of Seasonal Climate over North America

Lisa Goddard, Simon Mason, Ben Kirtman, Kelly Redmond, Randy Koster, Wayne Higgins, Marty Hoerling, Alex Hall, Jerry Meehl, Tom Delworth, Nate Mantua, Gavin Schmidt

(US CLIVAR PPAI Panel)

Time Series of Prediction Skill
(schematic comparing operational forecast skill, research forecasts, and potential predictability)

(1) Understand the limit of predictability

(2) Identify conditional predictability (e.g. state of ENSO or Indian Ocean)

(3) Document the expected skill to judge potential utility of the information for decision support

(4) Set a baseline for testing improvements to prediction tools and methodologies

(5) Set a target for real-time predictions.

(Courtesy of Arun Kumar & Ants Leetmaa)

Real-time prediction skill: North America, 1-month lead, seasonal terrestrial climate

• Provide a template for verification
  - What are the best metrics? Best for whom?
  - Pros & cons of current metrics
  - Can we capture important aspects of variability (e.g. trends, drought periods)?

• Estimate skill of real-time forecasts
  - How predictable is North American climate?
  - Benefit of multi-model ensembling?

• Provide a baseline against which we can judge future advances
  - How best to archive/document for future comparison?
  - Are we missing something (e.g. statistical models)?

Forecast Data

Dynamical models (single):
• CCCma – Canadian Centre for Climate Modelling and Analysis
• KMA – Korea Meteorological Administration
• MGO – Main Geophysical Observatory, Russia
• NASA/GMAO – National Aeronautics and Space Administration, USA
• RPN – Canadian Meteorological Centre
• ECHAM4.5 – MPI (run at IRI)
• CCM3.6 – NCAR (run at IRI)
• ECMWF – European Centre for Medium-Range Weather Forecasts
• Meteo-France – Meteorological Service, France
• LODYC – Laboratoire d'Océanographie Dynamique et de Climatologie, France
• Met Office – UK Meteorological Office
• MPI – Max Planck Institute for Meteorology, Germany
• CERFACS – European Centre for Research and Advanced Training in Scientific Computing, France
• INGV – Istituto Nazionale di Geofisica e Vulcanologia, Italy
• NOAA-CFS – National Oceanic and Atmospheric Administration, USA

Multi-model of dynamical models (simple average)

Statistical models (from CPC): CCA, OCN (others?)

Multi-model of dynamical + statistical models
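The "simple average" multi-model combination listed above can be illustrated with a minimal sketch; the array names, shapes, and model keys below are hypothetical placeholders, not the actual production processing:

```python
import numpy as np

# Hypothetical ensemble-mean hindcast anomalies from three of the models above,
# already interpolated to a common 2.5 x 2.5 degree grid: shape (year, lat, lon).
single_models = {
    "ECHAM4.5": np.random.randn(21, 73, 144),
    "CCM3.6":   np.random.randn(21, 73, 144),
    "CFS":      np.random.randn(21, 73, 144),
}

# Simple multi-model ensemble: every model receives equal weight.
multi_model = np.mean(np.stack(list(single_models.values()), axis=0), axis=0)
print(multi_model.shape)  # (21, 73, 144)
```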

Model NX NY NM L S
(NX, NY: grid points in longitude and latitude; NM: ensemble members; L: forecast lead in months; S: hindcast start dates)

CCCma-GCM2 96 48 10 0.5-3.5 Mar1969-Dec2003 by 3

CCCma-GCM3 128 64 10 0.5-3.5 Mar1969-Dec2003 by 3

KMA 144 73 6 2.5-8.5 Jan1979-Dec2002

MGO 144 73 6 0.5-3.5 Nov1978-Nov2000 by 3

NASA-GMAO 144 90 6 1.5-3.5 Feb1993-Nov2002 by 3

RPN 192 96 10 0.5-3.5 Mar1969-Dec2000 by 3

ECHAM4.5 128 64 24 0.5-6.5 Jan1958-Dec2002

CCM3.6 128 64 24 0.5-6.5 Jan1958-Dec2002

ECMWF 144 71 9 0.5-5.5 Feb1958-Nov2001 by 3

Meteo-France 144 71 9 0.5-5.5 Feb1958-Nov2001 by 3

LODYC 144 71 9 0.5-5.5 Feb1974-Nov2001 by 3

MetOffice 144 71 9 0.5-5.5 Feb1959-Nov2001 by 3

MPI 144 71 9 0.5-5.5 Feb1969-Nov2001 by 3

CERFACS 144 71 9 0.5-5.5 Feb1980-Nov2001 by 3

INGV 144 71 9 0.5-5.5 Feb1973-Nov2001 by 3

CFS 192 94 15 0.5-8.5 Jan1981-Dec2003


Forecast Data: JJA & DJF (1981-2001)

Verification Data & Metrics

OBSERVATIONAL DATA (2.5° x 2.5° grid):
• 2 m temperature: CRU-TSv2.0 (1901-2002)
• Precipitation: CMAP (1979-2004)

VERIFICATION MEASURES
Metrics consistent with the WMO SVS-LRF (Standardised Verification System for Long-Range Forecasts):
• Deterministic information:
  - MSE & its decomposition: correlation, mean bias, & variance ratio (written out after this list)
• Probabilistic information:
  - Reliability diagrams, regionally accumulated
  - ROC areas for individual grid boxes
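For reference, one standard way of writing the MSE decomposition into the components named above (mean bias, the two standard deviations whose ratio gives the variance ratio, and the correlation) is:

```latex
\[
\mathrm{MSE}
  = \frac{1}{n}\sum_{i=1}^{n} (f_i - o_i)^2
  = \underbrace{(\bar{f} - \bar{o})^2}_{\text{mean bias}^2}
  + s_f^2 + s_o^2
  - 2\, s_f\, s_o\, r_{fo}
\]
```

where f and o are the forecast and observed values, the overbars denote their means, s_f and s_o their standard deviations, and r_fo the forecast-observation correlation.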

Mean Squared Error

Pro:
* Gives some estimate of uncertainty in the forecast (i.e. RMSE).

Con:
* Cannot infer the frequency of large errors unless precise distributional assumptions are met.

Recommendation:
* Perhaps a simple graph or table showing the frequency of errors of different magnitudes would be appropriate (a minimal sketch follows below).
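A minimal sketch of that recommendation, tabulating how often errors of different magnitudes occur alongside the RMSE; the anomaly values are invented for illustration and represent a single grid point:

```python
import numpy as np

# Hypothetical seasonal-mean temperature forecasts and observations
# (deg C anomalies) at one grid point over eight hindcast years.
forecast = np.array([0.3, -0.1, 0.8, 1.2, -0.4, 0.6, 0.1, -0.9])
observed = np.array([0.5,  0.2, 0.4, 1.5, -1.1, 0.2, 0.7, -0.3])

abs_err = np.abs(forecast - observed)
rmse = np.sqrt(np.mean((forecast - observed) ** 2))

# Frequency of errors of different magnitudes, as the slide suggests.
edges = [0.0, 0.25, 0.5, 1.0, np.inf]
counts, _ = np.histogram(abs_err, bins=edges)
for lo, hi, n in zip(edges[:-1], edges[1:], counts):
    print(f"|error| in [{lo}, {hi}): {n} of {abs_err.size} forecasts")
print(f"RMSE = {rmse:.2f} deg C")
```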

Correlation: Temperature, DJF 1981-2001 (map)

Correlation: Temperature, JJA 1981-2001 (map)

Correlation: Precipitation, DJF 1981-2001 (map)

Correlation: Precipitation, JJA 1981-2001 (map)

Correlation

Pros:
* Commonly used; familiar
* Gives a simple overview of where models are likely to have skill or not

Con:
* Merely a measure of association, not of forecast accuracy

Recommendation:
* Avoid deterministic metrics
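For concreteness, the correlation maps above correspond to computing, at every grid point, the correlation across hindcast years between the ensemble-mean forecast and the verifying observation. A sketch with made-up arrays (not the actual verification code):

```python
import numpy as np

# Hypothetical ensemble-mean hindcasts and observations for 21 DJF seasons
# (1981-2001) on a (lat, lon) grid.
fcst = np.random.randn(21, 73, 144)
obs = np.random.randn(21, 73, 144)

# Anomalies about each dataset's own 1981-2001 climatology.
fa = fcst - fcst.mean(axis=0)
oa = obs - obs.mean(axis=0)

# Pearson correlation at every grid point: a map of association, not accuracy.
corr = (fa * oa).sum(axis=0) / np.sqrt((fa ** 2).sum(axis=0) * (oa ** 2).sum(axis=0))
print(corr.shape)  # (73, 144)
```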

                     Event occurs?   Warning given?
Hit                  Yes             Yes
Miss                 Yes             No
False Alarm          No              Yes
Correct Rejection    No              No

                     FORECASTS
OBSERVATIONS         Yes             No
Yes                  Hit             Miss
No                   False Alarm     Correct Rejection

Example

Ensemble forecasts of above-median March – May rainfall over north-eastern Brazil

                     FORECASTS (80%)
OBSERVATIONS         Wet     Not wet     Total
Wet                  5       2           7
Not wet              1       7           8
Total                6       9           15

hit rate = 5/7 ≈ 71%
false-alarm rate = 1/8 ≈ 13%
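The two rates follow directly from the counts in the contingency table; a minimal check in code, interpreting a "warning" as a forecast at or above the 80% probability level shown in the table header (that interpretation is an assumption):

```python
# Counts from the north-eastern Brazil example above.
hits, misses = 5, 2               # observed wet: warning issued / not issued
false_alarms, correct_rej = 1, 7  # observed not wet: warning issued / not issued

hit_rate = hits / (hits + misses)                               # 5/7
false_alarm_rate = false_alarms / (false_alarms + correct_rej)  # 1/8
print(f"hit rate = {hit_rate:.1%}")                  # 71.4%, quoted as 71% above
print(f"false-alarm rate = {false_alarm_rate:.1%}")  # 12.5%, quoted as 13% above
```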

ROC Areas (maps; shading scale 0.1-0.9):
• DJF Temperature, below-normal
• DJF Temperature, above-normal
• JJA Temperature, above-normal
• JJA Temperature, below-normal
• DJF Precipitation, above-normal
• DJF Precipitation, below-normal
• JJA Precipitation, above-normal
• JJA Precipitation, below-normal

ROC Areas

Pros:

* Can treat probabilistic forecasts

* Can be provided point-wise

* Can distinguish ‘asymmetric’ skill

Cons:

* Fails to address reliability
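As an illustration of how a grid-box ROC area can be obtained from probabilistic category forecasts (a sketch with invented numbers, not the SVS-LRF reference code): step a probability threshold through the forecasts, compute the hit rate and false-alarm rate at each threshold, and integrate the resulting curve; an area of 0.5 corresponds to no skill.

```python
import numpy as np

# Hypothetical forecast probabilities of the below-normal category at one grid
# box, one value per hindcast year, and the binary observed outcome.
prob = np.array([0.15, 0.40, 0.55, 0.20, 0.70, 0.35, 0.60, 0.25, 0.50, 0.10])
event = np.array([0,   1,    1,    0,    1,    0,    1,    0,    0,    0])

n_event = (event == 1).sum()
n_nonevent = (event == 0).sum()

# Sweep the warning threshold from 1 down to 0.
thresholds = np.unique(np.concatenate(([0.0], prob, [1.0])))[::-1]
hit_rate, false_alarm_rate = [], []
for t in thresholds:
    warn = prob >= t
    hit_rate.append((warn & (event == 1)).sum() / n_event)
    false_alarm_rate.append((warn & (event == 0)).sum() / n_nonevent)

# Area under the ROC curve by the trapezoidal rule.
hr = np.array(hit_rate)
far = np.array(false_alarm_rate)
roc_area = np.sum(0.5 * (hr[1:] + hr[:-1]) * np.diff(far))
print(f"ROC area = {roc_area:.2f}")
```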

Reliability diagrams (figures)

Reliability

Pros:

* Treats probabilistic forecasts

* Relatively easy to interpret

* Provides most relevant information on usability of forecast information over time

Cons:

* Difficult to provide for individual grid points, especially for short time samples
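A reliability diagram of the kind accumulated regionally above can be sketched as follows: pool the forecast probabilities for one category over a region and all years, bin them, and compare the mean forecast probability in each bin with the observed relative frequency. The data below are synthetic and roughly calibrated by construction, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Pooled forecast probabilities of above-normal temperature and the matching
# binary outcomes (synthetic, approximately reliable by construction).
prob = rng.uniform(0.0, 1.0, size=500)
outcome = (rng.uniform(0.0, 1.0, size=500) < prob).astype(int)

edges = np.linspace(0.0, 1.0, 11)                    # ten probability bins
bin_idx = np.clip(np.digitize(prob, edges) - 1, 0, 9)

for b in range(10):
    in_bin = bin_idx == b
    if in_bin.any():
        print(f"bin {edges[b]:.1f}-{edges[b+1]:.1f}: "
              f"mean forecast = {prob[in_bin].mean():.2f}, "
              f"observed frequency = {outcome[in_bin].mean():.2f}, "
              f"n = {in_bin.sum()}")
```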

Temperature Trends over North America: %-area covered by "Above-Normal" (figures)

Observed Precipitation over North America, 1998-2001
• Anomalies relative to 1981-1997
• Percent difference relative to 1981-1997
(maps for each of the four years, JJA and DJF)

Frequency (number of years out of 4) of precipitation in the below-normal category:
• Frequency of Below-Normal Precipitation, JJA 1998-2001 (observations; map shaded from 1 in 4 to 4 in 4)
• Frequency of Below-Normal Precipitation, DJF 1998-2001 (observations; map shaded from 1 in 4 to 4 in 4)

Summary

• What's an appropriate template?
  - Skill metrics should be flexible (i.e. user-defined "events", categories, thresholds)
  - Probabilistic forecasts must be treated probabilistically!

• How are we doing?
  - Could be better: encouraging performance estimates by some measures, but inadequate performance on important aspects of climate variability.
  - Are we missing elements necessary for seasonal prediction?

• Baseline?