Intercomparisons Working Group activities

Page 1: Intercomparisons Working Group activities

F. Hernandez, Mercator Océan – IGST XII – St Johns 8/08/2007

Intercomparisons Working Group activities

• Definition of metrics at the global level: where are we?

– Class 1, 2, 3 and 4 metrics definition

– Available observations and climatologies

• Implementation in practice

– Data servers, formats, etc.

• Plan for GODAE intercomparisons: what do we decide?

Prepared by F. Hernandez

L. Crosnier, N. Verbrugge, K. Lisaeter, L. Bertino, F. Davidson, M. Kamachi, G. Brassington, P. Oke, A. Schiller, C. Maes, J. Cummings and the MERSEA assessment group

Page 2: Intercomparisons Working Group activities

Intercomparisons Working Group activities

• Definition of metrics at the global level: objectives in GODAE

We all define and use diagnostics to assess our own models and forecasting systems, but that is not the point here.

The purpose is thus to define and test common ways to validate the systems in the framework of GODAE, by:

• Choosing a common methodology for validation (what are we looking at?)

• Defining a set of diagnostics (the "metrics")

• Choosing a common set of references (climatologies, observations, etc.)

Then, promoting this work as a standard.

Page 3: Intercomparisons Working Group activities

The validation "philosophy"

• Basic principles, defined for ocean hindcasts and forecasts (Le Provost 2002, MERSEA Strand 1):

– Consistency: verifying that the system outputs are consistent with current knowledge of the ocean circulation and with climatologies

– Quality (or accuracy of the hindcast): quantifying the differences between the system's "best results" (analyses) and the sea truth, as estimated from observations, preferably independent (non-assimilated) ones

– Performance (or accuracy of the forecast): quantifying the short-term forecast capacity of each system, i.e. answering the questions "do we perform better than persistence? better than climatology?" (see the sketch after this list)

• A complementary principle, to verify the interest for the customer (Pinardi and Tonani, 2005, MFS):

– Benefit: end-user assessment of the quality level that has to be reached before a product is useful for an application
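To make the performance question concrete, here is a minimal sketch, not from the original talk, of how "better than persistence/climatology" can be quantified with a standard RMSE-based skill score; all fields below are synthetic and purely illustrative.

```python
import numpy as np

def rmse(estimate, obs):
    """Root-mean-square error over all valid grid points."""
    return np.sqrt(np.nanmean((estimate - obs) ** 2))

def skill(forecast, reference, obs):
    """Skill score 1 - RMSE(forecast)/RMSE(reference);
    > 0 means the forecast beats the reference."""
    return 1.0 - rmse(forecast, obs) / rmse(reference, obs)

# Synthetic SST-like fields: the forecast is closer to the observations
# than either a persisted analysis or the climatology.
rng = np.random.default_rng(0)
obs = 15.0 + rng.normal(0.0, 2.0, (180, 360))
forecast = obs + rng.normal(0.0, 0.5, obs.shape)
persistence = obs + rng.normal(0.0, 0.9, obs.shape)
climatology = np.full_like(obs, 15.0)

print(f"skill vs persistence: {skill(forecast, persistence, obs):+.2f}")
print(f"skill vs climatology: {skill(forecast, climatology, obs):+.2f}")
```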

Page 4: Intercomparisons Working Group activities

Metrics definition (MERSEA heritage)

CLASS 1: regular grid and a few depths, daily averaged
• Comparison of the 2D model surface fields with SST, SLA, and SSM/I ice concentration and drift for the Arctic and Baltic areas
• Comparison of each model (T, S) with climatological T, S and mixed-layer depth at several depths (0 m, 100 m, 500 m, 1000 m)?

CLASS 2: high-resolution vertical sections and moorings
• Comparison of model sections with climatology and WOCE/CLIVAR/other/XBT hydrographic sections
• Comparison of the model SLA at tide-gauge locations, and of the model (T, S, U, V) at fixed mooring locations

CLASS 3: physical quantities derived from model variables
• Comparison of the model volume transport with available observations (Florida cable measurements, etc.)
• Assessment through integrated/derived quantities: Meridional Overturning Circulation, Warm Water Heat Content, etc.

CLASS 4: assessment of forecasting capabilities
• Comparison between climatology, forecast, hindcast, analysis and observations
• Comparison, in 15x15 degree or dedicated boxes, of each model with: T/S from CORIOLIS, SSM/I sea-ice concentration, tide gauges, high-resolution SST?, AVISO SLA?

Page 5: Intercomparisons Working Group activities

Metrics definition over the world ocean

[Figure: status table of the metrics definitions (Class 1 to Class 4) by region: Atlantic, Med, Baltic (T/S); Arctic (sea ice); Indian ?; Southern ?; Pacific ?. Rotated labels read "MERSEA" and "BLUElink GODAE Workshop".]

Page 6: Intercomparisons Working Group activities

Agreement on Class 1 regional files

[Figure: map of the Class 1 regional grids, at 1/2° and 1/4° resolution.]

Variables: T, S, U, V, SSH, MLD, BSF, Tx, Ty, Qtot+relax., E-P-R+relax., MDT (MSSH)

Page 7: Intercomparisons Working Group activities

[Figure: map of the Class 1 regional grids, at 1/2° and 1/6° resolution.]

Variables: T, S, U, V, SSH, MLD, BSF, Tx, Ty, Qtot+relax., E-P-R+relax., MDT (MSSH)

Page 8: Intercomparisons Working Group activities

[Figure: map of the Class 1 regional grids, at 1/2°, 1/6° or 1/8°, and 1/8° resolution.]

Variables: T, S, U, V, SSH, MLD, BSF, Tx, Ty, Qtot+relax., E-P-R+relax., MDT (MSSH), plus sea-ice variables and fluxes

Page 9: Intercomparisons Working Group activities

Assessment through Class 1 metrics

• Consistency: monthly averaged fields compared to (see the sketch after this list):

– WOA 2005, Hydrobase, CARS, MEDATLAS, Janssen climatologies

– de Boyer Montégut MLD climatology

– SST climatology?

• Quality: daily fields compared to:

– Dynamic topography, or SLA (AVISO products)

– SST (to be determined)

– SSM/I sea-ice concentration and drift products

– Surface currents (DBCP data, OSCAR, SURCOUF products)

• Performance:

– Class 1 analyses, hindcasts and forecasts can be compared

– The Class 1 format can also be used to store assimilation quantities: innovation and residual vectors
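As a concrete illustration of the consistency check, here is a hypothetical sketch that monthly-averages daily Class 1 temperature and differences it against a WOA-like monthly climatology; all names, grids and values below are synthetic stand-ins, not the agreed Class 1 conventions.

```python
import numpy as np
import pandas as pd
import xarray as xr

# Synthetic stand-ins for a year of daily Class 1 temperature and a
# WOA-like monthly climatology (real inputs would be the Class 1 files).
time = pd.date_range("2007-01-01", "2007-12-31", freq="D")
coords = dict(time=time, depth=[0.0, 100.0, 500.0, 1000.0],
              lat=np.arange(-60.0, 61.0, 10.0), lon=np.arange(0.0, 360.0, 10.0))
shape = tuple(len(v) for v in coords.values())
model_T = xr.DataArray(15.0 + np.random.default_rng(0).normal(0.0, 0.3, shape),
                       coords=coords, dims=tuple(coords))
clim_T = model_T.groupby("time.month").mean("time") - 0.1  # biased on purpose

# Consistency: monthly means of the daily fields minus the matching
# climatological month
monthly = model_T.resample(time="1MS").mean()
anomaly = monthly.groupby("time.month") - clim_T

# Area-weighted bias per month at the Class 1 reference depths
weights = np.cos(np.deg2rad(anomaly["lat"]))
print(anomaly.weighted(weights).mean(dim=("lat", "lon")).round(3))
```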

Page 10: Intercomparisons Working Group activities

Class 2/3 MERSEA/GODAE global metrics: online systematic diagnostics

[Figure: world map of the online systematic diagnostics. Elements: XBT lines (SOOP); moorings (GLOSS, TAO, PIRATA, OceanSITES); model/tide-gauge SLA time-series comparison; MFS model T vs. observed XBT T (model/obs comparison); WOCE, CLIVAR and Canadian sections (model vs. WOCE-CLIVAR section); volume transport across the Florida Strait (model/cable comparison); sections and transports.]

Page 11: Intercomparisons Working Group activities

Re-visiting class 2/3 metrics in the North Atlantic

C-NOOF has already started comparisons with AZMP sections

Page 12: Intercomparisons Working Group activities

Class 2/3 metrics: proposed definitions

Revised proposition by M. Kamachi

[Figure: maps of the proposed Class 2/3 sections; the rotated label reads "MERSEA first proposition".]

Ongoing work with C. Maes and M. Kamachi

Ongoing work with BLUElink and SPICE people

Still needs to be discussed (IRD, Peru, Chile contacts)

Page 13: Intercomparisons Working Group activities

Class 3 metrics

• Already defined or ready to be implemented:

– Transport computation discussed (see the sketch after this list)

– MOC

– Sea-ice volume and extent

• What else can be implemented (in-line or off-line), in relation with GSOP?

– Monitoring of western boundary currents (Kuroshio and Gulf Stream paths and axes)

– Heat content of specific water masses (tropical areas, Warm Water Heat Content)

– Mesoscale monitoring by region: EKE time series, SLA spectra

– Tropical dynamics monitoring: Niño boxes, SLA/SST Hovmöller diagrams

– Water mass distribution, T-S diagrams

– Lagrangian statistics, particle dispersion, etc.
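To illustrate the first item, here is a minimal sketch, with an assumed section geometry rather than the agreed computation, of a Class 3 volume transport through a vertical section, reported in Sverdrups:

```python
import numpy as np

def volume_transport_sv(u, dz, dy):
    """Volume transport through a vertical section, in Sverdrups.
    u:  velocity normal to the section (m/s), shape (nz, ny)
    dz: layer thicknesses (m), shape (nz,)
    dy: along-section cell widths (m), shape (ny,)"""
    return np.nansum(u * dz[:, None] * dy[None, :]) / 1.0e6

# Synthetic example: 0.5 m/s through a 300 m deep, 100 km wide section
u = np.full((30, 20), 0.5)    # 30 levels x 20 horizontal cells
dz = np.full(30, 10.0)        # 10 m layers
dy = np.full(20, 5.0e3)       # 5 km cells
print(f"{volume_transport_sv(u, dz, dy):.1f} Sv")  # -> 15.0 Sv
```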

Page 14: Intercomparisons Working Group activities

Assessment through Class 2/3 metrics

• Consistency: Class 2 sections, moorings and monthly averaged fields compared to:

– WOA 2005, Hydrobase, CARS, MEDATLAS, Janssen climatologies

– de Boyer Montégut MLD climatology

• Quality: daily fields at sections/moorings compared to (see the mooring sketch after this list):

– T/S in situ (XBT, Argo, tropical moorings, etc.)

– Sea-ice data (OSI SAF)

– Tide gauges

– ADCP currents

• Performance:

– Analyses, hindcasts and forecasts can be compared at sections and mooring locations
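A hypothetical sketch of the mooring part of the quality check: the model temperature is interpolated to a fixed mooring position and its instrument depths, then bias and RMS are computed against the record. Every name, position and value below is a synthetic stand-in, not an agreed Class 2 convention.

```python
import numpy as np
import pandas as pd
import xarray as xr

# Synthetic stand-ins: a model temperature cube and a "mooring" record at
# 0N, 140W (illustrative position only).
rng = np.random.default_rng(2)
coords = dict(time=pd.date_range("2007-08-01", periods=30, freq="D"),
              depth=np.arange(0.0, 501.0, 50.0),
              lat=np.arange(-10.0, 11.0, 1.0), lon=np.arange(-150.0, -129.0, 1.0))
shape = tuple(len(v) for v in coords.values())
model_T = xr.DataArray(20.0 + rng.normal(0.0, 0.2, shape),
                       coords=coords, dims=tuple(coords))
obs_depths = [10.0, 75.0, 150.0, 300.0]
obs_T = xr.DataArray(20.1 + rng.normal(0.0, 0.2, (30, 4)),
                     coords=dict(time=coords["time"], depth=obs_depths),
                     dims=("time", "depth"))

# Quality: model sampled at the mooring position and depths, then misfits
at_mooring = model_T.interp(lat=0.0, lon=-140.0, depth=obs_depths)
diff = at_mooring - obs_T
print("bias:", float(diff.mean()), " rms:", float(np.sqrt((diff ** 2).mean())))
```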

Page 15: Intercomparisons Working Group activities

Comparison to global tide gauge SLA

Page 16: Intercomparisons Working Group activities

Class 4 metrics, concept and implementation

Class 1, 2 and 3 metrics can be applied to any field produced by the forecasting system (hindcasts, nowcasts or forecasts).

Class 4 metrics, more specifically, aim to measure the performance of the forecasting system: its capability to describe the ocean (hindcast mode) as well as its forecasting skill (analysis and forecast modes), at once.

All fields are evaluated using identical criteria. From the assimilation point of view, these diagnostics are performed in observation space.
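The sketch below, synthetic and not the GODAE implementation, makes "evaluated in observation space with identical criteria" concrete: each estimate (analysis, forecast, climatology) is interpolated to the same observation points and scored with the same RMS.

```python
import numpy as np
import xarray as xr

def rms_at_obs(field, obs_lat, obs_lon, obs_val):
    """RMS of (obs - field), with the 2D field interpolated to the obs points."""
    pts = dict(lat=xr.DataArray(obs_lat, dims="obs"),
               lon=xr.DataArray(obs_lon, dims="obs"))
    return float(np.sqrt(np.nanmean((obs_val - field.interp(**pts).values) ** 2)))

# Synthetic example: three flat "estimates" scored against two observations;
# the analysis is closest, then the 6-day forecast, then climatology.
coords = dict(lat=np.arange(-5.0, 6.0), lon=np.arange(150.0, 161.0))
def flat(value):
    return xr.DataArray(np.full((11, 11), value), coords=coords, dims=("lat", "lon"))

obs_lat, obs_lon = np.array([0.0, 2.5]), np.array([155.0, 157.5])
obs_val = np.array([28.0, 27.5])
for name, estimate in {"analysis": flat(27.9), "forecast_6d": flat(27.0),
                       "climatology": flat(26.5)}.items():
    print(f"{name:12s} rms = {rms_at_obs(estimate, obs_lat, obs_lon, obs_val):.2f}")
```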

Page 17: Intercomparisons Working Group activities

Class 4 metrics, concept and implementation

[Figure: schematic of one model variable over a forecast cycle from T0 to T0+7 days, showing the truth, the observations (+), the initial conditions (previous analysis), the analysis, the hindcast, persistence, climatology and the forecast, together with the time window used to compute the statistics.]

Best estimate: may be calculated in delayed mode by the system
Best real-time estimate: for some systems, this can be the analysis
Best 3-day forecast: all systems can provide a 3-day forecast; the last forecast from DMI can be considered the 3-day forecast
Best 6-day forecast: all systems but DMI can provide a 6-day forecast
Persistence at 3 days: based on the corresponding best real-time estimate
Persistence at 6 days: based on the corresponding best real-time estimate
First guess: independent from the observation it is compared to
Climatology: Levitus, MEDAtlas

Page 18: Intercomparisons Working Group activities

[Figure: Class 4 example maps over 0-100 m: (Tobs - Tclim), (Tobs - Tmod)^2, (Tobs - Tclim)^2, (Sobs - Sclim)^2, (Sobs - Smod)^2. Here, only BA and PF files (no TE or MO files).]

Page 19: Intercomparisons Working Group activities

Compute Class 4 statistics (a sketch follows below):
• per geographical box, or in regular 5x5 degree boxes
• per vertical layer (0-100 m, 100-500 m, 500-5000 m?)

[Figure: elementary box patchwork.]
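A minimal sketch, with assumed inputs, of the box statistics: observation-space misfits are binned into regular 5x5 degree boxes within one vertical layer, and an RMS is reported per box.

```python
import numpy as np

def box_rms(lat, lon, depth, misfit, box=5.0, layer=(0.0, 100.0)):
    """RMS misfit per regular box within one vertical layer.
    lat, lon, depth, misfit: 1D arrays, one entry per observation.
    Returns {(south_edge, west_edge): rms}."""
    sel = (depth >= layer[0]) & (depth < layer[1])
    south = np.floor(lat[sel] / box) * box
    west = np.floor(lon[sel] / box) * box
    m = misfit[sel]
    stats = {}
    for key in set(zip(south, west)):
        inbox = (south == key[0]) & (west == key[1])
        stats[key] = float(np.sqrt(np.mean(m[inbox] ** 2)))
    return stats

# Synthetic example: 1000 misfits in the 0-100 m layer of a 20x20 degree area
rng = np.random.default_rng(1)
stats = box_rms(rng.uniform(0, 20, 1000), rng.uniform(0, 20, 1000),
                rng.uniform(0, 200, 1000), rng.normal(0, 0.5, 1000))
box0 = sorted(stats)[0]
print(len(stats), "boxes; e.g. box", box0, "rms =", round(stats[box0], 3))
```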

Page 20: Intercomparisons Working Group activities

Class 4 based on Sea-Ice in the Barents Sea

TOPAZ sea ice vs. SSM/I data: RMS of the ice-concentration error (model minus observation) over a box in the Arctic Ocean. The analysis is compared to the forecast and to persistence over a 10-day window.

Page 21: Intercomparisons Working Group activities

Assessing the performance of the system

• Class 1, 2 and 3 metrics can be used if applied to forecasts, persistence, etc.

• Class 4 is defined in the "observation space":

– Use observations and compute differences with model fields

• T/S, sea level at tide gauges, OSI SAF sea ice

• Defining a "share-able" dataset is mandatory:

– SST? Altimetry? Surface drifters?

– Compute statistics of the differences per box (typically every week)

– Compare these statistics to infer performance

• Possible diagnostics in the "model space": not defined!

Page 22: Intercomparisons Working Group activities

Implementation in practice

• All these metrics need:

– Similar implementation

– Conventions for names and formats (NetCDF, COARDS, CF); a minimal writer sketch follows the table below

– Data servers for exchanges (FTP, OPeNDAP)

[Table: implementation status of Class 1 to Class 4 metrics per region: Atlantic, Med, Baltic (T/S); Arctic; Indian; Southern; Pacific.]
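As a concrete illustration of the naming/format bullet above, here is a hypothetical sketch that writes one temperature field to a CF-compliant NetCDF file with xarray, on a coarse 1 degree demo grid; the actual GODAE/MERSEA file names, variable names and grids follow the agreed convention, which is not reproduced here.

```python
import numpy as np
import xarray as xr

# One dummy temperature field on a 1-degree demo grid, at the four
# Class 1 reference depths (names and attributes are illustrative).
field = xr.DataArray(
    np.zeros((1, 4, 180, 360), dtype="f4"),
    dims=("time", "depth", "lat", "lon"),
    coords={
        "time": [np.datetime64("2007-08-08")],
        "depth": [0.0, 100.0, 500.0, 1000.0],
        "lat": np.arange(-89.5, 90.0, 1.0),
        "lon": np.arange(-179.5, 180.0, 1.0),
    },
    name="thetao",
    attrs={"standard_name": "sea_water_potential_temperature", "units": "degC"},
)
field["depth"].attrs.update(units="m", positive="down")
field["lat"].attrs.update(units="degrees_north")
field["lon"].attrs.update(units="degrees_east")

ds = field.to_dataset()
ds.attrs["Conventions"] = "CF-1.0"
ds.to_netcdf("class1_T_demo.nc")  # then served e.g. via FTP or OPeNDAP
```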

Page 23: Intercomparisons Working Group activities

GODAE metrics definition: summary

• Class 1 and 2 definitions and technical implementation guidelines finished by the end of 2007

• Class 3: only transport and MOC fully defined by the end of 2007

– Any other diagnostics to be included?

• Class 4 definition and technical implementation guidelines for T/S and sea ice finished by the end of 2007

– Tide gauges, SST and surface velocity need agreement on the observation data sets

– Nothing defined in the "state space"

Page 24: Intercomparisons Working Group activities

Intercomparisons Working Group activities

• Definition of metrics at the global level: where are we?

– Class 1, 2, 3 and 4 metrics definition

– Available observations and climatologies

• Implementation in practice

– Data servers, formats, etc.

• Plan for GODAE intercomparisons: what do we decide?

Page 25: Intercomparisons Working Group activities

GODAE Intercomparison Working Group

• Decide on an intercomparison exercise at the IGST XII meeting?

– Objectives? (internal, dedicated to broad publicity, etc.)

– "Who", "when", "how"?

• Plans for implementation

• Dedicated distributed archive: OPeNDAP

– What are the possibilities for "ocean assessment"?

• Intercomparison of hindcasts (or reanalyses) over a period in the past

• Off-line comparison of operational systems (as in MERSEA)

• Real-time comparison of operational systems for a given period

– Other possibility: assessing the system in operation

• Complementary use of Key Parameter Indicators to verify the technical efficiency of the system

– Other possibility: looking for user feedback (but then there is no need for all the metrics already defined!)

