"METEOSAT-8 (SEVIRI) CLOUD PROPERTY RETRIEVALS FOR CLIMATE STUDIES"
Rob Roebeling, Hartwig Deneke and Arnout Feijt
GEWEX Cloud Assessment Meeting
Madison, United States of America
6–7 July 2006
Introduction
Validation for the CloudNet sites
Sensitivity
Norrköping Cloud Workshop results
Conclusions
Introduction
Retrieval Method
Project: Climate Monitoring SAF (CM-SAF)
Satellites: METEOSAT-8/SEVIRI and NOAA-17/AVHRR
Channels: VIS (0.63 µm), NIR (1.6 µm) and IR (10.8 µm)
Products: COT, CLWP, CPH (and Reff)
RTM: Doubling Adding KNMI model (DAK)
Surface reflectance: MODIS white sky albedo
Optical thicknesses: 0–256
Water clouds: spherical droplets (1–24 µm)
Ice clouds: imperfect hexagonal crystals (6, 12, 26, 51 µm)
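The retrieval behind these products can be sketched as a lookup-table inversion: find the (COT, Reff) grid point whose simulated 0.63/1.6 µm reflectances best match the observation, then derive CLWP from COT and Reff via LWP = (2/3)·ρw·Reff·COT. A minimal Python sketch; the table values below are synthetic stand-ins, not DAK output:

```python
import numpy as np

# Grid axes for a toy water-cloud lookup table (the real tables come
# from the DAK radiative transfer model).
COT_AXIS = np.array([1.0, 4.0, 16.0, 64.0, 256.0])
REFF_AXIS = np.array([4.0, 8.0, 16.0, 24.0])  # effective radius, um

# Synthetic reflectances standing in for DAK simulations: the 0.63 um
# reflectance grows mainly with COT, the 1.6 um one falls with Reff.
lut_vis = (COT_AXIS[:, None] / (COT_AXIS[:, None] + 8.0)) * np.ones((1, REFF_AXIS.size))
lut_nir = lut_vis * (1.0 - 0.015 * REFF_AXIS[None, :])

def retrieve_cot_reff(r_vis, r_nir):
    """Nearest-node lookup: the (COT, Reff) grid point whose simulated
    reflectance pair is closest to the observed pair."""
    cost = (lut_vis - r_vis) ** 2 + (lut_nir - r_nir) ** 2
    i, j = np.unravel_index(np.argmin(cost), cost.shape)
    return COT_AXIS[i], REFF_AXIS[j]

def clwp(cot, reff_um, rho_w=1000.0):
    """Cloud liquid water path [g m-2]: LWP = (2/3) * rho_w * Reff * COT."""
    return (2.0 / 3.0) * rho_w * (reff_um * 1e-6) * cot * 1e3
```

For an observation matching the COT = 16, Reff = 8 µm node this yields CLWP ≈ 85 g m-2; an operational retrieval interpolates between nodes rather than taking the nearest one.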
Examples Meteosat-8 Cloud Properties
Cloud Thermodynamic Phase
Water Ice Clear
Cloud Optical Thickness
Inter-calibration
NOAA-17/AVHRR & METEOSAT-8/SEVIRI
Inter-Calibration: NOAA-17 vs. METEOSAT-8
Fig.: SEVIRI vs. AVHRR reflectances using observations from August–December 2004 over Central Africa (panel differences ~5% and ~20%).
Results after re-calibration
SEVIRI and AVHRR COT and CLWP after re-calibration to MODIS, using observations of April and May 2004 over Northern Europe (SEVIRI: 0.6 µm +6% and 1.6 µm +2%; AVHRR: 0.6 µm +6% and 1.6 µm +22%). Diff. 0–5% for both.
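Applied operationally, this re-calibration is a multiplicative scaling of the reflectances before retrieval; the factors are the ones quoted above, the function itself is an illustrative sketch:

```python
# Multiplicative re-calibration factors toward the MODIS reference,
# as quoted above for the 0.6 um and 1.6 um channels.
RECAL = {
    ("SEVIRI", 0.6): 1.06, ("SEVIRI", 1.6): 1.02,
    ("AVHRR", 0.6): 1.06, ("AVHRR", 1.6): 1.22,
}

def recalibrate(reflectance, sensor, channel_um):
    """Scale an operational reflectance to the common MODIS reference."""
    return reflectance * RECAL[(sensor, channel_um)]
```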
Validation
CloudNet Data
Sites: Cabauw, Paris (Palaiseau) and Chilbolton
CLWP: 1 year of microwave radiometer
data at 2 CloudNet sites
COT: 1 year of pyranometer data, 27 stations
Comparison: Example CLWP product
Figs.: SEVIRI CLWP, 1 May 2004; CLWP time series from SEVIRI and the microwave radiometer at Chilbolton, UK
CLWP Validation
Distribution of diff. LWP, Meteosat-8 − MW, July 2004 (Chilbolton)
Distribution of Meteosat-8 LWP, July 2004 (Chilbolton)
CLWP Validation: Daily medians summer
Chilbolton, UK
CLWP Validation: Monthly medians summer
Chilbolton, UK
Palaiseau, France
CLWP Validation: Daily medians one year
Chilbolton, UK
CLWP Validation: Monthly medians one year
Chilbolton, UK
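Daily and monthly medians like those shown here can be formed by grouping the co-located time series and taking per-group medians; the sample values below are made up for illustration, not CloudNet data:

```python
from collections import defaultdict
from statistics import median

def medians(samples, key):
    """Group (time, value) samples by key(time) and take the median of
    each group; medians are robust to the skewed tail of LWP values."""
    groups = defaultdict(list)
    for t, v in samples:
        groups[key(t)].append(v)
    return {k: median(vs) for k, vs in sorted(groups.items())}

# Made-up (day, LWP in g m-2) samples from one instrument:
sat = [(1, 40.0), (1, 90.0), (1, 60.0), (2, 120.0), (2, 80.0)]
daily = medians(sat, key=lambda day: day)  # {1: 60.0, 2: 100.0}
```

Monthly medians follow the same pattern with a key that maps each timestamp to its month.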
Validation: Pyranometer
Sensitivity
Assessment error budget
Co-location & resolution (48 g m-2): position of ground station, parallax, VIS–NIR mismatch, wobbling of the satellite, plane-parallel assumption
MW radiometer (30 g m-2)
Sampling of different cloud portions (20 g m-2)
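If these three contributions are independent, they combine in quadrature; a one-line check using the figures quoted above:

```python
import math

# Error sources from the budget above, in g m-2:
ERRORS = {"co-location & resolution": 48.0,
          "MW radiometer": 30.0,
          "cloud-portion sampling": 20.0}

# Root-sum-square of independent contributions:
total = math.sqrt(sum(e ** 2 for e in ERRORS.values()))  # ~60 g m-2
```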
Assessment error budget
[Table: error sources classified as radiative-transfer (RT) and/or field-of-view (FOV) related, with totals in g m-2 for the summed MSG retrieval error and the total validation error.]
Sensitivity: viewing geometry
6:00 hr: θ0 = 70°, Θ = 83°
7:00 hr: θ0 = 60°, Θ = 97°
8:00 hr: θ0 = 51°, Θ = 110°
9:00 hr: θ0 = 42°, Θ = 123°
Fig.: CLWP frequency distributions, 21 June 2006, over Northern Europe
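The solar zenith angles listed for these times follow from the standard formula cos θ0 = sin φ sin δ + cos φ cos δ cos h; the latitude and declination in the example below are illustrative choices for Northern Europe on 21 June, not values taken from the slide:

```python
import math

def solar_zenith(lat_deg, decl_deg, hour_angle_deg):
    """Solar zenith angle [deg] from latitude, solar declination and
    hour angle: cos(sza) = sin(lat)sin(decl) + cos(lat)cos(decl)cos(h)."""
    lat, decl, h = (math.radians(x) for x in (lat_deg, decl_deg, hour_angle_deg))
    cos_sza = (math.sin(lat) * math.sin(decl)
               + math.cos(lat) * math.cos(decl) * math.cos(h))
    return math.degrees(math.acos(cos_sza))

# At local solar noon (h = 0) the zenith angle reduces to |lat - decl|,
# e.g. 55 N on 21 June (decl ~ 23.44 deg):
sza_noon = solar_zenith(55.0, 23.44, 0.0)  # ~31.6 deg
```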
Sensitivity: viewing geometry
Loeb and Coakley, 1997, Journal of Climate
Conclusions
Conclusions (1)
Re-calibration reduces the differences between NOAA-17 and METEOSAT-8 retrievals of COT and CLWP over Northern Europe to about 5%.
There is good agreement between SEVIRI and microwave radiometer retrieved cloud liquid water path.
The accuracy of SEVIRI CLWP retrievals decreases at solar zenith angles > 60 degrees.
Accuracy changes due to viewing geometry may manifest as artificial trends.
Conclusions (2)
Part of the validation differences can be explained by co-location and sampling differences.
The 15-minute temporal resolution of SEVIRI has enabled the synergetic use of ground-based and satellite observations.
Comparison Cloud Workshop, 17 January 2006
Comparison CWS: Cloud Optical Thickness
Comparison CWS: Effective Radius
Methods: Radiative Transfer Modelling
[Diagram: reflectance above the cloud (R_ac) and below the cloud (R_bc), surface reflectance R(sur); scattering and absorption within the cloud. Cloud properties: geometric thickness, thermodynamic phase, optical thickness, effective radius, droplet distribution.]
Sensitivity: viewing geometry
Fig.: 0.6 µm reflectance vs. viewing zenith angle (AVHRR and MSG panels)
Sensitivity: viewing geometry
Fig.: 1.6 µm reflectance vs. viewing zenith angle (AVHRR and MSG panels)
Retrieval Method
Water Clouds / Ice Clouds
Influence of reflectance spectra
Fig.: SCIAMACHY reflectance spectra (1450–1750 nm) for 5 typical scenes: water cloud, ice cloud, vegetation, ocean, desert, with the AVHRR and SEVIRI 1.6 µm channel positions (Stammes et al. 2005). Diff. ice clouds < 10%; diff. water clouds < 3%.
Monthly COT and CLWP composites
Panels: COT Meteosat-8, May 2004; COT NOAA-AVHRR, May 2004; CLWP Meteosat-8, May 2004; CLWP NOAA-AVHRR, May 2004
CLWP Validation: Palaiseau daily results
Sensitivity: retrieval cloud optical thickness
Figs.: 0.63 µm and 1.6 µm reflectivities; error in retrieved COT and Reff assuming reflectance errors of ±1, 2 and 3%
Results using pre-launch calibration
Cum. freq. dist. of COT for water clouds (SEVIRI > AVHRR, diff. 0–20%); cum. freq. dist. of CLWP for water clouds (SEVIRI < AVHRR, diff. 10–20%)
Position of ground station