
TCS-Near Fault Observatories for EPOS IP

NFO: a modern vision of earthquake science

One of the EPOS goals is the “…promotion of innovative approaches for a better understanding of the physical processes controlling earthquakes…”; EPOS is therefore the natural home for hosting services based on high-resolution, multidisciplinary near-fault data and products collected by innovative (in situ and dense) research infrastructures.

NFO:
• Increased research capability
• New expertise for multidisciplinary HR-NF data
• Next-generation data analysis and monitoring
• Transnational access and technological transfer

The tectonic regime of the sites ranges from plate boundaries to mountain ranges, with strike-slip and (steep to low angle) normal faulting.

NFOs cover areas characterised by a broad range of strain rate values (3-30 mm/yr) activating well developed large crustal faults and complex networks of smaller fault segments.

• Greece and Mid Atlantic rift (10-20 mm/yr);
• Plate boundaries (Eurasia-Anatolian, 25 mm/yr);
• Apennines belt and Alpine region (2-4 mm/yr).

EPOS Management

ICS TCS BOARD

National Research Infrastructures (Near Fault Observatories)

(Nodes: TABOO, INFO, MARSITE, SISZ, VALAIS, CORINTH, more NODEs.)
TCS-NFO

IN SITU NF OBSERVATORIES

• Multi-Disciplinary NF data

• M-data standardisation; M-metadata definition

Structure: In situ observatories including distributed data centres.

Products: Continuous/event based MD-NF-data acquisition and standardisation; MD-metadata definition.

Services: SPD data repository; STD data upload; web services for data discovery and data-metadata access.
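To make the intended data-discovery service concrete, below is a minimal sketch of a machine-to-machine request against an NFO node. The endpoint URL, query parameters and response fields are illustrative assumptions, not a defined TCS-NFO interface.

```python
# Minimal sketch of machine-to-machine data discovery at an NFO node.
# The endpoint, parameter names and response fields are assumptions,
# not an agreed TCS-NFO API.
import requests

NODE_URL = "https://nfo-node.example.org/ws/discovery"  # placeholder URL

params = {
    "node": "TABOO",            # NFO node identifier (assumed)
    "datatype": "SPD",          # Specific (non-standard) Data
    "starttime": "2014-01-01",
    "endtime": "2014-12-31",
    "format": "json",
}

response = requests.get(NODE_URL, params=params, timeout=30)
response.raise_for_status()

# Each returned record is expected to expose a data link and a metadata link.
for record in response.json().get("datasets", []):
    print(record["id"], record["data_url"], record["metadata_url"])
```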

VIRTUAL LABORATORY

• Data products
• Cross-disciplinary analysis and visualization tools

Structure: Common e-infrastructure and web services.

Products: transient detection, characterization and catalogs; earthquake parameters; MD time series (raw/corrected); fault geometry; strain data; 3-4D models.

Services: MD-data products and metadata database, MD-data visualization and cross-disciplinary analysis.

TESTBEDs

• EEW
• Training and EDU
• N-RT Procedures

Structure: distributed data centers.

Products: EEW, training and educational interfaces, hazard products.

Services: early warning testing and educational centers, an environment for MD earthquake forecasting and earthquake-induced effects, near-real-time data products for hazard products and tests (OEF; ShakeMaps).
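As an indication of how near-real-time products could feed the test environment, here is a small sketch that polls a hypothetical NFO event feed and selects events for OEF/ShakeMap-style tests; the feed URL, JSON fields and magnitude threshold are assumptions made for illustration.

```python
# Hedged sketch of a near-real-time loop feeding the test-bed environment:
# poll a (hypothetical) NFO event feed and select events for hazard-product tests.
# The feed URL, JSON fields and magnitude threshold are assumptions.
import time
import requests

FEED_URL = "https://nfo-node.example.org/ws/events/latest"  # placeholder URL

def poll_events(min_magnitude=3.0, interval_s=60, cycles=3):
    """Collect recent events above a magnitude threshold for OEF/ShakeMap-style tests."""
    selected = []
    for _ in range(cycles):
        resp = requests.get(FEED_URL, params={"format": "json"}, timeout=30)
        resp.raise_for_status()
        for event in resp.json().get("events", []):
            if event["magnitude"] >= min_magnitude and event not in selected:
                selected.append(event)   # candidate for downstream hazard products
        time.sleep(interval_s)
    return selected
```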

(Diagram: Interoperability; BOARD for GOVERNANCE and LEGAL; other TCSs.)

TCS-NFO needs a BOARD to guarantee:
o TCS governance, legal and financial issues
o TASK harmonization and interoperability (TCS, ICS)
o coherent implementation of the services
o strategic link with both the EPOS BSC and the NRIs
o scientific community and global collaboration (e.g. ICDP, EarthScope)
o national and international projects (e.g. Supersites, I3, national priorities)

Board composition: one representative of each pillar (3) and of each node (6) = 9 people.

The NFO Board will establish a WORK PLAN and activity EVALUATION CRITERIA to CONSOLIDATE the RESULTS.

Solutions for implementing the governance (e.g. an MOU) should be explored to guarantee TRANSPARENCY and to SHARE RESOURCES.

Governance and Legal


NFO (multidisciplinary) data definition

Seismological data (EIDA); Geodetic data (GNSS, …); Geochemical data; Strain data; EM data; Fault geometry …

NFO STD will be delivered to the mono-thematic TCS (for storing and distribution) following the prescribed formats and quality standards.

NFO STD require specific fields in the metadata that are not yet considered.

NFO SPD will be stored at in situ distributed data-centres

Common formats and metadata do not exist!

SPD often require large, dynamic and multidisciplinary metadata for data correction.
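Since common formats and metadata do not exist yet, the record below is only a sketch of what a dynamic, multidisciplinary SPD metadata entry could look like; every field name and value is an illustrative assumption.

```python
# Hypothetical SPD metadata record; field names and values are assumptions
# made for illustration, not an agreed NFO metadata standard.
spd_metadata = {
    "node": "TABOO",                      # originating NFO (assumed identifier)
    "discipline": "geochemistry",         # e.g. seismology, geodesy, strain, EM
    "level": 0,                           # processing level (0 = raw)
    "station": {"code": "XYZ1", "lat": 43.2, "lon": 12.5, "elev_m": 650.0},
    "time_window": {"start": "2014-05-01T00:00:00Z",
                    "end": "2014-05-02T00:00:00Z"},
    # Dynamic part: auxiliary series needed to correct the primary observable,
    # e.g. temperature and rainfall for geochemical or strain data.
    "correction_inputs": [
        {"quantity": "temperature", "source": "local_sensor", "sampling_s": 600},
        {"quantity": "rainfall", "source": "meteo_network", "sampling_s": 3600},
    ],
    "licence": "EPOS open (IAAA)",
    "embargo_until": None,                # set for embargoed research data
}
```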

(Diagram: Specific data; Standard data; Supersites; TCS-V.)

P1 - In Situ Near Fault Observatories

Distributed Data center for Specific Data storage

Web service for Specific Data and metadata discovery and access

Standardization of specific data

Metadata definition

Data policy: we will follow the EPOS (open, IAAA) data policy. A temporary delay could also be needed in the case of a large event within the NFO. Research data (in projects) can be embargoed for a reasonable time.
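As a purely illustrative aid, the following sketch shows how a node could apply the release rules stated above (open IAAA policy by default, a temporary delay after a large event, a project embargo); the field names and the check itself are assumptions.

```python
# Illustrative sketch of the stated release rules: open (IAAA) by default,
# an optional temporary delay after a large event within the NFO, and a
# project embargo for research data. Field names are assumptions.
from datetime import datetime, timezone

def is_openly_released(metadata, now=None):
    """Return True if a dataset can be served under the EPOS open data policy."""
    now = now or datetime.now(timezone.utc)
    for field in ("embargo_until", "event_delay_until"):
        limit = metadata.get(field)
        if limit and datetime.fromisoformat(limit) > now:
            return False
    return True

# Example: an embargoed research dataset is withheld; everything else is open.
print(is_openly_released({"embargo_until": "2030-01-01T00:00:00+00:00"}))  # False
print(is_openly_released({}))                                              # True
```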

Manpower (2016-2018):
Existing:
• 8 man-months/yr at the primary node for technical support and implementation.
• 2 man-months/yr at the secondary nodes for technical support.
Requested:
• 1 man/year (for three years, 2016-18) + 1.5 man/year (for one year) for metadata definition, data standardisation and upload at the secondary nodes.

360 K€ (human resources and hardware)

P1 - In Situ Near Fault Observatories

P2 – Test Beds
A platform for experimenting with the use of near-real-time, high-resolution data in unique frameworks, apart from the (inter)national systems to which it will later be added (EEW testing centre).

Not submitted to EPOS IP because we are not ready, but the NFOs are available for sharing the environment and data products for:

o Multidisciplinary and/or Operational Earthquake Forecast (downscaling; events in the range M 0-5.3 were recorded within the NFOs in 2010-2014).

o Training and educational projects involving students of diverse levels, local authorities and citizens (transnational access policy in the I3-GIGANTIS project).

o Geo-hazards (clear) and geo-resources (NFOs sit on natural CO2 reservoirs and geothermal fields).

o Link with SMEs for testing new, cheap instruments.

P2 – Early Warning Testing Centres

How:
• Realization and implementation of a prototype at one (primary) node
• Duplication at the secondary nodes

Euro-Med EEW testing centre

Running EEW codes; comparing results/performances

Sharing knowledge with end users

Testing centres for EEW (follow-up of REAKT and other projects) will:
• allow understanding of the current limits and potential for end users and the public;
• provide a framework for rapid integration of innovative community-based ideas for the practical implementation and development of these systems.
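The sketch below illustrates the kind of performance comparison such a testing centre could run on a candidate EEW code; the alert/event record layout and the thresholds are invented for illustration only.

```python
# Hedged sketch: scoring one EEW code against a reference catalogue.
# Record fields (event_id, alert_time, origin_time, s_arrival_time, magnitude)
# and thresholds are illustrative assumptions.
def score_alerts(alerts, events, magnitude_threshold=4.0, max_delay_s=30.0):
    """Count hits, misses, false alarms and the mean lead time for one EEW code."""
    hits, false_alarms, lead_times = 0, 0, []
    matched = set()
    for alert in alerts:
        event = next((e for e in events
                      if e["id"] == alert["event_id"]
                      and e["magnitude"] >= magnitude_threshold), None)
        if event and (alert["alert_time"] - event["origin_time"]) <= max_delay_s:
            hits += 1
            matched.add(event["id"])
            # Lead time: seconds between the alert and the damaging S-wave arrival.
            lead_times.append(event["s_arrival_time"] - alert["alert_time"])
        else:
            false_alarms += 1
    misses = sum(1 for e in events
                 if e["magnitude"] >= magnitude_threshold and e["id"] not in matched)
    mean_lead = sum(lead_times) / len(lead_times) if lead_times else 0.0
    return {"hits": hits, "misses": misses,
            "false_alarms": false_alarms, "mean_lead_time_s": mean_lead}
```

Running the same scoring over several codes on a shared event set is what would make results and performances directly comparable across the nodes.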

Manpower (2016-2018):
Existing:
• 6 man-months/yr at the primary node + 2 man-months/yr at the secondary nodes (in total) for technical support and guidelines definition.
Requested:
• 1 man/year (for three years, 2016-18) to build the EEW testing centre + 0.5 man for one year (the last year), to be distributed among the three partners for replicating testing centres and real-time testing.

260 K€ (human resources and hardware)

P2 – Early Warning Testing Centres

P3 – Virtual NF Laboratory

A common e-infrastructure to store and release all the data products coming from the diverse disciplines.

The VL will be based on a database populated with Standard and Specific Data products and with metadata describing the data and the workflows.

Availability of basic web services and tools for:
o high-level (1-3) data discovery and availability (for humans and machine-to-machine);
o preliminary cross-disciplinary analysis;
o visualization of multidisciplinary time series and products.

We will guarantee data interoperability with all the working groups providing complementary services (TCS-S, TCS-V and TCS-G), with the EPOS-ICS, and with the international HPC resources.

(Diagram: examples of combined queries for a requested station, e.g. Rd, VP/VS, GPS, °T, … whatever.)
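To complement the combined-query example above, here is a minimal sketch of how the Virtual Laboratory could serve such a query for one station (Vp/Vs, GPS, temperature) and align the series on a common time axis; the endpoint, parameter names and quantity identifiers are assumptions.

```python
# Hedged sketch of a combined multidisciplinary query in the Virtual Laboratory.
# The endpoint URL, parameters and quantity names are illustrative assumptions.
import requests

VL_URL = "https://nfo-vl.example.org/api/timeseries"  # placeholder URL

def fetch_series(station, quantity, start, end):
    """Retrieve one time series (list of [time, value] pairs) from the VL."""
    r = requests.get(VL_URL, params={"station": station, "quantity": quantity,
                                     "start": start, "end": end}, timeout=30)
    r.raise_for_status()
    return r.json()["samples"]

station, start, end = "XYZ1", "2014-01-01", "2014-12-31"
series = {q: fetch_series(station, q, start, end)
          for q in ("vp_vs_ratio", "gps_displacement", "temperature")}

# Index each series by timestamp so the disciplines can be compared epoch by epoch.
aligned = {q: dict(samples) for q, samples in series.items()}
common_times = set.intersection(*(set(s) for s in aligned.values()))
print(f"{len(common_times)} common epochs for combined analysis at {station}")
```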

The VL is not only an informatics issue but also a scientific tool for performing multidisciplinary studies and … a new generation of use cases in collaboration with the ICS.

Manpower:
Existing (2016-18):
• 11 person-months/yr at the primary node for the e-infrastructure design and setup, plus data-exchange standards definition.
• 1 person-month/yr at each secondary node for standards and tools definition, testing and implementation.
Requested:
• 1 man/year (for three years, 2016-18) + 1 man/year (for one year) for data upload, data-exchange format decisions and, in general, the interaction with ACTARUS at each of the (5) secondary nodes.

320 K€ (human resources and hardware)

P3 – Virtual NF Laboratory

Deliverables and Milestones

TASK 1: TCS BOARD (INGV?)
D1.1 WG strategy and financial plan
D1.2 Coordinating the implementation and the development of the services
D1.3 Guaranteeing the EPOS Data policy and access rules
D1.4 Guaranteeing the coherency with the overall EPOS approach and solutions (ICS, SCB, ERIC)
D1.5 Guaranteeing the harmonization and supporting the coherency with the national strategic plans
D1.6 Involving the broad user community and stakeholders (government, academia, industry, SMEs)
D1.7 Building up a long-term strategy for new services and service maintenance beyond EPOS IP
D1.8 Ethics and legal (e.g. risk communication due to the proximity to the population)

TASK 4: Virtual Laboratory (INGV)
D4.1 Definition and description of data products and related metadata (12 months)
D4.2 Advanced server and DB functionalities setup (18 months)
M4 First stage of DB population with data and metadata of the primary node (18 months)
D4.3 GUIs for human interaction and web services for machine-to-machine interactions are designed (24 months)
D4.4 GUIs and web services testing at the primary node (30 months)
D4.5 Test for full remote operability with NFOs (36 months)

TASK 3: EEW Testing Centers (AMRA)
D3.1: Definition of guidelines for performance evaluation (12 months)
M3: Setting up of the platform (24 months)
D3.2: Off-line tests (24 months)
D3.3: Real-time implementation of the prototype and results (36 months)

TASK 2: IN SITU NFO (KOERI)
D2.1: Maintenance and development of the in situ data centers for collection of multidisciplinary near fault data (36 months)
D2.2: Standardization of formats for Specific Data and development of procedures for data quality control (coord. TCS-V) (12 months)
M2: Static and Dynamic metadata definition for Specific Data (coord. TCS-V) (18 months)
D2.3: Development of a web service at each NFO to provide data discovery and open access to Level 0 multidisciplinary data and metadata (coord. with TCS-V) (30 months)

Financial Sustainability beyond EPOS (existing / requested for EPOS IP)

Total request: 1040 K€
• Governance and Legal: 100 K€ (community and project meetings)
• Pillar 1 – In situ NF Observatories: 360 K€ (1.5 FTE + hardware)
• Pillar 2 – Test Beds: 260 K€ (1.2 FTE + hardware)
• Pillar 3 – Virtual Laboratory: 320 K€ (1.4 FTE + hardware)

Potential NFOs: West Bohemia, Vrancea, Slovenia (disciplines: geology, seismology, geodesy, geochemistry, laboratory).

Governance and Legal

In order to guarantee the harmonization of the NFO activities, their interoperability with other TCSs and the ICS, and a coherent implementation of the specific services, the TCS-NFO needs to set up a Board.

The Board is expected to serve also as a strategic link with both the EPOS BSC and the NRI, dealing with scientific community, legal, governance and financial issues related to the NFOs.

The governance of the TCS is going to be organized around the NFO Board, composed of one representative of each pillar (3) and of each node (6), for a total of 9 people.

Governance and Legal

NFO BOARD - TASK 1: deliverables and milestones

• D1.1 WG strategy and financial plan
• D1.2 Coordinating the implementation and the development of the services
• D1.3 Guaranteeing the EPOS Data policy and access rules
• D1.4 Guaranteeing the coherency with the overall EPOS approach and solutions (ICS, SCB, ERIC)
• D1.5 Guaranteeing the harmonization and supporting the coherency with the national strategic plans
• D1.6 Involving the broad user community and stakeholders (government, academia, industry and SMEs)
• D1.7 Building up a long-term strategy for new services and service maintenance beyond EPOS IP
• D1.8 Ethics and legal (e.g. risk communication due to the proximity to the population)

Request: 100 K€ for TCS management

Leading partner: INGV

TASK 2 - Deliverables and Milestones

D2.1: Maintenance and development of the in situ data centers to provide the collection of continuous and event-based multidisciplinary near fault data (36 months)

D2.2: Standardization of formats for Specific Data and development of procedures for data quality control (coord. TCS-V) (12 months)

M2: Static and Dynamic metadata definition for Specific Data (coord. TCS-V) (18 months)

D2.3: Development of a web service at each Observatory to provide data discovery and open access to Level 0 multidisciplinary data and metadata (coord. with TCS-V) (30 months)

Leading Partner: KOERI (MARSITE)

P1 - In Situ Near Fault Observatories

TASK 3 - Deliverables and Milestones:

• D3.1: Definition of guidelines for performance evaluation (12 months)

• M 3: Setting up of the platform (24 months)

• D3.2: Off-line tests (24 months)

• D3.3: Real time implementation of the prototype and results (36 months)

Leading Partner : AMRA

P2 – Early Warning Testing Centres

TASK 4 - Deliverables and Milestones:

• D4.1 Definition and description of data products and related metadata (12 months)
• D4.2 Advanced server and DB functionalities setup (18 months)
• M4.1 First stage of DB population with data and metadata of the primary node (18 months)
• D4.3 GUIs for human interaction and web services for machine-to-machine interactions are designed (24 months)
• D4.4 GUIs and web services testing at the primary node (30 months)
• D4.5 Test for full remote operability with NFOs (36 months)

Leading Partner : INGV

P3 – Virtual NF Laboratory

