Page 1: Common Analysis Framework, June 2013

27-June-2013 1

Common Analysis Framework, June 2013

Database solutions and data management for PanDA system

Gancho Dimitrov (CERN)

G. Dimitrov

Page 2: Common Analysis Framework, June 2013

Outline

ATLAS databases – main roles and facts

Used DB solutions and techniques:
- Partitioning types – range, automatic interval
- Data sliding windows
- Data aggregation
- Scheduled index maintenance
- Result set caching
- Stats gathering settings
- Potential use of Active Data Guard (ADG)

JEDI component and relevant DB objects

Conclusions

Page 3: Common Analysis Framework, June 2013

ATLAS databases topology

The ADC applications PanDA, DQ2, LFC, PS, AKTR and AGIS are hosted on the ADCR database.

ADCR database cluster HW specification:
- 4 machines, each with 2 quad-core Intel Xeon CPUs @ 2.53 GHz
- 48 GB RAM per machine
- 10 GigE for storage and cluster access
- NetApp NAS storage with 512 GB SSD cache

Snapshot taken from the PhyDB Oracle Streams replication monitoring

Page 4: Common Analysis Framework, June 2013

PanDA system

The PanDA system is the ATLAS workload management system for production and user analysis jobs

Originally based on a MySQL database. Migrated in 2008 to Oracle at CERN.

Challenges:
- The PanDA system has to manage millions of grid jobs daily.
- Changes in job statuses and site loads have to be reflected in the database quickly; fast data retrieval for the PanDA server and monitor is a key requirement.
- Cope with spikes in user workload.
- The DB system has to deal efficiently with two different workloads: transactional load from the PanDA server and (to some extent) data-warehouse load from the PanDA monitor.

Page 5: Common Analysis Framework, June 2013

Trend in number of daily PanDA jobs

Jan – Dec 2011 Jan – Dec 2012 Jan- June 2013

Page 6: Common Analysis Framework, June 2013

The PANDA ‘operational’ and ‘archive’ data

All information relevant to a single job is stored in four major tables. The most important attributes are kept separately from the space-consuming ones such as job parameters and input/output files.

The ‘operational’ data is kept in a separate schema which hosts active jobs plus jobs finished within the most recent 3 days. Jobs that reach status ‘finished’, ‘failed’ or ‘cancelled’ are moved to the archive PANDA schema (ATLAS_PANDAARCH).

ATLAS_PANDA => ATLAS_PANDAARCH

Page 7: Common Analysis Framework, June 2013

PanDA data segments organization

The ATLAS_PANDA JOB, PARAMS, META and FILES tables are partitioned on the ‘modificationtime’ column; each partition covers a time range of one day.

The ATLAS_PANDAARCH archive tables are also partitioned on the ‘modificationtime’ column. Some tables have partitions each covering a three-day window, others a time range of a month.

(Timeline diagram: an Oracle scheduler job inserts the data of the last complete day into the archive. Behind the PanDA server DB sessions lie the filled partitions; ahead of them lie empty partitions for the future – a job scheduled every Monday creates seven new partitions. Partitions whose data has already been copied can be dropped, which a daily Oracle scheduler job takes care of.)

Certain PanDA tables have a defined data sliding window of 3 days, others of 30 days. This natural approach has proven adequate and not resource demanding!

Important: to make sure that job information is not deleted without having been copied into the PANDAARCH schema, a special verification PL/SQL procedure runs before the partition drop event!
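The verification step could be sketched in PL/SQL roughly as follows. This is a minimal illustration under assumed table and procedure names (JOBSARCHIVED4, JOBSARCHIVED, verify_and_drop_partition); the actual ATLAS procedure is not shown in the slides:

```sql
-- Sketch (illustrative names): drop the day's partition only if every row
-- has a matching copy in the archive table.
CREATE OR REPLACE PROCEDURE verify_and_drop_partition(
    p_part_name IN VARCHAR2,
    p_day_start IN DATE
) AS
    v_src  NUMBER;
    v_arch NUMBER;
BEGIN
    SELECT COUNT(*) INTO v_src
      FROM ATLAS_PANDA.JOBSARCHIVED4
     WHERE modificationtime >= p_day_start
       AND modificationtime <  p_day_start + 1;

    SELECT COUNT(*) INTO v_arch
      FROM ATLAS_PANDAARCH.JOBSARCHIVED
     WHERE modificationtime >= p_day_start
       AND modificationtime <  p_day_start + 1;

    IF v_src = v_arch THEN
        EXECUTE IMMEDIATE
          'ALTER TABLE ATLAS_PANDA.JOBSARCHIVED4 DROP PARTITION ' || p_part_name;
    ELSE
        RAISE_APPLICATION_ERROR(-20001,
          'Partition ' || p_part_name || ' not fully archived - drop skipped');
    END IF;
END;
/
```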

Page 8: Common Analysis Framework, June 2013

PanDA data segmentation benefits

High scalability: the PanDA job data copy and deletion is done at table-partition level instead of at row level.

Removing the already-copied data is not IO demanding (it generates very little redo and no undo), as it is a simple Oracle operation on a table segment and its relevant index segments (ALTER TABLE … DROP PARTITION).

Fragmentation in the table segments is avoided. Much better space utilization and caching in the buffer pool

No need for index rebuild or coalesce operations on these partitioned tables.

Page 9: Common Analysis Framework, June 2013

PANDA=>PANDAARCH data flow machinery

The PANDA => PANDAARCH data flow is sustained by a set of scheduler jobs on the Oracle server which execute logic encoded in PL/SQL procedures.

Weekly job which creates daily partitions relevant to the near future days

Daily job which copies data of a set of partitions from PANDA to PANDAARCH

Daily job which verifies that all rows of a given partition have been copied to PANDAARCH, and drops the PANDA partition if the above is true.
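The scheduler jobs above can be sketched with DBMS_SCHEDULER. The job name, procedure and schedule below are assumptions for illustration, not the actual ATLAS definitions:

```sql
-- Sketch: a weekly Oracle scheduler job that pre-creates the daily
-- partitions for the coming week (names and schedule are illustrative).
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'PANDA_CREATE_PARTITIONS',
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'ATLAS_PANDA.CREATE_DAILY_PARTITIONS',
    repeat_interval => 'FREQ=WEEKLY; BYDAY=MON; BYHOUR=6',
    enabled         => TRUE,
    comments        => 'Pre-creates seven daily partitions for the near future'
  );
END;
/
```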

Page 10: Common Analysis Framework, June 2013

Monitoring of the PanDA => PANDAARCH data flow

The PanDA scheduler jobs have to complete successfully. Especially important are the ones that pre-create partitions, copy data to the archive schema and remove partitions whose data has already been copied. Note: whenever new columns are added to the PanDA JOBS_xyz tables, the relevant PL/SQL procedure for the data copying has to be updated to take them into account.

In case of errors, a DBA or another database-knowledgeable person has to investigate and solve the issue(s). Oracle keeps a record of each scheduler job execution (for 60 days).

If for whatever reason the 3-day data sliding window in the PanDA ‘operational’ tables is temporarily not sustained, the PanDA server is not affected. For the PanDA monitor this is not entirely clear yet; however, there were no complaints about the two such occurrences in the last 30 months.

Page 11: Common Analysis Framework, June 2013

The PanDA PLSQL code

Currently certain pieces of the PanDA PL/SQL code are bound to the ATLAS PanDA account names (e.g. DB object names are fully qualified with a hardcoded object owner).

Efforts have been made to parameterize the PL/SQL procedures so that they are flexible about the schemas they interact with.

The full set of modified PL/SQL procedures is under validation on the INTR testbed.

After a successful validation of the modified PL/SQL procedures and Oracle scheduler jobs, these pieces of the PanDA database layer will be ready for the CAF (Common Analysis Framework).

Page 12: Common Analysis Framework, June 2013

Improvements on the current PanDA system

Despite the increased activity on the grid, thanks to several tuning techniques the server resource usage stayed low: e.g. CPU usage in the range of 20-30%.

Thorough studies of the WHERE clauses of the PanDA server queries resulted in:
- revision of the indexes: removal, or replacement with more appropriate multi-column ones
- weekly index maintenance (rebuilds triggered by a scheduler job) on the tables with high transaction activity
- tuned queries
- an auxiliary table for the mapping PanDA ID <=> modification time
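The role of the auxiliary mapping table could look roughly like this. All table and column names are assumptions; the real DDL is not in the slides:

```sql
-- Sketch (illustrative names): map PanDA job IDs to their modification
-- time, so a lookup by PANDAID becomes a partition-pruned query on the
-- time-partitioned jobs tables.
CREATE TABLE ATLAS_PANDA.PANDAID_MODTIME_MAP (
    pandaid          NUMBER NOT NULL,
    modificationtime DATE   NOT NULL,
    CONSTRAINT pk_pandaid_map PRIMARY KEY (pandaid)
);

-- Fetch the modification time first and use it to prune partitions:
SELECT j.*
  FROM ATLAS_PANDA.JOBSARCHIVED4 j
  JOIN ATLAS_PANDA.PANDAID_MODTIME_MAP m ON m.pandaid = j.pandaid
 WHERE j.modificationtime = m.modificationtime
   AND m.pandaid = :id;
```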

Page 13: Common Analysis Framework, June 2013

DB techniques used in PanDA (1)

Automatic interval partitioning: a partition is created automatically when a user transaction imposes a need for it (e.g. a user inserts a row with a timestamp for which a partition does not yet exist).

CREATE TABLE table_name ( … list of columns … )
PARTITION BY RANGE (my_tstamp)
INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
( PARTITION data_before_01122011
  VALUES LESS THAN (TO_DATE('01-12-2011', 'DD-MM-YYYY')) );

In PanDA and other ATLAS applications interval partitioning is very handy for transient types of data where we impose a policy of an agreed DATA SLIDING WINDOW (however, partition removal is done via home-made PL/SQL code).

Page 14: Common Analysis Framework, June 2013

DB techniques used in PanDA (2)

Result set caching

This technique was used on a well-selected set of PanDA server queries – useful in cases where the data does not change often but is queried frequently.

The best metric to consider when tuning queries is the number of Oracle block reads per execution. Queries whose result was obtained from the result cache show ‘buffer block reads = 0’.

Oracle sends back to the client a cached result if the result has not meanwhile been changed by any transaction, thus improving performance and scalability.

The statistics show that 95% of the executions of the PanDA server queries (17 distinct queries with this cache option enabled) were resolved from the result set cache.
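Enabling the cache on a per-query basis is done with the RESULT_CACHE hint. The table and columns below are assumptions for illustration:

```sql
-- Sketch (illustrative names): the RESULT_CACHE hint asks Oracle to serve
-- the query from the server-side result cache for as long as no transaction
-- has modified the underlying data.
SELECT /*+ RESULT_CACHE */
       computingsite, COUNT(*) AS njobs
  FROM ATLAS_PANDA.JOBSACTIVE4
 GROUP BY computingsite;
```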

Page 15: Common Analysis Framework, June 2013

DB techniques used in PanDA (3)

Data aggregation for fast result delivery

The PanDA server has to be able to instantaneously get details on the current activity at any ATLAS computing site (about 300 sites) – e.g. the number of jobs at any site with a particular priority, processing type, status, working group, etc. This is vital for deciding to which site newly arriving jobs are best routed.

The query execution addressing the above requirement proved to be CPU expensive because of its high execution frequency.

Solution: a table with aggregated stats is re-populated by a PL/SQL procedure at an interval of 2 minutes by an Oracle scheduler job (usual elapsed time 1-2 sec).

The initial approach relied on a materialized view (MV), but it proved NOT to be reliable because the MV refresh interval relies on the old DBMS_JOB package.
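The refresh machinery described above could be sketched as follows. All names (procedure, tables, columns, job) are assumptions, not the actual ATLAS objects:

```sql
-- Sketch (illustrative names): re-populate an aggregate stats table,
-- scheduled every 2 minutes via DBMS_SCHEDULER instead of DBMS_JOB.
CREATE OR REPLACE PROCEDURE refresh_site_activity AS
BEGIN
    DELETE FROM ATLAS_PANDA.SITE_ACTIVITY_STATS;
    INSERT INTO ATLAS_PANDA.SITE_ACTIVITY_STATS
         (computingsite, jobstatus, currentpriority, processingtype, njobs)
    SELECT computingsite, jobstatus, currentpriority, processingtype, COUNT(*)
      FROM ATLAS_PANDA.JOBSACTIVE4
     GROUP BY computingsite, jobstatus, currentpriority, processingtype;
    COMMIT;
END;
/

BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'REFRESH_SITE_ACTIVITY_JOB',
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'ATLAS_PANDA.REFRESH_SITE_ACTIVITY',
    repeat_interval => 'FREQ=MINUTELY; INTERVAL=2',
    enabled         => TRUE
  );
END;
/
```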

Page 16: Common Analysis Framework, June 2013

DB techniques used in PanDA (4)

Customized table settings for the Oracle stats gathering. Having up-to-date statistics on table data is essential for an optimal query data access path. We take advantage of a statistics collection mode for partitioned tables called incremental statistics gathering.

Oracle spends time and resources collecting statistics only on partitions with transactional activity, and computes the global table statistics from the previously gathered partition statistics in an incremental way.

exec DBMS_STATS.SET_TABLE_PREFS ('ATLAS_PANDA', 'JOBSARCHIVED4', 'INCREMENTAL', 'TRUE');
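For completeness, after setting the preference the regular gather call picks up the incremental mode (a sketch with parameters left at their defaults; incremental gathering also assumes the default AUTO_SAMPLE_SIZE estimate):

```sql
-- With the INCREMENTAL preference set as above, this call refreshes only the
-- stale partition statistics and derives the global statistics incrementally.
exec DBMS_STATS.GATHER_TABLE_STATS('ATLAS_PANDA', 'JOBSARCHIVED4');
```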

Page 17: Common Analysis Framework, June 2013

Potential use of ADG from PanDA monitor

The complete PanDA archive now hosts information on 900 million jobs – all jobs since the job system started in 2006.

The ADCR database has two standby databases:
- Data Guard for disaster recovery and backup offloading
- Active Data Guard (ADCR_ADG) as a read-only replica

The PanDA monitor can benefit from the Active Data Guard (ADG) resources. One option is to sustain two connection pools: one to the primary database ADCR, and one to ADCR's ADG. The idea is that queries spanning time ranges larger than a certain threshold are resolved from the ADG, where we can afford several parallel slave processes per user query.

A second option is to connect only to the ADG and fully rely on it.
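A long time-range query routed to the read-only replica might then request parallel execution explicitly. The table name and degree of parallelism below are assumptions:

```sql
-- Sketch (illustrative names): an archive query spanning a full year, run on
-- the ADG replica with several parallel slave processes.
SELECT /*+ PARALLEL(j, 4) */
       computingsite, COUNT(*) AS njobs
  FROM ATLAS_PANDAARCH.JOBSARCHIVED j
 WHERE modificationtime >= DATE '2012-01-01'
   AND modificationtime <  DATE '2013-01-01'
 GROUP BY computingsite;
```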

Page 18: Common Analysis Framework, June 2013

New development: JEDI (a component of PanDA)

JEDI is a new component of the PanDA server which dynamically defines jobs from a task definition. The main goal is to make PanDA task-oriented.

Tables of the initial relational model of the new JEDI schema (documented by T. Maeno) complement the existing PanDA tables on the INTR database.

Activities in the last months:
- understanding the new data flow, requirements and access patterns
- addressing the requirement of storing information at event level for keeping track of active jobs' progress
- studies of the best possible data organization (partitioning) from a manageability and performance point of view
- getting to the most appropriate physical implementation of the agreed relational model
- tests with a representative data volume

Page 19: Common Analysis Framework, June 2013

JEDI database relational schema

Yellow: JEDI tables
Orange: PanDA tables
Green: auxiliary tables for speeding up queries

Page 20: Common Analysis Framework, June 2013

Transition from current PanDA schema to a new one

The idea is to make the transition from the current PanDA server to the new one, together with its DB backend objects, transparent to the users.

JEDI tables are complementary to the existing PanDA tables. The current schema and the PANDA => PANDAARCH data copying will not be changed.

However, the relations between the existing and the new set of tables have to exist. In particular:
- a relation between JEDI's Task and PanDA's Job, via a foreign key in all JOBS* tables to the JEDI_TASKS table
- a relation between JEDI's Work queue (for different shares of workload) and PanDA's Job, via a foreign key in all JOBS* tables to the JEDI_WORK_QUEUE table
- a relation between JEDI's Task, Dataset and Contents (new seq. ID) and the PanDA Job processing the file (or a fraction of it), via a foreign key in the FILESTABLE4 table to the JEDI_DATASET_CONTENTS table (when a task tries a file multiple times, there are multiple rows in PanDA's FILESTABLE4 while there is only one parent row in JEDI's DATASET_CONTENTS table)

Note: "NOT NULL" constraints will not be added to the new columns on the existing PANDA tables, to allow them to be used standalone without the JEDI tables.
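One of the relations above could be sketched as follows. The column and constraint names are assumptions; only the table names come from the slides:

```sql
-- Sketch (illustrative column/constraint names): a nullable foreign key from
-- a JOBS* table to the JEDI parent table. No NOT NULL constraint is added,
-- so PanDA can still run standalone without JEDI.
ALTER TABLE ATLAS_PANDA.JOBSACTIVE4 ADD (jeditaskid NUMBER);

ALTER TABLE ATLAS_PANDA.JOBSACTIVE4
  ADD CONSTRAINT fk_jobs_jeditask
  FOREIGN KEY (jeditaskid)
  REFERENCES ATLAS_PANDA.JEDI_TASKS (jeditaskid);
```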

Page 21: Common Analysis Framework, June 2013

JEDI DB objects – physical implementation

Data segmenting is based on RANGE partitioning on JEDI's TASKID column, with an interval of 100000 IDs (tasks), on six of the JEDI tables (uniform data partitioning).

The JEDI data segments are placed in a dedicated Oracle tablespace (data file), separate from the existing PanDA tables.

Thanks to the new CERN license agreement with Oracle, we now take advantage of the Oracle advanced compression features – compression of data within a data block while the application does row inserts or updates.

PanDA tests on tables with and without OLTP compression showed that Oracle's predictions of the compression ratio were right.
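The physical implementation described above could be sketched like this. The table and its columns are assumptions; the partitioning interval, compression mode and dedicated tablespace follow the slides:

```sql
-- Sketch (illustrative table/columns): interval partitioning by TASKID in
-- steps of 100000 tasks, with OLTP compression, in a dedicated tablespace.
CREATE TABLE ATLAS_PANDA.JEDI_DATASET_CONTENTS (
    jeditaskid NUMBER NOT NULL,
    datasetid  NUMBER NOT NULL,
    fileid     NUMBER NOT NULL,
    status     VARCHAR2(64)
)
COMPRESS FOR OLTP
TABLESPACE jedi_data
PARTITION BY RANGE (jeditaskid)
INTERVAL (100000)
( PARTITION initial_part VALUES LESS THAN (100000) );
```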

Page 22: Common Analysis Framework, June 2013

PanDA database volumes

The disk space used by PanDA was 1.3 TB in 2011 and 1.7 TB in 2012; for the first half of 2013 it is 1 TB.

According to the current analysis and production task submission rates of 10K to 15K per day, the estimated disk space needed for JEDI is in the range of 2 to 3 TB per year.

However, with the OLTP compression in place, the disk space usage will be reduced.

Activating the same type of compression on the existing PanDA tables would be beneficial as well.

Page 23: Common Analysis Framework, June 2013

Conclusions

These slides presented the current PanDA data organization and the planned new one with respect to the JEDI component.

Deployment of the new JEDI database objects on the ATLAS production database server is planned for 2 July 2013.

Thank you!
