How Recent Storage Innovations Can Help Improve
Performance and Reliability For Your DB2 Subsystem
Jeffrey Berger
IBM
Session Code: 1413
Platform: DB2 for z/OS
Agenda
• IBM DS8870
• Solid State Disks
• Ultra SSD
• High Performance FICON
• DB2 utilities
• DB2 disorganized index scans
• DB2 RID list scans
• How to configure DS8000 storage
• HyperPAV and Extended Addressability Volumes
The 5th Generation DS8000 Disk System – DS8870

• Designed for Enterprise environments with over 5-9’s availability natively
• Designed for Enterprise environments with over 6-9’s availability when DS8000 with Metro Mirror is combined with GDPS/PPRC HyperSwap

• 2004: DS8300 (POWER5)
• 2006: DS8300 Turbo (POWER5+)
• 2009: DS8700 (POWER6)
• 2010: DS8800 (POWER6+)
• 2012: DS8870 (POWER7)
Business Class and Enterprise Class Configuration Options

Model | Processors per CEC | Physical Capacity (max.) | Disk Drives (max.) | System Memory (GB) | Host Adapters (max.) | 9xE Attach

Business Class
961 | 2-core  | 216 TB   | 144  | 16/32    | 4   | 0

Enterprise Class
961 | 4-core  | 360 TB   | 240  | 64       | 8   | 0
961 | 8-core  | 2,304 TB | 1536 | 128/256  | 16  | 0-3
961 | 16-core | 2,304 TB | 1536 | 512/1024 | 16  | 0-3

First Expansion Frame
96E | N/A     | 504 TB   | 336  | N/A      | 8   | N/A

Second/Third Expansion Frame
96E | N/A     | 720 TB   | 480  | N/A      | N/A | N/A
SSD History

• 2009: 3.5” fibre channel Solid State Disks available for the DS8300 and DS8700
• 2010: Easy Tier
• 2011: 2.5” SAS-2 Solid State Disks available for the DS8800
• 2012: DS8870 improves SSD performance
• 2013: Easy Tier – 5th generation; Ultra SSD; 40% SSD list price reduction
Easy Tier Server

• Easy Tier Server boosts transaction performance dramatically: up to 5X increase for a DB2 banking/brokerage workload
• Easy Tier caches the hottest data to the server: 5.3X performance increase!
• Base configuration is all-HDD with Easy Tier Server not activated
• IBM Power 770 server running AIX with 1 Ultra SSD I/O Drawer
• DS8870 146GB 15K drives (RAID 5) with 2 1.3TB database volumes
IBM high-end flash storage

DS8870 – Enterprise Storage for Critical Applications: exceptional performance, superior reliability, unique server integration, efficiency & optimization.

DS8000 flash evolution (response time):
• All-HDD: 5-15 milliseconds
• Hybrid with Easy Tier: ~1 millisecond
• All-flash: < 1 millisecond

Combining flash optimization plus high-end capabilities, the performance-cost benefits of flash will replace enterprise usage of spinning drives over time.
The economics of All-Flash Enterprise Systems

[Chart: cost over time for successive HDD generations – 146GB 15k-RAID10, 146GB 15k-RAID5, 300GB 15k-RAID10, 300GB 15k-RAID5, 1.2TB 10k-RAID5 – with a marker at TODAY]
[Chart: DS8800 DB2 Random I/O, short seeks, no cache hits – response time (milliseconds, 0-10) vs. IO/sec (0-40,000). Series: SSD with 1 DA port; 15K HDD with 3, 12, and 21 ranks; 10K HDD with 3, 12, and 21 ranks]
All-flash benefits for transactional (OLTP) workload

Comparing an all-flash DS8870 with an all-HDD system boosts performance and reduces costs at equivalent $/GB. Same usable capacity, but with:
• 80% reduction in drive count
• 70% reduction in response time
• 62% reduction in energy usage
• 41% reduction in raw capacity
• 33% reduction in floor space
Source: internal IBM lab measurements
Additional flash news

• 40% list price reduction on all DS8870 SSDs

Ultra SSD Statement of Direction (DS8870 A-frame):
• Up to 4 Ultra SSD drawers connect directly into available PCIe slots
• Each drawer contains 30 SSD drives
• Each drive has 400 GB capacity
• PCIe provides substantial performance improvement
• Up to 240 SAS-2 drives in the A-frame
zHPF History

• 2009: DS8100/DS8300 with R4.1 or above; z10 processor; single-domain, single-track I/O; reads, update writes; media manager exploitation; z/OS R8 and above
• 2010: Multi-track, but <= 64K
• 2011: DS8700/DS8800 with R6.2; z196 processor, >64K transfers; multi-track any size; format writes, multi-domain I/O; z196 FICON Express 8S; QSAM/BSAM exploitation; z/OS R11 and above, EXCPVR
• 2012: Incorrect Length Facility; EXCP/EXCPVR support; ISV exploitation (IMS, Sort, DSS, etc.); 100% of DB2 I/O is now converted to zHPF
zHPF format writes

[Chart: DB2 Load Utility throughput (MB/sec, 0-200) vs. page size (4K, 8K, 16K), FICON vs. zHPF]

This chart assumes that VPSIZE x VPSEQT is at least 400MB.
• Format writes are critical to the performance of the LOAD, REORG, REBUILD, RECOVER, and DSN1COPY utilities
• zHPF is especially important for a small page size
zHPF list prefetch

• Only IBM storage supports zHPF list prefetch

[Chart: I/O response time for 128K list prefetch (cache hits), FICON Express 8S, z196, DS8800, 4K pages. FICON: 0.983 ms contiguous, 2.4 ms noncontiguous; zHPF: 0.62 ms contiguous, 0.64 ms noncontiguous]
DB2 10 disorganized index scans

[Chart: I/O response time (milliseconds, 0-120) for 32 pages vs. % of index read (0-100). Series: FICON and zHPF, each on 10K HDD, 15K HDD, and SSD]

• Only IBM storage with zHPF can optimize the performance of index scans
• zHPF is especially important with SSD
DB2 RID List Scans

[Chart: DB2 index-to-data access, skip sequential; FICON Express 8, 4K pages. Throughput (MB/sec, 0-100) vs. skip distance (4K pages, 0-120). Series: zHPF SSD, FICON SSD, zHPF 15K, FICON 15K]

• Only IBM storage with zHPF can optimize the performance of RID list scans
• zHPF is especially important with SSD
Volume by volume performance

[Chart: IO/sec (0-80) and response time (milliseconds, 0-25) per volume, volumes 1-16 ordered from outer to inner circumference]

• Volumes at the middle circumference perform better than inner and outer volumes due to lower seek distances
• Clustering the data achieves 40% lower response time and 60% higher throughput
  • Clustered: 1600 IO/sec, 9.9 ms avg. response time
  • Unclustered: 981 IO/sec total, 16.3 ms avg. response time
How do I know which are the inner volumes and which are the outer volumes?
• If each DS8000 extent pool has one LCU (Logical Control Unit), the volumes in each LCU are allocated from the outer cylinders to the inner cylinders
HyperPAV
• Principles of PAV
• Two I/Os to the same volume might not contend with each other
• Two I/Os to different volumes on the same HDD might contend with
each other
• If you define enough PAV aliases and you have sufficient
channels, and if you use HyperPAV, the only physical queuing is
on the HDD
• The size and number of volumes are irrelevant
How many PAVs do you need?
• With HyperPAV, you need roughly 3 or 4 PAVs for each physical disk
• Example: if you have 240 disks, you need 720 to 960 PAVs
• Double this to be on the safe side
• Since an LCU is limited to 256 addresses, carving the disks into too many volumes limits the number of PAVs that you can define
• With sufficient PAVs, you will not have IOSQ time unless your physical hardware is saturated
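The alias-count rule of thumb above can be sketched as a small calculation. This is a minimal sketch, not an IBM tool: the 3-4 aliases-per-disk ratio and the doubling safety margin come from the slide, and the function name is illustrative.

```python
# Rough HyperPAV alias budget: 3-4 aliases per physical disk,
# optionally doubled "to be on the safe side" as the slide suggests.

def pav_aliases_needed(physical_disks, aliases_per_disk=4, safety_factor=2):
    """Estimate the number of PAV aliases to define for HyperPAV."""
    return physical_disks * aliases_per_disk * safety_factor

# The slide's example: 240 disks -> 720 to 960 aliases before doubling.
print(pav_aliases_needed(240, aliases_per_disk=3, safety_factor=1))  # 720
print(pav_aliases_needed(240, aliases_per_disk=4, safety_factor=1))  # 960
print(pav_aliases_needed(240))  # 1920 with the safety factor applied
```

Because an LCU is limited to 256 device addresses (bases plus aliases), the result of this calculation also tells you how many LCUs the disks must be spread across.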
RMF: I/O Queue Activity

I/O Q U E U I N G A C T I V I T Y
-TOTAL SAMPLES = 26 IODF = C6 CR-DATE: 03/14/2013 CR-TIME: 10.34.00 ACT: POR
AVG AVG DELAY AVG AVG DATA
LCU CU DCM GROUP CHAN CHPID % DP % CU CUB CMR CONTENTION Q CSS HPAV OPEN XFER
MIN MAX DEF PATHS TAKEN BUSY BUSY DLY DLY RATE LENGTH DLY WAIT MAX EXCH CONC
0 0024 7645 5C 256.38 0.00 0.00 0.0 0.0
5D 256.38 0.00 0.00 0.0 0.0
5E 256.37 0.00 0.00 0.0 0.0
5F 256.38 0.00 0.00 0.0 0.0
C0 256.39 0.00 0.00 0.0 0.0
C1 256.37 0.00 0.00 0.0 0.0
C2 256.38 0.00 0.00 0.0 0.0
C3 256.38 0.00 0.00 0.0 0.0
* 2051.0 0.00 0.00 0.0 0.0 0.000 0.00 0.0 0.000 69 0.54 0.54
• HPAV WAIT/MAX shows if your PAV aliases are under-configured or over-configured
DS8000 Extent Pools

Extent pool 1:
• LOGCOPY1 and BSDS1
• Database 1 and Image Copy Pool 2

Extent pool 2:
• LOGCOPY2 and BSDS2
• Database 2 and Image Copy Pool 1
DS8000 Extent Pools and Logical Control Units

LCU  | DS8000 Extent Pool ID
00A0 | 0
00A1 | 1
00A2 | 2
00A3 | 3
00A4 | 4
00A5 | 5
00A6 | 6
00A7 | 7
00A8 | 8
00A9 | 9

• Map LCUs and DS8000 extent pools one-to-one
• Even numbered LCUs use one DS8000 cluster and odd numbered LCUs use the other cluster
Configuring DS8000 Storage Extent Pools
• A DS8000 extent pool is the smallest “single point of failure”
• A DS8000 requires a minimum of two extent pools
• Using Rotate Extents (default), each volume is striped across all of the disks in the extent pool
• If there are only two extent pools, then you have very little flexibility for managing performance and reliability
• Once your extent pools are configured, it is nearly impossible to reconfigure them
How to configure a DS8800 – A Case Study

The single frame DS8800 and DS8870 that SVL uses to evaluate OLTP performance of DB2 for z/OS:
• 240 x 15K RPM disks, 300GB each, RAID 5
• ~53TB capacity
• 8 Channels, 8 Host Adapters
• 10 Extent Pools using Rotate Extent
• 10 Logical Control Units (1 per Extent Pool)
• 32 3390-A volumes (EAV) per LCU
• 193,662 (174x1113) cylinders per volume
• 320 volumes total
• 128 PAV aliases per LCU (twice as many as needed)
• 1280 PAVs total
• HyperPAV enabled
• zHPF enabled (High Performance FICON)
• FlashCopy enabled
E S S L I N K S T A T I S T I C S
SERIAL NUMBER 00000AHM51 TYPE-MODEL 002107-951 CDATE 04/23/2013 CTIME 15.21.33 CINT 02.13
------ READ OPERATIONS ------- ------ WRITE OPERATIONS ------
--EXTENT POOL-- OPS BYTES BYTES RTIME OPS BYTES BYTES RTIME ----ARRAY----- MIN RANK RAID
ID TYPE RRID /SEC /OP /SEC /OP /SEC /OP /SEC /OP SSD NUM WDTH RPM CAP TYPE
0000 CKD 1Gb 0000 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1 6 15 1800G RAID 5
0002 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1 6 15 1800G RAID 5
0004 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1 7 15 2100G RAID 5
POOL 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 3 19 15 5700G RAID 5
0001 CKD 1Gb 0001 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1 6 15 1800G RAID 5
0003 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1 6 15 1800G RAID 5
0005 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1 7 15 2100G RAID 5
POOL 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 3 19 15 5700G RAID 5
--------------------------------------------------Extent pools 2 and 3 omitted---------------------------------------------------
0004 CKD 1Gb 000C 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1 6 15 1800G RAID 5
000E 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1 6 15 1800G RAID 5
0010 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1 7 15 2100G RAID 5
POOL 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 3 19 15 5700G RAID 5
0005 CKD 1Gb 000D 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1 6 15 1800G RAID 5
000F 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1 6 15 1800G RAID 5
0011 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1 7 15 2100G RAID 5
POOL 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 3 19 15 5700G RAID 5
0006 CKD 1Gb 0012 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1 7 15 2100G RAID 5
0014 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1 7 15 2100G RAID 5
0016 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1 7 15 2100G RAID 5
POOL 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 3 21 15 6300G RAID 5
0007 CKD 1Gb 0013 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1 7 15 2100G RAID 5
0015 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1 7 15 2100G RAID 5
0017 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1 7 15 2100G RAID 5
POOL 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 3 21 15 6300G RAID 5
0008 CKD 1Gb 0018 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1 6 15 1800G RAID 5
001A 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1 6 15 1800G RAID 5
001C 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1 7 15 2100G RAID 5
POOL 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 3 19 15 5700G RAID 5
0009 CKD 1Gb 0019 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1 6 15 1800G RAID 5
001B 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1 6 15 1800G RAID 5
001D 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1 7 15 2100G RAID 5
POOL 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 3 19 15 5700G RAID 5
– 30 ranks of 15K RPM 300GB disks divided into 10 extent pools divided among 4 DA loops.
– 3 of the loops are connected to 6 ranks each and the other loop is connected to 12 ranks.
– Each loop has 4 spares (for a total of 16 spares out of 30 ranks). Pools 6 and 7 have no spares.
– Rotate Extents was used to stripe volumes across the ranks in an extent pool.
Model        | Capacity | Cylinders
3390-3       | 3GB      | 3,339
3390-9       | 9GB      | 10,017
3390-9       | 27GB     | 32,760
3390-9       | 54GB     | 65,520
3390-A (EAV) | 223GB*   | 262,668
3390-A (EAV) architectural limit: 100s of TB*

• EAV helps relieve constraints to address large capacity needs

Extended Address Volumes
• Extended Address Volumes (EAV) enables volumes of more than 65,280 cylinders
• 223 GB volumes initially supported on z/OS V1.10* and IBM System Storage DS8000
• IBM storage now supports up to 1TB volumes with z/OS V1.13 and IBM System Storage DS8000 R6.2
• EAV can help simplify storage management.
• Manage fewer, large volumes as opposed to many small volumes.
• Avoid multi-volume data sets
• EAV can improve performance
• DS8000 Dynamic Volume Expansion can allow
• Non-disruptive migration to larger volume sizes
EAV news
• Software developers supporting z/OS V1.13 and EAV, http://www-03.ibm.com/systems/z/os/zos/software/isv113.html
• Cheryl Watson’s Tuning Letter, 2013 No. 2, www.watsonwalker.com
• Implementation considerations
• You need the correct driver level on the DS8000 (driver 7.6.2)
• Review the required APARs and fixes for both IBM and ISVs
• Review SHARE presentation 3204 before implementing
• Review IBM publication: z/OS 1.13 (SC26-7473) – DFSMS Using the New Functions
• Review the dependencies, coexistence/migration considerations
• EAV migration tracking facility output (this identifies applications that might fail when data sets are on EAVs)
• Configure EAV on your DS8000
• Add EAV to your storage group/pools
• Enable the use of EAV on your system (IGDSMSxx PARMLIB member, change the default USEEAV(NO) to YES)
• Migrate data
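As a sketch, the PARMLIB change in the enablement step might look like the following. The data set names are illustrative, and BREAKPOINTVALUE (the size, in cylinders, above which allocations prefer EAV space) is optional; check your installation's actual IGDSMSxx member:

```
SMS ACDS(SYS1.SMS.ACDS)
    COMMDS(SYS1.SMS.COMMDS)
    USEEAV(YES)
    BREAKPOINTVALUE(10)
```

After updating the member, the change takes effect at the next IPL, or dynamically via the SETSMS operator command.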
Reasons why I like large volumes
• Too many volumes make the amount of volume-level statistics too voluminous
• I don’t want to waste UCB addresses
• I don’t like multi-volume data sets
• Spreading the data across many volumes increases the likelihood of longer seek distances
How many volumes do you need?
• A few hundred to a few thousand
• Enough to make volume-level performance statistics useful
• Not so many as to make the volume of statistics overwhelming
• Enough to provide flexible re-assignment of space to
different SMS storage groups (which are usually
associated with different applications)
• Do not assign so many small volumes to an SMS storage
group that it will impact the CPU cost of SMS allocation
• Nor should you define too many SMS storage groups that
necessitate complex ACS routines
Volume sizes
• Migrating to EAV has its challenges, but on an IBM control unit there is no reason not to use the maximum non-EAV size that is a multiple of the DS8000 extent size (1113 cylinders)
• 58 x 1113 = 64,554 cylinders (about 54GB)
• On a DS8000, if the volume size is not a multiple of the extent size, part of the last extent will be wasted
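The sizing rule above reduces to a one-line calculation: the largest volume, in cylinders, that is both a multiple of the DS8000 extent size and within the non-EAV limit. A minimal sketch, using the constants from this presentation:

```python
# Largest non-EAV volume that wastes no part of a DS8000 extent.
EXTENT_CYLS = 1113     # DS8000 CKD extent size, in cylinders
NON_EAV_LIMIT = 65520  # largest non-EAV 3390 volume, in cylinders

def best_non_eav_size(extent_cyls=EXTENT_CYLS, limit=NON_EAV_LIMIT):
    """Return (whole extents, cylinders) for the largest aligned volume."""
    extents = limit // extent_cyls
    return extents, extents * extent_cyls

extents, cyls = best_non_eav_size()
print(extents, cyls)  # 58 extents, 64554 cylinders (58 x 1113)
```

Any size between 64,554 and 65,520 cylinders would still fit in a non-EAV volume, but would strand part of the 59th extent.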
Volume Sizes…
• With HyperPAV, you only need 5 I/O addresses for each physical disk to enable the disks to become 80% busy
  • Example: With 240 disks, you need 1200 I/O addresses
• Example: Suppose an LCU consists of 24 disks
  • Example 1a) The LCU has 64 volumes with N cylinders each
    • Since the 256-address LCU leaves room for only 192 PAV aliases, even if the I/Os are evenly distributed across all 64 volumes, you will incur IOSQ time before causing the disks to become 80% busy
  • Example 1b) The LCU has only 16 volumes of 4N cylinders each
    • With only 120 aliases, even if all of the I/Os are to a single volume, you will not have any IOSQ time unless the disks are 80% busy
• Conclusion: Bigger volumes mean better performance
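The LCU address budget in the example above can be sketched as follows. The 256-address limit per LCU and the 5-addresses-per-disk rule of thumb come from the slides; the function name is illustrative.

```python
# An LCU has 256 device addresses shared between base volumes and
# HyperPAV aliases; roughly 5 concurrent I/Os per physical disk are
# needed to drive the disks to ~80% busy.
LCU_ADDRESSES = 256
IOS_PER_DISK = 5

def alias_headroom(base_volumes, physical_disks):
    """Return (aliases available, aliases needed) for one LCU."""
    aliases_available = LCU_ADDRESSES - base_volumes
    aliases_needed = physical_disks * IOS_PER_DISK
    return aliases_available, aliases_needed

# Example 1a: 64 small volumes on a 24-disk LCU
print(alias_headroom(64, 24))  # (192, 120)
# Example 1b: 16 larger volumes on the same 24 disks
print(alias_headroom(16, 24))  # (240, 120)
```

Fewer, larger base volumes leave more of the LCU's 256 addresses available as aliases, which is the arithmetic behind the slide's conclusion that bigger volumes mean better performance.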
FlashCopy
• System Level Backups
• Available since DB2 V8 and subsequently enhanced
• Uses volume level FlashCopy
• The fastest way to back up or recover the whole DB2 system
• VSAM image copies
• Available since DB2 10
• Uses data set level FlashCopy
• Enables transaction consistent image copies
• Saves the CPU cost and channel cost of reading the data
into DB2 and writing it back out to DASD
• Faster backup and recovery than traditional image copies
References
• High Performance FICON (zHPF) Frequently Asked Questions, March 15, 2013, http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/FQ127122
• DB2 for z/OS and List Prefetch Optimizer, http://www.redbooks.ibm.com/abstracts/redp4862.html
• DFSMS V1.10 and EAV Technical Guide, http://www.redbooks.ibm.com/abstracts/sg247617.html
• z/OS Version 1 Release 13 Implementation, http://www.redbooks.ibm.com/abstracts/sg247946.html
• HyperPAV Support, ftp://www.redbooks.ibm.com/redbooks/2006_zSeries_Workshop_Materials/HyperPAV.pdf
• DS8000 Performance Monitoring and Tuning, http://www.redbooks.ibm.com/redbooks/pdfs/sg247146.pdf
• DB2 10 for z/OS Performance Topics, http://www.redbooks.ibm.com/abstracts/sg247942.html
Jeffrey Berger, IBM
Session 1413
How Recent Storage Innovations Can Help Improve
Performance and Reliability For Your DB2 Subsystem