© 2011 IBM Corporation
IBM Virtualization Engine™ TS7700
Disclaimers
■ Copyright © 2011 by International Business Machines Corporation.
■ No part of this document may be reproduced or transmitted in any form without written permission from IBM Corporation.
■ The performance data contained herein were obtained in a controlled, isolated environment. Results obtained in other operating environments may vary significantly. While IBM has reviewed each item for accuracy in a specific situation, there is no guarantee that the same or similar results will be obtained elsewhere. These values do not constitute a guarantee of performance. The use of this information or the implementation of any of the techniques discussed herein is a customer responsibility and depends on the customer's ability to evaluate and integrate them into their operating environment. Customers attempting to adapt these techniques to their own environments do so at their own risk.
■ Product data has been reviewed for accuracy as of the date of initial publication. Product data is subject to change without notice. This information could include technical inaccuracies or typographical errors. IBM may make improvements and/or changes in the product(s) and/or program(s) at any time without notice. Any statements regarding IBM's future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only.
■ References in this document to IBM products, programs, or services do not imply that IBM intends to make such products, programs or services available in all countries in which IBM operates or does business. Any reference to an IBM Program Product in this document is not intended to state or imply that only that program product may be used. Any functionally equivalent program that does not infringe IBM's intellectual property rights may be used instead. It is the user's responsibility to evaluate and verify the operation of any non-IBM product, program or service.
Disclaimers (continued)
■ THE INFORMATION PROVIDED IN THIS DOCUMENT IS DISTRIBUTED "AS IS" WITHOUT ANY WARRANTY, EITHER EXPRESS OR IMPLIED. IBM EXPRESSLY DISCLAIMS ANY WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR NONINFRINGEMENT.
■ IBM shall have no responsibility to update this information. IBM products are warranted according to the terms and conditions of the agreements (e.g. IBM Customer Agreement, Statement of Limited Warranty, International Program License Agreement, etc.) under which they are provided. IBM is not responsible for the performance or interoperability of any non-IBM products discussed herein.
■ Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products in connection with this publication and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.
■ The provision of the information contained herein is not intended to, and does not, grant any right or license under any IBM patents or copyrights. Inquiries regarding patent or copyright licenses should be made, in writing, to:
IBM Director of Licensing
IBM Corporation
North Castle Drive
Armonk, NY 10504-1785
U.S.A.
Trademarks
■ The following terms are trademarks or registered trademarks of the IBM Corporation in either the United States, other countries, or both.
– IBM, Power Systems, System Storage, TotalStorage, System i, System p, System x, System z, Virtualization Engine
– z/OS, z/VM, VM/ESA, OS/390, AIX, DFSMS/MVS, OS/400, i5, FICON, ESCON, Tivoli
– VSE/ESA, TPF, DFSMSdfp, DFSMSdss, DFSMShsm, DFSMSrmm
■ Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.
■ Other company, product, and service names mentioned may be trademarks or registered trademarks of their respective companies.
Agenda
■ Virtual Tape Concepts
■ Product Overview
■ Subsystem Performance
■ Potential Business Benefits
■ Technology
■ VTS Migration
Virtual Tape Concepts
Product Development Objective
■ Develop a subsystem that exploits the storage hierarchy
– Supports the requirements for data written to virtual tape to address
• business continuity in the event of an outage, and/or
• long-term archival or regulatory legislation
– Matches the data to its access requirements
• Fast access to data on 'virtual volumes' in the disk buffer
• Cost-effective storage by optionally migrating 'virtual volumes' to physical tape
– Provides the industry's most flexible choice of technologies to meet the customer's needs
• Disk-only virtual tape for those customers that do not require the presence of physical tape or favor the large disk residency of a disk-only solution
• Disk and physical tape tightly integrated, coupling the benefits of disk and physical tape to deliver performance for active data and best economics for inactive data
• Hybrid of both disk-only and disk-plus-physical-tape solutions to handle all combinations in between
Virtual Tape Concepts

[Diagram: virtual drives 180–19F front a tape volume cache holding virtual volumes; logical volumes are optionally stacked to TS1140 cartridges of up to 12TB capacity¹]
■ Virtual Tape Drives
– Appear as multiple 3490E tape drives
– Can be shared / partitioned like real tape drives
– Require fewer, or can eliminate, real tape drives
■ Tape Volume Caching
– Designed to eliminate all or many physical tape delays
– Supports read hits from cache / recalls from cartridge
– Supports 100% cache write hits
– Can be configured to support 100% cache read hits
■ Optional Volume Stacking to Physical Tape
– Designed to fully utilize cartridge and library capacity
– Stacks multiple logical volumes onto stacked cartridges
– Supports TS1140/TS1130/TS1120 and/or 3592 J1A tape drives
¹ Assumes 3:1 compression
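The caching and stacking behavior above can be sketched as a minimal model. This is an illustrative assumption of the concept, not IBM's implementation; the class and method names are invented for clarity.

```python
# Illustrative model (not IBM code) of a tape volume cache:
# writes always land in the disk buffer, reads are either cache
# hits or recalls, and stacking to physical tape is optional.

class TapeVolumeCache:
    def __init__(self):
        self.cache = {}    # volser -> data resident in the disk buffer
        self.stacked = {}  # volser -> data migrated to a stacked cartridge

    def write(self, volser, data):
        # 100% cache write hits: data always lands in the disk buffer.
        self.cache[volser] = data

    def stack(self, volser):
        # Optional volume stacking: move a logical volume to physical tape.
        self.stacked[volser] = self.cache.pop(volser)

    def read(self, volser):
        if volser in self.cache:
            return self.cache[volser], "cache hit"
        # Recall: stage the logical volume back from the cartridge.
        self.cache[volser] = self.stacked.pop(volser)
        return self.cache[volser], "recall"
```

A read of a stacked volume is a recall; re-reading it then hits the cache, which is why a disk-only configuration can deliver 100% cache read hits.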
TS7700 Virtualization Engine Operation
■ Virtual volume copy function
– Logical volumes are copied in FIFO order
– Scheduled after End of Volume (EOV) completes
■ Optional cache space management*
– SMS preference groups PG0/PG1
– Uses a Least Recently Used (LRU) algorithm
■ Automatically manages physical cartridge space*
– Reclaims 'gaps' caused by logical volume expiration
– User-set policies control when a cartridge is eligible for reclaim
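The PG0/PG1 preference groups combined with LRU ordering can be sketched as follows. This is a simplified assumption of the policy (prefer-remove PG0 volumes leave cache before prefer-keep PG1 volumes, least recently used first), not IBM's actual algorithm.

```python
# Illustrative sketch (not IBM code) of preference-group cache
# management: PG0 ("prefer remove") volumes are evicted before
# PG1 ("prefer keep"), and LRU order applies within each group.

def eviction_order(volumes):
    """volumes: iterable of (volser, preference_group, last_access).
    preference_group is 0 (PG0) or 1 (PG1); last_access is any
    comparable timestamp. Returns volsers in eviction order."""
    return [volser for volser, _, _ in
            sorted(volumes, key=lambda v: (v[1], v[2]))]

order = eviction_order([
    ("PG1OLD", 1, 10),   # old, but "prefer keep"
    ("PG0NEW", 0, 99),   # recently used, but "prefer remove"
    ("PG0OLD", 0, 5),
])
# PG0 volumes leave first (LRU within the group), then PG1.
```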
* For configurations with tape attachment
TS7700 Virtualization Engine Performance

■ Peak performance is a key measurement criterion

[Diagram: time to process – jobs on a physical device run serially at the device data rate (e.g. 14 MB/s), while many concurrent jobs on virtual devices (~3 MB/s each) share the total TS7700 bandwidth]
Product Overview
TS7700 Virtualization Engine Overview
■ Includes advanced architecture
– Architecture designed to facilitate future enhancements
– Advanced IBM technology to increase performance and capacity
– Grid business continuity option to increase flexibility and reduce cost
■ Two models provide high performance and capacity
– Both can achieve 900 MBps for standard workloads
– TS7720 provides ~1.3 PB of native cache capacity (3:1 compression)
• No attachment to back-end tape except via Grid attachment to a TS7740
– TS7740 provides over 84TB of native cache capacity (3:1 compression)
• Supports attachment to IBM TS1140, TS1130, TS1120 or 3592 J1A tape drives
• Supports tape drives in an IBM TS3500 tape library
• Supports TS1140/TS1130/TS1120 data encryption
■ Synchronous Mode Copy¹
– Provides true synchronous copy through real-time duplexing (local mount and remote mount at the same time)
– Copy consistency at two locations up to any implicit or explicit sync point, providing a zero recovery point objective (RPO)
• Up to two sites will be consistent after each implicit or explicit tape SYNC operation
• Provides applications, such as DFSMShsm, dataset-level replication (zero RPO)
• When duplexing fails, downgrade to sync-deferred or fail the job
• LAN/WAN buffering is used until SYNC points are reached
• Additional Deferred and/or Immediate copies can occur once the volume is closed
– Most optimal for DFSMShsm, DFSMSdfp OAM Object Support and other dataset or object stacking applications
¹ Belongs to the R2.1 code
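The failure handling described for Synchronous Mode Copy can be sketched as a small decision function. This is a hedged illustration of the described behavior (consistent at a sync point, otherwise degrade to sync-deferred or fail the job); the function and parameter names are assumptions, not TS7700 interfaces.

```python
# Illustrative sketch (not IBM code) of Synchronous Mode Copy
# behavior at a sync point: both locations must be written for
# zero RPO; on duplex failure, either degrade or fail the job.

def sync_point(primary_ok, secondary_ok, on_failure="sync-deferred"):
    """Returns the resulting copy state at a SYNC operation."""
    if primary_ok and secondary_ok:
        return "consistent"          # zero recovery point objective
    if on_failure == "sync-deferred":
        return "sync-deferred"       # the lagging copy catches up later
    raise RuntimeError("job failed: duplex write unsuccessful")
```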
Continued
■ Remote Mount IP Link Failover¹
– Improves remote read/write operations by utilizing alternate links in the event of path failure
– Uses redundant grid links for remote mounts for transparent failover
– If a failure occurs, the failover path searches for another IP link to continue the job without stopping, transparently to the host and customer
■ Copy Export Acceleration¹
– New option that improves the speed of large copy export operations by reducing the number of physical volumes that contain a copy of the database used for recovery
■ Parallel Copies and Pre-migrations during Mounts for Read¹
– The previous design cancels any active copy/migration activity on mounts for read
– The new design allows copies and pre-migrations to occur in parallel with host mounts for read
¹ Belongs to the R2.1 code
Continued
■ Copy Export Merge for Workload Moves¹
– Allows export restore to a system that is not empty
– Can be used to move content from one grid to another through copy export
■ Grid Merge¹
– Merges two existing multi-cluster grids to form a single larger grid
■ Preferred Migration of Scratch Volumes¹ ²
– Reduces use of cache for data unlikely to be read
– Favors the migration of previously scratched PG1 volumes before private volumes
– Allows private volumes to be retained in disk cache longer
¹ Belongs to the R2.1 code
² Applies to TS7740 only
TS7720 Virtualization Engine Components
■ A tape frame¹
– Frame provides up to 36U for mounting
• A TS7720 Virtualization Engine
• One TS7720 cache controller
• Up to six TS7720 cache drawers
– Supports high availability
• Redundant power supplies
• Two power feeds
¹ Machine Type 3952 Model F05
TS7720 Virtualization Engine Components (continued)
■ One TS7720 node¹
– High performance IBM POWER7 server
• One 8-way processor card
• Up to 900 MBps
• New I/O drawers
– Performance enablement features (FC5268)
• Up to nine additional 100 MBps increments
• First increment is enabled with the server
– Enhanced continuous availability
• 2x1Gb, 4x1Gb (SW optical or copper) or 2x10Gb LW optical grid ports
• 5-way and 6-way grids via iRPQ
¹ Machine Type 3957 Model VEB
TS7720 Virtualization Engine Components (continued)
■ Up to three TS7720 cache controllers¹
– Base frame supports one controller
– Optional expansion frame (3952 F05) supports up to two additional controllers
• Base frame must be fully populated before adding the expansion frame
– Provide a high performance RAID 6 disk tape volume cache
• Attach to one TS7720 Virtualization Engine node
• Provides up to 24TB of usable cache capacity per drawer
• Includes 2TB SATA HDDs
• Includes four 8Gbps FC interfaces
– Supports high availability
• Dual power
• Automatic HDD hot sparing and rebuild
• Redundant hot-swap components: RAID controllers, power supplies, enclosure fans, hard disks
¹ Machine Type 3956 Model CS8
TS7720 Virtualization Engine Components (continued)
■ Base frame supports two to six XS7 cache drawers¹
– Provide high performance RAID 6 disk arrays
– Maximum base frame capacity ~162TB (pre-compression)
■ Optional expansion frame supports up to ten additional XS7 expansion drawers¹
– Maximum TS7720 configuration capacity ~442TB (pre-compression)
■ Each TS7720 cache drawer
– High performance RAID 6 disk
• Attaches to the TS7720 cache controller
• Provides up to 24TB of usable cache capacity
• Includes 2TB SATA HDDs
– Supports high availability
• Dual power
• Automatic hot sparing/rebuild
• Redundant hot-swap components: power supplies, enclosure fans, hard disks
¹ Machine Type 3956 Model XS7
TS7740 Virtualization Engine Components
■ A tape frame¹
– Frame provides up to 36U for mounting
• A TS7740 Virtualization Engine
• One TS7740 cache controller
• Zero, one or three TS7740 cache drawers
– Supports high availability
• Redundant power supplies
• Two power feeds
¹ Machine Type 3952 Model F05
TS7740 Virtualization Engine Components (continued)
■ One TS7740 node¹
– High performance IBM POWER7 server
• One 8-way processor card
• Up to 900 MBps
• New I/O drawers
– Cache enablement features (FC5267)
• Offered in one to 28 1TB increments
– Performance enablement features (FC5268)
• Up to ten 100 MBps increments
– The number of cache and performance increment features are not required to be equal
– Enhanced continuous availability
• 2x1Gb, 4x1Gb (SW optical or copper) or 2x10Gb LW optical grid ports
• 5-way and 6-way grids via iRPQ
¹ Machine Type 3957 Model V07
TS7740 Virtualization Engine Components (continued)
■ One TS7740 cache controller¹
– Provides a high performance RAID 5 disk tape volume cache
• Attaches to one TS7740 Virtualization Engine node
• Provides up to 7TB of usable cache capacity
• Includes 600GB FC HDDs
• Includes four 8Gbps FC interfaces
– Supports high availability
• Dual power
• Automatic HDD hot sparing and rebuild
• Redundant hot-swap components: RAID controllers, power supplies, enclosure fans, hard disks
¹ Machine Type 3956 Model CC8
TS7740 Virtualization Engine Components (continued)
■ Up to three TS7740 cache drawers¹
– Provide high performance RAID 5 disk arrays
– One, two or four drawer cache configurations
• One drawer with up to 7TB capacity – cache controller only
• Two drawers with up to 14TB capacity – cache controller plus one cache drawer
• Four drawers with up to 28TB capacity – cache controller plus three cache drawers
■ Each TS7740 cache drawer
– High performance RAID 5 disk
• Attaches to the TS7740 cache controller
• Provides up to 7TB of usable cache capacity
• Includes 600GB FC HDDs
– Supports high availability
• Dual power
• Automatic hot sparing/rebuild
• Redundant hot-swap components: power supplies, enclosure fans, hard disks
¹ Machine Type 3956 Model CX7
TS7700 Grid Configurations
■ Couples up to six TS7700 clusters together to form a Grid configuration
– B10/B20 P2P VTCs have been eliminated
– Hosts attach directly to the TS7700 clusters
■ Clusters in a Grid can be any combination of TS7740s and TS7720s
– TS7720s in the same Grid are not required to have the same cache size
■ Any volume is accessible through any TS7700 cluster in the Grid configuration
■ IP-based replication
– Two 1Gbps Ethernet links
• Dual-port copper RJ45 (FC1032)
• Dual-port shortwave optical fibre (FC1033)
– Option to have 4x1Gb (SW optical or copper) or 2x10Gb (LW optical) grid ports
– Dynamic Grid network load balancing
– Standard TCP/IP
■ Policy-based replication management
■ Can be configured for disaster recovery and/or higher availability environments
■ Grid Merge¹
– Merges two existing multi-cluster grids to form a single larger grid
[Diagram: hosts attached to TS7700 clusters with optional TS1140/TS3500 back ends, interconnected by IP links in a Grid configuration]
¹ Belongs to the R2.1 code
Cluster Families - Tape Volume Cache Selection
■ Indicates a grouping of clusters
– Typically by geographic locality
– Optimizes the use of bandwidth between clusters
■ Clusters within the same family have a higher weight for TVC selection
– For example, recall from a cluster within a family instead of from a remote family
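The family weighting for tape volume cache selection can be sketched as follows. The weights, bonus value, and function are illustrative assumptions, not the TS7700's actual selection algorithm.

```python
# Illustrative sketch (not IBM code) of family-weighted TVC
# selection: clusters in the mounting cluster's own family get a
# bonus so recalls stay local instead of crossing to a remote family.

def pick_tvc(clusters, mounting_family):
    """clusters: list of (name, family, base_weight).
    Returns the name of the cluster with the highest weighted score."""
    FAMILY_BONUS = 100  # arbitrary bonus favoring the local family
    def score(cluster):
        name, family, weight = cluster
        return weight + (FAMILY_BONUS if family == mounting_family else 0)
    return max(clusters, key=score)[0]
```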
[Diagram: TS7740 Cluster0 and TS7720 Cluster2 in City A, TS7740 Cluster1 and TS7740 Cluster3 in City B, each TS7740 with drives/library, connected by LAN/WAN]
Cooperative Replication
■ Copy management uses family information to optimize long-distance copy links
– For deferred mode copies
– Prioritizes getting one copy to each family before making copies to members within a family
– Executes a single copy between families, then local copies between family members
■ Enhanced bandwidth utilization
– Reduces redundant copy tasks moving data between sites
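The copy prioritization above can be sketched as a two-wave plan: one inter-family copy per remote family first, then fan-out within each family. This is a hedged illustration of the described ordering; the function and data shapes are assumptions.

```python
# Illustrative sketch (not IBM code) of cooperative replication:
# schedule a single long-distance copy per remote family first,
# then let each family replicate locally among its own members.

def plan_deferred_copies(source_family, families):
    """families: dict of family -> list of member clusters needing a copy.
    Returns (first_wave, second_wave) of copy targets."""
    first_wave, second_wave = [], []
    for family, members in families.items():
        if not members:
            continue
        if family == source_family:
            second_wave.extend(members)    # local copies in the source family
        else:
            first_wave.append(members[0])  # single copy between families
            second_wave.extend(members[1:])  # then within-family copies
    return first_wave, second_wave
```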
[Diagram: a single family-to-family copy crosses the LAN/WAN between City A and City B; subsequent copies stay within each family]
Automatic Removal Policy
■ Disk-centric and cost-effective hybrid Grid configurations
■ Automatic volume migration and cache space management of the TS7720's cache
– Volumes are copied from a TS7720 to a TS7740 through normal copy policies
– When space is needed, the least recently accessed volumes in the TS7720's cache that have been copied to a TS7740 are removed from the TS7720's cache
■ Migrated volumes remain accessible through the TS7720
– The TS7720 uses the grid links to remotely access the volume data in the TS7740
[Diagram: data migrates from the TS7720's very large cache over the LAN/WAN to the TS7740's intermediate cache and tape; migrated data remains remotely accessible]
Extended Removal Policies
■ Configurable Volume Migration Settings
– "Pinned" – These volumes remain pinned in the TS7720 cache (except when scratched).
– "Prefer Remove" LRU Group 0 and "Prefer Keep" LRU Group 1 – As the TS7720 reaches full capacity, these volumes are automatically removed in LRU order, favoring those in Group 0 over those in Group 1. Only volumes that have completed peer copies are candidates for removal.
• Minimum Retention Time – An associated pin duration or grace period during which data must exist prior to removal. Only after the pin time has elapsed since last access do LRU Group 0 and 1 volumes become candidates for removal.
– Fast Ready/Scratch Volumes – When removal takes place, volumes that have been returned to scratch are always preferred first. This includes both "Pinned" and "LRU Group 0/1" volumes.
■ Removed volumes remain accessible through all clusters
– Grid links are used to remotely access the volume data in other TS7700s if it has been removed locally.
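The removal ordering described above can be sketched as a candidate-selection function. This is a simplified assumption of the policy (scratch first, then Group 0 before Group 1 in LRU order, honoring peer-copy and minimum retention constraints), not IBM's implementation.

```python
# Illustrative sketch (not IBM code) of TS7720 removal candidates:
# scratch volumes go first; then Group 0 before Group 1 in LRU
# order; pinned, uncopied, or still-retained volumes are skipped.

def removal_candidates(volumes, now):
    """volumes: list of dicts with keys volser, policy ('pinned',
    'group0', 'group1'), scratched, copied, last_access, min_retention.
    Returns volsers in removal order."""
    scratch = [v for v in volumes if v["scratched"]]
    groups = [v for v in volumes
              if not v["scratched"]
              and v["policy"] in ("group0", "group1")   # never pinned
              and v["copied"]                           # peer copy complete
              and now - v["last_access"] >= v["min_retention"]]
    groups.sort(key=lambda v: (v["policy"], v["last_access"]))
    return [v["volser"] for v in scratch + groups]
```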
Removal is now supported on all TS7720 configurations.
Selective Write Protect for Disaster Recovery
[Diagram: a production host runs production jobs against one TS7740 VE; a DR test host on a second TS7740 VE can read all volumes but write only to category X – all volumes are write-protected, with category X excluded]
■ Management interface extension of cluster write protect
– Allows certain categories to be excluded from cluster-scope write protection
■ Used for DR testing
– Allows the customer to simulate a DR scenario
– Allows the DR test site to read production data while still being able to write DR test volumes within pre-configured categories
– Prevents the DR test from modifying production volumes
Statements of IBM future plans and directions are provided for information purposes only. Plans and direction are subject to change without notice.
New Multi-tenancy – Selective Device Access Control
Enables hard partitioning of a TS7700 between several hosts
– Blocks access and control of volumes created by one host from the other hosts
– Separated by tape management systems, independent volume ranges and scratch pools
– Access is allowed through specific Library Port IDs (virtual device addresses)
[Diagram: three z/OS hosts use Library Port IDs 01-08, 09-0F and 10-1F; on the TS7700, Group1 = LP ID 01-08, Group2 = LP ID 09-0F, Group3 = LP ID 10-1F. Volume ranges are assigned to groups (VOL123-VOL999: Group2; VOLABC-VOLZZZ: Group1; VOL00A-VOL00Z and VOLA00-VOLZ00: Group3), and access is allowed only if the command is received on a device in the assigned group]
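The access check can be sketched as follows: a command succeeds only when the Library Port ID it arrived on belongs to the group that owns the volume's range. The function and the data shapes are illustrative assumptions, not TS7700 interfaces.

```python
# Illustrative sketch (not IBM code) of Selective Device Access
# Control: map LP IDs to groups and volser ranges to groups, and
# allow a command only when the two groups match.

def access_allowed(volser, lp_id, group_of_lp, group_of_range):
    """group_of_lp: dict of LP ID -> group name.
    group_of_range: dict of (low_volser, high_volser) -> group name.
    Returns True if the command may proceed."""
    for (low, high), group in group_of_range.items():
        if low <= volser <= high:  # volsers are fixed-length strings
            return group_of_lp.get(lp_id) == group
    return False  # volume not in any assigned range
```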
New Scratch Allocation Assistance
Prefer particular clusters for scratch mounts
– Use Management Class to configure "Candidate Clusters" for scratch mounts
– z/OS will randomly choose a device among the "Candidate Clusters"
– If all "Candidate Clusters" are unavailable, any available cluster will be used instead
Direct new workloads to particular clusters
– Direct primary data such as DFSMShsm ML2 workload to TS7720s
– Direct backup/archive data to TS7740s
– Existing allocation assistance will be used for read access
[Diagram: the primary workload is directed to a TS7720 cluster, the archive workload to TS7740 clusters with tape libraries, connected over a LAN/WAN]
This feature is offered on JES2 only at this time.
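The selection behavior described above can be sketched as follows: pick randomly among the configured candidate clusters, falling back to any available cluster when none of the candidates is available. This is an illustrative assumption of the described policy, not z/OS code.

```python
# Illustrative sketch (not IBM code) of scratch allocation
# assistance: random choice among available candidate clusters,
# with a fallback to any available cluster in the grid.
import random

def choose_scratch_cluster(candidates, available):
    """candidates: configured 'Candidate Clusters' (list of names).
    available: set of clusters currently reachable in the grid."""
    usable = [c for c in candidates if c in available]
    pool = usable if usable else list(available)  # fallback path
    return random.choice(pool) if pool else None
```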
TS7700 Grid Configuration Summary
■ The capability to peer is built into the architecture of the TS7700 Virtualization Engine
■ All TS7700 configurations, including standalone, appear to the host as a PTP VTS (Composite and Distributed libraries) to simplify migration
■ Activated by feature code 4015 – Grid Enablement
■ IP interface between TS7700s
Supported Code Upgrade Levels and Other Restrictions
■ The R2.1 release of code can be installed on any previous TS7700 machine that contains a code level of 8.7.0.XXX or later
– Any TS7700 with an earlier level of code will be required to first upgrade to R1.7 or later prior to installing R2.1
– Requires V06 and VEA server models to have 16 GB of memory prior to or during installation
■ 3494 library attached installations are not supported in this release
TS7700 Grid Configuration Examples
TS7700 Two Cluster Grid Configuration for High Availability
■ The two TS7700s are located at one site
■ Interconnected through a Local Area Network
■ Hosts are attached to both TS7700s
■ Data is available through either TS7700
– Ownership Takeover Manager enabled automatically when one of the TS7700s fails
TS7700 Two Cluster Grid Configuration for Disaster Recovery
■ Two TS7700s are located at two geographically separated sites
■ Interconnected through a Wide Area Network
■ Only the disaster recovery host is connected to the remote TS7700
■ If the local TS7700 is unavailable, data is only available at the remote TS7700
– Ownership Takeover Manager enabled automatically when one of the TS7700s fails
[Diagram: production site hosts and TS3500¹ connected over IP links and the WAN to the disaster recovery site]
¹ Supported by TS7740 model only
TS7700 Two Cluster Grid Configuration for Disaster Recovery & Availability
■ The two TS7700s are located at two geographically separated sites
■ Interconnected through a Wide Area Network
■ The local host connects to the remote TS7700 through channel-extended FICON interfaces, making data available through either TS7700
– Vary devices online when needed
– Ownership Takeover Manager enabled automatically when one of the TS7700s fails
[Diagram: production site connected to the disaster recovery site via FICON channel extension, IP links and the WAN; TS1140/TS3500¹ at each site]
¹ Supported by TS7740 model only
TS7700 Three Cluster Grid Configuration for High Availability and Disaster Recovery
■ Two TS7700s are located within 100km
■ Interconnected through a Local Area Network
■ Hosts are attached to both TS7700s
■ One TS7700 is located remotely, connected via Wide Area Network
■ Immediate copies between the metro sites
■ Remote site receives primarily deferred copies
– Immediate copies reserved for critical volumes only
■ Virtual devices offline except for disaster recovery or unassisted disaster testing
[Diagram: metro sites with hosts on a LAN, plus a remote site with disaster recovery hosts over IP links and the WAN; TS1140/TS3500¹ at each site]
¹ Supported by TS7740 model only
TS7700 Three Cluster Grid Configuration for Disaster Recovery
■ Two TS7700s in independent data centers, which may be remote from each other
■ Hosts are not connected to each other
■ The two TS7700s do not replicate data between themselves
■ A third TS7700 receives primarily deferred copies for DR
[Diagram: two data centers, each with hosts and a TS7700 (TS1140/TS3500¹), both replicating over IP links and the WAN to a DR site]
¹ Supported by TS7740 model only
TS7700 Four Cluster Grid Configuration for Electronic Vaulting
■ Three production TS7720 clusters, all feeding into a common TS7740
■ Production data migrates into the TS7740
[Diagram: three TS7720 clusters (production capacity of 1200TB) feeding over a LAN/WAN into a TS7740 cluster with drives/library and Copy Export]
TS7700 Four-Way Grid Configuration for High Availability and High Performance
■ High availability and high performance for all data through remote access
■ Higher cache hit percentage with a production cache size of 400TB
[Diagram: HA production center with two TS7720 clusters (maximum of 400TB of most-accessed compressed data) connected over the WAN to an HA disaster recovery center with two TS7740 clusters and drives/libraries]
TS7700 Grid Configurations for Capacity Scaling
■ All volumes appear to the host as a composite library
■ The composite library can support
– Up to 1024 virtual tape drives
– Up to 1776TB of usable storage space – all TS7720s
– Up to 115TB of usable cache storage space – all TS7740s
– Assumes no copies
[Diagram: a host attached to a composite library of four TS7700 clusters]
TS7700 Five-Way* Grid Configuration for High Availability and High Performance
■ 5-way provides two HA production sites with a single cluster at the DR site
■ Most of the bigger configurations are hybrids (TS7720/TS7740)
■ Currently only one cluster at a time can be joined into the grid; it can be empty (join) or have existing volumes (merge)
■ Software updates for 2.0 will support up to 2048 logical devices
[Diagram: two HA production centers, each with a TS7720 cluster and a TS7740 cluster with drives/library, connected over the WAN to an HA disaster recovery center with a TS7740 cluster]
* Five-way requires an RPQ
TS7700 Six-Way* Grid Configuration for High Availability and High Performance
■ 6-way provides two HA production sites with two clusters at the DR site
■ All scratch mounts are directed to the desired clusters (i.e. Scratch Allocation Assist is available)
■ Retain Copy mode is enabled to prevent extra copies during outages when mounts are not routed to the expected cluster
■ Large caches on the TS7740 allow premigration activity to be done outside of the batch window by setting the premigration thresholds much higher
[Diagram: two HA production centers, each with a TS7720 cluster and a TS7740 cluster with drives/library, connected over the WAN to an HA disaster recovery center with a TS7740 cluster and two TS7720 clusters]
* Six-way requires an RPQ
TS7700 Virtualization Engine Specifications
Specification                                  Model B10        Model B20        TS7740                     TS7720
Number of Virtual Devices                      64               128              256                        256
Usable Cache Capacity                          216 to 432GB     864GB to 1.7TB   Up to 28TB (enablement)    Up to 442TB
Compressed Cache Capacity (3:1)                648GB to 1.2TB   2.4TB to 5.2TB   3TB to 84TB (enablement)   Up to 1.3PB
ESCON Channels                                 2                8                NA                         NA
FICON Channels                                 2 - 4            4 - 8            2 or 4                     2 or 4
TS1140/1130/1120/3592 Tape Drive Attachment    4 - 12           4 - 12           4 - 16                     NA
3590 Tape Drive Attachment                     4 - 6            4 - 12           NA                         NA
Number of Virtual Volumes                      250,000          500,000          2,000,000                  2,000,000
Logical paths per FICON attachment             128              128              256                        256
Advanced Policy Management
■ Advanced function includes
– Tape Volume Cache Management¹
• Used to influence the retention of virtual volumes in cache
– Volume Pooling¹
• Used to group logical volumes on a set of physical volumes
– Selective Logical Volume Copy¹
• Used to create a duplicate stacked copy of a logical volume on a second cartridge
– Cross-site Replication
• Used in a Grid configuration to create a copy of a logical volume at different sites
– Logical Volume Sizes
• Used to select larger logical volume sizes (up to 6,000 MB)
– Secure Data Erase¹
• Limits how long expired data remains on a cartridge before being erased
– Copy Export¹
• Used to export logical volumes from a standalone or Grid TS7700
– Logical WORM
• Used to emulate TS1120/TS1130/TS1140 WORM support
¹ Not supported on TS7720 model
Supports Tier 3 Business Continuity
[Diagram: the primary volume (1) at the production site is duplex-copied (2) to the recovery site; the TCDB and TMC at both sites track volumes 1 and 2; replication is optional FICON, TCP/IP, or Global Mirror]
This diagram illustrates the logical process of primary and secondary virtual volume copy creation and does not include all steps in the process.
Supports Tier 5 Business Continuity
[Diagram: the primary volume (1) in the production site TS7700 cache is duplex-copied (2) to the recovery site TS7700 cache; the TCDB and TMC at both sites track volumes 1 and 2; replication is optional FICON, TCP/IP, or Global Mirror]
This diagram illustrates the logical process of primary and secondary virtual volume copy creation and does not include all steps in the process.
TS7700 Grid Solutions
■ Each TS7700 provides
– 256 virtual devices
– Four 4Gb FICON channels
– Up to 84TB of cache (3:1 C/R) – TS7740
– Up to 1.3PB of cache (3:1 C/R) – TS7720
■ 2,000,000 logical volumes
■ Interconnect is standard TCP/IP using dual 1Gb links
– Supporting thousands of miles of separation
■ Data written to one is transparently replicated to the other
– Policy control for the location of copies and how copies are made
■ A volume's data can be accessed through any TS7700
■ Can be configured for disaster recovery and/or higher availability environments
– One to six site configurations
WAN-interconnected TS7700s form a Grid configuration, optimized for recoverability and automatic failback
Grid HA/DR 2-Way Configuration
[Diagram: host mirroring alongside a TS7700 Grid – the primary volume (1) in the production site TS7700 cache is grid-copied (2) to the recovery site TS7700 cache; the TCDB and TMC at both sites track volumes 1 and 2; replication is optional FICON, TCP/IP, or Global Mirror]
Grid HA/DR Partitioned 4-Way Configuration
[Diagram: host mirroring alongside a TS7700 Grid – production site TS7700-1 and TS7700-2 replicate volumes (1–4) over the TCP/IP WAN to recovery site TS7700-3 and TS7700-4; the TCDB and TMC at both sites track the copies; host-level mirroring uses Global Mirror / XRC]
Suggested 4-Way HA/DR Hybrid Grid Configuration
• Three production TS7720 clusters, all feeding into a common TS7740
– Each TS7720 primarily replicates only to the common TS7740
– Provides 1.3PB (4PB @ 3:1 compression) of high-performance production cache when running in balanced mode
– Each TS7720 has full access to all data contained within the grid, providing a fully HA solution
– The installed TS7740 performance features can be minimal, since host connectivity would not be expected
• Production data migrates into the TS7740
– If a TS7720 reaches capacity, the oldest data that has already been replicated to the TS7740 is removed from the TS7720 cache
– Copy Export can be utilized at the TS7740 in order to keep a second copy of the migrated data
– The duration between Copy Exports can be longer, since the last N days of data have not yet been migrated
[Diagram: three disk-only production TS7720 clusters (production capacity 1.3PB, 4PB @ 3:1) connected over a LAN/WAN to a TS7740 cluster with back-end drives/library; Copy Export from the TS7740 is optional]
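The capacity-driven removal described above — evict the oldest data, but only if it already has a replica elsewhere in the grid — can be sketched as follows. The volume record fields and sizes are illustrative assumptions, not the TS7720's actual data structures.

```python
def free_cache(volumes, needed_gb):
    """Evict least-recently-accessed volumes that already have a replica
    elsewhere in the grid, until enough space is freed. Volumes without
    a confirmed replica are never candidates for removal."""
    freed, removed = 0.0, []
    # oldest (least recently accessed) first
    for vol in sorted(volumes, key=lambda v: v["last_access"]):
        if freed >= needed_gb:
            break
        if vol["replicated"]:
            removed.append(vol["volser"])
            freed += vol["size_gb"]
    return removed, freed

vols = [
    {"volser": "A0001", "size_gb": 4.0, "last_access": 10, "replicated": True},
    {"volser": "A0002", "size_gb": 4.0, "last_access": 5,  "replicated": False},
    {"volser": "A0003", "size_gb": 4.0, "last_access": 1,  "replicated": True},
]
print(free_cache(vols, 6.0))  # (['A0003', 'A0001'], 8.0)
```

Note how A0002, though older than A0001, is skipped: data that exists only in this cache stays until its TS7740 copy completes.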
HA/DR 4-Way Partitioned Configuration
• Two production clusters at metro distances
– Host workload balanced across both clusters
– Content written to a particular mounting cluster is replicated to only one remote cluster
– During an outage, all reads are local
– HA pair clusters can be of mixed types (hybrid)
• Two remote DR clusters at metro distances
– Each DR cluster replicates content from one of the two production clusters
• High availability at both the production and DR locations without four copies of the data
[Diagram: production host(s) attached to an HA pair of TS7700 clusters (production capacity 884TB), connected by a WAN to a second HA pair of TS7700 clusters]
HA/DR 4-Way Partitioned Configuration (continued)
• Two production clusters at local or metro distances
– Host workload balanced across both clusters
– Content written to a particular mounting cluster is replicated to only one remote cluster
• Two remote clusters at metro or DR distances
– Each remote cluster replicates content from one of the two production clusters
• High availability at both the production and secondary locations without four copies of the data
– Same capacity as two 2-way configurations, with the high availability of a 4-way configuration
– A truly highly available and disaster-recoverable business continuity configuration
[Diagram: production host(s) attached to an HA pair of TS7700 clusters, connected by a WAN to a second HA pair, together forming an HA/DR grid (2.65PB @ 3:1 compression); throughput of up to 672MB/s (1.8GB/s @ 2.66:1) and up to 718MB/s (1.9GB/s @ 2.66:1)]
Subsystem Performance
Performance Considerations
• Peak write performance is the data rate available for a finite period¹
– May be influenced by workload characteristics: block size, batch-window characteristics, processor and channel configuration
– May be influenced by customer data compressibility: the charts that follow reflect typical compression of 2.66:1; customer data may compress at different ratios
• Sustained write performance is the data rate that can be sustained indefinitely¹
– May be influenced by the same elements that impact peak write performance
• Peak read performance is the data rate available from the cache¹
– May be influenced by the same elements that impact peak write performance
• Grid performance includes replication effects¹
– May be influenced by the same elements that impact peak read/write performance
– May be impacted by synchronization mode (Deferred / Immediate Copy)
– May be influenced by network configuration / available bandwidth
¹Laboratory measurements, 800MB volumes, 32KB block size
The peak/sustained distinction applies to the TS7740 only; the TS7720 runs at its peak rate indefinitely
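The compressibility effect noted above follows from a simple first-order model: the back end moves the same number of compressed bytes regardless of content, so host-side throughput scales with the compression ratio. A tiny sketch of that conversion (the 2.66:1 figure comes from the slides; the function itself is illustrative, not a published formula):

```python
def host_rate(rate_at_1_to_1: float, compression_ratio: float) -> float:
    """First-order estimate of uncompressed host data rate: the back end
    moves the same compressed byte stream either way, so host throughput
    scales roughly linearly with data compressibility."""
    return rate_at_1_to_1 * compression_ratio

# A 718 MB/s rate measured with incompressible (1:1) data corresponds
# to roughly 1.9 GB/s of host data at the typical 2.66:1 ratio.
print(round(host_rate(718, 2.66)))  # 1910
```

This is why the charts on the following slides report "Host MB/s (uncompressed)" and show each workload at both 1:1 and 2.66:1.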
Standalone VTS/TS7700 Write Performance History
[Bar chart: sustained and peak write rates, host MB/s (uncompressed), 0–1200 scale, for the following configurations:]
- B10 VTS (4xFICON)
- B20 VTS (8xFICON) w/FPA
- TS7740 V06 3956 CC6/CX6 R1.3 (4 drawers, 4x2Gb, z900)
- TS7740 V06 3956 CC6/CX6 R1.5 (4 drawers, 4x2Gb, z900)
- TS7740 V06 3956 CC6/CX6 R1.5 (4 drawers, 4x2Gb, z990)
- TS7720 VEA 3956 CS7/XS7 R1.5 (7 drawers, 4x4Gb, z10)
- TS7740 V06 3956 CC7/CX7 R1.6 (4 drawers, 4x4Gb, z10)
- TS7720 VEA 3956 CS7/XS7 R1.6 (7 drawers, 4x4Gb, z10)
- TS7740 V06 3956 CC8/CX7 R1.7 (4 drawers, 4x4Gb, z10)
- TS7720 VEA 3956 CS8/XS7 R1.7 (7 drawers, 4x4Gb, z10)
- TS7720 VEA 3956 CS8/XS7 R1.7 (19 drawers, 4x4Gb, z10)
- TS7740 V07 3956 CC8/CX7 R2.0 (4 drawers, 4x4Gb, z10)
- TS7720 VEB 3956 CS8/XS7 R2.0 (7 drawers, 4x4Gb, z10)
- TS7720 VEB 3956 CS8/XS7 R2.0 (19 drawers, 4x4Gb, z10)
Standalone VTS/TS7700 Read Hit Performance History
[Bar chart: read-hit rates, host MB/s (uncompressed), 0–1200 scale, for the following configurations:]
- B10 VTS (4xFICON)
- B20 VTS (8xFICON) w/FPA
- TS7740 V06 3956 CC6/CX6 R1.3 (4 drawers, 4x2Gb, z900)
- TS7740 V06 3956 CC6/CX6 R1.5 (4 drawers, 4x2Gb, z900)
- TS7740 V06 3956 CC6/CX6 R1.5 (4 drawers, 4x2Gb, z990)
- TS7720 VEA 3956 CS7/XS7 R1.5 (7 drawers, 4x4Gb, z10)
- TS7740 V06 3956 CC7/CX7 R1.6 (4 drawers, 4x4Gb, z10)
- TS7720 VEA 3956 CS7/XS7 R1.6 (7 drawers, 4x4Gb, z10)
- TS7740 V06 3956 CC8/CX7 R1.7 (4 drawers, 4x4Gb, z10)
- TS7720 VEA 3956 CS8/XS7 R1.7 (7 drawers, 4x4Gb, z10)
- TS7720 VEA 3956 CS8/XS7 R1.7 (19 drawers, 4x4Gb, z10)
- TS7740 V07 3956 CC8/CX7 R2.0 (4 drawers, 4x4Gb, z10)
- TS7720 VEB 3956 CS8/XS7 R2.0 (7 drawers, 4x4Gb, z10)
- TS7720 VEB 3956 CS8/XS7 R2.0 (19 drawers, 4x4Gb, z10)
Standalone TS7720 R1.6 – R1.7 – R2.0 Performance Comparison
[Bar chart: host MB/s (uncompressed), 0–1200 scale; workloads: Write, Read, and Mixed, each at 1:1 and 2.66:1 compression; series: R1.6 VEA/CS7 7 drawers, R1.7 VEA/CS8 7 drawers, R1.7 VEA/CS8 19 drawers, R2.0 VEB/CS8 7 drawers, R2.0 VEB/CS8 19 drawers]
Standalone TS7740 R1.6 – R1.7 – R2.0 Performance Comparison
[Bar chart: host MB/s (uncompressed), 0–1200 scale; workloads: Peak Write, Sustained Write, Read, Peak Mixed, and Sustained Mixed, each at 1:1 and 2.66:1 compression; series: R1.6 V06/CC7 2 drawers, R1.6 V06/CC7 4 drawers, R1.7 V06/CC8 2 drawers, R1.7 V06/CC8 4 drawers, R2.0 V06/CC8 4 drawers, R2.0 V07/CC8 2 drawers, R2.0 V07/CC8 4 drawers]
Potential Business Benefits
Potential Customer Benefits
� Find your data faster and put it to work sooner
– Restore backups up to 30% faster with new POWER7 processors in the TS7700 virtual tape library
– Fast access to data on 'virtual volumes' in the disk buffer
– Continuous operation through automated failover capabilities
� Store more data in less space
– Store twice as many volumes on a single TS7700 virtual tape library and
reduce both hardware and floor space requirements
� Protect your investment
– Cost effective storage by optionally migrating ‘virtual volumes’ to physical tape
– Makes lower TCO possible for data protection and long term retention
– Protects customer investment by providing upgrade path for current
installations
� Secure your data
– Hardens business continuance environment
– Protects vital data assets
Why choose the IBM TS7700 Virtualization Engine?
• IBM provides technology that matters
– The 1st generation VTS completely changed tape processing
– The 2nd and 3rd generation VTSs increased performance and capacity
– The PtP VTS hardened business continuance and reduced recovery time
– Only IBM provides an entire solution spectrum, from disk-only virtual tape, to hybrid disk and tape, to mostly physical tape
• IBM continues to invest in tape technologies
– Acknowledged in the industry as a leader in tape drive technology
– Offers a full range of tape drives, libraries and virtual tape subsystems
– Offers the full complement of software and services to maximize your ROI
– Offers data protection via tape drive encryption support
• The TS7700 supports a further reduction in cost by
– Virtualizing more of your data at a lower cost on fewer resources
– Automating storage management through full DFSMS support
– Reducing configuration and environmental requirements
– Providing high performance and cache capacity
Grid Computing Supports Information Infrastructure
[Diagram: virtualization progression — virtualize like resources, virtualize unlike resources, virtualize the enterprise, virtualize outside the enterprise]
� Grid Computing is about virtualizing and sharing resources
� The TS7700 Virtualization Engine
– Supports infrastructure optimization
• Facilitates workload management and consolidation
• Reduces time to information
– Increases access to data and collaboration
• Facilitates access to information
• Supports global distribution
– Provides resilient highly available infrastructure
• Business continuity
• Recovery and failover
– Provides information security
• Supports tape encryption
• Supports WORM
Approaching 60 Years of Tape Innovation
In tape drive technology:
- 1952: IBM 726, 1st magnetic tape drive
- 1959: IBM 729, 1st read/write drive
- 1964: IBM 2104, 1st read/back drive
- 1984: IBM 3480, 1st cartridge drive
- 1995: IBM 3590
- 1999: IBM 3590E
- 2000: LTO Gen1
- 2002: LTO Gen2
- 2003: 3592 Gen1
- 2004: LTO Gen3
- 2005: TS1120 (3592 G2)
- 2007: LTO Gen4
- 2008: TS1130 (3592 G3)
- 2010: LTO Gen5
- 2011: TS1140 (3592 G4)
In tape automation and virtualization:
- 1962: IBM Tractor System
- 1974: 3850 MSS
- 1992: IBM 3495
- 1994: IBM 3494
- 1997: VTS G1
- 1999: VTS G2
- 2000: TS3500
- 2001: VTS G3
- 2005: TS7510 VTL; TS3200; TS3300
- 2006: TS7740 (VTS Gen 4)
- 2007: TS7520; TS3400; TS7530
- 2008: TS2900; TS3500 High Density; TS7720; TS7650G
- 2009: TS7650 Appliance
- 2010: TS7610; TS7680
- 2011: TS3500 Connector & Shuttle; TS7740; TS7720
Tape is an integral part of an efficient backup process and long term retention
� Tape provides another line of defense against data loss
� Saves money
– Price per TB is about 10% of Tier 2 disk¹
– Power consumption is about 1% compared to hard disk²
� Transportable
– Lightweight, compact, crash-proof
� Preserve data for up to 30 years on the same media
� Significantly reduced power, cooling, and space requirements
� Provides investment protection with scalability and media
compatibility
Over 80% consider tape an integral part of their backup process (Source: Enterprise Strategy Group Research Report, 2010 Data Protection Trends, April 2010)
Sources:
1. "Top 10 Strategies for Surviving Unconstrained Data Growth," Gartner Symposium Presentation, October 2010, slide 21
2. "In Search of the Long Term Archiving Solution – Tape Delivers Significant TCO Advantage over Disk," The Clipper Group, Inc., December 2010
Technology
Content
� The value of FICON Attachment
� The value of new Grid interconnection method
� Ownership and Ownership Takeover
� Copy Export
� Advanced Function
– Tape Volume Cache Management
– Logical Volume Pooling
� 3592 Cartridge Selection Considerations
� TS1140/TS1130/TS1120 Encryption Support
� Logical WORM Support
� FICON Channel failure recovery
� Extended Distance Support
The Value of FICON Attachment
� Reduces infrastructure requirements
– Fewer channels to manage
– Fewer director ports to purchase
– Fewer channel addresses used on the server
� Improves performance
– May reduce the batch window by providing higher aggregate bandwidth and full duplex operation
– May reduce individual job run times by providing higher single channel bandwidth
� Supports remote attachment to support business continuance
The Value of IBM FICON Attachment (continued)
[Diagram: an ESCON channel from a System z host is half duplex, carrying frames through an ESCON director to control units CU-A and CU-B one 'conversation' at a time; a FICON channel is full duplex, interleaving frames for multiple control units through a FICON/FC director — one versus eight concurrent 'conversations', and FICON Express supports up to 32]
The Value of the Grid Interconnection Method
• Move from FICON to IP-based interconnections
� Integrate peering functions into base TS7700 Virtualization Engine
� Reduces infrastructure requirements
– Eliminates Virtual Tape Controllers
– Eliminates channel extension hardware
� Simplifies management
– Fewer elements to manage (no VTCs, channel extension hardware)
– Single management interface for standalone and Grid configurations
Ownership and Ownership Takeover Mode
� Volume Ownership in a Grid configuration
– Each logical volume is owned by a TS7700 cluster
– All changes to a volume are controlled by the owner
– A TS7700 cluster must have ownership of a volume to execute a mount operation
– Volume ownership is dynamically transferred between TS7700 clusters in a Grid configuration as
part of mount processing
• Ownership Takeover Modes
– If the TS7700 cluster that owns a logical volume is unavailable, the Autonomic Ownership Takeover Manager allows an available TS7700 cluster to take ownership away from the unavailable cluster
– The Ownership Takeover Manager uses the TS3000 System Console (TSSC) on each TS7700 cluster to provide an independent method of checking a cluster's status
– If a failure has been confirmed, one of the ownership takeover modes is enabled automatically
• A default of Neither, Read, or Write ownership takeover is set by IBM Service
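The effect of the three takeover modes on mount requests can be illustrated with a small decision sketch. The function, its inputs, and the ordering of checks are simplified assumptions for illustration, not the TS7700's actual decision logic.

```python
from enum import Enum

class TakeoverMode(Enum):
    NEITHER = 0   # no takeover permitted
    READ = 1      # read-only access to volumes owned by the failed cluster
    WRITE = 2     # full read/write takeover

def can_mount(volume_owner, requesting_cluster, owner_alive, mode, write):
    """Decide whether a mount may proceed (simplified sketch)."""
    if volume_owner == requesting_cluster:
        return True        # requester already owns the volume
    if owner_alive:
        return True        # ownership transfers as part of mount processing
    # Owner is down: the enabled takeover mode governs access.
    if mode is TakeoverMode.WRITE:
        return True
    if mode is TakeoverMode.READ:
        return not write   # reads allowed, writes refused
    return False           # NEITHER: the mount fails

print(can_mount("c0", "c1", False, TakeoverMode.READ, write=True))   # False
print(can_mount("c0", "c1", False, TakeoverMode.READ, write=False))  # True
```

The key property the sketch captures: while the owner is reachable, ownership simply moves with the mount; the takeover modes only matter once the TSSC check has confirmed the owner is actually down (rather than merely unreachable over the grid links).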
Copy Export – TS7740
� Export function to support transfer of data for offsite disaster recovery
� Exports a copy of selected data, leaving the primary copy in the TS7700
• Copy-exported physical volumes continue to be managed by the source TS7700
– Off-site reclamation is used to free up previously exported tapes
• Customer-performed recovery process
– All copy-exported data from a source TS7700 is recovered on an empty TS7700
– Recovery options for a test versus an actual disaster recovery
• Operation in a Grid configuration
– Executed on a specific TS7700
– Logical volumes must have been copied to that TS7700 to be exported
– Recovery is to a standalone TS7700
Copy Export – TS7740
[Diagram: at the production site, the host writes to the source TS7700, which keeps the primary data (Pool 01) and a second copy plus the database (Pool 09) in its tape volume cache and library; Pool 09 volumes are copy exported to the recovery site]
- Production site: create a dual copy of the data; export the second copy together with the database
- Recovery site: load all exported volumes into the recovery TS7700's library; restore the database from the latest export volume
Tape Volume Cache Management – TS7740
• Designed to minimize job processing delays
– Maximize cache hits
• Two options: Preference Group 0 or 1
– Selectable by logical volume
– Controlled through the Storage Class construct
– Preference Group 0: prefer expedited removal from cache
– Preference Group 1: prefer retention in cache
• Defined by Storage Class
– Management Interface (web interface)
• Assignment options
– Dynamic assignment by SMS
– Fully backward compatible with the IART attribute (Initial Access Response Time, provided by the SMS Storage Class construct)
– Static assignment through the Management Interface (web interface)
– System default
Tape Volume Cache Management – TS7740 (continued)
[Diagram: with standard cache management, cache space for all volumes is managed by a single LRU algorithm and volumes are copied to tape in FIFO order; with advanced-function cache management, PG0 volumes are copied to tape by priority/size and their space is deleted first, leaving the bulk of the cache available for frequently accessed PG1 volumes, which are copied second in FIFO order and whose space is managed LRU]
• The illustration assumes that 80% of the volumes created can be identified to the ACS routines as infrequently accessed
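The PG0/PG1 removal preference can be sketched as a simple eviction ordering: every PG0 (expedited-removal) volume is a removal candidate before any PG1 volume, and within PG1 the least-recently-used volumes go first. The volume records below are illustrative assumptions, not the TS7740's internal structures.

```python
def eviction_order(volumes):
    """Order cache volumes for removal: Preference Group 0 first
    (prefer expedited removal), then Preference Group 1 in
    least-recently-used order (prefer retention)."""
    pg0 = [v for v in volumes if v["pg"] == 0]
    pg1 = sorted((v for v in volumes if v["pg"] == 1),
                 key=lambda v: v["last_access"])
    return [v["volser"] for v in pg0 + pg1]

vols = [
    {"volser": "HSM001", "pg": 1, "last_access": 50},  # frequently accessed
    {"volser": "DMP001", "pg": 0, "last_access": 99},  # e.g. a volume dump
    {"volser": "HSM002", "pg": 1, "last_access": 10},
]
print(eviction_order(vols))  # ['DMP001', 'HSM002', 'HSM001']
```

Even though DMP001 was accessed most recently, its PG0 assignment makes it the first candidate for removal — which is exactly why assigning infrequently accessed workloads to PG0 via the ACS routines raises the cache-hit rate for everything else.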
Volume Pooling – TS7740
• Groups logical volumes onto pools of 3592 tape cartridges
• Supports up to 32 physical pools
– Media type control / intermix
– Borrow / return from a common scratch pool
– Independent reclamation thresholds and policies
– Reclaim to the same or an alternate pool
• Pool management
– Move physical volumes from pool to pool
– Move logical volumes from pool to pool
• Supported on both standalone and Grid-attached TS7700s
• Defined by Storage Group in the Library Manager panels or the TS7700 GUI
3592 Cartridge Selection Considerations – TS7740
• Use 3592 JB cartridges
– 1.6TB capacity using TS1140 in native E07 mode
– 1TB capacity using TS1130 in native E06 mode
– Used in combination with the TS1140's E06 Copy Export format for TS1130-based recovery
• Use 3592 JC cartridges
– For very large amounts of data
– 4TB capacity using TS1140 in native E07 mode
– Used for Copy Export pools where the recovery machine also supports TS1140
• Use 3592 JK cartridges
– 500GB capacity using TS1140 in native E07 mode
• Use 3592 JJ cartridges (128GB capacity)
– Where retrieval response time for individual data sets is critical
  • Recall of migrated data sets
– When recovery time for multiple data sets is critical
  • Recovering 600GB of data from six 128GB cartridges simultaneously is faster than recovering it from two 640GB cartridges
– Only with TS1130 and TS1120 tape drives
– 3592 JJ is read-only on the TS1140 tape drive*
* Statements of IBM future plans and directions are provided for information purposes only. Plans and directions are subject to change without notice. Planned availability: PGA1 code, December 2011.
3592 Cartridge Selection Considerations – TS7740 (continued)
• Use 3592 JA cartridges (640GB capacity)
– Where retrieval response time is (usually) not critical
  • Full volume dumps
  • Tape transaction logs (MI/MO)
  • HSM ABARS
  • HSM Recycle target
– Only with TS1130 and TS1120 tape drives
– 3592 JA is read-only on the TS1140 tape drive*
• Intermix of JA and JB cartridges
– Can be assigned to a specific pool or added to the common scratch pool
– May be borrowed/returned
– Only with TS1130 and TS1120 tape drives
– 3592 JA is read-only on the TS1140 tape drive*
• Intermix of JB and JC cartridges
– Only when using TS1140 tape drives
– JB media is leveraged in Copy Export pools when the recovery machine uses TS1130 drives
• Use advanced policy management to create transparent backup copies
– Enables automatic recovery in the event of a media error
* Statements of IBM future plans and directions are provided for information purposes only. Plans and directions are subject to change without notice. Planned availability: PGA1 code, December 2011.
TS1140/TS1130/TS1120 Encryption Support – TS7740
• Requirements:
– All TS1140, TS1130 or TS1120 tape drives must be encryption capable
– TS1140, TS1130 or TS1120 tape drives must be enabled to run in native-format mode
– FC 9900 must be ordered
• One or more of the 32 physical pools can be enabled for encryption
• Out-of-band key management interface, accessed via the Ethernet interface that connects the TS7740 to the network
– Requests for encryption keys are directed to the IP address of the primary key manager; responses are passed through the TS7740 to the drive
– To read a logical volume from a physical volume in an encryption-enabled pool, the TS7740 uses the key management interface to decrypt the data
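The out-of-band flow above is essentially a pass-through with failover: the TS7740 forwards the drive's key request to the primary key manager and, if that manager is unreachable, tries an alternate. The sketch below illustrates only that control flow — the addresses, the `request_key` parameter, and the `stub` transport are all hypothetical, not a real key-manager API.

```python
def fetch_key(key_label, key_managers, request_key):
    """Forward a drive's key request to the first reachable key manager
    (primary first, then any secondaries). `request_key` stands in for
    whatever transport actually talks to the key manager."""
    for addr in key_managers:
        try:
            return request_key(addr, key_label)
        except ConnectionError:
            continue  # this manager unreachable; try the next one
    raise RuntimeError(f"no key manager reachable for {key_label}")

# Demo with a stub transport: the primary is down, the secondary answers.
def stub(addr, label):
    if addr == "10.0.0.1":
        raise ConnectionError
    return f"key-for-{label}@{addr}"

print(fetch_key("POOL07", ["10.0.0.1", "10.0.0.2"], stub))
# key-for-POOL07@10.0.0.2
```

The point of the out-of-band design is visible in the shape of the code: key material flows between key manager and drive with the TS7740 acting only as a conduit, so keys never ride on the host data path.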
Logical WORM Support
• Support for compliant logical WORM (LWORM) tape volumes through TS7700 software emulation
– The host views the TS7700 as a logical WORM library
– Minor host software changes exploit and assist in managing logical WORM volumes
• Previously written volumes cannot be upgraded to logical WORM volumes
– Their contents must be read and rewritten to a new logical volume that has been bound as a logical WORM volume
• All clusters in a Grid must be at R1.6 for LWORM support
FICON Channel Failure Recovery
• Historically, tape I/O was relative to the last completed command
– As a result, the host cannot simply "redrive" a failed I/O operation
– The tape position varies depending on the last successful I/O command
– The only way to recover was to append and perform a step or job restart
– This tape architecture applies to ALL tape products
• FICON-capable processors support the Tape Positional Retry architecture
– FICON is not required, but an IOS firmware update may be
– The IOS maintains the block ID of the last successful I/O command in the I/O chain
– If an I/O command failure occurs, the IOS:
  • Attempts to reset the failed path, or uses an alternate path
  • Clears any pending tape drive status
  • Repositions the tape to the prior known-good position
  • Restarts the command chain in which the failure occurred at the correct point
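The recovery sequence above can be sketched with a minimal stand-in tape object. Real IOS recovery operates on channel programs and device block IDs, not Python objects; this sketch only shows why remembering the last good block ID turns a job restart into a transparent retry.

```python
class Tape:
    """Minimal stand-in for a tape device (illustrative only)."""
    def __init__(self):
        self.position = 0
        self.written = []
    def clear_pending_status(self):
        pass  # real recovery clears pending drive status here
    def locate_block(self, block_id):
        self.position = block_id
    def write(self, block):
        self.written.append((self.position, block))
        self.position += 1

def redrive(tape, remaining_blocks, last_good_block):
    """After an I/O failure: clear device status, reposition to just past
    the last block known to have completed, and restart the command
    chain at the point of failure."""
    tape.clear_pending_status()
    tape.locate_block(last_good_block + 1)
    for blk in remaining_blocks:
        tape.write(blk)

t = Tape()
t.write("b0"); t.write("b1")          # blocks 0 and 1 complete
# block 2 fails mid-transfer; the IOS recorded last good block = 1
redrive(t, ["b2", "b3"], last_good_block=1)
print([pos for pos, _ in t.written])  # [0, 1, 2, 3]
```

Without the recorded block ID, the host would have no way to know where the media actually stands after a failure — which is exactly why the pre-FICON architecture forced an append and a step or job restart.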
VTS Migration
Supported Migrations
� B10 or B20 VTS/TS3500 tape library (3584) to TS7700/TS3500 tape library
� PtP VTS with each VTS attached to different TS3500 tape libraries to two Grid attached TS7700s attaching to the TS3500s
� Two standalone VTSs with each VTS attached to the same TS3500 tape library to a single TS7700 attached to the same TS3500
� VTS attached to a TS3500 tape library migrates to a TS7700 attached to a different TS3500 Tape Library
� Two standalone VTSs with each VTS attached to a different TS3500 tape library to a single TS7700 attached to a TS3500 tape library (the TS7700's TS3500 Tape Library may be the same as one of the VTS's TS3500 tape library, or it may be different)
Migration Considerations
• Every migration requires that the VTS use only 3592 J1A, TS1140, TS1130 or TS1120 back-end tape drives, and the VTS data must reside on 3592 media
• Although the VTS allows either 2-Gbps or 4-Gbps Fibre Channel switches to connect to the back-end tape drives, the TS7700 supports 4-Gbps and 8-Gbps switches
Thank You