Student Guide Book 1 of 2 for
Hitachi Data Systems Storage Foundations – Enterprise
(THI0517)
Course Version 1.0
Hitachi Data Systems is registered with the U.S. Patent and Trademark Office as a trademark and service mark of Hitachi, Ltd. The Hitachi Data Systems logotype is a trademark and service mark of Hitachi, Ltd.
The following terms are trademarks or service marks of Hitachi Data Systems Corporation in the United States and/or other countries: Application Optimized Storage, Extended Serial Adapter, ExSA, Graph-Track, HiCard, HiPass, Hi-PER Architecture, HiReturn, Hi-Track®, iLAB, Lightning 9900, Lightning 9980V, Lightning 9970V, Lightning 9960, Lightning 9910, NanoCopy, Resource Manager, ShadowImage, SplitSecond, TagmaStore, Thunder 9200, Thunder 9500, Thunder 9585V, Thunder 9580V, Thunder 9570V, Thunder 9530V, Thunder 9520V, TrueCopy, TrueNorth, Universal Star Network, Universal Storage Platform, Network Storage Controller.
All other trademarks, trade names, and service marks used herein are the rightful property of their respective owners.
Notice: This document is for informational purposes only, and does not set forth any warranty, express or implied, concerning any equipment or service offered or to be offered by Hitachi Data Systems. This document describes some capabilities that are conditioned on a maintenance contract with Hitachi Data Systems being in effect, and that may be configuration-dependent, and features that may not be currently available. Contact your local Hitachi Data Systems sales office for information on feature and product availability.
Hitachi Data Systems sells and licenses its products subject to certain terms and conditions, including limited warranties. To see a copy of these terms and conditions prior to purchase or license, please call your local sales representative to obtain a printed copy. If you purchase or license the product, you are deemed to have accepted these terms and conditions.
THE INFORMATION CONTAINED IN THIS MANUAL IS DISTRIBUTED ON AN "AS IS" BASIS WITHOUT WARRANTY OF ANY KIND, INCLUDING WITHOUT LIMITATION, ANY IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR NONINFRINGEMENT. IN NO EVENT WILL HDS BE LIABLE TO THE END USER OR ANY THIRD PARTY FOR ANY LOSS OR DAMAGE, DIRECT OR INDIRECT, FROM THE USE OF THIS MANUAL, INCLUDING, WITHOUT LIMITATION, LOST PROFITS, BUSINESS INTERRUPTION, GOODWILL OR LOST DATA, EVEN IF HDS IS EXPRESSLY ADVISED OF SUCH LOSS OR DAMAGE.
Notice: Notational conventions: 1 KB stands for 1,024 bytes, 1 MB for 1,024 kilobytes, 1 GB for 1,024 megabytes, and 1 TB for 1,024 gigabytes, consistent with IEC (International Electrotechnical Commission) standards for prefixes for binary and metric multiples.
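Because this binary convention differs from the decimal (metric) accounting used for the raw-capacity figures later in this guide (which are based on 1GB = 1,000,000,000 bytes), the difference is worth a quick illustration. The following is a minimal Python sketch, not part of the original notice:

```python
# Binary multiples (per the notice above) vs. decimal multiples
# (used for the raw-capacity figures later in this guide).

BINARY_GB = 1024 ** 3         # 1,073,741,824 bytes
DECIMAL_GB = 1_000_000_000    # 1,000,000,000 bytes

# A "300GB" drive quoted in decimal terms holds fewer binary gigabytes:
drive_bytes = 300 * DECIMAL_GB
print(drive_bytes / BINARY_GB)  # ~279.4
```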
©2006, Hitachi Data Systems Corporation. All Rights Reserved
Content Developed by HDS Academy
Contact Hitachi Data Systems at www.hds.com.
This manual may not be copied, transferred, reproduced, disclosed, or distributed, in whole or in part, without the prior written consent of HDS.
Contents Book 1

Course Description .......... ix
Target Audience .......... x
Course Objectives .......... xi
Course Objectives (continued) .......... xii
Course Content .......... xiii
1. APPLICATION OPTIMIZED STORAGE™ SOLUTIONS FROM HITACHI DATA SYSTEMS .......... 1-1
   Module Objectives .......... 1-2
   Business Challenges .......... 1-3
   The Customers Speak .......... 1-5
   Applications are the Link .......... 1-6
   Storage Management: Simplification, Optimization, and Automation .......... 1-7
   Data Management: Movement, Replication, Protection, and Recovery .......... 1-9
   Tiered Storage: A New Paradigm .......... 1-10
   A New Paradigm .......... 1-11
   Competitive Landscape .......... 1-12
   Application Optimized Storage Proof Points .......... 1-13
   Summary .......... 1-14
   Module Review .......... 1-15
2. PRODUCT OVERVIEW .......... 2-1
   Module Objectives .......... 2-2
   Functional Description .......... 2-3
   Hitachi TagmaStore Family Models .......... 2-5
   Network Storage Controller .......... 2-6
   Universal Storage Platform Models .......... 2-8
   USP100 .......... 2-9
   USP600 .......... 2-10
   USP1100 .......... 2-11
   Models - Overview .......... 2-12
   Key Features and Strengths .......... 2-13
   Module Review .......... 2-22
3. HARDWARE AND ARCHITECTURE .......... 3-1
   Module Objectives .......... 3-2
   Hardware Overview .......... 3-3
   Universal Storage Platform Overview .......... 3-4
   Disk Controller - DKC .......... 3-5
   Disk Unit - DKU .......... 3-6
   DKC Box .......... 3-7
   Universal Star Network Architecture .......... 3-8
   Architecture Advantages .......... 3-11
   Optional Hardware Features .......... 3-12
   Clusters .......... 3-13
   Architecture: Cache Memory .......... 3-15
   Standard Cache Access Model .......... 3-16
   High Performance Cache Access Model .......... 3-17
   Shared Memory .......... 3-18
   Architecture: Shared Memory Path .......... 3-20
   Directors .......... 3-21
   Front-end Director (FED) .......... 3-22
   Front-end Director (FED) Features .......... 3-23
   Front-end Director Features .......... 3-25
   CHA Options .......... 3-26
   CHA Options - Fibre Channel Ports .......... 3-27
   Back-end Director (BED) Feature .......... 3-28
   DKA Option .......... 3-29
   DKA Options - Enhanced Backend FC-AL .......... 3-30
   Optional Disk Features .......... 3-31
   Read Hit .......... 3-32
   Read Miss .......... 3-33
   Fast Write .......... 3-34
   Algorithms for Cache Control .......... 3-35
   Battery .......... 3-37
   Cache Destaging Process - Universal Storage Platform .......... 3-39
   Hi-Track Tool .......... 3-40
   Module Review .......... 3-42
4. DKU BACK-END ARCHITECTURE AND LOGICAL UNITS .......... 4-1
   Module Objectives .......... 4-2
   DKU Back-end Architecture and Logical Units Overview .......... 4-3
   HDU Box .......... 4-4
   Extending HDU Boxes .......... 4-5
   DKU Hardware and Interconnections .......... 4-6
   Creating a Logical Device .......... 4-7
   Virtualized Back-end .......... 4-8
   DKU - HDU Numbering .......... 4-9
   B4 Numbering in the Universal Storage Platform .......... 4-10
   Array Groups .......... 4-11
   RAID Protection .......... 4-12
   Parity Group Addressing .......... 4-13
   Emulations .......... 4-14
   Emulation Types .......... 4-15
   Control Unit .......... 4-16
   Volumes .......... 4-17
   VLL .......... 4-18
   VLL Functions .......... 4-19
   LUSE Overview .......... 4-20
   LUSE Specifications .......... 4-21
   LUN Manager Overview .......... 4-23
   LUN Manager Operations .......... 4-25
   Universal Storage Platform RAID Intermix .......... 4-26
   Module Review .......... 4-27
5. HITACHI RESOURCE MANAGER™ UTILITY PACKAGE .......... 5-1
   Module Objectives .......... 5-2
   Components .......... 5-3
   Hitachi Storage Navigator Overview .......... 5-4
   Connecting to the Storage Navigator .......... 5-8
   Storage Navigator Panels .......... 5-9
   LUN Manager Operation Example .......... 5-11
   Other Resource Manager Components .......... 5-13
   Module Review .......... 5-14
6. HITACHI NAS BLADE FOR TAGMASTORE™ UNIVERSAL STORAGE PLATFORM .......... 6-1
   Module Objectives .......... 6-2
   NAS Server Blade .......... 6-3
   Location of NAS Server Blade Feature .......... 6-4
   NAS Blade Software Components .......... 6-5
   NAS Management .......... 6-7
   NAS Blade File System .......... 6-8
   File System - HiXFS .......... 6-9
   High Availability Cluster Architecture .......... 6-10
   User Authentication and Name Resolution .......... 6-11
   NAS Optional Products (PP) .......... 6-12
   Backup and Restore .......... 6-13
   Module Review .......... 6-14
7. HITACHI DYNAMIC LINK MANAGER™ PATH MANAGER SOFTWARE .......... 7-1
   Module Objectives .......... 7-2
   Overview .......... 7-3
   Features .......... 7-6
   GUI Interface .......... 7-9
   Operations .......... 7-11
   Module Review .......... 7-12
8. HITACHI HICOMMAND® DEVICE MANAGER SOFTWARE .......... 8-1
   Module Objectives .......... 8-2
   HiCommand Suite 4.x Products .......... 8-3
   Device Manager Software Centrally Manages All Tiers of Hitachi Storage .......... 8-4
   Device Manager Software Centrally Manages All Tiers of Hitachi Storage (continued) .......... 8-5
   Overview .......... 8-6
   Customer View .......... 8-9
   Device Manager Architecture .......... 8-10
   Components .......... 8-11
   HiCommand Suite Common Component .......... 8-12
   HiRDB Embedded Edition .......... 8-13
   Device Manager Software Agent Support .......... 8-14
   Storage Management Concepts .......... 8-15
   Basic Operations - Best Practices .......... 8-17
   Basic Operations - Best Practices (continued) .......... 8-18
   Support .......... 8-23
   Report Operations .......... 8-24
   Report Operations (continued) .......... 8-25
   Command Line Interface .......... 8-26
   Module Review .......... 8-27
9. BUSINESS CONTINUITY .......... 9-1
   Module Objectives .......... 9-2
   A New Way of Looking at Business Continuity .......... 9-3
   Business Continuity Solutions .......... 9-4
   Hitachi ShadowImage™ In-System Replication Software .......... 9-6
   Hitachi Copy-on-Write Snapshot Software .......... 9-7
   Hitachi TrueCopy™ Remote Replication Software .......... 9-8
   Hitachi Universal Replicator Software .......... 9-9
   Hitachi Cross-System Copy Software .......... 9-10
   Module Review .......... 9-11
10. HITACHI SHADOWIMAGE™ IN-SYSTEM REPLICATION .......... 10-1
    Module Objectives .......... 10-2
    Overview .......... 10-3
    Commands .......... 10-12
    Tools .......... 10-31
    Module Review .......... 10-33
11. HITACHI COPY-ON-WRITE SNAPSHOT SOFTWARE .......... 11-1
    Module Objectives .......... 11-2
    Overview .......... 11-3
    Operation Scenarios .......... 11-7
    Copy-on-Write Snapshot Requirements .......... 11-10
    Pool Volume .......... 11-11
    P-VOL .......... 11-12
    Copy-on-Write Snapshot Workflow .......... 11-14
    Tools – Storage Navigator .......... 11-16
    Status Transitions .......... 11-17
    Tools – RAID Manager .......... 11-18
    RAID Manager Commands .......... 11-19
    Module Review .......... 11-20
Contents Book 2
12. HITACHI TRUECOPY™ REMOTE REPLICATION SOFTWARE................................... 12-1
13. RAID MANAGER/CCI ....................................................................................... 13-1
14. HITACHI UNIVERSAL REPLICATOR SOFTWARE .................................................... 14-1
15. HITACHI HICOMMAND® STORAGE SERVICES MANAGER SOFTWARE, POWERED BY APPIQ* ....................................................................................... 15-1
16. HITACHI VIRTUAL PARTITION MANAGER SOFTWARE .......... 16-1
17. HITACHI UNIVERSAL VOLUME MANAGER SOFTWARE........................................... 17-1
18. HITACHI CROSS-SYSTEM COPY SOFTWARE........................................................ 18-1
19. HITACHI HICOMMAND® TIERED STORAGE MANAGER SOFTWARE......................... 19-1
20. HITACHI DATA RETENTION UTILITY .......... 20-1
    Module Objectives .......... 20-2
    Overview .......... 20-3
    Data Retention Utility Access .......... 20-6
    Data Retention Panel .......... 20-7
    Expiration Lock .......... 20-9
    Term Setting .......... 20-10
    Module Review .......... 20-11
21. ENTERPRISE CONTENT ARCHIVAL...................................................................... 21-1
22. HITACHI HICOMMAND® TUNING MANAGER SOFTWARE .......... 22-1
    Module Review .......... 22-20
APPENDIX A .......... 1
    Elements Supported by Storage Services Manager Software .......... 2
APPENDIX B: GLOSSARY ............................................................................................ 1
Introduction
Course Description
This course provides an overview of Hitachi Data Systems storage products and technology for the enterprise storage network, including: Hitachi storage hardware and software for the enterprise, the HiCommand® Suite, virtualization, Application Optimized Storage™ solutions from Hitachi Data Systems, Hitachi storage area management software, performance, configuration, and Hitachi Business Continuity Manager software.
This course is part of the Hitachi Certified Storage Professional 2006 Program and supports the Hitachi Data Systems Storage Foundations – Enterprise exam (HH0-110).
Target Audience
Hitachi Data Systems employees, partners, and customers with a technical focus
Course Objectives
Upon completion of this course, the learner should be able to:
– Describe storage performance and data protection strategies for Hitachi storage
– Identify fundamental differences in Hitachi storage strategy compared with the competition (internal architecture, disk architecture, cache operations, RAID use, and emulation types)
– Describe the essential components within Hitachi storage for the enterprise, including models, numbering convention, capacities, types of devices, and physical-to-logical maps
– Describe the evolution of the enterprise product line from the Hitachi Lightning 9900™ V Series enterprise storage system to the Hitachi TagmaStore™ Universal Storage Platform and Hitachi TagmaStore™ Network Storage Controller (cite the differences between the products, features, functions, benefits, and any major technology differences)
– Describe the Hitachi Resource Manager™ Utility Package and what it is used for (Storage Navigator, volume management, and machine management)
– Describe the features, functions, and principles of Hitachi TrueCopy™ Remote Replication software (for example, synchronous vs. asynchronous replication)
– Describe the features, functions, and principles of Hitachi ShadowImage™ In-System Replication software
– Describe the features, functions, and principles of Hitachi Universal Replicator software (including how it functions on top of TrueCopy software and ShadowImage software)
Course Objectives (continued)
– Describe the benefits of Application Optimized Storage™
– Describe the product solutions within Application Optimized Storage and how they can solve business issues
– Given a scenario, describe how virtualization is enabled or used to solve business problems
– Describe how to manage externally attached storage using Hitachi Universal Volume Manager software
– Describe how Hitachi Virtual Partition Manager software enables the logical partitioning of the Universal Storage Platform and Network Storage Controller
– Describe how HiCommand® Device Manager functions
– Describe how HiCommand Storage Services Manager functions
– Describe how HiCommand Tuning Manager functions
– Describe the Hitachi Data Systems strategy for enterprise content archival
– Describe how the Hi-Track® "call home" and remote diagnostic tool functions
– Describe Hitachi NAS features and benefits to the customer
– Identify the product sets and management tools that constitute a SAN
Course Content
Introduction
Module 1: Application Optimized Storage
Module 2: Product Overview
Module 3: Universal Storage Platform and Network Storage Controller Hardware and Architecture
Module 4: DKU Architecture and Logical Units
Module 5: Resource Manager software
Module 6: NAS Blade
Module 7: Hitachi Dynamic Link Manager software
Module 8: Device Manager software
Module 9: Business Continuity
Module 10: ShadowImage software
Module 11: Copy-on-Write Snapshot software
Module 12: TrueCopy software
Module 13: RAID Manager/CCI
Module 14: Universal Replicator software
Module 15: Storage Services Manager software, powered by AppIQ*
Module 16: Hitachi Virtual Partition Manager
Module 17: Hitachi Universal Volume Manager software
Module 18: Hitachi Cross-System Copy Software
Module 19: Hitachi HiCommand® Tiered Storage Manager software
Module 20: Data Retention Utility
Module 21: Enterprise Content Archival
Module 22: Tuning Manager software
1. Application Optimized Storage™ Solutions from Hitachi Data Systems
Module Objectives
Upon completion of this module, the learner should be able to:
– Describe the purpose of Application Optimized Storage™ solutions from Hitachi Data Systems
– Identify the business issues that drive Application Optimized Storage solutions
– Understand how Hitachi Data Systems software and hardware products support Application Optimized Storage solutions
– List the building blocks of Application Optimized Storage solutions
Business Challenges
[Slide graphic: risk, compliance, and governance drive cost, efficiency, and value.]
There is a logical thread between top-line business issues and storage.
The following issues demonstrate the correlation between top-line business issues and storage:
Organizational requirement to address corporate governance:
For example, Sarbanes-Oxley requires timely and accurate financial reporting
More stringent oversight by Board of Directors
Corporate officers and auditors are held responsible
Organizational requirement to reduce operational risk:
Increasing threats of terrorism and wide area outages
Technology concentrates and magnifies exposures
Supply chain creates ripple effect
Organizational requirement to increase business efficiency:
Global competition increases need for efficiency
Mergers and acquisitions complicate consolidation
Vertical markets become more specialized
Organizational requirements to increase business value:
Cross-selling and up-selling to expand customer reach
Unlock value that is locked in business units
Agility in addressing new markets
The Customers Speak
End-users in a recent survey ranked their pain points in the following order, highest to lowest priority:
Cost – price and total cost of ownership (TCO)
The challenge of managing growth
The inability to manage storage assets and infrastructure
The lack of integrated and/or interoperable solutions
Increasing complexity of storage infrastructure
Applications are the Link
Applications are the critical driver of business process and decision making, impacting organizational growth, risk, and profitability
Applications have unique performance, access, protection, and retention requirements
[Slide graphic: example application classes – messaging, databases, imaging, content management, ERP, archiving, and backup/DR.]
AOS: a strategy to align business and IT by optimizing storage infrastructure and management with application requirements based upon price, performance, availability, and functionality
Applications are the link between business and IT. By focusing on applications and addressing their unique storage requirements, Hitachi Data Systems can help organizations address their key business challenges.

Application Optimized Storage (AOS) is a platform for aligning business and IT objectives.
Storage Management: Simplification, Optimization, and Automation
– Application-to-spindle view of infrastructure and storage
– Efficiently discover, allocate, provision, and utilize assets
– Application-centric quality of service, provisioning, capacity, performance, availability, and automation
– Management simplicity with common tools for all assets
[Slide graphic: the Storage Area Management suite]
– Storage Services Manager, Chargeback software, Global Reporter software, Path Provisioning software, QoS for Exchange software
– CA® Unicenter Integration Module
– Tiered Storage Manager software
– Backup Services Manager software
– Resource Manager software, Dynamic Link Manager software, Tuning Manager software
– Performance Maximizer package: Server Priority Manager software, Volume Migration software, Performance Monitor software
– Device Manager software
– Replication Monitor software
– Protection Manager software
Whether you are deploying a multi-tier infrastructure or a single array, the next critical piece of AOS is storage management. Because of the breadth of the Hitachi Data Systems storage area management suite, our customers can leverage a single set of tools for their entire storage infrastructure.
Hitachi Resource Manager™ utility package
Hitachi Dynamic Link Manager™ path manager software
Hitachi HiCommand® Tuning Manager software
Hitachi HiCommand® Device Manager software
Hitachi Performance Maximizer storage system optimization package
Hitachi Server Priority Manager software
Hitachi Volume Migration software
Hitachi Performance Monitor software
Hitachi HiCommand® Protection Manager software
Hitachi HiCommand® Replication Monitor software
Hitachi HiCommand® Storage Services Manager software
Hitachi HiCommand® Chargeback software, powered by AppIQ*
Hitachi HiCommand® Global Reporter software, powered by AppIQ*
Hitachi HiCommand® Path Provisioning software, powered by AppIQ*
Hitachi HiCommand® QoS for Microsoft Exchange software, powered by AppIQ*
Hitachi HiCommand® Backup Services Manager software, powered by APTARE®
Hitachi HiCommand® Tiered Storage Manager software
Data Management: Movement, Replication, Protection, and Recovery
– Universal data migration, replication, backup, and security for heterogeneous storage
– Local and remote replication capabilities
– Bi-directional data movement across all tiers of storage to support comprehensive data lifecycle management (DLM)
– Policy-based for automation
– Complete data integrity
In addition to storage management, Hitachi Data Systems has a powerful and unique data management suite that supports all tiers of storage.
Tiered Storage: A New Paradigm
The Analysts Agree:
Virtualized storage infrastructure to reduce complexity and speed deployment of new assets
Heterogeneous, multi-tier capacity to address all service level requirements
Secure partitioning to support discrete application quality-of-service and lifecycle requirements
Common platform for universal connectivity – SAN, NAS, FICON, ESCON
CIM-based management providing flexibility for heterogeneous infrastructure management
Tiered storage is the foundation for AOS, and the Hitachi Data Systems tiered storage strategy is unique in the market.
A New Paradigm
[Slide graphic: three approaches compared – Hitachi Application Optimized Storage (universal, multi-tier storage: Lightning and Thunder tiers, FC RAID-1 72GB/146GB and SATA 300GB drives serving Applications A, B, and C from one platform) versus the competition's "one size must fit all" and niche offerings.]
This slide illustrates how, with AOS, you can leverage a single platform with a common set of tools to address a customer's application storage requirements. Contrast that with competitive offerings, where you must either use one type of storage for all your application requirements or deploy multiple solutions, which introduces complexity and cost.
Competitive Landscape
Application Optimized Storage:
– Addresses immediate IT needs for simplified infrastructure and tiered storage
– Reduces complexity with universal storage and data management
– Addresses application requirements for QoS, including performance, availability, and security
– Established track record and install base

ILM:
– Can be complex, costly, and time consuming
– Does not address requirements for application QoS
– Is still evolving and not clearly understood by the marketplace
– Focuses outside the scope of storage/data management; does not address basic IT requirements

Application Optimized Storage delivers tangible business value today and provides the best platform for ILM as it matures.
Application Optimized Storage Proof Points
Services – a unique suite of consulting, design, and deployment services to help organizations optimize their storage infrastructure:

Risk Analysis – helps customers understand their risk profile and map those requirements to the appropriate storage infrastructure

Storage Economics – helps customers assess, analyze, design, and economically justify the most appropriate storage architecture for their organization

AOS Assessment and Planning – assesses current storage infrastructure and requirements, then assesses and deploys storage infrastructure and data lifecycle management solutions

Storage Consolidation Planning and Design Service – helps customers analyze their existing storage environment, application characteristics, and capacity requirements to determine the implementation plan
Summary
– The increasing focus on value, risk, governance, and efficiency is driving organizations to more closely align business and IT resources.
– Customers need storage solutions that help reduce total cost and complexity, manage data growth, and improve utilization of assets.
– Application Optimized Storage solutions help organizations large and small align applications with storage requirements based upon cost, performance, availability, and functionality.
– Application Optimized Storage is simple, cost-effective, easy to deploy, and delivers value today.
Module Review
2. Product Overview
Module Objectives
Upon completion of this module, the learner should be able to:
– Discuss the major benefits of the Hitachi TagmaStore™ Universal Storage Platform and Hitachi TagmaStore™ Network Storage Controller technology, and how this technology can meet your organization's needs
– Describe the Universal Storage Platform and Network Storage Controller models and family positioning
– State the features and strengths of the Universal Storage Platform
Functional Description
The Universal Storage Platform:
– Is a revolutionary platform for managing and organizing your data
– Is enabled by a scalable enterprise storage architecture
– Simplifies disaster recovery solutions
– Provides efficient and affordable storage consolidation:
  – Highly scalable, flexible, and high-performing
  – Lower environmental and management costs
  – Protects software and hardware investments
– Allows you to manage multiple and tiered heterogeneous storage systems
– Provides comprehensive storage management software
The Solution
The Universal Storage Platform is a revolutionary platform that helps you manage and organize all your disparate storage systems to work more efficiently for your business.
The Universal Storage Platform represents a new computing revolution that delivers efficient and flexible IT infrastructure. It enables you to extend the life of current storage investments and take advantage of new functionality on yesterday’s storage products. Multiple and tiered heterogeneous storage systems can be connected to and managed through the Universal Storage Platform.
The Universal Storage Platform provides exceptionally powerful capabilities for data storage and management.
Using the Universal Storage Platform, you can:
– Connect all of your storage systems into a common storage platform
– Share that platform across virtual storage machines
– Reduce your data management cost by using a single replication engine
– Enhance server, application, and network performance
– Manage all of this from a single pane of glass
– Address application-specific performance, availability, cost, and protection requirements
– Support all of your data protection and lifecycle management needs
– Offload replication, migration, and data protection tasks
– Reduce complexity and extend the useful life of storage you already own
Hitachi TagmaStore Family Models
The NSC55 and the USP1100 are resold by Sun and sold as OEM products by HP. An OEM repackages the entire product to make it look like its own product; a reseller simply resells the product and leverages the support organization (to a greater or lesser extent) of the manufacturer.
Universal Storage Platform and Network Storage Controller model names across vendors:

NSC55 – Sun StorEdge 9985, HP StorageWorks XP10000
USP100, USP600, USP1100 – Sun StorEdge 9990, HP StorageWorks XP12000
Hitachi Thunder 9585V™ ultra high-end modular storage
Hitachi Lightning 9970V™ single-cabinet enterprise storage system
Hitachi Lightning 9980V™ multi-cabinet enterprise storage system
Network Storage Controller
– Up to 240 disks
– Front-end Director (FED)
  – Standard – 16 Fibre Channel ports (FCP)
  – Option – eight FICON or 16 ESCON ports
  – Option – an additional 32 FCP
  – Option – eight NAS ports
– Back-end Director (BED)
  – Eight back-end paths
– The standard FED feature and BED feature are on the same PCB
– Two Cache Memory/Cache Switch/Control Memory cards
  – Maximum 64GB cache, 8.5GB/s data bandwidth
  – Cache, Control Memory, and Switch are on the same PCB
– Not upgradeable to the Universal Storage Platform
– Microcode is common between the Universal Storage Platform and the Network Storage Controller
– Market – high-end modular to low-end enterprise
The Network Storage Controller occupies a unique position in the TagmaStore family. It is positioned between the high end of our modular storage family, the Thunder 9585V system, and the low end of our enterprise storage family, the Universal Storage Platform USP100. This versatility allows our modular storage customers to exploit the performance, availability, and software characteristics of the Universal Storage Platform family at a new, lower price point and in a smaller, modular form factor than previously available.
The Network Storage Controller and the existing Universal Storage Platform systems use exactly the same microcode.

All host operating systems and external storage documented in the ECN (as supported by the Universal Storage Platform) are also supported by the Network Storage Controller.

The Network Storage Controller supports RAID1 (2D+2D), RAID5 (3D+1P), and RAID6 (6D+2P). There is no support for RAID1 (4D+4D) or RAID5 (7D+1P).

The Network Storage Controller supports mainframes and open systems, so connectivity is limited to a maximum of 48 FC connections, or 16 FC and 8 NAS connections. The minimum number of disks is five (one RAID group plus one spare).
There is support for:
– 48 FC connections
– Up to 8 FICON and 16 ESCON host connections
– A capacity of 240 HDDs
– 8 NAS ports
– Cache memory of 64GB maximum

PCB – printed circuit board
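The RAID layouts listed above determine how much of a parity group's raw capacity is usable. As a rough illustration only (this is not an HDS sizing tool, and the 300GB drive size is just an example figure taken from elsewhere in this guide), here is a short Python sketch:

```python
# Usable capacity of one parity group for the RAID layouts the
# Network Storage Controller supports, using the "xD+yP" notation above.

LAYOUTS = {
    "RAID1 (2D+2D)": (2, 2),  # two data disks mirrored by two more
    "RAID5 (3D+1P)": (3, 1),  # one disk's worth of distributed parity
    "RAID6 (6D+2P)": (6, 2),  # two disks' worth of distributed parity
}

DISK_GB = 300  # example drive size; any per-disk capacity works

for name, (data, redundancy) in LAYOUTS.items():
    raw = (data + redundancy) * DISK_GB
    usable = data * DISK_GB  # only the data disks contribute usable space
    print(f"{name}: {usable}GB usable of {raw}GB raw")
```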
Universal Storage Platform Models
Three models:
– USP100 – entry level
– USP600 – enhanced
– USP1100 – high performance

The configuration can be matched to needs and budgets while protecting the investment:
– You can grow from the smaller to the larger models, but there are steps to go through
There are three Universal Storage Platform models, ranging from an entry-level to a high-performance configuration. You may non-disruptively upgrade from the smallest model to the maximum configuration.
USP100
– Up to 256 disks (up to 74TB)
– A single Back-end Director (BED) feature
– Two Cache Memory cards, Switches, and Control Memory
  – Maximum 64GB cache, 17GB/s data bandwidth
– Up to 64 ESCON/32 FICON channels or 128 FC
  – Front-end Directors are the only optional features
– Upgradeable to the USP600
  – Add a second BED, a Cache Switch pair, and an Array Frame
– Equivalent in performance to a small Lightning 9980V system
The USP100 is suitable for small to medium enterprises and/or specific applications.
USP600
– Up to 512 disks (96 + one minimum), ~148TB raw capacity
  – Dual frame standard, specific cabling
– Dual Back-end Directors (BED)
– Four switches, two Cache Memory cards, and Control Memory
  – Additional Cache Memory and Control Memory are available as an optional upgrade
  – Maximum 64GB cache, 128GB after the optional upgrade
  – 34GB/s data bandwidth
– Up to 48 FICON channels (96 in a later release), 96 ESCON channels, or 192 FC
– Upgradeable to the USP1100
– Twice the throughput of the Lightning 9980V system
  – Faster performance with fewer components
  – Higher efficiency and reliability
The USP600's performance capabilities are designed to match most of the requirements of Fortune 500 customers. It offers full upgradeability to the USP1100 model and excellent scalability in terms of features and functions.
USP1100
– Up to 1152 disks (128 + one minimum)
  – An aggregate of up to 332TB of capacity
– Two or four Back-end Directors
  – A maximum of 64 back-end paths
– Four switches, Cache Memory cards, and Control Memory
  – Maximum 128GB cache, 68GB/s data bandwidth
– Up to 48 FICON channels (96 in a later release), 96 ESCON channels, or 192 FC
– Up to 4 DKUs (R1, R2, L1, L2)
– About four times the throughput of the Lightning 9980V system
– The ultimate consolidation and transaction machine
The USP1100 is the ultimate configuration. It comes with everything standard, and the only choices are the number of back-end and front-end directors installed. It is the highest-performing configuration and the top model of the range.

A standard full configuration comprises 128 FC ports (eight FED features) and 64 DKA FC-AL ports (four BED features).

Configuring 192 FC ports entails removing two BED features (DKA options). The Universal Storage Platform configuration would then comprise six FED features and two BED features.
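To make the port arithmetic above concrete, here is a small sketch. The ports-per-feature counts (a 16-port standard option and a 32-port option) are inferred from the figures in this section rather than taken from a specification, so treat them as assumptions:

```python
# Front-end FC port arithmetic for the USP1100, inferred from the text above:
#   8 FED features x 16 ports = 128 FC ports (standard full configuration)
#   6 FED features x 32 ports = 192 FC ports (after removing two BED features)

def total_fc_ports(fed_features: int, ports_per_feature: int) -> int:
    return fed_features * ports_per_feature

print(total_fc_ports(8, 16))  # 128 - standard full configuration
print(total_fc_ports(6, 32))  # 192 - maximum FC configuration
```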
Models - Overview
11
[Table: feature comparison of the three models; the table contents did not survive extraction.]
*Each NAS Blade consists of dual NAS servers
Note: All capacities are based on 1GB = 1,000,000,000 bytes (1TB = 1000GB)
Key Features and Strengths
Designed to meet your needs for data lifecycle management
Enables continuous data availability
Supports concurrent attachment to heterogeneous systems
Supports mainframe and open-systems compatibility
Incorporates a NAS environment
Accommodates scalability
The Universal Storage Platform is designed to meet your evolving and increasing needs for data lifecycle management in the 21st century:
Instant access to data around the clock:
– 100-percent data availability guarantee with no single point of failure
– Highly resilient multi-path fibre architecture
– Fully redundant, hot-swappable components and non-disruptive microcode updates
– Global dynamic hot sparing
– Duplexed write cache with battery backup
– Hi-Track® “call-home” service/remote maintenance tool
– RAID-1 and/or RAID-5 array groups within the same subsystem
Unmatched performance and capacity:
– Multiple point-to-point data and control paths
– Up to 68GB/sec internal subsystem (data) bandwidth
– Fully addressable 128GB data cache; separate 6GB control memory
– Extremely fast and intelligent cache algorithms
– Non-disruptive expansion to over 330TB raw capacity
– Simultaneous transfers from up to 64 separate hosts
– Up to 1152 high-throughput (10K or 15K rpm) fibre-channel, dual-active disk drives
Extensive connectivity and resource sharing:
– Concurrent operations of UNIX®, Windows®, Linux®, and mainframe (z/OS®, S/390®)
– Fibre-channel, FICON™, and Extended Serial Adapter™ (ESCON®) server connections
– Fibre-channel switched, arbitrated loop, and point-to-point configurations
The Universal Storage Platform is designed for nonstop operation and continuous access to all user data. To achieve nonstop customer operation, the Universal Storage Platform accommodates online feature upgrades and online software and hardware maintenance. Main components are implemented with a duplexed or redundant configuration. The Universal Storage Platform has no active single point of component failure.
The Universal Storage Platform RAID subsystem supports concurrent attachment to UNIX® servers, PC servers, and mainframe servers. The Universal Storage Platform provides heterogeneous connectivity to support all-open, all-mainframe, and multiplatform configurations.
Fibre-channel: When fibre-channel interfaces are used, the Universal Storage Platform can provide up to 192 ports for attachment to UNIX-based and/or PC-server platforms. The type of host platform determines the number of logical units (LUs) that may be connected to each port (maximum 1024 per port). Fibre-channel connection provides data transfer rates of up to 200 MB/sec (2 Gbps). The Universal Storage Platform supports fibre channel arbitrated loop (FC-AL) and fabric fibre-channel topologies as well as high availability (HA) fibre-channel configurations using hubs and switches.
FICON™: When FICON channel interfaces are used, the Universal Storage Platform can provide up to 64 control unit (CU) images and 16,384 logical devices (LDEVs). Each physical FICON channel interface (port) supports up to 65,536 logical paths (1024 host paths × 64 CUs) for a maximum of 131,072 logical paths per Universal Storage Platform subsystem. FICON connection provides transfer rates of up to 200MB/sec (2 Gbps).
Hitachi Extended Serial Adapter™ (ExSA™) ESCON-compatible channel adapter (compatible with ESCON® protocol): When ExSA channel adapter interfaces are used, the Universal Storage Platform can provide up to 64 control unit (CU) images and 16,384 logical devices (LDEVs). Each physical ExSA channel interface (port) supports up to 128 logical paths (32 host paths × 4 CUs) for a maximum of 8192 logical paths per Universal Storage Platform. ExSA channel adapter connection provides transfer rates of up to 17MB/sec.
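The per-port logical path counts above are simple products; the following Python snippet (illustrative only, the helper name is ours) checks the FICON and ExSA arithmetic quoted in the text.

```python
# Worked check of the logical-path arithmetic quoted above for the
# FICON and ExSA (ESCON-compatible) channel interfaces.

def logical_paths_per_port(host_paths: int, cu_images: int) -> int:
    """Logical paths supported by one physical channel port."""
    return host_paths * cu_images

# FICON: 1024 host paths x 64 CU images = 65,536 logical paths per port
assert logical_paths_per_port(1024, 64) == 65_536

# ExSA/ESCON: 32 host paths x 4 CU images = 128 logical paths per port,
# and 64 ExSA ports x 128 = 8,192 logical paths per subsystem
assert logical_paths_per_port(32, 4) == 128
assert 64 * logical_paths_per_port(32, 4) == 8_192
```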
Mainframe compatibility
The Universal Storage Platform supports 3990-6, 3990-6E, and 2105 controller emulations and can be configured with multiple concurrent logical volume image (LVI) formats, including 3390-3, 3390-3R, 3390-9, and larger.
Open-systems compatibility
The Universal Storage Platform supports multiple concurrent attachments to a variety of host operating systems (OS) and is compatible with most fibre channel host bus adapters (HBAs). The number of logical units (LUs) that may be connected to each port is determined by the type of host platform being attached. The Universal Storage Platform supports the following platforms at this time:
IBM® AIX®
Sun™ Solaris™
HP-UX®
Microsoft® Windows NT®
Microsoft® Windows® 2000
Microsoft® Windows® 2003
Novell® NetWare®
HP Tru64 UNIX®
HP OpenVMS™
SGI™ IRIX®
Red Hat® Linux®
The Hitachi NAS Blade system provides a NAS environment based on NAS packages incorporated in the Universal Storage Platform. Clients (such as end users, application servers, and database servers) can access file systems on the disks via the NAS Blade packages installed on the Universal Storage Platform disk subsystem. The main features of the NAS Blade system are:
– Open data-sharing environment that utilizes legacy systems
– High-performance NAS environment
– High availability
– Scalability
– Safety assuredness (optional functionality)
– High reliability (optional functionality)
The architecture of the Universal Storage Platform accommodates scalability to meet a wide range of capacity and performance requirements. The Universal Storage Platform storage capacity can be increased from a minimum of 288GB raw (one RAID5 (3D+1P) parity group, 72-GB HDDs) to a maximum of 332TB raw (287 RAID-5 (7D+1P) parity groups of 300-GB HDDs). The Universal Storage Platform nonvolatile cache can be configured from 8GB to 128GB in increments of 4GB. All disk drive and cache upgrades can be performed without interrupting user access to data.
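The minimum-capacity figure can be checked with a couple of lines of arithmetic; this Python sketch (names are illustrative) verifies the 288GB minimum configuration and enumerates the 4GB cache increments quoted above.

```python
# Illustrative arithmetic for the scalability figures quoted above.

def raw_capacity_gb(groups: int, disks_per_group: int, drive_gb: int) -> int:
    """Raw capacity of a set of parity groups (drives x drive size)."""
    return groups * disks_per_group * drive_gb

# Minimum: one RAID-5 (3D+1P) parity group of 72GB HDDs
print(raw_capacity_gb(1, 4, 72))          # 288 GB raw

# Cache: configurable from 8GB to 128GB in 4GB increments
print(len(range(8, 128 + 1, 4)))          # 31 valid cache sizes
```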
Front-end directors. The Universal Storage Platform can be configured with the desired number and type(s) of channel adapters (CHAs), installed in pairs. The Universal Storage Platform can be configured with one to six CHA pairs to provide up to 192 paths (16 ports × 12 CHAs) to attached host processors.
Back-end directors. The Universal Storage Platform can be configured with the desired number of disk adapters (DKAs), installed in pairs. The DKAs transfer data between the disk drives and cache. Each DKA pair is equipped with 16 device paths. The Universal Storage Platform can be configured with up to four DKA pairs, providing up to 64 concurrent data transfers to and from the disk drives.
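Both the 192-path front-end maximum and the 64-path back-end maximum fall out of the board counts; a minimal Python check (constant names are ours):

```python
# Front-end: 16 ports per CHA board, up to six CHA pairs (12 boards)
PORTS_PER_CHA = 16
MAX_CHA_BOARDS = 12
print(PORTS_PER_CHA * MAX_CHA_BOARDS)        # 192 host paths

# Back-end: 16 device paths per DKA pair, up to four pairs
PATHS_PER_DKA_PAIR = 16
MAX_DKA_PAIRS = 4
print(PATHS_PER_DKA_PAIR * MAX_DKA_PAIRS)    # 64 concurrent transfers
```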
Reliability, Availability, Serviceability
– Full fault-tolerance
– Separate power supply systems
– Battery backup and destage option for HDDs
– Dynamic scrubbing and sparing for disk drives
– Dynamic duplex cache
– Remote copy features
– Hi-Track tool
– Nondisruptive service and upgrades
– Error Reporting
The reliability, availability, and serviceability strengths of the Universal Storage Platform include:
Full fault-tolerance: The Universal Storage Platform provides full fault-tolerance capability for all critical components. The subsystem is protected against disk drive error and failure by enhanced RAID technologies and dynamic scrubbing and sparing. The Universal Storage Platform uses component and function redundancy to provide full fault-tolerance for all other subsystem components (microprocessors, control storage, power supplies, etc.). The Universal Storage Platform has no active single point of component failure and is designed to provide continuous access to all user data.
Separate power supply systems: Each storage cluster is powered by a separate set of power supplies. Each set can provide power for the entire subsystem in the unlikely event of power supply failure. The power supplies of each set can be connected across power boundaries, so that each set can continue to provide power if a power outage occurs. The Universal Storage Platform can sustain the loss of multiple power supplies and still continue operation.
Battery backup and destage option for HDDs: A new feature on the Universal Storage Platform provides separate battery backup for the hard disk drives (HDDs) with an optional setting to destage data from cache to the HDDs during a power outage.
Dynamic scrubbing and sparing for disk drives: The Universal Storage Platform uses special diagnostic techniques and dynamic scrubbing to detect and correct disk errors. Dynamic sparing is invoked automatically if needed. The Universal Storage Platform can be configured with up to 40 spare disk drives (4 + 36 optional), and any spare disk can back up any other disk of the same speed (RPMs) and the same or less capacity, even if the failed disk and spare disk are in different array domains (attached to different backend directors).
Dynamic duplex cache: All cache memory in the Universal Storage Platform is nonvolatile and is protected by 48-hour battery backup (without destage option). The cache in the Universal Storage Platform is divided into two equal areas (called cache A and cache B) on separate cards. Cache A is in cluster 1, and cache B is in cluster 2. The Universal Storage Platform places all read and write data in cache. Write data is normally written to both cache A and B with one front-end director (CHA) write operation, so that the data is always duplicated (duplexed) across logic and power boundaries. If one copy of write data is defective or lost, the other copy is immediately destaged to disk. This “duplex cache” design ensures full data integrity in the unlikely event of a cache memory or power-related failure.
Remote copy features: The Hitachi TrueCopy and XRC Replication data movement features enable the user to set up and maintain duplicate copies of mainframe and open-system data over extended distances. In the event of a system failure or site disaster, the secondary copy of data can be invoked rapidly, allowing applications to be recovered with guaranteed data integrity.
Hi-Track tool: The Hi-Track tool monitors the operation of the Universal Storage Platform at all times, collects hardware status and error data, and transmits this data via modem to the Hitachi Data Systems Support Center. The Support Center analyzes the data and implements corrective action as needed. In the unlikely event of a component failure, Hi-Track tool calls the Hitachi Data Systems Support Center immediately to report the failure without requiring any action on the part of the user. Hi-Track tool enables most problems to be identified and fixed prior to actual failure, and the advanced redundancy features enable the subsystem to remain operational even if one or more components fail. Note: Hi-Track tool does not have access to any user data stored on the Universal Storage Platform. The Hi-Track tool requires a dedicated RJ-11 analog phone line.
Nondisruptive service and upgrades: All hardware upgrades can be performed nondisruptively during normal subsystem operation. All hardware subassemblies can be removed, serviced, repaired, and/or replaced nondisruptively during normal subsystem operation. Shared memory for the Universal Storage Platform is installed on separate PCBs, and the fibre-channel PCBs for the Universal Storage Platform are equipped with hot-swappable fibre SFP transceivers (GBICs). All microcode upgrades can be performed during normal operations using the service processor (SVP) and the alternate path facilities of the host. Online microcode upgrades can be performed without interrupting open-system host operations.
Error Reporting: The Universal Storage Platform reports service information messages (SIMs) to notify users of errors and service requirements. SIMs can also report normal operational changes, such as remote copy pair status change. The SIMs are logged on the Universal Storage Platform SVP, reported directly to the mainframe and open-system hosts, and reported to Hitachi Data Systems via Hi-Track tool.
Module Review
1. On the Hitachi TagmaStore™ Universal Storage Platform and Hitachi TagmaStore™ Network Storage Controller systems, are the Shared Memory and Cache on the same PCB?
2. What is the target market for the model NSC55?
3. Can the model NSC55 be upgraded to a model USP100? Can the model USP600 be upgraded to the model USP1100?
4. Does the model NSC55 use the same microcode as the Universal Storage Platform?
5. How do the FED and BED Features differ on the Universal Storage Platform and the model NSC55?
6. Which has the most cache, the model NSC55 or the model USP100?
7. What is the maximum number of cache switches and internal bandwidth on the Universal Storage Platform?
3. Hardware and Architecture
Module Objectives
Upon completion of this module, the learner should be able to:
• Describe the basic Hitachi TagmaStore™ Universal Storage Platform and Hitachi TagmaStore™ Network Storage Controller hardware architectural concepts and fundamentals
• State hardware features and architectural advantages
Hardware Overview
DKC/DKU Frames
Universal Star Network Architecture
CHA, DKA, and Clusters
Disks, Batteries, and Cache Destaging
Hi-Track
Universal Storage Platform Overview
[Diagram: frame layout – Disk Controller Frame (DKC) in the center, with Array Frames DKU-L2, DKU-L1, DKU-R1, and DKU-R2 on either side.]
The Universal Storage Platform:
– Is a revolutionary platform for managing and organizing your data
– Is enabled by the world’s most scalable enterprise storage architecture
– Provides efficient and affordable storage consolidation: highly scalable, flexible, and performing; lower environmental and management costs; protects software and hardware investments
– Allows you to manage multiple and tiered heterogeneous storage systems
– Provides comprehensive storage management software
Disk Controller - DKC
[Photos: DKC front view and DKC rear view]
The Universal Storage Platform consists of a Disk Controller (DKC) in which 128 disk drives can be installed and disk array frames (DKUs) in each of which 256 disk drives can be installed. The DKC is capable of controlling up to 1,152 HDDs when it is connected with four DKUs.
Disk Controller (DKC)
– The box frame contains channel adapters, disk adapters, cache memories, shared memories, CSWs, the HDU box containing disk drives, power supplies, and battery boxes.
– Most components have a redundant configuration, which achieves nonstop operation despite single-point failures.
– Components can be replaced and added, and the microcode can be upgraded, while the subsystem is in operation.
– The control unit is equipped with a service processor (SVP), which is used to service the subsystem, monitor its running condition, and analyze faults. Connecting the SVP to a service center enables remote maintenance of the subsystem.
Disk Unit - DKU
[Photos: DKU front view and DKU rear view]
Disk Unit (DKU)
– A DKU consists of four hard drive unit (HDU) boxes each with 64 disk drives, cooling fans, power supplies, and battery boxes
– It has a redundant power supply system and cooling fans
– The disk drives achieve nonstop operation against failure by employing RAID1+0 or RAID5
– Components can be replaced and added while the subsystem is in operation
DKC Box
The DKC box contains:
– Disk adapters (DKA): BEDs – Back-end Directors
– Channel adapters (CHA): FEDs – Front-end Directors
– Cache memory
– Shared memory
– Cache switch (CSW)
– HDU Box
The DKC section consists of CHAs, DKAs, caches, shared memories, and cache switches (CSWs). Each component is connected with the cache paths and/or SM paths.
The DKA is a component of DKC that controls data transfer between the hard disk and the cache memory. In the current version of the Universal Storage Platform, the number of ports per DKA has increased to eight and every port is controlled by a microprocessor. The transfer rate per port has increased to 2Gb/s.
The CHA is a component of DKC that processes the channel commands from the host(s) and manages the host access to cache.
The Universal Storage Platform provides various CHA options. These options support connectivity to mainframe, Storage Area Network, and Network Attached Storage.
Universal Star Network Architecture
Comprised of:
– Cache
– Shared Memory
– Non-blocking Cache Switches
[Diagram: Universal Star Network within the DKC – CHAs and DKAs connect through four cache switches (CSWs) to four Cache boards over the cache paths (68GB/s), and directly to the four Shared Memory (SM) boards over the SM paths (13GB/s); channel interfaces attach to the CHAs.]
The Universal Star Network™ Architecture is a network architecture that improves the performance of internal data transfer by using high-speed non-blocking crossbar switches.
The Universal Storage Platform introduces the third generation of the revolutionary Hierarchical Star (HiStar) Network (HSN) architecture, which utilizes multiple point-to-point data and command paths to provide redundancy and improve performance. Each data and command path is independent. The individual paths between the channel or disk adapters and cache are steered by high-speed cache switch cards (CSWs). The Universal Storage Platform does not have any common buses, thus eliminating the performance degradation and contention that can occur in a bus architecture. All data stored on the Universal Storage Platform is moved into and out of cache via the redundant high-speed paths.
The performance of cache memory access varies depending on the configuration of the Cache and the CSWs. The High Performance Cache Access model (the maximum configuration, in which the optional Cache and CSW boards are installed) increases the cache bandwidth to up to 68GB/s and the total bandwidth to up to 81GB/s.
The reliability of data stored in the cache memory is enhanced by duplicating the data across clusters 1 and 2.
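The total-bandwidth figure is simply the sum of the cache-path and SM-path bandwidths; a one-line Python check of that arithmetic (illustrative only):

```python
# Total internal bandwidth = cache paths + shared memory paths
cache_bw_gbs = 68   # High Performance Cache Access model, GB/s
sm_bw_gbs = 13      # shared memory paths, GB/s
print(cache_bw_gbs + sm_bw_gbs)   # 81 GB/s total
```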
New generation “non-blocking” switching architecture
– Faster switch components
– Enhanced data and control bandwidth
– Helps consolidation and fast-transaction applications
– Separate Control Memory region for increased reliability and serviceability
Expanded Cache Memory and flexible Front-end/Back-end configuration
– Faster and increased number of RISC processors across the system
– Conforms better to application needs
– Improved Fibre Channel and FICON design
– Additional reliability: reduces outages, increases productivity
Faster back-end design
– 2Gbps disk interface
– Optional faster Back-end Director (later release)
– Eases performance bottlenecks, helps with application consolidation and QoS
Architecture Advantages
Linear upgrade
– Add boards
– Add processors
– Add paths
Shorter instruction path (fewer instructions)
– Not communicating with other processors
– Coordination with very fast shared memory
– Less context switching
Elegant architecture allows expandable features
– Example: externally sourced storage
Optional Hardware Features
High-speed Cache Memory (four cache cards)
– Hitachi TagmaStore™ Universal Storage Platform, models USP600/USP1100
High-availability Control (Shared) Memory (four SM cards)
– Models USP600/USP1100
Front-end Director Features (Standard and Optional)
– Model USP100 = two max
– Model USP600 = four or six max
– Model USP1100 = four or six max
Back-end Director Features (Standard and Optional)
– Model USP1100 = four max
– Model USP600 = four (model change required)
Cache increments
– Models USP100/600 = 4GB
– Model USP1100 = 8GB
For the models USP600 and USP1100 to install the maximum of six FED features, only two BED features can be installed: the 5th and 6th FED features use the slots reserved for the 3rd and 4th BED features (see the sketch below).
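A minimal sketch of this slot trade-off for the USP600/USP1100, assuming four dedicated FED slots plus four slots shared with BEDs (the function and limits are our illustration, not a real configuration tool):

```python
def valid_feature_mix(fed_features: int, bed_features: int) -> bool:
    """True if a FED/BED feature combination fits the shared slots."""
    if not (1 <= fed_features <= 6 and 1 <= bed_features <= 4):
        return False
    # The 5th and 6th FED features occupy slots otherwise used by
    # the 3rd and 4th BED features.
    shared_slots_needed = max(fed_features - 4, 0) + bed_features
    return shared_slots_needed <= 4

print(valid_feature_mix(6, 2))   # True  - max FEDs forces two BEDs
print(valid_feature_mix(6, 3))   # False - 5th/6th FEDs occupy BED slots
print(valid_feature_mix(4, 4))   # True  - full back end, four FED features
```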
Clusters
[Diagram: DKC PCB locations. The power supply boundary separates Cluster 1 (front side, slots 1A–1L) from Cluster 2 (rear side, slots 2M–2X). The slot map shows the basic and additional CHA, DKA, Cache, CSW (DKC-F510I-CSW), and SM PCB positions, which slots are controlled by the basic versus the additional CSWs, and the upper/lower layers used for the SM PCBs (1SA/1SC on Cluster 1; 2SB/2SD on Cluster 2).]
Each controller frame consists of two redundant controller halves called storage clusters. Each storage cluster contains all physical and logical elements (e.g., power supplies, channel adapters, disk adapters, cache, control storage) needed to sustain processing within the subsystem. Both storage clusters should be connected to each host using an alternate path scheme, so that if one storage cluster fails, the other storage cluster can continue processing for the entire subsystem. On the Universal Storage Platform, one section is located on the front side of DKC, and the other section is located on the rear side. These two sections function as Cluster 1 and Cluster 2, respectively.
Each pair of channel adapters is split between clusters to provide full backup for both front-end and back-end directors. Each storage cluster also contains a separate, duplicate copy of cache and shared memory contents. In addition to the high-level redundancy that this type of storage clustering provides, many of the individual components within each storage cluster contain redundant circuits, paths, and/or processors to allow the storage cluster to remain operational even with multiple component failures. Each storage cluster is powered by its own set of power supplies, which can provide power for the entire storage subsystem in the unlikely event of power supply failure. Because of this redundancy, the Universal Storage Platform can sustain the loss of multiple power supplies and still continue operation.
Each cluster has a structure in which two sets of CHAs, DKAs, caches, and CSW are installed symmetrically to the right and left.
The basic CSWs installed at the Printed Circuit Board (PCB) locations, 1D and 2P, control the Basic CHA, Add.1 CHA, Option B, and Option 1 DKA.
When installing the CHA/DKA in the location, 1G/2T, 1H/2U, 1K/2W, or 1L/2X, the DKC-F510I-CSW must be installed in location 1J and 2V.
The locations for installing the SM PCBs are separate from those of the other components, and the SM PCBs are installed in two layers: upper and lower. The SM PCBs installed in the upper and lower layers differ from each other. In addition, the locations of the basic and additional SM PCBs in Cluster 1 differ from those in Cluster 2.
Architecture: Cache Memory
You can configure cache memory in either of two ways:
– Standard Cache Access Model: two Cache Boards
– High Performance Cache Access Model: four Cache Boards
Write data in the Universal Storage Platform and Network Storage Controller is duplexed/mirrored
– Cache boards installed in pairs
– Cache board pairs installed in separate Clusters/power boundaries
– Write data is written twice, in one operation: once to each board in a Cache Board Pair
The Universal Storage Platform can be configured with a maximum of 128GB of cache (increments of 4GB).
All cache memory in the Universal Storage Platform is nonvolatile and is protected by 48- hour battery backup (without destage option). The cache in the Universal Storage Platform is divided into two equal areas (called cache A and cache B) on separate cards. Cache A is in cluster 1, and cache B is in cluster 2.
The Universal Storage Platform places all read and write data in cache. Write data is normally written to both cache A and B with one front-end director (CHA) write operation, so that the data is always duplicated (duplexed) across logic and power boundaries. If one copy of write data is defective or lost, the other copy is immediately destaged to disk. This “duplex cache” design ensures full data integrity in the unlikely event of a cache memory or power-related failure.
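The duplexed write path can be pictured with two dictionaries standing in for cache A and cache B; a conceptual Python sketch (toy data structures, not the real control flow):

```python
cache_a = {}   # cluster 1
cache_b = {}   # cluster 2

def fast_write(block: int, data: bytes) -> str:
    cache_a[block] = data     # one CHA operation writes both copies,
    cache_b[block] = data     # across logic and power boundaries
    return "ACK"              # host is acknowledged before destage

def recover(block: int) -> bytes:
    """If one copy is lost, destage the surviving copy immediately."""
    return cache_a.get(block) or cache_b[block]

fast_write(42, b"payload")
del cache_a[42]               # simulate losing one copy
print(recover(42))            # b'payload' - surviving copy still intact
```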
Standard Cache Access Model
Configuration: basic Cache, basic CSW; data path bandwidth 17GB/s
In the Standard Cache Access model, you add an additional PCB when the basic PCB has reached its maximum capacity of 64GB.
High Performance Cache Access Model
In the High Performance Cache Access model, the cache DIMMs are mounted on the basic and additional PCBs in parallel.
In this model, the cache memories with a total capacity of 16GB (Four Sets DKC-F510I-C4G) or larger are added to the Universal Storage Platform storage system.
Shared Memory
Shared memory stores
– Cache Directory Information
– Storage System Configuration data
– Path Group Array Information (dynamic path selection)
Size of shared memory storage is determined by
– Total cache size
– Number of logical devices (LDEVs)
– Replication software in use
Shared Memory is duplexed (mirrored)
Physical Location
– In the Universal Storage Platform, shared memory is mounted on dedicated PCBs – NEW
– In the Network Storage Controller, shared memory is mounted on the cache PCBs
– In the Lightning 9900 V Series system, shared memory is mounted on the cache PCBs
Contains the run-time task list for FEDs and BEDs
The nonvolatile shared memory contains the cache directory and configuration information for the Universal Storage Platform. The path group arrays (e.g. for dynamic path selection) also reside in the shared memory. The shared memory is duplexed (mirrored), and each side of the duplex resides on the first two SM cards, which are in clusters 1 and 2. The shared memory has separate power supplies and is protected by separate seven-day battery backup.
For the Universal Storage Platform model, shared memory is now mounted on separate boards (previously on the cache boards). This new design eliminates the performance degradation (caused by write-through mode) that was previously experienced during shared memory replacement.
The basic size of the shared memory is 3GB (two cards), and the maximum size is 6GB (four cards). The size of the shared memory storage is determined by several factors, including total cache size, number of logical devices (LDEVs), and replication functions in use.
The replication functions affecting shared memory include:
– Hitachi TrueCopy™ Remote Replication software
– Hitachi ShadowImage™ In-System Replication software
– Hitachi Universal Replicator software
– Hitachi Copy-on-Write Snapshot software
– Hitachi FlashCopy-compatible Mirroring software for IBM® z/OS®
– Hitachi Volume Migration software
– Hitachi Serverless Backup Enabler software
– Copy Manager for TPF
Shared memory stores configuration data and the information for controlling the cache memory and disk drives. The memory can be accessed commonly from the CHA and DKA.
PCB – Printed Circuit Board
Architecture: Shared Memory Path
In the Universal Storage Platform, the Shared Memory is installed on a dedicated PCB (SM-PK). This increases the number of paths from the DKAs/CHAs to the SM and improves shared-memory access performance. The High Performance Shared Memory Access Model, in which SM PCBs are added optionally, achieves a bandwidth of up to 13GB/s.
Directors
– Front-end Directors (FEDs)
– Back-end Directors (BEDs)
Front-end Director (FED)
[Diagram: FED board showing FC protocol chips for ports 1A and 5A, each feeding a microprocessor.]
1. Fibre channel protocol chip: handles and passes data to the microprocessor
2. Microprocessor: runs a real-time O/S and IP stack, takes “what to do” information from shared memory, and passes data to the cache switch
Front-end Director (FED) Features
[Diagram: Fibre Channel PCB with microprocessor packages (MP-PKs) and fibre ports.]
Front-end Director (FED)
– A microprocessor
– Controls two FC ports
– 4Gbps
CHA (Channel Adapter)
– A board (PCB) containing FEDs
Front-end Director Feature
– Also called a CHA Option
– A pair of CHAs
Short Wavelength Fibre SFP Transceivers are standard; Long Wavelength Transceivers are optional
Front-end Director Features-CHA Option:
The channel adapter boards (CHAs) contain the front-end directors (microprocessors) which process the channel commands from the hosts and manage host access to cache. In the mainframe environment, the front-end directors perform CKD-to-FBA and FBA-to-CKD conversion for the data in cache. Channel adapter boards are installed in pairs. The channel interfaces on each board can all transfer data at once, independently. Each channel adapter board pair is composed of one type of channel interface (e.g., fibre channel, FICON™, ExSA™, and NAS). Fibre-channel adapters and FICON™-channel adapters are available in both shortwave (multimode) and longwave (single-mode) versions. The Universal Storage Platform can be configured with multiple channel adapter pairs to support various interface configurations.
Fibre-Channel: The Universal Storage Platform supports up to 192 fibre-channel ports. The fibre ports are capable of data transfer speeds of 400 MB/sec (4Gbps). Fibre channel features can have either 16 or 32 ports per pair of channel adapter boards. Fibre Channel CHAs may use Long Wavelength or Short Wavelength to connect to hosts, arrays, or switches. This is made possible by installing a long or short wavelength transceiver on every port on the PCB.
FICON™: The Universal Storage Platform supports up to 96 FICON™ ports. FICON™ ports are capable of data transfer speeds of up to 200 MB/sec (2 Gbps). FICON™ features, available in both shortwave (multimode) and longwave (single mode) versions, can have either 8 or 16 FICON™ host interfaces per pair of FICON™ channel adapter boards.
Front-end Director Features
Optional Fibre Channel Features – 4Gbps
– 16 FC ports: 8 Directors (four per PCB)
– 32 FC ports: 16 Directors (eight per PCB)
– Half populated, four-FC-port increments
– Shortwave and longwave intermix
Optional FICON feature – 2Gbps
– Eight-FICON-port features
– Short or long wave, no intermix
Optional ESCON feature
– 16 ESCON ports
Optional Embedded NAS Blade feature
– Initial release = 8 x 1Gbps ports
PCB - Printed Circuit Board
CHA Options
CHA options – mainframe:

  Option name                16ML           16MS           8ML            8MS            16S
  Interface                  Mainframe Fibre 16port        Mainframe Fibre 8port         ESCON
  Wavelength                 Long Wave      Short Wave     Long Wave      Short Wave     –
  Host interface             FICON          FICON          FICON          FICON          ESCON
  Data transfer rate (MB/s)  100/200        100/200        100/200        100/200        17
  Ports per option           16             16             8              8              16
  Options installed          1/2/3/4 (5/6)  1/2/3/4 (5/6)  1/2/3/4 (5/6)  1/2/3/4 (5/6)  1/2/3/4 (5/6)
  Ports per subsystem        16/32/48/64    16/32/48/64    8/16/24/32     8/16/24/32     16/32/48/64
                             (80/96)        (80/96)        (40/48)        (40/48)        (80/96)
  Maximum cable length       10Km           500m/300m *1   10Km           500m/300m *1   3Km

CHA options – open systems:

  Option name                16HS *3        32HS *3        8NS *4            16IS *4
  Interface                  Fibre 16port   Fibre 32port   NAS 8port         iSCSI 16port
  Host interface             FCP            FCP            Gigabit Ethernet  Gigabit Ethernet
  Data transfer rate (MB/s)  100/200        100/200        100               100
  Ports per option           16             32             8                 16
  Options installed          1/2/3/4 (5/6)  1/2/3/4 (5/6)  1/2/3/4 *5        1/2/3/4 (5/6)
  Ports per subsystem        16/32/48/64    32/64/96/128   8/16 *5           16/32/48/64
                             (80/96)        (160/192)                        (80/96)
  Max cable, Short Wave      500m/300m *1   500m/300m *1   500m/275m *2      500m/275m *2
  Max cable, Long Wave       10Km           10Km           –                 –

  ( ): DKA slot used
CHA Options:
The channel adapter (CHA) controls data transfer between the host and the cache memory. The TagmaStore Universal Storage Platform provides various kinds of CHAs, which support the mainframe, SAN (Storage Area Network), NAS (Network Attached Storage), and iSCSI interfaces, as options to be added in pairs.
The CHA is a component of the DKC that processes the channel commands from the host(s) and manages host access to cache.
The tables above list all the CHA options provided by the Universal Storage Platform.
CHA Options - Fibre Channel Ports
[Diagrams: 16-port CHA PCB and eight-port CHA PCB, Cluster 1.]
• CL [odd #] ports are on Cluster 1 (front): CL1, CL3, CL5, CLB, etc.; port letters B, D, … are odd
• CL [even #] ports are on Cluster 2 (rear): CL2, CL4, CL6, CLA, etc.; port letters A, C, … are even
Back-end Director (BED) Feature
[Diagram: DKA PCB with eight MP-PKs (2MP, SH350–A) and fibre ports.]
Back-end Director (BED)
– A microprocessor (MP)
– The MP controls one or two FC-AL ports
– 2Gbps FC-AL
DKA – Disk Adapter
– A board (PCB) containing BEDs
Back-end Director (BED) Feature
– Also called a DKA Option
– A pair of DKAs
The disk adapters, which control the transfer of data between the disk drives and cache, are installed in pairs for redundancy and performance. The Universal Storage Platform can be configured with up to four DKA pairs. All functions, paths, and disk drives controlled by one DKA pair are called an “array domain.” An array domain can contain a variety of LVI and/or LU configurations.
The disk drives are connected to the DKA pairs by Fibre cables using an arbitrated-loop (FCAL) topology. Each DKA has 8 independent fibre backend paths controlled by 8 back-end directors (microprocessors). Each dual-ported Fibre-channel disk drive is connected via its two ports to each DKA in a pair via separate physical paths for improved performance as well as redundancy.
DKA Option
[Diagram: DKA pair back-end loops – each DKA pair drives 2Gbps fibre loops through fibre ports 0–7 on DKA (CL1) and DKA (CL2). The first BED pair serves frames R0+R1+R2 with 48 disks per loop; the 2nd–4th BED pairs serve R1+R2 with 36 disks per loop.]
DKA is a component of DKC that controls data transfer between the hard disk and the cache memory. In the current version of Hitachi TagmaStore Universal Storage Platform, the number of ports per DKA has increased to eight and the transfer rate per port has increased to 2Gb/s.
Full Specification Model (DKA-F510I-400): Controls one Fibre-channel port with one microprocessor.
DKA Options - Enhanced Backend FC-AL
  Item                               Lightning 9900 V   Universal Storage Platform
  Number of back-end paths (max)     32                 64
  Number of back-end paths per DKA   4                  8
  Bandwidth of FC path (Gbps)        1                  2
  Total bandwidth (GBps)             3.2                12.8 or 6.4
  Processors per DKA                 4                  8
In the Universal Storage Platform, the number of ports per DKA is increased to eight (8) and, furthermore, the transfer rate per fibre port is raised to 2Gbps. In the maximum configuration, disk drives are accessed through the four (4) DKA pairs and 64 paths.
The Full Specification Model (DKA-F510I-400) controls one fibre channel port with one microprocessor (8 FC-AL and 8 MPs per PCB).
Optional Disk Features
Also called hard disk drives (HDDs)
All Universal Storage Platform models support
– 73GB/15Krpm
– 146GB/10Krpm
– 146GB/15Krpm
– 300GB/10Krpm
Dual vendor policy will be applied wherever possible
– HGST and Seagate
Universal Storage Platform/Network Storage Controller HDDs are not interchangeable with other Hitachi Data Systems storage systems
The Universal Storage Platform supports four types of disk drives, all with a 2Gb/s Fibre Channel interface:
• 73GB/15k rpm
• 146GB/10k rpm
• 146GB/15k rpm
• 300GB/10k rpm
Each disk drive can be replaced non-disruptively on site. The Universal Storage Platform utilizes diagnostic techniques and background dynamic scrubbing that detect and correct disk errors. Dynamic sparing is invoked automatically if needed. For an array group of any RAID level, any spare disk drive can back up any other disk drive of the same rotation speed and the same or lower capacity anywhere in the subsystem, even if the failed disk and the spare disk are in different array domains (attached to different DKA pairs). The Universal Storage Platform can be configured with a minimum of one and a maximum of 40 spare disk drives (4 slots for spare disks + 36 slots for spare or data disks). The standard configuration provides one spare drive for each type of drive installed in the subsystem. The Hi-Track monitoring and reporting tool detects disk failures and notifies the Hitachi Data Systems Support Center automatically, and a service representative is sent to replace the disk drive.
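The sparing rule (same rotation speed, same or lower capacity, array domain irrelevant) is easy to express directly; a hedged Python sketch with illustrative field names:

```python
from dataclasses import dataclass

@dataclass
class Drive:
    capacity_gb: int
    rpm: int
    array_domain: int   # which DKA pair the drive hangs off

def spare_can_cover(spare: Drive, failed: Drive) -> bool:
    """Spare must match rotation speed and have >= capacity;
    the array domain does not matter."""
    return spare.rpm == failed.rpm and spare.capacity_gb >= failed.capacity_gb

spare = Drive(capacity_gb=146, rpm=15_000, array_domain=0)
print(spare_can_cover(spare, Drive(73, 15_000, 3)))    # True  - smaller, same speed, other domain
print(spare_can_cover(spare, Drive(300, 10_000, 0)))   # False - larger capacity
print(spare_can_cover(spare, Drive(146, 10_000, 1)))   # False - different rotation speed
```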
Read Hit
[Diagram: read hit – the I/O request is satisfied directly from cache; shared memory holds the cache directory.]
Read hit: For a read I/O, when the requested data is already in cache, the operation is classified as a read hit. The front-end director searches the cache directory, determines that the data is in cache, and immediately transfers the data to the host at the channel transfer rate.
Read Miss
[Diagram: read miss – the requested data is staged from the storage array into cache, then transferred to the host; shared memory holds the cache directory.]
Read miss: For a read I/O, when the requested data is not currently in cache, the operation is classified as a read miss. The front-end director searches the cache directory, determines that the data is not in cache, disconnects from the host, creates space in cache, updates the cache directory, and requests the data from the appropriate DKA pair. The DKA pair stages the appropriate amount of data into cache, depending on the type of read I/O (e.g., sequential).
Fast Write
[Diagram: fast write – (1) write data is duplexed into cache, (2) the host receives an ACK before the data is destaged to the storage array.]
Fast write: All write I/Os to the Universal Storage Platform are fast writes, because all write data is written to cache before being destaged to disk. The data is stored in two cache locations on separate power boundaries in the nonvolatile duplex cache (see section 2.3.3). As soon as the write I/O has been written to cache, the Universal Storage Platform notifies the host that the I/O operation is complete, and then destages the data to disk.
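The three cases (read hit, read miss, fast write) can be tied together in a toy cache model; illustrative Python only, with a plain dictionary standing in for the cache directory:

```python
cache: dict[int, bytes] = {}
disk = {7: b"on-disk data"}

def read(block: int) -> tuple[str, bytes]:
    if block in cache:                  # cache directory lookup
        return "read hit", cache[block]
    data = disk[block]                  # DKA stages data into cache
    cache[block] = data
    return "read miss", data

def write(block: int, data: bytes) -> str:
    cache[block] = data                 # duplexed in the real hardware
    return "ACK"                        # destage to disk happens later

print(read(7))     # ('read miss', b'on-disk data') - first access stages
print(read(7))     # ('read hit', b'on-disk data')  - now served from cache
print(write(9, b"new data"))   # 'ACK' returned before destage
```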
Algorithms for Cache Control
All read and write data goes through cache
– 100% of cache memory is available for read operations
Write pending rate
– Percent of total cache used for write-pending data
– The amount of fast-write data in cache is dynamically managed by the cache control algorithms
– Provides the optimum amount of read and write cache
– Dependent on the workload and read and write I/O characteristics
Hitachi Data Systems Intelligent Learning Algorithm
Least-recently-used (LRU) algorithm
Sequential prefetch algorithm
The Universal Storage Platform places all read and write data in cache, and 100% of cache memory is available for read operations. The amount of fast-write data in cache is dynamically managed by the cache control algorithms to provide the optimum amount of read and write cache, depending on the workload read and write I/O characteristics. The algorithms for internal cache control used by the Universal Storage Platform include the following:
Hitachi Data Systems Intelligent Learning Algorithm: The Hitachi Data Systems Intelligent Learning Algorithm identifies random and sequential data access patterns and selects the amount of data to be “staged” (read from disk into cache). The amount of data staged can be a record, partial track, full track, or even multiple tracks, depending on the data access patterns.
Least-recently-used (LRU) algorithm (modified): When a read hit or write I/O occurs in a non-sequential operation, the least-recently-used (LRU) algorithm marks the cache segment as most recently used and promotes it to the top of the appropriate LRU list. In a sequential write operation, the data is destaged by priority, so the cache segment marked as least-recently used is immediately available for reallocation, since this data is not normally accessed again soon.
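A minimal sketch of this modified LRU behavior using Python's OrderedDict (the sequential flag and segment names are our illustration):

```python
from collections import OrderedDict

lru = OrderedDict()   # least recently used entries first

def touch(segment: str, sequential: bool = False) -> None:
    lru[segment] = True
    if sequential:
        # Sequential writes are destaged by priority, so the segment
        # is marked least recently used and can be reallocated first.
        lru.move_to_end(segment, last=False)
    else:
        # Non-sequential hits are promoted to most recently used.
        lru.move_to_end(segment)

touch("seg-a"); touch("seg-b"); touch("seg-a")
touch("seg-c", sequential=True)
print(next(iter(lru)))   # 'seg-c' - first in line for reallocation
```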
Sequential prefetch algorithm: The sequential prefetch algorithm is used for sequential-access commands or access patterns identified as sequential by the Intelligent Learning Algorithm. The sequential prefetch algorithm directs the back-end directors to prefetch up to one full RAID stripe (24 tracks) to cache ahead of the current access. This allows subsequent access to the sequential data to be satisfied from cache at host channel transfer speeds.
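A sketch of the prefetch window this implies, assuming the goal is to keep cache staged one full stripe (24 tracks) ahead of the current access; the names and the exact windowing are our assumption:

```python
STRIPE_TRACKS = 24   # one full RAID stripe, per the text

def tracks_to_prefetch(current_track: int, staged_through: int) -> range:
    """Tracks to stage so cache stays one stripe ahead of the access."""
    target = current_track + STRIPE_TRACKS
    return range(max(staged_through + 1, current_track + 1), target + 1)

print(list(tracks_to_prefetch(current_track=100, staged_through=105)))
# [106, ..., 124] - tops the staged-ahead window back up to 24 tracks
```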
Battery
Universal Storage Platform: Nickel-Hydrogen battery
The battery installed in the Universal Storage Platform is a nickel-hydrogen battery. The batteries are connected to the control section (cache memories, shared memories, DKAs, and CHAs) and the DKU section. When batteries appropriate to the subsystem configuration are installed, the subsystem can continue operating on battery power if the AC input power is cut by a power failure, provided the power stoppage lasts no longer than one minute, whether it is a DKC power-off or a DKU power-off.
The Network Storage Controller adopts a nickel metal hydride battery made of environmentally friendly material. Its batteries are connected to the control section (cache memories, shared memories). When batteries appropriate to the subsystem configuration are installed, the subsystem can continue operating on battery power when the AC input power is cut by a power failure lasting no longer than 20 milliseconds. When the power outage lasts longer than 20 milliseconds, the subsystem executes the backup process.
Battery Backup
– The one-minute UPS and destage function is not supported on the Network Storage Controller
– The destage function and hardware connection (as on the Universal Storage Platform) are not supported
– Time for withstanding an instantaneous power failure: 20ms
Cache Destaging Process - Universal Storage Platform
Cache Destaging (new for the Universal Storage Platform) is configured for either
– Backup Mode
– Destaging Mode
– Not supported on the Network Storage Controller
In the event of a power failure longer than one minute, the Universal Storage Platform begins the backup process
– The backup process is configured when the Universal Storage Platform is installed
The backup process is set to either
– Destaging Mode: destage to disk all write I/O in cache
– Backup Mode: provide power to cache until power resumes
Cache destaging requires (see the sketch below)
– Cache Memory Batteries
– Additional DKU Batteries
– Destaging Mode = On
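A sketch of this decision logic, assuming the one-minute battery ride-through described in the Battery section and the two configured modes above (an illustrative function, not actual microcode behavior):

```python
def on_power_failure(outage_seconds: float, destaging_mode: bool,
                     dku_batteries: bool) -> str:
    """Toy model of the Universal Storage Platform backup process."""
    if outage_seconds <= 60:
        return "ride through on battery power"
    if destaging_mode and dku_batteries:
        return "destage all write-pending data from cache to HDDs"
    return "hold cache contents on battery until power resumes"

print(on_power_failure(30, destaging_mode=True, dku_batteries=True))
print(on_power_failure(300, destaging_mode=True, dku_batteries=True))
print(on_power_failure(300, destaging_mode=False, dku_batteries=False))
```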
Hi-Track Tool
Hi-Track® “call-home” service and remote maintenance tool
– Hi-Track is standard on all Universal Storage Platform models
Monitors the operation of the Universal Storage Platform at all times
– Collects hardware status and error data
– Transmits this data via modem to the Hitachi Data Systems Support Center
The Support Center analyzes the data and implements corrective action as needed
The Hi-Track tool monitors the operation of the Universal Storage Platform at all times, collects hardware status and error data, and transmits this data via modem to the Hitachi Data Systems Support Center. The Support Center analyzes the data and implements corrective action as needed. In the unlikely event of a component failure, Hi-Track service calls the Hitachi Data Systems Support Center immediately to report the failure without requiring any action on the part of the user. Hi-Track tool enables most problems to be identified and fixed prior to actual failure, and the advanced redundancy features enable the system to remain operational even if one or more components fail.
Note: Hi-Track tool does not have access to any user data stored on the Universal Storage Platform. The Hi-Track tool requires a dedicated RJ-11 analog phone line.
In the event of a component failure
– Calls the Hitachi Data Systems Support Center immediately to report the failure
– No customer action is required
Provides for remote maintenance
Hi-Track tool does not have access to any user data stored on the Universal Storage Platform
The Hi-Track tool requires a dedicated RJ-11 analog phone line
Module Review
1. Which disk drives does the Universal Storage Platform support?
2. What is the maximum cache capacity and the maximum number of cache boards in the Universal Storage Platform?
3. What is the maximum number of BEDs supported by the Universal Storage Platform?
4. What is the maximum connection speed of a FED and a BED on the Universal Storage Platform?
5. Which of the following components does the DKU section of the Universal Storage Platform contain? FED Features, Cache Switches, Shared Memory, or Disk drives.
6. If a LU has two LU paths, the LU paths should be to host ports that are located in different __________. (Fill in the blank)
7. How is a FED or BED connected to Cache?
8. What happens if a cluster fails in a Universal Storage Platform or Network Storage Controller system?
9. In the High Performance Cache Access model, how many cache switches and cache cards are installed?
10. What information does Shared Memory store?
11. What is the maximum number of ports on a Fibre Channel FED Feature?
12. What are the four types of FED Features supported on the USP?
13. What is needed to enable cache destage on the USP?
14. What two services does Hi-Track perform?
4. DKU Back-end Architecture and Logical Units
Module Objectives
Upon completion of this module, the learner should be able to:
- Describe how Hard Drive Units and BEDs are connected
- Describe the back-end architecture of the Hitachi TagmaStore™ Universal Storage Platform, including Arbitrated Loops, Hard Drive Units, RAID Groups, and Emulation
- Describe the process used to create a LU
- Describe LUSE and VLL volumes and their benefits
DKU Back-end Architecture and Logical Units Overview
- HDU
  - HDU Box
  - Extending HDUs
  - BED-HDU connections
  - Domains
- Creating a Logical Device
  - Block of Four
  - Block of Four Addressing
  - Parity Groups
  - RAID Protection
  - Parity Group Addressing
  - Emulation
  - Volume, CU:LDEV
- VLL and LUSE
- LUN Manager and LUN Security
- RAID Intermix and High Performance Disk Access Configuration
HDU Box
[Diagram: HDU Box (DKU-F505I-FSWA) — two HDUs, each holding HDDs in locations 00-1F on the left and right sides, connected through FSW-A switches; the FCAL boundaries show the configuration in which the 32 HDDs in an HDU box are controlled by two FCALs.]
*1: The FSW-A is installed as standard, irrespective of the number of HDDs on the FCAL.
*2: When connecting 32 HDDs in the HDU using two FCALs, installation of the FSW-A is indispensable.
In the DKC and DKU frames, two HDUs each contain 32 HDDs. The HDDs are connected to the DKA via the fibre channel interface switches (FSWs) with the FC-AL.
When the FSW-A is installed, the HDDs in the HDU are controlled by the two FC-ALs.
For the HDU box in the DKC (DKU-R0), the FSW-A is the standard installation. The HDDs are controlled by eight fibre channel ports in the 1st DKA pair.
The Fibre Channel interface switch (FSW) PCB is a board that provides the physical interface (cable connectors) between the ACP ports and the disk drives housed in a given HDU.
Extending HDU Boxes
[Diagram: Extending the HDU — connections inside the HDU (16 HDDs per FCAL): a DKA pair connects through FSW-A switches to two banks of 16 HDDs each.]
With FSW-A installed, the HDDs in the HDU are controlled by the two FC-ALs.
DKU Hardware and Interconnections
[Diagram: High Speed Disk Access Model, four-DKA-pair configuration — the DKC (R0 DKU) plus the L1, L2, R1, and R2 DKUs, each containing HDU boxes, connected to four DKA pairs (Opt. B, Opt. 1, Opt. 2, Opt. 3) installed in slots 1A/2M, 1B/2N, 1K/2W, and 1L/2X (slot in Cluster 1/Cluster 2).]
The DKU section consists of HDU boxes and HDDs.
FC-AL connections are used to access the HDDs installed in the HDU-boxes.
A maximum of 48 HDDs on two FC-AL loops from the OPT B BED pair is possible and a maximum of 32 HDDs on two FC-AL loops from each of the other three BED pairs is possible. Every HDD is connected to two FC-ALs.
High Performance Disk Access Model — Device Interface Cable options: DKC (R0) → R1 and R1 → R2 (two features); DKC → L1 and L1 → L2 (two features).
Creating a Logical Device
- Block of Four
- Block of Four Addressing
- Parity Groups
- RAID Protection
- Parity Group Addressing
- Emulation
- Volume, CU:LDEV
Virtualized Back-end
[Diagram: Virtualized back-end — a PDEV is divided into VDEVs and LDEVs; with a maximum of 64 CUs, LDEV numbers run 00-FF within each CU (for example, CU 03 and LDEV 01 give the address 03:01), and the volume is presented to the host as a LUN.]
Names:
- PDEV – Physical Device
- VDEV – Virtual Device
- LDEV – Logical Device
- LUN – Logical Unit Number (presented to the host)
DKU - HDU Numbering
A group of four canisters is called a Block of Four, or B4.
The DKU in R0 contains two B4s: the first comprises canisters 0 through 3, and the second comprises canisters 4 through 7.
The DKUs in R1, R2, L1, L2 have four B4s; 0-3, 4-7, 8-b, and c-f.
The Block of Four concept is important foundational knowledge since Array Groups are created within Blocks of Four.
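The canister-to-B4 relationship described above reduces to integer division by four. A minimal sketch, assuming canisters are simply grouped four at a time:

    # Sketch of the Block of Four (B4) numbering described above, assuming
    # canisters are grouped four at a time (R0 holds canisters 0-7; the
    # other DKUs hold canisters 0x0-0xF).

    def b4_for_canister(canister: int) -> int:
        """Return the B4 index that contains a given canister number."""
        return canister // 4

    # R0 DKU: canisters 0-7 fall into B4 0 and B4 1.
    assert [b4_for_canister(c) for c in range(8)] == [0, 0, 0, 0, 1, 1, 1, 1]
    # R1/R2/L1/L2 DKUs: canisters 0x0-0xF fall into B4s 0-3.
    assert b4_for_canister(0xB) == 2 and b4_for_canister(0xF) == 3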
B4 Numbering in the Universal Storage Platform
Array Groups
[Diagram: Parity Group 1-1 — front and rear views of the four canisters in a Block of Four, with the shaded slot-0 disks (two visible from the front canisters, two from the rear) forming the Parity Group.]
The terms Array Group and Parity Group are used interchangeably.
A Parity Group consists of four disks, each residing in the same slot number in a Block of Four. This diagram shows the four canisters in the Block of Four. The shaded areas identify slots that are part of a Parity Group located in slot 0. Two disks are visible from the front canisters, and the other two are visible from the rear canisters.
A Parity Group address consists of two numbers: the Block of Four identifier and the Array Group number. The Array Group number is the installation order of the Array Group. In this case, 0 is the hardware address of the disk slot; the disks in slot 0 were installed first, so their installation order is 1.
To understand the numbering scheme, consider this example. This set of four disks for Parity Group 1-1 is located in Block of Four Number 1. The array group designation of one means that the disks are located in slot 0 of the canisters 0, 1, 2, and 3 in Block of Four One.
To determine a volume’s physical location, or CU:LDEV, you must identify which Parity Group contains that volume.
RAID Protection
- RAID 0/1
  - 2D-2D (2 Disk - 2 Disk)
  - 4D-4D (4 Disk - 4 Disk)
    - N/A: Hitachi Lightning 9900™ V Series enterprise storage systems
    - N/A: Network Storage Controller
- RAID 5
  - 3D-1P (3 Data Disks - 1 Parity Disk)
    - N/A: Hitachi TagmaStore™ Network Storage Controller with 300GB drives
  - 7D-1P (7 Data Disks - 1 Parity Disk)
    - N/A: Lightning 9900™ Series enterprise storage systems
    - N/A: Network Storage Controller
- RAID 6
  - 6D-2P (6 Data Disks - 2 Parity Disks)
    - Survives a double disk failure of any two disks
    - N/A: Lightning 9900 V Series and Lightning 9900 Series systems
(A usable-capacity sketch for these layouts follows.)
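The usable capacity implied by each layout follows directly from its data-to-redundancy disk counts. A small sketch under that assumption (real formatted capacities will be somewhat lower):

    # Sketch: usable capacity for each RAID layout listed above, given a
    # per-disk capacity. Fractions follow from the data:redundancy counts.

    LAYOUTS = {                 # (data disks, redundancy disks)
        "RAID 0/1 2D-2D": (2, 2),
        "RAID 0/1 4D-4D": (4, 4),
        "RAID 5 3D-1P":   (3, 1),
        "RAID 5 7D-1P":   (7, 1),
        "RAID 6 6D-2P":   (6, 2),
    }

    def usable_gb(layout: str, disk_gb: float) -> float:
        data, _redundancy = LAYOUTS[layout]
        return data * disk_gb

    for name in LAYOUTS:
        print(f"{name}: {usable_gb(name, 146):.0f} GB usable from 146GB HDDs")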
Parity Group Addressing
- The parity group (RAID group) address indicates the physical location(s) of volumes
  - The address is static
- Format: Block of Four number - Array Group number
  - Example: Parity Group 7-1 = Block of Four 7, Array Group 1
- Spares do not count as parity groups
(A small parsing sketch follows.)
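Since the address format is just "Block of Four number, hyphen, Array Group number", parsing it is a one-liner. A minimal sketch:

    # Sketch: parsing a static parity group address of the form
    # "<Block of Four #>-<Array Group #>", e.g. "7-1" as described above.

    def parse_parity_group(address: str) -> tuple[int, int]:
        """Split 'B4-ArrayGroup' into its two components."""
        b4, array_group = address.split("-")
        return int(b4), int(array_group)

    assert parse_parity_group("7-1") == (7, 1)  # Block of Four 7, Array Group 1
    assert parse_parity_group("1-1") == (1, 1)  # the Array Groups example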
Emulations
[Diagram: A 2D-2D Parity Group of 146GB HDDs provides 292GB of usable space; applying the Open-V emulation carves it into variable-size LDEVs (for example 30GB, 30GB, 100GB, and 45GB, with the remainder unallocated).]
You create a logical device or LDEV by placing emulation on a Parity Group.
The LDEV is a logical representation (in hex) of a slice or segment of disk (HDD) that spans a Parity Group.
When using the Open-V emulation you can create variable size LDEVs on the Parity Group.
Since changing an emulation type requires reformatting the disks in a parity group (destroying data), caution should be exercised when performing this task.
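A small sketch of the Open-V carving idea, using the 2D-2D/146GB example from the figure above and the size limits quoted on the Emulation Types page (min 46 MB, max 737,256 MB). The allocation logic is illustrative, not the array's actual algorithm:

    # Sketch of carving variable-size Open-V LDEVs out of a parity group's
    # usable space. Size limits are from the Emulation Types page.

    MIN_MB, MAX_MB = 46, 737_256

    def carve_open_v(usable_mb: int, requested_mb: list[int]) -> list[int]:
        """Allocate LDEVs in order; reject sizes out of range or over capacity."""
        free = usable_mb
        created = []
        for size in requested_mb:
            if not (MIN_MB <= size <= MAX_MB):
                raise ValueError(f"{size} MB is outside the Open-V size limits")
            if size > free:
                raise ValueError(f"only {free} MB of free space left")
            created.append(size)
            free -= size
        return created

    # 292GB usable, carved as in the figure: 30 + 30 + 100 + 45 GB.
    ldevs = carve_open_v(292 * 1024, [30 * 1024, 30 * 1024, 100 * 1024, 45 * 1024])
    print(ldevs, "free:", 292 * 1024 - sum(ldevs), "MB")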
Emulation Types
Emulation   Lightning 9900 V Series system   Universal Storage Platform
Open-3      2.4 GB                           2.4 GB
Open-8      7.3 GB                           7.3 GB
Open-9      7.4 GB                           7.4 GB
Open-E      14.5 GB                          14.5 GB
Open-L      33.94 GB                         33.94 GB
Open-M      43.94 GB                         N/A
Open-V      Custom Size                      Custom Size
- Open-V (Open Virtual) is the recommended emulation
- Any size volume can be created with Open-V (min 46 MB, max 737,256 MB)
The table above lists the emulation types and the sizes of the LDEVs that are created when each emulation type is applied.
Control Unit
- Control Unit (CU)
  - Needed to address an LDEV
  - A Logical Unit is identified by its CU:LDEV address
  - A single CU can address 256 LDEVs
  - It is not a piece of hardware
- Universal Storage Platform and Network Storage Controller 55
  - 64 Control Units, each with a 256-LDEV range (00-ff)
  - Allows for 16,384 volumes (64 x 256)
- Hitachi Lightning 9980V™ multi-cabinet enterprise storage system
  - 32 Control Units: allows for 8,192 volumes (32 x 256)
- Hitachi Lightning 9960™ storage system
  - 16 Control Units: allows for 4,096 volumes (16 x 256)
The Lightning 9900 Series systems support up to 16 CUs, and the Lightning 9900 V Series system supports up to 32 CUs. A Control Unit can address up to 256 volumes.
A Control Unit is assigned to a group of LDEVs (maximum of 256). The CU:LDEV combination creates a volume; the Control Unit acts as a logical identifier for the LDEV. Using Control Units allows assigning up to 256 LDEVs (0-255) to each CU. The Lightning 9900 Series systems have a total of 16 Control Units, numbered 0 to f (hex), for a total of 4,096 volumes; the Lightning 9900 V Series system can address 8,192 volumes.
CUs can span domains.
Volumes
LDEV + CU = Logical Unit
Example CU:LDEV address: 00:0F
The terms LDEV, Logical Unit/Volume, and LUN must not be used interchangeably.
A Logical Unit/Volume is created by assigning a CU to an LDEV (CU:LDEV). A Logical Unit/Volume is the collective logical representation of the CU plus the logical device number.
The range of valid CU-to-LDEV assignments includes CUs 0x00 through 0x3F (0-63) and LDEVs 0x00 through 0xFF (0-255), as illustrated in the sketch below.
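A minimal sketch of parsing and range-checking a CU:LDEV address under these limits:

    # Sketch: validating a CU:LDEV volume address on the Universal Storage
    # Platform (64 CUs of 256 LDEVs each, 16,384 volumes total).

    MAX_CU, MAX_LDEV = 0x40, 0x100   # 64 CUs, 256 LDEVs per CU

    def parse_cu_ldev(address: str) -> tuple[int, int]:
        """Parse 'CU:LDEV' hex notation such as '00:0F' and range-check it."""
        cu_str, ldev_str = address.split(":")
        cu, ldev = int(cu_str, 16), int(ldev_str, 16)
        if not (0 <= cu < MAX_CU and 0 <= ldev < MAX_LDEV):
            raise ValueError(f"{address} is out of range")
        return cu, ldev

    assert parse_cu_ldev("00:0F") == (0, 15)
    assert parse_cu_ldev("3F:FF") == (63, 255)   # highest valid volume
    assert MAX_CU * MAX_LDEV == 16_384           # total addressable volumes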
VLL
- Virtual Logical Volume Image / Logical Unit Number
  - Creates LDEVs of varying size
- VLL operations overview:
  - Converting logical volumes to space areas (free space)
  - Creating a CV (Custom Volume)
  - Deleting a CV
  - Initializing CV-created volumes (back into initial volumes)
  - Make Volume (OPEN-V)
  - Viewing concatenated parity groups
The Volume-to-Space and Volume Initialize operations destroy existing data. The data on logical volumes to be converted or initialized is lost when the Volume-to-Space or Volume Initialize operation is completed. To protect data, be sure to make a backup copy of the existing data before starting these operations.
VLL Functions
- Virtual Logical Volume Image / Logical Unit Number
- VLL for OPEN-Vs: Make Volume and Install CV (Custom Volume)

The Make Volume function deletes the existing LDEVs and allows you to create CVs: delete all the LDEVs to create free space, then create Custom Volumes. The Install CV function allows you to create additional CVs from the remaining free space, and Delete Custom Volume returns a CV to free space.

[Diagram: An Open-V VDEV in its initial condition, after Make Volume (all LDEVs deleted to free space), and after the creation of CVs using the Install CV function.]
Make Volume deletes all the volumes and allows creation of new LDEVs in the selected parity group.
LUSE Overview
- Purpose of LUSE
  - A LUSE is created by combining back-end logical volumes (LDEVs) and then mapping the LUSE to a host port as a LUN
  - Provides users with larger volumes
  - Improves performance by striping data over multiple volumes and RAID Groups
The LUSE function is applied to open-system logical volumes and enables the creation of one large logical volume by combining several smaller LDEVs. This function enables a host with a limited number of LDEVs per Fibre port to access a greater amount of data with fewer LDEVs. A range of 2 to 36 LDEVs can be unified. Be sure to back up your data before LUSE operations.
LUSE Specifications
- Combining TagmaStore Universal Storage Platform volumes and external volumes into the same LUSE volume is not supported
- Combining Command Devices, Just In Time volumes, and Volume Migration volumes into a LUSE is not supported
- Normal or VLL (custom volume size) LDEVs:
  - LDEVs must be of the same size
  - LDEVs must be of the same emulation
  - A LUSE cannot combine VLL and normal volumes
- LDEVs to be combined into LUSE volumes must have no assigned SCSI paths and must be unmounted from the host
- The process is destructive (back up data before combining)
- The maximum capacity is 60TB
Additional restrictions:
- Combining non-sequential LDEVs into a LUSE is supported.
- Combining normal volumes and LUSE volumes into the same LUSE volume, and combining existing LUSE volumes into another LUSE volume, are supported.
- Combining Virtual LVI/LUN volumes into a LUSE is supported, provided they are all of the same size and emulation type. The order of operations is important: you must first create one or more Virtual LVI/LUN volumes, and then combine those VLL volumes into a LUSE volume.
- You cannot perform Virtual LVI/LUN operations on an existing LUSE volume because a LUSE volume must have a SCSI path already specified.
- TrueCopy-z/OS®, Hitachi TrueCopy™ Remote Replication software, ShadowImage-z/OS®, and Hitachi ShadowImage™ In-System Replication software pair volumes cannot be targets of LUSE operations because a LUSE volume must have a SCSI path already specified.
- Combining RAID 1 and RAID 5 volumes into the same LUSE is not recommended.
- Combining emulation types (OPEN-3, OPEN-8, OPEN-9, OPEN-E, OPEN-L, or OPEN-V) into the same LUSE is not supported.
- Combining LUSE volumes into larger LUSE volumes is not supported.
- Some operating systems may experience slow disk access times with large logical units, particularly if they contain a large number of high-usage files. The size of a LUSE can affect the amount of time required to perform backups. The maximum supported capacity is 60 TB.
- LDEVs combined into a LUSE volume must have the same IO suppression mode and cache mode settings.
- LDEVs combined into a LUSE volume must have the same drive mode (either all SATA or all non-SATA).
- You may not change the capacity of an existing LUSE volume. If you want a LUSE volume to have a different capacity, you must release the LUSE volume and then re-define it.
(A rule-checking sketch follows.)
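A sketch that encodes the main combination rules above as a single check; the LDEV record fields are illustrative stand-ins for the real configuration attributes:

    # Sketch: checking a candidate set of LDEVs against the LUSE rules above
    # (2-36 LDEVs, same size, same emulation, same drive mode, no SCSI paths).

    def can_combine_into_luse(ldevs: list[dict]) -> bool:
        if not 2 <= len(ldevs) <= 36:
            return False
        first = ldevs[0]
        for ldev in ldevs:
            if ldev["size_mb"] != first["size_mb"]:
                return False                  # must be the same size
            if ldev["emulation"] != first["emulation"]:
                return False                  # must be the same emulation
            if ldev["drive_mode"] != first["drive_mode"]:
                return False                  # all SATA or all non-SATA
            if ldev["has_scsi_path"]:
                return False                  # no assigned paths allowed
        return True

    a = {"size_mb": 102400, "emulation": "OPEN-V",
         "drive_mode": "FC", "has_scsi_path": False}
    b = dict(a)
    assert can_combine_into_luse([a, b])
    assert not can_combine_into_luse([a])     # a single LDEV is not a LUSE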
LUN Manager Overview
- Create LU Path
  - LUN mapping: map a volume to a CHA port
  - Assign a LUN number
- LUN/Volume Security
  - Create a Host Group
  - Place hosts in a Host Group
- Volumes (CU:LDEV) must be mapped to array port(s) before a host can see or use them
- LUN Manager provides this facility
  - The process is called LUN mapping or volume mapping ("Create LU Path")
  - Maps a volume (CU:LDEV) to a FED port
  - Adds a LUN address
- No-single-point-of-failure (NSPF) configurations
  - Map the volume to more than one FED port, in different clusters

To map LUNs/volumes to ports (see the sketch below):
- Determine which volumes need to be accessed by a host
- Decide which ports the volume will be mapped to
- Determine which LUN numbers are available
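A minimal sketch of the mapping relationship, with illustrative port and volume names; note how the NSPF example maps the same volume through ports on both clusters:

    # Sketch of the LUN-mapping relationship described above: a volume
    # (CU:LDEV) mapped to FED ports under LUN numbers.

    lu_paths: dict[tuple[str, int], str] = {}    # (FED port, LUN) -> CU:LDEV

    def create_lu_path(port: str, lun: int, cu_ldev: str) -> None:
        """Map a volume to a FED port under a LUN number ('Create LU Path')."""
        if (port, lun) in lu_paths:
            raise ValueError(f"LUN {lun} already assigned on port {port}")
        lu_paths[(port, lun)] = cu_ldev

    # NSPF: map the same volume through ports in different clusters.
    create_lu_path("CL1-A", 0, "01:05")
    create_lu_path("CL2-A", 0, "01:05")
    print(lu_paths)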
[Diagram: LUN Security example — on port CL1-A, host group hpux-G01 maps LUN 0 to CU:LDEV 01:05 and LUN 1 to 02:01, while host group win-G02 maps LUN 0 to 02:00 and LUN 1 to 02:02.]
In the figure above, the two hosts in the hpux-G01 group can reference LUN0 and LUN1 associated with the same host group, but cannot reference LUN0 in the win-G02 host group. Therefore, the hosts in hpux-G01 can access only the two LUs, which are identified by 01:05 and 02:01; the hosts cannot access the LUs 02:00 and 02:02. The two hosts in hpux-G01 must be a cluster since they have shared access to LUNs. On the other hand, the host in the win-G02 group cannot reference LUN0 and LUN1 in hpux-G01 and therefore cannot access the LUs 01:05 and 02:01.
LUN Manager Operations
- LUN/Volume Security – why?
  - Multiple hosts have access to (share) a FED port
  - If multiple hosts have access to a LUN, they can all write to it
  - An easy way to corrupt data
- LUN/Volume Security
  - Restricts LUN access to an individual host or cluster
- LUN/Volume Security – four steps (sketched below):
  1. Find the World Wide Names (WWNs) of the host bus adapters
  2. Create a Host Group for each host or cluster
  3. Add the individual host or cluster to its Host Group
  4. Associate (map) Host Groups with LUs
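A sketch of the resulting access model, using the hpux-G01/win-G02 example from the LUN Security diagram above; the WWNs are made up:

    # Sketch of the four LUN security steps above, applied to the
    # hpux-G01 / win-G02 example. WWNs are illustrative.

    host_groups = {
        "hpux-G01": {"wwns": {"50:00:00:01", "50:00:00:02"},
                     "luns": {0: "01:05", 1: "02:01"}},
        "win-G02":  {"wwns": {"50:00:00:03"},
                     "luns": {0: "02:00", 1: "02:02"}},
    }

    def accessible_volumes(wwn: str) -> set[str]:
        """Return the CU:LDEVs a host (identified by HBA WWN) may access."""
        volumes = set()
        for group in host_groups.values():
            if wwn in group["wwns"]:
                volumes.update(group["luns"].values())
        return volumes

    assert accessible_volumes("50:00:00:01") == {"01:05", "02:01"}  # hpux host
    assert accessible_volumes("50:00:00:03") == {"02:00", "02:02"}  # win host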
Universal Storage Platform RAID Intermix
[Diagram: RAID intermix under a DKA pair — fibre ports 0-7 on DKA (CL1) and DKA (CL2) connect to rows of HDDs forming RAID5 (3D+1P) and RAID1 (2D+2D) groups of four HDDs, and RAID5 (7D+1P), RAID5 (6D+2P), and RAID1 (4D+4D) groups of eight HDDs. A RAID group extending over different DKA pairs is not allowed to exist.]
*1: n-N — DKA pair number (1st, 2nd, 3rd, or 4th) and fibre port number on the DKA pair.
*2: One RAID group consisting of four HDDs is composed of the fibre ports numbered 0, 2, 4, and 6; the other RAID group is composed of those numbered 1, 3, 5, and 7.
RAID5 (3D+1P), RAID5 (7D+1P), RAID5 (6D+2P), RAID1 (2D+2D), and RAID1 (4D+4D) can be intermixed per RAID group in the storage system.
RAID5 (3D+1P) and RAID1 (2D+2D): RAID groups in units of four HDDs. The mixture is allowed per RAID group controlled by the fibre ports with even or odd numbers under a DKA pair.
RAID5 (7D+1P/6D+2P) and RAID1 (4D+4D): RAID groups in units of eight HDDs. The mixture is allowed per RAID group controlled by the fibre ports under the same DKA pair.
You cannot create a RAID group spread over more than one DKA pair.
One RAID group consisting of the four HDDs is composed of the fibre ports with the numbers, 0, 2, 4, and 6. The other RAID group is composed of the fibre ports with the numbers, 1, 3, 5, and 7.
HDD intermix is allowed under a DKA pair per RAID group but not within a RAID group.
Device emulation intermix is allowed under a DKA pair per RAID group but not within a RAID group.
Module Review
1. What does the HDU Box contain?
2. How does a DKA connect to the HDU Box in R2?
3. How many BED Features does the High Speed Disk Access Model contain?
4. What is the recommended emulation type for open systems?
5. A Parity Group is created using disks in the same _________ located in the same ___________. (Fill in the two blanks)
6. What is the difference between RAID 6 and RAID 5?
7. When using Open-V emulation, the logical devices are of what size?
8. How is a Logical Unit uniquely identified?
9. A Block of Four is made up of ________ _______ that are located in the same ___________. (Fill in the three blanks)
10. What does VLL do?
11. What does LUSE do?
12. Can a BED Feature support RAID 10 (4D-4D) and RAID 6?
13. Can a RAID 5 (7D+1P) span two BED Features?
14. Using LUN Manager, what must be done to enable NSPF configurations?
15. Bonus: A customer has an application with a high level of I/O. The application uses four Open-V LDEVs of 200GB each and is bottlenecked by disk I/O. The customer has just installed a number of new disks. What can they do to improve application performance without consuming more raw disk space?
5. Hitachi Resource Manager™ Utility Package
Module Objectives
Upon completion of this module, the learner should be able to:
- State the functions of Storage Navigator
- Explain how Storage Navigator connects to the Hitachi TagmaStore™ Universal Storage Platform
- Understand how to perform LUN management and implement LUN security
Components
- The local storage resource manager provides management for a single Hitachi storage system, with as many as six functions:
  - Hitachi Storage Navigator (the main component)
  - Hitachi Graph-Track™ performance monitor feature
  - Hitachi Cache Residency Manager feature (formerly FlashAccess)
  - Hitachi Volume Security software (formerly SANtinel)
  - Hitachi LUN Manager
- Licensing for the Hitachi Resource Manager™ utility package is based on the total raw capacity of internal and external storage
Hitachi Storage Navigator Overview
- Tool to manage the Universal Storage Platform and Lightning-class storage systems
  - Provides system configuration and status information
- Management tools:
  - LUN Management: create/modify/delete Logical Units (LUs), LU paths (map LUs to ports), VLL, and LUSE
  - Secure LUN: create/delete Host Groups; add LUs and hosts to Host Groups
  - Configure FC ports
- Can connect to and manage multiple storage systems, one array at a time
[Diagram: Storage Navigator computers on a management LAN connected to the SVPs of multiple Universal Storage Platform systems.]
The Universal Storage Platform Storage Navigator consists of a group of Java™ applet programs that enable the user to manage the Universal Storage Platform system. The Storage Navigator Java applet programs run on a web browser to provide a user-friendly interface for the Universal Storage Platform web client functions. The Universal Storage Platform service processor (SVP) is the computer inside the subsystem that functions as a web server. The SVP is also used by Hitachi Data Systems representatives to perform maintenance. The Storage Navigator computer functions as a web client. Each time you log onto the Storage Navigator computer and connect to the SVP, a Java applet program is downloaded from the SVP to the Storage Navigator computer.
The Storage Navigator software communicates directly with the Universal Storage Platform system via a local-area network (LAN) to obtain subsystem configuration and status information, and send user-requested commands to the subsystem.
The Storage Navigator software displays the detailed subsystem information, and allows you to configure and perform operations on the Universal Storage Platform system.
The Universal Storage Platform SVP is connected to two LANs. The internal LAN is a private LAN that is used to connect the SVP to the Universal Storage Platform. You should have a secure Management LAN which allows you to access one or more SVPs from individual Storage Navigator computers. This configuration allows you to easily access and control the registered Universal Storage Platform systems. In a SAN environment, where several systems may be connected together, you must designate a primary SVP, which can be either an SVP connected to a Universal Storage Platform system, or a web server with the exact same configuration as an SVP.
- Important terms and concepts
  - Java Applet Program: Storage Navigator is provided as a Java applet program
  - Remote Method Invocation (RMI) Object: the Storage Navigator PC calls the methods of Java objects for Storage Navigator operations from the SVP RMI server
  - Storage Device List: includes storage device information such as device name, IP address, and device location
  - User Account List: includes user information such as user ID, password, and write permission for each option
  - Supported operating systems: Windows, Solaris, IRIX, HP-UX, and Red Hat Linux
Java Applet Program: The Universal Storage Platform Storage Navigator is provided as a Java applet program. A Java applet program can execute on any machine that supports a Java Virtual Machine (JVM). The Storage Navigator PC hosts the Java applet program and is attached to the Universal Storage Platform system(s) via a TCP/IP local-area network (LAN). When a Storage Navigator PC user accesses and logs into the desired SVP, the Storage Navigator Java applet is downloaded from the SVP into the Web browser on the Storage Navigator PC.
Remote Method Invocation (RMI) Object: Remote Method Invocation is a remote procedure call that allows Java objects stored in the network to be run remotely. In the Storage Navigator network environment, the Storage Navigator PC calls the methods of Java objects for Storage Navigator operations from the SVP (RMI server).

Storage Device List: Registered storage devices are listed in the storage device list stored in the main SVP. The list includes storage device information such as device name, IP address, and device location. You can display the storage device list on your web browser by accessing the following URL:
http://<IP address of the main SVP>
Note: The Storage Navigator administrator must register the main SVP and storage devices in the storage device list.
User Account List: Registered users of a specific subsystem are listed in the user account list stored in that subsystem's SVP. The list includes user information such as user ID, password, and access permission for each option.
Connecting to the Storage Navigator
User logon:
- Start a web browser on the Storage Navigator PC
- To open the Storage Device List panel, specify the URL of the Main SVP as follows: http://xxx.xxx.xxx.xxx
- Click the hyperlink of the SVP you want to log on to
- Enter the user ID and password
The Storage Navigator user or administrator must add desired subsystems in the storage device list on the specific SVP called Main SVP (Web Server), before logging on to any SVP. The Main SVP can be any SVP connected to a Universal Storage Platform system.
To select a desired SVP from the storage device list:
1. Start a web browser (Internet Explorer or Netscape Navigator) on the Storage Navigator PC.
2. To open the Storage Device List panel, specify the URL of the Main SVP as follows: http://xxx.xxx.xxx.xxx
3. Click the hyperlink (shown as the IP address number or NetBIOS name) of the SVP you want to log onto. The Logon panel appears.
4. Enter the user ID and password. Note: the default user ID/password is root.
5. Click the OK button. If you log on to the selected SVP successfully, the Storage Navigator main panel opens.

Note: You can also enter the IP address directly and you will be taken to the Storage Navigator main panel. (A minimal scripted sketch of fetching this panel follows.)
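Since the Storage Device List panel is served over plain HTTP, it can in principle be fetched by a script as well as a browser. A minimal standard-library sketch, assuming a reachable Main SVP (the address is a placeholder):

    # Sketch: opening the Storage Device List URL from a script instead of
    # a browser. Only standard-library calls; the address is a placeholder.
    from urllib.request import urlopen

    MAIN_SVP = "http://192.0.2.10"   # replace with the Main SVP's IP address

    def fetch_storage_device_list(url: str = MAIN_SVP) -> str:
        """Return the HTML of the Storage Device List panel."""
        with urlopen(url, timeout=10) as response:
            return response.read().decode("utf-8", errors="replace")

    # html = fetch_storage_device_list()   # uncomment on a management LAN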
Storage Navigator Panels
System Panel
The System tab displays the equipment information and the port information of the connected subsystem.
Port Status shows the port information using the image of the ports mounted on the subsystem.
Cluster-1 and Cluster-2 indicate the clusters. The ports of the Cluster-1 are listed on the upper part, and the ports of the Cluster-2 are listed on the lower part of the Port Status.
Each cluster has eight PCBs (Printed Circuit Board) and the name of each PCB is displayed on the header of the port list. The icons show the port name, port LED status, and equipment information of port.
- The yellow ovals (1E and 1F) indicate host connectivity.
- The black ports (1A, 5A, etc.) are available for host connectivity.
- The grayed-out ports (1C, 3A, etc.) are not available for host connectivity.

The Base Information includes:
- Product Name: product name of the connected subsystem
- Serial Number: serial number of the connected subsystem
- IP Address: IP address of the connected subsystem (SVP)
- Main FW Version: version of the microprogram installed in the connected subsystem
- SVP Version: version of the Storage Navigator Java applet program installed in the SVP of the connected subsystem
- RMI Server Version: version of the RMI server installed in the SVP of the connected subsystem
Status Panel (a detail window appears when you right-click an entry and select Detail)
The Status tab of the Storage Navigator main panel shows detailed information on the internal status of the connected subsystem, including the service information message (SIM) information that is being reported to the host. There are five SIM severity levels: good, service, moderate, serious, and acute.
The Status tab has the following features:
- Ref. Code displays the SIM reference code.
- Error Level displays the error level: Good, Service, Moderate, Serious, or Acute.
- Status displays either Complete (if the SIM has been deleted from the SVP) or Not Complete (if the error has not been deleted).
- Date displays the date the SIM occurred.
- Total Rows displays the total number of rows listed.

Right-clicking an entry shows the details of the SIM.
LUN Manager Operation Example
Creating Host Groups:
1. Right-click the port and select the LUN Security: Disable → Enable item from the pop-up menu. (This step is needed only if more than one host group must be created on one port, and only once per port.)
2. Right-click the port and select the Add New Host Group item from the pop-up menu.
3. Enter the Host Group name.
4. Select a Host Mode from the list.
5. Click OK.
6. Click Apply.
Registering Hosts in Host Groups:
1. Right-click the Host Group and select the Add New WWN item from the pop-up menu.
2. Select the desired WWN, or directly enter the WWN if it is not in the list.
3. If necessary, enter a name for human-reading purposes (maximum 16 characters).
4. Click OK.
5. Click Apply.
Associating Host Groups (mapping) with Logical Devices:
1. Select the Host Group.
2. Select the CU number to which the target LDEV belongs from the CU drop-down list.
3. Drag the desired LDEV(s) to the target LUN in the LU Path table; the target LUN must be unassigned. Note: if multiple LDEVs are dropped together, all of them are mapped to unassigned numbers automatically, which considerably reduces the number of operation steps. The Add LU Path button can also be used to add multiple LDEVs as LUNs.
4. Click OK at the confirmation message.
5. Click Apply.
LUN Management lets you define LU paths by associating host groups with logical volumes. For example, if you associate a group of three hosts with logical volumes, LU paths are defined between the three hosts and the logical volumes.
To define LU paths:
1. In the tree view, select one of your host groups.
2. From the CU drop-down list above the LDEV table, select a CU number.
3. The LDEV table lists the LDEVs in the specified CU image.
4. Select one or more logical volumes in the LDEV table using left-click (start) and shift-click (end).
5. Drag the volumes to the LU Path table, located above the LDEV table, using ctrl-click, then ctrl-click/hold/drag.
6. Drop the selection at the beginning of the LUNs you want to map in the LU Path table. A message appears, displaying information about the LU paths to apply.
7. Click OK to close the message.
8. The settings are reflected in the LU Path table, but are not applied to the disk subsystem yet.
9. Click Apply in the Storage Navigator main panel.
The settings are applied to the disk subsystem and the LU paths are defined.
Other Resource Manager Components
- Graph-Track feature
  - Monitors hardware performance and supplies complete system storage information through a graphical interface
- Cache Residency Manager feature
  - Allows users to "lock" and "unlock" data into cache in real time
  - Primarily used in mainframe environments
- Hitachi LUN Manager software
  - Streamlines configuration management processes
  - Can assign multiple paths to a single LUN
- Volume Security software
  - Enables configuring LUN Security
  - Included in LUN Manager for Universal Storage Platform and Lightning 9900 V Series systems
Graph-Track™ performance monitor offers an efficient, reliable and centralized way to manage performance. This unique tool monitors hardware performance and supplies complete system storage information through a graphical interface.
Cache Residency Manager feature (formerly FlashAccess) allows users to "lock" and "unlock" data into cache in real time to optimize access to your most frequently accessed data.
LUN Manager software streamlines configuration management processes by enabling you to define, configure, add, delete, revise and reassign LUNs to specific paths without having to re-boot your system. Because LUN Manager can assign multiple paths to a single LUN, you gain the necessary infrastructure to support alternative path failover, path load balancing and clustered systems.
Volume Security software (formerly SANtinel) helps ensure security in storage area networking environments through restricted server access. With it, you can deny access to unauthorized users and safeguard your mission-critical information.
Module Review
1. Which component of the Resource Manager feature provides the ability to connect to and manage the Universal Storage Platform and Network Storage Controller platforms?
2. How is licensing allocated for the Resource Manager feature?
3. Which component of the Resource Manager feature monitors storage system performance?
4. When implementing the Resource Manager feature, which operating systems are supported for the Resource Manager server?
5. How does Resource Manager connect to the Universal Storage Platform, and what protocols does it use?
6. What does the Universal Storage Platform use to identify hosts for LUN security?
7. What are the four steps in defining LU Paths?
8. How do the Universal Storage Platform and Network Storage Controller support heterogeneous host access to front-end host ports?
6. Hitachi NAS Blade for TagmaStore™ Universal Storage Platform
Module Objectives
Upon completion of this module, the learner should be able to:
- State the purpose and benefits of using Hitachi NAS Blade for the Hitachi TagmaStore™ Universal Storage Platform
- Explain the concept of embedded NAS
- Identify the hardware components of the NAS Blade
- Identify the software components of the NAS Blade
- List the backup and restore features
NAS Server Blade
[Diagram: File access versus block access — UNIX and WinTel clients reach UNIX/WinTel servers over an Ethernet LAN using file-access protocols (NFS, CIFS, FTP, HTTP), while the servers reach JBOD/RAID storage over a storage network using block access (parallel/serial SCSI). Typical server roles include data sharing, application, web, exchange, print, backup, database, terminal, security, user management, and virus scanning.]
Location of NAS Server Blade Feature
[Diagram: Up to four NAS Server Blade features installed in the Universal Storage Platform, with their Ethernet ports connected to separate subnets (Subnet 1 through Subnet 4).]
A maximum of four NAS Blade features (four pairs of NAS adapters) can be installed in the Universal Storage Platform. A feature includes two PCBs (printed circuit boards). Each adapter has four Ethernet ports, giving eight Ethernet ports per NAS feature.
NAS Blade Software Components
- NAS OS
  - NAS OS – Data Control
    - Proprietary; integrates with the array
  - NAS OS – File Sharing
    - Built on Debian GNU/Linux
    - Bundled with Samba for CIFS
    - Provides the following protocols for client access: CIFS server, NFS server, FTP server
Standard:
Hitachi Network Attached Storage/Base – Data Control (standard): a proprietary product that links the CHN and disk subsystem. It performs as a driver to process data requests (read/write, etc.) from the kernel to the disk subsystem. This is a kernel-internal process, and a user does not have access to it.
Hitachi Network Attached Storage/Base – File Sharing (standard): complies with Open Source licenses such as the GPL. It includes the kernel, file system, commands, etc., to provide file-sharing and basic NAS management functions. It:
- Processes requests from NFS/CIFS clients to provide the file-sharing function
- Provides WWW server functions that make NAS management functions accessible from a web browser on an administrator's console
- Provides kernel and command functions to execute NAS management
- Provides failover functions between CHNs
- Provides an installation function for software that runs on the CHN

Hitachi Network Attached Storage/Management (standard): a proprietary product that provides NAS management functions. It provides a browser-based GUI that a NAS administrator accesses via HTTPS; the GUI calls NAS management processing to realize the NAS management functions. It also provides installation functions (from the SVP) for software that runs on the CHN, and a single sign-on function with Hitachi HiCommand® Device Manager software.

Software: the NAS Blade is built on Debian GNU/Linux, a Linux implementation tuned by Hitachi Data Systems. The current Debian release is "woody" (GNU/Linux 3.0), with Linux kernel 2.2.20. To support CIFS, the NAS Blade is bundled with Samba.
NAS Management
- NAS/Management Base
  - Provides the features for setup, operation, and control of the NAS Blade system:
    - HiXFS
    - LVM
    - Failover
    - Installer
    - Web server
    - SSH server
    - NTP server
    - SNMP agent
    - DNS client
    - NIS client
Optional:
Hitachi Network Attached Storage/Backup Restore (optional): a proprietary product that provides and manages backup/restore functions. It provides a browser-based GUI that gives a NAS administrator access via HTTPS; the GUI calls Backup/Restore processing for backup/restore functions. The NDMP server collaborates with other vendors' NDMP backup software.
Hitachi Network Attached Storage/Anti Virus Agent (optional): a proprietary agent that provides a virus-check function via collaboration with an external virus scan server. The customer-purchased scan engine is sold and supported by Symantec; the initial release supports Symantec's enterprise-class anti-virus software solution. Support for McAfee and Trend Micro anti-virus software is also planned for a future release, but no timetable has been set.
NAS Blade File System
The NAS Blade file system is HiXFS.

[Diagram: NAS Blade file system stack — CIFS, NFS, and FTP clients on the network reach the Linux-based CHNs (active on CL1, passive on CL2) in the Universal Storage Platform; the LVM builds logical volumes from LDEVs on RAID groups, and each logical volume contains one file system presented at a UNIX mount point. Note: a logical volume can contain more than one LDEV. FTP support requires a version later than 3.1.]
File System - HiXFS
NAS/Management uses HiXFS to build hierarchical file systems that assure fast file access and high reliability. NAS/Management provides the following functionality for file system management:
- Creating file systems
- Extending file systems
- Mounting and unmounting file systems
High Availability Cluster Architecture
- Non-stop application service
- Automatic failover between CHNs
- Failover between two arrays with Hitachi TrueCopy™ Remote Replication software (manual or scripted)
- Common middleware integration between SAN and NAS
[Diagram: High-availability cluster architecture — two CHNs in a TagmaStore™ system monitor each other over a heartbeat; on failure, one CHN takes over the other's network information (IP address), file system information (NFS export information, Samba configuration, mount information), and Linux driver information (device names, LVM information). TrueCopy replicates the file systems to a second array for manual or scripted failover between arrays.]
User Authentication and Name Resolution
- Windows
  - NAS Blade can join an Active Directory domain natively
  - Kerberos authentication is supported
  - Mixed-mode and native-mode domains are supported
  - CIFS service continues when a CHN fails over
- UNIX
  - Supports NIS
  - Supports DNS
  - Supports an external LDAP server
NAS Optional Products (PP)
- Optional products that run on the NAS OS:
  - Backup/Restore – provides snapshot and backup/restore features
  - SyncImage – adds the ability to take incremental snapshots (bundled); up to 124 SyncImage snapshots
  - AntiVirus – enables connectivity to an anti-virus scan engine over the network
Backup and Restore
- NAS Server Blade supports the following backup functions:
  - Backup/restore using the Network Data Management Protocol (NDMP): file system backup (via LAN)
  - LAN-free backup/restore with VERITAS NetBackup: volume backup (via SAN, to be confirmed)
  - Fully integrated snapshots with Hitachi ShadowImage™ In-System Replication software
    - Creates a point-in-time copy of data
    - File system freeze: backup data image
  - SyncImage snapshots
    - Multiple generations of snapshots
    - One "differential volume" for all changes (a copy-on-write pool; see the sketch below)
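The copy-on-write pool behind SyncImage can be illustrated in a few lines of Python. This is a conceptual sketch only (blocks as dictionary entries), not the NAS Blade's actual on-disk format:

    # Conceptual copy-on-write snapshot: before a block is overwritten, its
    # old contents are preserved in a shared differential pool, so many
    # snapshot generations can share one pool.

    class CowVolume:
        def __init__(self, blocks: dict[int, bytes]):
            self.blocks = blocks
            self.snapshots: list[dict[int, bytes]] = []   # differential pool

        def snapshot(self) -> int:
            """Start a new snapshot generation; returns its index."""
            self.snapshots.append({})
            return len(self.snapshots) - 1

        def write(self, block: int, data: bytes) -> None:
            # Copy-on-write: save the old block into every open snapshot
            # that has not yet preserved it, then overwrite in place.
            for diff in self.snapshots:
                diff.setdefault(block, self.blocks.get(block, b""))
            self.blocks[block] = data

        def read_snapshot(self, snap: int, block: int) -> bytes:
            diff = self.snapshots[snap]
            return diff[block] if block in diff else self.blocks[block]

    vol = CowVolume({0: b"old"})
    s0 = vol.snapshot()
    vol.write(0, b"new")
    assert vol.read_snapshot(s0, 0) == b"old" and vol.blocks[0] == b"new"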
Module Review
1. Which protocols does the NAS Blade support?
2. What is the maximum number of NAS Blade Features that can be installed in a model USP1100?
3. Which operating systems support on-line expansion of HDS NAS volumes?
4. How does the NAS Blade provide user authentication in a Windows environment?
5. Which two replication tools does the NAS Blade support for local-system PIT copies? How many PITs can be created with each?
6. Which operating system controls the processor on the NAS Blades?
7. Which file system is used in the NAS Blade?
8. Bonus: A model USP1100 has four NAS Blade Features installed. Can the USP be configured to support a NAS Blade DR configuration? If so, how, and what tool does it use?
7. Hitachi Dynamic Link Manager™ Path Manager Software
Module Objectives
Upon completion of this module, the learner should be able to:
- Describe the purpose and benefits of Hitachi Dynamic Link Manager™ path manager software
- Describe the architecture of the Dynamic Link Manager software
- Identify key features and functions of the Dynamic Link Manager software
- Identify and use the Dynamic Link Manager software Graphical User Interface (GUI) screens
Overview
- Purpose of Dynamic Link Manager software
  - Server-based software that provides path failover and load balancing capabilities
  - Provides:
    - Support for Fibre Channel connectivity
    - Automatic path discovery, which supports a SAN environment
    - Automatic path failover and failback
    - Two applications: Command Line Interface (CLI) and GUI
    - Support for Hitachi storage systems, including the Hitachi TagmaStore™ Universal Storage Platform
  - Integrates with Hitachi HiCommand® Suite software
Dynamic Link Manager software:
- Is a family of Hitachi-provided, server-based middleware software utilities
- Enhances the availability of RAID systems by providing automatic error recovery and path failover for server-to-RAID connection failures
- Provides load balancing in addition to path failover, by redirecting I/O activity to the least busy path using complex algorithms
Just because a system is RAID-protected does not mean it is protected against connection bus failures, which is why Dynamic Link Manager software is required for true nonstop operations. Dynamic Link Manager allows system administrators to take advantage of multiple paths by adding redundant connections between data servers and RAID systems, providing increased reliability and performance. Supported platforms include IBM® AIX®, Sun Solaris®, Microsoft Windows NT, and Microsoft Windows 2000.
- Benefits
  - Provides load balancing across multiple paths
  - Utilizes the hardware's ability to provide multiple paths to the same device (up to 64 paths)
  - Provides failover protection by switching to a good path if a path fails
Dynamic Link Manager software automatically provides path failover and load balancing for open systems.
Features
Maximum Physical Paths       2048
Maximum Paths per LUN        64
Failover                     Yes
Failback                     Yes
GUI                          Yes
CLI                          Yes
Load Balance                 Round-Robin, Extended Round-Robin
Supported Cluster Software   MSCS, VCS, Sun Cluster, HACMP, MC/SG
Supported Platforms          NT, W2K, XP, Sun, AIX, HP...
Connectivity                 SCSI, FC
- Functions
  - Removes the HBA as a single point of failure
  - Automatically detects a failed path and reroutes I/O to an alternate path
  - Automatically discovers HBAs and LUNs in a SAN environment
  - Supports up to 256 LUNs and 64 paths to each LUN
  - Uses round-robin or extended round-robin to balance I/Os across available paths
  - Provides tools to control and display path status
  - Supports the most popular cluster technologies
  - Supports HBA vendor drivers and standard open drivers
  - GUI and CLI support
  - Error logging capability
Dynamic Link Manager software eliminates the server's host bus adapter as a single point of failure in an open environment. The strength of Dynamic Link Manager software is its ability to configure itself automatically. It is designed to enhance the operating system by putting all alternate paths offline in Disk Administrator, and it functions equally well in both SCSI and Fibre Channel environments. Dynamic Link Manager supports the most popular cluster technologies, such as HACMP, MSCS, MC/ServiceGuard, Sun Cluster, and VERITAS Cluster Server™. It has GUI/CLI support for configuration management and performance monitoring, and supports management and authentication of user IDs using the HiCommand facility.
Features
- Load balancing: Dynamic Link Manager software distributes storage accesses across the multiple paths and improves I/O performance

[Diagram: Without load balancing, a regular driver funnels all application I/O to the volumes down one path, creating an I/O bottleneck at the server; with HDLM load balancing, I/O is distributed across all available paths to the storage.]
When there is more than one path to a device within an LU, Dynamic Link Manager software can distribute the load across those paths when issuing I/O commands. Load balancing prevents a heavily loaded path from affecting the performance of the entire system.
- Dynamic Link Manager features the following two types of load balancing:
  - Round robin: distributes all I/Os among the multiple paths
  - Extended round robin: distributes I/Os to paths depending on the type of I/O
    - For sequential access, a single path is used when issuing an I/O
    - For random access, I/Os are distributed across multiple paths
When multiple applications that request sequential access are run concurrently, we recommend that you use the round robin algorithm in order to distribute I/Os across multiple paths.
When you execute only a single application that requests sequential access, such as a batch job running at night, we recommend that you use the extended round robin algorithm. The recommended algorithm depends on the type of applications and the operations policy.
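The algorithm is selected with the HDLM CLI's set operation. A minimal sketch (the -lb and -lbtype option names follow the dlnkmgr set syntax; verify the exact values against your HDLM version):

   # Use round robin when several sequential-access applications run concurrently
   dlnkmgr set -lb on -lbtype rr

   # Use extended round robin for a single sequential job, such as a nightly batch
   dlnkmgr set -lb on -lbtype exrr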
Failover
– Dynamic Link Manager software provides continuous storage access and high availability by failing over to a standby path when an active path fails
[Figure: Simple failover: when the active path between server and volume fails, HDLM switches I/O to the standby path. With load balancing: a path failure simply reduces the number of balanced paths while I/O continues on the remaining paths.]
The failover function automatically places the failed path offline to allow the system to continue to operate using another online path.

Trigger error levels: Error, Critical.

The online command returns a path to service; the offline command is used to force path switching.
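Path switching can also be forced manually from the CLI. A minimal sketch (the path ID shown is hypothetical; take real IDs from the output of dlnkmgr view -path):

   # Force I/O off a specific path
   dlnkmgr offline -pathid 000001

   # Return the path to service after repair
   dlnkmgr online -pathid 000001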
Hitachi Dynamic Link Manager™ Path Manager Software GUI Interface
GUI Interface
Options Window
– Dynamic Link Manager version
– Basic function settings:
  • Load balancing
  • Path health checking
  • Auto failback
  • Intermittent error monitor
  • Reservation level
  • Remove LU
– Error management function settings: select the severity of log and trace levels
Path Management Window
This is the Path List window. In this example, LUNs 0, 1, 2, and 3 are available through two paths (1C and 2C); both are owner paths. Non-owner paths apply only to the Hitachi Thunder 9200™ modular storage system and Hitachi Thunder 9500 V Series modular storage systems. To clear the data from the screen, click Clear Data. To export the data to a CSV file, click Export CSV. To set an individual path OFFLINE or ONLINE, select the path and use the Online and Offline options in the top right corner of the screen. If you select a single LUN in the tree on the left, only the paths for that LUN are displayed in the Path List on the right.
Hitachi Dynamic Link Manager™ Path Manager Software Operations
Operations
Path View Operation for the CLI
– Allows you to see information about data paths
– dlnkmgr view -path

The view operation shows two types of information: information about your data paths (-path) and information about the Dynamic Link Manager system settings (-sys).
Both of these parameters can take several different values.
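A minimal sketch of both forms (output columns vary by platform and HDLM version):

   # List every managed path with its status
   dlnkmgr view -path

   # Show system-wide settings such as load balancing and path health checking
   dlnkmgr view -sys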
Hitachi Dynamic Link Manager™ Path Manager Software Module Review
Module Review
1. A host has four LU paths to an LU. What does Dynamic Link Manager software do with the four LU paths?
2. A host has two LU paths to an LU. One of the paths fails while I/O is being transmitted. What does Dynamic Link Manager software do? What is the effect on the operating system?
3. What is a performance-enhancing feature of Dynamic Link Manager software?
4. Where is Dynamic Link Manager software installed?
5. What is the maximum number of paths per LU that Dynamic Link Manager supports?
6. Dynamic Link Manager software can be managed using which two interfaces?
7. What additional feature does extended round robin provide over round robin?
8. Hitachi HiCommand® Device Manager Software
Hitachi HiCommand® Device Manager Software Module Objectives
Module Objectives
Upon completion of this module, the learner should be able to:
• State the purpose and the advantages of the Hitachi HiCommand® Suite and Hitachi HiCommand® Device Manager software
• Explain the general architecture of Device Manager software
• Name the different components of Device Manager software
• Describe the storage management features provided by Device Manager software
• State the purpose and advantages of using the CLI properties file
• Describe the CLI commands and their purpose and functionality
Hitachi HiCommand® Device Manager Software HiCommand Suite 4.x Products
HiCommand Suite 4.x Products
[Figure: HiCommand Suite 4.x products by functional layer. Business application modules: Protection Manager (Exchange, SQL Server), QoS application modules (Oracle, Exchange, Sybase), QoS for file servers, SRM, Chargeback, Path Provisioning, and Global Reporter. Storage operations modules: Backup Services Manager, Tuning Manager, Tiered Storage Manager, Replication Monitor, Storage Services Manager, and Hitachi Dynamic Link Manager (path failover, failback, and load balancing). Hitachi Data Systems array services: Device Manager software (configuration, reporting, API, CIM/SMI-S, provisioning, replication), Hitachi Resource Manager, Hitachi Performance Maximizer, path management, capacity monitoring, and performance monitoring.]
This slide is an overview of the HiCommand 4.0 Suite laid out according to functional layer. Light blue modules support heterogeneous environments; dark blue modules are specific to Hitachi storage systems.

This is not a top-down dependency chart, although there are some top-down dependencies here. Rather, the rows are sorted according to the purpose and benefit each product is aimed at.

The first layer at the bottom consists of Hitachi storage system-specific modules for supporting and interfacing with Hitachi arrays to get the most out of Hitachi Data Systems storage.
The second layer is made up of products that support storage systems on an operational basis; things that make efficient and reliable management of storage possible.
The top layer consists of modules that are application specific tools to improve application-to-storage service levels.
Hitachi HiCommand® Device Manager Software Device Manager Software Centrally Manages All Tiers of Hitachi Storage
Device Manager Software Centrally Manages All Tiers of Hitachi Storage
Single console, single product for managing all tiers of Hitachi storage
– One common interface, browser and CLI
– SMI-S 1.1 enabled
– Discover, configure, monitor, report, provision
– Centrally manage Hitachi HiCommand® Replication Monitor software: Hitachi ShadowImage™ In-System Replication software, Hitachi Copy-on-Write Snapshot software, and Hitachi TrueCopy™ Remote Replication software
– Centrally manage LUN security
– Single sign-on access to Replication Monitor software, Hitachi HiCommand® Tiered Storage Manager software, HiCommand Dynamic Link Manager™ path manager software, and other Hitachi software
Device Manager software is the next level up. It manages all HDS arrays, Thunder, Lightning and Universal Storage Platform with the same interface. It can also manage multiple arrays in a network environment. It is fully path-aware, but only manages HDS arrays.
With 4.0, we added:
– The ability to configure and report on Universal Storage Platform logical partitions, to allow better tuning and reserving of storage resources and service levels on a per-application basis
– Support for array-based business continuity products (Universal Replicator)
– Faster operations on large SANs and storage arrays
– Expanded device support: HP-UX 11i v2, Red Hat Enterprise Linux AS 3.0 (IA-64), and AIX 5.3 are supported by the Device Manager software agent
– A move to a sharable common HiRDB embedded database, resulting in improved performance and scalability
Benefits
– Improved alignment with business functions using logical grouping
– Improved productivity of IT resources: integrated data center and enterprise operations; utilization of enterprise storage assets
– Risk mitigation: proactive alerts on storage arrays to prevent outages; reduced manual, error-prone storage processes; disaster recovery management to minimize downtime
Hitachi HiCommand® Device Manager Software Overview
Overview
Device Manager software is a core product for storage management of:
– Hitachi storage systems: TagmaStore™ Universal Storage Platform, Lightning 9900™ V Series enterprise storage system, Thunder 9500™ V Series modular storage systems, Lightning 9900™ Series enterprise storage system, and Thunder 9200™ modular storage systems
– Sun Microsystems Corporation StorEdge™

Device Manager software allows for:
– Users to begin proactively managing complex and heterogeneous storage environments through an easy-to-use, browser-based GUI
– Remote storage management over secure IP connections
Device Manager software enables users to consolidate storage operations and manage capacity in systems that contain multiple Hitachi storage subsystems as well as subsystems from other companies. Targeted for users managing multiple storage arrays in open or shared environments, Device Manager software quickly discovers the key configuration attributes of storage systems and allows users to begin proactively managing complex and heterogeneous storage environments quickly and effectively using an easy-to-use browser-based GUI. Device Manager software enables remote storage management over secure IP connections and does not have to be direct-attached to the storage system.
Provides easy access to the existing array configuration, monitoring, and data management features
– Can manage multiple storage subsystems
– Provides the capability to manage client connections with one or multiple Device Manager Web Clients and Agents
– Offers a Command Line Interface (CLI)

Allows users to perform array management operations such as:
– Adding/deleting storage
– Configuring volume paths and Fibre Channel ports
– Creating custom-size volumes
– Managing LUN security

Allows users to configure:
– TrueCopy software
– ShadowImage software
– Copy-on-Write Snapshot software
The Device Manager software system includes the Device Manager software server, the storage arrays connected to the server, the (optional) Device Manager software agents, and the Device Manager software clients. The Device Manager software Web Client provides a web-distributed client for real-time interaction with the storage arrays being managed.
Designed as an open framework
– Provides a set of application programming interfaces (APIs) to integrate applications from industry-leading software vendors
– Enables management of storage by user-defined hierarchical groups
– Provides subsystem alert presentation
– Allows for monitoring and displaying volume usage statistics using the Device Manager software Agent
– Offers reports for export
– Does not support SCSI ports
Device Manager software provides APIs which allow industry-leading software vendors such as Sun Microsystems, VERITAS, Microsoft, BMC, Computer Associates, and InterSAN to seamlessly integrate their applications. Users can also “plug in” existing or new applications to the Device Manager software system.
You can also configure Device Manager software to monitor and display volume usage statistics using the Device Manager software Agent (optional).
Device Manager software has a built-in report facility that compiles and presents key information in preformatted reports (HTML) and as comma-separated values for export.
Hitachi HiCommand® Device Manager Software Customer View
Customer View
Features
– License key management
– Service pack installation
– SNMP support for integration into enterprise management frameworks, or any SNMP application
– Access control
– Configuration display
– Logically grouped storage management
– Host and WWN management
– Volume (LUN) configuration
– TrueCopy software and ShadowImage software management for open systems
– Windows 2003 Virtual Disk Service (VDS) support
– Alert presentation
Hitachi HiCommand® Device Manager Software Device Manager Architecture
Device Manager Architecture
[Figure: Device Manager architecture. The HiCommand server (Windows or Solaris) communicates over TCP/IP with the managed subsystems (TagmaStore USP, Lightning 9900 and 9900 V, Thunder 9200/9500, and Sun T3) using protocols such as SNMP, FTP, HTTP, and DAMP, and serves web clients (browser HTML GUI and Java Web Start GUI on Windows NT/2000, Solaris, and AIX hosts) over HTTP and RMI. Hosts reach the storage through a Fibre Channel SAN.]
Device Manager software server:
– LAN-attached to the storage arrays; controls Device Manager software operations based on requests from the clients
– Receives information from agents
– Uses HiRDB as a repository
– Operating system: Solaris 8, Solaris 9, or Windows
HTML GUI: A browser-based application that can be accessed from web browsers.
Java GUI: A stand-alone application which is deployed using the Java Web Start software.
TCP/IP is used for communication between the Device Manager software Server and the storage subsystems.
Device Manager software clients include the web client, CLI, third-party applications.
The Java GUI provides the windows and dialog boxes for the subsystem and user management features.
The HTML GUI provides the windows and dialog boxes for functions other than the subsystem and user management features.
Hitachi HiCommand® Device Manager Software Components
Components
HiCommand Suite Common Component
– Provides features common to all HiCommand products; also included as part of other HiCommand products
– Includes:
  Single sign-on (SSO) user authentication
  Integrated common event/error logging
Single sign-on:
Integrated single sign-on is used during the link & launch operation. Once user ID and password are authenticated, they are available to all HiCommand software, so that users do not need to re-enter their user ID and password. User privileges are maintained across HiCommand products.
Common logging:
This provides a common log repository for the various logs of the HiCommand products.
Hitachi HiCommand® Device Manager Software HiCommand Suite Common Component
HiCommand Suite Common Component
The “HiCommand™ Suite Common Component” is also called “HBase” and gets silently installed as a part of the HiCommand Suite Management Server installation.
[Figure: The HiCommand Management Server hosts Device Manager, Provisioning Manager, Dynamic Link Manager (HDLM Web GUI), and Tuning Manager on top of HBase. Web clients connect to the management server, while production servers A, B, and C run Device Manager agents and attach to the SAN devices (9900, 9900-V, 9500-V).]
Hitachi HiCommand® Device Manager Software HiRDB Embedded Edition
HiRDB Embedded Edition
In version 4.0 of the HiCommand™ Suite products, the database used in HBase has been changed from InterBase to HiRDB Embedded Edition.
[Figure: HBase internals. The Device Manager, Provisioning Manager, Tuning Manager, Dynamic Link Manager Web GUI, and other product servlets, each with an SSO client, run in the Web Container Service behind the web server and servlet/GUI framework; HBase also provides the SSO server, JDK, JDBC, the HiRDB Embedded Edition database, and maintenance utilities.]
HBase provides the following infrastructure with HiCommand™ Management Server:
– GUI framework (console)
– Web server
– Web Container Service
– Single sign-on (SSO) mechanism
– HiRDB database
– JDBC
– JDK
– Maintenance utilities
– License mechanism, etc.
Hitachi HiCommand® Device Manager Software Device Manager Software Agent Support
Device Manager Software Agent Support
Supported operating systems and cluster environments:
– Solaris 8, 9, 10
– Windows NT, 2000, 2003
– AIX
– HP-UX
– IRIX
– Red Hat Enterprise Linux
– SUSE Linux Enterprise Server
– Microsoft Cluster Server
– VERITAS Cluster Server
Hitachi HiCommand® Device Manager Software Storage Management Concepts
Storage Management Concepts
– Centralized and distributed storage management
– User access control
– User-defined hierarchical group management for disk storage
Distributed Storage Management
[Figure: Administrators A, B, and C each manage their own domain (Domain A, Domain B, and Domain C).]
Centralized Storage Management
[Figure: A centralized administrator at Denver Corp. (an SSP) manages storage for customer organizations New York Corp. and San Jose Electric, including their Manufacturing and Marketing groups; each customer also retains a local administrator.]
User Access Control: Global and Local Domains
[Figure: The global domain contains its own user resources and storage resources, and each local domain contains its own. The Administrator has privileges of access to all the Device Manager resources; a Local Administrator has privileges of access to all the local resources; a Storage Administrator has privileges of access either to global storage resources or to only local storage resources.]
Hitachi HiCommand® Device Manager Software Basic Operations - Best Practices
Basic Operations - Best Practices
Add each host that will be managing storage, including WWNs
Add each host that will be managing storage (name and WWNs) to Device Manager software by either installing/running the Device Manager software Agent on each host, or by entering each host manually using the Device Manager software Web Client or CLI.
Caution: It is recommended that you add all hosts before performing a LUN Scan operation. If you do not enter the host before performing a LUN Scan, the LUN Scan will automatically create a unique host name (HOST_0, HOST_1, etc.) for each WWN found securing any LUN. This can create a significant number of hosts depending on the size of the environment.
– Set up users and access privileges
– Example of how to use logical groups
Create the logical group hierarchy for your storage groups. If desired, you can use the group hierarchy created by the LUN Scan operations, instead of creating your own logical groups, and you can reconfigure your group hierarchy as needed (e.g., add and delete groups, change the level/parent of a group).
Example of how to use logical groups
Storage groups can be nested within logical groups or at the top level as needed. A logical group cannot be nested within a storage group.
Example of how to use logical groups: For copy pair operations, you should consider creating a logical group for LUs that are reserved or pooled for copy operations. You could then create a storage group for each subsystem within the top-level logical group for copy volumes.
Add the desired user groups and users, and assign user access capabilities.
Note: To prevent unauthorized access, make sure to either change the default System Administrator login or add at least one System Administrator and then delete the default System Administrator.
Perform a LUN Scan operation on each newly-discovered subsystem.
The LUN Scan operation creates the LUN Scan logical group hierarchy and categorizes the existing LUNs into subgroups organized by subsystem and port. If the LUN Scan finds a WWN which has not already been added to Device Manager software, the LUN Scan creates a unique host (HOST_0, HOST_1, etc.) for that WWN.
Results of the LUN Scan operation
– LUN Scan automatically creates host names; you can rename these hosts to identify them more easily, combine WWNs that are on a single physical host, and delete extraneous hosts as needed
– After completing steps (2) through (7), you are ready to perform storage operations, such as adding and moving storage, adding and deleting volume paths, configuring LUN security, and managing copy pairs
– When you are finished performing Device Manager software operations, always log out to prevent unauthorized access
Hitachi HiCommand® Device Manager Software Support
Support
Support for the Universal Storage Platform
– Storage Navigator is unified as the Device Manager software physical view for the Universal Storage Platform
Hitachi HiCommand® Device Manager Software Report Operations
Report Operations
– Device Manager software provides a built-in reporting function which allows the user to generate reports in HTML format and comma-separated value (CSV) format
– The Device Manager software reports include:
  Physical Configuration of Storage System report: physical configuration of the storage arrays being managed by Device Manager software
  Storage Utilization by Host report: storage utilization organized and presented by host
  Storage Utilization by Logical Group report: storage utilization organized and presented by logical group
  Users and Permissions report: Device Manager software users and permissions
Storage Utilization by Host report
Hitachi HiCommand® Device Manager Software Command Line Interface
Command Line Interface
The Device Manager software Command Line Interface (CLI) is available for users who prefer a character-based interface to create their own automation scripts.
The CLI enables you to perform Device Manager software operations by issuing commands from the system command line prompt.
The CLI communicates with and runs as a client of the Device Manager software server.
[Figure: The Device Manager software CLI communicates with the Device Manager software server using the XML API over the HTTP (or HTTPS) protocol; across the SAN, the server manages Hitachi USP, Lightning, NSC, and Thunder systems as well as HP StorageWorks XP arrays.]
Hitachi HiCommand® Device Manager Software Module Review
Module Review
1. On which server can the Hitachi Device Manager server be installed?
2. Device Manager is used to configure USP FED ports. Which protocols does Device Manager support when configuring FEDs?
   – ESCON
   – SCSI
   – FICON
   – Fibre Channel
3. Which of the following features does Device Manager support?
   – Host file system capacity utilization
   – Reports on storage system configuration
   – Management of replication software
   – Scripting of common storage system management tasks
   – Scripting of repetitive replication processes
4. Device Manager supports configuration and management of which storage arrays?
   – Hitachi USP
   – EMC Symmetrix
   – Hitachi NSC
   – Sun StorEdge 9990
   – HP StorageWorks XP12000
   – Sun StorEdge 9985
5. A company wants to manage and configure storage systems as well as provide host-based utilization and reporting features. Which tool should they use?
   – Hitachi Device Manager
   – Hitachi Storage Navigator
   – Hitachi Provisioning Manager
6. A company has two Hitachi storage arrays located in one facility and a third located in a remote facility. Which of the following is true?
   – They need to install three Device Manager servers
   – They need to install two Device Manager servers, one in each facility
   – They only need to install one Device Manager server to manage all three arrays
7. Which database is used by Device Manager V4.0?
8. (Bonus) Which four benefits will a customer receive by using Hitachi Device Manager?
   – Total storage configured on the storage system
   – Percent full of host file systems
   – SAN switch bottlenecks
   – Provides agents to integrate host and array information
   – Provides global and local security management domains
   – Enables managing storage systems with logical and storage groups
   – Provides reports on storage performance metrics by host and storage system
9. Business Continuity
Business Continuity Module Objectives
Module Objectives
On completion of this module, the learner should be able to:
• Describe the Business Continuity solutions and the available software
Business Continuity A New Way of Looking at Business Continuity
A New Way of Looking at Business Continuity
It is not as simple as it sounds…
– There is an increasing number of complex technologies
– Growing volumes of data lead to significant management and restoration issues
– There is no "one-size-fits-all" solution: there are methodologies and components
– The traditional focus on catastrophic events understates true business value, because catastrophic exposure is real but remote
– It is about operational resiliency which, by design, addresses a larger variety of events and circumstances; resiliency will by design "produce" Business Continuity
– Compliance requirements are accelerating the process
– Determine the balance that achieves an optimum level of safety without losing productive business ground; what is the true impact?
– It is about business: achieving the fine balance between sensible investments and cost
Business Continuity Business Continuity Solutions
Business Continuity Solutions
[Figure: Business Continuity solution building blocks, spanning Local – High Availability and Remote – Disaster Protection, on the Universal Storage Platform, Lightning 9900 V Series system, and Thunder 9500 V Series systems. Local side: path failover (Hitachi Dynamic Link Manager software); clusters (VERITAS Cluster Server, Microsoft MSCS, Hitachi TrueCopy Agent for VERITAS VCS); backup/recovery (VERITAS NetBackup, Hitachi Backup and Recovery, Serverless Backup Enabler, Hitachi Cross-System Copy, and the Data Protection Suite powered by CommVault®, covering backup and recovery, data migration, data archiver, quick recovery, and data protection monitor); point-in-time clones and snapshots (Hitachi ShadowImage™ In-System Replication software, Hitachi Copy-on-Write Snapshot software). Remote side: extended clusters (VERITAS GCM, Microsoft MSCS, IBM GDPS); remote replication (Universal Replicator, TrueCopy); multi-site replication (TrueCopy, Universal Replicator, 3 Data Center, Cross-System Copy); tape vault (Hitachi Backup and Recovery, VERITAS NetBackup); disaster recovery, DR testing, and planned outages. Management: HiCommand Management Suite (including Backup Services Manager and Replication Monitor), Hitachi Business Continuity Manager, and Hitachi Data Systems Continuity Services.]
On the left side of the graphic are examples of the Hitachi Data Systems high-availability solutions that are built on the foundation of the high-end Hitachi Storage systems and their 100% availability. On the right side, the focus is placed on remote data protection technologies and solutions, and in essence, Disaster Recovery solutions components.
Disaster Recovery is the planning and the processes associated with recovering your data/information. Disaster Protection is usually focused on providing the ability to duplicate key components of the IT infrastructure at a remote location, in the event that the primary IT site is unavailable for a prolonged period of time. Disaster protection solutions can also be used to minimize the duration of "planned" outages by providing an alternate processing facility while software or hardware maintenance or a technology refresh is carried out at the primary site.
A Disaster Recovery environment is typically characterized by:
– Servers far apart
– Servers with separate resources
– Recovery from large-scale outage
– Major disruption
– Difficult to return to normal
High-availability (HA): The practice of keeping systems up and running by exploiting technology, people, skills, and processes. High Availability is usually focused on component redundancy and recovery at a local site to protect from failure of an infrastructure component.
An HA environment is characterized by:
– Co-located servers
– Shared disks and other resources
– Recovery from isolated failures
– Minor disruptions only
– Easy or few steps to return to normal
These are complementary disciplines. Business Continuity requires practicing and implementing both HA and advanced DR solutions on top of additional organizational processes.
This simple framework identifies the building blocks for Business Continuity solutions – the blocks identified here are key functional/technology components.
Trademarks:
Hitachi TagmaStore™ Universal Storage System
Hitachi Lightning 9900™ V Series enterprise storage systems
Hitachi Thunder 9500™ V Series modular storage systems
Business Continuity Hitachi ShadowImage™ In-System Replication Software
Hitachi ShadowImage™ In-System Replication Software
Features
– Full copy of a volume at a point in time
– Immediately available for concurrent use by other applications
– No host processing cycles required
– No dependence on operating system, file system, or database
– All copies are additionally RAID protected
– Up to 10 copies on the Lightning 9900 V Series system and 2 copies on Thunder 9500 V Series systems
– The Hitachi FlashCopy-compatible Mirroring software for IBM® z/OS® option is 100% compatible with IBM FlashCopy

Benefits
– Protects data availability
– Simplifies and increases disaster recovery testing
– Eliminates the backup window
– Reduces testing and development cycles
– Enables non-disruptive sharing of critical information
[Figure: Point-in-time copy for parallel processing. A copy of the production volume is taken while normal processing continues unaffected on the production volume.]
Business Continuity Hitachi Copy-on-Write Snapshot Software
Hitachi Copy-on-Write Snapshot Software
Differential Data Save
[Figure: The primary host reads and writes the P-VOL while differential data is saved to the POOL; the secondary host reads and writes virtual volumes presenting the 10:00 am, 11:00 am, and 12:00 pm snapshots.]
Features
– Provides nondisruptive volume "snapshots"
– Uses less space than full copies or "clones"
– Allows up to 14 frequent, cost-effective, point-in-time copies
– Immediate read/write access to the virtual copy
– Nearly instant restore from any copy

Benefits
– Protects data availability with rapid restore
– Simplifies and increases disaster recovery testing
– Eliminates the backup window
– Reduces testing and development cycles
– Enables non-disruptive sharing of critical information
Business Continuity Hitachi TrueCopy™ Remote Replication Software
Hitachi TrueCopy™ Remote Replication Software
Features
– Synchronous and asynchronous support
– Support for mainframe and open environments
– The remote copy is always a "mirror" image
– Provides fast recovery with no data loss
– The asynchronous version ensures that update sequence is maintained, even in database environments
– Installed in the highest-profile DR sites around the world

Benefits
– A complete data protection solution over any distance enables more frequent disaster recovery testing
– Improves customer service by reducing downtime of customer-facing applications
– Increases the availability of revenue-producing applications
– Improves competitiveness by distributing time-critical information anywhere and anytime
[Figure: TrueCopy mirrors the P-VOL at the primary site to an S-VOL at the remote site.]
Business Continuity Hitachi Universal Replicator Software
Hitachi Universal Replicator Software
Features
– Asynchronous replication
– Leverages the Universal Storage Platform
– Performance-optimized disk-based journaling
– Resource-optimized processes
– Advanced 3 Data Center capabilities
– Mainframe and open systems support
[Figure: Universal Replicator data flow. At the primary site, journal data for application-volume writes is stored in the journal volume (JNL); the journal is transferred asynchronously to the secondary-site Universal Storage Platform, where the journal data is written to the replicated application volume.]
Benefits
– Resource optimization
– Mitigation of network problems and significantly reduced network costs
– Enhanced disaster recovery capabilities through 3 Data Center solutions
– Reduced costs due to "single pane of glass" heterogeneous replication
This describes the basic technology behind the disk-optimized journals:
1. I/O is initiated by the application and sent to the Universal Storage Platform
2. It is captured in cache and sent to the disk journal, at which point it is written to disk
3. The "I/O complete" is released to the application
4. The remote system pulls the data and writes it to its own journals and then to the replicated application volumes
Universal Replicator software sorts the I/Os at the remote site by sequence and time stamp (mainframe) and guarantees data integrity.
It should also be noted that Universal Replicator software offers full support for consistency groups through the journal mechanism (journal groups).
Business Continuity Hitachi Cross-System Copy Software
Hitachi Cross-System Copy Software
Features
– Store, move, or replicate data based on requirements for availability, performance, accessibility, retention, and security
– Move data between Universal Storage Platform, Lightning 9900 V Series system, and Thunder 9500 V Series systems:
  Host independent
  Database independent
  Local via SAN
  Remote via WAN
  Without stealing server processing cycles
  Disk-to-disk backup
  Manage the process centrally

Benefits
– Consolidate tape backup to a single Lightning 9900 V Series system or Universal Storage Platform
– Helps enable efficient data life cycle management solutions
– Consolidated view for data mining
– Eases migration to new storage
– No server cycles required; no impact on application performance
Business Continuity Module Review
Module Review
10. Hitachi ShadowImage™ In-System Replication
Hitachi ShadowImage™ In-System Replication Module Objectives
Module Objectives
Upon completion of this module, the learner should be able to:
• Describe the key features and operations of Hitachi ShadowImage™ In-System Replication software
• List the rules associated with the operations of ShadowImage software
• Describe the key competitive advantages of ShadowImage software
• Identify ShadowImage software commands and the operations of each command
Hitachi ShadowImage™ In-System Replication Overview
Overview
ShadowImage software
– Replicates information within the Hitachi Lightning 9900™ Series enterprise storage system, Hitachi Lightning 9900™ V Series enterprise storage system, Hitachi TagmaStore™ Universal Storage Platform, Hitachi Thunder 9500™ V Series modular storage systems, and Hitachi Thunder 9200™ modular storage system without disrupting operations
– Creates a point-in-time (PIT) copy of the data; once the PIT copy is created, the data can be used for:
  Data warehousing/data mining applications
  Backup and recovery
  Application development
– Supports the creation of up to nine system-protected copies from each source volume (Universal Storage Platform, Lightning 9900, and Lightning 9900 V Series systems only)
– High performance achieved through an asynchronous copy facility to secondary volumes
ShadowImage software allows you to replicate information within the Lightning 9900 Series system, Lightning 9900 V Series system, Universal Storage Platform, Thunder 9500 V Series systems, and Thunder 9200 storage systems without disrupting operations. Once copied, data can be used for data warehousing/data mining applications, backup and recovery, or application development, allowing more complete and frequent testing for faster deployment.
ShadowImage software supports the creation of up to nine system protected copies from each source volume (Lightning 9900 Series systems /Lightning 9900 V Series systems /Universal Storage Platform systems only). When used in conjunction with Hitachi TrueCopy™ Remote Replication software, ShadowImage software supports up to twenty copies of critical information that can reside on either local or secondary systems located within the same data center, or at remote sites.
High performance is achieved through the asynchronous copy facility to secondary volumes.
P-VOL and S-VOL start out as independent (simplex) volumes
– P-VOL: production volume
– S-VOL: secondary volume

P-VOL and S-VOL are synchronized using ShadowImage operations
P-VOL and S-VOL are split, creating a point-in-time (PIT) copy
The S-VOL can be used independently of the P-VOL, with no performance impact on the P-VOL
[Figure: The P-VOL and S-VOL begin as independent volumes, are synchronized into a pair, and are then split to create the point-in-time copy.]
ShadowImage software for the Universal Storage Platform
ShadowImage operations on the Remote Console-Storage Navigator involve the primary and secondary volumes in the Universal Storage Platform system and the Remote Console-Storage Navigator Java applet program downloaded from the connected SVP. This figure shows a typical ShadowImage configuration. The ShadowImage system components are:
– ShadowImage pairs (P-VOLs and S-VOLs)
– The Java applet program downloaded from the Universal Storage Platform SVP (web server) to the Remote Console-Storage Navigator computer (web client)
[Figure: Cascade connection on the Universal Storage Platform, Lightning 9900, and Lightning 9900 V Series systems. Write data is copied asynchronously from the P-VOL to a maximum of three Level 1 volumes (S-VOLs or S/P-VOLs); each Level 1 volume can cascade to a maximum of two Level 2 S-VOLs, for a total of nine copies: three Level 1s and six Level 2s.]
ShadowImage software enables you to maintain system-internal copies of all user data on the Universal Storage Platform, Lightning 9900, and Lightning 9900 V Series systems for purposes such as data backup or duplication. The RAID protected duplicate volumes are created within the same system as the primary volume at hardware speeds. ShadowImage software is used for UNIX-based and PC server data as well as mainframe data. ShadowImage can provide up to nine duplicates of one primary volume for UNIX based and PC server data only. For mainframes, ShadowImage can provide up to three duplicates of one primary volume.
The paircreate command creates the first Level 1 “S” volume. The set command can be used to create a second and third Level 1 “S” volume. And the cascade command can be used to create the Level 2 “S” volumes off the Level 1 “S” volumes.
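When the cascaded copies are driven from a host with RAID Manager (CCI) rather than from the console, the mirror-unit (MU) number in the HORCM configuration file selects which mirror a device group refers to. A minimal sketch (group and device names, port, target ID, and LU number are hypothetical; MU# 0 to 2 address the three Level 1 mirrors of one P-VOL):

   HORCM_DEV
   #dev_group   dev_name   port#    TargetID   LU#   MU#
   SI_L1_0      db_vol     CL1-A    0          1     0
   SI_L1_1      db_vol     CL1-A    0          1     1
   SI_L1_2      db_vol     CL1-A    0          1     2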
ShadowImage software for the Universal Storage Platform
– Supports a maximum of 16,382 ShadowImage volumes, or 8,191 pairs (8,191 P-VOLs and 8,191 S-VOLs); when ShadowImage pairs include size-expanded LUs, the maximum number of pairs decreases
– The ShadowImage license key code is required to enable the ShadowImage option on the Universal Storage Platform; a separate license code is required for each Universal Storage Platform
– Maximum concurrent copies is 128
Initial Copy Operation
– Full copy
– Block-for-block copy
[Figure: The pair starts as SMPL, moves to COPY(PD) when the initial copy starts, and changes to PAIR when the initial copy finishes.]
The ShadowImage initial copy operation takes place when you create a new volume pair. The ShadowImage initial copy operation copies all data on the P-VOL to the associated S-VOL. The P-VOL remains available to all hosts for read and write I/Os throughout the initial copy operation. Write operations performed on the P-VOL during the initial copy operation will be duplicated at the S-VOL by update copy operations after the initial copy is complete. The status of the pair is COPY(PD) (PD = pending duplex) while the initial copy operation is in progress. The status changes to PAIR when the initial copy is complete.
Update Copy Operations
[Figure: While the pair is in PAIR status, host I/O to the P-VOL is recorded as differential data, and update copies periodically transfer that data from the P-VOL to the S-VOL.]
The ShadowImage update copy operation updates the S-VOL of a ShadowImage pair after the initial copy operation is complete. Update copy operations take place only for duplex pairs (status = PAIR). As write I/Os are performed on a duplex P-VOL, the system stores a map of the P-VOL differential data, and then performs update copy operations periodically based on the amount of differential data present on the P-VOL as well as the elapsed time between update copy operations. Update copy operations are not performed for pairs with the following status:
– COPY(PD) (pending duplex)
– COPY(SP) (steady split pending)
– PSUS(SP) (quick split pending)
– PSUS (split)
– COPY(RS) (resync)
– COPY(RS-R) (resync-reverse)
– PSUE (suspended)
– The system* replies "write complete" to the host as soon as the data is written to cache memory
– Data in cache memory is asynchronously written to the P-VOL and S-VOL, optimizing copy performance
– Fast response on the host side and intelligent asynchronous copy
[Figure: (1) A Sun Solaris host issues a write I/O, which lands in cache memory; (2) the system returns "write complete"; (3) the data is then written asynchronously, at the best timing, to the P-VOL and up to nine S-VOLs behind the DKA pairs. The S-VOL location shown is one example.]
High Performance by Asynchronous Access to Secondary Volumes
When creating pairs, you can select the pace for the initial copy operation: slower, medium, or faster. The slower pace minimizes the impact of ShadowImage operations on system I/O performance, while the faster pace completes the initial copy operation as quickly as possible. The best timing is based on the amount of write activity on the P-VOL and the amount of time elapsed between update copies.
NOTE: * Implies Universal Storage Platform, Lightning 9900 and Lightning 9900 V Series systems.
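From RAID Manager (CCI), a comparable copy-pace control is the -c option of paircreate. A minimal sketch (the group name SIgrp is hypothetical; -c takes a track count from 1 to 15, where smaller values copy more slowly and disturb host I/O less):

   # Create a ShadowImage pair from the P-VOL side at a gentle copy pace
   paircreate -g SIgrp -vl -c 3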
Rules
– The two LDEVs which compose a ShadowImage paired volume must be the same emulation type and size
– ShadowImage software supports LUSE volumes, VLL* volumes, and FlashAccess volumes; they must be the same size
– The RAID levels do not have to match

NOTE: * VLL stands for Virtual Logical LUN
Hitachi ShadowImage™ In-System Replication Commands
Commands
– Paircreate
– Pairsplit: Steady, Quick
– Pairresync: Normal, Quick, Reverse, Quick Restore
– Suspend (pairsplit -E)
– Delete (pairsplit -S)
If a steady split is performed, all the pending updated data from the P-VOL is sent to the S-VOL. Once the update copy is finished, the S-VOL becomes available for reads and writes. After you are finished working with the S-VOL, you can resync. During the resync, the bitmaps are merged and an update copy is performed, sending updated data from the P-VOL to the S-VOL. If a suspend is performed, no update copy occurs and the S-VOL becomes immediately available for reads and writes. Resyncing a suspended S-VOL forces an initial copy.
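From a host, this life cycle is typically driven with RAID Manager (CCI). A minimal sketch (the group name SIgrp is hypothetical; -vl issues each command from the local, P-VOL, side):

   paircreate -g SIgrp -vl    # SMPL -> COPY(PD) -> PAIR (initial copy)
   pairsplit  -g SIgrp        # PAIR -> PSUS; S-VOL usable for reads and writes
   pairresync -g SIgrp        # PSUS -> COPY(RS) -> PAIR (merge bitmaps, update copy)
   pairsplit  -g SIgrp -E     # suspend the pair
   pairsplit  -g SIgrp -S     # delete the pair; both volumes return to SMPL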
Be aware that the Quick Restore function available in ShadowImage (HMRCF) and Open ShadowImage (HOMRCF) results in the physical locations of the primary and secondary LDEVs becoming swapped. So, depending on the assignment of the primary and secondary volumes, the LDEVs may be relocated to a different parity group, possibly behind a different DKA/ACP pair, on different-capacity disks, or even with a different RAID level.
When controlling HMRCF/HOMRCF from the Remote Console or SVP you can choose either Quick Restore (LDEVs swapped) or Reverse Copy (LDEVs fixed). But if HOMRCF is controlled from the host by RAID Manager (CCI) then a pairresync -restore command results in a Quick Restore (LDEVs swapped) if the microcode supports it, and there is no option in RAID Manager to choose a normal Reverse Copy.
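A minimal sketch of a host-driven restore (hypothetical group name; -restore resynchronizes in the reverse direction, from S-VOL back to P-VOL, and runs as a Quick Restore when the microcode supports it):

   # Copy the S-VOL image back over the P-VOL
   pairresync -g SIgrp -restore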
PAIRCREATE
1. Select a volume and issue PAIRCREATE
2. Initial copy takes place
3. Volume status changes to PAIR

Status and host access before, during, and after the operation:

Time frame   P-VOL status   P-VOL host access   S-VOL status   S-VOL host access
Before       SMPL           R/W                 SMPL           R/W or R
During       COPY(PD)       R/W                 COPY(PD)       NA
After        PAIR           R/W                 PAIR           NA
The paircreate command generates a new volume pair from two unpaired volumes. The paircreate command can create either a paired logical volume or a group of paired volumes.
PAIRSPLIT - STEADY
1. Creates a PIT copy of the P-VOL
2. Bitmap transferred from P-VOL to S-VOL
3. Update copy takes place
4. Volume status changes to PSUS
5. S-VOL is now available

Time frame   P-VOL status   P-VOL host access   S-VOL status   S-VOL host access
Before       PAIR           R/W                 PAIR           NA
During       COPY(SP)       R/W                 COPY(SP)       NA
After        PSUS           R/W                 SSUS           R/W
The pairsplit command stops updates to the secondary volume of a pair and can either maintain (status = PSUS) or delete (status = SMPL) the pairing status of the volumes. The pairsplit command can be applied to a paired logical volume or a group of paired volumes. The pairsplit command allows read access or read/write access to the secondary volume, depending on the selected options. You can create and split ShadowImage pairs simultaneously using the -split option of the paircreate command.
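The create-and-split combination mentioned above is exposed in CCI as well. A minimal sketch (hypothetical group name):

   # Create the pair and split it as soon as the copy completes
   paircreate -g SIgrp -vl -split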
Steady Split Illustration
[Figure: At 10:00 AM the pair is in PAIR status, with tracks 3, 10, 15, and 18 dirty from host I/O. At 10:00:01 AM a pairsplit (steady) is issued and those tracks are sent from the P-VOL to the S-VOL. At 10:00:55 AM the status changes to PSUS, and dirty-track bitmaps are maintained on both volumes.]
1. The P-VOL and S-VOL are in PAIR status as of 10:00 AM. Tracks 3, 10, 15 and 18 are marked as dirty because of Host I/O.
2. At 10:00:01 AM a pairsplit (steady) command is issued. Tracks 3, 10, 15 and 18 are sent across to the S-VOL from the P-VOL.
3. Once the update operation in step 2 is complete, the status of the P-VOL and S-VOL is changed to PSUS. During this state there are track bitmaps attached to both the P-VOL and the S-VOL. These bitmaps keep track of changes on both the P-VOL and the S-VOL.
PAIRSPLIT – QUICK
1. Creates a point-in-time copy of the P-VOL
2. Bitmap transferred from P-VOL to S-VOL
3. Volume status changes to PSUS
4. Update copy takes place in the background
5. S-VOL is available instantly

Time frame   P-VOL status   P-VOL host access   S-VOL status   S-VOL host access
Before       PAIR           R/W                 PAIR           NA
During       COPY(SP)       R/W                 COPY(SP)       R/W
After        PSUS           R/W                 SSUS           R/W
The quick pairresync operation speeds up the normal pairresync operation by copying only the P-VOL differential data map, without copying the P-VOL data to the S-VOL. The P-VOL and the S-VOL are resynchronized when update copy operations are performed for duplex pairs (status = PAIR). The pair status during a quick pairresync is COPY(RS) until the differential map is copied, and the P-VOL remains accessible to all hosts for both read and write operations. The S-VOL becomes inaccessible to all hosts during a quick pairresync.
Quick Split Illustration
[Figure: three panels — 10:00 AM, status = PAIR, with tracks 3, 10, 15, and 18 dirty; 10:00:01 AM, pairsplit (Quick) issued and the status changes immediately to PSUS; tracks 3, 10, 15, and 18 are then sent from P-VOL to S-VOL in the background.]
1. The P-VOL and S-VOL are in PAIR status as of 10:00 AM. Tracks 3, 10, 15 and 18 are marked as dirty because of Host I/O.
2. The status of the P-VOL and the S-VOL is changed instantly to PSUS and the S-VOL is immediately available for reads and writes.
3. Tracks 3, 10, 15 and 18 are sent across to the S-VOL from the P-VOL in the background. If during this update copy operation there is any I/O to tracks 3, 10, 15, or 18 on the S-VOL then the system fetches the data from the P-VOL. During the PSUS state there are track bitmaps attached to both the P-VOL and the S-VOL. These bitmaps keep track of changes on both the P-VOL and the S-VOL.
Quick Split
– There is no waiting time: a write to the S-VOL is executed as soon as the quick split request is issued
– Delta data between the P-VOL and S-VOL is copied in the background

[Figure: a "Quick Split" request against a pair in PAIR status completes in almost zero seconds; both P-VOL and S-VOL become read/write in PSUS status while data not yet copied to the S-VOL is copied asynchronously in the background.]
PAIRRESYNC – NORMAL
– S-VOL is no longer available to the host
– Bitmap transferred from S-VOL to P-VOL
– Dirty tracks are marked
– Update copy takes place from P-VOL to S-VOL
– Volume status changes to PAIR

Time frame   P-VOL status   P-VOL host access   S-VOL status   S-VOL host access
Before       PSUS           R/W                 SSUS           R/W
During       COPY(RS)       R/W                 COPY(RS)       NA
After        PAIR           R/W                 PAIR           NA
ShadowImage software allows you to perform normal and quick pairresync operations on split and suspended pairs, but reverse and quick restore pairresync operations can only be performed on split pairs.
Pairresync for split pair: When a normal/quick pairresync operation is performed on a split pair (status = PSUS), the system merges the S-VOL track map into the P-VOL track map and then copies all flagged tracks from the P-VOL to the S-VOL. When a reverse or quick restore pairresync operation is performed on a split pair, the system merges the P-VOL track map into the S-VOL track map and then copies all flagged tracks from the S-VOL to the P-VOL. This ensures that the P-VOL and S-VOL are properly resynchronized in the desired direction. This also greatly reduces the time needed to resynchronize the pair.
Pairresync for suspended pair: When a normal/quick pairresync operation is performed on a suspended pair (status = PSUE), the subsystem copies all data on the P-VOL to the S-VOL, since all P-VOL tracks were flagged as difference data when the pair was suspended. Reverse and quick restore pairresync operations cannot be performed on suspended pairs. The normal pairresync operation for suspended pairs is equivalent to and takes as long as the ShadowImage initial copy operation.
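Both resync cases are driven by the same command; only the amount of data copied differs. A sketch (hypothetical group name, illustrative timeout):

    # From PSUS (split): only the merged dirty tracks are copied
    # From PSUE (suspended): the whole P-VOL is copied, like an initial copy
    pairresync -g dbgrp
    pairevtwait -g dbgrp -s pair -t 3600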
Normal Resync Illustration
[Figure: three panels — 10:00 AM, status = PSUS, with tracks 10, 15, 18, and 29 dirty on the P-VOL and tracks 10, 19, and 23 dirty on the S-VOL; 10:00:01 AM, pairresync (Normal) issued and tracks 10, 15, 18, 19, 23, and 29 sent from P-VOL to S-VOL; 10:00:45 AM, status = PAIR.]
1. The status of the P-VOL and the S-VOL is PSUS as of 10:00 AM. Tracks 10, 15, 18, and 29 are marked as dirty on the track bitmap for the P-VOL. Tracks 10, 19, and 23 are marked as dirty on the track bitmap for the S-VOL.
2. At 10:00:01 AM a pairresync (Normal) command is issued. The track bitmaps for the P-VOL and S-VOL are merged. The resulting track bitmap has tracks 10, 15, 18, 19, 23, and 29 marked as dirty. These tracks are sent from the P-VOL to the S-VOL as part of an update copy operation.
3. Once the update copy operation in step 2 is complete, the P-VOL and S-VOL are declared a PAIR.
PAIRRESYNC – QUICK
1. S-VOL is no longer available to the host
2. Bitmap transferred from S-VOL to P-VOL
3. Volume status changes to PAIR
4. Dirty tracks are marked
5. Update copy takes place in the background from P-VOL to S-VOL

Time frame   P-VOL status   P-VOL host access   S-VOL status   S-VOL host access
Before       PSUS           R/W                 SSUS           R/W
During       COPY(RS)       R/W                 COPY(RS)       NA
After        PAIR           R/W                 PAIR           NA
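Whether a split or resync runs in quick or normal mode can be chosen when controlling the pair from the SVP/Remote Console; from CCI, later versions expose an explicit mode option, shown below as best recalled (treat the flag as an assumption and verify it against your CCI reference, since support is microcode-dependent):

    pairresync -g dbgrp -fq quick    # request quick mode explicitly (assumed flag)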
Quick Resync Illustration
[Figure: three panels — 10:00 AM, status = PSUS, with tracks 10, 15, 18, and 29 dirty on the P-VOL and tracks 10, 19, and 23 dirty on the S-VOL; 10:00:01 AM, pairresync (Quick) issued and the status changes immediately to PAIR; tracks 10, 15, 18, 19, 23, and 29 are then sent from P-VOL to S-VOL in the background.]
1. The status of the P-VOL and the S-VOL is PSUS as of 10:00 AM. Tracks 10, 15, 18, and 29 are marked as dirty on the track bitmap for the P-VOL. Tracks 10, 19, and 23 are marked as dirty on the track bitmap for the S-VOL.
2. At 10:00:01 AM a pairresync (Quick) command is issued. The status of the P-VOL and the S-VOL changes instantly to PAIR.
3. The track bitmaps for the P-VOL and S-VOL are merged. The resulting track bitmap has tracks 10, 15, 18, 19, 23, and 29 marked as dirty. These tracks are sent from the P-VOL to the S-VOL as part of an update copy operation in the background.
Quick Resync
– Command completes in less than one second per pair
– Copies only the delta bitmap
– Delta data is copied during PAIR status
– Commanded through RAID Manager

[Figure: a Quick Resync request against a split pair (PSUS; P-VOL read/write, S-VOL read-only) merges the delta bitmaps; the pair returns to PAIR status with the P-VOL read/write while the delta data is copied asynchronously.]
QuickResync and QuickSplit Together
– Reduces resync (primary to secondary) time
– A Split or Quick Split request can be issued in the next backup cycle

[Figure: while the S-VOL (read-only, PSUS) is used for backup or batch processing, database I/O continues to the P-VOL; a Quick Resync then completes almost instantly while pending delta data is copied in the background.]
PAIRRESYNC – REVERSE
1. Bitmap transferred from P-VOL to S-VOL
2. Dirty tracks are marked
3. Update copy takes place from S-VOL to P-VOL
4. Volume status changes to PAIR

Time frame   P-VOL status   P-VOL host access   S-VOL status   S-VOL host access
Before       PSUS           R/W                 SSUS           R/W
During       COPY(RS-R)     R                   COPY(RS-R)     NA
After        PAIR           R/W                 PAIR           NA

Note: a reverse resync can only be done from an L1 S-VOL to the P-VOL.
The reverse pairresync operation synchronizes the P-VOL with the S-VOL. The copy direction for a reverse pairresync operation is S-VOL to P-VOL.
The pair status during a reverse resync operation is COPY(RS-R), and the P-VOL and S-VOL become inaccessible to all hosts for write operations. As soon as the reverse pairresync operation is complete, the P-VOL becomes accessible. The reverse pairresync operation can only be performed on split pairs, not on suspended pairs. The reverse pairresync operation cannot be performed on L2 cascade pairs.
The P-VOL remains read-enabled during the reverse pairresync operation only to enable the volume to be recognized by the host. The data on the P-VOL is not guaranteed until the reverse pairresync operation is complete and the status changes to PAIR.
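From CCI, the restore direction is requested with the -restore option (a sketch; recall the earlier caveat that on supporting microcode this results in a quick restore with the LDEVs swapped, as there is no CCI option for a normal reverse copy):

    # Copy direction S-VOL -> P-VOL; the pair must be split (PSUS)
    pairresync -g dbgrp -restore
    pairevtwait -g dbgrp -s pair -t 3600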
Reverse Resync Illustration
[Figure: three panels — 10:00 AM, status = PSUS, with dirty tracks on the P-VOL and tracks 10, 19, and 23 dirty on the S-VOL; 10:00:01 AM, host I/O to both volumes is stopped and pairresync (Reverse) issued, sending tracks 10, 15, 18, 19, 23, and 29 from S-VOL to P-VOL; 10:00:45 AM, status = PAIR and host I/O to the P-VOL resumes.]
1. The status of the P-VOL and the S-VOL is PSUS as of 10:00 AM. Tracks 10, 15, 18, and 29 are marked as dirty on the track bitmap for the P-VOL. Tracks 10, 19, and 23 are marked as dirty on the track bitmap for the S-VOL.
2. At 10:00:01 AM a pairresync (Reverse) command is issued. The track bitmaps for the P-VOL and S-VOL are merged. The resulting track bitmap has tracks 10, 15, 18, 19, 23, and 29 marked as dirty. These tracks are sent from the S-VOL to the P-VOL as part of an update copy operation.
3. Once the update copy operation in step 2 is complete, the P-VOL and S-VOL are declared a PAIR.
PAIRRESYNC – QUICK RESTORE
1. Swap of LDEV ID takes place

Time frame   P-VOL status   P-VOL host access   S-VOL status   S-VOL host access
Before       PSUS           R/W                 SSUS           R/W
During       COPY(RS-R)     R                   COPY(RS-R)     NA
After        PAIR           R/W                 PAIR           NA

Note: a quick restore can only be done from an L1 S-VOL to the P-VOL. A swap-and-freeze option is available.
The quick restore operation speeds up the reverse resync operation by changing the volume map in the Lightning 9900 Series system/Lightning 9900 V Series system/ Universal Storage Platform to swap the contents of the P-VOL and S-VOL without copying the S-VOL data to the P-VOL. The P-VOL and S-VOL are resynchronized when update copy operations are performed for pairs in the PAIR status. The pair status during a quick restore operation is COPY(RS-R) until the volume map change is complete. The P-VOL and S-VOL become inaccessible to all hosts for write operations during a quick restore operation. Quick restore cannot be performed on L2 cascade pairs.
The P-VOL remains read-enabled during the quick restore operation only to enable the volume to be recognized by the host. The data on the P-VOL is not guaranteed until the quick restore operation is complete and the status changes to PAIR.
Quick Restore Illustration
[Figure: three panels — 10:00 AM, status = PSUS, with tracks 10, 15, 18, and 29 dirty on the P-VOL (LDEV 2:03, RAID Group 1-1) and tracks 10, 19, and 23 dirty on the S-VOL (LDEV 1:04, RAID Group 2-3); 10:00:01 AM, pairresync (Quick Restore) issued and the LDEV locations are swapped, so that LDEV 2:03 now resides in RAID Group 2-3 and LDEV 1:04 in RAID Group 1-1; 10:00:03 AM, status = PAIR.]
1. The status of the P-VOL and the S-VOL is PSUS as of 10:00 AM. Tracks 10, 15, 18, and 29 are marked as dirty on the track bitmap for the P-VOL. Tracks 10, 19, and 23 are marked as dirty on the track bitmap for the S-VOL. The P-VOL LDEV ID is 2:03 and the RAID Group that the P-VOL belongs to is 1-1. The S-VOL LDEV ID is 1:04 and the RAID Group that the S-VOL belongs to is 2-3.
2. At 10:00:01 AM a Quick Restore command is issued. The LDEV locations are swapped so that the P-VOL now belongs to RAID Group 2-3 and the S-VOL now belongs to RAID Group 1-1.
3. At 10:00:03 AM, after the swap operation is complete, the P-VOL and S-VOL are declared a PAIR.
Quick Restore
– The application can use the virtual (temporary) P-VOL as soon as data is restored from the backup media
– Uses internal P/S swap technology
– The application has nothing to change: same FC/SCSI address, same port/LDEV/attribute

[Figure: recovery sequence after a batch job failure — stop the application; restore from backup media to the S-VOL; perform the internal P/S swap (Quick Restore request); resume the application with read/write access to the virtual P-VOL while data is copied from P-VOL to S-VOL in the background.]
PAIRSPLIT -E (SUSPEND)
1. Immediate access to the S-VOL; no update copy; marks the entire P-VOL as dirty
2. Forces an initial copy on resync

Time frame   P-VOL status   P-VOL host access   S-VOL status   S-VOL host access
Before       PAIR           R/W                 PAIR           NA
After        PSUE           R/W                 PSUE           R/W
The ShadowImage pairsplit-E operation suspends the ShadowImage copy operations to the S-VOL of the pair. The user can suspend a ShadowImage pair at any time. When a ShadowImage pair is suspended (status = PSUE), the system stops performing copy operations to the S-VOL, continues accepting write I/O operations to the P-VOL, and marks the entire P-VOL track map as difference data. When a pairresync operation is performed on a suspended pair, the entire P-VOL is copied to the S-VOL. The reverse and quick restore pairresync operations cannot be performed on suspended pairs.
The subsystem will automatically suspend a ShadowImage pair when it cannot keep the pair mirrored for any reason. When the subsystem suspends a pair, sense information is generated to notify the host. The subsystem will automatically suspend a pair under the following conditions:
– When the ShadowImage volume pair has been suspended or deleted from the UNIX/PC server host using CCI
– When the Lightning 9900 Series system/Lightning 9900 V Series system/Universal Storage Platform detects an error condition related to an update copy operation
– When the P-VOL and/or S-VOL track map in shared memory is lost (for example, due to offline microprogram exchange). This applies to COPY(SP) and PSUS(SP) pairs only. For PAIR, PSUS, COPY(RS), or COPY(RS-R) pairs, the pair is not suspended, but the entire P-VOL (S-VOL for reverse or quick restore pairresync) is marked as difference data.
PAIRSPLIT -S (DELETE)
1. Immediate access to the S-VOL; no update copy
2. Changes volume status back to simplex

Time frame   P-VOL status   P-VOL host access   S-VOL status   S-VOL host access
Before       PAIR           R/W                 PAIR           NA
After        SMPL           R/W                 SMPL           R/W
The ShadowImage pairsplit-S operation (delete pair) stops the ShadowImage copy operations to the S-VOL of the pair and changes the pair status of both volumes to SMPL. A ShadowImage pair can be deleted by the user at any time except during the quick pairsplit operation [i.e., any status except SMPL and PSUS(SP)]. After you delete a ShadowImage pair, the S-VOL is still not available for write operations until the reserve attribute is reset.
When a ShadowImage pair is deleted, the pending update copy operations for the pair are discarded, and the status of the P-VOL and S-VOL is changed to SMPL. The S-VOL of a duplex pair (PAIR status) may not be identical to its P-VOL, due to the asynchronous ShadowImage update copy operations.
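A sketch of the delete form (hypothetical group name):

    # Dissolve the pair: pending update copies are discarded and both
    # volumes return to SMPL -- the data on the S-VOL is not destroyed
    pairsplit -g dbgrp -S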
Tools
– Hitachi Storage Navigator (GUI)
– HiCommand Device Manager (GUI)
– RAID Manager/CCI: HORCM, HORCM files, and a command line; used to script the replication process (see the configuration sketch below)
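Scripting through RAID Manager/CCI relies on a HORCM instance on each side, described by a configuration definition file. A minimal sketch of one such file follows; every address, service name, port, and LDEV shown is a placeholder, not a recommended value:

    # /etc/horcm0.conf -- example HORCM configuration (all values hypothetical)

    HORCM_MON
    # ip_address   service   poll(10ms)   timeout(10ms)
    localhost      horcm0    1000         3000

    HORCM_CMD
    # dev_name (raw device path of the command device)
    /dev/rdsk/c1t0d0s2

    HORCM_DEV
    # dev_group   dev_name   port#   TargetID   LU#   MU#
    dbgrp         dev01      CL1-A   0          1     0

    HORCM_INST
    # dev_group   ip_address   service (of the peer instance)
    dbgrp         localhost    horcm1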
The basic steps in creating ShadowImage pairs include:
1. Open LUN Manager: create your destination S-VOLs and record the port/LUN information.
2. Open ShadowImage software: reserve the S-VOL, select the P-VOL, and issue a paircreate command.

Why reserve? Reserving keeps someone from mounting a designated S-VOL and placing data on that LUN. If that happened and you then created a pair using that same S-VOL, the pair would overwrite the data on that LUN.

Once the pairs are established, you can split, suspend, or delete them. Remember, deleting a pair does NOT destroy any data; it simply ceases the asynchronous update copy. If you split a pair, you can then resync it.
Open ShadowImage – Select P-VOL
Select the Port CL1-A on the tree panel on the left hand side of the screen. Select the P-VOL on the right hand side and right click on the line entry. Select paircreate from the pop up menu.
Module Review
1. In an open systems environment, how many L1 point-in-time copies can be made from a P-VOL (referred to as an _______ volume)?
2. In an open systems environment, how many L2 point-in-time copies can be made from an L1 S-VOL?
3. What are the two ShadowImage copy modes of operation?
4. Are ShadowImage operations (a) asynchronous or (b) synchronous?
5. During Open ShadowImage operations, the P-VOLs remain available to all hosts for R/W I/Os (except during _____________)?
6. S-VOLs become available for host access only after the pair has been _____________.
7. Describe the difference between the pairsplit and pairsplit -S functions.
8. What are the three ways of re-synchronizing a split pair?
11. Hitachi Copy-on-Write Snapshot Software
Module Objectives
Upon completion of this module, the learner should be able to:
– Describe the purpose of Hitachi Copy-on-Write Snapshot software
– List key Copy-on-Write Snapshot software specifications
– Compare the functionality of Copy-on-Write Snapshot software to Hitachi ShadowImage™ In-System Replication software
– Describe typical Copy-on-Write Snapshot software operations
Overview
Copy-on-Write Snapshot software:
– Allows you to internally retain a logical duplicate of the primary volume data
– Is used to restore the data captured at the Snapshot instruction if a logical error occurs in the primary volume
– Builds the duplicated volume from physical data stored in the primary volume plus differential data stored in the data pool
– Hitachi TagmaStore™ Universal Storage Platform and Hitachi TagmaStore™ Network Storage Controller: up to 64 V-VOLs per P-VOL
– Hitachi TagmaStore™ Adaptable Modular Storage and Hitachi Thunder 9500™ V Series modular storage systems: up to 14 V-VOLs per P-VOL
– To create or split a pair, an instruction is issued from the host using RAID Manager/CCI
– The web interface can be used for monitoring status and progress
Copy-on-Write Snapshot software (formerly known as QuickShadow) allows you to internally retain a logical duplicate of the primary volume data. It is used to restore data during the Snapshot instruction, if a logical error occurs in the primary volume.
The duplicated volume of the Copy-on-Write Snapshot function consists of physical data stored in the primary volume and differential data stored in the data pool. This differs from the ShadowImage function, where all data is retained in the secondary volume. Although the capacity used in the data pool is smaller than that of the primary volume, a duplicated volume can be created logically when the Snapshot instruction is given. The data pool can be shared by two or more primary volumes and can hold the differential data of two or more duplicated volumes.
The Copy-on-Write Snapshot function can create up to 14 V-VOLs (Snapshot images) per primary volume and manage data in two or more generations within the disk subsystem. This is clearly distinct from the ShadowImage software function.
To create or split a pair using the Copy-on-Write Snapshot function, an instruction is issued from a UNIX and/or PC-server host using Command Control Interface (CCI). For information and instructions on using CCI to perform ShadowImage software Thunder 9500 V Series system operations, refer to the Thunder 9500 V Series Command Control Interface (CCI) User and Reference Guide.
Comparing ShadowImage Software and Copy-on-Write Snapshot Functions

ShadowImage software: all data is saved from the primary volume (P-VOL) to the secondary volume (S-VOL).

Copy-on-Write Snapshot software: only differential data is saved from the primary volume (P-VOL) to the data pool area (Pool); the Pool is shared by multiple Snapshot images (V-VOLs).

[Figure: with ShadowImage, a main server reads/writes the P-VOL while a backup server reads/writes a full S-VOL copy; with Copy-on-Write Snapshot, the backup server reads/writes virtual volumes (V-VOLs) linked to the P-VOL, with differential data saved to a shared Pool.]
ShadowImage Software
ShadowImage on the Thunder 9500 V Series is a storage-based hardware solution for duplicating logical volumes that reduces backup time and provides point-in-time backups.
The primary volumes (P-VOLs) contain the original data; the secondary volume(s) (S-VOLs) contain the duplicate data. Since each P-VOL is paired with its S-VOL independently, each volume can be maintained as an independent copy set that can be split (pairsplit), resynchronized (pairresync), and released (pairsplit –S) separately.
Copy-on-Write Snapshot software
Copy-on-Write Snapshot on Thunder 9500 V series is a storage-based hardware solution for duplicating logical volumes that reduce backup time and provide point-in-time backup. The Copy-on-Write Snapshot primary volumes (P-VOLs) contain the original data; the Snapshot images (V-VOLs) contain the Snapshot data. Since each P-VOL is paired with its V-VOL independently, each volume can be maintained as an independent copy set that can be created (paircreate), given the Snapshot instruction (pairsplit), and released (pairsplit –S) separately.
Each Copy-on-Write Snapshot pair consists of one primary volume (P-VOL) and up to 14 Snapshot Images (V-VOLs), which are located in the same Thunder 9500 V Series system. The Copy-on-Write Snapshot P-VOLs are the primary volumes, which contain the original data. The Copy-on-Write Snapshot V-VOLs are duplicated volumes which contain the data that exists at the time of Snapshot instruction.
The Copy-on-Write Snapshot image (V-VOL) contains physical data from the primary volume and differential data stored in the data pool (Pool). This differs from the ShadowImage Thunder 9500 V Series function, where all the data is retained in the secondary volume; the V-VOL is actually a pseudo volume with no capacity of its own. One Pool can be set for each controller. The Pool can handle two or more primary volumes and the differential data of two or more V-VOLs.
All LUNs that will be used for P-VOL and Pool must belong to the same controller. Individual LUN ownership change for a V-VOL is not supported.
Comparing ShadowImage software and Copy-on-Write Snapshot software

                            ShadowImage software               Copy-on-Write Snapshot software
Size of physical volume     P-VOL = S-VOL                      P-VOL ≧ Pool for one V-VOL
Pair configuration          1 : 9                              1 : 64
Restore                     P-VOL can be restored from S-VOL   Restore from any V-VOL
Size of Physical Volume:
The P-VOL and the S-VOL have exactly the same size in ShadowImage software. In Copy-on-Write Snapshot software, less disk space is required for building a V-VOL image since only part of the V-VOL is on the Pool and the rest is still on the primary volume.
Pair Configuration:
Only one S-VOL can be created for every P-VOL in ShadowImage software. In Copy-on-Write Snapshot software there can be up to 14 V-VOLs per primary volume.
Restore:
A primary volume can only be restored from the corresponding secondary volume in ShadowImage software. With Copy-on-Write Snapshot software the primary volume can be restored from any Snapshot Image (V-VOL).
Copy-on-Write Snapshot Volume Size
– The capacity needed for one Snapshot image (V-VOL) within the data pool area (Pool) is smaller than the P-VOL
– The host recognizes the P-VOL and the V-VOL as having the same capacity

[Figure: a P-VOL linked to a virtual V-VOL; the capacity used by one V-VOL in the Pool is smaller than the P-VOL itself.]
In Copy-on-Write Snapshot software, the V-VOL as such does not physically exist but represents a set of pointers that point to the locations where the data is physically located, partly in the Pool and partly (still) in the P-VOL. Since only part of the data belonging to the V-VOL is located in Pool (and the other part is still on P-VOL), Copy-on-Write Snapshot software does not require twice the disk space to establish a pair, as in ShadowImage software.
However, a host will recognize the P-VOL and the V-VOL as a pair of volumes with identical capacity.
For the Thunder and AMS systems the P-VOL, Pool, and V-VOL must be configured on the same CTL (LU ownership change cannot be used).
Operation Scenarios
Before Modifying the Data Block on P-VOL
– The V-VOL gets the data from the P-VOL

[Figure: a physical P-VOL linked to two Snapshot images — V01, created on Monday, and V02, created on Tuesday; the highlighted data block of both Snapshots still resides on the P-VOL, and the Pool holds nothing for it yet.]
This picture shows a situation where two Snapshots have been taken. The highlighted data block in the Snapshots is available on the primary volume and a request for this block through the V-VOL would be physically taken from the P-VOL.
This situation will last as long as the corresponding block on the P-VOL is not altered.
After Writing to the Data Block on P-VOL
– When there is a write after a Snapshot has been created, the original data is saved to the data pool area (Pool) first
– From then on, this saved data is used by the V-VOL(s)
– The Pool can be shared by multiple V-VOLs, so only one copy of the data is required

[Figure: a write on Tuesday to a block on the P-VOL causes the original block to be saved to the Pool; V01 (Monday Snapshot) and V02 (Tuesday Snapshot) now refer to the saved block in the Pool.]
In order to link the Pool and the Snapshot images, addresses and generations are managed in cache.
Now the data block on the P-VOL needs to be written to. However, before the actual write is executed, the block is copied to the Pool area. The set of pointers that actually represent the V-VOL will be updated and if there is a request now for the original block through a V-VOL, the block is physically taken from Pool.
From the host's perspective the V-VOL (Snapshot Image) has not changed, which was the plan.
Restore is Possible from Any Snapshot Image (V-VOL)
– To the host, it appears as if the restore was done instantly
– The actual data copying from V-VOL to P-VOL is done in the background; only differential data is copied

[Figure: a main server reading and writing the P-VOL can resume immediately after a restore command issued against any of V-VOL01, V-VOL02, or V-VOL03.]
Restoring a primary volume can be done instantly from any V-VOL. It can be done instantly because it does not involve immediate moving of data from Pool to P-VOL. Only pointers need to be modified.
The data is then copied from the Pool to the P-VOL in the background.
If the P-VOL became physically damaged, all V-VOLs would be destroyed as well, and a restore would not be possible.
Copy-on-Write Snapshot Requirements
– RAID level: RAID1, RAID5, or RAID6
– Shared memory: the shared memory for the differential table must be installed on the base board and configured so that the V-VOL management area can be created in it
– ShadowImage software needs to be installed
– Pool: more than one Pool-VOL needs to be created; the maximum capacity of the pool changes with the capacity of the shared memory
– V-VOL: more than one V-VOL is required
– Storage Navigator and RAID Manager/CCI are required
Pool Volume
Also called the data pool; it uses Pool-VOLs.

Pool-VOL requirements:
– The volume type cannot be: an external volume, a LUSE volume, a P-VOL or S-VOL, a NAS volume, or a volume set to Protect or Read Only by Hitachi Data Retention Utility
– The emulation type must be OPEN-V
– The volume cannot have a path definition
The pool is composed of Pool-VOLs. Volumes with path definition cannot be specified as a Pool-VOL.
The capacity of a pool is equal to the total capacity of the Pool-VOLs registered in the pool. If the pool becomes full, the status of the Copy-on-Write Snapshot pair changes to PSUE (the status used when a failure occurs). If this happens, snapshot data can no longer be stored in the pool and the Copy-on-Write Snapshot pair must be deleted.
When a Copy-on-Write Snapshot pair is deleted, the snapshot data stored in the pool is deleted and the P-VOL and V-VOL relationship is released. Use pairsplit -S in the CCI or the Pairsplit -S panel of Storage Navigator to delete a Copy-on-Write Snapshot pair.
P-VOL
– Emulation type: OPEN-V (LUSE volumes are supported)
– The following volumes cannot be specified as Copy-on-Write Snapshot P-VOLs: volumes used as Pool-VOLs; V-VOLs of Copy-on-Write Snapshot pairs; volumes used by a pair or migration plan of another program product, except TrueCopy software and Universal Replicator software; and NAS volumes
– Maximum number of P-VOLs on the Universal Storage Platform: 8,192; the maximum number of Copy-on-Write Snapshot pairs depends on the number of differential tables used
– Path definition: required; the P-VOL must be in a host group
– Maximum capacity: 2 TB
Note: A LUSE P-VOL must be paired with a V-VOL of the same size and the same structure. For example, if a LUSE P-VOL is created by combining volumes of 1 GB, 2 GB, and 3 GB in this order, you must specify a LUSE volume with exactly the same size and the same combination order as the V-VOL.
Differential Tables
The Universal Storage Platform can use a maximum of 13,652 differential tables, if additional shared memory is not installed, and 30,718 if additional shared memory is installed. To calculate the number of Copy-on-Write Snapshot pairs that can be created, calculate the number of differential tables required for the Copy-on-Write Snapshot pair and compare it with the number of differential tables of the whole subsystem. Other than Copy-on-Write Snapshot, ShadowImage, ShadowImage for z/OS, Compatible Mirroring for IBM® FlashCopy®, Cross-system Copy, and Volume Migration use differential tables.
To calculate the number of differential tables for each Copy-on-Write Snapshot pair, use the formula below:
Number of differential tables per pair = ([capacity of the volume (KB)] ÷ 256) ÷ [number of slots one differential table can manage]

where 1 GB = 1,048,576 KB. On the Universal Storage Platform, one differential table can manage 61,312 slots (1,916 × 32). You must round the result of the calculation up to the nearest whole number.

For example, for each Copy-on-Write Snapshot pair, a 14 GB volume requires one differential table, a 16 GB volume requires two differential tables, and a 29 GB volume requires two differential tables.
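As a worked check of the 16 GB case: 16 GB = 16 × 1,048,576 KB = 16,777,216 KB; dividing by 256 gives 65,536 slots; and 65,536 ÷ 61,312 ≈ 1.07, which rounds up to two differential tables.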
There are additional restrictions on the total number of COW Snapshots based on the amount of shared memory reserved for V-VOL Management. The V-VOL management area consists of the following 3 elements.
– Pool association information
– Pool management block
– Other management information (for which the shared memory is fixed at 8 MB)
Refer to the Hitachi TagmaStore™ Universal Storage Platform and Network Storage Controller Copy-on-Write Snapshot User’s Guide section 2.3 for more information.
Copy-on-Write Snapshot Workflow
Use Storage Navigator to create the pools and V-VOLs:
1. Initialize the V-VOL management area
2. Create Pool-VOLs
3. Create the virtual volume
4. Create the Copy-on-Write Snapshot pair
You need to use Storage Navigator to create the pools and V-VOLs. You can also use Storage Navigator to delete pairs. Use the Command Control Interface (CCI) to create Copy-on-Write Snapshot pairs and perform subsequent operations.
Initialize V-VOL Management Area
The virtual volume management area must be created in shared memory and initialized before creating a pool; it is created automatically when additional shared memory is installed. Use the Initialize button on the Pool panel of Storage Navigator to initialize the V-VOL management area or the pool management block. If there is no pool in the disk subsystem, this button initializes the entire V-VOL management area. If pools exist in the disk subsystem, this button initializes the pool management block in the V-VOL management area. Initialization of the pool management block needs up to 20 minutes to complete.
Create Pool and Add Pool-VOL
To create a new Pool, right click on the Copy-on-Write Snapshot icon in the Pool panel of Storage Navigator and select “New Pool”.
The Pool panel provides a list of free LDEVs. To add volumes to the pool, select the free volume(s) from the Free LDEVs list and click the "Add Pool-VOL" button on the Pool panel of Storage Navigator.
Create a V-VOL
Select the LUN Expansion (LUSE) / Virtual LVI/LUN (VLL) pane in Storage Navigator. Select the V-VOL tab and create a new V-VOL group by right clicking the disk subsystem icon. After creating the V-VOL group the Create V-VOL wizard will start. When creating the V-VOL you will need to enter the size of the V-VOL (which needs to match the P-VOLs you will be pairing it with) and the number of V-VOLs to create. You will then need to select the CU and LDEV number for the V-VOLs.
Create a Copy-on-Write Snapshot Pair
When creating a Copy-on-Write Snapshot pair, you must decide which pool will be used by the pair. If you create two or more Copy-on-Write Snapshot pairs that share the same P-VOL, you need to specify the same pool for these pairs.
Use the paircreate command in the CCI to create Copy-on-Write Snapshot pairs. When creating a Copy-on-Write pair, the V-VOL may or may not have an LU path; however, to use a Snapshot volume from the host, it is necessary to map the Snapshot V-VOL to a LUN. Snapshot uses two techniques: V-VOL mapping and copy-on-write. Snapshot volumes need to be associated with a Snapshot pool; the pool is specified as a pool ID when a Snapshot is made. The V-VOL replies to a SCSI Inquiry or raidscan command as OPEN-0V, which clearly identifies it as a V-VOL.
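A hedged sketch of creating and checking a Copy-on-Write Snapshot pair from CCI (the group name snapgrp is hypothetical; how the pool ID is associated with the pair varies by microcode and CCI version, so consult the CCI guide for that detail):

    # Create the snapshot pair and take the snapshot in one step;
    # the transition to PSUS is nearly instantaneous
    paircreate -g snapgrp -vl -split
    pairdisplay -g snapgrp -fc      # verify the pair and its status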
Tools – Storage Navigator
If you click on Data Pool 0, the LUNs that are currently assigned to this Pool and their status are listed in the bottom pane.
Status Transitions
Status and Commands for Copy-on-Write Snapshot Software

PAIR is a pseudo status that exists to provide compatibility with the command system of ShadowImage software; the actual status is the same as PSUS.

[Diagram: status transitions — paircreate moves SMPL to the pseudo status PAIR, and pairsplit then moves it to PSUS; paircreate -split moves SMPL directly to PSUS; pairresync returns PSUS to PAIR; pairresync -restore moves PSUS to COPY(RS-R), which returns to PAIR when the restore completes; a failure in any state moves the pair to PSUE; pairsplit -S returns the pair to SMPL from any status.]
If a volume is not assigned to a Copy-on-Write Snapshot pair, its status is SMPL. When you create a Copy-on-Write Snapshot pair by executing paircreate -split, or by executing pairsplit after paircreate has been issued, the statuses of the P-VOL and the V-VOL change to PSUS.
It is possible to access the P-VOL or V-VOL in the PSUS state. The pair status changes to PSUE (interruption) when the V-VOL cannot be created or updated, or when the V-VOL data cannot be retained due to a disk subsystem failure. When the -S option is specified with pairsplit, the pair is released and the pair status changes to SMPL.
Please refer to the Copy-on-Write Snapshot software User's Guide for a detailed explanation of every pair status.
Tools – RAID Manager
Copy-on-Write Snapshot operations are executed from RAID Manager.

[Figure: a main server and a backup server each run a RAID Manager instance with its own configuration definition file and logging function; RAID Manager commands such as paircreate, pairsplit, and pairrestore are issued to the P-VOL through a command device, and the two instances communicate with each other over the LAN.]
Copy-on-Write Snapshot software can be operated from CCI in the same way as with ShadowImage software.
Whenever a Snapshot Image has been created it will be assigned a LUN number (the lowest free number) and the Snapshot Image (V-VOL) can be accessed using that LUN number, providing the LUN is visible through a port (Mapping Mode, LUN Security, LUN Management).
RAID Manager Commands
paircreate -g oradb1
– Creates the pointers from the target volumes to the source volumes
– Starts the snapshot process
– The transition to PAIR is almost instantaneous
– Pair status transition: SMPL >>> COPY >>> PAIR (PAIR is a pseudo status to fit into the existing RAID Manager structure)

pairsplit -g oradb1
– Puts the pair into suspend status (PSUS)
– The changed-data track table is maintained in shared memory
– Pair status transition: PAIR >>> PSUS
pairsplit -g oradb1 -S
– Deletes the pair

pairresync -g oradb1
– Re-establishes the pair and recreates the pointers to the current state of the P-VOL
– Changed data in the Pool is deleted (it is no longer needed)
– Pair status transition: PSUS >>> COPY >>> PAIR
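These transitions are typically verified from the command line as well (a sketch, using the same group name as the slides):

    pairdisplay -g oradb1 -fc    # show pair status; -fc adds the copy percentage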
Module Review
1. After performing a paircreate, when can you split the P-VOL and have a consistent PIT copy on the V-VOL?
2. A paircreate operation is performed on a 50 GB P-VOL. The P-VOL is being used as a file system and is half full. How much data is copied from the P-VOL to the S-VOL?
   a. 0 GB
   b. 25 GB
   c. 50 GB
3. What must be created before performing paircreate operations?
4. How many snapshots can be created for one P-VOL?
5. A P-VOL has four Copy-on-Write Snapshots. How many pools are needed?
6. Copy-on-Write Snapshots were created at 8 AM, 10 AM, 1 PM, and 3 PM from one P-VOL. Data is changed on the P-VOL at 4 PM. What is the maximum number of times the pool is updated?
7. How big does the Pool need to be in relation to the P-VOL?
8. Can a P-VOL be restored from a V-VOL?
9. Bonus: what is the storage system process of writing to a P-VOL that is split from a V-VOL?
10. Super bonus: what happens when a new file is written to a P-VOL that is split from a V-VOL?