© 2006 EMC Corporation. All rights reserved.
Business Continuity: Local Replication
Module 4.3
Local Replication
After completing this module you will be able to:
Discuss replicas and the possible uses of replicas
Explain consistency considerations when replicating file systems and databases
Discuss host and array based replication technologies
– Functionality
– Differences
– Considerations
– Selecting the appropriate technology
What is Replication?
Replica - An exact copy (in all details)
Replication - The process of reproducing data
[Diagram: Original → REPLICATION → Replica]
Possible Uses of Replicas
Alternate source for backup
Source for fast recovery
Decision support
Testing platform
Migration
Considerations
What makes a replica good?
– Recoverability: considerations for resuming operations with the primary
– Consistency/re-startability: how this is achieved by the various technologies
Kinds of Replicas
– Point-in-Time (PIT) = finite RPO
– Continuous = zero RPO
Replication of File Systems
[Diagram: Host I/O stack — Applications, DBMS Management Utilities, File System, Buffer, Volume Management, Multi-pathing Software, Device Drivers, HBAs, Operating System, Physical Volume]
Replication of Database Applications
A database application may be spread out over numerous files, file systems, and devices, all of which must be replicated.
Database replication can be offline or online.
[Diagram: Database application with Data and Log components]
Database: Understanding Consistency
Databases/Applications maintain integrity by following the “Dependent Write I/O Principle”
– Dependent Write: a write I/O that will not be issued by an application until a prior related write I/O has completed. This is a logical dependency, not a time dependency.
– Inherent in all Database Management Systems (DBMS), e.g. a page (data) write is a dependent write I/O based on a successful log write
– Applications can also use this technique
– Necessary for protection against local outages: power failures create a dependent-write-consistent image. A restart transforms the dependent-write-consistent image into a transactionally consistent one, i.e. committed transactions are recovered and in-flight transactions are discarded.
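The principle above can be sketched in code. This is a minimal illustration with hypothetical `write_log`/`write_data` helpers and in-memory stand-ins for the devices, not a real DBMS API: the data write is only issued after the related log write has completed, so a crash at any point leaves a dependent-write-consistent image.

```python
# Sketch of the Dependent Write I/O Principle (hypothetical helpers).
log = []      # stands in for the log device
data = {}     # stands in for the data device

def write_log(txn_id, change):
    """Log write; returning means the write has completed."""
    log.append((txn_id, change))

def write_data(page, value):
    """Dependent write: issued only after write_log() has returned."""
    data[page] = value

def commit(txn_id, page, value):
    write_log(txn_id, (page, value))   # 1) log write completes first...
    write_data(page, value)            # 2) ...then the dependent page write

commit(1, "page7", "new row")
```

If a failure occurred between steps 1 and 2, the log would describe a change the data does not yet reflect, and restart recovery could replay it; the reverse order would leave an unrecoverable image.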
Database Replication: Transactions
[Diagram: Database Application buffer with writes 1–4 flowing to both the Log and the Data]
Database Replication: Consistency
[Diagram: Source and Replica — writes 1–4 are present on both Log and Data of the Replica: Consistent]
Note: In this example, the database is online.
Database Replication: Consistency
[Diagram: Source holds writes 1–4, but the Replica holds only writes 3 and 4: Inconsistent]
Note: In this example, the database is online.
Database Replication: Ensuring Consistency
Offline Replication
– If the database is offline or shut down and then a replica is created, the replica will be consistent.
– In many cases, creating an offline replica may not be viable due to the 24x7 nature of business.
[Diagram: Source and Replica of an offline Database Application — Replica is Consistent]
Database Replication: Ensuring Consistency
Online Replication
– Some database applications allow replication while the application is up and running
– The production database must be put in a state that allows it to be replicated while it is active
– Some level of recovery must be performed on the replica to make the replica consistent
[Diagram: Source holds writes 1–4, but the Replica holds only writes 3 and 4 — Inconsistent until recovery is applied]
Database Replication: Ensuring Consistency
[Diagram: After recovery applies write 5, Source and Replica both hold writes 1–5 — Consistent]
Tracking Changes After PIT Creation
At PIT: Source = Target
Later: Source ≠ Target
After Resynch: Source = Target
Local Replication Technologies
Host based
– Logical Volume Manager (LVM) based mirroring
– File System Snapshots
Storage Array based
– Full volume mirroring
– Full volume: Copy on First Access
– Pointer based: Copy on First Write
Logical Volume Manager: Review
[Diagram: LVM maps Logical Storage to Physical Storage]
Host resident software responsible for creating and controlling host level logical storage
– The physical view of storage is converted to a logical view by mapping: logical data blocks are mapped to physical data blocks
– The logical layer resides between the physical layer (physical devices and device drivers) and the application layer (the OS and applications see the logical view of storage)
Usually offered as part of the operating system or as third party host software
LVM Components:
– Physical Volumes
– Volume Groups
– Logical Volumes
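The logical-to-physical mapping described above can be sketched as a simple lookup table. This is illustrative only — real LVMs use per-volume-group extent tables, and the volume and block names here are invented:

```python
# Minimal sketch of LVM block mapping: each logical block of a logical
# volume maps to a (physical_volume, physical_block) pair.
lv_map = {
    0: ("pv1", 100),   # logical block 0 lives on PV1, block 100
    1: ("pv1", 101),
    2: ("pv2", 40),    # a logical volume can span physical volumes
    3: ("pv3", 7),     # ...and its blocks need not be physically contiguous
}

def resolve(logical_block):
    """Translate a logical block number to its physical location."""
    return lv_map[logical_block]

print(resolve(2))   # ('pv2', 40)
```

The OS sees blocks 0–3 as one contiguous device; only the LVM knows they are scattered across three physical volumes.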
Volume Groups
[Diagram: Physical Volumes 1–3, each divided into physical disk blocks, grouped into a Volume Group]
One or more Physical Volumes form a Volume Group
LVM manages Volume Groups as a single entity
Physical Volumes can be added to and removed from a Volume Group as necessary
Physical Volumes are typically divided into contiguous equal-sized disk blocks
A host will always have at least one volume group for the Operating System
– Application and Operating System data are maintained in separate volume groups
Logical Volumes
[Diagram: Two Logical Volumes made of logical disk blocks, mapped onto physical disk blocks across Physical Volumes 1–3 in a Volume Group]
Logical Volumes
A Logical Volume:
– Can only belong to one Volume Group; however, a Volume Group can have multiple LVs
– Can span multiple physical volumes
– Can be made up of physical disk blocks that are not physically contiguous
– Appears as a series of contiguous data blocks to the OS
– Can contain a file system or be used directly
Note: There is a one-to-one relationship between an LV and a File System.
Host Based Replication: Mirrored Logical Volumes
[Diagram: One host Logical Volume mirrored across Physical Volume 1 (PVID1, VGDA) and Physical Volume 2 (PVID2, VGDA)]
Host Based Replication: Mirrored Logical Volumes
Logical Volumes may be mirrored to improve data availability. In a mirrored logical volume, every logical partition maps to two or more physical partitions on different physical volumes.
– Logical volume mirrors may be added and removed dynamically
– A mirror can be split and the data it contains used independently
The advantages of mirroring a Logical Volume are high availability, and load balancing during reads if the parallel policy is used. The cost of mirroring is the additional CPU cycles needed to perform two writes for every write, and the longer time needed to complete the writes.
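The trade-off above can be sketched as follows — a minimal in-memory model, not a real LVM: every logical write becomes one physical write per mirror copy, while a read under the parallel policy can be served by any copy.

```python
import random

# Sketch of a mirrored logical volume: two copies, every write hits both.
mirror_copies = [dict(), dict()]   # stand-ins for two physical volumes

def lv_write(block, value):
    """Cost of mirroring: one logical write becomes N physical writes."""
    for copy in mirror_copies:
        copy[block] = value

def lv_read(block):
    """Parallel read policy: any up-to-date copy can service the read."""
    return random.choice(mirror_copies)[block]

lv_write(5, b"payload")
assert lv_read(5) == b"payload"

# Splitting a mirror = detaching one copy and using its data independently:
split = mirror_copies.pop()
assert split[5] == b"payload"
```

After the split, the detached copy is a point-in-time image that can be mounted or backed up on its own while the remaining copy continues serving I/O.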
Host Based Replication: File System Snapshots
Many LVM vendors will allow the creation of File System Snapshots while a File System is mounted
File System snapshots are typically easier to manage than creating mirrored logical volumes and then splitting them
Host (LVM) Based Replicas: Disadvantages
LVM based replicas add overhead on host CPUs
If host devices are already Storage Array devices, the added redundancy provided by LVM mirroring is unnecessary
– The devices will already have some RAID protection
Host based replicas are usually presented back to the same server
– Additional CPU burden; failure of the Volume Group; replicas unavailable while the server is down; accessible by only one host
Keeping track of changes after the replica has been created is a challenge
Storage Array Based Local Replication
Replication performed by the Array Operating Environment
Replicas are on the same array
[Diagram: Production Server and Business Continuity Server attached to one Array holding Source and Replica]
Storage Array Based – Local Replication Example
Typically, array based replication is done at an array device level
– Need to map the storage components used by an application/file system back to the specific array devices used, then replicate those devices on the array
[Diagram: File System 1 → Volume Group 1 → Logical Volume 1 → host devices c12t1d1, c12t1d2 → Source Vol 1/2 replicated to Replica Vol 1/2 on Array 1]
Array Based Local Replication: Full Volume Mirror
[Diagram: Source (Read/Write) attached to Target (Not Ready) within the Array]
Array Based Local Replication: Full Volume Mirror
[Diagram: Source (Read/Write) detached from Target (Read/Write) — the detach point is the PIT]
Array Based Local Replication: Full Volume Mirror
For future re-synchronization to be incremental, most vendors have the ability to track changes at some level of granularity (e.g., 512 byte block, 32 KB, etc.)
– Tracking is typically done with some kind of bitmap
The Target device must be at least as large as the Source device
– For full volume copies, the minimum amount of storage required is the same as the size of the source
Copy on First Access (COFA)
Target device is made accessible for BC tasks as soon as the replication session is started.
Point-in-Time is determined by time of activation
Can be used in Copy on First Access (deferred) mode or in Full Copy mode
Target device is at least as large as the Source device
Copy on First Access: Deferred Mode
[Diagram: Three cases — a write to the Source, a write to the Target, and a read from the Target. In each case the first access causes the affected data to be copied from Source to Target; both devices are Read/Write]
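The three cases in the diagram can be sketched in one place. This is an illustrative model of deferred COFA, not any vendor's implementation: a chunk is copied from source to target only when a source write, target write, or target read first touches it.

```python
# Copy on First Access, deferred mode (sketch).
source = {0: "A", 1: "B", 2: "C"}   # PIT contents at session activation
target = {}                         # empty until chunks are faulted in
copied = set()                      # chunks already copied to the target

def _ensure_copied(chunk):
    if chunk not in copied:
        target[chunk] = source[chunk]   # copy PIT data on first access
        copied.add(chunk)

def write_source(chunk, value):
    _ensure_copied(chunk)           # preserve the PIT image on the target first
    source[chunk] = value

def write_target(chunk, value):
    _ensure_copied(chunk)
    target[chunk] = value

def read_target(chunk):
    _ensure_copied(chunk)
    return target[chunk]

write_source(0, "A2")               # chunk 0 is copied to the target first
assert read_target(0) == "A"        # target preserves the point-in-time data
assert 1 not in target              # untouched chunks were never copied
```

This is why a COFA-only replica holds just the data that was accessed: chunk 1 above never reaches the target unless something touches it.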
Copy on First Access: Full Copy Mode
On session start, the entire contents of the Source device are copied to the Target device in the background
Most vendor implementations provide the ability to track changes
– Made to the Source or Target
– Enables incremental re-synchronization
Array: Pointer Based Copy on First Write
Targets do not hold actual data, but hold pointers to where the data is located
– The actual storage requirement for the replicas is usually a small fraction of the size of the source volumes
A replication session is set up between the Source and Target devices and started
– When the session is set up, depending on the specific vendor's implementation, a protection map is created for all the data on the Source device at some level of granularity (e.g., 512 byte block, 32 KB, etc.)
– Target devices are accessible immediately when the session is started
– At the start of the session, the Target device holds pointers to the data on the Source device
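The pointer-based mechanism can be sketched as follows — an illustrative model, not a vendor implementation: the target starts as pure pointers to the source, and the first write to a source chunk pushes the point-in-time data into a save location before the overwrite.

```python
# Pointer-based Copy on First Write (sketch). The target holds no data at
# session start; the save location fills in as first writes occur.
source = {0: "A", 1: "B", 2: "C"}
save_location = {}                  # PIT data displaced by first writes

def write_source(chunk, value):
    """First write to a chunk saves the PIT copy before overwriting."""
    if chunk not in save_location:
        save_location[chunk] = source[chunk]
    source[chunk] = value

def read_target(chunk):
    """Target resolves a pointer: save location if saved, else source."""
    return save_location.get(chunk, source[chunk])

write_source(1, "B2")
assert read_target(1) == "B"        # PIT data came from the save location
assert read_target(0) == "A"        # unchanged data is still read via the source
```

This also makes the restore constraint concrete: `read_target` falls through to the source for unchanged chunks, so the replica is only usable while the source device is healthy.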
Pointer Based Copy on First Write Example
[Diagram: Source, Save Location, and Target (Virtual Device) holding pointers]
Array Replicas: Tracking Changes
Changes can occur to the Source/Target devices after the PIT has been created
How, and at what level of granularity, should this be tracked?
– Too expensive to track changes at a bit-by-bit level: this would require an equivalent amount of storage to track which bits changed, for each of the source and the target
– Depending on the vendor, some level of granularity is chosen and a bitmap is created (one for the Source and one for the Target)
One could choose 32 KB as the granularity
For a 1 GB device, changes would be tracked for 32768 32 KB chunks
If any change is made to any bit in one 32 KB chunk, the whole chunk is flagged as changed in the bitmap
A 1 GB device map would only take up 32768/8/1024 = 4 KB of space
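The sizing arithmetic above can be checked directly — one bit per chunk of the chosen granularity:

```python
# Bitmap sizing for change tracking: one bit per chunk.
device_size = 1 * 1024**3        # 1 GB device, in bytes
chunk_size  = 32 * 1024          # 32 KB granularity

chunks = device_size // chunk_size
bitmap_bytes = chunks // 8       # 8 chunks tracked per byte of bitmap

print(chunks)                    # 32768 chunks
print(bitmap_bytes // 1024)      # 4 (KB), matching 32768/8/1024
```

A finer granularity (e.g. 512 bytes) would grow the bitmap 64-fold but copy less data on resynch; the choice is a vendor trade-off.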
Array Replicas: How Changes Are Determined
At PIT:
Source 0 0 0 0 0 0 0 0
Target 0 0 0 0 0 0 0 0
After PIT:
Source 1 0 0 1 0 1 0 0
Target 0 0 1 1 0 0 0 1
Resynch (logical OR):
1 0 1 1 0 1 0 1
(0 = unchanged, 1 = changed)
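The resynch row is simply the bitwise OR of the two bitmaps: any chunk changed on either side since the PIT must be re-copied from source to target. A sketch using the example bits:

```python
# Determining which chunks to resynchronize: OR the two change bitmaps.
# Bit values taken from the example above (0 = unchanged, 1 = changed).
source_bits = [1, 0, 0, 1, 0, 1, 0, 0]
target_bits = [0, 0, 1, 1, 0, 0, 0, 1]

resynch = [s | t for s, t in zip(source_bits, target_bits)]
print(resynch)                   # [1, 0, 1, 1, 0, 1, 0, 1]

# These are the chunks that must be copied source -> target:
chunks_to_copy = [i for i, bit in enumerate(resynch) if bit]
print(chunks_to_copy)            # [0, 2, 3, 5, 7]
```

Chunks whose bit is 0 in both maps are identical on both devices and can be skipped, which is what makes the resynch incremental.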
Array Replication: Multiple PITs
[Diagram: 24-hour timeline — PIT copies of the Source are created on separate Target Devices at 06:00 A.M., 12:00 P.M., 06:00 P.M., and 12:00 A.M., each Target representing a different Point-in-Time]
Array Replicas: Ensuring Consistency
[Diagram: Left — Source holds writes 1–4 but the Replica holds only 3 and 4: Inconsistent. Right — Source and Replica both hold writes 1–4: Consistent]
Mechanisms to Hold IO
Host based
– Some host based application could be used to hold IO to all the array devices that are to be replicated when the PIT is created
– Typically achieved at the device driver level or above, before the IO reaches the HBAs
– Some vendors implement this at the multi-pathing software layer
Array based
– IOs can be held for all the array devices that are to be replicated by the Array Operating Environment, in the array itself, when the PIT is created
What if the application straddles multiple hosts and multiple arrays?
– Federated Databases: some array vendors are able to ensure consistency in this situation
Array Replicas: Restore/Restart Considerations
Production has a failure
– Logical corruption
– Physical failure of production devices
– Failure of the production server
Solution
– Restore data from the replica to production: the restore would typically be done in an incremental manner, and the applications could be restarted even before the synchronization is complete, leading to a very small RTO
-----OR-----
– Start production on the replica: resolve issues with production while continuing operations on the replica; after issue resolution, restore the latest data on the replica back to production
Array Replicas: Restore/Restart Considerations
Before a Restore
– Stop all access to the Production devices and the Replica devices
– Identify the Replica to be used for the restore, based on RPO and data consistency
– Perform the Restore
Before starting production on a Replica
– Stop all access to the Production devices and the Replica devices
– Identify the Replica to be used for the restart, based on RPO and data consistency
– Create a “Gold” copy of the Replica, as a precaution against further failures
– Start production on the Replica
RTO drives the choice of replication technology
Array Replicas: Restore Considerations
Full Volume Replicas
– Restores can be performed to either the original source device or to any other device of like size
– Restores to the original source can be incremental in nature
– Restores to a new device involve a full synchronization
Pointer Based Replicas
– Restores can be performed to the original source or to any other device of like size, as long as the original source device is healthy
– The Target only has pointers: pointers to the source for data that has not been written to after the PIT, and pointers to the “save” location for data that was written after the PIT
– Thus, to perform a restore even to an alternate volume, the source must be healthy, to access data that has not yet been copied over to the target
Array Replicas: Which Technology?
Full Volume Replica
– Replica is a full physical copy of the source device
– Storage requirement is identical to the source device
– Restore does not require a healthy source device
– Activity on replica will have no performance impact on the source device
– Good for full backup, decision support, development, testing and restore to last PIT
– RPO depends on when the last PIT was created
– RTO is extremely small
Array Replicas: Which Technology? …
Pointer based - COFW
– Replica contains pointers to data; storage requirement is a fraction of the source device (lower cost)
– Restore requires a healthy source device
– Activity on the replica will have some performance impact on the source:
Any first write to the source or target requires the data to be copied to the save location and the pointer moved to the save location
Any read IO to data not in the save location has to be serviced by the source device
– Typically recommended if the changes to the source are less than 30%
– RPO depends on when the last PIT was created
– RTO is extremely small
Array Replicas: Which Technology?
Full Volume – COFA Replicas
– Replica only has data that was accessed
– Restore requires a healthy source device
– Activity on the replica will have some performance impact: any first access on the target requires the data to be copied to the target before the I/O to/from the target can be satisfied
– Typically, replicas created with COFA only are not as useful as replicas created with the full copy mode; the recommendation would be to use the full copy mode if the technology allows such an option
Array Replicas: Full Volume vs. Pointer Based
                     Full Volume                  Pointer Based
Required Storage     100% of Source               Fraction of Source
Performance Impact   None                         Some
RTO                  Very small                   Very small
Restore              Source need not be healthy   Requires a healthy source device
Data change          No limits                    < 30%
Module Summary
Key points covered in this module:
Replicas and the possible use of Replicas
Consistency considerations when replicating File Systems and Databases
Host and Array based Replication Technologies
– Advantages/Disadvantages
– Differences
– Considerations
– Selecting the appropriate technology
Check Your Knowledge
What is a replica?
What are the possible uses of a replica?
What is consistency in the context of a database?
How can consistency be ensured when replicating a database?
Discuss one host based replication technology
What is the difference between full volume mirrors and pointer based replicas?
What are the considerations when performing restore operations for each replication technology?
Apply Your Knowledge…
Upon completion of this topic, you will be able to:
List EMC’s Local Replication Solutions for the Symmetrix and CLARiiON arrays
Describe EMC’s TimeFinder/Mirror Replication Solution
Describe EMC’s SnapView - Snapshot Replication Solution
EMC – Local Replication Solutions
EMC Symmetrix Arrays
– EMC TimeFinder/Mirror: full volume mirroring
– EMC TimeFinder/Clone: full volume replication
– EMC TimeFinder/Snap: pointer based replication
EMC CLARiiON Arrays
– EMC SnapView Clone: full volume replication
– EMC SnapView Snapshot: pointer based replication
EMC TimeFinder/Mirror - Introduction
Array based local replication technology for Full Volume Mirroring on EMC Symmetrix Storage Arrays
– Creates Full Volume Mirrors of an EMC Symmetrix device within an Array
TimeFinder/Mirror uses special Symmetrix devices called Business Continuance Volumes (BCVs). BCVs:
– Are devices dedicated to Local Replication
– Can be dynamically, non-disruptively established with a Standard device, and subsequently split instantly to create a PIT copy of data
The PIT copy of data can be used in a number of ways:
– Instant restore: use BCVs as standby data for recovery
– Decision Support operations
– Backup: reduce application downtime to a minimum (offline backup)
– Testing
TimeFinder/Mirror is available in both Open Systems and Mainframe environments
EMC TimeFinder/Mirror – Operations
Establish
– Synchronize the Standard volume to the BCV volume
– BCV is set to a Not Ready state when established; the BCV cannot be independently addressed
– Re-synchronization is incremental
– BCVs cannot be established to other BCVs
– The Establish operation is non-disruptive to the Standard device; operations to the Standard can proceed as normal during the establish
[Diagram: Establish and Incremental Establish from STD to BCV]
EMC TimeFinder/Mirror – Operations …
Split
– Time of Split is the Point-in-Time
– BCV is made accessible for BC Operations
– Consistency: Consistent Split
– Changes are tracked
[Diagram: Split of STD and BCV]
EMC TimeFinder/Mirror Consistent Split
PowerPath is EMC host based multi-pathing software
– PowerPath holds I/O (both read and write) during a TimeFinder/Mirror Split
Symmetrix Microcode (Enginuity Consistency Assist) holds I/O during a TimeFinder/Mirror Split
– Write I/O (and subsequent reads after a first write)
[Diagram: Host running EMC PowerPath above Symmetrix STD and BCV devices]
EMC TimeFinder/Mirror – Operations …
Restore
– Synchronize contents of the BCV volume to the Standard volume
– Restore can be full or incremental
– BCV is set to a Not Ready state
– I/Os to the Standard and BCVs should be stopped before the restore is initiated
Query
– Provides current status of BCV/Standard volume pairs
[Diagram: Incremental Restore from BCV to STD]
EMC TimeFinder/Mirror Multi-BCVs
The Standard device keeps track of changes to multiple BCVs, one after the other
– Incremental establish or incremental restore is possible with each BCV
[Diagram: Standard volume established and split against separate BCVs at 2:00 a.m., 4:00 a.m., and 6:00 a.m.]
TimeFinder/Mirror Concurrent BCVs
Two BCVs can be established concurrently with the same Standard device
Establish BCVs simultaneously or one after the other
BCVs can be split individually or simultaneously
Simultaneous “Concurrent Restores” are not allowed
[Diagram: Standard device with BCV1 and BCV2 established concurrently]
EMC CLARiiON SnapView - Snapshots
SnapView allows full copies and pointer-based copies
– Full copies: Clones (sometimes called BCVs)
– Pointer-based copies: Snapshots
Because they are pointer-based, Snapshots
– Use less space than a full copy
– Require a ‘save area’ to be provisioned
– May impact the performance of the LUN they are associated with
The ‘save area’ is called the ‘Reserved LUN Pool’
The Reserved LUN Pool
– Consists of private LUNs (LUNs not visible to a host)
– Must be provisioned before Snapshots can be made
The Reserved LUN Pool
[Diagram: Reserved LUN Pool consisting of FLARE LUNs 5–8 configured as Private LUNs 5–8]
Reserved LUN Allocation
[Diagram: Source LUNs 1 and 2 running Sessions 1a, 1b, and 2a with matching Snapshots; Private LUNs 5–8 are allocated from the Reserved LUN Pool to the sessions]
SnapView Terms
Snapshot
– The ‘virtual LUN’ seen by a secondary host
– Made up of data on the Source LUN and data in the RLP
– Visible to the host (online) if associated with a Session
Session
– The mechanism that tracks the changes
– Maintains the pointers and the map
– Represents the point in time
Activate and deactivate a Snapshot
– Associate and disassociate a Session with a Snapshot
Roll back
– Copy data from a (typically earlier) Session to the Source LUN
COFW and Reads from Snapshot
[Diagram: The Primary Host writes to the Source LUN; on the first write to a chunk (e.g., Chunk 0, Chunk 3), the original chunk is copied to the Reserved LUN before the new data (Chunk 0’, Chunk 3’, Chunk 3’’) is written. The SnapView Map in SP memory tracks which chunks have been saved. A Secondary Host reading from the Snapshot gets unchanged chunks from the Source LUN and saved chunks from the Reserved LUN]
Writes to Snapshot
[Diagram: A Secondary Host write to the Snapshot (e.g., Chunk 0*, Chunk 2*) is stored in the Reserved LUN and tracked in the SnapView Map; the Source LUN itself is not modified]
Rollback - Snapshot Active (preserve changes)
[Diagram: Rollback with the Snapshot active — the Session’s view of the data, including the Secondary Host’s writes (Chunk 0*, Chunk 2*) and the saved original chunks (Chunk 3), is copied from the Reserved LUN back to the Source LUN, preserving the changes made to the Snapshot]
Rollback - Snapshot Deactivated (discard changes)
[Diagram: Rollback with the Snapshot deactivated — only the original chunks saved at the PIT (Chunk 0, Chunk 3) are copied from the Reserved LUN back to the Source LUN, discarding the changes made to the Snapshot]