  • Open Replicator Data Migration of File Server Clusters to VMware

    2008 EMC Proven TM Professional Knowledge Sharing

    Brian Russell, Sr. Storage Administrator Michael Aldo, Windows System Engineer

    A Leading Healthcare Insurer

    EMC Proven Professional Knowledge Sharing 2008

  • Table of Contents

INTRODUCTION AND BACKGROUND
OUR APPROACH
PLANNING THE STORAGE MIGRATION
    Migration Overview
    It's All In the Details! (Prep-Work)
    What is Open Replicator?
    Open Replicator Considerations
    Software / Licensing Used in this Migration
    Open Replicator Prep-Work
    Document Existing MSCS File Cluster Capacity
    Create New Storage for VMware Virtualized MSCS Cluster
    Document Open Replicator Pairs
    Zone and Mask New DMX-3 Capacity to VMware
    FA to FA Zoning for Open Replicator
    SAN Cabling
    FA to FA Masking for Open Replicator
VMWARE / MS CLUSTER PREP-WORK
    Fresh-Build MSCS
        Adding Shared Storage
        Disaster Recovery Cluster Restore
IMPLEMENT MIGRATION
    Create Microsoft Cluster
    Configuration of the Second Node
    P2V Migration with VMware Converter
        Physical-to-Virtual Migration
        Adding Shared Storage
        Virtual Machine Configuration
        Physical-to-Virtual Cleanup Tasks
OPEN REPLICATOR POST-MIGRATION TASKS
CONCLUSION

Disclaimer: The views, processes or methodologies published in this article are those of the authors. They do not necessarily reflect EMC Corporation's views, processes or methodologies.

• Introduction and Background

We would like to share a real-world challenge and our solution. Our challenge was to

migrate a large file server cluster into a virtualized server environment. That's not all; we

    had to combine this with a storage array hardware refresh.

    We are a Symmetrix shop with a growing VMware infrastructure. The Storage Team

    has adopted a rolling hardware refresh strategy. We replace the oldest storage array

    (EMC Symmetrix DMX-3000) allowing us to introduce newer technology (EMC

    Symmetrix DMX-3) every year. This hardware refresh requires us to migrate all hosts off

the four-year-old disk array over to the new disk array. This includes four Windows 2003 Microsoft Cluster Server (MSCS) File Servers, eight nodes in total, connecting to 16 TB of protected storage. Simultaneously, the Wintel Team has a similar hardware refresh

    strategy requiring them to replace the MSCS File Servers going out of maintenance.

    Given the complexity and increased business dependence on the environment, the team

    had to develop an approach that caused minimal downtime and allowed quick rollback.

    Our Approach

    We decided to migrate the physical File Clusters to our existing VMware infrastructure

    along with presenting new Symmetrix DMX-3 storage to these virtual hosts. We copied

    the file server data to the new storage array via Open Replicator.

    Figure 1 Migration Overview


    We combined multiple supported solutions from EMC, VMware and Microsoft to

    facilitate our two concurrent migrations. We only experienced a single, brief downtime

    during migration; this was one of the most exciting outcomes of our approach!

    Planning the Storage Migration

    Migration Overview

    Our MSCS File Servers access storage on two separate Symmetrix DMX-3000 arrays.

    To consolidate storage and simplify the future VMware environment, we decided to

    replicate all capacity from both existing arrays to a single, new Symmetrix DMX-3. In

    this article, we will follow one of our four MSCS File Servers through the migration

    process.

    The next few sections will demonstrate how we presented new, replicated storage to

    VMware ESX 3.0 Servers. We recreated the physical cluster nodes as Virtual Machines

    to operate across multiple ESX Servers. The MSCS physical disk resources, which hold

    the file share data, now point to the replicated SAN LUNs using a VMware feature called

    Raw Device Mapping (RDM).

    The Storage and VMware administrators performed most of the preparatory work in

    advance and in parallel. Figure 2 illustrates the flow of major tasks and their

    dependencies. It is a valuable reference as you read the steps presented in this article.


  • Figure 2 Migration Task Flowchart

    Throughout this document, Open Replicator may be abbreviated as OR.

It's All In the Details! (Prep-Work)

    Give yourself plenty of time to plan the storage migration. There are many components

    involved in any storage configuration change, and you do not want any surprises on

    migration day. Open Replicator introduces some additional configuration planning, but it

    is well worth it. Where we would normally leverage tape restore or network file copy, we

    can now perform block-level replication between two independent disk frames and allow

host access to storage while data is in transit. The overhead of this copy process is carried entirely by the Symmetrix DMX, which copies over the SAN to the remote devices.

    What is Open Replicator?

    Open Replicator for Symmetrix is software that provides a method for copying data from

    various types of arrays within a Storage Area Network (SAN) infrastructure to or from a

    Symmetrix DMX array. We configure and control Open Replicator sessions using the

    symrcopy command (available in Enginuity version 5671 and above). Only the

    control Symmetrix frame needs to be at version 5671 or above.


    Two licenses are available for Open Replicator. Open Replicator/LM is used for online

    data pulls only (to Symmetrix DMX). Open Replicator/DM is used for everything except

    online pulls. For more information about Open Replicator for Symmetrix please refer to

    the Solutions Enabler Symmetrix Open Replicator CLI Product guide available on Powerlink.

    Open Replicator Considerations

    We must consider some Open Replicator restrictions. For example, the target capacity

    for each device must be equal to or larger than the source, although protection levels

    and Meta volume configuration do not need to be identical. Most commonly, we

    leverage Open Replicator to perform in-place migrations where we present new

    (replicated) storage to an existing UNIX/Windows host. In the end, the data on the new

storage looks identical to the host, with the exception that the LUNs may be larger than the originals.

    Note: For detailed considerations and restrictions, refer to the EMC Solutions Enabler Symmetrix Open Replicator manual.

    Software / Licensing Used in this migration

    Always consult the EMC Support Matrix for current interoperability information.

o VMware ESX Server Enterprise 3.0.1
o VMware VirtualCenter 2.0.1
o VMware Converter Enterprise Edition 3.0.2
o Windows Server 2003 R2
o Microsoft Cluster Server 2003
o Solutions Enabler 6.4.04, used with the following features:
    o License Key for BASE / Symmetrix
    o License Key for ConfigChange / Symmetrix
    o License Key for Device Masking / Symmetrix
    o License Key for SYMAPI Feature: RcopyOnlinePull / Symmetrix

Open Replicator/LM is required on the control Symmetrix.

    Open Replicator Prep-Work

    Here are the steps we need to prepare for:

o Create new storage for the Virtualized MSCS Cluster (OR Control devices)
o Set FA Port Flags for new VMware Storage (OR Control devices)
o Map new VMware Storage to FA ports
o FA to FA Zoning (Open Replicator session)
o FA to FA Masking (Open Replicator session)
o SAN Cabling (ISLs and Hosts)
o Create Open Replicator Pairing File


  • Document Existing MSCS File Cluster Capacity

    First, we collected information about the existing capacity assigned to the MSCS

    Servers. We will be creating identical storage on the DMX-3, so we need to know the

    number of Symmetrix Devices we are replicating as well as capacity of each device.

    Solutions Enabler (symmaskdb) output shows all devices masked to each cluster node

    on both DMX-3000s (see below). This output displays the Symmetrix device IDs,

    capacity, and director mapping. All of this information is helpful in prep-work.

    The MSCS Cluster Nodes A and B server names are fscl1a and fscl1b, respectively, and

    these names may be interchanged in our examples.

symmaskdb -sid 7667 list capacity -host fscl1a

Symmetrix ID      : 000187827667
Host Name         : fscl1a
Identifiers Found : 10000000c93aea8f
                    10000000c93ae961

Device  Cap(MB)  Attr  Dir:P
------  -------  ----  ----------
000B          2        7C:1
003B          2        10C:1
0F79     414270  (M)   7C:1,10C:1
0F91     414270  (M)   7C:1,10C:1
0FA9     621405  (M)   7C:1,10C:1
0FC1     414270  (M)   7C:1,10C:1
0FD9     414270  (M)   7C:1,10C:1
0FF1     414270  (M)   7C:1,10C:1
1159     129459  (M)   7C:1,10C:1
1163     129459  (M)   7C:1,10C:1
11B3       4315        7C:1,10C:1
-----------------------------
MB Total: 2955992   GB Total: 2886.7

symmaskdb -sid 6776 list capacity -host fscl1a

Symmetrix ID      : 000187886776
Host Name         : fscl1a
Identifiers Found : 10000000c93ae961
                    10000000c93aea8f

Device  Cap(MB)  Attr  Dir:P
------  -------  ----  ----------
15F7     621405  (M)   8C:1, 9C:1
16F3     414270  (M)   8C:1, 9C:1
170B     414270  (M)   8C:1, 9C:1
-----------------------------
MB Total: 1449945   GB Total: 1416.0

symmaskdb -sid 7667 list capacity -host fscl1b

Symmetrix ID      : 000187827667
Host Name         : fscl1b
Identifiers Found : 10000000c93ae9a6
                    10000000c93ae905

Device  Cap(MB)  Attr  Dir:P
------  -------  ----  ----------
000C          2        7C:1
003C          2        10C:1
0F79     414270  (M)   7C:1,10C:1
0F91     414270  (M)   7C:1,10C:1
0FA9     621405  (M)   7C:1,10C:1
0FC1     414270  (M)   7C:1,10C:1
0FD9     414270  (M)   7C:1,10C:1
0FF1     414270  (M)   7C:1,10C:1
1159     129459  (M)   7C:1,10C:1
1163     129459  (M)   7C:1,10C:1
11B3       4315        7C:1,10C:1
-----------------------------
MB Total: 2955992   GB Total: 2886.7

symmaskdb -sid 6776 list capacity -host fscl1b

Symmetrix ID      : 000187886776
Host Name         : fscl1b
Identifiers Found : 10000000c93ae905
                    10000000c93ae9a6

Device  Cap(MB)  Attr  Dir:P
------  -------  ----  ----------
15F7     621405  (M)   8C:1, 9C:1
16F3     414270  (M)   8C:1, 9C:1
170B     414270  (M)   8C:1, 9C:1
-----------------------------
MB Total: 1449945   GB Total: 1416.0

    Figure 3 Using symmaskdb command to show host capacity

Note: You must first rename the alias wwn (awwn) in the masking database in order for the above symmaskdb list capacity command to work with a user-defined host name. We have Solutions Enabler installed on each cluster node, so we found the easiest way to accomplish this was to use the symmask discover hba -rename -v command (as opposed to using symmask rename).


As you can see here, symmask discover performs the operation for each visible Symmetrix:

c:\Program Files\EMC\SYMCLI\bin>symmask discover hba -rename -v

Symmetrix ID          : 000187827667
Device Masking Status : Success

WWN        : 10000000c93aea8f
ip Address : N/A
Type       : Fibre
User Name  : fscl1a/10000000c93aea8f

WWN        : 10000000c93ae961
ip Address : N/A
Type       : Fibre
User Name  : fscl1a/10000000c93ae961

Symmetrix ID          : 000187886776
Device Masking Status : Success

WWN        : 10000000c93ae961
ip Address : N/A
Type       : Fibre
User Name  : fscl1a/10000000c93ae961

WWN        : 10000000c93aea8f
ip Address : N/A
Type       : Fibre
User Name  : fscl1a/10000000c93aea8f

    The symmaskdb list capacity output in Figure 3 shows twelve devices assigned to each

node, excluding the gatekeepers. We needed to determine the association between logical volumes and each Symmetrix device, whether they needed to be created on the new storage array, and whether they needed to be included in the Open Replicator session.

    We pulled output from two more commands to begin matching these devices with logical volumes. First, we used Solutions Enabler (syminq) to capture output from each node of the cluster. You can also use sympd list, which has cleaner output but requires you to update the local symapi database (symcfg discover) on each node. The second command is an NT Resource Kit utility, called dumpcfg.exe. (We also used this utility later in the

    Disaster Recovery Cluster Restore section of the MSCS Fresh-Build for VMware). The

    output from both syminq and dumpcfg commands is shown below. Figure 4 shows how the dumpcfg Disk Number, Volume Label, and Drive letter related to the syminq

    PHYSICALDRIVE# and Symmetrix ID. This relationship is important as we prepared our

    pairing spreadsheet for Open Replicator.


  • Figure 4 (dumpcfg.exe output related to syminq)
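For reference, both utilities can simply be redirected to text files on each node to build this comparison (the output file names here are illustrative):

syminq > c:\prework\fscl1a-syminq.txt
dumpcfg.exe > c:\prework\fscl1a-dumpcfg.txt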

    Using this detail about each Symmetrix device in the MSCS Cluster, we worked with our

    Windows System Administrator to determine which volumes needed to be re-created on

the DMX-3 and which needed to be replicated. In our case, all shared disk

    resources had to be re-created and all had to be replicated, with one exception. The

    MSCS Quorum devices did not need to be replicated because we performed a fresh

    build of the MSCS inside VMware.

    Note: You will need to replicate the Quorum if you use the VMware Converter

approach to moving the MSCS Nodes to VMware.

    Create New Storage for VMware Virtualized MSCS Cluster

    Now that we have documented our existing MSCS Capacity, we know we have 11 Meta

    Devices which represent our file share data volumes and one Hyper Volume assigned

and used as the cluster quorum drive. There were three different volume sizes used for the file share volumes. All of this storage, including the cluster quorum, had to be re-created on

    the new DMX-3.


The chart in Figure 5 represents the three volume sizes used for the file share disks on

    the physical clusters. The chart also shows how we determined the number of required

    Hyper Volume Extensions (HVE) to form our new Meta Devices. The original capacity

consisted of (2) 126.42 GB volumes, (7) 404.56 GB volumes, and (2) 606.84 GB volumes.

    To support Open Replicator, we had to create devices of equal or greater capacity. To

    arrive at our new HVE count (per volume), we simply divided the original capacity in MB

by the standard HVE size we use on the DMX-3 and rounded up from there (making the new

    volumes slightly larger). On the new DMX-3, our standard HVE size is 35700 MB which

    equates to 38080 cylinders on this 64k Track FBA System. For example, we took the (2)

    129459 MB original volumes and divided those by the new HVE size of 35700 MB to get

    3.6 required HVEs. Obviously, we cannot form a new Meta using 3.6 HVEs, so we just

    rounded up to 4. Since we needed 2 of these Meta Volumes, we required 8 total HVEs.

Rounding up yielded a slightly larger Meta Volume of 142800 MB.

    The total HVEs Required column in Figure 5 simply represents the rounded number of

    HVEs multiplied by the quantity of Meta Volumes required.

Qty   Original        Divide by              Round up       Total HVEs   New Meta Size (MB)
      Capacity (MB)   Standard HVE Size      for new HVE    Required
 2    129459          / 35700 = 3.6          4              8            35700 * 4 = 142800
 7    414270          / 35700 = 11.6         12             84           35700 * 12 = 428400
 2    621405          / 35700 = 17.4         18             36           35700 * 18 = 642600

Figure 5 New Storage Device Size Calculations

We added up the Total HVEs Required column and determined that we needed free space to

    support a total of (128) 38080 cylinder (35700 MB) HVEs.

Note: In DMX (FBA) 64K Track architecture, you specify a number of cylinders to determine the Logical Volume size. One cylinder = 15 tracks; one track = 128 blocks; one block = 512 bytes. A DMX-3 Logical Volume of n cylinders therefore has a usable block capacity of n * 15 * 128 blocks, so one cylinder is 983040 bytes (960 kilobytes).
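As a quick check: 38080 cylinders * 960 KB per cylinder = 36,556,800 KB = 35,700 MB, which matches the standard HVE size quoted above.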

    Important: To control how Solutions Enabler will report on device sizes, note the following

    parameter in the symapi options file: SYMAPI_TRACK_SIZE_32K_COMPATIBLE. If this

    parameter is set to DISABLED, it will report tracks in native format based on frame type. See the

EMC Solutions Enabler Symmetrix CLI Command Reference for more details.


• Look for Free Space

We tier our capacity inside the DMX-3 using disk groups to isolate spindle speeds and drive sizes. We targeted the file server capacity on 300 GB, 10k RPM drives using RAID-5 protection. We used 2-Way Mir protection for the cluster quorum devices.

    First, we identified free configured space (unmapped HVEs) on the 300 GB drives. Then

    we looked for available unconfigured capacity to be created into the remaining required

HVEs. Using the symdisk list -by_diskgroup command, we identified the 300 GB drives in disk_group 1 and also more in disk_group 5. To identify the unmapped HVEs, we used the symdev list -noport -disk_group # command, where disk_group # was disk_group 1 and disk_group 5. We found (44) free 35700 MB RAID-5 HVEs this way, but we still needed (84) more.

    We checked if there was enough free space on the disks before we created new HVEs

inside either of these disk groups. We took the output from symdisk list -disk_group 1, for example, and entered it into Excel to arrive at the total number of HVEs we could create in a disk group. We divided the Free MB column by 35700 and then passed the

    results to the FLOOR function to round down to the nearest integer. Then we added all

    the results to determine how many HVEs we could make. We had plenty of capacity in

    this disk group; so we went on to create the Hypers.
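For example (the free-space figure here is hypothetical), a disk reporting 80000 MB free contributes FLOOR(80000 / 35700) = 2 HVEs, and summing that result over every disk in the group gives the total number of HVEs that can be created.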

A Word about Physical Distribution

You need to be mindful of the underlying physical spindles whenever you are creating

    devices and forming meta volumes. This is a little more complicated when using RAID5

    devices. Our DMX-3 is configured for RAID5 (3+1) which means each RAID5 hyper will

    span (4) physical spindles. The symdev output in Figure 6 shows a (12) member meta volume (device 12C1) consisting of RAID5 hyper volumes. You see that each RAID5

    hyper volume spans (4) physical disks. Our goal was to ensure that all members of a

    meta volume are on different physical spindles. Because this is a (12) member meta

consisting of RAID5 hypers, each of which spans (4) spindles, the meta volume is spread

    across (48) physical disks. The Enginuity code spreads the RAID5 hyper devices

    provided you have enough physical spindles.


#symdev -sid 6491 show 12c1
(filtered output showing the RAID-5 (3+1) meta volume breakdown and the hyper-to-physical-disk association; every hyper listed below is DA Hyper Num 22, Cap 11912 MB, Status RW, no spare, Disk Grp 1, Disk DA Cap 286102 MB)

Meta Device   RAID-5 Hyper Members (Disk DA:IT / DA Vol# / Member Num)
12C1 (M)      02D:D5/334/4   06D:D5/246/2   12D:D5/252/1   16D:D5/334/3
12C2 (m)      01A:D7/358/3   05A:D7/276/1   11A:D7/271/2   15A:D7/359/4
12C3 (m)      02C:C7/94/1    06C:C7/94/3    12C:C7/94/2    16C:C7/94/4
12C4 (m)      01B:C7/94/4    05B:C7/94/2    11B:C7/94/3    15B:C7/94/1
12C5 (m)      02A:Da/413/4   06A:Da/301/2   12A:Da/333/1   16A:Da/414/3
12C6 (m)      01C:Ca/142/4   05C:Ca/142/2   11C:Ca/142/3   15C:Ca/142/1
12C7 (m)      02B:C8/118/1   06B:C8/118/3   12B:C8/119/2   16B:C8/119/4
12C8 (m)      02A:C7/94/1    06A:C7/94/3    12A:C7/94/2    16A:C7/94/4
12C9 (m)      01D:C7/94/4    05D:C7/94/2    11D:C7/94/3    15D:C7/94/1
12CA (m)      02B:D7/366/4   06B:D7/278/2   12B:D7/285/1   16B:D7/367/3
12CB (m)      01C:D7/357/3   05C:D7/275/1   11C:D7/269/2   15C:D7/357/4
12CC (m)      02D:D7/358/4   06D:D7/270/2   12D:D7/276/1   16D:D7/358/3

    Figure 6 Symdev Output (12) Member Meta on (48) Spindles

Create Hyper Volumes (symconfigure)

We created (84) RAID-5 35700 MB hypers and one 4316 MB hyper from free space in disk_group 1, using Solutions Enabler (symconfigure) to perform a Symmetrix configuration change. The single hyper volume will be used as the MSCS Cluster Quorum. Figure 8 shows the symconfigure file we used to create our new volumes.


Form Meta Volumes

We created another file to be used in a configuration change session to form the meta volumes. Figure 8 shows the symconfigure file we used to form our Meta Volumes. These are the required meta volumes:

o Create (2) 4-member meta volumes
o Create (7) 12-member meta volumes
o Create (2) 18-member meta volumes
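As a sketch only (the exact create dev parameters depend on the Enginuity and Solutions Enabler release, the meta example reuses device 12C1 with members 12C2:12CC from Figure 6, and a concatenated meta is shown simply as one possible configuration), such command files contain statements along these lines:

create dev count=84, size=38080, emulation=FBA, config=RAID-5, data_member_count=3, disk_group=1;

form meta from dev 12C1, config=concatenated;
add dev 12C2:12CC to meta 12C1;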

Set FA Port Flags for New VMware Storage

We created another symconfigure file to be used in a change session to update the bit

    settings for the VMware servers. We consulted the following EMC Techbook for our port

    settings: VMware ESX Server Using EMC Symmetrix Storage Systems Solutions Guide. For

    more details on the SPC2 bit settings, see the EMC Host Connectivity Guide for VMware

    ESX Server. And of course, always consult the EMC Support Matrix for up-to-date port

    setting requirements. To view port flags on one of our New DMX-3 FA ports, we used

    the following command: symcfg -sid 6491 -sa 10d -p 0 -v list

SCSI Flags
{
  Negotiate_Reset(N)           : Disabled
  Soft_Reset(S)                : Disabled
  Environ_Set(E)               : Disabled
  HP3000_Mode(B)               : Disabled
  Common_Serial_Number(C)      : Enabled
  Disable_Q_Reset_on_UA(D)     : Disabled
  Sunapee(SCL)                 : Disabled
  Siemens(S)                   : Disabled
  Sequent(SEQ)                 : Disabled
  Avoid_Reset_Broadcast(ARB)   : Disabled
  Server_On_AS400(A4S)         : Disabled
  SCSI_3(SC3)                  : Enabled
  SPC2_Protocol_Version(SPC2)  : Enabled
  SCSI_Support1(OS2007)        : Disabled
}
Fibre Specific Flags
{
  Volume_Set_Addressing(V)     : Disabled
  Non_Participating(NP)        : Disabled
  Init_Point_to_Point(PP)      : Enabled
  Unique_WWN(UWN)              : Enabled
  VCM_State(VCM)               : Enabled
  OpenVMS(OVMS)                : Disabled
  AS400(AS4)                   : Disabled
  Auto_Negotiate(EAN)          : Enabled
}

    Figure 7 Symmetrix FA Port Flags Configured for VMware

    NOTE: From an array management best practice perspective, we do not mix operating system

    types on the same fibre channel port. We can then manage flags/characteristics/attributes at the

    fibre port level. It is possible to set the port with heterogeneous characteristics and then manage

    the individual characteristics or attributes on an initiator basis using the symmask command. Just

remember that the settings must be compatible with every system using that fibre channel port


  • when setting flags at the port level. Please refer to the Solutions Enabler Symmetrix Array Controls

    CLI Product Guide and the Solutions Enabler CLI Command Reference guide available on Powerlink.

Map New VMware Storage to FA Ports

The final Symmetrix configuration change was to create a mapping file to assign all new

    meta volumes and the quorum LUN to the FA ports dedicated to our VMware hosts.

    See Figure 8 for the config change mapping file.
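As a sketch only (the map statement syntax should be verified against your Solutions Enabler release; the device and LUN values reuse the quorum device 138E and decimal LUN 9 described later in the Adding Shared Storage section), the mapping file contains entries along these lines:

map dev 138E to dir 7D:0, lun=009;
map dev 138E to dir 10D:0, lun=009;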

Symmetrix Configuration Changes

For detailed instructions on using the symconfigure command, please see the EMC Solutions Enabler Symmetrix Array Controls CLI Product Guide (we used version 6.4). Each command file is applied in three steps:

symconfigure -sid SymmID -f CmdFile preview
symconfigure -sid SymmID -f CmdFile prepare
symconfigure -sid SymmID -f CmdFile commit

    Figure 8 Steps to Configure New Space

    Document Open Replicator Pairs

    We created a detailed spreadsheet to track the original logical volume relationship of

    Open Replicator Remote (source) devices paired with Open Replicator Control (target)

    devices. We populated the spreadsheet with syminq.exe output from each physical

    cluster node and matched the information in the dumpcfg.exe output. We also matched

    the newly created Open Replicator Control devices based on capacity, and included the

    new LUN address information in both HEX and DEC format. The VMware administrator

    found this pairing sheet valuable; he could compare source Logical Volume to the new


    DMX-3 device sizes with decimal LUN ID. The decimal LUN ID is needed to identify

    volumes scanned into VMware and assign Raw Device Mapping to the correct

    virtualized cluster nodes. HEX LUN addresses were pulled from our mapping

configuration file. You can also list LUN addresses for devices already mapped to a

    director.

The following command would list addresses for FA7DA: symcfg -sid 6491 -fa 7d -p 1 -address list. We then used Excel's HEX2DEC function to convert the addresses to decimal.

    Figure 9 Open Replicator Pairing Sheet

Create Open Replicator Pairing File to be Used with Symrcopy

We used the pairing spreadsheet to isolate which devices we needed to replicate. The correct format to be used with Open Replicator:

#OR_CONTROL FILE FSCL1
#OR CONTROL DEVS ON LEFT, REMOTE DEVS ON RIGHT
symdev=000190106491:12C1  symdev=000187827667:0F79
symdev=000190106491:12CD  symdev=000187827667:0F91
symdev=000190106491:0EDE  symdev=000187827667:0FA9
symdev=000190106491:12D9  symdev=000187827667:0FC1
symdev=000190106491:12E5  symdev=000187827667:0FD9
symdev=000190106491:12F1  symdev=000187827667:0FF1
symdev=000190106491:0FBA  symdev=000187827667:1159
symdev=000190106491:0FBE  symdev=000187827667:1163
symdev=000190106491:0EF0  symdev=000187886776:15F7
symdev=000190106491:12FD  symdev=000187886776:16F3
symdev=000190106491:1309  symdev=000187886776:170B

    Figure 10 Open Replicator Pairing file


  • Note: For detailed syntax of the symrcopy command, refer to the EMC Solutions Enabler Symmetrix CLI Command Reference manual.

    Zone and Mask New DMX-3 Capacity to VMware

    The VMware environment consists of 3 physical VMware servers (APESX1, APESX2,

    and APESX3). Each has four HBAs, two per fabric, zoned to DMX-3 SymmID 6491

    FA7DA and FA10DA. The four HBAs are referred to as A, B, C and D (see Figure 11):

SYMMETRIX ID 6491 FA7DA
10000000c94cacb4 is APESX1-A
10000000c94caee6 is APESX2-A
10000000c962ac07 is APESX3-A
10000000c94cadfd is APESX1-C
10000000c94cad67 is APESX2-C
10000000c9676a4a is APESX3-C

SYMMETRIX ID 6491 FA10DA
10000000c94cacb3 is APESX1-B
10000000c94caee5 is APESX2-B
10000000c962ac06 is APESX3-B
10000000c94cadfe is APESX1-D
10000000c94cad68 is APESX2-D
10000000c9676a4b is APESX3-D

    Figure 11 VMware HBA WWNs zoned to storage FAs

    Note: Our VMware server paths to storage and masking are designed to support EMC Static Load

    balancing using VMware preferred paths as documented in the EMC Techbook, VMware ESX Server

    Using EMC Symmetrix Storage Systems Version 2.0.

    We decided to mask MSCS Node A resources to HBAs A and B and similarly mask the

    Node B resources to HBAs C and D to support load balancing on our VMware paths to

    storage. Later, the VMware administrator will configure preferred paths in VMware.

    #Mask devs to (ESX A and C) WWN's on FA7DA

    #Devs associated with MSCS Cluster NODE-B Resources:

    symmask -sid 6491 -wwn 10000000c962ac07 add devs 12D9,12E5,12F1,0FBE,0EF0,1309 -dir 7d -p 0

    symmask -sid 6491 -wwn 10000000c94caee6 add devs 12D9,12E5,12F1,0FBE,0EF0,1309 -dir 7d -p 0

    symmask -sid 6491 -wwn 10000000c94cacb4 add devs 12D9,12E5,12F1,0FBE,0EF0,1309 -dir 7d -p 0

    #Devs associated with MSCS Cluster NODE-A Resources:

    symmask -sid 6491 -wwn 10000000c94cad67 add devs 12C1,12CD,0EDE,0FBA,12FD -dir 7d -p 0

    symmask -sid 6491 -wwn 10000000c94cadfd add devs 12C1,12CD,0EDE,0FBA,12FD -dir 7d -p 0

    symmask -sid 6491 -wwn 10000000c9676a4a add devs 12C1,12CD,0EDE,0FBA,12FD -dir 7d -p 0

    #Mask devs to (ESX HBA B and HBA D) WWN's to FA10DA

    #These Devs associated with MSCS Cluster NODE-B Resources:

    symmask -sid 6491 -wwn 10000000c94cadfe add devs 12D9,12E5,12F1,0FBE,0EF0,1309 -dir 10d -p 0

    symmask -sid 6491 -wwn 10000000c94cad68 add devs 12D9,12E5,12F1,0FBE,0EF0,1309 -dir 10d -p 0

    symmask -sid 6491 -wwn 10000000c9676a4b add devs 12D9,12E5,12F1,0FBE,0EF0,1309 -dir 10d -p 0

    #Devs associated with MSCS Cluster NODE-A Resources:

    symmask -sid 6491 -wwn 10000000c94cacb3 add devs 12C1,12CD,0EDE,0FBA,12FD -dir 10d -p 0

    symmask -sid 6491 -wwn 10000000c94caee5 add devs 12C1,12CD,0EDE,0FBA,12FD -dir 10d -p 0

    symmask -sid 6491 -wwn 10000000c962ac06 add devs 12C1,12CD,0EDE,0FBA,12FD -dir 10d -p 0

    # This masking is designed to support VMware best practice for Static Load Balancing as well as distribute IO

    traffic between all 4 HBAs.

    Figure 12 VMware Masking for MSCS Cluster devs


    I gave my VMware system administrator the OR pairing spreadsheet after we zoned and

    masked new devices to VMware. He immediately began scanning for the new storage

    devices. Using the new LUN ID references in the spreadsheet, which correspond to the

    original MSCS volumes, it was relatively easy to identify the scanned-in devices and

    assign storage to the new VMware Guest MSCS Nodes using Raw Device Mapping.

    See the VMware section later in this document entitled Adding Shared Storage.

    No steps were required to align the new devices since the track partition alignment used

    on the original Windows operating systems was replicated to the new storage devices.

    Please remember that if partitions were not aligned properly before, you will not be able

    to modify the track alignment on the new storage devices.

    Note: For more information on proper alignment of disk partition, refer to the EMC Engineering

    white paper Using diskpar and diskpart to Align Partitions on Windows Basic and Dynamic Disks.

    FA to FA Zoning for Open Replicator

    The output from syminq, on each MSCS node, confirms which FA ports were associated

    with each cluster HBA wwn. We then created new zones for the DMX-3000 FA ports

and the DMX-3 FA ports using Connectrix Manager. We immediately moved the new

    zones to our production zone set and activated them. This allowed us to proceed to the

    next step and prepare our OR masking using actual FA logon information. See Figure

13 for our Open Replicator FA to FA topology and the zone names we created.


  • Figure 13 FA to FA Topology and OR Zone Names

    SAN Cabling

The Open Replicator Remote Devices are on DMX-3000 FA ports configured at 2 Gb/s. The Open Replicator Control device FA ports are 4 Gb/s capable. We should have been able to maximize the available throughput for all FA ports involved, since we were pulling data from two different 2 Gb/s capable DMX-3000s to 4 Gb/s Control FAs.

    The only potential bottleneck was the number of ISLs and hops between the FA ports

involved in the Open Replicator session. So, we added (2) 2 Gb/s ISLs (per fabric)

    directly between the two Connectrix directors attached to new and old storage.

    FA to FA Masking for Open Replicator

    We added Open Replicator masking entries to both Symmetrix DMX-3000s, allowing

    the DMX-3 Control FAs to access the remote storage devices. We knew which devices

    to include based on our MSCS File Cluster Capacity research done earlier.

    With the FA to FA zones in place, we confirmed the zones and cabling were correct, and

    prepared our Open Replicator FA to FA masking file. We used the SYMCLI command,

    symmask list logins, to verify the DMX-3 Control FAs were logged onto the DMX-

3000 remote FAs. We ran the command once for each of the four remote FAs to confirm we had good FA logins across the board. Actually, we ran our command before and

    after the FA to FA zoning was in place, so we could compare and identify the new logon.

symmask -sid 7667 list logins -dir 7c -p 1

Symmetrix ID            : 000187827667
Director Identification : FA-7C
Director Port           : 1

                        User-generated                     Logged On
Identifier       Type   Node Name  Port Name         FCID    In   Fabric
---------------- -----  ---------  ----------------  ------  ---  ------
10000000c93ae9a6 Fibre  fscl1b     10000000c93ae9a6  690913  Yes  Yes
10000000c93aea8f Fibre  fscl1a     10000000c93aea8f  690813  Yes  Yes
10000000c93aeb71 Fibre  dvcl1b     10000000c93aeb71  690f13  Yes  Yes
10000000c93aebde Fibre  dvcl1a     10000000c93aebde  690e13  Yes  Yes
5006048ad52e6e96 Fibre  NULL       NULL              757c13  No   Yes

    Figure 14 Display wwn login information for Symm7667 FA7CB

    We then copied the new wwn info from the list logins output and put it into our masking

    file. Keep in mind, we are just creating a masking file to be used later, after the MSCS

    Physical Clusters are brought offline. Using the wwn information in Figure 14 and our

    cabling and zoning information in Figure 13, we can create our masking entries. The new


logged-on wwn in Figure 14 is from Symm 6491 FA7DA. We need to assign this wwn to Symm 7667 FA7CB and Symm 6776 FA9CB as follows:

    symmask -sid 7667 -wwn 5006048ad52e6e96 add devs

    0F79,0F91,0FA9,0FC1,0FD9,0FF1,1159,1163 -dir 7c -p 1

    Using symmask login output from the other zoned DMX-3000 FA ports, we added the

    remaining masking entries to our masking file to be used for OR:

    symmask -sid 7667 -wwn 5006048ad52e6e99 add devs

    0F79,0F91,0FA9,0FC1,0FD9,0FF1,1159,1163 -dir 10c -p 1

    symmask -sid 6776 -wwn 5006048ad52e6e99 add devs

    15F7,16F3,170B -dir 8c -p 1

    symmask -sid 6776 -wwn 5006048ad52e6e96 add devs

    15F7,16F3,170B -dir 9c -p 1

    Figure 15 FA to FA Masking entries added to support Open Replicator Session

    VMware / MS Cluster Prep-work

    We will outline both a fresh-build approach and a physical-to-virtual (P2V) migration

using VMware Converter. We tried both methods and found the fresh-build approach a better fit for our environment, but we also saw great value in the P2V migration with Converter, which might work best in other environments. The advantage of the fresh-build method is that you start with no unneeded software, device drivers, or remnants

    from previous software. With the P2V migration, you need to uninstall any software and

    drivers left over from the source physical server. However, the shorter downtime with a

    P2V migration may outweigh the benefits from a cleaner operating system. We were

    able to select a slightly longer server downtime for the benefits of a cleaner environment.

We must complete the same prep work on our ESX servers regardless of which method we

    choose. We created a separate RAID 1 logical drive on each of our ESX servers to

    store our virtual machines because the boot disk for Windows clusters must reside on

    local storage. I will refer to our ESX servers as apesx1 and apesx2. We created new

    datastores using VirtualCenter and labeled them vmcl1 on apesx1 and vmcl2 on apesx2

    using the entire capacity. For the heartbeat network, we used a separate network card

    on each ESX server and connected them to a private VLAN created on our Cisco

    network switch. This isolated the heartbeat traffic following Microsoft best practices for

    clusters.


  • We then created a new virtual machine port group called cluster-hb on each ESX server

    including this network card. The speed was set to 10 Mbps, half duplex per Microsoft

    best practice. Next, we masked all the new devices from the DMX-3 to apesx1 and

    apesx2. In VirtualCenter, we rescanned our storage adapters for the new LUNs with

    only the Scan for New Storage Devices option checked.

    Figure 16
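As an aside, the cluster-hb port group and the 10 Mbps half-duplex NIC setting described above can also be configured from the ESX 3.x service console rather than the VirtualCenter GUI; a sketch, assuming a vSwitch named vSwitch1 and an uplink named vmnic2 (both names are illustrative):

esxcfg-vswitch -A cluster-hb vSwitch1
esxcfg-nics -s 10 -d half vmnic2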

    Fresh-Build MSCS

    This fresh-build approach follows Disaster Recovery procedures we created for

    dissimilar hardware recovery. The benefit is less post-migration cleanup work, and we

    ran the new environment on a clean operating system build.

To create new guest VMs for Microsoft Clustering, you will need to reference the VMware documents Basic System Administration and Setup for Microsoft Cluster Service, both available at vmware.com. Following the Clustering Virtual Machines

    across Physical Hosts procedures outlined in the Setup for Microsoft Cluster Service

    document, we created a new virtual machine on the datastore vmcl1 with two network

adapters, two processors, 2048 MB of RAM, and a 24 GB boot disk. We installed 32-bit Windows Server 2003, Enterprise Edition with Service Pack 2. We named the server

    fscl1a-new so that we could bring the new virtual server online at the same time as the

    source server we were migrating. When the build was complete, we used Virtual

    Machine Console and installed VMware Tools.

    Once complete, we used the clone procedure to create our second node on apesx2,

    vmcl2 datastore; but we needed to use the Customization Wizard to customize the new

    virtual machine. We specified fscl1b-new for the computer name and generated a new


    SID when prompted. When the clone was complete, we booted the server to complete

    the build. At this point we had two virtual servers built and we were ready to add shared

    disk and start the Microsoft Cluster configuration.

    Adding Shared Storage

    In this step, we were only concerned with the 4 GB quorum LUN. With both fscl1a-new

    and fscl1b-new powered off, the 4 GB quorum LUN was then added to each virtual

    cluster server with Raw Device Mappings. Referring to the Open Replicator Pairing

    Sheet, the quorum device 138E has a decimal LUN number of 9. Following the

VMware storage device naming convention vmhba<adapter>:<target>:<LUN>:<partition>, this quorum LUN 9 is listed as vmhba1:1:9:0. We stored the LUN mapping

    with the Virtual Machine and selected physical compatibility mode.

    /vmfs/devices/disks/vmhba1:1:9:0 4.215 GB

    Figure 17

You must select a new virtual device node other than SCSI 0. We selected SCSI 1 for

    our shared storage and assigned the quorum LUN to SCSI 1:0. After selecting Finish,

    we saw our new hard disk mapping and a new SCSI controller. Then we selected the

    SCSI controller, and verified the SCSI type was set to LSI Logic and the SCSI Bus

    Sharing was set to Physical.

    Figure 18
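As an aside, the same physical compatibility (passthrough) RDM pointer can be created from the ESX 3.x service console with vmkfstools; a sketch, using the raw device path shown in Figure 17 and an illustrative destination path on the vmcl1 datastore:

vmkfstools -z /vmfs/devices/disks/vmhba1:1:9:0 /vmfs/volumes/vmcl1/fscl1a-new/quorum-rdm.vmdk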


  • Then, we repeated this procedure on the second server. Once the quorum disk was

    added to both nodes, we were able to boot up fscl1a-new. We needed to reboot

    because we added new hardware to the server. After the reboot, we now saw the

    quorum disk when opening Disk Management. We shut down fscl1a-new and powered

    up fscl1b-new. Then, we verified the existence of the quorum disk after a reboot. Again,

    we did not initialize the disk. Now we were able to install Microsoft Cluster Service.

    NOTE: The Setup for Microsoft Cluster Service document references a Microsoft Cluster

    Service guide at http://www.microsoft.com/downloads/details.aspx?FamilyID=96f76ed7-

    9634-4300-9159-89638f4b4ef7&displaylang=en#EIAA. For our migration, we performed

    the cluster disaster recovery procedure we use for unlike hardware restores.

    Disaster Recovery Cluster Restore

In this step, we installed Microsoft Cluster Service on fscl1a-new and then restored the

    cluster registry from the source physical cluster. This process restored all of our cluster

    groups and resources. At this point, you will need two utilities from the Windows 2000

    Resource Kit: dumpcfg.exe and regback.exe. Dumpcfg.exe is used to gather the disk

    signatures from the source server so they can be written on the new virtual servers.

    Regback.exe is used to backup and restore the cluster registry.

    First, we created a C:\Restore folder on the source server running the quorum resource,

    and copied both utilities to it. From a command prompt, we ran regback.exe

    C:\Restore\clustername-clusbak machine cluster. Then we ran dumpcfg.exe >

    C:\Restore\clustername-dsksig.txt. We copied the restore folder to a network

share so it could be accessed by the new virtual servers. Next, we booted fscl1a-new and used Disk Management to initialize the quorum disk. We did not convert it to a Dynamic Disk. We formatted the drive NTFS and assigned the drive letter Q. The

    volume label is then always set to match the drive letter; so in this case, we set it to

    Drive Q.

    At this point, we copied the Restore folder created on the source server to fscl1a-new to

    C:\. From a command prompt, we ran C:\Restore\dumpcfg.exe. This listed all the

    volumes with associated disk numbers and signature numbers for the new virtual server.

    In the [Volumes] section, each disk volume will have a Volume Label such as Volume

    Label: Drive Q.


    Example: [Volumes]

    Volume #1:

    Volume name: \\?\Volume{3c8df3b7-eae0-11d8-bb40-505054503030}\

    Drive letter: Q:

    Volume Label: Drive Q

    File System: NTFS

    Boot\Boot.ini\System Volume:

    Volume Type: Simple Volume \ Logical Drive

    Number of members: 1

    Member #1: Partition - Disk: 1, StartingOffset: 32256 bytes, Length: 4314 MB

Volume Label: Drive Q has an associated Disk number, in this example Disk: 1:

    Member #1: Partition - Disk: 1, StartingOffset: 32256 bytes, Length: 4314 MB

    In the [Disks] section each disk will be listed by Disk Number and will list a signature.

[DISKS]
Disk Number: 1  Signature: 4D3509BA
Disk Number: 2  Signature: 4D3509BC
Disk Number: 3  Signature: EF7729A0
Disk Number: 0  Signature: B326B326

    For the example of Volume Label: Drive Q, the signature is 4D3509BA for Disk Number:

    1. We opened the C:\Restore\clustername-dsksig.txt file in notepad, and found the

    Volume Label used for our quorum drive in the [Volumes] section. We always use Drive

    Q. For Drive Q, it will list the disk number which you can use to find the original drive

    signature in the [DISKS] section. In our case, the original disk signature was 4D3509BB.

    This signature must be restored with dumpcfg.exe to the new quorum LUN presented to

fscl1a-new. The command line syntax is C:\Restore\dumpcfg.exe -S<signature> <disk number>. Following our example, we ran dumpcfg.exe -S4D3509BB 1.

    Running dumpcfg.exe again listed all the disk signatures to confirm the quorum disk had

    the new restored signature.

    Then we shut down fscl1a-new and booted fscl1b-new. When we opened Disk

    Management, we saw the quorum disk properly formatted with a Volume Label of Drive

    Q. The only thing we needed to do was change the drive letter mapping to Q. Next we

    shut down fscl1b-new. At this point, we were ready to start the migration. Both source

    servers should be shut down.


  • Implement Migration

At this point, all Storage and VMware prep-work has been completed, and it's time to

    begin the downtime cutover. Figure 19 illustrates the tasks we completed. The physical

    MSCS Nodes were taken offline and powered down to prevent access to the OR

    Remote devices during the copy operation.

    Figure 19 Completed Tasks (grey) before we start Open Replicator Copy

    With the OR FA to FA zoning already in place, we added the OR FA to FA masking

    entries to the DMX-3000s. Then, the Open Replicator Session was created and

activated. Figure 20 shows the syntax for creating the OR session. Notice we did not use the -copy option. The default mode for the copy session is CopyOnAccess, which

    copies tracks as they are accessed by the hosts connected to the OR Control Devices.

    Basically, we wanted to get the MSCS File Servers running in VMware before allowing

    Open Replicator to copy all tracks. After we activated the OR session we told the

    VMware administrator that he could begin his work to bring up the MSCS Clusters in

    VMware using either the VMware Fresh-Build Approach or the VMware Converter

    Option (Physical-to-Virtual).


    Figure 20 Open Replicator Session Creation and Activation

    Figure 20 illustrates how we started our hot pull operation from remote devices on two

    different Symmetrix DMX-3000s. We replicated 11 devices; this figure shows 2.
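As a sketch (the exact options shown in Figure 20 may differ), a hot online pull without the -copy option is created and activated from the pairing file in Figure 10 along these lines:

symrcopy -file or_device_pairs create -hot -pull
symrcopy -file or_device_pairs activate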

    Once the MSCS File Servers and their resources were brought back online (running on

    VMware), we started the file copy and set the copy pace. We changed the Open

Replicator session mode to CopyInProg with the following syntax:

    symrcopy -file or_device_pairs set mode -copy

    Then we set the copy session pace:

symrcopy -file or_device_pairs set pace 3

    The default pace is 5. Pace goes from 0-9 with 0 being fastest.

    We did not want to affect our backup windows, so we decided to copy our data during

    business hours at a medium pace. Our users were not affected. While the tracks are

    copying you can query the progress of the session per device as shown below.


symrcopy -file or_device_pairs query -i 30 -c 10

Legend:
R: (Remote Device Vendor Identification)
   S = Symmetrix, C = Clariion, . = Unknown.
I: (Remote Device Specification Identifier)
   D = Device Name, W = LUN WWN, World Wide Name.

Flags:
(C): X = The background copy setting is active for this pair.
     . = The background copy setting is not active for this pair.
(D): X = The session is a differential copy session.
     . = The session is not a differential copy session.
(S): X = The session is pushing data to the remote device(s).
     . = The session is pulling data from the remote device(s).
(H): X = The session is a hot copy session.
     . = The session is a cold copy session.
(U): X = The session has donor update enabled.
     . = The session does not have donor update enabled.

Device File Name : or_control_cl1

Control Device               Remote Device                     Flags  Status       Done
                  Protected
SID:symdev        Tracks     Identification                RI  CDSHU  CTL REM      (%)
----------------- ---------  ----------------------------  --  -----  -----------  ----
000190106491:12C1    434007  000187827667:0F79             SD  X..X.  CopyInProg    93
000190106491:12CD    376476  000187827667:0F91             SD  X..X.  CopyInProg    94
000190106491:0EDE    581598  000187827667:0FA9             SD  X..X.  CopyInProg    94
000190106491:12D9    359717  000187827667:0FC1             SD  X..X.  CopyInProg    94
000190106491:12E5    401421  000187827667:0FD9             SD  X..X.  CopyInProg    93
000190106491:12F1    358848  000187827667:0FF1             SD  X..X.  CopyInProg    94
000190106491:0FBA     13518  000187827667:1159             SD  X..X.  CopyInProg    99
000190106491:0FBE         0  000187827667:1163             SD  X..X.  Copied       100
000190106491:0EF0         0  000187886776:15F7             SD  X..X.  Copied       100
000190106491:12FD         0  000187886776:16F3             SD  X..X.  Copied       100
000190106491:1309         0  000187886776:170B             SD  X..X.  Copied       100

Total             ---------
  Track(s)          2525585
  MB(s)              157849

We terminated the Open Replicator session once every device's status indicated Copied:

symrcopy -file or_device_pairs terminate

Execute 'Terminate' operation for the 11 specified devices
in device file 'or_control_cl1' (y/[n]) ? y

'Terminate' operation execution is in progress for the device list
in device file 'or_control_cl1'. Please wait...

'Terminate' operation successfully executed for the device list
in device file 'or_control_cl1'.

    Create Microsoft Cluster

    Continuing with the fresh-build MSCS approach, we booted fscl1a-new, changed the

    name to fscl1a, and changed the network address of the production port group to match

    the source server IP. Then we shut down fscl1a and repeated for fscl1b-new by

    renaming the server to fscl1b and changing the network address of the production port

    group to match the source server IP.

    Once completed, we shut down fscl1b. We were now ready to create a new cluster on

    fscl1a using Microsoft standard practices with the cluster-hb network as the private

    network for internal cluster communications.


    Once completed, we stopped the Cluster Service and set it to manual startup. In the

    Start parameters field for the Cluster Service, we entered NoQuorumLogging, and

    clicked on the Start button in the properties window of the Cluster service. Then we

    navigated to Q: and renamed quolog.log under Q:\MSCS to quolog.log.old. We stopped

the Cluster Service. We opened Regedit, highlighted HKLM\Cluster, selected File > Unload Hive from the menu bar, and clicked Yes at the prompt. At this point, we copied the clustername-clusbak file from C:\Restore to C:\Windows\Cluster. Then we renamed

    clusdb to clusdb.old and cluster.log to cluster.old. Then we renamed clustername-

    clusbak to clusdb.
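
    The same steps can be approximated from a command prompt. This is only a sketch of what we did through the GUI; it assumes the clussvc service name, the paths shown above, and a Windows version that accepts cluster service startup parameters on net start.

    rem Start the Cluster Service without quorum logging, retire the old log, stop it
    net start clussvc /noquorumlogging
    ren Q:\MSCS\quolog.log quolog.log.old
    net stop clussvc

    rem Unload the cluster hive, then swap in the restored cluster database
    reg unload HKLM\Cluster
    copy C:\Restore\clustername-clusbak C:\Windows\Cluster\
    ren C:\Windows\Cluster\clusdb clusdb.old
    ren C:\Windows\Cluster\cluster.log cluster.old
    ren C:\Windows\Cluster\clustername-clusbak clusdb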

    Next, we started the Cluster Service with the resetquorumlog parameter, opened Cluster Administrator, and clicked Yes to All. The following resources failed, as we expected: virtual server network names (where Kerberos Authentication is enabled), third-party server applications, file shares, and disk resources other than Q. All third-party server applications needed to be reinstalled once the migration was complete, so we deleted any resources or groups associated with them.

    In our case, we had cluster-aware backup software, so we needed to delete its Network Name, IP Address, and Generic Service resources. If your Network Name resources have Kerberos Authentication enabled, you will need to delete their objects in Active Directory before they can be brought online. In Cluster Administrator, double-click a failed Network Name resource and select the Parameters tab. Uncheck Enable Kerberos Authentication, click Apply, then choose Yes. In Active Directory Users and Computers, locate the Network Name object and delete it. Back in Cluster Administrator, check Enable Kerberos Authentication and click OK. Right-click the Network Name resource and select Bring Online. Repeat for all failed Network Name resources. You can now stop the Cluster Service and remove the resetquorumlog parameter. Keep the Cluster Service set to manual start, and shut down fscl1a.
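
    For reference, the restart with a fresh quorum log and a quick check of resource and group state can also be done from the command line; cluster.exe output formats vary by Windows version, so treat this as a sketch.

    rem Start the Cluster Service with a fresh quorum log, then review state
    net start clussvc /resetquorumlog
    cluster resource
    cluster group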

    The remaining new devices were then added to fscl1a and fscl1b. Following the same procedure, we added the new devices to each server using Raw Device Mapping. The quorum LUN was assigned to SCSI 1:0, so each additional device was assigned to SCSI 1:1 and higher. We booted fscl1a and, as with the addition of the quorum disk, rebooted when prompted that new hardware had been detected. After the reboot, we opened Disk Management and verified that all the newly added devices were visible. We then changed all drive letter mappings to match the volume labels, as we had done with drive Q.

    This can be challenging in environments with a large number of devices. In our environment, we often had to park a volume on an unused drive letter temporarily in order to free up the letter we actually needed; in that case, a reboot is required before the letter becomes available again. Once complete, we shut down fscl1a and booted fscl1b. We repeated the same drive letter mapping procedure on that server, and shut down fscl1b when complete.
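
    Drive letters can also be reassigned with diskpart, which can be quicker than Disk Management when many volumes are involved. The volume numbers and letters below are placeholders, and, as noted above, a reboot may still be needed before a released letter can be reused.

    rem reletter.txt - run with: diskpart /s reletter.txt
    rem Park the volume currently holding G: on an unused letter,
    rem then assign G: to the volume that should own it
    list volume
    select volume 3
    remove letter=G
    assign letter=T
    select volume 5
    assign letter=G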

    Configuration of the Second Node

    It was now time to add the second node to the cluster. We booted fscl1a, changed the Cluster Service to start automatically, and started it. Then we opened Cluster Administrator and confirmed that all of our resources were online. In the left column, we right-clicked fscl1b, selected Evict Node, and clicked OK at the RPC error message. Then we booted fscl1b. Once it was online, we added the server to the cluster following the standard Microsoft procedure for adding an additional node. We tested failover of a few resource groups to verify the configuration and then installed our backup software. The migration was complete, and all cluster resources remained online during the Open Replicator copy process.
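
    The evict operation can also be scripted with cluster.exe; the cluster name fscl1 below is an assumption for illustration only.

    rem Evict the stale fscl1b node record, then list node status after it rejoins
    cluster /cluster:fscl1 node fscl1b /evict
    cluster /cluster:fscl1 node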

    P2V Migration with VMware Converter

    Using VMware Converter, we took a snapshot of each source server and imported it onto our ESX servers. The benefit of VMware Converter is less pre-work and slightly shorter downtime compared to the fresh-build approach.

    The VMware Converter application can take a snapshot of a running server, but for our cluster migration we found it easier to take the source server completely offline. Before taking down the source servers, we set the Cluster Service to manual startup. We also found it beneficial to disable any vendor-specific software on the source servers prior to shutdown.
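
    The service changes we made on each source node before shutdown can be scripted; the vendor agent service name below is a placeholder, and note that sc.exe requires the space after start=.

    rem Set the Cluster Service to manual and disable a vendor-specific hardware agent
    sc config clussvc start= demand
    sc config VendorAgentSvc start= disabled
    shutdown /s /t 0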

    Physical-to-Virtual Migration

    For our P2V migration, we followed the procedures outlined in the VMware Converter 3.0.2 User's Manual under "Using the Converter Boot CD for Local Cold Cloning." We cloned one cluster node at a time to minimize downtime. We started by moving all cluster groups to fscl1b. After completing the P2V migration of fscl1a, we booted fscl1a, moved all resources back, and started the migration of fscl1b. Fscl1a was migrated to apesx1 on the vmcl1 datastore, and fscl1b went to apesx2 on the vmcl2 datastore.

    Adding Shared Storage

    When the P2V migration was complete, we shut down the source servers and started the same Open Replicator copy process used for the fresh-build MSCS approach. After the OR session was activated, we used VirtualCenter to map all the new devices to the newly created virtual machines fscl1a and fscl1b using Raw Device Mappings. We stored the LUN mapping with the virtual machine and selected physical compatibility mode. Just as we did with the fresh-build approach, we selected a new virtual device node other than SCSI 0. We selected SCSI 1 for our shared storage and assigned the quorum LUN to SCSI 1:0 and the remaining LUNs to SCSI 1:1 and higher. After selecting Finish, we saw the new hard disk mappings and a new SCSI controller. We then selected the SCSI controller and verified that the SCSI type was set to LSI Logic and SCSI Bus Sharing was set to Physical.
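
    For illustration only, these shared-storage settings end up expressed in the virtual machine's .vmx file with entries along the lines of the following; exact keys and the RDM pointer file name vary by ESX version and datastore layout, so treat this as a sketch rather than a definitive listing.

    scsi1.present = "TRUE"
    scsi1.virtualDev = "lsilogic"
    scsi1.sharedBus = "physical"
    scsi1:0.present = "TRUE"
    scsi1:0.fileName = "fscl1a_quorum.vmdk"
    scsi1:0.deviceType = "scsi-hardDisk"

    The fileName here points to the RDM pointer file that VirtualCenter stores with the virtual machine; the underlying LUN is referenced from that descriptor rather than from the .vmx itself.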

    Virtual Machine Configuration

    On the Virtual Machine properties page, we removed the USB and serial devices captured by the converter process. We then set the RAM to 2048 MB, set the CPU count to two, and configured one network adapter on our production network and one on our cluster-hb network. Next, we booted fscl1a and installed VMware Tools. You will be prompted to reboot after new hardware is detected during the first boot-up after the migration. Some services tied to the HP hardware will not start, but they can be ignored at this point.

    After rebooting, we set the network configuration for both network adapters to match the source server. At this point, we set the Cluster Service to the Automatic startup type and started it, then opened Cluster Administrator and verified that all resources were online. Next, we booted fscl1b and installed VMware Tools. After rebooting, we set the network configuration for both network adapters to match the source server, set the Cluster Service to the Automatic startup type, and started it. We tested failover of groups to verify operation. At this point, the migration was complete, and all file shares remained online during the Open Replicator copy session.
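
    Re-applying the source server's addresses can be done with netsh; the connection names and address values below are placeholders for our production and cluster-hb adapters.

    rem Production adapter - static IP copied from the source server (values are examples)
    netsh interface ip set address name="Production" static 10.1.1.21 255.255.255.0 10.1.1.1 1
    netsh interface ip set dns name="Production" static 10.1.1.10

    rem Cluster heartbeat adapter - private, non-routed address
    netsh interface ip set address name="cluster-hb" static 192.168.10.21 255.255.255.0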

Physical-to-Virtual Cleanup Tasks

    After the physical-to-virtual conversion was complete, we removed all hardware-specific software and leftover devices inherited from the physical servers. We did this one server at a time so that all shares remained online during the cleanup. We started by removing applications tied to our source physical hardware in Add or Remove Programs. Next, we removed old hardware devices: in a command prompt we ran set devmgr_show_nonpresent_devices=1 and then started DEVMGMT.MSC. In Device Manager we clicked View, clicked Show Hidden Devices on the menu bar, then right-clicked each dimmed device and clicked Uninstall. Once complete, we failed over all groups and repeated the cleanup procedure on the other node. This process took approximately two hours per server.
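
    Consolidated, the two commands from this procedure are ready to paste into a command prompt on the node being cleaned:

    rem Expose non-present (ghost) devices in Device Manager, then open it
    set devmgr_show_nonpresent_devices=1
    start devmgmt.msc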

    Open Replicator Post-Migration Tasks

    Once the Open Replicator session was terminated, it was time to clean up the zoning and masking entries added to support the remote copy operation. After we were comfortable running on the new virtualized MSCS file servers, we removed the physical MSCS file servers from the data center and reclaimed our switch ports and SAN cabling.
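
    Zone removal followed our normal switch administration process. On the Symmetrix side, a hedged sketch of the masking cleanup looks like the following, with placeholder SID, WWN, device, and director/port values standing in for the entries created earlier.

    rem Remove the DMX-3 FA WWN from the masking entries on the donor array,
    rem then refresh the masking database (all values shown are placeholders)
    symmask -sid 7667 -wwn 50060482d52e1234 remove devs 0F79,0F91 -dir 7a -p 0
    symmask -sid 7667 refresh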

    Conclusion

    In practice, the initial boot of the virtual nodes after Raw Device Mapping the Open Replicator control devices took a long time. We noticed the single CPU in the guest VM was pinned at 100%. Our solution was to give each VM a total of two CPUs, which was the target number of CPUs in our new VM guest configuration anyway. The additional CPU made the boot go much more quickly.

    We achieved an average sustained throughput of roughly 290 MB/s while our Open Replicator hot pull session was in the CopyInProg state. Our migration strategy required that we present the same number of LUNs to a new host configuration, which made Open Replicator the ideal vehicle for copying our file server data volumes. Because Open Replicator allowed read/write access to the control devices during the copy session, we virtually eliminated downtime compared to alternative copy methods. The MSCS file server outage was about one hour, from the time the physical clusters were brought offline to the point when users were able to access file share resources on VMware.

Disclaimer: The views, processes, or methodologies published in this article are those of the authors. They do not necessarily reflect EMC Corporation's views, processes, or methodologies.

