
LVM Mirror Walking - Flash Read Preferred


Transcript
Slide 1/15

AIX LVM Mirror Walking and Flash Storage Preferred Read Deployment

    How to walk the LVM data over from one storage array to a new storage array, using a smaller number of larger LUNs.

  • 8/11/2019 LVM Mirror Walking - Flash Read Preferred

    2/15

    Mirror Walking

All data is available and online throughout the entire process. No downtime is required.

    Some impact on server resources (CPU, I/O).

    Requirements: The VG must be scalable so it can accept the larger LUNs' PP count per LUN.

    varyoffvg ; chvg -G ; varyonvg (sketched below)

    This is one place where downtime may be required.

    The hdisks within a single VG can be walked over to a smaller number of larger hdisks.

    This cannot be done across VGs.
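
    A minimal sketch of that conversion, assuming the VG is named vg1 and its file systems can be unmounted briefly (chvg -G only works on a varied-off VG):

    # check the current VG format and limits (MAX PVs, MAX PPs)
    lsvg vg1

    # unmount the VG's file systems, then convert it to a scalable VG
    varyoffvg vg1
    chvg -G vg1
    varyonvg vg1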

Slide 3/15

[Slide diagram: vg1 containing hdisk1-hdisk9 and logical volumes lv1-lv4]

All hdisks and LVs in a single VG can use this technique.

    The VG must be scalable and able to accept the larger LUN's PP count. A larger-sized LUN may not be able to be added if the VG is not scalable.

Slide 4/15

[Slide diagram: vg1 with hdisk1-hdisk9 and lv1-lv4; newly created, larger hdisk10 and hdisk11 presented to the host]

- Create bigger LUNs on the XIV
    - Scan for the new devices
    - xiv_fc_admin -R
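
    On a host without the XIV host attachment kit, a generic device rescan does the same discovery; a minimal sketch:

    # generic AIX rescan for new devices (xiv_fc_admin -R is the XIV-specific equivalent)
    cfgmgr

    # confirm the new hdisks are visible and not yet assigned to any VG
    lspv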


Slide 5/15

[Slide diagram: vg1 with hdisk1-hdisk9 and lv1-lv4; hdisk10 and hdisk11 ready to be added]

    Add hdisks to vg1

    -extendvg vg1 hdisk10 hdisk11

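    A quick check after the extend (same names as above) confirms the new PVs joined the VG with all of their PPs free:

    # list the PVs in vg1 with their total and free PPs
    lsvg -p vg1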

Slide 6/15

[Slide diagram: vg1 now containing hdisk1-hdisk11 and lv1-lv4]

    Start the lv mirroring

    -mklvcopy lv1 2 hdisk10 hdisk11

    -mklvcopy lv2 2 hdisk10 hdisk11

    -mklvcopy lv3 2 hdisk10 hdisk11

    -mklvcopy lv4 2 hdisk10 hdisk11

Check the LV for PV contents:

    lslv -l lv1

lvm2lv:/lvm2
    PV        COPIES          IN BAND   DISTRIBUTION
    hdisk1    004:000:000     100%      000:004:000:000:000
    hdisk10   000:000:000     100%      000:000:000:000:000
    hdisk2    004:000:000     100%      000:004:000:000:000
    hdisk11   000:000:000     100%      000:000:000:000:000
    hdisk3    004:000:000     100%      000:004:000:000:000
    hdisk4    004:000:000     100%      000:004:000:000:000
    hdisk5    004:000:000     100%      000:004:000:000:000
    hdisk6    004:000:000     100%      000:004:000:000:000
    hdisk7    004:000:000     100%      000:004:000:000:000
    hdisk8    004:000:000     100%      000:004:000:000:000
    hdisk9    004:000:000     100%      000:004:000:000:000

Note the two new LUNs have 0 PPs distributed.
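
    The slide issues one mklvcopy per LV; a loop over every LV in the VG is equivalent (a minimal sketch, assuming all LVs in vg1 should be mirrored onto hdisk10 and hdisk11):

    # add a second copy on the new LUNs for every LV in vg1
    for lv in $(lsvg -l vg1 | awk 'NR>2 {print $1}'); do
        mklvcopy $lv 2 hdisk10 hdisk11
    done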

Slide 7/15

[Slide diagram: vg1 with hdisk1-hdisk11 and lv1-lv4]

All LV mirrors will be stale.

    lsvg -l vg1

vg1:
    LV NAME   TYPE   LPs  PPs  PVs  LV STATE     MOUNT POINT
    lv1       jfs2   40   80   12   open/stale   /lvm1
    lv2       jfs2   40   80   12   open/stale   /lvm2
    lv3       jfs2   40   80   12   open/stale   /lvm3
    lv4       jfs2   40   80   12   open/stale   /lvm4

Synchronize the two copies:

    syncvg -v vg1

    This will take time to complete.
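
    The resync can be left running in the background while the stale PP count is watched; a minimal sketch, assuming vg1:

    # kick off the resync in the background
    nohup syncvg -v vg1 &

    # repeat until every LV shows open/syncd and the stale counters drop to 0
    lsvg -l vg1
    lsvg vg1 | grep -i stale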


Slide 8/15

[Slide diagram: vg1 with hdisk1-hdisk11 and lv1-lv4]

Now the LVs are synced.

    lsvg -l vg1

vg1:
    LV NAME   TYPE   LPs  PPs  PVs  LV STATE     MOUNT POINT
    lv1       jfs2   40   80   12   open/syncd   /lvm1
    lv2       jfs2   40   80   12   open/syncd   /lvm2
    lv3       jfs2   40   80   12   open/syncd   /lvm3
    lv4       jfs2   40   80   12   open/syncd   /lvm4


Slide 9/15

[Slide diagram: vg1 with hdisk1-hdisk11 and lv1-lv4]

    Check lv for pv contents and distribution

    lslv -l lv1

lvm2lv:/lvm2
    PV        COPIES          IN BAND   DISTRIBUTION
    hdisk1    004:000:000     100%      000:004:000:000:000
    hdisk10   020:000:000     100%      000:020:000:000:000
    hdisk2    004:000:000     100%      000:004:000:000:000
    hdisk11   020:000:000     100%      000:020:000:000:000
    hdisk3    004:000:000     100%      000:004:000:000:000
    hdisk4    004:000:000     100%      000:004:000:000:000
    hdisk5    004:000:000     100%      000:004:000:000:000
    hdisk6    004:000:000     100%      000:004:000:000:000
    hdisk7    004:000:000     100%      000:004:000:000:000
    hdisk8    004:000:000     100%      000:004:000:000:000
    hdisk9    004:000:000     100%      000:004:000:000:000

Note the two new LUNs have more PPs than the original LUNs.
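
    Before the old copies are removed, it is worth confirming that every LV (not just lv1) now has a full copy on the new LUNs; a small loop sketch, assuming vg1:

    # show the per-PV copy distribution for every LV in vg1
    for lv in $(lsvg -l vg1 | awk 'NR>2 {print $1}'); do
        echo "== $lv =="
        lslv -l $lv
    done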

Slide 10/15

[Slide diagram: vg1 with hdisk1-hdisk11 and lv1-lv4]

Remove the old PVs from each LV (a loop over all the LVs is sketched after this slide's output):

    rmlvcopy lv1 1 hdisk1 hdisk2 hdisk3 hdisk4 hdisk5 hdisk6 hdisk7 hdisk8 hdisk9

lslv -l lv1

lv1:
    PV        COPIES          IN BAND   DISTRIBUTION
    hdisk10   020:000:000     100%      000:020:000:000:000
    hdisk11   020:000:000     100%      000:020:000:000:000

    Migration Done!

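    A loop sketch for the remaining LVs, assuming vg1 and the same nine original hdisks:

    # drop the copy that lives on the original nine LUNs for every LV in vg1
    for lv in $(lsvg -l vg1 | awk 'NR>2 {print $1}'); do
        rmlvcopy $lv 1 hdisk1 hdisk2 hdisk3 hdisk4 hdisk5 hdisk6 hdisk7 hdisk8 hdisk9
    done

    # once no LPs remain on them, the old hdisks can be removed from the VG, e.g.:
    # reducevg vg1 hdisk1 hdisk2 hdisk3 hdisk4 hdisk5 hdisk6 hdisk7 hdisk8 hdisk9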

Slide 11/15

    Flash Storage Read Preferred

To enable Flash storage as the read-preferred copy of the logical volume device pair:

    Add in the mirror as discussed on slide 6:
    mklvcopy lvm1lv 2 hdisk10 hdisk11

    Synchronize the mirrored logical volumes:
    syncvg -v lvm_test

    Remove the primary devices from the mirror:
    rmlvcopy lvm1lv 1 hdisk1 hdisk2 hdisk3 hdisk4 hdisk5 hdisk6 hdisk7 hdisk8 hdisk9

    Add back the original primary devices and sync; they will now be added in as the secondary copy:
    mklvcopy lvm1lv 2 hdisk1 hdisk2 hdisk3 hdisk4 hdisk5 hdisk6 hdisk7 hdisk8 hdisk9

    Synchronize the mirrored logical volumes:
    syncvg -v lvm_test

    (lvm1lv and lvm_test are the LV and VG names shown in the lslv output on the following slides.)

Slide 12/15

    Adjust the write schedule policy

Check which devices hold the primary copy and which hold the secondary:

    # lslv -m lvm1lv

lvm1lv:
    LP    PP1   PV1       PP2   PV2       PP3   PV3
    0001  0078  hdisk10   0014  hdisk1
    0002  0079  hdisk11   0014  hdisk2
    0003  0079  hdisk10   0014  hdisk3
    0004  0080  hdisk11   0014  hdisk4
    0005  0080  hdisk10   0014  hdisk5
    0006  0081  hdisk11   0014  hdisk6
    0007  0081  hdisk10   0014  hdisk7
    0007  0081  hdisk11   0014  hdisk8
    0007  0081  hdisk10   0014  hdisk9

The devices in the PV1 column are now the primary devices, and all reads will be serviced by them. During boot, the PV1 devices are the primary copy of the mirror and will be used as the sync point.
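
    A quick way to list just the PV1 (read-preferred) devices, sketched against the lslv -m column layout shown above:

    # print the distinct devices in the PV1 column (skip the two header lines)
    lslv -m lvm1lv | awk 'NR>2 {print $3}' | sort -u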

Slide 13/15

    Flash Storage Read Preferred

There are 5 write policies for LVM mirroring.

    The default is parallel:
    - Write operations are done in parallel to all copies of the mirror.
    - Read operations go to the least busy device.

    We want parallel write with sequential read (parallel/sequential):
    - Write operations are done in parallel to all copies of the mirror.
    - Read operations are ALWAYS performed on the primary copy of the devices in the mirror set.

    During boot, the PV1 devices are the primary copy of the mirror and will be used as the sync source.

    A short amount of file system downtime is required to change the policy.
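
    A one-line check of the current policy for an LV before changing it (using lvm1lv, as on the following slides):

    # the SCHED POLICY field shows the current scheduling policy
    lslv lvm1lv | grep "SCHED POLICY"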

Slide 14/15

    Adjust the write schedule policy

Check the write schedule policy of the logical volume:

    LOGICAL VOLUME:     lvm2lv                 VOLUME GROUP:   lvm_test
    LV IDENTIFIER:      00f62b1d00004c000000013fd4f00d26.2    PERMISSION:     read/write
    VG STATE:           active/complete        LV STATE:       closed/syncd
    TYPE:               jfs2                   WRITE VERIFY:   off
    MAX LPs:            512                    PP SIZE:        256 megabyte(s)
    COPIES:             2                      SCHED POLICY:   parallel
    LPs:                40                     PPs:            80
    STALE PPs:          0                      BB POLICY:      relocatable
    INTER-POLICY:       maximum                RELOCATABLE:    yes
    INTRA-POLICY:       middle                 UPPER BOUND:    1024
    MOUNT POINT:        /lvm2                  LABEL:          /lvm2
    DEVICE UID:         0                      DEVICE GID:     0
    DEVICE PERMISSIONS: 432
    MIRROR WRITE CONSISTENCY: on/ACTIVE
    EACH LP COPY ON A SEPARATE PV ?: yes
    Serialize IO ?:     NO
    INFINITE RETRY:     no                     DEVICESUBTYPE:  DS_LVZ
    COPY 1 MIRROR POOL: None
    COPY 2 MIRROR POOL: None
    COPY 3 MIRROR POOL: None

Slide 15/15

    Adjust the write schedule policy

Change the write schedule policy. This must be done with the file system for each of the logical volumes unmounted. Here is where the downtime will occur.

    chlv -d ps lvm1lv
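
    A sketch of the full change for one LV, assuming its file system is /lvm1 as in the output below; the copies show stale afterward until they are resynced:

    # unmount, switch to parallel write / sequential read, remount
    umount /lvm1
    chlv -d ps lvm1lv
    mount /lvm1

    # resynchronize the mirror copies after the change
    syncvg -l lvm1lv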

Re-check the write schedule policy:

    LOGICAL VOLUME:     lvm1lv                 VOLUME GROUP:   lvm_test
    LV IDENTIFIER:      00f62b1d00004c000000013fd4f00d26.1    PERMISSION:     read/write
    VG STATE:           active/complete        LV STATE:       opened/stale
    TYPE:               jfs2                   WRITE VERIFY:   off
    MAX LPs:            512                    PP SIZE:        256 megabyte(s)
    COPIES:             2                      SCHED POLICY:   parallel/sequential
    LPs:                40                     PPs:            80
    STALE PPs:          40                     BB POLICY:      relocatable
    INTER-POLICY:       maximum                RELOCATABLE:    yes
    INTRA-POLICY:       middle                 UPPER BOUND:    1024
    MOUNT POINT:        /lvm1                  LABEL:          /lvm1
    DEVICE UID:         0                      DEVICE GID:     0
    DEVICE PERMISSIONS: 432
    MIRROR WRITE CONSISTENCY: on/ACTIVE
    EACH LP COPY ON A SEPARATE PV ?: yes
    Serialize IO ?:     NO
    INFINITE RETRY:     no
    DEVICESUBTYPE:      DS_LVZ
    COPY 1 MIRROR POOL: None
    COPY 2 MIRROR POOL: None
    COPY 3 MIRROR POOL: None

