Page 1

© 2014 IBM Corporation

Bank of New York Mellon SAN Volume Controller (SVC) iSCSI Guide

Page 2

Table of Contents

- Existing SVC Infrastructure Overview
- BNYM iSCSI Requirements
- iSCSI Options within IBM Storwize Family
- Hardware / Software Limitations
- iSCSI Deployment Options for BNYM
- Best Practices Overview
- Host Connectivity Guidelines for ESX
- Best Practices for Running VMware vSphere® on iSCSI
- IBM Storage Management Console for VMware vCenter

Page 3

Existing SVC Infrastructure

Four DH8-based SVC clusters, as outlined below

Page 4

BNYM iSCSI Requirements

- TPC & CNJ data centers, with plans to include NPC later
- iSCSI to support the Cloud application based on ESX
- Currently the BNYM Cloud is only using file-based storage
- No FC cards in the ESX servers
- ~100 ESX hosts in TPC, ~150 in CNJ
- ~6,000 VM guests
- Environment could double in the next 12 months
- Replication required, but not for the entire footprint
- CNJ lab testing using the existing SVC and 1 Gb interfaces
- iSCSI target volumes used as ESX Datastores, not RDMs to VMs

Page 5

iSCSI Options within IBM Storwize Family

- SAN Volume Controller (SVC)
- V7000 Block Storage w/ External Virtualization Capabilities
- V7000 Unified Block/File Storage w/ External Virtualization Capabilities

SAN Volume Controller (SVC)

Hardware Specs
– Integrated: (3) 1 Gbps NICs (recommended to use separate NICs for iSCSI and management)
– Optional Host Interface Card: (4-port) 10GbE CNA; maximum one per SVC node, which limits the node to two (4-port) 8Gbit Fibre Channel cards

Host Details
– ESX version
– NIC/CNA/iSCSI hardware
– Network configuration
– Datastore size, number of VMs per datastore
– Datastore clusters

Storage Arrays
– To be shared pools, used by FC- and iSCSI-connected hosts

Page 6

iSCSI Options within IBM Storwize Family

SAN Volume Controller (SVC)
- Integrated: (3) 1 Gbps NICs per SVC node. Recommended to use separate NICs for iSCSI and management.
- Optional Host Interface Card: (4-port) 10GbE CNA; maximum one per SVC node, which limits the node to two (4-port) 8Gbit Fibre Channel cards.

Page 7

iSCSI Options within IBM Storwize Family

V7000 Block Storage w/ External Storage Virtualization Capabilities
- Integrated: (3) 1 Gbps NICs per V7000 node/canister. Recommended to use separate NICs for iSCSI and management.
- Optional Host Interface Card: (4-port) 10GbE CNA; maximum one per V7000 node/canister, which limits the node to one (4-port) 8Gbit Fibre Channel card.
- Can cluster up to four V7000 controllers, with up to 80 expansion enclosures and up to 1,056 total drives.

Page 8

iSCSI Options within IBM Storwize Family

V7000 Unified Block/File Storage w/ External Storage Virtualization Capabilities
- Solution designed to provide NAS, CIFS, GPFS…
- Also able to provide block storage like a regular V7000, but without clustering capability (yet).

Page 9

Hardware / Software Limitations

Limitation Scope | Property | Maximum Number | Notes
SVC Node | iSCSI Sessions per SVC Node | 256 | 512 in IP failover mode (when the partner node is unavailable)
IO Group | iSCSI Hosts per IO Group | 256 | Equates to 1024 total iSCSI hosts for an 8-node cluster
IO Group | iSCSI names per IO Group | 256 | A host object may contain both Fibre Channel ports and iSCSI names
IO Group | Total FC ports and iSCSI host names per IO Group | 2048 |
iSCSI Host | iSCSI names per Host Object | 256 | A given ESX host should not need more than 2 (maybe 4 or 8 at the absolute most) iSCSI names

SVC Nodes | Max. # of Hosts | Max. # of iSCSI Hosts | Max. # of Host FC Ports | Max. # of Host iSCSI Names | Max. # of FC Ports + iSCSI Names
2 | 512 | 256 | 1024 | 256 | 2048
4 | 1024 | 512 | 2048 | 512 | 4096
6 | 1536 | 768 | 3072 | 768 | 6144
8 | 2048 | 1024 | 4096 | 1024 | 8192

Page 10

Storwize Family Performance Capabilities

- Capabilities are listed for 2 nodes/canisters (or 1 IO Group).
- All-miss workloads for each Storwize system with the supported maximum of 15K RPM HDDs.
- All-miss workloads configured as RAID-5 arrays in Storwize.
- SVC tests use FlashSystem 840 and 820 backend storage controllers.

Page 11

BNYM iSCSI Option - Add iSCSI to Existing SVC Clusters

Advantages:
- Single cluster for access to storage management (although this can also be obtained through VSC for multiple SVC / V7000 clusters)
- Common pool of virtualized storage, regardless of whether it is presented to hosts via FC or iSCSI, eliminating the need for dedicated storage (or at least dedicated LUNs/MDisks) for the iSCSI environment
- Lower hardware cost than a dedicated SVC or V7000 cluster

Disadvantages:
- May require reconfiguration of Host Interface Cards in pre-existing and new SVC nodes:
  • Introducing a 10 Gbps Host Interface Card (HIC) limits the number of FC ports per node to 8 ports.
  • Inter-node (localfcportmask) and replication (partnerfcportmask) parameters apply to the entire cluster; they cannot be different across nodes of a cluster. Currently, ports 1, 2, 5 & 6 are masked for inter-node traffic and 11 & 12 for replication (see the CLI sketch below).
  • Cluster-wide masking would need to accommodate both new and old nodes. Since a minimum of 4 ports is recommended for host/storage connectivity, this would mean either not allocating any replication ports on the new nodes, or cutting the number of inter-node ports from 4 to 2 across the whole cluster.
- Host and storage zoning will change from the current design. Either the host/storage zoning will be different on the new nodes (based on fewer FC ports), or (if the old nodes are retrofitted with 10 Gbps HICs) the existing zoning will have to be changed for storage and for existing SVC-attached hosts.
- The number of FC host connections available per IO Group would be reduced by the number of iSCSI host connections established.
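For reference, the port masks described above are cluster-wide properties changed with the SVC CLI. The following is a minimal sketch, assuming the SVC 7.x chsystem syntax; the mask values simply encode the current assignments noted above, with the right-most bit of each mask corresponding to FC port 1, and are illustrative rather than a recommendation to change anything.

chsystem -localfcportmask 110011            # inter-node traffic restricted to ports 1, 2, 5 & 6
chsystem -partnerfcportmask 110000000000    # replication traffic restricted to ports 11 & 12
lssystem                                    # review cluster-wide settings, including the active port masks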

Page 12

BNYM iSCSI Option - Add Dedicated SVC Cluster for iSCSI

Advantages:

- There is no additional software cost versus using existing SVC Clusters, only physical SVC node costs

- Allows for new purpose-driven design of FC and 10 Gbps port connectivity and infrastructure, from the ground up

- Easier tracking of cluster growth requirements in terms of maximum numbers of iSCSI versus FC hosts and volumes

Disadvantages:
- Additional hardware cost
- Silo’d storage – backend storage LUNs would have to be dedicated separately for the FC SVC cluster and for the iSCSI SVC cluster

Page 13

BNYM iSCSI Option - Add Dedicated V7000 for iSCSI

Advantages:

- Procurement of the net-new storage requirement, storage virtualization, and the iSCSI presentation layer in a single system

- Allows for new purpose-driven design of FC and 10 Gbps port connectivity and infrastructure, from the ground up

- Easier tracking of cluster growth requirements in terms of maximum numbers of iSCSI versus FC hosts and volumes

- V7000 supports multiple tiers of storage, including SSD, SAS and NL-SAS, all within the same system

- V7000 Unified could be considered, in order to provide both block and NAS storage

Disadvantages:

- Additional hardware cost
- Silo’d storage – the V7000 drives would be another storage silo to be used for iSCSI, separate from the existing storage used by the SVC for FC-attached hosts
- Potential BNYM issues with getting a new storage subsystem approved for deployment in the bank’s environment

Page 14

Best Practices Overview

iSCSI IP Addressing on SVC
– Each applicable Ethernet port on each SVC node is given a dedicated IP address for iSCSI.
– SVC uses the standard iSCSI port (3260).
– iSCSI IP addresses can fail over to the partner node in the I/O Group if a node fails. This design reduces the need for multipathing support in the iSCSI host. The iSCSI target IQN also fails over to the partner node. During takeover the iSCSI initiator is logged out from the failed node, and a new session or login is re-established with the partner (working) node using the IP address of the failed node. The same port index (for example, port 3) on all SVC nodes must be in the same subnet.
– iSCSI addresses may be assigned to only selected SVC nodes of a cluster, if desired.
– SVC supports the Challenge Handshake Authentication Protocol (CHAP) authentication methods for iSCSI:
  • BI-CHAP authentication with initiators that accept a blank user name field
– The iSCSI qualified name (IQN) for an SVC node is iqn.1986-03.com.ibm:2145.<cluster_name>.<node_name>. Because the IQN contains the clustered system name and the node name, it is important not to change these names after iSCSI is deployed.
– Each node can be given an iSCSI alias as an alternative to the IQN.
– iSCSI Simple Name Server (iSNS) addresses can be configured in the SVC.

Host Connection
– The IQN of the host is added to an SVC host object in the same way that you add FC WWPNs.
– Host objects can have both WWPNs and IQNs.
– Standard iSCSI host connection procedures can be used to discover and configure SVC as an iSCSI target.
– SVC supports I/O from Fibre Channel and iSCSI initiators in different hosts to the same volumes – i.e., ESX hosts could access the same Datastores, with some ESX hosts using iSCSI and some using FC.
– SVC supports multiple sessions from iSCSI hosts, with a maximum of four sessions from one iSCSI host to each SAN Volume Controller iSCSI target.
– You must create a host object with a different name for use with each iSCSI client if a host has multiple iSCSI clients (multiple IQNs), or if a clustered-system host server having multiple iSCSI names and different authentication secrets is to be used with a different client. Use the appropriate IQNs and corresponding secrets in each of the corresponding host objects, and then use all the host objects to map the volume. (A minimal CLI sketch follows below.)
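As a point of reference, a minimal SVC CLI sketch of the host-connection steps above: an iSCSI IP on a node Ethernet port, a host object keyed by the ESX initiator IQN, an optional CHAP secret, and a volume mapping. All node, host and volume names, addresses, and the IQN below are placeholders, and the syntax assumes the SVC 7.x command set.

cfgportip -node node1 -ip 10.1.1.10 -mask 255.255.255.0 -gw 10.1.1.1 2    # dedicated iSCSI IP on Ethernet port 2 of node1
mkhost -name esx01 -iscsiname iqn.1998-01.com.vmware:esx01-12345678       # host object containing the ESX host's IQN
chhost -chapsecret examplesecret esx01                                    # optional: per-host CHAP secret
mkvdiskhostmap -host esx01 esx_datastore_vol_01                           # map the datastore volume to the iSCSI host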

Page 15

iSCSI Host Connectivity Guidelines for ESX

1. VMware ESX includes iSCSI initiator software. Each ESX host can have a single iSCSI initiator, which provides the source (host) iSCSI Qualified Name (IQN). For a given ESX host, a maximum of four VMware iSCSI initiator sessions per SVC node are supported. Each host IQN must be assigned to the host definition on the SVC.
2. A single iSCSI initiator does not limit the number or type of network interface cards (NICs) that can be used for iSCSI storage access. VMware best practice recommends that for each physical NIC that will be used, a matching virtualized VMkernel port NIC is created and bound to the physical NIC. The VMkernel port is assigned an IP address, while the physical NIC acts as a virtual switch uplink. The ESX iSCSI initiator can then be configured with two VMkernel IP addresses and bound to two physical NICs.
3. The ESX host's IQN can be viewed or changed from the Configuration tab of the iSCSI software adapter control panel.
4. Dynamic iSCSI discovery (configured through the iSCSI Initiator Properties on the ESX host) simplifies the setup of the iSCSI initiator, because only one target (SVC) iSCSI IP address must be entered. The VMware ESXi host queries the storage system for the available target IPs, which will all be used by the iSCSI initiator. (A CLI sketch of the port binding and discovery steps follows this list.)
5. The above implementation, illustrated on the next slide, provides redundancy for the ESX host by actively using two physical NICs and by using 2 iSCSI ports on the SVC nodes, while staying within the limit of 4 iSCSI sessions per node.
6. Utilize VMware storage features:
   a. Utilize Storage DRS and Datastore Clusters to provide VMFS Datastore load balancing.
   b. Utilize Storage I/O Control (SIOC) to ensure that virtual machines are getting the required amount of I/O performance.
7. VMware Pluggable Storage Architecture (PSA) and Storage Array Type Plug-in (SATP) configuration:
   a. IBM SVC uses the VMW_SATP_ALUA SATP with vSphere 5.5 and VMW_SATP_SVC on vSphere 4.0, 5.0 & 5.1.
   b. Configure the appropriate SATP to use a default of the Round Robin (VMW_PSP_RR) Path Selection Plug-In (PSP):
      This is the default for the VMW_SATP_ALUA SATP used by vSphere 5.5 ESXi hosts, so no action is needed.
      For ESX/ESXi 4.x, use CLI: esxcli nmp satp setdefaultpsp --psp VMW_PSP_RR --satp VMW_SATP_SVC
      For ESXi 5.0/5.1, use CLI: esxcli storage nmp satp set --default-psp VMW_PSP_RR --satp VMW_SATP_SVC
8. The IBM recommendation for volume and datastore sizing with VMware vSphere 5.5 and the SVC / Storwize family is to use Datastore volumes sized between 1 TB and 10 TB, and to group these together into VMware Datastore Clusters.
9. The IBM recommendation is to implement and monitor thin provisioning on the storage system. The zeroed-thick or eager-zeroed-thick disk types should be deployed for virtual machines.
10. Utilize SVC Easy Tier for ESX Datastore volumes where feasible. No special configuration is required on ESX hosts.
11. Utilize SVC Real-Time Compression for ESX Datastore volumes. Lab testing and real-world measurements have shown that Real-Time Compression reduces the storage capacity consumed by VMware virtual machines by up to 70%.
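The port binding and dynamic discovery described in items 2 and 4 can also be scripted from the ESXi shell. The following is a minimal sketch assuming the ESXi 5.x esxcli namespaces; the adapter name (vmhba33), VMkernel ports (vmk1, vmk2) and SVC iSCSI IP address are placeholders.

esxcli iscsi software set --enabled=true                                                   # enable the software iSCSI initiator
esxcli iscsi adapter list                                                                  # identify the software adapter, e.g. vmhba33
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1                                # bind the first iSCSI VMkernel port
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2                                # bind the second iSCSI VMkernel port
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=10.1.1.10:3260   # dynamic (send targets) discovery against one SVC iSCSI IP
esxcli storage core adapter rescan --adapter=vmhba33                                       # rescan to log in to the discovered SVC targets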

Page 16

iSCSI Host Connectivity Illustration for ESX / SVC

Page 17

Best Practices for Running VMware vSphere® on iSCSI

1. Disable "delayed ACK" on vSphere hosts.
2. Use dedicated iSCSI adapters on ESX hosts if available.
3. If using 1 Gb Ethernet NICs, use a feature called a TOE (TCP/IP offload engine). TOEs shift TCP packet-processing tasks from the server CPU to specialized TCP processors on the network adapter or storage device. Most enterprise-level networking chip sets today offer TCP offload or checksum offload, which greatly reduces CPU overhead on the ESX hosts.
4. According to VMware best practices, "iSCSI should be considered a local-area technology, not a wide-area technology, because of latency issues and security concerns. You should also segregate iSCSI traffic from general traffic. Layer-2 VLANs are a particularly good way to implement this segregation." Also: "Best practice is to have a dedicated LAN for iSCSI traffic and not share the network with other network traffic. It is also best practice not to oversubscribe the dedicated LAN."
5. Enable jumbo frames. All devices in the I/O path (iSCSI target, physical switches, network interface cards and VMkernel ports) must be able to implement jumbo frames for this option to provide the full benefits. On SVC, the MTU is set on the Ethernet ports using cfgportip -mtu <1500-9000> -iogrp <IO Grp Name/ID> <port_id> (1, 2 or 3). (A configuration sketch follows this list.)
6. Ensure ARP redirect / gratuitous ARP is configured on the Ethernet switch to which the iSCSI NICs are attached and on the switch to which the SVC iSCSI ports are attached, so that iSCSI IP addresses fail over properly.
7. BNYM should fully review all recommendations in the VMware "Best Practices for Running VMware vSphere® on iSCSI" document (http://www.vmware.com/files/pdf/iSCSI_design_deploy.pdf).
8. The VMware Documentation Center should also be reviewed for best practices and procedures related to VMware and iSCSI. Go to https://www.vmware.com/support/pubs/; under Support Resources on the right, select Documentation and then select the appropriate "VMware vSphere 5" or "VMware vSphere 4" link. Select the appropriate ESX release from the drop-down, scroll down to the "ESXi and vCenter Server Product Documentation" section and click the HTML link for "vSphere Storage Guide". Expand the "vSphere Storage" bullet in the Contents pane to find several subtopics related to iSCSI storage for ESX.
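The jumbo frame setting in item 5 has to be applied end to end. A minimal sketch follows, assuming IO group 0, SVC Ethernet port 2, a standard vSwitch named vSwitch1, and VMkernel port vmk1 (all placeholder names), using the cfgportip syntax quoted in item 5 and ESXi 5.x esxcli commands. The physical switch ports in the path must also be configured for jumbo frames.

cfgportip -mtu 9000 -iogrp 0 2                                             # SVC: 9000-byte MTU on Ethernet port 2 of IO group 0
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000     # ESXi: raise the vSwitch MTU
esxcli network ip interface set --interface-name=vmk1 --mtu=9000           # ESXi: raise the iSCSI VMkernel port MTU
vmkping -d -s 8972 10.1.1.10                                               # verify a non-fragmented jumbo path to an SVC iSCSI IP (8972 = 9000 minus IP/ICMP headers)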

Page 18

IBM Storage Management Console for VMware vCenter

IBM has taken advantage of the open plug-in architecture of VMware vCenter Server to develop the IBM Storage Management Console for VMware vCenter Server. The IBM Storage Management Console is a software plug-in that integrates into VMware vCenter and enables management of the supported IBM storage systems, including SVC and Storwize, XIV, and SONAS.

When the IBM Storage Management Console for VMware is installed, it runs as a Microsoft Windows Server service on the vCenter Server. When a vSphere client connects to the vCenter Server, the running service is detected and the features provided by the Storage Management Console are enabled for the client.

Features of the IBM Storage Management Console include:
• Integration of the IBM storage management controls into the VMware vSphere graphical user interface (GUI), with the addition of an IBM storage resource management tool and a dedicated IBM storage management tab
• Full management of the storage volumes, including volume creation, deletion, resizing, renaming, mapping, unmapping, and migration between storage pools
• Detailed storage reporting such as capacity usage, FlashCopy or snapshot details, and replication status

The graphic on the next slide shows the relationships and interaction between the IBM plug-in, VMware vCenter and vSphere, and the IBM storage system.

Installation and configuration

You can download the IBM Storage Management Console for VMware vCenter by accessing the IBM Fix Central website (at ibm.com/support/fixcentral/) and searching for updates available for any of the supported IBM storage systems. Download the installation package that is appropriate for the architecture of the vCenter server.

On x86 architectures – IBM_Storage_Management_Console_for_VMware_vCenter-3.0.0-x86_1338.exe

On x64 architectures – IBM_Storage_Management_Console_for_VMware_vCenter-3.0.0-x64_1338.exe

An installation and administrative guide is included in the software download.

Page 19

Storage Management Console relationships and interaction between components

