
Module one: SAN infrastructure

Data ONTAP and SANtricity

NetApp is a champion of SAN technology with two strategic operating systems: Data ONTAP and SANtricity. This course focuses on Data ONTAP. Data ONTAP offers unified architecture. The 7-Mode and clustered Data ONTAP operating systems run both NAS and SAN protocols on NetApp FAS and V-Series storage platforms. V-Series platforms connect with almost all types of disk arrays in the storage industry. For the latest V-Series solution support information, visit the NetApp Support site and see the V-Series Support Matrix. SANtricity offers a powerful, intuitive interface that is specialized for performance-sensitive sites with massive amounts of data and runs on the NetApp E-Series storage platform. This course focuses on 7-Mode and clustered Data ONTAP.

NAS versus SAN

In networked environments, client operating systems and application servers read and write data that is located on remote storage servers. Application servers write at the file level or the block level. NAS provides file-level access to data on a storage system. Clients access data on the storage system over the network by using transport protocols such as CIFS and NFS. SAN provides block-level access. SAN connections between client hosts and storage systems can use the iSCSI, FC, or FCoE protocol. NetApp unified storage allows SAN and NAS configurations to exist on the same storage system, which provides flexibility for storage administrators.

Logical unit numbers (LUNs)

Application servers that require block-level access must read and write to local devices that are directly attached to the application host. However, within a SAN environment, application servers can read and write to remote, centralized storage within logical units that are referred to as LUNs. LUNs are specialized files that are referenced by an identifier that is called a logical unit number identifier (or abbreviated as LUN ID). Within a SAN environment, a LUN is created as a single file within a volume or qtree on a storage system or Vserver. To the application server, LUNs appear as local devices that are directly attached to the application host and become SCSI target objects to read and write to.

SAN portals

SAN protocols

Data between the initiator and the target is communicated over FC, FCoE, or iSCSI SAN portals. In an FC SAN, the data is communicated over Fibre Channel ports. In an FCoE SAN, data transmits over a converged network adapter (abbreviated as CNA) port. In an IP SAN, the data is communicated over Ethernet ports.

Fibre Channel fabrics may access a LUN on NetApp storage using the Fibre Channel protocol (or FCP) or over Ethernet networks using the Fibre Channel over Ethernet protocol, referred to as FCoE. IP SANs may access a LUN on NetApp storage over an Ethernet network using the Internet SCSI, or iSCSI, protocol. In all cases, the transport protocols, whether FCP, FCoE, or iSCSI, carry encapsulated SCSI-3 commands as the payload within the packet.

FC SAN node and port names

In Fibre Channel SANs, a worldwide node name (or WWNN) describes a machine, while a worldwide port name (WWPN) describes a physical portal attached to that machine. The Fibre Channel specification for the naming of nodes and ports on those nodes can be fairly complicated, but basically each node is given a globally unique worldwide node name and an associated worldwide port name for each port on the node. Node names and port names are 64-bit addresses made up of 16 hexadecimal digits grouped together in twos with a colon separating each pair, as shown here. The first number in a node or port address defines what the other numbers in the address represent according to the Fibre Channel specification. The first number is generally 1, 2, or 5. For QLogic initiator host bus adapters (or HBAs), the first number is generally 2. For Emulex initiator HBAs, the first number is generally 1. On a NetApp storage system operating in Data ONTAP 7-Mode, the physical adapter WWNN and WWPN start with 5; while a clustered Data ONTAP vserver WWNN and the target LIF WWPN start with 2.
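
As an illustration of the format described above, node and port names look like the following (these specific values are made-up placeholders, not addresses from this course):

    21:00:00:e0:8b:0a:1b:2c   (a QLogic initiator HBA WWPN, which generally begins with 2)
    10:00:00:00:c9:6f:7d:8e   (an Emulex initiator HBA WWPN, which generally begins with 1)
    50:0a:09:82:87:09:2b:ea   (a 7-Mode physical target port WWPN, which begins with 5)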

IP SAN node and portals

In IP SANs, an iSCSI node name describes a machine and the portal describes a physical interface. Furthermore, each iSCSI node in IP SANs must have a node name. There are two possible node name formats: iSCSI Qualified Name (also called IQN) or Extended Unique Identifier (or EUI). The IQN designator format is shown in this diagram. We will look at the differences between these formats in Module 3.
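
For reference, the two formats typically look like the following; the host name and serial number shown here are illustrative placeholders:

    iqn.1991-05.com.microsoft:host1.example.com   (IQN of a Windows software initiator)
    iqn.1992-08.com.netapp:sn.118041888           (IQN of a 7-Mode NetApp target)
    eui.02004567a425678d                          (EUI format: "eui." followed by 16 hexadecimal digits)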

Connecting initiators and targets

IP SANs and Fibre Channel SANs can be implemented in any one of several topologies. The two basic concepts of connecting initiators and targets are direct connection topologies and networked topologies through FC or Ethernet switches. A Fibre Channel switched network is called a fabric. Similar to Ethernet virtual LANs (also called VLANs), Fibre Channel zones can be implemented for security and network performance advantages. Zoning on an FC switch groups initiators and targets that are permitted to communicate. Initiators and targets can communicate with each other only if they are members of the same zone. Ethernet or FC switched networks enable scalability and flexibility for SAN implementations. Clustered Data ONTAP does not support direct connection topologies.

Connectivity between HA pairs

NetApp FAS storage systems can be configured as high-availability pairs. In 7-Mode, FC SAN traffic can traverse the failover connection between high-availability controllers. In clustered Data ONTAP, FC SAN traffic can traverse the cluster interconnect between storage failover pairs. Therefore, if storage platforms are configured for high availability, it is possible to configure FC SAN traffic to fail over to the high-availability partner. However, in an IP SAN, initiator-to-target traffic never flows over the high-availability connection. You will hear more about this concept in Module 6.

Seven steps for implementing a SAN

Step one: discover the target

There are seven steps for implementing a SAN. You must discover the target, create a session, create an igroup, create a LUN, map the LUN, find the disk, and prepare the disk. The first step is to guarantee that an initiator discovers the target. In IP SANs, you must supply the initiator with the target IP address. For 7-Mode, the target IP address is the physical IP address of the target Ethernet port. For clustered Data ONTAP, by default, you supply the initiator with the IP address of one of the data iSCSI LIFs. You can also connect the initiator and target through an iSCSI name service that is called Internet Storage Name Service (or abbreviated as iSNS). In FC SANs, when a target and initiator are directly attached or connected to the same fabric and zone, you do not need to supply target information. Discovery is automatic. You can connect initiators and targets over a single path or multiple paths. FC and iSCSI single-path connectivity are the topics of Modules 2 and 3. FC and iSCSI multipath connectivity are the topics of Modules 6 and 7.
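
As a rough sketch of how you might confirm which target addresses to give the initiator (the Vserver name is a placeholder, and exact syntax varies by release):

    7-Mode:
      iscsi interface show
    Clustered Data ONTAP:
      network interface show -vserver svm1 -data-protocol iscsi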

Step two: create a session

The second step for implementing a SAN is to create a session. A session associates the initiator with a target. In an FC SAN, a session is automatically created upon discovery. In an IP SAN, a session might be created automatically, depending on the host OS. You can also configure a session to persist, or automatically reconnect, if the initiator host reboots.
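
A quick, hedged way to confirm from the storage side that a session was established (the Vserver name is a placeholder):

    7-Mode:
      iscsi session show
    Clustered Data ONTAP:
      vserver iscsi session show -vserver svm1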

Step three: create an igroup

The third step is to create an initiator group—also called an igroup—on the storage system. As with fabric zoning, igroups identify the initiators that are allowed to access a LUN. In IP SANs, you identify an initiator by its worldwide node name. An iSCSI worldwide node name is designated by its iSCSI Qualified Name (abbreviated as IQN) or Extended Unique Identifier (abbreviated as EUI identifier). In Fibre Channel SANs, you identify an initiator by its worldwide port name. In the example on this slide, the user created a Windows iSCSI igroup that is called "My IP igroup," associated it with the IQN of a Windows host, and created a Windows Fibre Channel igroup that is called "My FC igroup." "My FC igroup" is associated with a QLogic HBA on the Windows host.
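
A hedged 7-Mode sketch that mirrors the slide's example; the igroup names, IQN, and WWPN below are placeholders that stand in for "My IP igroup" and "My FC igroup":

    igroup create -i -t windows my_ip_igroup iqn.1991-05.com.microsoft:host1
    igroup create -f -t windows my_fc_igroup 21:00:00:e0:8b:0a:1b:2c
    igroup show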

Step four: create a LUN

The fourth step for implementing a SAN is to create a LUN. You've learned that a LUN is a representation of physical storage. When a LUN is created, it is stored as a specialized file, in a volume, and contained within an aggregate. Data that is written to the LUN is automatically striped across the physical disks that are associated with that aggregate. Data ONTAP manages LUNs at the block level. The file system and the data that is written to the LUN are managed by the initiator OS. Data ONTAP does not interpret the file system or the data that is contained within the LUN. In the example on this slide, two LUNs were created: LUNa and LUNb.
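
A hedged 7-Mode sketch of this step, using the LUN names from the slide (sizes and ostype values are placeholders; the clustered Data ONTAP syntax is covered in Module 4):

    lun create -s 10g -t windows_2008 /vol/vol1/LUNa.lun
    lun create -s 10g -t windows_2008 /vol/vol2/LUNb.lun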

Step five: map the LUN to the igroup

The fifth step is to map the LUN to an igroup. When you map LUNs to igroups, you assign a reference ID number. In the example on this slide, LUNa is assigned the number 1 and LUNb is assigned the number 2. This step is also referred to as LUN masking.
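
A hedged 7-Mode sketch of the mapping, reusing the placeholder names from the earlier sketches:

    lun map /vol/vol1/LUNa.lun my_ip_igroup 1
    lun map /vol/vol2/LUNb.lun my_ip_igroup 2
    lun show -m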

Step six: find the LUN

In step 6, from the initiator, you scan for the LUN that is offered to that initiator by the igroup on the storage system. The LUN must be identified by the initiator host OS. From the initiator host, the LUNs appear as local disk devices. You can format these disks and store data on them. In this example, disk 1 is a locally attached disk, disk 2 is LUNa, and disk 3 is LUNb. When multiple paths to the LUN are present, the host might identify the same LUN multiple times. To combine these logical disks into a single disk device, multipathing software is needed.

Step seven: prepare the disk

The final step is to prepare the disk for the initiator host OS. Using the initiator's OS, you must label, format, and add a file system to the LUN. You must then mount the LUN as a disk device to be written to by application servers on the initiator host. Preparing the LUN as a disk differs based on the initiator host OS. LUNs can be prepared as single disks or combined by using a host-based volume manager.

Module two: FC connectivity

Direct-attached topology

Initially, Fibre Channel direct-attached or point-to-point topologies were seen as a replacement for the parallel SCSI bus to overcome the bandwidth and distance limitations of parallel architecture. Fibre Channel at 100 MB/s was superior to SCSI at 10 to 20 MB/s, and as SCSI progressed to 40, 80, and then 160 MB/s, Fibre Channel stayed ahead with 200 and then 400 MB/s. Eventually, parallel SCSI bandwidth reached a technological ceiling while Fibre Channel was just getting started. Fibre Channel point-to-point also overcame the severe distance limitations of SCSI, although one limitation remained: Fibre Channel connected one initiator to one target, which supported only the simplest topology. Because direct-attached topology does not scale, it is not appropriate for most enterprise environments. Furthermore, within this architecture, there is no fault tolerance. If the cable or HBA is defective on the initiator or target, a host loses connectivity with its storage. Direct-attached topology is supported by 7-Mode but not by clustered Data ONTAP.

Fabric topologies

To overcome the inherent limitations of direct-attached topology, a Fibre Channel switch can be introduced. A switched fabric (or network) uses a 24-bit addressing scheme with 64-bit port names and node names. This scheme has about 16 million possible addresses, and the initiator-target pair has a dedicated, nonblocking path to ensure full bandwidth. A single fabric is a switched fabric topology in which the servers are attached to NetApp storage controllers through a single Fibre Channel fabric. Single-fabric designs have a single point of failure and therefore are not completely fault tolerant. This limitation is eliminated with the dual-fabric design discussed later in this course. In this module, we will investigate a single Fibre Channel path. We will look at dual paths in Module 6.

Physical wiring and FC configuration

HBAs on the initiator must be installed and properly cabled to the Fibre Channel switch. The example on this slide uses the NetApp storage system's built-in HBAs, named 0b and 0d, as target adapters. This module focuses on FC port 0b and assumes that it is properly cabled to the switch. Module 6 focuses on FC port 0d and FC multipathing. This course also explores what happens when a target HBA is part of a NetApp high-availability pair or a clustered Data ONTAP configuration.

Fibre Channel switch configuration

First, we will investigate the initial switch configuration. We can assume that the HBAs on the NetApp storage are in their initial default state. As for the initiator configuration, assume that the HBA adapter has not been fully configured. After logging in to the switch, type “version” to investigate which version of the switch OS is being used. Next, enter the switchshow command to view the current nodes connected to the switch. As you can see, ports 0 through 7 show a status of “No Light.” There are currently no nodes logged in to the switch. Additionally, this Fibre Channel switch is not zoned. Fabric zoning is not covered in this course.

Data ONTAP: configuring the FC HBAs
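
No narration accompanies this slide. As a hedged sketch, onboard ports on a 7-Mode system are checked and, if necessary, converted to target mode before use; the adapter name is a placeholder, and a reboot is required after the mode change:

    fcadmin config                 (show whether onboard ports are in initiator or target mode)
    fcadmin config -t target 0b    (convert port 0b to target mode; takes effect after a reboot)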

Data ONTAP: licensing FC
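
A minimal 7-Mode sketch, assuming you have an FCP license key (the key itself is a placeholder):

    license add <fcp_license_code>
    fcp start
    fcp status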

Clustered Data ONTAP: licensing FC
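
A hedged clustered Data ONTAP equivalent; the license key and Vserver name are placeholders:

    system license add -license-code <fcp_license_key>
    vserver fcp create -vserver svm1
    vserver fcp show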

Data ONTAP: enabling adapters
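
A hedged 7-Mode sketch of bringing the target adapter online and verifying it (the adapter name is a placeholder; command details vary by release):

    fcp config 0b up
    fcp show adapters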

Clustered Data ONTAP: configuring a Vserver with an FC LIF
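
A hedged clustered Data ONTAP sketch; the Vserver, LIF, node, and port names are placeholders:

    network interface create -vserver svm1 -lif fc_lif1 -role data -data-protocol fcp -home-node node1 -home-port 0b
    network interface show -vserver svm1 -data-protocol fcp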

Clustered Data ONTAP: verifying FC switch connectivity
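
As a hedged sketch, connectivity can be checked from the cluster shell and then from the switch (the node name is a placeholder; exact output varies):

    network fcp adapter show -node node1   (confirm that the target ports are online)
    switchshow                             (on the FC switch, confirm that the ports now show a logged-in device)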

Installation and configuration

At the beginning of the process for preparing the initiator, you must verify supportability of the host OS releases and patches by referencing the Interoperability Matrix Tool on the NetApp Support site. You install compatible HBAs on the initiator and then install and configure the required HBA drivers and utilities. Depending on the initiator OS and the notes that are offered by the NetApp Interoperability Matrix Tool, you might need to install compatible NetApp Host Utilities. NetApp Host Utilities offer a set of scripts that are used to adjust specific host operating systems and HBAs for optimal Fibre Channel communication with the NetApp storage. During this process, you must also cable the HBAs to the switch.

Initiator HBA verification

After installation of one or more HBAs, the adapter port or ports will be visible in the HBA vendor tools, or by using Storage Explorer within Windows Server 2008 R2. In the left pane, you can select the Servers category and then the host name of your server to reveal the attached HBAs. In this example, two Emulex HBAs are installed. In this module, we will assume that only one HBA has been connected to the fabric.

Data ONTAP: verifying initiators
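
No narration accompanies this slide. As a hedged 7-Mode sketch, initiators that have logged in to the target can be listed from the storage CLI:

    fcp show initiators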

FCoE connectivity overview

Module three: iSCSI connectivity

iSCSI topologies

- Direct attached
- Network

Network environments

In a network environment, servers are attached to NetApp controllers through Ethernet switches. A network can consist of multiple Ethernet switches in any configuration. There are two types of switched environments: dedicated Ethernet and shared Ethernet. In a dedicated Ethernet environment, there is no extraneous network traffic. In other words, the network is totally dedicated to iSCSI and related management traffic. This kind of network is typically located in a secure data center. In a shared Ethernet environment, the network is shared with other corporate network or Internet traffic. Shared Ethernet environments typically include firewalls, routers, and IP security (also called IPsec) to secure the network. The example on this slide shows a single switch with a single path to the target. Note that this design is not fault tolerant, and it is shown only to demonstrate basic iSCSI concepts. Multiple paths for iSCSI will be introduced in Module 7.

Data ONTAP: licensing and configuring iSCSI
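
No narration accompanies this slide. A minimal 7-Mode sketch (the license key is a placeholder):

    license add <iscsi_license_code>
    iscsi start
    iscsi status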

Data ONTAP: iSCSI WWNNs
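
As a sketch, the 7-Mode target's iSCSI node name can be displayed from the storage CLI:

    iscsi nodename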

Clustered Data ONTAP: iSCSI WWNNs
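
In clustered Data ONTAP, the node name belongs to the Vserver; a hedged sketch (the Vserver name is a placeholder):

    vserver iscsi create -vserver svm1
    vserver iscsi show -vserver svm1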

Configuring the initiator host

Configuring local interfaces on the initiator host

To prepare the initiator, you must first confirm that the host OS version and required patches are compatible with your hardware. This task can be accomplished easily by using the Interoperability Matrix Tool. Next, you install or verify the system's network adapters. Third, you install any required drivers and utilities. During this process, you cable the network adapters to the switch. Finally, you install compatible NetApp Host Utilities, if warranted, just as in a Fibre Channel configuration. For the implementation exercise, assume that the network interfaces are installed and properly cabled.

Identifying the IQN on the initiator host

After installation, we need to identify and configure the local network interfaces we wish to use in the IP SAN. Within Windows Server 2008 R2, we can launch the Network and Sharing Center to complete the configuration. From this dialog box, we can double-click the Local Area Connection link, which displays the Local Area Connection Status dialog box. We can then access the properties dialog box for this interface by clicking the Properties button. The exact details of configuring an interface within Windows Server 2008 R2 are beyond the scope of this course. For more information, please see Microsoft’s TechNet website.

Describing iSCSI discovery

Unlike Fibre Channel, iSCSI initiators and targets do not discover each other automatically. We need to configure the Windows software initiator to properly discover the e0b target portal on the storage system.

Discovering target portals

To configure discovery on the Windows Server 2008 R2 software initiator, first select the Discovery tab. The methods available to configure discovery are manually defining the target portals or designating an Internet Storage Name Service (or iSNS). For more information about configuring an iSNS, please see the SAN Design Web-based training course and the SAN Implementation Workshop instructor-led course. In this course, we will manually define e0b’s target portal by clicking the Discover Portal button. The Discover Target Portal dialog box appears. Enter the IP address for the e0b adapter on the target storage system. NOTE: iSCSI uses TCP port 3260 for communication. This port should not be changed, and any firewall between the storage system and the initiator host must not block it.
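
The same discovery step can also be scripted; a hedged sketch using the Windows iscsicli utility, where the portal IP address is a placeholder:

    iscsicli QAddTargetPortal 192.168.10.20
    iscsicli ListTargets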

Connecting to a target portal

To connect to the target portal and begin a session, click the Targets tab. Note that the storage system was discovered but the current state is inactive. Select the storage system in the target list, and then click the Connect button. The Connect To Target dialog box appears. Note that you can enable multipath on this dialog box. Multipath details are provided later in this course. In the Connect To Target dialog box, click OK. The status changes to Connected. On the storage system CLI, a message appears to notify you that an iSCSI session with an initiator was established.

Module four: preparing a LUN

With Data ONTAP, there are three steps for preparing a LUN for an initiator. First, an igroup must be created with a list of authorized initiators. Second, the LUN must be created on a storage system or Vserver within an existing volume or qtree. Third, the LUN must be mapped to an igroup and given a LUN identifier. There are many methods for completing these three steps. This presentation covers only the CLI method. For more details and hands-on practice, see the instructor-led training with the SAN learning maps.

Step one: create an igroup

The first step is to create an igroup to provide access to a logical unit. Initiator groups are tables of host identifiers, either Fibre Channel worldwide port names or iSCSI worldwide node names, that are used to control access to logical units. Typically, all of the host's host bus adapters or software initiators should have access to a logical unit. If you are using multipathing software or have clustered hosts, each HBA or software initiator for each clustered host needs redundant paths to the same logical unit. You can create igroups that specify which initiators have access to the logical units either before or after you create a logical unit, but you must create igroups before you can map a logical unit to an igroup. Igroups can have multiple initiators, and multiple igroups can have the same initiator. However, you cannot map a logical unit to multiple igroups that have the same initiator. Also, an initiator cannot be a member of igroups of differing operating system types (or ostypes).

Clustered Data ONTAP: creating an igroup
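
No narration accompanies this slide. A hedged clustered Data ONTAP sketch; the Vserver, igroup name, and IQN are placeholders:

    lun igroup create -vserver svm1 -igroup my_ip_igroup -protocol iscsi -ostype windows -initiator iqn.1991-05.com.microsoft:host1
    lun igroup show -vserver svm1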

Step two: create a LUN

NetApp recommends creating logical units in a flexible volume or qtree with no other data. In this example, LUNa was created in volume 1 in aggregate 1 with a complete path of /vol/vol1/LUNa.lun. LUNb was created in volume 2 in aggregate 1 with a complete path of /vol/vol2/LUNb.lun. NetApp does not recommend creating logical units within the root volume. In clustered Data ONTAP, aggregates are created first. Vservers are then created and provisioned by aggregates. LUNs are created on Vservers within volumes or qtrees. Creating a qtree is an optional step.
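
Because the 7-Mode syntax was sketched in Module 1, here is a hedged clustered Data ONTAP equivalent of the LUN creation just described; the Vserver, volume, size, and ostype values are placeholders:

    lun create -vserver svm1 -volume vol1 -lun LUNa -size 10g -ostype windows_2008
    lun show -vserver svm1 -volume vol1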

Clustered Data ONTAP: creating a LUN

Step three: map the LUN to an igroup

The third step is to map the LUN to an igroup. This step is also known as LUN masking. When you map the LUN to an igroup, you grant the initiators in the igroup access to the LUN. In the process, you also assign an identifier to the mapped LUN, also called a LUN ID. For the LUN ID, you can specify the number or allow Data ONTAP to automatically assign a number. If you do not map the LUN to an igroup, the LUN is inaccessible to any initiator.
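
A hedged sketch of the mapping in each CLI; the names and LUN ID are placeholders:

    7-Mode:
      lun map /vol/vol1/LUNa.lun my_ip_igroup 1
    Clustered Data ONTAP:
      lun map -vserver svm1 -path /vol/vol1/LUNa -igroup my_ip_igroup -lun-id 1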

Discovery of the LUN

Once iSCSI connectivity is established, return to the initiator and use the Disk Management tool within Windows Server 2008 R2 to rescan the disk subsystem. To do this, right-click Disk Management and select Rescan Disks. After a while, the LUN should appear. You will see that the LUN is initially offline.
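
The rescan can also be performed from a command prompt; a hedged sketch using diskpart:

    diskpart
    DISKPART> rescan
    DISKPART> list disk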

Initialize the LUN

Right-click the disk header, and then select Initialize Disk. In the Initialize Disk dialog box, select either the MBR (master boot record) or the GPT (GUID partition table) partition style. The MBR partition style is the default. Select the GPT style if the disk size is greater than 2 terabytes, and then click OK.

The New Simple Volume Wizard: launch

After initialization, the disk state is unallocated. Now it is time to format the disk as a volume. Right-click the Disk header, and then select New Simple Volume to start the New Simple Volume Wizard. Click Next to continue formatting the disk.

The New Simple Volume Wizard: size and mount

On the next screen, the storage administrator can specify the volume size to create. By default, the wizard selects the largest size available. To accept this, click Next. On the next screen, the storage administrator can specify whether to mount the new volume as a drive letter, a path, or not to mount it at all. We will accept the default by clicking Next.

File system and label

On the next screen, the storage administrator can choose either to not format the volume or to format it. We will use the default, which is to format the volume as an NTFS file system with the default allocation unit size and a label of New Volume. Then, click Next. Finally, review the steps that the wizard will perform and click Finish to execute them. After the wizard successfully runs, our LUN will be a Windows NTFS volume mounted as drive letter E.
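
For administrators who prefer scripting, roughly the same preparation can be sketched with diskpart; the disk number, label, and drive letter are placeholders:

    DISKPART> select disk 2
    DISKPART> online disk
    DISKPART> attributes disk clear readonly
    DISKPART> create partition primary
    DISKPART> format fs=ntfs quick label="New Volume"
    DISKPART> assign letter=E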


Module five: LUN provisioning

Default volume creation

As you have seen, LUNs are created within a volume. When you create a volume, a percentage of the data blocks of the containing aggregate is reserved for the volume. For example, if you create a 10-GB volume, the aggregate displays that 10 GB has been consumed, even though no data is currently written within the volume. By default, 95% of the volume is allocated as the active file system. The remaining 5% is allocated to Snapshot reserve. If you perform a df -r command for the volume, you see the space and Snapshot reserve allocation. You can increase or decrease the Snapshot reserve with the snap reserve command. In our demonstration, for the sake of having even numbers and easy graphics, we will change the Snapshot reserve from 5% to 20%.
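
A hedged 7-Mode sketch of those two commands; the volume name is a placeholder:

    snap reserve vol1 20
    df -r /vol/vol1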

Nonspace-reserved LUN

In a previous module, you created a LUN. As you know, LUNs are also called logical units. By default, LUNs are space-reserved unless specified as nonspace-reserved. If a LUN is space-reserved, the volume reports that the LUN consumes space within that volume even before a single bit is written to the LUN. This process is similar to a space-reserved volume that reports space that is used within its containing aggregate before data is written to the volume. You can also create a nonspace-reserved LUN. A nonspace-reserved LUN does not report space consumption within its volume until data is written to the LUN. In this example, an administrator is creating a nonspace-reserved 8-gigabyte LUN. When you perform a df -r command on the volume, the command reports that no space was consumed out of the active file system. Note that you can also create a 10-gigabyte nonspace-reserved LUN, even though only 8 gigabytes are available within the active file system of the volume. The size of the LUN is greater than the amount of data within the LUN at its creation. Such LUNs are called thinly provisioned LUNs. Remember that all LUNs are just files, and files can be normal or sparsely allocated. LUNs are always sparse files, which means that a LUN has data only when a host writes data to that LUN. Therefore, the host OS on the initiator can accurately report the total space that is available from the LUN, regardless of whether the LUN is space-reserved or nonspace-reserved on its containing volume.
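
A hedged 7-Mode sketch of creating a nonspace-reserved (thinly provisioned) LUN; the path, size, and ostype are placeholders:

    lun create -s 10g -t windows_2008 -o noreserve /vol/vol1/LUNthin.lun
    df -r /vol/vol1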

Thinly provisioned LUNs

The ability to create thinly provisioned LUNs can help you to quickly satisfy a service request. However, thinly provisioned LUNs must be monitored carefully. If you write 100% to the 8-gigabyte LUN from the initiator host and then perform a df -r command on the containing volume, the write succeeds. If you try to write more than 8 gigabytes to the thinly provisioned LUN, the write fails. From the initiator host, the LUN has 10 gigabytes of space available, but the containing volume had only 8 gigabytes available. Remember, Snapshot reserve cannot be used as part of the active file system unless you reconfigure the volume to have no Snapshot reserve. This example does not use that type of configuration. Therefore, writing more than 9.5 gigabytes to this thinly provisioned 10-gigabyte LUN results in failure. The LUN is marked offline by the Data ONTAP operating system, which makes the LUN inaccessible to the initiator.

Space-reserved LUN

You've learned that a space-reserved LUN is the default. Next you'll learn more about how this characteristic affects the creation of Snapshot copies of the volume that contains the LUN. In this example, you'll create a 2-gigabyte LUN. The df -r command reports that 2 gigabytes of the volume's 8-gigabyte active file system are consumed or used immediately, as expected. Remember, however, that the LUN is a sparse file, so the initiator host reports that 2 gigabytes are available on this disk.

Writing to a space-reserved LUN

Now if you write 100% to the LUN, the initiator host reports that no space is available. However, on the storage system, a df -r report of the containing volume has no change, because the LUN was space-reserved, and when the LUN was created, the volume accounted for the LUN size.

First Snapshot copy of a space-reserved LUN

Often storage administration requires Snapshot copies to preserve backups of a LUN. When creating a Snapshot copy, you must ensure that the file system contained on the LUN in the Snapshot copy is consistent and usable. Usually, this is done by using the NetApp SnapDrive application. However, when we create a Snapshot copy of the volume containing the LUN, something unique happens by default. As with other Snapshot copies, no space is consumed by the initial Snapshot creation process. However, on the block consumption report, we see that the used portion of the active file system is now 4 GB, and there is a new reserved amount allocated. The actual LUN usage is still 2 GB. This reserve is graphically displayed at the end of the volume, but it is not physically partitioned there. Instead, this is just an accounting mechanism that reserves blocks within the volume for what is called overwrite protection or guarantee. This overwrite space is calculated by multiplying the size of the Snapshot copy by the fractional reserve attribute of the volume. To find vol1’s fractional reserve quotient, execute the following command. The fractional reserve attribute is 100, meaning that 100% of the blocks written will have matching space reserved by the overwrite protection option. Therefore, because the fractional reserve is 100% and the Snapshot size is the amount of space used within the active file system (in this case, 2 GB), the resulting overwrite guarantee reserve is 2 GB. This overwrite protection is reported in the reserve column and also accounted for in the used column of the df -r report. In earlier versions of Data ONTAP, to increase the amount of space available to the active file system, it was possible to set the fractional reserve attribute to a value less than 100%, such as 50%. This would result in an overwrite reserve space of 1 GB instead of 2 GB for this example.

However, in Data ONTAP 8.2, you can set fractional reserve to either 0 or 100 (that is, off or on). In the rest of this presentation, it is assumed that fractional reserve is 100% unless we state otherwise.
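
A hedged sketch of how the attribute might be inspected and set; the volume and Vserver names are placeholders, and the clustered syntax assumes Data ONTAP 8.2:

    7-Mode:
      vol options vol1                          (look for fractional_reserve in the output)
      vol options vol1 fractional_reserve 100
    Clustered Data ONTAP:
      volume modify -vserver svm1 -volume vol1 -fractional-reserve 100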

First overwriting a space-reserved LUN

Now if we write 100% to the LUN again, we are actually “overwriting” the data on the LUN. But the overwrite protection space is not used yet, since there is space available in the active file system. Therefore, the data is written to the active file system. However, the previous data of the LUN is not destroyed, because we have created a Snapshot copy to preserve it. Now, the df -r report shows that the Snapshot reserve has no space available, since 2 GB are consumed by a Snapshot copy.

Second Snapshot copy of a space-reserved LUN

Next, if the storage administrator creates another Snapshot copy on the volume, the block consumption report would look as follows. No change occurs after the second Snapshot copy is created because the overwrite reserve space is already equal to the Snapshot copy size multiplied by the fractional reserve option of the volume. As you can see, the overwrite reserve space will never grow larger than the LUN size multiplied by the fractional reserve quotient.

Second overwriting a space-reserved LUN

If we entirely overwrite the LUN again, then df -r will report the following. Notice that the Snapshot reserve is over its total of 2 GB by an amount of 2 GB. The active file system reports that 6 GB are used. However, the current LUN size is still only 2 GB, and 2 GB is reserved for overwrite protection. We did not need to use this overwrite protection yet, because there was still space available in the active file system.

Third overwriting a space-reserved LUN

Now, let’s assume that the initiator host overwrites the LUN entirely again. There is still space available in the active file system, so it will be used instead of the overwrite protection space. The previous data within the LUN is not destroyed, because it is preserved by the previous Snapshot copy. The space consumption report will appear as follows, with the active file system reported as being used completely. Remember, this includes the overwrite protection space reservation. The Snapshot reserve reports that it is over its total by 4 GB, meaning that we have 6 GB of Snapshot copy data.

Fourth Snapshot copy of a space-reserved LUN

Just as before, if the storage administrator creates another Snapshot copy on the volume, there is no change in the block consumption report after this fourth Snapshot copy is created.

Fourth overwriting a space-reserved LUN

With all other space used up within the volume, if the initiator host now overwrites the LUN once again, it will now use the overwrite reserve that was created after the first Snapshot copy. The df -r report will reveal that all space is consumed within the volume and that the Snapshot reserve is over the reserve’s total by an amount of 6 GB. This means that there is 8 GB of Snapshot copy data within the volume.

Fifth Snapshot copy of a space-reserved LUN

If the storage administrator attempts to take a fifth Snapshot copy, notice what happens this time. The Snapshot creation process will fail and an error message will appear within the command-line interface of the storage system. The Snapshot copy fails because the overwrite space cannot be guaranteed. Remember, this overwrite space is the Snapshot size multiplied by the fractional reserve percentage. The initiator host can still overwrite the LUN, but any data that is overwritten in this case will be lost. The solution to this problem would be to either delete Snapshot copies or expand the flexible volume if there is room available within its containing aggregate. We will look at steps to automate these two possible solutions later in this module. However, these steps can also be performed manually by the storage administrator.

Fractional reserve analysis

Notice that even though we had fractional reserve set to 100%, the system still ran out of space. Fractional reserve did, however, prevent the host from experiencing an error during writing. The initiator host can overwrite the LUN completely and can lose data not protected by a Snapshot copy, but the initiator host will never fail to continue writing to the LUN, and Data ONTAP will never take the LUN offline, because overwrite room is always guaranteed through the volume’s fractional reserve option. However, fractional reserve may not be an efficient use of your storage. If you never used all the active file system space within the volume because you maintained only a few Snapshot copies or didn’t write extensively to the LUN, the overwrite reserve might be wasted space. Also, with the default fractional reserve, administrators have to create a larger volume size to provide for the guaranteed overwrite reserve when a smaller volume, with fractional reserve set to zero, might be adequate. Sometimes all that is needed is better space management of your volumes, and this is often accomplished through better management of Snapshot copies.

Space management

You've learned that Snapshot copies can fill up a volume if they are not managed properly. This situation can result in preventing writes to a LUN if the volume has less than 100% fractional reserve or if a LUN is nonspace-reserved. To better manage space, you can use two key configurations. One configuration enables you to automatically expand the size of the volume that contains the LUN. The other configuration enables you to automatically delete Snapshot copies.

Automatically increasing volume size

The first key configuration is the vol autosize option. In clustered Data ONTAP, the command is volume autosize. To prevent out-of-space issues for the initiator host, you can automatically grow the volume that contains the LUN when the volume is almost out of space. In Data ONTAP, the volume automatically grows when its size reaches the reclaim threshold parameter set by WAFL, also called the Write Anywhere File Layout system. WAFL sets the wafl_reclaim_threshold parameter based on the capacity of the volume. You can use the -grow-threshold-percent parameter to change the volume's almost-full threshold. In clustered Data ONTAP, beginning with version 8.2, you can set a volume to automatically grow or shrink with the -autosize mode. Set the autosize mode to off, grow, or grow_shrink. The grow_shrink parameter enables you to reclaim underused space.
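
Hedged sketches of both forms; the volume, Vserver, and size values are placeholders, and available parameters differ by release:

    7-Mode:
      vol autosize vol1 -m 12g -i 1g on
    Clustered Data ONTAP:
      volume autosize -vserver svm1 -volume vol1 -mode grow_shrink -maximum-size 12g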

Snapshot automatic deletion

A second way that you can reclaim space is by enabling the Snapshot autodelete option. This option, when set at the volume level, deletes Snapshot copies automatically when the volume reaches a configurable threshold. You configure Snapshot automatic deletion by first enabling snap autodelete and then setting parameters that specify when to trigger a Snapshot deletion, which Snapshot copies to delete first, and other details.
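
A hedged sketch showing a few of the available parameters; the volume and Vserver names are placeholders:

    7-Mode:
      snap autodelete vol1 on
      snap autodelete vol1 trigger volume
      snap autodelete vol1 delete_order oldest_first
    Clustered Data ONTAP:
      volume snapshot autodelete modify -vserver svm1 -volume vol1 -enabled true -trigger volume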

Your choice

In 7-Mode and clustered Data ONTAP, you can set up automatic Snapshot copy deletion and automatic volume expansion for your volumes. You can enable one or both of these space management options. When space in your volume is getting low, by default, the volume is first automatically resized, up to the set size limit or until the aggregate is used up, and then the oldest Snapshot copies are deleted. Expanding the volume preserves data at the expense of disk space. In Data ONTAP 7-Mode, you use the try_first option to customize your automatic space management policy. If you change the try_first setting to snap_delete, then Snapshot copies are deleted based on the Snapshot autodelete policy before the volume is expanded. In clustered Data ONTAP, instead of the try_first volume option, the command you use to customize space management is volume modify with the -space-mgmt-try-first switch.
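
A hedged sketch of that choice in each CLI; the names are placeholders:

    7-Mode:
      vol options vol1 try_first snap_delete
    Clustered Data ONTAP:
      volume modify -vserver svm1 -volume vol1 -space-mgmt-try-first snap_delete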

Module six: FC multipathing

Multipath topologies

Introducing a second path on both the initiator host and on the NetApp storage, connected through two switches, provides redundancy. If the two switches are connected by way of an Inter-Switch Link (or ISL), the topology is referred to as a single-fabric implementation. If the switches are not connected, the topology is referred to as a dual-fabric implementation. For complete redundancy, clustering the initiator host is also recommended.

Describing the demonstration environment

In Module 2, we had only a single path over target host-bus adapter 0b on node 1. However, we are now implementing 0d on node 1, and introducing 0b and 0d on node 2. Both storage systems will be configured in a high-availability or active-active controller configuration. Likewise, another Fibre Channel port is configured on the Windows initiator host.

HA configurations

The storage systems in this example were configured as a high-availability pair (also called an HA pair). So that the controllers can fail over to one another, the storage controllers are cabled together with an interconnect cable. On each storage system, you add the controller-failover license and reboot the partner storage systems. After the reboot, you enable controller failover with the cf enable command. The storage systems are now in an HA configuration. In clustered Data ONTAP, two nodes in a cluster can also be cabled together and licensed as failover partners. This configuration in clustered Data ONTAP is called storage failover rather than controller failover. In Data ONTAP 8.2 7-Mode, HA does not need to be licensed. The commands to set up HA in Data ONTAP 8.2 are these:
a. options cf.mode ha
b. Reboot.
c. Use cf status to check it.
In clustered Data ONTAP 8.2, HA also does not require a license. Use these commands in clustered Data ONTAP to set up storage failover:
a. storage failover modify -mode ha -node cluster1-01,cluster1-02
b. Reboot each node.
c. Use this command to verify the storage failover pair: storage failover show

HA and multipathing

FC multipathing, configured with high availability, multiplies the connections between the initiator and the target. In Data ONTAP, the two systems of an HA pair are viewed in the fabric as a single system with a shared, single worldwide node name. In clustered Data ONTAP, the Vserver, rather than the HA pair, is viewed in Fibre Channel as a single system with a single worldwide node name. In the diagram in this animation, the hosts have primary connectivity to the LUN on Controller 1 through adapter 0b and the Fabric 1 switch and through adapter 0d and the Fabric 2 switch. Additionally, because this traffic can cross the interconnect between Controller 1 and Controller 2, the LUN is also available on Controller 2 through adapter 0b and the Fabric 1 switch and through adapter 0d and the Fabric 2 switch. Therefore, through the first adapter on the host, there are a total of two separate paths to the LUN that is managed primarily by Controller 1. If you have a second adapter on the initiator host, these two paths are also available, which brings the total number of separate connectivity paths to four. Multipathing software on the host is required to successfully implement this configuration. With multipathing software, the paths over the interconnect, such as the 0b and 0d paths on Storage System 2 that communicate to the LUN, are used only if designated manually or if no primary path is available.

Loss of a fabric

Therefore, if the Fabric 1 switch experiences a failure, the multipathing software will work around the failure. For example, the LUN would still have primary access through controller 1’s 0d adapter by way of the Fabric 2 switch. This path will be favored because it is faster than the alternative interconnect paths.

Loss of a controller

If controller 1 experiences a failure, controller 1 will fail over to controller 2, and controller 2 will take control of the shelves serviced by controller 1. Multipathing software will then work around the failure by sending traffic across Fabric 1 to controller 2’s 0b interface or by using the path through Fabric 2’s switch to controller 2’s 0d interface. Controller 2 can access the LUN because, within the HA pairing, the shelves normally controlled by controller 1 and hosting the LUN are also connected to controller 2 through the failover loop.

How FC target ports appear on the initiator

Earlier in this module, you enabled and connected additional target ports. If you have a single Fibre Channel-attached LUN, this single LUN appears in the Disk Management window one time for every path to the LUN. In this example, you have two ports available on each controller and a single port open on the initiator host. Multipathing software such as Microsoft MPIO or the NetApp device-specific module (abbreviated as DSM) is required on the initiator. Without multipathing software, on a single-fabric implementation the LUN appears in the initiator's Disk Management window four times. If you add a second port to the initiator and properly cable a dual-fabric design, with two ports of the HA pair connected to each switch, then a single LUN would appear eight times without multipathing software. After you install Microsoft MPIO or the NetApp DSM, the LUN is aggregated into a single disk device.

MPIO configuration in Windows

To install multipath I/O support, you need to install the feature from Server Manager. With Windows Server 2008 R2, there is no need for a reboot after you accomplish this task. This feature installs the multipathing support along with the Microsoft default Device Specific Module (or DSM).

DSMs supported by Data ONTAP for Windows hosts

A DSM is required for multipathing to manage path failover by communicating with the storage systems. You can have only one DSM for a given vendor ID and product ID (VID/PID) pair. The Data ONTAP DSM for Windows MPIO is the NetApp multipathing software for Windows. The Data ONTAP operating system uses asymmetric logical unit access (ALUA) to identify optimized paths. ALUA must be enabled for FC multipathing to work. The Data ONTAP DSM enables multiple FC and iSCSI paths between the Windows host and NetApp storage. The Data ONTAP operating system also supports the native DSM for Microsoft Windows Server 2008 and Windows Server 2008 R2. ALUA must be enabled on the storage systems for using the native DSM. If used, the Data ONTAP DSM claims all discovered LUNs on NetApp storage systems. Other DSMs can claim LUNs from other storage systems with other VID/PID values.
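
On a 7-Mode system, ALUA is enabled per igroup; a hedged sketch in which the igroup name is a placeholder:

    igroup set my_fc_igroup alua yes
    igroup show -v my_fc_igroup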

Module seven: iSCSI multipathing

Single-path iSCSI

In Module 3, we saw how to configure a NetApp storage system and Windows initiator host as a single path IP SAN. However, this design is not fault tolerant. Now we will introduce a second path. Dual switches and clustering the host will allow complete redundancy in our implementation.

Architecture diagram

In Module 3, there was only a single path over portal e0b on Storage System 1. In this module, you'll see a second path introduced on the initiator host and portal e0c on the storage system. Note that unlike FC traffic, iSCSI traffic does not normally travel over the interconnect. The interfaces on storage system 2 do not have to be enabled for iSCSI traffic, even if the systems are configured as HA partners. However, if you want the storage system 2 network adapters to take over in case storage system 1 fails, you must configure the storage system 2 adapters in standby or multihomed mode to manage iSCSI traffic from the initiator host on behalf of the failed HA partner. See the Data ONTAP administrator's guides or the clustered Data ONTAP administration guides for the exact commands to configure network failover for HA configurations.

The DSM multipathing technique

In Data ONTAP 7-Mode, the MPIO technique with a DSM is the classic method for multipathing iSCSI connections. Depending on the initiator host OS and the DSM version that are installed, you might be able to multipath across FC and iSCSI paths. This technique supports multiple load-balancing algorithms and numerous software and hardware initiators.

Data ONTAP: iSCSI MPIO configuration

Try this exercise first with 7-Mode commands and then with clustered Data ONTAP commands. To have multiple paths within iSCSI, we need to ensure that we have more than one interface on the storage controller. Therefore, we must verify that the e0c interface is properly configured. Verify the configuration now. We have already licensed and started the iSCSI service in Module 3. Verify that e0c allows iSCSI traffic.
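
Hedged sketches of those checks; the interface, Vserver, LIF, node, and address values are placeholders:

    7-Mode:
      ifconfig e0c
      iscsi interface show
      iscsi interface enable e0c
    Clustered Data ONTAP:
      network interface create -vserver svm1 -lif iscsi_lif2 -role data -data-protocol iscsi -home-node node1 -home-port e0c -address 192.168.10.21 -netmask 255.255.255.0
      network interface show -vserver svm1 -data-protocol iscsi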

Clustered Data ONTAP: iSCSI MPIO configuration

In clustered Data ONTAP, to have multiple paths within iSCSI, you must ensure that you have more than one interface. Next you'll create a LIF that is associated with e0c on node 1. Unlike in 7-Mode, here you can use LIFs that are associated with ports on nodes other than the node where the LUN is located. Next you'll create a LIF that is associated with e0b on node 2. You can optionally create a LIF for e0c on node 2, but for this course, you will skip that step.

MPIO configuration in Windows

Describing DSMs

The Data ONTAP DSM for Windows MPIO is a device-specific module (DSM) that works with the Microsoft Windows MPIO drivers to manage multiple paths between NetApp storage systems and Windows hosts. The Microsoft native DSMs (msiscsidsm with Windows Server 2003 and msdsm with Windows Server 2008) can also be used to manage multiple iSCSI paths. The Data ONTAP DSM and the Microsoft native DSM can coexist, as long as the configuration is listed on the appropriate support matrices. Remember, to find the supported version of the NetApp device-specific module for a specific platform, see the Interoperability Matrix.

Configure 2nd local interfaces

To ensure complete redundancy, we need to configure a second network adapter on the initiator host. Within Windows Server 2008 R2, you can launch the Network and Sharing Center. From this dialog box, you can double-click the Local Area Connection 2 link, which displays the Local Area Connection 2 Status dialog box. Configure the interface as appropriate.

First iSCSI session revisited

Back in Module 3, we created a first session by using the Local Area Connection on the initiator and e0b on the target. If we click the Properties button with the target selected, the Properties dialog box appears. From the Sessions tab, we can see that the first session is connected to a target portal group tag of 1001. On the storage system, if we enter the iscsi tpgroup show command, we can see that network interface e0b is associated with 1001. On the clustered Data ONTAP CLI, we enter the vserver iscsi tpgroup show command and can see that network interface iS1 is associated with 1001.


Configuring the second iSCSI session

Within the Properties dialog box, we can select the Portal Groups tab to ensure that e0b and e0c are visible to the initiator. If this tab did not show both portals, we would have to troubleshoot discovery; to review how to configure iSCSI discovery, revisit Module 3 of this course. We can add a new, second iSCSI session to this target by clicking the Add session button. With this second session, we want to connect from Local Area Connection 2 on the initiator host to e0c on the storage system, so click the Advanced button when the Connect To Target dialog box appears. In the Advanced Settings dialog box, select Local Area Connection 2's IP address in the Initiator IP list and e0c's IP address in the Target portal IP list. Then click OK to approve this configuration. Back in the Connect To Target dialog box, click OK to create the second session.

Second iSCSI session

From the Sessions tab, we can see that the second session is now connected to a target portal group tag of 1002. On the storage system, if we enter the iscsi tpgroup show command, we can see that network interface e0c is associated with 1002. We have successfully implemented the MPIO technique to multipath iSCSI. Next, we could configure the load-balancing policy to distribute iSCSI traffic across the first and second sessions. For more information about load-balancing policies, see the SAN Implementation Workshop instructor-led course. On the clustered Data ONTAP CLI, we again enter the vserver iscsi tpgroup show command to see the second session. Note that network interface iS1 is associated with 1001.

MCS multipathing technique

iSCSI has another technique for multipathing. MCS (Multiple Connections per Session) is a feature of the iSCSI protocol that allows you to combine several connections inside a single session for performance and failover purposes. This technique does not require any special configuration on the Ethernet infrastructure. MCS is supported by the Microsoft software initiator, version 2.0 and later, but it is not supported by iSCSI HBAs. The MCS technique is not supported by clustered Data ONTAP, but it is supported by Data ONTAP 7-Mode.


MPIO versus MCS

With MPIO, every session that is created has exactly one connection. A session is associated with a target portal group (TPG), so each TPG must have only one interface associated with the group. MCS, by contrast, creates a single session that is associated with a single target portal group. Within that session, you can have multiple connections that are associated with the multiple interfaces that are available within the TPG. However, as you saw, TPGs by default have only a single associated interface per group. This type of configuration requires Data ONTAP 7-Mode; MCS is not currently supported in clustered Data ONTAP.

iSCSI MCS

To configure iSCSI MCS connections to a Data ONTAP FAS system, target NICs must be installed and enabled. In this course, we have been using the e0b and e0c adapters on the storage controller. Next, we need to create a new target portal group on the storage system and add the necessary interfaces to it. Then, on the Windows initiator host, we need to set the Microsoft iSCSI software initiator to discover the interfaces within the new target portal group and connect to the target by creating the session. The first connection is created automatically when the session is created. After that, we can add more connections with the Microsoft software initiator to connect to other interfaces associated with the session's target portal group. Slides 16 through 23 demonstrate these steps.


Configuring target portal groups

To implement multiple connections per session, we have to create a new target portal group by using the iscsi tpgroup create command. In this example, we are creating a group called mytp. Then we need to assign one or more interfaces to the group by using the iscsi tpgroup add command. In this example, we are adding e0b and e0c to the mytp group. We use the force switch, -f, to move an interface that already belongs to another target portal group. Remember that all interfaces belong to one target portal group by default. To verify the current target portal group configuration, use the iscsi tpgroup show command, as shown below. We can see from this output that the mytp group has a target portal group tag of one. We will see this tag later in the Windows initiator interface. Placing more than one interface into the target portal group and implementing MCS on the initiator host allows a connection to fail without disrupting the iSCSI session.
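Put together, the 7-Mode commands for this example look roughly like the following (the system prompt name is illustrative; mytp, e0b, and e0c come from the example above):

    system1> iscsi tpgroup create mytp
    system1> iscsi tpgroup add -f mytp e0b e0c
    system1> iscsi tpgroup show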

Create the session

Remember, you have to perform discovery of the target first. For a review of configuring the initiator to discover a target, see Module 3 of this course. Next, in the Targets tab of the iSCSI Initiator Properties dialog box, you will see the discovered targets. Select the target and click Connect. In the Connect To Target dialog box, click Advanced. In the Advanced Settings dialog box, choose the portals for the first connection. Click OK, and the Connect To Target dialog box will create the session with the target.

First session verified

Select the target and then click the Properties button to investigate the session. The Properties dialog box for the target appears. Notice that there is one session with one connection. This session is with the new target portal group, which has a tag of one. This is the new mytp group that we created on the storage system.

Verify current sessions

Back on the storage system, we can see the sessions by using the iscsi session show command. In this example, session 58 is the session from the Windows initiator host. Verify the current connections by using the iscsi connection show -v command. In this example, session 58 has one connection, between e0b on the storage system and the Local Area Connection on the Windows initiator host.
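As typed on the 7-Mode CLI (the system prompt name is illustrative), the two verification commands are:

    system1> iscsi session show
    system1> iscsi connection show -v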

Verify connections

To verify the connections on the Windows initiator host, click the MCS button in the Properties dialog box. The Multiple Connected Sessions (MCS) dialog box shows the first connection between the Local Area Connection interface and e0b on the storage system.

Second connection

To create a second connection for this session, and thereby create a multiple connected session, click the Add button. The Add Connection dialog box appears. Click the Advanced button, and then select Local Area Connection 2's IP address for the initiator and the second interface (in this case, e0c) within the target portal group. Click OK to accept these changes. Finally, click Connect in the Add Connection dialog box to create a second connection within the session.

The second connection now appears in the MCS dialog box, where we can also set a load-balancing policy, such as Least Queue Depth or Round Robin. The connection count for the mytp target portal group now appears as 2 in the target's Properties dialog box.

Verify sessions and connections


Back on the storage system, we see the sessions with the iscsi session show command. Again, session 58 is the session from the Windows initiator host. We can then verify the current connections by using the iscsi connection show -v command. In this example, session 58 has two connections: one between e0b on the storage system and the Local Area Connection on the Windows initiator host, and another between e0c on the storage system and Local Area Connection 2 on the initiator host.

