Basic Concepts for Clustered Data ONTAP 8.3.1 December 2015 | SL10237 Version 1.2
Before You Begin

You must choose whether you want to complete this lab using OnCommand System Manager, NetApp's GUI management tool, or the Command Line Interface (CLI) for configuring the clustered Data ONTAP system in this lab.

This document contains two complete versions of the lab guide, one which utilizes System Manager for the lab's clustered Data ONTAP configuration activities, and another that utilizes the CLI. Both versions walk you through the same set of management tasks.

• If you want to use System Manager, begin here.
• If you want to use the CLI, begin here.


TABLE OF CONTENTS

1 GUI Introduction
2 Introduction
   2.1 Why clustered Data ONTAP?
   2.2 Lab Objectives
   2.3 Prerequisites
   2.4 Accessing the Command Line
3 Lab Environment
4 Lab Activities
   4.1 Clusters
      4.1.1 Connect to the Cluster with OnCommand System Manager
      4.1.2 Advanced Drive Partitioning
      4.1.3 Create a New Aggregate on Each Cluster Node
      4.1.4 Networks
   4.2 Create Storage for NFS and CIFS
      4.2.1 Create a Storage Virtual Machine for NAS
      4.2.2 Configure CIFS and NFS
      4.2.3 Create a Volume and Map It to the Namespace
      4.2.4 Connect to the SVM From a Windows Client
      4.2.5 Connect to the SVM From a Linux Client
      4.2.6 NFS Exporting Qtrees (Optional)
   4.3 Create Storage for iSCSI
      4.3.1 Create a Storage Virtual Machine for iSCSI
      4.3.2 Create, Map, and Mount a Windows LUN
      4.3.3 Create, Map, and Mount a Linux LUN
5 References
6 Version History


7 CLI Introduction
8 Introduction
   8.1 Why clustered Data ONTAP?
   8.2 Lab Objectives
   8.3 Prerequisites
   8.4 Accessing the Command Line
9 Lab Environment
10 Using the clustered Data ONTAP Command Line
11 Lab Activities
   11.1 Clusters
      11.1.1 Advanced Drive Partitioning
      11.1.2 Create a New Aggregate on Each Cluster Node
      11.1.3 Networks
   11.2 Create Storage for NFS and CIFS
      11.2.1 Create a Storage Virtual Machine for NAS
      11.2.2 Configure CIFS and NFS
      11.2.3 Create a Volume and Map It to the Namespace Using the CLI
      11.2.4 Connect to the SVM From a Windows Client
      11.2.5 Connect to the SVM From a Linux Client
      11.2.6 NFS Exporting Qtrees (Optional)
   11.3 Create Storage for iSCSI
      11.3.1 Create a Storage Virtual Machine for iSCSI
      11.3.2 Create, Map, and Mount a Windows LUN
      11.3.3 Create, Map, and Mount a Linux LUN
12 References
13 Version History


1 GUI Introduction

This begins the GUI version of the Basic Concepts for Clustered Data ONTAP 8.3.1 lab.


2 Introduction

This lab introduces the fundamentals of clustered Data ONTAP®. In it you will start with a pre-created 2-node cluster, and configure Windows 2012R2 and Red Hat Enterprise Linux 6.6 hosts to access storage on the cluster using CIFS, NFS, and iSCSI.

2.1 Why clustered Data ONTAP?

One of the key ways to understand the benefits of clustered Data ONTAP is to consider server virtualization. Before server virtualization, system administrators frequently deployed applications on dedicated servers in order to maximize application performance, and to avoid the instabilities often encountered when combining multiple applications on the same operating system instance. While this design approach was effective, it also had the following drawbacks:

• It did not scale well — adding new servers for every new application was expensive.
• It was inefficient — most servers were significantly under-utilized, and businesses were not extracting the full benefit of their hardware investment.
• It was inflexible — re-allocating standalone server resources for other purposes was time consuming, staff intensive, and highly disruptive.

Server virtualization directly addresses all three of these limitations by decoupling the application instance from the underlying physical hardware. Multiple virtual servers can share a pool of physical hardware, allowing businesses to consolidate their server workloads to a smaller set of more effectively utilized physical servers. Additionally, the ability to transparently migrate running virtual machines across a pool of physical servers reduces the impact of downtime due to scheduled maintenance activities.

Clustered Data ONTAP brings these same benefits, and many others, to storage systems. As with server virtualization, clustered Data ONTAP enables you to combine multiple physical storage controllers into a single logical cluster that can non-disruptively service multiple storage workload needs. With clustered Data ONTAP you can:

• Combine different types and models of NetApp storage controllers (known as nodes) into a shared physical storage resource pool (referred to as a cluster).
• Support multiple data access protocols (CIFS, NFS, Fibre Channel, iSCSI, FCoE) concurrently on the same storage cluster.
• Consolidate various storage workloads to the cluster. Each workload can be assigned its own Storage Virtual Machine (SVM), which is essentially a dedicated virtual storage controller, and its own data volumes, LUNs, CIFS shares, and NFS exports.
• Support multi-tenancy with delegated administration of SVMs. Tenants can be different companies, business units, or even individual application owners, each with their own distinct administrators whose admin rights are limited to just the assigned SVM.
• Use Quality of Service (QoS) capabilities to manage resource utilization between storage workloads.
• Non-disruptively migrate live data volumes and client connections from one cluster node to another.
• Non-disruptively scale the cluster out by adding nodes. Nodes can likewise be non-disruptively removed from the cluster, meaning that you can non-disruptively scale a cluster up and down during hardware refresh cycles.
• Leverage multiple nodes in the cluster to simultaneously service a given SVM's storage workloads. This means that businesses can scale out their SVMs beyond the bounds of a single physical node in response to growing storage and performance requirements, all non-disruptively.
• Apply software and firmware updates, and configuration changes without downtime.


2.2 Lab Objectives

This lab explores fundamental concepts of clustered Data ONTAP, and utilizes a modular design to allow you to focus on the topics that specifically interest you. The "Clusters" section is a prerequisite for the other sections. If you are interested in NAS functionality then complete the "Storage Virtual Machines for NFS and CIFS" section. If you are interested in SAN functionality, then complete the "Storage Virtual Machines for iSCSI" section, and at least one of its Windows or Linux subsections (you may do both if you so choose).

Here is a summary of the exercises in this lab, along with their Estimated Completion Times (ECT):

• Clusters (Required, ECT = 20 minutes).
   • Explore a cluster.
   • View Advanced Drive Partitioning.
   • Create a data aggregate.
   • Create a Subnet.
• Storage Virtual Machines for NFS and CIFS (Optional, ECT = 40 minutes)
   • Create a Storage Virtual Machine.
   • Create a volume on the Storage Virtual Machine.
   • Configure the Storage Virtual Machine for CIFS and NFS access.
   • Mount a CIFS share from the Storage Virtual Machine on a Windows client.
   • Mount a NFS volume from the Storage Virtual Machine on a Linux client.
• Storage Virtual Machines for iSCSI (Optional, ECT = 90 minutes including all optional subsections)
   • Create a Storage Virtual Machine.
   • Create a volume on the Storage Virtual Machine.
   • For Windows (Optional, ECT = 40 minutes)
      • Create a Windows LUN on the volume and map the LUN to an igroup.
      • Configure a Windows client for iSCSI and MPIO and mount the LUN.
   • For Linux (Optional, ECT = 40 minutes)
      • Create a Linux LUN on the volume and map the LUN to an igroup.
      • Configure a Linux client for iSCSI and multipath and mount the LUN.

This lab includes instructions for completing each of these tasks using either System Manager, NetApp's graphical administration interface, or the Data ONTAP command line. The end state of the lab produced by either method is exactly the same, so use whichever method you are the most comfortable with.

2.3 Prerequisites

This lab introduces clustered Data ONTAP, and makes no assumptions that the user has previous experience with Data ONTAP. The lab does assume some basic familiarity with storage system related concepts such as RAID, CIFS, NFS, LUNs, and DNS.

This lab includes steps for mapping shares and mounting LUNs on a Windows client. These steps assume that the lab user has a basic familiarity with Microsoft Windows.

This lab also includes steps for mounting NFS volumes and LUNs on a Linux client. All steps are performed from the Linux command line, and assume a basic working knowledge of the Linux command line. A basic working knowledge of a text editor such as vi may be useful, but is not required.


2.4 Accessing the Command Line

PuTTY is the terminal emulation program used in the lab to log into Linux hosts and storage controllers in order to run command line commands.

1. The launch icon for the PuTTY application is pinned to the taskbar on the Windows host JUMPHOST as shown in the following screenshot; just double-click on the icon to launch it.

Tip: If you already have a PuTTY session open and you want to start another (even to a different host), you will instead need to right-click the PuTTY icon and select PuTTY from the context menu.

Figure 2-1:

Once PuTTY launches you can connect to one of the hosts in the lab by following these steps. This example shows a user connecting to the Data ONTAP cluster named cluster1.

2. By default PuTTY should launch into the "Basic options for your PuTTY session" display as shown in the screenshot. If you accidentally navigate away from this view just click on the Session category item to return to this view.

3. Use the scrollbar in the Saved Sessions box to navigate down to the desired host and double-click it to open the connection. A terminal window will open and you will be prompted to log into the host. You can find the correct username and password for the host in the Lab Host Credentials table found in the "Lab Environment" section of this guide.


Figure 2-2:

If you are new to the clustered Data ONTAP CLI, the length of the commands can seem a little intimidating. However, the commands are actually quite easy to use if you remember the following 3 tips:

• Make liberal use of the Tab key while entering commands, as the clustered Data ONTAP command shell supports tab completion. If you hit the Tab key while entering a portion of a command word, the command shell will examine the context and try to complete the rest of the word for you. If there is insufficient context to make a single match, it will display a list of all the potential matches. Tab completion also usually works with command argument values, but there are some cases where there is simply not enough context for it to know what you want, in which case you will just need to type in the argument value.

• You can recall your previously entered commands by repeatedly pressing the up-arrow key, and you can then navigate up and down the list using the up-arrow and down-arrow keys. When you find a command you want to modify, you can use the left-arrow, right-arrow, and Delete keys to navigate around in a selected command to edit it.

• Entering a question mark character (?) causes the CLI to print contextual help information. You can use this character on a line by itself or while entering a command.

The clustered Data ONTAP command line supports a number of additional usability features that make the command line much easier to use. If you are interested in learning more about this topic then please refer to the "Hands-On Lab for Advanced Features of Clustered Data ONTAP 8.3.1" lab, which contains an entire section dedicated to this subject.
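For example, assuming the cluster1 prompt used in this lab, the following short sketch shows these features in action (help output omitted). The second command is typed in abbreviated form; the shell echoes the expanded command name, shown in parentheses, before printing its output:

    cluster1::> storage aggregate ?
    cluster1::> net int show
      (network interface show)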


3 Lab Environment

The following figure contains a diagram of the environment for this lab.

Figure 3-1:

All of the servers and storage controllers presented in this lab are virtual devices, and the networks that interconnect them are exclusive to your lab session. While we encourage you to follow the demonstration steps outlined in this lab guide, you are free to deviate from this guide and experiment with other Data ONTAP features that interest you. While the virtual storage controllers (vsims) used in this lab offer nearly all of the same functionality as physical storage controllers, they are not capable of providing the same performance as a physical controller, which is why these labs are not suitable for performance testing.

Table 1 provides a list of the servers and storage controller nodes in the lab, along with their IP addresses and credentials.

Table 1: Lab Host Credentials

Hostname     Description                        IP Address(es)  Username            Password
JUMPHOST     Windows 2012R2 Remote Access host  192.168.0.5     Demo\Administrator  Netapp1!
RHEL1        Red Hat 6.6 x64 Linux host         192.168.0.61    root                Netapp1!
RHEL2        Red Hat 6.6 x64 Linux host         192.168.0.62    root                Netapp1!
DC1          Active Directory Server            192.168.0.253   Demo\Administrator  Netapp1!
cluster1     Data ONTAP cluster                 192.168.0.101   admin               Netapp1!
cluster1-01  Data ONTAP cluster node            192.168.0.111   admin               Netapp1!
cluster1-02  Data ONTAP cluster node            192.168.0.112   admin               Netapp1!

Table 2 lists the NetApp software that is pre-installed on the various hosts in this lab.


Table 2: Preinstalled NetApp Software

Hostname      Description
JUMPHOST      Data ONTAP DSM v4.1 for Windows MPIO, Windows Unified Host Utility Kit v7.0.0, NetApp PowerShell Toolkit v3.2.1.68
RHEL1, RHEL2  Linux Unified Host Utilities Kit v7.0


4 Lab Activities

4.1 Clusters

Expected Completion Time: 20 Minutes

A cluster is a group of physical storage controllers, or nodes, that are joined together for the purpose of serving data to end users. The nodes in a cluster can pool their resources together so that the cluster can distribute its work across the member nodes. Communication and data transfer between member nodes (such as when a client accesses data on a node other than the one actually hosting the data) takes place over a 10Gb cluster-interconnect network to which all the nodes are connected, while management and client data traffic passes over separate management and data networks configured on the member nodes.

Clusters typically consist of one, or more, NetApp storage controller High Availability (HA) pairs. Both controllers in an HA pair actively host and serve data, but they are also capable of taking over their partner's responsibilities in the event of a service disruption by virtue of their redundant cable paths to each other's disk storage. Having multiple HA pairs in a cluster allows the cluster to scale out to handle greater workloads, and to support non-disruptive migrations of volumes and client connections to other nodes in the cluster resource pool. This means that cluster expansion and technology refreshes can take place while the cluster remains fully online, and serving data.

Since clusters are almost always comprised of one or more HA pairs, a cluster almost always contains an even number of controller nodes. There is one exception to this rule, the "single node cluster", which is a special cluster configuration that supports small storage deployments using a single physical controller head. The primary difference between single node and standard clusters, besides the number of nodes, is that a single node cluster does not have a cluster network. Single node clusters can be converted into traditional multi-node clusters, and at that point become subject to all the standard cluster requirements like the need to utilize an even number of nodes consisting of HA pairs. This lab does not contain a single node cluster, and so this lab guide does not discuss them further.

Data ONTAP 8.3 clusters that only serve NFS and CIFS can scale up to a maximum of 24 nodes, although the node limit can be lower depending on the model of FAS controller in use. Data ONTAP 8.3 clusters that also host iSCSI and FC can scale up to a maximum of 8 nodes.

This lab utilizes simulated NetApp storage controllers rather than physical FAS controllers. The simulated controller, also known as a VSIM, is a virtual machine that simulates the functionality of a physical controller without the need for dedicated controller hardware. The vsim is not designed for performance testing, but does offer much of the same functionality as a physical FAS controller, including the ability to generate I/O to disks. This makes the vsim a powerful tool to explore and experiment with Data ONTAP product features. The vsim is limited when a feature requires a specific physical capability that the vsim does not support. For example, vsims do not support Fibre Channel connections, which is why this lab uses iSCSI to demonstrate block storage functionality.

This lab starts with a pre-created, minimally configured cluster. The pre-created cluster already includes Data ONTAP licenses, the cluster's basic network configuration, and a pair of pre-configured HA controllers. In this next section you will create the aggregates that are used by the SVMs that you will create in later sections of the lab. You will also take a look at the new Advanced Drive Partitioning feature introduced in clustered Data ONTAP 8.3.
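If you later work through the CLI half of this guide, two standard clustered Data ONTAP 8.3 commands give a quick view of the cluster and HA pair state described above; they are shown here as a sketch, without their output:

    cluster1::> cluster show             (node health and eligibility for each cluster node)
    cluster1::> storage failover show    (HA partner and takeover readiness for each node)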

4.1.1 Connect to the Cluster with OnCommand System Manager

OnCommand System Manager is NetApp's browser-based management tool for configuring and managing NetApp storage systems and clusters. Prior to 8.3, System Manager was a separate application that you had to download and install on your client OS. In 8.3, System Manager has moved on-board the cluster, so you just point your web browser to the cluster management address. The on-board System Manager interface is essentially the same as that of System Manager 3.1, the version you install on a client.


On the Jumphost, the Windows 2012R2 Server desktop you see when you first connect to the lab, open the web browser of your choice. This lab guide uses Chrome, but you can use Firefox or Internet Explorer if you prefer one of those. All three browsers already have System Manager set as the browser home page.

1. Launch Chrome to open System Manager.

Figure 4-1:

The OnCommand System Manager Login window opens.

2. Enter the User Name as admin, and the Password as Netapp1!, and then click Sign In.

System Manager is now logged in to cluster1, and displays a summary page for the cluster. If you are unfamiliar with System Manager, here is a quick introduction to its layout.

Figure 4-2:

Use the tabs on the left side of the window to manage various aspects of the cluster.

3. The Cluster tab accesses configuration settings that apply to the cluster as a whole.
4. The Storage Virtual Machines tab allows you to manage individual Storage Virtual Machines (SVMs, also known as Vservers).


5. The Nodes tab contains configuration settings that are specific to individual controller nodes.

Please take a few moments to expand and browse these tabs to familiarize yourself with their contents.


Figure 4-3:

Note: As you use System Manager in this lab, you may encounter situations where buttons at the bottom of a System Manager pane are beyond the viewing size of the window, and no scroll bar exists to allow you to scroll down to see them. If this happens, then you have two options: either increase the size of the browser window (you might need to increase the resolution of your jumphost desktop to accommodate the larger browser window), or in the System Manager window, use the tab key to cycle through all the various fields and buttons, which eventually forces the window to scroll down to the non-visible items.

4.1.2 Advanced Drive Partitioning

Disks, whether Hard Disk Drives (HDD) or Solid State Disks (SSD), are the fundamental unit of physical storage in clustered Data ONTAP, and are tied to a specific cluster node by virtue of their physical connectivity (i.e., cabling) to a given controller head.

Data ONTAP manages disks in groups called aggregates. An aggregate defines the RAID properties for a group of disks that are all physically attached to the same node. A given disk can only be a member of a single aggregate.

By default each cluster node has one aggregate known as the root aggregate, which is a group of the node's local disks that host the node's Data ONTAP operating system. A node's root aggregate is automatically created during Data ONTAP installation in a minimal RAID-DP configuration. This means it is initially comprised of 3 disks (1 data, 2 parity), and has a name that begins with the string aggr0. For example, in this lab the root aggregate of the node cluster1-01 is named "aggr0_cluster1_01", and the root aggregate of the node cluster1-02 is named "aggr0_cluster1_02".

On higher end FAS systems that have many disks, the requirement to dedicate 3 disks for each controller's root aggregate is not a burden, but for entry level FAS systems that only have 24 or 12 disks this root aggregate disk overhead requirement significantly reduces the disks available for storing user data. To improve usable capacity, NetApp introduced Advanced Drive Partitioning in 8.3, which divides the Hard Disk Drives (HDDs) on nodes that have this feature enabled into two partitions: a small root partition, and a much larger data partition. Data ONTAP allocates the root partitions to the node root aggregate, and the data partitions to data aggregates. Each partition behaves like a virtual disk, so in terms of RAID, Data ONTAP treats these partitions just like physical disks when creating aggregates. The key benefit is that a much higher percentage of the node's overall disk capacity is now available to host user data.

Data ONTAP only supports HDD partitioning for FAS 22xx and FAS25xx controllers, and only for HDDs installed in their internal shelf on those models. Advanced Drive Partitioning can only be enabled at system installation time, and there is no way to convert an existing system to use Advanced Drive Partitioning other than to completely evacuate the affected HDDs, and re-install Data ONTAP.

All-Flash FAS (AFF) supports a variation of Advanced Drive Partitioning that utilizes SSDs instead of HDDs. The capability is available for entry-level, mid-range, and high-end AFF platforms. Data ONTAP 8.3 also introduces SSD partitioning for use with Flash Pools, but the details of that feature lie outside the scope of this lab.

In this section, you will use the GUI to determine if a cluster node is utilizing Advanced Drive Partitioning. System Manager provides a basic view into this information, but if you want to see more detail then you will want to use the CLI.
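For reference, that CLI detail comes from commands like the following; both exist in clustered Data ONTAP 8.3, and their output (omitted here) breaks each partitioned disk down into its root and data partitions:

    cluster1::> storage disk show -container-type shared     (lists the partitioned disks)
    cluster1::> storage aggregate show-spare-disks           (usable size of each spare root
                                                              and data partition)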

1. In System Manager's left pane, navigate to the Cluster tab.
2. Expand cluster1.
3. Expand Storage.
4. Click Disks.
5. In the main window, click on the Summary tab.
6. Scroll the main window down to the Spare Disks section, where you will see that each cluster node has 12 spare disks with a per-disk size of 26.88 GB. These spares represent the data partitions of the physical disks that belong to each node.


Figure 4-4:

If you scroll back up to look at the "Assigned HDDs" section of the window, you will see that there are no entries listed for the root partitions of the disks. Under daily operation, you will be primarily concerned with data partitions rather than root partitions, and so this view focuses on just showing information about the data partitions. To see information about the physical disks attached to your system you will need to select the Inventory tab.

7. Click on the Inventory tab at the top of the Disks window.


Figure 4-5:

System Manager's main window now shows a list of the physical disks available across all the nodes in the cluster, which nodes own those disks, and so on. If you look at the Container Type column you see that the disks in your lab all show a value of "shared"; this value indicates that the physical disk is partitioned. For disks that are not partitioned you would typically see values like "spare", "data", "parity", and "dparity".

For a FAS controller that will be using Advanced Drive Partitioning, Data ONTAP automatically determines the size of the root and data disk partitions at system installation time based on the quantity and size of the available disks assigned to each node. In this lab each cluster node has twelve 32 GB hard disks, and you can see how your node's root aggregates are consuming the root partitions on those disks by going to the Aggregates page in System Manager.

8. On the Cluster tab, navigate to cluster1 > Storage > Aggregates.
9. In the "Aggregates" list, select aggr0_cluster1_01, which is the root aggregate for cluster node cluster1-01. Notice that the total size of this aggregate is a little over 10 GB. The Available and Used space shown for this aggregate in your lab may vary from what is shown in this screenshot, depending on the quantity and size of the snapshots that exist on your node's root volume.
10. Click the Disk Layout tab at the bottom of the window. The lower pane of System Manager now displays a list of the disks that are members of this aggregate. Notice that the usable space is 1.52 GB, which is the size of the root partition on the disk. The Physical Space column displays the total capacity of the whole disk that is available to clustered Data ONTAP, including the space allocated to both the disk's root and data partitions.


Figure 4-6:

4.1.3 Create a New Aggregate on Each Cluster Node

The only aggregates that exist on a newly created cluster are the node root aggregates. The root aggregate should not be used to host user data, so in this section you will be creating a new aggregate on each of the nodes in cluster1 so they can host the storage virtual machines, volumes, and LUNs that you will be creating later in this lab.

A node can host multiple aggregates depending on the data sizing, performance, and isolation needs of the storage workloads that it will be hosting. When you create a Storage Virtual Machine (SVM) you assign it to use one or more specific aggregates to host the SVM's volumes. Multiple SVMs can be assigned to use the same aggregate, which offers greater flexibility in managing storage space, whereas dedicating an aggregate to just a single SVM provides greater workload isolation.

In this lab activity, you create a single user data aggregate on each node in the cluster.
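The CLI version of this lab (section 11.1.2) performs the same task from the clustershell. The following is a minimal sketch using this lab's node and aggregate names; use tab completion or ? to confirm the parameter names on your Data ONTAP release:

    cluster1::> storage aggregate create -aggregate aggr1_cluster1_01 -node cluster1-01 -diskcount 5
    cluster1::> storage aggregate create -aggregate aggr1_cluster1_02 -node cluster1-02 -diskcount 5
    cluster1::> storage aggregate show        (verify that both new aggregates are online)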

You can create aggregates from either the Cluster tab, or the Nodes tab. For this exercise use the Cluster tab as follows:

1. Select the Cluster tab.

Tip: To avoid confusion, always double-check to make sure that you are working in the correct left pane tab context when performing activities in System Manager!

2. Go to cluster1 > Storage > Aggregates.
3. Click on the Create button to launch the Create Aggregate wizard.


Figure 4-7:

The Create Aggregate wizard window opens.

4. Specify the "Name" of the aggregate as aggr1_cluster1_01.
5. Click Browse.


Figure 4-8:


The “Select Disk Type” window opens.

6. Select the Disk Type entry for the node cluster1-01.
7. Click OK.

Figure 4-9:

The "Select Disk Type" window closes, and focus returns to the "Create Aggregate" window.

8. The "Disk Type" should now show as VMDISK.
9. Set the "Number of Disks" to 5.
10. Click Create to create the new aggregate and to close the wizard.


Figure 4-10:

The "Create Aggregate" window closes, and focus returns to the Aggregates view in System Manager. The newly created aggregate should now be visible in the list of aggregates.

11. Select the entry for the aggregate aggr1_cluster1_01 if it is not already selected.
12. Click the Details tab to view more detailed information about this aggregate's configuration.


13. Notice that aggr1_cluster1_01 is a 64-bit aggregate. In earlier versions of clustered Data ONTAP 8, an aggregate could be either 32-bit or 64-bit, but Data ONTAP 8.3 and later only supports 64-bit aggregates. If you have an existing clustered Data ONTAP 8.x system that has 32-bit aggregates and you plan to upgrade that cluster to 8.3+, you must convert those 32-bit aggregates to 64-bit aggregates prior to the upgrade. The procedure for that migration is not covered in this lab, so if you need further details then please refer to the clustered Data ONTAP documentation.


Figure 4-11:

Now repeat the process to create a new aggregate on the node "cluster1-02".

14. Click the Create button again.


Figure 4-12:

The “Create Aggregate” window opens.

15. Specify the Aggregate's "Name" as aggr1_cluster1_02.
16. Click Browse.


Figure 4-13:

The “Select Disk Type” window opens.

17. Select the Disk Type entry for the node cluster1-02.
18. Click OK.

Figure 4-14:

The “Select Disk Type” window closes, and focus returns to the “Create Aggregate” window.

19. The "Disk Type" should now show as VMDISK.
20. Set the Number of Disks to 5.
21. Click Create to create the new aggregate.


Figure 4-15:

The “Create Aggregate” window closes, and focus returns to the “Aggregates” view in System Manager.

22. The new aggregate, aggr1_cluster1_02, now appears in the cluster's aggregate list.

Figure 4-16:


4.1.4 Networks

This section discusses the network components that Clustered Data ONTAP provides to manage your cluster.

Ports are the physical Ethernet and Fibre Channel connections on each node, the interface groups (ifgrps) you can create to aggregate those connections, and the VLANs you can use to subdivide them.

A logical interface (LIF) is essentially an IP address that is associated with a port, and has a number of associated characteristics such as an assigned home node, an assigned physical home port, a list of physical ports it can fail over to, an assigned SVM, a role, a routing group, and so on. A given LIF can only be assigned to a single SVM, and since LIFs are mapped to physical network ports on cluster nodes this means that an SVM runs, in part, on all nodes that are hosting its LIFs.

Routing tables in clustered Data ONTAP are defined for each Storage Virtual Machine. Since each SVM has its own routing table, changes to one SVM's routing table do not affect any other SVM's routing table.

IPspaces are new in Data ONTAP 8.3, and allow you to configure a Data ONTAP cluster to logically separate one IP network from another, even if those two networks are using the same IP address range. IPspaces are a multi-tenancy feature that allows storage service providers to share a cluster between different companies while still separating storage traffic for privacy and security. Every cluster includes a default IPspace to which Data ONTAP automatically assigns new SVMs, and that default IPspace is probably sufficient for most NetApp customers who deploy a cluster within a single company or organization that uses a non-conflicting IP address range.

Broadcast Domains are also new in Data ONTAP 8.3, and are collections of ports that all have access to the same layer 2 networks, both physical and virtual (i.e., VLANs). Every IPspace has its own set of Broadcast Domains, and Data ONTAP provides a default broadcast domain to go along with the default IPspace. Broadcast domains are used by Data ONTAP to determine what ports an SVM can use for its LIFs.

Subnets in Data ONTAP 8.3 are a convenience feature intended to make LIF creation and management easier for Data ONTAP administrators. A subnet is a pool of IP addresses that you can specify by name when creating a LIF. Data ONTAP will automatically assign an available IP address from the pool to the LIF, along with a subnet mask and a gateway. A subnet is scoped to a specific broadcast domain, so all the subnet's addresses belong to the same layer 3 network. Data ONTAP manages the pool automatically as you create or delete LIFs, and if you manually configure a LIF with an address from the pool, it will detect that the address is in use and mark it as such in the pool.

DNS Zones allow an SVM to manage DNS name resolution for its own LIFs, and since multiple LIFs can share the same DNS name, this allows the SVM to load balance traffic by IP address across the LIFs. To use DNS Zones you must configure your DNS server to delegate DNS authority for the subdomain to the SVM.

4.1.4.1 Create Subnets

In this section of the lab, you will create a subnet that you will leverage in later sections to provision SVMs and LIFs. You will not create IPspaces or Broadcast Domains, as the system defaults are sufficient for this lab.
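For comparison, the CLI version of this lab creates the same subnet with a single command. This sketch uses the NAS-section address range from the steps below; verify the parameter names with tab completion on your system:

    cluster1::> network subnet create -subnet-name Demo -broadcast-domain Default -subnet 192.168.0.0/24 -gateway 192.168.0.1 -ip-ranges 192.168.0.131-192.168.0.139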

1. In the left pane of System Manager, select the Cluster tab.
2. In the left pane, navigate to cluster1 > Configuration > Network.
3. In the right pane, select the Broadcast Domains tab.
4. Select the Default broadcast domain.


Figure 4-17:

Review the Port Details section at the bottom of the Network pane and note that the e0c – e0g ports on both cluster nodes are all part of this broadcast domain. These are the network ports that you will use in this lab.

Now create a new Subnet for this lab.

5. Select the Subnets tab, and notice that there are no subnets listed in the pane. Unlike Broadcast Domains and IPspaces, Data ONTAP does not provide a default Subnet.

6. Click the Create button.


Figure 4-18:

The “Create Subnet” window opens.

Set the fields in the window as follows.

7. "Subnet Name": Demo.
8. "Subnet IP/Subnet mask": 192.168.0.0/24.
9. The values you enter in the "IP address" field depend on what sections of the lab guide you intend to complete.

   Attention: It is important that you choose the right values here so that the values in your lab will correctly match up with the values used in this lab guide.

   • If you plan to complete just the NAS section or both the NAS and SAN sections then enter 192.168.0.131-192.168.0.139.
   • If you plan to complete just the SAN section then enter 192.168.0.133-192.168.0.139.

10. "Gateway": 192.168.0.1.
11. Click the Browse button.


Figure 4-19:

The “Select Broadcast Domain” window opens.

12. Select the Default entry from the list.13. Click OK.


Figure 4-20:

The "Select Broadcast Domain" window closes, and focus returns to the "Create Subnet" window.

14. The values in your Create Subnet window should now match those shown in the following screenshot, the only possible exception being the IP Addresses field, whose value may differ depending on what value range you chose to enter to match your plans for the lab.
15. If it's not already displayed, click on the Show ports on this domain link under the Broadcast Domain textbox to see the list of ports that this broadcast domain includes.
16. Click Create.


Figure 4-21:

The Create Subnet window closes, and focus returns to the Subnets tab in System Manager.

17. Notice that the main pane of the Subnets tab now includes an entry for your newly created subnet, and that the lower portion of the pane includes metrics tracking the consumption of the IP addresses that belong to this subnet.


Figure 4-22:

Feel free to explore the contents of the other available tabs on the Network page. Here is a brief summary of the information available on those tabs; their CLI counterparts are shown after the list.

• The Ethernet Ports tab displays the physical NICs on your controller, which will be a superset of the NICs that you saw previously listed as belonging to the default broadcast domain. The other NICs you will see listed on the Ethernet Ports tab include the node's cluster network NICs.
• The Network Interfaces tab displays a list of all of the LIFs on your cluster.
• The FC/FCoE Adapters tab lists all the WWPNs for all the controllers' NICs in the event they will be used for iSCSI or FCoE connections. The simulated NetApp controllers you are using in this lab do not include FC adapters, and this lab does not make use of FCoE.
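The CLI counterparts to these tabs are the network show commands, for example (output omitted):

    cluster1::> network port show          (physical ports, interface groups, and VLANs)
    cluster1::> network interface show     (all of the LIFs in the cluster)
    cluster1::> network fcp adapter show   (FC/FCoE adapters, where present)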

4.2 Create Storage for NFS and CIFS

Expected Completion Time: 40 Minutes

If you are only interested in SAN protocols then you do not need to complete this section. However, we recommend that you review the conceptual information found here, and at the beginning of each of this section's subsections, before you advance to the SAN section as most of this conceptual material will not be repeated there.

Storage Virtual Machines (SVMs), previously known as Vservers, are the logical storage servers that operate within a cluster and serve data out to storage clients. A single cluster can host hundreds of SVMs, with each SVM managing its own set of volumes (FlexVols), Logical Network Interfaces (LIFs), storage access protocols (e.g., NFS/CIFS/iSCSI/FC/FCoE), and for NAS clients, its own namespace.


The ability to support many SVMs in a single cluster is a key feature in clustered Data ONTAP, and customers are encouraged to actively embrace this feature in order to take full advantage of a cluster's capabilities. We recommend against any organization starting out on a deployment intended to scale with only a single SVM.

You explicitly configure which storage protocols you want a given SVM to support at the time you create that SVM. You can later add or remove protocols as desired. A single SVM can host any combination of the supported protocols.

An SVM's assigned aggregates and LIFs determine which cluster nodes handle processing for that SVM. As you saw earlier, an aggregate is directly connected to the specific node hosting its disks, which means that an SVM runs in part on any nodes whose aggregates are hosting volumes for the SVM. An SVM also has a direct relationship to any nodes that are hosting its LIFs. LIFs are essentially an IP address with a number of associated characteristics such as an assigned home node, an assigned physical home port, a list of physical ports it can fail over to, an assigned SVM, a role, a routing group, and so on. You can only assign a given LIF to a single SVM, and since LIFs map to physical network ports on cluster nodes, this means that an SVM runs in part on all nodes that are hosting its LIFs.

When you configure an SVM with multiple data LIFs, clients can use any of those LIFs to access volumes hosted by the SVM. Which specific LIF IP address a client will use in a given instance, and by extension which LIF, is a function of name resolution, the mapping of a hostname to an IP address. CIFS Servers have responsibility under NetBIOS for resolving requests for their hostnames received from clients, and in so doing can perform some load balancing by responding to different clients with different LIF addresses, but this distribution is not sophisticated and requires external NetBIOS name servers in order to deal with clients that are not on the local network. NFS Servers do not handle name resolution on their own.

DNS provides basic name resolution load balancing by advertising multiple IP addresses for the same hostname. DNS is supported by both NFS and CIFS clients, and works equally well with clients on local area and wide area networks. Since DNS is an external service that resides outside of Data ONTAP, this architecture creates the potential for service disruptions if the DNS server is advertising IP addresses for LIFs that are temporarily offline. To compensate for this condition you can configure DNS servers to delegate the name resolution responsibility for the SVM's hostname records to the SVM itself, so that it can directly respond to name resolution requests involving its LIFs. This allows the SVM to consider LIF availability and LIF utilization levels when deciding what LIF address to return in response to a DNS name resolution request.

LIFs that map to physical network ports that reside on the same node as a volume's containing aggregate offer the most efficient client access path to the volume's data. However, clients can also access volume data through LIFs bound to physical network ports on other nodes in the cluster; in these cases clustered Data ONTAP uses the high speed cluster network to bridge communication between the node hosting the LIF and the node hosting the volume. NetApp best practice is to create at least one NAS LIF for a given SVM on each cluster node that has an aggregate that is hosting volumes for that SVM. If you desire additional resiliency then you can also create a NAS LIF on nodes not hosting aggregates for the SVM.

A NAS LIF (a LIF supporting only NFS and/or CIFS) can automatically fail over from one cluster node to another in the event of a component failure. Any existing connections to that LIF from NFS and SMB 2.0 (and later) clients can non-disruptively tolerate the LIF failover event. When a LIF failover happens the NAS LIF migrates to a different physical NIC, potentially to a NIC on a different node in the cluster, and continues servicing network requests from that new node/port. Throughout this operation the NAS LIF maintains its IP address. Clients connected to the LIF may notice a brief delay while the failover is in progress, but as soon as it completes the clients resume any in-process NAS operations without any loss of data.

The number of nodes in the cluster determines the total number of SVMs that can run in the cluster. Each storage controller node can host a maximum of 125 SVMs, so you can calculate the cluster's effective SVM limit by multiplying the number of nodes by 125. There is no limit on the number of LIFs that an SVM can host, but there is a limit on the number of LIFs that can run on a given node. That limit is 256 LIFs per node, but if the node is part of an HA pair configured for failover then the limit is half that value, 128 LIFs per node (so that a node can also accommodate its HA partner's LIFs in the event of a failover event).

Each SVM has its own NAS namespace, a logical grouping of the SVM's CIFS and NFS volumes into a single logical filesystem view. Clients can access the entire namespace by mounting a single share or export at the top of the namespace tree, meaning that SVM administrators can centrally maintain and present a consistent view of the SVM's data to all clients rather than having to reproduce that view structure on each individual client. As an administrator maps and unmaps volumes from the namespace, those volumes instantly become visible or disappear from clients that have mounted CIFS and NFS volumes higher in the SVM's namespace. Administrators can also create NFS exports at individual junction points within the namespace, and can create CIFS shares at any directory path in the namespace.

4.2.1 Create a Storage Virtual Machine for NAS

In this section you will create a new SVM named svm1 on the cluster and will configure it to serve out a volume over NFS and CIFS. You will be configuring two NAS data LIFs on the SVM, one per node in the cluster.

Start by creating the storage virtual machine.

1. In System Manager, open the Storage Virtual Machines tab.
2. Select cluster1.
3. Click Create to launch the Storage Virtual Machine Setup wizard.


Figure 4-23:

The “Storage Virtual Machine (SVM) Setup” window opens.

4. Set the SVM Name: value to svm1.
5. In the Data Protocols: area, check the CIFS and NFS checkboxes.

Tip: The list of available Data Protocols is dependent upon what protocols are licensed on your cluster; if a given protocol isn’t listed, it is because you are not licensed for it. (In this lab all the protocols are licensed.)

6. Set the Security Style: value to NTFS.
7. Set the Root Aggregate: listbox to aggr1_cluster1_01.
8. Click Submit & Continue.



Figure 4-24:

The "Storage Virtual Machine (SVM) Setup" window opens.

9. The Subnet setting defaults to Demo, since this is the only subnet definition that exists in your lab.
10. Click Browse next to the Port textbox.


Figure 4-25:

The “Select Network Port or Adapter” window opens.

11. Expand the list of ports for the node cluster1-01, and select port e0c.
12. Click OK.


Figure 4-26:

The “Select Network Port or Adapter” window closes, and focus returns to the protocols portion of the Storage Virtual Machine (SVM) Setup wizard.

13. The Port textbox should now be populated with the node and port value you just selected.
14. Set the CIFS Server Name: value to svm1.
15. Set the Active Directory: value to demo.netapp.com.
16. Set the Administrator Name: value to Administrator.
17. Set the Password: value to Netapp1!.
18. The optional “Provision a volume for CIFS storage” textboxes offer a quick way to provision a simple volume and CIFS share at SVM creation time, with the caveat that this share will not be multi-protocol. Since in most cases when you create a share it will be for an existing SVM, rather than create a share here this lab guide will show that more full-featured procedure in the following sections.


Figure 4-27:

Scroll down in the window to see the NIS Configuration section.

19. In the NIS section, leave the “Domain Name” and “IP Addresses” fields blank. In an NFS environment where you are running NIS you would want to configure these values, but this lab environment does not utilize NIS, and populating these fields would create a name resolution problem later in the lab.

20. As was the case with CIFS, the “Provision a volume for NFS storage” textboxes offer a quick way to provision a volume and create an NFS export for that volume. Once again, the volume will not be inherently multi-protocol, and will in fact be a completely separate volume from the CIFS share volume that you could have selected to create in the CIFS section. This lab will illustrate the more full-featured volume creation process later in the guide.

21. Click Submit & Continue to advance the wizard to the next screen.


Figure 4-28:

The SVM Administration section of the Storage Virtual Machine (SVM) Setup wizard opens. This window allows you to set up an administrative account for this specific SVM so you can delegate administrative tasks to an SVM-specific administrator without giving that administrator cluster-wide privileges. As the comments in this wizard window indicate, this account must also exist for use with SnapDrive. Although you will not be using SnapDrive in this lab, it is a good idea to create this account, and you will do so here.

22. The “User Name” is pre-populated with the value vsadmin.
23. Set the “Password” and “Confirm Password” textboxes to netapp123.
24. When finished, click Submit & Continue.


Figure 4-29:

The “New Storage Virtual Machine (SVM) Summary” window opens.

25. Review the settings for the new SVM, taking special note of the IP Address listed in the “CIFS/NFS Configuration” section. Data ONTAP drew this address from the Subnets pool that you created earlier in the lab. Make sure you use the scrollbar on the right to see all the available information.

26. When finished, click OK.


Figure 4-30:

The window closes, and focus returns to the System Manager window, which now displays a summary page for your newly created svm1 SVM.

27. Notice that in the main pane of the window the CIFS protocol is listed with a green background. This indicates that a CIFS server is running for this SVM.

28. Notice, too, that the NFS protocol is listed with a green background, which indicates that there is a running NFS server for this SVM.


Figure 4-31:

The New Storage Virtual Machine Setup Wizard only provisions a single LIF when creating a new SVM. NetApp best practice is to configure a LIF on both nodes in an HA pair so that a client can access the SVM’s shares through either node. To comply with that best practice you will now create a second LIF hosted on the other node in the cluster.

System Manager for clustered Data ONTAP 8.2 (and earlier) presented LIF management under the Storage Virtual Machines tab, only offering visibility to LIFs for a single SVM at a time. In clustered Data ONTAP 8.3, that functionality has moved to the Cluster tab, where you now have a single view for managing all the LIFs in your cluster.

29. Select the Cluster tab in the left navigation pane of System Manager.
30. Navigate to cluster1 > Configuration > Network.
31. Select the Network Interfaces tab in the main Network pane.
32. Select the only LIF listed for the svm1 SVM. Notice that this LIF is named “svm1_cifs_nfs_lif1”; follow that same naming convention for the new LIF.
33. Click Create to launch the Network Interface Create Wizard.


Figure 4-32:

The “Create Network Interface” window opens.

34. Set the Name: value to svm1_cifs_nfs_lif2.
35. Set the Interface Role: radio button to Serves Data.
36. Set the SVM: dropdown to svm1.
37. In the Protocol Access: area, check the CIFS and NFS checkboxes.
38. In the Management Access: area, check the Enable Management Access checkbox.
39. Set the Subnet: dropdown to Demo.
40. Check the Auto-select the IP address from this subnet checkbox.
41. Expand the Port Selection listbox, and select the entry for cluster1-02 port e0c.
42. Click Create to continue.


Figure 4-33:

The “Create Network Interface” window closes, and focus returns to the Network pane in System Manager.

43. Notice that a new entry for the svm1_cifs_nfs_lif2 LIF is now present under the Network Interfaces tab. Select this entry and review the LIF’s properties in the lower pane.


Figure 4-34:

Lastly, you need to configure DNS delegation for the SVM so that Linux and Windows clients can intelligently utilize all of svm1’s configured NAS LIFs. To achieve this objective, the DNS server must delegate to the cluster the responsibility for the DNS zone corresponding to the SVM’s hostname, which in this case will be “svm1.demo.netapp.com”. The lab’s DNS server is already configured to delegate this responsibility, but you must also configure the SVM to accept it. System Manager does not currently include the capability to configure DNS delegation, so you will need to use the CLI for this purpose.

44. Open a PuTTY connection to cluster1 following the instructions in the “Accessing the Command Line” section at the beginning of this guide. Log in using the username "admin" and the password "Netapp1!", then enter the following commands.

cluster1::> network interface modify -vserver svm1 -lif svm1_cifs_nfs_lif1 -dns-zone svm1.demo.netapp.com
cluster1::> network interface modify -vserver svm1 -lif svm1_cifs_nfs_lif2 -dns-zone svm1.demo.netapp.com
cluster1::> network interface show -vserver svm1 -fields dns-zone,address
vserver lif                address       dns-zone
------- ------------------ ------------- --------------------
svm1    svm1_cifs_nfs_lif1 192.168.0.131 svm1.demo.netapp.com
svm1    svm1_cifs_nfs_lif2 192.168.0.132 svm1.demo.netapp.com
2 entries were displayed.
cluster1::>

45. Validate that delegation is working correctly by opening PowerShell on the jumphost and using the nslookup command as shown in the following CLI output. If the nslookup command returns different IP addresses on different lookup attempts, then delegation is working correctly. If the nslookup command returns a “Non-existent domain” error, then delegation is not working correctly, and you will need to review the Data ONTAP commands you entered for any errors. Also notice in the following CLI output that different executions of the nslookup command return different addresses, demonstrating that DNS load balancing is working correctly.

Tip: You may need to run the nslookup command more than two times before you see it report different addresses for the hostname.

Windows PowerShell
Copyright (C) 2013 Microsoft Corporation. All rights reserved.

PS C:\Users\Administrator.DEMO> nslookup svm1.demo.netapp.com
Server:  dc1.demo.netapp.com
Address:  192.168.0.253

Non-authoritative answer:
Name:    svm1.demo.netapp.com
Address:  192.168.0.132

PS C:\Users\Administrator.DEMO> nslookup svm1.demo.netapp.com
Server:  dc1.demo.netapp.com
Address:  192.168.0.253

Non-authoritative answer:
Name:    svm1.demo.netapp.com
Address:  192.168.0.131

PS C:\Users\Administrator.DEMO>

4.2.2 Configure CIFS and NFS

Clustered Data ONTAP configures CIFS and NFS on a per-SVM basis. When you created the “svm1” SVM in the previous section, you set up and enabled CIFS and NFS for that SVM. However, it is important to understand that clients cannot yet access the SVM using CIFS and NFS. That is partially because you have not yet created any volumes on the SVM, but also because you have not told the SVM what you want to share, and who you want to share it with.

Each SVM has its own namespace. A namespace is a logical grouping of a single SVM’s volumes into a directory hierarchy that is private to just that SVM, with the root of that hierarchy hosted on the SVM’s root volume (svm1_root in the case of the svm1 SVM), and it is through this namespace that the SVM shares data to CIFS and NFS clients. The SVM’s other volumes are junctioned (i.e., mounted) within that root volume or within other volumes that are already junctioned into the namespace. This hierarchy presents NAS clients with a unified, centrally maintained view of the storage encompassed by the namespace, regardless of where those junctioned volumes physically reside in the cluster. CIFS and NFS clients cannot access a volume that has not been junctioned into the namespace.
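If you would like to see how volumes are stitched into a namespace, every volume’s junction path is visible from the clustershell. The following is a minimal sketch with illustrative output, assuming the svm1 SVM and the engineering volume that you will create later in this lab:

cluster1::> volume show -vserver svm1 -fields junction-path
vserver volume      junction-path
------- ----------- -------------
svm1    svm1_root   /
svm1    engineering /engineering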

CIFS and NFS clients can access the entire namespace by mounting a single NFS export or CIFS share declared at the top of the namespace. While this is a very powerful capability, there is no requirement to make the whole namespace accessible. You can create CIFS shares at any directory level in the namespace, and you can create different NFS export rules at junction boundaries for individual volumes and for individual qtrees within a junctioned volume.

Clustered Data ONTAP does not utilize an /etc/exports file to export NFS volumes; instead it uses a policy model that dictates the NFS client access rules for the associated volumes. An NFS-enabled SVM implicitly exports the root of its namespace and automatically associates that export with the SVM’s default export policy. But that default policy is initially empty, and until it is populated with access rules no NFS clients will be able to access the namespace. The SVM’s default export policy applies to the root volume and also to any volumes that an administrator junctions into the namespace, but an administrator can optionally create additional export policies in order to implement different access rules within the namespace. You can apply export policies to a volume as a whole and to individual qtrees within a volume, but a given volume or qtree can only have one associated export policy. While you cannot create NFS exports at any other directory level in the namespace, NFS clients can mount from any level in the namespace by leveraging the namespace’s root export.

In this section of the lab, you are going to configure a default export policy for your SVM so that any volumes you junction into its namespace will automatically pick up the same NFS export rules. You will also create a single CIFS share at the top of the namespace so that all the volumes you junction into that namespace are accessible through that one share. Finally, since your SVM will be sharing the same data over NFS and CIFS, you will be setting up name mapping between UNIX and Windows user accounts to facilitate smooth multiprotocol access to the volumes and files in the namespace.


When you create an SVM, Data ONTAP automatically creates a root volume to hold that SVM’s namespace. An SVM always has a root volume, whether or not it is configured to support NAS protocols. Before you configure NFS and CIFS for your newly created SVM, take a quick look at the SVM’s root volume:

1. Select the Storage Virtual Machines tab.
2. Navigate to cluster1 > svm1 > Storage > Volumes.
3. Note the existence of the “svm1_root” volume, which hosts the namespace for the svm1 SVM. The root volume is not large; only 20 MB in this example. Root volumes are small because they are only intended to house the junctions that organize the SVM’s volumes; all of the files hosted on the SVM should reside inside the volumes that are junctioned into the namespace, rather than directly in the SVM’s root volume.


Figure 4-35:

Confirm that CIFS and NFS are running for your SVM using System Manager. Check CIFS first.

4. Under the Storage Virtual Machines tab, navigate to cluster1 > svm1 > Configuration > Protocols > CIFS.

5. In the CIFS pane, select the Configuration tab.
6. Note that the Service Status field is listed as “Started”, which indicates that there is a running CIFS server for this SVM. If CIFS was not already running for this SVM, you could configure and start it using the Setup button found under the Configuration tab.


Figure 4-36:

Now check that NFS is enabled for your SVM.

7. Select NFS under the Protocols section.
8. Notice that the NFS Server Status field shows as “Enabled”. The Enable and Disable buttons on the menu bar can be used to place the NFS server online and offline if needed. Leave NFS enabled for this lab.

9. NFS version 3 is enabled, but versions 4 and 4.1 are not. If you wanted to change this you could use the Edit button to do so, but for this lab NFS version 3 is sufficient.


Figure 4-37:

At this point, you have confirmed that your SVM has a running CIFS server and a running NFS server. However, you have not yet configured those two servers to actually serve any data. The first step in that process is to configure the SVM’s default NFS export policy.

When you create an SVM with NFS, clustered Data ONTAP automatically creates a default NFS export policy for the SVM that contains an empty list of access rules. Without any access rules that policy will not allow clients to access any exports, so you need to add a rule to the default policy so that the volumes you will create on this SVM later in this lab will be automatically accessible to NFS clients. If any of this seems a bit confusing, do not worry; the concept should become clearer as you work through this section and the next one.

10. In System Manager, select the Storage Virtual Machines tab and navigate to cluster1 > svm1 > Policies > Export Policies.
11. In the Export Policies window, select the default policy.
12. Click the Add button in the bottom portion of the Export Policies pane.


Figure 4-38:

The “Create Export Rule” window opens. Using this dialog you can create any number of rules that provide fine-grained access control for clients and specify their application order. For this lab, you are going to create a single rule that grants unfettered access to any host on the lab’s private network.

13. Set the Client Specification: value to 0.0.0.0/0.
14. Set the Rule Index: number to 1.
15. In the Access Protocols: area, check the CIFS and NFS checkboxes. The default values in the other fields in the window are acceptable.
16. When you finish entering these values, click OK.


Figure 4-39:

The “Create Export Rule” window closes and focus returns to the “Export Policies” pane in System Manager.

17. The new access rule you created now shows up in the bottom portion of the pane.


Figure 4-40:

With this updated default export policy in place, NFS clients will now be able to mount the root of the svm1 SVM’s namespace, and use that mount to access any volumes that you junction into the namespace.
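For reference, the rule you just created through System Manager is equivalent to a single clustershell command; this is a sketch for comparison, not an additional step you need to perform in this lab:

cluster1::> vserver export-policy rule create -vserver svm1 -policyname default -clientmatch 0.0.0.0/0 -rorule any -rwrule any -protocol cifs,nfs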

Now create a CIFS share for the svm1 SVM. You are going to create a single share named “nsroot” at the root of the SVM’s namespace.

18. Select the Storage Virtual Machines tab and navigate to cluster1 > svm1 > Storage > Shares.
19. In the “Shares” pane, select Create Share.


Figure 4-41:

The “Create Share” dialog box opens.

20. Set the Folder to Share: value to / (if you opt to use the Browse button instead, make sure you select the root folder).
21. Set the Share Name: value to nsroot.
22. Click the Create button.


Figure 4-42:

The “Create Share” window closes, and focus returns to the “Shares” pane in System Manager. The new “nsroot” share now shows up in the Shares pane, but you are not yet finished.


23. Select nsroot from the list of shares.
24. Click the Edit button to edit the share’s settings.


Figure 4-43:

The “Edit nsroot Settings” window opens.

25. Select the Permissions tab. When you create a share, permissions are set by default to grant “Everyone” Full Control. You can set more detailed permissions on the share from this tab, but this configuration is sufficient for the exercises in this lab.


Figure 4-44:

There are other settings to check in this window, so do not close it yet.

26. Select the Options tab at the top of the window and make sure that the Enable as read-only, Enable Oplocks, Browsable, and Notify Change checkboxes are all checked. All other checkboxes should be cleared.

27. If you had to change any of the settings listed on the previous screen then the Save and Close button will become active, and you should click it. Otherwise, click the Cancel button.


Figure 4-45:

The “Edit nsroot Settings” window closes, and focus returns to the “Shares” pane in System Manager. Setup of the “\\svm1\nsroot” CIFS share is now complete.

For this lab you have created just one share at the root of your namespace, which allows users to access any volume mounted in the namespace through that share. The advantage of this approach is that it reduces the number of mapped drives that you have to manage on your clients; any changes you make to the namespace, such as adding/removing volumes or changing junction locations, become instantly visible to your clients. If you prefer to use multiple shares then clustered Data ONTAP allows you to create additional shares rooted at any directory level within the namespace.
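For reference, the share you just created through System Manager is equivalent to the following clustershell commands; again, this is a sketch for comparison rather than an additional lab step:

cluster1::> vserver cifs share create -vserver svm1 -share-name nsroot -path /
cluster1::> vserver cifs share show -vserver svm1 -share-name nsroot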

4.2.2 Setting Up Username Mapping

Since you have configured your SVM to support both NFS and CIFS, you next need to set up username mapping so that the UNIX root account and the DEMO\Administrator account will have synonymous access to each other’s files. Setting up such a mapping may not be desirable in all environments, but it will simplify data sharing in this lab since these are the two primary accounts you are using.

1. In System Manager, open the Storage Virtual Machines tab and navigate to cluster1 > svm1 > Configuration > Users and Groups > Name Mapping.

2. In the “Name Mapping” pane, click Add.


Figure 4-46:

The “Add Name Mapping Entry” window opens.

Create a Windows to UNIX mapping by completing all of the fields as follows:
3. Set the Direction: value to Windows to UNIX.
4. Set the Position: number to 1.
5. Set the Pattern: value to demo\\administrator (the two backslashes listed here are not a typo, and “administrator” should not be capitalized).
6. Set the Replacement: value to root.
7. When you have finished populating these fields, click Add.


Figure 4-47:


The window closes and focus returns to the “Name Mapping” pane in System Manager. Click the Add button again to create another mapping rule.

The “Add Name Mapping Entry” window opens.

Create a UNIX to Windows mapping by completing all of the fields as follows:
8. Set the Direction: value to UNIX to Windows.
9. Set the Position: value to 1.
10. Set the Pattern: value to root.
11. Set the Replacement: value to demo\\administrator (the two backslashes listed here are not a typo, and “administrator” should not be capitalized).
12. When you have finished populating these fields, click Add.


Figure 4-48:

The second “Add Name Mapping” window closes, and focus again returns to the “Name Mapping” pane in System Manager.

13. You should now see two mappings listed in this pane that together make the “root” and “DEMO\Administrator” accounts equivalent to each other for the purpose of file access within the SVM.


Figure 4-49:
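If you prefer the command line, the two mapping rules you just created are equivalent to the following clustershell commands; a sketch for reference, not an additional lab step:

cluster1::> vserver name-mapping create -vserver svm1 -direction win-unix -position 1 -pattern demo\\administrator -replacement root
cluster1::> vserver name-mapping create -vserver svm1 -direction unix-win -position 1 -pattern root -replacement demo\\administrator
cluster1::> vserver name-mapping show -vserver svm1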

4.2.3 Create a Volume and Map It to the Namespace

Volumes, or FlexVols, are the dynamically sized containers used by Data ONTAP to store data. A volume only resides in a single aggregate at a time, but any given aggregate can host multiple volumes. Unlike an aggregate, which can be associated with multiple SVMs, a volume can only be associated with a single SVM. The maximum size of a volume can vary depending on what storage controller model is hosting it.

An SVM can host multiple volumes. While there is no specific limit on the number of FlexVols that can be configured for a given SVM, each storage controller node is limited to hosting no more than 500 or 1000 FlexVols (varies based on controller model), which means that there is an effective limit on the total number of volumes that a cluster can host, depending on how many nodes there are in your cluster.

Each storage controller node has a root aggregate (e.g., aggr0_<nodename>) that contains the node’s Data ONTAP operating system. Do not use the node’s root aggregate to host any other volumes or user data; always create additional aggregates and volumes for that purpose.

Clustered Data ONTAP FlexVols support a number of storage efficiency features including thin provisioning, deduplication, and compression. One specific storage efficiency feature you will be seeing in this section of the lab is thin provisioning, which dictates how space for a FlexVol is allocated in its containing aggregate.

When you create a FlexVol with a volume guarantee of type “volume” you are thickly provisioning the volume, pre-allocating all of the space for the volume on the containing aggregate, which ensures that the volume will never run out of space unless the volume reaches 100% capacity. When you create a FlexVol with a volume guarantee of “none” you are thinly provisioning the volume, allocating space for it on the containing aggregate only when, and in the quantity that, the volume actually requires it to store data.

This latter configuration allows you to increase your overall space utilization and even oversubscribe an aggregate by allocating more volumes on it than the aggregate could actually accommodate if all the subscribed volumes reached their full size. However, if an oversubscribed aggregate does fill up then all its volumes will run out of space before they reach their maximum volume size; therefore, oversubscription deployments generally require a greater degree of administrative vigilance around space utilization.

In the Clusters section, you created a new aggregate named “aggr1_cluster1_01”; you will now use that aggregate to host a new thinly provisioned volume named “engineering” for the SVM named “svm1”.
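For reference, the volume creation and junctioning you are about to perform in System Manager can be collapsed into a single clustershell command; this is a sketch for comparison, not a lab step (the -space-guarantee none option is what makes the volume thin provisioned):

cluster1::> volume create -vserver svm1 -volume engineering -aggregate aggr1_cluster1_01 -size 10GB -space-guarantee none -junction-path /engineering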

1. In System Manager, open the Storage Virtual Machines tab.
2. Navigate to cluster1 > svm1 > Storage > Volumes.
3. Click Create to launch the Create Volume wizard.


Figure 4-50:

The “Create Volume” window opens.

4. Populate the following values into the data fields in the window.

• Name: engineering
• Aggregate: aggr1_cluster1_01
• Total Size: 10 GB
• Check the Thin Provisioned checkbox.

Leave the other values at their defaults.
5. Click Create.


Figure 4-51:

The “Create Volume” window closes, and focus returns to the “Volumes” pane in System Manager.

6. The newly created engineering volume now appears in the Volumes list. Notice that the volume is 10 GB in size, and is thin provisioned.


Figure 4-52:

System Manager has also automatically mapped the engineering volume into the SVM’s NAS namespace.

7. Navigate to Storage Virtual Machines > cluster1 > svm1 > Storage > Namespace.
8. Notice that the engineering volume is now junctioned in under the root of the SVM’s namespace, and has also inherited the default NFS export policy.


Figure 4-53:

Since you have already configured the access rules for the default policy, the volume is instantly accessible to NFS clients. As you can see in the preceding screenshot, the engineering volume was junctioned as “/engineering”, meaning that any client that had mapped a share to \\svm1\nsroot or NFS mounted svm1:/ would now instantly see the engineering directory in the share, and in the NFS mount.

Now create a second volume.

9. Navigate to Storage Virtual Machines > cluster1 > svm1 > Storage > Volumes.
10. Click Create to launch the Create Volume wizard.


Figure 4-54:

The Create Volume window opens.

11. Populate the following values into the data fields in the window:

• Name: eng_users
• Aggregate: aggr1_cluster1_01
• Total Size: 10 GB
• Check the Thin Provisioned checkbox.

Leave the other values at their defaults.

12. Click the Create button.


Figure 4-55:

The “Create Volume” window closes, and focus returns again to the “Volumes” pane in System Manager. The newly created “eng_users” volume should now appear in the Volumes list.

13. Select the eng_users volume in the volumes list, and examine the details for this volume in the General box at the bottom of the pane. Specifically, note that this volume has a Junction Path value of “/eng_users”.


Figure 4-56:

You do have more options for junctioning than just placing your volumes into the root of your namespace. In the case of the eng_users volume, you will re-junction that volume underneath the engineering volume, and shorten the junction name to take advantage of an already intuitive context.

14. Navigate to Storage Virtual Machines > cluster1 > svm1 > Storage > Namespace.
15. In the Namespace pane, select the eng_users junction point.
16. Click Unmount.


Figure 4-57:

The “Unmount Volume” window opens asking for confirmation that you really want to unmount the volume from the namespace.

17. Click Unmount.


Figure 4-58:

The “Unmount Volume” window closes, and focus returns to the “Namespace” pane in System Manager. The “eng_users” volume no longer appears in the junction list for the namespace, and since it is no longer junctioned in the namespace, clients can no longer access it or even see it. Now you will junction the volume in at another location in the namespace.

18. Click Mount.


Figure 4-59:

The “Mount Volume” window opens.

19. Set the fields in the window as follows.

• Volume Name: eng_users
• Junction Name: users

20. Click Browse.


Figure 4-60:

The “Browse For Junction Path” window opens.

21. Select engineering, which will populate “/engineering” into the textbox above the list.
22. Click Select to accept the selection.


Figure 4-61:

The “Browse For Junction Path” window closes, and focus returns to the “Mount Volume” window.

23. The fields in the Mount Volume window should now all contain values as follows:

• Volume Name: eng_users
• Junction Name: users
• Junction Path: /engineering

24. When ready, click Mount.


Figure 4-62:

The “Mount Volume” window closes, and focus returns to the “Namespace” pane in System Manager.

25. The “eng_users” volume is now mounted in the namespace as “/engineering/users”.


Figure 4-63:

You can also create a junction within user-created directories. For example, from a CIFS or NFS client you could create a folder named “Projects” inside the engineering volume, and then create a “widgets” volume that junctions in under the projects folder. In that scenario, the namespace path to the “widgets” volume contents would be “/engineering/projects/widgets”.
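The unmount and mount operations you just performed in System Manager correspond to the volume unmount and volume mount clustershell commands; a sketch for reference:

cluster1::> volume unmount -vserver svm1 -volume eng_users
cluster1::> volume mount -vserver svm1 -volume eng_users -junction-path /engineering/users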

Now you will create a couple of qtrees within the “eng_users” volume, one for each of the users “bob” and “susan”.

26. Navigate to Storage Virtual Machines > cluster1 > svm1 > Storage > Qtrees.
27. Click Create to launch the Create Qtree wizard.


Figure 4-64:

The “Create Qtree” window opens.

28. Set the “Name”: value to bob.
29. Click the Browse button next to the Volume: property.


Figure 4-65:

The "Select a Volume" window opens.

30. Expand the svm1 list and select the eng_users volume. Remember, here you are selecting the name of the volume that will host the qtree, not the path where that qtree will reside in the namespace.

31. Click the OK button.


Figure 4-66:

The "Select a Volume" window closes, and focus returns to the "Create Qtree" window.

32. The Volume field is now populated with eng_users.
33. Select the Quota tab.


Figure 4-67:

The Quota tab is where you define the space usage limits you want to apply to the qtree. You will not actually be implementing any quota limits in this lab.

34. Click the Create button to finish creating the qtree.


Figure 4-68:


The “Create Qtree” window closes, and focus returns to the “Qtrees” pane in System Manager.

35. The new bob qtree is now present in the qtrees list.
36. Now create a qtree for the user account "susan" by clicking the Create button.


Figure 4-69:

The “Create Qtree” window opens.

37. Select the Details tab and then populate the fields as follows.

• “Name”: susan
• “Volume”: eng_users

38. Click Create.


Figure 4-70:

The “Create Qtree” window closes, and focus returns to the “Qtrees” pane in System Manager.

39. At this point you should see both the “bob” and “susan” qtrees in System Manager.


Figure 4-71:
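For reference, the same qtrees could have been created from the clustershell; a sketch, not an additional lab step:

cluster1::> volume qtree create -vserver svm1 -volume eng_users -qtree bob
cluster1::> volume qtree create -vserver svm1 -volume eng_users -qtree susan
cluster1::> volume qtree show -vserver svm1 -volume eng_users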

4.2.4 Connect to the SVM From a Windows Client

The “svm1” SVM is up and running and is configured for NFS and CIFS access, so it’s time to validate that everything is working properly by mounting the NFS export on a Linux host, and the CIFS share on a Windows host. You should complete both parts of this section so you can see that both hosts are able to seamlessly access the volume and its files.

This part of the lab demonstrates connecting the Windows client jumphost to the CIFS share \\svm1\nsroot using the Windows GUI.

1. On the Windows host jumphost, open Windows Explorer by clicking the folder icon on the taskbar.


Figure 4-72:

A Windows Explorer window opens.

2. In Windows Explorer, click Computer.


3. Click Map network drive to launch the Map Network Drive wizard.


Figure 4-73:

The “Map Network Drive” wizard opens.

4. Set the fields in the window to the following values.

• “Drive”: S:
• “Folder”: \\svm1\nsroot
• Check the Reconnect at sign-in checkbox.

5. When finished, click Finish.
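If you prefer the Windows command line, the same mapping can be created from PowerShell with the net use command; a sketch equivalent to the wizard steps above:

PS C:\Users\Administrator.DEMO> net use S: \\svm1\nsroot /persistent:yes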


Figure 4-74:

A new Windows Explorer window opens.

6. The engineering volume you earlier junctioned into svm1’s namespace is visible at the top of the nsroot share, which points to the root of the namespace. If you created another volume on svm1 right now and mounted it under the root of the namespace, that new volume would instantly become visible in this share, and to clients like jumphost that have already mounted the share. Double-click the engineering folder to open it.


Figure 4-75:

File Explorer displays the contents of the engineering folder. Next you will create a file in this folder to confirm that you can write to it.

7. Notice that the “eng_users” volume that you junctioned in as users is visible inside this folder.
8. Right-click in the empty space in the right pane of File Explorer.
9. In the context menu, select New > Text Document, and name the resulting file “cifs.txt”.


Figure 4-76:

10. Double-click the cifs.txt file you just created to open it with Notepad.

Tip: If you aren't seeing file extensions in your lab, you can enable that by going to the View menu at the top of Windows Explorer and checking the File Name Extensions checkbox.

11. In Notepad, enter some text (make sure you put a carriage return at the end of the line, or else when you later view the contents of this file on Linux the command shell prompt will appear on the same line as the file contents).

12. Use the File > Save menu in Notepad to save the file’s updated contents to the share. If write access is working properly you will not receive an error message.


Figure 4-77:

Close Notepad and File Explorer to finish this exercise.

4.2.5 Connect to the SVM From a Linux Client

This section demonstrates how to connect a Linux client to the NFS volume svm1:/ using the Linux command line.

1. Follow the instructions in the “Accessing the Command Line” section at the beginning of this lab guide to open PuTTY and connect to the system rhel1. Log in as the user root with the password Netapp1!.

2. Verify that there are no NFS volumes currently mounted on rhel1.

[root@rhel1 ~]# df
Filesystem                   1K-blocks    Used Available Use% Mounted on
/dev/mapper/vg_rhel1-lv_root  11877388 4962504   6311544  45% /
tmpfs                           444612      76    444536   1% /dev/shm
/dev/sda1                       495844   40084    430160   9% /boot
[root@rhel1 ~]#

3. Create the /svm1 directory to serve as a mount point for the NFS volume you will shortly be mounting.

[root@rhel1 ~]# mkdir /svm1
[root@rhel1 ~]#

4. Add an entry for the NFS mount to the fstab file.

[root@rhel1 ~]# echo "svm1:/ /svm1 nfs rw,defaults 0 0" >> /etc/fstab
[root@rhel1 ~]#


5. Verify the fstab file contains the new entry you just created.

[root@rhel1 ~]# grep svm1 /etc/fstab
svm1:/ /svm1 nfs rw,defaults 0 0
[root@rhel1 ~]#

6. Mount all the file systems listed in the fstab file.

[root@rhel1 ~]# mount -a
[root@rhel1 ~]#

7. View a list of the mounted file systems.

[root@rhel1 ~]# df
Filesystem                   1K-blocks    Used Available Use% Mounted on
/dev/mapper/vg_rhel1-lv_root  11877388 4962508   6311540  45% /
tmpfs                           444612      76    444536   1% /dev/shm
/dev/sda1                       495844   40084    430160   9% /boot
svm1:/                           19456     128     19328   1% /svm1
[root@rhel1 ~]#

The NFS file system svm1:/ now shows as mounted on /svm1.

8. Navigate into the /svm1 directory.

[root@rhel1 ~]# cd /svm1
[root@rhel1 svm1]#

9. Notice that you can see the engineering volume that you previously junctioned into the SVM’s namespace.

[root@rhel1 svm1]# ls
engineering
[root@rhel1 svm1]#

10. Navigate into engineering and list its contents.

Attention: The following command output assumes that you have already performed the Windows client connection steps found earlier in this lab guide, including creating the cifs.txt file.

[root@rhel1 svm1]# cd engineering
[root@rhel1 engineering]# ls
cifs.txt  users
[root@rhel1 engineering]#

11. Display the contents of the cifs.txt file you created earlier.

Tip: When you cat the cifs.txt file, if the shell prompt winds up on the same line as the file output then that indicates that you forgot to include a newline at the end of the file when you created the file on Windows.

[root@rhel1 engineering]# cat cifs.txt
write test from jumphost
[root@rhel1 engineering]#

12. Verify that you can create a file in this directory.

[root@rhel1 engineering]# echo "write test from rhel1" > nfs.txt
[root@rhel1 engineering]# cat nfs.txt
write test from rhel1
[root@rhel1 engineering]# ll
total 4
-rwxrwxrwx 1 root bin    26 Oct 20 03:05 cifs.txt
-rwxrwxrwx 1 root root   22 Oct 20 03:06 nfs.txt
drwxrwxrwx 4 root root 4096 Oct 20 02:37 users
[root@rhel1 engineering]#


4.2.6 NFS Exporting Qtrees (Optional)

Clustered Data ONTAP 8.2.1 introduced the ability to NFS export qtrees. This optional section explains how to configure qtree exports, and demonstrates how to set different export rules for a given qtree. For this exercise you will work with the qtrees you created in the previous section.

Qtrees had many capabilities in Data ONTAP 7-mode that are no longer present in cluster mode. Qtrees do still exist in cluster mode, but their purpose is now essentially limited to quota management, with most other 7-mode qtree features, including NFS exports, now the exclusive purview of volumes. This functionality change created challenges for 7-mode customers with large numbers of NFS qtree exports who were trying to transition to cluster mode and could not convert those qtrees to volumes because they would exceed clustered Data ONTAP’s maximum number of volumes limit.

To solve this problem, clustered Data ONTAP 8.2.1 introduced qtree NFS. NetApp continues to recommend that customers favor volumes over qtrees in cluster mode whenever practical, but customers requiring large numbers of qtree NFS exports now have a supported solution under clustered Data ONTAP.

While this section provides a graphical method to configure qtree NFS exports, you must still use the command line to accomplish some configuration tasks.

Begin by creating a new export policy with rules that only permit NFS access from the Linux host rhel1.

1. In System Manager, select the Storage Virtual Machines tab.
2. Navigate to cluster1 > svm1 > Policies > Export Policies.
3. Click the Create button.


Figure 4-78:

The “Create Export Policy” window opens.

4. Set the “Policy Name” to rhel1-only.
5. Click the Add button.


Figure 4-79:

The “Create Export Rule” window opens.

6. Set “Client Specification” to 192.168.0.61, and notice that you are leaving all of the “Access Protocol” checkboxes unchecked.

7. Click OK.


Figure 4-80:

The “Create Export Rule” window closes, and focus returns to the “Create Export Policy” window.

8. The new access rule is now present in the rules window, and the rule’s “Access Protocols” entry indicates that there are no protocol restrictions. If you had selected all the available protocol checkboxes when creating this rule, then each of those selected protocols would have been explicitly listed here.

9. Click Create.


Figure 4-81:

The “Create Export Policy” window closes, and focus returns to the “Export Policies” pane in System Manager.

10. The rhel1-only policy now shows up in the export policy list.


Figure 4-82:

Now you need to apply this new export policy to the qtree. System Manager does not support this capability, so you will have to use the clustered Data ONTAP command line. Open a PuTTY connection to cluster1, and log in using the username admin and the password Netapp1!, then enter the following commands.

11. Produce a list of svm1’s export policies, and then a list of its qtrees:

cluster1::> vserver export-policy show
Vserver         Policy Name
--------------- -------------------
svm1            default
svm1            rhel1-only
2 entries were displayed.

cluster1::> volume qtree show
Vserver    Volume        Qtree        Style        Oplocks   Status
---------- ------------- ------------ ------------ --------- --------
svm1       eng_users     ""           ntfs         enable    normal
svm1       eng_users     bob          ntfs         enable    normal
svm1       eng_users     susan        ntfs         enable    normal
svm1       engineering   ""           ntfs         enable    normal
svm1       svm1_root     ""           ntfs         enable    normal
5 entries were displayed.

cluster1::>

12. Apply the rhel1-only export policy to the “susan” qtree.

cluster1::> volume qtree modify -vserver svm1 -volume eng_users -qtree susan -export-policy rhel1-only
cluster1::>


13. Display the configuration of the “susan” qtree. Notice the Export Policy field shows that this qtree is using the rhel1-only export policy.

cluster1::> volume qtree show -vserver svm1 -volume eng_users -qtree susan

                      Vserver Name: svm1
                       Volume Name: eng_users
                        Qtree Name: susan
                        Qtree Path: /vol/eng_users/susan
                    Security Style: ntfs
                       Oplock Mode: enable
                  Unix Permissions: -
                          Qtree Id: 2
                      Qtree Status: normal
                     Export Policy: rhel1-only
        Is Export Policy Inherited: false

cluster1::>

14. Produce a report showing the export policy assignments for all the volumes and qtrees that belong to svm1.

cluster1::> volume qtree show -vserver svm1 -fields export-policy
vserver volume      qtree export-policy
------- ----------- ----- -------------
svm1    eng_users   ""    default
svm1    eng_users   bob   default
svm1    eng_users   susan rhel1-only
svm1    engineering ""    default
svm1    svm1_root   ""    default
5 entries were displayed.

cluster1::>

15. Now you need to validate that the more restrictive export policy that you’ve applied to the qtree “susan” is working as expected. If you still have an active PuTTY session open to the Linux host rhel1, bring that window up now; otherwise open a new PuTTY session to that host (username = root, password = Netapp1!). Run the following commands to verify that you can still access the susan qtree from rhel1.

[root@rhel1 ~]# cd /svm1/engineering/users
[root@rhel1 users]# ls
bob  susan
[root@rhel1 users]# cd susan
[root@rhel1 susan]# echo "hello from rhel1" > rhel1.txt
[root@rhel1 susan]# cat rhel1.txt
hello from rhel1
[root@rhel1 susan]#

16. Now open a PuTTY connection to the Linux host rhel2 (again, username = root and password = Netapp1!). This host should be able to access all the volumes and qtrees in the svm1 namespace *except* “susan”, which should give a permission denied error because that qtree’s associated export policy only grants access to the host rhel1.

[root@rhel2 ~]# mkdir /svm1
[root@rhel2 ~]# mount svm1:/ /svm1
[root@rhel2 ~]# cd /svm1/engineering/users
[root@rhel2 users]# ls
bob  susan
[root@rhel2 users]# cd susan
bash: cd: susan: Permission denied
[root@rhel2 users]# cd bob
[root@rhel2 bob]#

4.3 Create Storage for iSCSI

Expected Completion Time: 50 Minutes


This section of the lab is optional, and includes instructions for mounting a LUN on Windows and Linux. If you choose to complete this section you must first complete the “Create a Storage Virtual Machine for iSCSI” section, and then complete either the “Create, Map, and Mount a Windows LUN” section, or the “Create, Map, and Mount a Linux LUN” section as appropriate based on your platform of interest.

The 50 minute time estimate assumes you complete only one of the Windows or Linux LUN sections. You are welcome to complete both of those sections if you choose, but you should plan on needing approximately 90 minutes to complete the entire “Create and Mount a LUN” section.

If you completed the “Create a Storage Virtual Machine for NFS and CIFS” section of this lab then you explored the concept of a Storage Virtual Machine (SVM), created an SVM, and configured it to serve data over NFS and CIFS. If you skipped that section of the lab guide, consider reviewing the introductory text found at the beginning of that section, and each of its subsections, before you proceed further because this section builds on concepts described there.

In this section you are going to create another SVM and configure it for SAN protocols, which means you are going to configure the SVM for iSCSI since this virtualized lab does not support FC. The configuration steps for iSCSI and FC are similar, so the information provided here is also useful for FC deployment. After you create a new SVM and configure it for iSCSI, you will create a LUN for Windows and/or a LUN for Linux, and then mount the LUN(s) on their respective hosts.

NetApp supports configuring an SVM to serve data over both SAN and NAS protocols, but it is common to see customers use separate SVMs for each in order to separate administrative responsibilities, or for architectural and operational clarity. For example, SAN protocols do not support LIF failover, so you cannot use NAS LIFs to support SAN protocols. You must instead create dedicated LIFs just for SAN. Implementing separate SVMs for SAN and NAS can in this example simplify the operational complexity of each SVM’s configuration, making each easier to understand and manage, but ultimately whether to mix or separate is a customer decision, and not a NetApp recommendation.

Since SAN LIFs do not support migration to different nodes, an SVM must have dedicated SAN LIFs on every node that you want to service SAN requests, and you must utilize MPIO and ALUA to manage the controller’s available paths to the LUNs. In the event of a path disruption MPIO and ALUA will compensate by re-routing the LUN communication over an alternate controller path (i.e., over a different SAN LIF).

NetApp best practice is to configure at least one SAN LIF per storage fabric/network on each node in the cluster so that all nodes can provide a path to the LUNs. In large clusters where this would result in the presentation of a large number of paths for a given LUN, we recommend that you use portsets to limit the LUN to seeing no more than 8 LIFs. Data ONTAP 8.3 introduces a new Selective LUN Mapping (SLM) feature to provide further assistance in managing fabric paths. SLM limits LUN path access to just the node that owns the LUN and its HA partner, and Data ONTAP automatically applies SLM to all new LUN map operations. For further information on Selective LUN Mapping, please see the Hands-On Lab for SAN Features in clustered Data ONTAP 8.3.
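
If you later want to see which nodes SLM allows to advertise paths for a given LUN, the clustered Data ONTAP CLI can display the reporting nodes. A minimal sketch, assuming the svmluns SVM that you create later in this section, to be run after a LUN has been mapped:

cluster1::> lun mapping show -vserver svmluns -fields reporting-nodes

The reporting-nodes field should list only the node that owns the LUN’s containing volume and that node’s HA partner.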

In this lab the cluster contains two nodes connected to a single storage network. You will still configure a total of 4 SAN LIFs, because it is common to see implementations with 2 paths per node for redundancy.

This section of the lab allows you to create and mount a LUN for only Windows, only Linux, or both if you desire. Both the Windows and Linux LUN creation steps require that you complete the “Create a Storage Virtual Machine for iSCSI” section that comes next. If you want to create a Windows LUN, you need to complete the “Create, Map, and Mount a Windows LUN” section that follows. Additionally, if you want to create a Linux LUN, you need to complete the “Create, Map, and Mount a Linux LUN” section that follows after that. You can safely complete both of those last two sections in the same lab.

4.3.1 Create a Storage Virtual Machine for iSCSI

In this section you will create a new SVM named svmluns on the cluster. You will create the SVM, configure it for iSCSI, and create four data LIFs to support LUN access to the SVM (two on each cluster node).
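
For reference, the same configuration can also be built from the clustered Data ONTAP CLI rather than the wizard. A minimal sketch, assuming the names used in this section; the root volume name, LIF name, and home port shown here are illustrative, and this lab itself uses the System Manager wizard described below:

cluster1::> vserver create -vserver svmluns -rootvolume svmluns_root -aggregate aggr1_cluster1_01 -rootvolume-security-style unix
cluster1::> vserver iscsi create -vserver svmluns
cluster1::> network interface create -vserver svmluns -lif cluster1-01_iscsi_lif_1 -role data -data-protocol iscsi -home-node cluster1-01 -home-port e0d -subnet-name Demo

You would repeat the network interface create command for each of the four SAN LIFs, varying the LIF name, home node, and home port.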

Return to the System Manager window and start the procedure to create a new storage virtual machine.

1. Open the Storage Virtual Machines tab.
2. Select cluster1.

3. Click Create to launch the Storage Virtual Machine Setup wizard.

Figure 4-83:

The “Storage Virtual Machine (SVM) Setup” window opens.

4. Set the fields as follows:

• “SVM Name”: svmluns
• “Data Protocols”: check the iSCSI checkbox.

Tip: The list of available Data Protocols is dependent upon what protocols are licensed on your cluster; if a given protocol is not listed it is because you are not licensed for it. (In this lab the cluster is fully licensed for all features.)

• “Root Aggregate”: aggr1_cluster1_01. If you completed the NAS section of this lab, you will note that this is the same aggregate you used to hold the volumes for svm1. Multiple SVMs can share the same aggregate.

The default values for IPspace, Volume Type, Default Language, and Security Style are already populated for you by the wizard, as is the DNS configuration. When ready, click Submit & Continue.

Figure 4-84:

The Configure iSCSI Protocol step of the wizard opens.

5. Set the fields in the window as follows.

• “LIFs Per Node”: 2
• “Subnet”: Demo
• Select the Auto-select the IP address from this subnet radio button.

6. The “Provision a LUN for iSCSI Storage (Optional)” section allows you to quickly create a LUN when first creating an SVM. This lab guide does not use that in order to show you the much more common activity of adding a new volume and LUN to an existing SVM in a later step.

7. Check the Review or modify LIF configuration (Advanced Settings) checkbox. Checking this checkbox changes the window layout and makes some fields uneditable, so the screenshot shows this checkbox before it has been checked.

Figure 4-85:

Once you check the Review or modify LIF configuration checkbox, the “Configure iSCSI Protocol” window changes to include a list of the LIFs that the wizard plans to create.

8. Take note of the LIF names and ports that the wizard has chosen to assign the LIFs you have asked it to create.

9. Since this lab utilizes a cluster that only has two nodes, and those nodes are configured as an HA pair, Data ONTAP’s automatically configured Selective LUN Mapping is more than sufficient for this lab so there is no need to create a portset.

10. Click Submit & Continue.

Figure 4-86:

The wizard advances to the SVM Administration step. Unlike data LIFs for NAS protocols, which automatically support both data and management functionality, iSCSI LIFs only support data protocols and so you must create a dedicated management LIF for this new SVM.

11. Set the fields in the window as follows:

• “Password”: netapp123
• “Confirm Password”: netapp123
• “Subnet”: Demo
• “Port”: cluster1-01:e0c

12. Click Submit & Continue.

Figure 4-87:

The “New Storage Virtual Machine (SVM) Summary” window opens. Review the contents of this window, taking note of the names, IP addresses, and port assignments for the 4 iSCSI LIFs, and the management LIF that the wizard created for you.

13. Click OK to close the window.

Figure 4-88:

The “New Storage Virtual Machine (SVM) Summary” window closes, and focus returns to System Manager.

14. System Manager now shows a summary view for the new svmluns SVM.

Figure 4-89:

4.3.2 Create, Map, and Mount a Windows LUN

In an earlier section you created a new SVM and configured it for iSCSI. In the following sub-sections you will perform the remaining steps needed to configure and use a LUN under Windows:

• Gather the iSCSI Initiator Name of the Windows client.
• Create a thin provisioned Windows volume, create a thin provisioned Windows LUN within that volume, and map the LUN so it can be accessed by the Windows client.
• Mount the LUN on the Windows client leveraging multi-pathing.

You must complete all of the subsections of this section in order to use the LUN from the Windows client.

4.3.2.1 Gather the Windows Client iSCSI Initiator Name

You need to determine the Windows client’s iSCSI initiator name so that when you create the LUN you can set up an appropriate initiator group to control access to the LUN.

On the desktop of the Windows client named "jumphost" (the main Windows host you use in the lab), perform the following tasks:

1. Click on the Windows button on the far left side of the task bar.

Figure 4-90:

The Start screen opens.

2. Click on Administrative Tools.

Figure 4-91:

Windows Explorer opens to the List of Administrative Tools.

3. Double-click the entry for the iSCSI Initiator tool.

Figure 4-92:

The iSCSI Initiator Properties window opens.

4. Select the Configuration tab.
5. Take note of the value in the “Initiator Name” field, which contains the initiator name for jumphost.

Attention: The initiator name is iqn.1991-05.com.microsoft:jumphost.demo.netapp.com

You will need this value later, so you might want to copy this value from the properties window and paste it into a text file on your lab’s desktop so you have it readily available when that time comes.

6. Click OK.

Figure 4-93:

The iSCSI Properties window closes, and focus returns to the Windows Explorer Administrative Tools window. Leave this window open because you will need to access other tools later in the lab.

4.3.2.2 Create and Map a Windows LUN

You will now create a new thin provisioned Windows LUN named “windows.lun” in the volume winluns on the SVM svmluns. You will also create an initiator igroup for the LUN and populate it with the Windows host jumphost. An initiator group, or igroup, defines a list of the Fibre Channel WWPNs or iSCSI node names of the hosts that are permitted to see and access the associated LUNs.
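
For reference, the volume, LUN, igroup, and mapping that the wizard produces in the following steps can also be created from the cluster1 CLI. A minimal sketch, assuming the names used in this section; the volume size shown is illustrative, and this lab uses the System Manager wizard instead:

cluster1::> volume create -vserver svmluns -volume winluns -aggregate aggr1_cluster1_01 -size 12GB -space-guarantee none
cluster1::> lun create -vserver svmluns -path /vol/winluns/windows.lun -size 10GB -ostype windows_2008 -space-reserve disabled
cluster1::> lun igroup create -vserver svmluns -igroup winigrp -protocol iscsi -ostype windows -initiator iqn.1991-05.com.microsoft:jumphost.demo.netapp.com
cluster1::> lun map -vserver svmluns -path /vol/winluns/windows.lun -igroup winigrp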

Return to the System Manager window.

1. Open the Storage Virtual Machines tab.
2. Navigate to cluster1 > svmluns > Storage > LUNs.
3. Click Create to launch the Create LUN wizard.

Figure 4-94:

The “Create LUN Wizard” opens.

4. Click Next to advance to the next step in the wizard.

Figure 4-95:

The wizard advances to the General Properties step.

5. Set the fields in the window as follows.

• “Name”: windows.lun.
• “Description”: Windows LUN.
• “Type”: Windows 2008 or later.
• “Size”: 10 GB.
• Check the Disable Space Reservation check box.

6. Click Next to continue.

Figure 4-96:

The wizard advances to the LUN Container step.

7. Select the radio button to Create a new flexible volume, and set the fields under that heading as follows.

• “Aggregate Name”: aggr1_cluster1_01.
• “Volume Name”: winluns.

8. When finished click Next.

Figure 4-97:

The wizard advances to the Initiator Mappings step.

9. Click the Add Initiator Group button.

Figure 4-98:

The “Create Initiator Group” window opens.

10. Set the fields in the window as follows.

• “Name”: winigrp
• “Operating System”: Windows
• “Type”: Select the iSCSI radio button.

11. Click the Initiators tab.

Figure 4-99:

The “Initiators” tab displays.

12. Click the Add button to add a new initiator.

Figure 4-100:

A new empty entry appears in the list of initiators.

13. Populate the Name entry with the value of the iSCSI Initiator name for jumphost that you saved earlier.

In case you misplaced that value, it was:

Attention: iqn.1991-05.com.microsoft:jumphost.demo.netapp.com

14. When you finish entering the value, click the OK button underneath the entry. Finally, click Create.

Figure 4-101:

An Initiator-Group Summary window opens confirming that the winigrp igroup was created successfully.

15. Click OK to acknowledge the confirmation.

Figure 4-102:

The Initiator-Group Summary window closes, and focus returns to the Initiator Mapping step of the Create LUN wizard.

16. Click the checkbox under the map column next to the winigrp initiator group.

Caution: This is a critical step because this is where you actually map the new LUN to the new igroup.

17. Click Next to continue.

Figure 4-103:

The wizard advances to the Storage Quality of Service Properties step. You will not be creating any QoS policies in this lab. If you are interested in learning about QoS, please see the Hands-on Lab for Advanced Concepts for clustered Data ONTAP 8.3.

18. Click Next to continue.

Figure 4-104:

The wizard advances to the LUN Summary step, where you can review your selections before proceeding with creating the LUN.

19. If everything looks correct, click Next.

Figure 4-105:

The wizard begins the task of creating the volume that contains the LUN, creating the LUN, and mapping the LUN to the new igroup. As it finishes each step, the wizard displays a green checkmark in the window next to that step.

20. Click the Finish button to terminate the wizard.

Figure 4-106:

The Create LUN wizard window closes, and focus returns to the LUNs view in System Manager.

21. The new LUN “windows.lun” now shows up in the LUNs view, and if you select it you can review its details in the bottom pane.

Figure 4-107:

4.3.2.3 Mount the LUN on a Windows Client

The final step is to mount the LUN on the Windows client. You will be using MPIO/ALUA to support multiple paths to the LUN using both of the SAN LIFs you configured earlier on the svmluns SVM. Data ONTAP DSM for Windows MPIO is the multi-pathing software you will be using for this lab, and that software is already installed on jumphost.

You should begin by validating that the Multi-Path I/O (MPIO) software is working properly on this Windows host. The Administrative Tools window should still be open on jumphost; if you already closed it then you will need to re-open it now so you can access the MPIO tool.

1. On the desktop of JUMPHOST, in the Administrative Tools window which you should still have open, double-click the MPIO tool.

Figure 4-108:

The “MPIO Properties” window opens.

2. Select the Discover Multi-Paths tab.
3. Examine the Add Support for iSCSI devices checkbox. If this checkbox is NOT greyed out then MPIO is improperly configured. This checkbox should be greyed out for this lab, but in the event it is not, place a check in that checkbox, click the Add button, and then click Yes in the reboot dialog to reboot your Windows host. Once the system finishes rebooting, return to this window to verify that the checkbox is now greyed out, indicating that MPIO is properly configured.

4. Click Cancel.

Figure 4-109:

The “MPIO Properties” window closes and focus returns to the “Administrative Tools” window for jumphost. Now you need to begin the process of connecting jumphost to the LUN.

5. In Administrative Tools, double-click the iSCSI Initiator tool.

Figure 4-110:

The “iSCSI Initiator Properties” window opens.

6. Select the Targets tab.
7. Notice that there are no targets listed in the “Discovered Targets” list box, indicating that there are currently no iSCSI targets mapped to this host.
8. Click the Discovery tab.

Figure 4-111:

The Discovery tab is where you begin the process of discovering LUNs, and to do that you must define a target portal to scan. You are going to manually add a target portal to jumphost.

9. Click the Discover Portal… button.

Figure 4-112:

The “Discover Target Portal” window opens. Here you will specify the first of the IP addresses that the clustered Data ONTAP SVM Setup wizard assigned your iSCSI LIFs when you created the svmluns SVM. Recall that the wizard assigned your LIFs IP addresses in the range 192.168.0.133-192.168.0.136.
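
If you would like to confirm these LIF addresses from the storage side before entering them, a quick sketch from the cluster1 CLI (the SVM name is the one created earlier in this section):

cluster1::> network interface show -vserver svmluns -fields address

The output should list the four iSCSI LIFs, along with the management LIF, and their assigned addresses.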

10. Set the “IP Address or DNS name” textbox to 192.168.0.133, the first address in the range for your LIFs.

11. Click OK.

Figure 4-113:

The “Discover Target Portal” window closes, and focus returns to the “iSCSI Initiator Properties” window.

12. The “Target Portals” list now contains an entry for the IP address you entered in the previous step.

13. Click on the Targets tab.

Figure 4-114:

The Targets tab opens to show you the list of discovered targets.

14. In the “Discovered targets” list select the only listed target. Observe that the target’s status is Inactive, because although you have discovered it you have not yet connected to it. Also note that the “Name” of the discovered target in your lab will have a different value than what you see in this guide; that name string is uniquely generated for each instance of the lab. (Make a mental note of that string value as you will see it a lot as you continue to configure iSCSI in later steps of this process.)

15. Click the Connect button.

Figure 4-115:

The “Connect to Target” dialog box opens.

16. Click the Enable multi-path checkbox.
17. Click the Advanced… button.

Figure 4-116:

The “Advanced Settings” window opens.

18. In the “Target portal IP” dropdown menu select the entry containing the IP address you specified when you discovered the target portal, which should be 192.168.0.133. The listed values are IP Address and Port number combinations, and the specific value you want to select here is 192.168.0.133 / 3260.

19. When finished, click OK.

Figure 4-117:

The “Advanced Settings” window closes, and focus returns to the “Connect to Target” window.

20. Click OK.

Figure 4-118:

The “Connect to Target” window closes, and focus returns to the “iSCSI Initiator Properties” window.

21. Notice that the status of the listed discovered target has changed from “Inactive” to “Connected”.

Figure 4-119:

Thus far you have added a single path to your iSCSI LUN, using the address for the cluster1-01_iscsi_lif_1 LIF the SVM Setup wizard created on the node cluster1-01 for the svmluns SVM. You are now going to add each of the other SAN LIFs present on the svmluns SVM. To begin this procedure you must first edit the properties of your existing connection.

22. Still on the Targets tab, select the discovered target entry for your existing connection.
23. Click Properties.

Figure 4-120:

The Properties window opens. From this window you will be starting the procedure of connecting alternate paths for your newly connected LUN. You will be repeating this procedure 3 times, once for each of the remaining LIFs that are present on the svmluns SVM.

LIF IP Address    Done
--------------    ----
192.168.0.134
192.168.0.135
192.168.0.136

24. The Identifier list will contain an entry for every path you have specified so far, so it can serve as a visual indicator of your progress in defining all your paths. The first time you enter this window you will see one entry, for the LIF you used to first connect to this LUN.

25. Click Add Session.

Figure 4-121:

The Connect to Target window opens.

26. Check the Enable multi-path checkbox.
27. Click Advanced….

Figure 4-122:

The Advanced Settings window opens.

28. Select the “Target port IP” entry that contains the IP address of the LIF whose path you are adding in this iteration of the procedure to add an alternate path. The following screenshot shows the 192.168.0.134 address, but the value you specify will depend on which specific path you are configuring.

29. When finished, click OK.

Figure 4-123:

The Advanced Settings window closes, and focus returns to the Connect to Target window.

30. Click OK.

Figure 4-124:

The Connect to Target window closes, and focus returns to the Properties window, where a new entry now appears in the Identifier list. Repeat the procedure from the last 4 screenshots for each of the last two remaining LIF IP addresses.

When you have finished adding all 3 paths the Identifiers list in the Properties window should contain 4 entries.

31. There are 4 entries in the Identifier list when you are finished, indicating that there are 4 sessions, one for each path. Note that it is normal for the identifier values in your lab to differ from those in the screenshot.

32. Click OK.

Figure 4-125:

The Properties window closes, and focus returns to the iSCSI Properties window.

33. Click OK.

Figure 4-126:

The iSCSI Properties window closes, and focus returns to the desktop of jumphost. If the Administrative Tools window is not still open on your desktop, open it again now.

If all went well, jumphost is now connected to the LUN using multi-pathing, so it is time to format your LUN and build a filesystem on it.
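
If you would like to verify the sessions from the storage side as well, a quick sketch from the cluster1 CLI (the exact session identifiers in your lab will differ):

cluster1::> vserver iscsi session show -vserver svmluns

With all four paths configured you should see four sessions from jumphost, one per SAN LIF.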

34. In Administrative Tools, double-click the Computer Management tool.

Figure 4-127:

The Computer Management window opens.

35. In the left pane of the Computer Management window, navigate to Computer Management (Local) > Storage > Disk Management.

Figure 4-128:

36. When you launch Disk Management an Initialize Disk dialog will open informing you that you must initialize a new disk before Logical Disk Manager can access it.

Note: If you see more than one disk listed then MPIO has not correctly recognized that the multiple paths you set up are all for the same LUN, so you will need to cancel the Initialize Disk dialog, quit Computer Manager, and go back to the iSCSI Initiator tool to review your path configuration steps to find and correct any configuration errors, after which you can return to the Computer Management tool and try again.

Click OK to initialize the disk.

Figure 4-129:

The Initialize Disk window closes, and focus returns to the Disk Management view in the Computer Management window.

37. The new disk shows up in the disk list at the bottom of the window, and has a status of “Unallocated”.
38. Right-click inside the “Unallocated” box for the disk (if you right-click outside this box you will get the incorrect context menu), and select New Simple Volume… from the context menu.

Figure 4-130:

The “New Simple Volume Wizard” window opens.

39. Click the Next button to advance the wizard.

Figure 4-131:

The wizard advances to the “Specify Volume Size” step.

40. The wizard defaults to allocating all of the space in the volume, so click the Next button.

Figure 4-132:

The wizard advances to the “Assign Drive Letter or Path” step.

41. The wizard automatically selects the next available drive letter, which should be E. Click Next.

Figure 4-133:

The wizard advances to the “Format Partition” step.

42. Set the “Volume Label” field to WINLUN.
43. Click Next.

Figure 4-134:

The wizard advances to the “Completing the New Simple Volume Wizard” step.

44. Click Finish.

Figure 4-135:

The “New Simple Volume Wizard” window closes, and focus returns to the Disk Management view of the Computer Management window.

45. The new WINLUN volume now shows as “Healthy” in the disk list at the bottom of the window, indicating that the new LUN is mounted and ready to use. Before you complete this section of the lab, take a look at the MPIO configuration for this LUN by right-clicking inside the box for the WINLUN volume.

46. From the context menu select Properties.

Figure 4-136:

The WINLUN (E:) Properties window opens.

47. Click the Hardware tab.
48. In the “All disk drives” list select the NETAPP LUN C-Mode Multi-Path Disk entry.
49. Click Properties.

Figure 4-137:

The “NETAPP LUN C-Mode Multi-Path Disk Device Properties” window opens.

50. Click the MPIO tab.
51. Notice that you are using the Data ONTAP DSM for multi-path access rather than the Microsoft DSM. We recommend using the Data ONTAP DSM software, as it is the most full-featured option available, although the Microsoft DSM is also supported.

52. The MPIO policy is set to “Least Queue Depth”. A number of different multi-pathing policies are available, but the configuration shown here sends LUN I/O down the path that has the fewest outstanding I/O requests. You can click the More information about MPIO policies link at the bottom of the dialog window for details about all the available policies.

53. The top two paths show both a “Path State” and “TPG State” as “Active/Optimized”. These paths are connected to the node cluster1-01 and the Least Queue Depth policy makes active use of both paths to this node. Conversely, the bottom two paths show a “Path State” of “Unavailable”, and a “TPG State” of “Active/Unoptimized”. These paths are connected to the node cluster1-02, and only enter a Path State of “Active/Optimized” if the node cluster1-01 becomes unavailable, or if the volume hosting the LUN migrates over to the node cluster1-02. (You can confirm the owning node from the clustered Data ONTAP CLI, as shown in the sketch after this list.)

54. When you finish reviewing the information in this dialog click OK to exit. If you changed any of the values in this dialog you should consider using the Cancel button to discard those changes.
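
To see from the storage side why the top two paths are the optimized ones, you can check which node currently hosts the volume containing the LUN. A minimal sketch from the cluster1 CLI; the winluns volume name comes from this section, and the -fields node syntax is an assumption you can verify with volume show -instance:

cluster1::> volume show -vserver svmluns -volume winluns -fields node

Paths to LIFs on that node are reported to the host as Active/Optimized, while paths to its HA partner remain unoptimized until a failover or volume move occurs.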

Figure 4-138:

The “NETAPP LUN C-Mode Multi-Path Disk Device Properties” window closes, and focus returns to the “WINLUN (E:) Properties” window.

55. Click OK.

Figure 4-139:

The “WINLUN (E:) Properties” window closes.

56. Close the Computer Management window.

Figure 4-140:

57. Close the Administrative Tools window.

Figure 4-141:

58. You may see a message from Microsoft Windows stating that you must format the disk in drive E: before you can use it. As you may recall, you did format the LUN during the “New Simple Volume Wizard”, meaning this is an erroneous message from Windows. Click Cancel to ignore it.

Figure 4-142:

Feel free to open Windows Explorer and verify that you can create a file on the E: drive.

This completes this exercise.

4.3.3 Create, Map, and Mount a Linux LUN

In an earlier section you created a new SVM and configured it for iSCSI. In the following sub-sections you will perform the remaining steps needed to configure and use a LUN under Linux:

• Gather the iSCSI Initiator Name of the Linux client.
• Create a thin provisioned Linux volume, create a thin provisioned Linux LUN named linux.lun within that volume, and map the LUN to the Linux client.
• Mount the LUN on the Linux client.

You must complete all of the following subsections in order to use the LUN from the Linux client. Note that you are not required to complete the Windows LUN section before starting this section of the lab guide, but the screenshots and command line output shown here assume that you have. If you did not complete the Windows LUN section, the differences will not affect your ability to create and mount the Linux LUN.

4.3.3.1 Gather the Linux Client iSCSI Initiator Name

You need to determine the Linux client’s iSCSI initiator name so that you can set up an appropriate initiator group to control access to the LUN.

You should already have a PuTTY connection open to the Linux host rhel1. If you do not, then open one now using the instructions found in the “Accessing the Command Line” section at the beginning of this lab guide. The username will be root and the password will be Netapp1!.

1. Change to the directory that hosts the iscsi configuration files.

[root@rhel1 ~]# cd /etc/iscsi
[root@rhel1 iscsi]# ls
initiatorname.iscsi  iscsid.conf
[root@rhel1 iscsi]#

2. Display the name of the iscsi initiator.

[root@rhel1 iscsi]# cat initiatorname.iscsi
InitiatorName=iqn.1994-05.com.redhat:rhel1.demo.netapp.com
[root@rhel1 iscsi]#

Important: The initiator name for rhel1 is iqn.1994-05.com.redhat:rhel1.demo.netapp.com.

4.3.3.2 Create and Map a Linux LUN

In this activity, you create a new thin provisioned Linux LUN on the SVM “svmluns” under the volume “linluns”, and also create an initiator igroup for the LUN so that only the Linux host rhel1 can access it. An initiator group, or igroup, defines a list of the Fibre Channel WWPNs or iSCSI node names for the hosts that are permitted to see the associated LUNs.
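
For reference, the equivalent objects can also be created from the cluster1 CLI. A minimal sketch, assuming the names used in this section; the volume size shown is illustrative, and this lab uses the System Manager wizard instead:

cluster1::> volume create -vserver svmluns -volume linluns -aggregate aggr1_cluster1_01 -size 12GB -space-guarantee none
cluster1::> lun create -vserver svmluns -path /vol/linluns/linux.lun -size 10GB -ostype linux -space-reserve disabled
cluster1::> lun igroup create -vserver svmluns -igroup linigrp -protocol iscsi -ostype linux -initiator iqn.1994-05.com.redhat:rhel1.demo.netapp.com
cluster1::> lun map -vserver svmluns -path /vol/linluns/linux.lun -igroup linigrp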

Attention: Switch back to the System Manager window so that you can create the LUN.

1. In System Manager open the Storage Virtual Machines tab.
2. In the left pane, navigate to cluster1 > svmluns > Storage > LUNs.
3. You may or may not see a listing presented for the LUN windows.lun, depending on whether or not you completed the lab sections for creating a Windows LUN.
4. Click Create.

Figure 4-143:

The “Create LUN Wizard” opens.

5. Click Next to advance to the next step in the wizard.

Figure 4-144:

The wizard advances to the General Properties step.

6. Set the fields in the window as follows.

• “Name”: linux.lun
• “Description”: Linux LUN
• “Type”: Linux
• “Size”: 10 GB
• Check the Disable Space Reservation check box.

7. Click Next to continue.

Figure 4-145:

The wizard advances to the LUN Container step.

8. Select the radio button to Create a new flexible volume, and set the fields under that heading as follows.

• “Aggregate Name”: aggr1_cluster1_01
• “Volume Name”: linluns

9. When finished click Next.

Figure 4-146:

The wizard advances to the Initiator Mapping step.

10. Click Add Initiator Group.

Figure 4-147:

The “Create Initiator Group” window opens.

11. Set the fields in the window as follows.

• “Name”: linigrp
• “Operating System”: Linux
• “Type”: Select the iSCSI radio button.

12. Click the Initiators tab.

Figure 4-148:

The Initiators tab displays.

13. Click the Add button to add a new initiator.

Figure 4-149:

A new empty entry appears in the list of initiators.

14. Populate the Name entry with the value of the iSCSI Initiator name for rhel1.

Note: The initiator name is iqn.1994-05.com.redhat:rhel1.demo.netapp.com

15. When you finish entering the value, click OK underneath the entry. Finally, click Create.

Figure 4-150:

An “Initiator-Group Summary” window opens confirming that the linigrp igroup was created successfully.

16. Click OK to acknowledge the confirmation.

Figure 4-151:

The “Initiator-Group Summary” window closes, and focus returns to the Initiator Mapping step of the Create LUN wizard.

17. Click the checkbox under the “Map” column next to the linigrp initiator group. This is a critical step because this is where you actually map the new LUN to the new igroup.

18. Click Next to continue.

Figure 4-152:

The wizard advances to the Storage Quality of Service Properties step. You will not create any QoS policies in this lab. If you are interested in learning about QoS, please see the Hands-on Lab for Advanced Concepts for clustered Data ONTAP 8.3.1.

19. Click Next to continue.

Figure 4-153:

The wizard advances to the LUN Summary step, where you can review your selections before proceeding to create the LUN.

20. If everything looks correct, click Next.

Figure 4-154:

The wizard begins the task of creating the volume that will contain the LUN, creating the LUN, and mapping the LUN to the new igroup. As it finishes each step the wizard displays a green checkmark in the window next to that step.

21. Click Finish to terminate the wizard.

Figure 4-155:

The “Create LUN wizard” window closes, and focus returns to the LUNs view in System Manager.

22. The new LUN “linux.lun” now shows up in the LUNs view, and if you select it you can review its details in the bottom pane.

Figure 4-156:

The new Linux LUN now exists, and is mapped to your rhel1 client.
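
If you would like to double-check the new mapping from the storage side, a quick sketch from the cluster1 CLI:

cluster1::> lun mapping show -vserver svmluns

The listing should show /vol/linluns/linux.lun mapped to the linigrp igroup (plus the windows.lun mapping, if you completed the Windows section).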

Data ONTAP 8.2 introduced a space reclamation feature that allows Data ONTAP to reclaim space from a thin provisioned LUN when the client deletes data from it, and also allows Data ONTAP to notify the client when the LUN cannot accept writes due to lack of space on the volume. This feature is supported by VMware ESX 5.0 and later, Red Hat Enterprise Linux 6.2 and later, and Microsoft Windows 2012. The RHEL clients used in this lab are running version 6.6, so you will enable the space reclamation feature for your Linux LUN. Note that you can only enable space reclamation through the Data ONTAP command line.

23. In the cluster1 CLI, view whether space reclamation is enabled for the LUN.

cluster1::> lun show -vserver svmluns -path /vol/linluns/linux.lun -fields space-allocation
vserver path                   space-allocation
------- ---------------------- ----------------
svmluns /vol/linluns/linux.lun disabled

cluster1::>

24. Enable space reclamation for the LUN linux.lun.

cluster1::> lun modify -vserver svmluns -path /vol/linluns/linux.lun -space-allocation enabled

cluster1::>

25. View the LUN's space reclamation setting again.

cluster1::> lun show -vserver svmluns -path /vol/linluns/linux.lun -fields space-allocation
vserver path                   space-allocation
------- ---------------------- ----------------
svmluns /vol/linluns/linux.lun enabled

cluster1::>

4.3.3.3 Mount the LUN on a Linux Client

In this section you will use the Linux command line to configure the host rhel1 to connect to the Linux LUN /vol/linluns/linux.lun you created in the preceding section.

This section assumes that you know how to use the Linux command line. If you are not familiar with these concepts, we recommend that you skip this section of the lab.

1. If you do not currently have a PuTTY session open to rhel1, open one now and log in as user root with the password "Netapp1!".

2. The NetApp Linux Host Utilities kit has been pre-installed on both Red Hat Linux hosts in this lab, and the iSCSI initiator name has already been configured for each host. Confirm that is the case:

[root@rhel1 ~]# rpm -qa | grep netapp
netapp_linux_unified_host_utilities-7-0.x86_64
[root@rhel1 ~]# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1994-05.com.redhat:rhel1.demo.netapp.com
[root@rhel1 ~]#

3. In the /etc/iscsi/iscsid.conf file the node.session.timeo.replacement_timeout value is set to 5 to better support timely path failover, and the node.startup value is set to automatic so that the system will automatically log in to the iSCSI node at startup.

[root@rhel1 ~]# grep replacement_time /etc/iscsi/iscsid.conf
#node.session.timeo.replacement_timeout = 120
node.session.timeo.replacement_timeout = 5
[root@rhel1 ~]# grep node.startup /etc/iscsi/iscsid.conf
# node.startup = automatic
node.startup = automatic
[root@rhel1 ~]#

4. You will find that the Red Hat Linux hosts in the lab have pre-installed the DM-Multipath packages and a /etc/multipath.conf file pre-configured to support multi-pathing so that the RHEL host can access the LUN using all of the SAN LIFs you created for the svmluns SVM.

[root@rhel1 ~]# rpm -q device-mapper
device-mapper-1.02.79-8.el6.x86_64
[root@rhel1 ~]# rpm -q device-mapper-multipath
device-mapper-multipath-0.4.9-72.el6.x86_64
[root@rhel1 ~]# cat /etc/multipath.conf
# For a complete list of the default configuration values, see
# /usr/share/doc/device-mapper-multipath-0.4.9/multipath.conf.defaults
# For a list of configuration options with descriptions, see
# /usr/share/doc/device-mapper-multipath-0.4.9/multipath.conf.annotated
#
# REMEMBER: After updating multipath.conf, you must run
#
# service multipathd reload
#
# for the changes to take effect in multipathd
# NetApp recommended defaults
defaults {
    flush_on_last_del yes
    max_fds max
    queue_without_daemon no
    user_friendly_names no
    dev_loss_tmo infinity
    fast_io_fail_tmo 5
}
blacklist {
    devnode "^sda"
    devnode "^hd[a-z]"
    devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
    devnode "^ccis.*"
}
devices {
    # NetApp iSCSI LUNs
    device {
        vendor "NETAPP"
        product "LUN"
        path_grouping_policy group_by_prio
        features "3 queue_if_no_path pg_init_retries 50"
        prio "alua"
        path_checker tur
        failback immediate
        path_selector "round-robin 0"
        hardware_handler "1 alua"
        rr_weight uniform
        rr_min_io 128
        getuid_callout "/lib/udev/scsi_id -g -u -d /dev/%n"
    }
}
[root@rhel1 ~]#

5. You now need to start the iSCSI software service on rhel1 and configure it to start automatically at boot time. Note that a force-start is only necessary the very first time you start the iscsid service on a host.

[root@rhel1 ~]# service iscsid status
iscsid is stopped
[root@rhel1 ~]# service iscsid force-start
Starting iscsid: OK
[root@rhel1 ~]# service iscsi status
No active sessions
[root@rhel1 ~]# chkconfig iscsi on
[root@rhel1 ~]# chkconfig --list iscsi
iscsi           0:off  1:off  2:on   3:on   4:on   5:on   6:off
[root@rhel1 ~]#

6. Next discover the available targets using the iscsiadm command. Note that the exact values used for the node paths may differ in your lab from what is shown in this example, and that after running this command there will not yet be active iSCSI sessions because you have not yet created the necessary device files.

[root@rhel1 ~]# iscsiadm --mode discovery --op update --type sendtargets --portal 192.168.0.133
192.168.0.133:3260,1028 iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4
192.168.0.136:3260,1031 iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4
192.168.0.135:3260,1030 iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4
192.168.0.134:3260,1029 iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4
[root@rhel1 ~]# iscsiadm --mode session
iscsiadm: No active sessions.
[root@rhel1 ~]#

7. Create the devices necessary to support the discovered nodes, after which the sessions become active.

[root@rhel1 ~]# iscsiadm --mode node -l all
Logging in to [iface: default, target: iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4, portal: 192.168.0.134,3260] (multiple)
Logging in to [iface: default, target: iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4, portal: 192.168.0.136,3260] (multiple)
Logging in to [iface: default, target: iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4, portal: 192.168.0.135,3260] (multiple)
Logging in to [iface: default, target: iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4, portal: 192.168.0.133,3260] (multiple)
Login to [iface: default, target: iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4, portal: 192.168.0.134,3260] successful.
Login to [iface: default, target: iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4, portal: 192.168.0.136,3260] successful.
Login to [iface: default, target: iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4, portal: 192.168.0.135,3260] successful.
Login to [iface: default, target: iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4, portal: 192.168.0.133,3260] successful.
[root@rhel1 ~]# iscsiadm --mode session
tcp: [1] 192.168.0.134:3260,1029 iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4
tcp: [2] 192.168.0.136:3260,1031 iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4
tcp: [3] 192.168.0.135:3260,1030 iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4
tcp: [4] 192.168.0.133:3260,1028 iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4
[root@rhel1 ~]#

8. At this point the Linux client sees the LUN over all four paths, but it does not yet understand that all four paths represent the same LUN.

[root@rhel1 ~]# sanlun lun show
controller(7mode)/                            device          host      lun
vserver(Cmode)   lun-pathname                 filename        adapter   protocol   size    product
------------------------------------------------------------------------------------------------
svmluns          /vol/linluns/linux.lun       /dev/sde        host3     iSCSI      10g     cDOT
svmluns          /vol/linluns/linux.lun       /dev/sdd        host4     iSCSI      10g     cDOT
svmluns          /vol/linluns/linux.lun       /dev/sdc        host5     iSCSI      10g     cDOT
svmluns          /vol/linluns/linux.lun       /dev/sdb        host6     iSCSI      10g     cDOT
[root@rhel1 ~]#

9. Since the lab includes a pre-configured /etc/multipath.conf file, you just need to start the multipathd service to handle the multiple path management, and configure it to start automatically at boot time.

[root@rhel1 ~]# service multipathd status
multipathd is stopped
[root@rhel1 ~]# service multipathd start
Starting multipathd daemon: OK
[root@rhel1 ~]# service multipathd status
multipathd (pid 8656) is running...
[root@rhel1 ~]# chkconfig multipathd on
[root@rhel1 ~]# chkconfig --list multipathd
multipathd      0:off   1:off   2:on    3:on    4:on    5:on    6:off
[root@rhel1 ~]#

10. The multipath command displays the configuration of DM-Multipath, and the multipath -ll command displays a list of the multipath devices. DM-Multipath maintains a device file under /dev/mapper that you use to access the multipathed LUN (in order to create a filesystem on it and to mount it); the first line of output from the multipath -ll command lists the name of that device file (in this example 3600a0980774f6a34515d464d486c7137). The autogenerated name for this device file will likely differ in your copy of the lab. Also pay attention to the output of the sanlun lun show -p command, which shows information about the Data ONTAP path of the LUN, the LUN's size, its device file name under /dev/mapper, the multipath policy, and also information about the various device paths themselves.

[root@rhel1 ~]# multipath -ll
3600a0980774f6a34515d464d486c7137 dm-2 NETAPP,LUN C-Mode
size=10G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 6:0:0:0 sdb 8:16 active ready running
| `- 3:0:0:0 sde 8:64 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
  |- 5:0:0:0 sdc 8:32 active ready running
  `- 4:0:0:0 sdd 8:48 active ready running
[root@rhel1 ~]# ls -l /dev/mapper
total 0
lrwxrwxrwx 1 root root      7 Oct 20 06:50 3600a0980774f6a34515d464d486c7137 -> ../dm-2
crw-rw---- 1 root root 10, 58 Oct 19 18:57 control
lrwxrwxrwx 1 root root      7 Oct 19 18:57 vg_rhel1-lv_root -> ../dm-0
lrwxrwxrwx 1 root root      7 Oct 19 18:57 vg_rhel1-lv_swap -> ../dm-1
[root@rhel1 ~]# sanlun lun show -p

                ONTAP Path: svmluns:/vol/linluns/linux.lun
                       LUN: 0
                  LUN Size: 10g
                   Product: cDOT
               Host Device: 3600a0980774f6a34515d464d486c7137
          Multipath Policy: round-robin 0
        Multipath Provider: Native
--------- ---------- ------- ------------ ----------------------------------------------
host      vserver    path    path         /dev/    host      vserver
state     type       node    adapter      LIF
--------- ---------- ------- ------------ ----------------------------------------------
up        primary    sdb     host6        cluster1-01_iscsi_lif_1
up        primary    sde     host3        cluster1-01_iscsi_lif_2
up        secondary  sdc     host5        cluster1-02_iscsi_lif_1
up        secondary  sdd     host4        cluster1-02_iscsi_lif_2
[root@rhel1 ~]#

You can see even more detail about the configuration of multipath and the LUN as a whole by running the commands multipath -v3 -d -ll or iscsiadm -m session -P 3. As the output of these commands is rather lengthy, it is omitted here.

11. The LUN is now fully configured for multipath access, so the only steps remaining before you can use the LUN on the Linux host are to create a filesystem and mount it. When you run the following commands in your lab you will need to substitute in the /dev/mapper/… string that identifies your LUN (get that string from the output of ls -l /dev/mapper):

[root@rhel1 ~]# mkfs.ext4 /dev/mapper/3600a0980774f6a34515d464d486c7137
mke2fs 1.41.12 (17-May-2010)
Discarding device blocks:       0/204800 done
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=1 blocks, Stripe width=16 blocks
655360 inodes, 2621440 blocks
131072 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2684354560
80 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 34 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
[root@rhel1 ~]# mkdir /linuxlun
[root@rhel1 ~]# mount -t ext4 -o discard /dev/mapper/3600a0980774f6a34515d464d486c7137 /linuxlun
[root@rhel1 ~]# df
Filesystem                                    1K-blocks    Used Available Use% Mounted on
/dev/mapper/vg_rhel1-lv_root                   11877388 4962816   6311232  45% /
tmpfs                                            444612      76    444536   1% /dev/shm
/dev/sda1                                        495844   40084    430160   9% /boot
svm1:/                                            19456     128     19328   1% /svm1
/dev/mapper/3600a0980774f6a34515d464d486c7137  10321208  154100   9642820   2% /linuxlun
[root@rhel1 ~]# ls /linuxlun
lost+found
[root@rhel1 ~]# echo "hello from rhel1" > /linuxlun/test.txt
[root@rhel1 ~]# cat /linuxlun/test.txt
hello from rhel1
[root@rhel1 ~]# ls -l /linuxlun/test.txt
-rw-r--r-- 1 root root 6 Oct 20 06:54 /linuxlun/test.txt
[root@rhel1 ~]#

The discard option for mount allows the Red Hat host to utilize space reclamation for the LUN.
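If you prefer not to mount with the discard option, freed blocks can instead be reclaimed on demand. The following is a minimal sketch assuming the fstrim utility from util-linux-ng is present on the host; it is not a required step in this lab, and the reported byte count will vary:

[root@rhel1 ~]# fstrim -v /linuxlun
/linuxlun: 10309599232 bytes were trimmed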

12. To have RHEL automatically mount the LUN's filesystem at boot time, run the following command (modified to reflect the multipath device path being used in your instance of the lab) to add the mount information to the /etc/fstab file. The following command should be entered as a single line.

[root@rhel1 ~]# echo '/dev/mapper/3600a0980774f6a34515d464d486c7137 /linuxlun ext4 _netdev,discard,defaults 0 0' >> /etc/fstab
[root@rhel1 ~]#
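Before relying on the new entry at boot, it is prudent to verify that it parses cleanly. The following sequence is a simple sanity check using only standard mount tooling; remember to substitute your own device name:

[root@rhel1 ~]# umount /linuxlun
[root@rhel1 ~]# mount -a
[root@rhel1 ~]# df -h /linuxlun
Filesystem                                     Size  Used Avail Use% Mounted on
/dev/mapper/3600a0980774f6a34515d464d486c7137  9.9G  151M  9.2G   2% /linuxlun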


5 References

The following references were used in writing this lab guide.

• TR-3982: “NetApp Clustered Data ONTAP 8.2.X – an Introduction”, July 2014
• TR-4100: “Nondisruptive Operations and SMB File Shares for Clustered Data ONTAP”, April 2013
• TR-4129: “Namespaces in clustered Data ONTAP”, July 2014


6 Version History

Version        Date            Document Version History
-------------- --------------- -------------------------------------------------------------
Version 1.0    October 2014    Initial Release for Hands On Labs
Version 1.0.1  December 2014   Updates for Lab on Demand
Version 1.1    April 2015      Updated for Data ONTAP 8.3GA and other application software.
                               NDO section spun out into a separate lab guide.
Version 1.2    October 2015    Updated for Data ONTAP 8.3.1GA and other application software.


7 CLI Introduction

This begins the CLI version of the Basic Concepts for Clustered Data ONTAP 8.3.1 lab guide.


8 Introduction

This lab introduces the fundamentals of clustered Data ONTAP®. In it you will start with a pre-created 2-node cluster, and configure Windows 2012R2 and Red Hat Enterprise Linux 6.6 hosts to access storage on the cluster using CIFS, NFS, and iSCSI.

8.1 Why clustered Data ONTAP?

One of the key ways to understand the benefits of clustered Data ONTAP is to consider server virtualization. Before server virtualization, system administrators frequently deployed applications on dedicated servers in order to maximize application performance, and to avoid the instabilities often encountered when combining multiple applications on the same operating system instance. While this design approach was effective, it also had the following drawbacks:

• It did not scale well — adding new servers for every new application was expensive.
• It was inefficient — most servers are significantly under-utilized, and businesses are not extracting the full benefit of their hardware investment.
• It was inflexible — re-allocating standalone server resources for other purposes is time consuming, staff intensive, and highly disruptive.

Server virtualization directly addresses all three of these limitations by decoupling the application instance from the underlying physical hardware. Multiple virtual servers can share a pool of physical hardware, allowing businesses to consolidate their server workloads to a smaller set of more effectively utilized physical servers. Additionally, the ability to transparently migrate running virtual machines across a pool of physical servers reduces the impact of downtime due to scheduled maintenance activities.

Clustered Data ONTAP brings these same benefits, and many others, to storage systems. As with server virtualization, clustered Data ONTAP enables you to combine multiple physical storage controllers into a single logical cluster that can non-disruptively service multiple storage workload needs. With clustered Data ONTAP you can:

• Combine different types and models of NetApp storage controllers (known as nodes) into a shared physical storage resource pool (referred to as a cluster).
• Support multiple data access protocols (CIFS, NFS, Fibre Channel, iSCSI, FCoE) concurrently on the same storage cluster.
• Consolidate various storage workloads to the cluster. Each workload can be assigned its own Storage Virtual Machine (SVM), which is essentially a dedicated virtual storage controller, and its own data volumes, LUNs, CIFS shares, and NFS exports.
• Support multi-tenancy with delegated administration of SVMs. Tenants can be different companies, business units, or even individual application owners, each with their own distinct administrators whose admin rights are limited to just the assigned SVM.
• Use Quality of Service (QoS) capabilities to manage resource utilization between storage workloads.
• Non-disruptively migrate live data volumes and client connections from one cluster node to another.
• Non-disruptively scale the cluster out by adding nodes. Nodes can likewise be non-disruptively removed from the cluster, meaning that you can non-disruptively scale a cluster up and down during hardware refresh cycles.
• Leverage multiple nodes in the cluster to simultaneously service a given SVM's storage workloads. This means that businesses can scale out their SVMs beyond the bounds of a single physical node in response to growing storage and performance requirements, all non-disruptively.
• Apply software and firmware updates, and configuration changes without downtime.


8.2 Lab Objectives

This lab explores fundamental concepts of clustered Data ONTAP, and utilizes a modular design to allow you to focus on the topics that specifically interest you. The "Clusters" section is a prerequisite for the other sections. If you are interested in NAS functionality then complete the “Storage Virtual Machines for NFS and CIFS” section. If you are interested in SAN functionality, then complete the “Storage Virtual Machines for iSCSI” section, and at least one of its Windows or Linux subsections (you may do both if you so choose).

Here is a summary of the exercises in this lab, along with their Estimated Completion Times (ECT):

• Clusters (Required, ECT = 20 minutes)
  • Explore a cluster.
  • View Advanced Drive Partitioning.
  • Create a data aggregate.
  • Create a Subnet.
• Storage Virtual Machines for NFS and CIFS (Optional, ECT = 40 minutes)
  • Create a Storage Virtual Machine.
  • Create a volume on the Storage Virtual Machine.
  • Configure the Storage Virtual Machine for CIFS and NFS access.
  • Mount a CIFS share from the Storage Virtual Machine on a Windows client.
  • Mount a NFS volume from the Storage Virtual Machine on a Linux client.
• Storage Virtual Machines for iSCSI (Optional, ECT = 90 minutes including all optional subsections)
  • Create a Storage Virtual Machine.
  • Create a volume on the Storage Virtual Machine.
  • For Windows (Optional, ECT = 40 minutes)
    • Create a Windows LUN on the volume and map the LUN to an igroup.
    • Configure a Windows client for iSCSI and MPIO and mount the LUN.
  • For Linux (Optional, ECT = 40 minutes)
    • Create a Linux LUN on the volume and map the LUN to an igroup.
    • Configure a Linux client for iSCSI and multipath and mount the LUN.

This lab includes instructions for completing each of these tasks using either System Manager, NetApp's graphical administration interface, or the Data ONTAP command line. The end state of the lab produced by either method is exactly the same, so use whichever method you are the most comfortable with.

8.3 Prerequisites

This lab introduces clustered Data ONTAP, and makes no assumption that the user has previous experience with Data ONTAP. The lab does assume some basic familiarity with storage system related concepts such as RAID, CIFS, NFS, LUNs, and DNS.

This lab includes steps for mapping shares and mounting LUNs on a Windows client. These steps assume that the lab user has a basic familiarity with Microsoft Windows.

This lab also includes steps for mounting NFS volumes and LUNs on a Linux client. All of those steps are performed from the Linux command line, and assume a basic working knowledge of the Linux command line. A basic working knowledge of a text editor such as vi may be useful, but is not required.


8.4 Accessing the Command Line

PuTTY is the terminal emulation program used in the lab to log into Linux hosts and storage controllers in order to run command line commands.

1. The launch icon for the PuTTY application is pinned to the taskbar on the Windows host JUMPHOST, as shown in the following screenshot; just double-click the icon to launch it.

Tip: If you already have a PuTTY session open and you want to start another (even to a different host), you will instead need to right-click the PuTTY icon and select PuTTY from the context menu.


Figure 8-1:

Once PuTTY launches you can connect to one of the hosts in the lab by following these steps. This example shows a user connecting to the Data ONTAP cluster named cluster1.

2. By default PuTTY should launch into the “Basic options for your PuTTY session” display as shown in the screenshot. If you accidentally navigate away from this view, just click on the Session category item to return to this view.

3. Use the scrollbar in the Saved Sessions box to navigate down to the desired host and double-click it to open the connection. A terminal window will open and you will be prompted to log into the host. You can find the correct username and password for the host in the Lab Host Credentials table found in the “Lab Environment” section of this guide.



Figure 8-2:

If you are new to the clustered Data ONTAP CLI, the length of the commands can seem a little intimidating. However, the commands are actually quite easy to use if you remember the following 3 tips:

• Make liberal use of the Tab key while entering commands, as the clustered Data ONTAP command shell supports tab completion. If you hit the Tab key while entering a portion of a command word, the command shell will examine the context and try to complete the rest of the word for you. If there is insufficient context to make a single match, it will display a list of all the potential matches. Tab completion also usually works with command argument values, but there are some cases where there is simply not enough context for it to know what you want, in which case you will just need to type in the argument value.

• You can recall your previously entered commands by repeatedly pressing the up-arrow key, and you can then navigate up and down the list using the up-arrow and down-arrow keys. When you find a command you want to modify, you can use the left-arrow, right-arrow, and Delete keys to navigate around in a selected command to edit it.

• Entering a question mark character (?) causes the CLI to print contextual help information. You can use this character on a line by itself or while entering a command.

The clustered Data ONTAP command line supports a number of additional usability features that make the command line much easier to use. If you are interested in learning more about this topic then please refer to the "Hands-On Lab for Advanced Features of Clustered Data ONTAP 8.3.1" lab, which contains an entire section dedicated to this subject.


9 Lab Environment

The following figure contains a diagram of the environment for this lab.

Figure 9-1:

All of the servers and storage controllers presented in this lab are virtual devices, and the networks that interconnect them are exclusive to your lab session. While we encourage you to follow the demonstration steps outlined in this lab guide, you are free to deviate from this guide and experiment with other Data ONTAP features that interest you. While the virtual storage controllers (vsims) used in this lab offer nearly all of the same functionality as physical storage controllers, they are not capable of providing the same performance as a physical controller, which is why these labs are not suitable for performance testing.

Table 1 provides a list of the servers and storage controller nodes in the lab, along with their IP addresses.

Table 1: Lab Host Credentials

Hostname     Description                         IP Address(es)  Username            Password
------------ ----------------------------------- --------------- ------------------- ---------
JUMPHOST     Windows 2012R2 Remote Access host   192.168.0.5     Demo\Administrator  Netapp1!
RHEL1        Red Hat 6.6 x64 Linux host          192.168.0.61    root                Netapp1!
RHEL2        Red Hat 6.6 x64 Linux host          192.168.0.62    root                Netapp1!
DC1          Active Directory Server             192.168.0.253   Demo\Administrator  Netapp1!
cluster1     Data ONTAP cluster                  192.168.0.101   admin               Netapp1!
cluster1-01  Data ONTAP cluster node             192.168.0.111   admin               Netapp1!
cluster1-02  Data ONTAP cluster node             192.168.0.112   admin               Netapp1!

Table 2 lists the NetApp software that is pre-installed on the various hosts in this lab.


Table 2: Preinstalled NetApp Software

Hostname      Description
------------- ----------------------------------------------------------------------------
JUMPHOST      Data ONTAP DSM v4.1 for Windows MPIO, Windows Unified Host Utility Kit
              v7.0.0, NetApp PowerShell Toolkit v3.2.1.68
RHEL1, RHEL2  Linux Unified Host Utilities Kit v7.0


10 Using the clustered Data ONTAP Command Line

If you choose to utilize the clustered Data ONTAP command line to complete portions of this lab, then you should be aware that clustered Data ONTAP supports command line completion. When entering a command at the Data ONTAP command line, you can at any time mid-typing hit the Tab key, and if you have entered enough unique text for the command interpreter to determine what the rest of the argument would be, it will automatically fill in that text for you. For example, entering the text “cluster sh” and then hitting the Tab key will automatically expand the entered command text to cluster show.

At any point mid-typing you can also enter the ? character, and the command interpreter will list any potential matches for the command string. This is a particularly useful feature if you cannot remember all of the various command line options for a given clustered Data ONTAP command; for example, to see the list of options available for the cluster show command you can enter:

cluster1::> cluster show ?
  [ -instance | -fields <fieldname>, ... ]
  [[-node] <nodename>]                      Node
  [ -eligibility {true|false} ]             Eligibility
  [ -health {true|false} ]                  Health
cluster1::>

When using tab completion, if the Data ONTAP command interpreter is unable to identify a unique expansion, it will display a list of potential matches similar to what using the ? character does.

cluster1::> cluster s
Error: Ambiguous command. Possible matches include:
  cluster show
  cluster statistics
cluster1::>

The Data ONTAP commands are structured hierarchically. When you log in you are placed at the root of that command hierarchy, but you can step into a lower branch of the hierarchy by entering one of the base commands. For example, when you first log in to the cluster, enter the ? command to see the list of available base commands, as follows:

cluster1::> ?
  up                          Go up one directory
  cluster>                    Manage clusters
  dashboard>                  (DEPRECATED)-Display dashboards
  event>                      Manage system events
  exit                        Quit the CLI session
  export-policy               Manage export policies and rules
  history                     Show the history of commands for this CLI session
  job>                        Manage jobs and job schedules
  lun>                        Manage LUNs
  man                         Display the on-line manual pages
  metrocluster>               Manage MetroCluster
  network>                    Manage physical and virtual network connections
  qos>                        QoS settings
  redo                        Execute a previous command
  rows                        Show/Set the rows for this CLI session
  run                         Run interactive or non-interactive commands in the nodeshell
  security>                   The security directory
  set                         Display/Set CLI session settings
  snapmirror>                 Manage SnapMirror
  statistics>                 Display operational statistics
  storage>                    Manage physical storage, including disks, aggregates, and failover
  system>                     The system directory
  top                         Go to the top-level directory
  volume>                     Manage virtual storage, including volumes, snapshots, and mirrors
  vserver>                    Manage Vservers
cluster1::>


The > character at the end of a command signifies that it has a sub-hierarchy; enter the vserver command to enter the vserver sub-hierarchy.

cluster1::> vserver
cluster1::vserver> ?
  active-directory>           Manage Active Directory
  add-aggregates              Add aggregates to the Vserver
  add-protocols               Add protocols to the Vserver
  audit>                      Manage auditing of protocol requests that the Vserver services
  check>                      The check directory
  cifs>                       Manage the CIFS configuration of a Vserver
  context                     Set Vserver context
  create                      Create a Vserver
  dashboard>                  The dashboard directory
  data-policy>                Manage data policy
  delete                      Delete a Vserver
  export-policy>              Manage export policies and rules
  fcp>                        Manage the FCP service on a Vserver
  fpolicy>                    Manage FPolicy
  group-mapping>              The group-mapping directory
  iscsi>                      Manage the iSCSI services on a Vserver
  locks>                      Manage Client Locks
  modify                      Modify a Vserver
  name-mapping>               The name-mapping directory
  nfs>                        Manage the NFS configuration of a Vserver
  peer>                       Create and manage Vserver peer relationships
  remove-aggregates           Remove aggregates from the Vserver
  remove-protocols            Remove protocols from the Vserver
  rename                      Rename a Vserver
  security>                   Manage ontap security
  services>                   The services directory
  show                        Display Vservers
  show-protocols              Show protocols for Vserver
  smtape>                     The smtape directory
  start                       Start a Vserver
  stop                        Stop a Vserver
  vscan>                      Manage Vscan
cluster1::vserver>

Notice how the prompt changes to reflect that you are now in the vserver sub-hierarchy, and that some of the subcommands have sub-hierarchies of their own. To return to the root of the hierarchy enter the top command; you can also navigate upwards one level at a time by using the up or .. commands.

cluster1::vserver> top
cluster1::>

The Data ONTAP command interpreter supports command history. By repeatedly hitting the up-arrow key you can step through the series of commands you ran earlier, and you can selectively execute a given command again when you find it by hitting the Enter key. You can also use the left and right arrow keys to edit the command before you run it again.


11 Lab Activities

11.1 Clusters

Expected Completion Time: 20 Minutes

A cluster is a group of physical storage controllers, or nodes, that are joined together for the purpose of serving data to end users. The nodes in a cluster can pool their resources together so that the cluster can distribute its work across the member nodes. Communication and data transfer between member nodes (such as when a client accesses data on a node other than the one actually hosting the data) takes place over a 10Gb cluster-interconnect network to which all the nodes are connected, while management and client data traffic passes over separate management and data networks configured on the member nodes.

Clusters typically consist of one or more NetApp storage controller High Availability (HA) pairs. Both controllers in an HA pair actively host and serve data, but they are also capable of taking over their partner's responsibilities in the event of a service disruption by virtue of their redundant cable paths to each other's disk storage. Having multiple HA pairs in a cluster allows the cluster to scale out to handle greater workloads, and to support non-disruptive migrations of volumes and client connections to other nodes in the cluster resource pool. This means that cluster expansion and technology refreshes can take place while the cluster remains fully online and serving data.

Since clusters are almost always comprised of one or more HA pairs, a cluster almost always contains an even number of controller nodes. There is one exception to this rule, the “single node cluster”, which is a special cluster configuration that supports small storage deployments using a single physical controller head. The primary difference between single node and standard clusters, besides the number of nodes, is that a single node cluster does not have a cluster network. Single node clusters can be converted into traditional multi-node clusters, and at that point become subject to all the standard cluster requirements like the need to utilize an even number of nodes consisting of HA pairs. This lab does not contain a single node cluster, and so this lab guide does not discuss them further.

Data ONTAP 8.3 clusters that only serve NFS and CIFS can scale up to a maximum of 24 nodes, although the node limit can be lower depending on the model of FAS controller in use. Data ONTAP 8.3 clusters that also host iSCSI and FC can scale up to a maximum of 8 nodes.

This lab utilizes simulated NetApp storage controllers rather than physical FAS controllers. The simulated controller, also known as a vsim, is a virtual machine that simulates the functionality of a physical controller without the need for dedicated controller hardware. The vsim is not designed for performance testing, but does offer much of the same functionality as a physical FAS controller, including the ability to generate I/O to disks. This makes the vsim a powerful tool to explore and experiment with Data ONTAP product features. The vsim is limited when a feature requires a specific physical capability that the vsim does not support. For example, vsims do not support Fibre Channel connections, which is why this lab uses iSCSI to demonstrate block storage functionality.

This lab starts with a pre-created, minimally configured cluster. The pre-created cluster already includes Data ONTAP licenses, the cluster's basic network configuration, and a pair of pre-configured HA controllers. In this next section you will create the aggregates that are used by the SVMs that you will create in later sections of the lab. You will also take a look at the new Advanced Drive Partitioning feature introduced in clustered Data ONTAP 8.3.

11.1.1 Advanced Drive Partitioning

Disks, whether Hard Disk Drives (HDD) or Solid State Disks (SSD), are the fundamental unit of physical storage in clustered Data ONTAP, and are tied to a specific cluster node by virtue of their physical connectivity (i.e., cabling) to a given controller head.


Data ONTAP manages disks in groups called aggregates. An aggregate defines the RAID properties for a group of disks that are all physically attached to the same node. A given disk can only be a member of a single aggregate.

By default each cluster node has one aggregate known as the root aggregate, which is a group of the node's local disks that host the node's Data ONTAP operating system. A node's root aggregate is automatically created during Data ONTAP installation in a minimal RAID-DP configuration. This means it is initially comprised of 3 disks (1 data, 2 parity), and has a name that begins with the string aggr0. For example, in this lab the root aggregate of the node cluster1-01 is named “aggr0_cluster1_01”, and the root aggregate of the node cluster1-02 is named “aggr0_cluster1_02”.

On higher end FAS systems that have many disks, the requirement to dedicate 3 disks for each controller's root aggregate is not a burden, but for entry level FAS systems that only have 24 or 12 disks this root aggregate disk overhead requirement significantly reduces the disks available for storing user data. To improve usable capacity, NetApp introduced Advanced Drive Partitioning in 8.3, which divides the Hard Disk Drives (HDDs) on nodes that have this feature enabled into two partitions: a small root partition, and a much larger data partition. Data ONTAP allocates the root partitions to the node root aggregate, and the data partitions to data aggregates. Each partition behaves like a virtual disk, so in terms of RAID, Data ONTAP treats these partitions just like physical disks when creating aggregates. The key benefit is that a much higher percentage of the node's overall disk capacity is now available to host user data.

Data ONTAP only supports HDD partitioning for FAS 22xx and FAS25xx controllers, and only for HDDs installed in their internal shelf on those models. Advanced Drive Partitioning can only be enabled at system installation time, and there is no way to convert an existing system to use Advanced Drive Partitioning other than to completely evacuate the affected HDDs and re-install Data ONTAP.

All-Flash FAS (AFF) supports a variation of Advanced Drive Partitioning that utilizes SSDs instead of HDDs. The capability is available for entry-level, mid-range, and high-end AFF platforms. Data ONTAP 8.3 also introduces SSD partitioning for use with Flash Pools, but the details of that feature lie outside the scope of this lab.

In this section, you use the CLI to determine if a cluster node is utilizing Advanced Drive Partitioning.

If you do not already have a PuTTY session established to cluster1, launch PuTTY as described in the “Accessing the Command Line” section at the beginning of this guide, and connect to the host cluster1 using the username admin and the password Netapp1!.

1. List all of the physical disks attached to the cluster:

cluster1::> storage disk show
                     Usable           Disk    Container   Container
Disk                   Size Shelf Bay Type    Type        Name              Owner
---------------- ---------- ----- --- ------- ----------- ----------------- -----------
VMw-1.1             28.44GB     -   0 VMDISK  shared      aggr0_cluster1_01 cluster1-01
VMw-1.2             28.44GB     -   1 VMDISK  shared      aggr0_cluster1_01 cluster1-01
VMw-1.3             28.44GB     -   2 VMDISK  shared      aggr0_cluster1_01 cluster1-01
VMw-1.4             28.44GB     -   3 VMDISK  shared      aggr0_cluster1_01 cluster1-01
VMw-1.5             28.44GB     -   4 VMDISK  shared      aggr0_cluster1_01 cluster1-01
VMw-1.6             28.44GB     -   5 VMDISK  shared      aggr0_cluster1_01 cluster1-01
VMw-1.7             28.44GB     -   6 VMDISK  shared      aggr0_cluster1_01 cluster1-01
VMw-1.8             28.44GB     -   8 VMDISK  shared      aggr0_cluster1_01 cluster1-01
VMw-1.9             28.44GB     -   9 VMDISK  shared      aggr0_cluster1_01 cluster1-01
VMw-1.10            28.44GB     -  10 VMDISK  shared      aggr0_cluster1_01 cluster1-01
VMw-1.11            28.44GB     -  11 VMDISK  shared      -                 cluster1-01
VMw-1.12            28.44GB     -  12 VMDISK  shared      -                 cluster1-01
VMw-1.13            28.44GB     -   0 VMDISK  shared      aggr0_cluster1_02 cluster1-02
VMw-1.14            28.44GB     -   1 VMDISK  shared      aggr0_cluster1_02 cluster1-02
VMw-1.15            28.44GB     -   2 VMDISK  shared      aggr0_cluster1_02 cluster1-02
VMw-1.16            28.44GB     -   3 VMDISK  shared      aggr0_cluster1_02 cluster1-02
VMw-1.17            28.44GB     -   4 VMDISK  shared      aggr0_cluster1_02 cluster1-02
VMw-1.18            28.44GB     -   5 VMDISK  shared      aggr0_cluster1_02 cluster1-02
VMw-1.19            28.44GB     -   6 VMDISK  shared      aggr0_cluster1_02 cluster1-02
VMw-1.20            28.44GB     -   8 VMDISK  shared      aggr0_cluster1_02 cluster1-02
VMw-1.21            28.44GB     -   9 VMDISK  shared      aggr0_cluster1_02 cluster1-02
VMw-1.22            28.44GB     -  10 VMDISK  shared      aggr0_cluster1_02 cluster1-02
VMw-1.23            28.44GB     -  11 VMDISK  shared      -                 cluster1-02
VMw-1.24            28.44GB     -  12 VMDISK  shared      -                 cluster1-02
24 entries were displayed.
cluster1::>

The preceding command listed a total of 24 disks, 12 for each of the nodes in this two-node cluster. The container type for all the disks is “shared”, which indicates that the disks are partitioned. For disks that are not partitioned, you would typically see values like “spare”, “data”, “parity”, and “dparity”. The Owner field indicates which node the disk is assigned to, and the Container Name field indicates which aggregate the disk is assigned to. Notice that two disks for each node do not have a Container Name listed; these are spare disks that Data ONTAP can use as replacements in the event of a disk failure.
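As an aside, the storage disk show command accepts field filters that can make this kind of inspection quicker on larger systems. For example, filtering on the owner field limits the listing to a single node's disks; the output is simply a subset of what you saw above:

cluster1::> storage disk show -owner cluster1-01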

2. At this point, the only aggregates that exist on this new cluster are the root aggregates. List the aggregates that exist on the cluster:

cluster1::> aggr show
Aggregate     Size Available Used% State   #Vols  Nodes            RAID Status
--------- -------- --------- ----- ------- ------ ---------------- ------------
aggr0_cluster1_01
           10.26GB   510.6MB   95% online       1 cluster1-01      raid_dp,
                                                                   normal
aggr0_cluster1_02
           10.26GB   510.6MB   95% online       1 cluster1-02      raid_dp,
                                                                   normal
2 entries were displayed.
cluster1::>

3. Now list the disks that are members of the root aggregate for the node cluster1-01. Here is the command that you would ordinarily use to display that information for an aggregate that is not using partitioned disks.

cluster1::> storage disk show -aggregate aggr0_cluster1_01
There are no entries matching your query.

Info: One or more aggregates queried for use shared disks. Use "storage aggregate show-status" to get correct set of disks associated with these aggregates.
cluster1::>

4. As you can see, in this instance the preceding command is not able to produce a list of disks because this aggregate is using shared disks. Instead it refers you to the “storage aggregate show-status” command to query the aggregate for a list of its assigned disk partitions.

cluster1::> storage aggregate show-status -aggregate aggr0_cluster1_01
Owner Node: cluster1-01
 Aggregate: aggr0_cluster1_01 (online, raid_dp) (block checksums)
  Plex: /aggr0_cluster1_01/plex0 (online, normal, active, pool0)
   RAID Group /aggr0_cluster1_01/plex0/rg0 (normal, block checksums)
                                                              Usable Physical
     Position Disk                        Pool Type     RPM     Size     Size Status
     -------- --------------------------- ---- ----- ------ -------- -------- --------
     shared   VMw-1.1                      0   VMDISK     -   1.52GB  28.44GB (normal)
     shared   VMw-1.2                      0   VMDISK     -   1.52GB  28.44GB (normal)
     shared   VMw-1.3                      0   VMDISK     -   1.52GB  28.44GB (normal)
     shared   VMw-1.4                      0   VMDISK     -   1.52GB  28.44GB (normal)
     shared   VMw-1.5                      0   VMDISK     -   1.52GB  28.44GB (normal)
     shared   VMw-1.6                      0   VMDISK     -   1.52GB  28.44GB (normal)
     shared   VMw-1.7                      0   VMDISK     -   1.52GB  28.44GB (normal)
     shared   VMw-1.8                      0   VMDISK     -   1.52GB  28.44GB (normal)
     shared   VMw-1.9                      0   VMDISK     -   1.52GB  28.44GB (normal)
     shared   VMw-1.10                     0   VMDISK     -   1.52GB  28.44GB (normal)
10 entries were displayed.
cluster1::>

The output shows that aggr0_cluster1_01 is comprised of 10 disks, each with a usable size of 1.52 GB, and you know that the aggregate is using the listed disks' root partitions because aggr0_cluster1_01 is a root aggregate.

For a FAS controller that will be using Advanced Drive Partitioning, Data ONTAP automatically determines the size of the root and data disk partitions at system installation time. That determination is based on the quantity and size of the available disks assigned to each node. As you saw earlier, this particular cluster node has 12 disks, so during installation Data ONTAP partitioned all 12 disks but only assigned 10 of those root partitions to the root aggregate so that the node would have 2 spare disks available to protect against disk failures.

5. The Data ONTAP CLI includes a diagnostic level command that provides a more comprehensive single view of a system's partitioned disks. The following command shows the partitioned disks that belong to the node cluster1-01.

cluster1::> set -priv diag
Warning: These diagnostic commands are for use by NetApp personnel only.
Do you want to continue? {y|n}: y
cluster1::*> disk partition show -owner-node-name cluster1-01
                          Usable  Container     Container
Partition                 Size    Type          Name                         Owner
------------------------- ------- ------------- ---------------------------- -----------
VMw-1.1.P1                26.88GB spare         Pool0                        cluster1-01
VMw-1.1.P2                 1.52GB aggregate     /aggr0_cluster1_01/plex0/rg0 cluster1-01
VMw-1.2.P1                26.88GB spare         Pool0                        cluster1-01
VMw-1.2.P2                 1.52GB aggregate     /aggr0_cluster1_01/plex0/rg0 cluster1-01
VMw-1.3.P1                26.88GB spare         Pool0                        cluster1-01
VMw-1.3.P2                 1.52GB aggregate     /aggr0_cluster1_01/plex0/rg0 cluster1-01
VMw-1.4.P1                26.88GB spare         Pool0                        cluster1-01
VMw-1.4.P2                 1.52GB aggregate     /aggr0_cluster1_01/plex0/rg0 cluster1-01
VMw-1.5.P1                26.88GB spare         Pool0                        cluster1-01
VMw-1.5.P2                 1.52GB aggregate     /aggr0_cluster1_01/plex0/rg0 cluster1-01
VMw-1.6.P1                26.88GB spare         Pool0                        cluster1-01
VMw-1.6.P2                 1.52GB aggregate     /aggr0_cluster1_01/plex0/rg0 cluster1-01
VMw-1.7.P1                26.88GB spare         Pool0                        cluster1-01
VMw-1.7.P2                 1.52GB aggregate     /aggr0_cluster1_01/plex0/rg0 cluster1-01
VMw-1.8.P1                26.88GB spare         Pool0                        cluster1-01
VMw-1.8.P2                 1.52GB aggregate     /aggr0_cluster1_01/plex0/rg0 cluster1-01
VMw-1.9.P1                26.88GB spare         Pool0                        cluster1-01
VMw-1.9.P2                 1.52GB aggregate     /aggr0_cluster1_01/plex0/rg0 cluster1-01
VMw-1.10.P1               26.88GB spare         Pool0                        cluster1-01
VMw-1.10.P2                1.52GB aggregate     /aggr0_cluster1_01/plex0/rg0 cluster1-01
VMw-1.11.P1               26.88GB spare         Pool0                        cluster1-01
VMw-1.11.P2                1.52GB spare         Pool0                        cluster1-01
VMw-1.12.P1               26.88GB spare         Pool0                        cluster1-01
VMw-1.12.P2                1.52GB spare         Pool0                        cluster1-01
24 entries were displayed.
cluster1::*> set -priv admin
cluster1::>
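If you would rather avoid the diagnostic privilege level, the storage aggregate show-spare-disks command offers a similar admin-level view of unused root and data partition capacity. The output below is abbreviated and illustrative; expect your lab to show both nodes and possibly different values:

cluster1::> storage aggregate show-spare-disks -original-owner cluster1-01

Original Owner: cluster1-01
 Pool0
  Shared HDD Spares
                                                            Local    Local
                                                             Data     Root Physical
 Disk                        Type     RPM Checksum         Usable   Usable     Size
 --------------------------- ----- ------ -------------- -------- -------- --------
 VMw-1.11                    VMDISK     - block            26.88GB   1.52GB  28.44GB
 VMw-1.12                    VMDISK     - block            26.88GB   1.52GB  28.44GB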

11.1.2 Create a New Aggregate on Each Cluster Node

The only aggregates that exist on a newly created cluster are the node root aggregates. The root aggregate should not be used to host user data, so in this section you will be creating a new aggregate on each of the nodes in cluster1 so they can host the storage virtual machines, volumes, and LUNs that you will be creating later in this lab.


A node can host multiple aggregates depending on the data sizing, performance, and isolation needs of the storage workloads that it will be hosting. When you create a Storage Virtual Machine (SVM) you assign it to use one or more specific aggregates to host the SVM's volumes. Multiple SVMs can be assigned to use the same aggregate, which offers greater flexibility in managing storage space, whereas dedicating an aggregate to just a single SVM provides greater workload isolation.

For this lab, you will be creating a single user data aggregate on each node in the cluster.

1. Display a list of the disks attached to the node cluster1-01. (Note that you can omit the -nodelist option to display a list of the disks in the entire cluster.)

Note: By default the PuTTY window may wrap output lines because the window is too small; if this is the case for you, then simply expand the window by selecting its edge and dragging it wider, after which any subsequent output will utilize the visible width of the window.

cluster1::> disk show -nodelist cluster1-01
                     Usable           Disk    Container   Container
Disk                   Size Shelf Bay Type    Type        Name              Owner
---------------- ---------- ----- --- ------- ----------- ----------------- -----------
VMw-1.25            28.44GB     -   0 VMDISK  shared      aggr0_cluster1_01 cluster1-01
VMw-1.26            28.44GB     -   1 VMDISK  shared      aggr0_cluster1_01 cluster1-01
VMw-1.27            28.44GB     -   2 VMDISK  shared      aggr0_cluster1_01 cluster1-01
VMw-1.28            28.44GB     -   3 VMDISK  shared      aggr0_cluster1_01 cluster1-01
VMw-1.29            28.44GB     -   4 VMDISK  shared      aggr0_cluster1_01 cluster1-01
VMw-1.30            28.44GB     -   5 VMDISK  shared      aggr0_cluster1_01 cluster1-01
VMw-1.31            28.44GB     -   6 VMDISK  shared      aggr0_cluster1_01 cluster1-01
VMw-1.32            28.44GB     -   8 VMDISK  shared      aggr0_cluster1_01 cluster1-01
VMw-1.33            28.44GB     -   9 VMDISK  shared      aggr0_cluster1_01 cluster1-01
VMw-1.34            28.44GB     -  10 VMDISK  shared      aggr0_cluster1_01 cluster1-01
VMw-1.35            28.44GB     -  11 VMDISK  shared      -                 cluster1-01
VMw-1.36            28.44GB     -  12 VMDISK  shared      -                 cluster1-01
VMw-1.37            28.44GB     -   0 VMDISK  shared      aggr0_cluster1_02 cluster1-02
VMw-1.38            28.44GB     -   1 VMDISK  shared      aggr0_cluster1_02 cluster1-02
VMw-1.39            28.44GB     -   2 VMDISK  shared      aggr0_cluster1_02 cluster1-02
VMw-1.40            28.44GB     -   3 VMDISK  shared      aggr0_cluster1_02 cluster1-02
VMw-1.41            28.44GB     -   4 VMDISK  shared      aggr0_cluster1_02 cluster1-02
VMw-1.42            28.44GB     -   5 VMDISK  shared      aggr0_cluster1_02 cluster1-02
VMw-1.43            28.44GB     -   6 VMDISK  shared      aggr0_cluster1_02 cluster1-02
VMw-1.44            28.44GB     -   8 VMDISK  shared      aggr0_cluster1_02 cluster1-02
VMw-1.45            28.44GB     -   9 VMDISK  shared      aggr0_cluster1_02 cluster1-02
VMw-1.46            28.44GB     -  10 VMDISK  shared      aggr0_cluster1_02 cluster1-02
VMw-1.47            28.44GB     -  11 VMDISK  shared      -                 cluster1-02
VMw-1.48            28.44GB     -  12 VMDISK  shared      -                 cluster1-02
24 entries were displayed.
cluster1::>

2. Display a list of the aggregates on the cluster.

cluster1::> aggr show
Aggregate     Size Available Used% State   #Vols  Nodes            RAID Status
--------- -------- --------- ----- ------- ------ ---------------- ------------
aggr0_cluster1_01
           10.26GB   510.6MB   95% online       1 cluster1-01      raid_dp,
                                                                   normal
aggr0_cluster1_02
           10.26GB   510.6MB   95% online       1 cluster1-02      raid_dp,
                                                                   normal
2 entries were displayed.
cluster1::>

3. Create the aggregate named “aggr1_cluster1_01” on the node cluster1-01.

cluster1::> aggr create -aggregate aggr1_cluster1_01 -node cluster1-01 -diskcount 5
[Job 257] Job is queued: Create aggr1_cluster1_01.
[Job 257] Job succeeded: DONE
cluster1::>

4. Create the aggregate named “aggr1_cluster1_02” on the node cluster1-02.

cluster1::> aggr create -aggregate aggr1_cluster1_02 -node cluster1-02 -diskcount 5
[Job 258] Job is queued: Create aggr1_cluster1_02.
[Job 258] Job succeeded: DONE
cluster1::>
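If you want to see which data partitions Data ONTAP selected for one of the new aggregates, you can reuse the show-status command from the previous section. The disks chosen in your lab may differ from this abbreviated, illustrative output:

cluster1::> storage aggregate show-status -aggregate aggr1_cluster1_01
Owner Node: cluster1-01
 Aggregate: aggr1_cluster1_01 (online, raid_dp) (block checksums)
  Plex: /aggr1_cluster1_01/plex0 (online, normal, active, pool0)
   RAID Group /aggr1_cluster1_01/plex0/rg0 (normal, block checksums)
                                                              Usable Physical
     Position Disk                        Pool Type     RPM     Size     Size Status
     -------- --------------------------- ---- ----- ------ -------- -------- --------
     shared   VMw-1.25                     0   VMDISK     -  26.88GB  28.44GB (normal)
     shared   VMw-1.26                     0   VMDISK     -  26.88GB  28.44GB (normal)
     ...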


5. Display the list of aggregates on the cluster again.

cluster1::> aggr show
Aggregate     Size Available Used% State   #Vols  Nodes            RAID Status
--------- -------- --------- ----- ------- ------ ---------------- ------------
aggr0_cluster1_01
           10.26GB   510.6MB   95% online       1 cluster1-01      raid_dp,
                                                                   normal
aggr0_cluster1_02
           10.26GB   510.6MB   95% online       1 cluster1-02      raid_dp,
                                                                   normal
aggr1_cluster1_01
           72.53GB   72.53GB    0% online       0 cluster1-01      raid_dp,
                                                                   normal
aggr1_cluster1_02
           72.53GB   72.53GB    0% online       0 cluster1-02      raid_dp,
                                                                   normal
4 entries were displayed.
cluster1::>

11.1.3 Networks

This section discusses the network components that Clustered Data ONTAP provides to manage your cluster.

Ports are the physical Ethernet and Fibre Channel connections on each node, the interface groups (ifgrps) you can create to aggregate those connections, and the VLANs you can use to subdivide them.

A logical interface (LIF) is essentially an IP address that is associated with a port, and has a number of associated characteristics such as an assigned home node, an assigned physical home port, a list of physical ports it can fail over to, an assigned SVM, a role, a routing group, and so on. A given LIF can only be assigned to a single SVM, and since LIFs are mapped to physical network ports on cluster nodes, this means that an SVM runs, in part, on all nodes that are hosting its LIFs.

Routing tables in clustered Data ONTAP are defined for each Storage Virtual Machine. Since each SVM has its own routing table, changes to one SVM's routing table do not impact any other SVM's routing table.
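One way to see that isolation in practice, once an SVM exists, is to display its routing table by name. The svm1 name here anticipates the SVM you will create later in this lab, and the route shown is illustrative:

cluster1::> network route show -vserver svm1
Vserver             Destination     Gateway         Metric
------------------- --------------- --------------- ------
svm1
                    0.0.0.0/0       192.168.0.1     20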

IPspaces are new in Data ONTAP 8.3, and allow you to configure a Data ONTAP cluster to logically separate one IP network from another, even if those two networks are using the same IP address range. IPspaces are a multi-tenancy feature that allows storage service providers to share a cluster between different companies while still separating storage traffic for privacy and security. Every cluster includes a default IPspace to which Data ONTAP automatically assigns new SVMs, and that default IPspace is probably sufficient for most NetApp customers who deploy a cluster within a single company or organization that uses a non-conflicting IP address range.

Broadcast Domains are also new in Data ONTAP 8.3, and are collections of ports that all have access to the same layer 2 networks, both physical and virtual (i.e., VLANs). Every IPspace has its own set of Broadcast Domains, and Data ONTAP provides a default broadcast domain to go along with the default IPspace. Broadcast domains are used by Data ONTAP to determine what ports an SVM can use for its LIFs.

Subnets in Data ONTAP 8.3 are a convenience feature intended to make LIF creation and management easier for Data ONTAP administrators. A subnet is a pool of IP addresses that you can specify by name when creating a LIF. Data ONTAP will automatically assign an available IP address from the pool to the LIF, along with a subnet mask and a gateway. A subnet is scoped to a specific broadcast domain, so all the subnet's addresses belong to the same layer 3 network. Data ONTAP manages the pool automatically as you create or delete LIFs, and if you manually configure a LIF with an address from the pool, it will detect that the address is in use and mark it as such in the pool.

DNS Zones allow an SVM to manage DNS name resolution for its own LIFs, and since multiple LIFs can share the same DNS name, this allows the SVM to load balance traffic by IP address across the LIFs. To use DNS Zones you must configure your DNS server to delegate DNS authority for the subdomain to the SVM.
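For reference, an SVM's LIFs join a DNS zone at LIF creation time via the -dns-zone parameter of network interface create. The command below is a hypothetical sketch of the syntax, not a step to run now; the LIF and zone names are placeholders:

cluster1::> network interface create -vserver svm1 -lif svm1_lif_1 -role data
 -data-protocol nfs,cifs -home-node cluster1-01 -home-port e0c
 -subnet-name Demo -dns-zone svm1.demo.netapp.com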

11.1.3.1 Create Subnets

1. Display a list of the cluster's IPspaces. A cluster actually contains two IPspaces by default: the Cluster IPspace, which correlates to the cluster network that Data ONTAP uses to have cluster nodes communicate with each other, and the Default IPspace to which Data ONTAP automatically assigns all new SVMs. You can create more IPspaces if necessary, but that activity will not be covered in this lab.

cluster1::> network ipspace show
IPspace             Vserver List                  Broadcast Domains
------------------- ----------------------------- ----------------------------
Cluster             Cluster                       Cluster
Default             cluster1                      Default
2 entries were displayed.
cluster1::>

2. Display a list of the cluster's broadcast domains. Remember that broadcast domains are scoped to a single IPspace. The e0a ports on the cluster nodes are part of the Cluster broadcast domain in the Cluster IPspace. The remaining ports are part of the Default broadcast domain in the Default IPspace.

cluster1::> network port broadcast-domain show
IPspace Broadcast                                        Update
Name    Domain Name   MTU  Port List                     Status Details
------- ----------- ------ ----------------------------- --------------
Cluster Cluster       1500
                           cluster1-01:e0a               complete
                           cluster1-01:e0b               complete
                           cluster1-02:e0a               complete
                           cluster1-02:e0b               complete
Default Default       1500
                           cluster1-01:e0c               complete
                           cluster1-01:e0d               complete
                           cluster1-01:e0e               complete
                           cluster1-01:e0f               complete
                           cluster1-01:e0g               complete
                           cluster1-02:e0c               complete
                           cluster1-02:e0d               complete
                           cluster1-02:e0e               complete
                           cluster1-02:e0f               complete
                           cluster1-02:e0g               complete
2 entries were displayed.
cluster1::>

3. Display a list of the cluster’s subnets.

cluster1::> network subnet show
This table is currently empty.
cluster1::>

4. Data ONTAP does not include a default subnet, so you will need to create a subnet now. The specific command you will use depends on what sections of this lab guide you plan to complete, as you want to correctly align the IP address pool in your lab with the IP addresses used in the portions of this lab guide that you want to complete.

• If you plan to complete the NAS portion of this lab, enter the following command. Use this command as well if you plan to complete both the NAS and SAN portions of this lab.

cluster1::> network subnet create -subnet-name Demo -broadcast-domain Default -ipspace Default
 -subnet 192.168.0.0/24 -gateway 192.168.0.1 -ip-ranges 192.168.0.131-192.168.0.139
cluster1::>

• If you only plan to complete the SAN portion of this lab, then enter the following command instead.

cluster1::> network subnet create -subnet-name Demo -broadcast-domain Default -ipspace Default
 -subnet 192.168.0.0/24 -gateway 192.168.0.1 -ip-ranges 192.168.0.133-192.168.0.139
cluster1::>


5. Re-display the list of the cluster’s subnets. This example assumes you plan to complete the whole lab.

cluster1::> network subnet show
IPspace: Default
Subnet                     Broadcast                  Avail/
Name      Subnet           Domain    Gateway          Total   Ranges
--------- ---------------- --------- --------------- ------- ---------------------------
Demo      192.168.0.0/24   Default   192.168.0.1      9/9    192.168.0.131-192.168.0.139
cluster1::>

6. If you are interested in seeing a list of all of the network ports on your cluster, you can use the following command for that purpose.

cluster1::> network port show
                                                             Speed (Mbps)
Node   Port      IPspace      Broadcast Domain Link   MTU    Admin/Oper
------ --------- ------------ ---------------- ----- ------- ------------
cluster1-01
       e0a       Cluster      Cluster          up     1500   auto/1000
       e0b       Cluster      Cluster          up     1500   auto/1000
       e0c       Default      Default          up     1500   auto/1000
       e0d       Default      Default          up     1500   auto/1000
       e0e       Default      Default          up     1500   auto/1000
       e0f       Default      Default          up     1500   auto/1000
       e0g       Default      Default          up     1500   auto/1000
cluster1-02
       e0a       Cluster      Cluster          up     1500   auto/1000
       e0b       Cluster      Cluster          up     1500   auto/1000
       e0c       Default      Default          up     1500   auto/1000
       e0d       Default      Default          up     1500   auto/1000
       e0e       Default      Default          up     1500   auto/1000
       e0f       Default      Default          up     1500   auto/1000
       e0g       Default      Default          up     1500   auto/1000
14 entries were displayed.
cluster1::>

11.2 Create Storage for NFS and CIFS

Expected Completion Time: 40 Minutes

If you are only interested in SAN protocols then you do not need to complete this section. However, we recommend that you review the conceptual information found here, and at the beginning of each of this section's subsections, before you advance to the SAN section, as most of this conceptual material will not be repeated there.

Storage Virtual Machines (SVMs), previously known as Vservers, are the logical storage servers that operate within a cluster and serve data out to storage clients. A single cluster can host hundreds of SVMs, with each SVM managing its own set of volumes (FlexVols), Logical Network Interfaces (LIFs), storage access protocols (e.g., NFS/CIFS/iSCSI/FC/FCoE), and for NAS clients, its own namespace.

The ability to support many SVMs in a single cluster is a key feature in clustered Data ONTAP, and customers are encouraged to actively embrace this feature in order to take full advantage of a cluster's capabilities. We recommend against any organization starting out on a deployment intended to scale with only a single SVM.

You explicitly configure which storage protocols you want a given SVM to support at the time you create that SVM. You can later add or remove protocols as desired. A single SVM can host any combination of the supported protocols.

An SVM's assigned aggregates and LIFs determine which cluster nodes handle processing for that SVM. As you saw earlier, an aggregate is directly connected to the specific node hosting its disks, which means that an SVM runs in part on any nodes whose aggregates are hosting volumes for the SVM. An SVM also has a direct relationship to any nodes that are hosting its LIFs. LIFs are essentially an IP address with a number of associated characteristics such as an assigned home node, an assigned physical home port, a list of physical ports it can fail over to, an assigned SVM, a role, a routing group, and so on. You can only assign a given LIF to a single SVM, and since LIFs map to physical network ports on cluster nodes, this means that an SVM runs in part on all nodes that are hosting its LIFs.

When you configure an SVM with multiple data LIFs, clients can use any of those LIFs to access volumes hosted by the SVM. Which specific LIF IP address a client will use in a given instance, and by extension which LIF, is a function of name resolution, the mapping of a hostname to an IP address. CIFS servers have responsibility under NetBIOS for resolving requests for their hostnames received from clients, and in so doing can perform some load balancing by responding to different clients with different LIF addresses, but this distribution is not sophisticated and requires external NetBIOS name servers in order to deal with clients that are not on the local network. NFS servers do not handle name resolution on their own.

DNS provides basic name resolution load balancing by advertising multiple IP addresses for the same hostname. DNS is supported by both NFS and CIFS clients, and works equally well with clients on local area and wide area networks. Since DNS is an external service that resides outside of Data ONTAP, this architecture creates the potential for service disruptions if the DNS server is advertising IP addresses for LIFs that are temporarily offline. To compensate for this condition you can configure DNS servers to delegate the name resolution responsibility for the SVM's hostname records to the SVM itself, so that it can directly respond to name resolution requests involving its LIFs. This allows the SVM to consider LIF availability and LIF utilization levels when deciding what LIF address to return in response to a DNS name resolution request.
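On a BIND-style name server, that delegation amounts to handing the SVM's subdomain to the SVM itself with an NS record. The zone file fragment below is a hypothetical illustration using this lab's demo.netapp.com domain; the actual name service in this lab is already configured for you:

; fragment of the demo.netapp.com zone file (illustrative)
svm1     IN NS   svm1-ns.demo.netapp.com.
svm1-ns  IN A    192.168.0.131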

LIFs that map to physical network ports residing on the same node as a volume’s containing aggregate offer the most efficient client access path to the volume’s data. However, clients can also access volume data through LIFs bound to physical network ports on other nodes in the cluster; in these cases clustered Data ONTAP uses the high speed cluster network to bridge communication between the node hosting the LIF and the node hosting the volume. NetApp best practice is to create at least one NAS LIF for a given SVM on each cluster node that has an aggregate hosting volumes for that SVM. If you desire additional resiliency, you can also create a NAS LIF on nodes not hosting aggregates for the SVM.

A NAS LIF (a LIF supporting only NFS and/or CIFS) can automatically fail over from one cluster node to another in the event of a component failure. Any existing connections to that LIF from NFS and SMB 2.0 (and later) clients can non-disruptively tolerate the LIF failover event. When a LIF failover happens, the NAS LIF migrates to a different physical NIC, potentially to a NIC on a different node in the cluster, and continues servicing network requests from that new node/port. Throughout this operation the NAS LIF maintains its IP address. Clients connected to the LIF may notice a brief delay while the failover is in progress, but as soon as it completes the clients resume any in-process NAS operations without any loss of data.
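If you want to observe this behavior for yourself, the following minimal sketch manually migrates a LIF to the other node and then sends it back to its home port; the LIF, node, and port names assume the svm1 LIFs you create later in this section:

cluster1::> network interface migrate -vserver svm1 -lif svm1_cifs_nfs_lif1 -destination-node cluster1-02 -destination-port e0c
cluster1::> network interface revert -vserver svm1 -lif svm1_cifs_nfs_lif1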

The number of nodes in the cluster determines the total number of SVMs that can run in the cluster. Each storage controller node can host a maximum of 125 SVMs, so you can calculate the cluster’s effective SVM limit by multiplying the number of nodes by 125; for example, the two-node cluster in this lab can host up to 250 SVMs. There is no limit on the number of LIFs that an SVM can host, but there is a limit on the number of LIFs that can run on a given node. That limit is 256 LIFs per node, but if the node is part of an HA pair configured for failover then the limit is half that value, 128 LIFs per node, so that a node can also accommodate its HA partner’s LIFs in the event of a failover.

Each SVM has its own NAS namespace, a logical grouping of the SVM’s CIFS and NFS volumes into a single logical filesystem view. Clients can access the entire namespace by mounting a single share or export at the top of the namespace tree, meaning that SVM administrators can centrally maintain and present a consistent view of the SVM’s data to all clients rather than having to reproduce that view structure on each individual client. As an administrator maps and unmaps volumes from the namespace, those volumes instantly become visible to or disappear from clients that have mounted CIFS and NFS volumes higher in the SVM’s namespace. Administrators can also create NFS exports at individual junction points within the namespace, and can create CIFS shares at any directory path in the namespace.
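For example, mapping and unmapping an existing volume is a single command in each direction; this minimal sketch assumes the engineering volume you create later in this lab:

cluster1::> volume mount -vserver svm1 -volume engineering -junction-path /engineering
cluster1::> volume unmount -vserver svm1 -volume engineering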

11.2.1 Create a Storage Virtual Machine for NAS

In this section you will create a new SVM named svm1 on the cluster and will configure it to serve out a volume over NFS and CIFS. You will be configuring two NAS data LIFs on the SVM, one per node in the cluster.

Start by creating the storage virtual machine.


If you do not already have a PuTTY connection open to cluster1 then open one now following the directions in the “Accessing the Command Line” section at the beginning of this lab guide. The username is admin and the password is Netapp1!.

1. Create the SVM named svm1. Notice that the clustered Data ONTAP command line syntax still refers to storage virtual machines as vservers.

cluster1::> vserver create -vserver svm1 -rootvolume svm1_root -aggregate aggr1_cluster1_01 -language C.UTF-8 -rootvolume-security-style ntfs -snapshot-policy default
[Job 259] Job is queued: Create svm1.
[Job 259]
[Job 259] Job succeeded:
Vserver creation completed
cluster1::>

2. Display the protocols configured for the SVM svm1. Notice that a newly created SVM starts out with all of the protocols enabled, including CIFS and NFS; you will remove the unwanted ones in the next step.

cluster1::> vserver show-protocols -vserver svm1
  Vserver: svm1
Protocols: nfs, cifs, fcp, iscsi, ndmp
cluster1::>

3. Remove the FCP, iSCSI, and NDMP protocols from the SVM svm1.

cluster1::> vserver remove-protocols -vserver svm1 -protocols fcp,iscsi,ndmp
cluster1::>

4. Display the list of protocols assigned to the SVM svm1.

cluster1::> vserver show-protocols -vserver svm1
  Vserver: svm1
Protocols: nfs, cifs
cluster1::>

5. Display a list of the vservers in the cluster.

cluster1::> vserver show
                               Admin      Operational Root
Vserver     Type    Subtype    State      State       Volume     Aggregate
----------- ------- ---------- ---------- ----------- ---------- ----------
cluster1    admin   -          -          -           -          -
cluster1-01 node    -          -          -           -          -
cluster1-02 node    -          -          -           -          -
svm1        data    default    running    running     svm1_root  aggr1_cluster1_01
4 entries were displayed.
cluster1::>

6. Display a list of the cluster’s network interfaces:

cluster1::> network interface show
            Logical    Status     Network            Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
Cluster
            cluster1-01_clus1
                       up/up      169.254.224.98/16  cluster1-01   e0a     true
            cluster1-02_clus1
                       up/up      169.254.129.177/16 cluster1-02   e0a     true
cluster1
            cluster1-01_mgmt1
                       up/up      192.168.0.111/24   cluster1-01   e0c     true
            cluster1-02_mgmt1
                       up/up      192.168.0.112/24   cluster1-02   e0c     true
            cluster_mgmt
                       up/up      192.168.0.101/24   cluster1-01   e0c     true
5 entries were displayed.
cluster1::>


7. Notice that there are not any LIFs defined for the SVM svm1 yet. Create the svm1_cifs_nfs_lif1 data LIF for svm1:

cluster1::> network interface create -vserver svm1 -lif svm1_cifs_nfs_lif1 -role data -data-protocol nfs,cifs -home-node cluster1-01 -home-port e0c -subnet-name Demo -firewall-policy mgmt
cluster1::>

8. Create the svm1_cifs_nfs_lif2 data LIF for the SVM svm1:

cluster1::> network interface create -vserver svm1 -lif svm1_cifs_nfs_lif2 -role data -data-protocol nfs,cifs -home-node cluster1-02 -home-port e0c -subnet-name Demo -firewall-policy mgmt
cluster1::>

9. Display all of the LIFs owned by svm1:

cluster1::> network interface show -vserver svm1
            Logical    Status     Network            Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
svm1
            svm1_cifs_nfs_lif1
                       up/up      192.168.0.131/24   cluster1-01   e0c     true
            svm1_cifs_nfs_lif2
                       up/up      192.168.0.132/24   cluster1-02   e0c     true
2 entries were displayed.
cluster1::>

10. Display the SVM svm1's DNS configuration.

cluster1::> vserver services dns show
                                                              Name
Vserver         State     Domains                             Servers
--------------- --------- ----------------------------------- ----------------
cluster1        enabled   demo.netapp.com                     192.168.0.253
cluster1::>

11. Configure the DNS domain and nameservers for the svm1 SVM:

cluster1::> vserver services dns create -vserver svm1 -name-servers 192.168.0.253 -domains demo.netapp.com
cluster1::>

12. Display the SVMs’ DNS configurations again.

cluster1::> vserver services dns show
                                                              Name
Vserver         State     Domains                             Servers
--------------- --------- ----------------------------------- ----------------
cluster1        enabled   demo.netapp.com                     192.168.0.253
svm1            enabled   demo.netapp.com                     192.168.0.253
2 entries were displayed.
cluster1::>

Configure the LIFs to accept DNS delegation responsibility for the svm1.demo.netapp.com zone so that you can advertise addresses for both of the NAS data LIFs that belong to svm1. You could have done this as part of the network interface create commands, but we opted to perform it separately here so you could see how to modify an existing LIF.

13. Configure lif1 to accept DNS delegation responsibility for the svm1.demo.netapp.com zone.

cluster1::> network interface modify -vserver svm1 -lif svm1_cifs_nfs_lif1 -dns-zone svm1.demo.netapp.com
cluster1::>

14. Configure lif2 to accept DNS delegation responsibility for the svm1.demo.netapp.com zone.

cluster1::> network interface modify -vserver svm1 -lif svm1_cifs_nfs_lif2 -dns-zone svm1.demo.netapp.com
cluster1::>

15. Display the DNS delegation for svm1.

cluster1::> network interface show -vserver svm1 -fields dns-zone,address
vserver lif                address       dns-zone
------- ------------------ ------------- --------------------
svm1    svm1_cifs_nfs_lif1 192.168.0.131 svm1.demo.netapp.com
svm1    svm1_cifs_nfs_lif2 192.168.0.132 svm1.demo.netapp.com
2 entries were displayed.
cluster1::>

16. Verify that DNS delegation is working correctly by opening a PuTTY connection to the Linux host rhel1 (username "root" and password "Netapp1!") and executing the following commands. If the delegation is working correctly you should see an IP address returned for the host svm1.demo.netapp.com, and if you run the command several times you will see the responses alternate between the addresses of the SVM’s two LIFs.

[root@rhel1 ~]# nslookup svm1.demo.netapp.com
Server:         192.168.0.253
Address:        192.168.0.253#53

Non-authoritative answer:
Name:   svm1.demo.netapp.com
Address: 192.168.0.132
[root@rhel1 ~]# nslookup svm1.demo.netapp.com
Server:         192.168.0.253
Address:        192.168.0.253#53

Non-authoritative answer:
Name:   svm1.demo.netapp.com
Address: 192.168.0.131
[root@rhel1 ~]#

17. This completes the planned LIF configuration changes for svm1, so now display a detailed configuration report for the LIF svm1_cifs_nfs_lif1:

cluster1::> network interface show -lif svm1_cifs_nfs_lif1 -instance

                    Vserver Name: svm1
          Logical Interface Name: svm1_cifs_nfs_lif1
                            Role: data
                   Data Protocol: nfs, cifs
                       Home Node: cluster1-01
                       Home Port: e0c
                    Current Node: cluster1-01
                    Current Port: e0c
              Operational Status: up
                 Extended Status: -
                         Is Home: true
                 Network Address: 192.168.0.131
                         Netmask: 255.255.255.0
             Bits in the Netmask: 24
                 IPv4 Link Local: -
                     Subnet Name: Demo
           Administrative Status: up
                 Failover Policy: system-defined
                 Firewall Policy: mgmt
                     Auto Revert: false
    Fully Qualified DNS Zone Name: svm1.demo.netapp.com
         DNS Query Listen Enable: true
             Failover Group Name: Default
                        FCP WWPN: -
                  Address family: ipv4
                         Comment: -
                  IPspace of LIF: Default
cluster1::>

When you issued the vserver create command to create svm1 you included an option to enable CIFS, but that command did not actually create a CIFS server for the SVM. Now it is time to create that CIFS server.


18. Display the status of the cluster's CIFS servers.

cluster1::> vserver cifs show
This table is currently empty.
cluster1::>

19. Create a CIFS server for svm1.

cluster1::> vserver cifs create -vserver svm1 -cifs-server svm1 -domain demo.netapp.com

In order to create an Active Directory machine account for the CIFS server, you
must supply the name and password of a Windows account with sufficient
privileges to add computers to the "CN=Computers" container within the
"DEMO.NETAPP.COM" domain.

Enter the user name: Administrator

Enter the password:

cluster1::>

20. Display the status of the cluster's CIFS servers.

cluster1::> vserver cifs show
            Server          Status    Domain/Workgroup Authentication
Vserver     Name            Admin     Name             Style
----------- --------------- --------- ---------------- --------------
svm1        SVM1            up        DEMO             domain
cluster1::>

As with CIFS, when you created svm1 you included an option to enable NFS, but that command did not actually create the NFS server. Now it is time to create that NFS server.

21. Display the status of the NFS server for svm1.

cluster1::> vserver nfs status -vserver svm1
The NFS server is not running on Vserver "svm1".
cluster1::>

22. Create an NFS v3 NFS server for svm1.

cluster1::> vserver nfs create -vserver svm1 -v3 enabled -access true
cluster1::>

23. Display the status of the NFS server for svm1 again.

cluster1::> vserver nfs status -vserver svm1
The NFS server is running on Vserver "svm1".
cluster1::>

11.2.2 Configure CIFS and NFS

Clustered Data ONTAP configures CIFS and NFS on a per SVM basis. When you created the “svm1” SVM in the previous section, you set up and enabled CIFS and NFS for that SVM. However, it is important to understand that clients cannot yet access the SVM using CIFS and NFS. That is partially because you have not yet created any volumes on the SVM, but also because you have not told the SVM what you want to share, and who you want to share it with.

Each SVM has its own namespace. A namespace is a logical grouping of a single SVM’s volumes into a directory hierarchy that is private to just that SVM, with the root of that hierarchy hosted on the SVM’s root volume (svm1_root in the case of the svm1 SVM), and it is through this namespace that the SVM shares data to CIFS and NFS clients. The SVM’s other volumes are junctioned (i.e., mounted) within that root volume or within other volumes that are already junctioned into the namespace. This hierarchy presents NAS clients with a unified, centrally maintained view of the storage encompassed by the namespace, regardless of where those junctioned volumes physically reside in the cluster. CIFS and NFS clients cannot access a volume that has not been junctioned into the namespace.


CIFS and NFS clients can access the entire namespace by mounting a single NFS export or CIFS share declared at the top of the namespace. While this is a very powerful capability, there is no requirement to make the whole namespace accessible. You can create CIFS shares at any directory level in the namespace, and you can create different NFS export rules at junction boundaries for individual volumes and for individual qtrees within a junctioned volume.

Clustered Data ONTAP does not utilize an /etc/exports file to export NFS volumes; instead it uses a policy model that dictates the NFS client access rules for the associated volumes. An NFS-enabled SVM implicitly exports the root of its namespace and automatically associates that export with the SVM’s default export policy. But that default policy is initially empty, and until it is populated with access rules no NFS clients will be able to access the namespace. The SVM’s default export policy applies to the root volume and also to any volumes that an administrator junctions into the namespace, but an administrator can optionally create additional export policies in order to implement different access rules within the namespace. You can apply export policies to a volume as a whole and to individual qtrees within a volume, but a given volume or qtree can only have one associated export policy. While you cannot create NFS exports at any other directory level in the namespace, NFS clients can mount from any level in the namespace by leveraging the namespace’s root export.

In this section of the lab, you are going to configure a default export policy for your SVM so that any volumes you junction into its namespace will automatically pick up the same NFS export rules. You will also create a single CIFS share at the top of the namespace so that all the volumes you junction into that namespace are accessible through that one share. Finally, since your SVM will be sharing the same data over NFS and CIFS, you will be setting up name mapping between UNIX and Windows user accounts to facilitate smooth multiprotocol access to the volumes and files in the namespace.

When you create an SVM, Data ONTAP automatically creates a root volume to hold that SVM’s namespace. An SVM always has a root volume, whether or not it is configured to support NAS protocols.

1. Verify that CIFS is running by default for the SVM svm1:

cluster1::> vserver cifs show
            Server          Status    Domain/Workgroup Authentication
Vserver     Name            Admin     Name             Style
----------- --------------- --------- ---------------- --------------
svm1        SVM1            up        DEMO             domain
cluster1::>

2. Display the status of the NFS server for svm1 again.

cluster1::> vserver nfs status -vserver svm1
The NFS server is running on Vserver "svm1".
cluster1::>

3. Display the NFS server's configuration.

cluster1::> vserver nfs show -vserver svm1

                                    Vserver: svm1
                         General NFS Access: true
                                     NFS v3: enabled
                                   NFS v4.0: disabled
                               UDP Protocol: enabled
                               TCP Protocol: enabled
                       Default Windows User: -
                        NFSv4.0 ACL Support: disabled
            NFSv4.0 Read Delegation Support: disabled
           NFSv4.0 Write Delegation Support: disabled
                    NFSv4 ID Mapping Domain: defaultv4iddomain.com
        NFSv4 Grace Timeout Value (in secs): 45
 Preserves and Modifies NFSv4 ACL (and NTFS
       File Permissions in Unified Security
                                     Style): enabled
              NFSv4.1 Minor Version Support: disabled
                              Rquota Enable: disabled
               NFSv4.1 Parallel NFS Support: enabled
                        NFSv4.1 ACL Support: disabled
                       NFS vStorage Support: disabled
        NFSv4 Support for Numeric Owner IDs: enabled
                      Default Windows Group: -
            NFSv4.1 Read Delegation Support: disabled
           NFSv4.1 Write Delegation Support: disabled
                        NFS Mount Root Only: enabled
                              NFS Root Only: disabled
        Permitted Kerberos Encryption Types: des, des3, aes-128, aes-256
                          Showmount Enabled: disabled
    Set the Protocol Used for Name Services
                        Lookups for Exports: udp
                NFSv3 MS-DOS Client Support: disabled
cluster1::>

4. Display a list of all the export policies.

cluster1::> vserver export-policy show
Vserver          Policy Name
---------------  -------------------
svm1             default
cluster1::>

The only defined policy is "default".

5. Display a list of all the export policy rules.

cluster1::> vserver export-policy rule show
This table is currently empty.
cluster1::>

There are no rules defined for the "default" export policy.

6. Add a rule to the default export policy granting read-write access to all hosts.

cluster1::> vserver export-policy rule create -vserver svm1 -policyname default -clientmatch 0.0.0.0/0 -rorule any -rwrule any -superuser any -anon 65534 -ruleindex 1
cluster1::>

7. Display a listing of all the export policy rules.

cluster1::> vserver export-policy rule show
             Policy          Rule    Access   Client                RO
Vserver      Name            Index   Protocol Match                 Rule
------------ --------------- ------  -------- --------------------- ---------
svm1         default         1       any      0.0.0.0/0             any
cluster1::>

8. Display a detailed listing of all the export policy rules.

cluster1::> vserver export-policy rule show -policyname default -instance

                                    Vserver: svm1
                                Policy Name: default
                                 Rule Index: 1
                            Access Protocol: any
Client Match Hostname, IP Address, Netgroup, or Domain: 0.0.0.0/0
                             RO Access Rule: any
                             RW Access Rule: any
User ID To Which Anonymous Users Are Mapped: 65534
                   Superuser Security Types: any
               Honor SetUID Bits in SETATTR: true
                  Allow Creation of Devices: true
cluster1::>

9. Display a list of the shares in the cluster.

cluster1::> vserver cifs share show
Vserver        Share         Path              Properties Comment  ACL
-------------- ------------- ----------------- ---------- -------- -----------
svm1           admin$        /                 browsable  -        -
svm1           c$            /                 oplocks    -        BUILTIN\Administrators / Full Control
                                               browsable
                                               changenotify
svm1           ipc$          /                 browsable  -        -
3 entries were displayed.
cluster1::>


10. Create a share at the root of the namespace for the SVM svm1:

cluster1::> vserver cifs share create -vserver svm1 -share-name nsroot -path /
cluster1::>

11. Display a list of the shares in the cluster again.

cluster1::> vserver cifs share show
Vserver        Share         Path              Properties Comment  ACL
-------------- ------------- ----------------- ---------- -------- -----------
svm1           admin$        /                 browsable  -        -
svm1           c$            /                 oplocks    -        BUILTIN\Administrators / Full Control
                                               browsable
                                               changenotify
svm1           ipc$          /                 browsable  -        -
svm1           nsroot        /                 oplocks    -        Everyone / Full Control
                                               browsable
                                               changenotify
4 entries were displayed.
cluster1::>

Set up CIFS <-> NFS user name mapping for the SVM svm1.

12. Display a list of the current name mappings.

cluster1::> vserver name-mapping show
This table is currently empty.
cluster1::>

13. Create a name mapping of DEMO\Administrator (specified in the command as "demo\\administrator") to root.

cluster1::> vserver name-mapping create -vserver svm1 -direction win-unix -position 1 -pattern demo\\administrator -replacement root
cluster1::>

14. Create a name mapping of root to DEMO\Administrator.

cluster1::> vserver name-mapping create -vserver svm1 -direction unix-win -position 1 -pattern root -replacement demo\\administrator
cluster1::>

15. Display a list of the current name mappings.

cluster1::> vserver name-mapping show
Vserver        Direction Position
-------------- --------- --------
svm1           win-unix  1
               Pattern: demo\\administrator
               Replacement: root
svm1           unix-win  1
               Pattern: root
               Replacement: demo\\administrator
2 entries were displayed.
cluster1::>

11.2.3 Create a Volume and Map It to the Namespace Using the CLI

Volumes, or FlexVols, are the dynamically sized containers used by Data ONTAP to store data. A volume only resides in a single aggregate at a time, but any given aggregate can host multiple volumes. Unlike an aggregate, which can associate with multiple SVMs, a volume can only associate with a single SVM. The maximum size of a volume can vary depending on what storage controller model is hosting it.

An SVM can host multiple volumes. While there is no specific limit on the number of FlexVols that can be configured for a given SVM, each storage controller node is limited to hosting no more than 500 or 1000 FlexVols (varies based on controller model), which means that there is an effective limit on the total number of volumes that a cluster can host, depending on how many nodes there are in your cluster.

Each storage controller node has a root aggregate (e.g., aggr0_<nodename>) that contains the node’s Data ONTAP operating system. Do not use the node’s root aggregate to host any other volumes or user data; always create additional aggregates and volumes for that purpose.

Clustered Data ONTAP FlexVols support a number of storage efficiency features including thin provisioning, deduplication, and compression. One specific storage efficiency feature you will be seeing in this section of the lab is thin provisioning, which dictates how space for a FlexVol is allocated in its containing aggregate.

When you create a FlexVol with a volume guarantee of type “volume” you are thickly provisioning the volume, pre-allocating all of the space for the volume on the containing aggregate, which ensures that the volume will never run out of space unless the volume reaches 100% capacity. When you create a FlexVol with a volume guarantee of “none” you are thinly provisioning the volume, only allocating space for it on the containing aggregate at the time, and in the quantity, that the volume actually requires the space to store the data.
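To make the distinction concrete, here is a hedged sketch of the same volume created both ways; only the -space-guarantee value differs, and the volume names are illustrative rather than part of this lab’s steps:

cluster1::> volume create -vserver svm1 -volume thick_example -aggregate aggr1_cluster1_01 -size 10GB -space-guarantee volume
cluster1::> volume create -vserver svm1 -volume thin_example -aggregate aggr1_cluster1_01 -size 10GB -space-guarantee none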

This latter configuration allows you to increase your overall space utilization and even oversubscribe an aggregate by allocating more volumes on it than the aggregate could actually accommodate if all the subscribed volumes reached their full size. However, if an oversubscribed aggregate does fill up, then all of its volumes will run out of space before they reach their maximum volume size; oversubscription deployments therefore generally require a greater degree of administrative vigilance around space utilization.
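A simple way to maintain that vigilance is to check aggregate consumption regularly; a minimal sketch, assuming the aggregate name used in this lab:

cluster1::> storage aggregate show -aggregate aggr1_cluster1_01 -fields percent-used, availsize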

In the Clusters section, you created a new aggregate named “aggr1_cluster1_01”; you will now use that aggregate to host a new thinly provisioned volume named “engineering” for the SVM named “svm1”.

1. Display basic information about the SVM’s current list of volumes:

cluster1::> volume show -vserver svm1
Vserver   Volume       Aggregate    State      Type       Size  Available Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
svm1      svm1_root    aggr1_cluster1_01
                                    online     RW         20MB    18.86MB    5%
cluster1::>

2. Display the junctions in the SVM’s namespace:

cluster1::> volume show -vserver svm1 -junction
                                Junction                           Junction
Vserver   Volume       Language Active   Junction Path             Path Source
--------- ------------ -------- -------- ------------------------- -----------
svm1      svm1_root    C.UTF-8  true     /                         -
cluster1::>

3. Create the volume “engineering”, junctioning it into the namespace at “/engineering”:

cluster1::> volume create -vserver svm1 -volume engineering -aggregate aggr1_cluster1_01 -size 10GB -percent-snapshot-space 5 -space-guarantee none -policy default -junction-path /engineering
[Job 267] Job is queued: Create engineering.
[Job 267] Job succeeded: Successful
cluster1::>

4. Display a list of svm1's volumes.

cluster1::> volume show -vserver svm1
Vserver   Volume       Aggregate    State      Type       Size  Available Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
svm1      engineering  aggr1_cluster1_01
                                    online     RW         10GB     9.50GB    5%
svm1      svm1_root    aggr1_cluster1_01
                                    online     RW         20MB    18.86MB    5%
2 entries were displayed.
cluster1::>

5. Display a list of svm1's volume junction points.

cluster1::> volume show -vserver svm1 -junction
                                Junction                           Junction
Vserver   Volume       Language Active   Junction Path             Path Source
--------- ------------ -------- -------- ------------------------- -----------
svm1      engineering  C.UTF-8  true     /engineering              RW_volume
svm1      svm1_root    C.UTF-8  true     /                         -
2 entries were displayed.
cluster1::>

6. Create the volume eng_users, junctioning it into the namespace at /engineering/users.

cluster1::> volume create -vserver svm1 -volume eng_users -aggregate aggr1_cluster1_01 -size 10GB -percent-snapshot-space 5 -space-guarantee none -policy default -junction-path /engineering/users
[Job 268] Job is queued: Create eng_users.
[Job 268] Job succeeded: Successful
cluster1::>

7. Display a list of svm1's volume junction points.

cluster1::> volume show -vserver svm1 -junction
                                Junction                           Junction
Vserver   Volume       Language Active   Junction Path             Path Source
--------- ------------ -------- -------- ------------------------- -----------
svm1      eng_users    C.UTF-8  true     /engineering/users        RW_volume
svm1      engineering  C.UTF-8  true     /engineering              RW_volume
svm1      svm1_root    C.UTF-8  true     /                         -
3 entries were displayed.
cluster1::>

8. Display detailed information about the volume engineering. Notice here that the volume is reporting as thin provisioned (Space Guarantee Style is set to none) and that the Export Policy is set to default.

cluster1::> volume show -vserver svm1 -volume engineering -instance

                                 Vserver Name: svm1
                                  Volume Name: engineering
                               Aggregate Name: aggr1_cluster1_01
                                  Volume Size: 10GB
                           Volume Data Set ID: 1026
                    Volume Master Data Set ID: 2147484674
                                 Volume State: online
                                  Volume Type: RW
                                 Volume Style: flex
                       Is Cluster-Mode Volume: true
                        Is Constituent Volume: false
                                Export Policy: default
                                      User ID: -
                                     Group ID: -
                               Security Style: ntfs
                             UNIX Permissions: ------------
                                Junction Path: /engineering
                         Junction Path Source: RW_volume
                              Junction Active: true
                       Junction Parent Volume: svm1_root
                                      Comment:
                               Available Size: 9.50GB
                              Filesystem Size: 10GB
                      Total User-Visible Size: 9.50GB
                                    Used Size: 152KB
                              Used Percentage: 5%
         Volume Nearly Full Threshold Percent: 95%
                Volume Full Threshold Percent: 98%
         Maximum Autosize (for flexvols only): 12GB
(DEPRECATED)-Autosize Increment (for flexvols only): 512MB
                             Minimum Autosize: 10GB
           Autosize Grow Threshold Percentage: 85%
         Autosize Shrink Threshold Percentage: 50%
                                Autosize Mode: off
         Autosize Enabled (for flexvols only): false
          Total Files (for user-visible data): 311280
           Files Used (for user-visible data): 98
                        Space Guarantee Style: none
                    Space Guarantee in Effect: true
            Snapshot Directory Access Enabled: true
           Space Reserved for Snapshot Copies: 5%
                        Snapshot Reserve Used: 0%
                              Snapshot Policy: default
                                Creation Time: Mon Oct 20 02:33:31 2014
                                     Language: C.UTF-8
                                 Clone Volume: false
                                    Node name: cluster1-01
                                NVFAIL Option: off
                        Volume's NVFAIL State: false
     Force NVFAIL on MetroCluster Switchover: off
                   Is File System Size Fixed: false
                                Extent Option: off
                Reserved Space for Overwrites: 0B
                           Fractional Reserve: 0%
            Primary Space Management Strategy: volume_grow
                     Read Reallocation Option: off
             Inconsistency in the File System: false
                 Is Volume Quiesced (On-Disk): false
               Is Volume Quiesced (In-Memory): false
    Volume Contains Shared or Compressed Data: false
            Space Saved by Storage Efficiency: 0B
       Percentage Saved by Storage Efficiency: 0%
                 Space Saved by Deduplication: 0B
            Percentage Saved by Deduplication: 0%
                Space Shared by Deduplication: 0B
                   Space Saved by Compression: 0B
        Percentage Space Saved by Compression: 0%
          Volume Size Used by Snapshot Copies: 0B
                                   Block Type: 64-bit
                             Is Volume Moving: false
               Flash Pool Caching Eligibility: read-write
Flash Pool Write Caching Ineligibility Reason: -
                   Managed By Storage Service: -
Create Namespace Mirror Constituents For SnapDiff Use: -
                      Constituent Volume Role: -
                        QoS Policy Group Name: -
                          Caching Policy Name: -
             Is Volume Move in Cutover Phase: false
      Number of Snapshot Copies in the Volume: 0
VBN_BAD may be present in the active filesystem: false
              Is Volume on a hybrid aggregate: false
                    Total Physical Used Size: 152KB
                     Physical Used Percentage: 0%
cluster1::>

9. View how much disk space this volume is actually consuming in its containing aggregate; the Total Footprint value represents the volume’s total consumption. The value here is so small because this volume is thin provisioned and you have not yet added any data to it. If you had thick provisioned the volume, then the footprint here would have been 10 GB, the full size of the volume.

cluster1::> volume show-footprint -volume engineering

      Vserver : svm1
      Volume  : engineering

      Feature                          Used       Used%
      -------------------------------- ---------- -----
      Volume Data Footprint            152KB      0%
      Volume Guarantee                 0B         0%
      Flexible Volume Metadata         13.38MB    0%
      Delayed Frees                    352KB      0%

      Total Footprint                  13.88MB    0%
cluster1::>

10. Create a qtree in the eng_users volume named "bob".

cluster1::> volume qtree create -vserver svm1 -volume eng_users -qtree bob
cluster1::>

11. Create a qtree in the eng_users volume named "susan".

cluster1::> volume qtree create -vserver svm1 -volume eng_users -qtree susan
cluster1::>

12. Generate a list of all the qtrees that belong to svm1.

cluster1::> volume qtree show -vserver svm1
Vserver    Volume        Qtree        Style        Oplocks   Status
---------- ------------- ------------ ------------ --------- --------
svm1       eng_users     ""           ntfs         enable    normal
svm1       eng_users     bob          ntfs         enable    normal
svm1       eng_users     susan        ntfs         enable    normal
svm1       engineering   ""           ntfs         enable    normal
svm1       svm1_root     ""           ntfs         enable    normal
5 entries were displayed.
cluster1::>

13. Produce a detailed report of the configuration for the qtree bob.

cluster1::> volume qtree show -qtree bob -instance

                      Vserver Name: svm1
                       Volume Name: eng_users
                        Qtree Name: bob
  Actual (Non-Junction) Qtree Path: /vol/eng_users/bob
                    Security Style: ntfs
                       Oplock Mode: enable
                  Unix Permissions: -
                          Qtree Id: 1
                      Qtree Status: normal
                     Export Policy: default
        Is Export Policy Inherited: true
cluster1::>

11.2.4 Connect to the SVM From a Windows Client

The “svm1” SVM is up and running and is configured for NFS and CIFS access, so it’s time to validate that everything is working properly by mounting the NFS export on a Linux host, and the CIFS share on a Windows host. You should complete both parts of this section so you can see that both hosts are able to seamlessly access the volume and its files.

This part of the lab demonstrates connecting the Windows client jumphost to the CIFS share \\svm1\nsroot using the Windows GUI.

1. On the Windows host jumphost open Windows Explorer by clicking on the folder icon on the taskbar.


Figure 11-1:

A Windows Explorer window opens.

2. In Windows Explorer click on Computer.
3. Click on Map network drive to launch the Map Network Drive wizard.


Figure 11-2:

The “Map Network Drive” wizard opens.

4. Set the fields in the window to the following values.

• “Drive”: S:
• “Folder”: \\svm1\nsroot
• Check the Reconnect at sign-in checkbox.

5. When finished click Finish.


Figure 11-3:

A new Windows Explorer window opens.

6. The engineering volume you earlier junctioned into svm1’s namespace is visible at the top of the nsroot share, which points to the root of the namespace. If you created another volume on svm1 right now and mounted it under the root of the namespace, that new volume would instantly become visible in this share, and to clients like jumphost that have already mounted the share. Double-click on the engineering folder to open it.


Figure 11-4:

File Explorer displays the contents of the engineering folder. Next you will create a file in this folder to confirm that you can write to it.

7. Notice that the “eng_users” volume that you junctioned in as users is visible inside this folder.
8. Right-click in the empty space in the right pane of File Explorer.
9. In the context menu, select New > Text Document, and name the resulting file “cifs.txt”.


Figure 11-5:

10. Double-click the cifs.txt file you just created to open it with Notepad.

Tip: If you aren't seeing file extensions in your lab, you can enable them by going to the View menu at the top of Windows Explorer and checking the File Name Extensions checkbox.

11. In Notepad, enter some text (make sure you put a carriage return at the end of the line, or else when you later view the contents of this file on Linux the command shell prompt will appear on the same line as the file contents).

12. Use the File > Save menu in Notepad to save the file’s updated contents to the share. If write access is working properly you will not receive an error message.


Figure 11-6:

Close Notepad and File Explorer to finish this exercise.
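Tip: If you prefer the command line, the same drive mapping can also be performed from a Windows command prompt; a minimal sketch, assuming the same share and drive letter used above:

C:\> net use S: \\svm1\nsroot /persistent:yes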

11.2.5 Connect to the SVM From a Linux Client

This section demonstrates how to connect a Linux client to the NFS volume svm1:/ using the Linux command line.

1. Follow the instructions in the “Accessing the Command Line” section at the beginning of this lab guide to open PuTTY and connect to the system rhel1. Log in as the user root with the password Netapp1!.

2. Verify that there are no NFS volumes currently mounted on rhel1.

[root@rhel1 ~]# df
Filesystem                   1K-blocks    Used Available Use% Mounted on
/dev/mapper/vg_rhel1-lv_root  11877388 4962504   6311544  45% /
tmpfs                           444612      76    444536   1% /dev/shm
/dev/sda1                       495844   40084    430160   9% /boot
[root@rhel1 ~]#

3. Create the /svm1 directory to serve as a mount point for the NFS volume you will shortly be mounting.

[root@rhel1 ~]# mkdir /svm1
[root@rhel1 ~]#

4. Add an entry for the NFS mount to the fstab file.

[root@rhel1 ~]# echo "svm1:/ /svm1 nfs rw,defaults 0 0" >> /etc/fstab
[root@rhel1 ~]#


5. Verify the fstab file contains the new entry you just created.

[root@rhel1 ~]# grep svm1 /etc/fstab
svm1:/ /svm1 nfs rw,defaults 0 0
[root@rhel1 ~]#

6. Mount all the file systems listed in the fstab file.

[root@rhel1 ~]# mount -a
[root@rhel1 ~]#

7. View a list of the mounted file systems.

[root@rhel1 ~]# df
Filesystem                   1K-blocks    Used Available Use% Mounted on
/dev/mapper/vg_rhel1-lv_root  11877388 4962508   6311540  45% /
tmpfs                           444612      76    444536   1% /dev/shm
/dev/sda1                       495844   40084    430160   9% /boot
svm1:/                           19456     128     19328   1% /svm1
[root@rhel1 ~]#

The NFS file system svm1:/ now shows as mounted on /svm1.

8. Navigate into the /svm1 directory.

[root@rhel1 ~]# cd /svm1
[root@rhel1 svm1]#

9. Notice that you can see the engineering volume that you previously junctioned into the SVM’s namespace.

[root@rhel1 svm1]# ls
engineering
[root@rhel1 svm1]#

10. Navigate into engineering and list its contents.

Attention: The following command output assumes that you have already performed the Windows client connection steps found earlier in this lab guide, including creating the cifs.txt file.

[root@rhel1 svm1]# cd engineering
[root@rhel1 engineering]# ls
cifs.txt  users
[root@rhel1 engineering]#

11. Display the contents of the cifs.txt file you created earlier.

Tip: When you cat the cifs.txt file, if the shell prompt winds up on the same line as the file output then that indicates that you forgot to include a newline at the end of the file when you created the file on Windows.

[root@rhel1 engineering]# cat cifs.txt
write test from jumphost
[root@rhel1 engineering]#

12. Verify that you can create a file in this directory.

[root@rhel1 engineering]# echo "write test from rhel1" > nfs.txt
[root@rhel1 engineering]# cat nfs.txt
write test from rhel1
[root@rhel1 engineering]# ll
total 4
-rwxrwxrwx 1 root bin    26 Oct 20 03:05 cifs.txt
-rwxrwxrwx 1 root root   22 Oct 20 03:06 nfs.txt
drwxrwxrwx 4 root root 4096 Oct 20 02:37 users
[root@rhel1 engineering]#


11.2.6 NFS Exporting Qtrees (Optional)

Clustered Data ONTAP 8.2.1 introduced the ability to NFS export qtrees. This optional section explains how to configure qtree exports and demonstrates how to set different export rules for a given qtree. For this exercise you will be working with the qtrees you created in the previous section.

Qtrees had many capabilities in Data ONTAP 7-mode that are no longer present in cluster mode. Qtrees do still exist in cluster mode, but their purpose is essentially now limited to just quota management, with most other 7-mode qtree features, including NFS exports, now the exclusive purview of volumes. This functionality change created challenges for 7-mode customers with large numbers of NFS qtree exports who were trying to transition to cluster mode and could not convert those qtrees to volumes because they would exceed clustered Data ONTAP’s maximum number of volumes limit.

To solve this problem, clustered Data ONTAP 8.2.1 introduced qtree NFS. NetApp continues to recommend that customers favor volumes over qtrees in cluster mode whenever practical, but customers requiring large numbers of qtree NFS exports now have a supported solution under clustered Data ONTAP.

You need to create a new export policy and configure it with rules so that only the Linux host rhel1 will be granted access to the associated volume and/or qtree.

1. Display a list of the export policies.

cluster1::> vserver export-policy show
Vserver          Policy Name
---------------  -------------------
svm1             default
cluster1::>

2. Create the export policy named rhel1-only.

cluster1::> vserver export-policy create -vserver svm1 -policyname rhel1-only
cluster1::>

3. Re-display the list of export policies.

cluster1::> vserver export-policy show
Vserver          Policy Name
---------------  -------------------
svm1             default
svm1             rhel1-only
2 entries were displayed.
cluster1::>

4. Display a list of the rules for the rhel1-only export policy.

cluster1::> vserver export-policy rule show -vserver svm1 -policyname rhel1-only
There are no entries matching your query.
cluster1::>

5. Add a rule to the policy so that only the Linux host rhel1 will be granted access.

cluster1::> vserver export-policy rule create -vserver svm1 -policyname rhel1-only -clientmatch 192.168.0.61 -rorule any -rwrule any -superuser any -anon 65534 -ruleindex 1
cluster1::>

6. Display a list of all the export policy rules.

cluster1::> vserver export-policy rule show
             Policy          Rule    Access   Client                RO
Vserver      Name            Index   Protocol Match                 Rule
------------ --------------- ------  -------- --------------------- ---------
svm1         default         1       any      0.0.0.0/0             any
svm1         rhel1-only      1       any      192.168.0.61          any
2 entries were displayed.
cluster1::>


7. Display a detailed report of the rhel1-only export policy rules.

cluster1::> vserver export-policy rule show -vserver svm1 -policyname rhel1-only -instance

                                    Vserver: svm1
                                Policy Name: rhel1-only
                                 Rule Index: 1
                            Access Protocol: any
Client Match Hostname, IP Address, Netgroup, or Domain: 192.168.0.61
                             RO Access Rule: any
                             RW Access Rule: any
User ID To Which Anonymous Users Are Mapped: 65534
                   Superuser Security Types: any
               Honor SetUID Bits in SETATTR: true
                  Allow Creation of Devices: true
cluster1::>

8. Produce a list of svm1’s export policies.

cluster1::> vserver export-policy show
Vserver          Policy Name
---------------  -------------------
svm1             default
svm1             rhel1-only
2 entries were displayed.
cluster1::>

9. List svm1's qtrees.

cluster1::> volume qtree show
Vserver    Volume        Qtree        Style        Oplocks   Status
---------- ------------- ------------ ------------ --------- --------
svm1       eng_users     ""           ntfs         enable    normal
svm1       eng_users     bob          ntfs         enable    normal
svm1       eng_users     susan        ntfs         enable    normal
svm1       engineering   ""           ntfs         enable    normal
svm1       svm1_root     ""           ntfs         enable    normal
5 entries were displayed.
cluster1::>

10. Apply the rhel1-only export policy to the susan qtree.

cluster1::> volume qtree modify -vserver svm1 -volume eng_users -qtree susan -export-policy rhel1-only
cluster1::>

11. Display the configuration of the susan qtree. Notice that the Export Policy field shows this qtree is using the rhel1-only export policy.

cluster1::> volume qtree show -vserver svm1 -volume eng_users -qtree susan

                      Vserver Name: svm1
                       Volume Name: eng_users
                        Qtree Name: susan
                        Qtree Path: /vol/eng_users/susan
                    Security Style: ntfs
                       Oplock Mode: enable
                  Unix Permissions: -
                          Qtree Id: 2
                      Qtree Status: normal
                     Export Policy: rhel1-only
        Is Export Policy Inherited: false
cluster1::>

12. Produce a report showing the export policy assignments for all the volumes and qtrees that belong to svm1.

cluster1::> volume qtree show -vserver svm1 -fields export-policy
vserver volume      qtree export-policy
------- ----------- ----- -------------
svm1    eng_users   ""    default
svm1    eng_users   bob   default
svm1    eng_users   susan rhel1-only
svm1    engineering ""    default
svm1    svm1_root   ""    default
5 entries were displayed.
cluster1::>

Now you need to validate that the more restrictive export policy you applied to the qtree susan is working as expected from rhel1.

Note: If you still have an active PuTTY session open to the Linux host rhel1 then bring that window up now; otherwise open a new PuTTY session to that host (username = root, password = Netapp1!).

13. Change directory to /svm1/engineering/users.

[root@rhel1 ~]# cd /svm1/engineering/users
[root@rhel1 users]#

14. List the directory contents.

[root@rhel1 users]# ls
bob  susan
[root@rhel1 users]#

15. Enter the susan sub-directory.

[root@rhel1 users]# cd susan
[root@rhel1 susan]#

16. Create a file in this directory.

[root@rhel1 susan]# echo "hello from rhel1" > rhel1.txt
[root@rhel1 susan]#

17. Display the contents of the newly created file.

[root@rhel1 susan]# cat rhel1.txt
hello from rhel1
[root@rhel1 susan]#

Next validate that rhel2 has different access rights to the qtree. This host should be able to access all the volumes and qtrees in the svm1 namespace *except* susan, which should give a permission denied error because that qtree’s associated export policy only grants access to the host rhel1.

Note: Open a PuTTY connection to the Linux host rhel2 (again, username = root and password = Netapp1!).

18. Create a mount point for the svm1 NFS volume.

[root@rhel2 ~]# mkdir /svm1
[root@rhel2 ~]#

19. Mount the NFS volume svm1:/ on /svm1.

[root@rhel2 ~]# mount svm1:/ /svm1
[root@rhel2 ~]#

20. Change directory to /svm1/engineering/users.

[root@rhel2 ~]# cd /svm1/engineering/users
[root@rhel2 users]#

21. List the directory's contents.

[root@rhel2 users]# ls
bob  susan
[root@rhel2 users]#

22. Attempt to enter the susan sub-directory.

[root@rhel2 users]# cd susan
bash: cd: susan: Permission denied
[root@rhel2 users]#

23. Attempt to enter the bob sub-directory.

[root@rhel2 users]# cd bob
[root@rhel2 bob]#

11.3 Create Storage for iSCSI

Expected Completion Time: 50 Minutes

This section of the lab is optional, and includes instructions for mounting a LUN on Windows and Linux. If you choose to complete this section you must first complete the “Create a Storage Virtual Machine for iSCSI” section, and then complete either the “Create, Map, and Mount a Windows LUN” section, or the “Create, Map, and Mount a Linux LUN” section as appropriate based on your platform of interest.

The 50 minute time estimate assumes you complete only one of the Windows or Linux LUN sections. You are welcome to complete both of those sections if you choose, but you should plan on needing approximately 90 minutes to complete the entire “Create and Mount a LUN” section.

If you completed the “Create a Storage Virtual Machine for NFS and CIFS” section of this lab then you explored the concept of a Storage Virtual Machine (SVM), created an SVM, and configured it to serve data over NFS and CIFS. If you skipped that section of the lab guide, consider reviewing the introductory text found at the beginning of that section, and each of its subsections, before you proceed further because this section builds on concepts described there.

In this section you are going to create another SVM and configure it for SAN protocols, which means you are going to configure the SVM for iSCSI, since this virtualized lab does not support FC. The configuration steps for iSCSI and FC are similar, so the information provided here is also useful for FC deployment. After you create a new SVM and configure it for iSCSI, you will create a LUN for Windows and/or a LUN for Linux, and then mount the LUN(s) on their respective hosts.

NetApp supports configuring an SVM to serve data over both SAN and NAS protocols, but it is common to see customers use separate SVMs for each in order to separate administrative responsibilities, or for architectural and operational clarity. For example, SAN protocols do not support LIF failover, so you cannot use NAS LIFs to support SAN protocols; you must instead create dedicated LIFs just for SAN. Implementing separate SVMs for SAN and NAS can in cases like this simplify the operational complexity of each SVM’s configuration, making each easier to understand and manage, but ultimately whether to mix or separate is a customer decision, and not a NetApp recommendation.

Since SAN LIFs do not support migration to different nodes, an SVM must have dedicated SAN LIFs on every node that you want to be able to service SAN requests, and you must utilize MPIO and ALUA to manage the controller’s available paths to the LUNs. In the event of a path disruption, MPIO and ALUA compensate by re-routing the LUN communication over an alternate controller path (i.e., over a different SAN LIF).
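For example, once you have mapped a LUN to a Linux host that has device-mapper multipathing configured, you can confirm that multiple ALUA paths are visible with a standard command like the following; this is offered as a sketch rather than output captured from this lab:

[root@rhel1 ~]# multipath -ll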

NetApp best practice is to configure at least one SAN LIF per storage fabric/network on each node in the cluster so that all nodes can provide a path to the LUNs. In large clusters where this would result in the presentation of a large number of paths for a given LUN, we recommend that you use portsets to limit the LUN to seeing no more than 8 LIFs. Data ONTAP 8.3 introduces a new Selective LUN Mapping (SLM) feature to provide further assistance in managing fabric paths. SLM limits LUN path access to just the node that owns the LUN and its HA partner, and Data ONTAP automatically applies SLM to all new LUN map operations. For further information on Selective LUN Mapping, please see the Hands-On Lab for SAN Features in clustered Data ONTAP 8.3.
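For reference, a minimal sketch of creating a portset and binding it to an initiator group follows; the portset and igroup names here are hypothetical, and this lab’s two-node cluster is small enough that no portset is actually required:

cluster1::> lun portset create -vserver svmluns -portset ps_iscsi -protocol iscsi
cluster1::> lun portset add -vserver svmluns -portset ps_iscsi -port-name cluster1-01_iscsi_lif_1
cluster1::> lun igroup bind -vserver svmluns -igroup ig_windows -portset ps_iscsi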


In this lab the cluster contains two nodes connected to a single storage network. You will still configure a total of 4 SAN LIFs, because it is common to see implementations with 2 paths per node for redundancy.

This section of the lab allows you to create and mount a LUN for only Windows, only Linux, or both if you desire. Both the Windows and Linux LUN creation steps require that you complete the “Create a Storage Virtual Machine for iSCSI” section that comes next. If you want to create a Windows LUN, you need to complete the “Create, Map, and Mount a Windows LUN” section that follows. Additionally, if you want to create a Linux LUN, you need to complete the “Create, Map, and Mount a Linux LUN” section that follows after that. You can safely complete both of those last two sections in the same lab.

11.3.1 Create a Storage Virtual Machine for iSCSI

If you do not already have a PuTTY session open to cluster1, open one now following the instructions in the “Accessing the Command Line” section at the beginning of this lab guide and enter the following commands.

1. Display the available aggregates so you can decide which one you want to use to host the root volume for the SVM you will be creating.

cluster1::> aggr show
Aggregate     Size Available Used% State   #Vols  Nodes            RAID Status
--------- -------- --------- ----- ------- ------ ---------------- ------------
aggr0_cluster1_01
           10.26GB   510.6MB   95% online       1 cluster1-01      raid_dp,
                                                                   normal
aggr0_cluster1_02
           10.26GB   510.6MB   95% online       1 cluster1-02      raid_dp,
                                                                   normal
aggr1_cluster1_01
           72.53GB   72.49GB    0% online       3 cluster1-01      raid_dp,
                                                                   normal
aggr1_cluster1_02
           72.53GB   72.53GB    0% online       0 cluster1-02      raid_dp,
                                                                   normal
4 entries were displayed.
cluster1::>

2. Create the SVM svmluns on aggregate aggr1_cluster1_01. Note that the clustered Data ONTAP command line syntax still refers to storage virtual machines as vservers.

cluster1::> vserver create -vserver svmluns -rootvolume svmluns_root -aggregate aggr1_cluster1_01 -language C.UTF-8 -rootvolume-security-style unix -snapshot-policy default
[Job 269] Job is queued: Create svmluns.
[Job 269]
[Job 269] Job succeeded:
Vserver creation completed
cluster1::>

3. Add the iSCSI protocol to the SVM “svmluns”:

cluster1::> vserver iscsi create -vserver svmluns
cluster1::>

4. Display the protocols configured for svmluns.

cluster1::> vserver show-protocols -vserver svmluns
  Vserver: svmluns
Protocols: nfs, cifs, fcp, iscsi, ndmp
cluster1::>

5. Remove all the protocols other than iscsi.

cluster1::> vserver remove-protocols -vserver svmluns -protocols nfs,cifs,fcp,ndmp
cluster1::>


6. Display the configured protocols for svmluns.

cluster1::> vserver show-protocols -vserver svmluns
  Vserver: svmluns
Protocols: iscsi
cluster1::>

7. Display the detailed configuration for the svmluns SVM.

cluster1::> vserver show -vserver svmluns

                                    Vserver: svmluns
                               Vserver Type: data
                            Vserver Subtype: default
                               Vserver UUID: beeb8ca5-580c-11e4-a807-0050569901b8
                                Root Volume: svmluns_root
                                  Aggregate: aggr1_cluster1_01
                                 NIS Domain: -
                 Root Volume Security Style: unix
                                LDAP Client: -
               Default Volume Language Code: C.UTF-8
                            Snapshot Policy: default
                                    Comment:
                               Quota Policy: default
                List of Aggregates Assigned: -
 Limit on Maximum Number of Volumes allowed: unlimited
                        Vserver Admin State: running
                  Vserver Operational State: running
   Vserver Operational State Stopped Reason: -
                          Allowed Protocols: iscsi
                       Disallowed Protocols: nfs, cifs, fcp, ndmp
            Is Vserver with Infinite Volume: false
                           QoS Policy Group: -
                                Config Lock: false
                               IPspace Name: Default
cluster1::>

8. Create 4 SAN LIFs for the SVM svmluns, 2 per node. Do not forget you can save some typing here by using the up arrow to recall previous commands that you can edit and then execute.

cluster1::> network interface create -vserver svmluns -lif cluster1-01_iscsi_lif_1 -role data -data-protocol iscsi -home-node cluster1-01 -home-port e0d -subnet-name Demo -failover-policy disabled -firewall-policy data
cluster1::> network interface create -vserver svmluns -lif cluster1-01_iscsi_lif_2 -role data -data-protocol iscsi -home-node cluster1-01 -home-port e0e -subnet-name Demo -failover-policy disabled -firewall-policy data
cluster1::> network interface create -vserver svmluns -lif cluster1-02_iscsi_lif_1 -role data -data-protocol iscsi -home-node cluster1-02 -home-port e0d -subnet-name Demo -failover-policy disabled -firewall-policy data
cluster1::> network interface create -vserver svmluns -lif cluster1-02_iscsi_lif_2 -role data -data-protocol iscsi -home-node cluster1-02 -home-port e0e -subnet-name Demo -failover-policy disabled -firewall-policy data
cluster1::>

9. Now create a Management Interface LIF for the SVM.

cluster1::> network interface create -vserver svmluns -lif svmluns_admin_lif1 -role data -data-protocol none -home-node cluster1-01 -home-port e0c -subnet-name Demo -failover-policy nextavail -firewall-policy mgmt
cluster1::>

10. Display a list of the LIFs in the cluster.

cluster1::> network interface show
            Logical    Status     Network            Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
Cluster
            cluster1-01_clus1
                       up/up      169.254.224.98/16  cluster1-01   e0a     true
            cluster1-02_clus1
                       up/up      169.254.129.177/16 cluster1-02   e0a     true
cluster1
            cluster1-01_mgmt1
                       up/up      192.168.0.111/24   cluster1-01   e0c     true
            cluster1-02_mgmt1
                       up/up      192.168.0.112/24   cluster1-02   e0c     true
            cluster_mgmt
                       up/up      192.168.0.101/24   cluster1-01   e0c     true
svm1
            svm1_cifs_nfs_lif1
                       up/up      192.168.0.131/24   cluster1-01   e0c     true
            svm1_cifs_nfs_lif2
                       up/up      192.168.0.132/24   cluster1-02   e0c     true
svmluns
            cluster1-01_iscsi_lif_1
                       up/up      192.168.0.133/24   cluster1-01   e0d     true
            cluster1-01_iscsi_lif_2
                       up/up      192.168.0.134/24   cluster1-01   e0e     true
            cluster1-02_iscsi_lif_1
                       up/up      192.168.0.135/24   cluster1-02   e0d     true
            cluster1-02_iscsi_lif_2
                       up/up      192.168.0.136/24   cluster1-02   e0e     true
            svmluns_admin_lif1
                       up/up      192.168.0.137/24   cluster1-01   e0c     true
12 entries were displayed.
cluster1::>

11. Display detailed information for the LIF cluster1-01_iscsi_lif_1.

cluster1::> network interface show -lif cluster1-01_iscsi_lif_1 -instance

                    Vserver Name: svmluns
          Logical Interface Name: cluster1-01_iscsi_lif_1
                            Role: data
                   Data Protocol: iscsi
                       Home Node: cluster1-01
                       Home Port: e0d
                    Current Node: cluster1-01
                    Current Port: e0d
              Operational Status: up
                 Extended Status: -
                         Is Home: true
                 Network Address: 192.168.0.133
                         Netmask: 255.255.255.0
             Bits in the Netmask: 24
                 IPv4 Link Local: -
                     Subnet Name: Demo
           Administrative Status: up
                 Failover Policy: disabled
                 Firewall Policy: data
                     Auto Revert: false
   Fully Qualified DNS Zone Name: none
         DNS Query Listen Enable: false
             Failover Group Name: -
                        FCP WWPN: -
                  Address family: ipv4
                         Comment: -
                  IPspace of LIF: Default
cluster1::>

12. Display a list of all the volumes on the cluster to see the root volume for the svmluns SVM.

cluster1::> volume show
Vserver     Volume       Aggregate          State    Type       Size  Available Used%
----------- ------------ ------------------ -------- ---- ---------- ---------- -----
cluster1-01 vol0         aggr0_cluster1_01  online   RW       9.71GB     6.97GB   28%
cluster1-02 vol0         aggr0_cluster1_02  online   RW       9.71GB     6.36GB   34%
svm1        eng_users    aggr1_cluster1_01  online   RW         10GB     9.50GB    5%
svm1        engineering  aggr1_cluster1_01  online   RW         10GB     9.50GB    5%
svm1        svm1_root    aggr1_cluster1_01  online   RW         20MB    18.86MB    5%
svmluns     svmluns_root aggr1_cluster1_01  online   RW         20MB    18.86MB    5%
6 entries were displayed.
cluster1::>

11.3.2 Create, Map, and Mount a Windows LUN

In an earlier section you created a new SVM and configured it for iSCSI. In the following sub-sections you will perform the remaining steps needed to configure and use a LUN under Windows:

• Gather the iSCSI Initiator Name of the Windows client.
• Create a thin provisioned Windows volume, create a thin provisioned Windows LUN within that volume, and map the LUN so it can be accessed by the Windows client.
• Mount the LUN on a Windows client leveraging multi-pathing.

You must complete all of the subsections of this section in order to use the LUN from the Windows client.


11.3.2.1 Gather the Windows Client iSCSI Initiator Name

You need to determine the Windows client's iSCSI initiator name so that when you create the LUN you can set up an appropriate initiator group to control access to the LUN.

On the desktop of the Windows client named "jumphost" (the main Windows host you use in the lab), perform the following tasks:

1. Click on the Windows button on the far left side of the task bar.


Figure 11-7:

The Start screen opens.

2. Click on Administrative Tools.


Figure 11-8:


Windows Explorer opens to the List of Administrative Tools.

3. Double-click the entry for the iSCSI Initiator tool.


Figure 11-9:

The iSCSI Initiator Properties window opens.

4. Select the Configuration tab.
5. Take note of the value in the “Initiator Name” field, which contains the initiator name for jumphost.

Attention: The initiator name is iqn.1991-05.com.microsoft:jumphost.demo.netapp.com

You will need this value later, so you might want to copy it from the properties window and paste it into a text file on your lab’s desktop so you have it readily available when that time comes.

6. Click OK.



Figure 11-10:

The iSCSI Initiator Properties window closes, and focus returns to the Windows Explorer Administrative Tools window. Leave this window open because you will need to access other tools later in the lab.

11.3.2.2 Create and Map a Windows LUN

You will now create a new thin provisioned Windows LUN named “windows.lun” in the volume winluns on the SVM "svmluns". You will also create an initiator igroup for the LUN and populate it with the Windows host jumphost. An initiator group, or igroup, defines a list of the Fibre Channel WWPNs or iSCSI node names of the hosts that are permitted to see and access the associated LUNs.


1. If you do not already have a PuTTY connection open to cluster1 then please open one now following the instructions in the “Accessing the Command Line” section at the beginning of this lab guide.

2. Create the volume winluns to host the Windows LUN you will be creating in a later step:

cluster1::> volume create -vserver svmluns -volume winluns -aggregate aggr1_cluster1_01 -size 10.31GB -percent-snapshot-space 0 -snapshot-policy none -space-guarantee none -autosize-mode grow -nvfail on
[Job 270] Job is queued: Create winluns.
[Job 270] Job succeeded: Successful
cluster1::>

3. Display a list of the volumes on the cluster.

cluster1::> volume show
Vserver     Volume       Aggregate          State    Type       Size  Available Used%
----------- ------------ ------------------ -------- ---- ---------- ---------- -----
cluster1-01 vol0         aggr0_cluster1_01  online   RW       9.71GB     7.00GB   27%
cluster1-02 vol0         aggr0_cluster1_02  online   RW       9.71GB     6.34GB   34%
svm1        eng_users    aggr1_cluster1_01  online   RW         10GB     9.50GB    5%
svm1        engineering  aggr1_cluster1_01  online   RW         10GB     9.50GB    5%
svm1        svm1_root    aggr1_cluster1_01  online   RW         20MB    18.86MB    5%
svmluns     svmluns_root aggr1_cluster1_01  online   RW         20MB    18.86MB    5%
svmluns     winluns      aggr1_cluster1_01  online   RW      10.31GB    21.31GB    0%
7 entries were displayed.
cluster1::>
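Because winluns was created with no space guarantee and autosize enabled, you may want to confirm those settings before you layer a LUN on top of the volume. A minimal sketch, not a required lab step (the field list is an assumption about this release's volume show fields):

cluster1::> volume show -vserver svmluns -volume winluns -fields space-guarantee,autosize-mode,percent-snapshot-space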

4. Create the Windows LUN named windows.lun:

cluster1::> lun create -vserver svmluns -volume winluns -lun windows.lun -size 10GB -ostype windows_2008 -space-reserve disabled
Created a LUN of size 10g (10742215680)
cluster1::>

5. Add a comment to the LUN definition.

cluster1::> lun modify -vserver svmluns -volume winluns -lun windows.lun -comment "Windows LUN"
cluster1::>

6. Display the LUNs on the cluster.

cluster1::> lun show
Vserver   Path                            State   Mapped   Type         Size
--------- ------------------------------- ------- -------- ------------ --------
svmluns   /vol/winluns/windows.lun        online  unmapped windows_2008 10.00GB
cluster1::>

7. Display a list of the defined igroups.

cluster1::> igroup show
This table is currently empty.
cluster1::>

8. Create a new igroup named winigrp that you will use to manage access to the new LUN.

cluster1::> igroup create -vserver svmluns -igroup winigrp -protocol iscsi -ostype windows -initiator iqn.1991-05.com.microsoft:jumphost.demo.netapp.com
cluster1::>
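Because you supplied -initiator at creation time, the igroup already contains jumphost's initiator name. If you later need to grant another host access to the same LUNs, you can add its initiator to the igroup afterwards; a sketch, where the second initiator name is purely hypothetical:

cluster1::> igroup add -vserver svmluns -igroup winigrp -initiator iqn.1991-05.com.microsoft:otherhost.demo.netapp.com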


9. Display the igroup to verify that the Windows client’s initiator name was added when you created the igroup.

cluster1::> igroup show
Vserver   Igroup       Protocol OS Type  Initiators
--------- ------------ -------- -------- ------------------------------------
svmluns   winigrp      iscsi    windows  iqn.1991-05.com.microsoft:jumphost.
                                         demo.netapp.com
cluster1::>

10. Map the LUN windows.lun to the igroup winigrp.

cluster1::> lun map -vserver svmluns -volume winluns -lun windows.lun -igroup winigrp
cluster1::>

11. Display a list of all the LUNs.

cluster1::> lun show
Vserver   Path                            State   Mapped   Type         Size
--------- ------------------------------- ------- -------- ------------ --------
svmluns   /vol/winluns/windows.lun        online  mapped   windows_2008 10.00GB
cluster1::>

12. Display a list of all the mapped LUNs.

cluster1::> lun mapped show
Vserver    Path                                      Igroup   LUN ID  Protocol
---------- ----------------------------------------- -------- ------- --------
svmluns    /vol/winluns/windows.lun                  winigrp       0  iscsi
cluster1::>

13. Display a detailed report on the configuration of the LUN windows.lun.

cluster1::> lun show -lun windows.lun -instance

              Vserver Name: svmluns
                  LUN Path: /vol/winluns/windows.lun
               Volume Name: winluns
                Qtree Name: ""
                  LUN Name: windows.lun
                  LUN Size: 10.00GB
                   OS Type: windows_2008
         Space Reservation: disabled
             Serial Number: wOj4Q]FMHlq6
                   Comment: Windows LUN
Space Reservations Honored: false
          Space Allocation: disabled
                     State: online
                  LUN UUID: 8e62421e-bff4-4ac7-85aa-2e6e3842ec8a
                    Mapped: mapped
                Block Size: 512
          Device Legacy ID: -
          Device Binary ID: -
            Device Text ID: -
                 Read Only: false
     Fenced Due to Restore: false
                 Used Size: 0
       Maximum Resize Size: 502.0GB
             Creation Time: 10/20/2014 04:36:41
                     Class: regular
      Node Hosting the LUN: cluster1-01
          QoS Policy Group: -
                     Clone: false
  Clone Autodelete Enabled: false
       Inconsistent import: false
cluster1::>

11.3.2.3 Mount the LUN on a Windows Client

The final step is to mount the LUN on the Windows client. You will be using MPIO/ALUA to support multiple paths to the LUN using both of the SAN LIFs you configured earlier on the svmluns SVM. Data ONTAP DSM for


Windows MPIO is the multi-pathing software you will be using for this lab, and that software is already installed on jumphost.

You should begin by validating that the Multi-Path I/O (MPIO) software is working properly on this Windows host. The Administrative Tools window should still be open on jumphost; if you already closed it then you will need to re-open it now so you can access the MPIO tool.

1. On the desktop of JUMPHOST, in the Administrative Tools window which you should still have open, double-click the MPIO tool.


Figure 11-11:

The “MPIO Properties” window opens.

2. Select the Discover Multi-Paths tab.
3. Examine the Add Support for iSCSI devices checkbox. If this checkbox is NOT greyed out then MPIO is improperly configured. This checkbox should be greyed out for this lab, but in the event it is not then place a check in that checkbox, click the Add button, and then click Yes in the reboot dialog to reboot your Windows host. Once the system finishes rebooting, return to this window to verify that the checkbox is now greyed out, indicating that MPIO is properly configured.

4. Click Cancel.



Figure 11-12:

The “MPIO Properties” window closes and focus returns to the “Administrative Tools” window for jumphost. Now you need to begin the process of connecting jumphost to the LUN.

5. In Administrative Tools, double-click the iSCSI Initiator tool.



Figure 11-13:

The “iSCSI Initiator Properties” window opens.

6. Select the Targets tab.
7. Notice that there are no targets listed in the “Discovered Targets” list box, indicating that there are currently no iSCSI targets mapped to this host.
8. Click the Discovery tab.



Figure 11-14:

The Discovery tab is where you begin the process of discovering LUNs, and to do that you must define a target portal to scan. You are going to manually add a target portal to jumphost.

9. Click the Discover Portal… button.



Figure 11-15:

The “Discover Target Portal” window opens. Here you will specify the first of the IP addresses that were assigned to your iSCSI LIFs when you created them on the svmluns SVM. Recall that those LIFs received IP addresses in the range 192.168.0.133-192.168.0.136.

10. Set the “IP Address or DNS name” textbox to 192.168.0.133, the first address in the range for your LIFs.

11. Click OK.


Figure 11-16:

The “Discover Target Portal” window closes, and focus returns to the “iSCSI Initiator Properties”window.

12. The “Target Portals” list now contains an entry for the IP address you entered in the previous step.


13. Click on the Targets tab.


Figure 11-17:

The Targets tab opens to show you the list of discovered targets.

14. In the “Discovered targets” list select the only listed target. Observe that the target’s status is Inactive, because although you have discovered it you have not yet connected to it. Also note that the “Name” of the discovered target in your lab will have a different value than what you see in this guide; that name string is uniquely generated for each instance of the lab. (Make a mental note of that string value as you will see it a lot as you continue to configure iSCSI in later steps of this process.)

15. Click the Connect button.



Figure 11-18:

The “Connect to Target” dialog box opens.

16. Click the Enable multi-path checkbox.
17. Click the Advanced… button.


Figure 11-19:


The “Advanced Settings” window opens.

18. In the “Target portal IP” dropdown menu select the entry containing the IP address you specified when you discovered the target portal, which should be 192.168.0.133. The listed values are IP Address and Port number combinations, and the specific value you want to select here is 192.168.0.133 / 3260.

19. When finished, click OK.


Figure 11-20:

The “Advanced Settings” window closes, and focus returns to the “Connect to Target” window.

20. Click OK.



Figure 11-21:

The “Connect to Target” window closes, and focus returns to the “iSCSI Initiator Properties” window.

21. Notice that the status of the listed discovered target has changed from “Inactive” to “Connected”.


Figure 11-22:

Thus far you have added a single path to your iSCSI LUN, using the address of the cluster1-01_iscsi_lif_1 LIF that you created on the node cluster1-01 for the svmluns SVM. You are now going to add each of the other SAN LIFs present on the svmluns SVM. To begin this procedure you must first edit the properties of your existing connection.

22. Still on the Targets tab, select the discovered target entry for your existing connection.
23. Click Properties.



Figure 11-23:

The Properties window opens. From this window you will be starting the procedure of connecting alternate paths for your newly connected LUN. You will be repeating this procedure 3 times, once for each of the remaining LIFs that are present on the svmluns SVM.

LIF IP Address    Done
192.168.0.134
192.168.0.135
192.168.0.136

24. The Identifier list will contain an entry for every path you have specified so far, so it can serve as a visual indicator of your progress toward defining all of your paths. The first time you enter this window you will see one entry, for the LIF you used to first connect to this LUN.

25. Click Add Session.



Figure 11-24:

The Connect to Target window opens.

26. Check the Enable multi-path checkbox.
27. Click Advanced….


Figure 11-25:

The Advanced Settings window opens.

28. Select the “Target portal IP” entry that contains the IP address of the LIF whose path you are adding in this iteration of the procedure to add an alternate path. The following screenshot shows the 192.168.0.134 address, but the value you specify will depend on which specific path you are configuring.

29. When finished, click OK.



Figure 11-26:

The Advanced Settings window closes, and focus returns to the Connect to Target window.

30. Click OK.



Figure 11-27:

The Connect to Target window closes, and focus returns to the Properties window, where a new entry now appears in the Identifier list. Repeat the procedure from the last 4 screenshots for each of the last two remaining LIF IP addresses.

When you have finished adding all 3 paths, the Identifier list in the Properties window should contain 4 entries.

31. There are 4 entries in the Identifier list when you are finished, indicating that there are 4 sessions, one for each path. Note that it is normal for the identifier values in your lab to differ from those in the screenshot.

32. Click OK.



Figure 11-28:

The Properties window closes, and focus returns to the iSCSI Properties window.

33. Click OK.



Figure 11-29:

The iSCSI Properties window closes, and focus returns to the desktop of jumphost. If the Administrative Tools window is not still open on your desktop, open it again now.

If all went well, jumphost is now connected to the LUN using multi-pathing, so it is time to format your LUN and build a filesystem on it.

34. In Administrative Tools, double-click the Computer Management tool.



Figure 11-30:

The Computer Management window opens.

35. In the left pane of the Computer Management window, navigate to Computer Management (Local) > Storage > Disk Management.


Figure 11-31:

36. When you launch Disk Management, an Initialize Disk dialog will open informing you that you must initialize a new disk before Logical Disk Manager can access it.


Note: If you see more than one disk listed, then MPIO has not correctly recognized that the multiple paths you set up are all for the same LUN. In that case you will need to cancel the Initialize Disk dialog, quit Computer Management, and go back to the iSCSI Initiator tool to review your path configuration steps to find and correct any configuration errors, after which you can return to the Computer Management tool and try again.

Click OK to initialize the disk.


Figure 11-32:

The Initialize Disk window closes, and focus returns to the Disk Management view in the Computer Management window.

37. The new disk shows up in the disk list at the bottom of the window, and has a status of “Unallocated”.
38. Right-click inside the “Unallocated” box for the disk (if you right-click outside this box you will get the incorrect context menu), and select New Simple Volume… from the context menu.



Figure 11-33:

The “New Simple Volume Wizard” window opens.

39. Click the Next button to advance the wizard.



Figure 11-34:

The wizard advances to the “Specify Volume Size” step.

40. The wizard defaults to allocating all of the space in the volume, so click the Next button.



Figure 11-35:

The wizard advances to the “Assign Drive Letter or Path” step.

41. The wizard automatically selects the next available drive letter, which should be E. Click Next.



Figure 11-36:

The wizard advances to the “Format Partition” step.

42. Set the “Volume Label” field to WINLUN.
43. Click Next.



Figure 11-37:

The wizard advances to the “Completing the New Simple Volume Wizard” step.

44. Click Finish.



Figure 11-38:

The “New Simple Volume Wizard” window closes, and focus returns to the Disk Management view of the Computer Management window.

45. The new WINLUN volume now shows as “Healthy” in the disk list at the bottom of the window, indicating that the new LUN is mounted and ready to use. Before you complete this section of the lab, take a look at the MPIO configuration for this LUN by right-clicking inside the box for the WINLUN volume.

46. From the context menu select Properties.



Figure 11-39:

The WINLUN (E:) Properties window opens.

47. Click the Hardware tab.
48. In the “All disk drives” list select the NETAPP LUN C-Mode Multi-Path Disk entry.
49. Click Properties.



Figure 11-40:

The “NETAPP LUN C-Mode Multi-Path Disk Device Properties” window opens.

50. Click the MPIO tab.
51. Notice that you are using the Data ONTAP DSM for multi-path access rather than the Microsoft DSM. We recommend using the Data ONTAP DSM software, as it is the most full-featured option available, although the Microsoft DSM is also supported.

52. The MPIO policy is set to “Least Queue Depth”. A number of different multi-pathing policies are available, but the configuration shown here sends LUN I/O down the path that has the fewest outstanding I/O requests. You can click the More information about MPIO policies link at the bottom of the dialog window for details about all the available policies.

53. The top two paths show both a “Path State” and “TPG State” of “Active/Optimized”. These paths are connected to the node cluster1-01, and the Least Queue Depth policy makes active use of both paths to this node. Conversely, the bottom two paths show a “Path State” of “Unavailable” and a “TPG State” of “Active/Unoptimized”. These paths are connected to the node cluster1-02, and only enter a Path State of “Active/Optimized” if the node cluster1-01 becomes unavailable, or if the volume hosting the LUN migrates over to the node cluster1-02.

54. When you finish reviewing the information in this dialog, click OK to exit. If you changed any of the values in this dialog, you should consider using the Cancel button to discard those changes.



Figure 11-41:

The “NETAPP LUN C-Mode Multi-Path Disk Device Properties” window closes, and focus returns to the“WINLUN (E:) Properties” window.

55. Click OK.



Figure 11-42:

The “WINLUN (E:) Properties” window closes.

56. Close the Computer Management window.



Figure 11-43:

57. Close the Administrative Tools window.


Figure 11-44:


58. You may see a message from Microsoft Windows stating that you must format the disk in drive E: before you can use it. As you may recall, you did format the LUN during the New Simple Volume Wizard, meaning this is an erroneous message from Windows. Click Cancel to ignore it.


Figure 11-45:

Feel free to open Windows Explorer and verify that you can create a file on the E: drive.

This completes this exercise.

11.3.3 Create, Map, and Mount a Linux LUN

In an earlier section you created a new SVM and configured it for iSCSI. In the following sub-sections you will perform the remaining steps needed to configure and use a LUN under Linux:

• Gather the iSCSI Initiator Name of the Linux client.
• Create a thin provisioned Linux volume, create a thin provisioned Linux LUN named linux.lun within that volume, and map the LUN to the Linux client.
• Mount the LUN on the Linux client.

You must complete all of the following subsections in order to use the LUN from the Linux client. Note that you are not required to complete the Windows LUN section before starting this section of the lab guide, but the screenshots and command line output shown here assume that you have. If you did not complete the Windows LUN section, the differences will not affect your ability to create and mount the Linux LUN.

11.3.3.1 Gather the Linux Client iSCSI Initiator Name

You need to determine the Linux client’s iSCSI initiator name so that you can set up an appropriate initiator group to control access to the LUN.

You should already have a PuTTY connection open to the Linux host rhel1. If you do not, then open one now using the instructions found in the “Accessing the Command Line” section at the beginning of this lab guide. The username will be root and the password will be Netapp1!.

1. Change to the directory that hosts the iSCSI configuration files.

[root@rhel1 ~]# cd /etc/iscsi
[root@rhel1 iscsi]# ls
initiatorname.iscsi  iscsid.conf
[root@rhel1 iscsi]#

2. Display the name of the iSCSI initiator.

[root@rhel1 iscsi]# cat initiatorname.iscsi
InitiatorName=iqn.1994-05.com.redhat:rhel1.demo.netapp.com
[root@rhel1 iscsi]#

Important: The initiator name for rhel1 is iqn.1994-05.com.redhat:rhel1.demo.netapp.com.


11.3.3.2 Create and Map a Linux LUN

In this activity, you create a new thin provisioned Linux LUN on the SVM “svmluns” under the volume “linluns”, and also create an initiator igroup for the LUN so that only the Linux host rhel1 can access it. An initiator group, or igroup, defines a list of the Fibre Channel WWPNs or iSCSI node names for the hosts that are permitted to see the associated LUNs.

1. If you do not currently have a PuTTY session open to cluster1 then open one now following the instructions from the “Accessing the Command Line” section at the beginning of this lab guide. The username will be "admin" and the password will be "Netapp1!".

2. Create the thin provisioned volume linluns that will host the Linux LUN you will create in a later step:

cluster1::> volume create -vserver svmluns -volume linluns -aggregate aggr1_cluster1_01 -size 10.31GB -percent-snapshot-space 0 -snapshot-policy none -space-guarantee none -autosize-mode grow -nvfail on
[Job 271] Job is queued: Create linluns.
[Job 271] Job succeeded: Successful
cluster1::>

3. Display the volume list.

cluster1::> volume show
Vserver     Volume       Aggregate          State    Type       Size  Available Used%
----------- ------------ ------------------ -------- ---- ---------- ---------- -----
cluster1-01 vol0         aggr0_cluster1_01  online   RW       9.71GB     6.92GB   28%
cluster1-02 vol0         aggr0_cluster1_02  online   RW       9.71GB     6.27GB   35%
svm1        eng_users    aggr1_cluster1_01  online   RW         10GB     9.50GB    5%
svm1        engineering  aggr1_cluster1_01  online   RW         10GB     9.50GB    5%
svm1        svm1_root    aggr1_cluster1_01  online   RW         20MB    18.85MB    5%
svmluns     linluns      aggr1_cluster1_01  online   RW      10.31GB    10.31GB    0%
svmluns     svmluns_root aggr1_cluster1_01  online   RW         20MB    18.86MB    5%
svmluns     winluns      aggr1_cluster1_01  online   RW      10.31GB    10.28GB    0%
8 entries were displayed.
cluster1::>

4. Display a list of the LUNs on the cluster.

cluster1::> lun show
Vserver   Path                            State   Mapped   Type         Size
--------- ------------------------------- ------- -------- ------------ --------
svmluns   /vol/winluns/windows.lun        online  mapped   windows_2008 10.00GB
cluster1::>

5. Create the thin provisioned Linux LUN linux.lun on the volume linluns:

cluster1::> lun create -vserver svmluns -volume linluns -lun linux.lun -size 10GB -ostype linux -space-reserve disabled
Created a LUN of size 10g (10742215680)
cluster1::>

6. Add a comment to the LUN linux.lun.

cluster1::> lun modify -vserver svmluns -volume linluns -lun linux.lun -comment "Linux LUN"
cluster1::>


7. Display the list of LUNs.

cluster1::> lun show
Vserver   Path                            State   Mapped   Type         Size
--------- ------------------------------- ------- -------- ------------ --------
svmluns   /vol/linluns/linux.lun          online  unmapped linux        10GB
svmluns   /vol/winluns/windows.lun        online  mapped   windows_2008 10.00GB
2 entries were displayed.
cluster1::>

8. Display a list of the cluster's igroups.

cluster1::> igroup show
Vserver   Igroup       Protocol OS Type  Initiators
--------- ------------ -------- -------- ------------------------------------
svmluns   winigrp      iscsi    windows  iqn.1991-05.com.microsoft:jumphost.
                                         demo.netapp.com
cluster1::>

9. Create a new igroup named linigrp that grants rhel1 access to the LUN linux.lun.

cluster1::> igroup create -vserver svmluns -igroup linigrp -protocol iscsi -ostype linux -initiator iqn.1994-05.com.redhat:rhel1.demo.netapp.com
cluster1::>

10. Display a list of the igroups.

cluster1::> igroup show
Vserver   Igroup       Protocol OS Type  Initiators
--------- ------------ -------- -------- ------------------------------------
svmluns   linigrp      iscsi    linux    iqn.1994-05.com.redhat:rhel1.demo.
                                         netapp.com
svmluns   winigrp      iscsi    windows  iqn.1991-05.com.microsoft:jumphost.
                                         demo.netapp.com
2 entries were displayed.
cluster1::>

11. Map the LUN linux.lun to the igroup linigrp.

cluster1::> lun map -vserver svmluns -volume linluns -lun linux.lun -igroup linigrp
cluster1::>

12. Display a list of the LUNs.

cluster1::> lun show
Vserver   Path                            State   Mapped   Type         Size
--------- ------------------------------- ------- -------- ------------ --------
svmluns   /vol/linluns/linux.lun          online  mapped   linux        10GB
svmluns   /vol/winluns/windows.lun        online  mapped   windows_2008 10.00GB
2 entries were displayed.
cluster1::>

13. Display a list of the LUN mappings.

cluster1::> lun mapped show
Vserver    Path                                      Igroup   LUN ID  Protocol
---------- ----------------------------------------- -------- ------- --------
svmluns    /vol/linluns/linux.lun                    linigrp       0  iscsi
svmluns    /vol/winluns/windows.lun                  winigrp       0  iscsi
2 entries were displayed.
cluster1::>

14. Display just the LUN linux.lun.

cluster1::> lun show -lun linux.lun
Vserver   Path                            State   Mapped   Type         Size
--------- ------------------------------- ------- -------- ------------ --------
svmluns   /vol/linluns/linux.lun          online  mapped   linux        10GB
cluster1::>

15. Display LUN mappings for just linux.lun.

cluster1::> lun mapped show -lun linux.lun
Vserver    Path                                      Igroup   LUN ID  Protocol
---------- ----------------------------------------- -------- ------- --------
svmluns    /vol/linluns/linux.lun                    linigrp       0  iscsi
cluster1::>

16. Display detailed LUN mapping information for linux.lun.

cluster1::> lun show -lun linux.lun -instance

              Vserver Name: svmluns
                  LUN Path: /vol/linluns/linux.lun
               Volume Name: linluns
                Qtree Name: ""
                  LUN Name: linux.lun
                  LUN Size: 10GB
                   OS Type: linux
         Space Reservation: disabled
             Serial Number: wOj4Q]FMHlq7
                   Comment: Linux LUN
Space Reservations Honored: false
          Space Allocation: disabled
                     State: online
                  LUN UUID: 1b4912fb-b779-4811-b1ff-7bc3a615454c
                    Mapped: mapped
                Block Size: 512
          Device Legacy ID: -
          Device Binary ID: -
            Device Text ID: -
                 Read Only: false
     Fenced Due to Restore: false
                 Used Size: 0
       Maximum Resize Size: 128.0GB
             Creation Time: 10/20/2014 06:19:49
                     Class: regular
      Node Hosting the LUN: cluster1-01
          QoS Policy Group: -
                     Clone: false
  Clone Autodelete Enabled: false
       Inconsistent import: false
cluster1::>

Data ONTAP 8.2 introduced a space reclamation feature that allows Data ONTAP to reclaim space from a thin provisioned LUN when the client deletes data from it, and also allows Data ONTAP to notify the client when the LUN cannot accept writes due to lack of space on the volume. This feature is supported by VMware ESX 5.0 and later, Red Hat Enterprise Linux 6.2 and later, and Microsoft Windows 2012. The RHEL clients used in this lab are running version 6.6, so you will enable the space reclamation feature for your Linux LUN.

17. Display the space reclamation setting for the LUN linux.lun.

cluster1::> lun show -vserver svmluns -path /vol/linluns/linux.lun -fields space-allocation
vserver path                   space-allocation
------- ---------------------- ----------------
svmluns /vol/linluns/linux.lun disabled
cluster1::>

18. Configure the LUN linux.lun to support space reclamation.

cluster1::> lun modify -vserver svmluns -path /vol/linluns/linux.lun -space-allocation enabled
cluster1::>

19. Display the new space reclamation setting for the LUN linux.lun.

cluster1::> lun show -vserver svmluns -path /vol/linluns/linux.lun -fields space-allocation
vserver path                   space-allocation
------- ---------------------- ----------------
svmluns /vol/linluns/linux.lun enabled
cluster1::>

11.3.3.3 Mount the LUN on a Linux Client

In this section you will use the Linux command line to configure the host rhel1 to connect to the Linux LUN /vol/linluns/linux.lun that you created in the preceding section.

This section assumes that you know how to use the Linux command line. If you are not familiar with these concepts, we recommend that you skip this section of the lab.

1. If you do not currently have a PuTTY session open to rhel1, open one now and log in as user root with the password "Netapp1!".

2. The NetApp Linux Host Utilities kit has been pre-installed on both Red Hat Linux hosts in this lab, and the iSCSI initiator name has already been configured for each host. Confirm that is the case:

[root@rhel1 ~]# rpm -qa | grep netapp
netapp_linux_unified_host_utilities-7-0.x86_64
[root@rhel1 ~]# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1994-05.com.redhat:rhel1.demo.netapp.com
[root@rhel1 ~]#

3. In the /etc/iscsi/iscsid.conf file the node.session.timeo.replacement_timeout value is set to 5 to better support timely path failover, and the node.startup value is set to automatic so that the system will automatically log in to the iSCSI node at startup.

[root@rhel1 ~]# grep replacement_time /etc/iscsi/iscsid.conf
#node.session.timeo.replacement_timeout = 120
node.session.timeo.replacement_timeout = 5
[root@rhel1 ~]# grep node.startup /etc/iscsi/iscsid.conf
# node.startup = automatic
node.startup = automatic
[root@rhel1 ~]#
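If you ever need to apply these two settings to a host where they are not already configured, you could script the edit rather than use a text editor. This is an illustrative sketch only; it assumes the directives already exist uncommented in iscsid.conf (as they do on the lab hosts) and it keeps a backup copy of the file:

[root@rhel1 ~]# sed -i.bak \
    -e 's/^node.session.timeo.replacement_timeout = .*/node.session.timeo.replacement_timeout = 5/' \
    -e 's/^node.startup = .*/node.startup = automatic/' \
    /etc/iscsi/iscsid.conf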

4. You will find that the Red Hat Linux hosts in the lab have the DM-Multipath packages pre-installed and a /etc/multipath.conf file pre-configured to support multi-pathing so that the RHEL host can access the LUN using all of the SAN LIFs you created for the svmluns SVM.

[root@rhel1 ~]# rpm -q device-mapper
device-mapper-1.02.79-8.el6.x86_64
[root@rhel1 ~]# rpm -q device-mapper-multipath
device-mapper-multipath-0.4.9-72.el6.x86_64
[root@rhel1 ~]# cat /etc/multipath.conf
# For a complete list of the default configuration values, see
# /usr/share/doc/device-mapper-multipath-0.4.9/multipath.conf.defaults
# For a list of configuration options with descriptions, see
# /usr/share/doc/device-mapper-multipath-0.4.9/multipath.conf.annotated
#
# REMEMBER: After updating multipath.conf, you must run
#
# service multipathd reload
#
# for the changes to take effect in multipathd
# NetApp recommended defaults
defaults {
    flush_on_last_del yes
    max_fds max
    queue_without_daemon no
    user_friendly_names no
    dev_loss_tmo infinity
    fast_io_fail_tmo 5
}
blacklist {
    devnode "^sda"
    devnode "^hd[a-z]"
    devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
    devnode "^ccis.*"
}
devices {
    # NetApp iSCSI LUNs
    device {
        vendor "NETAPP"
        product "LUN"
        path_grouping_policy group_by_prio
        features "3 queue_if_no_path pg_init_retries 50"
        prio "alua"
        path_checker tur
        failback immediate
        path_selector "round-robin 0"
        hardware_handler "1 alua"
        rr_weight uniform
        rr_min_io 128
        getuid_callout "/lib/udev/scsi_id -g -u -d /dev/%n"
    }
}
[root@rhel1 ~]#

5. You now need to start the iSCSI software service on rhel1 and configure it to start automatically at boot time. Note that a force-start is only necessary the very first time you start the iscsid service on a host.

[root@rhel1 ~]# service iscsid status
iscsid is stopped
[root@rhel1 ~]# service iscsid force-start
Starting iscsid:                                           [  OK  ]
[root@rhel1 ~]# service iscsi status
No active sessions
[root@rhel1 ~]# chkconfig iscsi on
[root@rhel1 ~]# chkconfig --list iscsi
iscsi           0:off   1:off   2:on    3:on    4:on    5:on    6:off
[root@rhel1 ~]#

6. Next discover the available targets using the iscsiadm command. Note that the exact values used for the node paths may differ in your lab from what is shown in this example, and that after running this command there will not yet be any active iSCSI sessions because you have not yet created the necessary device files.

[root@rhel1 ~]# iscsiadm --mode discovery --op update --type sendtargets --portal 192.168.0.133
192.168.0.133:3260,1028 iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4
192.168.0.136:3260,1031 iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4
192.168.0.135:3260,1030 iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4
192.168.0.134:3260,1029 iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4
[root@rhel1 ~]# iscsiadm --mode session
iscsiadm: No active sessions.
[root@rhel1 ~]#

7. Create the devices necessary to support the discovered nodes, after which the sessions become active.

[root@rhel1 ~]# iscsiadm --mode node -l all
Logging in to [iface: default, target: iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4, portal: 192.168.0.134,3260] (multiple)
Logging in to [iface: default, target: iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4, portal: 192.168.0.136,3260] (multiple)
Logging in to [iface: default, target: iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4, portal: 192.168.0.135,3260] (multiple)
Logging in to [iface: default, target: iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4, portal: 192.168.0.133,3260] (multiple)
Login to [iface: default, target: iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4, portal: 192.168.0.134,3260] successful.
Login to [iface: default, target: iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4, portal: 192.168.0.136,3260] successful.
Login to [iface: default, target: iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4, portal: 192.168.0.135,3260] successful.
Login to [iface: default, target: iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4, portal: 192.168.0.133,3260] successful.
[root@rhel1 ~]# iscsiadm --mode session
tcp: [1] 192.168.0.134:3260,1029 iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4
tcp: [2] 192.168.0.136:3260,1031 iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4
tcp: [3] 192.168.0.135:3260,1030 iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4
tcp: [4] 192.168.0.133:3260,1028 iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4
[root@rhel1 ~]#

8. At this point the Linux client sees the LUN over all four paths, but it does not yet understand that all four paths represent the same LUN.

[root@rhel1 ~]# sanlun lun show
controller(7mode)/                            device   host    lun
vserver(Cmode)     lun-pathname               filename adapter protocol size  product
------------------------------------------------------------------------------------------------
svmluns            /vol/linluns/linux.lun     /dev/sde host3   iSCSI    10g   cDOT
svmluns            /vol/linluns/linux.lun     /dev/sdd host4   iSCSI    10g   cDOT
svmluns            /vol/linluns/linux.lun     /dev/sdc host5   iSCSI    10g   cDOT
svmluns            /vol/linluns/linux.lun     /dev/sdb host6   iSCSI    10g   cDOT
[root@rhel1 ~]#

9. Since the lab includes a pre-configured /etc/multipath.conf file, you just need to start the multipathd service to handle the multiple path management and configure it to start automatically at boot time.

[root@rhel1 ~]# service multipathd status
multipathd is stopped
[root@rhel1 ~]# service multipathd start
Starting multipathd daemon:                                [  OK  ]
[root@rhel1 ~]# service multipathd status
multipathd (pid 8656) is running...
[root@rhel1 ~]# chkconfig multipathd on
[root@rhel1 ~]# chkconfig --list multipathd
multipathd      0:off   1:off   2:on    3:on    4:on    5:on    6:off
[root@rhel1 ~]#

10. The multipath command displays the configuration of DM-Multipath, and the multipath -ll command displays a list of the multipath devices. DM-Multipath maintains a device file under /dev/mapper that you use to access the multipathed LUN (in order to create a filesystem on it and to mount it); the first line of output from the multipath -ll command lists the name of that device file (in this example 3600a0980774f6a34515d464d486c7137). The autogenerated name for this device file will likely differ in your copy of the lab. Also pay attention to the output of the sanlun lun show -p command, which shows information about the Data ONTAP path of the LUN, the LUN’s size, its device file name under /dev/mapper, the multipath policy, and also information about the various device paths themselves.

[root@rhel1 ~]# multipath -ll
3600a0980774f6a34515d464d486c7137 dm-2 NETAPP,LUN C-Mode
size=10G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 6:0:0:0 sdb 8:16 active ready running
| `- 3:0:0:0 sde 8:64 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
  |- 5:0:0:0 sdc 8:32 active ready running
  `- 4:0:0:0 sdd 8:48 active ready running
[root@rhel1 ~]# ls -l /dev/mapper
total 0
lrwxrwxrwx 1 root root      7 Oct 20 06:50 3600a0980774f6a34515d464d486c7137 -> ../dm-2
crw-rw---- 1 root root 10, 58 Oct 19 18:57 control
lrwxrwxrwx 1 root root      7 Oct 19 18:57 vg_rhel1-lv_root -> ../dm-0
lrwxrwxrwx 1 root root      7 Oct 19 18:57 vg_rhel1-lv_swap -> ../dm-1
[root@rhel1 ~]# sanlun lun show -p

                ONTAP Path: svmluns:/vol/linluns/linux.lun
                       LUN: 0
                  LUN Size: 10g
                   Product: cDOT
               Host Device: 3600a0980774f6a34515d464d486c7137
          Multipath Policy: round-robin 0
        Multipath Provider: Native
--------- ---------- ------- ------------ ----------------------------------------------
host      vserver
path      path       /dev/   host         vserver
state     type       node    adapter      LIF
--------- ---------- ------- ------------ ----------------------------------------------
up        primary    sdb     host6        cluster1-01_iscsi_lif_1
up        primary    sde     host3        cluster1-01_iscsi_lif_2
up        secondary  sdc     host5        cluster1-02_iscsi_lif_1
up        secondary  sdd     host4        cluster1-02_iscsi_lif_2
[root@rhel1 ~]#

You can see even more detail about the configuration of multipath and the LUN as a whole by running the commands multipath -v3 -d -ll or iscsiadm -m session -P 3. As the output of these commands is rather lengthy, it is omitted here.
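Since the following steps need the autogenerated /dev/mapper name, you can also capture it into a shell variable instead of copying and pasting it. A minimal sketch, assuming exactly one NETAPP multipath map exists on the host:

[root@rhel1 ~]# LUN_DEV=/dev/mapper/$(multipath -ll | awk '/NETAPP/{print $1; exit}')
[root@rhel1 ~]# echo $LUN_DEV
/dev/mapper/3600a0980774f6a34515d464d486c7137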

11. The LUN is now fully configured for multipath access, so the only steps remaining before you can use the LUN on the Linux host are to create a filesystem and mount it. When you run the following commands in your lab you will need to substitute in the /dev/mapper/… string that identifies your LUN (get that string from the output of ls -l /dev/mapper):

[root@rhel1 ~]# mkfs.ext4 /dev/mapper/3600a0980774f6a34515d464d486c7137
mke2fs 1.41.12 (17-May-2010)
Discarding device blocks: 0/204800 done
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=1 blocks, Stripe width=16 blocks
655360 inodes, 2621440 blocks
131072 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2684354560
80 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 34 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
[root@rhel1 ~]# mkdir /linuxlun
[root@rhel1 ~]# mount -t ext4 -o discard /dev/mapper/3600a0980774f6a34515d464d486c7137 /linuxlun
[root@rhel1 ~]# df
Filesystem                                    1K-blocks    Used Available Use% Mounted on
/dev/mapper/vg_rhel1-lv_root                   11877388 4962816   6311232  45% /
tmpfs                                            444612      76    444536   1% /dev/shm
/dev/sda1                                        495844   40084    430160   9% /boot
svm1:/                                            19456     128     19328   1% /svm1
/dev/mapper/3600a0980774f6a34515d464d486c7137  10321208  154100   9642820   2% /linuxlun
[root@rhel1 ~]# ls /linuxlun
lost+found
[root@rhel1 ~]# echo "hello from rhel1" > /linuxlun/test.txt
[root@rhel1 ~]# cat /linuxlun/test.txt
hello from rhel1
[root@rhel1 ~]# ls -l /linuxlun/test.txt
-rw-r--r-- 1 root root 6 Oct 20 06:54 /linuxlun/test.txt
[root@rhel1 ~]#

The discard option for mount allows the Red Hat host to utilize space reclamation for the LUN.
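If you want to trigger space reclamation manually rather than relying on inline discards, the fstrim utility (part of util-linux on RHEL 6) can be run against the mount point. An illustrative sketch, not a required lab step; it assumes the filesystem was mounted with discard support as above:

[root@rhel1 ~]# fstrim -v /linuxlun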

12. To have RHEL automatically mount the LUN’s filesystem at boot time, run the following command (modified to reflect the multipath device path being used in your instance of the lab) to add the mount information to the /etc/fstab file. The following command should be entered as a single line:

[root@rhel1 ~]# echo '/dev/mapper/3600a0980774f6a34515d464d486c7137 /linuxlun ext4 _netdev,discard,defaults 0 0' >> /etc/fstab
[root@rhel1 ~]#
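As a quick sanity check that the new fstab entry is well formed, you can unmount the filesystem and let mount -a remount it from /etc/fstab; a malformed entry would produce an error at this point. Illustrative only:

[root@rhel1 ~]# umount /linuxlun
[root@rhel1 ~]# mount -a
[root@rhel1 ~]# df -h /linuxlun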


12 References

The following references were used in writing this lab guide.

• TR-3982: “NetApp Clustered Data ONTAP 8.2.X – an Introduction”, July 2014
• TR-4100: “Nondisruptive Operations and SMB File Shares for Clustered Data ONTAP”, April 2013
• TR-4129: “Namespaces in clustered Data ONTAP”, July 2014


13 Version History

Version        Date            Document Version History
Version 1.0    October 2014    Initial Release for Hands On Labs
Version 1.0.1  December 2014   Updates for Lab on Demand
Version 1.1    April 2015      Updated for Data ONTAP 8.3GA and other application software. NDO section spun out into a separate lab guide.
Version 1.2    October 2015    Updated for Data ONTAP 8.3.1GA and other application software.


Refer to the Interoperability Matrix Tool (IMT) on the NetApp Support site to validate that the exact product and feature versions described in this document are supported for your specific environment. The NetApp IMT defines product components and versions that can be used to construct configurations that are supported by NetApp. Specific results depend on each customer's installation in accordance with published specifications.

NetApp provides no representations or warranties regarding the accuracy, reliability, or serviceability of any information or recommendations provided in this publication, or with respect to any results that may be obtained by the use of the information or observance of any recommendations provided herein. The information in this document is distributed AS IS, and the use of this information or the implementation of any recommendations or techniques herein is a customer’s responsibility and depends on the customer’s ability to evaluate and integrate them into the customer’s operational environment. This document and the information contained herein may be used solely in connection with the NetApp products discussed in this document.

Go further, faster®

© 2015 NetApp, Inc. All rights reserved. No portions of this presentation may be reproduced without prior written consent of NetApp, Inc. Specifications are subject to change without notice. NetApp and the NetApp logo are registered trademarks of NetApp, Inc. in the United States and/or other countries. All other brands or products are trademarks or registered trademarks of their respective holders and should be treated as such.

