7/23/2019 Field Installation Guide-V2 1 Foundation

http://slidepdf.com/reader/full/field-installation-guide-v2-1-foundation 1/61

 

Field Installation Guide

Foundation 2.1

26-Aug-2015

Notice

Copyright

Copyright 2015 Nutanix, Inc.

Nutanix, Inc.

1740 Technology Drive, Suite 150

San Jose, CA 95110

 All rights reserved. This product is protected by U.S. and international copyright and intellectual property

laws. Nutanix is a trademark of Nutanix, Inc. in the United States and/or other jurisdictions. All other marks

and names mentioned herein may be trademarks of their respective companies.

License

The provision of this software to you does not grant any licenses or other rights under any Microsoft

patents with respect to anything other than the file server implementation portion of the binaries for this

software, including no licenses or any other rights in any hardware or any devices or software that are used

to communicate with or in connection with this software.

Conventions

Convention            Description
variable_value        The action depends on a value that is unique to your environment.
ncli> command         The commands are executed in the Nutanix nCLI.
user@host$ command    The commands are executed as a non-privileged user (such as nutanix) in the system shell.
root@host# command    The commands are executed as the root user in the vSphere or Acropolis host shell.
> command             The commands are executed in the Hyper-V host shell.
output                The information is displayed as output from a command or in a log file.

Default Cluster Credentials

Interface               Target                  Username       Password
Nutanix web console     Nutanix Controller VM   admin          admin
vSphere client          ESXi host               root           nutanix/4u
SSH client or console   ESXi host               root           nutanix/4u
SSH client or console   Acropolis host          root           nutanix/4u
SSH client or console   Hyper-V host            Administrator  nutanix/4u
SSH client              Nutanix Controller VM   nutanix        nutanix/4u

Version

Last modified: August 26, 2015 (2015-08-26 16:25:24 GMT-7)

Contents

Release Notes......................................................................5

1: Field Installation Overview.....................................................7
    Imaging Nodes..................................................................7
        Summary: Imaging a Cluster.................................................7
        Summary: Imaging a Node....................................................8
    Supported Hypervisors..........................................................8

2: Preparing Installation Environment..............................................9
    Preparing a Workstation........................................................9
    Setting Up the Network........................................................13

3: Imaging a Cluster..............................................................15
    Configuring Global Parameters.................................................16
    Configuring Node Parameters...................................................18
    Configuring Image Parameters..................................................22
    Configuring Cluster Parameters................................................23
    Monitoring Progress...........................................................25
    Cleaning Up After Installation................................................28

4: Imaging a Node.................................................................29
    Installing a Hypervisor.......................................................29
        Installing ESXi...........................................................32
        Installing Hyper-V........................................................33
        Installing KVM............................................................37
    Installing the Controller VM..................................................44

5: Downloading Installation Files.................................................48
    Foundation Files..............................................................50
    Phoenix Files.................................................................51

6: Hypervisor ISO Images..........................................................52

7: Setting IPMI Static IP Address.................................................54

8: Troubleshooting................................................................56
    Fixing IPMI Configuration Problems............................................56
    Fixing Imaging Problems.......................................................57
    Frequently Asked Questions (FAQ)..............................................58

Release Notes

Foundation Release 2.1.2

This release includes the following enhancements and changes:

• Support for the NX-9060-G4 platform.

• Support for the Acropolis hypervisor (AHV) 20150616 installer. (This installer is not supported in earlier Foundation releases.)

• Resolved an issue where an ESX install would intermittently fail at first boot, complaining about a missing "datastore1" [ENG-36019].

• You can upgrade from Foundation 2.1.x to 2.1.2 using the steps described for a 2.0.x to 2.1 upgrade in

the "Foundation Release 2.1" section.

Foundation Release 2.1.1.1

This release includes a fix to support the Firefox and Safari browsers during the cluster creation step

[ENG-33759]. Foundation 2.1.1 works properly with Chrome and Internet Explorer, so Foundation 2.1.1.1 is

needed only if you are using Firefox or Safari. In addition, the instructions for installing the Hyper-V or KVM hypervisor on a single node using a Phoenix ISO (instead of using Foundation and a NOS tarball to image a single node) have been updated in this guide (see Installing Hyper-V on page 33 or Installing KVM on page 37).

Foundation Release 2.1.1

This release includes the following enhancements and changes:

• Support for three new platforms: NX-3175-G4, NX-1065S, and NX-1065-G4.

• Support for ESXi 6.0 as the hypervisor (see Hypervisor ISO Images on page 52).

• Expanded support for clusters with NX-6035C nodes (always imaged with KVM) to allow either Hyper-V

or ESXi imaging of the other nodes. (Previously, only ESXi was supported.)

• Additional bug fixes for a smoother imaging experience.

• You can upgrade from Foundation 2.1 to 2.1.1 using the steps described for a 2.0.x to 2.1 upgrade in the following section.

• Always use the latest version of Foundation when possible. The latest version provides the highest

feature maturity and latest hardware support. For example, only Foundation 2.1.1 (or later) supports the

firmware version (4.42) used in quad-port Haswell models.

Foundation Release 2.1

This release includes the following enhancements and changes:

• Added support for the NX-6035C. Foundation will image the NX-6035C using KVM in parallel with ESXi

imaging on other nodes.

• Foundation now installs NOS using the upgrade tarball rather than a Phoenix ISO, and it supports any NOS version greater than 3.5. You can download the Foundation files and the NOS tarball from the support portal (see Downloading Installation Files on page 48).

• Foundation now installs KVM using a Nutanix KVM ISO. The ISO is included in Foundation by default

at /home/nutanix/foundation/isos/hypervisor/kvm, and future updates will be made available on the

support portal.

• Foundation supports KVM on NOS version 4.1 or later; it does not support KVM on earlier NOS

versions.

• Previous versions of Foundation used a 24 GB virtual disk, but Foundation 2.1 uses a 30 GB virtual

disk.

• There is a new procedure when using Phoenix to install KVM on a node (see Installing a Hypervisor  on

page 29).

• Because Foundation 2.1 includes a new OVF file, it is recommended that all users install Foundation

2.1 from scratch (see Preparing a Workstation on page 9). However, it is possible to upgrade

to version 2.1 from version 2.0.x using the following steps. (Upgrading from a pre-2.0 version is not

supported.)

1. Copy the Foundation tarball (foundation-version#.tar.gz) from the support portal to /home/nutanix in your VM.

2. Navigate to /home/nutanix.

3. Enter the following five commands:

$ sudo service foundation_service stop
$ rm -rf foundation
$ tar xzf foundation-version#.tar.gz
$ sudo yum install python-scp
$ sudo service foundation_service restart

If the first command (foundation_service stop) is skipped or the commands are not run in order, the

user may get bizarre errors after upgrading. To fix this situation, enter the following two commands:

$ sudo pkill -9 foundation

$ sudo service foundation_service restart

1: Field Installation Overview

Nutanix installs the KVM hypervisor and the Nutanix Operating System (NOS) Controller VM at the factory

before shipping a node to a customer. To use a different hypervisor (ESXi or Hyper-V) on factory nodes

or to use any hypervisor on bare metal nodes, the nodes must be imaged in the field. This guide provides step-by-step instructions on how to image nodes (install a hypervisor and then the NOS Controller VM) after they have been physically installed at a site, and how to configure the nodes into one or more clusters.

Note: Only Nutanix sales engineers, support engineers, and partners are authorized to perform

a field installation. Field installation can be used to cleanly install new nodes (blocks) in a cluster 

or to install a different hypervisor on a single node. It should not be used to upgrade the

hypervisor or switch hypervisors of nodes in an existing cluster. (You can use the Foundation tool to re-image the nodes of an existing cluster that you no longer want by first destroying the cluster.)

Imaging Nodes

 A field installation can be performed for a cluster (multiple nodes that can be configured as one or more

clusters) or a single node.

Summary: Imaging a Cluster

Details of these steps are in Preparing Installation Environment  on page 9 (step 1) and Imaging a

Cluster  on page 15 (step 2).

1. Set up the installation environment as follows:

a. Download Foundation (multi-node installation tool), NOS package, and hypervisor ISO image files to

a workstation. When installing ESXi or Hyper-V, the customer must provide a hypervisor ISO image

file.

b. Install Oracle VM VirtualBox on the workstation and configure the Foundation VM.

c. Connect the Ethernet ports on the nodes to a switch.

2. Open the Foundation GUI on the workstation and configure the following:

a. Specify global parameters (IPMI, hypervisor, and Controller VM addresses and credential information).

b. Identify the nodes to image (configure discovered nodes and add bare metal nodes as desired).

c. Select the hypervisor ISO image and NOS package files to use.

d. [optional] Create cluster(s) and assign nodes to the cluster(s).

e. Start the imaging process and monitor progress.

Summary: Imaging a Node

Details of these steps are in Imaging a Node on page 29.

Note: To image a single node, you can use either Foundation (the cluster procedure above) or the following procedure, which uses a tool called Phoenix.

1. Download Foundation and NOS tarballs to a workstation. If installing ESXi or Hyper-V as the hypervisor,

also download an ESXi or Hyper-V ISO image.

2. Create a Phoenix ISO image file (derived from the Foundation and NOS tarballs).

3. Sign into the IPMI web console for that node, attach a hypervisor ISO image file, provide required node

information, and then restart the node.

4. Repeat step 3 for the Phoenix ISO image file.

Supported Hypervisors

Foundation supports imaging an ESXi, Hyper-V, or KVM hypervisor on any Nutanix hardware model except for the following:

• Foundation does not support the NX-2000 and NX-3000 series. (This refers to the original NX-3000

series only. The NX-3050/3060 series is supported.)

• Hyper-V requires a 64 GB DOM.

• NX-7000 series:

• ESXi version 5.1 or later is supported; earlier ESXi versions are not supported.

• Hyper-V standard and datacenter versions are supported; the free version is not supported.

Note: See Hypervisor ISO Images on page 52 for a list of supported ESXi and Hyper-V

versions.

2: Preparing Installation Environment

Imaging is performed from a workstation with access to the IPMI interfaces of the nodes in the cluster.

Imaging a cluster in the field requires first installing certain tools on the workstation and then setting up the environment to run those tools. This requires two preparation tasks:

Video: Click here to see a video (MP4 format) demonstration of this procedure. (The video may

not reflect the latest features described in this section.)

1. Prepare the workstation. Preparing the workstation can be done on or off site at any time prior to

installation. This includes downloading ISO images, installing Oracle VM VirtualBox, and using

VirtualBox to configure various parameters on the Foundation VM (see Preparing a Workstation on

page 9).

2. Set up the network. The nodes and workstation must have network access to each other through a

switch at the site (see Setting Up the Network  on page 13).

Preparing a Workstation

 A workstation is needed to host the Foundation VM during imaging. To prepare the workstation, do the

following:

Note: You can perform these steps either before going to the installation site (if you use a portable

laptop) or at the site (if you can connect to the web).

1. Get a workstation (laptop or desktop computer) that you can use for the installation.

The workstation must have at least 3 GB of memory (Foundation VM size plus 1 GB), 25 GB of disk

space (preferably SSD), and a physical (wired) network adapter.
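The minimums above can be checked from a shell before you travel to the site. The following is a rough sketch for a Linux workstation (the 3 GB and 25 GB thresholds come from the text; the commands assume /proc/meminfo and a POSIX df are available — adjust for macOS or Windows):

```shell
# Rough preflight check against the stated minimums:
# 3 GB of memory and 25 GB of free disk space (Linux-specific sketch).
mem_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
avail_kb=$(df -kP "$HOME" | awk 'NR==2 {print $4}')
if [ "$mem_kb" -ge $((3 * 1024 * 1024)) ]; then echo "RAM OK"; else echo "RAM below 3 GB"; fi
if [ "$avail_kb" -ge $((25 * 1024 * 1024)) ]; then echo "Disk OK"; else echo "less than 25 GB free"; fi
```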

2. Go to the Foundation and NOS download pages in the Nutanix support portal (see Downloading 

Installation Files on page 48) and download the following files to a temporary directory on the

workstation.

• Foundation_VM_OVF-version#.tar. This tar file includes the following files:

  • Foundation_VM-version#.ovf. This is the Foundation VM OVF configuration file for the version# release, for example Foundation_VM-2.1.ovf.

  • Foundation_VM-version#-disk1.vmdk. This is the Foundation VM VMDK file for the version# release, for example Foundation_VM-2.1-disk1.vmdk.

• VirtualBox-version#-[OSX|Win].[dmg|exe]. This is the Oracle VM VirtualBox installer for Mac OS (VirtualBox-version#-OSX.dmg) or Windows (VirtualBox-version#-Win.exe). Oracle VM VirtualBox is a free open source tool used to create a virtualized environment on the workstation.

Note: Links to the VirtualBox files may not appear on the download page for every

Foundation version. (The Foundation 2.0 download page has links to the VirtualBox files.)

• nutanix_installer_package-version#.tar.gz. This is the tarball used for imaging the desired NOS release. Go to the NOS Releases download page on the support portal to download this file. (You can download all the other files from the Foundation download page.)

3. Go to the download location and extract Foundation_VM_OVF-version#.tar by entering the following command:

$ tar -xf Foundation_VM_OVF-version#.tar

Note: This assumes the tar command is available. If it is not, use the corresponding tar utility

for your environment.

4. Open the Oracle VM VirtualBox installer and install Oracle VM VirtualBox using the default options. See the Oracle VM VirtualBox User Manual for installation and startup instructions (https://www.virtualbox.org/wiki/Documentation).

Note: This section describes how to use Oracle VM VirtualBox to create a virtual environment.

Optionally, you can use an alternate tool such as VMware vSphere in place of Oracle VM

VirtualBox.

5. Create a new folder called VirtualBox VMs in your home directory.

On a Windows system this is typically C:\Users\user_name\VirtualBox VMs.

6. Copy the Foundation_VM-version#.ovf and Foundation_VM-version#-disk1.vmdk files to the VirtualBox VMs folder that you created in step 5.

7. Start Oracle VM VirtualBox.

Figure: VirtualBox Welcome Screen

8. Click the File option of the main menu and then select Import Appliance from the pull-down list.

9. Find and select the Foundation_VM-version#.ovf file, and then click Next.

10. Click the Import button.

11. In the left column of the main screen, select Foundation_VM-version#  and click Start.

The Foundation VM console launches and the VM operating system boots.

12. At the login screen, log in as the nutanix user with the password nutanix/4u.

The Foundation VM desktop appears (after it loads).

13. If you want to enable file drag-and-drop functionality between your workstation and the Foundation VM,

install Oracle Additions as follows:

a. On the VirtualBox window for the Foundation VM, select Devices > Insert Guest Additions CD

Image... from the menu.

 A VBOXADDITIONS CD entry appears on the Foundation VM desktop.

b. Click OK when prompted to Open Autorun Prompt and then click Run.

c. Enter the root password (nutanix/4u) and then click Authenticate.

d. After the installation is complete, press the return key to close the VirtualBox Guest Additions installation window.

e. Right-click the VBOXADDITIONS CD entry on the desktop and select Eject.

f. Reboot the Foundation VM by selecting System > Shutdown... > Restart from the Linux GUI.

Note:  A reboot is necessary for the changes to take effect.

g.  After the Foundation VM reboots, select Devices > Drag 'n' Drop > Bidirectional from the menu on

the VirtualBox window for the Foundation VM.

14. Open a terminal session and run the ifconfig command to determine whether the Foundation VM was able to get an IP address from the DHCP server.

If the Foundation VM has a valid IP address, skip to the next step. Otherwise, configure a static IP as

follows:

Note: Normally, the Foundation VM needs to be on a public network in order to copy selected

ISO files to the Foundation VM in the next two steps. This might require setting a static IP

address now and setting it again when the workstation is on a different (typically private)

network for the installation (see Imaging a Cluster  on page 15).

a. Double click the set_foundation_ip_address  icon on the Foundation VM desktop.

Figure: Foundation VM: Desktop

b. In the pop-up window, click the Run in Terminal button.

Figure: Foundation VM: Terminal Window 

c. In the Select Action box in the terminal window, select Device Configuration.

Note: Selections in the terminal window can be made using the indicated keys only. (Mouse

clicks do not work.)

Figure: Foundation VM: Action Box 

d. In the Select a Device box, select eth0.

Figure: Foundation VM: Device Configuration Box 

e. In the Network Configuration box, remove the asterisk in the Use DHCP field (which is set by

default), enter appropriate addresses in the Static IP, Netmask, and Default gateway IP fields, and

then click the OK button.

Figure: Foundation VM: Network Configuration Box 

f. Click the Save button in the Select a Device box and the Save & Quit button in the Select Action

box.

This saves the configuration and closes the terminal window.
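For reference, the terminal tool writes these values into the interface's configuration file. The following is an illustrative sketch only — the exact file contents depend on the Linux release inside the Foundation VM, and every address shown is a placeholder:

```
# /etc/sysconfig/network-scripts/ifcfg-eth0 -- illustrative only
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none        # DHCP disabled (asterisk removed from "Use DHCP")
IPADDR=10.1.1.100     # placeholder Static IP
NETMASK=255.255.255.0 # placeholder Netmask
GATEWAY=10.1.1.1      # placeholder Default gateway IP
```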

15. Copy nutanix_installer_package-version#.tar.gz (downloaded in step 2) to the /home/nutanix/foundation/nos folder.

16. If you intend to install ESXi or Hyper-V as the hypervisor, download the hypervisor ISO image into the

appropriate folder for that hypervisor.

• ESXi ISO image: /home/nutanix/foundation/isos/hypervisor/esx

• Hyper-V ISO image: /home/nutanix/foundation/isos/hypervisor/hyperv

Note: Customers must provide a supported ESXi or Hyper-V ISO image (see Hypervisor ISO

Images on page 52). Customers do not have to provide a KVM image because Foundation

automatically puts a KVM ISO into /home/nutanix/foundation/isos/hypervisor/kvm .
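Steps 15 and 16 can be sketched as shell commands run inside the Foundation VM. This is illustrative only: the file names are placeholders for your actual downloads, and FOUNDATION_HOME defaults to $HOME/foundation, which resolves to /home/nutanix/foundation for the nutanix user:

```shell
# Stage the NOS tarball and (optionally) a hypervisor ISO where Foundation
# expects them. Paths per the text above; file names are placeholders.
FOUNDATION_HOME="${FOUNDATION_HOME:-$HOME/foundation}"
mkdir -p "$FOUNDATION_HOME/nos" \
         "$FOUNDATION_HOME/isos/hypervisor/esx" \
         "$FOUNDATION_HOME/isos/hypervisor/hyperv"
# Copy the NOS tarball downloaded earlier (placeholder pattern).
if ls nutanix_installer_package-*.tar.gz >/dev/null 2>&1; then
  cp nutanix_installer_package-*.tar.gz "$FOUNDATION_HOME/nos/"
fi
# Copy a customer-provided ESXi ISO only if installing ESXi (placeholder name).
if [ -f esxi.iso ]; then
  cp esxi.iso "$FOUNDATION_HOME/isos/hypervisor/esx/"
fi
```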

Setting Up the Network

The network must be set up properly on site before imaging nodes through the Foundation tool. To set up

the network connections, do the following:

Note: You can connect to either a managed switch (routing tables) or a flat switch (no routing

tables). A flat switch is often recommended to protect against configuration errors that could affect

the production environment. Foundation includes a multi-homing feature that allows you to image

the nodes using production IP addresses despite being connected to a flat switch (see Imaging a

Cluster  on page 15).

1. Connect the first 1 GbE network interface of each node to a 1 GbE Ethernet switch. The IPMI LAN interfaces of the nodes must be in failover mode (the factory default setting).

The exact location of the port depends on the model type. See the hardware manual for your model

to determine the port location. The following figure illustrates the location on the back of an NX-3050

(middle RJ-45 interface).

Figure: Port Locations (NX-3050)

Note: Unlike Nutanix systems, which only require that you connect the 1 GbE port, Dell

systems require that you connect both the iDRAC port (which is used instead of an IPMI port)

and one of the 1 GbE ports.

Figure: Port Locations (Dell System)

2. Connect the installation workstation (see Preparing a Workstation on page 9) to the same 1 GbE

switch as the nodes.

3: Imaging a Cluster

This procedure describes how to install a selected hypervisor and the NOS Controller VM on multiple new

nodes and optionally configure the nodes into one or more clusters.

Before you begin:

• Physically install the Nutanix cluster at your site. See the Physical Installation Guide for your model type

for installation instructions.

• Set up the installation environment (see Pr eparing Installation Environment  on page 9).

Note: If you changed the boot device order in the BIOS to boot from a USB flash drive, you will

get a Foundation timeout error if you do not change the boot order back to virtual CD-ROM in

the BIOS.

Note: If STP (spanning tree protocol) is enabled, it can cause Foundation to time out during the imaging process. Therefore, disable STP before starting Foundation.

Note: Avoid connecting any device (that is, plugging it into a USB port on a node) that presents virtual media, such as a CD-ROM. This could conflict with the Foundation installation when it tries to mount the virtual CD-ROM hosting the install ISO.

• Have ready the appropriate global, node, and cluster parameter values needed for installation.

Note: If the Foundation VM IP address set previously was configured in one (typically public)

network environment and you are imaging the cluster on a different (typically private) network

in which the current address is no longer correct, repeat step 14 in Preparing a Workstation on page 9 to configure a new static IP address for the Foundation VM.

To image the nodes and create a cluster(s), do the following:

Video: Click here to see a video (MP4 format) demonstration of this procedure. (The video may

not reflect the latest features described in this section.)

1. Start the Foundation VM and configure global parameters (see Configuring Global Parameters on

page 16).

2. Configure the nodes to image (see Configuring Node Parameters on page 18).

3. Select the images to use (see Configuring Image Parameters on page 22).

4. [optional] Configure one or more clusters to create and assign nodes to the clusters (see Configuring Cluster Parameters on page 23).

5. Start the imaging process and monitor progress (see Monitoring Progress on page 25).

6. If a problem occurs during configuration or imaging, evaluate and resolve the problem (see

Troubleshooting  on page 56).

7. [optional] Clean up the Foundation environment after completing the installation (see Cleaning Up After 

Installation on page 28).

Configuring Global Parameters

Before you begin: Complete Imaging a Cluster  on page 15.

Video: Click here to see a video (MP4 format) demonstration of this procedure, or click here to

see a video demonstration of the complete cluster imaging procedure. (The videos may not reflect

the latest features described in this section.)

1. Click the Nutanix Foundation icon on the Foundation VM desktop to start the Foundation GUI.

Note: See Preparing Installation Environment on page 9 if Oracle VM VirtualBox is not started or the Foundation VM is not running currently. You can also start the Foundation GUI by opening a web browser and entering http://localhost:8000/gui/index.html.
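Before opening a browser, it can be handy to confirm that the Foundation service is answering at that URL. A small sketch (the URL is the one from the note above; adjust the host or port only if you have changed the defaults):

```shell
# Probe the Foundation GUI endpoint; reports gracefully if the service
# is not running or curl is unavailable.
FOUNDATION_URL="${FOUNDATION_URL:-http://localhost:8000/gui/index.html}"
if curl -fsS --max-time 5 "$FOUNDATION_URL" >/dev/null 2>&1; then
  gui_status="reachable"
else
  gui_status="unreachable"   # start foundation_service and retry
fi
echo "Foundation GUI: $gui_status"
```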

Figure: Foundation VM Desktop

The Global Configuration screen appears. Use this screen to configure network addresses.

Note: You can access help from the gear icon pull-down menu (top right), but this

requires Internet access. If necessary, copy the help URL to a browser with Internet access.

Figure: Global Configuration Screen

2. In the top section of the screen, enter appropriate values for the IPMI, hypervisor, and Controller VM in

the indicated fields:

Note: The parameters in this section are global and will apply to all the imaged nodes.

Figure: Global Configuration Screen: IPMI, Hypervisor, and CVM Parameters

a. IPMI Netmask: Enter the IPMI netmask value.

b. IPMI Gateway: Enter an IP address for the gateway.

c. IPMI Username: Enter the IPMI user name. The default user name is ADMIN.

d. IPMI Password: Enter the IPMI password. The default password is ADMIN.

Check the show password box to display the password as you type it.

e. Hypervisor Netmask: Enter the hypervisor netmask value.

f. Hypervisor Gateway: Enter an IP address for the gateway.

g. DNS Server IP: Enter the IP address of the DNS server.

h. CVM Netmask: Enter the Controller VM netmask value.

i. CVM Gateway: Enter an IP address for the gateway.

 j. CVM Memory: Select a memory size for the Controller VM from the pull-down list.

This field is set initially to default. (The default amount varies according to the node model type.)

The other options allow you to specify a memory size of 16 GB, 24 GB, 32 GB, or 64 GB. The

default setting represents the recommended amount for the model type. Assigning more memory

than the default might be appropriate when using advanced features such as deduplication or 

compression. In addition, it is recommended that storage-heavy nodes (those with 20 TB or more of capacity) have at least 24 GB of memory.

Note: Use the default memory setting unless Nutanix support recommends a different

setting.

3. If you are using a flat switch (no routing tables) for installation and require access to multiple subnets,

check the Multi-Homing box in the bottom section of the screen.

When the box is checked, a line appears to enter Foundation VM virtual IP addresses. The purpose

of the multi-homing feature is to allow the Foundation VM to configure production IP addresses when

using a flat switch. Multi-homing assigns the Foundation VM virtual IP addresses on different subnets

so that you can use customer-specified IP addresses regardless of their subnet.

• Enter unique IPMI, hypervisor, and Controller VM IP addresses. Make sure that the addresses match the subnets specified for the nodes to be imaged (see Configuring Node Parameters on page 18).

• If this box is not checked, Foundation requires that either all IP addresses are on the same subnet or 

that the configured IPMI, hypervisor, and Controller VM IP addresses are routable.

Figure: Global Configuration Screen: Multi-Homing
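The multi-homing feature described above amounts to the Foundation VM holding one additional virtual IP address per subnet on its network interface. The sketch below is illustrative only: the multihome_cmds helper is hypothetical, and the interface name and addresses are assumed example values, not values from this guide. Foundation performs this configuration itself when the Multi-Homing box is checked.

```shell
# Illustrative only: multi-homing is equivalent to giving the Foundation VM
# one virtual IP per subnet on its interface. The helper name, interface,
# and addresses are assumed examples; Foundation does this automatically.
multihome_cmds() {
  dev=$1; shift
  for cidr in "$@"; do
    # On a live system these commands would be run with root privileges.
    echo "ip addr add $cidr dev $dev"
  done
}
```

For example, multihome_cmds eth0 10.0.1.50/24 10.0.2.50/24 would print the two ip commands needed to reach customer-specified addresses on both subnets through a flat switch.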

4. Click the Next button at the bottom of the screen to configure the nodes to be imaged (see Configuring 

Node Parameters on page 18).

Configuring Node Parameters

Before you begin: Complete Configuring Global Parameters on page 16.

Video: Click here to see a video (MP4 format) demonstration of this procedure, or click here to

see a video demonstration of the complete cluster imaging procedure. (The videos may not reflect

the latest features described in this section.)

The Block & Node Config  screen appears. This screen allows you to configure discovered nodes and

add other (bare metal) nodes to be imaged. Upon opening this screen, Foundation searches the network

for unconfigured Nutanix nodes (that is, factory prepared nodes that are not part of a cluster) and then

displays information about the discovered blocks and nodes. The discovery process can take several

minutes if there are many nodes on the network. Wait for the discovery process to complete before

proceeding. The message "Searching for nodes. This may take a while" appears during discovery.

Note: Foundation discovers nodes on the same subnet as the Foundation VM only. Any nodes

to be imaged that reside on a different subnet must be added explicitly (see step 2). In addition,


Foundation discovers unconfigured Nutanix nodes only. If you are running Foundation on a

preconfigured block with an existing cluster and you want Foundation to image those nodes, you

must first destroy the existing cluster in order for Foundation to discover those nodes.

Figure: Node Configuration Screen

1. Review the list of discovered nodes.

 A table appears with a section for each discovered block that includes information about each node in

the block.

• You can exclude a block by clicking the X on the far right of that block. The block disappears from

the display, and the nodes in that block will not be imaged. Clicking the X on the top line removes all

the displayed blocks.

• To repeat the discovery process (search for unconfigured nodes again), click the Retry Discovery

button. You can reset all the global and node entries to the default state by selecting Reset

Configuration from the gear icon pull-down menu.

2. To image additional (bare metal) nodes, click the Add Blocks button.

 A window appears to add a new block. Do the following in the indicated fields:


Figure: Add Bare Metal Blocks Window 

a. Number of Blocks: Enter the number of blocks to add.

b. Nodes per Block: Enter the number of nodes to add in each block.

All added blocks get the same number of nodes. To add multiple blocks with differing nodes per block, add the blocks as separate actions.

c. Click the Create button.

The window closes and the new blocks appear at the end of the discovered blocks table.

3. Configure the fields for each node as follows:

a. Block ID: Do nothing in this field because it is a unique identifier for the block that is assigned

automatically.

b. Position: Uncheck the boxes for any nodes you do not want to be imaged.

The value (A, B, and so on) indicates the node placement in the block, such as A, B, C, D for a four-node block. You can exclude the node in that block position from being imaged by unchecking the

appropriate box. You can check (or uncheck) all boxes by clicking Select All or (Unselect All) above

the table on the right.

c. IPMI Mac Address: For any nodes you added in step 2, enter the MAC address of the IPMI

interface in this field.

Foundation requires that you provide the MAC address for nodes it has not discovered. (This field is

read-only for discovered nodes and displays a value of "N/A" for those nodes.) The MAC address of 

the IPMI interface normally appears on a label on the back of each node. (Make sure you enter the

MAC address from the label that starts with "IPMI:", not the one that starts with "LAN:".) The MAC

address appears in the standard form of six two-digit hexadecimal numbers separated by colons, for 

example 00:25:90:D9:01:98.

Caution:  Any existing data on the node will be destroyed during imaging. If you are using

the add node option to re-image a previously used node, do not proceed until you have

saved all the data on the node that you want to keep.


Figure: IPMI MAC Address Label 
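A quick way to catch a mistyped IPMI MAC address before submitting the form is to check it against the format described above. The following helper is an assumed sketch (the is_mac name is hypothetical, not part of Foundation):

```shell
# Assumed helper (not part of Foundation): validate the IPMI MAC format
# described above -- six two-digit hexadecimal groups separated by colons.
is_mac() {
  printf '%s\n' "$1" | grep -Eq '^([0-9A-Fa-f]{2}:){5}[0-9A-Fa-f]{2}$'
}
```

For example, is_mac 00:25:90:D9:01:98 succeeds, while a truncated or dash-separated address fails the check.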

d. IPMI IP: Do one of the following in this field:

Note: If you are using a flat switch, the IP addresses must be on the same subnet as the

Foundation VM unless you configure multi-homing (see Configuring Global Parameters on

page 16).

• To specify the IPMI addresses manually, go to the line for each node and enter (or update) the IP

address in that field.

• To specify the IPMI addresses automatically, enter a starting IP address in the top line ("Start

IP address" field) of the IPMI IP column. The entered address is assigned to the IPMI port of 

the first node, and consecutive IP addresses (starting from the entered address) are assigned

automatically to the remaining nodes. Discovered nodes are sorted first by block ID and then by

position, so IP assignments are sequential. If you do not want all addresses to be consecutive,

you can change the IP address for specific nodes by updating the address in the appropriate

fields for those nodes.

Note:  Automatic assignment is not used for addresses ending in 0, 1, 254, or 255

because such addresses are commonly reserved by network administrators.
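The sequential-assignment rule above can be sketched as follows. This is assumed logic for illustration (simple last-octet stepping only, wrapping past 255), not Foundation's actual implementation:

```shell
# Sketch of the auto-assignment rule above (assumed logic, /24 stepping only):
# advance the last octet, skipping the reserved endings 0, 1, 254, and 255.
next_ip() {
  base=${1%.*}; last=${1##*.}
  while :; do
    last=$(( (last + 1) % 256 ))       # wrap past 255 back to the low end
    case $last in 0|1|254|255) continue ;; esac
    break
  done
  echo "$base.$last"
}
```

For example, next_ip 10.0.0.5 yields 10.0.0.6, while next_ip 10.0.0.253 skips the reserved endings and yields 10.0.0.2.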

e. Hypervisor IP: Repeat the previous step for this field.

This sets the hypervisor IP addresses for all the nodes.

f. CVM IP: Repeat the previous step for this field.

This sets the Controller VM IP addresses for all the nodes.

Caution: The Nutanix high availability features require that both hypervisor and Controller VM be in the same subnet. Putting them in different subnets reduces the failure protection

provided by Nutanix and can lead to other problems. Therefore, it is strongly recommended

that you keep both hypervisor and Controller VM in the same subnet.
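When filling in these fields, the same-subnet requirement can be verified by ANDing each address with the netmask and comparing the network portions. The helper names below are assumed for illustration:

```shell
# Hypothetical check for the same-subnet requirement above.
mask_apply() {   # bitwise-AND a dotted-quad address ($1) with a netmask ($2)
  old_ifs=$IFS; IFS=.
  set -- $1 $2                       # split both quads into 8 fields
  IFS=$old_ifs
  echo "$(($1 & $5)).$(($2 & $6)).$(($3 & $7)).$(($4 & $8))"
}
same_subnet() {  # true when $1 and $2 share a network under netmask $3
  [ "$(mask_apply "$1" "$3")" = "$(mask_apply "$2" "$3")" ]
}
```

For example, 10.0.1.5 and 10.0.1.200 share a network under 255.255.255.0, while 10.0.1.5 and 10.0.2.5 do not.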

g. Hypervisor Hostname: Do one of the following in this field:

• A host name is automatically generated for each host (NTNX-unique_identifier). If these names

are acceptable, do nothing in this field.

Caution: Windows computer names (used in Hyper-V) have a 15 character limit. The

automatically generated names might be longer than 15 characters, which would result

in the same truncated name for multiple hosts in a Windows environment. Therefore, do

not use automatically generated names longer than 15 characters when the hypervisor is Hyper-V.

• To specify the host names manually, go to the line for each node and enter the desired name in

that field.

• To specify the host names automatically, enter a base name in the top line of the Hypervisor 

Hostname column. The base name with a suffix of "-1" is assigned as the host name of the first

node, and the base name with "-2", "-3" and so on are assigned automatically as the host names

of the remaining nodes. You can specify different names for selected nodes by updating the entry

in the appropriate field for those nodes.
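The automatic naming scheme above can be sketched as follows. The gen_hostnames helper is an assumed illustration (not a Foundation command), including a check for the 15-character Windows computer-name limit that matters for Hyper-V:

```shell
# Sketch of the automatic naming scheme above (helper name is assumed):
# base name plus "-1", "-2", ..., with a warning when a generated name
# would exceed the 15-character Windows computer-name limit.
gen_hostnames() {
  base=$1; count=$2; i=1
  while [ "$i" -le "$count" ]; do
    name="$base-$i"
    [ ${#name} -le 15 ] || echo "warning: $name exceeds 15 characters" >&2
    echo "$name"
    i=$((i + 1))
  done
}
```

For example, gen_hostnames NTNX 4 produces NTNX-1 through NTNX-4, while a long base name triggers the length warning.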


h. NX-6035C: Check this box for any node that is a model NX-6035C.

Model NX-6035C nodes are used for "cold" storage and run nothing but a Controller VM; user VMs

are not allowed. NX-6035C nodes run KVM (and so will be imaged with KVM) regardless of what

hypervisor runs on the other nodes in a cluster (see Configuring Image Parameters on page 22).

4. To check which IP addresses are active and reachable, click Ping Scan (above the table on the right).

This does a ping test to each IP address in the IPMI, hypervisor, and CVM IP fields. An icon appears next to each field to indicate the ping test result (returned response or no response) for each

node. This feature is most useful when imaging a previously unconfigured set of nodes. None of 

the selected IPs should be pingable. Successful pings usually indicate a conflict with the existing

infrastructure.

Note: When re-imaging a configured set of nodes using the same network configuration, failure

to ping indicates a networking issue.
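A rough command-line equivalent of the Ping Scan check is a one-probe ping per address. The ping_scan name and the -W timeout option are assumed (Linux ping); this is a sketch, not the tool Foundation uses:

```shell
# Rough equivalent of the Ping Scan check (assumed sketch, Linux ping
# options): one probe per address with a one-second timeout.
ping_scan() {
  for ip in "$@"; do
    if ping -c 1 -W 1 "$ip" >/dev/null 2>&1; then
      echo "$ip responds"
    else
      echo "$ip no response"
    fi
  done
}
```

When imaging fresh nodes, every planned address should report "no response"; a response usually means the address is already in use on the network.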

5. Click the Next button at the bottom of the screen to select the images to use (see Configuring Image

Parameters on page 22).

Configuring Image Parameters

Before you begin: Complete Configuring Node Parameters on page 18.

Video: Click here to see a video (MP4 format) demonstration of this procedure, or click here to

see a video demonstration of the complete cluster imaging procedure. (The videos may not reflect

the latest features described in this section.)

The Node Imaging  configuration screen appears. This screen is for selecting the NOS package and

hypervisor image to use when imaging the nodes.

Figure: Node Imaging Screen

1. Select the hypervisor to install from the pull-down list on the left.

The following choices are available:

• ESX. Selecting ESX as the hypervisor displays the NOS Package and Hypervisor ISO Image fields

directly below.


• Hyper-V. Selecting Hyper-V as the hypervisor displays the NOS Package, Hypervisor ISO Image,

and SKU fields.

Caution: Nodes must have a 64 GB DOM to install Hyper-V. Attempts to install Hyper-V on

nodes with less DOM capacity will fail.

• KVM. Selecting KVM as the hypervisor displays the NOS Package and Hypervisor ISO Image fields.

2. In the NOS Package field, select the NOS package to use from the pull-down list.

Note: Click the Refresh NOS package link to display the current list of available images in

the ~/foundation/nos  folder. If the desired NOS package does not appear in the list, you must

download it to the workstation (see Preparing Installation Environment  on page 9).

3. In the Hypervisor ISO Image field, select the hypervisor ISO image to use from the pull-down list.

Note: Click the Refresh hypervisor image link to display the current list of available images

in the ~/foundation/isos/hypervisor/[esx|hyperv]  folder. If the desired hypervisor ISO image

does not appear in the list, you must download it to the workstation (see Preparing Installation

Environment on page 9). Foundation automatically provides an ISO for KVM imaging in the ~/foundation/isos/hypervisor/kvm folder.

4. [Hyper-V only] In the SKU field, select the Hyper-V version to use from the pull-down list.

Three Hyper-V versions are supported: free, standard, and datacenter. This field appears only when

you select Hyper-V.

5. When all the settings are correct, do one of the following:

→ To create a new cluster, click the Next button at the bottom of the screen (see Configuring Cluster 

Parameters on page 23).

→ To start imaging immediately (bypassing cluster configuration), click the Run Installation button at

the top of the screen (see Monitoring Progress on page 25).

Configuring Cluster Parameters

Before you begin: Complete Configuring Image Parameters on page 22.

Video: Click here to see a video (MP4 format) demonstration of this procedure, or click here to

see a video demonstration of the complete cluster imaging procedure. (The videos may not reflect

the latest features described in this section.)

The Clusters configuration screen appears. This screen allows you to create one or more clusters and

assign nodes to those clusters. It also allows you to enable diagnostic and health tests after creating the

cluster(s).


Figure: Cluster Configuration Screen

1. To add a new cluster that will be created after imaging the nodes, click Create New Cluster  in the

Cluster Creation section at the top of the screen. This section includes a table that is empty initially. A blank line appears in the table for the new cluster.

Enter the following information in the indicated fields:

a. Cluster Name: Enter a cluster name.

b. External IP: Enter an external (virtual) IP address for the cluster.

This field sets a logical IP address that always points to an active Controller VM (provided the cluster 

is up), which removes the need to enter the address of a specific Controller VM. This parameter is

required for Hyper-V clusters and is optional for ESXi and KVM clusters. (This applies to NOS 4.0 or 

later; it is ignored when imaging an earlier NOS release.)

c. CVM DNS Servers: Enter the Controller VM DNS server IP address or URL.

Enter a comma-separated list to specify multiple server addresses in this field (and the next two fields).

d. CVM NTP Servers: Enter the Controller VM NTP server IP address or URL.

You must enter an NTP server that the Controller VMs can reach. If the NTP server is not reachable

or if the time on the Controller VMs is ahead of the current time, cluster services may fail to start.

Note: For Hyper-V clusters, the CVM NTP Servers parameter must be set to the Active

Directory domain controller.

e. Hypervisor NTP Servers: Enter the hypervisor NTP server IP address or URL.


f. Max Redundancy Factor : Select a redundancy factor (2 or 3) for the cluster from the pull-down list.

This parameter specifies the number of times each piece of data is replicated in the cluster (either 2

or 3 copies). It sets how many simultaneous node failures the cluster can tolerate and the minimum

number of nodes required to support that protection.

• Setting this to 2 means there will be two copies of data, and the cluster can tolerate the failure of 

any single node or drive.

• Setting this to 3 means there will be three copies of data, and the cluster can tolerate the failure of any two nodes or drives in different blocks. A redundancy factor of 3 requires that the cluster

have at least five nodes, and it can be enabled only when the cluster is created. It is an option on

NOS release 4.0 or later. (In addition, containers must have replication factor 3 for guest VM data

to withstand the failure of two nodes.)
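The redundancy-factor rules above can be summarized as a small lookup. The rf_min_nodes helper is an assumed sketch; the RF3 minimum of five nodes is stated above, and the RF2 minimum of three nodes is the standard Nutanix minimum cluster size:

```shell
# Assumed summary of the redundancy-factor rules above (helper name is
# hypothetical): data copies, tolerated failures, minimum cluster size.
rf_min_nodes() {
  case $1 in
    2) echo 3 ;;   # two copies, tolerates one node/drive failure
    3) echo 5 ;;   # three copies, tolerates two failures in different blocks
    *) return 1 ;; # only RF 2 and 3 are valid
  esac
}
```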

2. To run cluster diagnostic and/or health checks after creating a cluster, check the appropriate boxes in

the Post Image Testing  section.

→ Check the Diagnostics box to run a diagnostic utility on the cluster. The diagnostic utility analyzes

several performance metrics on each node in the cluster. These metrics indicate whether the cluster 

is performing properly. The results are stored in the ~/foundation/logs/diagnostics directory.

→ Check the NCC Testing box to run the Nutanix Cluster Check (NCC) test suite. This is a suite of 

tests that check a variety of health metrics in the cluster. The results are stored in the ~/foundation/logs/ncc directory. (This test is available on NOS 4.0 or later. Checking the box does nothing on an

earlier NOS release.)

3. To assign nodes to a new cluster (from step 1), check the boxes for each node in the Block and Nodes

field to be included in that cluster.

 A section for each new cluster appears in the bottom of the screen. Each section includes all the nodes

to be imaged. You can assign a node to any of the clusters (or leave it unassigned), but a node cannot

be assigned to more than one cluster.

Note: This assignment is to a new cluster only. Uncheck the boxes for any nodes you want to

add to an existing cluster, which can be done through the web console or nCLI at a later time.

4. When all settings are correct, click the Run Installation button at the top of the screen to start the installation process (see Monitoring Progress on page 25).

Monitoring Progress

Before you begin: Complete Configuring Cluster Parameters on page 23 (or Configuring Image

Parameters on page 22 if you are not creating a cluster).

Video: Click here to see a video (MP4 format) demonstration of this procedure, or click here to

see a video demonstration of the complete cluster imaging procedure. (The videos may not reflect

the latest features described in this section.)

When all the global, node, and cluster settings are correct, do the following:

1. Click the Run Installation button at the top of the screen.

Figure: Run Installation Button


This starts the installation process. First, the IPMI port addresses are configured. The IPMI port

configuration processing can take several minutes depending on the size of the cluster.

Figure: IPMI Configuration Status

Note: If the IPMI port configuration fails for one or more nodes in the cluster, the installation

process stops before imaging any of the nodes. To correct a port configuration problem, see

Fixing IPMI Configuration Problems on page 56.

2. Monitor the imaging and cluster creation progress.

If IPMI port addressing is successful, Foundation moves to node imaging and displays a progress

screen. The progress screen includes the following sections:

• Progress bar at the top (blue during normal processing or red when there is a problem).

• Cluster Creation Status section with a line for each cluster being created (status indicator, cluster 

name, progress message, and log link).

• Node Status section with a line for each node being imaged (status indicator, IPMI IP address, progress message, and log link).

Figure: Foundation Progress Screen: Ongoing Installation

The status message for each node (in the Node Status section) displays the imaging percentage complete and current step. Nodes are imaged in parallel, and the imaging process takes about 45

minutes. You can monitor overall progress by clicking the Log link at the top, which displays the

service.log contents in a separate tab or window. Click on the Log link for a node to display the log file

for that node in a separate tab or window.

Note: Simultaneous processing is limited to a maximum of 20 nodes. If the cluster contains

more than 20 nodes, the total processing time is about 45 minutes for each group of 20 nodes.
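The note above implies a simple planning estimate: about 45 minutes per batch of up to 20 nodes. The est_minutes helper below is an assumed back-of-the-envelope sketch (ceiling division), not a guarantee of actual imaging time:

```shell
# Planning aid implied by the note above: ~45 minutes per batch of up to
# 20 nodes, using ceiling division. An estimate only, not a guarantee.
est_minutes() {
  echo $(( (($1 + 19) / 20) * 45 ))
}
```

For example, 20 nodes is one batch (about 45 minutes), while 21 nodes spills into a second batch (about 90 minutes).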


• When installation moves to cluster creation, the status message for each cluster (in the Cluster 

Creation Status section) displays the percentage complete and current step. Cluster creation

happens quickly, but this step could take some time if you selected the diagnostic and NCC post-

creation tests. Click on the Log link for a cluster to display the log file for that cluster in a separate

tab or window.

• When processing completes successfully, an "Installation Complete" message appears, along with a

green check mark in the Status field for each node and cluster. This means IPMI configuration and

imaging (both hypervisor and NOS Controller VM) across all the nodes in the cluster was successful, and cluster creation was successful (if enabled).

Figure: Foundation Progress Screen: Successful Installation

3. If the progress bar turns red with a "There were errors in the installation" message and one or more

node or cluster entries have a red X in the status column, the installation failed at the node imaging or 

cluster creation step. To correct such problems, see Fixing Imaging Problems on page 57. Clicking

the Back to config button returns you to the configuration screens to correct any entries. The default per-node installation timeout is 30 minutes for ESXi or 60 minutes for Hyper-V and KVM, so you can

expect all the nodes (in each run of up to 20 nodes) to finish successfully or encounter a problem in that

amount of time.


Figure: Foundation Progress Screen: Failed Installation

Cleaning Up After Installation

Some information persists after imaging a cluster using Foundation. If you want to use the same

Foundation VM to image another cluster, the persistent information must be removed before attempting

another installation.

To remove the persistent information after an installation, go to a configuration screen and then click the

Reset Configuration option from the gear icon pull-down list in the upper right of the screen.

Selecting this option reinitializes the progress monitor, destroys the persisted configuration data, and

returns the Foundation environment to a fresh state.

Figure: Reset Configuration


4

Imaging a Node

This procedure describes how to install the NOS Controller VM and selected hypervisor on a new or 

replacement node from an ISO image on a workstation (laptop or desktop machine).

Before you begin: If you are adding a new node, physically install that node at your site. See the Physical 

Installation Guide for your model type for installation instructions.

Note: You can use Foundation to image a single node (see Imaging a Cluster  on page 15),

so a separate procedure is not necessary or recommended. However, the following procedure

describes how to image a single node if you decide not to use Foundation.

Imaging a new or replacement node can be done either through the IPMI interface (network connection

required) or through a direct-attached USB (no network connection required). In either case the installation is divided into two steps:

1. Install the desired hypervisor version (see Installing a Hypervisor  on page 29).

2. Install the NOS Controller VM and provision the hypervisor (see Installing the Controller VM on

page 44).

Installing a Hypervisor

This procedure describes how to install a hypervisor on a single node in a cluster in the field.

Caution: The node must have a 64 GB DOM to install Hyper-V. Attempts to install Hyper-V on a

node with less DOM capacity will fail.

To install a hypervisor on a new or replacement node in the field, do the following:

Video: Click here to see a video (MP4 format) demonstration of this procedure. (The video may

not reflect the latest features described in this section.)

1. Verify you have access to the IPMI interface for the node.

a. Connect the IPMI port on that node to the network if it is not already connected.

 A 1 or 10 GbE port connection is not required for imaging the node.

b.  Assign an IP address (static or DHCP) to the IPMI interface on the node if it is not already assigned.

To assign a static address, see Setting IPMI Static IP Address on page 54.

2. Download the NOS (nutanix_installer_package-version#.tar.gz) and Foundation

(foundation-version#.tar.gz) tarballs from the Nutanix support portal (see Downloading Installation

Files on page 48) to the /home/nutanix directory on the workstation. (Create this directory if it does

not exist currently.) If installing ESXi or Hyper-V, also download an ESXi or Hyper-V ISO image.


Customers must provide the ESXi or Hyper-V ISO image. See Hypervisor ISO Images on page 52

for a list of supported ESXi and Hyper-V ISO images. The Foundation tarball (after unpacking in step 4)

provides a KVM ISO located in /home/nutanix/foundation/isos/hypervisor/kvm.

Note: If Foundation 2.1 or later is installed on the workstation currently, it is not necessary to

download foundation-version#.tar.gz.

3. If Foundation 2.1 or later is installed, skip to the next step. Otherwise, do the following:

$ cd /home/nutanix
$ rm -rf foundation
$ tar xzf foundation-version#.tar.gz

This removes the foundation directory if it is present and extracts the Foundation tarball (including a

new foundation directory).

4. Enter the following commands from /home/nutanix:

$ sudo pkill -9 foundation
$ gunzip nutanix_installer_package-version#.tar.gz

This kills the Foundation service if it is running and unpacks the NOS tarball.

Note: If either the tar or gunzip command is not available, use the corresponding tar or 

gunzip utility for your environment.

5. Create the Phoenix ISO by entering the following commands:

$ cd /home/nutanix/foundation
$ ./foundation --generate_phoenix --nos_package=nutanix_installer_package-version#.tar

If nutanix_installer_package-version#.tar is not in the current directory, you must include the

path as part of the name. This command creates a Phoenix ISO image in the current directory called

phoenix-version#_NOS-version#.iso, which is the Phoenix ISO file to use when Installing the Controller 

VM on page 44.

6. Open a Web browser to the IPMI IP address of the node to be imaged.

7. Enter the IPMI login credentials in the login screen.

The default value for both user name and password is ADMIN (upper case).

Figure: IPMI Console Login Screen

The IPMI console main screen appears.

Note: The following steps might vary depending on the IPMI version on the node.


Figure: IPMI Console Screen

8. Select Console Redirection from the Remote Console drop-down list of the main menu, and then

click the Launch Console button.

Figure: IPMI Console: Remote Control Menu

9. Select Virtual Storage from the Virtual Media drop-down list of the remote console main menu.

Figure: IPMI Remote Console: Virtual Media Menu

10. Click the CDROM&ISO tab in the Virtual Storage window, select ISO File from the Logical Drive Type

field drop-down list, and click the Open Image button.

Figure: IPMI Virtual Storage Window 

11. In the browse window, go to where the hypervisor ISO image was downloaded, select that file, and then

click the Open button.

12. Click the Plug In button and then the OK button to close the Virtual Storage window.


13. In the remote console main menu, select Set Power Reset in the Power Control drop-down list.

This causes the system to reboot using the selected hypervisor image.

Figure: IPMI Remote Console: Power Control Menu

What to do next: Complete installation by following the steps for the hypervisor:

• Installing ESXi  on page 32

• Installing Hyper-V  on page 33

• Installing KVM  on page 37

Installing ESXi

Before you begin: Complete Installing a Hypervisor  on page 29.

1. Click Continue at the installation screen and then accept the end user license agreement on the next

screen.

Figure: ESXi Installation Screen

2. On the Select a Disk screen, select the SATADOM as the storage device, click Continue, and then click

OK in the confirmation window.

Figure: ESXi Device Selection Screen


3. In the keyboard layout screen, select a layout (such as US Default) and then click Continue.

4. In the root password screen, enter nutanix/4u as the root password.

Note: The root password must be nutanix/4u or the installation will fail.

5. Review the information on the Install Confirm screen and then click Install.

Figure: ESXi Installation Confirmation Screen

The installation begins and a dynamic progress bar appears.

6. When the Installation Complete screen appears, go back to the Virtual Storage screen, click the Plug Out button, and then return to the Installation Complete screen and click Reboot.

What to do next: After the system reboots, you can install the NOS Controller VM and provision the

hypervisor (see Installing the Controller VM on page 44).

Installing Hyper-V

Before you begin: Complete Installing a Hypervisor  on page 29.

1. Start the installation.

a. Press any key when the Press any key to boot from CD or DVD prompt appears.

b. Select Windows Setup [EMS Enabled] in the Windows Boot Manager  screen.

Figure: Windows Boot Manager Screen

c. In the language selection screen, simply click the Next button.


Figure: Hyper-V Language Screen

d. In the installation screen, select the Repair your computer  option.

Note: Do not click the Install now button. It will be used later in the procedure.

Figure: Hyper-V Installation Screen

e. In the Choose an option screen, select Troubleshoot.

Figure: Hyper-V Choose Option Screen

f. In the Advanced options screen, select Command Prompt.


Figure: Hyper-V Advanced Options Screen

2. Partition and format the DOM.

a. Start the disk partitioning utility.

> diskpart

b. List the disks to determine which one is the 60 GB SATA DOM.

list disk

c. Find the disk in the displayed list that is about 60 GB (only one disk will be that size). Select that disk

and then run the clean command:

select disk number
clean

d. Create and format a primary partition (size 1024 and file system fat32).

create partition primary size=1024
select partition 1
format fs=fat32 quick

e. Create and format a second primary partition (default size and file system ntfs).

create partition primary
select partition 2
format fs=ntfs quick

f.  Assign the drive letter "C" to the DOM install partition volume.

list volume
list partition

This displays a table of logical volumes and their associated drive letter, size, and file system type.

Locate the volume with an NTFS file system and size of approximately 50 GB. If this volume (which

is the DOM install partition) is drive letter "C", go to the next step.

Otherwise, do one of the following:

• If drive letter "C" is currently assigned to another volume, enter the following commands to remove the current "C" drive volume and reassign "C" to the DOM install partition volume:

select volume c_drive_volume_id#
remove
select volume dom_install_volume_id#
assign letter=c


• If drive letter "C" is not assigned currently, enter the following commands to assign "C" to the

DOM install partition volume:

select volume dom_install_volume_id#
assign letter=c

g. Exit the diskpart utility.

exit
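The diskpart commands in this step can also be collected into a script and run unattended with diskpart /s. This is an illustrative sketch, not part of the official procedure; the disk number 1 is a placeholder that must be replaced with the SATA DOM's disk number reported by "list disk":

```
rem dom_partition.txt -- the diskpart commands from step 2, collected
rem into a script. "1" is a placeholder: substitute the disk number of
rem the ~60 GB SATA DOM reported by "list disk" before running.
select disk 1
clean
create partition primary size=1024
select partition 1
format fs=fat32 quick
create partition primary
select partition 2
format fs=ntfs quick
```

It could then be run from the recovery command prompt as "diskpart /s dom_partition.txt". The drive-letter assignment in step f still has to be done interactively, because the volume IDs are not known in advance.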

3. Continue installation of the hypervisor.

a. Start the server setup utility.

> setup.exe

b. In the language selection screen that reappears, again just click the Next button.

c. In the install screen that reappears click the Install now button.

d. In the operating system screen, select Windows Server 2012 Datacenter (Server Core

Installation) and then click the Next button.

Figure: Hyper-V Operating System Screen

e. In the license terms screen, check the I accept the license terms box and then click the Next

button.

f. In the type of installation screen, select Custom: Install Windows only (advanced).

Figure: Hyper-V Install Type Screen

g. In the where to install screen, select Partition 2 (the NTFS partition) of the DOM disk you just

formatted and then click the Next button.

Ignore the warning about free space. The installation location is Drive 6 Partition 2 in the example.


Figure: Hyper-V Install Disk Screen

The installation begins and a dynamic progress screen appears.

Figure: Hyper-V Progress Screen

h.  After the installation is complete, manually boot the host.

4. After Windows boots up, press Ctrl-Alt-Delete and then log in as Administrator when prompted.

5. When prompted, change the password to nutanix/4u.

6. Install the NOS Controller VM and provision the hypervisor (see Installing the Controller VM on page 44).

7. Open a command prompt and enter the following two commands:

> schtasks /create /sc onstart /ru Administrator /rp "nutanix/4u" /tn firstboot /tr D:\firstboot.bat
> shutdown /r /t 0

This causes a reboot and the firstboot script to run, after which the host will reboot two more times.

This process can take substantial time (possibly 15 minutes) without any progress indicators. To monitor progress, log into the VM after the initial reboot and enter the command notepad C:\Program Files\Nutanix\Logs\first_boot.log. This displays a (static) snapshot of the log file. Repeat this command as desired to see an updated version of the log file.

Note: A d:\firstboot_fail file appears when this process fails. If that file is not present, the process is continuing (if slowly).

Installing KVM

Before you begin: Complete Installing a Hypervisor  on page 29.

1. Select Install or Upgrade an Existing System in the welcome screen and then press the Enter  key.


Processing begins and messages appear as the installation progresses.

Figure: Welcome (Install Options) Screen

2. When the Disc Found  box appears, click the Skip button.

Installation begins, and the Disc Found box disappears. (The background screen remains.) This step takes some time (5-10 minutes) without any messages appearing during the installation process. Wait until the CentOS 6 logo screen appears.

Figure: Disc Found Screen

3. In the CentOS 6 logo screen, click the Next button (lower right).

Figure: CentOS 6 Logo Screen

4. In the language screen, select the desired language and then click the Next button.


Figure: Language Screen

5. In the keyboard screen, select the desired keyboard layout and then click the Next button.


Figure: Keyboard Screen

6. In the storage devices screen, click the Basic Storage Devices radio button and then the Next button.

Figure: Storage Devices Screen

7. If an existing installation screen appears, click the Fresh Installation radio button and then the Next

button.


Figure: Installation Choice Screen

8. In the host name screen, enter a name for the node in the Hostname field and then click the Next

button.

Figure: Hostname Screen

9. In the timezone screen, select a city that resides in your timezone and then click the Next button.

Figure: Timezone Screen

10. In the root password screen, enter nutanix/4u as the root password.

Note: The root password must be nutanix/4u or the installation will fail.


Figure: Root Password Screen

11. In the installation type screen, click the Create Custom Layout radio button and then click the Next

button.

Figure: Install Type Screen

12. In the select a device screen, select the entry for the SATADOM and configure the partitions as follows:

a. Select VolGroup (under LVM Volume Groups) and click the Delete button (lower right).

 A pop-up appears to verify the action; click the Delete button in that pop-up to delete this volume

group.

Figure: Device Screen: VolGroup

b. Expand the sdb hard drive, select sdb2, and click the Delete button to delete this partition.


Figure: Device Screen: sdb2 Partition

c. Select whichever device is the SATADOM and click the Edit button. In the Edit Partition pop-up

window, enter the following in the indicated fields and then click the OK button.

• Mount Point: Enter / (slash) as the mount point (instead of /boot).

• File System Type: Select ext4.

• Allowable Drives: Check the sdb box (and uncheck all others).
• Size: Leave the displayed value.

• Additional Size Options: Click the Fill To maximum allowable size radio button.

• Force to be a primary partition: Leave this unchecked.

• Encrypt: Leave this unchecked.

Figure: Device Screen: sdb1 Partition

d. When the partition information is correct, click the Next button (lower right of screen).

 A pop-up warning appears about potential data loss from reformatting the disk. Click the Write

changes to disk button (after verifying there is no data to save).

Figure: Reformatting Warning


The installation begins and progress tracking appears (an installation progress bar and then a listing of packages as they are installed). Installation can take several minutes.

13. When the Installation Complete screen appears, go back to the Virtual Storage screen, click the Plug

Out button, and then return to the Installation Complete screen and click Reboot.

Figure: Installation Complete Screen

14. After the system reboots, enter the following command:

puppet apply -e 'include kvm'

15. Reboot the system again.

What to do next: After the system reboots, you can install the NOS Controller VM and provision the

hypervisor (see Installing the Controller VM on page 44).

Installing the Controller VM

This procedure describes how to install the NOS Controller VM and provision the hypervisor on a single

node in a cluster in the field.

Before you begin: Install a hypervisor on the node (see Installing a Hypervisor  on page 29).

To install the Controller VM (and provision the hypervisor) on a new or replacement node, do the following:

Video: Click here to see a video (MP4 format) demonstration of this procedure. (The video may

not reflect the latest features described in this section.)

1. Verify you have access to the IPMI interface for the node.

a. Connect the IPMI port on that node to the network if it is not already connected.

 A 1 or 10 GbE port connection is not required for imaging the node.

b. Assign an IP address (static or DHCP) to the IPMI interface on the node if it is not already assigned. To assign a static address, see Setting IPMI Static IP Address on page 54.

2. Open a Web browser to the IPMI IP address of the node to be imaged.

3. Enter the IPMI login credentials in the login screen.

The default value for both user name and password is ADMIN (upper case).


Figure: IPMI Console Login Screen

The IPMI console main screen appears.

Note: The following steps might vary depending on the IPMI version on the node.

Figure: IPMI Console Screen

4. Select Console Redirection from the Remote Console drop-down list of the main menu, and then click the Launch Console button.

Figure: IPMI Console Menu

5. Select Virtual Storage from the Virtual Media drop-down list of the remote console main menu.

Figure: IPMI Remote Console Menu (Virtual Media)

6. Click the CDROM&ISO tab in the Virtual Storage window, select ISO File from the Logical Drive Type

field drop-down list, and click the Open Image button.


Figure: IPMI Virtual Storage Window 

7. In the browse window, go to where the phoenix-version#_NOS-version#.iso file is located (see step 3 in

Installing a Hypervisor  on page 29), select that file, and then click the Open button.

8. Click the Plug In button and then the OK button to close the Virtual Storage window.

9. In the remote console main menu, select Set Power Reset in the Power Control drop-down list.

This causes the system to reboot using the selected Phoenix image. The Nutanix Installer  screen

appears after rebooting.

Figure: IPMI Remote Console Menu (Power Control)

10. Do the following in the Nutanix Installer  configuration screen:

a. Review the values in the upper eight fields to verify they are correct, or update them if necessary.

Only the Block ID, Node Serial, and Node Cluster ID fields can be edited in this screen.

b. Do one of the following in the next three fields (check boxes):

• If you are imaging a U-node, select both Configure Hypervisor  (to provision the hypervisor) and

Clean CVM (to install the Controller VM).

Note: You must select both to install the Controller VM; selecting Clean CVM by itself 

will fail.

• If you are imaging an X-node, select Configure Hypervisor  only. This provisions the hypervisor 

without installing a new Controller VM.

• If you are instructed to do so by Nutanix customer support, select Repair CVM. This option is for 

repairing certain problem conditions. Ignore this option unless Nutanix customer support instructs

you to select it.

Nutanix ships two types of nodes from the factory, a U-node and an X-node. The U-node is fully

populated with disks and other components. This is the node type shipped from the factory when

you are adding a new node to a cluster. Both the hypervisor and Controller VM must be installed in a

new U-node. In contrast, an X-node does not contain disks or a NIC card. This is the node type that

is shipped from the factory when you need to replace an existing node because it has a hardware

failure or related problem (RMA request). In this case you transfer the disks and NIC from the old

node to the X-node, and then install the hypervisor only (not the Controller VM).


Caution: Do not select Clean CVM if you are replacing a node (X-node) because this

option cleans the disks as part of the process, which means existing data will be lost.

c. When all the fields are correct, click the Start button.

Figure: Nutanix Installer Screen

Installation begins and takes about 30 minutes.

11.  After installation completes, go to the Virtual Storage window and click Plug Out in the CDROM&ISO

tab.

12.  At the reboot prompt in the console, type Y to restart the node.

Figure: Installation Messages

The node restarts with the new image. After the node starts, additional configuration tasks run and

then the host restarts again. Wait until this stage completes (typically 15-30 minutes depending on the

hypervisor) before accessing the node.

Caution: Do not restart the host until the configuration is complete.


5

Downloading Installation Files

Nutanix maintains a support portal where you can download the Foundation and NOS (or Phoenix) files

required to do a field installation. To access the Nutanix support portal, do the following:

1. Open a web browser and go to http://portal.nutanix.com.

The login page is displayed.

2. Enter your support portal credentials to access the site.

3. In the initial screen, click Downloads from the main menu at the top and then select Foundation to

download Foundation-related files (or Phoenix to download Phoenix-related files).

Figure: Nutanix Support Portal Main Screen

The Foundation (or Phoenix) download page appears.

4. Use the filter options to display the files for a specific Foundation (or Phoenix) release.

Files for the latest release appear by default.

• Typically, previous Foundation (or Phoenix) releases are removed from the portal when a newer 

version is released. However, if an earlier release is still available, you can display the files for that

release by selecting the release number from the first pull-down list.

• [Phoenix only] Select the desired hypervisor type (KVM, ESXi, or HyperV) from the second pull-

down list.


Figure: Phoenix Download Screen

5. Download the appropriate files from this screen.

 A table of files is displayed that includes the following columns.

Hypervisor [Phoenix only] 

Displays the hypervisor name (KVM, HyperV, or ESXi).

Version

Displays the Foundation (or Phoenix) version number.

MD5 

Displays the associated MD5 hash value to validate against after downloading the file.

Size

Displays the file size (in KB, GB, or MB).

Download

Displays the file name. Click on the file name to download that file.

Figure: Foundation Download Screen

6. To download a NOS release tarball, select Downloads > NOS Releases and click the button or link for 

the desired release.


Clicking the Download version# button in the upper right of the screen downloads the latest NOS release. You can download an earlier NOS release by clicking the appropriate Download version# link under the ADDITIONAL RELEASES heading. The tarball to download is named nutanix_installer_package-version#.tar.gz.
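After downloading any of these files, the MD5 value shown on the portal can be checked from the Foundation VM (or any Linux shell). This helper is an illustrative sketch, not part of the portal workflow; the verify_md5 name is invented for this example:

```shell
# verify_md5 <file> <expected_md5>
# Computes the MD5 of a downloaded file and compares it against the
# value displayed in the MD5 column of the portal download page.
verify_md5() {
  actual=$(md5sum "$1" | awk '{print $1}')
  if [ "$actual" = "$2" ]; then
    echo "MD5 OK: $1"
  else
    echo "MD5 MISMATCH: $1 (got $actual)"
  fi
}
```

For example, run verify_md5 against the downloaded tarball with the hash copied from the portal before using it for imaging.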

Figure: NOS Download Screen

Foundation Files

The following table describes the files required to install Foundation. Use the latest Foundation version

available unless instructed by Nutanix customer support to use an earlier version.

File Name Description

VirtualBox-version#-OSX.dmg
This is the Oracle VM VirtualBox installer for Mac OS, where version# is a version and build number.

VirtualBox-version#-Win.exe
This is the Oracle VM VirtualBox installer for Windows.

Foundation_VM-version#.ovf
This is the Foundation VM OVF configuration file, where version# is the Foundation version number.

Foundation_VM-version#-disk1.vmdk
This is the Foundation VM VMDK file.

Foundation_VM_OVF-version#.tar
This is a Foundation tar file that contains the Foundation_VM-version#.ovf and Foundation_VM-version#-disk1.vmdk files. Foundation 2.1 (or later) packages the OVF and VMDK files into this TAR file; it does not apply to earlier versions.

Foundation-version#.tar.gz
This is a tarball used for upgrading when Foundation is already installed (see Release Notes on page 5).


nutanix_installer_package-version#.tar.gz
This is the tarball used for imaging the desired NOS release, where version# is a version and build number. Go to the NOS Releases download page on the support portal to download this file. (You can download all the other files from the Foundation download page.)

Phoenix Files

The following table describes the Phoenix ISO files.

Note: Starting with release 2.1, Foundation no longer uses a Phoenix ISO file for imaging.

Phoenix ISO files are now used only for single node imaging (see Imaging a Node on page 29)

and are generated by the user from Foundation and NOS tarballs. The Phoenix ISOs available on

the support portal are only for those who are using an older version of Foundation (pre 2.1).

File Name Description

phoenix-x.x_ESX_NOS-y.y.y.iso
This is the Phoenix ISO image for a selected NOS version on the ESXi hypervisor, where x.x is the Phoenix version number and y.y.y is the NOS version number. There is a separate file for each supported NOS version.

phoenix-x.x_HYPERV_NOS-y.y.y.iso
This is the Phoenix ISO image for a selected NOS version on the Hyper-V hypervisor. There is a separate file for each supported NOS version.

phoenix-x.x_KVM_NOS-y.y.y.iso
This is the Phoenix ISO image for a selected NOS version on the KVM hypervisor. There is a separate file for each supported NOS version.


6

Hypervisor ISO Images

A KVM ISO image is included as part of Foundation. However, customers must provide an ESXi or Hyper-V ISO image for those hypervisors. Check with your VMware or Microsoft representative, or download an

ISO image from an appropriate VMware or Microsoft support site:

• VMware Support: http://www.vmware.com/support.html 

• Microsoft Technet: http://technet.microsoft.com/en-us/evalcenter/dn205299.aspx 

• Microsoft EA portal: http://www.microsoft.com/licensing/licensing-options/enterprise.aspx 

• MSDN: http://msdn.microsoft.com/subscriptions/downloads/#FileId=57052 

• Only VMware ESXi 5.5 U2a or later is supported for installation on Nutanix Intel Haswell-based

platforms like NX-3060-G4 and NX-6035-G4.

The following tables list the supported ESXi and Hyper-V hypervisor images.

Note: These are the ISO images supported in Foundation, but some might no longer be available

from the download sites.

ESXi ISO Images

Version   File Name                                                     MD5 Sum

5.0 U2    VMware-VMvisor-Installer-5.0.0.update02-914586.x86_64.iso    fa6a00a3f0dd0cd1a677f69a236611e2
5.0 U3    VMware-VMvisor-Installer-5.0.0.update03-1311175.x86_64.iso   391496b995db6d0cf27f0cf79927eca6
5.1 U1    VMware-VMvisor-Installer-5.1.0.update01-1065491.x86_64.iso   2cd15e433aaacc7638c706e013dd673a
5.1 U2    VMware-VMvisor-Installer-5.1.0.update02-1483097.x86_64.iso   6730d6085466c513c04e74a2c2e59dc8
5.1 U3    VMware-VMvisor-Installer-5.1.0.update03-2323236.x86_64.iso   3283ae6f5c82a8204442bd6ec38197b9
5.5       VMware-VMvisor-Installer-5.5.0-1331820.x86_64.iso            9aaa9e0daa424a7021c7dc13db7b9409
5.5 U2a   VMware-VMvisor-Installer-201410001-2143827.x86_64.iso        e7d63c6402d179af830b4c887ce2b872
5.5 U2d   VMware-VMvisor-Installer-201501001-2403361.x86_64.iso        1e0e128e678af54657e6bd3b5bf5f124
6.0       VMware-VMvisor-Installer-6.0.0-2494585.x86_64.iso            478e2c6f7a875dd3dacaaeb2b0b38228


5.5 (Dell custom)     VMware-VMvisor-Installer-5.5.0-1331820.x86_64-Dell_Customized_A00.iso            b9661e44c791b86caf60f179b857a17d
5.5 U2 (Dell custom)  VMware-VMvisor-Installer-5.5.0.update02-2068190.x86_64-Dell_Customized-A00.iso   02887b626eaabb7d933e2a3fa580f1bc

Hyper-V ISO Images

Version: Windows Server 2012 R2
SKUs: datacenter, standard
Source Site: MSDN
File Name: en_windows_server_2012_r2_vl_x64_dvd_3319595.iso
MD5 Sum: fb101ed6d7328aca6473158006630a9d (SHA1: A73FC07C1B9F560F960F1C4A5857FAC062041235)

Version: Windows Server 2012 R2 with update
SKUs: datacenter, standard
Source Site: MSDN
File Name: en_windows_server_2012_r2_vl_with_update_x64_dvd_4065221.iso
MD5 Sum: b52450dd5ba8007e2934f5c6e6eda0ce

Version: Windows Server 2012 R2
SKUs: datacenter, standard
Source Site: EA portal
File Name: SW_DVD9_Windows_Svr_Std_and_DataCtr_2012_R2_64Bit_English_-3_MLF_X19-53588.ISO
MD5 Sum: b52450dd5ba8007e2934f5c6e6eda0ce

Version: Windows Server 2012 R2
SKUs: datacenter, standard
Source Site: EA portal
File Name: SW_DVD9_Windows_Svr_Std_and_DataCtr_2012_R2_64Bit_English_-4_MLF_X19-82891.ISO
MD5 Sum: 9a00defab26a046045d939086df78460

Version: Windows Server 2012 R2
SKUs: free
Source Site: Technet
File Name: 9600.16384.WINBLUE_RTM.130821-1623_X64FRE_SERVERHYPERCORE_EN-US-IRM_SHV_X64FRE_EN-US_DV5.ISO
MD5 Sum: 9c9e0d82cb6301a4b88fd2f4c35caf80


7

Setting IPMI Static IP Address

You can assign a static IP address for an IPMI port by resetting the BIOS configuration.

To configure a static IP address for the IPMI port on a node, do the following:

1. Connect a VGA monitor and USB keyboard to the node.

2. Power on the node.

3. Press the Delete key during boot up when prompted to enter the BIOS setup mode.

The BIOS Setup Utility  screen appears.

4. Click the IPMI tab to display the IPMI screen.

5. Select BMC Network Configuration and press the Enter  key.

6. Select Update IPMI LAN Configuration, press Enter , and then select Yes in the pop-up window.

7. Select Configuration Address Source, press Enter , and then select Static in the pop-up window.


8. Select Station IP Address, press Enter , and then enter the IP address for the IPMI port on that node in

the pop-up window.

9. Select Subnet Mask, press Enter, and then enter the corresponding subnet mask value in the pop-up

window.

10. Select Gateway IP Address, press Enter , and then enter the IP address for the node's network

gateway in the pop-up window.

11. When all the field entries are correct, press the F4 key to save the settings and exit the BIOS setup

mode.
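If the node is already booted into a Linux environment, the same settings can often be applied without entering the BIOS by using ipmitool. The sketch below only prints the commands for review; the LAN channel number 1 and the addresses are assumptions (confirm the channel, e.g. with "ipmitool lan print", before running the printed commands on your hardware):

```shell
# Print the ipmitool commands that would assign a static IPMI address
# from a booted Linux host. Channel 1 is an assumption; verify it for
# your platform before running the printed commands.
ipmi_static_cmds() {
  ip="$1" mask="$2" gw="$3"
  echo "ipmitool lan set 1 ipsrc static"
  echo "ipmitool lan set 1 ipaddr $ip"
  echo "ipmitool lan set 1 netmask $mask"
  echo "ipmitool lan set 1 defgw ipaddr $gw"
}

ipmi_static_cmds 192.0.2.10 255.255.255.0 192.0.2.1
```

The example addresses (192.0.2.x) are documentation placeholders; substitute the values planned for the IPMI port.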


8

Troubleshooting

This section provides guidance for fixing problems that might occur during a Foundation installation.

• For help with IPMI configuration problems, see Fixing IPMI Configuration Problems on page 56.

• For help with imaging problems, see Fixing Imaging Problems on page 57.

• For answers to other common questions, see Frequently Asked Questions (FAQ) on page 58.

Fixing IPMI Configuration Problems

When the IPMI port configuration fails for one or more nodes in the cluster, or it works but type detection

fails and complains that it cannot reach an IPMI IP address, the installation process stops before imaging

any of the nodes. (Foundation will not go to the imaging step after an IPMI port configuration failure, but it

will try to configure the port address on all nodes before stopping.) Possible reasons for a failure include

the following:

• One or more IPMI MAC addresses are invalid or there are conflicting IP addresses. Go to the Block &

Node Config  screen and correct the IPMI MAC and IP addresses as needed (see Configuring Node

Parameters on page 18).

• There is a user name/password mismatch. Go to the Global Configuration screen and correct the IPMI

username and password fields as needed (see Configuring Global Parameters on page 16).

• One or more nodes are connected to the switch through the wrong network interface. Go to the back of 

the nodes and verify that the first 1GbE network interface of each node is connected to the switch (see

Setting Up the Network  on page 13).

• The Foundation VM is not in the same broadcast domain as the Controller VMs for discovered nodes

or the IPMI interface for added (bare metal or undiscovered) nodes. This problem typically occurs

because (a) you are not using a flat switch, (b) some node IP addresses are not in the same subnet as

the Foundation VM, and (c) multi-homing was not configured.

• If all the nodes are in the Foundation VM subnet, go to the Block & Node Config  screen and correct

the IP addresses as needed (see Configuring Node Parameters on page 18).

• If the nodes are in multiple subnets, go to the Global Configuration screen and configure multi-

homing (see Configuring Global Parameters on page 16).

• The IPMI interface is not set to failover. You can check for this through the BIOS (see Setting IPMI 

Static IP Address on page 54 to access the BIOS setup utility).
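The same-subnet condition described above can be checked mechanically before troubleshooting further. This helper is an illustrative sketch (the function names are invented, not Foundation tooling): it masks two IPv4 addresses with a dotted netmask and reports whether they share a subnet.

```shell
# Convert a dotted IPv4 address to a 32-bit integer.
ip4_to_int() {
  oldIFS=$IFS; IFS=.
  set -- $1
  IFS=$oldIFS
  echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
}

# same_subnet <ip1> <ip2> <netmask> -> prints "same" or "different"
same_subnet() {
  a=$(ip4_to_int "$1"); b=$(ip4_to_int "$2"); m=$(ip4_to_int "$3")
  if [ $(( a & m )) -eq $(( b & m )) ]; then echo same; else echo different; fi
}
```

For example, comparing the Foundation VM address against a node's IPMI address with your netmask: a result of "different" indicates that multi-homing (or re-addressing) is needed.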

To identify and resolve IPMI port configuration problems, do the following:

1. Go to the Block & Node Config screen and review the problem IP address for the failed nodes (nodes

with a red X next to the IPMI address field).

Hovering the cursor over the address displays a pop-up message with troubleshooting information. This

can help you diagnose the problem. See the service.log file (in /home/nutanix/foundation/log ) and

the individual node log files for more detailed information.


Figure: Foundation: IPMI Configuration Error 

2. When you have corrected all the problems and are ready to try again, click the Configure IPMI button

at the top of the screen.

Figure: Configure IPMI Button

3. Repeat the preceding steps as necessary to fix all the IPMI configuration errors.

4. When all nodes have green check marks in the IPMI address column, click the Image Nodes button at

the top of the screen to begin the imaging step.

If you cannot fix the IPMI configuration problem for one or more of the nodes, you can bypass those

nodes and continue to the imaging step for the other nodes by clicking the Proceed button. In this case you must configure the IPMI port address manually for each bypassed node (see Setting IPMI Static IP Address on page 54).

Fixing Imaging Problems

When imaging fails for one or more nodes in the cluster, the progress bar turns red and a red check

appears next to the hypervisor address field for any node that was not imaged successfully. Possible

reasons for a failure include the following:

• A type failure was detected. Check connectivity to the IPMI.

• There were network connectivity issues such as the following:

• The connection is dropping intermittently. If intermittent failures persist, look for conflicting IPs.

• [Hyper-V only] SAMBA is not up. If Hyper-V complains that it failed to mount the install share, restart SAMBA with the command "sudo service smb restart".

• Foundation ran out of disk space during the hypervisor or Phoenix preparation phase. Free up some

space by deleting extraneous ISO images. In addition, a Foundation crash could leave a /tmp/tmp*

directory that contains a copy of an ISO image which you can unmount (if necessary) and delete.

Foundation needs about 9 GB of free space for Hyper-V and about 3 GB for ESXi or KVM.

• The host boots but complains it cannot reach the Foundation VM. The message varies per hypervisor.

For example, on ESXi you might see an error message such as "ks.cfg:line 12: '/.pre' script returned with an error".

Make sure you have assigned the host an IP address on the same subnet as the Foundation

VM or you have configured multi-homing (see Configuring Global Parameters on page 16). Also check for IP address conflicts.
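The free-space figures above (about 9 GB for Hyper-V, about 3 GB for ESXi or KVM) can be checked up front. A minimal pre-flight sketch, assuming a POSIX shell on the Foundation VM; the function names and the path checked are illustrative, and only the size figures come from this guide:

```shell
# Pre-flight space check against the per-hypervisor requirements quoted
# in this guide (about 9 GB for Hyper-V, about 3 GB for ESXi or KVM).
required_gb() {
  case "$1" in
    hyperv)   echo 9 ;;
    esx|kvm)  echo 3 ;;
    *)        echo 9 ;;   # unknown hypervisor: assume the larger figure
  esac
}
free_gb() {
  # Whole gigabytes available on the filesystem containing $1.
  df -Pk "$1" | awk 'NR==2 { print int($4 / 1048576) }'
}
# Illustrative path; on a Foundation VM you would check /home/nutanix.
if [ "$(free_gb /tmp)" -lt "$(required_gb kvm)" ]; then
  echo "not enough free space; delete unused ISO images first"
fi
```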

To identify and resolve imaging problems, do the following:

1. See the individual log file for any failed nodes (in /home/nutanix/foundation/log ) for information about

the problem.

2. When you have corrected the problems and are ready to try again, click the Image Nodes button at the

top of the screen.


Figure: Image Nodes Button

3. Repeat the preceding steps as necessary to fix all the imaging errors.

If you cannot fix the imaging problem for one or more of the nodes, you can image those nodes one at a

time (see Imaging a Node on page 29).

Frequently Asked Questions (FAQ)

This section provides answers to some common Foundation questions.

Installation Issues

• What steps should I take when I encounter a problem?

Click the appropriate log link in the progress screen (see Monitoring Progress on page 25) to view the

relevant log file. In most cases the log file should provide some information about the problem near the

end of the file. If that information (plus the information in this troubleshooting section) is sufficient to identify and solve the problem, fix the issue and then restart the imaging process.

If you were unable to fix the problem, open a Nutanix support case. You can do this from the Nutanix

support portal (https://portal.nutanix.com/#/page/cases/form%3FtargetAction=new ). Upload relevant

log files as requested. The log files are located in /home/nutanix/foundation/log  in your Foundation

VM. This directory contains a service.log file for Foundation-related log messages, a log file for 

each node being imaged (named node_0.log, node_1.log, and so on), a log file for each cluster 

being created (named cluster_0.log, cluster_1.log, and so on), and http.access and http.error

files for server-related log messages. Logs from past installations are stored in /home/nutanix/

foundation/log/archive. In addition, the state of the current install process is stored in /home/nutanix/

foundation/persisted_config.json. You can download the entire log archive from the following URL:

http://foundation_ip:8000/foundation/log_archive.tar
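For example, the archive can be fetched with any HTTP client; curl is used here as an assumption, and the address is a placeholder for your Foundation VM's IP:

```shell
# Build the log-archive URL for a Foundation VM and (optionally) fetch it.
foundation_ip=10.0.0.20    # placeholder: your Foundation VM's address
archive_url="http://${foundation_ip}:8000/foundation/log_archive.tar"
echo "$archive_url"
# curl -f -o log_archive.tar "$archive_url"   # uncomment on a reachable network
```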

• Foundation is not running, and the service log complains about permissions.

 A crash or abrupt shutdown can cause Foundation to lock its PID file in a way that does not recover 

automatically. Enter the following commands to fix this problem:

$ sudo /etc/init.d/foundation_service stop    # Don't mind if this fails.
$ cd ~/foundation
$ rm foundation.pid
$ touch foundation.pid
$ chmod g-w foundation.pid
$ sudo /etc/init.d/foundation_service start

When shutting down the Foundation VM, allow it to shut down gracefully by using a command such as

"shutdown -h now" or by logging out and then powering down the VM.

• My installation hangs, and the service log complains about type detection.

Verify that all of your IPMI IPs are reachable through Foundation. (On rare occasion the IPMI IP

assignment will take some time.) If you get a complaint about authentication, double-check your 

password. If the problem persists, try resetting the BMC.
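Resetting the BMC can usually be done with a cold reset through ipmitool, a standard IPMI utility (not something this guide prescribes). The host and user below are placeholders; the sketch composes the command for review rather than executing it:

```shell
# Compose (but do not run) a BMC cold-reset command. Substitute your
# node's IPMI address and credentials before executing it by hand.
ipmi_host=192.168.1.30   # placeholder IPMI address
ipmi_user=ADMIN          # placeholder IPMI user
reset_cmd="ipmitool -I lanplus -H $ipmi_host -U $ipmi_user mc reset cold"
echo "$reset_cmd"
```

Add -P with the password (or -a to be prompted), and allow the BMC a minute or two to come back up after the reset before retrying Foundation.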

• Installation fails with an error where Foundation cannot ping the configured IPMI IP addresses.

Verify that the LAN interface is set to failover mode in the IPMI settings for each node. You can find this

setting by logging into IPMI and going to Configuration > Network > Lan Interface. Verify that the

setting is Failover  (not Dedicate).


• The diagnostic box was checked to run after installation, but that test (diagnostics.py) does not

complete (hangs, fails, times out).

Running this test can result in timeouts or low IOPS if you are using 1G cables. Such cables might not

provide the performance necessary to run this test at a reasonable speed.

• Foundation seems to be preparing the ISOs properly, but the nodes boot into <previous hypervisor>

and the install hangs.

The boot order for one or more nodes might be set incorrectly to select the USB over SATA DOM as the first boot device instead of the CDROM. To fix this, boot the nodes into BIOS mode and either select

"restore optimized defaults" (F3 as of BIOS version 3.0.2) or give the CDROM boot priority. Reboot the

nodes and retry the installation.

• I have misconfigured the IP addresses in the Foundation configuration page. How long is the timeout for 

the callback function, and is there a way I can avoid the wait?

The callback timeout is 60 minutes. To stop the Foundation process and restart it, open the terminal

in the Foundation VM and enter the following commands:

$ sudo /etc/init.d/foundation_service stop
$ cd ~/foundation/
$ mv persisted_config.json persisted_config.json.bak
$ sudo /etc/init.d/foundation_service start

Refresh the Foundation web page. If the nodes are still stuck, reboot them.

• I need to reset a block to the default state.

Download the desired Phoenix ISO image for KVM from the support portal (see https://portal.nutanix.com/#/page/phoenix/list).

Boot each node in the block to that ISO and follow the prompts

until the re-imaging process is complete. You should then be able to use Foundation as usual.

• The cluster create step is not working.

If you are installing NOS 3.5 or later, check the service.log file for messages about the problem. Next,

check the relevant cluster log (cluster_X.log) for cluster-specific messages. The cluster create step

in Foundation is not supported for earlier releases and will fail if you are using Foundation to image a

pre-3.5 NOS release. You must create the cluster manually (after imaging) for earlier NOS releases.

• I want to re-image nodes that are part of an existing cluster.

Do a cluster destroy prior to discovery. (Nodes in an existing cluster are ignored during discovery.)

• My Foundation VM is complaining that it is out of disk space. What can I delete to make room?

Unmount any temporarily-mounted file systems using the following commands:

$ sudo fusermount -u /home/nutanix/foundation/tmp/fuse
$ sudo umount /tmp/tmp*
$ sudo rm -rf /tmp/tmp*

If more space is needed, delete some of the Phoenix ISO images from the Foundation VM.

• I keep seeing the message "tar: Exiting with failure status due to previous errors. 'tar rf /home/nutanix/foundation/log/archive/log-archive-20140604-131859.tar -C /home/nutanix/foundation ./persisted_config.json' failed; error ignored."

This is a benign message. Foundation archives your persisted configuration file (persisted_config.json)

alongside the logs. Occasionally, there is no configuration file to back up. This is expected, and you may

ignore this message with no ill consequences.

• Imaging fails after changing the language pack.


Do not change the language pack. Only the default English language pack is supported. Changing the

language pack can cause some scripts to fail during Foundation imaging. Even after imaging, changing

the language pack can cause problems for NOS.

• [Hyper-V] I cannot reach the CVM console via ssh. How do I get to its console?

See KB article 1701 (https://portal.nutanix.com/#/page/kbs/details%3FtargetId=kA0600000008fJhCAI ).

• [ESXi] Foundation is booting into pre-install Phoenix, but not the ESXi installer.

Check the BIOS version and verify it is supported. If it is not a supported version, upgrade it. See KB

article 1467 (https://portal.nutanix.com/#/page/kbs/details%3FtargetId=kA0600000008dDxCAI).

Network and Workstation Issues

• I am having trouble installing VirtualBox on my Mac.

Turning off the WiFi can sometimes resolve this problem. For help with VirtualBox issues, see

https://www.virtualbox.org/wiki/End-user_documentation.

There can be a problem when the USB Ethernet adapter is listed as a 10/100 interface instead of a 1G

interface. To support a 1G interface, it is recommended that MacBook Air users connect to the network

with a Thunderbolt network adapter rather than a USB network adapter.

• I get "This Kernel requires an x86-64 CPU, but only detected an i686 CPU" when trying to boot the VM

on VirtualBox.

The VM needs to be configured to expose a 64-bit CPU. For more information, see

https://forums.virtualbox.org/viewtopic.php?f=8&t=58767.

• I am running the network setup script, but I do not see eth0 when I run ifconfig.

This can happen when you make changes to your VirtualBox network adapters. VirtualBox typically

creates a new interface (eth1, then eth2, and so on) to accommodate your new settings. To fix this, run

the following commands:

$ sudo rm /etc/udev/rules.d/70-persistent-net.rules
$ sudo shutdown -r now

This should reboot your machine and reset your adapter to eth0.

• I have plugged in the Ethernet cables according to the directions and I can reach the IPMI interface, but

discovery is not finding the nodes to image.

Your Foundation VM must be in the same broadcast domain as the Controller VMs to receive their IPv6

link-local traffic. If you are installing on a flat 1G switch, ensure that the 10G cables are not plugged in.

(If they are, the Controller VMs might choose to direct their traffic over that interface and never reach

your Foundation VM.) If you are installing on a 10G switch, ensure that only the IPMI 10/100 port and

the 10G ports are connected.

• The switch is dropping my IPMI connections in the middle of imaging.

If your network connection seems to be dropping out in the middle of imaging, try using an unmanaged

switch with spanning tree protocol disabled.

• Foundation is stalled on the ping home phase.

The ping test will wait up to two minutes per NIC to receive a response, so a long delay in the ping

phase indicates a network connection issue. Check that your 10G cables are unplugged and your 1G

connection can reach Foundation.
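One quick way to confirm the 1G path, from any machine on that network, is to probe the Foundation VM's web port. This is a sketch: the address is a placeholder, and nc is an assumption (any port-probe tool works):

```shell
# Probe the Foundation VM's web port over the 1G network.
foundation_ip=10.0.0.20   # placeholder for your Foundation VM's address
if nc -z -w 2 "$foundation_ip" 8000 2>/dev/null; then
  echo "Foundation reachable on port 8000"
else
  echo "no path to Foundation; check cabling and IP settings"
fi
```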

• How do I install on a 10/100 switch?


A 10/100 switch is not recommended, but it can be used for a few nodes. However, you may see

timeouts. It is highly recommended that you use a 1G or 10G switch if it is available to you.

