
Oracle ZFS Storage Appliance Installation

Installation and Configuration

Module 2

Installing the Oracle ZFS Storage Appliance: The EIS Methodology

The EIS methodology was designed to address issues before, during, and after the actual hands-on installation work.

Here are the steps identified for an EIS installation:

Step 1: Project Initiation

Step 2: Site Audit

Step 3: Installation Configuration Planning

Step 4: Installation Task Planning

Step 5: Installation and Configuration of Hardware and Software

Step 6: System Test

Step 7: Operational Handover

http://www.oracle.com/partners/secure/support/tools-and-resources/installation-checklists-178619.html

Oracle ZFS Storage Appliance Installation

Module 2

Step 1: Project Initiation

Step 2: Site Audit

Installing the ZS3 Step 1: Project Initiation

1 Project Schedule

1.1 General Information

The purpose of this Project Schedule is to capture and agree on when the installation service (or services) will be delivered and possibly by whom. The amount of detail depends heavily upon the type of installation: where multiple high-end servers with associated storage and possibly cluster software are involved, the relevant information will be extensive; for a standalone ZS3-2 Storage Appliance, the contents are likely to be minimal.

1.1.1 Site and Systems Access

Upon Oracle's request, Customer will provide Oracle with access to Customer's facilities, systems, and operating environment, including root access as necessary for Oracle to provide the Service.

1.2 Specific Information

In this section, enter details such as which groups of specialist installing engineers (server, storage, cluster, etc.) will come to the Customer site and when.

Installing the ZS3 Step 2: Site Audit

1 Site Audit

1.1 General Information

Based upon Oracle's Site Planning Guide for the ZFS Storage Appliances, which contains installation recommendations and requirements for the Covered System, Oracle and Customer will conduct an audit at Customer's site to:

• Determine installation needs for the System, including such assessments as the suitability of access routes to the installation location (doors, elevators, floor strengths, and ramps).
• Determine the floor weight load capacity at the location where the System will be installed.
• Determine the availability of required electrical power to run and maintain the System.
• Determine the environmental conditions at Customer's site, including temperature, humidity, cleanliness, and such other assessments as are determined to be necessary in Oracle's sole discretion.

Oracle will document the results of the Site Audit ("Site Audit Report") and provide Customer with a copy of the Site Audit Report.

Oracle ZFS Storage Appliance Installation

Module 2

Step 3: Installation Configuration Planning

Installing the ZS3-2 Basic Components

ZS3-2 Controller Trays Ship With:

• Slide rail rack kit
• DB9-RJ45 adapter (for serial management port)
• (4) 6-meter Ethernet cables
• (3) 1-meter Ethernet cables
• (4) 3-meter SAS cables (for connectivity to disk trays)
• Documentation

Installing the ZS3-2 Basic Components

DE2-24C and DE2-24P Drive Enclosures Ship With:

• Fixed rack mount kit
• (2) 2-meter SAS cables (for connectivity to controller tray or other disk trays)
• Documentation

DS2 Disk Shelves Ship With:

• Fixed rack mount kit
• (2) 2-meter SAS cables (for connectivity to controller tray or other disk trays)
• Documentation

Installing the ZS3-4 Basic Components

ZS3-4 Controller Trays Ship With:

• Slide rail rack kit (tool-less)
• DB9-RJ45 adapter (for serial management port)
• (4) 6-meter Ethernet cables
• (3) 1-meter Ethernet cables
• (8) 3-meter SAS cables (for connectivity to disk trays)
• Documentation

Installing the ZS3-4 Basic Components

DE2-24C and DE2-24P Drive Enclosures Ship With:

• Fixed rack mount kit
• (2) 2-meter SAS cables (for connectivity to controller tray or other disk trays)
• Documentation

DS2 Disk Shelves Ship With:

• Fixed rack mount kit
• (2) 2-meter SAS cables (for connectivity to controller tray or other disk trays)
• Documentation

Requirements and Precautions

• Supported rack configurations:
  – Sun Rack 900/1000
  – SunFire cabinet
  – StorEdge expansion cabinet
  – Sun Rack II
  – 19-inch wide, 4-post EIA
• Front-to-back depth of 61 cm to 91 cm
• Four-post rack only (no two-post support)
• Details in the latest Install Guide

Tools and Connections

• Tools:
  – Phillips screwdriver
  – ESD mat, grounding strap
  – Stylus
• System connection:
  – Workstation, laptop, or ASCII terminal
  – RJ-45 cable
  – 8N1 (8 data bits, no parity, 1 stop bit)
  – 9600 bps
  – Flow control = None
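
For example, from a Linux laptop you can open the serial console with screen. This is a sketch that assumes the USB-to-serial adapter shows up as /dev/ttyUSB0 (a hypothetical device path; check dmesg on your machine):

    # 9600 bps, 8 data bits, no parity, 1 stop bit, no flow control,
    # matching the settings listed above
    screen /dev/ttyUSB0 9600,cs8,-parenb,-cstopb,-ixon,-ixoff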

General Install Information

• Mounting brackets:
  – Controller includes rack rails
  – Disk shelf rails are ordered separately
  – Not the same type
• Weight of chassis: 29-39 kg (65-85 lbs)
• Weight of disk shelf: 47 kg (103 lbs)
  – 2 TB disks weigh twice as much as 1 TB disks
• Three people recommended (two strong ones)
• If no lift is available, remove:
  – Power supplies
  – Disk drives

Controller Mounting Brackets

• No screws necessary
• Align the rails with the 'pins' on the side of the controller
• Slide on until they 'click'
• On the rack-side rail, 'squeeze and click' to remove the slider
• Press down on #2 below to disengage

Rail Width Spacer

• Use 'clip nuts' to secure the rails to the rack
• Insert the 'rail spacer' prior to tightening the screws on the rack
  – Ensures a proper fit for the controller
• Stabilize the rack
• Push the slide rail assemblies all the way back
• Raise the chassis, align the brackets, and insert slowly until it 'clicks' into place

Cable Management Assembly (CMA)

• The CMA locks into place at the rear of the slide rail
• Repeat for the left side
• Attach the hook-and-loop assemblies to secure the cables

Disk Shelf Mounting

• Load from the bottom up – these things are heavy
• Attach the rail plates first
• The rail plates secure the rack rails
• The plates are positioned with pins, then secured to the rails

Disk Shelves - Cautions

• The shelf slides into the rail assembly
• Tighten the captive screws to secure the unit
• Use the locking clip at the rear of the unit to secure the chassis
• Use of a lift, or two or more people, is recommended

ZFS Storage Appliance – General Cabling

Oracle ZFS Storage Appliance Installation Guide, October 2013, E48491-01
http://docs.oracle.com/cd/E27998_01/html/E48491/index.html

ZFS Storage Appliance – Maximum Number of Supported Disk Shelves per Controller

The following table shows the maximum supported controller configurations.

Controller   Max. Shelves   Max. 2X4 port SAS-2 HBA   Max. 4X4 port SAS-2 HBA
ZS3-2        8              NA                        2
ZS3-4        36             NA                        4
7120         2              1                         NA
7320         6              1                         1
7420         36             6                         6

NOTE: Controllers cannot use 2X4 port SAS-2 HBAs and 4X4 port SAS-2 HBAs at the same time. To use DE2 and Sun Disk Shelves together, the controller must use 4X4 port SAS-2 HBAs, which are only supported with release AK 2013.1.0 and later.

General Cabling (DE2-24C Disk Shelf)

General Cabling (DE2-24P Disk Shelf)

DE2 to ZS3-2 Standalone: Single Disk Shelf (minimum) to Multiple Disk Shelves (maximum)

The following figures show a subset of the supported configurations for Oracle ZFS Storage ZS3-2/7120/7320 standalone controllers with one or two HBAs:

• Fig. 1 – Standalone controller with one HBA and one disk shelf in a single chain
• Two HBAs and multiple disk shelves in two chains
• Four disk shelves in a single chain

DE2 to ZS3-2 Clustered: Single Disk Shelf (minimum) to Two Disk Shelves (maximum)

Connecting ZS3-2/7320 Clustered Controllers to Disk Shelves:

• Clustered controllers with one HBA and one disk shelf in a single chain
• One HBA and multiple disk shelves in two chains

DE2 to ZS3-4 Standalone: Single Disk Shelf (minimum) to Maximum Disk Shelves

Connecting the ZS3-4 Standalone Controller to Disk Shelves (3 HBAs):

• Standalone controller with three HBAs and one disk shelf in a single chain
• Three HBAs with multiple disk shelves in six chains
• Four disk shelves in a single chain

ZS3-4 – DE2 Cabling Diagram Example: Single Disk Shelf (minimum) to a Maximum of 6 Disk Shelves

• Each HBA can support up to six disk shelves

Oracle ZFS Storage Appliance Installation

Module 2

Step 4: Installation Task Planning

Pre-Configuration Procedures

• Information you'll need before starting the installation procedure:
  – A system with a secure shell client
  – One Ethernet switch connection
  – IP addresses for data, administration, and Service Processor access
  – One Network Time Protocol (NTP) server (recommended)
  – One Domain Name Server (DNS) (recommended)
  – Customer storage profile

Network Requirements

– Hostname
– Administrative IP address/netmask
– Additional IP addresses/netmasks for available interfaces
– Service Processor IP address/netmask
– Default router/gateway IP address
– DNS server
– NTP server IP addresses
– Subnet mask
– Root password

Initial Configuration

• Serial:
  – Connect an RJ-45 serial cable to the SER MGT port
  – Log in as root using a TTY shell from a serial console
  – Configure the first network IP address
  – Continue with initial setup via the console, OR
  – Log in to the BUI to complete the installation
• Network:
  – Connect an RJ-45 Ethernet cable to the NET MGT port
  – Configure the DHCP server to recognize the appliance
  – ssh root@<ip address>
  – Start /SP/console – answer 'y' – confirm NET-0
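
Put together, the network method looks roughly like the following session; the SP address 192.168.2.10 is a placeholder for whatever your DHCP server hands out:

    # SSH to the Service Processor (default credentials: root / changeme)
    $ ssh root@192.168.2.10
    Password:
    # At the ILOM prompt, start the host console and confirm
    -> start /SP/console
    Are you sure you want to start /SP/console (y/n)? y
    # The appliance setup then prompts you to confirm NET-0
    # and continue initial configuration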

Default Usernames and Passwords

• When connecting to either management port, you will need to use the default user name and password for initial configuration.

Component                      Login                                      Password
Service Processor (SP)         root                                       changeme
Command Line Interface (tty)   ssh root@<systemIP>,                       set during install; used to
                               ssh root@<systemName>,                     update the SP password
                               or serial connection
Browser User Interface (BUI)   https://<systemIP>:215 or                  set during install
                               https://<hostName>:215

ILOM Configuration

• Access ILOM via the serial management port (using a terminal) or via the network management port (using SSH to the DHCP address)
  – User: root
  – Password: changeme
• Alternatively, access ILOM via HTTPS or SSH using a defined IP address, e.g., https://192.168.2.10 or ssh 192.168.2.10
  – Normally a static network management IP address is defined for ILOM
  – User: root
  – Password: changeme
• Access ILOM over serial or SSH and start the host (ZFSSA) console
  – Hit the Enter key
  – After the ZFSSA has finished booting (if it has not already), hit any key at the prompt
• Enter the requested information
• Now you can access the ZFSSA via HTTPS

Oracle ZFS Storage Appliance Installation

Module 2

Step 5: Installation and Configuration of Hardware and Software

Starting the Browser Interface

• Start your web browser and enter the URL for the Browser Interface: https://<appliance_network_name_or_IP>:215/
• Use the IP address or network name
• Or use a TTY shell to configure the appliance over the host console
• Log in as root using the administrator password you specified during Network Environment Configuration

Installation Wizard

Click Start on the Welcome screen. Six easy steps to installation and configuration:

1. Network
2. DNS
3. Time
4. Name Services
5. Storage
6. Registration and Support

1. Network

Installation Wizard – Step 1: Configure Networking

Three main components:

1. Devices
   – Physical ports
   – IPoIB partitions
2. Datalinks
   – Constructs for sending and receiving packets
   – Virtual Local Area Networks (VLANs) – to improve security
   – Link Aggregation Control Protocol (LACP) – to improve performance
   – IPv4 or IPv6
3. Interfaces

Configuring a Datalink

Datalinks are required to complete the network configuration, whether they apply specific settings to the network devices or not.

Configuring an Interface

Caution – Don't accidentally disconnect yourself by changing the main interface that you're using to configure the system.

• Click the <plus> sign to add an interface
• Name the interface
• Choose your protocol: IPv4 or IPv6
• Choose 'static' or DHCP
• Optionally configure an IPMP group (see the CLI sketch below)
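
The equivalent can also be done from the appliance CLI. A minimal sketch, assuming a device named e1000g0 and an address of 192.168.1.10/24 (both placeholders):

    zfssa:> configuration net datalinks device
    zfssa:configuration net datalinks device (uncommitted)> set label=mgmt-dl
    zfssa:configuration net datalinks device (uncommitted)> set links=e1000g0
    zfssa:configuration net datalinks device (uncommitted)> commit
    zfssa:> configuration net interfaces ip
    zfssa:configuration net interfaces ip (uncommitted)> set label=mgmt-if
    zfssa:configuration net interfaces ip (uncommitted)> set links=e1000g0
    zfssa:configuration net interfaces ip (uncommitted)> set v4addrs=192.168.1.10/24
    zfssa:configuration net interfaces ip (uncommitted)> commit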

Configuring IP MultiPathing (IPMP)

• Provides address failover
• Use LACP for performance
• Create one or more IP interfaces
• Click the 'IP MultiPathing Group' box
• Acceptable interfaces will show up in a list
• Choose whether each interface will be 'Active' or 'Standby'
• Choose the interfaces you wish to assign to the IPMP group
• Click 'Apply'
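
In the CLI, an IPMP group is created as another IP interface whose links are existing IP interfaces. A rough sketch (the interface names are placeholders):

    zfssa:> configuration net interfaces ipmp
    zfssa:configuration net interfaces ipmp (uncommitted)> set label=ipmp0
    zfssa:configuration net interfaces ipmp (uncommitted)> set links=nge0,nge1
    zfssa:configuration net interfaces ipmp (uncommitted)> commit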

Viewing Interface Details

View                                       Action
Network device hardware                    Configuration>>Network – device icons blink; text indicates speed and status
Device MAC address                         Configuration>>Network>>Edit Datalink – a list of current MAC IDs appears in a dialog box
Network datalinks                          Configuration>>Network, middle column – icons indicate type (physical, VLAN, LACP aggregation)
Datalink MAC address                       Configuration>>Network – mouse over a datalink icon and its MAC appears
Network interfaces                         Configuration>>Network, right-hand column – icons indicate status (green = online, blue = offline, alert = maintenance needed)
device:datalink:interface relationships    Configuration>>Network – click a list row; the objects it depends on are highlighted to show dependencies

Note – All actions listed in the table can be performed via the BUI or CLI.

Other Network Configuration Tasks

Task                                            Action
Edit a datalink or interface                    Click the "pencil" icon next to any object; a dialog box appears to set properties, label, or change type
Commit                                          Commit globally to change the system
Insert a datalink or interface                  Click the "+" button above the appropriate column; Commit to insert, then Commit globally to change the system
Delete a datalink or interface                  Click the "trash" icon next to any object, then Commit globally to change the system
Change the device used by a physical datalink   Drag and drop a device row onto a physical datalink row
Change the datalink used by an IP interface     Drag and drop a datalink row onto an IP interface row
Extend an aggregation                           Drag and drop a device row onto an aggregation datalink row
Extend an IP Multipath group                    Drag and drop an IP interface row onto an IPMP interface row
View the global address list                    Go to Network>>Addresses
View DNS hostnames                              Go to Network>>Addresses

2. DNS Setup

Installation Wizard – Configuring the DNS Service

DNS Settings – Property Descriptions

Property                        Description
DNS Domain                      The network Domain Name Service (DNS) domain name for your appliance. For example, if the full DNS name of your appliance is appliance.foo.bar.com, then the DNS domain is foo.bar.com.
DNS Servers                     The IP address of the DNS server or servers for the network to which you have connected your appliance.
Allow IPv4 non-DNS resolution   In rare cases, allows use of LDAP or NIS for name lookup if a name is not found via DNS.
Allow IPv6 non-DNS resolution   In rare cases, allows use of LDAP or NIS for name lookup if a name is not found via DNS.
Logs                            Output of the DNS service logs.
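
The same properties can be set from the CLI; a minimal sketch (the domain and server address are placeholders):

    zfssa:> configuration services dns
    zfssa:configuration services dns> set domain=foo.bar.com
    zfssa:configuration services dns> set servers=192.168.1.1
    zfssa:configuration services dns> commit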

DNS-Less Operation

• For test/demo environments, or when DNS servers cannot be located
• Supply the loopback address 127.0.0.1 as the DNS server
• Use of this mode is strongly discouraged; several features will not work correctly, including:
  – Analytics will be unable to resolve client addresses to hostnames.
  – The Active Directory feature will not function (you will be unable to join a domain).
  – Use of SSL-protected LDAP will not work properly with certificates containing hostnames.
  – Alert and threshold actions that involve sending e-mail can only be sent to mail servers on an attached subnet, and all addresses must be specified using the mail server's IP address.
  – Some operations may take longer than normal due to hostname resolution time-outs.

3. Time/NTP

Installation Wizard – Network Time Protocol

• Automatically synchronize the appliance clock with a time server
• Important for timestamps, proper file times, and protocol authentication
• The "Sync" button can be clicked to set the appliance time to match the client browser time
• Time must be synchronized to within 5 minutes to avoid authentication errors (Windows)
• The screen shows both the Client Time and the Server Time
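
A minimal CLI sketch for NTP (the server name is a placeholder):

    zfssa:> configuration services ntp
    zfssa:configuration services ntp> set servers=0.pool.ntp.org
    zfssa:configuration services ntp> commit
    zfssa:configuration services ntp> enable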

4. Name Services

Installation Wizard – Configuring Name Services

• NIS
• LDAP
• AD

You can use more than one.

Directory, Users, and Roles

• Map identities between Unix and Windows
• Create local accounts for administration of the ZFS Storage Appliance
• Use directory accounts for administration of the ZFS Storage Appliance
• Use roles to manage authorizations
• Authorizations allow users to perform specific tasks at a fine-grained level

NIS – BUI Screenshot

NIS Service Properties

Property                Description
Domain                  The domain name of the NIS domain to which the appliance belongs. The NIS domain must include one or more NIS directory servers.
Search Using Broadcast  Search for a NIS server by broadcasting to the IP network within the NIS domain. The service chooses the first NIS server that responds. If the chosen server becomes disabled, the NIS service automatically switches to another server.
Use Listed Servers      Use one or more specified servers as the NIS server or servers. The service chooses the first NIS server on the list that does not time out.
Server(s)               The server or servers that the NIS service uses to authenticate users when you choose the Use Listed Servers option.
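
A minimal CLI sketch for NIS, mirroring the properties above (the domain is a placeholder, and exact property names may vary by release):

    zfssa:> configuration services nis
    zfssa:configuration services nis> set domain=example.com
    zfssa:configuration services nis> set broadcast=true
    zfssa:configuration services nis> commit
    zfssa:configuration services nis> enable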

LDAP – BUI Screenshot

LDAP Service Properties

Property                            Description
Protect LDAP traffic with SSL/TLS   Use Transport Layer Security (TLS), the descendant of Secure Sockets Layer (SSL), to establish secure connections to the LDAP server.
Base search DN                      The Base Distinguished Name from which the service searches the LDAP Directory Information Tree (DIT).
Search scope                        Use an LDAP search scope of One-level (non-recursive) or Subtree (recursive) when searching the DIT. One-level (non-recursive) is the default value.

LDAP Service Properties (Continued)

Property               Description
Bind credential level  Credential level with which the service authenticates to the LDAP server. The Anonymous option gives the service access only to data that is available to everyone. The Proxy option directs the service to bind to the server using a proxy account, which you must specify. Anonymous is the default value.
Proxy DN               The distinguished name of the proxy account.
Proxy Password         Password for the proxy account.
Authentication method  The bind authentication method that the service uses to bind to the LDAP server. The options are Simple (RFC 4513), SASL/CRAM-MD5, or SASL/DIGEST-MD5. Simple (RFC 4513) is the default value.

Active Directory – BUI Screenshot

Active Directory Service Properties

Property                     Description
Active Directory Domain      The name of the Active Directory domain that the service joins.
Administrative User          The user name of the AD administrator, usually "Administrator".
Administrative Password      The administrative user's password.
Additional DNS Search Path   When this optional property is specified, DNS queries are resolved against this domain, in addition to the primary DNS domain and the Active Directory domain.

Configure Domain Mode Authentication

1. Click the edit icon for the Active Directory service.
2. Click Join Domain.
3. Type the Active Directory Domain, Administrative User, and Administrative Password in the corresponding text fields.
4. Click OK to join the specified domain.
5. Click APPLY to set the properties, enable the Active Directory service, and return to the Configure Name Services view.

When joining a domain, the clocks of the appliance and the domain controller must be within five minutes of the same time.
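
For reference, a rough CLI equivalent of the domain join; the join syntax shown here is an assumption based on the BUI fields above, so check your release's CLI reference before relying on it:

    zfssa:> configuration services ad
    zfssa:configuration services ad> join example.com      (hypothetical syntax)
    Active Directory User: Administrator
    Active Directory Password: ********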

Configure Workgroup Mode Authentication

1. Click the edit icon for the Active Directory service.
2. Click Join Workgroup.
3. Type the workgroup name in the text field.
4. Click OK to join the specified workgroup.
5. Click APPLY to set the properties, enable Workgroup mode authentication, and return to the Configure Name Services view.

Note – Joining a workgroup prevents the CIFS and Identity Mapping services from communicating with an Active Directory server.

5. Storage Pool Configuration

Installation Wizard – Configuring Storage

• You configure a storage pool in three steps:
  1. Select the hardware devices you want to allocate to the storage pool
  2. Choose a storage data profile
  3. Confirm your data profile choice

Note – You also configure storage when you add capacity to the system, or when you reconfigure existing storage.

Caution – Once you have configured the storage pool, you cannot change it. To reconfigure storage you must destroy the existing storage pool, including any data stored there.
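
The CLI exposes the same three steps under configuration storage. This sketch is approximate; the pool name and profile are placeholders, and the device-allocation step is a guided dialog in some releases:

    zfssa:> configuration storage config pool-0     (hypothetical invocation)
    (accept or adjust the proposed device allocation, then:)
    zfssa:configuration storage> set profile=mirror
    zfssa:configuration storage> done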

ZFS Pool Configuration

• Two pools: system and user
• The system disks are mirrored together
• /root and /usr are configured as read-only
• No configuration data intermingles with user data
• User pools are configured using profiles

Configuring Pools

• Single pool
  – With the ability to control access to log and cache devices on a per-share basis, the recommended mode of operation is a single pool.
• Multiple pools
  – Add complexity
  – Possible poor performance
  – Artificial partitioning
  – Only recommended when performance characteristics are drastically different

Single Pool – Allocate and Verification

Configure Data Profile

Data Profiles – Double Parity RAID

• Each stripe contains two parity disks
• High capacity and high availability
• Data remains available even with the loss of two disks
• Performance cost:
  – Parity is calculated on writes
  – Many concurrent I/Os

[Figure: dual-parity RAID single pool with 9 data disks and 2 parity disks]

Data Profiles – Mirrored

• Recommended configuration
• Highest performance and highly available
• Recommended when you have plenty of disk space
• Capacity cost in exchange for performance

[Figure: mirrored data disks]

Data Profiles – Single Parity, Narrow Stripes

• Each stripe is kept to 3 data disks and a single parity disk
• Few advantages over double parity RAID
• Can fill a gap between mirroring and double parity RAID
• Not generally recommended
• Good random read performance
• Costs less than mirroring

[Figure: four stripes, each with 3 data disks and 1 parity disk]

Data Profiles – Striped

• Data is striped across disks, with no redundancy
• Maximizes both performance and capacity
• A single disk failure will result in data loss
• Not recommended; should only be used when data loss is considered an acceptable trade-off for marginal gains in capacity and performance

[Figure: data striped across four disks]

Data Profiles – Triple Parity RAID, Wide Stripes

• Each stripe has 3 disks for parity
• Wide stripes across arrays for capacity
• Worse performance than double parity RAID
• Resilvering can take significantly longer

[Figure: wide data stripe with triple parity]

Data Profiles – Triple Mirrored

• Reduces usable capacity to one third
• High performance and highly reliable
• For when capacity isn't important
• Good for database storage

[Figure: three-way mirrored data disks]

Cache Profile

• Only displayed when SSDs are present
• L2ARC is always striped
• The cache is clean, so a failure has no effect on availability
• Loss of performance only

Cache Profile with Write-Optimized SSDs

• With write-optimized SSDs, the Cache Profile can be selected:
  – Log stripe
    • Log devices treated as a stripe
    • Highest performance
  – Log mirror
    • Log devices mirrored
    • Reduces capacity and IOPS by half
    • Data stored in log devices is also stored in memory
  – Log mirror NSPF (no single point of failure)
    • Log devices mirrored across JBODs
    • Greater availability
    • Reduces capacity and IOPS by half

Example SSD Configuration – Read- and Write-Optimized SSDs

• Read-optimized SSD:
  – Up to 4 drives per head
  – Slots 2-5
  – Front-accessible
• Write-optimized SSD:
  – 16 total drives max*
  – Up to 4 drives per tray
  – Placed in slots 20, 21, 22, and 23
  – Front-accessible

*This is the max for the 7320; the 7420 can have more.

6. Registration and Support

Installation Wizard – Registration and Support

• Register your system with Oracle support to enable automated response to system faults and issues
• Use an existing account or create a new one
• Enter the proxy and/or host port
• Registration connects your appliance to the portal

Oracle ZFS Storage Appliance Installation

Module 2

Step 6: System Test

Step 7: Operational Handover

Installing the Oracle ZFS Storage Appliance ZS3 – Step 6: System Test

1 The Test Procedure Plan (TPP)

1.1 General Information

The Test Procedure Plan contains the tests on the Covered System that Oracle in its sole discretion deems necessary to determine that the Covered System is installed and configured according to the Installation and Configuration Plan. The inclusion of additional tests is at Oracle's sole discretion.

1.2 Specific Information

In this section, check the Test Procedure Plan (TPP) items such as:

• Final Physical Inspection & Storage Infrastructure Verification
• Oracle ZFS Storage Appliance ZS3 Subsystem Status Verification
• System Software Version
• Network Time Protocol ("NTP")
• Build Document Verification
• Oracle ZFS Storage Appliance ZS3 Power Failure
• IPMP Resilience

Installing the Oracle ZFS Storage Appliance ZS3 – Step 7: Operational Handover

1 Operational Handover Document

The Operational Handover Document is created by Oracle to document the installation and configuration of Customer's system(s) as understood and accepted by both parties. The following documents are considered to be part of this Operational Handover Document.

1.1 Serial Numbers

This System Turnover Document covers the installed equipment, with a brief product description and serial number.

1.2 Comments

Lab 1: Installation

Lab Time!

ZFS Storage Appliance Installation

Module 2

Extras: Clustering Overview

Extras: Cluster Configuration

Cluster Configuration

• Two like controllers
• Each controller may be assigned:
  – A storage pool
  – Networking interfaces
  – Other resources available to the cluster
• Active-Active cluster:
  – Two storage pools
  – One assigned to each controller, along with network resources
• Active-Passive cluster:
  – One storage pool, assigned to the controller designated as "active", along with its associated network interfaces

Active-Passive Clustered Appliances

• Grow from a single head to dual heads
• One cluster node owns a single pool
• The other head is on standby
• Start with a single JBOD
• Expand capacity without disruption

[Figure: progression from a single node (Active Head, Pool A) to a clustered Active Head / Passive Head configuration, still serving Pool A]

Active-Active Clustered Appliances

• Two underlying storage pools
• If one head fails, the other takes over both pools

[Figure: Active Head A with Pool A and Active Head B with Pool B; after failover, one surviving head serves both Pool A and Pool B]

Cluster Facts

• Clustering, head-node failover:
  – Designed to be simple and fast
  – Only two controllers supported
  – Cluster communications are done through cluster cards and cables between both heads
  – Active/Active and Active/Passive head configurations
• Initial cluster configuration is accomplished through the BUI:
  – Requires duplicate network adapter configuration on both heads
  – Size individual heads to run the full load in case of an outage
• Logzilla flash architecture:
  – Means that the data path does not have to be mirrored between head nodes; removes the traditional performance-scaling bottleneck

Clustering Adapter – Clustron Card

• All three ports use standard Ethernet cables
• 2 x serial links for heartbeat communication between peers in the cluster
• 1 x 1 Gb Ethernet link for heartbeat communication between peers in the cluster
• 122 ms effective latency between cluster peers
• The fastest available port is always used for cluster status messages

Note: The ZS3-2 controller itself has two cluster serial ports and one Ethernet port to provide communication between the two controllers forming a cluster configuration.

Cluster Cabling

All inter-head communication consists of one or more messages transmitted over one of the three cluster I/O links provided by the CLUSTRON card. This device offers two low-speed serial links and one Ethernet link, as seen in Figure 3-14.

[Figure 3-14: Cluster Cabling]

Clustering Considerations for Storage

Total throughput (nominal operation)
• Single pool (Active-Passive): Up to 50% of total CPU resources, 50% of DRAM, and 50% of total network connectivity can be used to provide service at any one time. Only a single head is ever servicing client requests, so the other is idle.
• Dual pool (Active-Active): All CPU and DRAM resources can be used to provide service at any one time. Up to 50% of all network connectivity can be used at any one time (dark network devices are required on each head to support failover).

Total throughput (failed over)
• Single pool: No change in throughput relative to nominal operation.
• Dual pool: 100% of the surviving head's resources will be used to provide service. Total throughput relative to nominal operation may range from approximately 40% to 100%, depending on utilization during nominal operation.

I/O latency (failed over)
• Single pool: Read-optimized SSD is not available during failed-over operation, which may significantly increase latencies for read-heavy workloads that fit into the available read cache. Latency of write operations is unaffected.
• Dual pool: Read-optimized SSD is not available during failed-over operation, which may significantly increase latencies for read-heavy workloads that fit into the available read cache. Latency of both read and write operations may be increased due to greater contention for head resources, caused by running two workloads on the surviving head instead of the usual one. When nominal workloads on each head approach the head's maximum capabilities, latencies in the failed-over state may be extremely high.

Cluster Setup Procedure

1. Connect power and at least one Ethernet cable to each node.
2. Cable together the cluster cards of each node.
3. Cable the HBAs to the shared JBOD(s).
4. Power on both nodes (order doesn't matter) and go to the serial console of the initial setup node (it doesn't matter which one) in the same manner as when configuring a standalone appliance.
5. Configure its Ethernet management interface and then enter the BUI to begin cluster setup.
6. Cluster setup can be selected as part of initial setup if the Sun Fishworks Clustron controller is installed.
7. Alternately, you can perform standalone configuration at this time, deferring cluster setup until later. In the latter case, you can perform the cluster configuration task by clicking the Setup button in Configuration>>Cluster.
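
If you defer cluster setup to the CLI, the entry point is a guided task; a sketch only (the exact prompts vary by release):

    zfssa:> configuration cluster setup
    (The guided task walks through discovering the peer over the
    Clustron links, naming it, and committing the configuration;
    follow the prompts.)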

BUI Cluster User Interface

The interface contains these buttons:

• Setup – If the cluster is not yet configured, execute the cluster setup guided task, and then return to the current screen.
• Unconfigure – Returns one of the cluster controllers to its factory default configuration.
• Revert – If resource modifications are pending (rows highlighted in yellow), revert those changes and show the current cluster configuration.
• Fail Back – If the current appliance (left-hand side) is the OWNER, fail back resources owned by the other appliance to it, leaving both nodes in the CLUSTERED state (active/active).
• Take Over – If the current appliance (left-hand side) is either CLUSTERED or STRIPED, force the other appliance to reboot and take over its resources, making the current appliance the OWNER.