
ibm.com/redbooks

Front cover

IBM PureFlex System and IBM Flex System Products and Technology

David Watts
Randall Davis
Richard French
Lu Han
Dave Ridley
Cristian Rojas

Describes the IBM Flex System Enterprise Chassis and compute node technology

Provides details of available I/O modules and expansion options

Explains networking and storage configurations


International Technical Support Organization

IBM PureFlex System and IBM Flex System Products and Technology

July 2012

SG24-7984-00


© Copyright International Business Machines Corporation 2012. All rights reserved.
Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

First Edition (July 2012)

This edition applies to:

IBM PureFlex System
IBM Flex System Enterprise Chassis
IBM Flex System Manager
IBM Flex System x220 Compute Node
IBM Flex System x240 Compute Node
IBM Flex System p260 Compute Node
IBM Flex System p24L Compute Node
IBM Flex System p460 Compute Node
IBM 42U 1100 mm Enterprise V2 Dynamic Rack

Note: Before using this information and the product it supports, read the information in “Notices” on page ix.


Contents

Notices   ix
Trademarks   x

Preface   xi
The team who wrote this book   xi
Now you can become a published author, too!   xiv
Comments welcome   xiv
Stay connected to IBM Redbooks   xiv

Chapter 1. IBM PureSystems   1
  1.1 IBM PureFlex System   2
  1.2 IBM PureApplication System   3
  1.3 IBM Flex System: The building blocks   5
    1.3.1 Management   5
    1.3.2 Compute nodes   5
    1.3.3 Storage   6
    1.3.4 Networking   6
    1.3.5 Infrastructure   6
  1.4 IBM Flex System overview   7
    1.4.1 IBM Flex System Manager   7
    1.4.2 IBM Flex System Enterprise Chassis   8
    1.4.3 Compute nodes   8
    1.4.4 I/O Modules   9
  1.5 This book   10

Chapter 2. IBM PureFlex System   11
  2.1 IBM PureFlex System capabilities   12
  2.2 IBM PureFlex System Express   13
    2.2.1 Chassis   13
    2.2.2 Top of rack Ethernet switch   14
    2.2.3 Top of rack SAN switch   14
    2.2.4 Compute nodes   14
    2.2.5 IBM Flex System Manager   16
    2.2.6 IBM Storwize V7000   16
    2.2.7 Rack cabinet   17
    2.2.8 Software   17
    2.2.9 Services   20
  2.3 IBM PureFlex System Standard   20
    2.3.1 Chassis   21
    2.3.2 Top of rack Ethernet switch   21
    2.3.3 Top of rack SAN switch   22
    2.3.4 Compute nodes   22
    2.3.5 IBM Flex System Manager   23
    2.3.6 IBM Storwize V7000   23
    2.3.7 Rack cabinet   24
    2.3.8 Software   24
    2.3.9 Services   26
  2.4 IBM PureFlex System Enterprise   27
    2.4.1 Chassis   27
    2.4.2 Top of rack Ethernet switch   28
    2.4.3 Top of rack SAN switch   28
    2.4.4 Compute nodes   29
    2.4.5 IBM Flex System Manager   30
    2.4.6 IBM Storwize V7000   30
    2.4.7 Rack cabinet   31
    2.4.8 Software   31
    2.4.9 Services   33
  2.5 IBM SmartCloud Entry   34

Chapter 3. Systems management   37
  3.1 Management network   38
  3.2 Chassis Management Module   39
    3.2.1 Overview   39
    3.2.2 Interfaces   40
  3.3 Security   41
  3.4 Compute node management   43
    3.4.1 Integrated Management Module II   43
    3.4.2 Flexible service processor   44
    3.4.3 I/O modules   45
  3.5 IBM Flex System Manager   46
    3.5.1 Hardware overview   48
    3.5.2 Software features   51
    3.5.3 Supported agents, hardware, operating systems, and tasks   53

Chapter 4. Chassis and infrastructure configuration   57
  4.1 Overview   58
    4.1.1 Front of the chassis   60
    4.1.2 Midplane   61
    4.1.3 Rear of the chassis   62
    4.1.4 Specifications   62
    4.1.5 Air filter   63
    4.1.6 Compute node shelves   64
    4.1.7 Hot plug and hot swap components   65
  4.2 Power supplies   65
  4.3 Fan modules   68
  4.4 Fan logic module   70
  4.5 Front information panel   71
  4.6 Cooling   72
  4.7 Power supply and fan module requirements   77
    4.7.1 Fan module population   77
    4.7.2 Power supply population   78
  4.8 Chassis Management Module   82
  4.9 I/O architecture   85
  4.10 I/O modules   92
    4.10.1 I/O module LEDs   93
    4.10.2 Serial access cable   93
    4.10.3 I/O module naming scheme   94
    4.10.4 IBM Flex System Fabric EN4093 10 Gb Scalable Switch   94
    4.10.5 IBM Flex System EN4091 10 Gb Ethernet Pass-thru   100
    4.10.6 IBM Flex System EN2092 1 Gb Ethernet Scalable Switch   102
    4.10.7 IBM Flex System FC5022 16 Gb SAN Scalable Switch   107
    4.10.8 IBM Flex System FC3171 8 Gb SAN Switch   113
    4.10.9 IBM Flex System FC3171 8 Gb SAN Pass-thru   116
    4.10.10 IBM Flex System IB6131 InfiniBand Switch   118
  4.11 Infrastructure planning   119
    4.11.1 Supported power cords   119
    4.11.2 Supported PDUs and UPS units   120
    4.11.3 Power planning   120
    4.11.4 UPS planning   124
    4.11.5 Console planning   125
    4.11.6 Cooling planning   126
    4.11.7 Chassis-rack cabinet compatibility   127
  4.12 IBM 42U 1100 mm Enterprise V2 Dynamic Rack   128
  4.13 IBM Rear Door Heat eXchanger V2 Type 1756   134

Chapter 5. Compute nodes   139
  5.1 IBM Flex System Manager   140
  5.2 IBM Flex System x240 Compute Node   140
    5.2.1 Introduction   140
    5.2.2 Models   144
    5.2.3 Chassis support   144
    5.2.4 System architecture   145
    5.2.5 Processor   147
    5.2.6 Memory   150
    5.2.7 Standard onboard features   162
    5.2.8 Local storage   163
    5.2.9 Integrated virtualization   169
    5.2.10 Embedded 10 Gb Virtual Fabric Adapter   170
    5.2.11 I/O expansion   171
    5.2.12 Systems management   172
    5.2.13 Operating system support   176
  5.3 IBM Flex System x220 Compute Node   177
    5.3.1 Introduction   177
    5.3.2 Models   180
    5.3.3 Chassis support   181
    5.3.4 System architecture   182
    5.3.5 Processor options   184
    5.3.6 Memory options   184
    5.3.7 Internal disk storage controllers   186
    5.3.8 Supported internal drives   191
    5.3.9 Embedded 1 Gb Ethernet controller   192
    5.3.10 I/O expansion   192
    5.3.11 Integrated virtualization   193
    5.3.12 Systems management   194
    5.3.13 Operating system support   197
  5.4 IBM Flex System p260 and p24L Compute Nodes   198
    5.4.1 Specifications   198
    5.4.2 System board layout   200
    5.4.3 IBM Flex System p24L Compute Node   200
    5.4.4 Front panel   201
    5.4.5 Chassis support   203
    5.4.6 System architecture   203
    5.4.7 Processor   203
    5.4.8 Memory   205
    5.4.9 Active Memory Expansion   207
    5.4.10 Storage   209
    5.4.11 I/O expansion   212
    5.4.12 System management   214
    5.4.13 Integrated features   215
    5.4.14 Operating system support   215
  5.5 IBM Flex System p460 Compute Node   216
    5.5.1 Overview   216
    5.5.2 System board layout   218
    5.5.3 Front panel   218
    5.5.4 Chassis support   220
    5.5.5 System architecture   221
    5.5.6 Processor   222
    5.5.7 Memory   223
    5.5.8 Active Memory Expansion   226
    5.5.9 Storage   227
    5.5.10 Local storage and cover options   228
    5.5.11 Hardware RAID capabilities   229
    5.5.12 I/O expansion   230
    5.5.13 System management   232
    5.5.14 Integrated features   233
    5.5.15 Operating system support   233
  5.6 I/O adapters   234
    5.6.1 Form factor   234
    5.6.2 Naming structure   235
    5.6.3 Supported compute nodes   235
    5.6.4 Supported switches   236
    5.6.5 IBM Flex System EN2024 4-port 1 Gb Ethernet Adapter   236
    5.6.6 IBM Flex System EN4132 2-port 10 Gb Ethernet Adapter   238
    5.6.7 IBM Flex System EN4054 4-port 10 Gb Ethernet Adapter   240
    5.6.8 IBM Flex System CN4054 10 Gb Virtual Fabric Adapter   242
    5.6.9 IBM Flex System FC3172 2-port 8 Gb FC Adapter   246
    5.6.10 IBM Flex System FC3052 2-port 8 Gb FC Adapter   247
    5.6.11 IBM Flex System FC5022 2-port 16Gb FC Adapter   249
    5.6.12 IBM Flex System IB6132 2-port FDR InfiniBand Adapter   251
    5.6.13 IBM Flex System IB6132 2-port QDR InfiniBand Adapter   253

Chapter 6. Network integration   257
  6.1 Ethernet switch module selection   258
  6.2 Scalable switches   258
  6.3 VLAN   260
  6.4 High availability and redundancy   261
    6.4.1 Redundant network topologies   262
    6.4.2 Spanning Tree Protocol   262
    6.4.3 Layer 2 failover   263
    6.4.4 Virtual Link Aggregation Groups   264
    6.4.5 Virtual Router Redundancy Protocol   265
    6.4.6 Routing protocols   266
  6.5 Performance   266
    6.5.1 Trunking   266
    6.5.2 Jumbo frames   266
    6.5.3 NIC teaming   267
    6.5.4 Server Load Balancing   267
  6.6 IBM Virtual Fabric Solution   267
    6.6.1 Virtual Fabric mode vNIC   269
    6.6.2 Switch independent mode vNIC   270
  6.7 VMready   270

Chapter 7. Storage integration   273
  7.1 External storage   274
    7.1.1 IBM Storwize V7000   275
    7.1.2 IBM XIV Storage System series   276
    7.1.3 IBM System Storage DS8000 series   277
    7.1.4 IBM System Storage DS5000 series   278
    7.1.5 IBM System Storage DS3000 series   278
    7.1.6 IBM System Storage N series   278
    7.1.7 IBM System Storage TS3500 Tape Library   280
    7.1.8 IBM System Storage TS3310 series   280
    7.1.9 IBM System Storage TS3100 Tape Library   281
  7.2 Fibre Channel   281
    7.2.1 Fibre Channel requirements   281
    7.2.2 FC switch selection and fabric interoperability rules   282
  7.3 iSCSI   286
  7.4 High availability and redundancy   287
  7.5 Performance   288
  7.6 Backup solutions   289
    7.6.1 Dedicated server for centralized LAN backup   289
    7.6.2 LAN-free backup for nodes   290
  7.7 Boot from SAN   291
    7.7.1 Implementing Boot from SAN   291
    7.7.2 iSCSI SAN Boot specific considerations   292
  7.8 Converged networks   292

Abbreviations and acronyms   293

Related publications and education   295
  IBM Redbooks   295
  IBM education   296
  Online resources   296
  Help from IBM   297

Index   299


Notices

This information was developed for products and services offered in the U.S.A.

IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information about the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to: IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.

The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.

Any references in this information to non-IBM websites are provided for convenience only and do not in any manner serve as an endorsement of those websites. The materials at those websites are not part of the materials for this IBM product and use of those websites is at your own risk.

IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you.

Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.

This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental.

COPYRIGHT LICENSE:

This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs.


Trademarks

IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. These and other IBM trademarked terms are marked on their first occurrence in this information with the appropriate symbol (® or ™), indicating US registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in other countries. A current list of IBM trademarks is available on the Web at http://www.ibm.com/legal/copytrade.shtml

The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both:

Active Memory™, AIX®, AS/400®, BladeCenter®, BNT®, DS8000®, Easy Tier®, EnergyScale™, FlashCopy®, IBM Flex System™, IBM SmartCloud™, IBM®, iDataPlex®, Netfinity®, NMotion®, Power Systems™, POWER6+™, POWER6®, POWER7®, PowerPC®, PowerVM®, POWER®, PureApplication™, PureFlex™, PureSystems™, Redbooks®, Redbooks (logo)®, ServerProven®, ServicePac®, Storwize®, System Storage®, System x®, VMready®, WebSphere®, XIV®

The following terms are trademarks of other companies:

Intel Xeon, Intel, Pentium, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries.

Linux is a trademark of Linus Torvalds in the United States, other countries, or both.

Linear Tape-Open, LTO, Ultrium, the LTO Logo and the Ultrium logo are trademarks of HP, IBM Corp. and Quantum in the U.S. and other countries.

Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.

SnapMirror, SnapManager, NearStore, and the NetApp logo are trademarks or registered trademarks of NetApp, Inc. in the U.S. and other countries.

Java, and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its affiliates.

UNIX is a registered trademark of The Open Group in the United States and other countries.

BNT, NMotion, VMready, and Server Mobility are trademarks or registered trademarks of Blade Network Technologies, Inc., an IBM Company.

Intel, Intel logo, Intel Inside, Intel Inside logo, Intel Centrino, Intel Centrino logo, Celeron, Intel Xeon, Intel SpeedStep, Itanium, and Pentium are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries.

Other company, product, or service names may be trademarks or service marks of others.


Preface

To meet today’s complex and ever-changing business demands, you need a solid foundation of compute, storage, networking, and software resources. This system must be simple to deploy, and be able to quickly and automatically adapt to changing conditions. You also need to be able to take advantage of broad expertise and proven guidelines in systems management, applications, hardware maintenance, and more.

The IBM® PureFlex™ System combines no-compromise system designs along with built-in expertise and integrates them into complete, optimized solutions. At the heart of PureFlex System is the IBM Flex System™ Enterprise Chassis. This fully integrated infrastructure platform supports a mix of compute, storage, and networking resources to meet the demands of your applications.

The solution is easily scalable with the addition of another chassis with the required nodes. With the IBM Flex System Manager, multiple chassis can be monitored from a single panel. The 14-node, 10U chassis delivers high-speed performance complete with integrated servers, storage, and networking. This flexible chassis is simple to deploy now, and to scale to meet your needs in the future.

This IBM Redbooks® publication describes IBM PureFlex System and IBM Flex System. It highlights the technology and features of the chassis, compute nodes, management features, and connectivity options. Guidance is provided about every major component, and about networking and storage connectivity.

This book is intended for customers, Business Partners, and IBM employees who want to know the details about the new family of products. It assumes that you have a basic understanding of blade server concepts and general IT knowledge.

The team who wrote this book

This book was produced by a team of specialists from around the world working at the International Technical Support Organization, Raleigh Center.

David Watts is a Consulting IT Specialist at the IBM ITSO Center in Raleigh. He manages residencies and produces IBM Redbooks publications on hardware and software topics related to IBM Flex System, IBM System x®, and BladeCenter® servers and associated client platforms. He has authored over 200 books, papers, and Product Guides. He holds a Bachelor of Engineering degree from the University of Queensland (Australia), and has worked for IBM in both the United States and Australia since 1989. David is an IBM Certified IT Specialist, and a member of the IT Specialist Certification Review Board.

Randall Davis is a Senior IT Specialist working in the System x pre-sales team for IBM Australia as a Field Technical Sales Support (FTSS) specialist. He regularly performs System x, BladeCenter, and Storage demonstrations for customers at the IBM Demonstration Centre in Melbourne, Australia. He also helps instruct Business Partners and customers on how to configure and install the BladeCenter. His areas of expertise are the IBM BladeCenter, System x servers, VMware, and Linux. Randall started at IBM as a System 36 and AS/400® Engineer in 1989.


Richard French has worked for over 30 years at IBM, with the last nine in System x pre-sales Technical Education as a Senior Instructional Designer / Instructor. He is the course author and instructor of many System x and BladeCenter classroom courses, and leads the technical team that develops IBM Flex System course curriculum. He holds numerous certifications from IBM, Microsoft, and CompTIA, and has assisted in developing IBM certification exams. He is based in Raleigh, NC.

Lu Han is a Senior IT Specialist working in the Advanced Technical Skills team for the IBM Growth Markets Unit. Before this, he was a member of the Greater China Group ATS team. He started at IBM as a System x and BladeCenter Techline engineer in 2002, and has about 10 years of broad technical experience with IBM x86 products. He is familiar with System x and BladeCenter solutions that cover system management, virtualization, networking, and cloud. He also focuses on IBM iDataPlex® products and HPC solutions.

Dave Ridley is the System x, BladeCenter, iDataPlex, and IBM Flex System Product Manager for IBM in the United Kingdom and Ireland. His role includes product transition planning, supporting marketing events, press briefings, management of the UK loan pool, running early ship programs, and supporting the local sales and technical teams. He is based in Horsham in the United Kingdom, and has been working for IBM since 1998. In addition, he has been involved with IBM x86 products for some 27 years.

Cristian Rojas is a Senior IT Specialist working as a Client Technical Sales Specialist for IBM. He supports all System x and BladeCenter products, and is a technical focal point for IBM PureFlex System in the Northeast United States. Before this role, he worked as a member of the System x Performance team in Raleigh, NC where his main focus was future server performance optimization. His other areas of expertise include installing and configuring IBM System x and BladeCenter servers, and virtualization including Red Hat and VMware. He has worked for IBM since 2005, and is based in Allentown, PA.

The team (l-r): Richard, David, Lu, Cristian, Dave, and Randall


Thanks to the following people for their contributions to this project:

From IBM marketing:

• TJ Aspden
• Michael Bacon
• John Biebelhausen
• Bruce Corregan
• Mary Beth Daughtry
• Mike Easterly
• Diana Cunniffe
• Kyle Hampton
• Botond Kiss
• Shekhar Mishra
• Justin Nguyen
• Sander Kim
• Dean Parker
• Hector Sanchez
• David Tareen
• David Walker
• Randi Wood
• Bob Zuber

From IBM development:

• Mike Anderson
• Sumanta Bahali
• Wayne Banks
• Keith Cramer
• Mustafa Dahnoun
• Dean Duff
• Royce Espey
• Kaena Freitas
• Dottie Gardner
• Sam Gaver
• Phil Godbolt
• Mike Goodman
• John Gossett
• Tim Hiteshew
• Andy Huryn
• Bill Ilas
• Don Keener
• Caroline Metry
• Meg McColgan
• Mark McCool
• Rob Ord
• Greg Pruett
• Mike Solheim
• Fang Su
• Vic Stankevich
• Tan Trinh
• Rochelle White
• Dale Weiler
• Mark Welch
• Al Willard

From the International Technical Support Organization:

• Kevin Barnes
• Tamikia Barrow
• Mary Comianos
• Shari Deiana
• Cheryl Gera
• Ilya Krutov
• Karen Lawrence
• Julie O’Shea
• Linda Robinson

Others from IBM around the world:

• Kerry Anders
• Bill Champion
• Michael L. Nelson
• Matt Slavin

Others from other companies:

• Tom Boucher, Emulex
• Brad Buland, Intel
• Jeff Lin, Emulex
• Chris Mojica, QLogic
• Brent Mosbrook, Emulex
• Jimmy Myers, Brocade
• Haithuy Nguyen, Mellanox
• Brian Sparks, Mellanox
• Matt Wineberg, Brocade


Now you can become a published author, too!

Here’s an opportunity to spotlight your skills, grow your career, and become a published author—all at the same time! Join an ITSO residency project and help write a book in your area of expertise, while honing your experience using leading-edge technologies. Your efforts will help to increase product acceptance and customer satisfaction, as you expand your network of technical contacts and relationships. Residencies run from two to six weeks in length, and you can participate either in person or as a remote resident working from your home base.

Find out more about the residency program, browse the residency index, and apply online at:

ibm.com/redbooks/residencies.html

Comments welcome

Your comments are important to us!

We want our books to be as helpful as possible. Send us your comments about this book or other IBM Redbooks publications in one of the following ways:

• Use the online Contact us review Redbooks form found at:

ibm.com/redbooks

• Send your comments in an email to:

[email protected]

• Mail your comments to:

IBM Corporation, International Technical Support Organization
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400

Stay connected to IBM Redbooks

• Find us on Facebook:

http://www.facebook.com/IBMRedbooks

• Follow us on Twitter:

http://twitter.com/ibmredbooks

• Look for us on LinkedIn:

http://www.linkedin.com/groups?home=&gid=2130806

• Explore new Redbooks publications, residencies, and workshops with the IBM Redbooks weekly newsletter:

https://www.redbooks.ibm.com/Redbooks.nsf/subscribe?OpenForm

• Stay current on recent Redbooks publications with RSS Feeds:

http://www.redbooks.ibm.com/rss.html


Chapter 1. IBM PureSystems

During the last 100 years, information technology has moved from a specialized tool to a pervasive influence on nearly every aspect of life. From tabulating machines that counted with mechanical switches or vacuum tubes to the first programmable computers, IBM has been a part of this growth. The goal has always been to help customers to solve problems. IT is a constant part of business and of general life. The expertise of IBM in delivering IT solutions has helped the planet become more efficient. As organizational leaders seek to extract more real value from their data, business processes, and other key investments, IT is moving to the strategic center of business.

To meet these business demands, IBM is introducing a new category of systems. These systems combine the flexibility of general-purpose systems, the elasticity of cloud computing, and the simplicity of an appliance that is tuned to the workload. Expert integrated systems are essentially the building blocks of capability. This new category of systems represents the collective knowledge of thousands of deployments, established guidelines, innovative thinking, IT leadership, and distilled expertise.

The offerings in IBM PureSystems™ are designed to deliver value in the following ways:

• Built-in expertise helps you to address complex business and operational tasks automatically.

• Integration by design helps you to tune systems for optimal performance and efficiency.

• Simplified experience, from design to purchase to maintenance, creates efficiencies quickly.

The IBM PureSystems offerings are optimized for performance and virtualized for efficiency. These systems offer a no-compromise design with system-level upgradeability. IBM PureSystems is built for cloud, with “built-in” flexibility and simplicity.

At IBM, expert integrated systems come in two types:

• IBM PureFlex System. Infrastructure systems deeply integrate the IT elements and expertise of your system infrastructure.

• IBM PureApplication™ System. Platform systems include middleware and expertise for deploying and managing your application platforms.


1.1 IBM PureFlex System

To meet today’s complex and ever-changing business demands, you need a solid foundation of server, storage, networking, and software resources. Furthermore, it needs to be simple to deploy, and able to quickly and automatically adapt to changing conditions. You also need access to, and the ability to take advantage of, broad expertise and proven guidelines in systems management, applications, hardware maintenance and more.

IBM PureFlex System is a comprehensive infrastructure system that provides an expert integrated computing system. It combines servers, enterprise storage, networking, virtualization, and management into a single structure. Its built-in expertise enables organizations to manage and flexibly deploy integrated patterns of virtual and hardware resources through unified management. These systems are ideally suited for customers who want a system that delivers the simplicity of an integrated solution while still being able to tune middleware and the runtime environment.

IBM PureFlex System uses workload placement based on virtual machine compatibility and resource availability. Using built-in virtualization across servers, storage, and networking, the infrastructure system enables automated scaling of resources and true workload mobility.
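
For illustration only, the following Python sketch shows the general idea behind placement of this kind: candidate hosts are first filtered by virtual machine compatibility and then ranked by available resources. The Host and VirtualMachine classes, their attribute names, and the place() function are assumptions made for this example; they do not represent IBM Flex System Manager interfaces or the product's actual placement algorithm.

# Illustrative sketch only: a simplified placement policy that filters hosts by
# VM compatibility and picks the host with the most free resources. The data
# model is hypothetical and does not represent IBM Flex System Manager APIs.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Host:
    name: str
    architecture: str       # for example, "x86_64" or "ppc64"
    free_cpus: int
    free_memory_gb: int


@dataclass
class VirtualMachine:
    name: str
    architecture: str
    cpus: int
    memory_gb: int


def place(vm: VirtualMachine, hosts: List[Host]) -> Optional[Host]:
    """Return the compatible host with the most free resources, or None."""
    candidates = [
        h for h in hosts
        if h.architecture == vm.architecture
        and h.free_cpus >= vm.cpus
        and h.free_memory_gb >= vm.memory_gb
    ]
    if not candidates:
        return None
    return max(candidates, key=lambda h: (h.free_memory_gb, h.free_cpus))


if __name__ == "__main__":
    hosts = [
        Host("x240-bay1", "x86_64", free_cpus=8, free_memory_gb=64),
        Host("x240-bay2", "x86_64", free_cpus=4, free_memory_gb=16),
        Host("p460-bay3", "ppc64", free_cpus=16, free_memory_gb=128),
    ]
    vm = VirtualMachine("web01", "x86_64", cpus=4, memory_gb=32)
    target = place(vm, hosts)
    print(target.name if target else "no compatible host")  # x240-bay1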

IBM PureFlex System has undergone significant testing and experimentation so that it can mitigate IT complexity without compromising the flexibility to tune systems to the tasks that businesses demand. By providing both flexibility and simplicity, IBM PureFlex System can provide extraordinary levels of IT control, efficiency, and operating agility. This combination enables businesses to rapidly deploy IT services at a reduced cost. Moreover, the system is built on decades of expertise. This expertise enables deep integration and central management of the comprehensive, open-choice infrastructure system. It also dramatically cuts down on the skills and training required for managing and deploying the system.

IBM PureFlex System combines advanced IBM hardware and software along with patterns of expertise. It integrates them into three optimized configurations that are simple to acquire and deploy so you get fast time to value.

The PureFlex System has the following configurations:

• IBM PureFlex System Express, which is designed for small and medium businesses and is the most affordable entry point for PureFlex System.

• IBM PureFlex System Standard, which is optimized for application servers with supporting storage and networking, and is designed to support your key ISV solutions.

• IBM PureFlex System Enterprise, which is optimized for transactional and database systems. It has built-in redundancy for highly reliable and resilient operation to support your most critical workloads.


These configurations are summarized in Table 1-1.

Table 1-1 IBM PureFlex System configurations

Component | IBM PureFlex System Express | IBM PureFlex System Standard | IBM PureFlex System Enterprise
IBM PureFlex System 42U Rack | 1 | 1 | 1
IBM Flex System Enterprise Chassis | 1 | 1 | 1
IBM Flex System Fabric EN4093 10 Gb Scalable Switch | 1 | 1 | 2 (with both port-count upgrades)
IBM Flex System FC3171 8 Gb SAN Switch | 1 | 2 | 2
IBM Flex System Manager Node | 1 | 1 | 1
IBM Flex System Manager software license | IBM Flex System Manager with 1-year service and support | IBM Flex System Manager Advanced with 3-year service and support | IBM Flex System Manager Advanced with 3-year service and support
Chassis Management Module | 2 | 2 | 2
Chassis power supplies (std/max) | 2 / 6 | 4 / 6 | 6 / 6
Chassis 80 mm fan modules (std/max) | 4 / 8 | 6 / 8 | 8 / 8
IBM Storwize® V7000 Disk System | Yes (redundant controller) | Yes (redundant controller) | Yes (redundant controller)
IBM Storwize V7000 Software | Base with 1-year software maintenance agreement | Base with 3-year software maintenance agreement | Base with 3-year software maintenance agreement

The fundamental building blocks of IBM PureFlex System solutions are the IBM Flex System Enterprise Chassis complete with compute nodes, networking, and storage.

For more information about IBM PureFlex System, see Chapter 2, “IBM PureFlex System” on page 11.

1.2 IBM PureApplication System

IBM PureApplication System is a platform system that combines a full application platform set of middleware and expertise with the IBM PureFlex System. It can be controlled with a single management console. This workload-aware, flexible platform is easy to deploy, customize, safeguard, and manage in a traditional or private cloud environment. IBM PureApplication System ultimately provides superior IT economics.

With the IBM PureApplication System, you can provision your own patterns of software, middleware, and virtual system resources. You can provision these patterns within a unique framework that is shaped by IT guidelines and industry standards. These standards have been culled from many years of IBM experience with clients and a deep understanding of computing. These IT guidelines and standards are infused throughout the system.



IBM PureApplication System provides the following advantages:

• IBM builds expertise into preintegrated deployment patterns, which can speed the development and delivery of new services

• By automating key processes such as application deployment, PureApplication System built-in expertise capabilities can reduce the cost and time required to manage an infrastructure

• Built-in application optimization expertise reduces the number of unplanned outages through guidelines and automation of the manual processes identified as sources of those outages

• Administrators can use built-in application elasticity to scale up or to scale down automatically. Systems can use data replication to increase availability.

Patterns of expertise can automatically balance, manage, and optimize the elements necessary, from the underlying hardware resources up through the middleware and software. These patterns of expertise help deliver and manage business processes, services, and applications by encapsulating guidelines and expertise into a repeatable and deployable form. This knowledge and expertise has been gained from decades of optimizing the deployment and management of data centers, software infrastructures, and applications around the world.

These patterns help you achieve the following types of value:

• Agility: As you seek to innovate to bring products and services to market faster, you need fast time-to-value. Expertise built into a solution can eliminate manual steps, automate delivery, and support innovation.

• Efficiency: To reduce costs and conserve valuable resources, get the most from your systems in terms of energy efficiency, simple management, and fast, automated response to problems. With built-in expertise, you can optimize your critical business applications and get the most out of your investments.

• Increased simplicity: You need a less complex environment. Patterns of expertise can help you easily consolidate diverse servers, storage, and applications onto an easier-to-manage, integrated system.

• Control: With optimized patterns of expertise, you can accelerate cloud implementations to lower risk by improving security and reducing human error.

IBM PureApplication System is available in four configurations. These configuration options enable you to choose the size and compute power that meets your needs for application infrastructure. You can upgrade to the next size when your organization requires more capacity, and in most cases, you can do so without application downtime.


Table 1-2 provides a high-level overview of the configurations.

Table 1-2 IBM PureApplication System configurations

IBM PureApplication System | W1500-96 | W1500-192 | W1500-384 | W1500-608
Cores | 96 | 192 | 384 | 608
RAM (TB) | 1.5 | 3.1 | 6.1 | 9.7
SSD storage | 6.4 TB (all configurations)
HDD storage | 48.0 TB (all configurations)
Application Services Entitlement | Included (all configurations)

IBM PureApplication System is outside the scope of this book. For more information, see:

http://ibm.com/expert

1.3 IBM Flex System: The building blocks

IBM PureFlex System and IBM PureApplication System are built on reliable IBM technology that supports open standards and offers confident road maps: IBM Flex System. IBM Flex System is designed for multiple generations of technology, supporting your workload today while being ready for the future demands of your business.

1.3.1 Management

IBM Flex System Manager is designed to optimize the physical and virtual resources of the IBM Flex System infrastructure while simplifying and automating repetitive tasks. It provides easy system set-up procedures with wizards and built-in expertise, and consolidated monitoring for all of your resources, including compute, storage, networking, virtualization, and energy. IBM Flex System Manager provides core management functionality along with automation. It is an ideal solution that allows you to reduce administrative expense and focus your efforts on business innovation.

A single user interface controls these features:

• Intelligent automation
• Resource pooling
• Improved resource utilization
• Complete management integration
• Simplified setup

1.3.2 Compute nodes

The compute nodes are designed to take advantage of the full capabilities of IBM POWER7® and Intel Xeon processors. This configuration offers the performance you need for your critical applications.



With support for a range of hypervisors, operating systems, and virtualization environments, the compute nodes provide the foundation for these applications:

• Virtualization solutions
• Database applications
• Infrastructure support
• Line of business applications

1.3.3 Storage

The storage capabilities of IBM Flex System give you advanced functionality with storage nodes in your system, and take advantage of your existing storage infrastructure through advanced virtualization.

IBM Flex System simplifies storage administration with a single user interface for all your storage. The management console is integrated with the comprehensive management system. These management and storage capabilities allow you to virtualize third-party storage with nondisruptive migration of your current storage infrastructure. You can also take advantage of intelligent tiering so you can balance performance and cost for your storage needs. The solution also supports local and remote replication, and snapshots for flexible business continuity and disaster recovery capabilities.

1.3.4 Networking

The range of available adapters and switches that support key network protocols allows you to configure IBM Flex System to fit your infrastructure without sacrificing readiness for the future. The networking resources in IBM Flex System are standards-based, flexible, and fully integrated into the system, giving you no-compromise networking for your solution. Network resources are virtualized and managed by workload. These capabilities are automated and optimized to make your network more reliable and simpler to manage.

IBM Flex System gives you these key networking capabilities:

� Supports the networking infrastructure you have today, including Ethernet, Fibre Channel and InfiniBand

� Offers industry-leading performance with 1 Gb, 10 Gb, and 40 Gb Ethernet; 8 Gb and 16 Gb Fibre Channel; and FDR InfiniBand

� Provides pay-as-you-grow scalability so you can add ports and bandwidth when needed

1.3.5 Infrastructure

The IBM Flex System Enterprise Chassis is the foundation of the offering, supporting intelligent workload deployment and management for maximum business agility. The 14-node, 10U chassis delivers high-performance connectivity for your integrated compute, storage, networking, and management resources. The chassis is designed to support multiple generations of technology, and offers independently scalable resource pools for higher utilization and lower cost per workload.


1.4 IBM Flex System overview

The expert integrated systems of the IBM PureSystems family are based on a new hardware and software platform: IBM Flex System.

1.4.1 IBM Flex System Manager

The IBM Flex System Manager (FSM) is a high-performance, scalable systems management appliance with a preloaded software stack. As an appliance, the hardware is closed, runs on a dedicated compute node platform, and is designed for a specific purpose: to configure, monitor, and manage IBM Flex System resources in multiple IBM Flex System Enterprise Chassis (Enterprise Chassis), optimizing time-to-value. The FSM provides an instant resource-oriented view of the Enterprise Chassis and its components, providing vital information for real-time monitoring.

An increased focus on optimizing time-to-value is evident in these features:

� Setup wizards, including initial setup wizards, provide intuitive and quick setup of the FSM

� The Chassis Map provides multiple view overlays to track health, firmware inventory, and environmental metrics

� Configuration management for repeatable setup of compute, network, and storage devices

� Remote presence application for remote access to compute nodes with single sign-on

� Quick search provides results as you type

Beyond the physical world of inventory, configuration, and monitoring, IBM Flex System Manager enables virtualization and workload optimization for a new class of computing:

� Resource utilization: Detects congestion, notification policies, and relocation of physical and virtual machines that include storage and network configurations within the network fabric

� Resource pooling: Pooled network switching, with placement advisors that consider VM compatibility, processor, availability, and energy

� Intelligent automation: Automated and dynamic VM placement based on utilization, energy, hardware predictive failure alerts, and host failures

Figure 1-1 shows the IBM Flex System Manager.

Figure 1-1 IBM Flex System Manager


1.4.2 IBM Flex System Enterprise Chassis

The IBM Flex System Enterprise Chassis offers compute, networking, and storage capabilities that far exceed those currently available. With the ability to handle up to 14 compute nodes, intermixing POWER7 and Intel x86, the Enterprise Chassis provides flexibility and tremendous compute capacity in a 10U package. Additionally, the rear of the chassis accommodates four high-speed networking switches. Interconnecting compute nodes, networking, and storage through a high-performance, scalable midplane, the Enterprise Chassis can support 40 Gb speeds.

The ground-up design of the Enterprise Chassis reaches new levels of energy efficiency through innovations in power, cooling, and air flow. Simpler controls and futuristic designs allow the Enterprise Chassis to break free of “one size fits all” energy schemes.

The ability to support tomorrow's workloads is built in with a new I/O architecture, which provides choice and flexibility in fabric and speed. With the ability to use Ethernet, InfiniBand, FC, FCoE, and iSCSI, the Enterprise Chassis is uniquely positioned to meet the growing I/O needs of the IT industry.

Figure 1-2 shows the IBM Flex System Enterprise Chassis.

Figure 1-2 The IBM Flex System Enterprise Chassis

1.4.3 Compute nodes

IBM Flex System offers compute nodes that vary in architecture, dimension, and capabilities. Optimized for efficiency, density, performance, reliability, and security, the portfolio includes the following IBM POWER7-based and Intel Xeon-based nodes:

� IBM Flex System x240 Compute Node, a two socket Intel Xeon-based compute node

� IBM Flex System x220 Compute Node, a cost-optimized two-socket Intel Xeon-based compute node


� IBM Flex System p260 Compute Node, a two socket IBM POWER7-based compute node

� IBM Flex System p24L Compute Node, a two socket IBM POWER7-based compute node optimized for Linux

� IBM Flex System p460 Compute Node, a four socket IBM POWER7-based compute node

Figure 1-3 shows an IBM Flex System p460 Compute Node.

Figure 1-3 IBM Flex System p460 Compute Node

The nodes are complemented by leadership I/O capabilities of up to 16 x 10 Gb lanes per node. The following I/O adapters are offered:

� IBM Flex System EN2024 4-port 1 Gb Ethernet Adapter

� IBM Flex System EN4054 4-port 10 Gb Ethernet Adapter

� IBM Flex System EN4132 2-port 10 Gb Ethernet Adapter

� IBM Flex System CN4054 10 Gb Virtual Fabric Adapter

� IBM Flex System FC3052 2-port 8 Gb FC Adapter

� IBM Flex System FC3172 2-port 8 Gb FC Adapter

� IBM Flex System FC5022 2-port 16Gb FC Adapter

� IBM Flex System IB6132 2-port FDR InfiniBand Adapter

� IBM Flex System IB6132 2-port QDR InfiniBand Adapter

1.4.4 I/O Modules

Networking in data centers is undergoing a transition from a discrete traditional model to a more flexible, optimized model. The network architecture in IBM Flex System has been designed to address the key challenges customers are facing today in their data centers. The key focus areas of the network architecture on this platform are unified network management, optimized and automated network virtualization, and simplified network infrastructure.

Providing innovation, leadership, and choice in the I/O module portfolio uniquely positions IBM Flex System to provide meaningful solutions that address customer needs. The following I/O modules are offered with IBM Flex System:

� IBM Flex System Fabric EN4093 10 Gb Scalable Switch

� IBM Flex System EN2092 1 Gb Ethernet Scalable Switch

� IBM Flex System EN4091 10 Gb Ethernet Pass-thru

� IBM Flex System FC3171 8 Gb SAN Switch

� IBM Flex System FC3171 8 Gb SAN Pass-thru

� IBM Flex System FC5022 16 Gb SAN Scalable Switch


� IBM Flex System FC5022 24-port 16 Gb ESB SAN Scalable Switch

� IBM Flex System IB6131 InfiniBand Switch

� IBM Flex System IB6132 2-port QDR InfiniBand Adapter

Figure 1-4 shows the IBM Flex System Fabric EN4093 10 Gb Scalable Switch.

Figure 1-4 IBM Flex System Fabric EN4093 10 Gb Scalable Switch

1.5 This book

This book describes the IBM Flex System components in detail. It addresses the technology and features of the chassis, compute nodes, management features, and connectivity and storage options. It starts with a discussion of the systems management features of the product portfolio.


Chapter 2. IBM PureFlex System

IBM PureFlex System provides an integrated computing system that combines servers, enterprise storage, networking, virtualization, and management into a single structure. Its built-in expertise allows you to manage and flexibly deploy integrated patterns of virtual and hardware resources through unified management.

This chapter includes the following sections:

• 2.1, “IBM PureFlex System capabilities” on page 12
• 2.2, “IBM PureFlex System Express” on page 13
• 2.3, “IBM PureFlex System Standard” on page 20
• 2.4, “IBM PureFlex System Enterprise” on page 27
• 2.5, “IBM SmartCloud Entry” on page 34


2.1 IBM PureFlex System capabilities

The PureFlex System offers these advantages:

• Configurations that ease the acquisition experience and match your needs

� Optimized to align with targeted workloads and environments

� Designed for cloud with SmartCloud Entry included on Standard and Enterprise

� Choice of architecture, operating system, and virtualization engine

� Designed for simplicity with integrated, single-system management across physical and virtual resources

� Simplified ordering that accelerates deployment into your environments

� Ships as a single integrated entity directly to you

� Includes factory integration and lab services optimization

IBM PureFlex System has three preintegrated offerings that support compute, storage, and networking requirements. You can select from these offerings, which are designed for key client initiatives and help simplify ordering and configuration. As a result, PureFlex System reduces the cost, time, and complexity of system deployments.

The IBM PureFlex System is offered in these configurations:

� Express: The infrastructure system for small-sized and midsized businesses, and the most cost-effective entry point (2.2, “IBM PureFlex System Express” on page 13).

� Standard: The infrastructure system for application servers with supporting storage and networking (2.3, “IBM PureFlex System Standard” on page 20).

� Enterprise: The infrastructure system optimized for scalable cloud deployments. It has built-in redundancy for highly reliable and resilient operation to support critical applications and cloud services (2.4, “IBM PureFlex System Enterprise” on page 27).

A PureFlex System configuration has these main components:

� Preinstalled and configured IBM Flex System Enterprise Chassis

� Compute nodes with either IBM POWER® or Intel Xeon processors

� IBM Flex System Manager, preinstalled with management software and licenses for software activation

� IBM Storwize V7000 external storage unit

� All hardware components preinstalled in an IBM PureFlex System 42U rack

� Choice of:

– Operating system: IBM AIX®, IBM i, Microsoft Windows, Red Hat Enterprise Linux, or SUSE Linux Enterprise Server

– Virtualization software: IBM PowerVM®, KVM, VMware vSphere, or Microsoft Hyper V

– SmartCloud Entry (see 2.5, “IBM SmartCloud Entry” on page 34).

� Complete pre-integrated software and hardware

� On-site services included to get you up and running quickly

Restriction: Orders for Power Systems compute nodes must be part of one of the three IBM PureFlex System configurations. Build-to-order configurations are not available.


2.2 IBM PureFlex System Express

The tables in this section show the hardware, software, and services that make up IBM PureFlex System Express:

• 2.2.1, “Chassis”
• 2.2.2, “Top of rack Ethernet switch”
• 2.2.3, “Top of rack SAN switch” on page 14
• 2.2.4, “Compute nodes” on page 14
• 2.2.5, “IBM Flex System Manager” on page 16
• 2.2.6, “IBM Storwize V7000” on page 16
• 2.2.7, “Rack cabinet” on page 17
• 2.2.8, “Software” on page 17
• 2.2.9, “Services” on page 20

To specify IBM PureFlex System Express in the IBM ordering system, specify the indicator feature code listed in Table 2-1 for each system type.

Table 2-1 Express indicator feature code

   AAS feature code   XCC feature code   Description
   EFD1               A2VS               IBM PureFlex System Express Indicator Feature Code

2.2.1 Chassis

Table 2-2 lists the major components of the IBM Flex System Enterprise Chassis, including the switches and options.

Table 2-2 Components of the chassis and switches


Remember: The tables in this section do not list all feature codes. Some features are not listed here for brevity.

AAS featurecode

XCC featurecode

Description Minimumquantity

7893-92X 8721-HC1 IBM Flex System Enterprise Chassis

3593 A0TB IBM Flex System Fabric EN4093 10 Gb Scalable Switch

1

3282 5053 10 GbE 850 nm Fiber SFP+ Transceiver (SR) 2

EB29 3268 IBM BNT® SFP RJ45 Transceiver 5

3595 A0TD IBM Flex System FC3171 8 Gb SAN Switch 1

3286 5075 IBM 8 GB SFP+ Short-Wave Optical Transceiver 2

3590 A0UD Additional PSU 2500 W 0

4558 6252 2.5 m, 16A/100-240V, C19 to IEC 320-C20 power cord 2

9039 A0TM Base Chassis Management Module 1

3592 A0UE Additional Chassis Management Module 1


2.2.2 Top of rack Ethernet switch

If more than one chassis is configured then a top of rack Ethernet switch must be added to the configuration. If only one chassis is configured, the switch is optional. Table 2-3 lists the switch components for an Ethernet switch.

Table 2-3 Components of the top of rack Ethernet switch

2.2.3 Top of rack SAN switch

If more than one chassis is configured then a top of rack SAN switch must be added to the configuration. If only one chassis is configured, the switch is optional. Table 2-4 lists the switch components for a SAN switch.

Table 2-4 Components of the top of rack SAN switch

2.2.4 Compute nodes

The PureFlex System Express requires one of the following compute nodes:

• IBM Flex System p260 Compute Node (IBM POWER7 based)
• IBM Flex System p24L Compute Node (IBM POWER7 based)
• IBM Flex System x240 Compute Node (Intel Xeon based)

9038 None Base Fan Modules (four) 1

7805 A0UA Additional Fan Modules (two) 0

AAS featurecode

XCC featurecode

Description Minimumquantity

AAS featurecode

XCC featurecode

Description Minimumquantity

7309-HC3 1455-64C IBM System Networking RackSwitch G8264 0a

a. One required when two or more Enterprise Chassis are configured

7309-G52 1455-48E IBM System Networking RackSwitch G8052 0a

ECB5 A1PJ 3m IBM Passive DAC SFP+ Cable 1 per EN4093 switch

EB25 A1PJ 3m IBM QSFP+ DAC Break Out Cable 0

AAS featurecode

XCC featurecode

Description Minimumquantity

2498-B24 2498-B24 24-port SAN Switch 0

5605 5605 5m optic cable 1

2808 2808 8 Gb SFP transceivers (eight pack) 1


Table 2-5 lists the major components of the IBM Flex System p260 Compute Node.

Table 2-5 Components of IBM Flex System p260 Compute Node

Table 2-6 lists the major components of the IBM Flex System p24L Compute Node.

Table 2-6 Components of IBM Flex System p24L Compute Node

AAS featurecode

Description Minimumquantity

7895-22x IBM Flex System p260 Compute Node 1

1764 IBM Flex System FC3172 2-port 8 Gb FC Adapter 1

1762 IBM Flex System EN4054 4-port 10 Gb Ethernet Adapter 1

Base Processor 1 Required, select only one, minimum 1, maximum 1

EPR1 8 Cores, 2 x 4 core, 3.3 GHz + 2-socket system board 1

EPR3 16 Cores, 2 x 8 core, 3.2 GHz + 2-socket system board

EPR5 16 Cores, 2 x 8 core, 3.55 GHz + 2-socket system board

Memory - 8 GB per core minimum with all dual inline memory module (DIMM) slots filled with same memory type

8145 32 GB (2x 16 GB), 1066 MHz, LP RDIMMs (1.35V)

8199 16 GB (2x 8 GB), 1066 MHz, VLP RDIMMs (1.35V)

AAS featurecode

Description Minimumquantity

1457-7FL IBM Flex System p24L Compute Node 1

1764 IBM Flex System FC3172 2-port 8 Gb FC Adapter 1

1762 IBM Flex System EN4054 4-port 10 Gb Ethernet Adapter 1

Base Processor 1 Required, select only one, minimum 1, maximum 1

EPR7 12 cores, 2x 6core, 3.7 GHz + 2-socket system board 1

EPR8 16 cores, 2x 8 core, 3.2 GHz + 2-socket system board

EPR9 16 cores, 2x 8 core, 3.55 GHz + 2-socket system board

Memory - 2 GB per core minimum with all DIMM slots filled with same memory type

8145 32 GB (2x 16 GB), 1066 MHz, LP RDIMMs (1.35V)

8199 16 GB (2x 8 GB), 1066 MHz, VLP RDIMMs (1.35V)

8196 8 GB (2x 4 GB), 1066 MHz, DDR3, VLP RDIMMS(1.35V)

EM04 4 GB (2 x2 GB), 1066 MHz, DDR3 DRAM, RDIMM (1Rx8)


Table 2-7 lists the major components of the IBM Flex System x240 Compute Node.

Table 2-7 Components of IBM Flex System x240 Compute Node

2.2.5 IBM Flex System Manager

Table 2-8 lists the major components of the IBM Flex System Manager.

Table 2-8 Components of the IBM Flex System Manager

2.2.6 IBM Storwize V7000

Table 2-9 lists the major components of the IBM Storwize V7000 storage server.

Table 2-9 Components of the IBM Storwize V7000 storage server

AAS featurecode

XCC featurecode

Description Minimumquantity

7863-10X 8737AC1 IBM Flex System x240 Compute Node

EN20EN21

A1BCA1BD

x240 with embedded 10 Gb Virtual Fabricx240 without embedded 10 Gb Virtual Fabric(select one of these base features)

1

1764 A2N5 IBM Flex System FC3052 2-port 8 Gb FC Adapter 1

1759 A1R1 IBM Flex System CN4054 10 Gb Virtual Fabric Adapter(select if x240 without embedded 10 Gb Virtual Fabric is selected - EN21/A1BD)

1

EBK2 49Y8119 IBM Flex System x240 USB Enablement Kit

EBK3 41Y8300 2 GB USB Hypervisor Key (VMware 5.0)

AAS featurecode

XCC featurecode

Description Minimumquantity

7955-01M 8731AC1 IBM Flex System Manager 1

EB31 9220 Platform Bundle preload indicator 1

EM09None

None8941

8 GB (2x 4 GB) 1333 MHz RDIMMs (1.35V) 4 GB (1x 4 GB) 1333 MHz RDIMMs (1.35V)

4a

8

a. In the AAS system, FC EM09 is pairs of DIMMs. In the XCC system, FC 8941 is single DIMMs. The DIMMS are otherwise identical.

None A1CW Intel Xeon E5-2650 8C 2.0 GHz 20 MB 1600 MHz 95W 1

1771 5420 200 GB, 1.8", SATA MLC SSD 2

3767 A1AV 1TB 2.5” SATA 7.2K RPM hot-swap 6 Gbps HDD 1

AAS featurecode

XCC featurecode

Description Minimumquantity

2076-124 2076-124 IBM Storwize V7000 Controller 1

5305 5305 5m Fiber-optic Cable 2

35123514

35123514

200 GB 2.5 INCH SSD or400 GB 2.5 INCH SSD

2a


2.2.7 Rack cabinet

Table 2-10 lists the major components and options of the rack.

Table 2-10 Components of the rack

2.2.8 Software

This section lists the software features of IBM PureFlex System Express.

AIX and IBM iTable 2-11 lists the software features included with the Express configuration on POWER7 processor-based compute nodes for AIX and IBM i.

Table 2-11 Software features for IBM PureFlex System Express with AIX and IBM i on Power

0010 0010 Storwize V7000 Software Preload 1

6008 6008 8 GB Cache 2

9730 9730 Power cord to PDU (includes two power cords) 1

9801 9801 Power supplies 2

a. If Power Systems compute node is selected, at least eight drives must be installed in the Storwize V7000. If an Intel Xeon-based compute node is selected with SmartCloud Entry, four drives must be installed in the Storwize V7000.

AAS featurecode

XCC featurecode

Description Minimumquantity

AAS featurecode

XCC featurecode

Description Minimumquantity

7953-94X 93634AX IBM 42U 1100 mm Enterprise V2 Dynamic Rack 1

EC06 None Gray Door 1

EC03 None Side Cover Kit (Black) 1

EC02 None Rear Door (Black/flat) 1

71967189+64927189+64917189+64897189+66677189+6653

58975902590459035906None

Combo PDU C19/C13 3-Phase 60ACombo PDU C19/C13 1-Phase 60ACombo PDU C19/C13 1-Phase 63A InternationalCombo PDU C19/C13 3-Phase 32A InternationalCombo PDU C19/C13 1-Phase 32A Australia and NZCombo PDU C19/C13 3-Phase 16A International

2a

22224

a. Select one PDU line item from this list. They are mutually exclusive. Most are quantity = 2 except for the 16A PDU, which is quantity = 4. The selection depends on customer’s country and utility power requirements.

AIX 6 AIX 7 IBM i 6.1 IBM i 7.1

Standard components - Express

IBM Storwize V7000 Software

� 5639-VM1 V7000 Base PID� 5639-SM1 One-year Software Maintenance (SWMA)


RHEL and SUSE Linux on PowerTable 2-12 lists the software features included with the Express configuration on POWER7 processor-based compute nodes for Red Hat Enterprise Linux (RHEL) and SUSE Linux Enterprise Server (SLES).

Table 2-12 Software features for IBM PureFlex System Express with RHEL and SLES on Power

IBM Flex System Manager

� 5765-FMX IBM Flex System Manager Standard� 5660-FMX 1-year SWMA

Operating system � 5765-G62 AIX Standard 6

� 5771-SWM 1-year SWMA

� 5765-G98 AIX Standard 7

� 5771-SWM 1-year SWMA

� 5761-SS1 IBM i 6.1

� 5733-SSP 1-year SWMA

� 5770-SS1 IBM i 7.1

� 5733-SSP 1-year SWMA

Virtualization � 5765-PVS PowerVM Standard� 5771-PVS 1-year SWMA

Security (PowerSC) � 5765-PSE PowerSC Standard� 5660-PSE 1-year SWMA

Not applicable Not applicable

Cloud Software (optional)

� None standard in Express configurations. Optional - see following.

Optional components - Express Expansion

IBM Storwize V7000 Software

� 5639-EV1 V7000 External Virtualization software � 5639-RM1 V7000 Remote Mirroring

IBM Flex System Manager

� 5765-FMS IBM Flex System Manager Advanced

Operating system � 5765-AEZ AIX 6 Enterprise

� 5765-G99 AIX 7 Enterprise

IBM i 6.1 IBM i 7.1

Virtualization � 5765-PVE PowerVM Enterprise

Security (PowerSC) Not applicable Not applicable Not applicable Not applicable

Cloud Software (optional)

� 5765-SCP Smart Cloud Entry

� 5660-SCP 1-year SWMA

� Requires upgrade to 5765-FMS IBM Flex System Manager Advanced

� 5765-SCP Smart Cloud Entry

� 5660-SCP 1-year SWMA

� Requires upgrade to 5765-FMS IBM Flex System Manager Advanced

Not applicable Not applicable

AIX 6 AIX 7 IBM i 6.1 IBM i 7.1

Red Hat Enterprise Linux (RHEL) SUSE Linux Enterprise Server (SLES)

Standard components - Express

IBM Storwize V7000 Software

� 5639-VM1 V7000 Base PID� 5639-SM1 1-year SWMA

IBM Flex System Manager

� 5765-FMX IBM Flex System Manager Standard� 5660-FMX 1-year SWMA


Intel Xeon-based compute nodesTable 2-13 lists the software features included with the Express configuration on Intel Xeon-based compute nodes.

Table 2-13 Software features for IBM PureFlex System Express on Intel Xeon-based compute nodes

Operating system � 5639-RHP RHEL 5 & 6 � 5639-S11 SLES 11

Virtualization � 5765-PVS PowerVM Standard� 5771-PVS 1-year SWMA

Cloud Software (optional)

� 5765-SCP Smart Cloud Entry� 5660-SCP 1-year SWMA� Requires upgrade to 5765-FMS IBM Flex System Manager Advanced

Optional components - Express Expansion

IBM Storwize V7000 Software

� 5639-EV1 V7000 External Virtualization software � 5639-RM1 V7000 Remote Mirroring

IBM Flex System Manager

� 5765-FMS IBM Flex System Manager Advanced

Virtualization � 5765-PVE PowerVM Enterprise

Red Hat Enterprise Linux (RHEL) SUSE Linux Enterprise Server (SLES)

Intel Xeon-based compute nodes (AAS) Intel Xeon-based compute nodes (HVEC)

Standard components - Express

IBM Storwize V7000 Software

� 5639-VM1 V7000 Base PID� 5639-SM1 1-year SWMA

IBM Flex System Manager

� 5765-FMX IBM Flex System Manager Standard

� 5660-FMX 1-year SWMA

� 94Y9782 IBM Flex System Manager Standard 1-year SWMA

Operating system � Varies � Varies

Virtualization Not applicable

Cloud Software (optional)

Not applicable

Optional components - Express Expansion

IBM Storwize V7000 Software

� 5639-EV1 V7000 External Virtualization software � 5639-RM1 V7000 Remote Mirroring

IBM Flex System Manager

� 5765-FMS IBM Flex System Manager Advanced

� 94Y9783 IBM Flex System Manager Advanced

Operating system � 5639-OSX RHEL for x86 � 5639-W28 Windows 2008 R2� 5639-CAL Windows 2008 Client Access

� 5731RSI RHEL for x86 - L3 support only� 5731RSR RHEL for x86 - L1-L3 support� 5731W28 Windows 2008 R2� 5731CAL Windows 2008 Client Access

Virtualization � VMware ESXi selectable in the hardware configuration

Cloud Software � 5765-SCP SmartCloud Entry� 5660-SCP 1-year SWMA

� 5641-SC1 SmartCloud Entry with 1-year SWMA


2.2.9 Services

IBM PureFlex System Express includes the following services:

� Service and Support offerings:

– Software Maintenance: 1-year of 9x5 (9 hours per day, 5 days per week)
– Hardware Maintenance: 3-years of 9x5 Next Business Day service

� Technical Support Services

Essential minimum service level offering for every IBM PureFlex System Express configuration:

– Three years with one microcode analysis per year

Optional TSS offerings for IBM PureFlex System Express:

– Three years of Warranty Service upgrade to 24x7x4 service
– Three years of SWMA on applicable products
– Three years of Software Support on Windows Server / Linux and VMware environments
– Three years of Enhanced Technical Support

� Lab Services:

– Three days of on-site Lab services
– If the first compute node is a p260 or p460, 6911-300 is specified
– If the first compute node is an x240, 6911-100 is specified

2.3 IBM PureFlex System Standard

The tables in this section show the hardware, software, and services that make up IBM PureFlex System Standard.

• 2.3.1, “Chassis” on page 21
• 2.3.2, “Top of rack Ethernet switch” on page 21
• 2.3.3, “Top of rack SAN switch” on page 22
• 2.3.4, “Compute nodes” on page 22
• 2.3.5, “IBM Flex System Manager” on page 23
• 2.3.6, “IBM Storwize V7000” on page 23
• 2.3.7, “Rack cabinet” on page 24
• 2.3.8, “Software” on page 24
• 2.3.9, “Services” on page 26

To specify IBM PureFlex System Standard in the IBM ordering system, specify the indicator feature code listed in Table 2-14 for each system type.

Table 2-14 Standard indicator feature code

AAS feature code XCC feature code Description

EFD2 A2VT IBM PureFlex System Standard Indicator Feature Code: First of each MTM (for example, first compute node)

Remember: The tables in this section do not list all feature codes. Some features are not listed here for brevity.


2.3.1 Chassis

Table 2-15 lists the major components of the IBM Flex System Enterprise Chassis including the switches and options.

Table 2-15 Components of the chassis and switches

2.3.2 Top of rack Ethernet switch

If more than one chassis is configured, a top of rack Ethernet switch must be added to the configuration. If only one chassis is configured, the top of rack switch is optional. Table 2-16 lists the switch components for an Ethernet switch.

Table 2-16 Components of the top of rack Ethernet switch

AAS featurecode

XCC featurecode

Description Minimumquantity

7893-92X 8721-HC1 IBM Flex System Enterprise Chassis 1

3593 A0TB IBM Flex System Fabric EN4093 10 Gb Scalable Switch

1

3282 5053 10 GbE 850 nm Fiber SFP+ Transceiver (SR) 4

EB29 3268 IBM BNT SFP RJ45 Transceiver 5

3595 A0TD IBM Flex System FC3171 8 Gb SAN Switch 2

3286 5075 IBM 8 GB SFP+ Short-Wave Optical Transceiver 4

3590 A0UD Additional PSU 2500W 2

4558 6252 2.5 m, 16A/100-240V, C19 to IEC 320-C20 Power Cord 4

9039 A0TM Base Chassis Management Module 1

3592 A0UE Additional Chassis Management Module 1

9038 None Base Fan Modules (four) 1

7805 A0UA Additional Fan Modules (two) 1

AAS featurecode

XCC featurecode

Description Minimumquantity

7309-HC3 1455-64C IBM System Networking RackSwitch G8264 0a

a. One required when two or more Enterprise Chassis are configured

1455-48E IBM System Networking RackSwitch G8052 0a

ECB5 A1PJ 3m IBM Passive DAC SFP+ Cable 1 per EN4093 switch

EB25 A1PJ 3m IBM QSFP+ DAC Break Out Cable 0


2.3.3 Top of rack SAN switch

If more than one chassis is configured, a top of rack SAN switch must be added to the configuration. If only one chassis is configured, the top of rack switch is optional. Table 2-17 lists the switch components for a SAN switch.

Table 2-17 Components of the top of rack SAN switch

2.3.4 Compute nodes

The PureFlex System Standard requires one of the following compute nodes:

• IBM Flex System p460 Compute Node (IBM POWER7 based)
• IBM Flex System x240 Compute Node (Intel Xeon based)

Table 2-18 lists the major components of the IBM Flex System p460 Compute Node.

Table 2-18 Components of IBM Flex System p460 Compute Node

Table 2-19 lists the major components of the IBM Flex System x240 Compute Node.

Table 2-19 Components of IBM Flex System x240 Compute Node

AAS featurecode

XCC featurecode

Description Minimumquantity

2498-B24 2498-B24 24-port SAN Switch 0

5605 5605 5m optic cable 1

2808 2808 8 Gb SFP transceivers (eight pack) 1

AAS featurecode

Description Minimumquantity

7895-42x IBM Flex System p460 Compute Node 1

1764 IBM Flex System FC3172 2-port 8 Gb FC Adapter 2

1762 IBM Flex System EN4054 4-port 10 Gb Ethernet Adapter 2

Base Processor 1 Required, select only one, minimum 1, maximum 1

EPR2 16 Cores, (4 x 4 core), 3.3 GHz + 4-socket system board 1

EPR4 32 Cores, (4 x 8 core), 3.2 GHz + 4-socket system board

EPR6 32 Cores, (4 x 8 core), 3.55 GHz + 4-socket system board

Memory - 8 GB per core minimum with all DIMM slots filled with same memory type

8145 32 GB (2 x 16 GB), 1066 MHz, LP RDIMMs (1.35V)

8199 16 GB (2 x 8 GB), 1066 MHz, VLP RDIMMs (1.35V)

AAS featurecode

XCC featurecode

Description Minimumquantity

7863-10X 8737AC1 IBM Flex System x240 Compute Node

EN20EN21

A1BCA1BD

x240 with embedded 10 Gb Virtual Fabricx240 without embedded 10 Gb Virtual Fabric(select one of these base features)

1


2.3.5 IBM Flex System Manager

Table 2-20 lists the major components of the IBM Flex System Manager.

Table 2-20 Components of the IBM Flex System Manager

2.3.6 IBM Storwize V7000

Table 2-21 lists the major components of the IBM Storwize V7000 storage server.

Table 2-21 Components of the IBM Storwize V7000 storage server

1764 A2N5 IBM Flex System FC3052 2-port 8 Gb FC Adapter 1

1759 A1R1 IBM Flex System CN4054 10 Gb Virtual Fabric Adapter(select if x240 without embedded 10 Gb Virtual Fabric is selected: EN21/A1BD)

1

EBK2 49Y8119 IBM Flex System x240 USB Enablement Kit

EBK3 41Y8300 2 GB USB Hypervisor Key (VMware 5.0)

AAS featurecode

XCC featurecode

Description Minimumquantity

AAS featurecode

XCC featurecode

Description Minimumquantity

7955-01M 8731AC1 IBM Flex System Manager 1

EB31 9220 Platform Bundle preload indicator 1

EM09None

None8941

8 GB (2x 4 GB) 1333 MHz RDIMMs (1.35V) 4 GB (1x 4 GB) 1333 MHz RDIMMs (1.35V)

4a

8

a. In the AAS system, FC EM09 is pairs of DIMMs. In the XCC system, FC 8941 is single DIMMs. The DIMMS are otherwise identical.

None A1CW Intel Xeon E5-2650 8C 2.0 GHz 20 MB 1600 MHz 95W 1

1771 5420 200 GB, 1.8", SATA MLC SSD 2

3767 A1AV 1TB 2.5” SATA 7.2K RPM hot-swap 6 Gbps HDD 1

AAS featurecode

XCC featurecode

Description Minimumquantity

2076-124 2076-124 IBM Storwize V7000 Controller 1

5305 5305 5m Fiber-optic Cable 2

35123514

35123514

200 GB 2.5 INCH SSD or400 GB 2.5 INCH SSD

2a

0010 0010 Storwize V7000 Software Preload 1

6008 6008 8 GB Cache 2

9730 9730 Power cord to PDU (includes two power cords) 1

9801 9801 Power supplies 2


2.3.7 Rack cabinet

Table 2-22 lists the major components and options of the rack.

Table 2-22 Components of the rack

2.3.8 Software

This section lists the software features of IBM PureFlex System Standard.

AIX and IBM iTable 2-23 lists the software features included with the Standard configuration on POWER7 processor-based compute nodes for AIX and IBM i.

Table 2-23 Software features for IBM PureFlex System Standard with AIX and IBM i on Power

a. If Power Systems compute node is selected, at least eight drives must be installed in the Storwize V7000. If an Intel Xeon-based compute node is selected with SmartCloud Entry, four drives must be installed in the Storwize V7000.

AAS featurecode

XCC featurecode

Description Minimumquantity

7953-94X 93634AX IBM 42U 1100 mm Enterprise V2 Dynamic Rack 1

EC06 None Gray Door 1

EC03 None Side Cover Kit (Black) 1

EC02 None Rear Door (Black/flat) 1

71967189+64927189+64917189+64897189+66677189+6653

58975902590459035906None

Combo PDU C19/C13 3-Phase 60ACombo PDU C19/C13 1-Phase 60ACombo PDU C19/C13 1-Phase 63A InternationalCombo PDU C19/C13 3-Phase 32A InternationalCombo PDU C19/C13 1-Phase 32A Australia and NZCombo PDU C19/C13 3-Phase 16A International

2a

22224

a. Select one PDU line item from this list. They are mutually exclusive. Most are quantity = 2 except for the 16A PDU, which is quantity = 4. The selection depends on your country and utility power requirements.

AIX 6 AIX 7 IBM i 6.1 IBM i 7.1

Standard components - Standard

IBM Storwize V7000 Software

� 5639-VM1 V7000 Base PID� 5639-SM3 3-year SWMA

IBM Flex System Manager

� 5765-FMS IBM Flex System Manager Advanced� 5662-FMS 3-year SWMA

Operating system � 5765-G62 AIX Standard 6

� 5773-SWM 3-year SWMA

� 5765-G98 AIX Standard 7

� 5773-SWM 3-year SWMA

� 5761-SS1 IBM i 6.1

� 5773-SWM 3-year SWMA

� 5770-SS1 IBM i 7.1

� 5773-SWM 3-year SWMA

Virtualization � 5765-PVE PowerVM Enterprise� 5773-PVE 3-year SWMA


RHEL and SUSE Linux on PowerTable 2-24 lists the software features included with the Standard configuration on POWER7 processor-based compute nodes for RHEL and SLES.

Table 2-24 Software features for IBM PureFlex System Standard with RHEL and SLES on Power

Security (PowerSC) � 5765-PSE PowerSC Standard� 5662-PSE 3-year SWMA

Not applicable Not applicable

Cloud Software (default but optional)

� 5765-SCP SmartCloud Entry

� 5662-SCP 3-year SWMA

� 5765-SCP SmartCloud Entry

� 5662-SCP 3-year SWMA

Not applicable Not applicable

Optional components - Standard Expansion

IBM Storwize V7000 Software

� 5639-EV1 V7000 External Virtualization software � 5639-RM1 V7000 Remote Mirroring

IBM Flex System Manager

Not applicable

Operating system � 5765-AEZ AIX 6 Enterprise

� 5765-G99 AIX 7 Enterprise

Virtualization � 5765-PVE PowerVM Enterprise

Security (PowerSC) Not applicable Not applicable Not applicable Not applicable

Cloud Software (optional)

Not applicable Not applicable Not applicable Not applicable

AIX 6 AIX 7 IBM i 6.1 IBM i 7.1

Red Hat Enterprise Linux (RHEL) SUSE Linux Enterprise Server (SLES)

Standard components - Standard

IBM Storwize V7000 Software

� 5639-VM1 V7000 Base PID� 5639-SM3 3-year SWMA

IBM Flex System Manager

� 5765-FMS IBM Flex System Manager Advanced� 5662-FMS 3-year SWMA

Operating system � 5639-RHP RHEL 5 & 6 � 5639-S11 SLES 11

Virtualization � 5765-PVE PowerVM Enterprise� 5773-PVE 3-year SWMA

Cloud Software (optional)

� 5765-SCP SmartCloud Entry� 5662-SCP 3-year SWMA

Optional components - Standard Expansion

IBM Storwize V7000 Software

� 5639-EV1 V7000 External Virtualization software � 5639-RM1 V7000 Remote Mirroring

IBM Flex System Manager

Not applicable

Virtualization Not applicable


Intel Xeon-based compute nodesTable 2-25 lists the software features included with the Standard configuration on Intel Xeon-based compute nodes.

Table 2-25 Software features for IBM PureFlex System Standard on Intel Xeon-based compute nodes

2.3.9 Services

IBM PureFlex System Standard includes the following services:

� Service & Support offerings:

– Software Maintenance: 1-year of 9x5 (9 hours per day, 5 days per week)
– Hardware Maintenance: 3-years of 9x5 Next Business Day service

� Technical Support Services

Essential minimum service level offering for every IBM PureFlex System Standard configuration:

– Three years with one microcode analysis per year
– Three years of Warranty Service upgrade to 24x7x4 service
– Three years of Account Advocate or Enhanced Technical Support (9x5) and software support prerequisites

Intel Xeon-based compute nodes (AAS) Intel Xeon-based compute nodes (HVEC)

Standard components - Standard

IBM Storwize V7000 Software

� 5639-VM1 - V7000 Base PID� 5639-SM3 - 3-year SWMA

IBM Flex System Manager

� 5765-FMX IBM Flex System Manager Standard

� 5662-FMX 3-year SWMA

� 94Y9787 IBM Flex System Manager Standard, 3-year SWMA

Operating system � Varies � Varies

Virtualization � VMware ESXi selectable in the hardware configuration

Cloud Software (optional) (Windows and RHEL only)

� 5765-SCP SmartCloud Entry� 5662-SCP 3 yr SWMA

� 5641-SC3 SmartCloud Entry, 3 yr SWMA

Optional components - Standard Expansion

IBM Storwize V7000 Software

� 5639-EV1 V7000 External Virtualization software � 5639-RM1 V7000 Remote Mirroring

IBM Flex System Manager

� 5765-FMS IBM Flex System Manager Advanced

� 94Y9783 IBM Flex System Manager Advanced

Operating system � 5639-OSX RHEL for x86 � 5639-W28 Windows 2008 R2� 5639-CAL Windows 2008 Client Access

� 5731RSI RHEL for x86 - L3 support only� 5731RSR RHEL for x86 - L1-L3 support� 5731W28 Windows 2008 R2� 5731CAL Windows 2008 Client Access

Virtualization � VMware ESXi selectable in the hardware configuration

Cloud Software Not applicable Not applicable


� Lab Services:

– Five days of on-site Lab services
– If the first compute node is a p260 or p460, 6911-300 is specified
– If the first compute node is an x240, 6911-100 is specified

2.4 IBM PureFlex System Enterprise

The tables in this section show the hardware, software, and services that make up IBM PureFlex System Enterprise.

• 2.4.1, “Chassis”
• 2.4.2, “Top of rack Ethernet switch” on page 28
• 2.4.3, “Top of rack SAN switch” on page 28
• 2.4.4, “Compute nodes” on page 29
• 2.4.5, “IBM Flex System Manager” on page 30
• 2.4.6, “IBM Storwize V7000” on page 30
• 2.4.7, “Rack cabinet” on page 31
• 2.4.8, “Software” on page 31
• 2.4.9, “Services” on page 33

To specify IBM PureFlex System Enterprise in the IBM ordering system, specify the indicator feature code listed in Table 2-26 for each system type.

Table 2-26 Enterprise indicator feature code

   AAS feature code   XCC feature code   Description
   EFD3               A2VU               IBM PureFlex System Enterprise Indicator Feature Code: First of each MTM (for example, first compute node)

2.4.1 Chassis

Table 2-27 lists the major components of the IBM Flex System Enterprise Chassis including the switches and options.

Table 2-27 Components of the chassis and switches


Remember: The tables in this section do not list all feature codes. Some features are not listed here for brevity.

AAS featurecode

XCC featurecode

Description Minimumquantity

7893-92X 8721-HC1 IBM Flex System Enterprise Chassis 1

3593 A0TB IBM Flex System Fabric EN4093 10 Gb Scalable Switch

2

3596 A1EL IBM Flex System Fabric EN4093 10 Gb Scalable Switch Upgrade 1

2

3597 A1EM IBM Flex System Fabric EN4093 10 Gb Scalable Switch Upgrade 2

2


2.4.2 Top of rack Ethernet switch

A minimum of two top of rack Ethernet switches are required in the Enterprise configuration. Table 2-28 lists the switch components for an Ethernet switch.

Table 2-28 Components of the top of rack Ethernet switch

2.4.3 Top of rack SAN switch

A minimum of two top of rack SAN switches are required in the Enterprise configuration. Table 2-29 lists the switch components for a SAN switch.

Table 2-29 Components of the top of rack SAN switch

3282 5053 10 GbE 850 nm Fiber SFP+ Transceiver (SR) 4

EB29 3268 IBM BNT SFP RJ45 Transceiver 6

3595 A0TD IBM Flex System FC3171 8 Gb SAN Switch 2

3286 5075 IBM 8 GB SFP+ Short-Wave Optical Transceiver 8

3590 A0UD Additional PSU 2500W 4

4558 6252 2.5 m, 16A/100-240V, C19 to IEC 320-C20 Power Cord 6

9039 A0TM Base Chassis Management Module 1

3592 A0UE Additional Chassis Management Module 1

9038 None Base Fan Modules (four) 1

7805 A0UA Additional Fan Modules (two) 2

AAS featurecode

XCC featurecode

Description Minimumquantity

AAS featurecode

XCC featurecode

Description Minimumquantity

7309-HC3 1455-64C IBM System Networking RackSwitch G8264 2a

a. For IBM Power Systems™ configurations, two are required. For System x configurations, two are required when two or more Enterprise Chassis are configured.

1455-48E IBM System Networking RackSwitch G8052 2a

ECB5 A1PJ 3m IBM Passive DAC SFP+ Cable 1 per EN4093 switch

EB25 A1PJ 3m IBM QSFP+ DAC Break Out Cable 1

AAS featurecode

XCC featurecode

Description Minimumquantity

2498-B24 2498-B24 24-port SAN Switch 0

5605 5605 5m optic cable 1

2808 2808 8 Gb SFP transceivers (eight pack) 1


2.4.4 Compute nodes

The PureFlex System Enterprise requires one of the following compute nodes:

• IBM Flex System p460 Compute Node (IBM POWER7 based)
• IBM Flex System x240 Compute Node (Intel Xeon based)

Table 2-30 lists the major components of the IBM Flex System p460 Compute Node.

Table 2-30 Components of IBM Flex System p460 Compute Node

Table 2-31 lists the major components of the IBM Flex System x240 Compute Node.

Table 2-31 Components of IBM Flex System x240 Compute Node

AAS featurecode

Description Minimumquantity

7895-42x IBM Flex System p460 Compute Node 2

1764 IBM Flex System FC3172 2-port 8 Gb FC Adapter 2

1762 IBM Flex System EN4054 4-port 10 Gb Ethernet Adapter 2

Base Processor 1 Required, select only one, minimum 1, maximum 1

EPR2 16 Cores, (4 x 4 core), 3.3 GHz + 4-socket system board 1

EPR4 32 Cores, (4 x 8 core), 3.2 GHz + 4-socket system board

EPR6 32 Cores, (4 x 8 core), 3.55 GHz + 4-socket system board

Memory: 8 GB per core minimum with all DIMM slots filled with same memory type

8145 32 GB (2 x 16 GB), 1066 MHz, LP RDIMMs (1.35V)

8199 16 GB (2 x 8 GB), 1066 MHz, VLP RDIMMs (1.35V)

AAS featurecode

XCC featurecode

Description Minimumquantity

7863-10X 8737AC1 IBM Flex System x240 Compute Node 2

EN20EN21

A1BCA1BD

x240 with embedded 10 Gb Virtual Fabricx240 without embedded 10 Gb Virtual Fabric(select one of these base features)

1 per

1764 A2N5 IBM Flex System FC3052 2-port 8 Gb FC Adapter 1 per

1759 A1R1 IBM Flex System CN4054 10 Gb Virtual Fabric Adapter(select if x240 without embedded 10 Gb Virtual Fabric is selected - EN21/A1BD)

1 per

EBK2 49Y8119 IBM Flex System x240 USB Enablement Kit

EBK3 41Y8300 2 GB USB Hypervisor Key (VMware 5.0)


2.4.5 IBM Flex System Manager

Table 2-32 lists the major components of the IBM Flex System Manager.

Table 2-32 Components of the IBM Flex System Manager

2.4.6 IBM Storwize V7000

Table 2-33 lists the major components of the IBM Storwize V7000 storage server.

Table 2-33 Components of the IBM Storwize V7000 storage server

AAS featurecode

XCC featurecode

Description Minimumquantity

7955-01M 8731AC1 IBM Flex System Manager 1

EB31 9220 Platform Bundle preload indicator 1

EM09None

None8941

8 GB (2 x 4 GB) 1333 MHz RDIMMs (1.35V) 4 GB (1 x 4 GB) 1333 MHz RDIMMs (1.35V)

4a

8

a. In the AAS system, FC EM09 is pairs of DIMMs. In the XCC system, FC 8941 is single DIMMs. The DIMMS are otherwise identical.

None A1CW Intel Xeon E5-2650 8C 2.0 GHz 20 MB 1600 MHz 95W 1

1771 5420 200 GB, 1.8", SATA MLC SSD 2

3767 A1AV 1TB 2.5” SATA 7.2K RPM hot-swap 6 Gbps HDD 1

AAS featurecode

XCC featurecode

Description Minimumquantity

2076-124 2076-124 IBM Storwize V7000 Controller 1

5305 5305 5m Fiber-optic Cable 4

35123514

35123514

200 GB 2.5 INCH SSD or400 GB 2.5 INCH SSD

2a

a. If Power Systems compute node is selected, at least eight drives must be installed in the Storwize V7000. If an Intel Xeon-based compute node is selected with SmartCloud Entry, four drives must be installed in the Storwize V7000.

0010 0010 Storwize V7000 Software Preload 1

6008 6008 8 GB Cache 2

9730 9730 Power cord to PDU (includes two power cords) 1

9801 9801 Power supplies 2


2.4.7 Rack cabinet

Table 2-34 lists the major components of the rack and options.

Table 2-34 Components of the rack

2.4.8 Software

This section lists the software features of IBM PureFlex System Enterprise.

AIX and IBM iTable 2-35 lists the software features included with the Enterprise configuration on POWER7 processor-based compute nodes for AIX and IBM i.

Table 2-35 Software features for IBM PureFlex System Enterprise with AIX and IBM i on Power

AAS featurecode

XCC featurecode

Description Minimumquantity

7953-94X 93634AX IBM 42U 1100 mm Enterprise V2 Dynamic Rack 1

EC06 None Gray Door 1

EC03 None Side Cover Kit (Black) 1

EC02 None Rear Door (Black/flat) 1

71967189+64927189+64917189+64897189+66677189+6653

58975902590459035906None

Combo PDU C19/C13 3-Phase 60ACombo PDU C19/C13 1-Phase 60ACombo PDU C19/C13 1-Phase 63A InternationalCombo PDU C19/C13 3-Phase 32A InternationalCombo PDU C19/C13 1-Phase 32A Australia and NZCombo PDU C19/C13 3-Phase 16A International

2a

22224

a. Select one PDU line item from this list. They are mutually exclusive. Most are quantity = 2 except for the 16A PDU which is quantity = 4. The selection depends on your country and utility power requirements.

AIX 6 AIX 7 IBM i 6.1 IBM i 7.1

Standard components - Standard

IBM Storwize V7000 Software

� 5639-VM1 V7000 Base PID� 5639-SM3 3-year SWMA

IBM Flex System Manager

� 5765-FMS IBM Flex System Manager Advanced� 5662-FMS 3-year SWMA

Operating system � 5765-G62 AIX Standard 6

� 5773-SWM 3-year SWMA

� 5765-G98 AIX Standard 7

� 5773-SWM 3-year SWMA

� 5761-SS1 IBM i 6.1

� 5773-SWM 3-year SWMA

� 5770-SS1 IBM i 7.1

� 5773-SWM 3-year SWMA

Virtualization � 5765-PVE PowerVM Enterprise� 5773-PVE 3-year SWMA

Security (PowerSC) � 5765-PSE PowerSC Standard� 5662-PSE 3-year SWMA

Not applicable Not applicable


RHEL and SUSE Linux on PowerTable 2-36 lists the software features included with the Enterprise configuration on POWER7 processor-based compute nodes for RHEL and SLES.

Table 2-36 Software features for IBM PureFlex System Enterprise with RHEL and SLES on Power

Cloud Software (default but optional)

� 5765-SCP SmartCloud Entry

� 5662-SCP 3-year SWMA

� 5765-SCP SmartCloud Entry

� 5662-SCP 3-year SWMA

Not applicable Not applicable

Optional components - Standard Expansion

IBM Storwize V7000 Software

� 5639-EV1 V7000 External Virtualization software � 5639-RM1 V7000 Remote Mirroring

IBM Flex System Manager

Not applicable

Operating system � 5765-AEZ AIX 6 Enterprise

� 5765-G99 AIX 7 Enterprise

Virtualization � 5765-PVE PowerVM Enterprise

Security (PowerSC) Not applicable Not applicable Not applicable Not applicable

Cloud Software (optional)

Not applicable Not applicable Not applicable Not applicable

AIX 6 AIX 7 IBM i 6.1 IBM i 7.1

Red Hat Enterprise Linux (RHEL) SUSE Linux Enterprise Server (SLES)

Standard components - Standard

IBM Storwize V7000 Software

� 5639-VM1 V7000 Base PID� 5639-SM3 3-year SWMA

IBM Flex System Manager

� 5765-FMS IBM Flex System Manager Advanced� 5662-FMS 3-year SWMA

Operating system � 5639-RHP RHEL 5 & 6 � 5639-S11 SLES 11

Virtualization � 5765-PVE PowerVM Enterprise� 5773-PVE 3-year SWMA

Cloud Software (optional)

� 5765-SCP SmartCloud Entry� 5662-SCP 3-year SWMA

Optional components - Standard Expansion

IBM Storwize V7000 Software

� 5639-EV1 V7000 External Virtualization software � 5639-RM1 V7000 Remote Mirroring

IBM Flex System Manager

Not applicable

Virtualization Not applicable


Intel Xeon-based compute nodesTable 2-37 lists the software features included with the Enterprise configuration on Intel Xeon-based compute nodes.

Table 2-37 Software features for IBM PureFlex System Enterprise on Intel Xeon-based compute nodes

2.4.9 Services

IBM PureFlex System Enterprise includes the following services:

� Service & Support offerings:

– Software Maintenance: 1-year of 9x5 (9 hours per day, 5 days per week)
– Hardware Maintenance: 3-years of 9x5 Next Business Day service

� Technical Support Services

Essential minimum service level offering for every IBM PureFlex System Enterprise configuration:

– Three years with two microcode analyses per year
– Three years of Warranty Service upgrade to 24x7x4 service
– Three years of Account Advocate or Enhanced Technical Support (24x7) and software support prerequisites

� Lab Services:

– Seven days of on-site Lab services
– If the first compute node is a p260 or p460, 6911-300 is specified
– If the first compute node is an x240, 6911-100 is specified

Intel Xeon-based compute nodes (AAS) Intel Xeon-based compute nodes (HVEC)

Standard components - Enterprise

IBM Storwize V7000 Software

� 5639-VM1 - V7000 Base PID� 5639-SM3 - 3-year SWMA

IBM Flex System Manager

� 5765-FMX IBM Flex System Manager Standard

� 5662-FMX 3-year SWMA

� 94Y9787 IBM Flex System Manager Standard, 3-year SWMA

Operating system � Varies � Varies

Virtualization � VMware ESXi selectable in the hardware configuration

Cloud Software (optional)

� 5765-SCP SmartCloud Entry� 5662-SCP 3 yr SWMA

� 5641-SC3 SmartCloud Entry, 3 yr SWMA

Optional components - Enterprise Expansion

IBM Storwize V7000 Software

� 5639-EV1 V7000 External Virtualization software � 5639-RM1 V7000 Remote Mirroring

IBM Flex System Manager

� 5765-FMS IBM Flex System Manager Advanced

� 94Y9783 IBM Flex System Manager Advanced

Operating system � 5639-OSX RHEL for x86 � 5639-W28 Windows 2008 R2� 5639-CAL Windows 2008 Client Access

� 5731RSI RHEL for x86 - L3 support only� 5731RSR RHEL for x86 - L1-L3 support� 5731W28 Windows 2008 R2� 5731CAL Windows 2008 Client Access

Virtualization � VMware ESXi selectable in the hardware configuration

Cloud Software Not applicable Not applicable


2.5 IBM SmartCloud Entry

Delivering new capabilities is a challenge as your data, your applications, and your physical hardware such as servers, storage, and networks all grow. The traditional means of deploying, provisioning, managing, and maintaining physical and virtual resources can no longer meet the demands on IT infrastructure. Virtualization simplifies operations, improves efficiency and utilization, and helps manage growth beyond physical resource boundaries.

With SmartCloud Entry, you can build on your current virtualization strategies to continue to gain IT efficiency, flexibility, and control.

Adopting cloud in IT environments has the following advantages:

• Reduced data center footprint and management cost
• An automated server request and provisioning solution
• Improved utilization, workload management, and the ability to deliver new services
• Rapid service deployment, improving from several weeks to just days or hours
• A built-in metering system
• Improved IT governance and risk management

IBM simplifies the customer journey from server consolidation to cloud management by providing complete cloud solutions that include hardware, software technologies, and services for implementing a private cloud. These solutions add value on top of virtualized infrastructure with the IBM SmartCloud™ Entry for Cloud offerings. The product provides a comprehensive cloud software stack with capabilities that would otherwise require multiple products from other providers, such as VMware, and enables you to deploy your cloud environment quickly. IBM also offers more advanced cloud capabilities when they are required.

You can take advantage of existing IBM server investments and virtualized environments to deploy IBM SmartCloud Entry with the essential cloud infrastructure capabilities:

Create images:

� Simplify storage of thousands of images.

� Easily create new ‘golden master’ images and software appliances by using corporate standard operating systems

� Convert images from physical systems or between various x86 hypervisors

� Reliably track images to ensure compliance and minimize security risks

� Optimize resources, reducing the number of virtualized images and the storage required for them

Deploy VMs:

� Reduce time to value for new workloads from months to a few days.

� Deploy application images across compute and storage resources

� User self-service for improved responsiveness

� Ensure security through VM isolation, and project-level user access controls

� Easy to use: You do not need to know all the details of the infrastructure

� Investment protection from full support of existing virtualized environments

� Optimize performance on IBM systems with dynamic scaling, expansive capacity, and continuous operation


Operate a private cloud:

� Cut costs with efficient operations.

� Delegate provisioning to authorized users to improve productivity

� Maintain full oversight to ensure an optimally running and safe cloud through automated approval/rejection

� Standardize deployment and configuration to improve compliance and reduce errors by setting policies, defaults, and templates

� Simplify administration with an intuitive interface for managing projects, users, workloads, resources, billing, approvals, and metering

IBM Cloud and virtualization solutions offer flexible approaches to cloud. Where you start your journey depends on your business needs.

For more information about IBM SmartCloud Entry, see:

http://ibm.com/systems/cloud


Chapter 3. Systems management

IBM Flex System Manager, the management components of the IBM Flex System Enterprise Chassis, and the compute nodes are designed to help you get the most out of your IBM Flex System installation, and to allow you to automate repetitive tasks. These management interfaces can significantly reduce the number of manual navigational steps for typical management tasks. They offer simplified system setup procedures that use wizards and built-in expertise, and consolidated monitoring for physical and virtual resources.

This chapter contains the following sections:

• 3.1, “Management network” on page 38
• 3.2, “Chassis Management Module” on page 39
• 3.3, “Security” on page 41
• 3.4, “Compute node management” on page 43
• 3.5, “IBM Flex System Manager” on page 46


3.1 Management network

In an IBM Flex System Enterprise Chassis, you can configure separate management and data networks.

The management network is a private and secure Gigabit Ethernet network. It is used to complete management-related functions throughout the chassis, including management tasks related to the compute nodes, switches, and the chassis itself.

The management network is shown in Figure 3-1 as the blue line. It connects the Chassis Management Module (CMM) to the compute nodes, the switches in the I/O bays, the Flex System Manager (FSM). The FSM connection to the management network is through a special Broadcom 5718-based management network adapter (Eth0). The management networks in multiple chassis can be connected together through the external ports of the CMMs in each chassis through a GbE top-of-rack switch.

The yellow line in Figure 3-1 shows the production data network. The FSM also connects to the production network (Eth1) so that it can access the Internet for product updates and other related information.

Figure 3-1 Separate management and production data networks

(The figure shows the two networks within and outside the Enterprise Chassis: the CMMs, the compute node IMMs and FSPs, and the I/O bays on the management network; a top-of-rack switch that links the CMMs in other Enterprise Chassis and the management workstation; and the FSM connections, where Eth0 is the special GbE management network adapter and Eth1 is the embedded 2-port 10 GbE controller with Virtual Fabric Connector attached to the data network.)


One of the key functions that the data network supports is discovery of operating systems on the various network endpoints. Discovery of operating systems by the FSM is required to support software updates on an endpoint such as a compute node. The FSM Checking and Updating Compute Nodes wizard assists you in discovering operating systems as part of the initial setup.

3.2 Chassis Management Module

The CMM provides single-chassis management, and is used to communicate with the management controller in each compute node. It provides system monitoring, event recording, and alerts, and manages the chassis, its devices, and the compute nodes. The chassis supports up to two chassis management modules. If one CMM fails, the second CMM can detect its inactivity, activate itself, and take control of the system without any disruption. The CMM is central to the management of the chassis, and is required in the Enterprise Chassis.

The following section describes the usage models of the CMM and its features.

For more information, see 4.8, “Chassis Management Module” on page 82.

3.2.1 Overview

The CMM is a hot-swap module that provides basic system management functions for all devices installed in the Enterprise Chassis. An Enterprise Chassis comes with at least one CMM, and supports CMM redundancy.

The CMM is shown in Figure 3-2.

Figure 3-2 Chassis Management Module

Tip: If you want, the management node console can be connected to the data network for convenient access.


Through an embedded firmware stack, the CMM implements functions to monitor, control, and provide external user interfaces to manage all chassis resources. The CMM allows you to perform these functions among others:

� Define login IDs and passwords

� Configure security settings such as data encryption and user account security

� Select recipients for alert notification of specific events

� Monitor the status of the compute nodes and other components

� Find chassis component information

� Discover other chassis in the network and enable access to them

� Control the chassis, compute nodes, and other components

� Access the I/O modules to configure them

� Change the startup sequence in a compute node

� Set the date and time

� Use a remote console for the compute nodes

� Enable multi-chassis monitoring

� Set power policies and view power consumption history for chassis components

3.2.2 Interfaces

The CMM supports a web-based graphical user interface that provides a way to perform chassis management functions within a supported web browser. You can also perform management functions through the CMM command-line interface (CLI). Both the web-based and CLI interfaces are accessible through the single RJ45 Ethernet connector on the CMM, or from any system connected to the same network.

The CMM has the following default IPv4 settings:

� IP address: 192.168.70.100
� Subnet: 255.255.255.0
� User ID: USERID (all capital letters)
� Password: PASSW0RD (all capital letters, with a zero instead of the letter O)

The CMM does not have a fixed static IPv6 IP address by default. Initial access to the CMM in an IPv6 environment can be done by either using the IPv4 IP address or the IPv6 link-local address. The IPv6 link-local address is automatically generated based on the MAC address of the CMM. By default, the CMM is configured to respond to DHCP first before using its static IPv4 address. If you do not want this operation to take place, connect locally to the CMM and change the default IP settings. You can connect locally, for example, by using a mobile computer.
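If you want to verify connectivity before logging in, a few lines of script can confirm that a factory-fresh CMM answers on its default address. The following minimal sketch is an illustration only, not an IBM tool; it assumes that the workstation has a route to the 192.168.70.0/24 subnet, and it simply tests the TCP ports behind the secure interfaces that the CMM enables by default (SSH for the CLI and HTTPS for the web interface).

import socket

CMM_DEFAULT_IP = "192.168.70.100"                  # factory default IPv4 address of the CMM
PORTS = {"SSH (CLI)": 22, "HTTPS (web GUI)": 443}  # secure interfaces enabled by default

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for name, port in PORTS.items():
        state = "reachable" if port_open(CMM_DEFAULT_IP, port) else "not reachable"
        print(f"{name} on {CMM_DEFAULT_IP} port {port}: {state}")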

The web-based GUI brings together all the functionality needed to manage the chassis elements in an easy-to-use fashion consistently across all System x IMM2 based platforms.


Figure 3-3 shows the Chassis Management Module login window.

Figure 3-3 CMM login pane

Figure 3-4 shows an example of the Chassis Management Module front page after login.

Figure 3-4 Initial view of CMM after login

3.3 Security

The focus of IBM on smarter computing is evident in the improved security measures implemented in IBM Flex System Enterprise Chassis. Today’s world of computing demands tighter security standards and native integration with computing platforms. For example, the push towards virtualization has increased the need for more security. This increase comes as more mission-critical workloads are consolidated onto fewer and more powerful servers. The IBM Flex System Enterprise Chassis takes a new approach to security with a ground-up chassis management design to meet new security standards.

These security enhancements and features are provided in the chassis:

� Single sign-on (central user management)

� End-to-end audit logs

� Secure boot: TPM and CRTM

� Intel TXT technology (Intel Xeon-based compute nodes)

� Signed firmware updates to ensure authenticity

� Secure communications

� Certificate authority and management

� Chassis and compute node detection and provisioning

� Role-based access control

� Security policy management

� Same management protocols supported on BladeCenter AMM for compatibility with earlier versions

� Insecure protocols come disabled by default in the CMM, with “Locks” settings to prevent users from inadvertently or maliciously enabling them

� Supports up to 84 local CMM user accounts

� Supports up to 32 simultaneous sessions

� Planned support for DRTM

The Enterprise Chassis ships with the Secure policy enabled, and supports two security policy settings:

� Secure: Default setting to ensure a secure chassis infrastructure

– Strong password policies with automatic validation and verification checks

– Updated passwords that replace the manufacturing default passwords after the initial setup

– Only secure communication protocols such as Secure Shell (SSH) and Secure Sockets Layer (SSL)

– Certificates to establish secure, trusted connections for applications that run on the management processors

� Legacy: Flexibility in chassis security

– Weak password policies with minimal controls

– Manufacturing default passwords that do not have to be changed

– Unencrypted communication protocols such as Telnet, SNMPv1, TCP Command Mode, CIM-XML, FTP Server, and TFTP Server

The centralized security policy makes Enterprise Chassis easy to configure. In essence, all components run with the same security policy provided by the CMM. This consistency ensures that all I/O modules run with a hardened attack surface.
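For documentation or automation purposes, the two policy settings can be summarized in a small data structure such as the sketch below. The structure and field names are illustrative only and are not a product API; the values simply restate the policy characteristics listed above.

# Compact, illustrative summary of the two chassis security policy settings
# described above; not a product interface.
SECURITY_POLICIES = {
    "Secure": {
        "default": True,
        "password_policy": "strong, with automatic validation and verification checks",
        "manufacturing_default_passwords": "replaced after the initial setup",
        "protocols": ["SSH", "SSL"],
    },
    "Legacy": {
        "default": False,
        "password_policy": "weak, with minimal controls",
        "manufacturing_default_passwords": "do not have to be changed",
        "protocols": ["Telnet", "SNMPv1", "TCP Command Mode", "CIM-XML", "FTP", "TFTP"],
    },
}

if __name__ == "__main__":
    for name, policy in SECURITY_POLICIES.items():
        print(f"{name}: {', '.join(policy['protocols'])}")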


3.4 Compute node management

Each node in the Enterprise Chassis has a management controller that communicates upstream with the CMM through the private 1 GbE management network. Different chassis components supported in the Enterprise Chassis can implement different management controllers. Table 3-1 details the different management controllers implemented in the chassis components.

Table 3-1 Chassis components and their respective management controllers

Chassis components                          Management controller
Intel Xeon processor-based compute nodes    Integrated Management Module II (IMM2)
Power Systems compute nodes                 Flexible service processor (FSP)
Chassis Management Module                   Integrated Management Module II (IMM2)

The management controllers for the various Enterprise Chassis components have the following default IPv4 addresses:

� CMM: 192.168.70.100

� Compute nodes: 192.168.70.101-114 (corresponding to the slots 1-14 in the chassis)

� I/O Modules: 192.168.70.120-123 (sequentially corresponding to chassis bay numbering)

In addition to the IPv4 address, all I/O modules also support link-local IPv6 addresses and configurable external IPv6 addresses.
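Because these defaults follow a fixed pattern, they can be generated programmatically, for example when pre-populating an asset list or a monitoring tool before discovery runs. The following short sketch is illustrative only; it simply encodes the default addressing scheme described above for one chassis.

# Build the default management IPv4 address map for one Enterprise Chassis,
# based on the factory defaults: CMM .100, node bays 1-14 -> .101-.114,
# I/O module bays 1-4 -> .120-.123.
PREFIX = "192.168.70."

def default_management_ips():
    ips = {"CMM": PREFIX + "100"}
    for bay in range(1, 15):                      # compute node bays 1-14
        ips[f"node_bay_{bay}"] = PREFIX + str(100 + bay)
    for bay in range(1, 5):                       # I/O module bays 1-4
        ips[f"io_module_bay_{bay}"] = PREFIX + str(119 + bay)
    return ips

if __name__ == "__main__":
    for component, ip in default_management_ips().items():
        print(f"{component:18s} {ip}")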

3.4.1 Integrated Management Module II

The Integrated Management Module II (IMM2) is the next generation of the IMMv1 (first released in the Intel Xeon “Nehalem-EP”-based servers). It is present on all Intel Xeon “Romley” based platforms, and features a complete rework of hardware and firmware. The IMM2 enhancements include a more responsive user interface, faster power on, and increased remote presence performance.

The IMM2 incorporates a new web user interface that provides a common “look and feel” across all IBM System x software products. In addition to the new interface, the following list describes other major enhancements over IMMv1:

� Faster processor and more memory

� IMM2 manageable “northbound” from outside the chassis, which enables consistent management and scripting with System x rack servers

� Remote presence:

– Increased color depth and resolution for more detailed server video

– Active X client in addition to Java client

– Increased memory capacity (~50 MB) provides convenience for remote software installations

� No IMM2 reset required on configuration changes because they become effective immediately without reboot

� Hardware management of non-volatile storage

� Faster Ethernet over USB



� 1 Gb Ethernet management capability

� Improved system power-on and boot time

� More detailed information for UEFI detected events enables easier problem determination and fault isolation

� User interface meets accessibility standards (CI-162 compliant)

� Separate audit and event logs

� “Trusted” IMM with significant security enhancements (CRTM/TPM, signed updates, authentication policies, and so on)

� Simplified update/flashing mechanism

� Addition of Syslog alerting mechanism provides you with an alternative to email and SNMP traps.

� Support for Features On Demand (FoD) enablement of server functions, option card features, and System x solutions and applications

� First Failure Data Capture - One button web press initiates data collection and download

For more information about IMM2, see Chapter 5, “Compute nodes” on page 139. For more detailed information, see

� Integrated Management Module II User’s Guide

http://ibm.com/support/entry/portal/docdisplay?lndocid=MIGR-5086346

� IMM and IMM2 Support on IBM System x and BladeCenter Servers, TIPS0849:

http://www.redbooks.ibm.com/abstracts/tips0849.html

3.4.2 Flexible service processor

Several advanced system management capabilities are built into POWER7-based compute nodes. A flexible service processor (FSP) handles most of the server-level system management. The FSP used in the POWER-based compute nodes for the Enterprise Chassis is the same service processor that is used on POWER rack servers. It provides system alerts and Serial over LAN (SOL) capability.

The FSP provides out-of-band system management capabilities, such as system control, runtime error detection, configuration, and diagnostic procedures. Generally, you do not interact with the FSP directly. Rather, you interact by using tools such as IBM Flex System Manager and Chassis Management Module.

Both the p260 and p460 have one FSP each.

The Flexible Service Processor provides an SOL interface, which is available by using the CMM and the console command. The POWER7-based compute nodes do not have an on-board video chip, and do not support keyboard, video, and mouse (KVM) connections. Server console access is obtained by a SOL connection only.

SOL provides a means to manage servers remotely by using a CLI over a Telnet or SSH connection. SOL is required to manage servers that do not have KVM support or that are attached to the FSM. SOL provides console redirection for both Software Management Services (SMS) and the server operating system.

The SOL feature redirects server serial-connection data over a LAN without requiring special cabling by routing the data through the CMM network interface. The SOL connection enables POWER7-based compute nodes to be managed from any remote location with network access to the CMM.


SOL offers the following functions:

� Remote administration without KVM

� Reduced cabling and no requirement for a serial concentrator

� Standard Telnet/SSH interface, eliminating the requirement for special client software

The Chassis Management Module CLI provides access to the text-console command prompt on each server through a SOL connection. This configuration allows the POWER7-based compute nodes to be managed from a remote location.
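Because the SOL console is reached through a standard SSH session to the CMM, opening a console to a node can be scripted. The sketch below is illustrative only: it uses the third-party paramiko library, the CMM default address and credentials shown earlier in this chapter, and an assumed target syntax for the console command (blade[N]); check the CMM CLI reference for the exact target form before relying on it.

import time
import paramiko  # third-party SSH library (pip install paramiko)

CMM_HOST = "192.168.70.100"      # CMM address; the factory default is shown
USERNAME, PASSWORD = "USERID", "PASSW0RD"
NODE_BAY = 1                     # chassis bay of the POWER7-based compute node

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(CMM_HOST, username=USERNAME, password=PASSWORD)

# Open an interactive shell on the CMM and start a Serial over LAN console.
# The target syntax "blade[N]" is an assumption for illustration; confirm the
# correct form in the CMM command-line interface reference.
shell = client.invoke_shell()
shell.send(f"console -T blade[{NODE_BAY}]\n".encode())

time.sleep(2)                    # give the console time to start
if shell.recv_ready():
    print(shell.recv(4096).decode(errors="replace"))

client.close()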

3.4.3 I/O modules

The I/O modules have the following base functions:

� Initialization
� Configuration
� Diagnostics (both power-on and concurrent)
� Status reporting

In addition, the following set of protocols and software features are supported on the I/O modules:

� Supports configuration method over the Ethernet management port.

� A scriptable SSH CLI, a web server with SSL support, Simple Network Management Protocol v3 (SNMPv3) Agent with alerts, and an sFTP client.

� Server ports used for Telnet, HTTP, SNMPv1 agents, TFTP, FTP, and other insecure protocols are DISABLED by default.

� LDAP authentication protocol support for user authentication.

� For Ethernet I/O modules, 802.1x enabled with policy enforcement point (PEP) capability to allow support of TNC (Trusted Network Connect).

� The ability to capture and apply a switch configuration file and the ability to capture a first failure data capture (FFDC) data file.

� Ability to transfer files by using URL update methods (HTTP, HTTPS, FTP, TFTP, sFTP).

� Various methods for firmware updates are supported, including FTP, sFTP, and TFTP. In addition, firmware updates from a URL are supported, with protocol support for HTTP, HTTPS, FTP, sFTP, and TFTP.

� Supports SLP discovery in addition to SNMPv3.

� Ability to detect firmware/hardware hangs, and ability to pull a ‘crash-failure memory dump’ file to an FTP (sFTP) server.

� Supports selectable primary and backup firmware banks as the current operational firmware.

� Ability to send events, SNMP traps, and event logs to the CMM, including security audit logs.

� IPv4 and IPv6 on by default.

� The CMM management port supports IPv4 and IPv6 (IPv6 support includes the use of link-local addresses).

� Port mirroring capabilities:

– Port mirroring of CMM ports to both internal and external ports.


– For security reasons, the ability to mirror the CMM traffic is hidden and is available only to development and service personnel

� Management virtual local area network (VLAN) for Ethernet switches: A configurable management 802.1q tagged VLAN in the standard VLAN range of 1 - 4094. It includes the CMM’s internal management ports and the I/O modules internal ports that are connected to the nodes.

3.5 IBM Flex System Manager

The FSM is a high performance scalable system management appliance. It is based on the IBM Flex System x240 Compute Node. The x240 is described in more detail in 5.2, “IBM Flex System x240 Compute Node” on page 140. The FSM hardware comes preinstalled with systems management software that enables you to configure, monitor, and manage IBM Flex System resources in up to four chassis.

The IBM Flex System Manager has these high-level features and functions:

� Supports a comprehensive, pre-integrated system that is configured to optimize performance and efficiency

� Automated processes triggered by events simplify management and reduce manual administrative tasks

� Centralized management reduces the skills and the number of steps it takes to manage and deploy a system

� Enables comprehensive management and control of energy utilization and costs

� Automates responses to events, which reduces the need for manual tasks, by using custom actions and filters, configuration, editing, relocation, and automation plans

� Full integration with server views, including virtual server views, enables efficient management of resources

The pre-load contains a set of software components that are responsible for running management functions. These components must be activated by using the available IBM FoD software entitlement licenses. They are licensed on a per-chassis basis, so you need one license for each chassis you plan to manage. The management node comes without any entitlement licenses, so you must purchase a license to enable the required FSM functions.

The part number to order the management node is shown in Table 3-2.

Table 3-2 Ordering information for IBM Flex System Manager node

Part number    Description
8731A1x (a)    IBM Flex System Manager node

a. x in the part number represents a country-specific letter (for example, the EMEA part number is 8731A1G, and the US part number is 8731A1U). Ask your local IBM representative for specifics.

Remember: Support for management of more than four chassis with a single FSM can be added at a later date.


The part numbers to order FoD software entitlement licenses are shown in the following tables. The part numbers for the same features are different in different countries. Ask your local IBM representative for specifics. Table 3-3 shows the information for the United States, Canada, Asia Pacific, and Japan.

Table 3-3 Ordering information for FoD licenses (United States, Canada, Asia Pacific, Japan)

Part number   Description
Base feature set
90Y4217       IBM Flex System Manager per managed chassis with 1-Year SW S&S
90Y4222       IBM Flex System Manager per managed chassis with 3-Year SW S&S
Advanced feature set
90Y4249       IBM Flex System Manager, Advanced Upgrade, per managed chassis with 1-Year SW S&S
00D7554       IBM Flex System Manager, Advanced Upgrade, per managed chassis with 3-Year SW S&S
Fabric Manager
00D7550       IBM Fabric Manager, per managed chassis with 1-Year SW S&S
00D7551       IBM Fabric Manager, per managed chassis with 3-Year SW S&S

Table 3-4 shows the ordering information for Latin America and Europe/Middle East/Africa.

Table 3-4 Ordering information for FoD licenses (Latin America and Europe/Middle East/Africa)

Part number   Description
Base feature set
95Y1174       IBM Flex System Manager Per Managed Chassis with 1-Year SW S&S
95Y1179       IBM Flex System Manager Per Managed Chassis with 3-Year SW S&S
Advanced feature set
94Y9219       IBM Flex System Manager, Advanced Upgrade, Per Managed Chassis with 1-Year SW S&S
94Y9220       IBM Flex System Manager, Advanced Upgrade, Per Managed Chassis with 3-Year SW S&S
Fabric Manager
00D4692       IBM Fabric Manager, Per Managed Chassis with 1-Year SW S&S
00D4693       IBM Fabric Manager, Per Managed Chassis with 3-Year SW S&S

IBM Flex System Manager base feature set offers the following functions:

� Support up to four managed chassis
� Support up to 5,000 managed elements
� Auto-discovery of managed elements
� Overall health status
� Monitoring and availability
� Hardware management
� Security management
� Administration
� Network management (Network Control)
� Storage management (Storage Control)
� Virtual machine lifecycle management (VMControl Express)

IBM Flex System Manager advanced feature set offers all capabilities of the base feature set plus:

� Image management (VMControl Standard)
� Pool management (VMControl Enterprise)

IBM Fabric Manager offers the following features:

� Manage assignments of Ethernet MAC and Fibre Channel WWN addresses

� Monitor the health of compute nodes, and automatically replace a failed compute node from a designated pool of spare compute nodes

� Preassign MAC and WWN addresses, as well as storage boot targets, for up to 256 chassis or 3584 compute nodes.

� Using an enhanced GUI, you can perform these tasks:

– Create addresses for compute nodes

– Save the address profiles

– Deploy the addresses to slots in the same chassis, or in up to 256 different chassis

3.5.1 Hardware overview

Fundamentally, the FSM from a hardware point of view is a locked-down compute node with a specific hardware configuration. This configuration is designed for optimal performance of the preinstalled software stack. The FSM looks similar to the Intel-based x240. However, there are slight differences between the system board designs, so these two hardware nodes are not interchangeable. Figure 3-5 shows a front view of the FSM.

Figure 3-5 IBM Flex System Manager


Figure 3-6 shows the internal layout and major components of the FSM.

Figure 3-6 Exploded view of the IBM Flex System Manager node, showing major components

Additionally, the FSM comes preconfigured with the components described in Table 3-5.

Table 3-5 Features of the IBM Flex System Manager node (8731)

Feature              Description
Processor            1x Intel Xeon Processor E5-2650 8C 2.0 GHz 20 MB Cache 1600 MHz 95 W
Memory               8 x 4 GB (1x4 GB, 1Rx4, 1.35 V) PC3L-10600 CL9 ECC DDR3 1333 MHz LP RDIMM
SAS Controller       One LSI 2004 SAS Controller
Disk                 1 x IBM 1TB 7.2K 6 Gbps NL SATA 2.5" SFF HS HDD
                     2 x IBM 200GB SATA 1.8" MLC SSD (configured in a RAID-1)
Integrated NIC       Embedded dual-port 10 Gb Virtual Fabric Ethernet controller (Emulex BE3)
                     Dual-port 1 GbE Ethernet controller on a management adapter (Broadcom 5718)
Systems Management   Integrated Management Module II (IMM2)
                     Management network adapter



Figure 3-7 shows the internal layout of the FSM.

Figure 3-7 Internal view that shows the major components of IBM Flex System Manager

Front controls
The FSM has similar controls and LEDs as the IBM Flex System x240 Compute Node. The diagram in Figure 3-8 shows the front of an FSM with the location of the controls and LEDs.

Figure 3-8 FSM front panel showing controls and LEDs

Storage
The FSM ships with 2 x IBM 200 GB SATA 1.8" MLC SSD and 1 x IBM 1 TB 7.2K 6 Gbps NL SATA 2.5" SFF HS HDD drives. The 200 GB SSD drives are configured in a RAID-1 pair that provides roughly 200 GB of usable space. The 1 TB SATA drive is not part of a RAID group.



The partitioning of the disks is listed in Table 3-6.

Table 3-6 Detailed SSD and HDD disk partitioning

Physical disk   Virtual disk size   Description
SSD             50 MB               Boot disk
SSD             60 GB               OS/Application disk
SSD             80 GB               Database disk
HDD             40 GB               Update repository
HDD             40 GB               Dump space
HDD             60 GB               Spare disk for OS/Application
HDD             80 GB               Spare disk for database
HDD             30 GB               Service Partition
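As a quick cross-check against the physical drives described earlier (roughly 200 GB of usable space on the SSD RAID-1 pair and a 1 TB SATA drive), the allocations in Table 3-6 can be totaled with a few lines of Python. This is purely an illustrative calculation, not a tool supplied with the FSM.

# Virtual disk sizes from Table 3-6, expressed in GB (50 MB is roughly 0.05 GB).
partitions = {
    "SSD": [0.05, 60, 80],            # boot, OS/application, database
    "HDD": [40, 40, 60, 80, 30],      # update repository, dump, spares, service partition
}
capacity_gb = {"SSD": 200, "HDD": 1000}   # usable RAID-1 SSD space and the 1 TB SATA drive

for disk, sizes in partitions.items():
    used = sum(sizes)
    print(f"{disk}: {used:.2f} GB allocated of roughly {capacity_gb[disk]} GB available")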

Management network adapter
The management network adapter is a standard feature of the FSM, and provides a physical connection into the private management network of the chassis. The adapter is shown in Figure 3-6 on page 49 as the everything-to-everything (ETE) adapter.

The management network adapter contains a Broadcom 5718 Dual 1GbE adapter and a Broadcom 5389 8-port L2 switch. This card is one of the features that makes the FSM unique compared to all other nodes supported by the Enterprise Chassis. The management network adapter provides a physical connection into the private management network of the chassis. The connection allows the software stack to have visibility into both the data and management networks. The L2 switch on this card is automatically set up by the IMM2, and connects the FSM and the onboard IMM2 into the same internal private network.

3.5.2 Software features

The IBM Flex System Manager management software has these main features:

� Monitoring and problem determination

– A real-time multichassis view of hardware components with overlays for additional information

– Automatic detection of issues in your environment through event setup that triggers alerts and actions

– Identification of changes that might affect availability

– Server resource utilization by virtual machine or across a rack of systems

� Hardware management

– Automated discovery of physical and virtual servers and interconnections, applications, and supported third-party networking

– Inventory of hardware components

– Chassis and hardware component views

– Hardware properties

– Component names/hardware identification numbers



– Firmware levels

– Utilization rates

� Network management

– Management of network switches from various vendors

– Discovery, inventory, and status monitoring of switches

– Graphical network topology views

– Support for KVM, pHyp, VMware virtual switches, and physical switches

– VLAN configuration of switches

– Integration with server management

– Per-virtual machine network usage and performance statistics provided to VMControl

– Logical views of servers and network devices grouped by subnet and VLAN

� Storage management

– Discovery of physical and virtual storage devices

– Support for virtual images on local storage across multiple chassis

– Inventory of physical storage configuration

– Health status and alerts

– Storage pool configuration

– Disk sparing and redundancy management

– Virtual volume management

– Support for virtual volume discovery, inventory, creation, modification, and deletion

� Virtualization management (base feature set)

– Support for VMware, Hyper-V, KVM, and IBM PowerVM

– Create virtual servers

– Edit virtual servers

– Manage virtual servers

– Relocate virtual servers

– Discover virtual server, storage, and network resources, and visualize the physical-to-virtual relationships

� Virtualization management (advanced feature set)

– Create new image repositories for storing virtual appliances and discover existing image repositories in your environment

– Import external, standards-based virtual appliance packages into your image repositories as virtual appliances

– Capture a running virtual server that is configured just the way you want, complete with guest operating system, running applications, and virtual server definition

– Import virtual appliance packages that exist in the Open Virtual Machine Format (OVF) from the Internet or other external sources

– Deploy virtual appliances quickly to create new virtual servers that meet the demands of your ever-changing business needs

– Create, capture, and manage workloads


– Create server system pools, which enable you to consolidate your resources and workloads into distinct and manageable groups

– Deploy virtual appliances into server system pools

– Manage server system pools, including adding hosts or additional storage space, and monitoring the health of the resources and the status of the workloads in them

– Group storage systems together by using storage system pools to increase resource utilization and automation

– Manage storage system pools by adding storage, editing the storage system pool policy, and monitoring the health of the storage resources

� Additional features

– Resource-oriented chassis map provides an instant graphical view of chassis resources, including nodes and I/O modules

• Fly-over provides instant view of individual server (node) status and inventory

• Chassis map provides inventory view of chassis components, a view of active statuses that require administrative attention, and a compliance view of server (node) firmware

• Actions can be taken on nodes such as working with server-related resources, showing and installing updates, submitting service requests, and starting the remote access tools

– Remote console

• Open video sessions and mount media such as DVDs with software updates to their servers from their local workstation

• Remote KVM connections

• Remote Virtual Media connections (mount CD/DVD/ISO/USB media)

• Power operations against servers (Power On/Off/Restart)

– Hardware detection and inventory creation

– Firmware compliance and updates

– Automatic detection of hardware failures

• Provides alerts

• Takes corrective action

• Notifies IBM of problems to escalate problem determination

– Health status (such as processor utilization) on all hardware devices from a single chassis view

– Administrative capabilities, such as setting up users within profile groups, assigning security levels, and security governance.

3.5.3 Supported agents, hardware, operating systems, and tasks

IBM Flex System Manager provides four tiers of agents for managed systems. For each managed system, you need to choose the tier that provides the amount and level of capabilities that you need for that system. Select the level of agent capabilities that best fits the type of managed system and the management tasks you need to perform.


IBM Flex System Manager has these agent tiers:

� Agentless in-band

Managed systems without any FSM client software installed. FSM communicates with the managed system through the operating system.

� Agentless out-of-band

Managed systems without any FSM client software installed. FSM communicates with the managed system through something other than the operating system, such as a service processor or a hardware management console.

� Platform Agent

Managed systems with Platform Agent installed. FSM communicates with the managed system through the Platform Agent.

� Common Agent

Managed systems with Common Agent installed. FSM communicates with the managed system through the Common Agent.

Table 3-7 lists the agent tier support for the IBM Flex System managed compute nodes. Managed nodes include the x240 compute node, which supports Windows, Linux, and VMware, and the p260 and p460 compute nodes, which support IBM AIX, IBM i, and Linux.

Table 3-7 Agent tier support by management system type

Managed system type                                   Agentless   Agentless     Platform   Common
                                                      in-band     out-of-band   Agent      Agent
Compute nodes that run AIX                            Yes         Yes           No         Yes
Compute nodes that run IBM i                          Yes         Yes           Yes        Yes
Compute nodes that run Linux                          No          Yes           Yes        Yes
Compute nodes that run Linux and supporting SSH       Yes         Yes           Yes        Yes
Compute nodes that run Windows                        No          Yes           Yes        Yes
Compute nodes that run Windows and supporting SSH
or distributed component object model (DCOM)          Yes         Yes           Yes        Yes
Compute nodes that run VMware                         Yes         Yes           Yes        Yes
Other managed resources that support SSH or SNMP      Yes         Yes           No         No

Table 3-8 summarizes the management tasks supported by the compute nodes that depend on the agent tier.

Table 3-8 Compute node management tasks supported by the agent tier

Management task                  Agentless   Agentless     Platform   Common
                                 in-band     out-of-band   Agent      Agent
Command automation               No          No            No         Yes
Hardware alerts                  No          Yes           Yes        Yes
Platform alerts                  No          No            Yes        Yes
Health and status monitoring     No          No            Yes        Yes
File transfer                    No          No            No         Yes
Inventory (hardware)             No          Yes           Yes        Yes
Inventory (software)             Yes         No            Yes        Yes
Problems (hardware status)       No          Yes           Yes        Yes
Process management               No          No            No         Yes
Power management                 No          Yes           No         Yes
Remote control                   No          Yes           No         No
Remote command line              Yes         No            Yes        Yes
Resource monitors                No          No            Yes        Yes
Update manager                   No          No            Yes        Yes

Table 3-9 shows supported virtualization environments and their management tasks.

Table 3-9 Supported virtualization environments and management tasks

Management task                     AIX and     IBM i   VMware    Microsoft   Linux
                                    Linux (a)           vSphere   Hyper-V     KVM
Deploy virtual servers              Yes         Yes     Yes       Yes         Yes
Deploy virtual farms                No          No      Yes       No          Yes
Relocate virtual servers            Yes         No      Yes       No          Yes
Import virtual appliance packages   Yes         Yes     No        No          Yes
Capture virtual servers             Yes         Yes     No        No          Yes
Capture workloads                   Yes         Yes     No        No          Yes
Deploy virtual appliances           Yes         Yes     No        No          Yes
Deploy workloads                    Yes         Yes     No        No          Yes
Deploy server system pools          Yes         No      No        No          Yes
Deploy storage system pools         Yes         No      No        No          No

a. Linux on Power Systems compute nodes

Table 3-10 shows supported I/O switches and their management tasks.

Table 3-10 Supported I/O switches and management tasks

Management task   EN2092          EN4093           FC3171    FC5022
                  1 Gb Ethernet   10 Gb Ethernet   8 Gb FC   16 Gb FC
Discovery         Yes             Yes              Yes       Yes
Inventory         Yes             Yes              Yes       Yes
Monitoring        Yes             Yes              Yes       Yes
Alerts            Yes             Yes              Yes       Yes
Configuration     Yes             Yes              Yes       No

Table 3-11 shows supported storage systems and their management tasks.

Table 3-11 Supported storage systems and management tasks

Management task                                                     V7000
Storage device discovery                                            Yes
Integrated physical and logical topology views                      Yes
Show relationships between storage and server resources             Yes
Perform logical and physical configuration                          Yes
View controller and volume status and to set notification alerts    Yes

For more information, see the IBM Flex System Manager product publications available from the IBM Flex System Information Center at:

http://publib.boulder.ibm.com/infocenter/flexsys/information/index.jsp


Chapter 4. Chassis and infrastructure configuration

The IBM Flex System Enterprise Chassis (machine type 8721) is a 10U next-generation server platform with integrated chassis management. It is a compact, high-density, high-performance, rack-mount, scalable server platform system. It supports up to 14 one-bay compute nodes that share common resources, such as power, cooling, management, and I/O resources within a single Enterprise Chassis. In addition, it can also support up to seven 2-bay compute nodes or three 4-bay compute nodes when the shelves are removed. You can mix and match 1-bay, 2-bay, and 4-bay compute nodes to meet your specific hardware needs.

This chapter includes the following sections:

� 4.1, “Overview” on page 58
� 4.2, “Power supplies” on page 65
� 4.3, “Fan modules” on page 68
� 4.4, “Fan logic module” on page 70
� 4.5, “Front information panel” on page 71
� 4.6, “Cooling” on page 72
� 4.7, “Power supply and fan module requirements” on page 77
� 4.8, “Chassis Management Module” on page 82
� 4.9, “I/O architecture” on page 85
� 4.10, “I/O modules” on page 92
� 4.11, “Infrastructure planning” on page 119
� 4.12, “IBM 42U 1100 mm Enterprise V2 Dynamic Rack” on page 128
� 4.13, “IBM Rear Door Heat eXchanger V2 Type 1756” on page 134


4.1 Overview

Figure 4-1 shows the Enterprise Chassis as seen from the front. The front of the chassis has 14 horizontal bays with removable dividers that allow nodes and future elements to be installed within the chassis. The nodes can be installed when the chassis is powered.

The chassis employs a die-cast mechanical bezel for rigidity. This chassis construction allows for tight tolerances between nodes, shelves, and the chassis bezel. These tolerances ensure accurate location and mating of connectors to the midplane.

Figure 4-1 IBM Flex System Enterprise Chassis

The major components of Enterprise Chassis are:

� Fourteen 1-bay compute node bays (can also support seven 2-bay or three 4-bay compute nodes with the shelves removed).

� Six 2500-watt power modules that provide N+N or N+1 redundant power.

� Ten fan modules (eight 80 mm fan modules and two 40 mm fan modules).

� Four physical I/O modules.

� An I/O architectural design capable of providing:

– Up to eight lanes of I/O to an I/O adapter. Each lane is capable of up to 16 Gbps.
– A maximum of 16 lanes of I/O to a half-wide node with two adapters.
– A wide variety of networking solutions that include Ethernet, Fibre Channel, FCoE, and InfiniBand.

� Two IBM Flex System Manager (FSM) management appliances for redundancy. The FSM provides multiple-chassis management support for up to four chassis.

� Two IBM Chassis Management Module (CMMs). The CMM provides single-chassis management support.


Table 4-1 lists these components.

Table 4-1 8721-A1x Chassis configuration

Part number   Quantity   Description
8721-A1x      1          IBM Flex System Enterprise Chassis
              1          Chassis Management Module
              2          2500W power supply unit
              4          80 mm fan modules
              2          40 mm fan modules
              1          Console breakout cable
              2          C19 to C20 2M power cables
              1          Rack mount kit

Figure 4-2 shows the component parts of the chassis, with the shuttle removed. The shuttle forms the rear of the chassis where the I/O Modules, power supplies, fan modules, and Chassis Management Modules are installed. The Shuttle would be removed only to gain access to the midplane or fan distribution cards, in the rare event of a service action.

Figure 4-2 Enterprise Chassis component parts

Within the chassis, a personality card holds vital product data (VPD) and other information relevant to the particular chassis. This card can be replaced only under service action, and is not normally accessible. The personality card is attached to the midplane as shown in Figure 4-4 on page 61.




4.1.1 Front of the chassis

Figure 4-3 shows the bay numbers and air apertures on the front of the Enterprise Chassis.

Figure 4-3 Front view of the Enterprise Chassis

The chassis has the following features on the front:

� The front information panel on the lower left of the chassis

� Bays 1 - 14 supporting nodes and FSM

� Lower airflow inlet apertures that provide air cooling for switches, CMMs, and power supplies

� Upper airflow inlet apertures that provide cooling for power supplies

For efficient cooling, each bay in the front or rear of the chassis must contain either a device or a filler.

The Enterprise Chassis provides several LEDs on the front information panel that can be used to obtain the status of the chassis. The Identify, Check log, and the Fault LED are also on the rear of the chassis for ease of use.



4.1.2 Midplane

The midplane is the circuit board that connects to the compute nodes from the front of the chassis. It also connects to I/O modules, fan modules, and power supplies from the rear of the chassis. The midplane is located within the chassis, and can be accessed by removing the Shuttle assembly. Removing the midplane is only necessary in case of service action.

The midplane is passive, which is to say that there are no electronic components on it. The midplane has apertures to allow air to pass through. It has connectors on both sides for power supplies, fan distribution cards, switches, I/O adapters, and nodes.

Figure 4-4 Connectors on the midplane



4.1.3 Rear of the chassis

Figure 4-5 shows the rear view of the chassis.

Figure 4-5 Rear view of Enterprise Chassis

The following components can be installed into the rear of the chassis:

� Up to two CMMs.

� Up to six 2500W power supply modules.

� Six fan modules as standard, consisting of four 80 mm fan modules and two 40 mm fan modules. Additional fan modules can be installed for a total of 10 modules.

� Up to four I/O modules.

4.1.4 Specifications

Table 4-2 shows the specifications of the Enterprise Chassis 8721-A1x.

Table 4-2 Enterprise Chassis specifications

Feature                                      Specifications
Machine type-model                           System x ordering sales channel: 8721-A1x. Power Systems sales channel: 7893-92X (a)
Form factor                                  10U rack mounted unit
Maximum number of compute nodes supported    14 half-wide (single bay), 7 full-wide (two bays), or 3 double-height full-wide (four bays). Mixing is supported.
Chassis per 42U rack                         4
Nodes per 42U rack                           56 half-wide, or 28 full-wide
Management                                   One or two Chassis Management Modules for basic chassis management. Two CMMs form a redundant pair. One CMM is standard in 8721-A1x. The CMM interfaces with the integrated management module (IMM) or flexible service processor (FSP) integrated in each compute node in the chassis. An optional IBM Flex System Manager (a) management appliance provides comprehensive management that includes virtualization, networking, and storage management.
I/O architecture                             Up to eight lanes of I/O to an I/O adapter, with each lane capable of up to 16 Gbps bandwidth. Up to 16 lanes of I/O to a half-wide node with two adapters. A wide variety of networking solutions that include Ethernet, Fibre Channel, FCoE, and InfiniBand.
Power supplies                               Six 2500-watt power modules that provide N+N or N+1 redundant power. Two are standard in model 8721-A1x. Power supplies are 80 PLUS Platinum certified and provide over 94% efficiency at both 50% load and 20% load. Power capacity of 2500 Watts output rated at 200VAC. Each power supply contains two independently powered 40 mm cooling fan modules.
Fan modules                                  Ten fan modules (eight 80 mm fan modules and two 40 mm fan modules). Four 80 mm and two 40 mm fan modules are standard in model 8721-A1x.
Dimensions                                   Height: 440 mm (17.3”). Width: 447 mm (17.6”). Depth, measured from front bezel to rear of chassis: 800 mm (31.5"). Depth, measured from node latch handle to the power supply handle: 840 mm (33.1").
Weight                                       Minimum configuration: 96.62 kg (213 lb). Maximum configuration: 220.45 kg (486 lb).
Declared sound level                         6.3 to 6.8 bels
Temperature                                  Operating air temperature 5°C to 40°C
Electrical power                             Input power: 200 - 240 V ac (nominal), 50 or 60 Hz. Minimum configuration: 0.51 kVA (two power supplies). Maximum configuration: 13 kVA (six power supplies).
Power consumption                            12,900 watts maximum

a. When ordering the IBM Flex System Enterprise Chassis through the Power Systems sales channel, select one of the IBM PureFlex System offerings. These offerings are described in Chapter 2, “IBM PureFlex System” on page 11. In such offerings, the IBM Flex System Manager is a standard component and therefore is not optional.

For data center planning, the chassis is rated to a maximum operating temperature of 40°C. For comparison, the BladeCenter H chassis (BC-H) is rated to 35°C. 110 V operation is not supported: the AC operating range is 200 - 240 V AC.

4.1.5 Air filter

There is an optional airborne contaminant filter that can be fitted to the front of the chassis, as listed in Table 4-3.

Table 4-3 IBM Flex System Enterprise Chassis airborne contaminant filter ordering information

Part Number   Description
43W9055       IBM Flex System Enterprise Chassis airborne contaminant filter
43W9057       IBM Flex System Enterprise Chassis airborne contaminant filter replacement pack


The filter is attached to and removed from the chassis as shown in Figure 4-6.

Figure 4-6 Dust filter

4.1.6 Compute node shelves

A shelf is required for half-wide bays. The chassis ships with these shelves in place. To allow for installation of full-wide or larger compute nodes, the shelves must be removed from the chassis. Remove a shelf by sliding the two blue latches on the shelf towards the center and then sliding the shelf out of the chassis.

Figure 4-7 shows removal of a shelf from Enterprise Chassis.

Figure 4-7 Shelf removal



4.1.7 Hot plug and hot swap components

The chassis follows the standard color coding scheme used by IBM for touch points and hot swap components.

Touch points are blue, and are found on these locations:

� The fillers that cover empty fan and power supply bays
� The handle of nodes
� Other removable items that cannot be hot swapped

Hot Swap components have orange touch points. Orange tabs are found on fan modules, fan logic modules, power supplies, and I/O Module handles. The orange designates that the items are hot swap, and can be both removed and replaced while the chassis is powered. Table 4-4 shows which components are hot swap and which are hot plug.

Nodes can be plugged into the chassis while the chassis is powered. The node can then be powered on. Power the node off before removal.

Table 4-4 Hot plug and hot swap components

Component          Hot plug   Hot swap
Node               Yes        No (a)
I/O Module         Yes        Yes (b)
40 mm Fan Pack     Yes        Yes
80 mm Fan Pack     Yes        Yes
Power Supply       Yes        Yes
Fan logic module   Yes        Yes

a. Node must be powered off, in standby before removal.
b. I/O Module might require reconfiguration, and removal is disruptive to any communications that are taking place.

4.2 Power supplies

A maximum of six power supplies can be installed within the Enterprise Chassis. The power supplies are 80 PLUS Platinum certified and are 2500 Watts output rated at 200VAC to 208VAC (nominal), and 2750W at 220VAC to 240VAC (nominal). The power supply has an oversubscription rating of up to 3538 Watts output at 200VAC. The power supply operating range is 200-240VAC. The power supplies also contain two independently powered 40mm cooling fan modules that use power from the midplane, not from the power supply.

80 PLUS is a performance specification for power supplies used within servers and computers. The standard has several ratings, such as Bronze, Silver, Gold, Platinum. To meet the 80 PLUS Platinum standard, the power supply must have a power factor (PF) of 0.95 or greater at 50% rated load and efficiency equal to or greater than the following:

� 90% at 20% of rated load
� 94% at 50% of rated load
� 91% at 100% of rated load

Further information about 80 PLUS can be found at

http://www.plugloadsolutions.com
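To see how measured efficiency figures compare with the 80 PLUS Platinum thresholds quoted above, a small check such as the following can be used. It is an illustrative sketch only; the threshold values are those quoted above, and the sample measurements are taken from the 200-208 V columns of Table 4-5.

# 80 PLUS Platinum efficiency thresholds, as quoted above:
# percentage of rated load -> minimum efficiency (expressed as a fraction).
PLATINUM = {20: 0.90, 50: 0.94, 100: 0.91}

def meets_platinum(measured):
    """measured maps load percentage -> efficiency (0-1); True if all thresholds are met."""
    return all(measured.get(load, 0.0) >= minimum for load, minimum in PLATINUM.items())

# Example values taken from the 200-208 V columns of Table 4-5.
measured_200v = {20: 0.942, 50: 0.945, 100: 0.918}
print("Meets 80 PLUS Platinum thresholds:", meets_platinum(measured_200v))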



Table 4-5 lists the efficiency of the Enterprise Chassis power supplies at various percentage loads.

Table 4-5 Power supply efficiency at different loads

Load                  10% load             20% load             50% load             100% load
Input voltage (Vac)   200-208V   220-240V  200-208V   220-240V  200-208V   220-240V  200-208V   220-240V
Output power          250 W      275 W     500 W      550 W     1250 W     1375 W    2500 W     2750 W
Efficiency            93.2%      93.5%     94.2%      94.4%     94.5%      92.2%     91.8%      91.4%

Figure 4-8 shows the location of the power supplies.

Figure 4-8 Power supply locations

The chassis allows configurations of power supplies to give N+N or N+1 redundancy. A fully configured chassis operates on just three 2500W power supplies with no redundancy, but N+1 or N+N is better to keep the chassis available. Three (or six with N+N redundancy) power supplies allow for a balanced 3-phase configuration.
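When deciding how many 2500 W supplies to install, the redundancy scheme determines how much spare capacity is needed: N+1 adds one supply beyond what the load requires, and N+N doubles the required supplies so that either half can carry the full load. The helper below is a simplified sketch under those assumptions; it ignores oversubscription and the higher 2750 W rating at 220 - 240 V, so use the power planning guidance in 4.11, “Infrastructure planning” for real sizing.

import math

PSU_WATTS = 2500                      # rated output at 200 VAC, per the text above
CHASSIS_PSU_BAYS = 6                  # the chassis has six power supply bays

def supplies_needed(chassis_load_watts, scheme="N+1"):
    """Simplified sizing: number of 2500 W supplies for a given load and redundancy scheme."""
    n = math.ceil(chassis_load_watts / PSU_WATTS)   # supplies needed just to carry the load
    if scheme == "N+1":
        return n + 1
    if scheme == "N+N":
        return 2 * n
    return n                                        # no redundancy

if __name__ == "__main__":
    for load in (4000, 6000):
        n1 = supplies_needed(load, "N+1")
        nn = supplies_needed(load, "N+N")
        print(f"{load} W load: {n1} supplies (N+1), {nn} supplies (N+N), "
              f"fits in chassis: {max(n1, nn) <= CHASSIS_PSU_BAYS}")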

All power supply modules are combined into a single power domain within the chassis. This combination distributes power to each of the compute nodes, I/O modules, and ancillary components through the Enterprise Chassis midplane. The midplane is a highly reliable design with no active components. Each power supply is designed to provide fault isolation and is hot swappable.

Power monitoring of both the DC and AC signals allows the Chassis Management Module to accurately monitor the power supplies.

The integral power supply fans are not dependent upon the power supply being functional. They operate and are powered independently from the midplane.




Power supplies are added as required to meet the load requirements of the Enterprise Chassis configuration. There is no need to over provision a chassis. For more information about power-supply unit (PSU) planning, see 4.11, “Infrastructure planning” on page 119.

Figure 4-9 shows the power supply rear view and highlights the LEDs. There is a handle for removal and insertion of the power supply.

Figure 4-9 Power supply

The rear of the power supply has a C20 inlet socket for connection to power cables. You can use a C19-C20 power cable, which can connect to a suitable IBM DPI rack power distribution unit (PDU).

The rear LEDs are:

� AC Power: When lit green, this LED indicates that AC power is being supplied to the PSU inlet.

� DC Power: When lit green, this LED indicates that DC power is being supplied to the chassis midplane.

� Fault: When lit amber, this LED indicates a fault with the PSU.

Table 4-6 shows the specifications for the Enterprise Chassis power supplies.

Table 4-6   2500W Power Supply Module option part number

Part number   Feature codes (a)   Description
43W9049       A0UC / 3590         IBM Flex System Enterprise Chassis 2500W Power Module

a. The first feature code listed is for configurations ordered through System x sales channels. The second feature code is for configurations ordered through the IBM Power Systems channel.

Before removing any power supplies, ensure that the remaining power supplies have sufficient capacity to power the Enterprise Chassis. Power usage information can be found in the Chassis Management Module web interface. For more information about oversubscription, see 4.7.2, “Power supply population” on page 78.



4.3 Fan modules

The Enterprise Chassis supports up to 10 hot pluggable fan modules that consist of two 40 mm fan modules and eight 80 mm fan modules.

A chassis can operate with a minimum of six hot-swap fan modules installed, consisting of four 80 mm fan modules and two 40 mm fan modules.

The fan modules plug into the chassis and connect to the fan distribution cards. More 80 mm fan modules can be added as required to support chassis cooling requirements.

Figure 4-10 shows the fan bays in the back of the Enterprise Chassis.

Figure 4-10 Fan bays in the Enterprise Chassis

For more information about how to populate the fan modules, see 4.6, “Cooling” on page 72.


Figure 4-11 shows a 40 mm fan module.

Figure 4-11 40 mm fan module

The two 40 mm fan modules in fan bays 5 and 10 distribute airflow to the I/O modules and chassis management modules. These modules ship preinstalled in the chassis.

Each 40 mm fan module contains two 40 mm fans internally, side by side.

The 80 mm fan modules distribute airflow to the compute nodes through the chassis from front to rear. Each 80 mm fan module contains two 80 mm fans, back to back at each end of the module, which are counter-rotating.

Both fan modules have an electromagnetic compatibility (EMC) mesh screen on the rear internal face of the module. This design provides a laminar flow through the screen. Laminar flow is a smooth flow of air, sometimes called streamline flow. This flow reduces turbulence of the exhaust air and improves the efficiency of the overall fan assembly.

These factors combine to form a highly efficient fan design that provides the best cooling for lowest energy input:

- Design of the whole fan assembly
- The fan blade design
- The distance between and size of the fan modules
- The EMC mesh screen

Figure 4-12 shows an 80 mm fan module.

Figure 4-12 80 mm fan module


The minimum number of 80 mm fan modules is four. The maximum number of 80 mm fan modules that can be installed is eight. When the modules are ordered as an option, they are supplied as a pair.

Both fan modules have two LED indicators, consisting of a green power-on indicator and an amber fault indicator. The power indicator lights when the fan module has power, and flashes when the module is in power save state.

Table 4-7 lists the specifications on the 80 mm Fan Module Pair option.

Table 4-7   80 mm Fan Module Pair option part number

Part number   Feature codes (a)   Description
43W9078       A0UA / 7805         IBM Flex System Enterprise Chassis 80 mm Fan Module Pair

a. The first feature code listed is for configurations ordered through System x sales channels. The second feature code is for configurations ordered through the IBM Power Systems channel.

For more information about airflow and cooling, see 4.6, “Cooling” on page 72.

4.4 Fan logic module

There are two fan logic modules included within the chassis as shown in Figure 4-13.

Figure 4-13 Fan logic modules on the rear of the chassis

Fan logic modules are multiplexers for the internal I2C bus, which is used for communication between hardware components within the chassis. Each fan pack is accessed through a dedicated I2C bus, switched by the Fan Mux card, from each CMM. The fan logic module switches the I2C bus to each individual fan pack. This module can be used by the Chassis Management Module to determine multiple parameters, such as fan RPM.



There is a fan logic module for the left and right side of the chassis. The left fan logic module accesses the left fan modules, and the right fan logic module accesses the right fan modules.

Fan presence indication for each fan pack is read by the fan logic module. Power and fault LEDs are also controlled by the fan logic module.

Figure 4-14 shows a fan logic module and its LEDs.

Figure 4-14 Fan logic module

As shown in Figure 4-14 there are two LEDs on the fan logic module. The Power-on LED is green when the fan logic module is powered. The amber fault LED flashes to indicate a faulty fan logic module. Fan logic modules are hot swappable.

For more information about airflow and cooling, see 4.6, “Cooling” on page 72.

4.5 Front information panel

Figure 4-15 shows the front information panel.

Figure 4-15 Front information panel

The following items are displayed on the front information panel:

� White Backlit IBM Logo: When lit, this logo indicates that the chassis is powered.

� Locate LED: When lit (blue) solid, this LED indicates the location of the chassis. When flashing, this LED indicates that a condition has occurred that caused the CMM to indicate that the chassis needs attention.


� Check Error Log LED: When lit (amber), this LED indicates that a noncritical event has occurred. This event might be a wrong I/O module inserted into a bay, or a power requirement that exceeds the capacity of the installed power modules.

� Fault LED: When lit (amber), this LED indicates that a critical system error has occurred. This can be an error in a power module or a system error in a node.

Figure 4-16 shows the LEDs on the rear of the chassis.

Figure 4-16 Chassis LEDs on the rear of the unit (lower right)

4.6 Cooling

This section addresses Enterprise Chassis cooling. The flow of air within the Enterprise Chassis follows a front to back cooling path. Cool air is drawn in at the front of the chassis and warm air is exhausted to the rear. Air is drawn in both through the front node bays and the front airflow inlet apertures at the top and bottom of the chassis. There are two cooling zones for the nodes: A left zone and a right zone.

The cooling can be scaled up as required, based on which node bays are populated. The number of fan modules required for a certain number of nodes is described further in this section.

When a node is not inserted in a bay, an airflow damper closes in the midplane. Therefore, no air is drawn in through an unpopulated bay. When a node is inserted into a bay, the damper is opened mechanically by the node insertion. This action allows for cooling of the node in that bay.


Figure 4-17 shows the upper and lower cooling apertures.

Figure 4-17 Enterprise Chassis lower and upper cooling apertures

Various fan modules are present in the chassis to assist with efficient cooling. Fan modules consist of both 40 mm and 80 mm types, and are contained within hot pluggable fan modules. The power supplies also have two integrated, independently powered 40 mm fan modules.

The cooling path for the nodes begins when air is drawn in from the front of the chassis. The airflow intensity is controlled by the 80 mm fan modules in the rear. Air passes from the front of the chassis, through the node, through openings in the Midplane and then into a plenum chamber. Each plenum is isolated from the other, providing separate left and right cooling zones. The 80 mm fan packs on each zone then move the warm air from the plenum to the rear of the chassis.

In a 2-bay wide node, the air flow within the node is not segregated because it spans both airflow zones.


Figure 4-18 shows a chassis with the outer casing removed for clarity to show airflow path through the chassis. There is no airflow through the chassis midplane where a node is not installed. The air damper is opened only when a node is inserted in that bay.

Figure 4-18 Airflow into chassis through the Nodes and exhaust through the 80 mm fan packs (chassis casing is removed for clarity)


Figure 4-19 shows the path of air from the upper and lower airflow inlet apertures to the power supplies.

Figure 4-19 Airflow path to the power supplies (chassis casing is removed for clarity)


Figure 4-20 shows the airflow from the lower inlet aperture to the 40 mm fan modules. This airflow provides cooling for the switch modules and CMM installed in the rear of the chassis.

Figure 4-20 40 mm fan module airflow (chassis casing is removed for clarity)

The right side 40 mm fan module cools the right switches, while the left 40 mm fan module cools the left pair of switches. Each 40 mm fan module has a pair of fans for redundancy.

Cool air flows in from the lower inlet aperture at the front of the chassis. It is drawn into the lower openings in the CMM and I/O Modules where it provides cooling for these components. It passes through and is drawn out the top of the CMM and I/O modules. The warm air is expelled to the rear of the chassis by the 40 mm fan assembly. This expulsion is shown by the red airflow arrows in Figure 4-20.

Removing a 40 mm fan pack exposes an opening in its bay to the 80 mm fan packs located below, and a backflow damper within the fan bay then closes. The backflow damper prevents hot air from re-entering the system from the rear of the chassis. The 80 mm fan packs cool the switch modules and the CMM while the 40 mm fan pack is being replaced.

Chassis cooling is implemented as a function of:

- Node configurations
- Power Monitor Circuits
- Component Temperatures
- Ambient Temperature

This approach results in a lower airflow volume (measured in cubic feet per minute, or CFM) and lower cooling energy spent at the chassis level. It also maximizes the temperature difference across the chassis (generally known as the Delta T) for more efficient room integration. Monitored chassis-level airflow usage is displayed to enable airflow planning and monitoring for hot air recirculation.
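As a rough illustration of the relationship between dissipated power, airflow volume, and Delta T, the following Python sketch uses the standard sea-level approximation for air (about 1.76 degrees C of temperature rise per watt per CFM). The constant and the example numbers are generic assumptions, not Enterprise Chassis specifications.

```python
# Rough sketch (not an IBM specification): estimate chassis exhaust temperature
# rise (Delta T) from total power dissipated and measured chassis airflow.
def delta_t_celsius(power_watts, airflow_cfm):
    # ~1.76 C per watt per CFM for air at sea level
    return 1.76 * power_watts / airflow_cfm

# Illustrative numbers only: 7 kW of load exhausted through 500 CFM of airflow.
print(round(delta_t_celsius(7000, 500), 1))  # ~24.6 C rise front to rear
```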


Five Acoustic Optimization states can be selected. Use the one that best balances performance requirements with the noise level of the fans.

Chassis level CFM usage is available to you for planning purposes. In addition, ambient health awareness can detect potential hot air recirculation to the chassis.

4.7 Power supply and fan module requirements

The number of fan modules and power supplies required is dependent on the number of nodes installed within a chassis and the level of redundancy required.

When installing additional nodes, install the nodes, fan modules, and power supplies from the bottom upwards.

4.7.1 Fan module population

The fan modules are populated dependent on nodes installed. To support the base configuration and up to four nodes, a chassis ships with four 80 mm fan modules and two 40 mm fan modules preinstalled.

The minimum configuration of 80 mm fan modules is four, which provides cooling for a maximum of four nodes. This configuration is shown in Figure 4-21 and is the base configuration.

Figure 4-21 Four 80 mm fan modules allow a maximum of four nodes installed


Installing six 80 mm fan modules allows a further four nodes to be supported within the chassis, for a maximum of eight nodes, as shown in Figure 4-22.

Figure 4-22 Six 80 mm fan modules allow for a maximum of eight nodes

To cool more than eight nodes, all fan modules must be installed, as shown in Figure 4-23.

Figure 4-23 Eight 80 mm fan modules support for 9 to 14 nodes

If there are insufficient fan modules for the number of nodes installed, the nodes might be throttled.
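The population rule described in this section can be summarized in a short sketch. The following Python function simply encodes the 4/6/8 fan module thresholds stated above; it is an illustration, not IBM configuration logic.

```python
# Minimal sketch of the 80 mm fan module population rule described above:
# four modules cool up to 4 nodes, six cool up to 8, and eight are needed for 9-14.
def required_80mm_fan_modules(node_count):
    if not 0 <= node_count <= 14:
        raise ValueError("Enterprise Chassis holds at most 14 half-wide nodes")
    if node_count <= 4:
        return 4
    if node_count <= 8:
        return 6
    return 8

for nodes in (4, 8, 14):
    print(nodes, "nodes ->", required_80mm_fan_modules(nodes), "x 80 mm fan modules")
```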

4.7.2 Power supply population

The power supplies can be installed in either an N+N or an N+1 configuration. N+N means a fully redundant configuration, where there is a duplicate power supply for each supply needed for full operation. N+1 means there is only one redundant power supply, and all other supplies are needed for full operation. To support a full chassis of nodes, N (the number of power supplies) must equal 3 for N+N operation, and N must be greater than or equal to 3 for N+1 operation.

As the number of nodes in a chassis is expanded, more power supplies can be added as required. This system allows cost effective scaling of power configurations.


If there is not enough DC power available to meet the load demand, the Chassis Management Module automatically powers down devices to reduce the load demand.

Power policies

There are five power management policies that can be selected to dictate how the chassis is protected in the case of potential power module or supply failures. These policies are configured by using the Chassis Management Module graphical interface.

- AC Power source redundancy

  Power is allocated under the assumption that no throttling of the nodes is allowed if a power supply fault occurs. This is an N+N configuration.

- AC Power source redundancy with compute node throttling allowed

  Power is allocated under the assumption that throttling of the nodes is allowed if a power supply fault occurs. This is an N+N configuration.

- Power Module Redundancy

  Maximum input power is limited to one less than the number of power modules when more than one power module is present. One power module can fail without affecting compute node operation. Multiple power module failures can cause the chassis to power off. Some compute nodes might not be able to power on if doing so would exceed the power policy limit.

- Power Module Redundancy with compute node throttling allowed

  This policy can be described as oversubscription mode. Operation in this mode assumes that a node's load can be reduced, or throttled, to the continuous load rating within a specified time following the loss of one or more power supplies. The power supplies can exceed their continuous rating of 2500 W for short periods. This is an N+1 configuration.

- Basic Power Management

  This policy allows the total output power of all power supplies to be used. When operating in this mode, there is no power redundancy. If a power supply fails, or an AC feed to one or more supplies is lost, the entire chassis might shut down. There is no power throttling.

The chassis runs in one of these power capping policies:

- No Power Capping

  Maximum input power is determined by the active power redundancy policy.

- Static Capping

  This policy sets an overall chassis limit on the maximum input power. In a situation where powering on a component would cause the limit to be exceeded, the component is prevented from powering on.
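The following simplified Python sketch illustrates how the redundancy policies above bound the usable chassis power budget. It is not CMM firmware logic; the 2500 W nominal and 3538 W oversubscription ratings are taken from 4.2, “Power supplies”, and the policy labels are shortened names chosen for this example.

```python
# Simplified sketch of how the power policies above bound usable chassis capacity.
NOMINAL_W = 2500           # continuous rating per power supply
OVERSUBSCRIPTION_W = 3538  # short-term rating per power supply

def usable_capacity_watts(installed_supplies, policy):
    if policy == "ac_source_redundancy":              # N+N: half the supplies back the other half
        return (installed_supplies // 2) * NOMINAL_W
    if policy == "power_module_redundancy":           # N+1: budget to one supply less than installed
        return max(installed_supplies - 1, 0) * NOMINAL_W
    if policy == "power_module_redundancy_throttle":  # N+1 with short-term oversubscription
        return max(installed_supplies - 1, 0) * OVERSUBSCRIPTION_W
    if policy == "basic":                             # no redundancy: all supplies count
        return installed_supplies * NOMINAL_W
    raise ValueError("unknown policy")

print(usable_capacity_watts(6, "ac_source_redundancy"))    # 7500
print(usable_capacity_watts(4, "power_module_redundancy")) # 7500
```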


Power supplies required in an N+N configuration

A total of six PSUs can be installed. In an N+N configuration, the options are 2, 4, or 6 power supplies installed.

The chassis ships with power supplies preinstalled in bays 1 and 4. For N+N, this configuration allows up to four nodes to be populated into the chassis before additional power supplies are required. Figure 4-24 shows this configuration.

Figure 4-24 N+N with four nodes installed

For up to eight nodes with N+N configuration, install a further pair of power supplies in bays 2 and 5 as shown in Figure 4-25.

Figure 4-25 N+N power supply requirements with up to eight nodes installed


To support more than eight nodes with N+N, install the remaining pair of power supplies (3 and 6) as shown in Figure 4-26.

Figure 4-26 N+N power supply requirements for nodes 9 - 14

Power supplies required in an N+1 configuration

The chassis ships with two power supplies installed. Therefore, you can install up to 4 nodes in an N+1 power configuration. Figure 4-27 shows an N+1 configuration.

Figure 4-27 N+1: Two PSUs support up to four nodes


With configurations of five to eight nodes, N+1 requires a total of three power supplies (Figure 4-28).

Figure 4-28 N+1: Up to eight nodes are supported with three power supplies

For configurations of nine or more nodes, a total of four power supplies is required, as shown in Figure 4-29.

Figure 4-29 N+1 fully configured chassis requires four power supplies

A fully populated chassis can function on three power supplies. However, avoid this configuration because it has no power redundancy in the event of a power source or power supply failure.
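The population guidance in this section reduces to a simple lookup. The following Python sketch encodes the supply counts described above for N+N and N+1; it is an illustration only, not an IBM sizing tool.

```python
# Minimal sketch of the power supply population guidance above:
# N+N uses matched pairs (2, 4, or 6 supplies); N+1 uses 2, 3, or 4 supplies.
def required_power_supplies(node_count, redundancy):
    if not 1 <= node_count <= 14:
        raise ValueError("node_count must be 1-14")
    if redundancy == "N+N":
        return 2 if node_count <= 4 else 4 if node_count <= 8 else 6
    if redundancy == "N+1":
        return 2 if node_count <= 4 else 3 if node_count <= 8 else 4
    raise ValueError("redundancy must be 'N+N' or 'N+1'")

print(required_power_supplies(14, "N+N"))  # 6
print(required_power_supplies(9, "N+1"))   # 4
```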

4.8 Chassis Management Module

The CMM provides single chassis management and the networking path for remote keyboard, video, mouse (KVM) capability for compute nodes within the chassis.

The chassis can accommodate one or two CMMs. The first is installed into CMM bay 1, the second into CMM bay 2. Installing two CMMs provides redundancy.


Table 4-8 lists the ordering information for the second CMM.

Table 4-8   Chassis Management Module ordering information

Part number   Feature codes (a)   Description
68Y7030       A0UE / 3592         IBM Flex System Chassis Management Module

a. The first feature code listed is for configurations ordered through System x sales channels. The second feature code is for configurations ordered through the IBM Power Systems channel.

Figure 4-30 shows the location of the CMM bays on the back of the Enterprise Chassis.

Figure 4-30 CMM Bay 1 and Bay 2

The CMM provides these functions:

- Power control
- Fan management
- Chassis and compute node initialization
- Switch management
- Diagnostics
- Resource discovery and inventory management
- Resource alerts and monitoring management
- Chassis and compute node power management
- Network management

The CMM has the following connectors:

� USB connection: Can be used for insertion of a USB media key for tasks such as firmware updates.

� 10/100/1000 Mbps RJ45 Ethernet connection: For connection to a management network. The CMM can be managed through this Ethernet port.



� Serial port (mini-USB): For local serial (command-line interface (CLI)) access to the CMM. Use the cable kit listed in Table 4-9 for connectivity.

Table 4-9   Serial cable specifications

Part number   Feature codes (a)   Description
90Y9338       A2RR / None         IBM Flex System Management Serial Access Cable
                                  Contains two cables:
                                  - Mini-USB-to-RJ45 serial cable
                                  - Mini-USB-to-DB9 serial cable

a. The first feature code listed is for configurations ordered through System x sales channels. The second feature code is for configurations ordered through the IBM Power Systems channel.

The CMM has the following LEDs that provide status information:

- Power-on LED
- Activity LED
- Error LED
- Ethernet port link and port activity LEDs

Figure 4-31 shows the CMM connectors and LEDs.

Figure 4-31 Chassis Management Module

The CMM also incorporates a reset button. It has two functions, dependent upon how long the button is held in:

� When pressed for less than 5 seconds, the CMM restarts.

� When pressed for more than 5 seconds (for example 10-15 seconds), the CMM configuration is reset to manufacturing defaults and then restarts.

For more information about how the CMM integrates into the Systems management architecture, see 3.2, “Chassis Management Module” on page 39.



4.9 I/O architecture

The Enterprise Chassis can accommodate four I/O modules installed in vertical orientation into the rear of the chassis, as shown in Figure 4-32.

Figure 4-32 Rear view that shows the I/O Module bays 1-4

If a node has a two-port integrated LAN on Motherboard (LOM) as standard, I/O modules 1 and 2 are connected to this LOM. If an I/O adapter is instead installed in the node's I/O expansion bay 1, I/O modules 1 and 2 are connected to that adapter.

Modules 3 and 4 connect to the I/O adapter that is installed within I/O expansion bay 2 on the node.

These I/O modules provide external connectivity, and connect internally to each of the nodes within the chassis. They can be either Switch or Pass thru modules, with a potential to support other types in the future.


Figure 4-33 shows the connections from the nodes to the switch modules.

Figure 4-33 LOM, I/O adapter, and switch module connections

The node in Bay 1 on Figure 4-33 shows that when shipped with a LOM, the LOM connector provides the link from the node system board to the midplane. Some nodes do not ship with LOM.

If required, this LOM connector can be removed and an I/O expansion adapter installed in its place. This configuration is shown on the node in Bay 2 in Figure 4-33.


Figure 4-34 shows the electrical connections from the LOM and I/O adapters to the I/O Modules, which all takes place across the chassis midplane.

Figure 4-34 Logical layout of node to switch interconnects

A total of two I/O expansion adapters (designated M1 and M2 in Figure 4-34) can be plugged into a half-wide node. Up to 4 I/O adapters can be plugged into a full-wide node.

Each I/O adapter has two connectors. One connects to the compute node’s system board (PCI Express connection). The second connector is a high speed interface to the midplane that mates to the midplane when the node is installed into a bay within the chassis.

As shown in Figure 4-34, each of the links from an I/O adapter to the midplane is four lanes wide. Exactly how many lanes are employed on each I/O adapter depends on the design of the adapter and the number of ports that are wired. Therefore, a half-wide node can have a maximum of 16 I/O lanes, and a full-wide node can have 32.
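The lane arithmetic behind these maximums is shown in the following Python sketch. The constants reflect the description above (two switch connections per adapter, four lanes per connection); how many lanes a particular adapter actually wires depends on its design.

```python
# Sketch of the lane arithmetic described above. Each I/O adapter connects to two
# I/O modules, and each connection is four lanes wide; half-wide nodes take two
# adapters, full-wide nodes take four.
LANES_PER_LINK = 4
LINKS_PER_ADAPTER = 2  # one link to each of the two I/O modules serving that adapter

def max_io_lanes(node_width):
    adapters = {"half-wide": 2, "full-wide": 4}[node_width]
    return adapters * LINKS_PER_ADAPTER * LANES_PER_LINK

print(max_io_lanes("half-wide"))  # 16
print(max_io_lanes("full-wide"))  # 32
```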


Figure 4-35 shows an I/O expansion adapter.

Figure 4-35 I/O expansion adapter

Each of these individual I/O links or lanes can be wired for 1 Gb or 10 Gb Ethernet, or 8 or 16 Gbps Fibre Channel. You can enable any number of these links. The application-specific integrated circuit (ASIC) type on the I/O Expansion adapter dictates the number of links that can be enabled. Some ASICs are two port and some are four port. For a two port ASIC, one port can go to one switch and one port to the other. This configuration is shown in Figure 4-36 on page 89. In the future other combinations can be implemented.

In an Ethernet I/O adapter, the wiring of the links is to the IEEE 802.3ap standard, which is also known as the Backplane Ethernet standard. The Backplane Ethernet standard has different implementations at 10 Gbps, being 10GBASE-KX4 and 10GBASE-KR. The I/O architecture of the Enterprise Chassis supports both the KX4 and KR.

10GBASE-KX4 uses the same physical layer coding (IEEE 802.3 clause 48) as 10GBASE-CX4, where each individual lane (SERDES = Serializer/DeSerializer) carries 3.125 Gbaud of signaling bandwidth.

10GBASE-KR uses the same coding (IEEE 802.3 clause 49) as 10GBASE-LR/ER/SR, where the SERDES lane operates at 10.3125 Gbps.

Each of the links between I/O expansion adapter and I/O module can either be 4x 3.125 Lanes/port (KX-4) or 4x 10 Gbps Lanes (KR). This choice is dependent on the expansion adapter and I/O Module implementation.
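The following Python sketch illustrates why both signaling schemes deliver the same 10 Gbps of data: KX4 runs four lanes at 3.125 Gbaud with 8b/10b coding, and KR runs a single lane at 10.3125 Gbaud with 64b/66b coding (the line codings defined by the IEEE 802.3 clauses cited above). The helper function is an assumption made for this example.

```python
# Sketch: why 10GBASE-KX4 and 10GBASE-KR both carry 10 Gbps of data.
# KX4: four lanes at 3.125 Gbaud with 8b/10b coding (80% efficient).
# KR:  one lane at 10.3125 Gbaud with 64b/66b coding (~97% efficient).
def data_rate_gbps(lanes, gbaud_per_lane, payload_bits, coded_bits):
    return lanes * gbaud_per_lane * payload_bits / coded_bits

print(data_rate_gbps(4, 3.125, 8, 10))     # 10.0  (KX4)
print(data_rate_gbps(1, 10.3125, 64, 66))  # 10.0  (KR)
```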


Figure 4-36 shows how the integrated 2-port 10 Gb LOM connects through a LOM connector to the midplane on a compute node. This implementation provides a pair of 10 Gb lanes. Each lane connects to a 10 Gb switch or 10 Gb pass-through module installed in I/O module bays in the rear of the chassis.

Figure 4-36 LOM implementation: Emulex 10 Gb Virtual Fabric onboard LOM to I/O module


A half-wide compute node with two standard I/O adapter sockets and an I/O adapter with two ports is shown in Figure 4-37. Port 1 connects to one switch in the chassis and Port 2 connects to another switch in the chassis. With 14 compute nodes installed in the chassis, therefore, each switch has 14 internal ports for connectivity to the compute nodes.

Figure 4-37 I/O adapter with two port ASIC


Another implementation of the I/O adapter is the four-port adapter. Figure 4-38 shows the interconnection to the I/O module bays for I/O adapters that use a 4-port ASIC.

Figure 4-38 I/O adapter with four port ASIC connections

In this case, with each node having a four port I/O adapter in I/O slot 1, each I/O module would require 28 internal ports enabled. This configuration highlights another key feature of the I/O architecture: Switch partitioning.

Switch partitioning is where sets of ports are enabled by Feature on Demand (FoD) licenses to allow a greater number of connections between the nodes and a switch. With two lanes per node to each switch and 14 nodes requiring four ports connected, each switch needs to have 28 internal ports enabled. You also need sufficient uplink ports.

The architecture allows for a total of eight lanes per I/O adapter. Therefore, a total of 16 I/O lanes per half wide node is possible. Each I/O module requires the matching number of internal ports to be enabled.
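The internal port counts in this example follow directly from the number of adapter ports wired to each switch. The following Python sketch illustrates the arithmetic; it is not a configuration tool, and it assumes adapter ports are split evenly across two switches, as described above.

```python
# Sketch of the internal-port arithmetic behind switch partitioning: each switch
# needs one internal port enabled (via FoD) for every adapter port wired to it.
def internal_ports_needed(node_count, adapter_ports_per_node, switches_per_adapter=2):
    ports_per_switch_per_node = adapter_ports_per_node // switches_per_adapter
    return node_count * ports_per_switch_per_node

print(internal_ports_needed(14, 2))  # 14 ports per switch for two-port adapters
print(internal_ports_needed(14, 4))  # 28 ports per switch for four-port adapters
```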

For more information about switch partitioning and port enablement using FoD, see 4.10, “I/O modules” on page 92. For more information about I/O expansion adapters that install on the nodes, see 5.5.1, “Overview” on page 216.


4.10 I/O modules

I/O modules are inserted into the rear of the Enterprise Chassis to provide interconnectivity both within the chassis and external to the chassis. This section covers the I/O and Switch module naming scheme. It contains the following subsections:

- 4.10.1, “I/O module LEDs”
- 4.10.2, “Serial access cable” on page 93
- 4.10.3, “I/O module naming scheme” on page 94
- 4.10.4, “IBM Flex System Fabric EN4093 10 Gb Scalable Switch” on page 94
- 4.10.5, “IBM Flex System EN4091 10 Gb Ethernet Pass-thru” on page 100
- 4.10.6, “IBM Flex System EN2092 1 Gb Ethernet Scalable Switch” on page 102
- 4.10.7, “IBM Flex System FC5022 16 Gb SAN Scalable Switch” on page 107
- 4.10.8, “IBM Flex System FC3171 8 Gb SAN Switch” on page 113
- 4.10.9, “IBM Flex System FC3171 8 Gb SAN Pass-thru” on page 116
- 4.10.10, “IBM Flex System IB6131 InfiniBand Switch” on page 118

There are four I/O module bays at the rear of the chassis. To insert an I/O module into a bay, the I/O filler must first be removed. Figure 4-39 shows how to remove an I/O filler and insert an I/O module into the chassis by using the two handles.

Figure 4-39 Removing an I/O filler and installing an I/O module


4.10.1 I/O module LEDs

The I/O module status LEDs are at the bottom of the module when it is inserted into the chassis. All modules share three status LEDs, as shown in Figure 4-40.

Figure 4-40 Example of an I/O module status LEDs

The LEDs are as follows:

� OK (power)

When this LED is lit, it indicates that the switch is on. When it is not lit and the amber switch error LED is lit, it indicates a critical alert. If the amber LED is also not lit, it indicates that the switch is off.

� Identify

You can physically identify a switch by making this blue LED light up by using the management software.

� Switch Error

When this LED is lit, it indicates a POST failure or critical alert. When this LED is lit, the system-error LED on the chassis is also lit.

When this LED is not lit and the green LED is lit, it indicates that the switch is working correctly. If the green LED is also not lit, it indicates that the switch is off.

4.10.2 Serial access cable

The switches (and CMM) support local command-line interface (CLI) access through a USB serial cable. The mini-USB port on the switch is near the LEDs as shown in Figure 4-40. A cable kit with supported serial cables can be ordered as listed in Table 4-10.

Table 4-10   Serial cable

Part number   Feature codes (a)   Description
90Y9338       A2RR / None         IBM Flex System Management Serial Access Cable

a. The first feature code listed is for configurations ordered through System x sales channels. The second feature code is for configurations ordered through the IBM Power Systems channel.

Part number 90Y9338 contains two cables:

- Mini-USB-to-RJ45 serial cable
- Mini-USB-to-DB9 serial cable




4.10.3 I/O module naming scheme

The I/O module naming scheme follows a logical structure, similar to that of the I/O adapters.

Figure 4-41 shows the I/O module naming scheme. As time progresses this scheme might be expanded to support future technology.

Figure 4-41 IBM Flex System I/O Module naming scheme

Using the IBM Flex System EN2092 1 Gb Ethernet Scalable Switch (EN2092) as an example, the name decodes as follows:

- Fabric type (first two letters): EN = Ethernet, FC = Fibre Channel, CN = Converged Network, IB = InfiniBand
- Series (first digit): 2 for 1 Gb, 3 for 8 Gb, 4 for 10 Gb, 5 for 16 Gb, 6 for InfiniBand
- Vendor name (next two digits, where A=01): 02 = Brocade, 09 = IBM, 13 = Mellanox, 17 = QLogic
- Maximum number of partitions (last digit): 2 = 2 partitions
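As a simple illustration, the following Python sketch decodes a module name using the scheme above. The lookup tables contain only the values shown in Figure 4-41; the function name is an assumption made for this example.

```python
# Sketch: decode an I/O module name (for example "EN2092") using the scheme above.
FABRIC = {"EN": "Ethernet", "FC": "Fibre Channel", "CN": "Converged Network", "IB": "InfiniBand"}
SERIES = {"2": "1 Gb", "3": "8 Gb", "4": "10 Gb", "5": "16 Gb", "6": "InfiniBand"}
VENDOR = {"02": "Brocade", "09": "IBM", "13": "Mellanox", "17": "QLogic"}

def decode_module_name(name):
    fabric, series, vendor, partitions = name[:2], name[2], name[3:5], name[5]
    return {
        "fabric type": FABRIC[fabric],
        "series": SERIES[series],
        "vendor": VENDOR[vendor],
        "maximum partitions": int(partitions),
    }

print(decode_module_name("EN2092"))  # {'fabric type': 'Ethernet', 'series': '1 Gb', 'vendor': 'IBM', ...}
```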

4.10.4 IBM Flex System Fabric EN4093 10 Gb Scalable Switch

The IBM Flex System Fabric EN4093 10 Gb Scalable Switch is a 10 Gb 64-port upgradeable midrange to high-end switch module. It offers Layer 2/3 switching designed to install within the I/O module bays of the Enterprise Chassis. The switch contains the following ports:

� Up to 42 internal 10 Gb ports

� Up to 14 external 10 Gb uplink ports (enhanced small form-factor pluggable (SFP+) connectors)

� Up to 2 external 40 Gb uplink ports (quad small form-factor pluggable (QSFP+) connectors)

The switch is considered suited for clients with these needs:

� Building a 10 Gb infrastructure

� Implementing a virtualized environment

� Requiring investment protection for 40 Gb uplinks

� Want to reduce total cost of ownership (TCO) and improve performance, while maintaining high levels of availability and security

� Want to avoid oversubscription (traffic from multiple internal ports that attempt to pass through a lower quantity of external ports, leading to congestion and performance impact)



The EN4093 10Gb Scalable Switch is shown in Figure 4-42.

Figure 4-42 IBM Flex System Fabric EN4093 10 Gb Scalable Switch

As listed in Table 4-11, the switch is initially licensed with fourteen 10 Gb internal ports enabled and ten 10 Gb external uplink ports enabled. Further ports can be enabled, including the two 40 Gb external uplink ports with the Upgrade 1 and Upgrade 2 license options. Upgrade 1 must be applied before Upgrade 2 can be applied.

Table 4-11 lists the available parts and upgrades.

Table 4-11 IBM Flex System Fabric EN4093 10 Gb Scalable Switch part numbers and port upgrades

Part number   Feature codes (a)   Product description                                                Total ports enabled
                                                                                                     (internal / 10 Gb uplink / 40 Gb uplink)
49Y4270       A0TB / 3593         IBM Flex System Fabric EN4093 10 Gb Scalable Switch                14 / 10 / 0
                                  (10x external 10 Gb uplinks, 14x internal 10 Gb ports)
49Y4798       A1EL / 3596         IBM Flex System Fabric EN4093 10 Gb Scalable Switch (Upgrade 1)    28 / 10 / 2
                                  (adds 2x external 40 Gb uplinks, 14x internal 10 Gb ports)
88Y6037       A1EM / 3597         IBM Flex System Fabric EN4093 10 Gb Scalable Switch (Upgrade 2)    42 / 14 / 2
                                  (requires Upgrade 1; adds 4x external 10 Gb uplinks,
                                  14x internal 10 Gb ports)

a. The first feature code listed is for configurations ordered through System x sales channels. The second feature code is for configurations ordered through the IBM Power Systems channel.


The key components on the front of the switch are shown in Figure 4-43.

Figure 4-43 IBM Flex System Fabric EN4093 10 Gb Scalable Switch

Each upgrade license enables additional internal ports. To take full advantage of those ports, each compute node needs the appropriate I/O adapter installed:

� The base switch requires a two-port Ethernet adapter (one port of the adapter goes to each of two switches)

� Upgrade 1 requires a four-port Ethernet adapter (two ports of the adapter to each switch)

� Upgrade 2 requires a six-port Ethernet adapter (three ports to each switch)
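The following Python sketch summarizes the port entitlements from Table 4-11 together with the adapter requirement per compute node listed above. The dictionary structure and key names are illustrative assumptions made for this example, not an IBM API.

```python
# Sketch of the EN4093 port entitlements from Table 4-11 and the matching
# adapter requirement per compute node described above.
EN4093_LEVELS = {
    "base":     {"internal": 14, "external_10gb": 10, "external_40gb": 0, "adapter_ports": 2},
    "upgrade1": {"internal": 28, "external_10gb": 10, "external_40gb": 2, "adapter_ports": 4},
    "upgrade2": {"internal": 42, "external_10gb": 14, "external_40gb": 2, "adapter_ports": 6},
}

def ports_enabled(level):
    if level == "upgrade2":
        print("Note: Upgrade 2 requires Upgrade 1 to be applied first.")
    return EN4093_LEVELS[level]

print(ports_enabled("upgrade1"))
```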

The rear of the switch has 14 SFP+ module ports and two QSFP+ module ports. The QSFP+ ports can be used to provide either two 40 Gb uplinks or eight 10 Gb ports. Use one of the supported QSFP+ to 4x 10 Gb SFP+ cables listed in Table 4-12. This cable splits a single 40 Gb QSFP+ port into four SFP+ 10 Gb ports.

For management of the switch, a mini USB port and an Ethernet management port are provided.

The supported SFP+ and QSFP+ modules and cables for the switch are listed in Table 4-12.

Table 4-12 Supported SFP+ modules and cables


Consideration: Adding Upgrade 2 enables an additional 14 internal ports, for a total of 42 internal ports, with three ports connected to each of the 14 compute nodes in the chassis. To take full advantage of all 42 internal ports, a 6-port adapter is required, but this type of adapter is currently not available.

Upgrade 2 still provides a benefit even with a 4-port adapter because this upgrade enables an extra four external 10 Gb uplinks as well.

Part number   Feature codes (a)   Description

Serial console cables
90Y9338       A2RR / None         IBM Flex System Management Serial Access Cable Kit

Small form-factor pluggable (SFP) transceivers - 1 GbE
81Y1618       3268 / EB29         IBM SFP RJ-45 Transceiver (does not support 10/100 Mbps)
81Y1622       3269 / EB2A         IBM SFP SX Transceiver
90Y9424       A1PN / None         IBM SFP LX Transceiver

SFP+ transceivers - 10 GbE
46C3447       5053 / None         IBM SFP+ SR Transceiver
90Y9412       A1PM / None         IBM SFP+ LR Transceiver
44W4408       4942 / 3382         10GBase-SR SFP+ (MMFiber) transceiver

SFP+ Direct Attach Copper (DAC) cables - 10 GbE
90Y9427       A1PH / ECB4         1m IBM Passive DAC SFP+
90Y9430       A1PJ / ECB5         3m IBM Passive DAC SFP+
90Y9433       A1PK / None         5m IBM Passive DAC SFP+

QSFP+ transceiver and cables - 40 GbE
49Y7884       A1DR / EB27         IBM QSFP+ 40GBASE-SR Transceiver (requires either cable 90Y3519 or cable 90Y3521)
90Y3519       A1MM / None         10m IBM MTP Fiberoptic Cable (requires transceiver 49Y7884)
90Y3521       A1MN / None         30m IBM MTP Fiberoptic Cable (requires transceiver 49Y7884)

QSFP+ breakout cables - 40 GbE to 4x10 GbE
49Y7886       A1DL / EB24         1m 40 Gb QSFP+ to 4 x 10 Gb SFP+ Cable
49Y7887       A1DM / EB25         3m 40 Gb QSFP+ to 4 x 10 Gb SFP+ Cable
49Y7888       A1DN / EB26         5m 40 Gb QSFP+ to 4 x 10 Gb SFP+ Cable

QSFP+ Direct Attach Copper (DAC) cables - 40 GbE
49Y7890       A1DP / None         1m QSFP+ to QSFP+ DAC
49Y7891       A1DQ / None         3m QSFP+ to QSFP+ DAC

a. The first feature code listed is for configurations ordered through System x sales channels. The second feature code is for configurations ordered through the IBM Power Systems channel.


The EN4093 10Gb Scalable Switch has the following features and specifications:

� Internal ports

– Forty-two internal full-duplex 10 Gigabit ports. Fourteen ports are enabled by default. Optional FoD licenses are required to activate the remaining 28 ports.

– Two internal full-duplex 1 GbE ports connected to the chassis management module.

� External ports

– Fourteen ports for 1 Gb or 10 Gb Ethernet SFP+ transceivers (support for 1000BASE-SX, 1000BASE-LX, 1000BASE-T, 10GBASE-SR, or 10GBASE-LR) or SFP+ DAC cables. Ten ports are enabled by default. An optional FoD license is required to activate the remaining four ports. SFP+ modules and DAC cables are not included and must be purchased separately.

– Two ports for 40 Gb Ethernet QSFP+ transceivers or QSFP+ DACs (ports are disabled by default. An optional FoD license is required to activate them). QSFP+ modules and DAC cables are not included and must be purchased separately.

– One RS-232 serial port (mini-USB connector) that provides an additional means to configure the switch module.



� Scalability and performance

– 40 Gb Ethernet ports for extreme uplink bandwidth and performance

– Fixed-speed external 10 Gb Ethernet ports to take advantage of 10 Gb core infrastructure

– Autosensing 10/100/1000 external Gigabit Ethernet ports for bandwidth optimization

– Non-blocking architecture with wire-speed forwarding of traffic and aggregated throughput of 1.28 Tbps

– Media Access Control (MAC) address learning: Automatic update, support of up to 128,000 MAC addresses

– Up to 128 IP interfaces per switch

– Static and Link Aggregation Control Protocol (LACP) (IEEE 802.3ad) link aggregation: Up to 220 Gb of total uplink bandwidth per switch, up to 64 trunk groups, up to 16 ports per group

– Support for jumbo frames (up to 9,216 bytes)

– Broadcast/multicast storm control

– Internet Group Management Protocol (IGMP) snooping to limit flooding of IP multicast traffic

– IGMP filtering to control multicast traffic for hosts that participate in multicast groups

– Configurable traffic distribution schemes over trunk links based on source/destination IP or MAC addresses or both

– Fast port forwarding and fast uplink convergence for rapid STP convergence

� Availability and redundancy

– Virtual Router Redundancy Protocol (VRRP) for Layer 3 router redundancy

– IEEE 802.1D Spanning Tree Protocol (STP) for providing L2 redundancy

– IEEE 802.1s Multiple STP (MSTP) for topology optimization, up to 32 STP instances are supported by single switch

– IEEE 802.1w Rapid STP (RSTP) provides rapid STP convergence for critical delay-sensitive traffic like voice or video

– Rapid Per-VLAN STP (RPVST) enhancements

– Layer 2 Trunk Failover to support active/standby configurations of network adapter teaming on compute nodes

– Hot Links provides basic link redundancy with fast recovery for network topologies that require Spanning Tree to be turned off

� Virtual local area network (VLAN) support

– Up to 1024 VLANs supported per switch, with VLAN numbers that range from 1 to 4095 (4095 is used for the management module’s connection only.)

– 802.1Q VLAN tagging support on all ports

– Private VLANs

� Security

– VLAN-based, MAC-based, and IP-based access control lists (ACLs)

– 802.1x port-based authentication

– Multiple user IDs and passwords


– User access control

– Radius, TACACS+ and LDAP authentication and authorization

� Quality of Service (QoS)

– Support for IEEE 802.1p, IP ToS/DSCP, and ACL-based (MAC/IP source and destination addresses, VLANs) traffic classification and processing

– Traffic shaping and remarking based on defined policies

– Eight weighted round robin (WRR) priority queues per port for processing qualified traffic

� IP v4 Layer 3 functions

– Host management

– IP forwarding

– IP filtering with ACLs, up to 896 ACLs supported

– VRRP for router redundancy

– Support for up to 128 static routes

– Routing protocol support (RIP v1, RIP v2, OSPF v2, BGP-4), up to 2048 entries in a routing table

– Support for Dynamic Host Configuration Protocol (DHCP) Relay

– Support for IGMP snooping and IGMP relay

– Support for Protocol Independent Multicast (PIM) in Sparse Mode (PIM-SM) and Dense Mode (PIM-DM).

� IP v6 Layer 3 functions

– IPv6 host management (except default switch management IP address)

– IPv6 forwarding

– Up to 128 static routes

– Support for OSPF v3 routing protocol

– IPv6 filtering with ACLs

� Virtualization

– Virtual Fabric with virtual network interface card (vNIC)

– 802.1Qbg Edge Virtual Bridging (EVB)

– IBM VMready®

� Converged Enhanced Ethernet

– Priority-based Flow Control (PFC) (IEEE 802.1Qbb) extends 802.3x standard flow control to allow the switch to pause traffic. This function is based on the 802.1p priority value in each packet’s VLAN tag.

– Enhanced Transmission Selection (ETS) (IEEE 802.1Qaz) provides a method for allocating link bandwidth based on the 802.1p priority value in each packet’s VLAN tag.

– Data Center Bridging Capability Exchange Protocol (DCBX) (IEEE 802.1AB) allows neighboring network devices to exchange information about their capabilities.

� Manageability

– Simple Network Management Protocol (SNMP V1, V2, and V3)

– HTTP browser GUI


– Telnet interface for CLI

– Secure Shell (SSH)

– Serial interface for CLI

– Scriptable CLI

– Firmware image update: Trivial File Transfer Protocol (TFTP) and File Transfer Protocol (FTP)

– Network Time Protocol (NTP) for switch clock synchronization

� Monitoring

– Switch LEDs for external port status and switch module status indication

– Remote monitoring (RMON) agent to collect statistics and proactively monitor switch performance

– Port mirroring for analyzing network traffic that passes through the switch

– Change tracking and remote logging with syslog feature

– Support for sFLOW agent for monitoring traffic in data networks (separate sFLOW analyzer required elsewhere)

– POST diagnostic procedures

For more information, see the IBM Redbooks Product Guide for the IBM Flex System Fabric EN4093 10 Gb Scalable Switch, at:

http://www.redbooks.ibm.com/abstracts/tips0864.html?Open

4.10.5 IBM Flex System EN4091 10 Gb Ethernet Pass-thru

The EN4091 10 Gb Ethernet Pass-thru module offers a one for one connection between a single node bay and an I/O module uplink. It has no management interface, and can support both 1 Gb and 10 Gb dual-port adapters installed in the compute nodes. If quad-port adapters are installed in the compute nodes, only the first two ports have access to the pass-through module’s ports.

The necessary 1 GbE or 10 GbE module (SFP, SFP+, or DAC) must also be installed in the external ports of the pass-through to match the speed (1 Gb or 10 Gb) and medium (fiber optic or copper) of the adapter ports on the compute nodes.

The IBM Flex System EN4091 10 Gb Ethernet Pass-thru is shown in Figure 4-44.

Figure 4-44 IBM Flex System EN4091 10 Gb Ethernet Pass-thru


The ordering part number and feature codes are listed in Table 4-13.

Table 4-13   EN4091 10 Gb Ethernet Pass-thru part number and feature codes

Part number   Feature codes (a)   Product name
88Y6043       A1QV / 3700         IBM Flex System EN4091 10 Gb Ethernet Pass-thru

a. The first feature code listed is for configurations ordered through System x sales channels. The second feature code is for configurations ordered through the IBM Power Systems channel.

The EN4091 10 Gb Ethernet Pass-thru has the following specifications:

� Internal ports

14 internal full-duplex Ethernet ports that can operate at 1 Gb or 10 Gb speeds

� External ports

Fourteen ports for 1 Gb or 10 Gb Ethernet SFP+ transceivers (support for 1000BASE-SX, 1000BASE-LX, 1000BASE-T, 10GBASE-SR, or 10GBASE-LR) or SFP+ DAC. SFP+ modules and DAC cables are not included, and must be purchased separately.

� Unmanaged device that has no internal Ethernet management port. However, it is able to provide its VPD to the secure management network in the Chassis Management Module

� Supports 10 Gb Ethernet signaling for CEE, FCoE, and other Ethernet-based transport protocols.

� Allows direct connection from the 10 Gb Ethernet adapters installed in compute nodes in a chassis to an externally located top of rack switch or other external device.

There are three standard I/O module status LEDs as shown in Figure 4-40 on page 93. Each port has link and activity LEDs.

Table 4-14 lists the supported transceivers and DAC cables.

Table 4-14 IBM Flex System EN4091 10 Gb Ethernet Pass-thru part numbers and feature codes


Restriction: The EN4091 10 Gb Ethernet Pass-thru has only 14 internal ports. As a result, only two ports on each compute node are enabled, one for each of two pass-through modules installed in the chassis. If four-port adapters are installed in the compute nodes, ports 3 and 4 on those adapters are not enabled.

Part number   Feature codes (a)   Description

SFP+ transceivers - 10 GbE
44W4408       4942 / 3282         10 GbE 850 nm Fibre Channel SFP+ Transceiver (SR)
46C3447       5053 / None         IBM SFP+ SR Transceiver
90Y9412       A1PM / None         IBM SFP+ LR Transceiver

SFP transceivers - 1 GbE
81Y1622       3269 / EB2A         IBM SFP SX Transceiver
81Y1618       3268 / EB29         IBM SFP RJ45 Transceiver
90Y9424       A1PN / None         IBM SFP LX Transceiver

Direct-attach copper (DAC) cables
81Y8295       A18M / EN01         1m 10GE Twinax Act Copper SFP+ DAC (active)
81Y8296       A18N / EN02         3m 10GE Twinax Act Copper SFP+ DAC (active)
81Y8297       A18P / EN03         5m 10GE Twinax Act Copper SFP+ DAC (active)
95Y0323       A25A / None         1m IBM Active DAC SFP+ Cable
95Y0326       A25B / None         3m IBM Active DAC SFP+ Cable
95Y0329       A25C / None         5m IBM Active DAC SFP+ Cable

a. The first feature code listed is for configurations ordered through System x sales channels. The second feature code is for configurations ordered through the IBM Power Systems channel.


For more information, see the IBM Redbooks Product Guide for the IBM Flex System EN4091 10 Gb Ethernet Pass-thru, at:

http://www.redbooks.ibm.com/abstracts/tips0865.html?Open

4.10.6 IBM Flex System EN2092 1 Gb Ethernet Scalable Switch

The EN2092 1Gb Ethernet Switch provides support for L2/L3 switching and routing. The switch has these ports:

- Up to 28 internal 1 Gb ports
- Up to 20 external 1 Gb ports (RJ45 connectors)
- Up to 4 external 10 Gb uplink ports (SFP+ connectors)

The switch is shown in Figure 4-45.

Figure 4-45 IBM Flex System EN2092 1 Gb Ethernet Scalable Switch



As listed in Table 4-15, the switch comes standard with 14 internal and 10 external Gigabit Ethernet ports enabled. Further ports can be enabled, including the four external 10 Gb uplink ports. Upgrade 1 and the 10 Gb Uplinks upgrade can be applied in either order.

Table 4-15 IBM Flex System EN2092 1 Gb Ethernet Scalable Switch part numbers and port upgrades

Part number   Feature codea   Product description
49Y4294   A0TF / 3598   IBM Flex System EN2092 1 Gb Ethernet Scalable Switch (14 internal 1 Gb ports, 10 external 1 Gb ports)
90Y3562   A1QW / 3594   IBM Flex System EN2092 1 Gb Ethernet Scalable Switch (Upgrade 1): adds 14 internal 1 Gb ports and 10 external 1 Gb ports
49Y4298   A1EN / 3599   IBM Flex System EN2092 1 Gb Ethernet Scalable Switch (10 Gb Uplinks): adds 4 external 10 Gb uplinks

a. The first feature code listed is for configurations ordered through System x sales channels. The second feature code is for configurations ordered through the IBM Power Systems channel.

The key components on the front of the switch are shown in Figure 4-46.

Figure 4-46 IBM Flex System EN2092 1 Gb Ethernet Scalable Switch

The standard switch has 14 internal ports, and the Upgrade 1 license enables 14 additional internal ports. To take full advantage of those ports, each compute node needs the appropriate I/O adapter installed:

� The base switch requires a two-port Ethernet adapter installed in each compute node (one port of the adapter goes to each of two switches)

� Upgrade 1 requires a four-port Ethernet adapter installed in each compute node (two ports of the adapter to each switch)

The standard switch has 10 external ports enabled. Additional external ports are enabled with license upgrades:

� Upgrade 1 enables 10 additional ports for a total of 20 ports

� Uplinks Upgrade enables the four 10 Gb SFP+ ports.

These two upgrades can be installed in either order.
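The way port counts and external bandwidth scale with these upgrades can be summarized with a short calculation. The sketch below is illustrative only and simply restates the figures above (14 internal and 10 external 1 Gb ports standard, 14 more internal and 10 more external ports with Upgrade 1, and four 10 Gb uplinks with the Uplinks upgrade); the function name is arbitrary and this is not an IBM configuration tool.

# Illustrative sketch: tallies EN2092 port counts and external bandwidth for a
# given combination of Features on Demand (FoD) upgrades, using the figures
# quoted in this section. Not an IBM utility.
def en2092_ports(upgrade1=False, uplinks=False):
    internal_1gb = 14 + (14 if upgrade1 else 0)   # base 14, +14 with Upgrade 1
    external_1gb = 10 + (10 if upgrade1 else 0)   # base 10, +10 with Upgrade 1
    uplink_10gb = 4 if uplinks else 0             # 4 SFP+ uplinks with the Uplinks upgrade
    external_bw_gb = external_1gb * 1 + uplink_10gb * 10
    return internal_1gb, external_1gb, uplink_10gb, external_bw_gb

# Fully upgraded switch: 28 internal ports, 20 external 1 Gb ports, 4 x 10 Gb
# uplinks, which matches the "up to 60 Gb of total uplink bandwidth" figure.
print(en2092_ports(upgrade1=True, uplinks=True))   # (28, 20, 4, 60)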




This switch is considered ideal for clients with these characteristics:

� Still use 1 Gb as their networking infrastructure

� Are deploying virtualization and require multiple 1 Gb ports

� Want investment protection for 10 Gb uplinks

� Looking to reduce TCO and improve performance, while maintaining high levels of availability and security

� Looking to avoid oversubscription (multiple internal ports that attempt to pass through a lower quantity of external ports, leading to congestion and performance impact).

The switch has three switch status LEDs (see Figure 4-40 on page 93) and one mini-USB serial port connector for console management.

The external ports 1 - 20 are RJ45, and the four 10 Gb uplink ports are SFP+. The switch supports either SFP+ modules or DAC cables. The supported SFP+ modules and DAC cables for the switch are listed in Table 4-16.

Table 4-16 IBM Flex System EN2092 1 Gb Ethernet Scalable Switch SFP+ and DAC cables

Part number   Feature codea   Description

SFP transceivers
81Y1622   3269 / EB2A   IBM SFP SX Transceiver
81Y1618   3268 / EB29   IBM SFP RJ45 Transceiver
90Y9424   A1PN / None   IBM SFP LX Transceiver

SFP+ transceivers
44W4408   4942 / 3282   10 GbE 850 nm Fibre Channel SFP+ Transceiver (SR)
46C3447   5053 / None   IBM SFP+ SR Transceiver
90Y9412   A1PM / None   IBM SFP+ LR Transceiver

DAC cables
90Y9427   A1PH / None   1m IBM Passive DAC SFP+
90Y9430   A1PJ / ECB5   3m IBM Passive DAC SFP+
90Y9433   A1PK / None   5m IBM Passive DAC SFP+

a. The first feature code listed is for configurations ordered through System x sales channels. The second feature code is for configurations ordered through the IBM Power Systems channel.

The EN2092 1 Gb Ethernet Scalable Switch has the following features and specifications:

� Internal ports

– Twenty-eight internal full-duplex Gigabit ports. Fourteen ports are enabled by default. An optional FoD license is required to activate another 14 ports.

– Two internal full-duplex 1 GbE ports connected to the chassis management module

� External ports

– Four ports for 1 Gb or 10 Gb Ethernet SFP+ transceivers (support for 1000BASE-SX, 1000BASE-LX, 1000BASE-T, 10GBASE-SR, or 10GBASE-LR) or SFP+ DAC. These ports are disabled by default. An optional FoD license is required to activate them. SFP+ modules are not included and must be purchased separately.



– Twenty external 10/100/1000 1000BASE-T Gigabit Ethernet ports with RJ-45 connectors. Ten ports are enabled by default. An optional FoD license is required to activate another 10 ports.

– One RS-232 serial port (mini-USB connector) that provides an additional means to configure the switch module.

� Scalability and performance

– Fixed-speed external 10 Gb Ethernet ports for maximum uplink bandwidth

– Autosensing 10/100/1000 external Gigabit Ethernet ports for bandwidth optimization

– Non-blocking architecture with wire-speed forwarding of traffic

– MAC address learning: Automatic update, support of up to 32,000 MAC addresses

– Up to 128 IP interfaces per switch

– Static and LACP (IEEE 802.3ad) link aggregation, up to 60 Gb of total uplink bandwidth per switch, up to 64 trunk groups, up to 16 ports per group

– Support for jumbo frames (up to 9,216 bytes)

– Broadcast/multicast storm control

– IGMP snooping for limit flooding of IP multicast traffic

– IGMP filtering to control multicast traffic for hosts that participate in multicast groups

– Configurable traffic distribution schemes over trunk links based on source/destination IP or MAC addresses, or both

– Fast port forwarding and fast uplink convergence for rapid STP convergence

� Availability and redundancy

– VRRP for Layer 3 router redundancy

– IEEE 802.1D STP for providing L2 redundancy

– IEEE 802.1s MSTP for topology optimization, up to 32 STP instances supported by single switch

– IEEE 802.1w RSTP (provides rapid STP convergence for critical delay-sensitive traffic like voice or video)

– RPVST enhancements

– Layer 2 Trunk Failover to support active/standby configurations of network adapter teaming on compute nodes

– Hot Links provides basic link redundancy with fast recovery for network topologies that require Spanning Tree to be turned off

� VLAN support

– Up to 1024 VLANs supported per switch, with VLAN numbers that range from 1 to 4095 (4095 is used for the management module’s connection only)

– 802.1Q VLAN tagging support on all ports

– Private VLANs

� Security

– VLAN-based, MAC-based, and IP-based ACLs

– 802.1x port-based authentication

– Multiple user IDs and passwords


– User access control

– Radius, TACACS+, and Lightweight Directory Access Protocol (LDAP) authentication and authorization

� QoS

– Support for IEEE 802.1p, IP ToS/DSCP, and ACL-based (MAC/IP source and destination addresses, VLANs) traffic classification and processing

– Traffic shaping and remarking based on defined policies

– Eight WRR priority queues per port for processing qualified traffic

� IP v4 Layer 3 functions

– Host management

– IP forwarding

– IP filtering with ACLs, up to 896 ACLs supported

– VRRP for router redundancy

– Support for up to 128 static routes

– Routing protocol support (RIP v1, RIP v2, OSPF v2, BGP-4), up to 2048 entries in a routing table

– Support for DHCP Relay

– Support for IGMP snooping and IGMP relay

– Support for Protocol Independent Multicast (PIM) in Sparse Mode (PIM-SM) and Dense Mode (PIM-DM).

� IP v6 Layer 3 functions

– IPv6 host management (except default switch management IP address)

– IPv6 forwarding

– Up to 128 static routes

– Support for OSPF v3 routing protocol

– IPv6 filtering with ACLs

� Virtualization

– VMready

� Manageability

– Simple Network Management Protocol (SNMP V1, V2, and V3)

– HTTP browser GUI

– Telnet interface for CLI

– SSH

– Serial interface for CLI

– Scriptable CLI

– Firmware image update (TFTP and FTP)

– NTP for switch clock synchronization

� Monitoring

– Switch LEDs for external port status and switch module status indication

– RMON agent to collect statistics and proactively monitor switch performance


– Port mirroring for analyzing network traffic that passes through the switch

– Change tracking and remote logging with the syslog feature

– Support for the sFLOW agent for monitoring traffic in data networks (separate sFLOW analyzer required elsewhere)

– POST diagnostic functions

For more information, see the IBM Redbooks Product Guide for the IBM Flex System EN2092 1 Gb Ethernet Scalable Switch, at:

http://www.redbooks.ibm.com/abstracts/tips0861.html?Open

4.10.7 IBM Flex System FC5022 16 Gb SAN Scalable Switch

The IBM Flex System FC5022 16 Gb SAN Scalable Switch is a high-density, 48-port 16 Gbps Fibre Channel switch that is used in the Enterprise Chassis. The switch provides 28 internal ports to compute nodes by way of the midplane, and 20 external SFP+ ports. These storage area network (SAN) switch modules deliver an embedded option for IBM Flex System users who deploy SANs in their enterprise. They offer end-to-end 16 Gb and 8 Gb connectivity.

The N_Port Virtualization mode streamlines the infrastructure by reducing the number of domains to manage. It allows you to add or move servers without impact to the SAN. Monitoring is simplified by using an integrated management appliance. Clients who use end-to-end Brocade SAN can take advantage of the Brocade management tools.

Figure 4-47 shows the IBM Flex System FC5022 16 Gb SAN Scalable Switch.

Figure 4-47 IBM Flex System FC5022 16 Gb SAN Scalable Switch

Two versions are available as listed in Table 4-17: A 12-port switch module and a 24-port switch with the Enterprise Switch Bundle (ESB) software. The port count can be applied to internal or external ports by using a feature called Dynamic Ports on Demand (DPOD).

Table 4-17 IBM Flex System FC5022 16 Gb SAN Scalable Switch part numbers

Part number   Feature codesa   Description   Ports enabled
88Y6374   A1EH / 3770   IBM Flex System FC5022 16 Gb SAN Scalable Switch   12
90Y9356   A1EJ / 3771   IBM Flex System FC5022 24-port 16 Gb ESB SAN Scalable Switch   24

a. The first feature code listed is for configurations ordered through System x sales channels. The second feature code is for configurations ordered through the IBM Power Systems channel.


Table 4-18 provides a feature comparison between the FC5022 switch models.

Table 4-18 Feature comparison by model

Feature   FC5022 16 Gb ESB Switch (90Y9356)   FC5022 16 Gb SAN Scalable Switch (88Y6374)
Number of active ports   24   12
Full fabric   Included   Included
Access Gateway   Included   Included
Advanced zoning   Included   Included
Enhanced Group Management   Included   Included
ISL Trunking   Included   Not available
Adaptive Networking   Included   Not available
Advanced Performance Monitoring   Included   Not available
Fabric Watch   Included   Not available
Extended Fabrics   Included   Not available
Server Application Optimization   Included   Not available

With DPOD, ports are licensed as they come online. With the FC5022 16 Gb SAN Scalable Switch, the first 12 ports that report (on a first-come, first-served basis) on boot-up are assigned licenses. These 12 ports can be any combination of external or internal Fibre Channel ports. After all licenses are assigned, you can manually move those licenses from one port to another. Because this process is dynamic, no defined ports are reserved except ports 0 and 29. The FC5022 16 Gb ESB Switch has the same behavior. The only difference is the number of ports.
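A minimal sketch of this first-come, first-served behavior is shown below. It assumes a simple model in which ports 0 and 29 are pre-assigned and the remaining licenses go to ports in the order they report at boot; how the two reserved ports are counted against the pool is an assumption here. The code is illustrative and is not Brocade firmware logic.

# Illustrative model of Dynamic Ports on Demand (DPOD) on the FC5022:
# the first ports to report at boot receive the available port licenses,
# first-come, first-served. Ports 0 and 29 are treated as pre-assigned
# (an assumption about how the reservation noted above is counted).
def assign_dpod_licenses(reporting_order, pool_size=12):
    licensed = [0, 29]                      # reserved ports (assumed pre-licensed)
    for port in reporting_order:
        if len(licensed) >= pool_size:
            break                           # pool exhausted; later ports are unlicensed
        if port not in licensed:
            licensed.append(port)
    return licensed

# Example: internal ports 1-28 report in order on a 12-port base switch,
# so the first internal ports to come up take the remaining licenses.
print(assign_dpod_licenses(reporting_order=range(1, 29), pool_size=12))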

The part number for the switch includes the following items:

� One IBM Flex System FC5022 16 Gb SAN Scalable Switch or IBM Flex System FC5022 24-port 16 Gb ESB SAN Scalable Switch

� Important Notices Flyer

� Warranty Flyer

� Documentation CD-ROM

The switch does not include a serial management cable. However, IBM Flex System Management Serial Access Cable, 90Y9338, is supported and contains two cables: A mini-USB-to-RJ45 serial cable and a mini-USB-to-DB9 serial cable. Either cable can be used to connect to the switch locally for configuration tasks and firmware updates.



Transceivers
The switch comes without SFP+ transceivers. They must be ordered separately to provide outside connectivity. Table 4-19 lists the supported SFP+ options.

Table 4-19 Supported SFP+ transceivers

Part number   Feature codea   Description
88Y6416   5084 / 5370   Brocade 8 Gb SFP+ SW Optical Transceiver
88Y6393   A22R / 5371   Brocade 16 Gb SFP+ Optical Transceiver

a. The first feature code listed is for configurations ordered through System x sales channels. The second feature code is for configurations ordered through the IBM Power Systems channel.

Benefits
The switches offer the following key benefits:

� Exceptional price/performance for growing SAN workloads

The FC5022 16 Gb SAN Scalable Switch delivers exceptional price/performance for growing SAN workloads. It achieves this through a combination of market-leading 1,600 MBps throughput per port and an affordable high-density form factor. The 48 FC ports produce an aggregate 768 Gbps full-duplex throughput, and any eight external ports can be trunked for 128 Gbps inter-switch links (ISLs); a quick cross-check of these figures is sketched after this list. Because 16 Gbps port technology dramatically reduces the number of ports and associated optics and cabling required through 8/4 Gbps consolidation, the cost savings and simplification benefits are substantial.

� Accelerating fabric deployment and serviceability with diagnostic ports

Diagnostic Ports (D_Ports) are a new port type supported by the FC5022 16 Gb SAN Scalable Switch. They enable administrators to quickly identify and isolate 16 Gbps optics, port, and cable problems, reducing fabric deployment and diagnostic times. If the optical media is found to be the source of the problem, it can be transparently replaced because 16 Gbps optics are hot-pluggable.

� A building block for virtualized, private cloud storage

The FC5022 16 Gb SAN Scalable Switch supports multi-tenancy in cloud environments through VM-aware end-to-end visibility and monitoring, QoS, and fabric-based advanced zoning features. The FC5022 16 Gb SAN Scalable Switch enables secure distance extension to virtual private or hybrid clouds with dark Fibre support. They also enable in-flight encryption and data compression. Internal fault-tolerant and enterprise-class reliability, availability, and serviceability (RAS) features help minimize downtime to support mission-critical cloud environments.

� Simplified and optimized interconnect with Brocade Access Gateway

The FC5022 16 Gb SAN Scalable Switch can be deployed as a full-fabric switch or as a Brocade Access Gateway. It simplifies fabric topologies and heterogeneous fabric connectivity. Access Gateway mode uses N_Port ID Virtualization (NPIV) switch standards to present physical and virtual servers directly to the core of SAN fabrics. This configuration makes the module transparent to the SAN fabric, greatly reducing management of the network edge.

� Maximizing investments

To help optimize technology investments, IBM offers a single point of serviceability backed by industry-renowned education, support, and training. In addition, the IBM 16/8 Gbps SAN Scalable Switch is in the IBM ServerProven® program, enabling compatibility among various IBM and partner products. IBM recognizes that customers deserve the most innovative, expert integrated systems solutions.
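The throughput figures quoted in the first benefit follow from simple arithmetic, sketched below; the port counts and line rates come from this section, and the calculation is illustrative only.

# Cross-check of the FC5022 bandwidth figures quoted above (illustrative arithmetic only).
ports = 48                 # 28 internal + 20 external 16 Gb FC ports
gbps_per_port = 16         # line rate per port

aggregate_gbps = ports * gbps_per_port
print(aggregate_gbps)      # 768 -> "aggregate 768 Gbps full-duplex throughput"

trunk_ports = 8            # up to eight external ports per ISL trunk
print(trunk_ports * gbps_per_port)   # 128 -> "128 Gbps inter-switch links (ISLs)"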



Features and specifications
FC5022 16 Gb SAN Scalable Switches have the following features and specifications:

� Internal ports

– 28 internal full-duplex 16 Gb FC ports (up to 14 internal ports can be activated with the Ports on Demand feature; the remaining ports are reserved for future use)

– Internal ports operate as F_ports (fabric ports) in native mode or in access gateway mode

– Two internal full-duplex 1 GbE ports connect to the chassis management module

� External ports

– Twenty external ports for 16 Gb SFP+ or 8 Gb SFP+ transceivers that support 4 Gb, 8 Gb, and 16 Gb port speeds. SFP+ modules are not included and must be purchased separately. Ports are activated with the Ports on Demand feature.

– External ports can operate as F_ports, FL_ports (fabric loop ports), or E_ports (expansion ports) in native mode. They can operate as N_ports (node ports) in access gateway mode.

– One external 1 GbE port (1000BASE-T) with RJ-45 connector for switch configuration and management.

– One RS-232 serial port (mini-USB connector) that provides an additional means to configure the switch module.

� Access gateway mode (N_Port ID Virtualization - NPIV) support

� Power-on self-test diagnostics and status reporting

� ISL Trunking (licensable) allows up to eight ports (at 16, 8, or 4 Gbps speeds) to be combined. These ports form a single, logical ISL with a speed of up to 128 Gbps (256 Gbps full duplex). This configuration allows for optimal bandwidth utilization, automatic path failover, and load balancing.

� Brocade Fabric OS delivers distributed intelligence throughout the network and enables a wide range of value-added applications. These applications include Brocade Advanced Web Tools and Brocade Advanced Fabric Services (on certain models).

� Supports up to 768 Gbps I/O bandwidth

� 420 million frames switched per second, 0.7 microseconds latency

� 8,192 buffers for up to 3,750 km extended distance at 4 Gbps FC (Extended Fabrics license required); a rough buffer-credit estimate is sketched after this list

� In-flight 64 Gbps Fibre Channel compression and decompression support on up to two external ports (no license required)

� In-flight 32 Gbps encryption and decryption on up to two external ports (no license required)

� 48 Virtual Channels per port

� Port mirroring to monitor ingress or egress traffic from any port within the switch

� Two I2C connections able to interface with redundant management modules

� Hot pluggable, up to four hot pluggable switches per chassis

� Single fuse circuit

� Four temperature sensors

� Managed with Brocade Web Tools

� Supports a minimum of 128 domains in Native mode and Interoperability mode


� Nondisruptive code load in Native mode and Access Gateway mode

� 255 N_port logins per physical port

� D_port support on external ports

� Class 2 and Class 3 frames

� SNMP v1 and v3 support

� SSH v2 support

� Secure Sockets Layer (SSL) support

� NTP client support (NTP V3)

� FTP support for firmware upgrades

� SNMP/Management Information Base (MIB) monitoring functionality contained within the Ethernet Control MIB-II (RFC1213-MIB)

� End-to-end optics and link validation

� Sends switch events and syslogs to the CMM

� Traps identify cold start, warm start, link up/link down and authentication failure events

� Support for IPv4 and IPv6 on the management ports
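The relationship between the 8,192 buffers and the 3,750 km figure in the list above can be approximated with standard buffer-to-buffer credit arithmetic. The sketch below uses nominal values (2148-byte frames, 8b/10b encoding at 4.25 Gbaud, and roughly 5 microseconds per km of propagation in fiber); it is an estimate for illustration, not a Brocade sizing tool.

# Rough buffer-to-buffer credit estimate for long-distance FC links (illustrative only).
frame_bytes = 2148                 # maximum FC frame size
line_gbaud = 4.25e9                # 4 Gbps FC uses 8b/10b encoding at 4.25 Gbaud
frame_time_s = frame_bytes * 10 / line_gbaud      # ~5 microseconds to serialize a frame
prop_s_per_km = 5e-6                              # ~5 microseconds per km in fiber

frame_length_km = frame_time_s / prop_s_per_km    # ~1 km of fiber occupied per frame

distance_km = 3750
credits_needed = 2 * distance_km / frame_length_km   # round trip must stay full
print(round(credits_needed))   # ~7400, which fits within the 8,192 buffers available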

The FC5022 16 Gb SAN Scalable Switches come standard with the following software features:

� Brocade Full Fabric mode: Enables high performance 16 Gb or 8 Gb fabric switching

� Brocade Access Gateway mode: Uses NPIV to connect to any fabric without adding switch domains to reduce management complexity

� Dynamic Path Selection: Enables exchange-based load balancing across multiple Inter-Switch Links for superior performance

� Brocade Advanced Zoning: Segments a SAN into virtual private SANs to increase security and availability

� Brocade Enhanced Group Management: Enables centralized and simplified management of Brocade fabrics through IBM Network Advisor

Enterprise Switch Bundle software licenses
The IBM Flex System FC5022 24-port 16 Gb ESB SAN Scalable Switch includes a complete set of licensed features. These features maximize performance, ensure availability, and simplify management for the most demanding applications and expanding virtualization environments.

This switch comes with 24 port licenses that can be applied to either internal or external links on this switch.

This switch also includes the following ESB software licenses:

� Brocade Extended Fabrics

Provides up to 1000 km of switched fabric connectivity over long distances.

� Brocade ISL Trunking

Allows you to aggregate multiple physical links into one logical link for enhanced network performance and fault tolerance.


� Brocade Advanced Performance Monitoring

Enables performance monitoring of networked storage resources. This license includes the TopTalkers feature.

� Brocade Fabric Watch

Monitors mission-critical switch operations. Fabric Watch now includes the new Port Fencing capabilities.

� Adaptive Networking

Adaptive Networking provides a rich set of capabilities to the data center or virtual server environments. It ensures that high priority connections obtain the bandwidth necessary for optimum performance, even in congested environments. It optimizes data traffic movement within the fabric by using Ingress Rate Limiting, Quality of Service, and Traffic Isolation Zones.

� Server Application Optimization (SAO)

This license optimizes overall application performance for physical servers and virtual machines. SAO, when deployed with Brocade Fibre Channel host bus adapters (HBAs), extends Brocade Virtual Channel technology from fabric to the server infrastructure. This license delivers application-level, fine-grain QoS management to the HBAs and related server applications.

Supported Fibre Channel standards
The switches support the following Fibre Channel standards:

� FC-AL-2 INCITS 332: 1999

� FC-GS-5 ANSI INCITS 427 (includes the following):

– FC-GS-4 ANSI INCITS 387: 2004

� FC-IFR INCITS 1745-D, revision 1.03 (under development)

� FC-SW-4 INCITS 418:2006 (includes the following):

� FC-SW-3 INCITS 384: 2004

� FC-VI INCITS 357: 2002

� FC-TAPE INCITS TR-24: 1999

� FC-DA INCITS TR-36: 2004 (includes the following):

– FC-FLA INCITS TR-20: 1998

– FC-PLDA INCIT S TR-19: 1998

� FC-MI-2 ANSI/INCITS TR-39-2005

� FC-PI INCITS 352: 2002

� FC-PI-2 INCITS 404: 2005

� FC-PI-4 INCITS 1647-D, revision 7.1 (under development)

� FC-PI-5 INCITS 479: 2011

� FC-FS-2 ANSI/INCITS 424:2006 (includes the following):

– FC-FS INCITS 373: 2003

� FC-LS INCITS 433: 2007

� FC-BB-3 INCITS 414: 2006 (includes the following):

� FC-BB-2 INCITS 372: 2003


� FC-SB-3 INCITS 374: 2003 (replaces FC-SB ANSI X3.271: 1996 and FC-SB-2 INCITS 374: 2001)

� RFC 2625 IP and ARP Over FC

� RFC 2837 Fabric Element MIB

� MIB-FA INCITS TR-32: 2003

� FCP-2 INCITS 350: 2003 (replaces FCP ANSI X3.269: 1996)

� SNIA Storage Management Initiative Specification (SMI-S) Version 1.2 (includes the following):

– SNIA Storage Management Initiative Specification (SMI-S) Version 1.03 ISO standard IS24775-2006. (replaces ANSI INCITS 388: 2004)

– SNIA Storage Management Initiative Specification (SMI-S) Version 1.1.0

– SNIA Storage Management Initiative Specification (SMI-S) Version 1.2.0

For more information, see the IBM Redbooks Product Guide for the IBM Flex System FC5022 16 Gb SAN Scalable Switch, at:

http://www.redbooks.ibm.com/abstracts/tips0870.html?Open

4.10.8 IBM Flex System FC3171 8 Gb SAN Switch

The IBM Flex System FC3171 8 Gb SAN Switch is a full-fabric Fibre Channel switch module. It can be converted to a pass-through module when configured in transparent mode. Figure 4-48 shows the IBM Flex System FC3171 8 Gb SAN Switch.

Figure 4-48 IBM Flex System FC3171 8 Gb SAN Switch

The I/O module has 14 internal ports and 6 external ports. All ports are enabled; there are no port licensing requirements. Ordering information is listed in Table 4-20.

Table 4-20 FC3171 8 Gb SAN Switch

Part number   Feature codea   Product Name
69Y1930   A0TD / 3595   IBM Flex System FC3171 8 Gb SAN Switch

a. The first feature code listed is for configurations ordered through System x sales channels. The second feature code is for configurations ordered through the IBM Power Systems channel.


No SFP modules and cables are supplied as standard. The ones listed in Table 4-21 are supported.

Table 4-21 FC3171 8 Gb SAN Switch supported SFP modules and cables

Part number   Feature codesa   Description
44X1964   5075 / 3286   IBM 8 Gb SFP+ SW Optical Transceiver
39R6475   4804 / 3238   4 Gb SFP Transceiver Option

a. The first feature code listed is for configurations ordered through System x sales channels. The second feature code is for configurations ordered through the IBM Power Systems channel.

You can reconfigure the FC3171 8 Gb SAN Switch to become a pass-through module by using the switch GUI or CLI. The module can then be converted back to a full-function SAN switch at a future date. The switch requires a reset when transparent mode is turned on or off.

The switch can be configured by using either command line or QuickTools:

� Command Line: Access the switch by using the console port through the Chassis Management Module or through the Ethernet port. This method requires a basic understanding of the CLI commands.

� QuickTools: Requires a current version of the Java runtime environment (JRE) on your workstation before pointing a web browser to the switch’s IP address. The IP address of the switch must be configured. QuickTools does not require a license and code is included.

When in Full Fabric mode, the switch provides access to all of the Fibre Channel security features, including the additional services of SSL and SSH. In addition, RADIUS servers can be used for device and user authentication. After SSL/SSH is enabled, the security features can be configured. Configuring security features allows the SAN administrator to control which devices are allowed to log on to the Full Fabric switch module. This process is done by creating security sets with security groups, which are configured on a per-switch basis. The security features are not available when in pass-through mode.

FC3171 8 Gb SAN Switch specifications and standards:

� Fibre Channel standards:

– FC-PH version 4.3
– FC-PH-2
– FC-PH-3
– FC-AL version 4.5
– FC-AL-2 Rev 7.0
– FC-FLA
– FC-GS-3
– FC-FG
– FC-PLDA
– FC-Tape
– FC-VI
– FC-SW-2
– Fibre Channel Element MIB RFC 2837
– Fibre Alliance MIB version 4.0

� Fibre Channel protocols:

– Fibre Channel service classes: Class 2 and class 3



– Operation modes: Fibre Channel class 2 and class 3, connectionless

� External port type:

– Full fabric mode: Generic loop port
– Transparent mode: Transparent fabric port

� Internal port type:

– Full fabric mode: F_port
– Transparent mode: Transparent host port/NPIV mode
– Support for up to 44 host NPIV logins

� Port characteristics:

– External ports are automatically detected and self-configuring
– Port LEDs illuminate at startup
– Number of Fibre Channel ports: 6 external ports and 14 internal ports
– Scalability: Up to 239 switches maximum depending on your configuration
– Buffer credits: 16 buffer credits per port
– Maximum frame size: 2148 bytes (2112 byte payload)
– Standards-based FC FC-SW2 Interoperability
– Support for up to a 255 to 1 port-mapping ratio
– Media type: SFP+ module

� 2 Gb specifications

– 2 Gb fabric port speed: 1.0625 or 2.125 Gbps (gigabits per second)
– 2 Gb fabric latency: Less than 0.4 msec
– 2 Gb fabric aggregate bandwidth: 80 Gbps at full duplex

� 4 Gb specifications

– 4 Gb switch speed: 4.250 Gbps
– 4 Gb switch fabric point-to-point: 4 Gbps at full duplex
– 4 Gb switch fabric aggregate bandwidth: 160 Gbps at full duplex

� 8 Gb specifications

– 8 Gb switch speed: 8.5 Gbps
– 8 Gb switch fabric point-to-point: 8 Gbps at full duplex
– 8 Gb switch fabric aggregate bandwidth: 320 Gbps at full duplex

� Nonblocking architecture to prevent latency

� System processor: IBM PowerPC®

For more information, see the IBM Redbooks Product Guide for the IBM Flex System FC3171 8 Gb SAN Switch, at:

http://www.redbooks.ibm.com/abstracts/tips0866.html?Open


4.10.9 IBM Flex System FC3171 8 Gb SAN Pass-thru

The IBM Flex System FC3171 8 Gb SAN Pass-thru I/O module is an 8 Gbps Fibre Channel Pass-thru SAN module. It has 14 internal ports and six external ports. It is shipped with all ports enabled. Figure 4-49 shows the IBM Flex System FC3171 8 Gb SAN Pass-thru module.

Figure 4-49 IBM Flex System FC3171 8 Gb SAN Pass-thru

Ordering information is listed in Table 4-22.

Table 4-22 FC3171 8 Gb SAN Pass-thru part number

Part number   Feature codea   Description
69Y1934   A0TJ / 3591   IBM Flex System FC3171 8 Gb SAN Pass-thru

a. The first feature code listed is for configurations ordered through System x sales channels. The second feature code is for configurations ordered through the IBM Power Systems channel.

Exception: If you will need to enable full fabric capability later, do not purchase this module. Instead, purchase the FC3171 8 Gb SAN Switch.

No SFPs are supplied with the module; they must be ordered separately. Supported transceivers and fiber optic cables are listed in Table 4-23.

Table 4-23 FC3171 8 Gb SAN Pass-thru supported modules and cables

Part Number   Feature Code   Description
44X1964   5075 / 3286   IBM 8 Gb SFP+ SW Optical Transceiver
39R6475   4804 / 3238   4 Gb SFP Transceiver Option

The FC3171 8 Gb SAN Pass-thru can be configured by using either the command line or QuickTools:

� Command Line: Access the module by using the console port through the Chassis Management Module or through the Ethernet port. This method requires a basic understanding of the CLI commands.

� QuickTools: Requires a current version of the JRE on your workstation before pointing a web browser to the module’s IP address. The IP address of the module must be configured. QuickTools does not require a license, and the code is included.

The pass-through module supports the following standards:

� Fibre Channel standards:

– FC-PH version 4.3
– FC-PH-2
– FC-PH-3



– FC-AL version 4.5
– FC-AL-2 Rev 7.0
– FC-FLA
– FC-GS-3
– FC-FG
– FC-PLDA
– FC-Tape
– FC-VI
– FC-SW-2
– Fibre Channel Element MIB RFC 2837
– Fibre Alliance MIB version 4.0

� Fibre Channel protocols:

– Fibre Channel service classes: Class 2 and class 3
– Operation modes: Fibre Channel class 2 and class 3, connectionless

� External port type: Transparent fabric port

� Internal port type: Transparent host port/NPIV mode

– Support for up to 44 host NPIV logins

� Port characteristics:

– External ports are automatically detected and self-configuring
– Port LEDs illuminate at startup
– Number of Fibre Channel ports: 6 external ports and 14 internal ports
– Scalability: Up to 239 switches maximum depending on your configuration
– Buffer credits: 16 buffer credits per port
– Maximum frame size: 2148 bytes (2112 byte payload)
– Standards-based FC FC-SW2 Interoperability
– Support for up to a 255 to 1 port-mapping ratio
– Media type: SFP+ module

� Fabric point-to-point bandwidth: 2 Gbps or 8 Gbps at full duplex

� 2 Gb Specifications

– 2 Gb fabric port speed: 1.0625 or 2.125 Gbps (gigabits per second)
– 2 Gb fabric latency: Less than 0.4 msec
– 2 Gb fabric aggregate bandwidth: 80 Gbps at full duplex

� 4 Gb Specifications

– 4 Gb switch speed: 4.250 Gbps
– 4 Gb switch fabric point-to-point: 4 Gbps at full duplex
– 4 Gb switch fabric aggregate bandwidth: 160 Gbps at full duplex

� 8 Gb Specifications

– 8 Gb switch speed: 8.5 Gbps
– 8 Gb switch fabric point-to-point: 8 Gbps at full duplex
– 8 Gb switch fabric aggregate bandwidth: 320 Gbps at full duplex

� System processor: PowerPC

� Maximum frame size: 2148 bytes (2112 byte payload)

� Nonblocking architecture to prevent latency

For more information, see the IBM Redbooks Product Guide for the IBM Flex System FC3171 8 Gb SAN Pass-thru, at:

http://www.redbooks.ibm.com/abstracts/tips0866.html?Open


4.10.10 IBM Flex System IB6131 InfiniBand Switch

IBM Flex System IB6131 InfiniBand Switch is a 32 port InfiniBand switch. It has 18 FDR/QDR (56/40 Gbps) external ports and 14 FDR/QDR (56/40 Gbps) internal ports for connections to nodes. This switch ships standard with quad data rate (QDR) and can be upgraded to fourteen data rate (FDR). Figure 4-50 shows the IBM Flex System IB6131 InfiniBand Switch.

Figure 4-50 IBM Flex System IB6131 InfiniBand Switch

Ordering information is listed in Table 4-24.

Table 4-24 IBM Flex System IB6131 InfiniBand Switch Part Number and upgrade option

Part number   Feature codesa   Product Name
90Y3450   A1EK / 3699   IBM Flex System IB6131 InfiniBand Switch (18 external QDR ports, 14 internal QDR ports)
90Y3462   A1QX / ESW1   IBM Flex System IB6131 InfiniBand Switch (FDR Upgrade): upgrades all ports to FDR speeds

a. The first feature code listed is for configurations ordered through System x sales channels. The second feature code is for configurations ordered through the IBM Power Systems channel.

Running the MLNX-OS, this switch has one external 1 Gb management port and a mini USB Serial port for updating software and debug use. These ports are in addition to InfiniBand internal and external ports.

The switch has 14 internal QDR links and 18 QSFP uplink ports. All ports are enabled. The switch can be upgraded to FDR speed (56 Gbps) by using the FoD process with part number 90Y3462, as listed in Table 4-24.

No InfiniBand cables are shipped as standard with this switch; they must be purchased separately. Supported cables are listed in Table 4-25.

Table 4-25 IB6131 InfiniBand Switch supported InfiniBand cables

Part number   Feature codesa   Description
49Y9980   3866 / 3249   IB QDR 3m QSFP Cable Option (passive)
90Y3470   A227 / ECB1   3m FDR InfiniBand Cable (passive)

a. The first feature code listed is for configurations ordered through System x sales channels. The second feature code is for configurations ordered through the IBM Power Systems channel.

The switch has the following specifications:

� IBTA 1.3 and 1.21 compliance

� Congestion control

� Adaptive routing

� Port mirroring

� Auto-Negotiation of 10 Gbps, 20 Gbps, 40 Gbps, or 56 Gbps



� Mellanox QoS: 9 InfiniBand virtual lanes for all ports, eight data transport lanes, and one management lane

� High switching performance: Simultaneous wire-speed any port to any port

� Addressing: 48K Unicast Addresses maximum per Subnet, 16K Multicast Addresses per Subnet

� Switch throughput capability of 1.8 Tb/s
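The 1.8 Tb/s switching capability quoted above is consistent with simple port arithmetic, assuming all 32 ports (14 internal plus 18 external) run at the FDR data rate; the check below is illustrative only.

# Cross-check of the IB6131 switching capacity figure (illustrative arithmetic only).
ports = 14 + 18            # internal + external InfiniBand ports
fdr_gbps = 56              # FDR data rate per port

print(ports * fdr_gbps)    # 1792 Gb/s, which is about 1.8 Tb/s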

For more information, see the IBM Redbooks Product Guide for the IBM Flex System IB6131 InfiniBand Switch, at:

http://www.redbooks.ibm.com/abstracts/tips0871.html?Open

4.11 Infrastructure planning

This section addresses the key infrastructure planning areas of power, uninterruptible power supply (UPS), cooling, and console management that must be considered when deploying the IBM Flex System Enterprise Chassis.

This section contains these topics:

� 4.11.1, “Supported power cords”
� 4.11.2, “Supported PDUs and UPS units”
� 4.11.3, “Power planning” on page 120
� 4.11.4, “UPS planning” on page 124
� 4.11.5, “Console planning” on page 125
� 4.11.6, “Cooling planning” on page 126
� 4.11.7, “Chassis-rack cabinet compatibility” on page 127

For more information about planning your IBM Flex System power infrastructure, see the IBM Flex System Enterprise Chassis Power Requirements Guide at:

http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/PRS4401

4.11.1 Supported power cords

The Enterprise Chassis supports the power cords listed in Table 4-26. One power cord, feature 6292, is shipped with each power supply option or standard with the server (one per standard power supply).

Table 4-26 Supported power cords

Part number Feature code Description

00D7192 A2Y3 4.3 m, US/CAN, NEMA L15-30P - (3P+Gnd) to 3X IEC 320 C19

00D7193 A2Y4 4.3 m, EMEA/AP, IEC 309 32A (3P+N+Gnd) to 3X IEC 320 C19

00D7194 A2Y5 4.3 m, A/NZ, (PDL/Clipsal) 32A (3P+N+Gnd) to 3X IEC 320 C19

39Y7916 6252 2.5 m, 16A/100-240V, C19 to IEC 320-C20 Rack Power Cable

None 6292 2 m, 16A/100-250V, C19 to IEC 320-C20 Rack Power Cable

00D7195 6566 2.5 m, 15A/208V, C19 to NEMA 6-15P (US) power cords

00D7196 6537 1.8 m, 15A/208V, C19 to NEMA 6-15P (US) power cords

00D7197 A1NV 4.3 m, 15A/250V, C19 to NEMA 6-15P (US) power cords


4.11.2 Supported PDUs and UPS units

Table 4-27 lists the supported PDUs.

Table 4-27 Supported power distribution units

Part number   Description
39Y8923   DPI 60A 3-Phase C19 Enterprise PDU w/ IEC309 3P+G (208V) fixed power cords
39Y8938   30amp/125V Front-end PDU with NEMA L5-30P connector
39Y8939   30amp/250V Front-end PDU with NEMA L6-30P connector
39Y8940   60amp/250V Front-end PDU with IEC 309 60A 2P+N+Gnd connector
39Y8948   DPI Single Phase C19 Enterprise PDU w/o power cords
46M4002   IBM 1U 9 C19/3 C13 Active Energy Manager DPI PDU
46M4003   IBM 1U 9 C19/3 C13 Active Energy Manager 60A 3-Phase PDU
46M4140   IBM 0U 12 C19/12 C13 50A 3-Phase PDU
46M4134   IBM 0U 12 C19/12 C13 Switched and Monitored 50A 3-Phase PDU
46M4167   IBM 1U 9 C19/3 C13 Switched and Monitored 30A 3-Phase PDU
71762MX   IBM Ultra Density Enterprise PDU C19 PDU+ (WW)
71762NX   IBM Ultra Density Enterprise PDU C19 PDU (WW)
71763MU   IBM Ultra Density Enterprise PDU C19 3-Phase 60A PDU+ (NA)
71763NU   IBM Ultra Density Enterprise PDU C19 3-Phase 60A PDU (NA)

Table 4-28 lists the supported UPS units.

Table 4-28 Supported uninterruptible power supply units

Part number   Description
21303RX   IBM UPS 7500XHV
21304RX   IBM UPS 10000XHV
53956AX   IBM 6000VA LCD 4U Rack UPS (200V/208V)
53959KX   IBM 11000VA LCD 5U Rack UPS (230V)

4.11.3 Power planning

The Enterprise Chassis accommodates a maximum of six power supplies, so consider how to provide the best optimized power source. Both N+N and N+1 configurations are supported for maximum flexibility in power redundancy. With six power supplies installed, you can balance a 3-phase power input across a single chassis or a group of chassis.

Each power supply in the chassis has a 16A C20 3-pin socket, and can be fed by a C19 power cable from a suitable supply.



The chassis power system is designed for efficiency using datacenter power that consists of 3-phase, 60A Delta 200 VAC (North America), or 3-phase 32A wye 380-415 VAC (international). The chassis can also be fed from single phase 200-240VAC supplies if required.

The power is scaled as required, so as additional nodes are added the power and cooling increases accordingly.

This section explains both single phase and 3-phase example configurations for North America and worldwide, starting with 3-phase.

Power cabling: 32A at 380-415V 3-phase (International)Figure 4-51 shows one 3-phase, 32A wye PDU (worldwide, WW) providing power feeds for two chassis. In this case, an appropriate 3-phase power cable is selected for the Ultra-Dense Enterprise PDU+. This cable then splits the phases, supplying one phase to each of the three power supplies within each chassis. One 3-phase 32A wye PDU can power two fully populated chassis within a rack. A second PDU can be added for power redundancy from an alternative power source, if the chassis is configured N+N.

Figure 4-51 shows a typical configuration given a 32A 3-phase wye supply at 380-415VAC (often termed “WW” or “International”) N+N.

Figure 4-51 Example power cabling 32A at 380-415V 3-phase: International

The maximum number of Enterprise Chassis that can be installed within a 42U rack is four. Therefore, a full rack requires a total of four 32A 3-phase wye feeds to provide a fully redundant N+N configuration.

(In Figure 4-51, each chassis power supply is fed by an IEC320 16A C19-C20 3 m power cable from a 46M4002 1U 9 C19/3 C13 switched and monitored DPI PDU, which is connected by a 40K9611 IBM DPI 32a cord, IEC 309 3P+N+G.)
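A quick check of the per-phase loading in this configuration is sketched below. It uses the 13.85 A maximum draw per power supply quoted later in this section and assumes, as in Figure 4-51, that each phase of the 32A wye feed is split to one power supply in each of the two chassis. This is an illustrative calculation, not an IBM power-planning tool.

# Illustrative per-phase load check for one 32A wye PDU feeding two chassis,
# as in Figure 4-51: each phase feeds one power supply in each chassis.
psu_max_amps = 13.85          # maximum PSU draw at 200 VAC (quoted later in this section)
psus_per_phase = 2            # one PSU per chassis on each phase, two chassis
phase_rating_amps = 32

load = psus_per_phase * psu_max_amps
print(round(load, 2), load <= phase_rating_amps)   # 27.7 True -> two chassis fit on one PDU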


Power cabling: 60A at 208V 3-phase (North America)In North America, the chassis requires four 60A 3-phase delta supplies at 200 - 208 VAC. A configuration optimized for 3-phase configuration is shown in Figure 4-52.

Figure 4-52 Example of power cabling 60A at 208V 3-phase

(In Figure 4-52, each chassis power supply is fed by an IEC320 16A C19-C20 3 m power cable from a 46M4003 1U 9 C19/3 C13 switched and monitored DPI PDU, which includes a fixed IEC60309 3P+G 60A line cord.)


Power Cabling: Single Phase 63A (International)Figure 4-53 shows International 63A single phase supply feed example. This example uses the switched and monitored PDU+ with an appropriate power cord. Each PSU can draw up to 13.85A from its supply. Therefore, a single chassis can easily be fed from a 63A single phase supply, leaving 18.45A available capacity. This capacity could feed a single PSU on a second chassis power supply (13.85A). Or it could be available for the PDU to supply further items in the rack such as servers or storage devices.

Figure 4-53 Single phase 63A supply

(In Figure 4-53, the chassis power supplies are fed from a 46M4002 1U 9 C19/3 C13 switched and monitored DPI PDU, which is connected by a 40K9613 IBM DPI 63a cord, IEC 309 P+N+G.)


Power Cabling: 60A 200VAC single phase supply (North America)In North America, UL derating means that a 60 Amp PDU supplies only 48 Amps. At 200VAC, the power supplies in the Enterprise Chassis draw a maximum of 13.85 Amps. Therefore, a single phase 60A supply can power a fully configured chassis. A further 6.8Amps is available from the PDU to power additional items within the chassis such as servers or storage (Figure 4-54).

Figure 4-54 60A 200VAC single phase supply (building power is 200 VAC, 60 Amp, single phase, with 48A supplied by the PDU after UL derating; the 46M4002 1U 9 C19/3 C13 switched and monitored DPI PDU is fed by a 40K9615 IBM DPI 60a cord, IEC 309 2P+G)

For more information about planning your IBM Flex System power infrastructure, see the IBM Flex System Enterprise Chassis Power Requirements Guide at:

http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/PRS4401

4.11.4 UPS planning

It is possible to power the Enterprise Chassis with a UPS, which provides protection in case of power failure or interruption. IBM does not offer a 3-phase UPS. However, the single phase UPS units available from IBM can be used to supply power to a chassis at both 200VAC and 220VAC. An alternative is to use a third-party UPS product if 3-phase power is required.



At international voltages, the 11000VA UPS is ideal for powering a fully loaded chassis. Figure 4-55 shows how each power feed can be connected to one of the four 20A outlets on the rear of the UPS. This UPS requires hard wiring to a suitable supply by a qualified electrician.

Figure 4-55 Two UPS11000 international single-phase (208-230VAC)

In North America the available UPS at 200-208VAC is the UPS6000. This UPS has two outlets that can be used to power two of the power supplies within the chassis. In a fully loaded chassis, the third pair of power supplies must be connected to another UPS. Figure 4-56 shows this UPS configuration.

Figure 4-56 Two UPS 6000 North American (200-208VAC)

For more information, see the at-a-glance guide for the IBM 11000VA LCD 5U Rack Uninterruptible Power Supply at:

http://www.redbooks.ibm.com/abstracts/tips0814.html

4.11.5 Console planning

Although the Enterprise Chassis is a “lights out” system and can be managed remotely with ease, there are other ways to access a node console:

� Each node can be accessed individually by plugging a console breakout cable into the front of the node. This cable presents a 15-pin video connector, two USB sockets, and a serial connector at the front. Connecting a portable screen and a USB keyboard/mouse near the front of the chassis enables quick connection into the console breakout cable and direct access into the node. This configuration is often called “crash cart” management capability.



� Connection to the FSM management interface by browser allows remote presence to each node within the chassis.

� Connection remotely into the Ethernet management port of the CMM by using the browser allows remote presence to each node within the chassis.

� You can also connect directly to each IMM2 on a node and start a remote console session to that node through the IMM.

4.11.6 Cooling planning

The chassis is designed to operate in temperatures up to 40°C (104°F), in ASHRAE class A3 operating environments.

The airflow requirements for the Enterprise Chassis are from 270 CFM (cubic feet per minute) to a maximum of 1020 CFM.

The Enterprise Chassis has these environmental specifications:

� Humidity, non-condensing: -12°C dew point (10.4°F) and 8% - 85% relative humidity

� Maximum dew point: 24°C (75°F)

� Maximum elevation: 3050 m (10,006 ft)

� Maximum rate of temperature change: 5°C/hr (9°F/hr)

Heat Output (approximate):

� Maximum configuration: potentially 12.9kW

The 12.9kW figure is only a potential maximum, where the most power hungry configuration is chosen and all power envelopes are maximum. For a more realistic figure, use the IBM Power Configurator tool to establish specific power requirements for a configuration. This tool can be found at:

http://www.ibm.com/systems/x/hardware/configtools.html
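For a rough feel for how heat load and airflow interact, the sketch below estimates the air temperature rise across a chassis from the heat output and airflow figures above, assuming nominal air properties (about 1.2 kg/m3 density and 1005 J/kg-K specific heat). It is an approximation for planning discussions only; use the Power Configurator for real sizing.

# Rough estimate of chassis exhaust air temperature rise (illustrative only).
# delta_T = heat / (mass_flow * specific_heat), with nominal air properties.
def exhaust_delta_t(heat_watts, airflow_cfm, air_density=1.2, cp_air=1005.0):
    airflow_m3s = airflow_cfm * 0.000471947    # CFM to cubic metres per second
    mass_flow = airflow_m3s * air_density      # kg/s of air through the chassis
    return heat_watts / (mass_flow * cp_air)   # temperature rise in degrees C

# A 12.9 kW worst case at the maximum 1020 CFM gives roughly a 22 C rise.
print(round(exhaust_delta_t(12900, 1020), 1))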

Datacenter operation at environmental temperatures above 35°C would generally be in a free air cooling environment. This is the expected definition of ASHRAE class A3 (and also the A4 class, which raises the upper limit to 45°C). A conventional datacenter would not normally run computer room air conditioning (CRAC) units up to 40°C. The risk of a CRAC failure, or of power to the CRACs failing, gives limited time for shutdowns before over-temperature conditions occur. The IBM Flex System Enterprise Chassis is suitable for operation in an ASHRAE class A3 environment, in both operating and non-operating modes.

Information about ASHRAE 2011 thermal guidelines, datacenter classes, and white papers can be found at the American Society of Heating, Refrigerating, and Air-Conditioning Engineers (ASHRAE) website at:

http://www.ashrae.org

The chassis can be installed within either IBM or non-IBM racks. However, the IBM 42U 1100 mm Enterprise V2 Dynamic Rack has a footprint that, in North America, is a single floor tile wide and two floor tiles deep. More information about this sizing is detailed in 4.12, “IBM 42U 1100 mm Enterprise V2 Dynamic Rack” on page 128.

If installed within a non-IBM rack, the vertical rails must have clearances to EIA-310-D. There must be sufficient room in front of the vertical front rack mounting rail to provide minimum bezel clearance of 70 mm (2.76 inches) depth. The rack must be sufficient to support the weight of the chassis, cables, power supplies, and other items installed within. There must be sufficient room behind the rear rack rails to provide for cable management and routing. Ensure the stability of any non-IBM rack by using stabilization feet or baying kits so that it does not become unstable when it is fully populated. Finally, ensure that sufficient airflow is available to the Enterprise Chassis. Racks with glass fronts do not normally allow sufficient airflow into the chassis.

4.11.7 Chassis-rack cabinet compatibility

IBM offers an extensive range of industry-standard, EIA-compatible rack enclosures and expansion units. The flexible rack solutions help you consolidate servers and save space, while allowing easy access to crucial components and cable management.

Table 4-29 lists the IBM Flex System Enterprise Chassis supported in each rack cabinet.

Table 4-29 The chassis supported in each rack cabinet

Rack cabinet   Part Number   Feature code   Enterprise Chassis
IBM 11U Office Enablement Kit   201886X   2731   Yes
IBM S2 25U Static standard rack   93072PX   6690   Yes
IBM S2 25U Dynamic standard rack   93072RX   1042   Yes
IBM S2 42U standard rack   93074RX   1043   Yes
IBM S2 42U Dynamic standard rack   99564RX   5629   Yes
IBM 42U Enterprise rack   93084PX   5621   Yes
IBM 42U 1200 mm Deep Dynamic Rack   93604PX   7649   Yes
IBM 42U 1200 mm Deep Static rack   93614PX   7651   Yes
IBM 47U 1200 mm Deep Static rack   93624PX   7653   Yes
IBM 42U 1100 mm Deep Dynamic racka   93634PX   7953   Yes

a. This rack cabinet is optimized for IBM Flex System Enterprise Chassis, including dedicated front to back cable raceways. For more information, see 4.12, “IBM 42U 1100 mm Enterprise V2 Dynamic Rack” on page 128.

The IBM Flex System Enterprise Chassis is not supported in the IBM Netfinity® 42U 9306-900 and 9306-910. It is also not supported in the Netfinity Enterprise 9308-42P and the 9308-42X racks. The NetBay 22U is not a supported rack configuration. These racks have glass-fronted doors that allow insufficient airflow for the IBM Flex System Enterprise Chassis. In some cases, the chassis depth is such that the chassis cannot be accommodated within the dimensions of the rack.



4.12 IBM 42U 1100 mm Enterprise V2 Dynamic Rack

The IBM 42U 1100 mm Enterprise V2 Dynamic Rack is an industry-standard 24-inch rack that supports the Enterprise Chassis, BladeCenter, System x servers, and options. It is available in either Primary or Expansion form. The expansion rack is designed for baying and has no side panels. It ships with a baying kit. After it is attached to the side of a primary rack, the side panel removed from the primary rack is attached to the side of the expansion rack. The available configurations are shown in Table 4-30.

Table 4-30 Rack options and part numbers

Model   Description   Details
9363-4PX   IBM 42U 1100 mm Enterprise V2 Dynamic Rack   Rack ships with side panels and is stand-alone.
9363-4EX   IBM 42U 1100mm Enterprise V2 Dynamic Expansion Rack   Rack ships with no side panels, and is designed to attach to a primary rack.

This 42U rack conforms to the EIA(TM)-310-D industry standard for a 24-inch, type A rack cabinet. The dimensions are listed in Table 4-31.

Table 4-31 Dimensions of IBM 42U 1100 mm Enterprise V2 Dynamic Rack, 9363-4PX

Dimension   Value
Height   2009 mm (79.1 in)
Width   600 mm (23.6 in)
Depth   1100 mm (43.3 in)
Weight   174 kg (384 lb), including outriggers

The rack features outriggers (stabilizers) allowing for movement while populated.



Figure 4-57 shows the 9363-4PX rack.

Figure 4-57 9363-4PX Rack (note tile width relative to rack)

Features of the IBM 42U 1100 mm Enterprise V2 Dynamic Rack rack are:

� A perforated front door allows for improved air flow.

� Square EIA Rail mount points

� Six side-wall compartments support 1U-high PDUs and switches without taking up valuable rack space.

� Cable management rings are included to assist in cable management

� Easy to install and remove side panels are a standard feature.

� The front door can be hinged on either side, providing flexibility to open in either direction.


� Front and rear doors and side panels include locks and keys to help secure servers.

� Heavy-duty casters with the use of outriggers (stabilizers) come with the 42U Dynamic racks for added stability, allowing movement of the rack while loaded.

� Tool-less 0U PDU rear channel mounting reduces installation time and increases accessibility

� 1U PDU can be mounted to present power outlets to the rear of the chassis in side pocket openings.

� Removable top and bottom cable access panels in both front and rear

Dynamic design
IBM is the only leading vendor with specific ship-loadable designs. These kinds of racks are called dynamic racks. The IBM 42U 1100 mm Enterprise V2 Dynamic Rack and IBM 42U 1100mm Enterprise V2 Dynamic Expansion Rack are dynamic racks.

A dynamic rack has extra heavy-duty construction and sturdy packaging that can be reused for shipping a fully loaded rack. They also have outrigger casters for secure movement and tilt stability. Dynamic racks also include a heavy-duty shipping pallet that includes a ramp for easy “on and off” maneuvering.

Dynamic racks undergo additional shock and vibration testing, and all IBM racks are of welded rather than the more flimsy bolted construction.


Figure 4-58 shows the rear view of the 42U 1100 mm Flex System Dynamic Rack.

Figure 4-58 42U 1100 mm Flex System Dynamic Rack rear view, with doors and sides panels removed

The IBM 42U 1100 mm Enterprise V2 Dynamic Rack rack also provides additional space for front cable management and the use of front to back cable raceways. There are four cable raceways on each rack: Two each side. The raceways allow cables to be routed from the front of the rack, through the raceway and out to the rear of the rack. The raceways also have openings into the side bays of the rack to allow connection into those bays.



Figure 4-59 shows the cable raceways.

Figure 4-59 Cable raceway (as viewed from rear of rack)

The 1U rack PDUs can also be accommodated in the side bays. In these bays, the PDU is mounted vertically in the rear of the side bay and presents its outlets to the rear of the rack. Four 0U PDUs can also be vertically mounted in the rear of the rack.



The rack width is 600 mm, which is a standard width of a floor tile in many locations, to complement current raised floor datacenter designs. Dimensions of the rack base are shown in Figure 4-60.

Figure 4-60 Rack dimensions

The rack has square mounting holes common in the industry, onto which the Enterprise Chassis and other server and storage products can be mounted.



For implementations where the front anti-tip plate is not required, an air baffle/air recirculation prevention plate is supplied with the rack. You might not want to use the plate when an airflow tile must be positioned directly in front of the rack.

This air baffle shown in Figure 4-61 can be installed to the lower front of the rack. It helps prevent warm air from the rear of the rack from circulating underneath the rack to the front, improving the cooling efficiency of the entire rack solution.

Figure 4-61 Recirculation prevention plate

4.13 IBM Rear Door Heat eXchanger V2 Type 1756

The IBM Rear Door Heat eXchanger V2 is designed to attach to the rear of these racks:

� IBM 42U 1100 mm Enterprise V2 Dynamic Rack
� IBM 42U 1100mm Enterprise V2 Dynamic Expansion Rack

It provides effective cooling for the warm air exhausts of equipment mounted within the rack. The heat exchanger has no moving parts to fail and no power is required.

The rear door heat exchanger can be used to improve cooling and reduce cooling costs in a high density HPC Enterprise Chassis environment.



The physical design of the door is slightly different to that of the existing Rear Door Heat Exchanger (32R0712) marketed by IBM System x. This door has a wider rear aperture as shown in Figure 4-62. It is designed for attachment specifically to the rear of either an IBM 42U 1100 mm Enterprise V2 Dynamic Rack or IBM 42U 1100mm Enterprise V2 Dynamic Expansion Rack.

Figure 4-62 Rear Door Heat Exchanger

Attaching a rear door heat exchanger to the rear of a rack allows up to 100,000 BTU/hr (about 30 kW) of heat to be removed at the rack level.
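The two ratings quoted above are the same quantity in different units: 1 kW is approximately 3,412 BTU/hr, so 30 kW is roughly 102,000 BTU/hr, as the quick conversion below shows (illustrative only).

# kW to BTU/hr conversion for the rear door heat exchanger rating (illustrative only).
BTU_HR_PER_KW = 3412.14

print(round(30 * BTU_HR_PER_KW))   # ~102364 BTU/hr, that is, roughly 100,000 BTU/hr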

As the warm air passes through the heat exchanger, it is cooled with water and exits the rear of the rack cabinet into the datacenter. The door is designed to provide an overall air temperature drop of up to 25°C measured between air that enters the exchanger and exits the rear.


Figure 4-63 shows the internal workings of the IBM Rear Door Heat eXchanger V2.

Figure 4-63 IBM Rear Door Heat eXchanger V2

The supply inlet hose provides an inlet for chilled, conditioned water. A return hose delivers warmed water back to the water pump or chiller in the cool loop. It must meet the water supply requirements for secondary loops.


Figure 4-64 shows the percentage heat removed from a 30 KW heat load as a function of water temperature and water flow rate. With 18 Degrees at 10 (gpm), 90% of 30 kW heat is removed by the door.

Figure 4-64 Heat removal by Rear Door Heat eXchanger V2 at 30 kW of heat
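As a worked example of reading Figure 4-64 (derived from the values quoted above, not an additional specification): with 18°C water at 10 gpm, the door removes approximately

    0.90 x 30 kW = 27 kW (about 92,000 BTU/hr)

leaving roughly 3 kW of the rack heat load to be handled by the room air conditioning.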

For efficient cooling, water pressure and water temperature must be delivered in accordance with the specifications listed in Table 4-32. The temperature must be maintained above the dew point to prevent condensation from forming.

Table 4-32 1756 RDHX specifications


Rear Door Heat eXchanger V2 specifications:

Depth                      129 mm (5.0 in)
Width                      600 mm (23.6 in)
Height                     1950 mm (76.8 in)
Empty weight               39 kg (85 lb)
Filled weight              48 kg (105 lb)
Temperature drop           Up to 25°C (45°F) between air exiting and entering the RDHX
Water temperature          Above dew point: 18°C ±1°C (64.4°F ±1.8°F) for an ASHRAE Class 1 environment; 22°C ±1°C (71.6°F ±1.8°F) for an ASHRAE Class 2 environment
Required water flow rate   Minimum: 22.7 liters (6 gallons) per minute; maximum: 56.8 liters (15 gallons) per minute (as measured at the supply entrance to the heat exchanger)
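The flow-rate specification can be put in perspective with a rough estimate of the water temperature rise across the door. The following Python sketch is illustrative only and is not part of the IBM specification; it assumes that the full 30 kW heat load is transferred to the water and uses standard properties of water.

# Rough estimate (not an IBM specification): water temperature rise across the
# heat exchanger for a given heat load and flow rate, assuming all of the heat
# is absorbed by the water.

WATER_DENSITY_KG_PER_L = 0.998          # at roughly 20 C
WATER_SPECIFIC_HEAT_J_PER_KG_K = 4186
LITERS_PER_GALLON = 3.785

def water_temp_rise_c(heat_load_w, flow_gpm):
    """Return the approximate water temperature rise in degrees C."""
    flow_l_per_s = flow_gpm * LITERS_PER_GALLON / 60.0
    mass_flow_kg_per_s = flow_l_per_s * WATER_DENSITY_KG_PER_L
    return heat_load_w / (mass_flow_kg_per_s * WATER_SPECIFIC_HEAT_J_PER_KG_K)

for gpm in (6, 10, 15):   # minimum, mid-range, and maximum supported flow rates
    print(f"{gpm:>2} gpm -> about {water_temp_rise_c(30000, gpm):.1f} C rise")

At the minimum flow rate the water warms by roughly 19°C, and at the maximum by roughly 8°C, which is why the supply temperature must be controlled relative to the dew point.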


The installation and planning guide lists suppliers that can provide coolant distribution unit solutions, flexible hose assemblies, and water treatment that meets the suggested water quality requirements. Three people are required to install the rear door heat exchanger, and a non-conductive step ladder is needed to attach the upper hinge assembly. Consult the planning and implementation guides before proceeding.

The installation and planning guides can be found at:

http://www.ibm.com/support/entry/portal/


Chapter 5. Compute nodes

This chapter describes the IBM Flex System servers, or compute nodes. The applications installed on the compute nodes can run natively on a dedicated physical server, or they can be virtualized in a virtual machine managed by a hypervisor layer.

The IBM Flex System portfolio of compute nodes includes Intel Xeon processors and IBM POWER7 processors. Depending on the compute node design, nodes can come in one of these form factors:

• Half-wide node: Occupies one chassis bay, half the width of the chassis (approximately 215 mm or 8.5 in.). An example is the IBM Flex System x240 Compute Node.

• Full-wide node: Occupies two chassis bays side-by-side, the full width of the chassis (approximately 435 mm or 17 in.). An example is the IBM Flex System p460 Compute Node.

This chapter includes the following sections:

• 5.1, “IBM Flex System Manager” on page 140
• 5.2, “IBM Flex System x240 Compute Node” on page 140
• 5.3, “IBM Flex System x220 Compute Node” on page 177
• 5.4, “IBM Flex System p260 and p24L Compute Nodes” on page 198
• 5.5, “IBM Flex System p460 Compute Node” on page 216
• 5.6, “I/O adapters” on page 234



5.1 IBM Flex System Manager

The IBM Flex System Manager (FSM) is a high performance scalable system management appliance based on the IBM Flex System x240 Compute Node. The FSM hardware comes preinstalled with systems management software that enables you to configure, monitor, and manage IBM Flex System resources in up to four chassis.

For more information about the hardware and software of the FSM, see 3.5, “IBM Flex System Manager” on page 46.

5.2 IBM Flex System x240 Compute Node

The IBM Flex System x240 Compute Node, available as machine type 8737 with a three-year warranty, is a half-wide, two-socket server. It runs the latest Intel Xeon E5-2600 family processors (formerly code-named Sandy Bridge-EP). It is ideal for infrastructure, virtualization, and enterprise business applications, and is compatible with the IBM Flex System Enterprise Chassis.

5.2.1 Introduction

The x240 supports the following equipment:

• Up to two Intel Xeon E5-2600 series multi-core processors
• 24 dual inline memory module (DIMM) sockets
• Two hot-swap drive bays
• Two PCI Express I/O adapters
• Two optional internal USB connectors

Figure 5-1 shows the x240.

Figure 5-1 The x240 type 8737


Figure 5-2 shows the location of the controls, LEDs, and connectors on the front of the x240.

Figure 5-2 The front of the x240 showing the location of the controls, LEDs, and connectors

Figure 5-3 shows the internal layout and major components of the x240.

Figure 5-3 Exploded view of the x240, showing the major components



Table 5-1 lists the features of the x240.

Table 5-1 Features of the x240 type 8737

Component Specification

Form factor Half-wide compute node

Chassis support IBM Flex System Enterprise Chassis

Processor Up to two Intel Xeon Processor E5-2600 product family processors. These processors can be eight-core (up to 2.9 GHz), six-core (up to 2.9 GHz), quad-core (up to 3.3 GHz), or dual-core (up to 3.0 GHz). Two QPI links up to 8.0 GT/s each. Up to 1600 MHz memory speed. Up to 20 MB L3 cache.

Chipset Intel C600 series.

Memory Up to 24 DIMM sockets (12 DIMMs per processor) using Low Profile (LP) DDR3 DIMMs. RDIMMs, UDIMMs, and LRDIMMs supported. 1.5V and low-voltage 1.35V DIMMs supported. Support for up to 1600 MHz memory speed, depending on the processor. Four memory channels per processor, with three DIMMs per channel.

Memory maximums With LRDIMMs: up to 768 GB with 24x 32 GB LRDIMMs and two processors. With RDIMMs: up to 384 GB with 24x 16 GB RDIMMs and two processors. With UDIMMs: up to 64 GB with 16x 4 GB UDIMMs and two processors.

Memory protection ECC, optional memory mirroring, and memory rank sparing.

Disk drive bays Two 2.5" hot-swap SAS/SATA drive bays that support SAS, SATA, and SSD drives. Optional eXFlash support for up to eight 1.8” SSDs.

Maximum internal storage With two 2.5-inch hot-swap drives: up to 2 TB with 1 TB 2.5-inch NL SAS HDDs; up to 1.8 TB with 900 GB 2.5-inch SAS HDDs; up to 2 TB with 1 TB 2.5-inch SATA HDDs; or up to 512 GB with 256 GB 2.5-inch SATA SSDs. An intermix of SAS and SATA HDDs and SSDs is supported. With eXFlash 1.8-inch SSDs and the ServeRAID M5115 RAID adapter: up to 1.6 TB with eight 200 GB 1.8-inch SSDs.

RAID support RAID 0, 1, 1E, and 10 with integrated LSI SAS2004 controller. Optional ServeRAID M5115 RAID controller with RAID 0, 1, 10, 5, or 50 support and 1 GB cache. Supports up to eight 1.8” SSD with expansion kits. Optional flash-backup for cache, RAID 6/60, and SSD performance enabler.

Network interfaces x2x models: Two 10 Gb Ethernet ports with Embedded 10 Gb Virtual Fabric Ethernet LAN on motherboard (LOM) controller, Emulex BladeEngine 3 based. x1x models: None standard; optional 1 Gb or 10 Gb Ethernet adapters.

PCI Expansion slots Two I/O connectors for adapters. PCI Express 3.0 x16 interface.

Ports USB ports: one external. Two internal for embedded hypervisor with optional USB Enablement Kit. Console breakout cable port that provides local keyboard video mouse (KVM) and serial ports (cable standard with chassis; additional cables optional)

Systems management UEFI, IBM Integrated Management Module II (IMM2) with Renesas SH7757 controller, Predictive Failure Analysis, light path diagnostics panel, automatic server restart, and remote presence. Support for IBM Flex System Manager, IBM Systems Director, Active Energy Manager, and IBM ServerGuide.

Security features Power-on password, administrator's password, Trusted Platform Module 1.2


Video                        Matrox G200eR2 video core with 16 MB of video memory integrated into the IMM2. Maximum resolution is 1600x1200 at 75 Hz with 16M colors.

Limited warranty             3-year customer-replaceable unit and onsite limited warranty with 9x5/NBD (next business day) response

Operating systems supported  Microsoft Windows Server 2008 R2, Red Hat Enterprise Linux 5 and 6, SUSE Linux Enterprise Server 10 and 11, VMware vSphere. For more information, see 5.2.13, “Operating system support” on page 176.

Service and support          Optional service upgrades are available through IBM ServicePacs: 4-hour or 2-hour response time, 8-hour fix time, 1-year or 2-year warranty extension, and remote technical support for IBM hardware and selected IBM and OEM software.

Dimensions                   Width 215 mm (8.5 in.), height 51 mm (2.0 in.), depth 493 mm (19.4 in.)

Weight                       Maximum configuration: 6.98 kg (15.4 lb)

Figure 5-4 shows the components on the system board of the x240.

Figure 5-4 Layout of the x240 system board


5.2.2 Models

The current x240 models are shown in Table 5-2. All models include 8 GB of memory (2x 4 GB DIMMs) running at either 1600 MHz or 1333 MHz (depending on model).

Table 5-2 Models of the x240 type 8737

Model (a)   Intel processor (model, cores, core speed, L3 cache, memory speed, TDP power) (two max)   Standard memory (b)   Available drive bays   Available I/O slots (c)   10 GbE embedded (d)

8737-A1x    1x Xeon E5-2630L 6C 2.0 GHz 15 MB 1333 MHz 60 W    2x 4 GB   Two (open)   2   No
8737-D2x    1x Xeon E5-2609 4C 2.40 GHz 10 MB 1066 MHz 80 W    2x 4 GB   Two (open)   1   Yes
8737-F2x    1x Xeon E5-2620 6C 2.0 GHz 15 MB 1333 MHz 95 W     2x 4 GB   Two (open)   1   Yes
8737-G2x    1x Xeon E5-2630 6C 2.3 GHz 15 MB 1333 MHz 95 W     2x 4 GB   Two (open)   1   Yes
8737-H1x    1x Xeon E5-2640 6C 2.5 GHz 15 MB 1333 MHz 95 W     2x 4 GB   Two (open)   2   No
8737-H2x    1x Xeon E5-2640 6C 2.5 GHz 15 MB 1333 MHz 95 W     2x 4 GB   Two (open)   1   Yes
8737-J1x    1x Xeon E5-2670 8C 2.6 GHz 20 MB 1600 MHz 115 W    2x 4 GB   Two (open)   2   No
8737-L2x    1x Xeon E5-2660 8C 2.2 GHz 20 MB 1600 MHz 95 W     2x 4 GB   Two (open)   1   Yes
8737-M1x    1x Xeon E5-2680 8C 2.7 GHz 20 MB 1600 MHz 130 W    2x 4 GB   Two (open)   2   No
8737-M2x    1x Xeon E5-2680 8C 2.7 GHz 20 MB 1600 MHz 130 W    2x 4 GB   Two (open)   1   Yes
8737-N2x    1x Xeon E5-2643 4C 3.3 GHz 10 MB 1600 MHz 130 W    2x 4 GB   Two (open)   1   Yes
8737-Q2x    1x Xeon E5-2667 6C 2.9 GHz 15 MB 1600 MHz 130 W    2x 4 GB   Two (open)   1   Yes
8737-R2x    1x Xeon E5-2690 8C 2.9 GHz 20 MB 1600 MHz 135 W    2x 4 GB   Two (open)   1   Yes

a. Model numbers provided are worldwide generally available variant (GAV) model numbers that are not orderable as listed; they must be modified by country. The US GAV model numbers use the nomenclature xxU. For example, the US orderable part number for 8737-A2x is 8737-A2U. See the product-specific official IBM announcement letter for other country-specific GAV model numbers.
b. The maximum system memory capacity of 768 GB requires 24x 32 GB DIMMs.
c. Some models include an Embedded 10 Gb Virtual Fabric Ethernet LOM controller as standard. This embedded controller precludes the use of an I/O adapter in I/O connector 1, as shown in Figure 5-4 on page 143. For more information, see 5.2.10, “Embedded 10 Gb Virtual Fabric Adapter” on page 170.
d. Model numbers in the form x2x (for example, 8737-L2x) include an Embedded 10 Gb Virtual Fabric Ethernet LOM controller as standard. Model numbers in the form x1x (for example, 8737-A1x) do not include this embedded controller.

5.2.3 Chassis support

The x240 type 8737 is supported in the IBM Flex System Enterprise Chassis as listed in Table 5-3.

Table 5-3 x240 chassis support

Server   BladeCenter chassis (all)   IBM Flex System Enterprise Chassis
x240     No                          Yes


The x240 is a half wide compute node and requires that the chassis shelf is installed in the IBM Flex System Enterprise Chassis. Figure 5-5 shows the chassis shelf in the chassis.

Figure 5-5 The IBM Flex System Enterprise Chassis showing the chassis shelf

The shelf is required for half-wide compute nodes. To allow for the installation of full-wide nodes or larger units, the shelves must be removed from the chassis. To remove a shelf, slide its two latches towards the center, and then slide the shelf out of the chassis.

5.2.4 System architecture

The IBM Flex System x240 Compute Node type 8737 features the Intel Xeon E5-2600 series processors. The Xeon E5-2600 series processor has models with two, four, six, and eight cores per processor with up to 16 threads per socket. The processors have the following features:

• Up to 20 MB of shared L3 cache
• Hyper-Threading
• Turbo Boost Technology 2.0 (depending on processor model)
• Two QuickPath Interconnect (QPI) links that run at up to 8 GT/s
• One integrated memory controller
• Four memory channels that support up to three DIMMs each

The Xeon E5-2600 series processor implements the second generation of Intel Core microarchitecture (Sandy Bridge) by using a 32nm manufacturing process. It requires a new socket type, the LGA-2011, which has 2011 pins that touch contact points on the underside of the processor. The architecture also includes the Intel C600 (Patsburg B) Platform Controller Hub (PCH).


Figure 5-6 shows the system architecture of the x240 system.

Figure 5-6 IBM Flex System x240 Compute Node system board block diagram

The IBM Flex System x240 Compute Node has the following system architecture features as standard:

• Two 2011-pin type R (LGA-2011) processor sockets
• An Intel C600 PCH
• Four memory channels per socket
• Up to three DIMMs per memory channel
• 24 DDR3 DIMM sockets
• Support for UDIMMs, RDIMMs, and new LRDIMMs
• One integrated 10 Gb Virtual Fabric Ethernet controller (10 GbE LOM in the diagram)
• One LSI 2004 SAS controller
• Integrated hardware RAID 0 and 1
• One Integrated Management Module II
• Two PCIe x16 Gen3 I/O adapter connectors
• Two Trusted Platform Module (TPM) 1.2 controllers
• One internal USB connector



The new architecture allows the sharing of data on-chip through a high-speed ring interconnect between all processor cores, the last level cache (LLC), and the system agent. The system agent houses the memory controller and a PCI Express root complex that provides 40 PCIe 3.0 lanes. This ring interconnect and LLC architecture is shown in Figure 5-7.

Figure 5-7 Intel Xeon E5-2600 basic architecture

The two Xeon E5-2600 series processors in the x240 are connected through two QuickPath Interconnect (QPI) links. Each QPI link is capable of up to eight giga-transfers per second (GT/s) depending on the processor model installed. Table 5-4 shows the QPI bandwidth of the Intel Xeon E5-2600 series processors.

Table 5-4 QuickPath Interconnect bandwidth

Intel Xeon E5-2600 series processor   QuickPath Interconnect speed (GT/s)   QuickPath Interconnect bandwidth (GB/s) in each direction
Advanced                              8.0 GT/s                              32.0 GB/s
Standard                              7.25 GT/s                             29.0 GB/s
Basic                                 6.4 GT/s                              25.6 GB/s

5.2.5 Processor

The Intel Xeon E5-2600 series is available with up to eight cores and 20 MB of last-level cache. It features an enhanced instruction set called Intel Advanced Vector Extensions (AVX). This set doubles the operand size for vector instructions (such as floating-point) to 256 bits and boosts selected applications by up to a factor of two.

The new architecture also introduces Intel Turbo Boost Technology 2.0 and improved power management capabilities. Turbo Boost automatically turns off unused processor cores and increases the clock speed of the cores in use if thermal requirements are still met. Turbo Boost Technology 2.0 takes advantage of the new integrated design and implements more granular overclocking in 100 MHz steps, instead of the 133 MHz steps of the former Nehalem-based and Westmere-based microprocessors.


As listed in Table 5-2 on page 144, standard models come with one processor that is installed in processor socket 1.

In a two-processor system, both processors communicate with each other through two QPI links. I/O is served through 40 PCIe Gen3 lanes per socket and through a x4 Direct Media Interface (DMI) link to the Intel C600 PCH.

Processor 1 has direct access to 12 DIMM slots. By adding the second processor, you enable access to the remaining 12 DIMM slots. The second processor also enables access to the sidecar connector, which enables the use of mezzanine expansion units.

Table 5-5 shows a comparison between the features of the Intel Xeon 5600 series processor and the new Intel Xeon E5-2600 series processor that is installed in the x240.

Table 5-5 Comparison of Xeon 5600 series and Xeon E5-2600 series processor features

Specification                Xeon 5600                                       Xeon E5-2600
Cores                        Up to six cores / 12 threads                    Up to eight cores / 16 threads
Physical addressing          40-bit (Uncore limited)                         46-bit (Core and Uncore)
Cache size                   12 MB                                           Up to 20 MB
Memory channels per socket   3                                               4
Max memory speed             1333 MHz                                        1600 MHz
Virtualization technology    Real Mode support and transition latency        Adds Large VT pages
                             reduction
New instructions             AES-NI                                          Adds AVX
QPI frequency                6.4 GT/s                                        8.0 GT/s
Inter-socket QPI links       1                                               2
PCI Express                  36 lanes PCIe on chipset                        40 lanes/socket integrated PCIe

Note: Uncore is an Intel term that describes the parts of a processor that are not the core.

Table 5-6 lists the features for the different Intel Xeon E5-2600 series processor types.

Table 5-6 Intel Xeon E5-2600 series processor features

Processor model   Processor frequency   Turbo   HT   L3 cache   Cores   Power TDP   QPI link speed (a)   Max DDR3 speed

Advanced

Xeon E5-2650 2.0 GHz Yes Yes 20 MB 8 95 W 8 GT/s 1600 MHz

Xeon E5-2658 2.1 GHz Yes Yes 20 MB 8 95 W 8 GT/s 1600 MHz

Xeon E5-2660 2.2 GHz Yes Yes 20 MB 8 95 W 8 GT/s 1600 MHz

Xeon E5-2665 2.4 GHz Yes Yes 20 MB 8 115 W 8 GT/s 1600 MHz

Xeon E5-2670 2.6 GHz Yes Yes 20 MB 8 115 W 8 GT/s 1600 MHz

Xeon E5-2680 2.7 GHz Yes Yes 20 MB 8 130 W 8 GT/s 1600 MHz

Xeon E5-2690 2.9 GHz Yes Yes 20 MB 8 135 W 8 GT/s 1600 MHz


Standard

Xeon E5-2620   2.0 GHz   Yes   Yes   15 MB   6   95 W    7.2 GT/s   1333 MHz
Xeon E5-2630   2.3 GHz   Yes   Yes   15 MB   6   95 W    7.2 GT/s   1333 MHz
Xeon E5-2640   2.5 GHz   Yes   Yes   15 MB   6   95 W    7.2 GT/s   1333 MHz

Basic

Xeon E5-2603   1.8 GHz   No    No    10 MB   4   80 W    6.4 GT/s   1066 MHz
Xeon E5-2609   2.4 GHz   No    No    10 MB   4   80 W    6.4 GT/s   1066 MHz

Low power

Xeon E5-2650L  1.8 GHz   Yes   Yes   20 MB   8   70 W    8 GT/s     1600 MHz
Xeon E5-2648L  1.8 GHz   Yes   Yes   20 MB   8   70 W    8 GT/s     1600 MHz
Xeon E5-2630L  2.0 GHz   Yes   Yes   15 MB   6   60 W    7.2 GT/s   1333 MHz

Special purpose

Xeon E5-2667   2.9 GHz   Yes   Yes   15 MB   6   130 W   8 GT/s     1600 MHz
Xeon E5-2643   3.3 GHz   No    No    10 MB   4   130 W   6.4 GT/s   1600 MHz
Xeon E5-2637   3.0 GHz   No    No    5 MB    2   80 W    8 GT/s     1600 MHz

a. GT/s = giga-transfers per second.

Table 5-7 lists the processor options for the x240.

Table 5-7 Processors for the x240 type 8737

Part number   Feature   Description                                                                   Where used

81Y5180 A1CQ Intel Xeon Processor E5-2603 4C 1.8 GHz 10 MB Cache 1066 MHz 80 W

81Y5182 A1CS Intel Xeon Processor E5-2609 4C 2.40 GHz 10 MB Cache 1066 MHz 80 W D2x

81Y5183 A1CT Intel Xeon Processor E5-2620 6C 2.0 GHz 15 MB Cache 1333 MHz 95 W F2x

81Y5184 A1CU Intel Xeon Processor E5-2630 6C 2.3 GHz 15 MB Cache 1333 MHz 95 W G2x

81Y5206 A1ER Intel Xeon Processor E5-2630L 6C 2.0 GHz 15 MB Cache 1333 MHz 60 W A1x

49Y8125 A2EP Intel Xeon Processor E5-2637 2C 3.0 GHz 5 MB Cache 1600 MHz 80 W

81Y5185 A1CV Intel Xeon Processor E5-2640 6C 2.5 GHz 15 MB Cache 1333 MHz 95 W H1x, H2x

81Y5190 A1CY Intel Xeon Processor E5-2643 4C 3.3 GHz 10 MB Cache 1600 MHz 130 W N2x

95Y4670 A31A Intel Xeon Processor E5-2648L 8C 1.8 GHz 20 MB Cache 1600 MHz 70 W

81Y5186 A1CW Intel Xeon Processor E5-2650 8C 2.0 GHz 20 MB Cache 1600 MHz 95 W

81Y5179 A1ES Intel Xeon Processor E5-2650L 8C 1.8 GHz 20 MB Cache 1600 MHz 70 W

95Y4675 A319 Intel Xeon Processor E5-2658 8C 2.1 GHz 20 MB Cache 1600 MHz 95 W

81Y5187 A1CX Intel Xeon Processor E5-2660 8C 2.2 GHz 20 MB Cache 1600 MHz 95 W L2x

49Y8144 A2ET Intel Xeon Processor E5-2665 8C 2.4 GHz 20 MB Cache 1600 MHz 115 W


81Y5189   A1CZ   Intel Xeon Processor E5-2667 6C 2.9 GHz 15 MB Cache 1600 MHz 130 W   Q2x
81Y9418   A1SX   Intel Xeon Processor E5-2670 8C 2.6 GHz 20 MB Cache 1600 MHz 115 W   J1x
81Y5188   A1D9   Intel Xeon Processor E5-2680 8C 2.7 GHz 20 MB Cache 1600 MHz 130 W   M1x, M2x
49Y8116   A2ER   Intel Xeon Processor E5-2690 8C 2.9 GHz 20 MB Cache 1600 MHz 135 W   R2x

For more information about the Intel Xeon E5-2600 series processors, see:

http://www.intel.com/content/www/us/en/processors/xeon/xeon-processor-5000-sequence.html

5.2.6 Memory

This section has the following topics:

• “Memory subsystem overview”
• “Memory types” on page 153
• “Memory options” on page 154
• “Memory channel performance considerations” on page 155
• “Memory modes” on page 157
• “DIMM installation order” on page 158
• “Memory installation considerations” on page 161

The x240 has 12 DIMM sockets per processor (24 DIMMs in total) running at 800, 1066, 1333, or 1600 MHz. It supports 2 GB, 4 GB, 8 GB, 16 GB, and 32 GB memory modules, as listed in Table 5-10 on page 154.

The x240 with the Intel Xeon E5-2600 series processors can support up to 768 GB of memory in total when 32 GB LRDIMMs are used with both processors installed. The x240 uses double data rate type 3 (DDR3) LP DIMMs. You can use registered DIMMs (RDIMMs), unbuffered DIMMs (UDIMMs), or load-reduced DIMMs (LRDIMMs). However, mixing the different memory DIMM types is not supported.

The E5-2600 series processor has four memory channels, and each memory channel can have up to three DIMMs. Figure 5-8 shows the E5-2600 series processor and the four memory channels.

Figure 5-8 The Intel Xeon E5-2600 series processor and the four memory channels



Memory subsystem overview

Table 5-8 summarizes the characteristics of the x240 memory subsystem. These characteristics are explained in detail in the following sections.

Table 5-8 Memory subsystem characteristics of the x240

Memory subsystem characteristic                  IBM Flex System x240 Compute Node

Number of memory channels per processor          4
Supported DIMM voltages                          Low voltage (1.35V) and standard voltage (1.5V)
Maximum number of DIMMs per channel (DPC)        3 (using 1.5V DIMMs); 2 (using 1.35V DIMMs)
DIMM slot maximum                                One processor: 12; two processors: 24
Mixing of memory types (RDIMMs, UDIMMs,
LRDIMMs)                                         Not supported in any configuration
Mixing of memory speeds                          Supported; lowest common speed for all installed DIMMs
Mixing of DIMM voltage ratings                   Supported; all 1.35V DIMMs will run at 1.5V

Registered DIMM (RDIMM) modules
  Supported memory sizes                         2, 4, 8, and 16 GB
  Supported memory speeds                        800, 1066, 1333, and 1600 MHz
  Maximum system capacity                        384 GB (24 x 16 GB)
  Maximum memory speed                           1.35V @ 2DPC: 1333 MHz; 1.5V @ 2DPC: 1600 MHz; 1.5V @ 3DPC: 1066 MHz
  Maximum ranks per channel (any voltage)        8
  Maximum number of DIMMs                        One processor: 12; two processors: 24

Unbuffered DIMM (UDIMM) modules
  Supported memory sizes                         4 GB
  Supported memory speeds                        1333 MHz
  Maximum system capacity                        64 GB (16 x 4 GB)
  Maximum memory speed                           1.35V @ 2DPC: 1333 MHz; 1.5V @ 2DPC: 1333 MHz; 1.35V or 1.5V @ 3DPC: not supported
  Maximum ranks per channel (any voltage)        8
  Maximum number of DIMMs                        One processor: 8; two processors: 16

Load-reduced DIMM (LRDIMM) modules
  Supported memory sizes                         16 and 32 GB


  Maximum capacity                               768 GB (24 x 32 GB)
  Supported memory speeds                        1066 and 1333 MHz
  Maximum memory speed                           1.35V @ 2DPC: 1066 MHz; 1.5V @ 2DPC: 1333 MHz; 1.35V or 1.5V @ 3DPC: 1066 MHz
  Maximum ranks per channel (any voltage)        8 (see note a)
  Maximum number of DIMMs                        One processor: 12; two processors: 24

a. Due to reduced electrical loading, a 4R (four-rank) LRDIMM has the equivalent load of a two-rank RDIMM. This reduced load allows the x240 to support three 4R LRDIMMs per channel (instead of two, as with UDIMMs and RDIMMs). For more information, see “Memory types” on page 153.

Tip: When an unsupported memory configuration is detected, the IMM illuminates the “DIMM mismatch” light path error LED and the system will not boot. Examples of a DIMM mismatch error are:

• Mixing of RDIMMs, UDIMMs, or LRDIMMs in the system
• Not adhering to the DIMM population rules

In some cases, the error log points to the DIMM slots that are mismatched.

Figure 5-9 shows the location of the 24 memory DIMM sockets on the x240 system board and other components.

Figure 5-9 DIMM layout on the x240 system board

Table 5-9 lists which DIMM connectors belong to which processor memory channel.


Table 5-9 The DIMM connectors for each processor memory channel

Processor      Memory channel   DIMM connectors

Processor 1    Channel 0        4, 5, and 6
               Channel 1        1, 2, and 3
               Channel 2        7, 8, and 9
               Channel 3        10, 11, and 12

Processor 2    Channel 0        22, 23, and 24
               Channel 1        19, 20, and 21
               Channel 2        13, 14, and 15
               Channel 3        16, 17, and 18

Memory types

The x240 supports three types of DIMM memory:

• RDIMM modules

  Registered DIMMs are the mainstream module solution for servers or any applications that demand heavy data throughput, high density, and high reliability. RDIMMs use registers to isolate the memory controller address, command, and clock signals from the dynamic random-access memory (DRAM). This process results in a lighter electrical load. Therefore, more DIMMs can be interconnected and larger memory capacity is possible. The register does, however, typically impose a clock or more of delay, meaning that registered DIMMs often have slightly longer access times than their unbuffered counterparts.

  In general, RDIMMs have the best balance of capacity, reliability, and workload performance, with a maximum performance of 1600 MHz (at 2 DPC).

  For more information about supported x240 RDIMM memory options, see Table 5-10 on page 154.

• UDIMM modules

  In contrast to RDIMMs, which use registers to isolate the memory controller from the DRAMs, UDIMMs attach directly to the memory controller. Therefore, they do not introduce a delay, which creates better performance. The disadvantage is limited drive capability: the number of DIMMs that can be connected together on the same memory channel remains small due to electrical loading. This leads to fewer DIMMs per channel (DPC) and overall lower total system memory capacity than RDIMM systems.

  UDIMMs have the lowest latency and lowest power usage. They also have the lowest overall capacity.

  For more information about supported x240 UDIMM memory options, see Table 5-10 on page 154.

• LRDIMM modules

  Load-reduced DIMMs are similar to RDIMMs. They also use memory buffers to isolate the memory controller address, command, and clock signals from the individual DRAMs on the DIMM. Load-reduced DIMMs take the buffering a step further by also buffering the memory controller data lines from the DRAMs.


Figure 5-10 shows a comparison of RDIMM and LRDIMM memory types.

Figure 5-10 Comparing RDIMM buffering and LRDIMM buffering

In essence, all signaling between the memory controller and the LRDIMM is now intercepted by the memory buffers on the LRDIMM module. This system allows additional ranks to be added to each LRDIMM module without sacrificing signal integrity. It also means that fewer actual ranks are “seen” by the memory controller (for example, a 4R LRDIMM has the same “look” as a 2R RDIMM).

The additional buffering that the LRDIMMs support greatly reduces the electrical load on the system. This reduction allows the system to operate at a higher overall memory speed for a certain capacity. Conversely, it can operate at a higher overall memory capacity at a certain memory speed.

LRDIMMs allow maximum system memory capacity and the highest performance for system memory capacities above 384 GB. They are suited for system workloads that require maximum memory such as virtualization and databases.

For more information about supported x240 LRDIMM memory options, see Table 5-10.

The memory type installed in the x240 combines with other factors to determine the ultimate performance of the x240 memory subsystem. For a list of rules when populating the memory subsystem, see “Memory installation considerations” on page 161.

Memory options

Table 5-10 lists the memory DIMM options for the x240.

Table 5-10 Memory DIMMs for the x240 type 8737


Part number   FC     Description                                                                 Where used

Registered DIMM (RDIMM) modules

49Y1405   8940   2 GB (1x2GB, 1Rx8, 1.35V) PC3L-10600 CL9 ECC DDR3 1333 MHz LP RDIMM


49Y1406   8941   4 GB (1x4GB, 1Rx4, 1.35V) PC3L-10600 CL9 ECC DDR3 1333 MHz LP RDIMM     H1x, H2x, G2x, F2x, D2x, A1x
49Y1407   8942   4 GB (1x4GB, 2Rx8, 1.35V) PC3L-10600 CL9 ECC DDR3 1333 MHz LP RDIMM
49Y1559   A28Z   4 GB (1x4GB, 1Rx4, 1.5V) PC3-12800 CL11 ECC DDR3 1600 MHz LP RDIMM      R2x, Q2x, N2x, M2x, M1x, L2x, J1x
90Y3178   A24L   4 GB (1x4GB, 2Rx8, 1.5V) PC3-12800 CL11 ECC DDR3 1600 MHz LP RDIMM
90Y3109   A292   8 GB (1x8GB, 2Rx4, 1.5V) PC3-12800 CL11 ECC DDR3 1600 MHz LP RDIMM
49Y1397   8923   8 GB (1x8GB, 2Rx4, 1.35V) PC3L-10600 CL9 ECC DDR3 1333 MHz LP RDIMM
49Y1563   A1QT   16 GB (1x16GB, 2Rx4, 1.35V) PC3L-10600 CL9 ECC DDR3 1333 MHz LP RDIMM
49Y1400   8939   16 GB (1x16GB, 4Rx4, 1.35V) PC3L-8500 CL7 ECC DDR3 1066 MHz LP RDIMM
00D4968   A2U5   16 GB (1x16GB, 2Rx4, 1.5V) PC3-12800 CL11 ECC DDR3 1600 MHz LP RDIMM

Unbuffered DIMM (UDIMM) modules

49Y1404   8648   4 GB (1x4GB, 2Rx8, 1.35V) PC3L-10600 CL9 ECC DDR3 1333 MHz LP UDIMM

Load-reduced DIMM (LRDIMM) modules

49Y1567   A290   16 GB (1x16GB, 4Rx4, 1.35V) PC3L-10600 CL9 ECC DDR3 1333 MHz LP LRDIMM
90Y3105   A291   32 GB (1x32GB, 4Rx4, 1.35V) PC3L-10600 CL9 ECC DDR3 1333 MHz LP LRDIMM

Memory channel performance considerations

The memory installed in the x240 can be clocked at 1600 MHz, 1333 MHz, 1066 MHz, or 800 MHz. The resulting speed depends on the type of memory, the memory population, the processor model, and several other factors. The following factors determine the ultimate performance of the x240 memory subsystem:

• Model of Intel Xeon E5-2600 series processor installed

  As mentioned in 5.2.4, “System architecture” on page 145, the Intel Xeon E5-2600 series processor includes one integrated memory controller. The model of processor installed determines the maximum speed at which the integrated memory controller clocks the installed memory. Table 5-6 on page 148 lists the maximum DDR3 speed that each processor model supports. This maximum speed might not be the ultimate speed of the memory subsystem.

• Speed of DDR3 DIMMs installed

  For maximum performance, the speed rating of each DIMM module must match the maximum memory clock speed of the Xeon E5-2600 processor. Keep in mind these rules when matching processors and DIMM modules:

  – The processor never over-clocks the memory in any configuration.

  – The processor clocks all the installed memory at either the rated speed of the processor or the speed of the slowest DIMM installed in the system.

  For example, an Intel Xeon E5-2640 processor clocks all installed memory at a maximum speed of 1333 MHz. If any 1600 MHz DIMM modules are installed, they are


clocked at 1333 MHz. However, if any 1066 MHz or 800 MHz DIMM modules are installed, all installed DIMM modules are clocked at the slowest speed (800 MHz).

• Number of DIMMs per channel (DPC)

  Generally, the Xeon E5-2600 processor series clocks up to 2 DPC at the maximum rated speed of the processor. However, if any channel is fully populated (3 DPC), the processor slows all the installed memory down.

  For example, an Intel Xeon E5-2690 processor clocks all installed memory at a maximum speed of 1600 MHz up to 2 DPC. However, if any one channel is populated with 3 DPC, all memory channels are clocked at 1066 MHz.

• DIMM voltage rating

  The Xeon E5-2600 processor series supports both low voltage (1.35V) and standard voltage (1.5V) DIMMs. As Table 5-10 on page 154 shows, the maximum clock speed for supported low-voltage DIMMs is 1333 MHz, and the maximum clock speed for supported standard-voltage DIMMs is 1600 MHz. These factors combine as shown in the sketch that follows this list.
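The following minimal Python sketch illustrates how these rules combine. It is an illustration only, not IBM-provided logic; the 3 DPC limit of 1066 MHz that it assumes is taken from Table 5-8 for RDIMMs and LRDIMMs at 1.5V.

# Illustrative only: combine the memory-speed rules described above.
# Assumes RDIMMs or LRDIMMs, where a fully populated channel (3 DPC)
# limits the memory clock to 1066 MHz.

def effective_memory_speed(cpu_max_mhz, dimm_speeds_mhz, max_dimms_per_channel):
    """Return the clock speed (MHz) at which all installed DIMMs run."""
    speed = min(cpu_max_mhz, min(dimm_speeds_mhz))  # never over-clocked; slowest DIMM wins
    if max_dimms_per_channel == 3:                  # any fully populated channel slows all memory
        speed = min(speed, 1066)
    return speed

# Xeon E5-2690 (1600 MHz max) with 1600 MHz RDIMMs at 2 DPC -> 1600
print(effective_memory_speed(1600, [1600] * 16, 2))
# Same processor and DIMMs, but one channel populated with 3 DPC -> 1066
print(effective_memory_speed(1600, [1600] * 18, 3))
# Xeon E5-2640 (1333 MHz max) with a mix of 1600 MHz and 1333 MHz DIMMs -> 1333
print(effective_memory_speed(1333, [1600, 1333, 1600, 1333], 2))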

Table 5-11 lists the memory DIMM options for the x240, including memory channel speed based on number of DIMMs per channel, ranks per DIMM, and DIMM voltage rating.

Table 5-11 x240 memory DIMM and memory channel speed support

Part       Capacity   Ranks per DIMM   DRAM      Memory channel speed and voltage support by DIMMs per channel (NS = not supported)
number     per DIMM   and data width   density   1DPC 1.35V | 1DPC 1.5V | 2DPC 1.35V | 2DPC 1.5V | 3DPC 1.35V | 3DPC 1.5V

RDIMM

49Y1405 2 GB 1Rx8 2 Gb 1333 1333 1333 1333 NS 1066

49Y1406 4 GB 1Rx4 2 Gb 1333 1333 1333 1333 NS 1066

49Y1407 4 GB 2Rx8 2 Gb 1333 1333 1333 1333 NS 1066

49Y1559 4 GB 1Rx4 2 Gb NS 1600 NS 1600 NS 1066

90Y3178 4 GB 2Rx8 2 Gb NS 1600 NS 1600 NS 1066

90Y3109 8 GB 2Rx4 2 Gb NS 1600 NS 1600 NS 1066

49Y1397 8 GB 2Rx4 2 Gb 1333 1333 1333 1333 NS 1066

49Y1563 16 GB 2Rx4 4 Gb 1333 1333 1333 1333 NS 1066

49Y1400 16 GB 4Rx4 2 Gb 800 1066 NS 800 NS NS

00D4968 16 GB 2Rx4 4 Gb NS 1600 NS 1600 NS 1066

UDIMM

49Y1404 4 GB 2Rx8 2 Gb 1333 1333 1333 1333 NS NS

LRDIMM

49Y1567 16 GB 4Rx4 2 Gb 1066 1333 1066 1333 1066 1066

90Y3105 32 GB 4Rx4 4 Gb 1066 1333 1066 1333 1066 1066


Memory modes

The x240 type 8737 supports three memory modes:

• “Independent channel mode”
• “Rank-sparing mode”
• “Mirrored-channel mode”

These modes can be selected in the Unified Extensible Firmware Interface (UEFI) setup. For more information, see 5.2.12, “Systems management” on page 172.

Independent channel mode

This is the default mode for DIMM population. DIMMs are populated starting with the last DIMM connector on each channel, and then one DIMM per channel, distributed equally between channels and processors. In this memory mode, the operating system uses the full amount of installed memory and no redundancy is provided.

The IBM Flex System x240 Compute Node configured in independent channel mode yields a maximum of 192 GB of usable memory with one processor installed, or 384 GB with two processors installed, when 16 GB DIMMs are used. Memory DIMMs must be installed in the correct order, starting with the last physical DIMM socket of each channel. The DIMMs can be installed without matching sizes, but avoid this configuration because it might affect optimal memory performance.

For more information about the memory DIMM installation sequence when using independent channel mode, see “Memory DIMM installation: Independent channel and rank-sparing modes” on page 158

Rank-sparing mode

In rank-sparing mode, one memory DIMM rank serves as a spare of the other ranks on the same channel. The spare rank is held in reserve and is not used as active memory. The spare rank must have identical or larger memory capacity than all the other active memory ranks on the same channel. After an error threshold is surpassed, the contents of that rank are copied to the spare rank. The failed rank of memory is taken offline, and the spare rank is put online and used as active memory in place of the failed rank.

The memory DIMM installation sequence when using rank-sparing mode is identical to independent channel mode as described in “Memory DIMM installation: Independent channel and rank-sparing modes” on page 158.

Mirrored-channel mode

In mirrored-channel mode, memory is installed in pairs. Each DIMM in a pair must be identical in capacity, type, and rank count. The channels are grouped in pairs. Each channel in the group receives the same data. One channel is used as a backup of the other, which provides redundancy. The memory contents on channel 0 are duplicated in channel 1, and the memory contents of channel 2 are duplicated in channel 3. The DIMMs in channel 0 and channel 1 must be the same size and type. The DIMMs in channel 2 and channel 3 must be the same size and type. The effective memory that is available to the system is only half of what is installed.

Because memory mirroring is handled in hardware, it is operating system-independent.

Restriction: In a two processor configuration, memory must be identical across the two processors to enable the memory mirroring feature.

Page 174: Sg 247984

158 IBM PureFlex System and IBM Flex System Products and Technology

Figure 5-11 shows the E5-2600 series processor with the four memory channels and which channels are mirrored when operating in mirrored-channel mode.

Figure 5-11 Showing the mirrored channels and DIMM pairs when in mirrored-channel mode

For more information about the memory DIMM installation sequence when using mirrored channel mode, see “Memory DIMM installation: Mirrored-channel” on page 161.
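As a simple illustration of how the memory mode affects usable capacity, consider the following Python sketch. It is based only on the mode descriptions above and is not an IBM tool; rank-sparing is omitted because its usable capacity depends on which ranks are designated as spares.

# Illustrative only: usable memory for the x240 memory modes described above.
# Independent channel mode exposes all installed memory; mirrored-channel mode
# exposes half, because one channel of each pair mirrors the other.

def usable_memory_gb(dimm_sizes_gb, mode):
    total = sum(dimm_sizes_gb)
    if mode == "independent":
        return total
    if mode == "mirrored":
        return total // 2
    raise ValueError("unknown mode")

config = [16] * 8   # eight 16 GB DIMMs, for example one per channel on two processors
print(usable_memory_gb(config, "independent"))  # 128
print(usable_memory_gb(config, "mirrored"))     # 64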

DIMM installation order

This section describes the recommended order in which DIMMs should be installed, based on the memory mode used.

Memory DIMM installation: Independent channel and rank-sparing modes

The following guidelines apply only when the processors are operating in independent channel mode or rank-sparing mode.

The x240 boots with one memory DIMM installed per processor. However, the suggested memory configuration is to balance the memory across all the memory channels on each processor to use the available memory bandwidth. Use one of the following suggested memory configurations:

• Four, eight, or 12 memory DIMMs in a single-processor x240 server
• Eight, 16, or 24 memory DIMMs in a dual-processor x240 server

This sequence spreads the DIMMs across as many memory channels as possible. For best performance and to ensure a working memory configuration, install the DIMMs in the sockets as shown in the following tables.



Table 5-12 shows DIMM installation if you have one processor installed.

Table 5-12 Suggested DIMM installation for the x240 with one processor installed

(The table identifies, for each total of 1 to 12 DIMMs installed with one processor, which of DIMM connectors 1 - 12 to populate. For optimal memory performance, populate all memory channels equally: the suggested sequence spreads the DIMMs across the four memory channels of processor 1, starting with the connector farthest from the processor in each channel, that is connectors 1, 4, 9, and 12.)


Table 5-13 shows DIMM installation if you have two processors installed.

Table 5-13 Suggested DIMM installation for the x240 with two processors installed

(The table identifies, for each total of 1 to 24 DIMMs installed with two processors, which of DIMM connectors 1 - 24 to populate. For optimal memory performance, populate all memory channels equally: the suggested sequence distributes the DIMMs across the four memory channels of both processors, starting with the connector farthest from each processor in each channel.)


Memory DIMM installation: Mirrored-channel

Table 5-14 lists the memory DIMM installation order for the x240, with one or two processors installed, when operating in mirrored-channel mode.

Table 5-14 The DIMM installation order for mirrored-channel mode

DIMM pair (a)   One processor installed   Two processors installed
1st             1 & 4                     1 & 4
2nd             9 & 12                    13 & 16
3rd             2 & 5                     9 & 12
4th             8 & 11                    21 & 24
5th             3 & 6                     2 & 5
6th             7 & 10                    14 & 17
7th                                       8 & 11
8th                                       20 & 23
9th                                       3 & 6
10th                                      15 & 18
11th                                      7 & 10
12th                                      19 & 22

a. Each pair of DIMMs must be identical in capacity, type, and rank count.

Memory installation considerations

Use the following general guidelines when deciding on the memory configuration of your IBM Flex System x240 Compute Node. A short sketch after this list illustrates how a population order can be derived from these rules.

• All memory installation considerations apply equally to one- and two-processor systems.

• All DIMMs must be DDR3 DIMMs.

• Memory of different types (RDIMMs, UDIMMs, and LRDIMMs) cannot be mixed in the system.

• If you mix 1.35V and 1.5V DIMMs, the system runs all of them at 1.5V and you lose the energy advantage.

• If you mix DIMMs with different memory speeds, all DIMMs in the system run at the lowest speed.

• Install memory DIMMs in order of their size, with the largest DIMM first, in the order described in Table 5-12 on page 159 and Table 5-13 on page 160. The correct installation order starts with the DIMM slot farthest from the processor (DIMM slots 1, 4, 9, and 12), working inward.

• Install memory DIMMs in order of their rank, with the largest DIMM in the DIMM slot farthest from the processor. Start with DIMM slots 1, 4, 9, and 12, and work inward.

• Memory DIMMs can be installed one DIMM at a time. However, avoid this configuration because it can affect performance.

• For maximum memory bandwidth, install one DIMM in each of the four memory channels; in other words, in matched quads (four DIMMs at a time).

• Populate equivalent ranks per channel.


5.2.7 Standard onboard features

This section describes the standard onboard features of the IBM Flex System x240 Compute Node.

USB ports

The x240 has one external USB port on the front of the compute node. Figure 5-12 shows the location of the external USB connector on the x240.

Figure 5-12 The front USB connector on the x240 compute node

The x240 also supports an option, the x240 USB Enablement Kit, that provides two internal USB ports, primarily used for attaching USB hypervisor keys. For more information, see 5.2.9, “Integrated virtualization” on page 169.

Console breakout cable

The x240 connects to a local video display, USB keyboard, and USB mouse through the console breakout cable. The console breakout cable attaches to a connector on the front bezel of the x240 compute node and also provides a serial connector. Figure 5-13 shows the console breakout cable.

Figure 5-13 Console breakout cable connecting to the x240



Table 5-15 lists the ordering part number and feature code of the console breakout cable. One console breakout cable ships with the IBM Flex System Enterprise Chassis.

Table 5-15 Ordering part number and feature code

Part number   Feature code   Description
81Y5286       A1NF           IBM Flex System Console Breakout Cable

Trusted Platform Module

Trusted computing is an industry initiative that provides a combination of secure software and secure hardware to create a trusted platform. It is a specification that increases network security by building unique hardware IDs into computing devices. The x240 implements TPM Version 1.2 support.

The TPM in the x240 is one of the three layers of the trusted computing initiative, as shown in Table 5-16.

Table 5-16 Trusted computing layers

Layer                                                              Implementation
Level 1: Tamper-proof hardware, used to generate trustable keys    Trusted Platform Module
Level 2: Trustable platform                                        UEFI or BIOS; Intel processor
Level 3: Trustable execution                                       Operating system; drivers

5.2.8 Local storage

The x240 compute node features an onboard LSI 2004 SAS controller with two small form factor (SFF) hot-swap drive bays that are accessible from the front of the compute node. The onboard LSI SAS2004 controller provides RAID 0, RAID 1, or RAID 10 capability. It supports up to two SFF hot-swap serial-attached SCSI (SAS) or Serial Advanced Technology Attachment (SATA) hard disk drives (HDDs), or two SFF hot-swap solid-state drives. Figure 5-14 shows how the LSI2004 SAS controller and hot-swap storage devices connect to the internal HDD interface.

Figure 5-14 The LSI2004 SAS controller connections to the HDD interface



Figure 5-15 shows the front of the x240 including the two hot-swap drive bays.

Figure 5-15 The x240 showing the front hot-swap disk drive bays

Local SAS and SATA HDDs and SSDs

The x240 type 8737 supports up to two hot-swap SFF SAS or SATA HDDs or up to two hot-swap SFF solid-state drives (SSDs). These hot-swap components are accessible from the front of the compute node without removing the compute node from the chassis. See Table 5-17 for a list of supported SAS and SATA HDDs and SSDs.

Table 5-17 Supported SAS and SATA HDDs and SSDs

Part number   Feature code   Description

10K SAS hard disk drives
42D0637       5599           IBM 300 GB 10K 6 Gbps SAS 2.5" SFF Slim-HS HDD
49Y2003       5433           IBM 600 GB 10K 6 Gbps SAS 2.5" SFF Slim-HS HDD
81Y9650       A282           IBM 900 GB 10K 6 Gbps SAS 2.5" SFF HS HDD

15K SAS hard disk drives
42D0677       5536           IBM 146 GB 15K 6 Gbps SAS 2.5" SFF Slim-HS HDD
81Y9670       A283           IBM 300 GB 15K 6 Gbps SAS 2.5" SFF HS HDD

NL SATA
81Y9722       A1NX           IBM 250 GB 7.2K 6 Gbps NL SATA 2.5" SFF HS HDD
81Y9726       A1NZ           IBM 500 GB 7.2K 6 Gbps NL SATA 2.5" SFF HS HDD
81Y9730       A1AV           IBM 1 TB 7.2K 6 Gbps NL SATA 2.5" SFF HS HDD

NL SAS
42D0707       5409           IBM 500 GB 7200 6 Gbps NL SAS 2.5" SFF Slim-HS HDD
81Y9690       A1P3           IBM 1 TB 7.2K 6 Gbps NL SAS 2.5" SFF HS HDD

Solid-state drives
43W7718       A2FN           IBM 200 GB SATA 2.5" MLC HS SSD
90Y8643       A2U3           IBM 256 GB SATA 2.5" MLC HS Entry SSD
90Y8648       A2U4           IBM 128 GB SATA 2.5" MLC HS Entry SSD

eXFlash storage

In addition, the x240 supports eXFlash with up to eight 1.8-inch solid-state drives combined with a ServeRAID M5115 SAS/SATA controller (90Y4390). The M5115 attaches to the I/O adapter 1 connector. It can be attached even if the Compute Node Fabric Connector is installed. The Compute Node Fabric Connector is used to route the Embedded 10 Gb Virtual


Fabric Adapter to bays 1 and 2. For more information, see 5.2.11, “I/O expansion” on page 171. The ServeRAID M5115 cannot be installed if an adapter is installed in I/O adapter slot 1.

The ServeRAID M5115 supports combinations of 2.5-inch drives and 1.8-inch solid-state drives:

• Up to two 2.5-inch drives only
• Up to four 1.8-inch drives only
• Up to two 2.5-inch drives, plus up to four 1.8-inch solid-state drives
• Up to eight 1.8-inch solid-state drives

The ServeRAID M5115 SAS/SATA Controller (90Y4390) provides an advanced RAID controller that supports RAID 0, 1, 10, 5, 50, and optional 6 and 60. It includes 1 GB of cache. This cache can be backed up to flash when attached to the supercapacitor included with the optional ServeRAID M5100 Series Enablement Kit (90Y4342).

At least one hardware kit is required with the ServeRAID M5115 controller to enable specific drive support:

� ServeRAID M5100 Series Enablement Kit for IBM Flex System x240 (90Y4342) enables support for up to two 2.5” HDDs or SSDs in the hot-swap bays in the front of the server. It includes a CacheVault unit, which enables MegaRAID CacheVault flash cache protection. This enablement kit replaces the standard two-bay backplane (which is attached through the system board to an onboard controller) with a new backplane. The new backplane attaches with an included flex cable to the M5115 controller. It also includes an air baffle, which also serves as an attachment for the CacheVault unit.

MegaRAID CacheVault flash cache protection uses NAND flash memory powered by a supercapacitor to protect data stored in the controller cache. This module eliminates the need for a lithium-ion battery commonly used to protect DRAM cache memory on Peripheral Component Interconnect (PCI) RAID controllers. To avoid data loss or corruption during a power or server failure, CacheVault technology transfers the contents of the DRAM cache to NAND flash. This process uses power from the supercapacitor. After the power is restored to the RAID controller, the saved data is transferred from the NAND flash back to the DRAM cache. The DRAM cache can then be flushed to disk.

� ServeRAID M5100 Series IBM eXFlash Kit for IBM Flex System x240 (90Y4341) enables eXFlash support for up to four 1.8-inch SSDs. This kit replaces the standard two-bay backplane with a four-bay eXFlash backplane that attaches with an included flex cable to the M5115 controller. Because only SSDs are supported, a CacheVault unit is not required, and so this kit does not have a supercap.

� ServeRAID M5100 Series SSD Expansion Kit for IBM Flex System x240 (90Y4391) enables support for up to four internal 1.8-inch SSDs. This kit includes two air baffles replacing the existing baffles, and each baffle has mounts for two SSDs. Included flexible cables connect the drives to the controller.

Tip: The Enablement Kit is only required if 2.5-inch drives are to be used. If you plan to install four or eight 1.8-inch SSDs, this kit is not required.


Table 5-18 shows the kits required for each combination of drives. For example, if you plan to install eight 1.8-inch SSDs, you need the M5115 controller, the eXFlash kit, and the SSD Expansion kit.

Table 5-18 ServeRAID M5115 hardware kits

Required drive support                      Components required
Max 2.5-inch drives   Max 1.8-inch SSDs     ServeRAID M5115 (90Y4390)   Enablement Kit (90Y4342)   eXFlash Kit (90Y4341)   SSD Expansion Kit (90Y4391) (a)

2                     0                     Required                    Required
0                     4 (front)             Required                                               Required
2                     4 (internal)          Required                    Required                                           Required
0                     8 (front + internal)  Required                                               Required                Required

a. If you install the SSD Expansion Kit, you cannot also install the x240 USB Enablement Kit (49Y8119).

Figure 5-16 shows how the ServeRAID M5115 and the Enablement Kit are installed in the server to support two 2.5-inch drives with MegaRAID CacheVault flash cache protection (row 1 of Table 5-18).

Figure 5-16 The ServeRAID M5115 and the Enablement Kit installed

Tip: If the ServeRAID M5100 Series SSD Expansion Kit (90Y4391) is installed, the x240 USB Enablement Kit (49Y8119, described in 5.2.9, “Integrated virtualization” on page 169) cannot also be installed. Both kits include special air baffles that cannot be installed at the same time.



Figure 5-17 shows how the ServeRAID M5115 and eXFlash and SSD Expansion Kits are installed in the server to support eight 1.8-inch solid-state drives (row 4 of Table 5-18 on page 166).

Figure 5-17 ServeRAID M5115 with eXFlash and SSD Expansion Kits installed

The eight SSDs are installed in the following locations:

• Four in the front of the system, in place of the two 2.5-inch drive bays
• Two in a tray above the memory banks for CPU 1
• Two in a tray above the memory banks for CPU 2

The ServeRAID M5115 controller has the following specifications:

• Eight internal 6 Gbps SAS/SATA ports
• PCI Express 3.0 x8 host interface
• 6 Gbps throughput per port
• 800 MHz dual-core IBM PowerPC processor with LSI SAS2208 6 Gbps RAID-on-Chip (ROC) controller
• Support for RAID levels 0, 1, 10, 5, and 50 standard; support for RAID 6 and 60 with an optional upgrade using 90Y4411
• Optional onboard 1 GB data cache (DDR3, running at 1333 MHz) with optional flash backup (MegaRAID CacheVault technology) as part of the Enablement Kit 90Y4342
• Support for SAS and SATA HDDs and SSDs
• Support for intermixing SAS and SATA HDDs and SSDs; mixing different types of drives in the same array (drive group) is not recommended
• Support for self-encrypting drives (SEDs) with MegaRAID SafeStore
• Optional support for SSD performance acceleration with MegaRAID FastPath and SSD caching with MegaRAID CacheCade Pro 2.0 (90Y4447)
• Support for up to 64 virtual drives, up to 128 drive groups, up to 16 virtual drives per drive group, and up to 32 physical drives per drive group
• Support for logical unit number (LUN) sizes up to 64 TB



• Configurable stripe size up to 1 MB
• Compliant with Disk Data Format (DDF) configuration on disk (CoD)
• S.M.A.R.T. support
• MegaRAID Storage Manager management software

Optional add-ons to the ServeRAID M5115 controller are RAID 6 support, SSD performance accelerator, and SSD caching enabler. Table 5-19 lists all Feature on Demand (FoD) license upgrades.

Table 5-19 Supported upgrade features

These features have the following characteristics:

� RAID 6 Upgrade (90Y4410)

Adds support for RAID 6 and RAID 60. This is a Feature on Demand license.

� Performance Accelerator (90Y4412)

The Performance Accelerator for IBM Flex System is implemented by using the LSI MegaRAID FastPath software. It provides high-performance I/O acceleration for SSD-based virtual drives by using a low-latency I/O path to increase the maximum input/output operations per second (IOPS) capability of the controller. This feature boosts the performance of applications with a highly random data storage access pattern, such as transactional databases. Part number 90Y4412 is a Feature on Demand license.

� SSD Caching Enabler for traditional hard drives (90Y4447)

The SSD Caching Enabler for IBM Flex System is implemented by using the LSI MegaRAID CacheCade Pro 2.0. It is designed to accelerate the performance of HDD arrays with only an incremental investment in SSD technology. The feature enables the SSDs to be configured as a dedicated cache. This configuration helps maximize the I/O performance for transaction-intensive applications, such as databases and web serving. The feature tracks data storage access patterns and identifies the most frequently accessed data. The hot data is then automatically stored on the SSDs that are assigned as a dedicated cache pool on the ServeRAID controller. Part number 90Y4447 is a Feature on Demand license. This feature requires that at least one SSD drive be installed.
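The following Python sketch is only a conceptual illustration of this idea (it is not the CacheCade Pro 2.0 algorithm): accesses per block are counted, and the most frequently read blocks are treated as the hot set that occupies a limited SSD cache pool.

from collections import Counter

class HotDataCache:
    """Conceptual frequency-based SSD cache in front of HDD storage."""
    def __init__(self, ssd_capacity_blocks):
        self.capacity = ssd_capacity_blocks
        self.access_counts = Counter()
        self.cached_blocks = set()

    def read(self, block_id):
        self.access_counts[block_id] += 1
        hit = block_id in self.cached_blocks
        # Re-evaluate which blocks deserve one of the limited SSD cache slots.
        self.cached_blocks = {b for b, _ in self.access_counts.most_common(self.capacity)}
        return "SSD cache hit" if hit else "served from HDD"

cache = HotDataCache(ssd_capacity_blocks=2)
for block in [7, 7, 3, 7, 9, 3, 7, 3, 7]:    # blocks 7 and 3 are "hot"
    print(block, cache.read(block))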

The 1.8-inch solid-state drives supported with the ServeRAID M5115 controller are listed in Table 5-20.

Table 5-20 Supported 1.8-inch solid-state drives

Part number   Description                        Maximum quantity supported
43W7746       IBM 200 GB SATA 1.8" MLC SSD       8
43W7726       IBM 50 GB SATA 1.8" MLC SSD        8


5.2.9 Integrated virtualization

The x240 offers an IBM standard USB flash drive option preinstalled with VMware ESXi. This is an embedded version of VMware ESXi. It is fully contained on the flash drive, and so does not require any disk space. The IBM USB Memory Key for VMware Hypervisor plugs into the USB ports on the optional x240 USB Enablement Kit (Figure 5-18).

Table 5-21 lists the ordering information for the VMware hypervisor options.

Table 5-21 IBM USB Memory Key for VMware Hypervisor

Part number   Feature code   Description
41Y8300       A2VC           IBM USB Memory Key for VMware ESXi 5.0
41Y8298       A2G0           IBM Blank USB Memory Key for VMware ESXi Downloads

The USB memory keys connect to the internal x240 USB Enablement Kit. Table 5-22 lists the ordering information for the internal x240 USB Enablement Kit.

Table 5-22 Internal USB port option

Part number   Feature code   Description
49Y8119       A33M           x240 USB Enablement Kit

The x240 USB Enablement Kit connects to the system board of the server as shown in Figure 5-18. The kit offers two ports, and enables you to install two memory keys. If you do, both devices are listed in the boot menu. This setup allows you to boot from either device, or to set one as a backup in case the first one gets corrupted.

Figure 5-18 The x240 compute node showing the location of the internal x240 USB Enablement Kit



For a complete description of the features and capabilities of VMware ESX Server, see:

http://www.vmware.com/products/vi/esx/

5.2.10 Embedded 10 Gb Virtual Fabric Adapter

Some models of the x240 include an Embedded 10 Gb Virtual Fabric Adapter built into the system board. Table 5-2 on page 144 lists what models of the x240 include the Embedded 10 Gb Virtual Fabric Adapter. Each x240 model that includes the embedded 10 Gb Virtual Fabric Adapter also has the Compute Node Fabric Connector installed in I/O connector 1. The Compute Node Fabric Connector is physically screwed onto the system board, and provides connectivity to the Enterprise Chassis midplane.

Models without the Embedded 10 Gb Virtual Fabric Adapter do not include any other Ethernet connections to the Enterprise Chassis midplane. For those models, an I/O adapter must be installed in either I/O connector 1 or I/O connector 2. This adapter provides network connectivity between the server and the chassis midplane, and ultimately to the network switches.

Figure 5-19 shows the Compute Node Fabric Connector.

Figure 5-19 The Compute Node Fabric Connector

The Compute Node Fabric Connector enables Port 1 on the Embedded 10 Gb Virtual Fabric Adapter to be routed to I/O module bay 1. Similarly, port 2 can be routed to I/O module bay 2. The Compute Node Fabric Connector can be unscrewed and removed, if required, to allow the installation of an I/O adapter on I/O connector 1.

The Embedded 10 Gb Virtual Fabric Adapter is based on the Emulex BladeEngine 3, which is a single-chip, dual-port 10 Gigabit Ethernet (10 GbE) Ethernet Controller. The Embedded 10 Gb Virtual Fabric Adapter includes these features:

� PCI-Express Gen2 x8 host bus interface
� Supports multiple Virtual Network Interface Card (vNIC) functions
� TCP/IP offload Engine (TOE enabled)
� SRIOV capable
� RDMA over TCP/IP capable
� iSCSI and FCoE upgrade offering using FoD

Restriction: If the ServeRAID M5115 SAS/SATA Controller is installed, the IBM USB Memory Key for VMware Hypervisor cannot be installed.

Restriction: If I/O connector 1 has the Embedded 10 Gb Virtual Fabric Adapter installed, only I/O connector 2 is available for the installation of additional I/O adapters.


Table 5-23 lists the ordering information for the IBM Flex System Embedded 10 Gb Virtual Fabric Upgrade. This upgrade enables the iSCSI and FCoE support on the Embedded 10 Gb Virtual Fabric Adapter.

Table 5-23 Feature on Demand upgrade for FCoE and iSCSI support

Part number   Feature code   Description
90Y9310       A2TD           IBM Flex System Embedded 10 Gb Virtual Fabric Upgrade

Figure 5-20 shows the x240 and the location of the Compute Node Fabric Connector on the system board.

Figure 5-20 The x240 showing the location of the Compute Node Fabric Connector

5.2.11 I/O expansion

The x240 has two PCIe 3.0 x16 I/O expansion connectors for attaching I/O adapters. There is also another expansion connector designed for future expansion options. The I/O expansion connectors are a high-density 216-pin PCIe connector. Installing I/O adapters allows the x240 to connect to switch modules in the IBM Flex System Enterprise Chassis.



Figure 5-21 shows the rear of the x240 compute node and the locations of the I/O connectors.

Figure 5-21 Rear of the x240 compute node showing the locations of the I/O connectors

Table 5-24 lists the I/O adapters that are supported in the x240.

Table 5-24 Supported I/O adapters for the x240 compute node

Part number   Feature code   Ports   Description
Ethernet adapters
49Y7900       A1BR           4       IBM Flex System EN2024 4-port 1 Gb Ethernet Adapter
90Y3466       A1QY           2       IBM Flex System EN4132 2-port 10 Gb Ethernet Adapter
90Y3554       A1R1           4       IBM Flex System CN4054 10 Gb Virtual Fabric Adapter
Fibre Channel adapters
69Y1938       A1BM           2       IBM Flex System FC3172 2-port 8 Gb FC Adapter
95Y2375       A2N5           2       IBM Flex System FC3052 2-port 8 Gb FC Adapter
88Y6370       A1BP           2       IBM Flex System FC5022 2-port 16Gb FC Adapter
InfiniBand adapters
90Y3454       A1QZ           2       IBM Flex System IB6132 2-port FDR InfiniBand Adapter

Requirement: Any supported I/O adapter can be installed in either I/O connector. However, you must be consistent not only across chassis, but across all compute nodes.

5.2.12 Systems management

The following section describes some of the systems management features that are available with the x240.


Front panel LEDs and controls

The front of the x240 includes several LEDs and controls that assist in systems management. They include a hard disk drive activity LED, status LEDs, and power, identify, check log, fault, and light path diagnostic LEDs. Figure 5-22 shows the location of the LEDs and controls on the front of the x240.

Figure 5-22 The front of the x240 with the front panel LEDs and controls shown

Table 5-25 describes the front panel LEDs.

Table 5-25 x240 front panel LED information


LED Color Description

Power Green This LED lights solid when system is powered up. When the compute node is initially plugged into a chassis, this LED is off. If the power-on button is pressed, the integrated management module (IMM) flashes this LED until it determines the compute node is able to power up. If the compute node is able to power up, the IMM powers the compute node on and turns on this LED solid. If the compute node is not able to power up, the IMM turns off this LED and turns on the information LED. When this button is pressed with the x240 out of the chassis, the light path LEDs are lit.

Location Blue You can use this LED to locate the compute node in the chassis by requesting it to flash from the chassis management module console. The IMM flashes this LED when instructed to by the Chassis Management Module. This LED functions only when the x240 is powered on.

Check error log Yellow The IMM turns on this LED when a condition occurs that prompts the user to check the system error log in the Chassis Management Module.

Fault Yellow This LED lights solid when a fault is detected somewhere on the compute node. If this indicator is on, the general fault indicator on the chassis front panel should also be on.

Hard disk drive activity LED Green Each hot-swap hard disk drive has an activity LED. When this LED is flashing, it indicates that the drive is in use.

Hard disk drive status LED Yellow When this LED is lit, it indicates that the drive has failed. If an optional IBM ServeRAID controller is installed in the server, when this LED is flashing slowly (one flash per second), it indicates that the drive is being rebuilt. When the LED is flashing rapidly (three flashes per second), it indicates that the controller is identifying the drive.


Table 5-26 describes the x240 front panel controls.

Table 5-26 x240 front panel control information

Power on / off button (recessed, with power LED): If the x240 is off, pressing this button causes the x240 to power up and start loading. When the x240 is on, pressing this button causes a graceful shutdown of the individual x240 so that it is safe to remove. This process includes shutting down the operating system (if possible) and removing power from the x240. If an operating system is running, the button might need to be held for approximately 4 seconds to initiate the shutdown. Protect this button from accidental activation. Group it with the Power LED.

NMI (recessed; it can be accessed only by using a small pointed object): Causes an NMI for debugging purposes.

Power LED

The status of the power LED of the x240 shows the power status of the x240 compute node. It also indicates the discovery status of the node by the Chassis Management Module. The power LED states are listed in Table 5-27.

Table 5-27 The power LED states of the x240 compute node

Power LED state       Status of compute node
Off                   No power to compute node
On; fast flash mode   Compute node has power; Chassis Management Module is in discovery mode (handshake)
On; slow flash mode   Compute node has power; power is in stand-by mode
On; solid             Compute node has power; compute node is operational

Restriction: The power button does not operate when the power LED is in fast flash mode.
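The states in Table 5-27, together with the restriction above, can be folded into a small lookup. The following Python sketch is illustrative only; the state names are simplified labels, not values reported by any IBM interface.

# Illustrative interpretation of the power LED states in Table 5-27.
POWER_LED_STATES = {
    "off":        ("No power to the compute node", True),
    "fast flash": ("Node has power; Chassis Management Module discovery (handshake) in progress", False),
    "slow flash": ("Node has power; power is in stand-by mode", True),
    "solid":      ("Node has power; compute node is operational", True),
}

def interpret_power_led(state):
    status, power_button_active = POWER_LED_STATES[state]
    if not power_button_active:
        status += " (power button does not operate in this state)"
    return status

print(interpret_power_led("fast flash"))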

Light path diagnostics panel

For quick problem determination when located physically at the server, the x240 offers a three step guided path:

1. The Fault LED on the front panel
2. The light path diagnostics panel shown in Figure 5-23 on page 175
3. LEDs next to key components on the system board



The x240 light path diagnostics panel is visible when you remove the server from the chassis. The panel is on the upper right of the compute node as shown in Figure 5-23.

Figure 5-23 Location of x240 light path diagnostics panel

To illuminate the light path diagnostics LEDs, power off the compute node, slide it out of the chassis, and press the power button. The power button doubles as the light path diagnostics remind button when the server is removed from the chassis.

The meaning of each LED in the light path diagnostics panel is listed in Table 5-28.

Table 5-28 Light path panel LED definitions

LED     Color    Description
LP      Green    The light path diagnostics panel is operational
S BRD   Yellow   System board error is detected
MIS     Yellow   A mismatch has occurred between the processors, DIMMs, or HDDs within the configuration as reported by POST
NMI     Yellow   A non-maskable interrupt (NMI) has occurred
TEMP    Yellow   An over-temperature condition has occurred that was critical enough to shut down the server
MEM     Yellow   A memory fault has occurred. The corresponding DIMM error LEDs on the system board are also lit.
ADJ     Yellow   A fault is detected in the adjacent expansion unit (if installed)

Integrated Management Module II

Each x240 server has an IMM2 onboard, and uses the Unified Extensible Firmware Interface (UEFI) to replace the older BIOS interface.

The IMM2 provides the following major features as standard:

� IPMI v2.0-compliance

� Remote configuration of IMM2 and UEFI settings without the need to power on the server

LED Color Meaning

LP Green The light path diagnostics panel is operational

S BRD Yellow System board error is detected

MIS Yellow A mismatch has occurred between the processors, DIMMs, or HDDs within the configuration as reported by POST

NMI Yellow A non-maskable interrupt (NMI) has occurred

TEMP Yellow An over-temperature condition has occurred that was critical enough to shut down the server

MEM Yellow A memory fault has occurred. The corresponding DIMM error LEDs on the system board are also lit.

ADJ Yellow A fault is detected in the adjacent expansion unit (if installed)


� Remote access to system fan, voltage, and temperature values

� Remote IMM and UEFI update

� UEFI update when the server is powered off

� Remote console by way of a serial over LAN

� Remote access to the system event log

� Predictive failure analysis and integrated alerting features (for example, by using Simple Network Management Protocol (SNMP))

� Remote presence, including remote control of server by using a Java or ActiveX client

� Operating system failure window (blue screen) capture and display through the web interface

� Virtual media that allow the attachment of a diskette drive, CD/DVD drive, USB flash drive, or disk image to a server

For more information about the IMM, see 3.4.1, “Integrated Management Module II” on page 43.
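Because the IMM2 is IPMI v2.0-compliant and its TCP/IP address is reachable on the local network, standard IPMI tooling can query it. The following Python sketch is a hypothetical example, not an IBM-documented procedure: it assumes the open source ipmitool utility is installed, and the address and credentials shown are placeholders to be replaced with your own values.

import subprocess

IMM_HOST = "192.0.2.10"      # placeholder IMM2 address
IMM_USER = "USERID"          # placeholder credentials
IMM_PASS = "PASSW0RD"

def ipmi(*args):
    """Run an ipmitool command against the IMM2 over the IPMI 2.0 lanplus interface."""
    cmd = ["ipmitool", "-I", "lanplus", "-H", IMM_HOST,
           "-U", IMM_USER, "-P", IMM_PASS, *args]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

print(ipmi("chassis", "status"))            # power state and last power event
print(ipmi("sel", "list"))                  # system event log entries
print(ipmi("sdr", "type", "Temperature"))   # temperature sensor readings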

5.2.13 Operating system support

The following operating systems are supported by the x240:

� Microsoft Windows Server 2008 HPC Edition

� Microsoft Windows Server 2008 R2

� Microsoft Windows Server 2008, Datacenter x64 Edition

� Microsoft Windows Server 2008, Enterprise x64 Edition

� Microsoft Windows Server 2008, Standard x64 Edition

� Microsoft Windows Server 2008, Web x64 Edition

� Red Hat Enterprise Linux 5 Server with Xen x64 Edition

� Red Hat Enterprise Linux 5 Server x64 Edition

� Red Hat Enterprise Linux 6 Server x64 Edition

� SUSE LINUX Enterprise Server 10 for AMD64/EM64T

� SUSE LINUX Enterprise Server 11 for AMD64/EM64T

� SUSE LINUX Enterprise Server 11 with Xen for AMD64/EM64T

� VMware ESX 4.1

� VMware ESXi 4.1

� VMware vSphere 5

For the latest list of supported operating systems, see IBM ServerProven at:

http://www-03.ibm.com/systems/info/x86servers/serverproven/compat/us/nos/flexmatrix.shtml

Remember: Unlike IBM BladeCenter, the assigned TCP/IP address of the IMM is available on the local network. This address allows you to remotely manage the x240 by connecting directly to the IMM independent of the FSM or CMM.


5.3 IBM Flex System x220 Compute Node

The IBM Flex System x220 Compute Node, machine type 7906, is the next generation cost-optimized compute node designed for less demanding workloads and low-density virtualization. The x220 is efficient and equipped with flexible configuration options and advanced management to run a broad range of workloads.

5.3.1 Introduction

The IBM Flex System x220 Compute Node is a high-availability, scalable compute node optimized to support the next-generation microprocessor technology. With a balance of cost and system features, the x220 is an ideal platform for general business workloads. This section describes the key features of the server.

Figure 5-24 shows the front of the compute node, showing the location of the controls, LEDs, and connectors.

Figure 5-24 IBM Flex System x220 Compute Node



Figure 5-25 shows the internal layout and major components of the x220.

Figure 5-25 Exploded view of the x220, showing the major components

Table 5-29 lists the features of the x220.

Table 5-29 IBM Flex System x220 Compute Node specifications


Components Specification

Form factor Half-wide compute node.

Chassis support IBM Flex System Enterprise Chassis.

Processor Up to two Intel Xeon Processor E5-2400 product family processors. These processors can be eight-core (up to 2.3 GHz), six-core (up to 2.4 GHz), or quad-core (up to 2.2 GHz). There is one QPI link that runs at 8.0 GTps, L3 cache up to 20 MB, and memory speeds up to 1600 MHz. The server also supports one Intel Pentium Processor 1400 product family processor with two cores, up to 2.8 GHz, 5 MB L3 cache, and 1066 MHz memory speeds.

Chipset Intel C600 series.

Memory Up to 12 DIMM sockets (six DIMMs per processor) using LP DDR3 DIMMs. RDIMMs and UDIMMs are supported. 1.5 V and low-voltage 1.35 V DIMMs are supported. Support for up to 1600 MHz memory speed, depending on the processor. Three memory channels per processor (two DIMMs per channel). Supports two DIMMs per channel operating at 1600 MHz (2 DPC @ 1600 MHz) with single and dual rank RDIMMs.


Memory maximums
� With RDIMMs: Up to 192 GB with 12x 16 GB RDIMMs and two E5-2400 processors.
� With UDIMMs: Up to 48 GB with 12x 4 GB UDIMMs and two E5-2400 processors.
Half of these maximums and DIMM counts apply with one processor installed.

Memory protection ECC, Chipkill (for x4-based memory DIMMs), and optional memory mirroring and memory rank sparing.

Disk drive bays Two 2.5-inch hot-swap SAS/SATA drive bays that support SAS, SATA, and SSD drives. Optional eXFlash support for up to eight 1.8-inch SSDs. Onboard ServeRAID C105 supports SATA drives only.

Maximum internal storage
With two 2.5-inch hot-swap drives:
� Up to 2 TB with 1 TB 2.5-inch NL SAS HDDs
� Up to 1.8 TB with 900 GB 2.5-inch SAS HDDs
� Up to 2 TB with 1 TB 2.5-inch SATA HDDs
� Up to 512 GB with 256 GB 2.5-inch SATA SSDs
An intermix of SAS and SATA HDDs and SSDs is supported. With eXFlash 1.8-inch SSDs and ServeRAID M5115 RAID adapter: Up to 1.6 TB with eight 200 GB 1.8-inch SSDs.

RAID support � Software RAID 0 and 1 with integrated LSI-based 3 Gbps ServeRAID C105 controller; supports SATA drives only. Non-RAID is not supported.

� Optional ServeRAID H1135 RAID adapter with LSI SAS2004 controller, supports SAS/SATA drives with hardware-based RAID 0 and 1. An H1135 adapter is installed in a dedicated PCIe 2.0 x4 connector and does not use either I/O adapter slot (see Figure 5-26 on page 180).

� Optional ServeRAID M5115 RAID adapter with RAID 0, 1, 10, 5, 50 support and 1 GB cache. M5115 uses the I/O adapter slot 1. Can be installed in all models, including models with an Embedded 1 GbE Fabric Connector. Supports up to eight 1.8-inch SSD with expansion kits. Optional flash-backup for cache, RAID 6/60, and SSD performance enabler.

Network interfaces Some models (see Table 5-30 on page 180): Embedded dual-port Broadcom BCM5718 Ethernet Controller that supports Wake on LAN and Serial over LAN, IPv6. TCP/IP offload Engine (TOE) not supported. Routes to chassis I/O module bays 1 and 2 through a Fabric Connector to the chassis midplane. The Fabric Connector precludes the use of I/O adapter slot 1, with the exception that the M5115 can be installed in slot 1 while the Fabric Connector is installed. Remaining models: No network interface standard; optional 1 Gb or 10 Gb Ethernet adapters.

PCI Expansion slots Two connectors for I/O adapters; each connector has PCIe x8+x4 interfaces. Includes an Expansion Connector (PCIe 3.0 x16) for future use to connect a compute node expansion unit. Dedicated PCIe 2.0 x4 interface for ServeRAID H1135 adapter only.

Ports USB ports: One external and two internal ports for an embedded hypervisor. A console breakout cable port on the front of the server provides local KVM and serial ports (cable standard with chassis; additional cables optional).

Systems management UEFI, IBM IMM2 with Renesas SH7757 controller, Predictive Failure Analysis, light path diagnostics panel, automatic server restart, and remote presence. Support for IBM Flex System Manager, IBM Systems Director and Active Energy Manager, and IBM ServerGuide.

Security features Power-on password, administrator's password, and Trusted Platform Module V1.2.

Video Matrox G200eR2 video core with 16 MB video memory integrated into the IMM2. Maximum resolution is 1600x1200 at 75 Hz with 16 M colors.

Limited warranty Three-year customer-replaceable unit and on-site limited warranty with 9x5/NBD.

Operating systems supported Microsoft Windows Server 2008 R2, Red Hat Enterprise Linux 5 and 6, SUSE Linux Enterprise Server 10 and 11, VMware vSphere. For more information, see 5.3.13, “Operating system support” on page 197.

Service and support Optional service upgrades are available through IBM ServicePac® offerings: 4-hour or 2-hour response time, 8-hour fix time, 1-year or 2-year warranty extension, and remote technical support for IBM hardware and selected IBM and OEM software.

Dimensions Width: 217 mm (8.6 in.), height: 56 mm (2.2 in.), depth: 492 mm (19.4 in.)

Weight Maximum configuration: 6.4 kg (14.11 lb).


Figure 5-26 shows the components on the system board of the x220.

Figure 5-26 Layout of the IBM Flex System x220 Compute Node system board

5.3.2 Models

The current x220 models are shown in Table 5-30. All models include 4 GB of memory (one 4 GB DIMM) running at either 1333 MHz or 1066 MHz (depending on model).

Table 5-30 Models of the IBM Flex System x220 Compute Node, type 7906

Model      Intel processor (E5-2400: 2 maximum; Pentium 1400: 1 maximum)   Memory                         RAID adapter     Disk bays (a)      Disks   Embedded 1 GbE (b)   I/O slots (used/max)
7906-A2x   1x Intel Pentium 1403 2C 2.6 GHz 5 MB 1066 MHz 80 W             1x 4 GB UDIMM (1066 MHz) (c)   ServeRAID C105   2x 2.5" hot-swap   Open    Standard             1 / 2 (b)
7906-B2x   1x Intel Xeon E5-2430L 6C 2.0 GHz 15 MB 1333 MHz 60 W           1x 4 GB UDIMM 1333 MHz         ServeRAID C105   2x 2.5" hot-swap   Open    Standard             1 / 2 (b)
7906-C2x   1x Intel Xeon E5-2403 4C 1.8 GHz 10 MB 1066 MHz 80 W            1x 4 GB RDIMM (1066 MHz) (c)   ServeRAID C105   2x 2.5" hot-swap   Open    Standard             1 / 2 (b)
7906-D2x   1x Intel Xeon E5-2420 6C 1.9 GHz 15 MB 1333 MHz 95 W            1x 4 GB RDIMM 1333 MHz         ServeRAID C105   2x 2.5" hot-swap   Open    Standard             1 / 2 (b)
7906-G2x   1x Intel Xeon E5-2430 6C 2.2 GHz 15 MB 1333 MHz 95 W            1x 4 GB RDIMM 1333 MHz         ServeRAID C105   2x 2.5" hot-swap   Open    No                   0 / 2
7906-G4x   1x Intel Xeon E5-2430 6C 2.2 GHz 15 MB 1333 MHz 95 W            1x 4 GB RDIMM 1333 MHz         ServeRAID C105   2x 2.5" hot-swap   Open    Standard             1 / 2 (b)
7906-H2x   1x Intel Xeon E5-2440 6C 2.4 GHz 15 MB 1333 MHz 95 W            1x 4 GB RDIMM 1333 MHz         ServeRAID C105   2x 2.5" hot-swap   Open    Standard             1 / 2 (b)
7906-J2x   1x Intel Xeon E5-2450 8C 2.1 GHz 20 MB 1600 MHz 95 W            1x 4 GB RDIMM 1333 MHz (c)     ServeRAID C105   2x 2.5" hot-swap   Open    No                   0 / 2
7906-L2x   1x Intel Xeon E5-2470 8C 2.3 GHz 20 MB 1600 MHz 95 W            1x 4 GB RDIMM 1333 MHz (c)     ServeRAID C105   2x 2.5" hot-swap   Open    No                   0 / 2

a. The 2.5-inch drive bays can be replaced and expanded with IBM eXFlash and a ServeRAID M5115 RAID controller. This configuration supports up to eight 1.8-inch SSDs.
b. These models include an embedded 1 Gb Ethernet controller. Connections are routed to the chassis midplane by using a Fabric Connector. Precludes the use of I/O connector 1 (except the ServeRAID M5115).
c. For A2x and C2x, the memory operates at 1066 MHz, the memory speed of the processor. For J2x and L2x, memory operates at 1333 MHz to match the installed DIMM, rather than 1600 MHz.




5.3.3 Chassis support

The x220 type 7906 is supported in the IBM Flex System Enterprise Chassis as listed in Table 5-31.

Table 5-31 x220 chassis support

Server   BladeCenter chassis (all)   IBM Flex System Enterprise Chassis
x220     No                          Yes


The x220 is a half wide compute node and requires that the chassis shelf is installed in the IBM Flex System Enterprise Chassis. Figure 5-27 shows the chassis shelf in the chassis.

Figure 5-27 The IBM Flex System Enterprise Chassis showing the chassis shelf

The shelf is required for half-wide compute nodes. To allow for installation of full-wide or larger compute nodes, shelves must be removed from within the chassis. Remove the shelves by sliding the two latches on the shelf towards the center, and then sliding the shelf from the chassis.

5.3.4 System architecture

The IBM Flex System x220 Compute Node features the Intel Xeon E5-2400 series processors. The Xeon E5-2400 series processor has models with either four, six, or eight cores per processor with up to 16 threads per socket. The processors have the following features:

� Up to 20 MB of shared L3 cache
� Hyper-Threading
� Turbo Boost Technology 2.0 (depending on processor model)
� One QPI link that runs at up to 8 GT/s
� One integrated memory controller
� Three memory channels that support up to two DIMMs each

The x220 also supports an Intel Pentium 1403 or 1407 dual-core processor for entry-level server applications. Only one Pentium processor is supported in the x220. CPU socket 2 must be left unused, and only six DIMM sockets are available.


Figure 5-28 shows the system architecture of the x220 system.

Figure 5-28 IBM Flex System x220 Compute Node system board block diagram

The IBM Flex System x220 Compute Node has the following system architecture features as standard:

� Two 2011-pin type R (LGA-2011) processor sockets

� An Intel C600 PCH

� Three memory channels per socket

� Up to two DIMMs per memory channel

� 12 DDR3 DIMM sockets

� Support for UDIMMs and RDIMMs

� One integrated 1 Gb Ethernet controller (1 GbE LOM in diagram)

� One LSI 2004 SAS controller

� Integrated software RAID 0 and 1 with support for the H1135 LSI-based RAID controller

� One IMM2

� Two PCIe 3.0 I/O adapter connectors with one x8 and one x4 host connection each (12 lanes total).

� One internal and one external USB connector


5.3.5 Processor options

The x220 supports the processor options listed in Table 5-32. The server supports one or two Intel Xeon E5-2400 processors, but supports only one Intel Pentium 1403 or 1407 processor. The table also shows which server models have each processor standard. If no corresponding model for a particular processor is listed, the processor is available only through the configure to order process.

Table 5-32 Supported processors for the x220

5.3.6 Memory options

IBM DDR3 memory is compatibility tested and tuned for optimal performance and throughput. IBM memory specifications are integrated into the light path diagnostic procedures for immediate system performance feedback and optimum system uptime. From a service and support standpoint, IBM memory automatically assumes the IBM system warranty, and IBM provides service and support worldwide.

The x220 supports LP DDR3 memory RDIMMs and UDIMMs. The server supports up to six DIMMs when one processor is installed, and up to 12 DIMMs when two processors are installed. Each processor has three memory channels, with two DIMMs per channel.

The following rules apply when selecting the memory configuration:

� Mixing 1.5 V and 1.35 V DIMMs in the same server is supported. In such a case, all DIMMs operate at 1.5 V.

� The maximum number of ranks supported per channel is eight.

� The maximum quantity of DIMMs that can be installed in the server depends on the number of processors. For more information, see the “Max. qty supported” row in Table 5-33 on page 185.

Part number Intel Xeon processor description Models where used

90Y4793 Intel Xeon E5-2470 8C 2.3 GHz 20 MB 1600 MHz 95 W L2x

90Y4795 Intel Xeon E5-2450 8C 2.1 GHz 20 MB 1600 MHz 95 W J2x

90Y4796 Intel Xeon E5-2440 6C 2.4 GHz 15 MB 1333 MHz 95 W H2x

90Y4797 Intel Xeon E5-2430 6C 2.2 GHz 15 MB 1333 MHz 95 W G2x, G4x

90Y4799 Intel Xeon E5-2420 6C 1.9 GHz 15 MB 1333 MHz 95 W D2x

90Y4800 Intel Xeon E5-2407 4C 2.2 GHz 10 MB 1066 MHz 80 W -

90Y4801 Intel Xeon E5-2403 4C 1.8 GHz 10 MB 1066 MHz 80 W C2x

90Y4804 Intel Xeon E5-2450L 8C 1.8 GHz 20 MB 1600 MHz 70 W -

90Y4805 Intel Xeon E5-2430L 6C 2.0 GHz 15 MB 1333 MHz 60 W B2x

None Intel Pentium 1403 2C 2.6 GHz 5 MB 1066 MHz 80 W A2x

Nonea   Intel Pentium 1407 2C 2.8 GHz 5 MB 1066 MHz 80 W   -

a. The Intel Pentium 1407 is available through configure to order or special bid only.


� All DIMMs in all processor memory channels operate at the same speed, which is determined as the lowest value of:

– Memory speed supported by a specific processor.

– Lowest maximum operating speed for the selected memory configuration that depends on rated speed. For more information, see the “Max. operating speed” section in Table 5-33. The shaded cells indicate that the speed indicated is the maximum that the DIMM allows.

Cells highlighted with a gray background indicate when the specific combination of DIMM voltage and number of DIMMs per channel still allows the DIMMs to operate at rated speed.
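The "lowest value" rule lends itself to a small helper function. The following Python sketch is illustrative only: the configuration limits in the lookup are example values in the spirit of Table 5-33 (for instance, low-voltage UDIMMs dropping to 1066 MHz at two DIMMs per channel, and quad-rank RDIMMs limited to 800 MHz), not a complete or authoritative copy of that table.

# Sketch of the "lowest value wins" memory speed rule.
CONFIG_LIMIT_MHZ = {
    # (dimm_type, rank, dimms_per_channel): max operating speed in MHz (illustrative)
    ("UDIMM", "dual", 1): 1333,
    ("UDIMM", "dual", 2): 1066,
    ("RDIMM", "dual", 1): 1333,
    ("RDIMM", "dual", 2): 1333,
    ("RDIMM", "quad", 1): 800,
    ("RDIMM", "quad", 2): 800,
}

def effective_memory_speed(processor_max_mhz, dimm_rated_mhz, dimm_type, rank, dpc):
    """All DIMMs in all channels run at the lowest of the three limits."""
    config_limit = CONFIG_LIMIT_MHZ[(dimm_type, rank, dpc)]
    return min(processor_max_mhz, dimm_rated_mhz, config_limit)

# Example: a 1333 MHz-capable processor with dual-rank 1333 MHz UDIMMs, 2 DIMMs per channel
print(effective_memory_speed(1333, 1333, "UDIMM", "dual", 2))   # -> 1066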

Table 5-33 Maximum memory speeds

The following memory protection technologies are supported:

� ECC� Chipkill (for x4-based memory DIMMs; look for “x4” in the DIMM description) � Memory mirroring� Memory sparing

If memory mirroring is used, DIMMs must be installed in pairs (minimum of one pair per processor). Both DIMMs in a pair must be identical in type and size.

If memory rank sparing is used, a minimum of one quad-rank DIMM or two single-rank or dual-rank DIMMs must be installed per populated channel. These DIMMs do not need to be identical. In rank sparing mode, one rank of a DIMM in each populated channel is reserved as spare memory. The size of a rank varies depending on the DIMMs installed.

Spec UDIMMs RDIMMs

Rank Single rank Dual rank Single rank Dual rank Quad rank

Part numbers 49Y1403 (2 GB)

49Y1404 (4 GB)

49Y1406 (2 GB)

49Y1407 (4 GB)

49Y1397 (8 GB)

90Y3109(4 GB)

49Y1400(16 GB)

Rated speed 1333 MHz 1333 MHz 1333 MHz 1333 MHz 1600 MHz 1066 MHz

Rated voltage 1.35 V 1.35 V 1.35 V 1.35 V 1.5 V 1.35 V

Operating voltage

1.35 V 1.5 V 1.35 V 1.5 V 1.35 V 1.5 V 1.35 V 1.5 V 1.5 V 1.35 V 1.5 V

Max quantitya

a. The maximum quantity supported is shown for two processors installed. When one processor is installed, the maximum quantity supported is half of that shown.

12 12 12 12 12 12 12 12 12 12 12

Largest DIMM 2 GB 2 GB 4 GB 4 GB 2 GB 2 GB 8 GB 8 GB 4 GB 16 GB 16 GB

Max memory capacity

24 GB 24 GB

48 GB 48 GB

24 GB 24 GB

96 GB 96 GB

48 GB 192 GB

192 GB

Max memory at max speed

12 GB 12 GB

24 GB 24 GB

24 GB 24 GB

96 GB 96 GB

48 GB 192 GB

192 GB

Maximum operating speed (MHz)

1 DIMM per channel

1333 1333 1333 1333 1333 1333 1333 1333 1600 800 800

2 DIMMs per channel

1066 1066 1066 1066 1333 1333 1333 1333 1600 800 800



Table 5-34 lists the memory options available for the x220 server. DIMMs can be installed one at a time, but for performance reasons, install them in sets of three (one for each of the three memory channels).

Table 5-34 Memory options for the x220

Part number   Description                                                                Models where used
Registered DIMM (RDIMM) modules
49Y1406       4 GB (1x 4 GB, 1Rx4, 1.35 V) PC3L-10600 CL9 ECC DDR3 1333 MHz LP RDIMM     J2x, L2x
49Y1407       4 GB (1x 4 GB, 2Rx8, 1.35 V) PC3L-10600 CL9 ECC DDR3 1333 MHz LP RDIMM     C2x, D2x, G2x, G4x, H2x
90Y3109       8 GB (1x 8 GB, 2Rx4, 1.5 V) PC3-12800 CL11 ECC DDR3 1600 MHz LP RDIMM      -
49Y1397       8 GB (1x 8 GB, 2Rx4, 1.35 V) PC3L-10600 CL9 ECC DDR3 1333 MHz LP RDIMM     -
49Y1400       16 GB (1x 16 GB, 4Rx4, 1.35 V) PC3L-8500 CL7 ECC DDR3 1066 MHz LP RDIMM    -
Unbuffered DIMM (UDIMM) modules
49Y1403       2 GB (1x 2 GB, 1Rx8, 1.35 V) PC3L-10600 ECC DDR3 1333 MHz LP UDIMM         A2x, B2x
49Y1404       4 GB (1x 4 GB, 2Rx8, 1.35 V) PC3L-10600 CL9 ECC DDR3 1333 MHz LP UDIMM     -

5.3.7 Internal disk storage controllers

The x220 server has two 2.5-inch hot-swap drive bays accessible from the front of the blade server (Figure 5-24 on page 177). The server optionally supports 1.8-inch solid-state drives, as described in “ServeRAID M5115 configurations and options” on page 188.

The x220 supports three disk controllers:

� ServeRAID C105: An onboard SATA controller with software RAID capabilities� ServeRAID H1135: An entry level hardware RAID controller� ServeRAID M5115: An advanced RAID controller with cache, backup, and RAID options

These three controllers are mutually exclusive. Table 5-35 lists the ordering information.

Table 5-35 Internal storage controller ordering information

Part number   Description                                                            Maximum quantity
-             Integrated ServeRAID C105                                              1
90Y4750       ServeRAID H1135 Controller for IBM Flex System and IBM BladeCenter     1
90Y4390       ServeRAID M5115 SAS/SATA Controller                                    1

ServeRAID C105 controller

On standard models, the two 2.5-inch drive bays are connected to a ServeRAID C105 onboard SATA controller with software RAID capabilities. The C105 function is embedded in the Intel C600 chipset.



The C105 has the following features:

� Support for SATA drives (SAS is not supported)
� Support for RAID 0 and RAID 1 (non-RAID is not supported)
� 6 Gbps throughput per port
� Support for up to two volumes
� Support for virtual drive sizes greater than 2 TB
� Fixed stripe unit size of 64 KB
� Support for MegaRAID Storage Manager management software

ServeRAID H1135

The x220 also supports an entry level hardware RAID solution with the addition of the ServeRAID H1135 Controller for IBM Flex System and BladeCenter. The H1135 is installed in a dedicated slot (Figure 5-26 on page 180). When the H1135 adapter is installed, the C105 controller is disabled.

The H1135 has the following features:

� Based on the LSI SAS2004 6 Gbps SAS 4-port controller
� PCIe 2.0 x4 host interface
� CIOv form factor (supported in the x220 and BladeCenter HS23E)
� Support for SAS, SATA, and SSD drives
� Support for RAID 0, RAID 1, and non-RAID
� 6 Gbps throughput per port
� Support for up to two volumes
� Fixed stripe size of 64 KB
� Native driver support in Windows, Linux, and VMware
� S.M.A.R.T. support
� Support for MegaRAID Storage Manager management software

ServeRAID M5115

The ServeRAID M5115 SAS/SATA Controller (90Y4390) is an advanced RAID controller that supports RAID 0, 1, 10, 5, 50, and optional 6 and 60. It includes 1 GB of cache, which can be backed up to flash memory when attached to an optional supercapacitor. The M5115 attaches to the I/O adapter 1 connector. It can be attached even if the Fabric Connector is installed (used to route the Embedded Gb Ethernet to chassis bays 1 and 2). The ServeRAID M5115 cannot be installed if an adapter is installed in I/O adapter slot 1. When the M5115 adapter is installed, the C105 controller is disabled.

The ServeRAID M5115 supports combinations of 2.5-inch drives and 1.8-inch solid-state drives:

� Up to two 2.5-inch drives only
� Up to four 1.8-inch drives only
� Up to two 2.5-inch drives, plus up to four 1.8-inch SSDs
� Up to eight 1.8-inch SSDs

For more information about these configurations, see “ServeRAID M5115 configurations and options” on page 188.

Restriction: There is no native (in-box) driver for Windows and Linux. The drivers must be downloaded separately. In addition, there is no support for VMware, Hyper-V, Xen, or SSDs.


The ServeRAID M5115 controller has the following specifications:

� Eight internal 6 Gbps SAS/SATA ports.

� PCI Express 3.0 x8 host interface.

� 6 Gbps throughput per port.

� 800 MHz dual-core IBM PowerPC processor with an LSI SAS2208 6 Gbps ROC controller.

� Support for RAID levels 0, 1, 10, 5, 50 standard; support for RAID 6 and 60 with optional upgrade using 90Y4411.

� Optional onboard 1 GB data cache (DDR3 running at 1333 MHz) with optional flash backup (MegaRAID CacheVault technology) as part of the Enablement Kit 90Y4342.

� Support for SAS and SATA HDDs and SSDs.

� Support for intermixing SAS and SATA HDDs and SSDs. Mixing different types of drives in the same array (drive group) is not recommended.

� Support for SEDs with MegaRAID SafeStore.

� Optional support for SSD performance acceleration with MegaRAID FastPath and SSD caching with MegaRAID CacheCade Pro 2.0 (90Y4447).

� Support for up to 64 virtual drives, up to 128 drive groups, and up to 16 virtual drives per drive group. Also supports up to 32 physical drives per drive group.

� Support for LUN sizes up to 64 TB.

� Configurable stripe size up to 1 MB.

� Compliant with DDF CoD.

� S.M.A.R.T. support.

� MegaRAID Storage Manager management software.

ServeRAID M5115 configurations and options

The x220 with the addition of the M5115 controller supports 2.5-inch drives or 1.8-inch eXFlash SSDs or combinations of the two.

At least one hardware kit is required with the ServeRAID M5115 controller. These hardware kits enable specific drive support:

� ServeRAID M5100 Series Enablement Kit for IBM Flex System x220 (90Y4424) enables support for up to two 2.5-inch HDDs or SSDs in the hot-swap bays in the front of the server. It includes a CacheVault unit, which enables MegaRAID CacheVault flash cache protection.

This enablement kit replaces the standard two-bay backplane that is attached through the system board to an onboard controller. The new backplane attaches with an included flex cable to the M5115 controller. It also includes an air baffle, which also serves as an attachment for the CacheVault unit.

MegaRAID CacheVault flash cache protection uses NAND flash memory powered by a supercapacitor to protect data stored in the controller cache. This module eliminates the need for a lithium-ion battery commonly used to protect DRAM cache memory on PCI RAID controllers.

To avoid data loss or corruption during a power or server failure, CacheVault technology transfers the contents of the DRAM cache to NAND flash. This process uses power from the supercapacitor. After power is restored to the RAID controller, the saved data is transferred from the NAND flash back to the DRAM cache. The DRAM cache can then be flushed to disk.


� ServeRAID M5100 Series IBM eXFlash Kit for IBM Flex System x220 (90Y4425) enables eXFlash support for up to four 1.8-inch SSDs. This kit replaces the standard two-bay backplane with a four-bay eXFlash backplane that attaches with an included flex cable to the M5115 controller. Because only SSDs are supported, a CacheVault unit is not required, and so this kit does not have a supercapacitor.

� ServeRAID M5100 Series SSD Expansion Kit for IBM Flex System x220 (90Y4426) enables support for up to four internal 1.8-inch SSDs. This kit includes two air baffles, left and right, that can attach two 1.8-inch SSD attachment locations. It also contains flex cables for attachment to up to four 1.8-inch SSDs.

Table 5-36 shows the kits required for each combination of drives. For example, if you plan to install eight 1.8-inch SSDs, you need the M5115 controller, the eXFlash kit, and the SSD Expansion kit.

Table 5-36 ServeRAID M5115 hardware kits

Maximum number of   Maximum number of        Components required
2.5-inch drives     1.8-inch SSDs
2                   0                        ServeRAID M5115 (90Y4390) and Enablement Kit (90Y4424)
0                   4 (front)                ServeRAID M5115 (90Y4390) and eXFlash Kit (90Y4425)
2                   4 (internal)             ServeRAID M5115 (90Y4390), Enablement Kit (90Y4424), and SSD Expansion Kit (90Y4426)
0                   8 (front and internal)   ServeRAID M5115 (90Y4390), eXFlash Kit (90Y4425), and SSD Expansion Kit (90Y4426)
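The combinations in Table 5-36 can also be expressed as a small selection helper. The following Python sketch is a hypothetical illustration, not IBM tooling; it returns the part numbers listed in this section for a requested mix of 2.5-inch drives and 1.8-inch SSDs.

# Mirrors Table 5-36: which options are needed for a given drive mix on the x220.
M5115      = "90Y4390"   # ServeRAID M5115 SAS/SATA Controller
ENABLEMENT = "90Y4424"   # Enablement Kit: two 2.5-inch bays plus CacheVault protection
EXFLASH    = "90Y4425"   # eXFlash Kit: four front 1.8-inch SSD bays
EXPANSION  = "90Y4426"   # SSD Expansion Kit: four internal 1.8-inch SSD positions

def required_kits(front_25_drives, ssds_18):
    if front_25_drives > 2 or ssds_18 > 8:
        raise ValueError("at most two 2.5-inch drives and eight 1.8-inch SSDs")
    kits = [M5115]
    if front_25_drives:
        kits.append(ENABLEMENT)              # 2.5-inch bays need the Enablement Kit
    front_ssds = 0 if front_25_drives else min(ssds_18, 4)
    internal_ssds = ssds_18 - front_ssds
    if front_ssds:
        kits.append(EXFLASH)                 # front SSDs need the eXFlash backplane
    if internal_ssds:
        if internal_ssds > 4:
            raise ValueError("only four internal SSD positions exist")
        kits.append(EXPANSION)               # internal SSDs need the SSD Expansion Kit
    return kits

print(required_kits(2, 4))   # -> ['90Y4390', '90Y4424', '90Y4426']
print(required_kits(0, 8))   # -> ['90Y4390', '90Y4425', '90Y4426']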

Figure 5-29 shows how the ServeRAID M5115 and the Enablement Kit are installed in the server to support two 2.5-inch drives with MegaRAID CacheVault flash cache protection (row 1 of Table 5-36).

Figure 5-29 The ServeRAID M5115 and the Enablement Kit installed

Tip: The Enablement Kit is only required if 2.5-inch drives are to be used. If you plan to install four or eight 1.8-inch SSDs only, then this kit is not required.



Figure 5-30 shows how the ServeRAID M5115 and eXFlash and SSD Expansion Kits are installed in the server to support eight 1.8-inch solid-state drives (row 4 of Table 5-36 on page 189).

Figure 5-30 ServeRAID M5115 with eXFlash and SSD Expansion Kits installed

The eight SSDs are installed in the following locations:

� Four in the front of the system in place of the two 2.5-inch drive bays
� Two in a tray above the memory banks for processor 1
� Two in a tray above the memory banks for processor 2

Optional add-ons to the ServeRAID M5115 controller are RAID 6 support, SSD performance accelerator, and SSD caching enabler. The FoD license upgrades are listed in Table 5-37.

Table 5-37 Supported upgrade features

Part number   Description                                                                                   Maximum supported
90Y4410       ServeRAID M5100 Series RAID 6 Upgrade for IBM Flex System                                     1
90Y4412       ServeRAID M5100 Series Performance Accelerator for IBM Flex System (MegaRAID FastPath)        1
90Y4447       ServeRAID M5100 Series SSD Caching Enabler for IBM Flex System (MegaRAID CacheCade Pro 2.0)   1

These features are described as follows:

� RAID 6 Upgrade (90Y4410)

Adds support for RAID 6 and RAID 60. This is an FoD license.

� Performance Accelerator (90Y4412)

The Performance Accelerator for IBM Flex System, implemented by using the LSI MegaRAID FastPath software, provides high-performance I/O acceleration for SSD-based virtual drives. It uses an extremely low-latency I/O path to increase the maximum IOPS capability of the controller. This feature boosts the performance of applications with a highly random data storage access pattern, such as transactional databases. Part number 90Y4412 is an FoD license.

� SSD Caching Enabler for traditional hard drives (90Y4447)

The SSD Caching Enabler for IBM Flex System, implemented by using the LSI MegaRAID CacheCade Pro 2.0, is designed to accelerate the performance of HDD arrays. It can do so with only an incremental investment in SSD technology. The feature enables the SSDs to be configured as a dedicated cache to help maximize the I/O performance for transaction-intensive applications, such as databases and web serving. The feature tracks data storage access patterns and identifies the most frequently accessed data. The hot data is then automatically stored on the SSDs that are assigned as a dedicated cache pool on the ServeRAID controller. Part number 90Y4447 is a FoD license. This feature requires that at least one SSD drive is installed.

5.3.8 Supported internal drives

The x220 supports 1.8-inch and 2.5-inch drives.

Supported 1.8-inch drives

The 1.8-inch solid-state drives supported with the ServeRAID M5115 are listed in Table 5-38.

Table 5-38 Supported 1.8-inch solid-state drives

Part number   Description                         Maximum supported
43W7746       IBM 200 GB SATA 1.8-inch MLC SSD    8
43W7726       IBM 50 GB SATA 1.8-inch MLC SSD     8

Supported 2.5-inch drives

The 2.5-inch drive bays support SAS or SATA HDDs or SATA SSDs. Table 5-39 lists the supported 2.5-inch drive options. The maximum quantity supported is two.

Table 5-39 2.5-inch drive options for internal disk storage

Part number   Description   Supported by ServeRAID controller (C105 / H1135 / M5115)

10 K SAS hard disk drives

42D0637 IBM 300 GB 10 K 6 Gbps SAS 2.5-inch SFF Slim-HS HDD No Supported Supported

49Y2003 IBM 600 GB 10 K 6 Gbps SAS 2.5-inch SFF Slim-HS HDD No Supported Supported

81Y9650 IBM 900 GB 10 K 6 Gbps SAS 2.5-inch SFF HS HDD No Supported Supported

15 K SAS hard disk drives

42D0677 IBM 146 GB 15 K 6 Gbps SAS 2.5-inch SFF Slim-HS HDD No Supported Supported

81Y9670 IBM 300 GB 15 K 6 Gbps SAS 2.5-inch SFF HS HDD No Supported Supported

NL SATA

81Y9722 IBM 250 GB 7.2 K 6 Gbps NL SATA 2.5-inch SFF HS HDD Supported Supported Supported

81Y9726 IBM 500 GB 7.2 K 6 Gbps NL SATA 2.5-inch SFF HS HDD Supported Supported Supported

81Y9730 IBM 1 TB 7.2 K 6 Gbps NL SATA 2.5-inch SFF HS HDD Supported Supported Supported

NL SAS

42D0707 IBM 500 GB 7200 6 Gbps NL SAS 2.5-inch SFF Slim-HS HDD No Supported Supported

81Y9690 IBM 1 TB 7.2 K 6 Gbps NL SAS 2.5-inch SFF HS HDD No Supported Supported

Solid-state drives

43W7718 IBM 200 GB SATA 2.5-inch MLC HS SSD No No Supported

90Y8643 IBM 256 GB SATA 2.5-inch MLC HS Entry SSD No No Supported

90Y8648 IBM 128 GB SATA 2.5-inch MLC HS Entry SSD No No Supported


5.3.9 Embedded 1 Gb Ethernet controller

Some models of the x220 include an Embedded 1 Gb Ethernet controller (also known as LOM) built into the system board. Table 5-30 on page 180 lists what models of the x220 include the controller. Each x220 model that includes the controller also has the Compute Node Fabric Connector installed in I/O connector 1 and physically screwed onto the system board. The Compute Node Fabric Connector provides connectivity to the Enterprise Chassis midplane. Figure 5-26 on page 180 shows the location of the Fabric Connector.

The Fabric Connector enables port 1 on the controller to be routed to I/O module bay 1. Similarly, port 2 is routed to I/O module bay 2. The Fabric Connector can be unscrewed and removed, if required, to allow the installation of an I/O adapter on I/O connector 1.

The Embedded 1 Gb Ethernet controller has the following features:

� Broadcom BCM5718 based
� Dual-port Gigabit Ethernet controller
� PCIe 2.0 x2 host bus interface
� Supports Wake on LAN
� Supports Serial over LAN
� Supports IPv6

Restriction: TCP/IP offload engine (TOE) is not supported.

5.3.10 I/O expansion

Like other IBM Flex System compute nodes, the x220 has two PCIe 3.0 I/O expansion connectors for attaching I/O adapters. On the x220, each of these connectors has 12 PCIe lanes. These lanes are implemented as one x8 link (connected to the first application-specific integrated circuit (ASIC) on the installed adapter) and one x4 link (connected to the second ASIC on the installed adapter).

The I/O expansion connectors are high-density 216-pin PCIe connectors. Installing I/O adapters allows the x220 to connect to switch modules in the IBM Flex System Enterprise Chassis. The x220 also has a third expansion connector designed for future expansion options.



Figure 5-31 shows the rear of the x220 compute node and the locations of the I/O connectors.

Figure 5-31 Rear of the x220 compute node showing the locations of the I/O connectors

Table 5-40 lists the I/O adapters that are supported in the x220.

Table 5-40 Supported I/O adapters for the x220 compute node

Part number   Feature code   Ports   Description
Ethernet adapters
49Y7900       A1BR           4       IBM Flex System EN2024 4-port 1 Gb Ethernet Adapter
90Y3466       A1QY           2       IBM Flex System EN4132 2-port 10 Gb Ethernet Adapter
90Y3554       A1R1           4       IBM Flex System CN4054 10 Gb Virtual Fabric Adapter
Fibre Channel adapters
69Y1938       A1BM           2       IBM Flex System FC3172 2-port 8 Gb FC Adapter
95Y2375       A2N5           2       IBM Flex System FC3052 2-port 8 Gb FC Adapter
88Y6370       A1BP           2       IBM Flex System FC5022 2-port 16Gb FC Adapter
InfiniBand adapters
90Y3454       A1QZ           2       IBM Flex System IB6132 2-port FDR InfiniBand Adapter

Restriction: Any supported I/O adapter can be installed in either I/O connector. However, you must be consistent not only across chassis but across all compute nodes.

5.3.11 Integrated virtualization

The x220 offers USB flash drive options preinstalled with versions of VMware ESXi. This is an embedded version of VMware ESXi and is fully contained on the flash drive, without requiring any disk space. The USB memory key plugs into one of the two internal USB ports on the x220 system board (Figure 5-26 on page 180). If you install USB keys in both USB ports, both devices are listed in the boot menu. This configuration allows you to boot from either device, or set one as a backup in case the first gets corrupted.


The supported USB memory keys are listed in Table 5-41.

Table 5-41 Virtualization options

Part number   Description                                           Maximum supported
41Y8300       IBM USB Memory Key for VMware ESXi 5.0                2
41Y8298       IBM Blank USB Memory Key for VMware ESXi Downloads    2

5.3.12 Systems management

The following section describes some of the systems management features that are available with the x220.

Front panel LEDs and controls

The front of the x220 includes several LEDs and controls that assist in systems management. They include a hard disk drive activity LED, status LEDs, and power, identify, check log, fault, and light path diagnostic LEDs. Figure 5-32 shows the location of the LEDs and controls on the front of the x220.

Figure 5-32 The front of the x220 with the front panel LEDs and controls shown

Table 5-42 describes the front panel LEDs.

Table 5-42 x220 front panel LED information


LED Color Description

Power Green This LED lights solid when system is powered up. When the compute node is initially plugged into a chassis, this LED is off. If the power-on button is pressed, the IMM flashes this LED until it determines that the compute node is able to power up. If the compute node is able to power up, the IMM powers the compute node on and turns on this LED solid. If the compute node is not able to power up, the IMM turns off this LED and turns on the information LED. When this button is pressed with the server out of the chassis, the light path LEDs are lit.

Location Blue A user can use this LED to locate the compute node in the chassis by requesting it to flash from the chassis management module console. The IMM flashes this LED when instructed to by the Chassis Management Module. This LED functions only when the server is powered on.

Check error log Yellow The IMM turns on this LED when a condition occurs that prompts the user to check the system error log in the Chassis Management Module.

Fault Yellow This LED lights solid when a fault is detected somewhere on the compute node. If this indicator is on, the general fault indicator on the chassis front panel should also be on.

Hard disk drive activity LED Green Each hot-swap hard disk drive has an activity LED, and when this LED is flashing, it indicates that the drive is in use.

Hard disk drive status LED Yellow When this LED is lit, it indicates that the drive has failed. If an optional IBM ServeRAID controller is installed in the server, when this LED is flashing slowly (one flash per second), it indicates that the drive is being rebuilt. When the LED is flashing rapidly (three flashes per second), it indicates that the controller is identifying the drive.

Page 211: Sg 247984

195

Table 5-43 describes the x220 front panel controls.

Table 5-43 x220 front panel control information

Power on / off button (recessed, with power LED): If the server is off, pressing this button causes the server to power up and start loading. When the server is on, pressing this button causes a graceful shutdown of the individual server so it is safe to remove. This process includes shutting down the operating system (if possible) and removing power from the server. If an operating system is running, the button might have to be held for approximately 4 seconds to initiate the shutdown. This button must be protected from accidental activation. Group it with the Power LED.

NMI (recessed; it can be accessed only by using a small pointed object): Causes an NMI for debugging purposes.

Power LED

The status of the power LED of the x220 shows the power status of the compute node. It also indicates the discovery status of the node by the Chassis Management Module. The power LED states are listed in Table 5-44.

Table 5-44 The power LED states of the x220 compute node

Power LED state       Status of compute node
Off                   No power to compute node
On; fast flash mode   Compute node has power; Chassis Management Module is in discovery mode (handshake)
On; slow flash mode   Compute node has power; power is in stand-by mode
On; solid             Compute node has power; compute node is operational

Exception: The power button does not operate when the power LED is in fast flash mode.



Light path diagnostic procedures

For quick problem determination when located physically at the server, the x220 offers a three step guided path:

1. The Fault LED on the front panel
2. The light path diagnostics panel, shown in Figure 5-33
3. LEDs next to key components on the system board

The x220 light path diagnostics panel is visible when you remove the server from the chassis. The panel is on the upper right of the compute node as shown in Figure 5-33.

Figure 5-33 Location of x220 light path diagnostics panel

To illuminate the light path diagnostics LEDs, power off the compute node, slide it out of the chassis, and press the power button. The power button doubles as the light path diagnostics remind button when the server is removed from the chassis.

The meaning of each LED in the light path diagnostics panel is listed in Table 5-45.

Table 5-45 Light path panel LED definitions

LED Color Meaning

LP Green The light path diagnostics panel is operational

S BRD Yellow System board error is detected

MIS Yellow A mismatch has occurred between the processors, DIMMs, or HDDs within the configuration as reported by POST

NMI Yellow An NMI has occurred

TEMP Yellow An over-temperature condition has occurred that was critical enough to shut down the server

MEM Yellow A memory fault has occurred. The corresponding DIMM error LEDs on the system board should also be lit.

ADJ Yellow A fault is detected in the adjacent expansion unit (if installed)
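The LED meanings in Table 5-45 amount to a simple lookup, as in the following illustrative Python sketch (the labels mirror the table; nothing here is produced by IBM firmware or tooling).

# Illustrative lookup of the light path diagnostics LEDs listed in Table 5-45.
LIGHT_PATH_LEDS = {
    "LP":    "Light path diagnostics panel is operational",
    "S BRD": "System board error is detected",
    "MIS":   "Mismatch between the processors, DIMMs, or HDDs reported by POST",
    "NMI":   "A non-maskable interrupt (NMI) has occurred",
    "TEMP":  "Critical over-temperature condition shut down the server",
    "MEM":   "Memory fault; check the corresponding DIMM error LEDs on the system board",
    "ADJ":   "Fault detected in the adjacent expansion unit (if installed)",
}

def describe_lit_leds(lit_leds):
    """Return the meaning of each lit LED label reported by a technician."""
    return {led: LIGHT_PATH_LEDS.get(led, "Unknown LED label") for led in lit_leds}

for led, meaning in describe_lit_leds(["LP", "MEM"]).items():
    print(led + ": " + meaning)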


Integrated Management Module II

Each x220 compute node has an IMM2 onboard and uses the UEFI to replace the older BIOS interface.

The IMM2 provides the following major features as standard:

� IPMI v2.0-compliance

� Remote configuration of IMM2 and UEFI settings without the need to power on the server

� Remote access to system fan, voltage, and temperature values

� Remote IMM and UEFI update

� UEFI update when the server is powered off

� Remote console by way of a serial over LAN

� Remote access to the system event log

� Predictive failure analysis and integrated alerting features (for example, by using SNMP)

� Remote presence, including remote control of the server by using a Java or ActiveX client

� Operating system failure window (blue screen) capture and display through the web interface

� Virtual media that allows the attachment of a diskette drive, CD/DVD drive, USB flash drive, or disk image to a server

For more information about the IMM, see 3.4.1, “Integrated Management Module II” on page 43.

5.3.13 Operating system support

The following operating systems are supported by the x220:

� Microsoft Windows Server 2008 HPC Edition
� Microsoft Windows Server 2008 R2
� Microsoft Windows Server 2008, Datacenter x64 Edition
� Microsoft Windows Server 2008, Enterprise x64 Edition
� Microsoft Windows Server 2008, Standard x64 Edition
� Microsoft Windows Server 2008, Web x64 Edition
� Red Hat Enterprise Linux 5 Server with Xen x64 Edition
� Red Hat Enterprise Linux 5 Server x64 Edition
� Red Hat Enterprise Linux 6 Server x64 Edition
� SUSE LINUX Enterprise Server 10 for AMD64/EM64T
� SUSE LINUX Enterprise Server 11 for AMD64/EM64T
� SUSE LINUX Enterprise Server 11 with Xen for AMD64/EM64T
� VMware ESX 4.1
� VMware ESXi 4.1
� VMware vSphere 5

Remember: Unlike IBM BladeCenter, the assigned TCP/IP address of the IMM is available on the local network. This address allows you to remotely manage the x220 by connecting directly to the IMM independent of the IBM Flex System Manager or Chassis Management Module.
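Because the IMM2 is IPMI v2.0-compliant and also provides its own web and command-line interfaces, you can perform a quick out-of-band check of an x220 from any workstation that can reach the IMM2 address. The following lines are a minimal sketch only; the address 192.168.70.125 and the USERID account and password are illustrative assumptions, not values defined in this document:

   ssh USERID@192.168.70.125
   ipmitool -I lanplus -H 192.168.70.125 -U USERID -P PASSW0RD power status

The first command opens the IMM2 command-line interface over SSH. The second uses the standard ipmitool utility to query the power state of the node out-of-band over the IPMI 2.0 LAN interface.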


For the latest list of supported operating systems, see IBM ServerProven at:

http://www-03.ibm.com/systems/info/x86servers/serverproven/compat/us/nos/flexmatrix.shtml

5.4 IBM Flex System p260 and p24L Compute Nodes

The IBM Flex System p260 Compute Node and IBM Flex System p24L Compute Node are based on IBM POWER architecture technologies. These compute nodes run in IBM Flex System Enterprise Chassis units to provide a high-density, high-performance compute node environment by using advanced processing technology.

This section describes the server offerings and the technology used in their implementation.

5.4.1 Specifications

The IBM Flex System p260 Compute Node is a half-wide, Power Systems compute node with these characteristics:

� Two POWER7 processor sockets
� Sixteen memory slots
� Two I/O adapter slots
� An option for up to two internal drives for local storage

The IBM Flex System p260 Compute Node has the specifications shown in Table 5-46.

Table 5-46 IBM Flex System p260 Compute Node specifications

Remember: The IBM Flex System p260 Compute Node can be ordered only as part of IBM PureFlex System as described in Chapter 2, “IBM PureFlex System” on page 11.

Components Specification

Model numbers 7895-22X.

Form factor Half-wide compute node.

Chassis support IBM Flex System Enterprise Chassis.

Processor Two IBM POWER7 processors. Each processor contains either eight cores (up to 3.55 GHz and 32 MB L3 cache) or four cores (3.3 GHz and 16 MB L3 cache). Each processor has 4 MB L3 cache per core. Integrated memory controller in each processor, each with four memory channels. Each memory channel operates at 6.4 Gbps. One GX++ I/O bus connection per processor. Supports SMT4 mode, which enables four instruction threads to run simultaneously per core.

Chipset IBM P7IOC I/O hub.

Memory 16 DIMM sockets. RDIMM DDR3 memory supported. Integrated memory controller in each processor, each with four memory channels. Supports IBM Active Memory™ Expansion with AIX 6.1 or later. All DIMMs operate at 1066 MHz. Both LP (low profile) and VLP (very low profile) DIMMs supported, although only VLP DIMMs are supported if internal HDDs are configured. The use of 1.8-inch solid-state drives allows the use of LP and VLP DIMMs.

Memory maximums 256 GB using 16x 16 GB DIMMs.

Memory protection ECC, chipkill.

Disk drive bays Two 2.5-inch non-hot-swap drive bays that support 2.5-inch SAS HDD or 1.8-inch SATA SSD drives. If LP DIMMs are installed, only 1.8-inch SSDs are supported. If VLP DIMMs are installed, both HDDs and SSDs are supported. An HDD and an SSD cannot be installed together.

Maximum internal storage 1.8 TB using two 900 GB SAS HDD drives, or 354 GB using two 177 GB SSD drives.

RAID support RAID support by using the operating system.

Network interfaces None standard. Optional 1 Gb or 10 Gb Ethernet adapters.

PCI Expansion slots Two I/O connectors for adapters. PCI Express 2.0 x16 interface.

Ports One external USB port.

Systems management FSP, Predictive Failure Analysis, light path diagnostics panel, automatic server restart, Serial over LAN support. IPMI compliant. Support for IBM Flex System Manager, IBM Systems Director, and Active Energy Manager.

Security features Power-on password, selectable boot sequence.

Video None. Remote management by using Serial over LAN and IBM Flex System Manager.

Limited warranty 3-year customer-replaceable unit and on-site limited warranty with 9x5/NBD.

Operating systems supported IBM AIX, IBM i, and Linux.

Service and support Optional service upgrades are available through IBM ServicePacs: 4-hour or 2-hour response time, 8-hour fix time, 1-year or 2-year warranty extension, remote technical support for IBM hardware and selected IBM and OEM software.

Dimensions Width: 215 mm (8.5”), height: 51 mm (2.0”), depth: 493 mm (19.4”).

Weight Maximum configuration: 7.0 kg (15.4 lb).



5.4.2 System board layout

Figure 5-34 shows the system board layout of the IBM Flex System p260 Compute Node.

Figure 5-34 Layout of the IBM Flex System p260 Compute Node

5.4.3 IBM Flex System p24L Compute Node

The IBM Flex System p24L Compute Node shares several similarities with the IBM Flex System p260 Compute Node. It is a half-wide, Power Systems compute node with two POWER7 processor sockets, 16 memory slots, and two I/O adapter slots. This compute node has an option for up to two internal drives for local storage. The IBM Flex System p24L Compute Node is optimized for lower-cost Linux installations.

The IBM Flex System p24L Compute Node has the following features:

� Up to 16 POWER7 processing cores, with up to 8 per processor
� Sixteen DDR3 memory DIMM slots that support Active Memory Expansion
� Supports VLP and LP DIMMs
� Two P7IOC I/O hubs
� RAID-compatible SAS controller that supports up to 2 SSD or HDD drives
� Two I/O adapter slots
� Flexible service processor (FSP)
� System management alerts
� IBM Light Path Diagnostics
� USB 2.0 port
� IBM EnergyScale™ technology

The system board layout for the IBM Flex System p24L Compute Node is identical to the IBM Flex System p260 Compute Node, and is shown in Figure 5-34.


5.4.4 Front panel

The front panel of Power Systems compute nodes has the following common elements, as shown in Figure 5-35.

� USB 2.0 port
� Power control button and light path LED (green)
� Location LED (blue)
� Information LED (amber)
� Fault LED (amber)

Figure 5-35 Front panel of the IBM Flex System p260 Compute Node

The USB port on the front of the Power Systems compute nodes is useful for various tasks. These tasks include out-of-band diagnostic procedures, hardware RAID setup, operating system access to data on removable media, and local OS installation. It might be helpful to obtain a USB optical (CD or DVD) drive for these purposes, in case the need arises.

Tip: There is no optical drive in the IBM Flex System Enterprise Chassis.


The power-control button on the front of the server (Figure 5-35 on page 201) has two functions:

� When the system is fully installed in the chassis: Use this button to power the system on and off

� When the system is removed from the chassis: Use this button to illuminate the light path diagnostic panel on the top of the front bezel, as shown in Figure 5-36

Figure 5-36 Light path diagnostic panel

The LEDs on the light path panel indicate the status of the following devices:

� LP: Light Path panel power indicator
� S BRD: System board LED (might indicate trouble with processor or MEM, too)
� MGMT: Flexible Support Processor (or management card) LED
� D BRD: Drive (or direct access storage device (DASD)) board LED
� DRV 1: Drive 1 LED (SSD 1 or HDD 1)
� DRV 2: Drive 2 LED (SSD 2 or HDD 2)

If problems occur, the light path diagnostics LEDs assist in identifying the subsystem involved. To illuminate the LEDs with the compute node removed, press the power button on the front panel. Pressing this button temporarily illuminates the LEDs of the troubled subsystem to direct troubleshooting efforts.

Typically, you can obtain this information from the IBM Flex System Manager or Chassis Management Module before removing the node. However, having the LEDs helps with repairs and troubleshooting if on-site assistance is needed.

For more information about the front panel and LEDs, see the IBM Flex System p260 and p460 Compute Node Installation and Service Guide available at:

http://www.ibm.com/support


5.4.5 Chassis support

The Power Systems compute nodes can be used only in the IBM Flex System Enterprise Chassis. They do not fit in the previous IBM modular systems, such as IBM iDataPlex or IBM BladeCenter.

There is no onboard video capability in the Power Systems compute nodes. The systems are accessed by using Serial over LAN (SOL) or the IBM Flex System Manager.

5.4.6 System architecture

This section covers the system architecture and layout of the p260 and p24L Power Systems compute node. The overall system architecture for the p260 and p24L is shown in Figure 5-37.

Figure 5-37 IBM Flex System p260 Compute Node and IBM Flex System p24L Compute Node block diagram

This diagram shows the two CPU slots, with eight memory slots for each processor. Each processor is connected to a P7IOC I/O hub, which connects to the I/O subsystem (I/O adapters, local storage). At the bottom, you can see a representation of the service processor (FSP) architecture.

5.4.7 Processor

The IBM POWER7 processor represents a leap forward in technology and associated computing capability. The multi-core architecture of the POWER7 processor is matched with a wide range of related technologies to deliver leading throughput, efficiency, scalability, and reliability, availability, and serviceability (RAS).

Although the processor is an important component in servers, many elements and facilities must be balanced across a server to deliver maximum throughput. As with previous generations, the design philosophy for POWER7 processor-based systems is system-wide balance. The POWER7 processor plays an important role in this balancing.

Processor options for the p260 and p24L
Table 5-47 defines the processor options for the p260 and p24L compute nodes.

Table 5-47 p260 and p24L processor options

Feature   Cores per          Number of           Total   Core        L3 cache size per
code      POWER7 processor   POWER7 processors   cores   frequency   POWER7 processor

IBM Flex System p260 Compute Node
EPR1      4                  2                   8       3.3 GHz     16 MB
EPR3      8                  2                   16      3.2 GHz     32 MB
EPR5      8                  2                   16      3.55 GHz    32 MB

IBM Flex System p24L Compute Node
EPR8      8                  2                   16      3.2 GHz     32 MB
EPR9      8                  2                   16      3.55 GHz    32 MB
EPR7      6                  2                   12      3.7 GHz     24 MB

To optimize software licensing, you can unconfigure or disable one or more cores. The feature is listed in Table 5-48.

Table 5-48 Unconfiguration of cores for p260 and p24L

Feature code   Description                         Minimum   Maximum
2319           Factory Deconfiguration of 1-core   0         1 less than the total number of cores (For EPR5, the maximum is 7)

Architecture
IBM uses innovative methods to achieve the required levels of throughput and bandwidth. Areas of innovation for the POWER7 processor and POWER7 processor-based systems include (but are not limited to) the following elements:

� On-chip L3 cache implemented in embedded dynamic random-access memory (eDRAM)

� Cache hierarchy and component innovation

� Advances in memory subsystem

� Advances in off-chip signaling

The superscalar POWER7 processor design also provides other capabilities:

� Binary compatibility with the prior generation of POWER processors

� Support for PowerVM virtualization capabilities, including PowerVM Live Partition Mobility to and from IBM POWER6® and IBM POWER6+™ processor-based systems


Figure 5-38 shows the POWER7 processor die layout with major areas identified: Eight POWER7 processor cores, L2 cache, L3 cache and chip power bus interconnect, SMP links, GX++ interface, and integrated memory controller.

Figure 5-38 POWER7 processor architecture

5.4.8 Memory

Each POWER7 processor has an integrated memory controller. Industry standard DDR3 RDIMM technology is used to increase the reliability, speed, and density of the memory subsystems.

Memory placement rules
The preferred memory minimum and maximum for the p260 and p24L are listed in Table 5-49.

Table 5-49 Preferred memory limits for p260 and p24L

Model                               Minimum memory   Maximum memory
IBM Flex System p260 Compute Node   8 GB             256 GB (16x 16 GB DIMMs)
IBM Flex System p24L Compute Node   24 GB            256 GB (16x 16 GB DIMMs)

Generally, use a minimum of 2 GB of RAM per core. The functional minimum memory configuration for the system is 4 GB (2x 2 GB). However, this configuration is not sufficient for reasonable production use of the system.

LP and VLP form factors
One benefit of deploying IBM Flex System systems is the ability to use LP memory DIMMs. This design allows for more choices to configure the system to match your needs.


Table 5-50 lists the available memory options for the p260 and p24L.

Table 5-50 Memory options for p260 and p24L

Part number   Feature code   Description          Speed      Form factor
78P1011       EM04           2x 2 GB DDR3 DIMM    1066 MHz   LP
78P0501       8196           2x 4 GB DDR3 DIMM    1066 MHz   VLP
78P0502       8199           2x 8 GB DDR3 DIMM    1066 MHz   VLP
78P0639       8145           2x 16 GB DDR3 DIMM   1066 MHz   LP

There are 16 buffered DIMM slots on the p260 and the p24L, as shown in Figure 5-39.

Figure 5-39 Memory DIMM topology (IBM Flex System p260 Compute Node)

The memory-placement rules are as follows:

� Install DIMM fillers in unused DIMM slots to ensure effective cooling.

� Install DIMMs in pairs. Both need to be the same size.

� Both DIMMs in a pair must be the same size, speed, type, and technology. Otherwise, you can mix compatible DIMMs from multiple manufacturers.

� Install only supported DIMMs, as described on the IBM ServerProven website:

http://www.ibm.com/servers/eserver/serverproven/compat/us/

Requirement: Due to the design of the on-cover storage connections, clients who want to use SAS HDDs must use VLP DIMMs (4 GB or 8 GB). The cover cannot close properly if LP DIMMs and SAS HDDs are configured in the same system. This mixture physically obstructs the cover.

Solid-state drives (SSDs) and LP DIMMs can be used together, however. For more information, see 5.4.10, “Storage” on page 209.


Table 5-51 shows the required placement of memory DIMMs for the p260 and the p24L, depending on the number of DIMMs installed.

Table 5-51 DIMM placement: p260 and p24L

(The table maps each supported DIMM quantity, from 2 to 16 in increments of two, to the specific DIMM slots, DIMM 1 through DIMM 16 with eight slots per processor, that must be populated for that configuration.)

Use of mixed DIMM sizes
All installed memory DIMMs do not have to be the same size. However, keep the following groups of DIMMs the same size:

� Slots 1-4
� Slots 5-8
� Slots 9-12
� Slots 13-16

5.4.9 Active Memory Expansion

The optional Active Memory Expansion feature is a POWER7 technology that allows the effective maximum memory capacity to be much larger than the true physical memory. Applicable to AIX 6.1 or later, this innovative compression and decompression of memory content using processor cycles allows memory expansion of up to 100%.

This memory expansion allows an AIX 6.1 or later partition to do more work with the same physical amount of memory. Conversely, a server can run more partitions and do more work with the same physical amount of memory.

Active Memory Expansion uses processor resources to compress and extract memory contents. The trade-off of memory capacity for processor cycles can be an excellent choice. However, the degree of expansion varies based on how compressible the memory content is. Have adequate spare processor capacity available for the compression and decompression. Tests in IBM laboratories using sample workloads showed excellent results for many workloads in terms of memory expansion per additional processor used. Other test workloads had more modest results.


You have a great deal of control over Active Memory Expansion usage. Each individual AIX partition can turn on or turn off Active Memory Expansion. Control parameters set the amount of expansion wanted in each partition to help control the amount of processor used by the Active Memory Expansion function. An initial program load (IPL) is required for the specific partition that turns memory expansion on or off. After being turned on, there are monitoring capabilities in standard AIX performance tools, such as lparstat, vmstat, topas, and svmon.
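After expansion is turned on, the same information can be checked from the AIX command line. The following lines are a minimal sketch; the flags and sampling intervals shown are assumptions to verify against your AIX level rather than values taken from this document:

   lparstat -c 2 5
   vmstat -c 2 5

In both commands, the -c flag adds the Active Memory Expansion columns (such as compressed pool usage and memory deficit) to the normal output, and "2 5" requests five samples at two-second intervals.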

Figure 5-40 represents the percentage of processor used to compress memory for two partitions with various profiles. The green curve corresponds to a partition that has spare processing power capacity. The blue curve corresponds to a partition constrained in processing power.

Figure 5-40 Processor usage versus memory expansion effectiveness

Both cases show a knee of the curve relationship for processor resources required for memory expansion:

� Busy processor cores do not have resources to spare for expansion.
� The more memory expansion that is done, the more processor resources are required.

The knee varies, depending on how compressible the memory contents are. This variability demonstrates the need for a case by case study to determine whether memory expansion can provide a positive return on investment. To help you perform this study, a planning tool is included with AIX 6.1 Technology Level 4 or later. The tool allows you to sample actual workloads and estimate both how expandable the partition memory is and how much processor resource is needed. Any Power System model runs the planning tool.


Figure 5-41 shows an example of the output returned by this planning tool. The tool outputs various real memory and processor resource combinations to achieve the wanted effective memory and proposes one particular combination. In this example, the tool proposes to allocate 58% of a processor core, to benefit from 45% extra memory capacity.

Active Memory Expansion Modeled Statistics:
-------------------------------------------
Modeled Expanded Memory Size : 8.00 GB

Expansion   True Memory    Modeled Memory   CPU Usage
Factor      Modeled Size   Gain             Estimate
---------   ------------   --------------   ---------
1.21        6.75 GB        1.25 GB [ 19%]   0.00
1.31        6.25 GB        1.75 GB [ 28%]   0.20
1.41        5.75 GB        2.25 GB [ 39%]   0.35
1.51        5.50 GB        2.50 GB [ 45%]   0.58
1.61        5.00 GB        3.00 GB [ 60%]   1.46

Active Memory Expansion Recommendation:
---------------------------------------
The recommended AME configuration for this workload is to configure the LPAR with a memory size of 5.50 GB and to configure a memory expansion factor of 1.51. This will result in a memory expansion of 45% from the LPAR's current memory size. With this configuration, the estimated CPU usage due to Active Memory Expansion is approximately 0.58 physical processors, and the estimated overall peak CPU resource required for the LPAR is 3.72 physical processors.

Figure 5-41 Output from the AIX Active Memory Expansion planning tool

For more information about this topic, see the white paper, Active Memory Expansion: Overview and Usage Guide, available at:

http://www.ibm.com/systems/power/hardware/whitepapers/am_exp.html

5.4.10 Storage

The p260 and p24L have an onboard SAS controller that can manage up to two non-hot-pluggable internal drives. Both 2.5-inch HDDs and 1.8-inch SSDs are supported. The drives attach to the cover of the server, as shown in Figure 5-42 on page 210.

Storage configuration impact to memory configuration
The type of local drives used impacts the form factor of your memory DIMMs:

� If HDDs are chosen, only VLP DIMMs can be used because of internal spacing. There is not enough room for the 2.5-inch drives to be used with LP DIMMs (currently the 2 GB and 16 GB sizes). Verify your memory choice to make sure that it is compatible with the local storage configuration.

� The use of SSDs does not have the same limitation, and both LP and VLP DIMMs can be used with SSDs.


Figure 5-42 The IBM Flex System p260 Compute Node showing hard disk drive location on top cover

Local storage and cover options
Local storage options are shown in Table 5-52. None of the available drives are hot-swappable. If you use local drives, you need to order the appropriate cover with connections for your drive type. The maximum number of drives that can be installed in the p260 or p24L is two. SSD and HDD drives cannot be mixed.

As shown in Figure 5-42, the local drives (HDD or SSD) are mounted to the top cover of the system. When ordering your p260 or p24L, select the cover that is appropriate for your system (SSD, HDD, or no drives).

Table 5-52 Local storage options

Feature code   Part number   Description

2.5 inch SAS HDDs
7069           None          Top cover with HDD connectors for the p260 and p24L
8274           42D0627       300 GB 10K RPM non-hot-swap 6 Gbps SAS
8276           49Y2022       600 GB 10K RPM non-hot-swap 6 Gbps SAS
8311           81Y9654       900 GB 10K RPM non-hot-swap 6 Gbps SAS

1.8 inch SSDs
7068           None          Top cover with SSD connectors for the p260 and p24L
8207           74Y9114       177 GB SATA non-hot-swap SSD

No drives
7067           None          Top cover for no drives on the p260 and p24L


Local drive connectionOn covers that accommodate drives, the drives attach to an interposer that connects to the system board when the cover is properly installed. This connection is shown in Figure 5-43.

Figure 5-43 Connector on drive interposer card mounted to server cover

The connection for the cover’s drive interposer on the system board is shown in Figure 5-44.

Figure 5-44 Connection for drive interposer card mounted to the system cover

RAID capabilities
Disk drives and solid-state drives in the p260 and p24L can be used to implement and manage various types of RAID arrays. They can do so in operating systems that are on the ServerProven list. For the compute node, you must configure the RAID array through the smit sasdam command, which is the SAS RAID Disk Array Manager for AIX.

The AIX Disk Array Manager is packaged with the Diagnostics utilities on the Diagnostics CD. Use smit sasdam to configure the disk drives for use with the SAS controller. The diagnostics CD can be downloaded in ISO file format from:

http://www14.software.ibm.com/webapp/set2/sas/f/diags/download/


For more information, see “Using the Disk Array Manager” in the Systems Hardware Information Center at:

http://publib.boulder.ibm.com/infocenter/systems/scope/hw/index.jsp?topic=/p7ebj/sasusingthesasdiskarraymanager.htm
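A minimal sketch of the flow from the AIX command line follows; SMIT menu navigation is interactive, so only the entry points are shown, and the exact menu labels can vary by diagnostics level:

   smit sasdam
   lsdev -Cc disk

The smit sasdam fastpath opens the SAS RAID Disk Array Manager menus, where the drives are first formatted to 528-byte sectors and then combined into an array (see the Tip that follows). Afterward, lsdev -Cc disk lists the resulting array as a new hdisk device on which the operating system can be installed.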

Tip: Depending on your RAID configuration, you might have to create the array before you install the operating system in the compute node. Before you can create a RAID array, reformat the drives so that the sector size of the drives changes from 512 bytes to 528 bytes.

If you later decide to remove the drives, delete the RAID array before you remove the drives. If you decide to delete the RAID array and reuse the drives, you might need to reformat the drives. Change the sector size of the drives from 528 bytes to 512 bytes.

5.4.11 I/O expansion

The networking subsystem of the IBM Flex System Enterprise Chassis is designed to provide increased bandwidth and flexibility. The new design also allows for more ports on the available expansion adapters, which allows for greater flexibility and efficiency with your system design.

I/O adapter slots
There are two I/O adapter slots on the p260 and the p24L. Unlike IBM BladeCenter, the I/O adapter slots on IBM Flex System nodes are identical in shape (form factor). Also different is that the I/O adapters for the Power Systems compute nodes have their own connector that plugs into the IBM Flex System Enterprise Chassis midplane.

Restriction: There is no onboard network capability in the Power Systems compute nodes other than the FSP NIC interface.

All p260, p24L, and p460 configurations must include a 10 Gb (#1762) or 1 Gb (#1763) Ethernet adapter in slot 1 of the compute node.


A typical I/O adapter card is shown in Figure 5-45.

Figure 5-45 The underside of the IBM Flex System EN2024 4-port 1 Gb Ethernet Adapter

Note the large connector, which plugs into one of the I/O adapter slots on the system board. Also, notice that it has its own connection to the midplane of the Enterprise Chassis. Several of the expansion cards connect directly to the midplane such as the CFFh and HSSF form factors. Others, such as the CIOv, CFFv, SFF, and StFF form factors, do not. The adapters share a common size (100 mm x 80 mm).

PCI hubs
The I/O is controlled by two P7-IOC I/O controller hub chips. These chips provide additional flexibility when assigning resources within Virtual I/O Server (VIOS) to specific Virtual Machine/logical partitions (LPARs).

Available adapters
Table 5-53 shows the available I/O adapter cards for the p260 and p24L. All p260, p24L, and p460 configurations must include a 10 Gb (#1762) or 1 Gb (#1763) Ethernet adapter in slot 1 of the compute node.

Table 5-53 Supported I/O adapters for the p260 and p24L

Feature    Part      Description                                             Number
code       number                                                            of ports
1762 (a)   81Y3124   IBM Flex System EN4054 4-port 10 Gb Ethernet Adapter    4
1763 (a)   49Y7900   IBM Flex System EN2024 4-port 1 Gb Ethernet Adapter     4
1764       69Y1938   IBM Flex System FC3172 2-port 8 Gb FC Adapter           2
1761       90Y0134   IBM Flex System IB6132 2-port QDR InfiniBand Adapter    2

a. At least one 10 Gb (#1762) or 1 Gb (#1763) Ethernet adapter must be configured in each server.


5.4.12 System management

There are several advanced system management capabilities built into the p260 and p24L. A Flexible Support Processor handles most of the server-level system management. It has features, such as system alerts and SOL capability, which are described in this section.

Flexible Support Processor
An FSP provides out-of-band system management capabilities. These capabilities include system control, runtime error detection, configuration, and diagnostic procedures. Generally, you do not interact with the Flexible Support Processor directly. Rather, you use tools such as IBM Flex System Manager, Chassis Management Module, and external IBM Systems Director Management Console.

The Flexible Support Processor provides an SOL interface, which is available by using the Chassis Management Module and the console command.

Serial over LAN
The p260 and p24L do not have an on-board video chip and do not support keyboard, video, and mouse (KVM) connection. Server console access is obtained by a SOL connection only. SOL provides a means to manage servers remotely by using a command-line interface (CLI) over a Telnet or Secure Shell (SSH) connection. SOL is required to manage servers that do not have KVM support or that are attached to the IBM Flex System Manager. SOL provides console redirection for both Software Management Services (SMS) and the server operating system. The SOL feature redirects server serial-connection data over a local area network (LAN) without requiring special cabling. It does so by routing the data by using the Chassis Management Module network interface. The SOL connection enables Power Systems compute nodes to be managed from any remote location with network access to the Chassis Management Module.

SOL offers the following advantages:

� Remote administration without KVM (headless servers)
� Reduced cabling and no requirement for a serial concentrator
� Standard Telnet/SSH interface, eliminating the requirement for special client software

The Chassis Management Module CLI provides access to the text-console command prompt on each server through a SOL connection. This configuration enables the p260 and p24L to be managed from a remote location.
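As a sketch, a SOL session is normally opened from the Chassis Management Module CLI. The CMM address, the USERID account, and the bay 3 target shown here are illustrative assumptions, and the exact target syntax should be verified against your CMM documentation:

   ssh USERID@192.168.70.100
   console -T system:blade[3]

The first command logs in to the CMM CLI over SSH. The second starts the Serial over LAN text console of the compute node in bay 3 by using the console command described above.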

Anchor card
The anchor card, shown in Figure 5-46 on page 215, contains the vital product data chip that stores system-specific information. The pluggable anchor card provides a means for this information to be transferable from a faulty system board to the replacement system board. Before the service processor knows what system it is on, it reads the vital product data chip to obtain system information.

The vital product data chip includes information such as system type, model, and serial number.


Figure 5-46 Anchor card

5.4.13 Integrated features

As stated in 5.4.1, “Specifications” on page 198 and 5.4.3, “IBM Flex System p24L Compute Node” on page 200, the integrated features are as follows:

� Flexible Support Processor
� IBM POWER7 Processors
� SAS RAID-capable Controller
� USB port

In the p260 and p24L, there is a thermal sensor in the Light Path panel assembly.

5.4.14 Operating system support

The IBM Flex System p24L Compute Node is designed to run Linux only. The IBM Flex System p260 Compute Node supports the following configurations:

� AIX V7.1 with the 7100-01 Technology Level with Service Pack 3 with APAR IV14284

� AIX V7.1 with the 7100-01 Technology Level with Service Pack 4, or later (planned availability: June 29, 2012)

� AIX V7.1 with the 7100-00 Technology Level with Service Pack 6, or later (planned availability: June 29, 2012)

� AIX V6.1 with the 6100-07 Technology Level, with Service Pack 3 with APAR IV14283

� AIX V6.1 with the 6100-07 Technology Level, with Service Pack 4, or later (planned availability: June 29, 2012)

� AIX V6.1 with the 6100-06 Technology Level with Service Pack 8, or later (planned availability: June 29, 2012)

� AIX V5.3 with the 5300-12 Technology Level with Service Pack 6, or later (planned availability: June 29, 2012)

� IBM i 6.1 with i 6.1.1 machine code, or later

� IBM i 7.1, or later

� Novell SUSE Linux Enterprise Server 11 Service Pack 2 for POWER, with current maintenance updates available from Novell to enable all planned functionality

Remember: AIX 5.3 Service Extension is required.


� Red Hat Enterprise Linux 5.7, for POWER, or later

� Red Hat Enterprise Linux 6.2, for POWER, or later

� VIOS 2.2.1.4, or later
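To confirm that an existing AIX partition already meets one of these minimum levels, the standard AIX commands can be used. This is a sketch only; the sample APAR is taken from the list above, but the output format described is typical rather than quoted from this document:

   oslevel -s
   instfix -ik IV14283

oslevel -s reports the installed technology level and service pack (for example, 6100-07-03), which can be compared against the levels listed above. instfix -ik reports whether a specific APAR, such as IV14283, is installed.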

5.5 IBM Flex System p460 Compute Node

The IBM Flex System p460 Compute Node is based on IBM POWER architecture technologies. This compute node runs in IBM Flex System Enterprise Chassis units to provide a high-density, high-performance compute node environment by using advanced processing technology.

This section describes the server offerings and the technology used in their implementation.

5.5.1 Overview

The IBM Flex System p460 Compute Node is a full-wide, Power Systems compute node. It has four POWER7 processor sockets, 32 memory slots, four I/O adapter slots, and an option for up to two internal drives for local storage.

The IBM Flex System p460 Compute Node has the specifications shown in Table 5-54.

Table 5-54 IBM Flex System p460 Compute Node specifications

Remember: The IBM Flex System p460 Compute Node can be ordered only as part of IBM PureFlex System as described in Chapter 2, “IBM PureFlex System” on page 11.

Components Specification

Model numbers 7895-42X

Form factor Full-wide compute node.

Chassis support IBM Flex System Enterprise Chassis.

Processor Four IBM POWER7 processors. Each processor contains either eight cores (up to 3.55 GHz and 32 MB L3 cache) or four cores (3.3 GHz and 16 MB L3 cache). Each processor has 4 MB L3 cache per core. Integrated memory controller in each processor, each with four memory channels. Each memory channel operates at 6.4 Gbps. One GX++ I/O bus connection per processor. Supports SMT4 mode, which enables four instruction threads to run simultaneously per core.

Chipset IBM P7IOC I/O hub.

Memory 32 DIMM sockets. RDIMM DDR3 memory supported. Integrated memory controller in each processor, each with four memory channels. Supports Active Memory Expansion with AIX 6.1 or later. All DIMMs operate at 1066 MHz. Both LP and VLP DIMMs are supported, although only VLP DIMMs are supported if internal HDDs are configured. The use of 1.8-inch solid-state drives allows the use of LP and VLP DIMMs.

Memory maximums 512 GB using 32x 16 GB DIMMs.

Memory protection ECC, chipkill.

Disk drive bays Two 2.5-inch non-hot-swap drive bays that support 2.5-inch SAS HDD or 1.8-inch SATA SSD drives. If LP DIMMs are installed, only 1.8-inch SSDs are supported. If VLP DIMMs are installed, both HDDs and SSDs are supported. An HDD and an SSD cannot be installed together.

Maximum internal storage 1.8 TB using two 900 GB SAS HDD drives, or 354 GB using two 177 GB SSD drives.

RAID support RAID support by using the operating system.

Network interfaces None standard. Optional 1 Gb or 10 Gb Ethernet adapters.

PCI Expansion slots Four I/O connectors for adapters. PCI Express 2.0 x16 interface.

Ports One external USB port.

Systems management FSP, Predictive Failure Analysis, light path diagnostics panel, automatic server restart, Serial over LAN support. IPMI compliant. Support for IBM Flex System Manager, IBM Systems Director, and Active Energy Manager.

Security features Power-on password, selectable boot sequence.

Video None. Remote management by using Serial over LAN and IBM Flex System Manager.

Limited warranty 3-year customer-replaceable unit and on-site limited warranty with 9x5/NBD.

Operating systems supported IBM AIX, IBM i, and Linux.

Service and support Optional service upgrades are available through IBM ServicePacs: 4-hour or 2-hour response time, 8-hour fix time, 1-year or 2-year warranty extension, remote technical support for IBM hardware and selected IBM and OEM software.

Dimensions Width: 437 mm (17.2"), height: 51 mm (2.0”), depth: 493 mm (19.4”).

Weight Maximum configuration: 14.0 kg (30.6 lb).


5.5.2 System board layout

Figure 5-47 shows the system board layout of the IBM Flex System p460 Compute Node.

Figure 5-47 Layout of the IBM Flex System p460 Compute Node

5.5.3 Front panel

The front panel of Power Systems compute nodes has the following common elements, as shown in Figure 5-48 on page 219:

� USB 2.0 port
� Power control button and light path LED (green)
� Location LED (blue)
� Information LED (amber)
� Fault LED (amber)


Figure 5-48 Front panel of the IBM Flex System p460 Compute Node

The USB port on the front of the Power Systems compute nodes is useful for various tasks. These tasks include out-of-band diagnostic procedures, hardware RAID setup, operating system access to data on removable media, and local OS installation. It might be helpful to obtain a USB optical (CD or DVD) drive for these purposes, in case the need arises.

The power-control button on the front of the server (Figure 5-48 on page 219) has these functions:

� When the system is fully installed in the chassis: Use this button to power the system on and off.

� When the system is removed from the chassis: Use this button to illuminate the light path diagnostic panel on the top of the front bezel, as shown in Figure 5-49.

Figure 5-49 Light path diagnostic panel


Tip: There is no optical drive in the IBM Flex System Enterprise Chassis.


The LEDs on the light path panel indicate the status of the following devices:

� LP: Light Path panel power indicator
� S BRD: System board LED (might indicate trouble with processor or MEM)
� MGMT: Flexible Support Processor (or management card) LED
� D BRD: Drive (or DASD) board LED
� DRV 1: Drive 1 LED (SSD 1 or HDD 1)
� DRV 2: Drive 2 LED (SSD 2 or HDD 2)
� ETE: Sidecar connector LED (not present on the IBM Flex System p460 Compute Node)

If problems occur, the light path diagnostics LEDs assist in identifying the subsystem involved. To illuminate the LEDs with the compute node removed, press the power button on the front panel. Pressing the button temporarily illuminates the LEDs of the troubled subsystem to direct troubleshooting efforts.

You usually obtain this information from the IBM Flex System Manager or Chassis Management Module before removing the node. However, having the LEDs helps with repairs and troubleshooting if on-site assistance is needed.

For more information about the front panel and LEDs, see the IBM Flex System p260 and p460 Compute Node Installation and Service Guide available at:

http://www.ibm.com/support

5.5.4 Chassis support

The p460 can be used only in the IBM Flex System Enterprise Chassis. It does not fit in the previous IBM modular systems, such as IBM iDataPlex or IBM BladeCenter.

There is no onboard video capability in the Power Systems compute nodes. The systems are accessed by using SOL or the IBM Flex System Manager.


5.5.5 System architecture

The IBM Flex System p460 Compute Node shares many of the same components as the IBM Flex System p260 Compute Node. The IBM Flex System p460 Compute Node is a full-wide node, and adds additional processors and memory along with two more adapter slots. It has the same local storage options as the IBM Flex System p260 Compute Node. The IBM Flex System p460 Compute Node system architecture is shown in Figure 5-50.

Figure 5-50 IBM Flex System p460 Compute Node block diagram


The four processors in the IBM Flex System p460 Compute Node are connected in a cross-bar formation as shown in Figure 5-51.

Figure 5-51 IBM Flex System p460 Compute Node processor connectivity

5.5.6 Processor

The IBM POWER7 processor represents a leap forward in technology and associated computing capability. The multi-core architecture of the POWER7 processor is matched with a wide range of related technologies to deliver leading throughput, efficiency, scalability, and RAS.

Although the processor is an important component in servers, many elements and facilities must be balanced across a server to deliver maximum throughput. The design philosophy for POWER7 processor-based systems is system-wide balance, in which the POWER7 processor plays an important role.

Table 5-55 defines the processor options for the p460.

Table 5-55 Processor options for the p460


Feature   Cores per          Number of           Total   Core        L3 cache size per
code      POWER7 processor   POWER7 processors   cores   frequency   POWER7 processor
EPR2      4                  4                   16      3.3 GHz     16 MB
EPR4      8                  4                   32      3.2 GHz     32 MB
EPR6      8                  4                   32      3.55 GHz    32 MB


To optimize software licensing, you can unconfigure or disable one or more cores. The feature is listed in Table 5-56.

Table 5-56 Unconfiguration of cores

Feature code   Description                         Minimum   Maximum
2319           Factory Deconfiguration of 1-core   0         1 less than the total number of cores (For EPR5, the maximum is 7)

5.5.7 Memory

Each POWER7 processor has two integrated memory controllers in the chip. Industry standard DDR3 RDIMM technology is used to increase reliability, speed, and density of memory subsystems.

Memory placement rules
The preferred memory minimum and maximums for the p460 are shown in Table 5-57.

Table 5-57 Preferred memory limits for the p460

Model                               Minimum memory   Maximum memory
IBM Flex System p460 Compute Node   32 GB            512 GB (32x 16 GB DIMMs)

Use a minimum of 2 GB of RAM per core. The functional minimum memory configuration for the system is 4 GB (2x 2 GB), but that is not sufficient for reasonable production use of the system.

LP and VLP form factors
One benefit of deploying IBM Flex System systems is the ability to use LP memory DIMMs. This design allows for more choices to configure the system to match your needs.

Table 5-58 lists the available memory options for the p460.

Table 5-58 Memory options for the p460

Part number   Feature code   Description       Speed      Form factor
78P1011       EM04           2 GB DDR3 DIMM    1066 MHz   LP
78P0501       8196           4 GB DDR3 DIMM    1066 MHz   VLP
78P0502       8199           8 GB DDR3 DIMM    1066 MHz   VLP
78P0639       8145           16 GB DDR3 DIMM   1066 MHz   LP

Requirement: Due to the design of the on-cover storage connections, if you use SAS HDDs, you must use VLP DIMMs (4 GB or 8 GB). The cover cannot close properly if LP DIMMs and SAS hard disk drives are configured in the same system. Combining the two physically obstructs the cover from closing. For more information, see 5.4.10, “Storage” on page 209.


There are 16 buffered DIMM slots on the p260 and the p24L, as shown in Figure 5-52. The IBM Flex System p460 Compute Node adds two more processors and 16 additional DIMM slots, divided evenly (eight memory slots) per processor.

Figure 5-52 Memory DIMM topology (Processors 0 and 1 shown)

The memory-placement rules are as follows:

� Install DIMM fillers in unused DIMM slots to ensure efficient cooling.

� Install DIMMs in pairs. Both need to be the same size.

� Both DIMMs in a pair must be the same size, speed, type, and technology. You can mix compatible DIMMs from multiple manufacturers.

� Install only supported DIMMs, as described on the IBM ServerProven website:

http://www.ibm.com/servers/eserver/serverproven/compat/us/


For the IBM Flex System p460 Compute Node, Table 5-59 shows the required placement of memory DIMMs, depending on the number of DIMMs installed.

Table 5-59 DIMM placement on IBM Flex System p460 Compute Node

(The table maps each supported DIMM quantity, from 2 to 32 in increments of two, to the specific DIMM slots, DIMM 1 through DIMM 32 across CPU 0 through CPU 3, that must be populated for that configuration.)

Use of mixed DIMM sizes
All installed memory DIMMs do not have to be the same size. However, for best results, keep these groups of DIMMs the same size:

� Slots 1-4
� Slots 5-8
� Slots 9-12
� Slots 13-16
� Slots 17-20
� Slots 21-24
� Slots 25-28
� Slots 29-32


5.5.8 Active Memory Expansion

The optional Active Memory Expansion feature is a POWER7 technology that allows the effective maximum memory capacity to be much larger than the true physical memory. Applicable to AIX 6.1 or later, this innovative compression and decompression of memory content using processor cycles allows memory expansion of up to 100%.

This efficiency allows an AIX 6.1 or later partition to do more work with the same physical amount of memory. Conversely, a server can run more partitions and do more work with the same physical amount of memory.

Active Memory Expansion uses processor resources to compress and extract memory contents. The trade-off of memory capacity for processor cycles can be an excellent choice. However, the degree of expansion varies based on how compressible the memory content is. Have adequate spare processor capacity available for the compression and decompression. Tests in IBM laboratories using sample workloads showed excellent results for many workloads in terms of memory expansion per additional processor used. Other test workloads had more modest results.

You have a great deal of control over Active Memory Expansion usage. Each individual AIX partition can turn on or turn off Active Memory Expansion. Control parameters set the amount of expansion wanted in each partition to help control the amount of processor used by the Active Memory Expansion function. An IPL is required for the specific partition that is turning on or off memory expansion. After being turned on, there are monitoring capabilities in standard AIX performance tools, such as lparstat, vmstat, topas, and svmon.

Figure 5-53 represents the percentage of processor used to compress memory for two partitions with different profiles. The green curve corresponds to a partition that has spare processing power capacity. The blue curve corresponds to a partition constrained in processing power.

Figure 5-53 Processor usage versus memory expansion effectiveness

Both cases show a knee of the curve relationship for processor resources required for memory expansion:

� Busy processor cores do not have resources to spare for expansion.
� The more memory expansion that is done, the more processor resources are required.

The knee varies, depending on how compressible the memory contents are. This variation demonstrates the need for a case by case study to determine whether memory expansion can provide a positive return on investment. To help you perform this study, a planning tool is included with AIX 6.1 Technology Level 4 or later. This tool allows you to sample actual workloads and estimate both how expandable the partition memory is and how much processor resource is needed. Any Power System model runs the planning tool.
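In practice, the planning study is run against the live workload from the AIX command line. The tool name amepat and the 60-minute monitoring duration shown here are assumptions based on common usage rather than details given in this document:

   amepat 60

When the monitoring period ends, the tool prints modeled statistics and a recommendation in the format shown in Figure 5-54.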

Figure 5-54 shows an example of the output returned by this planning tool. The tool outputs various real memory and processor resource combinations to achieve the required effective memory, and proposes one particular combination. In this example, the tool proposes to allocate 58% of a processor core, to benefit from 45% extra memory capacity.

Active Memory Expansion Modeled Statistics:
-------------------------------------------
Modeled Expanded Memory Size : 8.00 GB

Expansion   True Memory    Modeled Memory   CPU Usage
Factor      Modeled Size   Gain             Estimate
---------   ------------   --------------   ---------
1.21        6.75 GB        1.25 GB [ 19%]   0.00
1.31        6.25 GB        1.75 GB [ 28%]   0.20
1.41        5.75 GB        2.25 GB [ 39%]   0.35
1.51        5.50 GB        2.50 GB [ 45%]   0.58
1.61        5.00 GB        3.00 GB [ 60%]   1.46

Active Memory Expansion Recommendation:
---------------------------------------
The recommended AME configuration for this workload is to configure the LPAR with a memory size of 5.50 GB and to configure a memory expansion factor of 1.51. This will result in a memory expansion of 45% from the LPAR's current memory size. With this configuration, the estimated CPU usage due to Active Memory Expansion is approximately 0.58 physical processors, and the estimated overall peak CPU resource required for the LPAR is 3.72 physical processors.

Figure 5-54 Output from the AIX Active Memory Expansion planning tool

For more information about this topic, see the white paper, Active Memory Expansion: Overview and Usage Guide, available at:

http://www.ibm.com/systems/power/hardware/whitepapers/am_exp.html

5.5.9 Storage

The p460 has an onboard SAS controller that can manage up to two non-hot-pluggable internal drives. The drives attach to the cover of the server, as shown in Figure 5-55 on page 228. Even though the p460 is a full-wide server, it has the same storage options as the p260 and the p24L.

The type of local drives used impacts the form factor of your memory DIMMs. If HDDs are chosen, then only VLP DIMMs can be used because of internal spacing. There is not enough room for the 2.5-inch drives to be used with LP DIMMs (currently the 2 GB and 16 GB sizes). Verify your memory choice to make sure that it is compatible with the local storage configuration. The use of SSDs does not have the same limitation, and so LP DIMMs can be used with SSDs.


Figure 5-55 The IBM Flex System p260 Compute Node showing hard disk drive location

5.5.10 Local storage and cover options

Local storage options are shown in Table 5-60. None of the available drives are hot-swappable. If you use local drives, you need to order the appropriate cover with connections for your drive type. The maximum number of drives that can be installed in any Power Systems compute node is two. SSD and HDD drives cannot be mixed.

As shown in Figure 5-55, the local drives (HDD or SSD) are mounted to the top cover of the system. When ordering your p460, select the cover that is appropriate for your system (SSD, HDD, or no drives) as shown in Table 5-60.

Table 5-60 Local storage options

Feature code   Part number   Description

2.5 inch SAS HDDs
7066           None          Top cover with HDD connectors for the IBM Flex System p460 Compute Node (full-wide)
8274           42D0627       300 GB 10K RPM non-hot-swap 6 Gbps SAS
8276           49Y2022       600 GB 10K RPM non-hot-swap 6 Gbps SAS
8311           81Y9654       900 GB 10K RPM non-hot-swap 6 Gbps SAS

1.8 inch SSDs
7065           None          Top cover with SSD connectors for the IBM Flex System p460 Compute Node (full-wide)
8207           74Y9114       177 GB SATA non-hot-swap SSD

No drives
7005           None          Top cover for no drives on the IBM Flex System p460 Compute Node (full-wide)


On covers that accommodate drives, the drives attach to an interposer that connects to the system board when the cover is properly installed. This connection is shown in Figure 5-56.

Figure 5-56 Connector on drive interposer card mounted to server cover

The connection for the cover’s drive interposer on the system board is shown in Figure 5-57.

Figure 5-57 Connection for drive interposer card mounted to the system cover

5.5.11 Hardware RAID capabilities

Disk drives and solid-state drives in the Power Systems compute nodes can be used to implement and manage various types of RAID arrays in operating systems. These operating systems must be on the ServerProven list. For the compute node, you must configure the RAID array through the smit sasdam command, which is the SAS RAID Disk Array Manager for AIX.

The AIX Disk Array Manager is packaged with the Diagnostics utilities on the Diagnostics CD. Use smit sasdam to configure the disk drives for use with the SAS controller. The diagnostics CD can be downloaded in ISO file format at:

http://www14.software.ibm.com/webapp/set2/sas/f/diags/download/

For more information, see “Using the Disk Array Manager” in the Systems Hardware Information Center at:

http://publib.boulder.ibm.com/infocenter/systems/scope/hw/index.jsp?topic=/p7ebj/sasusingthesasdiskarraymanager.htm

Tip: Depending on your RAID configuration, you might have to create the array before you install the operating system in the compute node. Before creating a RAID array, reformat the drives so that the sector size of the drives changes from 512 bytes to 528 bytes.

If you later decide to remove the drives, delete the RAID array before you remove the drives. If you decide to delete the RAID array and reuse the drives, you might need to reformat the drives. Change the sector size of the drives from 528 bytes to 512 bytes.

5.5.12 I/O expansion

The networking subsystem of the IBM Flex System Enterprise Chassis is designed to provide increased bandwidth and flexibility. The new design also allows for more ports on the available expansion adapters, which allows for greater flexibility and efficiency with your system design.

I/O adapter slots
There are four I/O adapter slots on the IBM Flex System p460 Compute Node. Unlike IBM BladeCenter, the I/O adapter slots on IBM Flex System nodes are identical in shape (form factor). Also, the I/O adapters for the p460 have their own connector that plugs into the IBM Flex System Enterprise Chassis midplane.

Restriction: There is no onboard network capability in the Power Systems compute nodes other than the FSP NIC interface.

All p260, p24L, and p460 configurations must include a 10 Gb (#1762) or 1 Gb (#1763) Ethernet adapter in slot 1 of the compute node.

Page 247: Sg 247984

231

A typical I/O adapter card is shown in Figure 5-58.

Figure 5-58 The underside of the IBM Flex System EN2024 4-port 1 Gb Ethernet Adapter

Note the large connector, which plugs into one of the I/O adapter slots on the system board. Also, notice that it has its own connection to the midplane of the Enterprise Chassis. By contrast, in IBM BladeCenter several expansion card form factors, such as CFFh and HSSF, connect directly to the midplane, while others, such as CIOv, CFFv, SFF, and StFF, do not.

PCI hubs
The I/O is controlled by four P7-IOC I/O controller hub chips. This configuration provides additional flexibility when assigning resources within the VIOS to specific Virtual Machines/LPARs.

Available adapters
Table 5-61 shows the available I/O adapter cards for the p460. All p260, p24L, and p460 configurations must include a 10 Gb (#1762) or 1 Gb (#1763) Ethernet adapter in slot 1 of the compute node.

Table 5-61 Supported I/O adapters for the p460

Feature code   Part number   Description
1762 (a)       81Y3124       IBM Flex System EN4054 4-port 10 Gb Ethernet Adapter
1763 (a)       49Y7900       IBM Flex System EN2024 4-port 1 Gb Ethernet Adapter
1764           69Y1938       IBM Flex System FC3172 2-port 8 Gb FC Adapter
1761           90Y0134       IBM Flex System IB6132 2-port QDR InfiniBand Adapter

a. At least one 10 Gb (#1762) or 1 Gb (#1763) Ethernet adapter must be configured in each server.

The callouts in Figure 5-58 identify the PCIe connector, the guide block that ensures correct installation, and the midplane connector. Adapters share a common size (100 mm x 80 mm).


5.5.13 System management

There are several advanced system management capabilities built into the p460. A Flexible Support Processor handles most of the server-level system management. It has features such as system alerts and Serial-over-LAN capability, which are described in this section.

Flexible Support Processor
An FSP provides out-of-band system management capabilities, such as system control, runtime error detection, configuration, and diagnostic procedures. Generally, you do not interact with the Flexible Support Processor directly. Rather, you use tools such as IBM Flex System Manager, the Chassis Management Module, and the external IBM Systems Director Management Console.

The Flexible Support Processor provides a Serial-over-LAN interface, which is available by using the Chassis Management Module and the console command.

The IBM Flex System p460 Compute Node, even though it is a full-wide system, has only one Flexible Support Processor.

Serial over LAN
The Power Systems compute nodes do not have an on-board video chip and do not support KVM connections. Server console access is obtained only through a Serial over LAN (SOL) connection. SOL provides a means to manage servers remotely by using a CLI over a Telnet or SSH connection. SOL is required to manage servers that do not have KVM support or that are attached to the IBM Flex System Manager. SOL provides console redirection for both System Management Services (SMS) and the server operating system. The SOL feature redirects server serial-connection data over a LAN without requiring special cabling by routing the data through the Chassis Management Module network interface. The SOL connection enables Power Systems compute nodes to be managed from any remote location with network access to the Chassis Management Module.

SOL offers the following advantages:

� Remote administration without KVM (headless servers)
� Reduced cabling and no requirement for a serial concentrator
� Standard Telnet/SSH interface, eliminating the requirement for special client software

The Chassis Management Module CLI provides access to the text-console command prompt on each server through a SOL connection. This configuration allows you to manage the Power Systems compute nodes from a remote location.
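Purely as an illustration, the following sketch shows how an SOL session might be opened from the Chassis Management Module CLI. The user ID, address, bay number, and exact target syntax are assumptions for this example; check the CMM CLI reference for the syntax of the console command on your firmware level.

ssh USERID@cmm-ip-address     # log in to the CMM CLI over SSH (address is an assumption)
console -T blade[3]           # open an SOL session to the compute node in bay 3
# End the session with the CMM's configured SOL exit key sequence.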

Anchor card
The anchor card, shown in Figure 5-59 on page 233, contains the vital product data chip that stores system-specific information. The pluggable anchor card provides a means for this information to be transferred from a faulty system board to the replacement system board. Before the service processor knows what system it is on, it reads the vital product data chip to obtain system information.

The vital product data chip includes information such as system type, model, and serial number.


Figure 5-59 Anchor card

5.5.14 Integrated features

As stated in 5.5.1, “Overview” on page 216, the IBM Flex System p460 Compute Node has these integrated features:

� Flexible Support Processor
� IBM POWER7 Processors
� SAS RAID-capable Controller
� USB port

5.5.15 Operating system support

The IBM Flex System p460 Compute Node supports the following configurations:

� AIX V7.1 with the 7100-01 Technology Level with Service Pack 3 with APAR IV14284

� AIX V7.1 with the 7100-01 Technology Level with Service Pack 4, or later (planned availability: June 29, 2012)

� AIX V7.1 with the 7100-00 Technology Level with Service Pack 6, or later (planned availability: June 29, 2012)

� AIX V6.1 with the 6100-07 Technology Level, with Service Pack 3 with APAR IV14283

� AIX V6.1 with the 6100-07 Technology Level, with Service Pack 4, or later (planned availability: June 29, 2012)

� AIX V6.1 with the 6100-06 Technology Level with Service Pack 8, or later (planned availability: June 29, 2012)

� AIX V5.3 with the 5300-12 Technology Level with Service Pack 6, or later (planned availability: June 29, 2012)

� IBM i 6.1 with i 6.1.1 machine code, or later

� IBM i 7.1, or later

� Novell SUSE Linux Enterprise Server 11 Service Pack 2 for POWER, with current maintenance updates available from Novell to enable all planned functionality

� Red Hat Enterprise Linux 5.7, for POWER, or later

Remember: AIX 5.3 Service Extension is required.


� Red Hat Enterprise Linux 6.2, for POWER, or later

� VIOS 2.2.1.4, or later

5.6 I/O adapters

Each compute node can accommodate one or more optional I/O adapters that provide connections to the chassis switch modules. The I/O adapter ports are routed through the chassis midplane to the I/O modules. The I/O adapters allow the compute nodes to connect, through the switch modules or pass-through modules in the chassis, to different LAN or SAN fabric types.

As described in 5.2.11, “I/O expansion” on page 171, any supported I/O adapter can be installed in either I/O connector. On servers with the embedded 10 Gb Ethernet controller, the LOM connector must be unscrewed and removed. After it is installed, the I/O adapter on I/O connector 1 is routed to I/O module bay 1 and bay 2 of the chassis. The I/O adapter installed on I/O connector 2 is routed to I/O module bay 3 and bay 4 of the chassis.

For more information about specific port routing, see 4.9, “I/O architecture” on page 85.

5.6.1 Form factor

The I/O adapters attach to a compute node through a high-density 216-pin Molex PCIe connector.

Currently the IBM Flex System compute nodes support only one form factor for I/O adapters. A typical I/O adapter is shown in Figure 5-60.

Figure 5-60 I/O adapter

The callouts in Figure 5-60 identify the PCIe connector, the guide block that ensures correct installation, and the midplane connector. Adapters share a common size (96.7 mm x 84.8 mm).


5.6.2 Naming structure

Figure 5-61 shows the naming structure for the I/O adapters.

Figure 5-61 The naming structure for the I/O adapters

The example used in the figure is the IBM Flex System EN2092 1 Gb Ethernet Scalable Switch, whose EN2092 designation decomposes as follows:

� Fabric type: EN = Ethernet, FC = Fibre Channel, CN = Converged Network, IB = InfiniBand
� Series: 2 for 1 Gb, 3 for 8 Gb, 4 for 10 Gb, 5 for 16 Gb, 6 for InfiniBand
� Vendor name, where A=01: 02 = Brocade, 09 = IBM, 13 = Mellanox, 17 = QLogic
� Maximum number of partitions: 2 = 2 partitions

5.6.3 Supported compute nodes

Table 5-62 lists the available I/O adapters and their compatibility with compute nodes.

Table 5-62 I/O adapter compatibility matrix: Compute nodes

System x      Power                                                Supported servers
part number   feature code   I/O adapters                          x220   x240   p260   p460   Page

Ethernet adapters
49Y7900       1763           EN2024 4-port 1Gb Ethernet Adapter    Yes    Yes    Yes    Yes    236
90Y3466       None           EN4132 2-port 10 Gb Ethernet Adapter  Yes    Yes    No     No     238
None          1762           EN4054 4-port 10Gb Ethernet Adapter   No     No     Yes    Yes    240
90Y3554       None           CN4054 10Gb Virtual Fabric Adapter    Yes    Yes    No     No     242

Fibre Channel adapters
69Y1938       1764           FC3172 2-port 8Gb FC Adapter          Yes    Yes    Yes    Yes    246
95Y2375       None           FC3052 2-port 8Gb FC Adapter          Yes    Yes    No     No     247
88Y6370       None           FC5022 2-port 16Gb FC Adapter         Yes    Yes    No     No     249

InfiniBand adapters
90Y3454       None           IB6132 2-port FDR InfiniBand Adapter  Yes    Yes    No     No     251
None          1761           IB6132 2-port QDR InfiniBand Adapter  No     No     Yes    Yes    253


5.6.4 Supported switches

Table 5-63 lists which switches support the available I/O adapters.

Table 5-63 I/O adapter compatibility matrix: Switches

Each row lists the System x part number, the Power feature code, the adapter, and a Yes/No entry for each switch named for that adapter category.

Ethernet adapters (switches: EN4093 10Gb Scalable Switch 49Y4270; EN2092 1Gb Ethernet Switch 49Y4294; EN4091 10 Gb Ethernet Pass-thru 88Y6043)
49Y7900   1763   EN2024 4-port 1Gb Ethernet Adapter      Yes   Yes   Yes
90Y3466   None   EN4132 2-port 10 Gb Ethernet Adapter    Yes   No    Yes
None      1762   EN4054 4-port 10Gb Ethernet Adapter     Yes   Yes   Yes
90Y3554   None   CN4054 10Gb Virtual Fabric Adapter      Yes   Yes   Yes

Fibre Channel adapters (switches: FC5022 16Gb SAN Scalable Switch; FC5022 16Gb ESB Switch 90Y9356; FC3171 8 Gb SAN Switch 69Y1930; FC3171 8 Gb SAN Pass-thru 69Y1934)
69Y1938   1764   FC3172 2-port 8Gb FC Adapter    Yes   Yes   Yes   Yes
95Y2375   None   FC3052 2-port 8Gb FC Adapter    Yes   Yes   Yes   Yes
88Y6370   None   FC5022 2-port 16Gb FC Adapter   Yes   Yes   No    No

InfiniBand adapters (switch: IB6131 InfiniBand Switch 90Y3450)
90Y3454   None   IB6132 2-port FDR InfiniBand Adapter   Yes
None      1761   IB6132 2-port QDR InfiniBand Adapter   Yes

5.6.5 IBM Flex System EN2024 4-port 1 Gb Ethernet Adapter

The IBM Flex System EN2024 4-port 1 Gb Ethernet Adapter is a quad-port network adapter from Broadcom. It provides 1 Gb per second, full duplex, Ethernet links between a compute node and Ethernet switch modules installed in the chassis. The adapter interfaces to the compute node by using the Peripheral Component Interconnect Express (PCIe) bus.

Table 5-64 lists the ordering part number and feature code.

Table 5-64 IBM Flex System EN2024 4-port 1 Gb Ethernet Adapter ordering information

Part number: 49Y7900; System x feature code: A1BR; Power feature code: 1763; Description: EN2024 4-port 1Gb Ethernet Adapter


The adapter is supported in compute nodes as listed in Table 5-65.

Table 5-65 IBM Flex System EN2024 4-port 1 Gb Ethernet Adapter supported servers

EN2024 4-port 1Gb Ethernet Adapter (System x part number 49Y7900, Power feature code 1763): x240: Yes; p260: Yes; p460: Yes

The adapter supports the switches listed in Table 5-66.

Table 5-66 IBM Flex System EN2024 4-port 1 Gb Ethernet Adapter supported switches

EN2024 4-port 1Gb Ethernet Adapter (49Y7900 / 1763): EN4093 10Gb Scalable Switch (49Y4270): Yes; EN2092 1Gb Ethernet Switch (49Y4294): Yes; EN4091 10 Gb Ethernet Pass-thru (88Y6043): Yes

The EN2024 4-port 1Gb Ethernet Adapter has the following features:

� Dual Broadcom BCM5718 ASICs

� Quad-port Gigabit 1000BASE-X interface

� Two PCI Express 2.0 x1 host interfaces, one per ASIC

� Full duplex (FDX) capability, enabling simultaneous transmission and reception of data on the Ethernet network

� MSI and MSI-X capabilities, up to 17 MSI-X vectors

� I/O virtualization support for VMware NetQueue, and Microsoft VMQ

� Seventeen receive queues and 16 transmit queues

� Seventeen MSI-X vectors supporting per-queue interrupt to host

� Function Level Reset (FLR)

� ECC error detection and correction on internal static random-access memory (SRAM)

� TCP, IP, and UDP checksum offload

� Large Send offload, TCP segmentation offload

� Receive-side scaling

� Virtual LANs (VLANs): IEEE 802.1q VLAN tagging

� Jumbo frames (9 KB)

� IEEE 802.3x flow control

� Statistic gathering (SNMP MIB II, Ethernet-like MIB [IEEE 802.3x, Clause 30])

� Comprehensive diagnostic and configuration software suite

� Advanced Configuration and Power Interface (ACPI) 1.1a-compliant: multiple power modes

� Wake-on-LAN (WOL) support



� Preboot Execution Environment (PXE) support

� RoHS-compliant

Figure 5-62 shows the IBM Flex System EN2024 4-port 1 Gb Ethernet Adapter.

Figure 5-62 The EN2024 4-port 1Gb Ethernet Adapter for IBM Flex System

For more information, see the IBM Redbooks Product Guide for the EN2024 4-port 1Gb Ethernet Adapter, available at:

http://www.redbooks.ibm.com/abstracts/tips0845.html?Open

5.6.6 IBM Flex System EN4132 2-port 10 Gb Ethernet Adapter

The IBM Flex System EN4132 2-port 10 Gb Ethernet Adapter from Mellanox provides the highest performing and most flexible interconnect solution for servers used in Enterprise Data Centers, High-Performance Computing, and Embedded environments.

Table 5-67 lists the ordering information.

Table 5-67 IBM Flex System EN4132 2-port 10 Gb Ethernet Adapter ordering information

Part number: 90Y3466; System x feature code: A1QY; Power feature code: None; Description: EN4132 2-port 10 Gb Ethernet Adapter

The adapter is supported in compute nodes as listed in Table 5-68.

Table 5-68 IBM Flex System EN4132 2-port 10 Gb Ethernet Adapter supported servers

EN4132 2-port 10 Gb Ethernet Adapter (System x part number 90Y3466): x240: Yes; p260: No; p460: No


The adapter supports the switches listed in Table 5-69.

Table 5-69 IBM Flex System EN4132 2-port 10 Gb Ethernet Adapter supported switches

EN4132 2-port 10 Gb Ethernet Adapter (90Y3466): EN4093 10Gb Scalable Switch (49Y4270): Yes; EN2092 1Gb Ethernet Switch (49Y4294): No; EN4091 10 Gb Ethernet Pass-thru (88Y6043): Yes

The IBM Flex System EN4132 2-port 10 Gb Ethernet Adapter has the following features:

� Based on Mellanox Connect-X3 technology

� IEEE Std. 802.3 compliant

� PCI Express 3.0 (1.1 and 2.0 compatible) through an x8 edge connector up to 8 GT/s

� 10 Gbps Ethernet

� Processor offload of transport operations

� CORE-Direct application offload

� GPUDirect application offload

� RDMA over Converged Ethernet (RoCE)

� End-to-end QoS and congestion control

� Hardware-based I/O virtualization

� TCP/UDP/IP stateless offload

� Ethernet encapsulation using Ethernet over InfiniBand (EoIB)

� RoHS-6 compliant

Restriction: This I/O adapter is currently not supported on the p260 and p460. Use the IBM Flex System EN4054 4-port 10 Gb Ethernet Adapter instead.



Figure 5-63 shows the IBM Flex System EN4132 2-port 10 Gb Ethernet Adapter.

Figure 5-63 The EN4132 2-port 10 Gb Ethernet Adapter for IBM Flex System

For more information, see the IBM Redbooks Product Guide for the EN4132 2-port 10 Gb Ethernet Adapter at:

http://www.redbooks.ibm.com/abstracts/tips0873.html?Open

5.6.7 IBM Flex System EN4054 4-port 10 Gb Ethernet Adapter

The IBM Flex System EN4054 4-port 10 Gb Ethernet Adapter from Emulex enables the installation of four 10 Gb ports of high-speed Ethernet into an IBM Power Systems compute node. These ports interface to chassis switches or pass-through modules, enabling connections within and external to the IBM Flex System Enterprise Chassis.

The firmware for this four port adapter is provided by Emulex, whereas the AIX driver and AIX tool support are provided by IBM.

Table 5-70 lists the ordering information.

Table 5-70 IBM Flex System EN4054 4-port 10 Gb Ethernet Adapter ordering information

Part number: None; System x feature code: None; Power feature code: 1762; Description: EN4054 4-port 10Gb Ethernet Adapter

The adapter is supported in compute nodes as listed in Table 5-71.

Table 5-71 IBM Flex System EN4054 4-port 10 Gb Ethernet Adapter supported servers

EN4054 4-port 10Gb Ethernet Adapter (Power feature code 1762): x240: No; p260: Yes; p460: Yes

Restriction: This I/O adapter is not supported on the x240. Use the IBM Flex System CN4054 10 Gb Virtual Fabric Adapter instead.


The adapter supports the switches listed in Table 5-72.

Table 5-72 IBM Flex System EN4054 4-port 10 Gb Ethernet Adapter supported switches

EN4054 4-port 10Gb Ethernet Adapter (Power feature code 1762): EN4093 10Gb Scalable Switch (49Y4270): Yes; EN2092 1Gb Ethernet Switch (49Y4294): Yes; EN4091 10 Gb Ethernet Pass-thru (88Y6043): Yes

The IBM Flex System EN4054 4-port 10 Gb Ethernet Adapter has the following features and specifications:

� Four-port 10 Gb Ethernet adapter

� Dual-ASIC Emulex BladeEngine 3 controller

� Connection to either 1 Gb or 10 Gb data center infrastructure (1 Gb and 10 Gb auto-negotiation)

� PCI Express 3.0 x8 host interface (The p260 and p460 support PCI Express 2.0 x8.)

� Full-duplex capability

� Bus-mastering support

� Direct memory access (DMA) support

� PXE support

� IPv4/IPv6 TCP, UDP checksum offload

– Large send offload
– Large receive offload
– Receive-Side Scaling (RSS)
– IPv4 TCP Chimney offload
– TCP Segmentation offload

� VLAN insertion and extraction

� Jumbo frames up to 9000 bytes

� Load balancing and failover support, including adapter fault tolerance (AFT), switch fault tolerance (SFT), adaptive load balancing (ALB), teaming support, and IEEE 802.3ad

� Enhanced Ethernet (draft)

– Enhanced Transmission Selection (ETS) (P802.1Qaz)
– Priority-based Flow Control (PFC) (P802.1Qbb)
– Data Center Bridging Capabilities eXchange Protocol, CIN-DCBX, and CEE-DCBX (P802.1Qaz)

� Supports Serial over LAN (SoL)

� Total Max Power: 23.1 W



Figure 5-64 shows the IBM Flex System EN4054 4-port 10 Gb Ethernet Adapter.

Figure 5-64 IBM Flex System EN4054 4-port 10 Gb Ethernet Adapter

For more information, see the IBM Redbooks Product Guide for the IBM Flex System EN4054 4-port 10 Gb Ethernet Adapter, available at:

http://www.redbooks.ibm.com/abstracts/tips0868.html?Open

5.6.8 IBM Flex System CN4054 10 Gb Virtual Fabric Adapter

The IBM Flex System CN4054 10 Gb Virtual Fabric Adapter from Emulex is a 4-port 10 Gb converged network adapter. It can scale to up to 16 virtual ports and support multiple protocols like Ethernet, iSCSI, and FCoE.

Table 5-73 lists the ordering part numbers and feature codes.

Table 5-73 IBM Flex System CN4054 10 Gb Virtual Fabric Adapter ordering information

System x       System x       Power
part number    feature code   feature code   Description
90Y3554        A1R1           None           IBM Flex System CN4054 10 Gb Virtual Fabric Adapter
90Y3558        A1R0           None           IBM Flex System CN4054 Virtual Fabric Adapter Upgrade


The adapter and upgrade are supported in compute nodes as listed in Table 5-74.

Table 5-74 IBM Flex System CN4054 10 Gb Virtual Fabric Adapter supported servers

90Y3554   None   IBM Flex System CN4054 10 Gb Virtual Fabric Adapter     x240: Yes; p260: No; p460: No
90Y3558   None   IBM Flex System CN4054 Virtual Fabric Adapter Upgrade   x240: Yes; p260: No; p460: No

Note: This I/O adapter is not supported on the p260 and p460. Use the IBM Flex System EN4054 4-port 10 Gb Ethernet Adapter instead.

The adapter supports the switches listed in Table 5-75.

Table 5-75 IBM Flex System CN4054 10 Gb Virtual Fabric Adapter supported switches

CN4054 10Gb Virtual Fabric Adapter (System x part number 90Y3554): EN4093 10Gb Scalable Switch (49Y4270): Yes; EN2092 1Gb Ethernet Switch (49Y4294): Yes; EN4091 10 Gb Ethernet Pass-thru (88Y6043): Yes

The IBM Flex System CN4054 10 Gb Virtual Fabric Adapter has the following features and specifications:

� Dual-ASIC Emulex BladeEngine 3 controller

� Operates either as a 4-port 1/10 Gb Ethernet adapter, or supports up to 16 Virtual Network Interface Cards (vNICs).

� In virtual NIC (vNIC) mode, it supports:

– Virtual port bandwidth allocation in 100 Mbps increments.
– Up to 16 virtual ports per adapter (four per port).
– With the CN4054 Virtual Fabric Adapter Upgrade, 90Y3558, four of the 16 vNICs (one per port) support iSCSI or FCoE.

� Support for two vNIC modes: IBM Virtual Fabric Mode and Switch Independent Mode.

� Wake On LAN support.

� With the CN4054 Virtual Fabric Adapter Upgrade, 90Y3558, the adapter adds FCoE and iSCSI hardware initiator support.

– iSCSI support is implemented as a full offload and presents an iSCSI adapter to the operating system.

� TCP offload Engine (TOE) support with Windows Server 2003, 2008, and 2008 R2 (TCP Chimney) and Linux.

� Connection and its state are passed to the TCP offload engine.

� Data transmit and receive is handled by adapter.



� Supported with iSCSI.

� Connection to either 1 Gb or 10 Gb data center infrastructure (1 Gb and 10 Gb auto-negotiation).

� PCI Express 3.0 x8 host interface.

� Full-duplex capability.

� Bus-mastering support.

� DMA support.

� PXE support.

� IPv4/IPv6 TCP, UDP checksum offload:

– Large send offload
– Large receive offload
– RSS
– IPv4 TCP Chimney offload
– TCP Segmentation offload

� VLAN insertion and extraction.

� Jumbo frames up to 9000 bytes.

� Load balancing and failover support, including AFT, SFT, ALB, teaming support, and IEEE 802.3ad.

� Enhanced Ethernet (draft):

– Enhanced Transmission Selection (ETS) (P802.1Qaz)
– Priority-based Flow Control (PFC) (P802.1Qbb)
– Data Center Bridging Capabilities eXchange Protocol, CIN-DCBX, and CEE-DCBX (P802.1Qaz)

� Supports Serial over LAN (SoL)

� Total Max Power: 23.1 W

The IBM Flex System CN4054 10 Gb Virtual Fabric Adapter supports the following modes of operation:

� IBM Virtual Fabric Mode

This mode works only in conjunction with an IBM Flex System Fabric EN4093 10 Gb Scalable Switch installed in the chassis. In this mode, the adapter communicates with the switch module to obtain vNIC parameters by using Data Center Bridging Exchange (DCBX). A special tag within each data packet is added and later removed by the NIC and switch for each vNIC group. This tag helps maintain separation of the virtual channels.

In IBM Virtual Fabric Mode, each physical port is divided into four virtual ports, providing a total of 16 virtual NICs per adapter. The default bandwidth for each vNIC is 2.5 Gbps. Bandwidth for each vNIC can be configured at the EN4093 switch from 100 Mbps to 10 Gbps, up to a total of 10 Gb per physical port. The vNICs can also be configured to have 0 bandwidth if you must allocate the available bandwidth to fewer than eight vNICs. In IBM Virtual Fabric Mode, you can change the bandwidth allocations through the EN4093 switch user interfaces without having to reboot the server.

When storage protocols are enabled on the adapter by using CN4054 Virtual Fabric Adapter Upgrade, 90Y3558, six ports are Ethernet, and two ports are either iSCSI or FCoE.


� Switch Independent vNIC Mode

This vNIC mode is supported with the following switches:

– IBM Flex System Fabric EN4093 10 Gb Scalable Switch
– IBM Flex System EN4091 10 Gb Ethernet Pass-thru and a top-of-rack switch

Switch Independent Mode offers the same capabilities as IBM Virtual Fabric Mode in terms of the number of vNICs and bandwidth that each can have. However, Switch Independent Mode extends the existing customer VLANs to the virtual NIC interfaces. The IEEE 802.1Q VLAN tag is essential to the separation of the vNIC groups by the NIC adapter or driver and the switch. The VLAN tags are added to the packet by the applications or drivers at each end station rather than by the switch.

� Physical NIC (pNIC) mode

In pNIC mode, the expansion card can operate as a standard 10 Gbps or 1 Gbps 4-port Ethernet expansion card.

When in pNIC mode, the expansion card functions with any of the following I/O modules:

– IBM Flex System Fabric EN4093 10 Gb Scalable Switch
– IBM Flex System EN4091 10 Gb Ethernet Pass-thru and a top-of-rack switch
– IBM Flex System EN2092 1 Gb Ethernet Scalable Switch

In pNIC mode, the adapter with the CN4054 Virtual Fabric Adapter Upgrade, 90Y3558, applied operates in traditional converged network adapter (CNA) mode. It operates with four ports of Ethernet and four ports of storage (iSCSI or FCoE) available to the operating system.

Figure 5-65 shows the IBM Flex System CN4054 10 Gb Virtual Fabric Adapter.

Figure 5-65 The CN4054 10Gb Virtual Fabric Adapter for IBM Flex System

For more information, see the IBM Redbooks Product Guide for the IBM Flex System CN4054 10 Gb Virtual Fabric Adapter, at:

http://www.redbooks.ibm.com/abstracts/tips0868.html?Open


5.6.9 IBM Flex System FC3172 2-port 8 Gb FC Adapter

The IBM Flex System FC3172 2-port 8 Gb FC Adapter from QLogic enables high-speed access for IBM Flex System Enterprise Chassis compute nodes to connect to a Fibre Channel SAN. This adapter is based on the proven QLogic 2532 8 Gb ASIC design. It works with any of the 8 Gb or 16 Gb IBM Flex System Fibre Channel switch modules.

Table 5-76 lists the ordering part number and feature code.

Table 5-76 IBM Flex System FC3172 2-port 8 Gb FC Adapter ordering information

Part number: 69Y1938; Feature codes: A1BM / 1764 (a); Description: IBM Flex System FC3172 2-port 8 Gb FC Adapter

a. The first feature code listed is for configurations ordered through System x sales channels. The second feature code is for configurations ordered through the IBM Power Systems channel.

The adapter is supported in compute nodes as listed in Table 5-77.

Table 5-77 IBM Flex System FC3172 2-port 8 Gb FC Adapter supported servers

IBM Flex System FC3172 2-port 8 Gb FC Adapter (System x part number 69Y1938, Power feature code 1764): x240: Yes; p260: Yes; p460: Yes

The adapter supports the switches listed in Table 5-78.

Table 5-78 IBM Flex System FC3172 2-port 8 Gb FC Adapter supported switches

FC3172 2-port 8Gb FC Adapter (69Y1938 / 1764): FC5022 16Gb SAN Scalable Switch: Yes; FC5022 16Gb ESB Switch (90Y9356): Yes; FC3171 8 Gb SAN Switch (69Y1930): Yes; FC3171 8 Gb SAN Pass-thru (69Y1934): Yes

The IBM Flex System FC3172 2-port 8 Gb FC Adapter has the following features:

� QLogic ISP2532 controller

� PCI Express 2.0 x4 host interface

� Bandwidth: 8 Gb per second maximum at half-duplex and 16 Gb per second maximum at full-duplex per port

� 8/4/2 Gbps auto-negotiation

� Support for FCP SCSI initiator and target operation

� Support for full-duplex operation

� Support for Fibre Channel protocol SCSI (FCP-SCSI) and Fibre Channel Internet protocol (FCP-IP)

� Support for point-to-point fabric connection (F-port fabric login)

� Support for Fibre Channel Arbitrated Loop (FC-AL) public loop profile: Fibre Loop-(FL-Port)-Port Login

� Support for Fibre Channel services class 2 and 3



� Configuration and boot support in UEFI

� Power usage: 3.7 W typical

� RoHS 6 compliant

Figure 5-66 shows the IBM Flex System FC3172 2-port 8 Gb FC Adapter.

Figure 5-66 The IBM Flex System FC3172 2-port 8 Gb FC Adapter

For more information, see the IBM Redbooks Product Guide for the IBM Flex System FC3172 2-port 8 Gb FC Adapter, at:

http://www.redbooks.ibm.com/abstracts/tips0867.html?Open

5.6.10 IBM Flex System FC3052 2-port 8 Gb FC Adapter

The IBM Flex System FC3052 2-port 8 Gb FC Adapter from Emulex provides compute nodes with high-speed access to a Fibre Channel SAN. This 2-port 8 Gb adapter is based on the Emulex 8 Gb Fibre Channel application-specific integrated circuits (ASIC). It uses industry-proven technology to provide high-speed, reliable access to SAN connected storage. The two ports enable redundant connections to the SAN, which can increase reliability and reduce downtime.

Table 5-79 lists the ordering part number and feature code.

Table 5-79 IBM Flex System FC3052 2-port 8 Gb FC Adapter ordering information

Part number: 95Y2375; System x feature code: A2N5; Power feature code: None; Description: IBM Flex System FC3052 2-port 8 Gb FC Adapter


The adapter is supported in compute nodes as listed in Table 5-80.

Table 5-80 IBM Flex System FC3052 2-port 8 Gb FC Adapter supported servers

IBM Flex System FC3052 2-port 8 Gb FC Adapter (System x part number 95Y2375): x240: Yes; p260: No; p460: No

Restriction: This I/O adapter is not supported on the p260 and p460. Use the IBM Flex System FC3172 2-port 8 Gb FC Adapter instead.

The adapter supports the switches listed in Table 5-81.

Table 5-81 IBM Flex System FC3052 2-port 8 Gb FC Adapter supported switches

FC3052 2-port 8Gb FC Adapter (95Y2375): FC5022 16Gb SAN Scalable Switch: Yes; FC5022 16Gb ESB Switch (90Y9356): Yes; FC3171 8 Gb SAN Switch (69Y1930): Yes; FC3171 8 Gb SAN Pass-thru (69Y1934): Yes

The IBM Flex System FC3052 2-port 8 Gb FC Adapter has the following features and specifications:

� Uses the Emulex “Saturn” 8 Gb Fibre Channel I/O Controller chip

� Multifunction PCIe 2.0 device with two independent FC ports

� Auto-negotiation between 2-Gbps, 4-Gbps, and 8-Gbps FC link attachments

� Complies with the PCIe base and CEM 2.0 specifications

� Enablement of high-speed and dual-port connection to a Fibre Channel SAN

� Comprehensive virtualization capabilities with support for N_Port ID Virtualization (NPIV) and Virtual Fabric

� Simplified installation and configuration by using common HBA drivers

� Common driver model that eases management and enables upgrades independent of HBA firmware

� Fibre Channel specifications:

– Bandwidth: Burst transfer rate of up to 1600 MBps full-duplex per port

– Support for point-to-point fabric connection: F-Port Fabric Login

– Support for FC-AL and FC-AL-2 FL-Port Login

– Support for Fibre Channel services class 2 and 3

� Single-chip design with two independent 8 Gbps serial Fibre Channel ports, each of which provides these features:

– Reduced instruction set computer (RISC) processor

– Integrated serializer/deserializer

– Receive DMA sequencer

– Frame buffer



� Onboard DMA: DMA controller for each port: Transmit and receive

� Frame buffer first in, first out (FIFO): Integrated transmit and receive frame buffer for each data channel

Figure 5-67 shows the IBM Flex System FC3052 2-port 8 Gb FC Adapter.

Figure 5-67 IBM Flex System FC3052 2-port 8 Gb FC Adapter

For more information, see the IBM Redbooks Product Guide for the IBM Flex System FC3052 2-port 8 Gb FC Adapter, at:

http://www.redbooks.ibm.com/abstracts/tips0869.html?Open

5.6.11 IBM Flex System FC5022 2-port 16Gb FC Adapter

The network architecture on the IBM Flex System platform is designed to address network challenges. It gives you a scalable way to integrate, optimize, and automate your data center. The IBM Flex System FC5022 2-port 16Gb FC Adapter enables high-speed access to external SANs. This adapter is based on Brocade architecture, and offers end-to-end 16 Gb connectivity to SAN. It can auto-negotiate, and also work at 8 Gb and 4 Gb speeds. It has enhanced features like N-port trunking, and increased encryption for security.

Table 5-82 lists the ordering part number and feature code.

Table 5-82 IBM Flex System FC5022 2-port 16Gb FC Adapter ordering information

Part number: 88Y6370; System x feature code: A1BP; Power feature code: None; Description: IBM Flex System FC5022 2-port 16Gb FC Adapter


The adapter is supported in compute nodes as listed in Table 5-83.

Table 5-83 IBM Flex System FC5022 2-port 16Gb FC Adapter supported servers

IBM Flex System FC5022 2-port 16Gb FC Adapter (System x part number 88Y6370): x240: Yes; p260: No; p460: No

Restriction: This I/O adapter is not supported on the p260 and p460. Use the IBM Flex System FC3172 2-port 8 Gb FC Adapter instead.

The adapter supports the switches listed in Table 5-84.

Table 5-84 IBM Flex System FC5022 2-port 16Gb FC Adapter supported switches

FC5022 2-port 16Gb FC Adapter (88Y6370): FC5022 16Gb SAN Scalable Switch: Yes; FC5022 16Gb ESB Switch (90Y9356): Yes; FC3171 8 Gb SAN Switch (69Y1930): No; FC3171 8 Gb SAN Pass-thru (69Y1934): No

The IBM Flex System FC5022 2-port 16Gb FC Adapter has the following features:

� 16 Gbps Fibre Channel

– Use 16 Gbps bandwidth to eliminate internal oversubscription
– Investment protection with the latest Fibre Channel technologies
– Reduce the number of ISL external switch ports, optics, cables, and power

� Over 500,000 IOPS per port, which maximizes transaction performance and density of VMs per compute node

� Achieves performance of 315,000 IOPS for Email Exchange and 205,000 IOPS for SQL Database

� Boot from SAN allows the automation SAN Boot LUN discovery to simplify boot from SAN and reduce image management complexity

� Brocade Server Application Optimization (SAO) provides quality of service (QoS) levels assignable to VM applications

� Direct I/O enables native (direct) I/O performance by allowing VMs to bypass the hypervisor and communicate directly with the adapter

� Brocade Network Advisor simplifies and unifies the management of Brocade adapter, SAN, and LAN resources through a single pane-of-glass

� LUN Masking, an Initiator-based LUN masking for storage traffic isolation

� NPIV allows multiple host initiator N_Ports to share a single physical N_Port, dramatically reducing SAN hardware requirements

� Target Rate Limiting (TRL) throttles data traffic when accessing slower speed storage targets to avoid back pressure problems

� RoHS-6 compliant



Figure 5-68 shows the IBM Flex System FC5022 2-port 16Gb FC Adapter.

Figure 5-68 IBM Flex System FC5022 2-port 16Gb FC Adapter

For more information, see the IBM Redbooks Product Guide for the IBM Flex System FC5022 2-port 16Gb FC Adapter, at:

http://www.redbooks.ibm.com/abstracts/tips0891.html?Open

5.6.12 IBM Flex System IB6132 2-port FDR InfiniBand Adapter

InfiniBand is a high-speed server-interconnect technology that is ideally suited as the interconnect technology for access layer and storage components. It is designed for application and back-end IPC applications, for connectivity between application and back-end layers, and from back-end to storage layers. Through use of host channel adapters (HCAs) and switches, InfiniBand technology is used to connect servers with remote storage and networking devices, and other servers. It can also be used inside servers for interprocess communication (IPC) in parallel clusters.

The IBM Flex System IB6132 2-port FDR InfiniBand Adapter delivers low-latency and high bandwidth for performance-driven server and storage clustering applications in Enterprise Data Centers, High-Performance Computing, and Embedded environments. Clustered databases, parallelized applications, transactional services, and high-performance embedded I/O applications can achieve significant performance improvements. These improvements in turn help reduce the completion time and lowers the cost per operation.

The IB6132 2-port FDR InfiniBand Adapter simplifies network deployment by consolidating clustering, communications, and management I/O, and helps provide enhanced performance in virtualized server environments.

Table 5-85 lists the ordering part number and feature code.

Table 5-85 IBM Flex System IB6132 2-port FDR InfiniBand Adapter ordering information

Part number: 90Y3454; System x feature code: A1QZ; Power feature code: None; Description: IBM Flex System IB6132 2-port FDR InfiniBand Adapter


The adapter is supported in compute nodes as listed in Table 5-86.

Table 5-86 IBM Flex System IB6132 2-port FDR InfiniBand Adapter supported servers

IBM Flex System IB6132 2-port FDR InfiniBand Adapter (System x part number 90Y3454): x240: Yes; p260: No; p460: No

Restriction: This I/O adapter is not supported on the p260 and p460. Use the IBM Flex System IB6132 2-port QDR InfiniBand Adapter instead.

The adapter supports the switches listed in Table 5-87.

Table 5-87 IBM Flex System IB6132 2-port FDR InfiniBand Adapter supported switches

IB6132 2-port FDR InfiniBand Adapter (90Y3454): IB6131 InfiniBand Switch (90Y3450): Yes

The IB6132 2-port FDR InfiniBand Adapter has the following features and specifications:

� Based on Mellanox Connect-X3 technology

� Virtual Protocol Interconnect (VPI)

� InfiniBand Architecture Specification v1.2.1 compliant

� Supported InfiniBand speeds (auto-negotiated):

– 1X/2X/4X SDR (2.5 Gbps per lane)
– DDR (5 Gbps per lane)
– QDR (10 Gbps per lane)
– FDR10 (40 Gbps, 10 Gbps per lane)
– FDR (56 Gbps, 14 Gbps per lane)

� IEEE Std. 802.3 compliant

� PCI Express 3.0 x8 host-interface up to 8 GT/s bandwidth

� Processor offload of transport operations

� CORE-Direct application offload

� GPUDirect application offload

� Unified Extensible Firmware Interface (UEFI)

� WoL

� RoCE

� End-to-end QoS and congestion control

� Hardware-based I/O virtualization

� TCP/UDP/IP stateless offload

� Ethernet encapsulation (EoIB)



� RoHS-6 compliant

� Power consumption: Typical: 9.01 W, maximum 10.78 W

Figure 5-69 shows the IBM Flex System IB6132 2-port FDR InfiniBand Adapter.

Figure 5-69 IBM Flex System IB6132 2-port FDR InfiniBand Adapter

For more information, see the IBM Redbooks Product Guide for the IBM Flex System IB6132 2-port FDR InfiniBand Adapter, at:

http://www.redbooks.ibm.com/abstracts/tips0872.html?Open

5.6.13 IBM Flex System IB6132 2-port QDR InfiniBand Adapter

The IBM Flex System IB6132 2-port QDR InfiniBand Adapter provides a high-performing and flexible interconnect solution for servers used in Enterprise Data Centers, High-Performance Computing, and Embedded environments. The adapter is based on Mellanox ConnectX-2 EN technology, which improves network performance by increasing available bandwidth to the processor, especially in virtualized server environments.

Table 5-88 lists the ordering part number and feature code.

Table 5-88 IBM Flex System IB6132 2-port QDR InfiniBand Adapter ordering information

Part number: None; System x feature code: None; Power feature code: 1761; Description: IB6132 2-port QDR InfiniBand Adapter

The adapter is supported in compute nodes as listed in Table 5-89.

Table 5-89 IBM Flex System IB6132 2-port QDR InfiniBand Adapter supported servers

IB6132 2-port QDR InfiniBand Adapter (Power feature code 1761): x240: No; p260: Yes; p460: Yes


The adapter supports the switches listed in Table 5-90.

Table 5-90 IBM Flex System IB6132 2-port QDR InfiniBand Adapter supported switches

IB6132 2-port QDR InfiniBand Adapter (Power feature code 1761): IB6131 InfiniBand Switch (90Y3450): Yes

The IBM Flex System IB6132 2-port QDR InfiniBand Adapter has the following features and specifications:

� ConnectX2 based adapter

� VPI

� InfiniBand Architecture Specification v1.2.1 compliant

� IEEE Std. 802.3 compliant

� PCI Express 2.0 (1.1 compatible) through an x8 edge connector up to 5 GT/s

� Processor offload of transport operations

� CORE-Direct application offload

� GPUDirect application offload

� UEFI

� WoL

� RoCE

� End-to-end QoS and congestion control

� Hardware-based I/O virtualization

� TCP/UDP/IP stateless offload

� RoHS-6 compliant

Restriction: This I/O adapter is not supported on the x240. Use the IBM Flex System IB6132 2-port FDR InfiniBand Adapter instead.



Figure 5-70 shows the IBM Flex System IB6132 2-port QDR InfiniBand Adapter.

Figure 5-70 IBM Flex System IB6132 2-port QDR InfiniBand Adapter

For more information, see the IBM Redbooks Product Guide for the IBM Flex System IB6132 2-port QDR InfiniBand Adapter, at:

http://www.redbooks.ibm.com/abstracts/tips0890.html?Open


Chapter 6. Network integration

This chapter describes different aspects of planning and implementing a network infrastructure of the IBM Flex System Enterprise Chassis. You need to take several factors into account to achieve a successful implementation. These factors include network management, performance, high-availability and redundancy features, VLAN implementation, interoperability, and others.

This chapter includes the following sections:

� 6.1, “Ethernet switch module selection” on page 258
� 6.2, “Scalable switches” on page 258
� 6.3, “VLAN” on page 260
� 6.4, “High availability and redundancy” on page 261
� 6.5, “Performance” on page 266
� 6.6, “IBM Virtual Fabric Solution” on page 267
� 6.7, “VMready” on page 270


6.1 Ethernet switch module selection

There are a number of I/O modules that can be used to provide network connectivity. They include Ethernet switch modules that provide integrated switching capabilities and pass-through modules that make internal compute node ports available to the outside. Plan to use the Ethernet switch modules whenever possible, because they often provide the required functions with simplified cabling. However, some circumstances such as specific security policies or certain network requirements prevent using integrated switching capabilities. In these cases, use pass-through modules.

For more information about Ethernet pass-through module for the Enterprise Chassis, see 4.10.5, “IBM Flex System EN4091 10 Gb Ethernet Pass-thru” on page 100.

Make sure that the external interface ports of the switches selected are compatible with physical cabling that you use or are planning to use in your data center. Also make sure that features and functions required in the network are supported by proposed switch modules.

Table 6-1 lists common selection considerations that are useful when selecting an appropriate switch module.

Table 6-1 Switch module selection criteria

Requirement                                                    EN2092 1Gb Ethernet Switch   EN4093 10Gb Scalable Switch
Gigabit Ethernet to nodes / 10 Gb Ethernet uplinks             Yes                          Yes
10 Gb Ethernet to nodes / 10 Gb Ethernet uplinks               No                           Yes
Basic Layer 2 switching (VLAN, port aggregation)               Yes                          Yes
Advanced Layer 2 switching: IEEE features (Failover, QoS)      Yes                          Yes
Layer 3 IPv4 switching (forwarding, routing, ACL filtering)    Yes                          Yes
Layer 3 IPv6 switching (forwarding, routing, ACL filtering)    Yes                          Yes
10 Gb Ethernet CEE/FCoE                                        No                           Yes (a)
Switch stacking                                                No                           Yes (a)
vNIC support                                                   No                           Yes
VMready                                                        Yes                          Yes

a. Support for Fibre Channel over Ethernet (FCoE) and switch stacking is planned for later in 2012.

6.2 Scalable switches

The switches that are installable within the Enterprise Chassis are scalable. Additional ports (or partitions) can be added as required, growing the switch to meet new requirements.

The architecture allows for up to 16 scalable switch partitions within each chassis, with a total of four partitions per switch. The number of partitions is dictated by the specific I/O adapter and I/O module combination. The scalable switch module requires upgrades to enable partitioning.



Port upgrades to scalable switches are added through the Features on Demand (FoD) capability, so you can increase the number of ports with no hardware changes. As each FoD upgrade is enabled, additional ports of the switch are activated. If the node has a suitable I/O adapter, the ports are available to the node.

For more information about switch capability, see 4.10, “I/O modules” on page 92.

The example shown in Figure 6-1 is the EN4093 10Gb Scalable Switch. Fourteen ports are available in the base product together with 10 uplink ports. However, additional logical partitions can be enabled with a FoD upgrade, providing a second set of 14 internal ports.

Figure 6-1 Logical Partitions for the IBM Flex System Fabric EN4093 10 Gb Scalable Switch

The three logical partitions shown in Figure 6-1 are enabled as follows:

� Base switch: enables fourteen internal 10 Gb ports (one to each server) and ten external 10 Gb ports. Supports the 2-port 10 Gb LOM and Virtual Fabric capability.
� First upgrade via FoD: enables a second set of fourteen internal 10 Gb ports (one to each server) and two 40 Gb ports. Each 40 Gb port can be used as four 10 Gb ports. Supports the 4-port Virtual Fabric adapter.
� Second upgrade via FoD: enables a third set of fourteen internal 10 Gb ports (one to each server) and four external 10 Gb ports. Capable of supporting a six-port card in the future.

Figure 6-2 shows a node using a two port LAN on Motherboard (LOM). Port 1 is connected to the first switch. The second port is connected to the second switch.

Figure 6-2 Switch to I/O Module connections


Figure 6-3 shows a 4-port 10 Gb Ethernet adapter (IBM Flex System CN4054 10 Gb Virtual Fabric Adapter) and a 2-port Fibre Channel (FC) I/O Adapter (IBM Flex System FC5022 2-port 16Gb FC Adapter). These adapters deliver six fabrics to each node.

Figure 6-3 Showing six port connections to six fabric implementation of Ethernet combined with FC

6.3 VLAN

VLANs are commonly used in the Layer 2 network to split up groups of network users into manageable broadcast domains. They are also used to create logical segmentation of workgroups, and to enforce security policies among logical segments. VLAN considerations include the number and types of VLANs supported, tagging protocols supported, and configuration protocols implemented.

All switch modules for Enterprise Chassis support the 802.1Q protocol for VLAN tagging.

Another use of 802.1Q VLAN tagging is to divide one physical Ethernet interface into several logical interfaces that belong to different VLANs. In other words, a compute node can send and receive tagged traffic from different VLANs on the same physical interface. This process can be done with network adapter management software. This software is the same as used for network interface card (NIC) teaming, as described in 6.5.3, “NIC teaming” on page 267. Each logical interface displays as a separate network adapter in the operating system with its own set of characteristics. These characteristics include IP addresses, protocols, and services.

Use several logical interfaces when an application requires more than two separate interfaces and you do not want to dedicate a whole physical interface to it. This might be the case if you do not have enough physical interfaces or if traffic is low. VLANs might also be helpful if you need to implement strict security policies for separating network traffic, because implementing such policies with VLANs can eliminate the need to implement Layer 3 routing in the network.

To ensure that the application supports logical interfaces, check the documentation for possible restrictions applied to the NIC teaming configurations. Checking documentation is especially important in a clustering solutions implementation.
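As a generic illustration only (not tied to a particular adapter driver or IBM tool), the following Linux sketch creates two 802.1Q tagged logical interfaces on one physical port of a compute node. The interface names, VLAN IDs, and IP addresses are assumptions.

# Create two tagged logical interfaces on physical port eth0
ip link add link eth0 name eth0.10 type vlan id 10
ip link add link eth0 name eth0.20 type vlan id 20
# Each logical interface receives its own IP address and appears to the
# operating system as a separate network adapter
ip addr add 192.168.10.11/24 dev eth0.10
ip addr add 192.168.20.11/24 dev eth0.20
ip link set eth0.10 up
ip link set eth0.20 up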

For more information about Ethernet switch modules available with the Enterprise Chassis, see 4.10, “I/O modules” on page 92.



6.4 High availability and redundancy

You might need to have continuous access to your network services and applications. Providing high availability for client network resources is a complex task that involves fitting multiple “pieces” together on a hardware and a software level. One HA component is to provide network infrastructure availability.

Network infrastructure availability can be achieved by implementing certain techniques and technologies. Most of them are widely used standards, but some of them are specific to Enterprise Chassis. This section addresses the most common technologies that can be implemented in an Enterprise Chassis environment to provide high availability for network infrastructure.

In general, a typical LAN infrastructure consists of server NICs, client NICs, and network devices such as Ethernet switches and cables that connect them together. The potential failures in a network include port failures (both on switches and servers), cable failures, and network device failures.

To provide high availability and redundancy, avoid or minimize single points of failure. Provide redundancy for network equipment and communication links by using:

� Two Ethernet ports on each compute node (LOM enabled node)
� Two or four I/O modules on each node (four on double wide nodes)
� Two or four ports on I/O expansion cards on each compute node
� Two Ethernet switches per dual port for device redundancy

For more information about connection topology between I/O adapters and I/O modules, see 4.10, “I/O modules” on page 92.

Implement technologies that provide automatic failover in case of any failure. Automatic failover can be configured by using certain features and protocols that are supported by the network devices, together with server-side software.

Consider implementing these technologies, which can help achieve a higher level of availability in an Enterprise Chassis network solution (depending on your network architecture):

� Spanning Tree Protocol

� Layer 2 failover (also known as Trunk Failover)

� Virtual Link Aggregation Groups

� Virtual Router Redundancy Protocol

� Routing protocols such as Routing Information Protocol (RIP) or Open Shortest Path First (OSPF)


6.4.1 Redundant network topologies

The Enterprise Chassis can be connected to the enterprise network in several ways (Figure 6-4).

Figure 6-4 IBM redundant paths

Topology 1 in Figure 6-4 has each switch module in the Enterprise Chassis directly connected to one of the top of rack switches. The switch modules are connected through aggregation links by using some of the external ports on the switch. The specific number of external ports used for link aggregation depends on your redundancy requirements, performance considerations, and your actual network environment. This topology is the simplest way to integrate the Enterprise Chassis into an existing network, or to build a new one.

Topology 2 in Figure 6-4 has each switch module in the Enterprise Chassis with two direct connections to a pair of top of rack switches. This topology is more advanced, and has a higher level of redundancy. However, protocols such as Spanning Tree or Virtual Link Aggregation Groups must be implemented. Otherwise, network loops and broadcast storms might cause network failures.

6.4.2 Spanning Tree Protocol

Spanning Tree Protocol (STP) is an 802.1D standard protocol used in Layer 2 redundant network topologies. When multiple paths exist between two points on a network, STP or one of its enhanced variants can prevent broadcast loops. It can also ensure that the switch uses the most efficient network path. STP can also enable automatic network reconfiguration in case of failure. For example, top of rack switches 1 and 2, together with switch 1 in the Enterprise Chassis, create a loop in a Layer 2 network. For more information, see Topology 2 in Figure 6-4 on page 262. In this case, use STP as a loop prevention mechanism because a Layer 2 network cannot operate in a loop.

Assume that the link between top of rack switch 2 and Enterprise Chassis switch 1 is disabled by STP to break a loop. Therefore, traffic goes through the link between top of rack switch 1 and Enterprise Chassis switch 1. During a link failure, STP reconfigures the network and activates the previously disabled link. The process of reconfiguration can take tens of seconds, during which time the service is unavailable.

Whenever possible, plan to use trunking with VLAN tagging for interswitch connections. This configuration can help achieve higher performance by increasing interswitch bandwidth, and higher availability by providing redundancy for links in the aggregation bundle. For more information about trunking, see 6.5.1, “Trunking” on page 266.

STP modifications, such as Port Fast Forwarding or Uplink Fast, might help to improve STP convergence time and the performance of the network infrastructure. Additionally, several instances of STP can run on the same switch simultaneously. These instances run on a per-VLAN basis. That is, each VLAN has its own copy of STP to load balance traffic across uplinks more efficiently.

For example, assume that a switch has two uplinks in a redundant loop topology and several VLANs are implemented. If single STP is used, one of these uplinks is disabled and the other carries traffic from all VLANs. However, if two STP instances are running, one link is disabled for one set of VLANs while carrying traffic from another set of VLANs, and vice versa. In other words, both links are active, thus enabling more efficient use of available bandwidth.

6.4.3 Layer 2 failover

Each compute node can have one IP address per Ethernet port, or one virtual NIC consisting of two or more physical interfaces with one IP address. This configuration is known as NIC teaming technology. From the Enterprise Chassis perspective, NIC teaming is useful when you plan to implement high availability configurations with automatic failover in case of internal or external uplink failures.

You can use only two ports on the compute node per virtual NIC for high availability configurations. One port is active, and the other is standby. One port is connected to a switch in I/O bay 1, and the other port to a switch in I/O bay 2. If you plan to use an Ethernet I/O adapter for high availability configurations, the same rules apply. Connect the active and standby ports to switches in different bays.

During internal port or link failure of the active NIC, the teaming driver switches the port roles. The standby port becomes active and the active port becomes standby. This process takes only a few seconds. After restoration of the failed link, the teaming driver can run a failback or do nothing, depending on the configuration.

Look at topology 1 in Figure 6-4 on page 262. Assume that NIC Teaming is on, and that the compute node NIC port connected to switch 1 is active and the other is on standby. If something goes wrong with the internal link to switch 1, the teaming driver detects the NIC port failure and runs a failover. If external connections are lost, such as the connection from Enterprise Chassis switch 1 to top of rack switch 1, nothing happens. There is no failover because the internal link is still on and the teaming driver does not detect any failure. Therefore the network service becomes unavailable.

To address this issue, use the Layer 2 Failover technique. Layer 2 Failover can disable all internal ports on the switch module in the case of an upstream link failure. A disabled port means no link, so the NIC teaming driver runs a failover. Layer 2 Failover is a special feature supported on Enterprise Chassis switch modules. If Layer 2 Failover is enabled and connectivity with top of rack switch 1 is lost, the NIC teaming driver runs a failover, and service remains available through top of rack switch 2 and Enterprise Chassis switch 2.

Use Layer 2 Failover with NIC active/standby teaming. Before using NIC teaming, verify whether it is supported by the operating system and applications deployed.

6.4.4 Virtual Link Aggregation Groups

In many data center environments, downstream switches connect to upstream devices that consolidate traffic as shown in Figure 6-5.

Figure 6-5 Typical switching layers with STP and VLAG

A switch in the access layer can be connected to more than one switch in the aggregation layer to provide network redundancy. Typically, STP is used to prevent broadcast loops, blocking redundant uplink paths. This protocol has the unwanted consequence of reducing the available bandwidth between the layers by as much as 50%. In addition, STP might be slow to resolve topology changes that occur during a link failure, and can result in considerable Media Access Control (MAC) address flooding.

Using Virtual Link Aggregation Groups (VLAGs), the redundant uplinks remain active using all available bandwidth. Using the VLAG feature, the paired VLAG peers display to the downstream device as a single virtual entity for establishing a multi-port trunk. The VLAG-capable switches synchronize their logical view of the access layer port structure and internally prevent implicit loops. The VLAG topology also responds more quickly to link failure, and does not result in unnecessary MAC address flooding.

Remember: Generally, do not use automatic failback for NIC teaming to avoid issues when you replace the failed switch module. A newly installed switch module has no configuration data, and can cause service disruption.


VLAGs are also useful in multi-layer environments for both uplink and downlink redundancy to any regular LAG-capable device as shown in Figure 6-6.

Figure 6-6 VLAG with multiple layers

6.4.5 Virtual Router Redundancy Protocol

If you are integrating the Enterprise Chassis into a Layer 3 network with different subnets, routing, and routing protocols, some Layer 3 techniques can be used to provide highly available service to clients. Traditionally, in multi-subnet IP networks, servers use IP default gateways to communicate with hosts in other subnets. In a redundant network, certain protocols must be used to maintain network availability if a router fails. One of them is Virtual Router Redundancy Protocol (VRRP).

VRRP enables redundant router configurations within a LAN, providing alternative router paths for a host to eliminate single point of failure within a network. Each participating routing device with VRRP function is configured with the same virtual router IPv4 address and ID number. One of the routing devices is elected as the master router and controls the shared virtual router IPv4 address. If the master fails, one of the backup routing devices takes control of the virtual router IPv4 address and actively processes traffic addressed to it.
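
Conceptually, the election works as in the following Python sketch, which is illustrative only (the real election is defined by the VRRP standards and runs in switch firmware). The router names, priorities, and virtual IPv4 address are examples.

# Conceptual VRRP master election: the highest-priority live router owns the virtual IP.
def elect_master(routers):
    """routers: dict of name -> (priority, alive). Returns the name of the master."""
    candidates = {name: prio for name, (prio, alive) in routers.items() if alive}
    return max(candidates, key=candidates.get) if candidates else None

virtual_ip = "10.1.1.254"                        # address configured on every group member
routers = {"TOR-1": (120, True), "TOR-2": (100, True)}

print(elect_master(routers), "answers for", virtual_ip)   # TOR-1 is the master
routers["TOR-1"] = (120, False)                           # master router fails
print(elect_master(routers), "answers for", virtual_ip)   # TOR-2 takes over the virtual IP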

Currently, the switch modules use VRRP version 2, which supports only the IPv4 protocol. VRRP version 3, defined in RFC 5798, introduces support for IPv6 in addition to IPv4. However, the IPv6 implementation is not yet mature, so the current switch operating systems do not support IPv6 for VRRP.

The IBM Flex System Fabric EN4093 10 Gb Scalable Switch and IBM Flex System EN2092 1 Gb Ethernet Scalable Switch for Enterprise Chassis both offer the VRRP function.


6.4.6 Routing protocols

A routing protocol is a protocol that specifies how routers communicate with each other. It disseminates information that enables them to select routes between any two nodes on a network. The choice of the route is done by routing algorithms. Typical standard routing protocols that exist in enterprise networks include Routing Information Protocol (RIP) and Open Shortest Path First (OSPF). Additionally, ISPs and other network service providers use Border Gateway Protocol (BGP).

6.5 Performance

Another major topic to be considered during network planning is network performance. Planning network performance is a complicated task, so the following sections provide guidance about the performance features of IBM Flex System network infrastructures. The commonly used features include link aggregation, jumbo frames, NIC Teaming, and network or server load balancing.

6.5.1 Trunking

Trunking (also commonly referred to as EtherChannel on Cisco switches) is a simple way to get more network bandwidth between switches. It combines several physical links into one logical link. A trunk group also provides some level of redundancy for its physical links: if one of the physical links in the trunk group fails, traffic is redistributed across the remaining functional links.

There are two main ways of establishing a trunk group: Static and dynamic. Static trunk groups can be used in most cases without limitations, and they are simple to configure and manage. For dynamic trunk groups, the widely used protocol is Link Aggregation Control Protocol (LACP). This protocol is supported by the IBM Flex System EN2092 1 Gb Ethernet Scalable Switch and the IBM Flex System Fabric EN4093 10 Gb Scalable Switch.
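
The following Python sketch illustrates why a trunk group distributes flows rather than individual packets: each flow is hashed onto one member link, and flows are redistributed across the surviving members when a link fails. The CRC-based hash and port names are simplifications; real switches use vendor-specific hash algorithms.

# Simplified flow-based distribution across trunk members (vendor hashes differ).
import zlib

def pick_link(src_mac, dst_mac, links):
    """Map one flow (identified by its MAC pair) onto one operational trunk member."""
    key = (src_mac + dst_mac).encode()
    return links[zlib.crc32(key) % len(links)]

links = ["EXT1", "EXT2", "EXT3", "EXT4"]
src, dst = "00:1a:64:aa:bb:01", "00:1a:64:cc:dd:02"

first_choice = pick_link(src, dst, links)   # the flow consistently maps to one member
links.remove(first_choice)                  # that member fails
print(first_choice, "->", pick_link(src, dst, links))   # flow moves to a surviving member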

6.5.2 Jumbo frames

Jumbo frames are used to speed up server network performance. Unlike the traditional Ethernet frame size of up to 1.5 KB, jumbo frames can be up to 9 KB in size. The original 1.5 KB payload size for Ethernet frames was chosen because of the high error rates and low speed of early networks: if a corrupted packet is received, only 1.5 KB must be resent to correct the error. However, each frame requires that the network hardware and software process it. If the frame size is increased, the same amount of data can be transferred with fewer frames and less effort. This configuration reduces processor utilization and increases throughput by allowing the system to concentrate on the data in the frames instead of the frames around the data. Therefore, jumbo frames can speed up server network processing and provide better utilization of the network.
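
The following Python sketch quantifies the effect, assuming standard Ethernet framing overhead and IPv4/TCP headers without options; the exact figures vary with the protocols and options in use.

# Approximate per-frame overhead for standard versus jumbo frames (sketch only).
ETH_OVERHEAD = 18 + 20      # Ethernet header/FCS (18) plus preamble and inter-frame gap (20)
IP_TCP_HEADERS = 20 + 20    # IPv4 + TCP headers without options

def efficiency(mtu):
    payload = mtu - IP_TCP_HEADERS
    return payload / (mtu + ETH_OVERHEAD)

for mtu in (1500, 9000):
    print(f"MTU {mtu}: {efficiency(mtu):.1%} of wire bandwidth carries application data")
# Roughly 95% for MTU 1500 versus 99% for MTU 9000, with about six times fewer frames to process.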

Jumbo frames must be supported by all network devices in the communication path. For example, if you plan to implement iSCSI storage with jumbo frames, all components including server NICs, network switches, and storage system NICs must support jumbo frames.

IBM Flex System EN2092 1 Gb Ethernet Scalable Switch and IBM Flex System Fabric EN4093 10 Gb Scalable Switch I/O modules support jumbo frames.

Page 283: Sg 247984

267

6.5.3 NIC teaming

NIC teaming can be used for high-availability purposes, but it can also be used to get more network bandwidth for specific servers by configuring separate network connections to act as a single high-bandwidth logical connection.

The generic trunking and IEEE 802.3ad LACP modes of NIC teaming can be used only for interfaces that are connected to the same Ethernet switch module. When the NICs are connected to different switch modules, these switch-dependent modes cannot be used; use a switch-independent teaming mode instead. For Windows, use the vendor-specific drivers and configuration tools. For Linux, use bonding modes 0 or 2.

� For Broadcom chip-based network adapter IBM Flex System EN2024 4-port 1 Gb Ethernet Adapter, the teaming software is Broadcom Advanced Server Program (BASP) for Windows operating systems. BASP settings are configured by Broadcom Advanced Control Suite (BACS) utility.

� For the Emulex-based IBM Flex System CN4054 10 Gb Virtual Fabric Adapter and the LOM implementation of this network adapter, use the OneCommand manager software to configure NIC Teaming. The OneCommand NIC Teaming (and Multiple VLAN Manager) is installed automatically when the Windows driver is installed.

For more information about each configuration tool, see the network adapter vendor’s documentation.

6.5.4 Server Load Balancing

In a scale-out environment, the performance of network applications can be increased by implementing load balancing clusters. You can use the following methods:

� IP load balancing such as Microsoft Network Load Balancing or Linux Virtual Server

� Application load balancing by using specific software features such as IBM WebSphere® Load Balancer

� Application load balancing by using network devices hardware features such as Server Load Balancing with third-party Layer 4 or Layer 7 Ethernet switches

Besides performance, Server Load Balancing also provides high availability by redistributing client requests to the remaining operational servers if a server or application fails. Server Load Balancing uses a virtual server concept similar to the virtual router concept. Together with VRRP, it can provide an even higher level of availability for network applications. VRRP and Server Load Balancing can also be used for inter-chassis redundancy and even for disaster recovery solutions.
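
As a conceptual illustration only (products such as Microsoft Network Load Balancing or Layer 4-7 switches implement far richer policies and health checks), the following Python sketch forwards each request to the least-loaded server that is still operational. The server names and connection counts are examples.

# Least-connections server selection with failed-server avoidance (conceptual sketch).
servers = {"node1": {"alive": True, "conns": 12},
           "node2": {"alive": True, "conns": 7},
           "node3": {"alive": False, "conns": 0}}   # failed member is skipped

def pick_server(pool):
    healthy = {name: s for name, s in pool.items() if s["alive"]}
    if not healthy:
        raise RuntimeError("no healthy servers in the pool")
    name = min(healthy, key=lambda n: healthy[n]["conns"])
    pool[name]["conns"] += 1          # account for the new client connection
    return name

print(pick_server(servers))   # node2 (fewest active connections among healthy servers)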

6.6 IBM Virtual Fabric Solution

Currently, deployment of server virtualization technologies in data centers requires significant effort to provide sufficient network I/O bandwidth to satisfy the demands of virtualized applications and services. For example, every virtualized system can host several dozen network applications and services, and each of these services requires bandwidth to function properly. Furthermore, because of different network traffic patterns relevant to different service types, these traffic flows might interfere with each other. This interference can lead to serious network problems, including the inability of the service to run its functions. This type of interference becomes particularly important when I/O disk storage data traffic uses the same physical infrastructure (for example, iSCSI).

Page 284: Sg 247984

268 IBM PureFlex System and IBM Flex System Products and Technology

The IBM Virtual Fabric Virtual Network Interface Card (vNIC) solution addresses the issues described previously. The solution is based on a 10 Gb Converged Enhanced Ethernet infrastructure. It takes a 10 Gb port on a 10 Gb Virtual Fabric adapter and splits the physical port into four vNICs. Each vNIC, or virtual channel, can be allocated between 100 Mb and 10 Gb of bandwidth in increments of 100 Mb, and the total of all four vNICs cannot exceed 10 Gb.
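
The following Python sketch, which is not an IBM tool, checks a proposed bandwidth plan against these rules: up to four vNICs per physical 10 Gb port, allocations in 100 Mb increments, and a total that cannot exceed 10 Gb.

# Validate a vNIC bandwidth plan against the rules described above (sketch only).
PORT_CAPACITY_MB = 10_000     # one 10 Gb physical port, expressed in Mb
INCREMENT_MB = 100
MAX_VNICS = 4

def validate(allocations_mb):
    if len(allocations_mb) > MAX_VNICS:
        return False, "more than four vNICs per physical port"
    if any(a % INCREMENT_MB or a < INCREMENT_MB for a in allocations_mb):
        return False, "allocations must be multiples of 100 Mb"
    if sum(allocations_mb) > PORT_CAPACITY_MB:
        return False, "total exceeds the 10 Gb physical port"
    return True, "plan is valid"

print(validate([2500, 2500, 2500, 2500]))   # default 2.5 Gb per vNIC -> valid
print(validate([6000, 3000, 2000]))         # 11 Gb total -> rejected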

The vNIC solution is a way to divide a physical NIC into smaller logical NICs (or partition it). This configuration gives the operating system more ways to logically connect to the infrastructure. The vNIC feature is supported only on the 10 Gb ports of the EN4093 10Gb Scalable Switch that face the compute nodes within the chassis. It also requires a node adapter that supports this function, such as the CN4054 10Gb Virtual Fabric Adapter or the Embedded Virtual Fabric Adapter.

Two primary forms of vNIC are available: Virtual Fabric mode (or switch dependent mode) and switch independent mode. The Virtual Fabric mode is also subdivided into two submodes: Dedicated uplink vNIC mode and shared uplink vNIC mode.

These are some of the common elements of all vNIC modes:

� Only supported on 10 Gb connections.

� Each mode allows a NIC to be divided into up to four vNICs per physical NIC (fewer than four is possible, but not more).

� They all require an adapter that has support for one or more of the vNIC modes.

� When creating vNICs, the default bandwidth is 2.5 Gb for each vNIC. However, the bandwidth can be configured to be anywhere from 100 Mb up to the full bandwidth of the NIC.

� The bandwidth of all configured vNICs on a physical NIC cannot exceed 10 Gb.

Table 6-2 shows a comparison of these modes, with details in the following sections.

Table 6-2 Attributes of vNIC modes

Capability                                         IBM Virtual Fabric mode       Switch
                                                   Dedicated     Shared          independent
                                                   uplink        uplink          mode

Requires support in the I/O module                 Yes           Yes             No
Requires support in the NIC                        Yes           Yes             Yes
Supports adapter transmit rate control             Yes           Yes             Yes
Supports I/O module transmit rate control          Yes           Yes             No
Supports changing rate dynamically                 Yes           Yes             No
Requires a dedicated uplink per vNIC group         Yes           No              No
Support for node OS-based tagging                  Yes           No              Yes
Support for failover per vNIC group                Yes           Yes             No
Support for more than one uplink per vNIC group    No            Yes             Yes


6.6.1 Virtual Fabric mode vNIC

Virtual Fabric mode, or switch-dependent mode, relies on the switch in the I/O module participating in the vNIC process. Specifically, the I/O module that supports this mode of operation today in the Enterprise Chassis is the IBM Flex System Fabric EN4093 10Gb Scalable Switch. This mode also requires an adapter on the node that supports the vNIC switch-dependent mode feature.

In switch-dependent vNIC mode, the configuration is done on the switch itself. This configuration information is communicated between the switch and the adapter so that both sides agree on and enforce bandwidth controls. The bandwidth allocation can be changed to different speeds at any time, without reloading either the OS or the I/O module.

As noted, there are two types of switch-dependent vNIC mode: Dedicated uplink mode and shared uplink mode. Both modes have the concept of a vNIC group on the switch. This concept is used to associate vNICs and physical ports into virtual switches within the chassis. How these vNIC groups are used is the primary difference between dedicated uplink mode and shared uplink mode.

These are common attributes of switch-dependent vNIC modes:

� They have the concept of a vNIC group that needs to be created on the I/O module.

� Like vNICs are bundled together into common vNIC groups.

� Each vNIC group is treated as a virtual switch within the I/O module. Packets in one vNIC group can get to a different vNIC group only by going to an external switch/router.

� For the purposes of Spanning Tree and packet flow, each vNIC group is treated as a unique switch by upstream switches and routers.

� Both support adding physical NICs (ones from nodes not using vNIC) to vNIC groups. Adding NICs allows for internal communication to other physical NICs and vNICs in that vNIC group, and sharing any uplink associated with that vNIC group.

Dedicated uplink mode
Dedicated uplink mode is the default mode when vNIC is enabled on the I/O module. In dedicated uplink mode, each vNIC group must have its own dedicated physical or logical (aggregation) uplink. It does not allow you to assign more than a single physical or logical uplink to a vNIC group. In addition, it assumes that high availability will be achieved by some combination of aggregation on the uplink and NIC teaming on the server.

In this mode, vNIC groups are VLAN agnostic to the nodes and the rest of the network. This configuration means that you do not need to create VLANs for each VLAN used by the nodes. The vNIC group simply takes each packet, tagged or untagged, and moves it through the switch.

This process is accomplished by the use of a form of Q-in-Q tagging. Each vNIC group is assigned a VLAN that is unique to that group. Any packet, tagged or untagged, that comes in on a port in that vNIC group gets a tag placed on it equal to the vNIC group VLAN. As that packet leaves the vNIC, the tag is stripped off, revealing the original tag (or no tag, depending on the original packet).
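
Conceptually, the Q-in-Q behavior resembles the following Python sketch, in which an outer tag equal to the vNIC group VLAN is pushed on ingress and popped on egress, leaving the original frame untouched. The VLAN numbers are examples only.

# Conceptual model of vNIC group Q-in-Q tagging: outer tag push on ingress, pop on egress.
def ingress(frame, group_vlan):
    """Push the vNIC group VLAN as an outer tag; the inner tag (if any) is preserved."""
    return {"outer_vlan": group_vlan, "inner": frame}

def egress(tagged):
    """Pop the outer tag, revealing the original frame exactly as it was sent."""
    return tagged["inner"]

original = {"vlan": 30, "payload": "node traffic"}   # node-tagged frame (example VLAN)
in_switch = ingress(original, group_vlan=4001)       # example vNIC group VLAN
print(in_switch["outer_vlan"])                       # 4001 is used only inside the vNIC group
print(egress(in_switch) == original)                 # True: the original tag is restored on exit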

Shared uplink mode
Shared uplink mode is a global option that can be enabled on an I/O module that has vNIC enabled. Changing the I/O module to shared uplink mode allows an uplink to be shared among vNIC groups, which reduces the number of uplinks required.

It also changes the way the vNIC groups process packets for tagging. In shared uplink mode, the servers no longer use tags. Instead, the vNIC group VLAN acts as the tag that is placed on the packet. When a server sends a packet and it reaches the vNIC group, a tag equal to the vNIC group VLAN is placed on it. The packet is then sent out the uplink tagged with that VLAN.

This approach is illustrated in Figure 6-7.

Figure 6-7 IBM Virtual Fabric vNIC shared uplink mode

6.6.2 Switch independent mode vNIC

Switch independent mode vNIC is accomplished strictly on the node itself. The I/O module is unaware of this virtualization, and acts as a normal switch in all ways. This mode is enabled at the node directly, and has rules similar to dedicated uplink vNIC mode regarding how you can divide the NIC. However, any bandwidth settings apply only to how the node sends traffic, not to how the I/O module sends traffic back to the node. These settings cannot be changed in real time because changing them requires a reload.

Ultimately, which mode is best for a user depends on their requirements. Virtual Fabric dedicated uplink mode offers the most control, and switch independent mode offers the most flexibility with uplink connectivity.

6.7 VMready

VMready is a unique solution that enables the network to be virtual machine aware. The network can be configured and managed for virtual ports (v-ports) rather than just for physical ports. VMready allows for a define-once-use-many configuration. That means the network attributes are bundled with a v-port. The v-port belongs to a VM, and is movable. Wherever the VM migrates, even to a different physical host, the network attributes of the v-port remain the same.

The hypervisor manages the various virtual entities (VEs) on the host server: Virtual machines (VMs), virtual switches, and so on. Currently, the VMready function supports up to 2048 VEs in a virtualized data center environment. The switch automatically discovers the VEs attached to switch ports, and distinguishes between regular VMs, Service Console Interfaces, and Kernel/Management Interfaces in a VMware environment.

VEs can be placed into VM groups on the switch to define communication boundaries. VEs in the same VM group can communicate with each other, whereas VEs in different groups cannot. VM groups also allow for configuring group-level settings such as virtualization policies and access control lists (ACLs).

The administrator can also pre-provision VEs by adding their MAC addresses (or their IPv4 addresses or VM names in a VMware environment) to a VM group. When a VE with a pre-provisioned MAC address becomes connected to the switch, the switch automatically applies the appropriate group membership configuration. In addition, VMready together with IBM NMotion® allows seamless migration/failover of VMs to different hypervisor hosts, preserving network connectivity configurations.
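
Conceptually, pre-provisioning resembles the following Python sketch, in which group membership is keyed by MAC address and the group settings are applied when the VE attaches. The group names, MAC addresses, and setting names are examples, not the actual switch data model.

# Conceptual model of VMready pre-provisioning by MAC address (names are examples).
vm_groups = {
    "web": {"members": {"00:50:56:aa:00:01"}, "settings": {"vlan": 10, "acl": "web-only"}},
    "db":  {"members": {"00:50:56:aa:00:02"}, "settings": {"vlan": 20, "acl": "db-only"}},
}

def on_ve_attach(mac):
    """When a VE appears on a port, apply the settings of the group it was pre-provisioned in."""
    for name, group in vm_groups.items():
        if mac in group["members"]:
            return name, group["settings"]
    return None, {}          # unknown VEs get no group-level policy

print(on_ve_attach("00:50:56:aa:00:02"))   # ('db', {'vlan': 20, 'acl': 'db-only'})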

VMready works with all major virtualization products, including VMware, Hyper-V, Xen, KVM, and Oracle VM, without modification of the hypervisors or guest operating systems. A VMready switch can also connect to a virtualization management server to collect configuration information about associated VEs, and it can automatically push VM group configuration profiles to the virtualization management server. This process in turn configures the hypervisors and VEs, providing enhanced VE mobility.

VMready is supported on both IBM Flex System Fabric EN4093 10 Gb Scalable Switch and IBM Flex System EN2092 1 Gb Ethernet Scalable Switch.


Chapter 7. Storage integration

IBM Flex System Enterprise Chassis offers several possibilities for integration into storage infrastructure, such as Fibre Channel, iSCSI, and Converged Enhanced Ethernet. This chapter addresses major considerations to take into account during IBM Flex System Enterprise Chassis storage infrastructure planning. These considerations include storage system interoperability, I/O module selection and interoperability rules, performance, high availability and redundancy, backup, and boot from SAN.

This chapter includes the following sections:

� 7.1, “External storage” on page 274
� 7.2, “Fibre Channel” on page 281
� 7.3, “iSCSI” on page 286
� 7.4, “High availability and redundancy” on page 287
� 7.5, “Performance” on page 288
� 7.6, “Backup solutions” on page 289
� 7.7, “Boot from SAN” on page 291
� 7.8, “Converged networks” on page 292


7.1 External storage

There are several options for attaching external storage systems to Enterprise Chassis:

� Storage area networks (SANs) based on Fibre Channel (FC) technologies
� SANs based on iSCSI
� Converged Networks based on 10 Gb Converged Enhanced Ethernet (CEE)

Traditionally, Fibre Channel-based SANs are the most common and advanced design of external storage infrastructure. They provide high levels of performance, availability and redundancy, and scalability. However, the cost of implementing FC SANs is higher in comparison with CEE or iSCSI. Almost every FC SAN includes these major components:

� Host bus adapters (HBAs)
� FC switches
� FC storage servers
� FC tape devices
� Optical cables for connecting these devices to each other

iSCSI-based SANs provide all the benefits of centralized shared storage in terms of storage consolidation and adequate levels of performance. However, they use traditional IP-based Ethernet networks instead of expensive optical cabling. iSCSI SANs consist of these components:

� Server hardware iSCSI adapters or software iSCSI initiators

� Traditional network components such as switches and routers

� Storage servers with an iSCSI interface, such as IBM System Storage® DS3500 or IBM N Series

Converged Networks can carry both SAN and LAN types of traffic over the same physical infrastructure. Consolidation allows you to decrease costs and increase efficiency in building, maintaining, operating, and managing the networking infrastructure.

iSCSI, FC-based SANs, and Converged Networks can be used for diskless solutions to provide greater levels of utilization, availability, and cost effectiveness.

These IBM storage products that are supported with the Enterprise Chassis are addressed:

� IBM Storwize V7000

� IBM XIV® Storage System series

� IBM System Storage DS8000® series

� IBM System Storage DS5000 series

� IBM System Storage DS3000 series

� IBM System Storage N series

� IBM System Storage TS3500 Tape Library

� IBM System Storage TS3310 Tape Library

� IBM System Storage TS3100 Tape Library

For the latest support matrixes for storage products, see the storage vendor interoperability guides. IBM storage products can be referenced in the System Storage Interoperability Center (SSIC):

http://www.ibm.com/systems/support/storage/ssic/interoperability.wss


7.1.1 IBM Storwize V7000

IBM Storwize V7000 is an innovative storage offering that delivers essential storage efficiency technologies and exceptional ease of use and performance. It is integrated into a compact, modular design.

Scalable solutions require highly flexible systems. In a truly virtualized environment, you need virtualized storage. All Storwize V7000 storage is virtualized.

The Storwize V7000 offers the following features:

� Enables rapid, flexible provisioning, and simple configuration changes
� Enables non-disruptive movement of data among tiers of storage, including IBM Easy Tier®
� Enables data placement optimization to improve performance

The most important aspect of the Storwize V7000 and its use with the IBM Flex System Enterprise Chassis is that Storwize V7000 can virtualize external storage. In addition, Storwize V7000 has these features:

� Capacity from existing storage systems becomes part of the IBM storage system

� Single user interface to manage all storage, regardless of vendor

� Designed to significantly improve productivity

� Virtualized storage inherits all the rich base system functions including IBM FlashCopy®, Easy Tier, and thin provisioning

� Moves data transparently between external storage and the IBM storage system

� Extends life and enhances value of existing storage assets

Storwize V7000 offers thin provisioning, FlashCopy, EasyTier, performance management, and optimization. External virtualization allows for rapid data center integration into existing IT infrastructures. The Metro/Global Mirroring option provides support for multi-site recovery.

Figure 7-1 shows the IBM Storwize V7000.

Figure 7-1 IBM Storwize V7000

The levels of integration of Storwize V7000 with IBM Flex System provide these additional features:

� Starting Level
  – IBM Flex System Single Point of Management

� Higher Level
  – Datacenter Management
  – IBM Flex System Manager Storage Control

� Detailed Level
  – Data Management
  – Storwize V7000 Storage User GUI

� Upgrade Level
  – Datacenter Productivity
  – TPC Storage Productivity Center

IBM Storwize V7000 provides a number of configuration options that simplify the implementation process. It also provides automated wizards, called directed maintenance procedures (DMP), to assist in resolving any events. IBM Storwize V7000 is a clustered, scalable, and midrange storage system, as well as an external virtualization device.

IBM Storwize V7000 Unified is the latest release of the product family. This virtualized storage system is designed to consolidate block and file workloads into a single storage system. This consolidation provides simplicity of management, reduced cost, highly scalable capacity, performance, and high availability. IBM Storwize V7000 Unified Storage also offers improved efficiency and flexibility through built-in solid-state drive (SSD) optimization, thin provisioning, and nondisruptive migration of data from existing storage. The system can virtualize and reuse existing disk systems, providing a greater potential return on investment.

For more information about IBM Storwize V7000, see:

http://www.ibm.com/systems/storage/disk/storwize_v7000/overview.html

Statements of directionIBM intends to further enhance the integration of server, storage, and networking with the introduction of an IBM Flex System storage node. This new storage system will share the software functional richness of IBM Storwize V7000, including IBM System Storage Easy Tier for automated SSD optimization. It will also be physically and logically integrated into IBM PureFlex System.

The Flex System storage node is being designed to build on the industry-leading storage virtualization and efficiency capabilities of IBM Storwize V7000. It is intended to have these advantages:

� Simplify and speed up deployment
� Provide greater integration of server and storage management
� Automate and streamline provisioning
� Greater responsiveness to business needs
� Lower overall cost

IBM statements regarding its plans, directions, and intent are subject to change or withdrawal without notice at the sole discretion of IBM. Information regarding potential future products is intended to outline the general product direction. Do not rely on it in making a purchasing decision. The information mentioned regarding potential future products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. Information about potential future products cannot be incorporated into any contract. The development, release, and timing of any future features or functionality described for IBM products remains at the sole discretion of IBM.

7.1.2 IBM XIV Storage System series

The IBM XIV Storage System is a proven, high-end disk storage series designed to address storage challenges across the application spectrum. It addresses challenges in virtualization, email, database, analytics, and data protection solutions. The XIV series delivers consistent high performance and high reliability at tier 2 costs for even the most demanding workloads. It uses massive parallelism to allocate system resources evenly at all times, and can scale seamlessly without manual tuning. Its virtualized design and customer-acclaimed ease of management dramatically reduce administrative costs and bring optimization to virtualized server and cloud environments.

The XIV Storage System series has these key features:

� A revolutionary high-end disk system for UNIX and Intel processor-based environments designed to reduce the complexity of storage management.

� Provides even and consistent performance for a broad array of applications. No tuning is required. XIV Gen3 is suitable for demanding workloads.

� Scales up to 360 TB of physical capacity, 161 TB of usable capacity.

� Thousands of instantaneous and highly space-efficient snapshots enable point-in-time copies of data.

� Built-in thin provisioning can help reduce direct and indirect costs.

� Synchronous and asynchronous remote mirroring provides protection against primary site outages, disasters, and site failures.

� Offers FC and iSCSI attach for flexibility in server connectivity.

For more information about the XIV, see:

http://www.ibm.com/systems/storage/disk/xiv/index.html

7.1.3 IBM System Storage DS8000 series

Through its extraordinary flexibility, reliability, and performance, the IBM System Storage DS8000 series is designed to manage a broad scope of storage workloads effectively and efficiently. This flagship IBM disk system can simplify your storage environment. It supports a mix of random and sequential I/O workloads for a mix of interactive and batch applications. It supports these workloads whether they are running on a distributed server platforms or on the mainframe.

Here are the key features of the DS8800:

� Performance: DS8800 model offers superior performance with new IBM POWER6+ controllers, faster 8 gigabits per second (Gbps) host and device adapters, and 6 gigabits per second (Gbps) SAS (serial-attached SCSI) drives

� Availability and resiliency: Greater than 99.999% availability and over 10-year lineage of incremental hardware and microcode improvements built on the IBM POWER server architecture

� Optimized storage tiering: IBM System Storage Easy Tier feature automatically helps optimize application performance by automating placement of data across the appropriate drive tiers

� Flexibility: Support for an extensive variety of server platforms, drive tiers, and application workloads that helps enable cost-effective storage consolidation

� Scalability: Models can scale up from the smallest configuration to the largest configuration (over three petabytes) nondisruptively by upgrading drive capacity, host adapters, drive adapters, and memory

For more information about the DS8000 series, see:

http://www.ibm.com/systems/storage/disk/ds8000/index.html


7.1.4 IBM System Storage DS5000 series

DS5000 series storage systems are designed to meet demanding open-systems requirements, and establish a new standard for lifecycle longevity with field-replaceable host interface cards. Seventh-generation architecture delivers relentless performance, real reliability, multidimensional scalability, and unprecedented investment protection.

The DS5000 series has these key features:

� Provides SAN-ready flexible, efficient, scalable disk storage system for UNIX and Intel processor-based environments

� Field-replaceable host interface cards (HIC): Two per controller

� Current release supports four 8 Gbps Fibre Channel HICs or one 10 Gbps iSCSI dual ported (16 total host ports)

� Scalable up to 448 drives with the EXP5000 enclosure, and up to 960 TB of high-density storage with the EXP5060 enclosure

� Support for intermixing drive types (FC, FC-SAS, SED, SATA, and SSD) and host interfaces (Fibre Channel and iSCSI) for investment protection and cost-effective tiered storage

� Supports business continuance with its optional high-availability software and advanced Enhanced Remote Mirroring function

� Helps protect customer data with its multi-RAID capability, including RAID 6, and hot-swappable redundant components

For more information about the DS5000 series, see:

http://www.ibm.com/systems/storage/disk/ds5000/index.html

7.1.5 IBM System Storage DS3000 series

IBM combines best-of-type development with leading host interface and drive technology in the IBM System Storage DS3500 Express. With next-generation 6 Gbps SAS back-end and host technology, you have a seamless path to consolidated and efficient storage. This configuration improves performance, flexibility, scalability, data security, and ultra-low power consumption without sacrificing simplicity, affordability, or availability.

Here are the key features of the DS3000:

� Six Gbps SAS systems deliver midrange performance and scalability at entry-level prices

� Mixed host interface support enables direct-attached storage (DAS) and SAN tiering, reducing overall operation and acquisition costs

� Full disk encryption with local key management provides relentless data security

� Supports Network Equipment Building System (NEBS) and European Telecommunications Standards Institute (ETSI)

For more information about the DS3000, see:

http://www.ibm.com/systems/storage/disk/ds3500/index.html

7.1.6 IBM System Storage N series

The IBM System Storage N series products provide an integrated storage solution where a single storage system can support mission critical applications by using Fibre Channel, iSCSI, and NAS protocols. Using one N series storage system instead of three separate boxes can help simplify IT device management. The unique multiprotocol storage architecture of N series is intended to help organizations reduce investment, operational, and management costs by reducing complexity.

Here are the key features of the N series:

� Integrated storage architecture: Provides a single storage platform to support heterogeneous, multiprotocol storage requirements. This architecture can simultaneously handle both Block I/O (with FCP or iSCSI protocol) and File I/O (with CIFS, NFS, HTTP, FTP, FCoE) application needs.

� Application-aware software: SnapManager software provides host-based data management of N series storage for databases and business applications. Simplifies application-consistent policy-based automation for data protection and disaster recovery. Creates snapshot copies to automate error-free data restores, and enables application-aware disaster recovery.

� Thin Provisioning: Allows applications and users to get more space dynamically and nondisruptively without IT staff intervention.

� Ease of installation: Offers installation tools designed to simplify installation and setup.

� Increased access: Allows heterogeneous access to IP attached storage and Fibre Channel attached storage subsystems.

� Operating system: Optimized and finely tuned for storing and sharing data assets. The OS is designed to enable greater efficiency within your organization, and help lower total cost of ownership (TCO) through improved efficiency and productivity.

� Flexibility: Enables cross-platform data access for Microsoft Windows, UNIX, and Linux environments. This access can help reduce network complexity and expense, and allow data to be shared across the organization.

� Network-attached storage (NAS): Supports Network File System (NFS), Common Internet File System (CIFS) protocols for attachment to Microsoft Windows, UNIX, and Linux systems.

� IP SAN: Supports Internet Small Computer System Interface (iSCSI) protocols for IP SAN that can be attached to host servers that include Microsoft Windows, Linux, and UNIX systems.

� FC SAN: Supports Fibre Channel Protocol (FCP) for accommodating attachment and participation in Fibre Channel SAN environments.

� FCoE: Supports Fibre Channel over Ethernet (FCoE) for carrying Fibre Channel traffic over Ethernet networks.

� Expandability: Supports nondisruptive capacity increases and thin-provisioning, which allows you to dynamically increase and decrease user capacity assignments. Allows you to increase your storage infrastructure to keep pace with company growth.

� Designed to maintain availability and productivity during upgrades.

� Manageability: Includes integrated system diagnostics and management tools, which are designed to help minimize downtime.

� Redundancy: Several redundancy and hot-swappable features provide the highest system availability characteristics.

� Copy Services: Provides extensive outboard services that help recover data in disaster recovery environments. SnapMirror provides one-to-one, one-to-many, and many-to-one mirroring over Fibre Channel or IP infrastructures.

� NearStore (near-line) feature: SATA drive technology enables online and quick access to archived and nonintensive transactional data.


� Deduplication: Provides block-level deduplication of data stored in NearStore volumes.

� Compliance and data retention: Software and hardware features offer nonerasable and nonrewritable data protection to meet the industry’s highest regulatory requirements for retaining company data assets.

For more information about the N series, see:

http://www.ibm.com/systems/storage/network/hardware/index.html

7.1.7 IBM System Storage TS3500 Tape Library

The IBM System Storage TS3500 Tape Library is designed to provide a highly scalable, automated tape library for mainframe and open systems backup and archive. This system can scale from midrange to enterprise environments.

The TS3500 Tape Library continues to lead the industry in tape drive integration with these features:

� Persistent worldwide name (WWN)
� Multipath architecture
� Drive/media exception reporting
� Remote drive/media management
� Host-based path failover

Here are the key features of the TS3500:

� Supports highly scalable, automated data retention on tape using the LTO Ultrium and IBM 3592 and TS1100 families of tape drives

� Extreme scalability and capacity that can grow from 1 to 16 frames per library, and from 1 to 15 libraries per library complex by using the TS3500 shuttle connector

� Up to 900 PB of automated, low-cost storage under a single library image, which dramatically improves floor space utilization and reduces storage cost per terabyte

� Optional second robotic accessor enhances data availability and reliability

� Provides data security and regulatory compliance by using support for tape drive encryption and WORM cartridges

For more information about the TS3500, see:

http://www.ibm.com/systems/storage/tape/ts3500/index.html

7.1.8 IBM System Storage TS3310 series

If you have rapidly growing data backup needs and limited physical space for a tape library, the IBM System Storage TS3310 offers simple, rapid expansion as your processing needs grow. This tape library allows you to start with a single five EIA rack unit (5U) tall library. As your need for tape backup expands, you can add additional 9U expansion modules, each of which contains space for additional cartridges, tape drives, and a redundant power supply. The entire system grows vertically. Currently, available configurations include the 5U base library module and a 5U base with up to four 9U expansion modules.

Here are the key features of the TS3310:

� Modular, scalable tape library designed to grow as your needs grow

� Available in desktop, desk-side and rack-mounted configurations


� Designed for optimal data storage efficiency with high cartridge density using standard or Write Once Read Many (WORM) Linear Tape-Open (LTO) data cartridges

� Hot-swap tape drives and power supplies

� Redundant power and host path connectivity failover options

� Remote web-based management and Storage Management Initiative Specification (SMI-S) interface capable

For more information about the TS3310, see:

http://www.ibm.com/systems/storage/tape/ts3310/index.html

7.1.9 IBM System Storage TS3100 Tape Library

The IBM TS3100 Tape Library Express Model is well-suited for handling backup, save and restore, and archival data-storage needs for small to medium-size environments. The TS3100 has one full-height tape drive or up to two half-height tape drives, and a capacity of 24 tape cartridges. It is designed to take advantage of LTO technology to help cost-effectively handle storage requirements.

Here are the key features of the TS3100:

� Designed to support the newest generation of LTO with one IBM Ultrium 5 full-height tape drive or up to two IBM Ultrium 5 half-height tape drives, in a 2U form factor. Also supports LTO generation 3 and 4 tape drives.

� Fibre Channel attachment support for half height LTO-5 and LTO-4 tape drives

� Designed to offer outstanding capacity, performance, and reliability for a cost effective backup, restore, and archive for midrange storage environments

� Remote library management through a standard web interface supports flexibility and improved administrative control over storage operations

For more information about the TS3100, see:

http://www.ibm.com/systems/storage/tape/ts3100/index.html

7.2 Fibre Channel

Fibre Channel is a proven and reliable network for storage interconnect. The IBM Flex System Enterprise Chassis FC portfolio offers various choices to meet your needs and interoperate with existing SAN infrastructure.

7.2.1 Fibre Channel requirements

In general, if the Enterprise Chassis is integrated into an FC storage fabric, ensure that the following requirements are met. Check the compatibility guides from your storage system vendor for confirmation.

� Enterprise Chassis server hardware and HBA are supported by the storage system.

� The FC fabric used or proposed for use is supported by the storage system.

� The operating systems deployed are supported both by IBM server technologies and the storage system.


� Multipath drivers exist and are supported by the operating system and storage system (in case you plan for redundancy).

� Clustering software is supported by the storage system (in case you plan to implement clustering technologies).

If any of these requirements are not met, consider another solution that is supported.

Almost every vendor of storage systems or storage fabrics has extensive compatibility matrixes that include supported HBAs, SAN switches, and operating systems. For more information about IBM System Storage compatibility, see the IBM System Storage Interoperability Center at:

http://www.ibm.com/systems/support/storage/config/ssic

7.2.2 FC switch selection and fabric interoperability rules

IBM Flex System Enterprise Chassis provides integrated FC switching functions by using several switch options:

� IBM Flex System FC3171 8 Gb SAN Switch
� IBM Flex System FC3171 8 Gb SAN Pass-thru
� IBM Flex System FC5022 16 Gb SAN Scalable Switch

Considerations for the FC5022 16Gb SAN Scalable Switch
The module can function either in Fabric OS Native mode or Brocade Access Gateway mode. The switch ships with Fabric OS mode as the default. The mode can be changed by using OS commands or web tools.

Access Gateway simplifies SAN deployment by using N_Port ID Virtualization (NPIV). NPIV provides FC switch functions that improve switch scalability, manageability, and interoperability.

The default configuration for Access Gateway is that all N_Ports have failover and failback enabled. In Access Gateway mode, the external ports are N_Ports and the internal ports (1–28) are F_Ports, with the default mapping shown in Table 7-1.

Table 7-1 Default configuration

F_Port    N_Port        F_Port    N_Port
1, 21     0             11        38
2, 22     29            12        39
3, 23     30            13        40
4, 24     31            14        41
5, 25     32            15        42
6, 26     33            16        43
7, 27     34            17        44
8, 28     35            18        45
9         36            19        46
10        37            20        47
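
As an illustration of this mapping (a sketch only, not switch code), the following Python example takes a subset of the default F_Port to N_Port assignments from Table 7-1 and re-homes the affected F_Ports to the surviving N_Ports when an external link fails, which is the effect of the default fail-over setting.

# Sketch of Access Gateway behavior: F_Ports mapped to N_Ports, re-homed on N_Port failure.
default_map = {1: 0, 21: 0, 2: 29, 22: 29, 3: 30, 23: 30}   # subset of Table 7-1

def remap_on_failure(mapping, failed_nport):
    """Move the F_Ports whose N_Port failed onto the remaining N_Ports (round robin)."""
    survivors = sorted(set(mapping.values()) - {failed_nport})
    new_map, i = dict(mapping), 0
    for fport, nport in mapping.items():
        if nport == failed_nport:
            new_map[fport] = survivors[i % len(survivors)]
            i += 1
    return new_map

print(remap_on_failure(default_map, failed_nport=0))   # F_Ports 1 and 21 move to N_Ports 29 and 30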


For more information, see the Brocade Access Gateway Administrator’s Guide.

Considerations for the FC3171 8 Gb SAN Pass-thru and FC3171 8 Gb SAN Switch

Both these I/O Modules provide seamless integration of IBM Flex System Enterprise Chassis into existing Fibre Channel fabric. They avoid any multivendor interoperability issues by using NPIV technology.

All ports are licensed on both these switches (there are no port licensing requirements). The I/O module has 14 internal ports and 6 external ports presented to the rear of the chassis.

You can reconfigure the FC3171 8 Gb SAN Switch to become a Pass-Thru module by using the switch GUI or CLI. The module can be converted back to a full function SAN switch at any time. The switch requires a reset when transparent mode is turned on or off.

A module operating in pass-through mode adds ports to the fabric, not domain IDs as switches do, and its presence is transparent to the other switches in the fabric. This section describes how the NPIV concept works for the pass-through module (and the Brocade Access Gateway).

Several basic types of ports are used in Fibre Channel fabrics:

� N_Ports (node ports) represent an end-point FC device (such as host, storage system, or tape drive) connected to the FC fabric.

� F_Ports (fabric ports) are used to connect N_Ports to the FC switch (that is, the host HBA’s N_port is connected to the F_Port on the switch).

� E_Ports (expansion ports) provide interswitch connections. If you need to connect one switch to another, E_ports are used. The E_port on one switch is connected to the E_Port on another switch.

When one switch is connected to another switch in an existing FC fabric, it uses a Domain ID to uniquely identify itself in the SAN (like a switch address). Because every switch in the fabric requires a unique Domain ID, the number of switches and therefore the number of ports is limited, which in turn limits SAN scalability. For example, QLogic theoretically supports up to 239 switches, and McDATA supports up to 31 switches.

Another concern with E_Ports is an interoperability issue between switches from different vendors. In many cases only the so-called “interoperability mode” can be used in these fabrics, thus disabling most of the vendor’s advanced features.

Each switch requires some management tasks to be performed on it. Therefore, an increased number of switches increases the complexity of the management solution, especially in heterogeneous SANs consisting of multivendor fabrics. NPIV technology helps to address these issues.

Initially, NPIV technology was used in virtualization environments to share one HBA with multiple virtual machines and assign unique port IDs to each of them. This configuration allows you to separate traffic between virtual machines (VMs). You can treat VMs in the same way as physical hosts, by zoning the fabric or partitioning storage.

Attention: If you will need Full Fabric capabilities at any time in the future, purchase the Full Fabric Switch Module (FC3171 8 Gb SAN Switch) instead of the Pass-Thru module (FC3171 8 Gb SAN Pass-thru). The pass-through module can never be upgraded.


For example, if NPIV is not used, every virtual machine shares one HBA with one WWN. This restriction means that you are not able to separate traffic between these systems and isolate LUNs because all of them use the same ID. In contrast, when NPIV is used, every VM has its own port ID, and these port IDs are treated as N_Ports by the FC fabric. You can perform storage partitioning or zoning based on the port ID of the VM. The switch that the virtualized HBAs are connected to must support NPIV as well. Check the documentation that comes with the FC switch.

The IBM Flex System FC3171 8 Gb SAN Switch in pass-through mode, the IBM Flex System FC3171 8 Gb SAN Pass-thru, and the Brocade Access Gateway use the NPIV technique. The technique presents the node port IDs as N_Ports to the external fabric switches. This process eliminates the need for E_Port connections between the Enterprise Chassis and external switches. In this way, all 14 internal node FC ports are multiplexed and distributed across the external FC links and presented to the external fabric as N_Ports.

This configuration means that external switches connected to a chassis that is configured for Fibre Channel pass-through do not see the pass-through module. They see only N_Ports connected to their F_Ports. This configuration can help to achieve a higher port count for better scalability without using Domain IDs, and avoids multivendor interoperability issues. However, modules that operate in pass-through mode cannot be directly attached to the storage system. They must be attached to an external NPIV-capable FC switch. See the switch documentation about NPIV support.

Select a SAN module that can provide the required functionality together with seamless integration into the existing storage infrastructure (Table 7-2). There are no strict rules to follow during integration planning. However, several considerations must be taken into account.

Table 7-2 SAN module feature comparison and interoperability

                                      FC5022 16Gb     FC3171 8 Gb    FC5022 16Gb in    FC3171 8 Gb SAN
                                      SAN Scalable    SAN Switch     Brocade Access    Pass-thru (and
                                      Switch                         Gateway mode      FC3171 8 Gb SAN
                                                                                       Switch in pass-
                                                                                       through mode)
Basic FC connectivity
FC-SW-2 interoperability              Yes (a)         Yes            Not applicable    Not applicable
Zoning                                Yes             Yes            Not applicable    Not applicable
Maximum number of Domain IDs          239             239            Not applicable    Not applicable

Advanced FC connectivity
Port Aggregation                      Yes             No (b)         Not applicable    Not applicable
Advanced fabric security              Yes             Yes            Not applicable    Not applicable

Interoperability (existing fabric)
Brocade fabric interoperability       Yes             No             Yes               Yes
QLogic fabric interoperability        No              No             No                No
Cisco fabric interoperability         No              No             No                Yes

a. The feature is supported without any restrictions for the existing fabric, but with restrictions for the added fabric, and vice versa.
b. Does not necessarily mean that the feature is not supported. Instead, it means that severe restrictions apply to the existing fabric. Some functions of the existing fabric potentially must be disabled (if used).


Almost all switches support interoperability standards, which means that almost any switch can be integrated into existing fabric by using interoperability mode. Interoperability mode is a special mode used for integration of different vendors’ FC fabrics into one. However, only standards-based functionality is available in the interoperability mode. Advanced features of a storage fabric’s vendor might not be available. Brocade, McDATA, and Cisco have interoperability modes on their fabric switches. Check the compatibility matrixes for a list of supported and unsupported features in the interoperability mode. Table 7-2 on page 284 provides a high-level overview of standard and advanced functions available for particular Enterprise Chassis SAN switches. It lists how these switches might be used for designing new storage networks or integrating with existing storage networks.

For example, if you integrate the FC3052 2-port 8Gb FC Adapter (Brocade) into a QLogic fabric, you cannot use Brocade proprietary features such as ISL trunking. However, the QLogic fabric does not lose functionality. Conversely, if you integrate a QLogic fabric into an existing Brocade fabric, placing all Brocade switches in interoperability mode means that their Advanced Fabric Services functions are lost.

If you plan to integrate Enterprise Chassis into a Fibre Channel fabric that is not listed here, QLogic might be a good choice. However, this configuration is possible with interoperability mode only, so extended functions are not supported. A better way would be to use the FC3171 8 Gb SAN Pass-thru or Brocade Access Gateway.

Switch selection and interoperability has the following rules:

� FC3171 8 Gb SAN Switch is used when the Enterprise Chassis is integrated into an existing QLogic fabric or when basic FC functionality is required, for example, a single Enterprise Chassis with a direct-connected storage server.

� FC5022 16Gb SAN Scalable Switch is used when Enterprise Chassis is integrated into existing Brocade fabric or when advanced FC connectivity is required. You might use this switch when several Enterprise Chassis are connected to high performance storage systems.

If you plan to use advanced features such as ISL trunking, you might need to acquire specific licenses for these features.

If Enterprise Chassis is attached to a non-IBM storage system, support is provided by the storage system’s vendor. Even if non-IBM storage is listed on IBM ServerProven, it means only that the configuration has been tested. It does not mean that IBM provides support for it. See the vendor compatibility information for supported configurations.

For IBM System Storage compatibility information, see the IBM System Storage Interoperability Center at:

http://www.ibm.com/systems/support/storage/ssic/interoperability.wss

Remember: Advanced (proprietary) FC connectivity features from different vendors might be incompatible with each other, even those that provide almost the same function. For example, both Brocade and Cisco support port aggregation. However, Brocade uses ISL trunking and Cisco uses PortChannels, and they are incompatible with each other.

Tip: Using FC storage fabric from the same vendor often avoids possible operational, management, and troubleshooting issues.


7.3 iSCSI

iSCSI uses a traditional Ethernet network for block I/O between storage system and servers. Servers and storage systems are connected to the LAN, and use iSCSI to communicate with each other. Because iSCSI uses a standard TCP/IP stack, you can use iSCSI connections across LAN or wide area network (WAN) connections.

A typical iSCSI solution includes iSCSI targets, such as the IBM System Storage DS3500 iSCSI models, an optional DHCP server, and a management station with iSCSI Configuration Manager.

The software iSCSI initiator is specialized software that uses a server’s processor for iSCSI protocol processing. A hardware iSCSI initiator exists as microcode that is built in to the LAN on Motherboard (LOM) on the node or on the I/O adapter, provided that the adapter supports it.

Both software and hardware initiator implementations provide iSCSI capabilities for Ethernet NICs. However, a software initiator implemented as an operating system driver can be used only after the locally installed operating system is started and running. In contrast, the built-in NIC microcode is used for boot-from-SAN implementations, but it cannot be used for storage access when the operating system is already running.

Currently, iSCSI on Enterprise Chassis nodes can be implemented on the IBM Flex System CN4054 10 Gb Virtual Fabric Adapter and the embedded 10 Gb Virtual Fabric adapter LOM.

Software initiators can be obtained from the operating system vendor. For example, Microsoft offers a software iSCSI initiator for download. Or they can be obtained as a part of an NIC firmware upgrade (if supported by NIC).
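
On a Linux host, discovery and login with the open-iscsi software initiator typically resemble the commands wrapped in the following Python sketch. The portal address and target IQN are placeholders, and package names and options can vary by distribution.

# Sketch: drive the Linux open-iscsi software initiator (portal and IQN are placeholders).
import subprocess

portal = "192.168.100.10:3260"                       # iSCSI port on the storage system
target = "iqn.1992-01.com.example:storage.ds3500"    # example target IQN, not a real system

# Discover the targets offered by the portal, then log in to the chosen target.
subprocess.run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", portal], check=True)
subprocess.run(["iscsiadm", "-m", "node", "-T", target, "-p", portal, "--login"], check=True)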

For more information about IBM Flex System CN4054 10 Gb Virtual Fabric Adapter, see 5.5.1, “Overview” on page 216 and 5.6.12, “IBM Flex System IB6132 2-port FDR InfiniBand Adapter” on page 251.

For IBM System Storage compatibility information, see the IBM System Storage Interoperability Center at:

http://www.ibm.com/systems/support/storage/config/ssic

If you plan for redundancy, you must use multipath drivers. Generally, they are provided by the operating system vendor for iSCSI implementations, even if you plan to use hardware initiators.

It is possible to implement high availability (HA) clustering solutions by using iSCSI, but certain restrictions might apply. For more information, see the storage system vendor compatibility guides.

When planning your iSCSI solution, consider the following items:

- The IBM Flex System Enterprise Chassis nodes, the initiators, and the operating system are supported by the iSCSI storage system. For more information, see the compatibility guides from the storage vendor.

- Multipath drivers exist, and are supported by the operating system and the storage system (when redundancy is planned). For more information, see the compatibility guides from the operating system vendor and the storage vendor.

Remember: Both of these NIC solutions (the CN4054 adapter and the embedded Virtual Fabric LOM) require a Feature on Demand (FoD) upgrade, which enables and provides the iSCSI initiator.

Tip: Consider using a separate network segment for iSCSI traffic. That is, isolate the NICs, switches (or virtual local area networks (VLANs)), and storage system ports that participate in iSCSI communications from other traffic.

For more information, see the following resources:

- IBM System Storage Interoperation Center (SSIC):

http://www.ibm.com/systems/support/storage/config/ssic

- IBM System Storage N series Interoperability Matrix, found at:

http://ibm.com/support/docview.wss?uid=ssg1S7003897

- Microsoft Support for iSCSI, found at:

http://www.microsoft.com/windowsserver2003/technologies/storage/iscsi/msfiscsi.mspx

7.4 High availability and redundancy

The Enterprise Chassis has built-in network redundancy. All I/O adapters that are installed in the nodes have at least two ports, and I/O modules can be installed as a pair in the Enterprise Chassis to avoid possible single points of failure in the storage infrastructure. All major storage vendors, including IBM, use dual-controller storage systems to provide redundancy.

A typical topology for integrating Enterprise Chassis into a Fibre Channel infrastructure is shown in Figure 7-2.

Figure 7-2 IBM Enterprise Chassis SAN infrastructure topology

This topology includes a dual-port FC I/O adapter installed in the node. A pair of FC I/O modules is installed in bays 3 and 4 of the Enterprise Chassis.

In the event of a failure, the operating system driver provided by the storage system manufacturer is responsible for the automatic failover process, a capability also known as multipathing.
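The following simplified Python sketch illustrates the failover logic that such a multipath driver implements: I/O is issued on the preferred path and is transparently retried on a surviving path when the preferred path fails. Real multipath drivers run in the kernel and add path health checking, failback, and load balancing; this fragment is conceptual only.

```python
# Conceptual sketch of multipath failover: try the preferred path first and
# fall back to the remaining paths if it fails. Vendor multipath drivers
# implement this in the kernel with far more sophistication.
class PathFailedError(Exception):
    """Raised by a path object when the link or controller behind it is down."""

class MultipathDevice:
    def __init__(self, paths):
        # paths might represent, for example, "HBA port 1 -> Controller 1"
        # and "HBA port 2 -> Controller 2"
        self.paths = list(paths)

    def submit_io(self, request):
        last_error = None
        for path in self.paths:            # preferred path first, alternates after it
            try:
                return path.send(request)  # forward the I/O down this path
            except PathFailedError as err:
                last_error = err           # path is down; fail over to the next one
        raise RuntimeError("All paths to the storage system failed") from last_error
```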


If you plan to use redundancy and high availability for storage fabric, ensure that failover drivers satisfy the following requirements:

- They are available from the vendor of the storage system.

- They come with the system or can be ordered separately (remember to order them in such cases).

- They support the node operating system.

- They support the redundant multipath fabric that you plan to implement (that is, they support the required number of redundant paths).

For more information, see the storage system documentation from the vendor.

7.5 Performance

Performance is an important consideration during storage infrastructure planning. Providing the required end-to-end performance for your SAN can be accomplished in several ways.

First, the storage system’s failover driver can provide load balancing across redundant paths in addition to high availability. The IBM System Storage Multi-path Subsystem Device Driver (SDD) that is used with the DS8000 provides this function. If you plan to use such drivers, ensure that they satisfy the following requirements (a conceptual sketch of path load balancing follows the list):

- They are available from the storage system vendor.

- They come with the system, or can be ordered separately.

- They support the node operating system.

- They support the multipath fabric that you plan to implement. That is, they support the required number of paths implemented.
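As a conceptual complement, the short Python fragment below shows round-robin load balancing across the available paths, which is the behavior that a driver such as SDD adds on top of plain failover. It is illustrative only and does not describe how any particular driver is implemented.

```python
# Conceptual round-robin load balancing: successive I/O requests are spread
# across all healthy paths instead of using one path and keeping the others
# idle until a failure occurs.
import itertools

class LoadBalancedDevice:
    def __init__(self, paths):
        self._paths = itertools.cycle(paths)   # rotate through the path list

    def submit_io(self, request):
        path = next(self._paths)
        return path.send(request)              # a real driver also checks path health here
```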

Also, you can use static LUN distribution between two storage controllers in the storage system. Some LUNs are served by controller 1, and others are served by controller 2. A zoning technique can also be used together with static LUN distribution if you have redundant connections between FC switches and the storage system controllers.

Trunking or PortChannels between FC or Ethernet switches can be used to increase network bandwidth and therefore performance. Trunks in an FC network use the same concept as in standard Ethernet networks: several physical links between switches are grouped into one logical link with increased bandwidth. This configuration is typically used when an Enterprise Chassis is integrated into an existing advanced FC infrastructure. However, keep in mind that only the FC5022 16Gb SAN Scalable Switch supports trunking, and that trunking is an optional feature that requires the purchase of an additional license.
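As a back-of-the-envelope illustration of why trunking matters, the following sketch computes the aggregate bandwidth of an inter-switch trunk and a simple oversubscription ratio. The link counts and speeds are example values only and do not describe any specific configuration.

```python
# Example arithmetic only: aggregate trunk bandwidth and the resulting
# oversubscription ratio for a set of illustrative figures.
links_in_trunk = 4
link_speed_gbps = 16                                  # 16 Gb FC inter-switch links
trunk_bandwidth = links_in_trunk * link_speed_gbps    # 64 Gbps aggregate

server_ports = 14                                     # for example, one FC port per node
server_port_speed_gbps = 16
ingress_bandwidth = server_ports * server_port_speed_gbps

oversubscription = ingress_bandwidth / trunk_bandwidth
print(f"Trunk bandwidth: {trunk_bandwidth} Gbps, oversubscription {oversubscription:.1f}:1")
# Trunk bandwidth: 64 Gbps, oversubscription 3.5:1
```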

For more information, see the storage system vendor documentation and the switch vendor documentation.


7.6 Backup solutions

Backup is an important consideration when deploying infrastructure systems. First, you need to decide which tape backup solution to implement. There are a number of ways to back up data:

- Centralized local area network (LAN) backup with dedicated backup server (compute node in the chassis) with FC-attached tape autoloader or tape library

- Centralized LAN backup with dedicated backup server (server external to the chassis) with FC-attached tape autoloader or tape library

- LAN-free backup with FC-attached tape autoloader or library (see 7.6.2, “LAN-free backup for nodes” on page 290.)

If you plan to use a node as a dedicated backup server or LAN-free backup for nodes, use only certified tape autoloaders and tape libraries. If you plan to use a dedicated backup server on a non-Enterprise Chassis system, use tape devices that are certified for that server. Also, verify that the tape device and type of backup you select are supported by the backup software you plan to use.

For more information about supported tape devices and interconnectivity, see the IBM System Storage Interoperability Center at:

http://www.ibm.com/systems/support/storage/config/ssic

7.6.1 Dedicated server for centralized LAN backup

The simplest way to provide backup for the Enterprise Chassis is to use a compute node or external server with a SAS-attached or FC-attached tape unit. In this case, all nodes that require backup have backup agents, and backup traffic from these agents to the backup server uses standard LAN paths.

If you use an FC-attached tape drive, connect it to FC fabric (or at least to an HBA) that is dedicated for backup. Do not connect it to the FC fabric that carries the disk traffic. If you cannot use dedicated switches, use zoning techniques on FC switches to separate these two fabrics.

If you plan to use a node as a dedicated backup server with FC-attached tape, use one port of the I/O adapter for tape and another for disk. There is no redundancy in this case.

Consideration: Avoid mixing disk storage and tape storage on the same FC HBA. If you experience issues with your SAN because the tape and disk are on the same HBA, IBM Support will ask you to separate these devices.


Figure 7-3 shows possible topologies and traffic flows for LAN backups and FC-attached storage devices.

Figure 7-3 LAN backup topology and traffic flow

The topology shown in Figure 7-3 has the following characteristics:

- Each Node participating in backup, except the backup server itself, has dual connections to the disk storage system.

- The backup server has only one disk storage connection (shown in red).

- The other port of the FC HBA is dedicated for tape storage.

- A backup agent is installed onto each Node requiring backup.

The backup traffic flow starts with the backup agent transferring backup data from the disk storage to the backup server over the LAN. The backup server stores this data on its own disk storage, for example on the same storage system. The backup server then transfers the data from its storage directly to the tape device. Zoning is implemented on an FC Switch Module to separate the disk and tape data flows; in this respect, zoning works much like VLANs in Ethernet networks.
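Conceptually, a zone is a named set of ports (or worldwide port names) whose members are allowed to communicate with each other. The hypothetical sketch below models the disk and tape zones of this topology as sets and checks that the two data flows stay separated, which is the effect the zoning configuration is intended to achieve. The member names are invented for illustration.

```python
# Conceptual model of FC zoning: each zone is a set of member ports, and two
# devices can communicate only if they share a zone. All names are examples.
disk_zone = {"backup_server_hba_port1", "storage_controller1_port", "storage_controller2_port"}
tape_zone = {"backup_server_hba_port2", "tape_autoloader_port"}
zones = [disk_zone, tape_zone]

def can_communicate(device_a, device_b, zones):
    """Two devices can talk only if at least one zone contains both of them."""
    return any(device_a in zone and device_b in zone for zone in zones)

# Disk traffic stays in the disk zone; the tape autoloader cannot reach the
# disk controllers, so disk and tape flows remain separated.
assert can_communicate("backup_server_hba_port1", "storage_controller1_port", zones)
assert not can_communicate("tape_autoloader_port", "storage_controller1_port", zones)
```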

7.6.2 LAN-free backup for nodes

LAN-free backup means that the SAN fabric is used for the backup data flow instead of the LAN. The LAN is used only for passing control information between the backup server and the agents. LAN-free backup can save network bandwidth for network applications, providing better network performance. During a LAN-free backup, the backup agent transfers the backup data from the disk storage directly to the tape storage.


Figure 7-4 illustrates this process.

Figure 7-4 LAN-free backup without disk storage redundancy

Figure 7-4 shows the simplest topology for LAN-free backup. With this topology, the backup server controls the backup process, and the backup agent moves the backup data from the disk storage directly to the tape storage. In this case, there is no redundancy provided for the disk storage and tape storage. Zones are not required because the second Fibre Channel Switching Module (FCSM) is exclusively used for the backup fabric.

Backup software vendors can use other (or additional) topologies and protocols for backup operations. Consult the backup software vendor documentation for a list of supported topologies and features, and additional information.

7.7 Boot from SAN

Boot from SAN (or SAN Boot) is a technique used when the node in the chassis has no local disk drives. It uses an external storage system LUN to boot the operating system, so both the operating system and the data are on the SAN. This technique is commonly used to provide higher availability and better utilization of the system’s storage (where the operating system is installed). Hot spare nodes or “Rip-n-Replace” techniques can also be easily implemented by using boot from SAN.

7.7.1 Implementing Boot from SAN

To successfully implement SAN Boot, the following conditions need to be met. Check the respective storage system compatibility guides for the information you need.

- Storage system supports SAN Boot.

- Operating system supports SAN Boot.

- FC HBAs or iSCSI initiators support SAN Boot.


You can also check the documentation for the operating system that you plan to use for its boot from SAN support and requirements, as well as the storage vendor documentation. See the following sources for additional SAN Boot-related information:

- Windows Boot from Fibre Channel SAN – Overview and Detailed Technical Instructions for the System Administrator can be found at:

http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=2815

- SAN Configuration Guide (from VMware), found at:

http://www.vmware.com/pdf/vi3_esx_san_cfg.pdf

For IBM System Storage compatibility information, see the IBM System Storage Interoperability Center at:

http://www.ibm.com/systems/support/storage/config/ssic

7.7.2 iSCSI SAN Boot specific considerations

iSCSI SAN Boot enables a diskless node to be started from an external iSCSI storage system. You can use either the onboard 10 Gb Virtual Fabric LOM on the node itself or an I/O adapter. Specifically, the IBM Flex System CN4054 10 Gb Virtual Fabric Adapter supports iSCSI with the IBM Flex System CN4054 Virtual Fabric Adapter Upgrade, part 90Y3558.

For the latest compatibility information, see the storage vendor compatibility guides. For IBM System Storage compatibility information, see the IBM System Storage Interoperability Center at:

http://www.ibm.com/systems/support/storage/config/ssic

7.8 Converged networks

One common way to reduce administration costs is to converge technologies that were implemented on separate infrastructures. Just as office phone systems have been reduced from a separate cabling plant and components to a common IP infrastructure, Fibre Channel networks are also converging onto Ethernet.

Fibre Channel over Ethernet (FCoE) removes the need for separate HBAs in the servers and separate Fibre Channel cables that come out of the server or chassis. Instead, a Converged Network Adapter (CNA) is installed in the server. This adapter presents what appears to be both a NIC and an HBA to the operating system, but the traffic that leaves the server is 10 Gb Ethernet.

The CN4054 10Gb Virtual Fabric Adapter or the Embedded Virtual Fabric Adapter on the x240, with the optional Virtual Fabric Upgrade, offers this function. Either must be used with the EN4091 10 Gb Ethernet Pass-thru connected to an external FCoE-capable top-of-rack switch. The EN4091 10 Gb Ethernet Pass-thru connects a node that runs a CNA to an upstream switch that acts as an FCoE Forwarder (FCF). There, the Fibre Channel frame is extracted from the Ethernet frame and sent into the Fibre Channel SAN.
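To make the encapsulation idea concrete, the simplified Python sketch below wraps a Fibre Channel frame in an Ethernet frame that carries the FCoE EtherType (0x8906). It deliberately omits the FCoE version field, the SOF/EOF delimiters, padding, and the frame check sequence that the real protocol adds, so treat it as an illustration of the layering rather than a wire-accurate encoder. The MAC addresses and payload are placeholders.

```python
# Simplified illustration of FCoE layering: a Fibre Channel frame carried as
# the payload of an Ethernet frame with EtherType 0x8906. Real FCoE adds a
# version field, SOF/EOF delimiters, padding, and an FCS, all omitted here.
import struct

FCOE_ETHERTYPE = 0x8906

def encapsulate(fc_frame, dst_mac, src_mac):
    """Prepend a bare Ethernet header to an FC frame (conceptual only)."""
    eth_header = struct.pack("!6s6sH", dst_mac, src_mac, FCOE_ETHERTYPE)
    return eth_header + fc_frame

# Placeholder addresses and payload; in a real network the FCoE Forwarder (FCF)
# strips the Ethernet framing and forwards the inner FC frame into the SAN.
frame = encapsulate(
    fc_frame=b"\x00" * 36,
    dst_mac=b"\x0e\xfc\x00\x01\x02\x03",
    src_mac=b"\x02\x00\x00\xaa\xbb\xcc",
)
```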

FCoE support on the EN4093 10Gb Scalable Switch is planned for later in 2012.

Abbreviations and acronyms

AC alternating current

ACL access control list

AES-NI Advanced Encryption Standard New Instructions

AMM advanced management module

AMP Apache, MySQL, and PHP/Perl

ANS Advanced Network Services

API application programming interface

AS Australian Standards

ASIC application-specific integrated circuit

ASU Advanced Settings Utility

AVX Advanced Vector Extensions

BACS Broadcom Advanced Control Suite

BASP Broadcom Advanced Server Program

BE Broadband Engine

BGP Border Gateway Protocol

BIOS basic input/output system

BOFM BladeCenter Open Fabric Manager

CEE Converged Enhanced Ethernet

CFM cubic feet per minute

CLI command-line interface

CMM Chassis Management Module

CPM Copper Pass-thru Module

CPU central processing unit

CRTM Core Root of Trusted Measurements

DC domain controller

DHCP Dynamic Host Configuration Protocol

DIMM dual inline memory module

DMI Desktop Management Interface

DRAM dynamic random-access memory

DRTM Dynamic Root of Trust Measurement

DSA Dynamic System Analysis

ECC error checking and correcting

EIA Electronic Industries Alliance

ESB Enterprise Switch Bundle

ETE everything-to-everything

FC Fibre Channel

FC-AL Fibre Channel Arbitrated Loop

FDR fourteen data rate

FSM Flex System Manager

FSP flexible service processor

FTP File Transfer Protocol

FTSS Field Technical Sales Support

GAV generally available variant

GB gigabyte

GT gigatransfers

HA high availability

HBA host bus adapter

HDD hard disk drive

HPC high-performance computing

HS hot swap

HT Hyper-Threading

HW hardware

I/O input/output

IB InfiniBand

IBM International Business Machines

ID identifier

IEEE Institute of Electrical and Electronics Engineers

IGMP Internet Group Management Protocol

IMM integrated management module

IP Internet Protocol

IS information store

ISP Internet service provider

IT information technology

ITE IT Element

ITSO International Technical Support Organization

KB kilobyte

KVM keyboard video mouse

LACP Link Aggregation Control Protocol

LAN local area network

LDAP Lightweight Directory Access Protocol

LED light emitting diode

LOM LAN on Motherboard

LP low profile


LPC Local Procedure Call

LR long range

LR-DIMM load-reduced DIMM

MAC media access control

MB megabyte

MSTP Multiple Spanning Tree Protocol

NIC network interface card

NL nearline

NS not supported

NTP Network Time Protocol

OPM Optical Pass-Thru Module

OSPF Open Shortest Path First

PCI Peripheral Component Interconnect

PCIe PCI Express

PDU power distribution unit

PF power factor

PSU power supply unit

QDR quad data rate

QPI QuickPath Interconnect

RAID redundant array of independent disks

RAM random access memory

RAS remote access services; row address strobe

RDIMM registered DIMM

RFC request for comments

RHEL Red Hat Enterprise Linux

RIP Routing Information Protocol

ROC RAID-on-Chip

ROM read-only memory

RPM revolutions per minute

RSS Receive-Side Scaling

SAN storage area network

SAS Serial Attached SCSI

SATA Serial ATA

SDMC Systems Director Management Console

SerDes Serializer-Deserializer

SFF small form factor

SLC Single-Level Cell

SLES SUSE Linux Enterprise Server

SLP Service Location Protocol

SNMP Simple Network Management Protocol

SSD solid-state drive

SSH Secure Shell

SSL Secure Sockets Layer

STP Spanning Tree Protocol

TCG Trusted Computing Group

TCP Transmission Control Protocol

TDP thermal design power

TFTP Trivial File Transfer Protocol

TPM Trusted Platform Module

TXT text

UDIMM unbuffered DIMM

UDLD Unidirectional link detection

UEFI Unified Extensible Firmware Interface

UI user interface

UL Underwriters Laboratories

UPS uninterruptible power supply

URL Uniform Resource Locator

USB universal serial bus

VE Virtualization Engine

VIOS Virtual I/O Server

VLAG Virtual Link Aggregation Groups

VLAN virtual LAN

VM virtual machine

VPD vital product data

VRRP Virtual Router Redundancy Protocol

VT Virtualization Technology

WW worldwide

WWN Worldwide Name


Related publications and education

The publications listed in this section are considered particularly suitable for a more detailed discussion of the topics covered in this book.

IBM Redbooks

The following publications from IBM Redbooks provide additional information about IBM Flex System. These are available from:

http://www.redbooks.ibm.com/portals/puresystems

- IBM Flex System p260 and p460 Planning and Implementation Guide, SG24-7989

- IBM Flex System Networking in an Enterprise Data Center, REDP-4834

Chassis and Compute Nodes:

- IBM Flex System Enterprise Chassis, TIPS0863

- IBM Flex System p260 and p460 Compute Node, TIPS0880

- IBM Flex System x240 Compute Node, TIPS0860

- IBM Flex System Manager, TIPS0862

Switches:

- IBM Flex System EN2092 1Gb Ethernet Scalable Switch, TIPS0861

- IBM Flex System Fabric EN4093 10Gb Scalable Switch, TIPS0864

- IBM Flex System EN4091 10Gb Ethernet Pass-thru Module, TIPS0865

- IBM Flex System FC5022 16Gb SAN Scalable Switch and FC5022 24-port 16Gb ESB SAN Scalable Switch, TIPS0870

- IBM Flex System IB6131 InfiniBand Switch, TIPS0871

- IBM Flex System FC3171 8Gb SAN Switch and Pass-thru, TIPS0866

Adapters:

- IBM Flex System EN2024 4-port 1Gb Ethernet Adapter, TIPS0845

- IBM Flex System FC5022 2-port 16Gb FC Adapter, TIPS0891

- IBM Flex System CN4054 10Gb Virtual Fabric Adapter and EN4054 4-port 10Gb Ethernet Adapter, TIPS0868

- IBM Flex System FC3052 2-port 8Gb FC Adapter, TIPS0869

- ServeRAID M5115 SAS/SATA Controller for IBM Flex System, TIPS0884

- IBM Flex System IB6132 2-port FDR InfiniBand Adapter, TIPS0872

- IBM Flex System EN4132 2-port 10Gb Ethernet Adapter, TIPS0873

- IBM Flex System IB6132 2-port QDR InfiniBand Adapter, TIPS0890

- IBM Flex System FC3172 2-port 8Gb FC Adapter, TIPS0867


Other relevant documents:

- IMM and IMM2 Support on IBM System x and BladeCenter Servers, TIPS0849:

http://www.redbooks.ibm.com/abstracts/tips0849.html

You can search for, view, download or order these documents and other Redbooks, Redpapers, Web Docs, draft and additional materials, at the following website:

ibm.com/redbooks

IBM education

The following are IBM educational offerings for IBM Flex System. Note that some course numbers and titles might have changed slightly after publication.

- NGT10/NGV10/NGP10, IBM Flex System - Introduction

- NGT20/NGV20/NGP20, IBM Flex System x240 Compute Node

- NGT30/NGV30/NGP30, IBM Flex System p260 and p460 Compute Nodes

- NGT40/NGV40/NGP40, IBM Flex System Manager Node

- NGT50/NGV50/NGP50, IBM Flex System Scalable Networking

For more information about these, and many other IBM System x educational offerings, visit the global IBM Training website located at:

http://www.ibm.com/training

Note: IBM courses prefixed with NGTxx are traditional, face-to-face classroom offerings. Courses prefixed with NGVxx are Instructor Led Online (ILO) offerings. Courses prefixed with NGPxx are Self-paced Virtual Class (SPVC) offerings.

Online resources

These websites are also relevant as further information sources:

- IBM Flex System Enterprise Chassis Power Requirements Guide:

http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/PRS4401

- Integrated Management Module II User’s Guide

http://ibm.com/support/entry/portal/docdisplay?lndocid=MIGR-5086346

- IBM Flex System Information Center

http://publib.boulder.ibm.com/infocenter/flexsys/information/index.jsp

- ServerProven for IBM Flex System

http://ibm.com/systems/info/x86servers/serverproven/compat/us/flexsystems.html

- ServerProven compatibility page for operating system support

http://ibm.com/systems/info/x86servers/serverproven/compat/us/nos/flexmatrix.shtml


- IBM Flex System Interoperability Guide

http://ibm.com/common/ssi/cgi-bin/ssialias?infotype=sa&subtype=wh&htmlfid=WZL12345USEN

- Configuration and Option Guide

http://www.ibm.com/systems/xbc/cog/

- xREF - IBM x86 Server Reference

http://www.redbooks.ibm.com/xref

- IBM System Storage Interoperation Center

http://www.ibm.com/systems/support/storage/ssic

Help from IBM

IBM Support and downloads

ibm.com/support

IBM Global Services

ibm.com/services


Back cover

IBM PureFlex System and IBM Flex System Products and Technology

Describes the IBM Flex System Enterprise Chassis and compute node technology

Provides details of available I/O modules and expansion options

Explains networking and storage configurations

To meet today’s complex and ever-changing business demands, you need a solid foundation of compute, storage, networking, and software resources. This system must be simple to deploy, and be able to quickly and automatically adapt to changing conditions. You also need to be able to take advantage of broad expertise and proven guidelines in systems management, applications, hardware maintenance, and more.

The IBM PureFlex System combines no-compromise system designs along with built-in expertise and integrates them into complete, optimized solutions. At the heart of PureFlex System is the IBM Flex System Enterprise Chassis. This fully integrated infrastructure platform supports a mix of compute, storage, and networking resources to meet the demands of your applications.

The solution is easily scalable with the addition of another chassis with the required nodes. With the IBM Flex System Manager, multiple chassis can be monitored from a single panel. The 14 node, 10U chassis delivers high speed performance complete with integrated servers, storage, and networking. This flexible chassis is simple to deploy now, and to scale to meet your needs in the future.

This IBM Redbooks publication describes IBM PureFlex System and IBM Flex System. It highlights the technology and features of the chassis, compute nodes, management features, and connectivity options. Guidance is provided about every major component, and about networking and storage connectivity.

This book is intended for customers, Business Partners, and IBM employees who want to know the details about the new family of products. It assumes that you have a basic understanding of blade server concepts and general IT knowledge.

SG24-7984-00 ISBN 0738436992

INTERNATIONAL TECHNICAL SUPPORT ORGANIZATION

BUILDING TECHNICAL INFORMATION BASED ON PRACTICAL EXPERIENCE

IBM Redbooks are developed by the IBM International Technical Support Organization. Experts from IBM, Customers and Partners from around the world create timely technical information based on realistic scenarios. Specific recommendations are provided to help you implement IT solutions more effectively in your environment.

For more information: ibm.com/redbooks