
IBM Flex System Interoperability Guide

Date post: 22-Jan-2015
Upload: ibm-india-smarter-computing
Description:
Learn about the IBM Flex System Interoperability Guide. This IBM Redpaper publication is a reference to compatibility and interoperability of components inside and connected to IBM PureFlex System and IBM Flex System solutions. For more information on Pure Systems, visit http://ibm.co/J7Zb1v.
Redpaper front cover: IBM Flex System Interoperability Guide. Quick reference for IBM Flex System interoperability; covers internal components and external connectivity. Latest updates as of 30 January 2013. David Watts, Ilya Krutov. ibm.com/redbooks
Transcript
Front cover

IBM Flex System Interoperability Guide

Quick reference for IBM Flex System interoperability. Covers internal components and external connectivity. Latest updates as of 30 January 2013.

David Watts, Ilya Krutov
ibm.com/redbooks — Redpaper

International Technical Support Organization

IBM Flex System Interoperability Guide

30 January 2013 — REDP-FSIG-00

Note: Before using this information and the product it supports, read the information in "Notices" below.

This edition applies to:
- IBM PureFlex System
- IBM Flex System Enterprise Chassis
- IBM Flex System Manager
- IBM Flex System x220 Compute Node
- IBM Flex System x240 Compute Node
- IBM Flex System x440 Compute Node
- IBM Flex System p260 Compute Node
- IBM Flex System p24L Compute Node
- IBM Flex System p460 Compute Node
- IBM 42U 1100 mm Enterprise V2 Dynamic Rack

© Copyright International Business Machines Corporation 2012, 2013. All rights reserved.
Note to U.S. Government Users Restricted Rights: Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

Contents

Notices
  Trademarks
Preface
  The team who wrote this paper
  Now you can become a published author, too!
  Comments welcome
  Stay connected to IBM Redbooks
Summary of changes
  30 January 2013
  8 December 2012
  29 November 2012
  13 November 2012
  2 October 2012
Chapter 1. Chassis interoperability
  1.1 Chassis to compute node
  1.2 Switch to adapter interoperability
    1.2.1 Ethernet switches and adapters
    1.2.2 Fibre Channel switches and adapters
    1.2.3 InfiniBand switches and adapters
  1.3 Switch to transceiver interoperability
    1.3.1 Ethernet switches
    1.3.2 Fibre Channel switches
    1.3.3 InfiniBand switches
  1.4 Switch upgrades
    1.4.1 IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch
    1.4.2 IBM Flex System Fabric EN4093 & EN4093R 10Gb Scalable Switch
    1.4.3 IBM Flex System EN2092 1Gb Ethernet Scalable Switch
    1.4.4 IBM Flex System IB6131 InfiniBand Switch
    1.4.5 IBM Flex System FC5022 16Gb SAN Scalable Switch
  1.5 vNIC and UFP support
  1.6 Chassis power supplies
  1.7 Rack to chassis
Chapter 2. Compute node component compatibility
  2.1 Compute node-to-card interoperability
  2.2 Memory DIMM compatibility
    2.2.1 x86 compute nodes
    2.2.2 Power Systems compute nodes
  2.3 Internal storage compatibility
    2.3.1 x86 compute nodes: 2.5-inch drives
    2.3.2 x86 compute nodes: 1.8-inch drives
    2.3.3 Power Systems compute nodes
  2.4 Embedded virtualization
  2.5 Expansion node compatibility
    2.5.1 Compute nodes
    2.5.2 Flex System I/O adapters - PCIe Expansion Node
    2.5.3 PCIe I/O adapters - PCIe Expansion Node
    2.5.4 Internal storage - Storage Expansion Node
    2.5.5 RAID upgrades - Storage Expansion Node
Chapter 3. Software compatibility
  3.1 Operating system support
    3.1.1 x86 compute nodes
    3.1.2 Power Systems compute nodes
  3.2 IBM Fabric Manager
Chapter 4. Storage interoperability
  4.1 Unified NAS storage
  4.2 FCoE support
  4.3 iSCSI support
  4.4 NPIV support
  4.5 Fibre Channel support
    4.5.1 x86 compute nodes
    4.5.2 Power Systems compute nodes
Abbreviations and acronyms
Related publications
  IBM Redbooks
  Other publications and online resources
  Help from IBM

Notices

This information was developed for products and services offered in the U.S.A. IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service. IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not grant you any license to these patents.
You can send license inquiries, in writing, to: IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.

The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.

Any references in this information to non-IBM websites are provided for convenience only and do not in any manner serve as an endorsement of those websites. The materials at those websites are not part of the materials for this IBM product and use of those websites is at your own risk.

IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you.

Any performance data contained herein was determined in a controlled environment. Therefore, the results obtained in other operating environments may vary significantly. Some measurements may have been made on development-level systems and there is no guarantee that these measurements will be the same on generally available systems. Furthermore, some measurements may have been estimated through extrapolation. Actual results may vary.
Users of this document should verify the applicable data for their specific environment.

Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.

This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental.

COPYRIGHT LICENSE: This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs.

Trademarks

IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. These and other IBM trademarked terms are marked on their first occurrence in this information with the appropriate symbol (® or ™), indicating US registered or common law trademarks owned by IBM at the time this information was published.
Such trademarks may also be registered or common law trademarks in other countries. A current list of IBM trademarks is available on the Web at http://www.ibm.com/legal/copytrade.shtml

The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both: AIX, BladeCenter, DS8000, IBM Flex System, IBM Flex System Manager, IBM RackSwitch, Netfinity, POWER, Power Systems, POWER7, POWER7+, PowerVM, PureFlex, Redbooks, Redbooks (logo), Redpaper, RETAIN, ServerProven, Storwize, System Storage, System x, XIV.

The following terms are trademarks of other companies: Intel, the Intel logo, the Intel Inside logo, and the Intel Centrino logo are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries. Linux is a trademark of Linus Torvalds in the United States, other countries, or both. Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both. Java, and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its affiliates. Other company, product, or service names may be trademarks or service marks of others.

Preface

To meet today's complex and ever-changing business demands, you need a solid foundation of compute, storage, networking, and software resources. This system must be simple to deploy, and be able to quickly and automatically adapt to changing conditions. You also need to be able to take advantage of broad expertise and proven guidelines in systems management, applications, hardware maintenance, and more. The IBM PureFlex System combines no-compromise system designs along with built-in expertise and integrates them into complete and optimized solutions. At the heart of PureFlex System is the IBM Flex System Enterprise Chassis.
This fully integrated infrastructure platform supports a mix of compute, storage, and networking resources to meet the demands of your applications. The solution is easily scalable with the addition of another chassis with the required nodes. With the IBM Flex System Manager, multiple chassis can be monitored from a single panel. The 14-node, 10U chassis delivers high-speed performance complete with integrated servers, storage, and networking. This flexible chassis is simple to deploy, and scales to meet your needs in the future.

This IBM Redpaper publication is a reference to compatibility and interoperability of components inside and connected to IBM PureFlex System and IBM Flex System solutions. The latest version of this document can be downloaded from: http://www.redbooks.ibm.com/fsig

The team who wrote this paper

This paper was produced by a team of specialists from around the world working at the International Technical Support Organization, Raleigh Center.

David Watts is a Consulting IT Specialist at the ITSO Center in Raleigh. He manages residencies and produces IBM Redbooks publications for hardware and software topics that are related to IBM System x and IBM BladeCenter servers and associated client platforms. He has authored over 300 books, papers, and web documents. David has worked for IBM both in the US and Australia since 1989. He is an IBM Certified IT Specialist and a member of the IT Specialist Certification Review Board. David holds a Bachelor of Engineering degree from the University of Queensland (Australia).

Ilya Krutov is a Project Leader at the ITSO Center in Raleigh and has been with IBM since 1998. Before joining the ITSO, Ilya served in IBM as a Run Rate Team Leader, Portfolio Manager, Brand Manager, Technical Sales Specialist, and Certified Instructor. Ilya has expertise in IBM System x and BladeCenter products, server operating systems, and networking solutions.
He has a Bachelor's degree in Computer Engineering from the Moscow Engineering and Physics Institute.

Special thanks to Ashish Jain, the former author of this document.

Now you can become a published author, too!

Here's an opportunity to spotlight your skills, grow your career, and become a published author, all at the same time! Join an ITSO residency project and help write a book in your area of expertise, while honing your experience using leading-edge technologies. Your efforts will help to increase product acceptance and customer satisfaction, as you expand your network of technical contacts and relationships. Residencies run from two to six weeks in length, and you can participate either in person or as a remote resident working from your home base. Find out more about the residency program, browse the residency index, and apply online at: ibm.com/redbooks/residencies.html

Comments welcome

Your comments are important to us! We want our papers to be as helpful as possible. Send us your comments about this paper or other IBM Redbooks publications in one of the following ways:
- Use the online "Contact us" review Redbooks form found at: ibm.com/redbooks
- Send your comments in an email to: [email protected]
- Mail your comments to: IBM Corporation, International Technical Support Organization, Dept. HYTD Mail Station P099, 2455 South Road, Poughkeepsie, NY 12601-5400

Stay connected to IBM Redbooks
- Find us on Facebook: http://www.facebook.com/IBMRedbooks
- Follow us on Twitter: http://twitter.com/ibmredbooks
- Look for us on LinkedIn: http://www.linkedin.com/groups?home=&gid=2130806
- Explore new Redbooks publications, residencies, and workshops with the IBM Redbooks weekly newsletter: https://www.redbooks.ibm.com/Redbooks.nsf/subscribe?OpenForm
- Stay current on recent Redbooks publications with RSS Feeds: http://www.redbooks.ibm.com/rss.html
Summary of changes

This section describes the technical changes made in this edition of the paper and in previous editions. This edition might also include minor corrections and editorial changes that are not identified.

30 January 2013

New information:
- More specifics about configuration support for chassis power supplies, Table 1-17 on page 15.
- Windows Server 2012 support, Table 3-1 on page 32.
- Red Hat Enterprise Linux 5 support for the p260 model 23X, Table 3-2 on page 33.

Changed information:
- The x440 restriction regarding the use of the ServeRAID M5115 is now removed with the release of IMM2 firmware build 40a.
- Updated the Fibre Channel support section, 4.5, "Fibre Channel support" on page 41.

8 December 2012

New information:
- Added Table 2-2 on page 19 indicating which slots I/O adapters are supported in with Power Systems compute nodes.
- The x440 now supports UDIMMs, Table 2-3 on page 20.

29 November 2012

Changed information:
- Clarified that the use of expansion nodes requires that the second processor be installed in the compute node, Table 2-10 on page 26.
- Corrected the NPIV information, 4.4, "NPIV support" on page 41.
- Clarified NAS support, 4.1, "Unified NAS storage" on page 38.

13 November 2012

This revision reflects the addition, deletion, or modification of new and changed information described below.
New information:
- Added information about these new products: IBM Flex System p260 Compute Node, 7895-23X; IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch; IBM Flex System Fabric EN4093R 10Gb Scalable Switch; IBM Flex System CN4058 8-port 10Gb Converged Adapter; IBM Flex System EN4132 2-port 10Gb RoCE Adapter; IBM Flex System Storage Expansion Node; IBM Flex System PCIe Expansion Node; IBM PureFlex System 42U Rack; IBM Flex System V7000 Storage Node.
- The x220 now supports 32 GB LRDIMMs, Table 2-3 on page 20.
- The Power Systems compute nodes support new DIMMs, Table 2-4 on page 21.
- New 2100W power supply option for the Enterprise Chassis, 1.6, "Chassis power supplies" on page 14.
- New section covering Features on Demand upgrades for scalable switches, 1.4, "Switch upgrades" on page 9.

Changed information:
- Moved the FCoE and NPIV tables to Chapter 4, "Storage interoperability" on page 37.
- Added machine types and models (MTMs) for the x220 and x440 when ordered via AAS (e-config), Table 1-1 on page 2.
- Added a footnote regarding power management and the use of 14 Power Systems compute nodes with 32 GB DIMMs, Table 1-1 on page 2.
- Added AAS (e-config) feature codes to various tables of x86 compute node options. Note that AAS feature codes for the x220 and x440 are the same as those used in the HVEC system (x-config). However, the AAS feature codes for the x240 are different than the equivalent HVEC feature codes. This is noted in the table.
- Updated the FCoE table, 4.2, "FCoE support" on page 39.
- Updated the vNIC table, Table 1-14 on page 13.
- Clarified that the ServeRAID M5100 Series SSD Expansion Kit (90Y4391) and x240 USB Enablement Kit (49Y8119) cannot be installed at the same time, Table 2-6 on page 23.
- Updated the table of supported 2.5-inch drives, Table 2-5 on page 22.
- Updated the operating system table, Table 3-1 on page 32.

2 October 2012

This revision reflects the addition, deletion, or modification of new and changed information described below.
New information:
- Temporary restrictions on the use of network and storage adapters with the x440, page 18.

Changed information:
- Updated the x86 memory table, Table 2-3 on page 20.
- Updated the FCoE table, 4.2, "FCoE support" on page 39.
- Updated the operating system table, Table 3-1 on page 32.
- Clarified the support of the Pass-thru module and Fibre Channel switches with IBM Fabric Manager, Table 3-4 on page 35.

Chapter 1. Chassis interoperability

The IBM Flex System Enterprise Chassis is a 10U next-generation server platform with integrated chassis management. It is a compact, high-density, high-performance, rack-mount, and scalable server platform system. It supports up to 14 one-bay compute nodes that share common resources, such as power, cooling, management, and I/O resources within a single Enterprise Chassis. In addition, it can also support up to seven 2-bay compute nodes or three 4-bay compute nodes when the shelves are removed. You can mix and match 1-bay, 2-bay, and 4-bay compute nodes to meet your specific hardware needs.

Topics in this chapter are:
- 1.1, Chassis to compute node
- 1.2, Switch to adapter interoperability
- 1.3, Switch to transceiver interoperability
- 1.4, Switch upgrades
- 1.5, vNIC and UFP support
- 1.6, Chassis power supplies
- 1.7, Rack to chassis
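The bay arithmetic described above (14 one-bay nodes, seven 2-bay nodes, three 4-bay nodes, or a mix) can be sketched as a simple capacity check. This is an illustrative sketch only, not an IBM tool; the function names and the idea of validating a configuration this way are assumptions based on the text.

```python
# Sketch: check whether a mix of compute nodes fits the 14 standard
# node bays of an IBM Flex System Enterprise Chassis.
# Illustrative only; not an IBM-provided configurator.

CHASSIS_BAYS = 14  # standard (one-bay) node bays per Enterprise Chassis

def bays_required(one_bay=0, two_bay=0, four_bay=0):
    """Total standard bays consumed by the proposed node mix."""
    return one_bay * 1 + two_bay * 2 + four_bay * 4

def fits_in_chassis(one_bay=0, two_bay=0, four_bay=0):
    """True if the mix fits within a single Enterprise Chassis."""
    return bays_required(one_bay, two_bay, four_bay) <= CHASSIS_BAYS

print(fits_in_chassis(one_bay=14))   # 14 one-bay nodes (e.g. x240) -> True
print(fits_in_chassis(two_bay=7))    # 7 two-bay nodes (e.g. x440, p460) -> True
print(fits_in_chassis(one_bay=4, two_bay=4, four_bay=1))  # 16 bays -> False
```

Note that this only checks bay capacity; as the chassis power-supply footnotes later in the chapter show, power policy can further limit how many bays may be powered on.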
1.1 Chassis to compute node

Table 1-1 lists the maximum number of compute nodes installed in the chassis.

Table 1-1  Maximum number of compute nodes installed in the chassis
(machine type columns: System x (x-config), Power Systems (e-config); maximum-count columns: chassis 8721-A1x (x-config), chassis 7893-92X (e-config))

  x86 compute nodes
  IBM Flex System x220 Compute Node             7906      7906-25X   14      14
  IBM Flex System x240 Compute Node             8737      7863-10X   14      14
  IBM Flex System x440 Compute Node             7917      7917-45X   7       7
  IBM Power Systems compute nodes
  IBM Flex System p24L Compute Node             None      1457-7FL   14 (a)  14 (a)
  IBM Flex System p260 Compute Node (POWER7)    None      7895-22X   14 (a)  14 (a)
  IBM Flex System p260 Compute Node (POWER7+)   None      7895-23X   14 (a)  14 (a)
  IBM Flex System p460 Compute Node             None      7895-42X   7 (a)   7 (a)
  Management node
  IBM Flex System Manager                       8731-A1x  7955-01M   1 (b)   1 (b)

  a. For Power Systems compute nodes: if the chassis is configured with the power management policy "AC Power Source Redundancy with Compute Node Throttling Allowed", some maximum chassis configurations containing Power Systems compute nodes with large populations of 32 GB DIMMs may result in the chassis having insufficient power to power on all 14 compute node bays. In such circumstances, only 13 of the 14 bays would be allowed to be powered on.
  b. One Flex System Manager management node can manage up to four chassis.

1.2 Switch to adapter interoperability

In this section, we describe switch to adapter interoperability.

1.2.1 Ethernet switches and adapters

Table 1-2 lists Ethernet switch to card compatibility.

Switch upgrades: To maximize the usable port count on the adapters, the switches may need additional license upgrades.
See 1.4, "Switch upgrades", for details.

Table 1-2  Ethernet switch to card compatibility
(support columns, left to right: CN4093 10Gb Switch, EN4093R 10Gb Switch, EN4093 10Gb Switch, EN4091 10Gb Pass-thru, EN2092 1Gb Switch)

  Switch part number:        00D5823    95Y3309    49Y4270    88Y6043    49Y4294
  Switch feature codes (a):  A3HH/ESW2  A3J6/ESW7  A0TB/3593  A1QV/3700  A0TF/3598

  Adapter (part number; feature codes (a))
  x220 Embedded 1Gb (None; None)                                 Yes (b)  Yes      Yes      No       Yes
  x240 Embedded 10Gb (None; None)                                Yes      Yes      Yes      Yes      Yes
  x440 Embedded 10Gb (None; None)                                Yes      Yes      Yes      Yes      Yes
  EN2024 4-port 1Gb Ethernet Adapter (49Y7900; A1BR/1763)        Yes      Yes      Yes      Yes (c)  Yes
  EN4132 2-port 10Gb Ethernet Adapter (90Y3466; A1QY/EC2D)       No       Yes      Yes      Yes      No
  EN4054 4-port 10Gb Ethernet Adapter (None; None/1762)          Yes      Yes      Yes      Yes (c)  Yes
  CN4054 10Gb Virtual Fabric Adapter (90Y3554; A1R1/1759)        Yes      Yes      Yes      Yes (c)  Yes
  CN4058 8-port 10Gb Converged Adapter (None; None/EC24)         Yes (d)  Yes (d)  Yes (d)  Yes (c)  Yes (e)
  EN4132 2-port 10Gb RoCE Adapter (None; None/EC26)              No       Yes      Yes      Yes      No

  a. The first feature code listed is for configurations ordered through System x sales channels (x-config). The second feature code is for configurations ordered through the IBM Power Systems channel (e-config).
  b. 1 Gb is supported on the CN4093's two external 10 Gb SFP+ ports only. The 12 external Omni Ports do not support 1 GbE speeds.
  c. Only two of the ports of this adapter are connected when used with the EN4091 10Gb Pass-thru.
  d. Only six of the eight ports of the CN4058 adapter are connected with the CN4093, EN4093, and EN4093R switches.
  e. Only four of the eight ports of the CN4058 adapter are connected with the EN2092 switch.
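A compatibility matrix like Table 1-2 is naturally represented as a nested lookup, which lets a script answer "does adapter X work with switch Y?" directly. The structure below is a hypothetical sketch with a subset of the table's rows transcribed; the dictionary name and helper function are assumptions, and the parenthesized strings summarize the table's footnotes.

```python
# Sketch: a subset of Table 1-2 (Ethernet switch to card compatibility)
# as a nested dict. Illustrative only; transcribed by hand from the table.

ETHERNET_COMPAT = {
    # adapter: {switch: support status}
    "EN2024 4-port 1Gb":  {"CN4093": "yes", "EN4093R": "yes", "EN4093": "yes",
                           "EN4091": "yes (2 ports only)", "EN2092": "yes"},
    "EN4132 2-port 10Gb": {"CN4093": "no", "EN4093R": "yes", "EN4093": "yes",
                           "EN4091": "yes", "EN2092": "no"},
    "CN4058 8-port 10Gb": {"CN4093": "yes (6 of 8 ports)",
                           "EN4093R": "yes (6 of 8 ports)",
                           "EN4093": "yes (6 of 8 ports)",
                           "EN4091": "yes (2 ports only)",
                           "EN2092": "yes (4 of 8 ports)"},
}

def supported(adapter: str, switch: str) -> str:
    """Look up the support status for an adapter/switch pair."""
    return ETHERNET_COMPAT[adapter][switch]

print(supported("EN4132 2-port 10Gb", "CN4093"))  # no
print(supported("CN4058 8-port 10Gb", "EN2092"))  # yes (4 of 8 ports)
```

Keeping the footnote text in the value (rather than a bare yes/no) preserves the port-count caveats that matter when planning uplink capacity.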
1.2.2 Fibre Channel switches and adapters

Table 1-3 lists Fibre Channel switch to card compatibility.

Table 1-3  Fibre Channel switch to card compatibility
(support columns, left to right: FC5022 16Gb 12-port, FC5022 16Gb 24-port, FC5022 16Gb 24-port ESB, FC3171 8Gb switch, FC3171 8Gb Pass-thru)

  Switch part number:        88Y6374    00Y3324    90Y9356    69Y1930    69Y1934
  Switch feature codes (a):  A1EH/3770  A3DP/ESW5  A2RQ/3771  A0TD/3595  A0TJ/3591

  Adapter (part number; feature codes (a))
  FC3172 2-port 8Gb FC Adapter (69Y1938; A1BM/1764)     Yes  Yes  Yes  Yes  Yes
  FC3052 2-port 8Gb FC Adapter (95Y2375; A2N5/EC25)     Yes  Yes  Yes  Yes  Yes
  FC5022 2-port 16Gb FC Adapter (88Y6370; A1BP/EC2B)    Yes  Yes  Yes  No   No

  a. The first feature code listed is for configurations ordered through System x sales channels (x-config). The second feature code is for configurations ordered through the IBM Power Systems channel (e-config).

1.2.3 InfiniBand switches and adapters

Table 1-4 lists InfiniBand switch to card compatibility.

Table 1-4  InfiniBand switch to card compatibility
(single support column: IB6131 InfiniBand Switch; part number 90Y3450; feature codes (a) A1EK/3699)

  IB6132 2-port FDR InfiniBand Adapter (90Y3454; A1QZ/EC2C)   Yes (b)
  IB6132 2-port QDR InfiniBand Adapter (None; None/1761)      Yes

  a. The first feature code listed is for configurations ordered through System x sales channels (x-config). The second feature code is for configurations ordered through the IBM Power Systems channel (e-config).
  b. To operate at FDR speeds, the IB6131 switch will need the FDR upgrade, as described in 1.4, "Switch upgrades".
1.3 Switch to transceiver interoperability

This section specifies the transceivers and direct-attach copper (DAC) cables supported by the various IBM Flex System I/O modules.

1.3.1 Ethernet switches

Support for transceivers and cables for Ethernet switch modules is shown in Table 1-5.

Table 1-5  Modules and cables supported in Ethernet I/O modules
(support columns, left to right: CN4093 10Gb Switch, EN4093R 10Gb Switch, EN4093 10Gb Switch, EN4091 10Gb Pass-thru, EN2092 1Gb Switch)

  Switch part number:        00D5823    95Y3309    49Y4270    88Y6043    49Y4294
  Switch feature codes (a):  A3HH/ESW2  A3J6/ESW7  A0TB/3593  A1QV/3700  A0TF/3598

  SFP transceivers - 1 Gbps
  81Y1622 (3269/EB2A)  IBM SFP SX Transceiver (1000Base-SX)        Yes  Yes  Yes  Yes  Yes
  81Y1618 (3268/EB29)  IBM SFP RJ45 Transceiver (1000Base-T)       Yes  Yes  Yes  Yes  Yes
  90Y9424 (A1PN/ECB8)  IBM SFP LX Transceiver (1000Base-LX)        Yes  Yes  Yes  Yes  Yes

  SFP+ transceivers - 10 Gbps
  44W4408 (4942/3282)  10GBase-SR SFP+ (MM Fiber)                  Yes  Yes  Yes  Yes  Yes
  46C3447 (5053/EB28)  IBM SFP+ SR Transceiver (10GBase-SR)        Yes  Yes  Yes  Yes  Yes
  90Y9412 (A1PM/ECB9)  IBM SFP+ LR Transceiver (10GBase-LR)        Yes  Yes  Yes  Yes  Yes

  QSFP+ transceivers - 40 Gbps
  49Y7884 (A1DR/EB27)  IBM QSFP+ SR Transceiver (40Gb)             Yes  Yes  Yes  No   No

  8 Gb Fibre Channel SFP+ transceivers
  44X1964 (5075/3286)  IBM 8 Gb SFP+ SW Optical Transceiver        Yes  No   No   No   No

  SFP+ direct-attach copper (DAC) cables
  90Y9427 (A1PH/None)  1m IBM Passive DAC SFP+                     Yes  Yes  Yes  No   Yes
  90Y9430 (A1PJ/None)  3m IBM Passive DAC SFP+                     Yes  Yes  Yes  No   Yes
  90Y9433 (A1PK/ECB6)  5m IBM Passive DAC SFP+                     Yes  Yes  Yes  No   Yes
  49Y7886 (A1DL/EB24)  1m 40 Gb QSFP+ to 4 x 10 Gb SFP+ Cable      Yes  Yes  Yes  No   No
  49Y7887 (A1DM/EB25)  3m 40 Gb QSFP+ to 4 x 10 Gb SFP+ Cable      Yes  Yes  Yes  No   No
  49Y7888 (A1DN/EB26)  5m 40 Gb QSFP+ to 4 x 10 Gb SFP+ Cable      Yes  Yes  Yes  No   No
  95Y0323 (A25A/None)  IBM 1m 10GBase Copper SFP+ Twinax (Active)  No   No   No   Yes  No
  95Y0326 (A25B/None)  IBM 3m 10GBase Copper SFP+ Twinax (Active)  No   No   No   Yes  No
  95Y0329 (A25C/None)  IBM 5m 10GBase Copper SFP+ Twinax (Active)  No   No   No   Yes  No
  81Y8295 (A18M/None)  1m 10 GbE Twinax Act Copper SFP+ DAC (active)  No  No  No  Yes  No
  81Y8296 (A18N/None)  3m 10 GbE Twinax Act Copper SFP+ DAC (active)  No  No  No  Yes  No
  81Y8297 (A18P/None)  5m 10 GbE Twinax Act Copper SFP+ DAC (active)  No  No  No  Yes  No

  QSFP+ cables
  49Y7890 (A1DP/EB2B)  1m IBM QSFP+ to QSFP+ Cable                 Yes  Yes  Yes  No   No
  49Y7891 (A1DQ/EB2H)  3m IBM QSFP+ to QSFP+ Cable                 Yes  Yes  Yes  No   No

  Fiber optic cables
  90Y3519 (A1MM/EB2J)  10m IBM MTP Fiber Optical Cable             Yes  Yes  Yes  No   No
  90Y3521 (A1MN/EC2K)  30m IBM MTP Fiber Optical Cable             Yes  Yes  Yes  No   No

  a. The first feature code listed is for configurations ordered through System x sales channels (x-config). The second feature code is for configurations ordered through the IBM Power Systems channel (e-config).
1.3.2 Fibre Channel switches

Support for transceivers and cables for Fibre Channel switch modules is shown in Table 1-6. The switch columns are: FC5022 16Gb 12-port (88Y6374, feature codes (a) A1EH / 3770), FC5022 16Gb 24-port (00Y3324, A3DP / ESW5), FC5022 16Gb 24-port ESB (90Y9356, A2RQ / 3771), FC3171 8Gb switch (69Y1930, A0TD / 3595), and FC3171 8Gb Pass-thru (69Y1934, A0TJ / 3591).

Table 1-6  Modules and cables supported in Fibre Channel I/O modules

Part number | Feature codes (a) | Description | FC5022 12-port | FC5022 24-port | FC5022 24-port ESB | FC3171 switch | FC3171 Pass-thru

16 Gb transceivers
88Y6393 | A22R / 5371 | Brocade 16 Gb SFP+ Optical Transceiver | Yes | Yes | Yes | No | No

8 Gb transceivers
88Y6416 | A2B9 / 5370 | Brocade 8 Gb SFP+ SW Optical Transceiver | Yes | Yes | Yes | No | No
44X1964 | 5075 / 3286 | IBM 8 Gb SFP+ SW Optical Transceiver | No | No | No | Yes | Yes

4 Gb transceivers
39R6475 | 4804 / 3238 | 4 Gb SFP Transceiver Option | No | No | No | Yes | Yes

a. The first feature code listed is for configurations ordered through System x sales channels (x-config). The second feature code is for configurations ordered through the IBM Power Systems channel (e-config).

1.3.3 InfiniBand switches

Support for transceivers and cables for InfiniBand switch modules is shown in Table 1-7.

Compliant cables: The IB6131 switch supports all cables compliant to the InfiniBand Architecture specification.

Table 1-7  Modules and cables supported in InfiniBand I/O modules

Part number | Feature codes (a) | Description | IB6131 InfiniBand Switch (90Y3450, A1EK / 3699)
49Y9980 | 3866 / 3249 | IB QDR 3m QSFP Cable Option (passive) | Yes
90Y3470 | A227 / ECB1 | 3m FDR InfiniBand Cable (passive) | Yes

a. The first feature code listed is for configurations ordered through System x sales channels (x-config). The second feature code is for configurations ordered through the IBM Power Systems channel (e-config).

1.4 Switch upgrades

Various IBM Flex System switches can be upgraded via software licenses to enable additional ports or features.
Switches covered in this section:
- 1.4.1, IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch on page 9
- 1.4.2, IBM Flex System Fabric EN4093 & EN4093R 10Gb Scalable Switch on page 10
- 1.4.3, IBM Flex System EN2092 1Gb Ethernet Scalable Switch on page 11
- 1.4.4, IBM Flex System IB6131 InfiniBand Switch on page 11
- 1.4.5, IBM Flex System FC5022 16Gb SAN Scalable Switch on page 12

1.4.1 IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch

The CN4093 switch is initially licensed with fourteen 10 GbE internal ports, two external 10 GbE SFP+ ports, and six external Omni Ports enabled. Further ports can be enabled: Upgrade 1 (00D5845) adds 14 internal ports and two external 40 GbE QSFP+ uplink ports, and Upgrade 2 (00D5847) adds 14 internal ports and six additional external Omni Ports. Upgrade 1 and Upgrade 2 can be applied on the switch independently of each other, or in combination for full feature capability. Table 1-8 shows the part numbers for ordering the switches and the upgrades.

Table 1-8  CN4093 10Gb Converged Scalable Switch part numbers and port upgrades

Part number | Feature code (a) | Description | Internal | External 10Gb SFP+ | External 10Gb Omni | External 40Gb QSFP+
00D5823 | A3HH / ESW2 | Base switch (no upgrades) | 14 | 2 | 6 | 0
00D5845 | A3HL / ESU1 | Add Upgrade 1 | 28 | 2 | 6 | 2
00D5847 | A3HM / ESU2 | Add Upgrade 2 | 28 | 2 | 12 | 0
00D5845 and 00D5847 | A3HL / ESU1 and A3HM / ESU2 | Add both Upgrade 1 and Upgrade 2 | 42 | 2 | 12 | 2

a. The first feature code listed is for configurations ordered through System x sales channels. The second feature code is for configurations ordered through the IBM Power Systems channel.

Each upgrade license enables additional internal ports.
To take full advantage of those ports, each compute node needs the appropriate I/O adapter installed:
- The base switch requires a two-port Ethernet adapter (one port of the adapter goes to each of two switches)
- Adding Upgrade 1 or Upgrade 2 requires a four-port Ethernet adapter (two ports of the adapter to each switch) to use all internal ports
- Adding both Upgrade 1 and Upgrade 2 requires a six-port Ethernet adapter (three ports to each switch) to use all internal ports

1.4.2 IBM Flex System Fabric EN4093 & EN4093R 10Gb Scalable Switch

The EN4093 and EN4093R are initially licensed with fourteen 10 Gb internal ports and ten 10 Gb external uplink ports enabled. Further ports can be enabled: Upgrade 1 adds the two 40 Gb external uplink ports, and Upgrade 2 adds four additional SFP+ 10 Gb ports. Upgrade 1 must be applied before Upgrade 2 can be applied. These are IBM Features on Demand license upgrades. Table 1-9 lists the available parts and upgrades.

Table 1-9  IBM Flex System Fabric EN4093 10Gb Scalable Switch part numbers and port upgrades

Part number | Feature code (a) | Product description | Internal | 10 Gb uplink | 40 Gb uplink
49Y4270 | A0TB / 3593 | IBM Flex System Fabric EN4093 10Gb Scalable Switch (10x external 10 Gb uplinks, 14x internal 10 Gb ports) | 14 | 10 | 0
95Y3309 | A3J6 / ESW7 | IBM Flex System Fabric EN4093R 10Gb Scalable Switch (10x external 10 Gb uplinks, 14x internal 10 Gb ports) | 14 | 10 | 0
49Y4798 | A1EL / 3596 | IBM Flex System Fabric EN4093 10Gb Scalable Switch (Upgrade 1): adds 2x external 40 Gb uplinks and 14x internal 10 Gb ports | 28 | 10 | 2
88Y6037 | A1EM / 3597 | IBM Flex System Fabric EN4093 10Gb Scalable Switch (Upgrade 2, requires Upgrade 1): adds 4x external 10 Gb uplinks and 14x internal 10 Gb ports | 42 | 14 | 2

a. The first feature code listed is for configurations ordered through System x sales channels. The second feature code is for configurations ordered through the IBM Power Systems channel.
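The port counts in Table 1-9 follow a simple additive rule. The function below is a minimal sketch (a hypothetical helper, not part of the guide) that encodes those counts, including the constraint that Upgrade 2 requires Upgrade 1:

```python
# Sketch of the EN4093/EN4093R port counts from Table 1-9 (hypothetical helper).
def en4093_ports(upgrade1: bool = False, upgrade2: bool = False):
    """Return (internal, 10Gb uplink, 40Gb uplink) port counts for a license mix."""
    if upgrade2 and not upgrade1:
        raise ValueError("Upgrade 2 requires Upgrade 1")
    internal, uplink10, uplink40 = 14, 10, 0  # base license
    if upgrade1:
        internal += 14   # 14 more internal 10 Gb ports
        uplink40 += 2    # 2x external 40 Gb uplinks
    if upgrade2:
        internal += 14   # 14 more internal 10 Gb ports
        uplink10 += 4    # 4x additional external 10 Gb uplinks
    return internal, uplink10, uplink40
```

With both upgrades, the 42 internal ports work out to three ports for each of the 14 compute node bays, which is why a six-port or eight-port adapter is needed to use them all.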
Each upgrade license enables additional internal ports. To take full advantage of those ports, each compute node needs the appropriate I/O adapter installed:
- The base switch requires a two-port Ethernet adapter (one port of the adapter goes to each of two switches)
- Upgrade 1 requires a four-port Ethernet adapter (two ports of the adapter to each switch) to use all internal ports
- Upgrade 2 requires a six-port Ethernet adapter (three ports to each switch) to use all internal ports

Consideration: Adding Upgrade 2 enables an additional 14 internal ports, for a total of 42 internal ports -- three ports connected to each of the 14 compute nodes. To take full advantage of all 42 internal ports, a 6-port or 8-port adapter is required, such as the CN4058 8-port 10Gb Converged Adapter. Upgrade 2 still provides a benefit with a 4-port adapter because this upgrade enables an extra four external 10 Gb uplinks as well.

1.4.3 IBM Flex System EN2092 1Gb Ethernet Scalable Switch

The EN2092 comes standard with 14 internal and 10 external Gigabit Ethernet ports enabled. Further ports can be enabled, including the four external 10 Gb uplink ports, with IBM Features on Demand license upgrades. Upgrade 1 and the 10 Gb Uplinks upgrade can be applied in either order.

Table 1-10  IBM Flex System EN2092 1Gb Ethernet Scalable Switch part numbers and port upgrades

Part number | Feature code (a) | Product description
49Y4294 | A0TF / 3598 | IBM Flex System EN2092 1Gb Ethernet Scalable Switch: 14 internal 1 Gb ports, 10 external 1 Gb ports
90Y3562 | A1QW / 3594 | IBM Flex System EN2092 1Gb Ethernet Scalable Switch (Upgrade 1): adds 14 internal 1 Gb ports and 10 external 1 Gb ports
49Y4298 | A1EN / 3599 | IBM Flex System EN2092 1Gb Ethernet Scalable Switch (10 Gb Uplinks): adds 4 external 10 Gb uplinks

a. The first feature code listed is for configurations ordered through System x sales channels. The second feature code is for configurations ordered through the IBM Power Systems channel.

The standard switch has 14 internal ports, and the Upgrade 1 license enables 14 additional internal ports. To take full advantage of those ports, each compute node needs the appropriate I/O adapter installed:
- The base switch requires a two-port Ethernet adapter installed in each compute node (one port of the adapter goes to each of two switches)
- Upgrade 1 requires a four-port Ethernet adapter installed in each compute node (two ports of the adapter to each switch)

1.4.4 IBM Flex System IB6131 InfiniBand Switch

The IBM Flex System IB6131 InfiniBand Switch is a 32-port InfiniBand switch. It has 18 FDR/QDR (56/40 Gbps) external ports and 14 FDR/QDR (56/40 Gbps) internal ports for connections to nodes. This switch ships standard with quad data rate (QDR) and can be upgraded to fourteen data rate (FDR) with an IBM Features on Demand license upgrade. Ordering information is listed in Table 1-11.

Table 1-11  IBM Flex System IB6131 InfiniBand Switch part number and upgrade option

Part number | Feature codes (a) | Product name
90Y3450 | A1EK / 3699 | IBM Flex System IB6131 InfiniBand Switch: 18 external QDR ports, 14 internal QDR ports
90Y3462 | A1QX / ESW1 | IBM Flex System IB6131 InfiniBand Switch (FDR Upgrade): upgrades all ports to FDR speeds

a. The first feature code listed is for configurations ordered through System x sales channels. The second feature code is for configurations ordered through the IBM Power Systems channel.

1.4.5 IBM Flex System FC5022 16Gb SAN Scalable Switch

Table 1-12 lists the available port and feature upgrades for the FC5022 16Gb SAN Scalable Switches.
These upgrades are all IBM Features on Demand license upgrades.

Table 1-12  FC5022 switch upgrades

Part number | Feature codes (a) | Description | 24-port 16 Gb ESB switch (90Y9356) | 24-port 16 Gb SAN switch (00Y3324) | 16 Gb SAN switch (88Y6374)
88Y6382 | A1EP / 3772 | FC5022 16Gb SAN Scalable Switch (Upgrade 1) | No | No | Yes
88Y6386 | A1EQ / 3773 | FC5022 16Gb SAN Scalable Switch (Upgrade 2) | Yes | Yes | Yes
00Y3320 | A3HN / ESW3 | FC5022 16Gb Fabric Watch Upgrade | No | Yes | Yes
00Y3322 | A3HP / ESW4 | FC5022 16Gb ISL/Trunking Upgrade | No | Yes | Yes

a. The first feature code listed is for configurations ordered through System x sales channels. The second feature code is for configurations ordered through the IBM Power Systems channel.

Table 1-13 shows the total number of active ports on the switch after applying compatible port upgrades.

Table 1-13  Total port counts after applying upgrades

Ports on Demand upgrade | 24-port 16 Gb ESB switch (90Y9356) | 24-port 16 Gb SAN switch (00Y3324) | 16 Gb SAN switch (88Y6374)
Included with base switch | 24 | 24 | 12
Upgrade 1, 88Y6382 (adds 12 ports) | Not supported | Not supported | 24
Upgrade 2, 88Y6386 (adds 24 ports) | 48 | 48 | 48

1.5 vNIC and UFP support

Table 1-14 lists vNIC (virtual NIC) and UFP (Universal Fabric Port) support by combinations of switch, adapter, and operating system. In the table, we use the following abbreviations for the vNIC modes:
- vNIC1 = IBM Virtual Fabric Mode
- vNIC2 = Switch Independent Mode

10 GbE adapters only: Only 10 Gb Ethernet adapters support vNIC and UFP. 1 GbE adapters do not support these features.

Table 1-14  Supported vNIC modes

Flex System I/O module: EN4093 10Gb Scalable Switch, EN4093R 10Gb Switch, or CN4093 10Gb Converged Switch; top-of-rack switch: none

Adapter | Windows | Linux (a)(b) | VMware (c)
10Gb onboard LOM (x240 and x440) | vNIC1, vNIC2, UFP (d) | vNIC1, vNIC2, UFP (d) | vNIC1, vNIC2, UFP
CN4054 10Gb Virtual Fabric Adapter, 90Y3554 (e-config #1759) | vNIC1, vNIC2, UFP (d) | vNIC1, vNIC2, UFP (d) | vNIC1, vNIC2, UFP (d)

Flex System I/O module: EN4091 10Gb Ethernet Pass-thru; top-of-rack switch: IBM RackSwitch G8124E or IBM RackSwitch G8264

Adapter | Windows | Linux (a)(b) | VMware (c)
10Gb onboard LOM (x240 and x440) | vNIC1, vNIC2, UFP | vNIC1, vNIC2, UFP | vNIC1, vNIC2, UFP
CN4054 10Gb Virtual Fabric Adapter, 90Y3554 (e-config #1759) | vNIC1, vNIC2, UFP | vNIC1, vNIC2, UFP | vNIC1, vNIC2, UFP

The EN4054 4-port 10Gb Ethernet Adapter (e-config #1762), the EN4132 2-port 10 Gb Ethernet Adapter (90Y3466, e-config #EC2D), the CN4058 8-port 10Gb Converged Adapter (e-config #EC24), and the EN4132 2-port 10Gb RoCE Adapter (e-config #EC26) do not support vNIC or UFP.

a. Linux kernels with Xen are not supported with either vNIC1 or vNIC2. For support information, see IBM RETAIN Tip H205800 at http://ibm.com/support/entry/portal/docdisplay?lndocid=migr-5090480.
b. The combination of vNIC2 and iBoot is not supported for legacy booting with Linux.
c. The combination of vNIC2 with VMware ESX 4.1 and storage protocols (FCoE and iSCSI) is not supported.
d. The CN4093 10Gb Converged Switch is planned to support Universal Fabric Port (UFP) in 2Q/2013.

1.6 Chassis power supplies

Power supplies are available in either 2500W or 2100W capacities. The standard chassis ships with two 2500W power supplies. A maximum of six power supplies can be installed. The 2100W power supplies are only available via CTO and through the System x ordering channel.
Table 1-15 shows the ordering information for the Enterprise Chassis power supplies. Power supplies cannot be mixed in the same chassis.

Table 1-15  Power supply module option part numbers

Part number | Feature codes (a) | Description | Chassis models where standard
43W9049 | A0UC / 3590 | IBM Flex System Enterprise Chassis 2500W Power Module | 8721-A1x (x-config), 7893-92X (e-config)
47C7633 | A3JH / None | IBM Flex System Enterprise Chassis 2100W Power Module | None

a. The first feature code listed is for configurations ordered through System x sales channels. The second feature code is for configurations ordered through the IBM Power Systems channel.

A chassis powered by the 2100W power supplies cannot provide N+N redundant power unless all the compute nodes are configured with 95W or lower Intel processors. N+1 redundancy is possible with any processors. Table 1-16 shows the nodes that are supported in a chassis when powered by either the 2100W or 2500W modules.

Table 1-16  Compute nodes supported by the power supplies

Node | 2100W power supply | 2500W power supply
IBM Flex System Manager management node | Yes | Yes
x220 (with or without Storage Expansion Node or PCIe Expansion Node) | Yes | Yes
x240 (with or without Storage Expansion Node or PCIe Expansion Node) | Yes (a) | Yes (a)
x440 | Yes (a) | Yes (a)
p24L | No | Yes (a)
p260 | No | Yes (a)
p460 | No | Yes (a)
V7000 Storage Node (either primary or expansion node) | Yes | Yes

a. Some restrictions based on the TDP power of the processors installed or the power policy enabled. See Table 1-17 on page 15.

Table 1-17 on page 15 lists the number of compute nodes supported, based on the type and number of power supplies installed in the chassis and the power policy enabled (N+N or N+1). A value equal to the maximum number of that node type in the chassis means support with no restrictions; a lower value means the number of compute nodes that can be installed is restricted.

Table 1-17  Specific number of compute nodes supported based on installed power supplies

Columns: for each power supply type, the policies are N+1 with N=5 (6 total), N+1 with N=4 (5 total), N+1 with N=3 (4 total), and N+N with N=3 (6 total).

Node | CPU TDP | 2100W: N+1 (6) | N+1 (5) | N+1 (4) | N+N (6) | 2500W: N+1 (6) | N+1 (5) | N+1 (4) | N+N (6)
x240 | 60W | 14 | 14 | 14 | 14 | 14 | 14 | 14 | 14
x240 | 70W | 14 | 14 | 13 | 14 | 14 | 14 | 14 | 14
x240 | 80W | 14 | 14 | 13 | 14 | 14 | 14 | 14 | 14
x240 | 95W | 14 | 14 | 12 | 13 | 14 | 14 | 14 | 14
x240 | 115W | 14 | 14 | 11 | 12 | 14 | 14 | 14 | 14
x240 | 130W | 14 | 14 | 11 | 11 | 14 | 14 | 14 | 14
x240 | 135W | 14 | 14 | 11 | 11 | 14 | 14 | 13 | 14
x440 | 95W | 7 | 7 | 6 | 6 | 7 | 7 | 7 | 7
x440 | 115W | 7 | 7 | 5 | 6 | 7 | 7 | 7 | 7
x440 | 130W | 7 | 7 | 5 | 5 | 7 | 7 | 6 | 7
p24L | All | Not supported | - | - | - | 14 | 14 | 12 | 13
p260 | All | Not supported | - | - | - | 14 | 14 | 12 | 13
p460 | All | Not supported | - | - | - | 7 | 7 | 6 | 6
x220 | 50W to 95W | 14 | 14 | 14 | 14 | 14 | 14 | 14 | 14
FSM | 95W | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2
V7000 | N/A | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3

Assumptions:
- All compute nodes fully configured
- Throttling and oversubscription enabled

Tip: Consult the Power configurator for exact configuration support:
http://ibm.com/systems/bladecenter/resources/powerconfig.html

1.7 Rack to chassis

IBM offers an extensive range of industry-standard and EIA-compatible rack enclosures and expansion units. The flexible rack solutions help you consolidate servers and save space, while allowing easy access to crucial components and cable management.
Table 1-18 lists the IBM Flex System Enterprise Chassis support in each rack cabinet.

Table 1-18  The chassis supported in each rack cabinet

Part number | Rack cabinet | Supports the Enterprise Chassis
93634CX | IBM PureFlex System 42U Rack | Yes (recommended)
93634DX | IBM PureFlex System 42U Expansion Rack | Yes (recommended)
93634PX | IBM 42U 1100 mm Deep Dynamic rack | Yes (recommended)
201886X | IBM 11U Office Enablement Kit | Yes
93072PX | IBM S2 25U Static standard rack | Yes
93072RX | IBM S2 25U Dynamic standard rack | Yes
93074RX | IBM S2 42U standard rack | Yes
99564RX | IBM S2 42U Dynamic standard rack | Yes
93084PX | IBM 42U Enterprise rack | Yes
93604PX | IBM 42U 1200 mm Deep Dynamic Rack | Yes
93614PX | IBM 42U 1200 mm Deep Static rack | Yes
93624PX | IBM 47U 1200 mm Deep Static rack | Yes
9306-900 | IBM Netfinity 42U Rack | No
9306-910 | IBM Netfinity 42U Rack | No
9308-42P | IBM Netfinity Enterprise Rack | No
9308-42X | IBM Netfinity Enterprise Rack | No
Varies | IBM NetBay 22U | No

Chapter 2. Compute node component compatibility

This chapter lists the compatibility of components installed internally to each compute node. Topics in this chapter are:
- 2.1, Compute node-to-card interoperability on page 18
- 2.2, Memory DIMM compatibility on page 20
- 2.3, Internal storage compatibility on page 22
- 2.4, Embedded virtualization on page 25
- 2.5, Expansion node compatibility on page 26

2.1 Compute node-to-card interoperability

Table 2-1 lists the available I/O adapters and their compatibility with compute nodes.

Power Systems compute nodes: Some I/O adapters supported by Power Systems compute nodes are restricted to only some of the available slots. See Table 2-2 on page 19 for specifics.

Table 2-1  I/O adapter compatibility matrix - compute nodes

System x part number | x-config feature code | e-config feature code (a) | I/O adapter | x220 | x240 | x440 (b) | p24L | p260 22X | p260 23X | p460

Ethernet adapters
49Y7900 | A1BR | 1763 / A10Y | EN2024 4-port 1Gb Ethernet Adapter | Y | Y | Y | Y | Y | Y | Y
90Y3466 | A1QY | EC2D / A1QY | EN4132 2-port 10 Gb Ethernet Adapter | Y | Y | Y | N | N | N | N
None | None | 1762 / None | EN4054 4-port 10Gb Ethernet Adapter | N | N | N | Y | Y | Y | Y
90Y3554 | A1R1 | 1759 / A1R1 | CN4054 10Gb Virtual Fabric Adapter | Y | Y | Y | N | N | N | N
90Y3558 | A1R0 | 1760 / A1R0 | CN4054 Virtual Fabric Adapter Upgrade (c) | Y | Y | Y | N | N | N | N
None | None | EC24 / None | CN4058 8-port 10Gb Converged Adapter | N | N | N | Y | Y | Y | Y
None | None | EC26 / None | EN4132 2-port 10Gb RoCE Adapter | N | N | N | Y | Y | Y | Y

Fibre Channel adapters
69Y1938 | A1BM | 1764 / A1BM | FC3172 2-port 8Gb FC Adapter | Y | Y | Y | Y | Y | Y | Y
95Y2375 | A2N5 | EC25 / A2N5 | FC3052 2-port 8Gb FC Adapter | Y | Y | Y | N | N | N | N
88Y6370 | A1BP | EC2B / A1BP | FC5022 2-port 16Gb FC Adapter | Y | Y | Y | N | N | N | N

InfiniBand adapters
90Y3454 | A1QZ | EC2C / A1QZ | IB6132 2-port FDR InfiniBand Adapter | Y | Y | Y | N | N | N | N
None | None | 1761 / None | IB6132 2-port QDR InfiniBand Adapter | N | N | N | Y | Y | Y | Y

SAS
90Y4390 | A2XW | None / A2XW | ServeRAID M5115 SAS/SATA Controller (d) | Y | Y | Y (b) | N | N | N | N

a. There are two e-config (AAS) feature codes for some options. The first is for the x240, p24L, p260, and p460 (when supported). The second is for the x220 and x440.
b. For compatibility as listed here, ensure the x440 is running IMM2 firmware Build 40a or later.
c. Features on Demand (software) upgrade to enable FCoE and iSCSI on the CN4054. One upgrade needed per adapter.
d. Various enablement kits and Features on Demand upgrades are available for the ServeRAID M5115. See the ServeRAID M5115 Product Guide, http://www.redbooks.ibm.com/abstracts/tips0884.html?Open

For Power Systems compute nodes, Table 2-2 shows which specific I/O expansion slots each of the supported adapters can be installed into.
Yes in the table means the adapter is supported in that I/O expansion slot.

Tip: Table 2-2 applies to Power Systems compute nodes only.

Table 2-2  Slot locations supported by I/O expansion cards in Power Systems compute nodes

Feature code | Description | Slot 1 | Slot 2 | Slot 3 (p460) | Slot 4 (p460)

10 Gb Ethernet
EC24 | IBM Flex System CN4058 8-port 10Gb Converged Adapter | Yes | Yes | Yes | Yes
EC26 | IBM Flex System EN4132 2-port 10Gb RoCE Adapter | No | Yes | Yes | Yes
1762 | IBM Flex System EN4054 4-port 10Gb Ethernet Adapter | Yes | Yes | Yes | Yes

1 Gb Ethernet
1763 | IBM Flex System EN2024 4-port 1Gb Ethernet Adapter | Yes | Yes | Yes | Yes

InfiniBand
1761 | IBM Flex System IB6132 2-port QDR InfiniBand Adapter | No | Yes | No | Yes

Fibre Channel
1764 | IBM Flex System FC3172 2-port 8Gb FC Adapter | No | Yes | No | Yes

2.2 Memory DIMM compatibility

This section covers memory DIMMs for both compute node families. It covers the following topics:
- 2.2.1, x86 compute nodes on page 20
- 2.2.2, Power Systems compute nodes on page 21

2.2.1 x86 compute nodes

Table 2-3 lists the memory DIMM options for the x86 compute nodes.

Table 2-3  Supported memory DIMMs - x86 compute nodes

Part number | x-config feature | e-config feature (a)(b) | Description | x220 | x240 | x440

Unbuffered DIMM (UDIMM) modules
49Y1403 | A0QS | EEM2 / A0QS | 2GB (1x2GB, 1Rx8, 1.35V) PC3L-10600 CL9 ECC DDR3 1333MHz LP UDIMM | Yes | No | No
49Y1404 | 8648 | EEM3 / 8648 | 4GB (1x4GB, 2Rx8, 1.35V) PC3L-10600 CL9 ECC DDR3 1333MHz LP UDIMM | Yes | Yes | Yes

Registered DIMMs (RDIMMs) - 1333 MHz and 1066 MHz
49Y1405 | 8940 | EM05 / None | 2GB (1x2GB, 1Rx8, 1.35V) PC3L-10600 CL9 ECC DDR3 1333MHz LP RDIMM | No | Yes | No
49Y1406 | 8941 | EEM4 / 8941 | 4GB (1x4GB, 1Rx4, 1.35V) PC3L-10600 CL9 ECC DDR3 1333MHz LP RDIMM | Yes | Yes | Yes
49Y1407 | 8942 | EM09 / 8942 | 4GB (1x4GB, 2Rx8, 1.35V) PC3L-10600 CL9 ECC DDR3 1333MHz LP RDIMM | Yes | Yes | Yes
49Y1397 | 8923 | EM17 / 8923 | 8GB (1x8GB, 2Rx4, 1.35V) PC3L-10600 CL9 ECC DDR3 1333MHz LP RDIMM | Yes | Yes | Yes
49Y1563 | A1QT | EM33 / A1QT | 16GB (1x16GB, 2Rx4, 1.35V) PC3L-10600 CL9 ECC DDR3 1333MHz LP RDIMM | Yes | Yes | Yes
49Y1400 | 8939 | EEM1 / 8939 | 16GB (1x16GB, 4Rx4, 1.35V) PC3L-8500 CL7 ECC DDR3 1066MHz LP RDIMM | Yes | Yes | No
90Y3101 | A1CP | EEM7 / None | 32GB (1x32GB, 4Rx4, 1.35V) PC3L-8500 CL7 ECC DDR3 1066MHz LP RDIMM | No | No | No

Registered DIMMs (RDIMMs) - 1600 MHz
49Y1559 | A28Z | EEM5 / A28Z | 4GB (1x4GB, 1Rx4, 1.5V) PC3-12800 CL11 ECC DDR3 1600MHz LP RDIMM | Yes | Yes | Yes
90Y3178 | A24L | EEMC / A24L | 4GB (1x4GB, 2Rx8, 1.5V) PC3-12800 CL11 ECC DDR3 1600MHz LP RDIMM | Yes | Yes | No
90Y3109 | A292 | EEM9 / A292 | 8GB (1x8GB, 2Rx4, 1.5V) PC3-12800 CL11 ECC DDR3 1600MHz LP RDIMM | Yes | Yes | Yes
00D4968 | A2U5 | EEMB / A2U5 | 16GB (1x16GB, 2Rx4, 1.5V) PC3-12800 CL11 ECC DDR3 1600MHz LP RDIMM | Yes | Yes | Yes

Load-reduced DIMMs (LRDIMMs)
49Y1567 | A290 | EEM6 / A290 | 16GB (1x16GB, 4Rx4, 1.35V) PC3L-10600 CL9 ECC DDR3 1333MHz LP LRDIMM | No | Yes | Yes
90Y3105 | A291 | EEM8 / A291 | 32GB (1x32GB, 4Rx4, 1.35V) PC3L-10600 CL9 ECC DDR3 1333MHz LP LRDIMM | Yes | Yes | Yes

a. There are two e-config (AAS) feature codes for some options. The first is for the x240, p24L, p260, and p460 (when supported). The second is for the x220 and x440.
b. For memory DIMMs, the first feature code listed will result in two DIMMs each, whereas the second feature code listed contains only one DIMM each.

2.2.2 Power Systems compute nodes

Table 2-4 lists the supported memory DIMMs for Power Systems compute nodes.

Table 2-4  Supported memory DIMMs - Power Systems compute nodes

Part number | e-config feature | Description | p24L | p260 22X | p260 23X | p460
78P1011 | EM04 | 2x 2 GB DDR3 RDIMM 1066 MHz | Yes | Yes | No | Yes
78P0501 | 8196 | 2x 4 GB DDR3 RDIMM 1066 MHz | Yes | Yes | Yes | Yes
78P0502 | 8199 | 2x 8 GB DDR3 RDIMM 1066 MHz | Yes | Yes | No | Yes
78P1917 | EEMD | 2x 8 GB DDR3 RDIMM 1066 MHz | Yes | Yes | Yes | Yes
78P0639 | 8145 | 2x 16 GB DDR3 RDIMM 1066 MHz | Yes | Yes | No | Yes
78P1915 | EEME | 2x 16 GB DDR3 RDIMM 1066 MHz | Yes | Yes | Yes | Yes
78P1539 | EEMF | 2x 32 GB DDR3 RDIMM 1066 MHz | Yes | Yes | Yes | Yes
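Because each Power Systems memory feature in Table 2-4 delivers a pair of DIMMs, total installed capacity is quantity x 2 x DIMM size. The sketch below (a hypothetical helper for capacity planning, not from the guide) encodes a few of the feature codes from the table:

```python
# Hypothetical capacity-planning helper: each Power Systems memory feature in
# Table 2-4 ships a pair of DIMMs, so capacity = quantity * 2 * size_gb.
DIMM_PAIR_SIZE_GB = {
    "EM04": 2,   # 2x 2 GB DDR3 RDIMM 1066 MHz
    "8196": 4,   # 2x 4 GB
    "8199": 8,   # 2x 8 GB
    "8145": 16,  # 2x 16 GB
    "EEMF": 32,  # 2x 32 GB
}

def total_memory_gb(order: dict) -> int:
    """Total installed memory in GB for an order of {feature_code: quantity}."""
    return sum(qty * 2 * DIMM_PAIR_SIZE_GB[fc] for fc, qty in order.items())
```

For example, ordering four of feature 8199 yields eight 8 GB DIMMs, or 64 GB in total.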
2.3 Internal storage compatibility

This section covers supported internal storage for both compute node families. It covers the following topics:
- 2.3.1, x86 compute nodes: 2.5-inch drives on page 22
- 2.3.2, x86 compute nodes: 1.8-inch drives on page 23
- 2.3.3, Power Systems compute nodes on page 24

2.3.1 x86 compute nodes: 2.5-inch drives

Table 2-5 lists the 2.5-inch drives for x86 compute nodes.

Table 2-5  Supported 2.5-inch SAS and SATA drives

Part number | x-config feature | e-config feature (a) | Description | x220 | x240 | x440

10K SAS hard disk drives
90Y8877 | A2XC | None / A2XC | IBM 300GB 10K 6Gbps SAS 2.5" SFF G2HS HDD | N | N | Y
42D0637 | 5599 | 3743 / 5599 | IBM 300GB 10K 6Gbps SAS 2.5" SFF Slim-HS HDD | Y | Y | N
44W2264 | 5413 | None / 5599 | IBM 300GB 10K 6Gbps SAS 2.5" SFF Slim-HS SED | N | N | Y
90Y8872 | A2XD | None / A2XD | IBM 600GB 10K 6Gbps SAS 2.5" SFF G2HS HDD | N | N | Y
49Y2003 | 5433 | 3766 / 5433 | IBM 600GB 10K 6Gbps SAS 2.5" SFF Slim-HS HDD | Y | Y | N
81Y9650 | A282 | EHD4 / A282 | IBM 900GB 10K 6Gbps SAS 2.5" SFF HS HDD | Y | Y | Y

15K SAS hard disk drives
90Y8926 | A2XB | None / A2XB | IBM 146GB 15K 6Gbps SAS 2.5" SFF G2HS HDD | N | N | Y
42D0677 | 5536 | EHD1 / 5536 | IBM 146GB 15K 6Gbps SAS 2.5" SFF Slim-HS HDD | Y | Y | N
81Y9670 | A283 | EHD5 / A283 | IBM 300GB 15K 6Gbps SAS 2.5" SFF HS HDD | Y | Y | Y

NL SAS hard disk drives
81Y9690 | A1P3 | EHD6 / A1P3 | IBM 1TB 7.2K 6Gbps NL SAS 2.5" SFF HS HDD | Y | Y | Y
90Y8953 | A2XE | None / A2XE | IBM 500GB 7.2K 6Gbps NL SAS 2.5" SFF G2HS HDD | N | N | Y
42D0707 | 5409 | EHD2 / 5409 | IBM 500GB 7200 6Gbps NL SAS 2.5" SFF HS HDD | Y | Y | N

NL SATA hard disk drives
81Y9730 | A1AV | EHD9 / A1AV | IBM 1TB 7.2K 6Gbps NL SATA 2.5" SFF HS HDD | Y | Y | Y
81Y9722 | A1NX | EHD7 / A1NX | IBM 250GB 7.2K 6Gbps NL SATA 2.5" SFF HS HDD | Y | Y | Y
81Y9726 | A1NZ | EHD8 / A1NZ | IBM 500GB 7.2K 6Gbps NL SATA 2.5" SFF HS HDD | Y | Y | Y

Solid-state drives - Enterprise
00W1125 | A3HR | None / A3HR | IBM 100GB SATA 2.5" MLC HS Enterprise SSD | Y | Y | Y
43W7746 | 5420 | None / 5420 | IBM 200GB SATA 1.8" MLC SSD | Y | Y | Y
43W7718 | A2FN | EHD3 / A2FN | IBM 200GB SATA 2.5" MLC HS SSD | Y | Y | Y
43W7726 | 5428 | None / 5428 | IBM 50GB SATA 1.8" MLC SSD | Y | Y | Y

Solid-state drives - Enterprise value
49Y5839 | A3AS | None / A3AS | IBM 64GB SATA 2.5" MLC HS Enterprise Value SSD | Y | Y | N
90Y8648 | A2U4 | EHDD / A2U4 | IBM 128GB SATA 2.5" MLC HS Enterprise Value SSD | Y | Y | Y
90Y8643 | A2U3 | EHDC / A2U3 | IBM 256GB SATA 2.5" MLC HS Enterprise Value SSD | Y | Y | Y
49Y5844 | A3AU | None / A3AU | IBM 512GB SATA 2.5" MLC HS Enterprise Value SSD | Y | Y | N

a. There are two e-config (AAS) feature codes for some options. The first is for the x240, p24L, p260, and p460 (when supported). The second is for the x220 and x440.

2.3.2 x86 compute nodes: 1.8-inch drives

The x86 compute nodes support 1.8-inch solid-state drives with the addition of the ServeRAID M5115 RAID controller plus the appropriate enablement kits. For details about configurations, see ServeRAID M5115 SAS/SATA Controller for IBM Flex System, TIPS0884.

Tip: The ServeRAID M5115 RAID controller is installed in I/O expansion slot 1, but can be installed along with the Compute Node Fabric Connector (aka periscope connector) used to connect the onboard Ethernet controller to the chassis midplane.
Table 2-6 lists the supported enablement kits and Features on Demand activation upgrades available for use with the ServeRAID M5115.

Table 2-6  ServeRAID M5115 compatibility

Part number | Feature code (a) | Description | x220 | x240 | x440
90Y4390 | A2XW | ServeRAID M5115 SAS/SATA Controller for IBM Flex System | Yes | Yes | Yes

Hardware enablement kits - IBM Flex System x220 Compute Node
90Y4424 | A35L | ServeRAID M5100 Series Enablement Kit for IBM Flex System x220 | Yes | No | No
90Y4425 | A35M | ServeRAID M5100 Series IBM Flex System Flash Kit for x220 | Yes | No | No
90Y4426 | A35N | ServeRAID M5100 Series SSD Expansion Kit for IBM Flex System x220 | Yes | No | No

Hardware enablement kits - IBM Flex System x240 Compute Node
90Y4342 | A2XX | ServeRAID M5100 Series Enablement Kit for IBM Flex System x240 | No | Yes | No
90Y4341 | A2XY | ServeRAID M5100 Series IBM Flex System Flash Kit for x240 | No | Yes | No
90Y4391 | A2XZ | ServeRAID M5100 Series SSD Expansion Kit for IBM Flex System x240 | No | Yes (b) | No

Hardware enablement kits - IBM Flex System x440 Compute Node
46C9030 | A3DS | ServeRAID M5100 Series Enablement Kit for IBM Flex System x440 | No | No | Yes
46C9031 | A3DT | ServeRAID M5100 Series IBM Flex System Flash Kit for x440 | No | No | Yes
46C9032 | A3DU | ServeRAID M5100 Series SSD Expansion Kit for IBM Flex System x440 | No | No | Yes

Feature on Demand licenses (for all three compute nodes)
90Y4410 | A2Y1 | ServeRAID M5100 Series RAID 6 Upgrade for IBM Flex System | Yes | Yes | Yes
90Y4412 | A2Y2 | ServeRAID M5100 Series Performance Upgrade for IBM Flex System | Yes | Yes | Yes
90Y4447 | A36G | ServeRAID M5100 Series SSD Caching Enabler for IBM Flex System | Yes | Yes | Yes

a. The feature codes listed here are for both x-config (HVEC) and e-config (AAS), with the exception of those for the x240, which are for HVEC only.
b. If the ServeRAID M5100 Series SSD Expansion Kit (90Y4391) is installed, the x240 USB Enablement Kit (49Y8119) cannot also be installed. The x240 USB Enablement Kit and the SSD Expansion Kit both include special air baffles that cannot be installed at the same time.

Table 2-7 lists the drives supported in conjunction with the ServeRAID M5115 RAID controller.

Table 2-7  Supported 1.8-inch solid-state drives

Part number | Feature code (a) | Description | x220 | x240 | x440
43W7746 | 5420 | IBM 200GB SATA 1.8" MLC SSD | Yes | Yes | Yes
43W7726 | 5428 | IBM 50GB SATA 1.8" MLC SSD | Yes | Yes | Yes
49Y5993 | A3AR | IBM 512GB SATA 1.8" MLC Enterprise Value SSD | No | No | No
49Y5834 | A3AQ | IBM 64GB SATA 1.8" MLC Enterprise Value SSD | No | No | No

a. The feature codes listed here are for both x-config (HVEC) and e-config (AAS), with the exception of those for the x240, which are for HVEC only.

2.3.3 Power Systems compute nodes

Local storage options for Power Systems compute nodes are shown in Table 2-8. None of the available drives are hot-swappable. The local drives (HDD or SSD) are mounted to the top cover of the system. If you use local drives, you must order the appropriate cover with connections for your wanted drive type. The maximum number of drives that can be installed in any Power Systems compute node is two. SSD and HDD drives cannot be mixed.

Table 2-8  Local storage options for Power Systems compute nodes

e-config feature | Description | p24L | p260 | p460

2.5-inch SAS HDDs
8274 | 300 GB 10K RPM non-hot-swap 6 Gbps SAS | Yes | Yes | Yes
8276 | 600 GB 10K RPM non-hot-swap 6 Gbps SAS | Yes | Yes | Yes
8311 | 900 GB 10K RPM non-hot-swap 6 Gbps SAS | Yes | Yes | Yes
7069 | Top cover with HDD connectors for the p260 and p24L | Yes | Yes | No
7066 | Top cover with HDD connectors for the p460 | No | No | Yes

1.8-inch SSDs
8207 | 177 GB SATA non-hot-swap SSD | Yes | Yes | Yes
7068 | Top cover with SSD connectors for the p260 and p24L | Yes | Yes | No
7065 | Top cover with SSD connectors for the p460 | No | No | Yes

No drives
7067 | Top cover for no drives on the p260 and p24L | Yes | Yes | No
7005 | Top cover for no drives on the p460 | No | No | Yes

2.4 Embedded virtualization

The x86 compute nodes support an IBM standard USB flash drive (USB Memory Key) option preinstalled with VMware ESXi or VMware vSphere. It is fully contained on the flash drive, without requiring any disk space. On the x240, the USB memory keys plug into the USB ports on the optional x240 USB Enablement Kit. On the x220 and x440, the USB memory keys plug directly into USB ports on the system board. Table 2-9 lists the ordering information for the VMware hypervisor options.

Table 2-9  IBM USB Memory Key for VMware hypervisors

Part number | x-config feature | e-config feature (a) | Description | x220 | x240 | x440
49Y8119 | A33M | None / None | x240 USB Enablement Kit | No | Yes (b) | No
41Y8300 | A2VC | EBK3 / A2VC | IBM USB Memory Key for VMware ESXi 5.0 | Yes | Yes | Yes
41Y8307 | A383 | None / A383 | IBM USB Memory Key for VMware ESXi 5.0 Update 1 | Yes | Yes | Yes
41Y8298 | A2G0 | None / A2G0 | IBM Blank USB Memory Key for VMware ESXi Downloads | Yes | Yes | Yes

a. There are two e-config (AAS) feature codes for some options. The first is for the x240, p24L, p260, and p460 (when supported). The second is for the x220 and x440.
b. If the x240 USB Enablement Kit (49Y8119) is installed, the ServeRAID M5100 Series SSD Expansion Kit (90Y4391) cannot also be installed. The x240 USB Enablement Kit and the SSD Expansion Kit both include special air baffles that cannot be installed at the same time.

You can use the Blank USB Memory Key, 41Y8298, to use any available IBM customized version of the VMware hypervisor. The VMware vSphere hypervisor with IBM customizations can be downloaded from the following website:
http://ibm.com/systems/x/os/vmware/esxi

Power Systems compute nodes do not support VMware ESXi installed on a USB Memory Key.
Power Systems compute nodes support IBM PowerVM as standard. These servers support virtual servers, also known as logical partitions or LPARs. The maximum number of virtual servers is 10 times the number of cores in the compute node:

- p24L: Up to 160 virtual servers (10 x 16 cores)
- p260: Up to 160 virtual servers (10 x 16 cores)
- p460: Up to 320 virtual servers (10 x 32 cores)

Chapter 2. Compute node component compatibility  25

2.5 Expansion node compatibility

This section describes the two expansion nodes and the components that are compatible with each:

- 2.5.1, "Compute nodes" on page 26
- 2.5.2, "Flex System I/O adapters - PCIe Expansion Node" on page 26
- 2.5.3, "PCIe I/O adapters - PCIe Expansion Node" on page 27
- 2.5.4, "Internal storage - Storage Expansion Node" on page 28
- 2.5.5, "RAID upgrades - Storage Expansion Node" on page 29

2.5.1 Compute nodes

Table 2-10 lists the expansion nodes and their compatibility with compute nodes.

Table 2-10 Expansion node compatibility matrix - compute nodes

System x     x-config      e-config                                              Supported servers
part number  feature code  feature code  Description                             x220    x240    x440  p24L  p260 22X  p260 23X  p460
81Y8983      A1BV          A1BV          IBM Flex System PCIe Expansion Node     Yes(a)  Yes(a)  No    No    No        No        No
68Y8588      A3JF          A3JF          IBM Flex System Storage Expansion Node  Yes(a)  Yes(a)  No    No    No        No        No

a.
The x220 and x240 both require that the second processor be installed.

2.5.2 Flex System I/O adapters - PCIe Expansion Node

The PCIe Expansion Node supports the adapters listed in Table 2-11.

Storage Expansion Node: The Storage Expansion Node does not include connectors for additional I/O adapters.

Table 2-11 I/O adapter compatibility matrix - expansion nodes

System x     x-config      e-config                                                     Supported in PCIe
part number  feature code  feature code(a)  I/O adapters                                Expansion Node
Ethernet adapters
49Y7900      A1BR          1763 / A1BR      EN2024 4-port 1Gb Ethernet Adapter          Yes
90Y3466      A1QY          EC2D / A1QY      EN4132 2-port 10Gb Ethernet Adapter         Yes(b)
None         None          1762 / None      EN4054 4-port 10Gb Ethernet Adapter         No
90Y3554      A1R1          1759 / A1R1      CN4054 10Gb Virtual Fabric Adapter          Yes(b)
90Y3558      A1R0          1760 / A1R0      CN4054 Virtual Fabric Adapter Upgrade(c)    Yes
None         None          EC24 / None      CN4058 8-port 10Gb Converged Adapter        No
None         None          EC26 / None      EN4132 2-port 10Gb RoCE Adapter             No
Fibre Channel adapters
69Y1938      A1BM          1764 / A1BM      FC3172 2-port 8Gb FC Adapter                Yes
95Y2375      A2N5          EC25 / A2N5      FC3052 2-port 8Gb FC Adapter                Yes
88Y6370      A1BP          EC2B / A1BP      FC5022 2-port 16Gb FC Adapter               Yes
InfiniBand adapters
90Y3454      A1QZ          EC2C / A1QZ      IB6132 2-port FDR InfiniBand Adapter        Yes
None         None          1761 / None      IB6132 2-port QDR InfiniBand Adapter        No
SAS
90Y4390      A2XW          None / A2XW      ServeRAID M5115 SAS/SATA Controller         No

a. There are two e-config (AAS) feature codes for some options. The first is for the x240, p24L, p260, and p460 (when supported). The second is for the x220 and x440.
b. Operates at PCIe 2.0 speeds when installed in the PCIe Expansion Node. For best performance, install the adapter directly in the compute node.
c. Features on Demand (software) upgrade to enable FCoE and iSCSI on the CN4054. One upgrade is needed per adapter.

2.5.3 PCIe I/O adapters - PCIe Expansion Node

The PCIe Expansion Node supports up to four standard PCIe 2.0 adapters:

- Two PCIe 2.0 x16 slots that support full-length, full-height adapters (1x, 2x, 4x, 8x, and 16x adapters supported)
- Two PCIe 2.0 x8 slots that support low-profile adapters (1x, 2x, 4x, and 8x adapters supported)

Storage Expansion Node: The Storage Expansion Node does not include connectors for PCIe I/O adapters.

Table 2-12 lists the supported adapters. Some adapters must be installed in one of the full-height slots as noted. If the NVIDIA Tesla M2090 is installed in the Expansion Node, an adapter cannot be installed in the other full-height slot. The low-profile slots and Flex System I/O expansion slots can still be used, however.

Table 2-12 Supported adapter cards

System x     x-config  e-config                                                              Maximum
part number  feature   feature   Description                                                 supported
46C9078      A3J3      A3J3      IBM 365GB High IOPS MLC Mono Adapter (low-profile adapter)  4
46C9081      A3J4      A3J4      IBM 785GB High IOPS MLC Mono Adapter (low-profile adapter)  4
81Y4519      5985      5985      640GB High IOPS MLC Duo Adapter (full-height adapter)       2
81Y4527      A1NB      A1NB      1.28TB High IOPS MLC Duo Adapter (full-height adapter)      2
90Y4377      A3DY      A3DY      IBM 1.2TB High IOPS MLC Mono Adapter (low-profile adapter)  4
90Y4397      A3DZ      A3DZ      IBM 2.4TB High IOPS MLC Duo Adapter (full-height adapter)   2
94Y5960      A1R4      A1R4      NVIDIA Tesla M2090 (full-height adapter)                    1(a)

a. If the NVIDIA Tesla M2090 is installed in the Expansion Node, an adapter cannot be installed in the other full-height slot.
The low-profile slots and Flex System I/O expansion slots can still be used.

Consult the IBM ServerProven site for the current list of adapter cards that are supported in the Expansion Node:

http://ibm.com/systems/info/x86servers/serverproven/compat/us/flexsystems.html

Note: Although the design of the Expansion Node allows for a much greater set of standard PCIe adapter cards, the preceding table lists the adapters that are specifically supported. If the PCI Express adapter that you require is not on the ServerProven web site, use the IBM ServerProven Opportunity Request for Evaluation (SPORE) process to confirm compatibility in the desired configuration.

2.5.4 Internal storage - Storage Expansion Node

The Storage Expansion Node adds 12 drive bays to the attached compute node. The expansion node supports 2.5-inch drives, either HDDs or SSDs.

PCIe Expansion Node: The PCIe Expansion Node does not support any HDDs or SSDs.

Table 2-13 shows the hard disk drives and solid-state drives supported in the Storage Expansion Node. Both SSDs and HDDs can be installed in the unit at the same time, although as a best practice, logical drives should be built from disks of the same type; for example, for a RAID 1 pair, choose identical drive types, either SSD or HDD.

Table 2-13 HDDs and SSDs supported in the Storage Expansion Node

System x     x-config      e-config
part number  feature code  feature code  Description
NL SATA HDDs
81Y9722      A1NX          A1NX          IBM 250GB 7.2K 6Gbps NL SATA 2.5" SFF HS HDD
81Y9726      A1NZ          A1NZ          IBM 500GB 7.2K 6Gbps NL SATA 2.5" SFF HS HDD
81Y9730      A1AV          A1AV          IBM 1TB 7.2K 6Gbps NL SATA 2.5" SFF HS HDD
10K SAS HDDs
81Y9650      A282          A282          IBM 900GB 10K 6Gbps SAS 2.5" SFF HS HDD
90Y8872      A2XD          A2XD          IBM 600GB 10K 6Gbps SAS 2.5" SFF G2HS HDD
90Y8877      A2XC          A2XC          IBM 300GB 10K 6Gbps SAS 2.5" SFF G2HS HDD
Solid-state drives (SSD)
90Y8643      A2U3          A2U3          IBM 256GB SATA 2.5" MLC HS Enterprise Value SSD
2.5.5 RAID upgrades - Storage Expansion Node

The Storage Expansion Node supports the RAID upgrades listed in Table 2-14.

PCIe Expansion Node: The PCIe Expansion Node does not support any of these upgrades.

Table 2-14 FOD options available for the Storage Expansion Node

System x     x-config      e-config
part number  feature code  feature code  Description
Hardware upgrades
81Y4559      A1WY          A1WY          ServeRAID M5100 Series 1GB Flash/RAID 5 Upgrade for IBM System x
81Y4487      A1J4          A1J4          ServeRAID M5100 Series 512MB Flash/RAID 5 Upgrade for IBM System x
Features on Demand upgrades (license only)
90Y4410      A2Y1          A2Y1          ServeRAID M5100 Series RAID 6 Upgrade for IBM Flex System
90Y4447      A36G          A36G          ServeRAID M5100 Series SSD Caching Enabler for IBM Flex System
90Y4412      A2Y2          A2Y2          ServeRAID M5100 Series Performance Accelerator for IBM Flex System

Chapter 3. Software compatibility

This chapter describes aspects of software compatibility. Topics in this chapter are:

- 3.1, "Operating system support" on page 32
- 3.2, "IBM Fabric Manager" on page 34

Unless otherwise specified, updates or service packs equal to or higher within the same operating system release family and version are also supported. However, newer major versions are not supported unless specifically identified.

For customers interested in deploying operating systems not listed here, IBM can provide hardware-only warranty support. Customers must obtain the operating system and OS software support directly from the operating system vendor or community. For more information, see "Additional OS Information" on the IBM ServerProven web page.

Copyright IBM Corp. 2012, 2013. All rights reserved.
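The support policy stated above can be expressed as a short sketch. This is a hypothetical helper written only to illustrate the rule; the `is_supported` function and the version tuples are assumptions for illustration, not an IBM tool.

```python
# Illustration of the policy described above: within the same OS release
# family and major version, an equal or higher update/service-pack level is
# also supported; a newer major version is not supported unless listed.

def is_supported(installed, baseline):
    """installed and baseline are (family, major_version, update_level) tuples."""
    family, major, update = installed
    b_family, b_major, b_update = baseline
    # Must be the same release family and the same major version ...
    if family != b_family or major != b_major:
        return False
    # ... at the listed update/service-pack level or higher.
    return update >= b_update

# Table 3-1 lists RHEL 6 Server x64 at Update 2 for the x240:
baseline = ("RHEL", 6, 2)
print(is_supported(("RHEL", 6, 4), baseline))  # True: later update, same release
print(is_supported(("RHEL", 6, 1), baseline))  # False: earlier update
print(is_supported(("RHEL", 7, 0), baseline))  # False: newer major version
```

The same reasoning applies to the service-pack levels shown in parentheses throughout the tables in this chapter.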
3.1 Operating system support

For the latest information, see IBM ServerProven at the following website:

http://ibm.com/systems/info/x86servers/serverproven/compat/us/nos/flexmatrix.shtml

3.1.1 x86 compute nodes

Table 3-1 lists the operating systems supported by the x86 compute nodes.

Table 3-1 Operating system support - x86 compute nodes

                                                             x220            x240         x440
Microsoft Windows Server 2012                                Yes             Yes          Yes
Microsoft Windows Server 2008 R2                             Yes (SP1)       Yes (SP1)    Yes (SP1)
Microsoft Windows Server 2008 HPC Edition                    Yes (SP1)       Yes (SP1)    No
Microsoft Windows Server 2008, Datacenter x64 Edition        Yes (SP2)       Yes (SP2)    Yes (SP2)
Microsoft Windows Server 2008, Enterprise x64 Edition        Yes (SP2)       Yes (SP2)    Yes (SP2)
Microsoft Windows Server 2008, Standard x64 Edition          Yes (SP2)       Yes (SP2)    Yes (SP2)
Microsoft Windows Server 2008, Web x64 Edition               Yes (SP2)       Yes (SP2)    Yes (SP2)
Red Hat Enterprise Linux 6 Server x64 Edition                Yes (U2)        Yes (U2)     Yes (U3)
Red Hat Enterprise Linux 5 Server with Xen x64 Edition       Yes (U7)(a)(b)  Yes (U7)(b)  Yes (U8)(b)
Red Hat Enterprise Linux 5 Server x64 Edition                Yes (U7)        Yes (U7)     Yes (U8)
SUSE Linux Enterprise Server 11 for AMD64/EM64T SP2          Yes (SP2)       Yes (SP1)    Yes (SP2)
SUSE Linux Enterprise Server 11 with Xen for AMD64/EM64T SP2 Yes (SP2)(a)(b) Yes (SP1)(b) Yes (SP2)(b)
SUSE Linux Enterprise Server 10 for AMD64/EM64T SP4          Yes (SP4)       Yes (SP4)    Yes (SP4)
VMware ESXi 4.1                                              Yes (U2)(a)     Yes (U2)(c)  Yes (U2)
VMware ESX 4.1                                               Yes (U2)(a)     Yes (U2)(c)  Yes (U2)
VMware vSphere 5                                             Yes(a)          Yes(c)       Yes (U1)
VMware vSphere 5.1                                           Yes(a)          Yes(c)       Yes

a. Xen and VMware hypervisors are not supported with ServeRAID C105 (software RAID), but are supported with the ServeRAID H1135 Controller (90Y4750) and the ServeRAID M5115 Controller (90Y4390).
b. Only pNIC mode is supported with Xen kernels. For support information, see RETAIN Tip H205800 at http://ibm.com/support/entry/portal/docdisplay?lndocid=migr-5090480.
c. The IMM2 Ethernet over USB interface must be disabled using the IMM2 web interface.
For support information, see RETAIN Tip H205897 at http://ibm.com/support/entry/portal/docdisplay?lndocid=migr-5090620.

3.1.2 Power Systems compute nodes

Table 3-2 lists the operating systems supported by the Power Systems compute nodes.

Table 3-2 Operating system support - Power Systems compute nodes

                                                  p24L       p260 22X   p260 23X   p460
IBM AIX Version 7.1                               No         Yes        Yes        Yes
IBM AIX Version 6.1                               No         Yes        Yes        Yes
IBM i 7.1                                         No         Yes        Yes        Yes
IBM i 6.1                                         No         Yes(a)     Yes(a)     Yes(a)
IBM Virtual I/O Server (VIOS) 2.2.1.4             Yes        Yes        No         Yes
IBM Virtual I/O Server (VIOS) 2.2.2.0             Yes        Yes        Yes        Yes
Red Hat Enterprise Linux 5 for IBM POWER          Yes (U7)   Yes (U7)   Yes (U9)   Yes (U7)
Red Hat Enterprise Linux 6 for IBM POWER          Yes (U2)   Yes (U2)   Yes (U3)   Yes (U2)
SUSE Linux Enterprise Server 11 for IBM POWER(b)  Yes (SP2)  Yes (SP2)  Yes (SP2)  Yes (SP2)

a. IBM i 6.1 is supported but cannot be ordered preinstalled from IBM Manufacturing.
b. With current maintenance updates available from SUSE to enable all planned functionality.

Specific technology levels, service pack, and APAR levels are as follows:

For the p260 (model 22X) and p460:
- IBM i 6.1 with i 6.1.1 machine code, or later
- IBM i 7.1 TR4, or later
- VIOS 2.2.1.4, or later
- AIX V7.1 with the 7100-01 Technology Level with Service Pack 3 with APAR IV14284
- AIX V7.1 with the 7100-01 Technology Level with Service Pack 4, or later
- AIX V7.1 with the 7100-00 Technology Level with Service Pack 6, or later
- AIX V6.1 with the 6100-07 Technology Level with Service Pack 3 with APAR IV14283
- AIX V6.1 with the 6100-07 Technology Level with Service Pack 4, or later
- AIX V6.1 with the 6100-06 Technology Level with Service Pack 8, or later
- AIX V5.3 with the 5300-12 Technology Level with Service Pack 6, or later. An IBM AIX 5L V5.3 Service Extension is also required.
For the p260 (model 23X):
- IBM i 6.1 with i 6.1.1 machine code, or later
- IBM i 7.1, or later
- VIOS 2.2.2.0, or later
- AIX V7.1 with the 7100-02 Technology Level, or later
- AIX V6.1 with the 6100-08 Technology Level, or later
- AIX V6.1 with the 6100-07 Technology Level with Service Pack 7¹, or later
- AIX V6.1 with the 6100-06 Technology Level with Service Pack 11¹, or later
- AIX V5.3 with the 5300-12 Technology Level with Service Pack 7, or later. An IBM AIX 5L V5.3 Service Extension is required.

1. Planned availability March 29, 2013

3.2 IBM Fabric Manager

IBM Fabric Manager is a solution that you can use to quickly replace and recover compute nodes in your environment. It accomplishes this task by assigning Ethernet MAC, Fibre Channel WWN, and SAS WWN addresses so that any compute nodes plugged into those bays take on the assigned addresses. This configuration enables the Ethernet and Fibre Channel infrastructure to be configured once, before any compute nodes are connected to the chassis.
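The bay-based address assignment just described can be sketched conceptually: addresses are bound to chassis bays rather than to physical nodes, so a replacement node inserted into a bay inherits that bay's identity. This is a simplified illustration of the idea only; the `BayAddressPool` class and the address values are assumptions for illustration, not the IBM Fabric Manager API.

```python
# Conceptual sketch: the identity follows the bay, not the physical node.
class BayAddressPool:
    def __init__(self):
        # Pre-assigned per-bay identities (illustrative values only).
        self.bays = {}

    def assign(self, bay, mac, fc_wwn, sas_wwn):
        """Bind an Ethernet MAC, FC WWN, and SAS WWN to a chassis bay."""
        self.bays[bay] = {"mac": mac, "fc_wwn": fc_wwn, "sas_wwn": sas_wwn}

    def addresses_for(self, bay):
        # Any compute node plugged into this bay takes on these addresses,
        # so Ethernet configuration and SAN zoning are unchanged on replacement.
        return self.bays[bay]

pool = BayAddressPool()
pool.assign(3,
            mac="00:1A:64:00:00:03",
            fc_wwn="50:05:07:68:05:00:00:03",
            sas_wwn="50:05:07:60:00:00:00:03")

# A failed node in bay 3 is replaced; the new node inherits the same identity.
print(pool.addresses_for(3)["mac"])  # 00:1A:64:00:00:03
```

Because the fabric sees the same MAC and WWN values regardless of which physical node occupies the bay, switch and storage configuration does not need to be repeated after a node swap.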
For information about IBM Fabric Manager, see the following website:

http://www.ibm.com/systems/flex/fabricmanager

The operating systems that IBM Fabric Manager supports are listed in the IBM Flex System Information Center at the following website:

http://publib.boulder.ibm.com/infocenter/flexsys/information/topic/com.ibm.acc.iofm.doc/dw1li_supported_os.html

Table 3-3 lists the adapters that support IBM Fabric Manager and the compute nodes that they can be installed in.

Table 3-3 IBM Fabric Manager support - adapters

Part number  Feature codes(a)  Description                           x220    x240    x440    p24L    p260    p460
Ethernet expansion cards
None         None / 1762       EN4054 4-port 10Gb Ethernet Adapter   N/A(b)  N/A(b)  N/A(b)  Yes     Yes     Yes
90Y3554      A1R1 / 1759       CN4054 10Gb Virtual Fabric Adapter    Yes     Yes     Yes     N/A(b)  N/A(b)  N/A(b)
49Y7900      A1BR / 1763       EN2024 4-port 1Gb Ethernet Adapter    Yes     Yes     Yes     Yes     Yes     Yes
90Y3466      A1QY / EC2D       EN4132 2-port 10Gb Ethernet Adapter   Yes     Yes     Yes     N/A(b)  N/A(b)  N/A(b)
None         None / EC24       CN4058 8-port 10Gb Converged Adapter  N/A(b)  N/A(b)  N/A(b)  Yes     Yes     Yes
None         None / EC26       EN4132 2-port 10Gb RoCE Adapter       N/A(b)  N/A(b)  N/A(b)  Yes     Yes     Yes
Fibre Channel expansion cards
95Y2375      A2N5 / EC25       FC3052 2-port 8Gb FC Adapter          Yes     Yes     Yes     N/A(b)  N/A(b)  N/A(b)
69Y1938      A1BM / 1764       FC3172 2-port 8Gb FC Adapter          Yes     Yes     Yes     Yes     Yes     Yes
88Y6370      A1BP / EC2B       FC5022 2-port 16Gb FC Adapter         Yes     Yes     Yes     N/A(b)  N/A(b)  N/A(b)
InfiniBand expansion cards
None         None / 1761       IB6132 2-port QDR InfiniBand Adapter  N/A(b)  N/A(b)  N/A(b)  No      No      No
90Y3454      A1QZ / EC2C       IB6132 2-port FDR InfiniBand Adapter  No      No      No      N/A(b)  N/A(b)  N/A(b)

a. The first feature code listed is for configurations ordered through System x sales channels (x-config). The second feature code is for configurations ordered through the IBM Power Systems channel (e-config).
b. Not applicable. This combination of adapter and compute node is not supported.
Table 3-4 lists the supported switches.

Table 3-4 IBM Fabric Manager support - switches

Description                                               Part number  Feature codes  IBM Fabric Manager support
Flex System Fabric CN4093 10Gb Converged Scalable Switch  00D5823      A3HH / ESW2    No
Flex System Fabric EN4093R 10Gb Scalable Switch           95Y3309      A3J6 / ESW7    No
Flex System Fabric EN4093 10Gb Scalable Switch            49Y4270      A0TB / 3593    Yes - VLAN failover(a)
Flex System EN2092 1Gb Ethernet Switch                    49Y4294      A0TF / 3598    Yes - VLAN failover(a)
Flex System EN4091 10Gb Ethernet Pass-thru                88Y6043      A1QV / 3700    Yes(b)
Flex System FC5022 16Gb SAN Scalable Switch               88Y6374      A1EH / 3770    Yes(b)
Flex System FC5022 16Gb 24-port SAN Scalable Switch       00Y3324      A3DP / ESW5    Yes(b)
Flex System FC5022 16Gb ESB Switch                        90Y9356      A2RQ / 3771    Yes(b)
Flex System FC3171 8Gb SAN Switch                         69Y1930      A0TD / 3595    Yes(b)
Flex System FC3171 8Gb SAN Pass-thru                      69Y1934      A0TJ / 3591    Yes(b)
Flex System IB6131 InfiniBand Switch                      90Y3450      A1EK / 3699    No

a. VLAN failover (port-based or untagged only) is supported.
b. IBM Fabric Manager is transparent to pass-thru and Fibre Channel switch modules. There is no dependency between IBM Fabric Manager and these modules.

IBM Fabric Manager V3.0 is supported on the following operating systems (see 3.1, "Operating system support" on page 32 for the operating systems supported by each compute node):

- Microsoft Windows 7 (client only)
- Microsoft Windows Server 2003
- Microsoft Windows Server 2003 R2
- Microsoft Windows Server 2008
- Microsoft Windows Server 2008 R2
- Red Hat Enterprise Linux 5
- Red Hat Enterprise Linux 6
- SUSE Linux Enterprise Server 10
- SUSE Linux Enterprise Server 11

IBM Fabric Manager V3.0 is supported on the following web browsers:

- Internet Explorer 8
- Internet Explorer 9
- Firefox 14

IBM Fabric Manager V3.0 is supported on Java Runtime Environment 1.6.

Chapter 4. Storage interoperability

This chapter describes storage subsystem compatibility.
Topics in this chapter are:

- 4.1, "Unified NAS storage" on page 38
- 4.2, "FCoE support" on page 39
- 4.3, "iSCSI support" on page 40
- 4.4, "NPIV support" on page 41
- 4.5, "Fibre Channel support" on page 41

Tip: Use these tables only as a starting point. Configuration support must be verified through the IBM System Storage Interoperation Center (SSIC) found at the following website:

http://ibm.com/systems/support/storage/ssic/interoperability.wss

The tables in this chapter and in SSIC are used primarily to document Fibre Channel SAN and FCoE-attached block storage interoperability, and iSCSI storage when hardware iSCSI initiator host adapters are used.

4.1 Unified NAS storage

NFS, CIFS, and iSCSI protocols on storage products such as IBM N series, IBM Storwize V7000 Unified, and SONAS are supported with IBM Flex System, based on requirements that include operating system levels. See the interoperability documentation for those products for specific support:

- N series interoperability: http://ibm.com/support/docview.wss?uid=ssg1S7003897
- IBM Storwize V7000 Unified: http://ibm.com/support/docview.wss?uid=ssg1S1003911
- IBM Storwize V7000: SVC 6.4: http://ibm.com/support/docview.wss?uid=ssg1S1004113; SVC 6.3: http://ibm.com/support/docview.wss?uid=ssg1S1003908
- SONAS: http://pic.dhe.ibm.com/infocenter/sonasic/sonas1ic/index.jsp?topic=%2Fcom.ibm.sonas.doc%2Fovr_nfssupportmatrix.html

Software iSCSI: Generally, iSCSI is supported with all types of storage as long as software iSCSI initiators are used on servers running supported operating system and device driver levels.

4.2 FCoE support

This section lists FCoE support. Table 4-1 lists FCoE support using Fibre Channel targets. Table 4-2 on page 40 lists FCoE support using native FCoE targets (that is, end-to-end FCoE).

Tip: Use these tables only as a starting point.
Configuration support must be verified through the IBM System Storage Interoperation Center (SSIC) web site:

http://ibm.com/systems/support/storage/ssic/interoperability.wss

Table 4-1 FCoE support using FC targets

Ethernet adapters: 10Gb onboard LOM (x240) + FCoE upgrade, 90Y9310; 10Gb onboard LOM (x440) + FCoE upgrade, 90Y9310; CN4054 10Gb Adapter, 90Y3554 + FCoE upgrade, 90Y3558
Operating systems: Windows Server 2008 R2, SLES 10, SLES 11, RHEL 5, RHEL 6, ESX 4.1, vSphere 5.0
- Flex System I/O module: EN4091 10Gb Ethernet Pass-thru (vNIC2 and pNIC); FC Forwarder (FCF): Cisco Nexus 5010, Cisco Nexus 5020; Supported SAN fabric: Cisco MDS 9124, Cisco MDS 9148, Cisco MDS 9513; Storage targets: DS8000, SVC, IBM Storwize V7000, V7000 Storage Node (FC), TS3200, TS3310, TS3500
- Flex System I/O module: EN4093 10Gb Switch or EN4093R 10Gb Switch (vNIC1, vNIC2, UFP, and pNIC); FC Forwarder (FCF): Cisco Nexus 5548, Cisco Nexus 5596; Supported SAN fabric: Cisco MDS; Storage targets: DS8000, SVC, Storwize V7000, V7000 Storage Node (FC), IBM XIV
- Flex System I/O module: EN4093 10Gb Switch or EN4093R 10Gb Switch (pNIC only); FC Forwarder (FCF): Brocade VDX 6730; Supported SAN fabric: IBM B-type; Storage targets: DS8000, SVC, Storwize V7000, V7000 Storage Node (FC), IBM XIV
- Flex System I/O module: CN4093 10Gb Converged Switch (vNIC1, vNIC2, and pNIC), acting as the FCF; Supported SAN fabric: IBM B-type, Cisco MDS; Storage targets: DS8000, SVC, Storwize V7000, V7000 Storage Node (FC), IBM XIV

Ethernet adapter: CN4058 8-port 10Gb Converged Adapter, EC24
Operating systems: AIX 6.1, AIX 7.1, VIOS 2.2, SLES 11.2, RHEL 6.3
- Flex System I/O module: EN4093 10Gb Switch or EN4093R 10Gb Switch (pNIC only); FC Forwarder (FCF): Cisco Nexus 5548, Cisco Nexus 5596; Supported SAN fabric: Cisco MDS; Storage targets: DS8000, SVC, Storwize V7000, V7000 Storage Node (FC), IBM XIV
- Flex System I/O module: CN4093 10Gb Converged Switch (pNIC only), acting as the FCF; Supported SAN fabric: IBM B-type, Cisco MDS; Storage targets: DS8000, SVC, Storwize V7000, V7000 Storage Node (FC), IBM XIV
Table 4-2 FCoE support using FCoE targets (end-to-end FCoE)

- Ethernet adapters: 10Gb onboard LOM (x240) + FCoE upgrade, 90Y9310; 10Gb onboard LOM (x440) + FCoE upgrade, 90Y9310; CN4054 10Gb Adapter, 90Y3554 + FCoE upgrade, 90Y3558; Flex System I/O module: CN4093 10Gb Converged Switch (vNIC1, vNIC2, and pNIC); Operating systems: Windows Server 2008 R2, SLES 10, SLES 11, RHEL 5, RHEL 6, ESX 4.1, vSphere 5.0; Storage targets: V7000 Storage Node (FCoE)
- Ethernet adapter: CN4058 8-port 10Gb Converged Adapter, EC24; Flex System I/O module: CN4093 10Gb Converged Switch (pNIC only); Operating systems: AIX 6.1, AIX 7.1, VIOS 2.2, SLES 11.2, RHEL 6.3; Storage targets: V7000 Storage Node (FCoE)

4.3 iSCSI support

Table 4-3 lists iSCSI support using a hardware-based iSCSI initiator.

The IBM System Storage Interoperation Center normally lists support only for iSCSI storage attached using hardware iSCSI offload adapters in the servers. Flex System compute nodes support any type of iSCSI (1Gb or 10Gb) storage as long as the software iSCSI initiator device drivers meet the storage requirements for operating system and device driver levels.

Tip: Use these tables only as a starting point. Configuration support must be verified through the IBM System Storage Interoperation Center (SSIC) web site:

http://ibm.com/systems/support/storage/ssic/interoperability.wss

Table 4-3 Hardware-based iSCSI support

- Ethernet adapters: 10Gb onboard LOM (x240)(a); 10Gb onboard LOM (x440)(a); CN4054 10Gb Virtual Fabric Adapter, 90Y3554(b); Flex System I/O modules: EN4093 10Gb Switch (vNIC1, vNIC2, UFP, and pNIC), EN4093R 10Gb Switch (vNIC1, vNIC2, UFP, and pNIC); Operating systems: Windows Server 2008 R2, SLES 10 and 11, RHEL 5 and 6, ESX 4.1, vSphere 5.0; Storage targets: SVC, Storwize V7000, V7000 Storage Node (iSCSI), IBM XIV

a. The iSCSI/FCoE upgrade is required: IBM Virtual Fabric Advanced Software Upgrade (LOM), 90Y9310.
b. The iSCSI/FCoE upgrade is required: IBM Flex System CN4054 Virtual Fabric Adapter Upgrade, 90Y3558.

4.4 NPIV support

NPIV is supported on all Fibre Channel and FCoE adapters that are supported in the compute nodes.
See Table 2-1 on page 18 for the list of supported adapters.

IBM i support: IBM i 6.1 and i 7.1 NPIV attachment for SAN volumes requires 520-byte sectors on those volumes. At this time, only the DS8000, DS5100, and DS5300 SANs have this capability.

4.5 Fibre Channel support

This section discusses Fibre Channel support for IBM Flex System. The following topics are covered:

- 4.5.1, "x86 compute nodes"
- 4.5.2, "Power Systems compute nodes" on page 42

Tip: Use these tables only as a starting point. Not all combinations may be supported. Configuration support must be verified through the IBM System Storage Interoperation Center (SSIC) web site:

http://ibm.com/systems/support/storage/ssic/interoperability.wss

4.5.1 x86 compute nodes

Table 4-4 lists Fibre Channel storage support for x86 compute nodes.

Table 4-4 Fibre Channel support: x86 compute nodes

- FC adapters: FC3172 2-port 8Gb FC Adapter, 69Y1938; FC3052 2-port 8Gb FC Adapter, 95Y2375; Flex System I/O modules: FC3171 8Gb switch, 69Y1930; FC3171 8Gb Pass-thru, 69Y1934; FC5022 16Gb 12-port, 88Y6374; FC5022 16Gb 24-port, 00Y3324; FC5022 16Gb 24-port ESB, 90Y9356; External SAN fabric: Cisco MDS, IBM b-type, Brocade; Operating systems: Microsoft Windows Server 2008, RHEL 5, RHEL 6, SLES 10, SLES 11, ESX 4.1, vSphere 5.0, vSphere 5.1; FC storage targets: V7000 Storage Node (FC), DS3000, DS5000, DS8000, SVC, V7000, V3500, V3700, XIV, Tape
- FC adapter: FC5022 2-port 16Gb FC Adapter, 88Y6370; Flex System I/O modules: FC5022 16Gb 12-port, 88Y6374; FC5022 16Gb 24-port, 00Y3324; FC5022 16Gb 24-port ESB, 90Y9356; External SAN fabric: IBM b-type, Brocade; Operating systems: Microsoft Windows Server 2008, RHEL 5, RHEL 6, SLES 10, SLES 11, ESX 4.1, vSphere 5.0, vSphere 5.1; FC storage targets: V7000 Storage Node (FC), DS3000, DS5000, DS8000, SVC, V7000, V3500, V3700, XIV, Tape
4.5.2 Power Systems compute nodes

Table 4-5 lists Fibre Channel storage support for Power Systems compute nodes.

Table 4-5 Fibre Channel support: Power Systems compute nodes

- FC expansion card: FC3172 2-port 8Gb FC Adapter, 1764; Flex System I/O modules: FC3171 8Gb switch, 3595; FC3171 8Gb Pass-thru, 3591; FC5022 16Gb 12-port, 3770; FC5022 16Gb 24-port, ESW5; FC5022 16Gb 24-port ESB, 3771; External SAN fabric: IBM b-type, Brocade, Cisco MDS; Operating systems: AIX 6.1, AIX 7.1, VIOS 2.2, SLES 11, RHEL 5, RHEL 6; FC storage targets: V7000 Storage Node (FC), DS8000, SVC, V7000, V3500, V3700, XIV, Tape

Abbreviations and acronyms

APAR    Authorized Problem Analysis Reports
DAC     dual address cycle
DIMM    dual inline memory module
ECC     error checking and correcting
ESB     Enterprise Switch Bundle
FC      Fibre Channel
FDR     fourteen data rate
GB      gigabyte
HDD     hard disk drive
HH      half-high
HPC     high performance computing
HS      hot swap
I/O     input/output
IB      InfiniBand
IBM     International Business Machines
IT      information technology
ITSO    International Technical Support Organization
LOM     LAN on motherboard
LP      low profile
LR      long range
LRDIMM  load-reduced DIMM
MAC     media access control
MDS     Multilayer Director Switch
SAN     storage area network
SAS     Serial Attached SCSI
SATA    Serial ATA
SDD     Subsystem Device Driver
SED     self-encrypting drive
SFF     Small Form Factor
SFP     small form-factor pluggable
SLES    SUSE Linux Enterprise Server
SR      short range
SSD     solid-state drive
SSIC    System Storage Interoperation Center
SVC     SAN Volume Controller
TOR     top of rack
UDIMM   unbuffered DIMM
USB     universal serial bus
VIOS    Virtual I/O Server
WWN     worldwide name

