© Copyright International Business Machines Corporation 2009 1
Deploy Fibre Channel over Ethernet (FCoE) Solution with
IBM® BladeCenter® and System x® using the IBM Converged Switch B32
(Brocade 8000)
Khalid Ansari [email protected]
IBM Advanced Technical Support
Revision History
1.0 June 14, 2009 Initial Version
Notices: This paper is intended to provide information regarding the Fibre Channel over Ethernet (FCoE) solution with IBM BladeCenter and System x. It discusses findings based on configurations that were created and tested under laboratory
conditions. These findings may not be realized in all customer environments, and implementation
in such environments may require additional steps, configurations, and performance analysis. The
information herein is provided “AS IS” with no warranties, express or implied. This information
does not constitute a specification or form part of the warranty for any IBM or non-IBM products.
Information in this document was developed in conjunction with the use of the equipment specified and is limited in application to those specific hardware and software products and levels.
The information contained in this document has not been submitted to any formal IBM test and is
distributed as is. The use of this information or the implementation of these techniques is a
customer responsibility and depends on the customer’s ability to evaluate and integrate them into the customer’s operational environment. While each item may have been reviewed by IBM for
accuracy in a specific situation, there is no guarantee that the same or similar results will be
obtained elsewhere. Customers attempting to adapt these techniques to their own environments do so at their own risk.
IBM may not officially support techniques mentioned in this document. For questions regarding
officially supported techniques, please refer to the product documentation, announcement letters, or contact the IBM Support Line at 1-800-IBM-SERV.
This document makes references to vendor-acquired applications or utilities. It is the customer
responsibility to obtain licenses of these utilities prior to their usage.
© Copyright International Business Machines Corporation 2009. All rights reserved. U.S. Government Users Restricted Rights – Use, duplication, or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
Table of Contents

Table of Contents
Executive Overview
How does Fibre Channel Over Ethernet Work?
    Logical Connectivity
IBM BladeCenter and System x FCoE Product Details
    10Gb Pass-Thru Module Features
    QLogic 2-port 10 Gb Converged Network Adapter (CFFh) for IBM BladeCenter (QMI8142)
    QLogic 2-port 10Gb Converged Network Adapter for IBM System x
    Brocade 2-port 10 Gb Converged Network Adapter for IBM System x
    Brocade 8000 CEE Switch
Implementing Blade Boot from SAN Solution with BOFM Enabled
Useful Links
Trademarks
Executive Overview
This document illustrates the technical implementation details of a Fibre Channel over Ethernet (FCoE) solution with IBM BladeCenter. The objective of this document is to demonstrate the configuration steps for blade boot from FC SAN using the BladeCenter 10Gb Ethernet Pass-Thru Module and the Brocade B32 Converged Enhanced Ethernet (CEE) solution.
The BladeCenter chassis-based FCoE solution also leverages BladeCenter Open Fabric Manager (BOFM) for Ethernet MAC address and Fibre Channel WWPN virtualization, enabling blade pre-provisioning, simplified blade server re-deployment, and automatic blade failover with the Advanced Open Fabric Manager. In addition to address virtualization and automatic blade failover, BOFM simplifies and automates configuration tasks, such as enabling the adapter BIOS and the selectable boot device settings, that are normally performed in the HBA firmware utility. Traditionally, this requires the administrator to interrupt the blade boot process and change the default HBA settings on one or more blades individually, which is tedious and time consuming when done manually on hundreds of blade servers. With BOFM, you simply modify the configuration file and apply the profile to one or more blades in multiple chassis at the same time.
The following sections describe the configuration process for blade boot from SAN using the QLogic CNA attached to a DS4700, and for a System x rack server booting locally and accessing an IBM DS4700 through the QLogic CNA.
The FCoE test environment setup includes the following hardware:
• IBM BladeCenter H Chassis
• IBM HS21XM Blade Server
• 42C1830 - QLogic 2-port 10Gb CNA (CFFh) for IBM BladeCenter
• BladeCenter 10Gb Ethernet Pass-Thru Module
• IBM DS4700 Storage Subsystem
• Brocade 8000 FCoE Switch
The following figure shows the final topology implemented for blade boot from SAN environment.
Figure 1: FCoE Physical Topology
How does Fibre Channel Over Ethernet Work?
Fibre Channel over Ethernet (FCoE) enables you to transport Fibre Channel (FC) protocols and frames over Converged Enhanced Ethernet (CEE) networks. CEE is an enhanced Ethernet that enables the convergence of various applications in data centers (LAN, SAN, and HPC) onto a single interconnect technology.
FCoE provides a method of encapsulating FC traffic over a physical Ethernet link. FCoE frames use a unique EtherType that enables FCoE traffic and standard Ethernet traffic to be carried on the same link. FC frames are encapsulated in an Ethernet packet and sent from one FCoE-aware device across an Ethernet network to another FCoE-aware device. The FCoE-aware devices may be FCoE end nodes (ENodes) such as servers, storage arrays, or tape drives on one end and FCoE forwarders on the other end. FCoE Forwarders (FCFs) are switches providing FC fabric services and FCoE-to-FC bridging function.
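As a rough illustration of this encapsulation (a sketch only, not the full FC-BB-5 frame format, which also carries version bits, reserved fields, and encoded SOF/EOF delimiters), an FCoE frame is an Ethernet frame whose EtherType field is 0x8906:

```python
import struct

FCOE_ETHERTYPE = 0x8906  # EtherType distinguishing FCoE from regular Ethernet traffic

def encapsulate_fc_frame(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    """Wrap a raw FC frame in an Ethernet frame (layout simplified for illustration)."""
    header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    return header + fc_frame

# A device on the path inspects the EtherType to decide whether to hand the
# payload to FC fabric services or to switch it as plain Ethernet.
frame = encapsulate_fc_frame(
    b"\x0e\xfc\x00\x01\x08\x04",  # destination MAC (illustrative values)
    b"\x00\xc0\xdd\x10\x0e\x1b",  # source MAC
    b"FC-PAYLOAD",                # stand-in for an encapsulated FC frame
)
is_fcoe = struct.unpack("!H", frame[12:14])[0] == FCOE_ETHERTYPE
```

Because FCoE and standard Ethernet traffic carry different EtherType values, both can share the same physical link.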
The motivation behind using CEE networks as a transport mechanism for FC arises from the desire to simplify host protocol stacks and consolidate network interfaces in data center environments. FC standards allow for building
highly reliable, high-performance fabrics for shared storage, and these characteristics are what CEE brings to data centers. Therefore, it is logical to consider transporting FC protocols over a reliable CEE network in such a way that it is completely transparent to the applications. The underlying CEE fabric is highly reliable and high performing, the same as the FC SAN.
In FCoE, ENodes discover FCFs and initialize the FCoE connection through the FCoE Initialization Protocol (FIP). FIP has a separate EtherType from FCoE and includes a discovery phase in which ENodes solicit FCFs and FCFs respond to the solicitations with advertisements of their own. At this point, the ENodes know enough about the FCFs to log into them. The fabric login and fabric discovery (FLOGI/FDISC) exchange for a VN_port-to-VF_port connection is also part of FIP. FCoE services include:
• FC fabric services for FCoE VN_port devices, which provide access to FCoE VN_port devices similar to those provided by FC F_ports to FC N_port devices
• FCoE-to-FC switching and translation services:
  - FCoE servers to/from FC targets
  - FCoE targets to/from FC servers
FCoE features include:
• FIP, with solicited and unsolicited FCF advertisements (per the June 10, 2008, version of T11 FC-BB-5)
• FCoE FLOGI
• VF_ports
• N_port ID virtualization (NPIV) on VF_ports (up to 1000 devices per FCF)
• Flow isolation
NOTE
• NPIV is an FC facility allowing multiple N_port IDs to share a single physical N_port. This allows multiple FC initiators to occupy a single physical port.
• For FCoE ENodes, only directly connected links are supported.
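The discovery and login flow described above can be summarized as an ordered exchange. The following sketch is illustrative only; the message names follow FC-BB-5, but this is not a protocol implementation:

```python
FIP_ETHERTYPE = 0x8914  # FIP uses its own EtherType, distinct from FCoE (0x8906)

def fip_login_sequence() -> list:
    """Return the ordered FIP steps an ENode walks through to reach an FCF."""
    return [
        ("ENode -> All-FCF-MACs", "Discovery Solicitation"),
        ("FCF   -> ENode",        "Discovery Advertisement"),
        ("ENode -> FCF",          "FLOGI (VN_port to VF_port fabric login)"),
        ("FCF   -> ENode",        "FLOGI Accept (assigns FC_ID)"),
        ("ENode -> FCF",          "FDISC for each additional VN_port (NPIV)"),
    ]
```

Only after this exchange completes does the ENode begin sending regular FCoE frames to the FCF.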
Logical Connectivity
Figure 2 shows a logical view of the Brocade 8000 CEE switch. It contains
two independent switches, one for CEE and the other for FC, with six embedded FCoE virtual ports bridging the two. Each of these ports is a 10-Gigabit FCoE port. Fabric OS provides FCoE-specific CLI commands for the configuration and management of these FCoE ports.
The FCoE ports differ from the usual concept of a port in a Brocade switch as they are embedded ports and are not directly associated with an external physical port on the switch. Configuration of the FCoE ports through the regular Fabric OS FC CLI is disabled; only the Fabric OS FCoE CLI commands can be used to configure and manage these ports.
The FCoE ports are displayed as “ports” in some show command output. The data displayed for a single FCoE port is the sum of the individual ports comprising it. CEE statistics are not shown for the FCoE ports.
NOTE: In the Fabric OS CLI, the FCoE ports can be enabled or disabled; no other configuration is required. Each embedded FCoE port is used exclusively for FCoE VF_port service and provides four MAC addresses and flow isolation for up to four flows per FCoE port.
Figure 2: FCoE Logical Topology
The FCoE VF_ports provide FC services to FCoE initiators and targets, as well as an FCoE-to-FC bridging service that allows FCoE initiators to access FC targets and conversely allows FC targets to access FCoE initiators.
Brocade's implementation of FCoE for the Brocade 8000 CEE switch provides integral N_port ID Virtualization (NPIV) support. Multiple VN_port devices can log in to a single FCoE VF_port interface. Up to 1,000 VN_port devices can log in to a single Brocade 8000 CEE switch. Each of the embedded FCoE ports supports four logical traffic paths. These four logical traffic paths share the FCoE port bandwidth. Any single traffic path may use the entire 10-Gbps of FCoE port bandwidth if available, but it must share the bandwidth equally with the other logical path traffic flows if more than one path is active. The bandwidth available to any single logical traffic path (and therefore any single FCoE-to-FC traffic flow) is between 2.5 Gbps and 10 Gbps.
While they share the FCoE port bandwidth, the logical traffic paths are independent from one another in the event of downstream congestion. If the traffic flowing on one logical path stalls because of congestion, the traffic flowing on the other logical paths on the same FCoE port is not affected. Note however, that a logical traffic path is not limited to a single FCoE-to-FC traffic flow. If multiple traffic flows are sharing the same logical traffic path, congestion independence between the flows cannot be enforced.
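The bandwidth sharing described above is simple arithmetic; the small helper below (hypothetical, for illustration only) makes the 2.5 to 10 Gbps range explicit:

```python
def per_path_bandwidth_gbps(active_paths: int, port_gbps: float = 10.0) -> float:
    """Bandwidth available to each active logical traffic path on one FCoE port.

    Each embedded FCoE port carries up to four logical paths that share the
    10 Gbps equally when more than one is active, so the result ranges from
    10 Gbps (one active path) down to 2.5 Gbps (all four active).
    """
    if not 1 <= active_paths <= 4:
        raise ValueError("an FCoE port supports one to four logical paths")
    return port_gbps / active_paths
```

For example, with two active paths each receives 5 Gbps; an idle path consumes no share.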
The QLogic 10Gb FCoE adapter, when installed in a rack server, can connect directly to a top-of-rack 10Gb CEE switch. When installed in a blade server, it currently connects to an external 10Gb CEE switch via the 10Gb Pass-thru modules installed in bays 7 and 9 of the BladeCenter H chassis. The following figure shows the logical path from the host to the native FC storage:
Figure 3: Host to Storage logical configuration
At a high level, the following configuration tasks must be completed to establish end-to-end FCoE connectivity:
• The host must have an FCoE adapter installed and connected to the CEE + FCoE switch
• 10Gb Ethernet and 8Gb FC device drivers must be installed on the host
• The link must be established from the FCoE adapter to the corresponding 10Gb CEE port
• The FC device driver logs into the fabric as a VN_port
• The FC target logs into the fabric as an N_port device
• The FCoE host and FC target WWPNs are zoned
• The FC LUN(s) are mapped to the host
• The host sees the LUNs from the FCoE HBA firmware if booting from SAN
• The host sees the LUNs from the OS if booting locally
IBM BladeCenter and System x FCoE Product Details
10Gb Pass-Thru Module Features
• 14 internal and 14 external 10Gb Copper or Optical ports
• 10Gb End-to-end unblocked access with no packet drop
• Low cost solution for Clients to connect to any Top Of Rack 10Gb or CEE
capable Switch
• Part of first FCoE Convergence solution offered on BladeCenter
• BC-H & BC-HT Chassis ONLY
• 14 Internal – 14 External Ports
• No Configuration necessary
• No On-Board Management
QLogic 2-port 10 Gb Converged Network Adapter (CFFh) for IBM BladeCenter (QMI8142)
Figure 4: QLogic 2-port 10 Gb CNA
• Combo Form Factor (CFFh) PCI Express x8 2.0 adapter
• Communication module: QLogic ISP8112
• Two ports with XAUI interfaces for PCI-E x8 to the HSSM
• Support for up to two CEE HSSMs in BC-H or BC-HT chassis
• Support for 10Gb Converged Enhanced Ethernet (CEE)
• Support for Fibre Channel over Converged Enhanced Ethernet (FCoCEE)
• Full hardware offload for Fibre Channel over Converged Enhanced
Ethernet (FCoCEE) protocol processing
• Support for IPv4 and IPv6
• Support for SAN boot over CEE, PXE boot, and iSCSI boot
• Support for Wake on LAN
• Support for BladeCenter Open Fabric Manager for BIOS, UEFI, and FCode
QLogic 2-port 10Gb Converged Network Adapter for IBM System x
• PCI Express x8 2.0 Generation 2 compliance
• Two SFP+ cages for either SFP+ Fiber SR or SFP+ Active Copper
• Standard PCI Express half length card with low profile form factor
• Support for both standard PCI-E slot and low profile PCI-E slot
• Support for 10Gb Converged Enhanced Ethernet (CEE)
• Support for Fibre Channel over Converged Enhanced Ethernet (FCoCEE)
• Full hardware offload for FC protocol processing
• Support for IPv4 and IPv6
• Support for SAN boot over CEE, PXE boot, and iSCSI boot
• Support for BladeCenter Open Fabric Manager for BIOS, UEFI, and FCode
Brocade 2-port 10 Gb Converged Network Adapter for IBM System x
• Compliance with PCI Express x8 2.0 Generation 2 servers
• Two SFP+ cages for either SFP+ Fiber SR or SFP+ Active Copper
• Standard PCI Express half length card with low profile form factor
• Support for both a standard PCI-E slot and a low profile PCI-E slot
• Support for 10Gb Converged Enhanced Ethernet (CEE)
• Support for Fibre Channel over Converged Enhanced Ethernet (FCoCEE)
• Full hardware offload for FCoE protocol processing
• Support for IPv4 and IPv6
• Support for SAN boot over CEE, PXE boot, and iSCSI boot
Brocade 8000 CEE Switch
Fibre Channel ports
• Eight Fibre Channel universal (E, F, M, and FL) ports at 1, 2, 4, and 8 Gbps
CEE ports
• 24 ports with 10 Gigabit Ethernet
FCoE features
• Complete T11 FCoE entity and FCoE bridging
The FCoE translation entity built into the hardware engine provides:
• Detection of Fibre Channel encapsulation and redirection of FCoE fabric login frames
• Encapsulation of Fibre Channel frames in FCoE Ethernet packets (FC > FCoE)
• Extraction of Fibre Channel frames from FCoE Ethernet packets (FCoE > FC)
• Mapping of Fibre Channel destination Virtual Fabrics and destination FC_ID to Ethernet Virtual LAN and destination MAC addresses
Fabric-Provided MAC Addresses (FPMAs) enable new Ethernet MAC addresses to be created using the FC_ID assigned by the fabric
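The FPMA scheme can be made concrete: the MAC address is the 24-bit FC-MAP prefix (the default value 0E:FC:00 comes from FC-BB-5) concatenated with the 24-bit FC_ID the fabric assigned at login. The sketch below is illustrative:

```python
DEFAULT_FC_MAP = 0x0EFC00  # default FC-MAP prefix defined by FC-BB-5

def fpma(fc_id: int, fc_map: int = DEFAULT_FC_MAP) -> str:
    """Build a Fabric-Provided MAC Address from a 24-bit FC_ID.

    FPMA = FC-MAP (upper 24 bits) || FC_ID (lower 24 bits), so the VN_port's
    MAC address is derived from the FC address the fabric assigned it.
    """
    if not 0 <= fc_id <= 0xFFFFFF:
        raise ValueError("FC_ID must be a 24-bit value")
    value = (fc_map << 24) | fc_id
    return ":".join(f"{(value >> shift) & 0xFF:02x}" for shift in range(40, -8, -8))

# e.g. an FC_ID of 0x010804 yields the MAC 0e:fc:00:01:08:04
```

This is why the session MAC addresses in the `fcoe --loginshow` output later in this document all begin with the 0e:fc:00 prefix.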
CEE features
• Data Center Bridging eXchange (DCBX)
• Priority-based Flow Control (PFC) – IEEE 802.1Qbb
• Enhanced Transmission Selection (ETS) – IEEE 802.1Qaz
Performance
• Fibre Channel: 1, 2, 4, and 8 Gbps line speed full duplex
• CEE: 10 Gbps line speed
ISL Trunking
• Frame-based ISL Trunking (optional license) enables up to eight ports
between a pair of switches to be combined into a logical ISL with speeds
of up to 64 Gbps (128 Gbps full duplex) for optimal bandwidth utilization
and load balancing; exchange-based load balancing across ISLs with DPS
(included in Fabric OS)
• Link aggregation (10 Gigabit Ethernet)
• Link Aggregation Control Protocol (LACP), Brocade-enhanced and 802.3ad
standards-based
Maximum frame size
• 2112-byte Fibre Channel payload; 9048-byte Ethernet frame
Classes of service
• Class 2, Class 3, Class F (inter-switch frames)
Port types
• FL_Port, F_Port, M_Port (Mirror Port), E_Port; self-discovery based on
switch type (U_Port); optional port type control
Data traffic types
• Fabric switches supporting unicast, multicast (255 groups), and broadcast
Media types
• Fibre Channel media type: Hot-pluggable, industry-standard Small Form
Factor Pluggable (SFP) and SFP+, LC connector; Short-Wave Laser (SWL)
and Long-Wave Laser (LWL); distance depends on fiber optic cable and
port speed; supports SFP+ (2, 4, and 8 Gbps) and SFP (1, 2, and 4 Gbps)
optical transceivers
• CEE media type: Hot-pluggable, Brocade 10 Gigabit Ethernet SFP+
supports any combination of Short-Reach (SR) and Long-Reach (LR)
optical transceivers; Brocade copper twinax cables of one, three, or five
meters
USB
• 1 USB port for firmware download, support save, and configuration
upload/download
Fibre Channel fabric services
• Simple Name Server (SNS), Registered State Change Notification (RSCN),
NTP, RADIUS, LDAP, Reliable Commit Service (RCS), Dynamic Path
Selection (DPS), Enhanced Group Management (EGM), and Web Tools;
optional fabric services include Fabric Watch, ISL Trunking, and Advanced
Performance Monitoring
CEE services
• Spanning Tree Protocol (STP, MSTP, RSTP), VLAN tagging (802.1Q), MAC
address learning and aging; native FCoE switching; IEEE 802.3ad Link
Aggregation (LACP); access control lists based on VLAN, source address,
destination address, and port; eight priority levels for QoS and 4K VLANs;
Priority-based Flow Control (PFC); Data Center Bridging eXchange
(DCBX) Capabilities Exchange; Enhanced Transmission Selection (ETS)
Licensing options
Fabric OS 6.1.2_cee includes the following optional features that can be enabled
via license keys and are applicable only to the Fibre Channel ports of the Brocade
8000:
• Brocade Fibre Channel ISL Trunking
• Brocade Advanced Performance Monitoring
• Brocade Fabric Watch
Management software
• SSH v2, HTTP/HTTPS, SNMP v1/v3, Telnet; SNMP (FE MIB, FC
Management MIB, RMON, and IF-MIB for CEE); Web Tools; Data Center
Fabric Manager (DCFM) Professional and Enterprise; SMI-S; RADIUS
Management access
• One 10/100/1000 Ethernet port (RJ-45), in-band over Fibre Channel, one
serial port (RJ-45), and one USB port
Implementing Blade Boot from SAN Solution with BOFM Enabled
The following sections illustrate the step-by-step configuration process to boot the blade with the QLogic CNA adapter from a DS4700 attached to the Brocade 8000 CEE switch. The HS21XM blade server with the QLogic CNA adapter is connected to the 10Gb Pass-thru modules installed in bays 7 and 9 of the BladeCenter H chassis. Major setup and configuration tasks:
• Ensure FCoE HBA has the latest firmware that is also compatible with
BOFM
• The Blade BIOS and BMC must be compatible with BOFM
• BOFM is enabled on the blade slot
• Connect the cables from the 10Gb Pass-thru to 10Gb interface on Brocade
8000
• The link status shows UP on the 10Gb Ethernet Interface on Brocade 8000
• Configure the Brocade 8000
o CEE map
o Port based VLAN
o CEE Mode
o Trunking
• Configure and Enable Zoning
• Define Host and Storage Partitioning on the Storage
Note: Verify and confirm that BOFM is enabled on the blade slot and the status is
“Normal”. From the AMM GUI, select Blade Tasks > Configuration > Open Fabric
Manager and select the blade slot. Ports 1 and 2 are the on-board 1Gbps Broadcom NICs,
and ports 5 and 7 are the 10Gbps FCoE ports. Both Ethernet and Fibre Channel ports map to
high-speed switch modules in bays 5 and 7, so when creating the BOFM configuration
you must select ports 5 and 7 to enable BOFM on the QLogic CNA.
The following figure lists the WWPNs of the FCoE HBA ports virtualized by
BOFM; the WWPNs of the storage controller are also applied to the HBA.
Important: Once BOFM is enabled on the blade slot, it automatically performs
the following configuration tasks:
• Enables the QLogic FC adapter BIOS
• Enables the selectable boot settings
• Configures the FC target WWPNs and LUN ID
BOFM eliminates the manual configuration tasks listed above. BOFM also
overwrites any adapter BIOS and selectable boot settings configured
manually from the HBA firmware utility.
Figure 5: BOFM Fibre Channel Port mapping for Qlogic CNA
1. Verify the link status of the 10Gb Ethernet interfaces on the Brocade 8000 CEE.
From the Brocade Web interface, select Port Admin > CEE interfaces as
shown below. The blade is connected to Ethernet interfaces 3 and 14 on
the Brocade 8000 switch via the pass-thru module. The link status on
ports 3 and 14 shows “Online”.
Figure 6: Brocade 8000 CEE Interface Configuration and Management GUI
2. Configure the Brocade 8000 CEE
a. Create CEE map and configure QOS as shown below:
b. Configure LLDP with DCBX attributes:
c. Create an FCoE capable VLAN:
switch:admin> cmsh    (change shell to configure Ethernet)
switch# configure terminal

Create a CEE map to configure PFC (Priority Flow Control) and ETS (Enhanced Transmission Selection):

switch(config)# cee-map fcoe-test1
switch(config-ceemap)# priority-group-table 1 weight 40 pfc
switch(config-ceemap)# priority-group-table 2 weight 60
switch(config-ceemap)# priority-table 2 2 2 1 2 2 2 2
switch(config-ceemap)# exit

switch(config)# protocol lldp
switch(conf-lldp)# advertise dcbx-fcoe-app-tlv
switch(conf-lldp)# advertise dcbx-fcoe-logical-link-tlv
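As a side note on the ETS weights above (40 for the PFC-protected FCoE priority group, 60 for the LAN group): when both groups are fully loaded, link bandwidth divides in proportion to the weights. A quick illustrative calculation (hypothetical helper, not a Fabric OS tool):

```python
def ets_allocation_gbps(weights: dict, link_gbps: float = 10.0) -> dict:
    """Split link bandwidth across priority groups in proportion to ETS weights.

    With the weights configured above, a fully loaded 10 Gbps link gives
    4 Gbps to the FCoE group and 6 Gbps to the LAN group; an idle group's
    share becomes available to the active groups.
    """
    total = sum(weights.values())
    return {group: link_gbps * w / total for group, w in weights.items()}
```

For example, `ets_allocation_gbps({1: 40, 2: 60})` splits a 10 Gbps link into 4 Gbps for group 1 and 6 Gbps for group 2.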
d. Administratively enable the interface:
e. Configure the interfaces as L2 (Configures Trunk Ports):
f. Apply CEE MAP to the interfaces:
The following commands can be used to verify the configuration from the Brocade 8000 CLI. The “fcoe --loginshow” command displays the WWPNs of the FCoE devices connected to the fabric. The WWPNs shown in blue font belong to the blade in bay 12, connected to ports 3 and 14 on the Brocade 8000 switch. The WWPNs listed below should also be available from the zone configuration menu to add as zone members with the FC target WWPNs.
switch(config)# interface vlan 1002
switch(conf-vlan)# fcf forward
switch(conf-vlan)# exit

switch(config)# interface TenGigabitethernet 0/3
switch(conf-if-te-0/3)# no shutdown
switch(config)# interface TenGigabitethernet 0/14
switch(conf-if-te-0/14)# no shutdown

switch(config)# interface TenGigabitethernet 0/3
switch(conf-if-te-0/3)# switchport
switch(conf-if-te-0/3)# switchport mode converged
switch(conf-if-te-0/3)# switchport converged allowed vlan add 1002
switch(config)# interface TenGigabitethernet 0/14
switch(conf-if-te-0/14)# switchport
switch(conf-if-te-0/14)# switchport mode converged
switch(conf-if-te-0/14)# switchport converged allowed vlan add 1002

switch(config)# interface TenGigabitethernet 0/3
switch(conf-if-te-0/3)# cee fcoe-test1
switch(conf-if-te-0/3)# exit
switch(config)# interface TenGigabitethernet 0/14
switch(conf-if-te-0/14)# cee fcoe-test1
switch(conf-if-te-0/14)# exit
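The per-interface commands above repeat the same enable, converged L2, and CEE-map sequence for each port. When many ports are involved, the command lines can be generated rather than typed; the following Python helper is a hypothetical convenience sketch that emits the same Fabric OS 6.1.2_cee syntax shown above:

```python
def converged_port_config(ports, vlan=1002, cee_map="fcoe-test1"):
    """Emit the per-interface CLI lines used above for each TenGigabitEthernet port."""
    lines = []
    for port in ports:
        lines += [
            f"interface TenGigabitethernet 0/{port}",
            " no shutdown",                                    # enable the interface
            " switchport",                                     # configure as L2
            " switchport mode converged",
            f" switchport converged allowed vlan add {vlan}",  # FCoE-capable VLAN
            f" cee {cee_map}",                                 # apply the CEE map
            " exit",
        ]
    return "\n".join(lines)
```

For example, `converged_port_config([3, 14])` produces the command set for the two pass-thru-facing interfaces used in this setup; the output would still be pasted into (or scripted against) the switch CLI.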
The following “switchshow” output displays native FC devices connected to ports 0-7 and FCoE devices connected to ports 8-13.
swd77:admin> switchshow
switchName:     swd77
switchType:     76.7
switchState:    Online
switchMode:     Native
switchRole:     Principal
switchDomain:   1
switchId:       fffc01
switchWwn:      10:00:00:05:1e:76:71:80
zoning:         ON (FCoE_Test)
switchBeacon:   OFF

Area Port Media Speed State     Proto
=====================================
  0    0   id    N4   Online    FC    F-Port  50:0a:09:82:87:e9:5d:03
  1    1   --    N8   No_Module FC
  2    2   id    N4   Online    FC    F-Port  20:15:00:a0:b8:26:12:10
  3    3   --    N8   No_Module FC
  4    4   id    N4   Online    FC    F-Port  50:0a:09:82:97:e9:5d:03
  5    5   --    N8   No_Module FC
  6    6   id    N4   Online    FC    F-Port  20:14:00:a0:b8:26:12:10
  7    7   --    N8   No_Module FC
  8    8   --    10   Online    FCoE  F-Port  3 NPIV public
  9    9   --    10   Online    FCoE  F-Port  3 NPIV public
 10   10   --    10   Online    FCoE  F-Port  1 NPIV public
 11   11   --    10   Online    FCoE  F-Port  2 NPIV public
 12   12   --    10   Online    FCoE  F-Port  20:0c:00:05:1e:76:71:80
 13   13   --    10   Online    FCoE  F-Port  20:0d:00:05:1e:76:71:80
swd77:admin>
swd77:admin> fcoe --loginshow 8
Number of connected devices: 2
===========================================================================================================
Peer Type    Connect Info  Device WWN               Device MAC         Session MAC        FCoE Port MAC      Te port
===========================================================================================================
FCOE_DEVICE  Direct        21:00:00:c0:dd:10:0e:1b  00:c0:dd:10:0e:1b  0e:fc:00:01:08:04  00:05:1e:76:71:00  Te 0/0
FCOE_DEVICE  Direct        21:80:00:e0:8b:00:01:1e  00:c0:dd:10:10:99  0e:fc:00:01:08:03  00:05:1e:76:71:03  Te 0/3
swd77:admin>
swd77:admin> fcoe --loginshow 11
Number of connected devices: 1
===========================================================================================================
Peer Type    Connect Info  Device WWN               Device MAC         Session MAC        FCoE Port MAC      Te port
===========================================================================================================
FCOE_DEVICE  Direct        21:80:00:e0:8b:00:01:1f  00:c0:dd:10:10:9b  0e:fc:00:01:0b:02  00:05:1e:76:71:0e  Te 0/14
swd77:admin>
The following “show running-config” output lists the configuration of the 10Gb Ethernet switch: the CEE map applied, and FCoE forwarding and converged mode enabled on the FCoE interfaces. Note: The “show running-config” output displayed below is truncated and does not list the configuration for all the “TenGigabitEthernet” interfaces.
swd77# sh running-config
!
no protocol spanning-tree
!
cee-map fcoe-test1
 priority-group-table 1 weight 40 pfc
 priority-group-table 2 weight 60
 priority-table 2 2 2 1 2 2 2 2
!
interface Vlan 1
!
interface Vlan 1002
 fcf forward
!
interface TenGigabitEthernet 0/0
 switchport
 switchport mode converged
 switchport converged allowed vlan add 1002
 no shutdown
 lldp fcoe-priority-bits 0x8
 cee fcoe-test1
!
interface TenGigabitEthernet 0/1
 no shutdown
 lldp fcoe-priority-bits 0x8
!
interface TenGigabitEthernet 0/6
 no shutdown
 lldp fcoe-priority-bits 0x8
!
interface TenGigabitEthernet 0/7
 switchport
 switchport mode converged
 switchport converged allowed vlan add 1002
 no shutdown
 lldp fcoe-priority-bits 0x8
 cee fcoe-test1
!
interface TenGigabitEthernet 0/8
 switchport
 switchport mode converged
 switchport converged allowed vlan add 1002
 no shutdown
 lldp fcoe-priority-bits 0x8
 cee fcoe-test1
!
interface TenGigabitEthernet 0/9
 no shutdown
 lldp fcoe-priority-bits 0x8
!
interface TenGigabitEthernet 0/10
 switchport
 switchport mode converged
 switchport converged allowed vlan add 1002
 no shutdown
 cee fcoe-test1
!
interface TenGigabitEthernet 0/13
 shutdown
!
interface TenGigabitEthernet 0/14
 switchport
 switchport mode converged
 switchport converged allowed vlan add 1002
 no shutdown
 cee fcoe-test1
!
interface TenGigabitEthernet 0/15
 shutdown
!
protocol lldp
 advertise dcbx-fcoe-app-tlv
 advertise dcbx-fcoe-logical-link-tlv
!
line console 0
 login
line vty 0 31
 login
!
end
swd77#
Verify Zone Configuration
The “cfgshow” output shows the defined and active zoning on the Brocade 8000 switch. The active zoning includes the zone for the blade in slot 12, “BCH5_BS12p1_DS4700”, which lists the blade and storage WWPNs so that there is a single path from the host to the storage for the initial Microsoft® Windows® 2003 R2 SP2 OS install. Additional paths can be added in the zoning and on the storage once the OS install completes successfully on the boot LUN.

Note: The zone configuration information listed is truncated; it omits the defined configuration and lists only the active configuration.

swd77:admin> cfgshow
Effective configuration:
 cfg:   FCoE_Test
 zone:  BCH5_BS12p1_DS4700
                20:15:00:a0:b8:26:12:10
                21:80:00:e0:8b:00:01:1e
 zone:  BCH5_BS2p1_DS4700
                20:14:00:a0:b8:26:12:10
                20:15:00:a0:b8:26:12:10
                21:00:00:c0:dd:10:10:99
 zone:  BCH5_BS2p2_DS4700
                20:14:00:a0:b8:26:12:10
                20:15:00:a0:b8:26:12:10
                21:00:00:c0:dd:10:10:9b
 zone:  X3550p1_DS4700
                20:14:00:a0:b8:26:12:10
                20:15:00:a0:b8:26:12:10
                21:00:00:c0:dd:10:0e:19
 zone:  X3550p2_DS4700
                20:14:00:a0:b8:26:12:10
                20:15:00:a0:b8:26:12:10
                21:00:00:c0:dd:10:0e:1b
swd77:admin>
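As described above, the boot zone intentionally provides a single path for the OS install. A small illustrative check (hypothetical helper, using WWPNs from the output above) counts initiator-to-target pairings in a zone:

```python
def zone_paths(zone_members, initiator_wwpns):
    """Count initiator/target pairings in a zone.

    One initiator WWPN and one target WWPN means exactly one path, which is
    what the single-path OS install described above requires.
    """
    initiators = [w for w in zone_members if w in initiator_wwpns]
    targets = [w for w in zone_members if w not in initiator_wwpns]
    return len(initiators) * len(targets)

# WWPNs from the cfgshow output above; the blade's CNA port is the initiator.
blade_zone = ["20:15:00:a0:b8:26:12:10", "21:80:00:e0:8b:00:01:1e"]
paths = zone_paths(blade_zone, {"21:80:00:e0:8b:00:01:1e"})
```

With two storage controller WWPNs in a zone, the same check would report two paths, which is why the additional target port is only zoned in after the install completes.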
Note: For instructions to configure zoning, refer to the Brocade FOS guide.

Verify Storage Configuration
From the DS4000® Storage Manager GUI, verify and confirm that the boot LUN is mapped to the host, as shown in the figure below:
Figure 7: DS4000 Logical Drive Mapping View
Once the above configuration steps are completed, the blade is set up to boot from SAN and is ready for OS installation.
Installing Windows 2003 R2 SP2 32 bit Operating System
1. Disable the on-board SAS disk controller as shown in the following figure:
Figure 8: Disable on-board disk controller on HS21XM blade server
2. Download the latest device driver for the FCoE adapter and extract it to a
floppy diskette
3. The FCoE device driver contents should include the files listed in the
following figure:
Figure 9: Windows 2003 Driver Diskette content for Qlogic CNA
4. Set up the boot sequence for the blade so that it boots from CD first. From
the AMM GUI, select Blade Tasks > Configuration > Boot Sequence > Blade.
5. Insert OS CD1 and power on the blade. For this test, we install
Windows 2003 R2 with SP2 integrated.
Note: The integrated Service Pack 2 is required for a successful install of the
Windows 2003 R2 image. Without SP2 integrated, the OS will not load the FCoE
device driver correctly.
6. Once the Windows 2003 R2 SP2 CD is detected, watch for the F6
prompt and press the F6 key promptly.
Figure 10: F6 prompt during Windows 2003 Install
7. If the F6 key was pressed during the initial setup, the prompt shown in the
figure below is displayed, allowing you to load the FCoE driver. Ensure that
the USB floppy diskette is inserted and press “S” to specify an additional
device.
Figure 11: Install FCoE driver during Windows Setup process
8. The following menu is displayed after Windows Setup reads the contents of the
diskette. Setup recognizes the driver and is ready to load it. Press the
Enter key to load the FCoE driver from the diskette.
Figure 12: Windows Setup correctly recognizes the FCoE driver
9. The following Windows Setup menu shows that the FCoE driver was
successfully read from the diskette. Do not remove the diskette, as
Windows Setup still needs to copy the driver from it. Press Enter to
continue; no additional driver is needed for the blade to boot from SAN.
Figure 13: Windows Setup will load the FCoE driver for the QLogic CNA
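Behind the scenes, the F6 diskette works because Windows 2003 text-mode Setup reads a TXTSETUP.OEM file from the diskette that describes the mass-storage driver. The fragment below is only an illustrative sketch of that file format; the file names and the PCI device ID shown here are assumptions, not the actual contents of the QLogic driver package — always use the files shipped with the driver download.

```ini
; Illustrative TXTSETUP.OEM sketch -- NOT the actual QLogic file.
[Disks]
; d1 = "description", tag file on the diskette, driver directory
d1 = "QLogic FCoE Driver Diskette", \disk1, \

[Defaults]
scsi = qlfcoe

[scsi]
; This string appears in the "S = Specify Additional Device" list
qlfcoe = "QLogic 10Gb CNA FCoE STOR Miniport Driver"

[Files.scsi.qlfcoe]
driver = d1, qlfcoe.sys, qlfcoe
inf    = d1, qlfcoe.inf

[HardwareIds.scsi.qlfcoe]
; The device ID below is a placeholder; the real ID ships in the package
id = "PCI\VEN_1077&DEV_xxxx", "qlfcoe"
```

If Setup fails to list the driver at the "S" prompt, a missing or malformed TXTSETUP.OEM on the diskette is a common cause.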
10. Press the F8 key to accept the Windows Licensing Agreement and proceed with
the OS installation.
Figure 14: Windows License Agreement screen
11. The following screen shows that Windows Setup can see the 20 GB
boot LUN via the QLogic CNA. Press C to create a partition.
Figure 15: Windows 2003 Disk Partition menu
12. The following figure shows that a new partition has been created on the
boot LUN (ID = 0, bus = 0) on the qlfcoe adapter:
Figure 16: Windows 2003 Disk Partition menu
13. From the following menu, select the file system and format type, and
press the Enter key.
Figure 17: Windows 2003 Setup, Create File system menu
14. The following figure shows Windows Setup formatting the boot LUN:
Figure 18: Windows 2003 File System Format in progress
15. The following figure shows Windows Setup copying the FCoE device
driver from the USB floppy diskette:
Figure 19: Windows 2003 Setup, copying the FCoE driver
16. The following figure shows the Windows 2003 R2 SP2 OS installation in progress:
Figure 20: Windows 2003 Installing Devices menu
From this point, you can follow the prompts to complete the Windows 2003 OS
installation. Remember that the OS was installed on the boot LUN via a single
path. Download and install the RDAC driver on the host, then add additional
paths from the host to the boot LUN.
Installing Multipath and Failover driver compatible with the Storage
Subsystem in use
1. Download and install the latest MPIO driver, available from the DS4000
download site. The MPIO driver used to access the DS3K/DS4K/DS5K storage
subsystems is embedded in the DS4000 Storage Manager application. For
other storage subsystems, refer to the storage interoperability matrix and
install a compatible driver.
2. Download the latest Storage Manager application package from the
DS3K/DS4K/DS5K storage download site.
3. Install the MPIO driver by selecting the "Host" installation option, as
shown in the following figures:
Figure 21: IBM DS Storage Manager Installation Menu
Figure 22: DS4000 Storage Manager Pre-Installation Summary Menu
4. The following figure shows that the MPIO driver was installed successfully.
Figure 23: Successful MPIO driver installation on the Host
5. The following figure shows that there are multiple paths from the host to
the boot LUN once the MPIO driver is installed:
Figure 24: The Windows Device Manager shows the MPIO driver is installed
6. You can perform a failover test by disabling one path at a time between the host and the storage. This can be done by disabling the switch ports to which the blade or storage ports are connected.
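Disabling a switch port for this test can be done from the Brocade 8000 Fabric OS CLI. The commands below are a sketch: switchshow, portdisable, and portenable are standard Fabric OS commands, but the port number (17) is only an example — use switchshow first to identify the ports your blade and storage are actually logged in on.

```text
switch:admin> switchshow        (list ports and attached devices)
switch:admin> portdisable 17    (take down one path; example port number)
    ... verify from the host that I/O continues on the alternate path ...
switch:admin> portenable 17     (restore the path when the test is done)
```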
7. Once a path is disabled on either the host side or the storage side, the OS
should remain available via the alternate path; the MPIO driver automatically moves the LUN to the surviving path.
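Conceptually, the failover behavior described above can be sketched as follows. This is only an illustrative model in Python, not the MPIO driver's implementation: the path names are invented, and the real driver makes this decision inside the Windows storage stack.

```python
class MultipathLun:
    """Toy model of a LUN reachable over several host-to-storage paths."""

    def __init__(self, paths):
        # Map each path name to a health flag; insertion order is the
        # preference order, mimicking a simple failover policy.
        self.paths = {p: True for p in paths}

    def active_path(self):
        # I/O uses the first healthy path; error out only if all are down.
        for path, healthy in self.paths.items():
            if healthy:
                return path
        raise IOError("all paths to the LUN have failed")

    def fail_path(self, path):
        self.paths[path] = False

    def restore_path(self, path):
        self.paths[path] = True


lun = MultipathLun(["CNA-port0->CtrlA", "CNA-port1->CtrlB"])
print(lun.active_path())            # I/O flows over the first path
lun.fail_path("CNA-port0->CtrlA")   # e.g. its switch port was disabled
print(lun.active_path())            # I/O moves to the alternate path
```

The key point the sketch captures is that the OS keeps issuing I/O unchanged; only the path selection underneath it moves.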
Conclusion
This concludes the configuration process to deploy blade boot from SAN using the QLogic 10 Gbps CNA attached to the Brocade 8000 CEE switch and the IBM DS4700 storage subsystem.
Useful Links
1. IBM eServer™ BladeCenter for Converged Network Adapters Supported Software
• 42C1830 - QLogic 2-port 10Gb CNA (CFFh) for IBM BladeCenter (FRU P/N
42C1832, Card P/N 42C1831):
http://driverdownloads.qlogic.com/QLogicDriverDownloads_UI/Product_detail_new.aspx?oemid=394&companyid=6
2. IBM System x Converged Network Adapters Supported Software
• 42C1800 - QLogic 10Gb CNA for IBM System x (FRU P/N 42C1802, Card P/N 42C1801)
http://driverdownloads.qlogic.com/QLogicDriverDownloads_UI/Product_detail_new.aspx?oemid=394&companyid=6
3. IBM DS Storage Subsystem Supported Software
• https://www-947.ibm.com/systems/support/supportsite.wss/selectproduct?brandind=5000028&familyind=0&oldbrand=0&oldfamily=0&oldtype=0&taskind=1&psid=bm
4. QLogic 2-port 10Gb Converged Network Adapter (CFFh) for IBM BladeCenter
Tech Note
• http://www.redbooks.ibm.com/abstracts/tips0716.html?Open
5. Brocade 10Gb CNA for IBM System x
• http://www.redbooks.ibm.com/abstracts/tips0718.html?Open
Trademarks
IBM, the IBM Logo, BladeCenter, DS4000, eServer, and System x are trademarks of International Business Machines Corporation in the United States, other countries, or both.
For a complete list of IBM Trademarks, see http://www.ibm.com/legal/copytrade.shtml.
Microsoft and Windows are trademarks of Microsoft Corporation in the United States, other countries, or both.
Other company, product, or service names may be trademarks or service marks of others.