QLogic I/O Solutions for IBM System x, BladeCenter and PureFlex
System Storage™, PureSystems™, POWER Systems™, System x™, BladeCenter®
Ingrained into the IBM server ecosystem
QLogic Adapter Value
July 2013
• Readiness for virtualized environments & cloud deployments
  • Support for VMware ESX, Microsoft Hyper-V, Linux/KVM, PowerVM
  • Optimized for virtualized environments
• Superior application-level I/O performance
  • Microsoft Exchange: up to 133% higher IOPS than the competition*
  • Oracle OLTP: up to 106% higher IOPS than the competition**
  • 200,000 IOPS per port
  • 1,600MB/s throughput
• Power efficiency
  • Lowest power consumption of the FC adapters in the IBM Flex portfolio
  • QLogic StarPower

* http://www.qlogic.com/Resources/Documents/WhitePapers/Adapters/White_Paper_The_8Gb_Fibre_Channel_Adapter_of_Choice_in_MS-Exchange_Environments.pdf
** http://www.qlogic.com/Resources/Documents/WhitePapers/Adapters/White_Paper_The_8Gb_Fibre_Channel_Adapter_of_Choice_in_Oracle_Environments_106.pdf
QLogic 8Gb HBAs
[Charts: Exchange workload (Transactions per Second vs. outstanding I/O) and Oracle OLTP performance scalability, comparing the Emulex LPe12000 with the QLogic QLE2560. Up to 224% better performance for the QLE2560.]
Dynamic Power Management
January 24, 2013
• The smart QLogic ASIC senses the PCIe bus and uses the right number of lanes to provide maximum performance
• Uses up to 40% less power
QLogic connectivity solutions for IBM System x
Server connectivity:
• 16Gb single- & dual-port Fibre Channel adapters
• 4 & 8Gb single- & dual-port FC adapters
• 10GbE Converged Network Adapter
• 10GbE Virtual Fabric Adapters

16Gb Fibre Channel Technology for IBM System x
Introducing QLogic 16Gb HBAs for System x
• QLogic 16Gb Fibre Channel adapters
  • The first PCIe Gen3 FC adapters in the IBM product portfolio
  • Ideally suited for IBM System x M4 servers, but also compatible with M3 servers
• OS support
  • Windows 2008 & 2012, RHEL 5 & 6, SLES 10 & 11, VMware ESX & ESXi 4.1, VMware vSphere 5 & 5.1
• 16Gb SFP+ (P/N 00Y3345) ships with the adapter
Product name | System x part number | QLogic part number | IBM System x M4 servers | IBM System x M3 servers
QLogic 16Gb FC Single-port HBA for System x | 00Y3337 | QLE2660 | √ | √
QLogic 16Gb FC Dual-port HBA for System x | 00Y3341 | QLE2662 | √ | √
QLogic 16Gb FC performance benefits
[Chart: IOPS for 8Gb vs. 16Gb adapters, showing 3x the IOPS at 16Gb]
• Twice the throughput of 8Gb, cutting response time in half
• 3 times the IOPS of 8Gb Fibre Channel
• 40% faster than 10Gb Ethernet/FCoE
• Backward compatibility ensures investment protection
Improve your price/performance and lower your power requirements
• Companies wanting to consolidate hardware by deploying virtualized environments will benefit from 16Gb FC
• Online Transaction Processing (OLTP) applications, such as online banking and travel reservation systems, or any other high-volume, many-user applications, will benefit from 16Gb FC
• With 16Gb FC, backup and restore jobs complete in nearly half the time they would take in an 8Gb FC environment
What does TRUE PCI-e 3.0 mean to your customers?
It means 16% better performance than PCI-e 2.0 adapters
This Demartek study explains the benefits of the QLogic 16Gb PCI-e 3 adapters compared to 16Gb PCI-e 2 adapters
http://www.qlogic.com/Resources/Documents/MediaCoverage/QLE2600_16GFC_HBA_Evaluation_Demartek.pdf
QLogic Architectural Advantage: Port Isolation
Per-port functionality (QLogic): independent CPU, isolated memory, independent firmware image

QLogic's architecture provides:
• Independent functionality
• Higher reliability
• Simplified manageability
• Predictable performance
• Excellent stability

Shared architecture (shared resources across ports) creates:
• Higher risk of failure
• Irregular performance
• Risk of instability

[Diagram: behind the PCIe interface, each port (Port 0, Port 1) has its own memory, firmware and processor]

QLogic Product Portfolio
Port Isolation Performance Characteristics
• At first glance, the reasoning behind Emulex's design appears to be aimed at customers running an active/passive or failover configuration.
• If that were true, a customer might indeed want better single-port performance.
• But the purpose of a failover configuration is redundancy and reliability, so the fact that both ports share memory, CPU and firmware makes this design counterintuitive.
• If one port fails due to an internal issue, both fail, leaving no port to fail over to.
• QLogic's port isolation ensures this cannot happen.
QLogic VFA Product Portfolio
QLogic Virtual Fabric Adapters for IBM System x
• QLogic 8200 VFA
  • Standard PCIe card
  • Support for IBM System x M4 servers
• QLogic Embedded VFA
  • Mezzanine board
  • Support for System x x3550 M4 and x3650 M4 servers
  • Doesn't use up a PCIe slot
• Two products, same functionality
  • 10GbE dual-port with vNIC capability
  • 4 virtual ports per physical port
  • Licence upgrade with IBM Feature-on-Demand
  • Full Converged Network Adapter capability: FCoE & iSCSI offload
QLogic Virtual Fabric Adapters for IBM System x
All of the same capabilities:
• PCIe Gen2 x8
• Supports optical and copper DAC (SFPs are NOT included with the Embedded VFA)
• Multi-personality ports (10GbE, iSCSI & FCoE)
• QLogic's switch-agnostic NPAR (NIC Partitioning)
• Multi-protocol hardware offload
• Dual-port removable 10Gb SFP+
• Industry-leading 10Gbps performance

Flexible networking:
• Upgrade for iSCSI and FCoE functionality
Product name | System x part number | QLogic part number | IBM System x M4 servers
QLogic Embedded VFA | 90Y6454 | n/a | √
QLogic Embedded VFA FoD License | 90Y5179 | n/a | √
QLogic 8200 VFA | 90Y4600 | QLE3262 | √
QLogic 8200 VFA FoD License | 00Y5624 | n/a | √
Supported Transceivers / Cables
QLogic VFA (8200) vs QLogic 10Gb CNA (8100)
Feature comparison | 8100 | 8200
PCIe Gen2 x8 | No | Yes
Full iSCSI hardware offload | No | Yes
NIC Partitioning | No | Yes
Congestion Notification (802.1Qau) | No | Yes
VEB (Virtual Ethernet Bridge) / VEPA (Virtual Ethernet Port Aggregator) capable | No | Yes
Simultaneous multi-protocol support | Yes | Yes
Optical cable support | Yes | Yes
Active copper DAC support | Yes | Yes
Passive copper DAC support | No | Yes
Progressive functionality (Feature on Demand) | No | Yes
QLogic PureFlex Product Line
July 2013
IBM - PureSystem Product Portfolio
PureFlex
Compute nodes: x220, x240, x440, p24L, p260, p460 in the IBM PureFlex Chassis

Server-to-storage interoperability:
• Adapter: IBM Flex System FC3172 2-port 8Gb FC Adapter (69Y1938)
• Network: IBM Flex System FC3171 8Gb SAN Switch (69Y1930)
• Network: IBM Flex System FC3171 8Gb Pass-thru (69Y1934)
• Network: IBM Flex System Fabric 10GbE Converged Scalable Switch, CN4093 (00D5823)
Choices when deploying a new Flex chassis
• Connecting to an existing Fibre Channel SAN: use Pass-thru transparent mode
  • Quick to deploy, configured in minutes
  • Zero interoperability issues with external switches
  • Automatic load balancing and failover
  • No reconfiguration needed to the existing SAN
• Connecting directly to Fibre Channel storage: use the 8Gb SAN Switch
  • Fully featured with no additional licences to buy
  • Virtualization-ready with enhanced NPIV support
  • Autosensing 2Gb, 4Gb, 8Gb/s operation
• Management
  • QLogic QuickTools management suite
  • Run from within Flex Systems Manager
CN4093 - Flex convergence with QLogic
• IBM Flex System Fabric 10GbE Converged Scalable Switch (CN4093)
  • QLogic module provides Fibre Channel Gateway functionality
  • Similar to the QLogic Virtual Fabric Extension Module in BladeCenter
  • Allows the Flex chassis to connect directly to a Fibre Channel SAN or storage
  • Omniports configurable to 10GbE or Fibre Channel
• Benefits
  • Converged networking in the Flex chassis
  • No requirement for an expensive Top-of-Rack switch
IBM PureFlex - reasons to specify QLogic products
• Complete end-to-end 8Gb Fibre Channel solution
  • Cost-effective, default choice for today's customer requirements
• The only Fibre Channel adapter supporting all Compute Nodes
  • x220, x240, x440, p24L, p260, p460
• Chassis connectivity options for all deployments
  • Connecting to Fibre Channel storage
  • Connecting to a Fibre Channel SAN
• Proven technology from IBM BladeCenter and System x rack/tower
  • Adapter of choice for IBM customers today in BladeCenter and System x rack/tower
  • Fibre Channel adapters: industry market leader at 55%
  • >13M adapter ports shipped
Flex Enterprise Chassis support | System x part number | POWER Systems feature code | IBM Flex Systems™ | IBM PureFlex™ | IBM PureApplication™
IBM Flex System FC3171 8Gb SAN Switch | 69Y1930 | 3595 | √ | default | default
IBM Flex System FC3171 8Gb Pass-thru | 69Y1934 | 3591 | √ | default | default

Flex Compute node support | System x part number | POWER Systems feature code | x220 | x240 | x440 | p24L | p260 | p460
IBM Flex System FC3172 2-port 8Gb FC Adapter | 69Y1938 | 1764 | √ | √ | √ | √ | √ | √
IBM SmartCloud for PureFlex
• SmartCloud for PureFlex
  • Pre-configured Express, Standard, Enterprise
  • Intel & Power compute nodes
  • Storwize V7000 storage array
  • QLogic FC adapters & switch
  • Virtualization support: VMware, KVM, Hyper-V
• SmartCloud capabilities
  • Create images: simplify storage of images
  • Deploy VMs: reduce deployment time
  • Operate a cloud
QLogic BladeCenter Product Line
July 2013
IBM - System x Product Portfolio

Server platform: BladeCenter S, E, HT, H
Server blades: HS, HX5, JS and PS series

Server adapters (Mezz or stand-up):
• 8Gb FC + 1GbE Combo HBA (CFFh) (44X1940)
• FCoE 10Gb CNA (42C1803)
• 8Gb HBA (CIOv) (44X1945)
• 4Gb HBA (CIOv) (46M6065)
• QLogic 10GbE VFA (00Y3332)

Network modules:
• 20-port 8Gb FC SAN Switch Module (44X1905)
• 4/8Gb SAN Switch (88Y6406)
• 4/8Gb FC IPM (88Y6410)
• 8Gb FC IPM (44X1907)
• Virtual Fabric Extension Module, FCoE (46M6172)
QLogic VFA for BladeCenter
• New product introduction
  • Announce June 25th, GA July 26th
  • Three different part numbers
• Two opportunities for I/O consolidation
  • Combine multiple 1GbE connections into fewer 10GbE links
  • Converged data and storage networking

IBM BladeCenter Product | Part # | Description
QLogic 10Gb VFA | 00Y3332 | 10GbE NIC with QLogic NIC partitioning (NPAR), dynamic QoS settings
QLogic 10Gb VF Advanced FoD Upgrade | 00Y5622 | Feature-on-Demand upgrade licence
QLogic 10Gb VF Advanced CNA | 00Y5618 | 10GbE NIC with QLogic NIC partitioning (NPAR), dynamic QoS settings, full Converged Network Adapter functionality
NPAR (NIC Partitioning)
NPAR Theory of Operation – Partitioning

[Diagram: the PCIe configuration bus exposes eight PCIe functions (PF0–PF7) split across the two physical ports. Each function can run as a NIC, be disabled, or have its personality changed, e.g. PF4/PF5 to iSCSI and PF6/PF7 to FCoE.]

Changing the Personality of the NIC Partition
NPAR provides multiple Ethernet interfaces per physical 10Gb Ethernet port.
This is done by partitioning the port’s PCIe interface into four independent Ethernet functions.
• Each NPAR Function is presented as a unique Ethernet interface to the server and the OS, with its own unique MAC address
• Each Function has its own instance of a device driver
• NPAR supports functions with NIC, FCoE and iSCSI personalities
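As an illustration only (not QLogic firmware code), the partitioning described above can be modeled in a few lines; the function numbers and personalities follow the diagram, while the class names are invented for this sketch:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class NparFunction:
    """One PCIe function carved out of a 10GbE physical port."""
    pf_number: int                  # PF0..PF7 on the PCIe configuration bus
    personality: str = "Disabled"   # "NIC", "FCoE", "iSCSI" or "Disabled"
    mac_address: str = ""           # each enabled function has its own MAC

@dataclass
class PhysicalPort:
    """A physical 10GbE port presenting four independent functions."""
    port_number: int
    functions: List[NparFunction] = field(default_factory=list)

    def enabled(self) -> List[NparFunction]:
        return [f for f in self.functions if f.personality != "Disabled"]

# Physical Port 0 with the even-numbered functions, as in the diagram:
# two NIC personalities plus one iSCSI and one FCoE function.
port0 = PhysicalPort(0, [
    NparFunction(0, "NIC",   "00:c0:dd:00:00:00"),
    NparFunction(2, "NIC",   "00:c0:dd:00:00:02"),
    NparFunction(4, "iSCSI", "00:c0:dd:00:00:04"),
    NparFunction(6, "FCoE",  "00:c0:dd:00:00:06"),
])
```

Each enabled function would appear to the server and the OS as a separate interface with its own MAC address and driver instance, which is exactly the behaviour the bullets above describe.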
NPAR – eSwitch
• NPAR implements an eSwitch which is similar in function and capability to the hypervisor's vSwitch
  • VLAN-aware
  • MAC address lookup
• The eSwitch works in conjunction with the hypervisor's vSwitch
  • The vSwitches cascade into the NPAR eSwitches through the NPAR NIC functions
• Each eSwitch is associated with a single physical port
  • All NIC functions associated with a single physical port are switched in the same eSwitch
  • The physical port is the uplink for its associated eSwitch
  • Keeps switch statistics within the card
• NPAR is external-switch agnostic
  • It does not require any switch-specific features in the external switch to work correctly
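The eSwitch behaviour described above amounts to a VLAN-aware MAC lookup table with the physical port as the default uplink. A minimal sketch (illustrative names, not QLogic firmware internals):

```python
class ESwitch:
    """Per-physical-port embedded switch: VLAN-aware MAC lookup."""
    UPLINK = "uplink"

    def __init__(self):
        self.table = {}  # (vlan, mac) -> local NPAR function id

    def learn(self, vlan, mac, function_id):
        self.table[(vlan, mac)] = function_id

    def forward(self, vlan, dst_mac):
        # Known local destination: VM-to-VM traffic is switched on the
        # adapter itself. Unknown destination: hand the frame to the
        # physical port, which is the eSwitch's uplink to the external bridge.
        return self.table.get((vlan, dst_mac), self.UPLINK)

esw = ESwitch()
esw.learn(vlan=10, mac="00:c0:dd:00:00:02", function_id=2)
print(esw.forward(10, "00:c0:dd:00:00:02"))  # → 2 (stays on the card)
print(esw.forward(10, "00:c0:dd:99:99:99"))  # → uplink
```

Because the lookup key includes the VLAN, the same MAC on a different VLAN also goes out the uplink, which is what "VLAN aware" means in practice.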
vSwitch to eSwitch

[Diagram: NIC traffic flow within the host. VM vNICs attach to hypervisor vSwitches; each vSwitch cascades into an NPAR NIC function (PF0, PF2, PF4, PF6), which feeds the eSwitch on NIC Port 0 and then the PHY TX/RX uplink to the external bridge.]
The Competition

Not all NIC Partitioning implementations are created equal!

The competition's NIC partitioning:
• Can NOT run FCoE and iSCSI at the same time
• Cannot turn NIC partitioning off
• A firmware update and reboot are required to add iSCSI or FCoE

[Diagram: the competition's fixed partitioning: NIC, NIC, NIC/iSCSI, NIC/FCoE per port]
QLogic's NPAR = true flexibility
• CAN run FCoE and iSCSI concurrently
• NIC partitioning can be fully disabled
• A FoD key is all that is needed for FCoE and iSCSI capabilities
• NPAR functions (NIC, FCoE and iSCSI) can be disabled if desired

[Diagram: QLogic NPAR with any mix of NIC functions enabled or disabled per port]
NPAR QoS Configuration
• Minimum bandwidth setting
  • A guaranteed amount of bandwidth available through the NIC function
  • Specified as a percentage of the NIC's portion of the physical port's bandwidth
  • The sum of all the minimum settings for the NIC functions of a physical port must be less than or equal to 100%
• Maximum bandwidth setting
  • Sets the maximum amount of bandwidth a NIC function is allowed to utilize
  • Specified as a percentage of the NIC's portion of the physical port's bandwidth
  • The sum of all the maximum settings for the NIC functions of a physical port may exceed 100%

QoS settings are tunable on the fly: changes can be implemented seamlessly without a reboot or port reset.
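The two rules above can be captured in a small validity check. This is a sketch of the rules on this slide, not a QLogic tool:

```python
def validate_qos(settings):
    """Validate NPAR QoS settings for one physical port.

    settings: list of (minimum_pct, maximum_pct) pairs, one per NIC
    function. Every value is a percentage of the port's bandwidth.
    The minimums must sum to at most 100; the maximums may sum past
    100, which is the oversubscription case.
    """
    if any(not (0 <= lo <= hi <= 100) for lo, hi in settings):
        return False
    return sum(lo for lo, _ in settings) <= 100

# Guaranteed 10/20/30/40 with every function allowed to burst to 100%:
# valid, even though the maximums sum to 400%.
ok = validate_qos([(10, 100), (20, 100), (30, 100), (40, 100)])
# Invalid: the guaranteed minimums sum to 110%.
bad = validate_qos([(60, 100), (50, 100)])
```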
Oversubscription
• Allows the total maximum bandwidth settings of the NIC functions of a physical port to exceed the port's actual bandwidth
• Each NIC function can claim up to 100% of a physical port's bandwidth if no other NIC function is using it
• Allows unused bandwidth to be dynamically shifted to where it is needed
• Oversubscription prevents bandwidth waste
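A toy allocator makes the "no wasted bandwidth" point concrete (a hypothetical helper, not adapter firmware): each function first receives its guaranteed minimum, then idle bandwidth is shifted to the functions that have demand, up to their maximums:

```python
def allocate(port_gbps, functions):
    """Toy model of oversubscribed NPAR bandwidth sharing.

    functions: list of dicts with min_pct, max_pct and current
    demand (in Gbps). Pass 1 grants each function its guaranteed
    minimum; pass 2 hands leftover bandwidth to functions that
    still have demand, capped at each function's maximum.
    """
    alloc = []
    for f in functions:
        guaranteed = port_gbps * f["min_pct"] / 100
        alloc.append(min(f["demand"], guaranteed))
    spare = port_gbps - sum(alloc)
    for i, f in enumerate(functions):
        cap = port_gbps * f["max_pct"] / 100
        extra = min(f["demand"] - alloc[i], cap - alloc[i], spare)
        if extra > 0:
            alloc[i] += extra
            spare -= extra
    return alloc

# One busy function can claim the whole 10Gb port while the others
# are idle, because every maximum is set to 100%.
funcs = [
    {"min_pct": 25, "max_pct": 100, "demand": 10.0},
    {"min_pct": 25, "max_pct": 100, "demand": 0.0},
    {"min_pct": 25, "max_pct": 100, "demand": 0.0},
    {"min_pct": 25, "max_pct": 100, "demand": 0.0},
]
print(allocate(10.0, funcs))  # → [10.0, 0.0, 0.0, 0.0]
```

With a fixed (non-oversubscribed) scheme, the busy function would be stuck at its 2.5Gbps share and 7.5Gbps of the port would sit idle.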
QLogic vs the Competition
Key features | The Competition | QLogic 8200
Divide physical port into multiple partitions | Yes | Yes (NIC Partitioning, NPAR)
Bandwidth guarantee per partition | Yes | Yes
Oversubscription capabilities | No | Yes
Dynamic QoS configuration | No (reboot required) | Yes (change QoS on the fly)
Concurrent FCoE & iSCSI traffic | No | Yes
Operating-system-integrated tools | No | Yes
Offload VM-to-VM traffic (eSwitch) | No | Yes
Can disable partitioned ports | No | Yes
QLogic Tools
QLogic Adapter Hardware

Adapter Management:
• PreBoot utilities: Fast!UTIL
• QConvergeConsole CLI
• QConvergeConsole GUI (QCC)
• VMware plug-in

QCC GUI: Simplified Management
• Single pane of glass
  • QConvergeConsole: unified web-based single-pane-of-glass console (GUI & CLI) for FC, Ethernet and converged network adapters
  • Freedom to manage all adapters in the data center
• QLogic QConvergeConsole plug-in for VMware vCenter
  • Third-party tools using standard APIs: one tool, all protocols, all OSs
  • Integrated with APIs for visibility across data-center management, alongside native OS tools for networking
  • Role-based authentication for LAN and SAN administrators
• Common driver stack
  • Single driver, backward compatible with 8Gb and 4Gb FC adapters
  • Windows, Linux, ESX, Solaris, XenServer
• Diskless boot
  • BIOS, UEFI, FCode
• Higher uptime
  • Extended Hardware Assisted Firmware Tracing (eHAFT) for faster resolution
QCC CLI Installation & Overview
QCC CLI: Menu Driven
Scanning for QLogic adapters, please wait…
QConvergeConsole
CLI - Version 1.0.1 (Build 32)
Main Menu
 1: Adapter Information
 2: Adapter Configuration
 3: Adapter Updates
 4: Adapter Diagnostics
 5: Adapter Statistics
 6: NIC Partitioning (NPAR) Information
 7: NIC Partitioning (NPAR) Configuration
 8: Refresh
 9: Help
10: Exit
Please Enter Selection:
QCC CLI: Scripting
• Using QCC CLI commands with options, a script can be created
• Example script for a CNA (8000 Series):
  • Set port 2 NIC to Wake on LAN
  • Set port 2 FC parameters to factory default
  • Set port 2 FC frame size to 1024

qaucli -pr nic -n 1 Port_Wake_On_LAN_Option 1
qaucli -n 1 default
qaucli -n 1 FR 1024
QLogic Converged Console - vCenter Plug-in
• QLogic adapter management integrated with VMware vCenter
• Dynamic bandwidth provisioning
• Network and storage maps connect the fabric to the cloud
• In-built diagnostics and statistics
• Online firmware updates at the click of a button
• Bandwidth allocation charts help map the fabric to business logic
• Color coding simplifies determining the health of a component
Additional Information
Reference material
Videos
Enabling NPAR on QLogic 10GbE • http://www.youtube.com/watch?v=9PY8OlQGneU
Configuring NPAR under Windows 2008 • http://www.youtube.com/watch?v=rK1OXNKynNw
NIC teaming with the 3200/8200 adapters • http://www.youtube.com/watch?v=UEfGFqoz_Nc
QLogic CNA featuring multi-protocol VMware vMotion • http://www.youtube.com/watch?v=YZ6h0YYYPmg
QLogic Proof-of-Concept/Demo facilities
• QLogic Solutions Lab, Minnesota
  • Customer proof-of-concept and demo facilities
  • Remote access available
• Additional IBM equipment recently deployed
  • IBM PureFlex: Flex Enterprise Chassis, FC3171 8Gb FC SAN switch, 10GbE switch, x86 & Power compute nodes, Storwize V7000, Flex System Manager node
  • Additional IBM servers: 3 x System x M4, 2 x Power Systems 720
To arrange access• email [email protected]