Nexus 7000 connectivity for Cisco UCS
Best Practices and Design Considerations

Brad Hedlund, CCIE #5530
Technical Solutions Architect
bhedlund@cisco.com
Blog: http://bradhedlund.com

DC ByteZ, December 15, 2010
© 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public. BRKCOM-2003_c1
Cisco UCS Networking: Physical Architecture
[Diagram: two 6100 Fabric Interconnects (Fabric A and Fabric B, clustered) with uplink ports to SAN A, SAN B, ETH 1, ETH 2, and OOB management; below, Chassis 1 through Chassis 20, each with FEX A and FEX B Fabric Extenders and half/full-width B200/B250 compute blades with virtualized CNA adapters, connected to the Fabric Interconnect server ports.]

• Fabric Switch
• Fabric Extenders
• Uplink Ports
• Server Ports
• Compute Blades (half / full width)
• Virtualized Adapters
• OOB Mgmt
• Cluster
Abstracting the Logical Architecture

[Diagram: what you see — a physical cable from the blade's adapter through FEX A to port Eth 1/1 on 6100-A; what you get — virtual cables (VNTag) presenting the Service Profile's vNIC1 as vEth1 and vHBA1 as vFC1 directly on 6100-A.]

• Dynamic, rapid provisioning
• State abstraction
• Location independence
Why Nexus 7000: The Ideal Aggregation Platform

[Diagram: compute blade Service Profile with vNIC1/vNIC2 and vHBA1/vHBA2 mapped to vEth1/vEth2 and vFC1/vFC2 on 6100-A and 6100-B; Unified Access up to a Nexus 7000 Aggregation pair serving the servers, SAN A, and SAN B.]

• High availability: dual supervisors with ISSU, grid & N+1 power, stateful supervisor switchover, stateful process restart
• Unified Fabric ready: lossless switching fabric, FCoE hardware shipping now
• Virtualization: OTV (data center interconnect), VDC (switch consolidation), vPC (more bandwidth), FabricPath
• 10GE today, 40/100GE ready: 512 line-rate 10G ports (L2); 512 ports at 4:1 or 128 at 1:1 (L3); 230 Gbps/slot today, 550 Gbps/slot in the future
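The virtualization features above are each enabled per switch in NX-OS. As a rough sketch (licensing requirements and feature-set steps vary by NX-OS release, so verify against your platform before applying):

```
! Sketch only -- confirm licenses and release support first
install feature-set fabricpath   ! one-time install, from the default VDC
feature-set fabricpath           ! enable FabricPath in this VDC
feature vpc                      ! enable virtual port channels
feature otv                      ! enable OTV (Transport Services license)
```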
Choosing the Nexus 7000 Linecard

[Diagram: the same Unified Access topology, with the question "M1 or F1?" posed at the Nexus 7000 aggregation layer.]

M1 linecard
• Scalability with simplicity: 128,000 MACs, no special planning, consistent local L2/L3 forwarding
• 32-port M1: flexible 2:1 or 4:1 oversubscription
• Fits when the majority of traffic is L3 switched and Cisco UCS links to the "Aggregation" layer; the concern is the cost of line-rate L2 & L3
• 8-port M1: low cost, low oversubscription, L3, when port density is not a concern

F1 linecard
• Fits when the majority of traffic is L2 switched and Cisco UCS links to an "Access" N7K / FabricPath edge
• Choose it for low latency and low L2 oversubscription; the concerns are port density and cost
• 16,000 MACs: suitable when MAC scalability is not a concern
• Common split: F1 for port density & L2, 8-port M1 for L3
Nexus 7000 Unified Aggregation

[Diagram: compute blade vNIC/vHBA pairs through 6100-A/6100-B Unified Access up to a Unified Aggregation pair of Nexus 7000s; each N7K hosts a Storage VDC carrying FCoE toward SAN A / SAN B, plus VRFs at the aggregation layer and OTV toward the core.]

Coming soon: CY11

Benefits
• Lower TCO: consolidate SAN/LAN aggregation
• Flexibility: a pool of 10GE ports usable for SAN or LAN

Enabling technologies
• Nexus 7000 32-port F1 module (shipping): FCoE ready, NX-OS FCF capabilities, FCoE ports only (no native FC), links with FCoE to MDS or Nexus 5000
• Cisco UCS Fabric Interconnect: FCoE uplinks (NPV)
Connecting Cisco UCS to FabricPath

[Diagram: 6100-A and 6100-B in End Host Mode, vPC+ attached to two F1-linecard Nexus 7000 edge switches in a FabricPath network toward the L3 core.]

FabricPath facing ports:

interface Ethernet1/1
  description FabricPath network
  switchport mode fabricpath

interface port-channel 10
  description link to peer 7K
  switchport mode fabricpath
  vpc peer-link

Cisco UCS facing ports:

interface port-channel 100
  description Cisco UCS
  switchport mode trunk
  spanning-tree port type edge trunk
  vpc 100
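vPC+ on a FabricPath edge additionally requires an emulated switch ID in the vPC domain, shared by both peers. A minimal sketch (the domain number and switch-id values here are hypothetical):

```
vpc domain 10
  fabricpath switch-id 100   ! emulated switch-id, identical on both vPC+ peers
```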
Switching Modes: End Host
[Diagram: 6100 A Fabric Interconnect with Server 1 and Server 2 (vNIC 0 each) pinned via vEth 1 / vEth 3 to two uplinks toward a Spanning Tree LAN; MAC learning on server ports only, local L2 switching within VLAN 10.]

• Server vNIC pinned to an uplink port
• No Spanning Tree Protocol:
  – reduces CPU load on upstream switches
  – reduces control-plane load on the 6100
• Simplified upstream connectivity: UCS connects to the LAN like a server, not like a switch
• Maintains a MAC table for servers only, easing MAC table sizing in the access layer
• Allows multiple active uplinks per VLAN, doubling effective bandwidth vs. STP
• Prevents loops by preventing uplink-to-uplink switching
• Upstream VSS/vPC optional; completely transparent to the upstream LAN
• Traffic on the same VLAN is switched locally
Configuring Nexus 7000 for Cisco UCS
[Diagram: 6100-A and 6100-B in End Host Mode, dual-attached to the Nexus 7000 pair.]

BPDU Filtering and BPDU Guard for "edge" ports:

spanning-tree port type edge bpdufilter default
spanning-tree port type edge bpduguard default

Cisco UCS facing "edge" ports:

interface Ethernet1/1
  description Cisco UCS physical link
  switchport mode trunk
  spanning-tree port type edge trunk
  channel-group 100 mode active

interface port-channel 100
  description Cisco UCS vPC link
  switchport mode trunk
  spanning-tree port type edge trunk
  vpc 100
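The vpc 100 above presumes a vPC domain is already up between the two Nexus 7000s. A minimal sketch of that prerequisite (the domain number, port-channel number, and keepalive addresses are hypothetical):

```
feature vpc
vpc domain 1
  peer-keepalive destination 192.168.1.2 source 192.168.1.1 vrf management

interface port-channel 1
  switchport mode trunk
  vpc peer-link
```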
Switching Modes: Switch
[Diagram: 6100 A as a Layer 2 switch, one uplink forwarding and one blocked by Spanning Tree toward the root; Server 1 and Server 2 on VLAN 10 behind vEth 1 / vEth 3; MAC learning on all ports.]

• The Fabric Interconnect behaves like a normal Layer 2 switch
• Server vNIC traffic follows VLAN forwarding
• Spanning Tree Protocol runs on the uplink ports per VLAN (Rapid PVST+)
• Configuration of STP parameters (bridge priority, hello timers, etc.) is not supported
• VTP is not currently supported
• MAC learning/aging happens on both the server and uplink ports, as in a typical Layer 2 switch
• Upstream links are blocked per VLAN via Spanning Tree logic
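Because the Fabric Interconnect's bridge priority cannot be tuned in switch mode, it is worth pinning the STP root on the upstream switches. A hedged sketch on the upstream Nexus 7000 (the VLAN range and port-channel number are hypothetical):

```
spanning-tree vlan 10-20 priority 4096   ! keep the STP root upstream, not on the 6100
interface port-channel 100
  switchport mode trunk
  spanning-tree port type normal   ! switch-mode 6100 sends BPDUs; do not use edge/bpduguard here
```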
End Host Mode – Individual Uplinks
Dynamic re-pinning of failed uplinks

[Diagram: ESX host 1 (vSwitch / N1K with VM 1 = MAC B, VM 2 = MAC C, host MAC A) and Server 2 behind 6100 A; when the pinned uplink fails, vEth 1 is re-pinned to the surviving uplink.]

• vNIC stays up through the failure
• Sub-second re-pinning to a surviving uplink
• GARPs are sent for MACs A, B, C so the LAN learns the new path
End Host Mode – Port Channel Uplinks
Recommended: Port Channel Uplinks

[Diagram: ESX host 1 and Server 2 behind 6100 A; vEth 1 / vEth 3 pinned to a port channel of uplinks; a member link fails and the channel keeps forwarding.]

• More bandwidth per uplink, per-flow uplink diversity
• No server NIC disruption on a member-link failure; NIC stays up
• No GARPs needed; sub-second convergence
• Faster bi-directional convergence, fewer moving parts
End Host Mode – Port Channel Uplinks
Upstream switch failure & dynamic re-pinning

[Diagram: an entire upstream switch fails, taking a whole port channel with it; 6100 A re-pins vEth 1 to the surviving port channel.]

• vNIC stays up
• Sub-second re-pinning to the other port channel
• GARPs are sent for MACs A, B, C so the LAN learns the new path
End Host Mode – vPC Uplink
Recommended: vPC uplinks hide uplink and switch failures from server vNICs

[Diagram: 6100 A with a single logical vPC port channel spanning both switches of the upstream vPC domain; a link or switch failure leaves the vPC forwarding.]

• NIC stays up; no server NIC disruption, no GARPs needed
• More bandwidth per uplink, per-flow uplink diversity
• Switch and link resiliency
• Faster bi-directional convergence, fewer moving parts
DO NOT:Connect End Host Mode to Separated L2 Domains
[Diagram: 6100 A and 6100 B in End Host Mode, uplinked to two disjoint L2 domains: DMZ 1 (VLAN 20-30) and DMZ 2 (VLAN 40-50).]

• Each 6100 picks ONE uplink for broadcast/multicast processing (the broadcast link)
• If that broadcast link lands in DMZ 1, a DMZ 2 server cannot see DMZ 2 broadcasts
• Result: silent broadcast/multicast loss in the other domain
DO: Use Switch Mode for Separated L2 Domains

[Diagram: 6100 A and 6100 B in switch mode carrying VLANs 20-50, vPC-attached to a common L2 domain; DMZ 1 and DMZ 2 servers behind them; all links forwarding. Tag all VLANs on the 6100 uplinks; prune VLANs at the upstream switches.]

• Assumption: no VLAN overlap between DMZ 1 & DMZ 2
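Pruning at the upstream switches can be sketched as a simple allowed-VLAN list on each DMZ-facing trunk (the interface and VLAN values are hypothetical):

```
interface port-channel 20
  switchport mode trunk
  switchport trunk allowed vlan 20-30   ! DMZ 1 trunk carries only its own VLANs
```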
DO: Connect End Host Mode to a Common L2 Domain

[Diagram: 6100 A and 6100 B in End Host Mode, vPC-attached through 4900M switches to a common L2 domain carrying VLANs 20-30 and 200-210; DMZ 1 (VLAN 20-30) and DMZ 2 (also VLAN 20-30) servers; all links forwarding.]

• Overlapping VLANs are handled by VLAN translation at the 4900Ms: 20 > 200, 21 > 201, etc.
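The VLAN translation shown (20 > 200) can be sketched with per-port VLAN mapping. Syntax and support vary by platform and release, so treat this purely as an illustration and verify it on your hardware:

```
interface TenGigabitEthernet1/1
  switchport mode trunk
  switchport vlan mapping 20 200   ! translate DMZ 2's VLAN 20 to 200 on this trunk
  switchport vlan mapping 21 201
```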
Fabric A to Fabric B Traffic Example
[Diagram: ESX host with vSwitch / N1K MAC pinning; VM1 pinned to vNIC 0 (Fabric A), VM2 pinned to vNIC 1 (Fabric B); both 6100s in End Host Mode below an L3 switching layer.]

• VM1 on VLAN 10, VM2 on VLAN 20
• VM1 to VM2 traffic: 1) leaves Fabric A, 2) gets L3 switched upstream, 3) enters Fabric B
Fabric A to Fabric B Traffic Example
[Diagram: ESX host using Cisco UCS hardware VN-Link (pass-through switching); dynamic vNIC 1 with primary Fabric A / backup Fabric B, dynamic vNIC 2 with primary Fabric B / backup Fabric A; both 6100s in End Host Mode below an L2 switching layer.]

• VM1 on VLAN 10, VM2 on VLAN 10
• VM1 to VM2 traffic: 1) leaves Fabric A, 2) gets L2 switched upstream, 3) enters Fabric B
Fabric A to Fabric B Traffic Example
[Diagram: two ESX hosts with vSwitch / N1K MAC pinning; host 1's vNIC 0 on Fabric A, host 2's vNIC 1 on Fabric B; both 6100s in End Host Mode below an L2 switching layer.]

• VM1 (host 1) pinned to vNIC 0, VM4 (host 2) pinned to vNIC 1
• VM1 on VLAN 10, VM4 on VLAN 10
• VM1 to VM4 traffic: 1) leaves Fabric A, 2) gets L2 switched upstream, 3) enters Fabric B
Singly attached Cisco UCS to Nexus 7000
[Diagram: 6100 A attached only to 7K1 and 6100 B attached only to 7K2; 7K1 and 7K2 form a vPC domain with peer link and keepalive; both 6100s in End Host Mode.]

1. Traffic destined for a vNIC on the 6100-A uplink enters 7K1
2. Same scenario vice-versa for 6100-B and 7K2
3. All inter-fabric traffic traverses the Nexus 7000 peer link
Singly attached Cisco UCS to Nexus 7000
Peer link failure – black hole

[Diagram: the same singly attached topology; the vPC peer link fails while the keepalive stays up.]

1. vPC peer link fails; the keepalive link stays up
2. vPC secondary 7K2 brings down its L3 SVIs
3. The 6100-B uplink to 7K2 stays UP
4. Traffic from the 6100-B uplink has no L3 gateway
5. Fabric A and Fabric B are isolated
Non-vPC uplinks to a Nexus 7000 vPC domain
Peer link failure – black hole

[Diagram: each 6100 port-channeled to a single N7K (non-vPC uplinks) into the vPC domain; the peer link fails.]

1. N7K vPC peer link fails; the keepalive link stays up
2. vPC secondary 7K2 brings down all vPC VLAN L3 SVIs
3. The non-vPC uplinks stay UP
4. Traffic from those uplinks has no L3 gateway
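When non-vPC uplinks must survive a peer-link failure, NX-OS can exempt selected SVIs from being brought down on the vPC secondary. A hedged sketch (the domain number and VLAN list are hypothetical):

```
vpc domain 1
  dual-active exclude interface-vlan 10,20   ! keep these SVIs up on the vPC secondary during a peer-link failure
```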
vPC uplinks to a Nexus 7000 vPC domain
Peer link failure – no problem

[Diagram: each 6100 vPC-attached to both N7Ks; the peer link fails.]

1. N7K vPC peer link fails; the keepalive link stays up
2. vPC secondary 7K2 brings down its L3 SVIs and its vPC member ports
3. The 6100 physical links facing 7K2 go down
4. The 6100 vPC uplinks stay UP on the surviving members through 7K1
5. All traffic is fine; no black holes
Cisco UCS vPC uplinks – No Peer Link Traffic
[Diagram: with both 6100s vPC-attached to both N7Ks, inter-fabric traffic is switched locally on either N7K and does not traverse the vPC peer link.]
Cisco UCS attached to Nexus 5000, 7000
[Diagram: 6100 A and 6100 B with vPC-attached port-channel uplinks to a Nexus 5000 vPC domain (N5K 1 / N5K 2), which is in turn vPC-attached to the Nexus 7000 vPC domain (7K1 / 7K2); peer link and keepalive in each domain; both 6100s in End Host Mode.]
Best Practice without vPC
[Diagram: each 6100, in End Host Mode, connects a port channel to 7K1 and another to 7K2.]

• Connect the 6100s in End Host Mode to the Aggregation L3 switches
• With 4 x 10G (or more) uplinks per 6100, use port channels
• All UCS uplinks forwarding; no STP influence on the topology
Best Practice without vPC
[Diagram: each 6100, in End Host Mode, connects one uplink to 7K1 and one to 7K2.]

• Connect the 6100s in End Host Mode to the Aggregation L3 switches
• With 2 x 10G uplinks per 6100
• All UCS uplinks forwarding; no STP influence on the topology
Bad Practice without vPC
[Diagram: 6100 A port-channeled only to 7K1 and 6100 B only to 7K2 — singly attached with port channels.]

• No upstream switch failure protection
• Does not gain bandwidth over dual attachment; nothing to gain, everything to lose
• Best practice: always attach each UCS 6100 to two upstream switches
Bad Practice without vPC
[Diagram: 6100s attached to N5K 1 / N5K 2, which run classic STP uplinks toward the STP root.]

• Not recommended: UCS connected to an L2 switch with STP uplinks
• Uplink bandwidth is choked per VLAN at the L2 STP access switch
• 1-3 second traffic black hole while STP recovers from failures
• No Fabric A to Fabric B transit link
Summary
• Use End Host Mode
• Use port channel uplinks
• Use vPC uplinks
• No disjoint L2 domains with End Host Mode
• Always dual-attach UCS
• If attaching UCS to a vPC domain, use vPC uplinks
THANK YOU!!