UCS Networking Deep Dive
Neehal Dass - Customer Support Engineer
Agenda
• Chassis Connectivity
• Server Connectivity
• Fabric Forwarding
• M-Series
• Q & A
Cisco Unified Computing System (UCS)
• Single Point of Management
• Logical Building Blocks
• Stateless Compute (Service Profiles)
UCS Components
MGMT / SAN / LAN
Fabric Interconnect
UCS Chassis
Heartbeat link (No Data)
UCS Components
MGMT / SAN / LAN
Fabric Interconnect
UCS Chassis
IO Module
Heartbeat link (No Data)
UCS Components
MGMT / SAN / LAN
Fabric Interconnect
UCS Chassis
IO Module
Heartbeat link (No Data)
4 x 10G KR lanes to each half width blade slot.
UCS Components
Fabric Interconnect
UCS Blade
IO Module
Cisco
VIC
Heartbeat link (No Data)
UCS Mini: 6324 Fabric Interconnect
UCS 5108 Chassis (supports existing and future blades)
+ IO Modules
+ 6248 or 6296 Fabric Interconnects
+ UCS B-Series blades
UCS 5108 Chassis (supports existing and future blades)
+ 6324 Fabric Interconnect
= UCS Mini
3rd Generation Fabric Interconnect and IOM
UCS FI & IOM Models: FI 6300 Series and IOM 2304
FI 6332 (Ethernet Only)
• 32 x 40GbE QSFP+ ports
• 2.56Tbps switching performance
• 1RU fixed form factor, two power supplies & four fans
FI 6332-16UP (Unified)
• 24 x 40GbE QSFP+ & 16 x UP ports (1/10GbE or 4/8/16G FC)
• 2.43Tbps switching performance
• 1RU fixed form factor, two power supplies & four fans
IOM 2304
• 8 x 40GbE server links & 4 x 40GbE QSFP+ uplinks
• 960Gbps switching performance
• Modular IOM for UCS 5108
FI 6300 Series Hardware Overview
FI 6332: 26 x 40G QSFP+ * or 98 x 10G SFP+ **, plus 6 x 40G QSFP+
* QSA module required on ports 13-14 to provide 10G support
** Requires QSFP to 4xSFP breakout cable
FI 6332-16UP: 18 x 40G QSFP+ or 72 x 10G SFP+ *, plus 6 x 40G QSFP+ and 16 x UP ports (16 x 1/10G SFP+ or 16 x 4/8/16G FC)
6300 Series (Rear View): 2 x power supplies, 4 x fans, serial ports, L1 & L2 high-availability cluster ports
Chassis Connectivity
UCS Fabric Topologies: Chassis Bandwidth Options
• 2x 1 link: 20 Gbps per chassis (80 Gbps per chassis with IOM-2304)
• 2x 2 links: 40 Gbps per chassis (160 Gbps per chassis with IOM-2304)
• 2x 4 links: 80 Gbps per chassis (320 Gbps per chassis with IOM-2304)
• 2x 8 links: 160 Gbps per chassis (2208XP only)
UCS 2200 IO Module (FEX)
• UCS-IOM-2204XP
  • 40G to the network
  • 160G to the hosts (2x10G per half-width slot, 4x10G per full-width slot)
• UCS-IOM-2208XP
  • 80G to the network
  • 320G to the hosts (4x10G per half-width slot, 8x10G per full-width slot)
UCS-IOM-2304
• Interfaces:
  • NIF: 4 x 40G QSFP+ uplinks, connects only to FI 63xx
  • HIF: 32 interfaces; each runs at 10G, or four can be combined into a single 40G HIF
VN-TAG: Pre-Standard IEEE 802.1BR
[Frame encapsulation: Application / Payload / TCP / IP / Ethernet, with the VN-TAG inserted into the Ethernet frame]
[Diagram: FEX architecture — frames between the FEX and the switch carry a VN-TAG; standard frames towards the LAN]
VN-TAG fields: Ethertype, destination virtual interface, source virtual interface, plus control bits (d, p, l, r, ver)
UCS IOM 220x Architecture
[Block diagram: Chassis Management Controller (FLASH, EEPROM, DRAM, CIMC switch, control IO and chassis signals) alongside the Woodside ASIC; the ASIC's internal backplane ports to the blades are the Host Interfaces (HIFs) and its fabric ports to the FI are the Network Interfaces (NIFs)]
Feature              2204-XP     2208-XP
ASIC                 Woodside    Woodside
Fabric Ports (NIF)   4           8
Host Ports (HIF)     16          32
Latency              ~500 ns     ~500 ns
UCSB-2-A# connect nxos
UCSB-2-A(nxos)# show fex detail
FEX: 1 Description: FEX0001 state: Online
…
Extender Model: UCS-IOM-2204XP, Part No: 73-14488-01
pinning-mode: static Max-links: 1
Fabric interface state:
Eth1/3 - Interface Up. State: Active
Eth1/4 - Interface Up. State: Active
Fex Port State Fabric Port
Eth1/1/1 Down Eth1/3
Eth1/1/2 Down None
Eth1/1/3 Up Eth1/4
Eth1/1/4 Down None
Eth1/1/5 Up Eth1/3
Eth1/1/6 Up Eth1/3
Eth1/1/7 Up Eth1/4
Eth1/1/8 Down None
Eth1/1/9 Up Eth1/3
Eth1/1/10 Down None
Eth1/1/11 Up Eth1/4
Eth1/1/12 Up Eth1/4
Eth1/1/13 Up Eth1/3
Eth1/1/14 Down None
Eth1/1/15 Up Eth1/4
Eth1/1/16 Down None
Eth1/1/17 Up Eth1/4
IOM Fabric & Backplane Interfaces
• NIFs: the fabric ports connecting the IOM to the FI
• HIFs: the backplane ports to the blades (e.g. backplane port to blade 1/3)
• A 1G link connects to the CIMC switch
• Each backplane (HIF) port is pinned to an FI-facing (NIF) port
IOM Traffic Rate Monitoring
Statistics are shown from the perspective of the IOM!
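A minimal sketch of checking link rates from the IOM itself, assuming a 220x IOM reachable from the UCSM CLI (the Woodside ASIC commands; output trimmed and illustrative):
UCSB-2-A# connect iom 1
fex-1# show platform software woodside rate
(per-port TX/RX rates for the NIF and HIF interfaces, seen from the IOM's point of view)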
UCS Mini: 6324 Fabric Interconnect
UCS 5108 Chassis (supports existing and future blades)
+ IO Modules
+ 6248 or 6296 Fabric Interconnects
+ UCS B-Series blades
UCS 5108 Chassis (supports existing and future blades)
+ 6324 Fabric Interconnect
= UCS Mini
UCS Mini Secondary Chassis
• A secondary chassis can be added to an existing UCS Mini cluster
• This is achieved by connecting the secondary chassis to the scalability ports on the UCS Mini Fabric Interconnect
• The secondary chassis requires a UCS-2204 or UCS-2208 IOM
• Only one secondary chassis can be connected
• FEX-based rack server connectivity is not supported
Fabric Link Connectivity
Chassis Connectivity Policy
IO Module HIF to NIF Pinning: 2208XP – 1 Link
With a single fabric link, the HIFs of all eight blade slots (HIF1-32) on each IOM are pinned to NIF1.
IO Module HIF to NIF Pinning: 2208XP – 2 Links
With two fabric links, the blade slots are split across the links: odd slots (1, 3, 5, 7) are pinned to NIF1 and even slots (2, 4, 6, 8) to NIF2 on each IOM.
IO Module HIF to NIF Pinning: 2208XP – 4 Links
With four fabric links, the blade slots are distributed across NIF1-NIF4 (slots 1/5 → NIF1, 2/6 → NIF2, 3/7 → NIF3, 4/8 → NIF4) on each IOM.
IO Module HIF to NIF Pinning: 2208XP – 8 Links
With eight fabric links, each blade slot is pinned to its own fabric link (slot n → NIF n) on each IOM.
IOM Link Failure Scenario (4 links, discrete pinning)
• When a fabric link (e.g. NIF4) fails, the blade slots whose HIFs are pinned to that link lose their path on that fabric.
• The affected HIFs are not automatically re-pinned to the surviving links; the blades rely on fabric failover or host NIC teaming to the other fabric.
• Re-pinning across the remaining links happens only when the chassis is re-acknowledged, which re-applies a valid 1/2/4/8-link pinning scheme.
Port-channel Pinning
• With a 2200 IOM, a VIC 1200/1300 adaptor whose DCE links form a port-channel has its HIFs pinned to the fabric port-channel (Po).
• A Gen-1 adaptor with a single 10G link is pinned to an individual fabric link (HIF to NIF).
Increased Bandwidth Access to Blades
• 4 links, discrete (today): 10Gb available bandwidth per blade, statically pinned to individual fabric links, deterministic path
• 8 links, discrete: 20Gb available bandwidth per blade, statically pinned to individual fabric links, deterministic path, guaranteed 10Gb to each blade
• Up to 8 links, port-channel: up to 160Gb available bandwidth per blade, statically pinned to the port-channel, increased and shared bandwidth, higher availability
[Diagram: slots 1-8 behind the FEX connected to the Fabric Interconnect in each of the three topologies]
Server Connectivity
Cisco Virtual Interface Cards (VIC)
• 1st Gen (Palo): M81KR, P81E — 128 PCIe devices, dual 10Gb, 16x PCIe Gen 1
• 2nd Gen (Sereno): 1240, 1280, 12xx — 256 PCIe devices, dual 40Gb (4 x 10Gb), 16x PCIe Gen 2
• 3rd Gen (Cruz): 1340, 1380 — dual 8x PCIe Gen 3, VXLAN & NVGRE, native 40Gb support, RoCE
Fabric Extender Evolution – Virtual Interfaces
FEX
Adapter FEX
VIF
LIF
• VN-TAG/IEEE 802.1BR allows cascading FEXs
• Cisco VIC is an extension of FEX
• VN-TAG associates the Logical Interface (LIF) to a Virtual Interface (VIF)
VIC 1240/1340 + Port Expander Card
• 1240 (Sereno) and 1340 (Cruz) are Modular LOM (mLOM) adaptors
• The base option supports dual 2x10Gb
• Port Expander = passive connector device
• The Port Expander fits in the Mezzanine slot
• mLOM vs Mezzanine placement
VIC 1240/1340 to IOM Connectivity: mLOM only
[Diagram: B200 M3/M4 with the 1340 VIC in the mLOM slot (x8 PCIe Gen 3 to CPU 0), Mezzanine slot empty, connected across the chassis midplane to 2208XP IO Modules and UCS 6248 Fabric Interconnects]
Dual 2x10 Gb port-channels from the VIC 1240/1340 to the 2208 IO Modules.
VIC 1240/1340 to IOM Connectivity: mLOM plus Port Expander
[Diagram: B200 M3/M4 with the 1340 VIC (mLOM) plus the passive Port Expander in the Mezzanine slot, connected to 2208XP IO Modules and UCS 6248 Fabric Interconnects]
The Port Expander increases bandwidth to 80Gbps: dual 4x10Gbps port-channels (Port Channel 1 and Port Channel 2).
What Does The OS See? Connectivity IOM to Adapter
• Implicit port-channel between the UCS 1200/1300 VIC and the 2208 IOM on each side (Side A and Side B)
• Flows are hashed across the port-channel members, e.g. a VM's (1) 10 Gb FTP flow and (2) 10 Gb UDP flow can land on different member links
UCSB-2-A# connect nxos
UCSB-2-A(nxos)# show port-channel summary
--------------------------------------------------------------------------------
Group Port- Type Protocol Member Ports
Channel
--------------------------------------------------------------------------------
11 Po11(SD) Eth LACP Eth1/11(D)
88 Po88(SD) Eth LACP Eth1/20(D)
...
1314 Po1314(SU) Eth NONE Eth1/1/5(P)
1315 Po1315(SU) Eth NONE Eth1/1/6(P)
UCSB-2-A(nxos)#
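On the host side, standard Linux tooling shows what the OS is presented with; a minimal sketch with illustrative output (device names and the reported speed are assumptions, the PCI description string is what the enic driver typically exposes):
host# lspci | grep -i cisco
0b:00.0 Ethernet controller: Cisco Systems Inc VIC Ethernet NIC (rev a2)
host# ethtool eth0 | grep Speed
        Speed: 20000Mb/s   <-- the vNIC reports the aggregate speed of the implicit port-channel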
VIC 1x40 & 1x80 to IOM Connectivity
[Diagram: B200 M3/M4 with the 1340 VIC (mLOM) plus a VIC 1380 in the Mezzanine slot, connected to 2208XP IO Modules and UCS 6248 Fabric Interconnects]
• VIC 1380 provides adapter redundancy: vNICs can be split across the two adapters
• Four 2x10 Gb port-channels
• No mixing of 12xx and 13xx VICs on the same blade
Full Width Blade to IOM Connectivity: mLOM, Port Expander, VIC 1x80
[Diagram: B260 M4 with the 1340 VIC (mLOM, x8 Gen 3) plus Port Expander and a VIC 1380, CPUs connected over QPI, 4x10 links to 2208XP IO Modules and UCS 6248 Fabric Interconnects]
Total bandwidth is 160G: four 40G (4x10) port-channels.
IOM 2304 and Adapter Connection: VIC 1340 only
[Diagram: blade server with the VIC 1340 (Mezz 1 empty), PCIe lanes to CPU 0 and CPU 1 over QPI, active and passive KR lanes to IOMs 23XX-A and 23XX-B]
20G (2x10G) per fabric.
2304 IOM and Adapter Connection: VIC 1340 plus Port Expander
[Diagram: blade server with the VIC 1340 plus Port Expander Card, PCIe lanes to CPU 0 and CPU 1 over QPI, connected to IOMs 23XX-A and 23XX-B]
40G (native) per fabric.
2304 IOM and Adapter Connection: VIC 1340 plus VIC 1380
• Adapter resiliency: two independent adapters
• vCon placement
• Four 20G connections (each 20G is 2x10)
[Diagram: blade server with the VIC 1340 and VIC 1380, PCIe lanes to CPU 0 and CPU 1 over QPI, connected to IOMs 23XX-A and 23XX-B]
UCS Mini: Fabric to Server Connectivity
Same server-side connectivity as the 2204XP IOM
40G per half-width blade
Fabric Forwarding - Ethernet
Ethernet Fabric Forwarding Modes of Operation
• Switch mode:
  • FI acts like a regular Ethernet switch
  • VLAN/MAC based forwarding
• End-host mode (EHM):
  • No spanning-tree protocol (STP)
  • Active/active on all links & VLANs
  • Policy-based forwarding
LAN
End Host Mode
• MAC addresses are learned only from server interfaces
• vNICs are pinned to uplink interfaces
[Diagram: FI A (Fabric A) with server 1 and server 2 vNICs on vEth 1 and vEth 3 in VLAN 10; MAC learning on the server ports only, local L2 switching between servers, no spanning tree towards the LAN]
End Host Mode: Unicast Forwarding
• Policies prevent packet loops:
  1. No uplink-to-uplink forwarding
  2. Déjà vu check
  3. RPF check
• No unknown unicast flooding from the uplinks
• Silent VM? Compare the FI MAC aging timer with the router ARP timeout
[Diagram: FI with uplink ports to the LAN and vEth 1/vEth 3 to server 1 and server 2 in VLAN 10; the déjà vu and RPF checks are applied on the uplink ports]
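A quick way to see end-host-mode MAC learning from the FI's NX-OS shell; a minimal sketch with one illustrative entry (the MAC address and Veth number are assumptions):
UCSB-2-A# connect nxos
UCSB-2-A(nxos)# show mac address-table dynamic
* 10       0050.56aa.0001   dynamic   0          F    F  Veth881
(only MACs behind server vEth interfaces appear; uplink-learned MACs are absent in end-host mode)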
End Host Mode: Broadcast Forwarding
• Broadcast traffic for a VLAN is pinned to one uplink port only
• A per-VLAN broadcast listener prevents duplicate packets
• Server-to-server broadcast traffic is locally switched
• The RPF and déjà vu checks also apply to broadcast traffic
[Diagram: FI with uplink ports to the LAN and vEth 1/vEth 3 to server 1 and server 2; broadcasts are accepted on one uplink per VLAN]
Designated Receiver - Broadcast
[Diagram: several uplinks carry VLAN 511; one uplink is the designated receiver (DR) for VLAN 511]
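To check which interface is currently the designated receiver for a VLAN, an internal ENM command on the FI's NX-OS shell is commonly referenced; treat the exact command and output as an assumption that may vary by release:
UCSB-2-A(nxos)# show platform software enm internal info vlandb id 511
(the output lists the uplink or port-channel acting as the designated receiver for VLAN 511)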
End Host Mode: Disjoint L2 Domains
[Diagram: FIs 6200 A and 6200 B in EHM with two upstream domains, Prod (VLANs 10,20,30) and DMZ (VLANs 40,50,60), and a Prod server and a DMZ server below]
• UCSM by default assumes all uplinks carry all VLANs
• With disjoint upstream domains, a DMZ broadcast can be pinned to an uplink facing the Prod network, so the DMZ server cannot see DMZ broadcasts unless VLANs are explicitly assigned to uplinks
Switch Mode
• Fabric Interconnect behaves like a normal L2 switch
• Rapid-STP+ prevents loops
• Server vNIC traffic follows STP forwarding states
• MAC address learning on both uplinks and server links
[Diagram: FI in switch mode with server 1 and server 2 on vEth 1/vEth 3 in VLAN 10, uplinks towards the LAN and the STP root]
Uplink Pinning
End Host Mode - Dynamic Pinning
• UCSM manages the vEth pinning to the uplinks
• The pinned uplink must carry the VLAN(s) used by the vNIC
• UCSM periodically redistributes the vEths
[Diagram: FI A with vEth 1, vEth 2 and vEth 3 (servers 1-3) pinned to uplinks carrying VLAN 10 and VLANs 20,30; local switching between servers]
End Host Mode – Individual Uplinks
[Diagram: Fabric A with vEth 1 (ESX host 1, vNIC 0) and vEth 3 (server 2, vNIC 0) each pinned to an individual uplink; local L2 switching between the servers]
Dynamic Re-pinning of Failed Uplinks
[Diagram: FI-A re-pins vEth 1 (vNIC 0 of an ESX host running VMs with MACs A, B and C behind a vSwitch/N1K) from the failed uplink to a surviving uplink]
• The vNIC stays up; re-pinning is sub-second
• GARPs sent on the new uplink aid upstream convergence
End Host Mode – Port Channel Uplinks (RECOMMENDED)
[Diagram: FI-A with vEth 1 (ESX host 1) and vEth 3 (server 2) pinned to an uplink port-channel; VMs MAC A, B and C behind a vSwitch/N1K]
• More bandwidth per uplink, fewer moving parts
• On a member-link failure there is no server NIC disruption: the NIC stays up, no GARPs are needed, and convergence is sub-second
End Host Mode – Static Pinning (LAN Pin Group)
• The administrator controls the vEth pinning
• Deterministic traffic flow
• No re-pinning within the same FI
• Static and dynamic pinning can co-exist
[Diagram: FI A with servers 1-3 on vEth 1, vEth 2 and vEth 3]
Administrator pinning definition:
vEth Interface   Uplink
vEth 1           Blue
vEth 2           Blue
vEth 3           Purple
Which uplink is a server's vNIC pinned to?
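To answer that question on a running system, the FI's NX-OS shell exposes the pinning tables; a minimal sketch (interface numbers vary per deployment):
UCSB-2-A# connect nxos
UCSB-2-A(nxos)# show pinning server-interfaces
UCSB-2-A(nxos)# show pinning border-interfaces
(the first lists each vEth and the uplink or port-channel it is pinned to; the second lists each uplink and the vEths pinned behind it)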
Fabric Forwarding - Storage
SAN “End Host” NPV Mode: N-Port Virtualisation Forwarding
[Diagram: server 1 and server 2 (VSAN 1) with vHBA 0/1 on vFC 1-4; 6200-A and 6200-B act as N_Proxy towards NPIV-capable SAN A (VSAN 10) and SAN B (VSAN 20); server FLOGIs are converted to FDISCs on the N_Port to F_Port uplinks]
• vHBAs are pinned to SAN uplinks
• The FI proxies FC services to the upstream NPIV switch
• FI in NPV mode means:
  – Uplinks connect to an F_Port
  – No domain ID consumption
  – Multi-vendor interoperability
  – Zoning is performed upstream
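NPV operation can be verified from the FI's NX-OS shell with the standard NPV commands; a minimal sketch (output trimmed):
UCSB-2-A# connect nxos
UCSB-2-A(nxos)# show npv status
UCSB-2-A(nxos)# show npv flogi-table
(shows the external NP uplinks that are up, and each server vHBA login together with the external interface it is pinned to)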
SAN “End Host” NPV Mode: N-Port Virtualisation Forwarding with MDS, Nexus 5000
[Diagram: 6200-A and 6200-B uplinked to NPIV-capable MDS/Nexus 5000 switches over F_Port port-channels trunking VSANs 1 and 2; server 1 (VSAN 1) and server 2 (VSAN 2) vHBAs on vFC 1-4]
• Port-channel support
  – Increased bandwidth
  – Redundancy
• VSAN trunking support
UCSB-2-B(nxos)# show vsan
vsan 1 information
name:VSAN0001 state:active
interoperability mode:default
loadbalancing:src-id/dst-id/oxid
operational state:up
SAN FC Switch Mode: Direct Attach FC & FCoE Storage to UCS SAN
[Diagram: 6200-A and 6200-B operating as FC switches with directly attached FC and FCoE storage targets (VSAN 1 and VSAN 2); optional TE_Port uplinks to upstream MDS switches]
• UCS acts like an FC SAN switch
• Local or remote zoning
• Direct attached storage
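In FC switch mode, the standard NX-OS Fibre Channel commands apply on the FI; a minimal sketch for checking logins and the active zoning (the VSAN number is illustrative):
UCSB-2-A# connect nxos
UCSB-2-A(nxos)# show flogi database
UCSB-2-A(nxos)# show zoneset active vsan 1
(lists the directly attached initiators and targets that have logged in, and the zones currently enforced when UCS local zoning is used)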
3rd Generation FI Port Allocation
• 6332-16UP: unified ports plus 40G-only Ethernet ports
• 6332: 40G Ethernet only
FC Port Configurations
• Slider bar, moved left to right over contiguous ports
• Changing the configuration requires a system reboot
• 3 blocks:
  • Block 1: 6 FC ports (1/1-6)
  • Block 2: 12 FC ports (1/1-12)
  • Block 3: 16 FC ports (1/1-16)
• FC ports are enabled by default
Operation Mode for FC/FCoE
• End-Host (NPV) Mode
  – UCS functions as a node port (initiator)
  – Required for connecting FC to non-MDS FC switches
• FC Switching Mode
  – Upstream MDS or Nexus FC switch required
  – Required for the UCS local zoning feature
  – Direct connect from the Fabric Interconnect to FC/FCoE storage targets
Operation Mode for Ethernet/iSCSI/NAS
• End-Host Mode
  – Appliance ports allow direct connect Ethernet/iSCSI/NAS storage targets
• Ethernet Switch Mode
  – No storage-based reasons to use this mode
Operation Mode vs. Features
M-Series
Shared Resources
UCS M-Series Architecture
• 2 x 40 Gb uplinks
• Virtual storage
• Virtual network
• Shared power & cooling
• Independent server management
• 8 cartridge slots, 4 PCIe Gen3 lanes per slot
• Flexible compute and memory
System Link Technology Overview
• System Link Technology is built on proven Cisco Virtual Interface Card (VIC) technologies
• VIC technologies use standard PCIe architecture to present an endpoint device to the compute resources
• VIC technology is a key component of the UCS converged infrastructure
• In the M-Series platform this technology has been extended to provide access to PCIe resources local to the chassis, such as storage
[Diagram: each operating system sees its own ethX PCIe endpoints presented by the chassis ASIC]
System Link Technology
• System Link technology provides the same capabilities as a VIC to configure PCIe devices for use by the server
• The difference with System Link is that it is an ASIC within the chassis and not a PCIe card
• The ASIC is core to the M-Series platform and provides access to I/O resources
• The ASIC connects devices to the compute resource through the system mid plane
• System Link provides the ability to access network and storage shared resources
[Diagram: the ASIC presents virtual drives to the servers and forwards SCSI commands to the shared chassis storage]
• Same ASIC as used in the 3rd Generation VIC
• M-Series takes advantage of additional features, including:
  • Gen3 PCIe root complex for connectivity to chassis PCIe cards (e.g. storage)
  • 32 Gen3 PCIe lanes connected to the cartridge CPUs
  • 2 x 40Gbps uplinks
  • Scales to 1024 PCIe devices created on the ASIC (e.g. vNICs)
[Diagram: ASIC with 32 Gen3 PCIe lanes to the cartridges, 2 x 40Gbps QSFP uplinks to the network, and a link to the chassis storage]
System Link Technology
Mapping network resources to the M-Series servers
• System Link Technology provides the network interface connectivity for all of the servers
• Virtual NICs (vNICs) are created for each server and are mapped to the appropriate fabric through the service profile in UCS Manager
• Servers can have up to 8 vNICs
• The operating system sees each vNIC as a 40Gbps Ethernet interface, but vNICs can be rate limited and provide hardware QoS marking
• Interfaces are 802.1Q capable
• Fabric failover is supported: in the event of a failure, traffic is automatically moved to the second fabric
[Diagram: host PCIe interfaces eth0/eth1 on each server mapped to Fabric Interconnect A and Fabric Interconnect B]
Networking Capabilities
• The System Link ASIC supports 1024 virtual devices; current scale limits are 8 vNICs per server
• All network forwarding is provided by the fabric interconnects; there is no forwarding local to the chassis
• Currently the network uplinks for the M-Series chassis support Ethernet traffic only; the M-Series servers can connect to external IP storage such as NFS, CIFS, HTTPS, or iSCSI
• FCoE connectivity will be supported in a future release
• iSCSI boot is supported; see the UCS Interoperability Matrix for details
Typical UCS Deployment
Recommended topology for upstream connectivity
[Diagram: FI-A and FI-B in end-host mode, uplinked via vPC/VSS to the access/aggregation layer; ESX hosts 1 and 2 each with vNIC 0 on FI-A and vNIC 1 on FI-B, vSwitch/N1K MAC pinning for VM1-VM4, L2 switching within the FI]
UCS VM Traffic Flow (all VMs in the same VLAN)
• VM1 to VM2: switched locally in the vSwitch/N1K on the same host
• VM1 to VM3: if both VMs' vNICs are pinned to the same fabric, the traffic is locally switched within that FI
• VM1 to VM4: if the VMs' vNICs are pinned to different fabrics, the traffic must traverse the upstream vPC/VSS switches, since the FIs have no data path between each other
• Chassis Connectivity
• Server Connectivity
• Fabric Forwarding
• M-Series
Summary
Q & A
Complete Your Online Session Evaluation
Give us your feedback and receive a Cisco 2016 T-Shirt by completing the Overall Event Survey and 5 Session Evaluations:
– Directly from your mobile device on the Cisco Live Mobile App
– By visiting the Cisco Live Mobile Site http://showcase.genie-connect.com/ciscolivemelbourne2016/
– At any Cisco Live Internet Station located throughout the venue
T-Shirts can be collected Friday 11 March at Registration
Learn online with Cisco Live! Visit us online after the conference for full access to session videos and presentations: www.CiscoLiveAPAC.com
Thank you