CDWG Datacenter Presentation
Featuring Cisco Unified Computing System -
Nexus Platform
Agenda
• Brief Review-Cisco Data Center 3.0 topics
• Why Do We Need Another Server/UCS
• The Nuts and Bolts Of UCS
• What Makes UCS Different/Better
• Nexus 7K, 5K, and 2K
• Q&A
The Evolution of Data Center "Architectures"
[Chart: Application Architecture Evolution vs. IT Relevance and Control]
• Data Center 1.0 (Mainframe): Centralized
• Data Center 2.0 (Client-Server and Distributed Computing): Decentralized
• Data Center 3.0 (Service Oriented and Web 2.0 Based): Virtualized
Consolidate, Virtualize, Automate
Data Center Vision: A Unified Fabric
Today:
• Multiple I/O types (separate LAN, SAN, and cluster networks connecting storage, server clusters, and the Internet/intranet)
• Multiple management mechanisms
With DCE & FCoE:
• Unified and virtualized I/O
• Built-in interoperability
• LAN, SAN, and cluster traffic carried on one Unified Fabric
Why Do We Need Another Server: Blades
Second generation
• Blade servers
• Integrated switches
• Fixed backplane
Benefits
• Space utilization
• Cable aggregation
• Power efficient
Weaknesses
• I/O flexibility
• Aggregate management
• Large chassis needed to amortize switch/mgmt costs
[Figure: dozens of standalone servers, grouped under multiple separate management systems]
Why Do We Need Another Server: Management
Chassis Management
• New management layer
Benefits
• Consistency in chassis
• Shared chassis infrastructure monitoring
Weaknesses
• Additional mgmt overhead
• Additional cost overhead
• Need chassis aggregation management
Enclosure, Interconnect, & Blades (Front)
• 6U enclosure
• 1U or 2U Fabric Interconnect
• Half-slot server blades: up to eight per enclosure
• Full-slot server blades: up to four per enclosure
• Mix blade types within an enclosure
• Ejector handles
• Hot-swap SAS drives
• Redundant, hot-swap power supplies and fans
Rear View of Enclosure and Interconnect
• Redundant fabric extenders
• Redundant, hot-swap fan modules (with fan handle)
• 10GigE ports
• Expansion bay
UCS B Series Model Comparison
UCS 6100 Series Fabric Interconnects
UCS C Series Model Comparison
Cabling Comparison (System in Production)
Cisco Unified Computing System vs. HP C-Class:
• 40% cost savings in cabling, fiber, patch cords, and labor (86% cable reduction)
• 30% more power available to servers
• 50%+ more physical servers in the same space
• Up to 28,000 virtual machines versus 7,200 in a legacy environment of the same size
• Up to 4x more virtual machines per kilowatt of power; a minimum of 76 virtual machines deployed per kilowatt
Nexus 7000: Data Center Core/Aggregation
Nexus 4000: Unified Fabric Blade Switch
Nexus 5000: Unified Server Access
Nexus 2000: Remote Module & Scale
Nexus 1000v: VM-Aware Policy Switching
Nexus 1000v Overview
Cisco Nexus 5548P Chassis
Cisco Nexus 7000 Platform Overview
Comprehensive Data Center Feature Set
Layer 3
• Fully Distributed IPv4 and IPv6 unicast hardware forwarding
• 128K FIB TCAM entries
• OSPF, EIGRP, IS-IS, BGP, RIP, PBR
• PIM-SM, SSM, Bidir, MSDP, MP-BGP
• IGMP/MLD
• 16-way ECMP
• HSRP, GLBP, VRRP with object tracking
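The 16-way ECMP bullet above means the forwarding hardware hashes each flow's header fields to select one of up to 16 equal-cost next hops, keeping every packet of a flow on the same path. A minimal sketch of the idea (the MD5 hash here is purely illustrative; real line-card ASICs use their own hardware hash functions):

```python
import hashlib
import struct

def ecmp_path(src_ip: str, dst_ip: str, src_port: int, dst_port: int,
              proto: int, n_paths: int = 16) -> int:
    """Pick one of n_paths next hops by hashing the flow's 5-tuple.

    The same flow always maps to the same path, which preserves
    per-flow packet ordering while spreading flows across links.
    """
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = hashlib.md5(key).digest()
    return struct.unpack(">I", digest[:4])[0] % n_paths

# Repeated lookups for the same flow land on the same path:
p1 = ecmp_path("10.0.0.1", "10.0.1.9", 49152, 443, 6)
p2 = ecmp_path("10.0.0.1", "10.0.1.9", 49152, 443, 6)
assert p1 == p2 and 0 <= p1 < 16
```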
Virtualization
• VRF-lite
• Virtual Device Contexts (VDCs)
High Availability
• In-Service Software Upgrade (ISSU)
• Non-Disruptive Stateful supervisor switchover
• Stateful process restarts
• Graceful restart for routing protocols
• Smart Call Home
• GOLD
Operational Manageability
• 512K NetFlow table
Layer 2
• Distributed Layer 2 hardware switching
• Hardware MAC learning
• 128K MAC table entries
• 16K unique VLANs (4K per VDC)
• PVRST, MST
• BPDU Guard, Root Guard, Loop Guard, BPDU Filter, Bridge Assurance
• Link Aggregation Control Protocol (LACP/802.1AD)
• Private VLANs
• Virtual Port Channel (vPC)
Security
• 64K classification TCAM entries
• RACLs, VACLs, PACLs
• H/W-based Cisco TrustSec & LinkSec (802.1AE)
• CoPP and rate limiters
• DHCP snooping, DAI, IP source guard
• Port security and 802.1x
• Storm control
• Unicast RPF check
• Role-based management
Quality of Service
• Ingress queuing with WRED and tail drop
• Egress queuing (with PQ or shaping) with WRED and tail drop
• Marking policies and mutation
• Ingress and egress 1-rate 2-color and 2-rate 3-color policing
• Color-aware policing
• MQC CLI model
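The 2-rate 3-color policing bullet refers to the classic two-rate three-color marker (in the spirit of RFC 2698): traffic within the committed rate is green, traffic between committed and peak rate is yellow, and traffic above peak is red. A conceptual token-bucket sketch, assuming byte-per-second rates (hardware policers do this per-packet at line rate in the ASIC; this model is only illustrative):

```python
import time

class TwoRateThreeColorMarker:
    """Sketch of a 2-rate 3-color marker (cf. RFC 2698).

    cir/pir are in bytes per second; cbs/pbs are bucket depths in bytes.
    """
    def __init__(self, cir, cbs, pir, pbs):
        self.cir, self.cbs = cir, cbs
        self.pir, self.pbs = pir, pbs
        self.tc, self.tp = cbs, pbs          # token buckets start full
        self.last = time.monotonic()

    def mark(self, size):
        now = time.monotonic()
        elapsed = now - self.last
        self.last = now
        # Refill both buckets, capped at their burst sizes
        self.tc = min(self.cbs, self.tc + self.cir * elapsed)
        self.tp = min(self.pbs, self.tp + self.pir * elapsed)
        if size > self.tp:                   # exceeds even the peak rate
            return "red"
        self.tp -= size
        if size > self.tc:                   # exceeds the committed rate
            return "yellow"
        self.tc -= size
        return "green"

policer = TwoRateThreeColorMarker(cir=1000, cbs=1500, pir=2000, pbs=3000)
assert policer.mark(500) == "green"   # fits within the committed bucket
```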
Introducing the Cisco Nexus 7000
Nexus 7000 Platform
Industry’s First Data Center Class Platform
Nexus 7000 and NX-OS
•10 & 18 Slot versions
•15+ Terabit System
•Unified Fabric Ready
•Modern, Modular OS
•Device Virtualization
•Cisco TrustSec
•Continuous Operations
Cisco NX-OS Multi-protocol Operating System
Data Center Network Manager (DCNM)
Nexus 7010
8 I/O Slots + 2 Supervisor Slots
Front to Back Airflow
256 10GbE (4:1) / 64 Ports line rate
384 10/100/1000 Ports
Nexus 7018
16 I/O Slots + 2 Supervisor Slots
Side to Side Airflow
512 10GbE (4:1) / 128 Ports line rate
768 10/100/1000 Ports
Nexus 7010 Chassis (N7K-C7010)
Front:
• System status LEDs
• Integrated cable management with cover
• Optional locking front doors
• Supervisor slots (5-6)
• Payload slots (1-4, 7-10)
• Air intake with optional filter
• ID LEDs on all FRUs
• Locking ejector levers
Rear:
• Air exhaust
• Crossbar fabric modules
• System fan trays
• Fabric fan trays
• Power supplies
• Common equipment removes from rear
21RU, front-to-back airflow, two chassis per 7' rack
Nexus 7018 Chassis (N7K-C7018)
Front:
• System status LEDs
• Integrated cable management
• Optional front door
• Supervisor slots (9-10)
• Payload slots (1-8, 11-18)
Rear:
• Crossbar fabric modules
• Power supplies and power supply air intake
• System fan trays
• Common equipment removes from rear
25RU, side-to-side airflow
Supervisor Engine
• Performs control plane and management functions
• Dual-core 1.66GHz Intel Xeon processor with 4GB DRAM
• 2MB NVRAM, 2GB internal bootdisk, compact flash slots
• Out-of-band 10/100/1000 management interface
• Always-on Connectivity Management Processor (CMP) for lights-out management
• Console and auxiliary serial ports
• USB ports for file transfer
N7K-SUP1 front panel: ID LED, status LEDs, console port, AUX port, management Ethernet, CMP Ethernet, USB ports, compact flash slots, reset button
Nexus 7000 Module Overview
Crossbar Switch Fabric Module
• Each fabric module provides 46Gbps per I/O module slot
• Initially shipping I/O modules do not leverage full fabric bandwidth
–Maximum 80G per slot with 10G module (160G full duplex)
–Future modules leverage additional available fabric bandwidth
• Access to fabric controlled using QoS-aware central arbitration with VOQ
N7K-C7018-FAB-1
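The "QoS-aware central arbitration with VOQ" bullet refers to virtual output queuing: instead of one FIFO per ingress (which suffers head-of-line blocking when the frame at the head waits for a busy egress), each ingress keeps a separate queue per egress slot, and a central arbiter grants fabric access per egress. A purely conceptual sketch, not the actual N7K scheduler:

```python
from collections import deque

class VirtualOutputQueues:
    """Sketch of virtual output queuing (VOQ) at one ingress module.

    One queue per egress slot lets the arbiter serve any egress that
    is free, so a congested egress never blocks traffic to the others.
    """
    def __init__(self, n_egress: int):
        self.voq = [deque() for _ in range(n_egress)]

    def enqueue(self, egress: int, frame: bytes) -> None:
        self.voq[egress].append(frame)

    def dequeue_for(self, egress: int):
        """Called when the central arbiter grants this ingress fabric
        access toward the given egress slot."""
        return self.voq[egress].popleft() if self.voq[egress] else None

voqs = VirtualOutputQueues(n_egress=8)
voqs.enqueue(3, b"frame-A")   # destined to egress slot 3
voqs.enqueue(5, b"frame-B")   # a busy slot 3 no longer blocks slot 5
assert voqs.dequeue_for(5) == b"frame-B"
```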
Nexus 5000/2000
& FCoE
Nexus 5000 Rear Panel Organization (Nexus 5020 and Nexus 5010)
• Expansion module slots
• Power entry
• 10GE-only ports and 1/10GbE-capable ports
• 10/100/1000 out-of-band mgmt
Expansion modules:
• (8) 4/2/1G FC ports
• (6) 10GE ports
• (6) 8G FC ports
• Combo Eth/FC
Cisco Nexus 2000 Fabric Extender (FEX)
Nexus 2148T - Fabric Extender overview
48 x 1 GE interfaces
4 x 10 GE interfaces
Beacon and status LEDs
Redundant, hot-swappable power
supplies
Hot-swappable fan tray
1RU chassis
New Nexus 2000 Models
Nexus 2248TP-GE
• 48 100/1000 RJ45 downlinks
• 4 10GE SFP+ uplinks
Nexus 2232PP-10GE
• 32 10GE/FCoE SFP+ downlinks
• 8 10GE/FCoE SFP+ uplinks
What is FET-10G?
• FET is an optical transceiver that provides a highly cost-effective solution for connecting Nexus 2000 to Nexus 5000 (N2K to N5K only)
• A FET-10G must be connected to another FET-10G
• Supported on N2248TP / N2232PP uplinks and N5010P / N5020P 10G fabric links
• MMF reach: 25m (OM2), 100m (OM3)
• Not compatible with SR optics
New NX-OS 4.2(1)N1 Software Features
• Local Port Channels on FEX-10G and FEX-100/1000
• In-Service Software Upgrade (ISSU)
• F-Port Trunking and Channeling
• VTP Transparent Mode Support
Additional features:
• ACLs for SNMP communities
• Increase to 8K logical ports
• LLDP on FEX ports
• CDP on FEX ports
• AAA command authorization
Unified Fabric with FCoE: Protocol Organization
FCoE Protocol
• The data plane protocol
• A Fibre Channel frame encapsulated within an Ethernet frame
• Carries most of the FC frames and all the SCSI traffic
FIP (FCoE Initialization Protocol)
• The control plane protocol
• Discovers the FCoE-capable devices connected to an Ethernet cloud
• Responsible for the FC login and logout processes
The two protocols have different Ethertypes and different frame formats.
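The data/control split above is visible on the wire: FCoE frames carry Ethertype 0x8906 and FIP frames carry Ethertype 0x8914. A simplified Python sketch of the FCoE encapsulation; the real FC-BB-5 layout has a 4-bit version field, reserved bits, and defined SOF/EOF code points, so the field sizes and the SOF/EOF values shown here are illustrative only:

```python
import struct

ETHERTYPE_FCOE = 0x8906   # FCoE data plane (FC-BB-5)
ETHERTYPE_FIP = 0x8914    # FCoE Initialization Protocol (control plane)

def encapsulate_fcoe(dst_mac: bytes, src_mac: bytes, fc_frame: bytes,
                     sof: int = 0x2E, eof: int = 0x41) -> bytes:
    """Wrap a Fibre Channel frame in an Ethernet frame (simplified).

    The FC frame (24-byte FC header + payload + CRC) rides unchanged
    between the FCoE header and trailer, which is why FCoE needs no
    stateful gateway: the FC frame is simply re-exposed at the far end.
    SOF/EOF code points here are placeholders, not validated values.
    """
    eth_header = dst_mac + src_mac + struct.pack(">H", ETHERTYPE_FCOE)
    fcoe_header = bytes(13) + bytes([sof])    # version/reserved + SOF
    fcoe_trailer = bytes([eof]) + bytes(3)    # EOF + reserved
    return eth_header + fcoe_header + fc_frame + fcoe_trailer
```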
FCoE Benefits
FC over Ethernet (FCoE)
• Mapping of FC frames over Ethernet
• Enables FC to run on a lossless Data Center Ethernet network
• Wire server once
• Fewer cables and adapters
• Software provisioning of I/O
• Interoperates with existing SANs
• No gateway (stateless)
• Standard approved June 3, 2009
Q & A