Post on 14-Mar-2018
transcript
Deploying FlexPod infrastructures, best practices from the field
Ramses Smeyers – Technical Leader, Services
BRKVIR-2260
• A collection of hints and tips gathered while deploying, managing and troubleshooting FlexPod deployments for over 5 years
• Customer deployments
• TAC troubleshooting cases
• Customer / Partner sessions
What is this session about?
• What is a FlexPod?
• Deployment gotchas
• Performance analysis
• Upgrade the stack
• UCS-Director automation
Agenda
What is a FlexPod?
• FlexPod is an integrated computing, networking, and storage solution developed by Cisco and NetApp. Its configurations and workloads are published as Cisco Validated Designs. FlexPod configurations are categorized by established and emerging client needs
• Validated technologies from industry leaders in computing, storage, networking, and server virtualization
• A single platform built from unified computing, fabric, and storage technologies, with popular and trusted software virtualization
• Integrated components that help enable you to centrally manage all your infrastructure pools
• An open design management framework that integrates with your existing third-party infrastructure management solutions
CVD
Architecture - Nexus
Architecture - ACI
Deployment gotchas
• MTU 9000
• ALUA
• FCoE
• Management / production network overlap
• Hypervisor load-balancing
• ACI Dynamic discovery
MTU 9000
• Problem: End-2-End MTU 9000 is not working
• MTU needs to be defined at:
• VMware
• Nexus 1000V (if NFS traffic is handled by Nexus 1000V)
• UCS
• Nexus 5000
• NetApp
• Use the correct ping on VMware:
~ # vmkping -I vmk1 -s 9900 42.33.80.6
PING 42.33.80.6 (42.33.80.6): 9900 data bytes
9908 bytes from 42.33.80.6: icmp_seq=0 ttl=255 time=0.355 ms
~ # vmkping -I vmk1 -s 9900 42.33.80.6 -d
PING 42.33.80.6 (42.33.80.6): 9900 data bytes
sendto() failed (Message too long)
MTU 9000
With -d, payloads larger than 8972 bytes will not work (-d sets do-not-fragment)
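The 8972-byte limit is just header arithmetic: the 9000-byte MTU must also carry the 20-byte IP header and the 8-byte ICMP header in addition to the ping payload. A minimal sketch of the math:

```python
def max_icmp_payload(mtu, ip_header=20, icmp_header=8):
    """Largest vmkping -s payload that fits in one unfragmented frame."""
    return mtu - ip_header - icmp_header

# A 9000-byte MTU leaves 8972 bytes for the ICMP payload;
# the classic 1500-byte MTU leaves the familiar 1472.
print(max_icmp_payload(9000))  # 8972
print(max_icmp_payload(1500))  # 1472
```

Without -d the first ping above "works" even at -s 9900 only because the stack fragments the packet, which is why the -d flag is needed for a real end-to-end MTU test.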
VMware
Nexus 1000V
• Global
bdsol-vc01vsm-01# show running-config | inc jumbo
bdsol-vc01vsm-01# show running-config al | inc jumbo
system jumbomtu 9000
• Port-profile
• bdsol-vc01vsm-01(config)# port-profile type ethernet Uplink
• bdsol-vc01vsm-01(config-port-prof)# system mtu 9000
• Check
bdsol-vc01vsm-01# show interface ethernet 3/3
• Ethernet3/3 is up
• Hardware: Ethernet, address: 0050.5652.0a1f (bia 0050.5652.0a1f)
• Port-Profile is Uplink
• MTU 1500 bytes
• Encapsulation ARPA
• Define MTU on
• QOS System Class
• vNIC template
• QoS Policies
UCS
Nexus 5000 Nexus 7000
policy-map type network-qos jumbo
class type network-qos class-fcoe
pause no-drop
mtu 2158
class type network-qos class-default
mtu 9216
multicast-optimize
system qos
service-policy type network-qos jumbo
system jumbomtu 9216
interface Ethernet1/23
mtu 9216
NetApp
bdsol-3240-01-B> rdfile /etc/rc
hostname bdsol-3240-01-B
ifgrp create lacp dvif -b ip e1a e1b
vlan create dvif 3380
ifconfig e0M `hostname`-e0M flowcontrol full netmask 255.255.255.128 mtusize 1500
ifconfig e0M inet6 `hostname`-e0M prefixlen 64
ifconfig dvif-3380 `hostname`-dvif-3380 netmask 255.255.255.0 partner dvif-3380 mtusize 9000 trusted wins up
route add default 10.48.43.100 1
routed on
options dns.enable on
options nis.enable off
savecore
bdsol-3240-01-B> ifconfig dvif-3380
dvif-3380: flags=0x2b4e863<UP,BROADCAST,RUNNING,MULTICAST,TCPCKSUM> mtu 9000 dad_attempts 2
inet 42.33.80.2 netmask 0xffffff00 broadcast 42.33.80.255
inet6 fe80::a0:98ff:fe36:1838 prefixlen 64 scopeid 0xd autoconf
partner dvif-3380 (not in use)
ether 02:a0:98:36:18:38 (Enabled interface groups)
• Traffic flow (NFS is on Gold)
• VMware → UCS (Gold, CoS 4) → 5K → NetApp
• NetApp → 5K → UCS (Best Effort) → vNIC → Drop (no CoS on traffic)
• Solution
• Remark traffic on the Nexus 5000 NetApp-facing interfaces to CoS 4
• ACL to match NFS traffic and remark it to CoS 4
Watch out for return traffic
ALUA
ALUA?
Verify ALUA on VMware
~ # esxcli storage nmp device list
naa.60a98000443175414c2b4376422d594b
Device Display Name: NETAPP Fibre Channel Disk
(naa.60a98000443175414c2b4376422d594b)
Storage Array Type: VMW_SATP_ALUA
Storage Array Type Device Config: {implicit_support=on;explicit_support=off;
explicit_allow=on;alua_followover=on;{TPG_id=2,TPG_state=AO}{TPG_id=3,TP
G_state=ANO}}
Path Selection Policy: VMW_PSP_RR
Path Selection Policy Device Config:
{policy=rr,iops=1000,bytes=10485760,useANO=0;lastPathIndex=3:
NumIOsPending=0,numBytesPending=0}
Path Selection Policy Device Custom Config:
Working Paths: vmhba2:C0:T0:L0, vmhba1:C0:T0:L0
Is Local SAS Device: false
Is Boot USB Device: false
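When there are many LUNs to check, the TPG states can be pulled out of the SATP device-config string instead of eyeballing the output. A small sketch (the regex matches the config-string format shown above; adapt it if your esxcli output differs):

```python
import re

def tpg_states(satp_config):
    """Map target port group id -> ALUA state (AO/ANO/...) from a
    VMW_SATP_ALUA 'Storage Array Type Device Config' string."""
    return {int(m.group(1)): m.group(2)
            for m in re.finditer(r"TPG_id=(\d+),TPG_state=(\w+)", satp_config)}

# Config string as reported by 'esxcli storage nmp device list' above
cfg = ("{implicit_support=on;explicit_support=off;explicit_allow=on;"
       "alua_followover=on;{TPG_id=2,TPG_state=AO}{TPG_id=3,TPG_state=ANO}}")
print(tpg_states(cfg))  # one Active/Optimized and one Active/Non-Optimized TPG
```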
Verify ALUA on NetApp
bdsol-3220-01-A> igroup show -v
bdsol-esxi-23 (FCP):
OS Type: vmware
Member: 20:00:00:25:b5:52:0a:0e (logged in on: 0d, vtic)
Member: 20:00:00:25:b5:52:0b:0e (logged in on: 0c, vtic)
UUID: 6c2a1c9a-f539-11e2-8cbf-123478563412
ALUA: Yes
Report SCSI Name in Inquiry Descriptor: Yes
igroup show -v <group>
igroup set <group> alua yes
igroup show -v <group>
FCoE
Nexus 5000
interface vfc103
bind interface port-channel103
switchport trunk allowed vsan 70
no shutdown
interface port-channel103
description bdsol-6248-03-A
switchport mode trunk
switchport trunk allowed vlan 970
interface Ethernet1/3
description bdsol-6248-03-A
switchport mode trunk
switchport trunk allowed vlan 970
spanning-tree port type edge trunk
spanning-tree bpdufilter enable
channel-group 103 mode active
• Default policy (show running-config all)
policy-map type network-qos fcoe-default-nq-policy
class type network-qos class-fcoe
pause no-drop
mtu 2158
class type network-qos class-default
multicast-optimize
system qos
service-policy type network-qos fcoe-default-nq-policy
• cisco.com documented MTU 9000 policy
policy-map type network-qos jumbo
class type network-qos class-default
mtu 9216
multicast-optimize
system qos
service-policy type network-qos jumbo
QoS policy: no FCoE class defined (applying this policy removes the no-drop class and breaks FCoE)
UCS
Management / production network overlap
• Problem: customer is unable to reach the UCS management interface (+ NetApp and Nexus 5000)
• Situation:
• Topology: NetApp ↔ Nexus 5000 ↔ UCS
• Mgmt topology: 3750 connected to NetApp mgmt / Nexus 5000 mgmt0 / UCS management, and connected to the Nexus 5000 via port-channel / vPC
• Management VLAN is also used for Server / VM traffic
• Cause:
• VM/Server caused ethernet/VLAN overload on 3750 due to broadcast flooding
Management / production network overlap
Hypervisor load-balancing
Hypervisor load-balancing – UCS-B
Each Fabric Interconnect has a port-channel towards the Nexus 5000 vPC pair.
The Fabric Interconnects are connected only for clustering; no data traffic is on that link.
The hypervisor running on a blade has 2 independent connections, so no switch-dependent protocols can be used.
Using IP-hash algorithms will cause MAC flaps on the UCS FIs and N5Ks.
ACI - Port-group traffic distribution settings
By default, port-groups instantiated on the DVS use 'route based on originating virtual port-id'. This works well with all types of servers.
If you attach a LACP policy to your vSwitch Policies, you will set the traffic distribution to IP hash! Be careful with UCS-B series, as this is not supported (no vPC between the FIs).
See also “Setting up an Access Policy with Override Policy for a Blade Server Using the GUI”: http://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/apic/sw/1-x/getting-started/b_APIC_Getting_Started_Guide/b_APIC_Getting_Started_Guide_chapter_010.html#d19263e464a1635
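The MAC-flap problem follows directly from how IP hash spreads flows. A rough sketch of the selection logic (simplified assumption: XOR of the IPv4 addresses modulo the uplink count; the exact vSphere hash differs in detail, but the effect is the same, the same source MAC shows up on different uplinks depending on the peer IP):

```python
import ipaddress

def ip_hash_uplink(src, dst, n_uplinks):
    """Simplified 'route based on IP hash' uplink choice: XOR of the two
    IPv4 addresses, modulo the number of active uplinks. Sketch only."""
    return (int(ipaddress.IPv4Address(src)) ^
            int(ipaddress.IPv4Address(dst))) % n_uplinks

# Two flows from the same VM can hash to different uplinks, so the VM's
# MAC is seen behind both fabrics -> MAC flaps on the FIs and N5Ks.
print(ip_hash_uplink("10.0.0.10", "10.0.0.20", 2))
print(ip_hash_uplink("10.0.0.10", "10.0.0.21", 2))
```

With 'originating virtual port-id', a given vNIC always maps to one uplink, which is why that policy is safe on UCS-B while IP hash is not.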
ACI Dynamic discovery
Prerequisites & Supported Config
• For dynamic EPG download, only one L2 hop between the host/ESX and the iLeaf is supported
• A management address *must* be configured on the L2/blade switch
• CDP or LLDP must be configured to advertise the management TLV on the blade switch
• The UCS FI needs to be on version 2.2(1c) or later
• If multiple L2 hops are needed, an EPG with static binding (using node+port selectors) should be used.
ESX Discovery in Fabric (for dynamic download of policies to the Leaf)
1. Leaf sends LLDP* to ESX (includes Leaf port name)
2. ESX sends parsed LLDP information to vCenter
3. APIC receives LLDP information from vCenter
4. APIC downloads policy for VMs behind the ESX host to the Leaf node when using immediate deployment
*Can use CDP instead of LLDP
• With LLDP enabled, the VIC consumes the LLDP frames
• On C-Series, disable LLDP
ESX Discovery in Fabric (with Blade Switch)
For dynamic download of policies to the Leaf:
1. Blade switch sends LLDP* to Leaf and ESX
2. ESX & Leaf send parsed LLDP information to vCenter & APIC respectively
3. APIC receives LLDP information from vCenter
4. APIC downloads policy on all Leafs providing a path to the ESX host when using immediate deployment
*Can use CDP or a mix of CDP/LLDP
ESX Discovery issue with UCS
On the FI, LLDP works for fabric links but not for host-facing links. LLDP is not working there, so enable CDP through policy config:
            Fabric Uplinks   Host Links
  Option-1  LLDP             CDP
  Option-2  CDP              CDP
The scope of the CDP policy is UCS FI to ESX server; this is supported through the Attach Entity Profile override.
CDP for ESX discovery
• No change:
  - Static EPG at Leaf ports
  - OpFlex using N1K
• Configuration change:
  - Enable CDP policy for vNICs in the UCSM service profile
  - APIC (2 options):
    1. CDP override for Host <-> UCS FI using Attach Entity Profile override
       - CDP on ESX/DVS
       - LLDP on fabric/Leaf
    2. Set CDP as the default policy throughout
       - CDP on ESX/DVS
       - CDP on fabric/Leaf
FCS : Options for UCS B-series deployment
Verify CDP is enabled on DVS/Hosts
Performance analysis
UCS - recap
What affects Performance?
Congestion, oversubscription, PFC, B2B credits, HIF / NIF / VIF / LIF / UIF, buffers, pinning, sliding window, RSS, Tx queues, Rx queues, LSO, arbitration, TCP offload, driver, hashing, MRU, round robin, fixed path, multipathing, port channel, firmware, QoS, CoS, flow control, queue depth, power, heat
Divide & Conquer
UCS performance can be categorized into the following areas:
• Infrastructure: Fabric Interconnects, IOMs, Adapters, SFPs/Cables
• Platform: BIOS, Chipset, Adapter Settings
• OS Specific: Windows vs. Linux, TCP vs. UDP vs. Multicast, RSS, CPU Affinity, Interrupts
We’ll focus on these areas
[Diagram: UCS egress frame flow — a blade server with OS NIC teaming over CNA ports 1 and 2 (Eth0 / MAC A, Eth1 / MAC B), through the chassis FEX/Interconnect A and B, to the LAN. Which path will UCS choose?]
UCS Frame Flow Decisions: Egress
1. Which PCIe Ethernet Interface? (1-58 choices depending on CNA): OS Routing Table or OS NIC Teaming
2. Which CNA Port? (2 choices): UCS Fabric Failover
3. Which Fabric Port? (4 choices): Fabric Port Pinning
4. Local or remote destination? (2 choices): L2 Switching in FIs
5. Which uplink/border port or port channel? (many choices): Border Port Pinning
6. Which port in the port channel? (2-8 choices): Port Channeling Algorithm
UCS Frame Flow Decisions: Ingress
1. Which downlink or port channel? (Upstream Switch Decides)
2. Allow the frame inbound? (depends on ‘switch mode’ vs. ‘end host mode’): déjà vu check, RPF, border port pinning
3. Which Fabric Extender Port? (MAC Learning on FIs)
4. Which Server Bay Port? (8 choices): VNTag + Offset
5. Which PCIe Device (vNIC)? (1-58 choices depending on CNA): VNTag Identifier
6. Pass frame to OS? Dest. MAC and Ethertype binding
Infrastructure Path Tracing
System Components – Hop by Hop
Trunk Interface
Fabric Interface
NIF (Network)
HIF (Host)
UIF (Uplink)
VIF (Virtual)
NX-OS / UCSM
System Components – ASICs (Gen 1 vs. Gen 2)
• Fabric ASIC : Altos/Sunnyvale
• Port ASIC : Gatos/Carmel
• FEX ASIC : Redwood/Woodside
• VIC ASIC : Palo/Sereno
• Gen-1 CNA ASIC : Menlo
[Diagram: UCS system components: fabric switches with LAN and SAN uplinks, compute chassis with two fabric extenders, half-slot and full-slot x86 compute blades with adapters and chassis management.]
Why do I care about ASIC names?
fex-1# show platform software woodside rate
fex-1# show platform software redwood sts
TSI-UCS-A(nxos)# show hardware internal carmel crc
TSI-UCS-A(nxos)# show hardware internal sunny event-history errors
Narrowing Down the Problem
• Define the problem
• From which point to what other point is the problem?
• Do we see the problem in one direction or both?
• Eliminate variables
• Is the problem seen between traffic traversing the same fabric?
• Is the problem only happening on a specific path?
• List all the ports in the traffic path
• VIFs, FEX, HIFs, NIFs, Fabric and Uplink ports
Defining the Ports
• FI Uplink/Trunk Port
• The Fabric Interconnect defines Uplink ports as those ports connecting to the LAN
• Always in trunk mode (no such thing as mode access configuration)
• VLAN 1 is default (native) & can be changed
• Port-channel configuration allowed (LACP only)
• There is currently no vPC or Fabric Path feature in the FI
Defining the Ports
• Fabric Interconnect FEX-Fabric aka Server Interfaces (SIF)
• The Fabric Interconnect (FI) defines fex-fabric ports as those ports connecting to the IOMs in the chassis
• IOM Host Interfaces (HIFs) ports are statically pinned to FEX-fabric ports (SIF)
• Same concept Nexus FEXs use with Satellite ports.
Note: The term “FEX” and “IOM” are commonly used interchangeably
Defining the Ports
• IOM Network Interfaces (NIF)
• The IOM defines these ports which are external connecting the IOM to the FI.
• NIF ports are either configured as individual links or channeled to the FI’s server ports (SIF), depending on the model of IOM.
• Same concept Nexus FEXs use with Satellite ports.
Defining the Ports
• IOM Host Interfaces (HIFs)
• Each IOM provides a number of internal ports per blade
• IOM model 2104XP provides 8x internal ports (one for each blade)
• IOM model 2204XP provides 16x internal ports (two for each blade)
• IOM model 2208XP provides 32x internal ports (four for each blade)
• Each HIF is defined by three values, EthX/Y/Z: Chassis/Adapter/Slot
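The per-model figures above reduce to a simple ratio (8 blades per chassis); a quick sanity-check sketch:

```python
# Total internal (HIF) ports per IOM model, from the list above,
# and the resulting HIFs per blade in an 8-blade chassis.
IOM_HIFS = {"2104XP": 8, "2204XP": 16, "2208XP": 32}

def hifs_per_blade(model, blades=8):
    return IOM_HIFS[model] // blades

for model, total in IOM_HIFS.items():
    print(model, total, "internal ports,", hifs_per_blade(model), "per blade")
```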
Defining the Ports
• Adapter Uplink Interfaces (UIFs)
• Each adapter has 2 physical uplinks, one to each IOM
• Referenced as 0 and 1
• These are also known as the Data Center Ethernet (DCE) interfaces
Defining the Ports
• Virtual Interface (VIF)
• Defined as Ethernet (veth) or Fibre Channel (vfc)
• A vNIC with Fabric Failover enabled will have two VIFs assigned (Primary & Backup)
• Represent the vNIC or vHBA on the compute blade towards OS
• Pinned automatically or manually (pin groups) to border port or FC uplink ports
• veth and vfc numbers are dynamically assigned
• System automatically allocates a certain number of VIFs per service-profile for its own management/control traffic
Defining the Ports
• Logical Interfaces (LIF)
• Represent the logical interface of a VIF pair (those with Fabric Failover enabled)
• LIF indexes are managed at the adapter level
• Not visible within UCSM
Logical Interface (LIF)
Trace Example
• Let’s trace the path for the first vNIC (eth0) on blade 1/6
• First, what’s the VIF number?
[Diagram: blade 1/6, eth0 (00:25:b5:44:00:3b), adapter uplinks 0 and 1 towards FEX 1 / Fabric A and FEX 2 / Fabric B]
VIF Pinning – Service Profile View
• UCSM top level: show service-profile circuit server <chassis#>/<slot#>
UCS-A# show service-profile circuit server 1/6
Service Profile: roberbur/Perf-Test-3
Server: 1/6
Fabric ID: A
VIF vNIC Link State Oper State Prot State Prot Role Admin Pin Oper Pin Transport
---------- --------------- ----------- ---------- ------------- ----------- ---------- ---------- ---------
9178 Up Active No Protection Unprotected 0/0 0/0 Ether
986 fc0 Up Active No Protection Unprotected 0/0 0/0 Fc
988 eth1 Up Active Passive Backup 0/0 1/7 Ether
990 eth3 Up Active Passive Backup 0/0 1/7 Ether
991 eth0 Up Active Active Primary 0/0 1/7 Ether
993 eth2 Up Active Active Primary 0/0 1/7 Ether
Fabric ID: B
<snip>
VIF Pinning – GUI vs CLI
Trace Example
• We know eth0 is assigned VIF 991, and we know eth0 is set to Fabric A
• Next, which internal FEX port is VIF 991 using?
[Diagram: blade 1/6, eth0 / VIF 991, IOM 1 (2204) / Fabric A, IOM 2 (2204) / Fabric B]
IOM Internal Port Information – 2100XP
• connect iom <chassis #>
• show platform software redwood sts
Shows which HIFs are active.
legend:
  (blank) = no-connect
  X = Failed
  - = Disabled
  : = Dn
  | = Up
  [$] = SFP present
  [ ] = SFP not present
  [X] = SFP validation failed
IOM Internal Port Information – 2200XP
• show platform software woodside sts
Trace Example
• VIF 991 is using FEX port 11
• Which NIF is being used?
[Diagram: blade 1/6, eth0 / VIF 991 on FEX port 11, IOM 1 (2204) / Fabric A, IOM 2 (2204) / Fabric B]
FEX to Fabric Port Pinning (2204XP)
IOM HIFs → Fabric Interconnect (NIF → SIF), with all 4 fabric links active:
  HIF 1-2 (blade 1)   → NIF 1
  HIF 3-4 (blade 2)   → NIF 2
  HIF 5-6 (blade 3)   → NIF 3
  HIF 7-8 (blade 4)   → NIF 4
  HIF 9-10 (blade 5)  → NIF 1
  HIF 11-12 (blade 6) → NIF 2
  HIF 13-14 (blade 7) → NIF 3
  HIF 15-16 (blade 8) → NIF 4
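Reading the table as round-robin pinning of each blade's HIF pair across the active fabric links, the NIF for any HIF can be predicted; this matches the trace example (HIF 11 on blade 6 pins to NIF 2). A sketch under that assumption:

```python
def pinned_nif(hif, hifs_per_blade=2, active_nifs=4):
    """Predict the 2204XP static HIF -> NIF pinning, assuming each blade's
    HIF pair is assigned round-robin across the active fabric links."""
    blade = (hif - 1) // hifs_per_blade + 1
    return (blade - 1) % active_nifs + 1

print(pinned_nif(11))  # blade 6 -> NIF 2, as in the trace example
```

Note the pinning is static: if a fabric link fails, the blades pinned to it are re-pinned, so always confirm with `show fex detail` rather than relying on the formula.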
Trace Example
• VIF 991 is using FEX port 11 and NIF 2
• Which SIF is used?
[Diagram: blade 1/6, eth0 / VIF 991, FEX port 11, NIF 2 on IOM 1 (2204) / Fabric A]
IOM Port Information
• connect nxos: show fex <chassis#> detail
Shows which fabric port each FEX port is using.
Trace Example
• VIF 991 is using HIF eth1/1/11 and NIF 2, and is bound to SIF e1/12
• Lastly, which uplink will be used?
[Diagram: blade 1/6, eth0 / VIF 991, HIF 11, NIF 2, fabric port Eth1/12 on IOM 1 (2204) / Fabric A]
VIF Pinning – Fabric Interconnect View
• connect nxos: show pinning border-interfaces active
• connect nxos: show pinning server-interfaces
UCS-A(nxos)# show pinning border-interfaces active
--------------------+---------+----------------------------------------
Border Interface Status SIFs
--------------------+---------+----------------------------------------
Eth1/7 Active Veth988 Veth990 Veth991 Veth993
Eth1/8 Active Veth963 Veth974 Eth1/1/3 Eth2/1/7
Total Interfaces : 2
UCS-A(nxos)# show pinning server-interfaces | i Veth
Veth956 No - -
Veth963 No Eth1/8 2:27:23
Veth974 No Eth1/8 2:27:23
Veth988 No Eth1/7 2:27:23
Veth990 No Eth1/7 2:27:23
Veth991 No Eth1/7 2:27:23
Veth993 No Eth1/7 2:27:23
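To audit pinning across many veths, the same mapping can be scraped from the CLI output. A sketch against the column layout shown above (hypothetical helper, not a UCS tool):

```python
def veth_pinning(output):
    """Parse 'show pinning server-interfaces' lines into veth -> border port."""
    pins = {}
    for line in output.splitlines():
        parts = line.split()
        # columns: interface, sticky flag, pinned border interface, duration
        if parts and parts[0].startswith("Veth") and len(parts) >= 3:
            pins[parts[0]] = parts[2]
    return pins

out = """Veth956 No - -
Veth991 No Eth1/7 2:27:23
Veth963 No Eth1/8 2:27:23"""
print(veth_pinning(out)["Veth991"])  # Eth1/7
```

A "-" entry (like Veth956 above) means the veth is currently unpinned.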
Trace Example
• VIF 991 is using HIF eth1/1/11 and NIF 2, is bound to SIF e1/12, and egresses UCS on uplink Eth1/7
[Diagram: blade 1/6, eth0 / VIF 991, HIF 11, NIF 2, fabric port Eth1/12, VIF 991 pinned to uplink Eth1/7 on Fabric A]
End-2-End storage performance
• Follow traffic from Hypervisor → UCS → N5K/MDS → NetApp
• ESXtop
End-2-End storage performance
• Verify link oversubscription on IOM
IOM traffic
bdsol-6248-01-B# connect iom 2
fex-1# show platform software woodside rate
+--------++------------+-----------+------------++------------+-----------+------------+-------+-------+---+
| Port || Tx Packets | Tx Rate | Tx Bit || Rx Packets | Rx Rate | Rx Bit |Avg Pkt|Avg Pkt| |
| || | (pkts/s) | Rate || | (pkts/s) | Rate | (Tx) | (Rx) |Err|
+--------++------------+-----------+------------++------------+-----------+------------+-------+-------+---+
| 0-BI || 20 | 4 | 4.37Kbps || 10 | 2 | 2.23Kbps | 116 | 119 | |
| 0-CI || 19 | 3 | 16.36Kbps || 20 | 4 | 21.91Kbps | 518 | 664 | |
| 0-NI3 || 1174 | 234 | 4.56Mbps || 157 | 31 | 83.96Kbps | 2408 | 314 | |
| 0-NI2 || 1842 | 368 | 9.62Mbps || 1388 | 277 | 7.72Mbps | 3245 | 3456 | . |
| 0-NI1 || 309 | 61 | 193.56Kbps || 487 | 97 | 294.33Kbps | 371 | 357 | |
| 0-NI0 || 7805 | 1561 | 70.96Mbps || 6465 | 1293 | 17.83Mbps | 5662 | 1703 | . |
| 0-HI31 || 469 | 93 | 222.99Kbps || 403 | 80 | 1.96Mbps | 277 | 3020 | |
| 0-HI27 || 378 | 75 | 232.38Kbps || 302 | 60 | 985.84Kbps | 364 | 2020 | |
| 0-HI23 || 803 | 160 | 1.08Mbps || 750 | 150 | 4.04Mbps | 826 | 3349 | . |
| 0-HI19 || 1137 | 227 | 7.98Mbps || 1113 | 222 | 2.26Mbps | 4371 | 1250 | |
| 0-HI15 || 1313 | 262 | 1.41Mbps || 1368 | 273 | 5.56Mbps | 652 | 2520 | |
| 0-HI11 || 4985 | 997 | 15.10Mbps || 7179 | 1435 | 70.51Mbps | 1874 | 6119 | |
+--------++------------+-----------+------------++------------+-----------+------------+-------+-------+---+
fex-1#
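To judge oversubscription from this table, sum the NI (uplink-facing) rates and compare them against the HI (host-facing) rates and the link capacity. A small sketch that converts the rate strings, using the NI Tx figures from the table above:

```python
import re

UNITS = {"bps": 1.0, "Kbps": 1e3, "Mbps": 1e6, "Gbps": 1e9}

def to_bps(rate):
    """Convert a 'woodside rate' figure like '70.96Mbps' to bits/sec."""
    m = re.fullmatch(r"([\d.]+)(Gbps|Mbps|Kbps|bps)", rate)
    return float(m.group(1)) * UNITS[m.group(2)]

# NI (uplink) Tx rates from the rate table above
ni_tx = ["4.56Mbps", "9.62Mbps", "193.56Kbps", "70.96Mbps"]
total_mbps = round(sum(to_bps(r) for r in ni_tx) / 1e6, 2)
print(total_mbps, "Mbps total across 4 NIFs")  # 85.33 Mbps
```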
fex-1# show platform software woodside sts
Uplink #: 1 2 3 4 5 6 7 8
Link status: | | | |
+-+--+--+--+--+--+--+--+-+
SFP: [$][$][$][$][ ][ ][ ][ ]
+-+--+--+--+--+--+--+--+-+
| N N N N N N N N |
| I I I I I I I I |
| 0 1 2 3 4 5 6 7 |
| |
| NI (0-7) |
+------------+-----------+
|
+-------------------------+-------------+-------------+---------------------------+
| | | |
+------------+-----------+ +-----------+------------+ +------------+-----------+ +-------------+----------+
| HI (0-7) | | HI (8-15) | | HI (16-23) | | HI (24-31) |
| | | | | | | |
| H H H H H H H H | | H H H H H H H H | | H H H H H H H H | | H H H H H H H H |
| I I I I I I I I | | I I I I I I I I | | I I I I I I I I | | I I I I I I I I |
| 0 1 2 3 4 5 6 7 | | 8 9 1 1 1 1 1 1 | | 1 1 1 1 2 2 2 2 | | 2 2 2 2 2 2 3 3 |
| | | 0 1 2 3 4 5 | | 6 7 8 9 0 1 2 3 | | 4 5 6 7 8 9 0 1 |
+-+--+--+--+--+--+--+--+-+ +-+--+--+--+--+--+--+--+-+ +-+--+--+--+--+--+--+--+-+ +-+--+--+--+--+--+--+--+-+
[ ][ ][ ][ ][ ][ ][ ][ ] [ ][ ][ ][ ][ ][ ][ ][ ] [ ][ ][ ][ ][ ][ ][ ][ ] [ ][ ][ ][ ][ ][ ][ ][ ]
+-+--+--+--+--+--+--+--+-+ +-+--+--+--+--+--+--+--+-+ +-+--+--+--+--+--+--+--+-+ +-+--+--+--+--+--+--+--+-+
: : - : - | - | - | - | - | - |
1 1 1 1 1 1 1 9 8 7 6 5 4 3 2 1
6 5 4 3 2 1 0
\__\__/__/ \__\__/__/ \__\__/__/ \__\__/__/ \__\__/__/ \__\__/__/ \__\__/__/ \__\__/__/
blade8 blade7 blade6 blade5 blade4 blade3 blade2 blade1
Pause frames
fex-1# show platform software woodside rmon 0 HI31
+----------------------+----------------------+-----------------+----------------------+----------------------+-----------------+
| TX | Current | Diff | RX | Current | Diff |
+----------------------+----------------------+-----------------+----------------------+----------------------+-----------------+
| TX_PKT_LT64 | 0| 0| RX_PKT_LT64 | 0| 0|
| TX_PKT_64 | 1233122| 0| RX_PKT_64 | 299111| 0|
| TX_PKT_65 | 3443446179| 117| RX_PKT_65 | 1726497484| 67|
| TX_PKT_128 | 1468608090| 154| RX_PKT_128 | 678711943| 35|
| TX_PKT_256 | 164594394| 53| RX_PKT_256 | 257993274| 10|
| TX_PKT_512 | 140774947| 15| RX_PKT_512 | 202303546| 4|
| TX_PKT_1024 | 105217922| 0| RX_PKT_1024 | 76203194| 6|
| TX_PKT_1519 | 1371379821| 15| RX_PKT_1519 | 430787476| 19|
| TX_PKT_2048 | 520347482| 1| RX_PKT_2048 | 571391171| 6|
| TX_PKT_4096 | 163323927| 1| RX_PKT_4096 | 671897220| 115|
| TX_PKT_8192 | 2667845697| 1| RX_PKT_8192 | 3971350325| 34|
| TX_PKT_GT9216 | 0| 0| RX_PKT_GT9216 | 0| 0|
| TX_PKTTOTAL | 10046771581| 357| RX_PKTTOTAL | 8587434744| 296|
| TX_OCTETS | 29652413140776| 117300| RX_OCTETS | 42180807573683| 881346|
| TX_PKTOK | 10046771581| 357| RX_PKTOK | 8587434744| 296|
| TX_UCAST | 9475625210| 259| RX_UCAST | 8580251937| 294|
| TX_MCAST | 194696639| 43| RX_MCAST | 5884916| 2|
| TX_BCAST | 376449732| 55| RX_BCAST | 1297891| 0|
| TX_VLAN | 0| 0| RX_VLAN | 2| 0|
| TX_PAUSE | 0| 0| RX_PAUSE | 0| 0|
| TX_USER_PAUSE | 1233122| 0| RX_USER_PAUSE | 0| 0|
| TX_FRM_ERROR | 0| 0| | | |
| | | | RX_OVERSIZE | 0| 0|
| | | | RX_TOOLONG | 0| 0|
| | | | RX_DISCARD | 0| 0|
| | | | RX_UNDERSIZE | 0| 0|
| | | | RX_FRAGMENT | 0| 0|
| | | | RX_CRC_NOT_STOMPED | 0| 0|
| | | | RX_CRC_STOMPED | 0| 0|
| | | | RX_INRANGEERR | 0| 0|
| | | | RX_JABBER | 0| 0|
| TX_OCTETSOK | 29652413140776| 117300| RX_OCTETSOK | 42180807573683| 881346|
+----------------------+----------------------+-----------------+----------------------+----------------------+-----------------+
UCS – south-bound traffic
bdsol-6248-01-A(nxos)# show int fc1/31 counters
fc1/31
1 minute input rate 1698584 bits/sec, 212323 bytes/sec, 276 frames/sec
1 minute output rate 5050968 bits/sec, 631371 bytes/sec, 410 frames/sec
1538294716 frames input, 1586583115348 bytes
0 class-2 frames, 0 bytes
1538294716 class-3 frames, 1586583115348 bytes
0 class-f frames, 0 bytes
3 discards, 0 errors, 0 CRC
0 unknown class, 0 too long, 0 too short
1852810367 frames output, 2857726621688 bytes
0 class-2 frames, 0 bytes
1852810367 class-3 frames, 2857726621688 bytes
0 class-f frames, 0 bytes
0 discards, 0 errors
0 input OLS, 0 LRR, 0 NOS, 0 loop inits
0 output OLS, 0 LRR, 0 NOS, 0 loop inits
0 link failures, 0 sync losses, 0 signal losses
0 BB credit transitions from zero
16 receive B2B credit remaining
32 transmit B2B credit remaining
0 low priority transmit B2B credit remaining
MDS / Nexus 5000
bdsol-n5548-05# show int fc2/16 counters
fc2/16
1 minute input rate 608 bits/sec, 76 bytes/sec, 1 frames/sec
1 minute output rate 1008 bits/sec, 126 bytes/sec, 1 frames/sec
3676889 frames input, 215037652 bytes
0 discards, 0 errors, 0 CRC
0 unknown class, 0 too long, 0 too short
3573520 frames output, 335998936 bytes
0 discards, 0 errors
0 input OLS, 0 LRR, 0 NOS, 0 loop inits
2 output OLS, 3 LRR, 0 NOS, 0 loop inits
0 link failures, 0 sync losses, 0 signal losses
0 transmit B2B credit transitions from zero
0 receive B2B credit transitions from zero
16 receive B2B credit remaining
32 transmit B2B credit remaining
0 low priority transmit B2B credit remaining
bdsol-n5548-05#
Priority-flow-control
bdsol-n5548-05# show interface eth 1/3 priority-flow-control
============================================================
Port Mode Oper(VL bmap) RxPPP TxPPP
============================================================
Ethernet1/3 Auto On (8) 0 91568
bdsol-n5548-05# show running-config interface eth 1/3 | inc prio
bdsol-n5548-05# show running-config interface eth 1/3 all | inc priority-flow
priority-flow-control mode auto
A rapidly increasing PPP (per-priority pause) counter could indicate an issue
PFC should be negotiated (Oper shows On)
MDS / Nexus 5000 (NetApp connection)
bdsol-9148-03# show int fc1/5 counters
fc1/5
5 minutes input rate 53824 bits/sec, 6728 bytes/sec, 7 frames/sec
5 minutes output rate 30504 bits/sec, 3813 bytes/sec, 4 frames/sec
1002260022 frames input, 1860761796076 bytes
0 class-2 frames, 0 bytes
1002260022 class-3 frames, 1860761796076 bytes
0 class-f frames, 0 bytes
0 discards, 0 errors, 0 CRC
0 unknown class, 0 too long, 0 too short
325808424 frames output, 533427696444 bytes
0 class-2 frames, 0 bytes
325808424 class-3 frames, 533427696444 bytes
0 class-f frames, 0 bytes
13 discards, 0 errors
0 input OLS, 1 LRR, 0 NOS, 1 loop inits
1 output OLS, 1 LRR, 0 NOS, 1 loop inits
0 link failures, 0 sync losses, 0 signal losses
200973 BB credit transitions from zero
32 receive B2B credit remaining
3 transmit B2B credit remaining
3 low priority transmit B2B credit remaining
bdsol-9148-03#
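The counter that matters most here is "BB credit transitions from zero": 200973 transitions on this NetApp-facing port, versus 0 on the ports shown earlier, points at buffer-credit starvation towards storage. A sketch that extracts the figure from the counters output:

```python
import re

def bb_credit_transitions(counters_output):
    """Pull 'BB credit transitions from zero' out of 'show int ... counters'."""
    m = re.search(r"(\d+)\s+BB credit transitions from zero", counters_output)
    return int(m.group(1)) if m else None

out = ("0 link failures, 0 sync losses, 0 signal losses\n"
       "200973 BB credit transitions from zero\n"
       "32 receive B2B credit remaining")
print(bb_credit_transitions(out))  # 200973
```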
NetApp – sysstat -x 1
CPU NFS CIFS HTTP Total Net kB/s Disk kB/s Tape kB/s Cache Cache CP CP Disk OTHER FCP iSCSI FCP kB/s iSCSI kB/s
in out read write read write age hit time ty util in out in out
81% 0 0 0 1040 0 1 433552 189136 0 0 0s 99% 99% Ff 100% 0 1040 0 94266 443486 0 0
76% 0 0 0 1132 1 1 468263 90818 0 0 0s 99% 100% :f 100% 0 1132 0 102386 488385 0 0
80% 0 0 0 1026 0 1 423101 116056 0 0 0s 99% 99% Ff 100% 0 1026 0 92862 438264 0 0
78% 0 0 0 1085 1 0 468863 80152 0 0 0s 98% 100% :f 100% 5 1080 0 100523 461375 0 0
74% 0 0 0 1079 0 1 463149 82334 0 0 0s 98% 100% :v 100% 5 1066 0 96791 464674 0 0
81% 0 0 0 1045 1 0 419688 144208 0 0 0s 98% 100% Bf 100% 0 1037 0 84566 426947 0 0
74% 0 0 0 1080 0 1 459704 128112 0 0 0s 98% 100% :f 100% 0 1080 0 90859 473043 0 0
79% 0 0 0 1061 1 0 464854 26593 0 0 1 99% 98% Fn 100% 0 1061 0 110116 454936 0 0
78% 0 0 0 1097 0 1 438281 192684 0 0 0s 98% 100% :f 100% 6 1075 0 98459 439136 0 0
76% 0 0 0 1104 1 0 478192 67437 0 0 0s 99% 100% :f 100% 0 1104 0 105721 485442 0 0
74% 0 0 0 1013 0 1 437082 71342 0 0 1 99% 100% Bs 100% 0 1013 0 68873 447389 0 0
77% 0 0 0 1088 1 1 457864 159100 0 0 0s 98% 100% :f 100% 0 1088 0 111044 453121 0 0
73% 0 0 0 1152 1 0 458688 49040 0 0 0s 99% 100% #v 100% 86 1066 0 98002 474682 0 0
79% 0 0 0 956 0 1 397356 180432 0 0 0s 98% 100% Bf 100% 9 947 0 92035 388960 0 0
76% 0 0 0 1109 1 1 458768 95284 0 0 0s 98% 100% :f 100% 0 1109 0 110914 477828 0 0
79% 0 0 0 939 1 0 413597 146458 0 0 0s 99% 86% Ff 100% 0 939 0 88678 403322 0 0
75% 0 0 0 1106 0 1 449017 91011 0 0 0s 98% 100% :f 100% 0 1106 0 104025 473335 0 0
73% 0 0 0 1027 1 0 476400 40699 0 0 1 98% 100% :v 100% 0 1027 0 93386 460589 0 0
76% 0 0 0 1022 0 1 415088 141152 0 0 0s 99% 100% Bf 100% 4 1018 0 84828 435949 0 0
74% 0 0 0 1042 1 0 459660 124908 0 0 0s 98% 100% :f 100% 0 1042 0 95839 455676 0 0
B = back-to-back CPs (a CP triggered another CP; the filer is having a tough time keeping up with writes)
F = CP caused by a full NVLog (one half of the NVRAM log was full and was flushed)
: = continuation of a CP from the previous interval (a CP is still in progress across the 1-second intervals)
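The one- or two-character "CP ty" column packs these codes; a tiny decoder for the ones listed above (subset only; ONTAP defines more codes than these):

```python
# First character of the sysstat 'CP ty' field: why the CP started.
CP_TYPES = {
    "B": "back-to-back CPs (filer struggling to keep up with writes)",
    "F": "CP caused by full NVLog",
    ":": "continuation of CP from previous interval",
}

def describe_cp(cp_ty):
    return CP_TYPES.get(cp_ty[0], "other/unknown CP trigger")

print(describe_cp("Ff"))
print(describe_cp("Bf"))
```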
End-2-End network performance
• Follow traffic from Hypervisor → UCS → N5K/MDS (→ NetApp)
End-2-End network performance
• esxtop: verify utilization / drops on VM and uplink
VMware
Nexus 1000V
• Verify utilization / drops on the VM (veth) and uplink (Eth); drops can point to high CPU on the VM or lack of resources
bdsol-vc01vsm-01# show int eth3/3
Ethernet3/3 is up
Hardware: Ethernet, address: 0050.5652.0a1f (bia 0050.5652.0a1f)
Port-Profile is Uplink
MTU 1500 bytes
Encapsulation ARPA
Port mode is trunk
full-duplex, 10 Gb/s
5 minute input rate 32376 bits/second, 30 packets/second
5 minute output rate 19496 bits/second, 7 packets/second
Rx
300248469 Input Packets 257907789 Unicast Packets
136066323 Multicast Packets 223171865 Broadcast Packets
99113994481 Bytes
Tx
128872308 Output Packets 123377360 Unicast Packets
550084 Multicast Packets 6369085 Broadcast Packets 5352056 Flood Packets
177730805977 Bytes
5696314 Input Packet Drops 0 Output Packet Drops
bdsol-vc01vsm-01# show int Veth22
Vethernet22 is up
Port description is xd-vxme-d007, Network Adapter 1
Hardware: Virtual, address: 0050.569e.1755 (bia 0050.569e.1755)
Owner is VM "xd-vxme-d007", adapter is Network Adapter 1
Active on module 7
VMware DVS port 329
Port-Profile is VLAN_541
Port mode is access
5 minute input rate 496 bits/second, 0 packets/second
5 minute output rate 1872 bits/second, 1 packets/second
Rx
188297 Input Packets 916725 Unicast Packets
181987 Multicast Packets 248526 Broadcast Packets
69549471 Bytes
Tx
2957015 Output Packets 162803 Unicast Packets
14411526 Multicast Packets 6765942 Broadcast Packets 2789522 Flood Packets
316260141 Bytes
0 Input Packet Drops 0 Output Packet Drops
UCS (vNIC → IOM → FI → north-bound)
bdsol-6248-01-B# show service-profile connectivity
Service Profile: DCNSOL-bdsol-esxi-04
…
vNIC: eth4
VIF ID Transport Fabric ID Status Prot Role Peer VIF ID Oper State
---------- --------- --------- ----------- ----------- ----------- ----------
894 Ether A Allocated Unprotected 0 Active
vNIC: eth5
VIF ID Transport Fabric ID Status Prot Role Peer VIF ID Oper State
---------- --------- --------- ----------- ----------- ----------- ----------
895 Ether B Allocated Unprotected 0 Active
UCS
bdsol-6248-01-B# connect adapter 2/1/1
adapter 2/1/1 # connect
adapter 2/1/1 (top):3# attach-mcp
adapter 2/1/1 (mcp):2# vnic
---------------------------------------- --------- --------------------------
v n i c l i f v i f
id name type bb:dd.f state lif state uif ucsm idx vlan state
--- -------------- ------- ------- ----- --- ----- --- ----- ----- ---- -----
13 vnic_1 enet 08:00.0 UP 2 UP =>0 890 733 1 UP
14 vnic_2 enet 09:00.0 UP 3 UP =>1 891 731 1 UP
15 vnic_3 enet 0a:00.0 UP 4 UP =>0 892 734 1 UP
16 vnic_4 enet 0b:00.0 UP 5 UP =>1 893 732 1 UP
17 vnic_5 enet 0c:00.0 UP 6 UP =>0 894 735 1 UP
18 vnic_6 enet 0d:00.0 UP 7 UP =>1 895 733 1 UP
19 vnic_7 fc 0e:00.0 UP 8 UP =>0 896 736 1070 UP
20 vnic_8 fc 0f:00.0 UP 9 UP =>1 897 734 1080 UP
adapter 2/1/1 (mcp):54# lifstats 7 -a
DELTA TOTAL DESCRIPTION
939 410692745 Tx unicast frames without error
0 5 Tx multicast frames without error
0 4125 Tx broadcast frames without error
518990 1923437369399 Tx unicast bytes without error
0 414 Tx multicast bytes without error
0 280500 Tx broadcast bytes without error
0 0 Tx frames dropped
0 0 Tx frames with error
19 153267420 Tx TSO frames
1349 344390451 Rx unicast frames without error
0 54 Rx multicast frames without error
0 76304 Rx broadcast frames without error
10188382 233574137276 Rx unicast bytes without error
0 4460 Rx multicast bytes without error
0 5188648 Rx broadcast bytes without error
0 0 Rx frames dropped
0 0 Rx rq drop pkts (no bufs or rq disabled)
0 0 Rx rq drop bytes (no bufs or rq disabled)
0 0 Rx frames with error
0 0 Rx good frames with RSS
0 0 Rx frames with Ethernet FCS error
0 4118 Rx frames len == 64
12 85808713 Rx frames 64 < len <= 127
64 230343441 Rx frames 128 <= len <= 255
1 3875981 Rx frames 256 <= len <= 511
4 914752 Rx frames 512 <= len <= 1023
4 247785 Rx frames 1024 <= len <= 1518
1264 23272019 Rx frames len > 1518
2.017Mbps Tx rate
38.310Mbps Rx rate
adapter 2/1/1 (mcp):55#
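When verifying end-to-end MTU 9000, the per-size frame buckets in the lifstats output above are a quick sanity check: if jumbo frames are actually flowing on the vNIC, the "len > 1518" counters increment. A minimal sketch (plain Python over captured lifstats text; the counter names are taken from the output above, the helper name is ours) of pulling those buckets out:

```python
import re

def frame_size_buckets(lifstats_text):
    """Extract the Rx frame-size histogram from captured lifstats output.

    Returns a dict mapping each size-bucket description to its TOTAL count.
    """
    buckets = {}
    # Lines look like: "1264   23272019 Rx frames len > 1518"
    for m in re.finditer(r"^\s*\d+\s+(\d+)\s+(Rx frames .*len.*)$",
                         lifstats_text, re.MULTILINE):
        buckets[m.group(2).strip()] = int(m.group(1))
    return buckets

sample = """\
0         4118 Rx frames len == 64
12    85808713 Rx frames 64 < len <= 127
1264  23272019 Rx frames len > 1518
"""
# A non-zero "len > 1518" bucket confirms jumbo frames are being received.
print(frame_size_buckets(sample)["Rx frames len > 1518"])
```

A zero jumbo bucket on a vNIC that is supposed to carry NFS traffic is a strong hint that the MTU is being lowered somewhere earlier in the path.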
fex-1# show platform software woodside rmon 0 HI31
+----------------------+----------------------+-----------------+----------------------+----------------------+-----------------+
| TX | Current | Diff | RX | Current | Diff |
+----------------------+----------------------+-----------------+----------------------+----------------------+-----------------+
| TX_PKT_LT64 | 0| 0| RX_PKT_LT64 | 0| 0|
| TX_PKT_64 | 1233122| 0| RX_PKT_64 | 299111| 0|
| TX_PKT_65 | 3443446179| 117| RX_PKT_65 | 1726497484| 67|
| TX_PKT_128 | 1468608090| 154| RX_PKT_128 | 678711943| 35|
| TX_PKT_256 | 164594394| 53| RX_PKT_256 | 257993274| 10|
| TX_PKT_512 | 140774947| 15| RX_PKT_512 | 202303546| 4|
| TX_PKT_1024 | 105217922| 0| RX_PKT_1024 | 76203194| 6|
| TX_PKT_1519 | 1371379821| 15| RX_PKT_1519 | 430787476| 19|
| TX_PKT_2048 | 520347482| 1| RX_PKT_2048 | 571391171| 6|
| TX_PKT_4096 | 163323927| 1| RX_PKT_4096 | 671897220| 115|
| TX_PKT_8192 | 2667845697| 1| RX_PKT_8192 | 3971350325| 34|
| TX_PKT_GT9216 | 0| 0| RX_PKT_GT9216 | 0| 0|
| TX_PKTTOTAL | 10046771581| 357| RX_PKTTOTAL | 8587434744| 296|
| TX_OCTETS | 29652413140776| 117300| RX_OCTETS | 42180807573683| 881346|
| TX_PKTOK | 10046771581| 357| RX_PKTOK | 8587434744| 296|
| TX_UCAST | 9475625210| 259| RX_UCAST | 8580251937| 294|
| TX_MCAST | 194696639| 43| RX_MCAST | 5884916| 2|
| TX_BCAST | 376449732| 55| RX_BCAST | 1297891| 0|
| TX_VLAN | 0| 0| RX_VLAN | 2| 0|
| TX_PAUSE | 0| 0| RX_PAUSE | 0| 0|
| TX_USER_PAUSE | 1233122| 0| RX_USER_PAUSE | 0| 0|
| TX_FRM_ERROR | 0| 0| | | |
| | | | RX_OVERSIZE | 0| 0|
| | | | RX_TOOLONG | 0| 0|
| | | | RX_DISCARD | 0| 0|
| | | | RX_UNDERSIZE | 0| 0|
| | | | RX_FRAGMENT | 0| 0|
| | | | RX_CRC_NOT_STOMPED | 0| 0|
| | | | RX_CRC_STOMPED | 0| 0|
| | | | RX_INRANGEERR | 0| 0|
| | | | RX_JABBER | 0| 0|
| TX_OCTETSOK | 29652413140776| 117300| RX_OCTETSOK | 42180807573683| 881346|
+----------------------+----------------------+-----------------+----------------------+----------------------+-----------------+
fex-1#
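The RX_CRC_STOMPED / RX_CRC_NOT_STOMPED pair in the rmon output above is the key to localizing CRC errors: on a cut-through switch, a frame that arrived already corrupted is forwarded with a stomped CRC, so NOT_STOMPED points at this link (cabling, SFP), while STOMPED points upstream. A small sketch (our own helper, regex-based, with illustrative sample values) that reads those counters from captured output:

```python
import re

def rmon_counter(table_text, name):
    """Return the 'Current' value for a named counter in
    'show platform software woodside rmon' output."""
    m = re.search(r"\|\s*%s\s*\|\s*(\d+)\|" % re.escape(name), table_text)
    return int(m.group(1)) if m else None

sample = """\
| | | | RX_CRC_NOT_STOMPED | 7| 0|
| | | | RX_CRC_STOMPED | 0| 0|
"""
# NOT_STOMPED > 0: CRC errors originate on this link (check cable/SFP);
# STOMPED > 0: a cut-through device upstream forwarded already-bad frames.
print(rmon_counter(sample, "RX_CRC_NOT_STOMPED"))
```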
fex-1# show platform software woodside rate
+--------++------------+-----------+------------++------------+-----------+------------+-------+-------+---+
| Port || Tx Packets | Tx Rate | Tx Bit || Rx Packets | Rx Rate | Rx Bit |Avg Pkt|Avg Pkt| |
| || | (pkts/s) | Rate || | (pkts/s) | Rate | (Tx) | (Rx) |Err|
+--------++------------+-----------+------------++------------+-----------+------------+-------+-------+---+
| 0-BI || 19 | 3 | 4.01Kbps || 11 | 2 | 2.20Kbps | 112 | 105 | |
| 0-CI || 49 | 9 | 17.28Kbps || 35 | 7 | 18.68Kbps | 200 | 313 | |
| 0-NI3 || 2028 | 405 | 9.25Mbps || 389 | 77 | 755.09Kbps | 2833 | 1193 | |
| 0-NI2 || 1898 | 379 | 11.20Mbps || 1549 | 309 | 6.91Mbps | 3671 | 2771 | . |
| 0-NI1 || 415 | 83 | 602.80Kbps || 4829 | 965 | 21.45Mbps | 887 | 2756 | |
| 0-NI0 || 15340 | 3068 | 99.12Mbps || 13169 | 2633 | 56.88Mbps | 4018 | 2679 | . |
| 0-HI31 || 4974 | 994 | 21.77Mbps || 2117 | 423 | 5.86Mbps | 2715 | 1712 | |
| 0-HI27 || 765 | 153 | 874.20Kbps || 742 | 148 | 4.58Mbps | 694 | 3846 | |
| 0-HI23 || 468 | 93 | 229.40Kbps || 348 | 69 | 2.22Mbps | 286 | 3978 | . |
| 0-HI19 || 1047 | 209 | 6.24Mbps || 959 | 191 | 2.06Mbps | 3705 | 1327 | |
| 0-HI15 || 3530 | 706 | 33.94Mbps || 2505 | 501 | 5.09Mbps | 5990 | 1252 | |
| 0-HI11 || 9994 | 1998 | 23.14Mbps || 12982 | 2596 | 100.38Mbps | 1427 | 4812 | |
+--------++------------+-----------+------------++------------+-----------+------------+-------+-------+---+
fex-1#
• bdsol-6248-01-B(nxos)# show int vethernet 895
• bdsol-6248-01-B(nxos)# show pinning interface vethernet 895
• bdsol-6248-01-B(nxos)# show interface port-channel 1
Nexus 5548
bdsol-n5548-05# show interface Eth1/31 | inc err
219107 input error 0 short frame 0 overrun 0 underrun 0 ignored
0 output errors 0 collision 0 deferred 0 late collision
bdsol-n5548-05# show interface Eth1/31 | inc CRC
0 runts 0 giants 38 CRC 0 no buffer
bdsol-n5548-05#
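A single counter snapshot like the one above cannot distinguish historical errors from an active problem: the 219107 input errors may have accumulated months ago. The usual approach is to capture the counters twice and compare the deltas. A minimal sketch (our own helper names, parsing the `show interface` line format shown above):

```python
import re

def input_errors(show_int_text):
    """Parse the input-error count from 'show interface' output."""
    m = re.search(r"(\d+)\s+input error", show_int_text)
    return int(m.group(1)) if m else 0

def error_delta(capture_before, capture_after):
    """Errors that appeared between two captures. A positive delta means
    an active problem, not just counters that were never cleared."""
    return input_errors(capture_after) - input_errors(capture_before)

before = "219107 input error 0 short frame 0 overrun 0 underrun 0 ignored"
after  = "219150 input error 0 short frame 0 overrun 0 underrun 0 ignored"
print(error_delta(before, after))  # 43 new errors since the first capture
```

Repeating this on every interface in the inventory of paths quickly narrows the problem down to one link.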
Upgrade the stack
Upgrade from FlexPOD VMware CVD 5.1 to CVD 5.5
Component      FlexPOD VMware 5.1    FlexPOD VMware 5.5
UCS-B          2.1(3)                2.2(2c)
5548           6.0(2)N1(2a)          7.0(1)N1(1)
NetApp         8.2P4                 8.2.1
VMware         5.1U1                 5.5U1
Nexus 1000V    4.2(1)SV2(2.1a)       4.2(1)SV2(2.2)
How do we start?
• Check all upgrade guides
• List all steps to upgrade each component
• Look for prerequisites
• Find a matching sequence
All steps
• UCS:
• Upgrade UCS Manager
• Update IO modules
• Activate IO modules set startup version
• Activate subordinate fabric interconnect
• Manually fail over the primary FI to the already-upgraded FI, then upgrade the remaining FI
• Update / Activate Adapters / CIMC / BIOS / Storage controller
• VMware
• Upgrade vCenter
• Upgrade ESXi (do not upgrade VEM)
• Upgrade enic / fnic drivers
• NetApp:
– Upgrade controller A
– Upgrade controller B
• Nexus 5548
– Upgrade switch A
– Upgrade switch B
• Nexus 1000v
– Upgrade VSM
– Upgrade all VEMs
Minimum requirements: can we upgrade VMware to 5.5U1 first?
• Nexus 1000V - 4.2(1)SV2(2.1a) and 4.2(1)SV2(2.2)
• support VMware 5.5U1
• UCS - 2.1(3) and 2.2(2c)
• support VMware 5.5 U1
• NetApp - 8.2(P4) and 8.2.1
• support VMware 5.5 U1
• support UCS 2.1(3) and 2.2(2c)
• Nexus 5548 - 6.0(2)N1(2a) and 7.0(1)N1(1)
• support VMware 5.5U1
• Support UCS 2.1(3) and 2.2(2c)
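The checklist above is essentially a compatibility-matrix lookup: VMware can go first only if every component, at its current version, already supports the new VMware release. A sketch of that reasoning in Python (the matrix entries below just encode this slide's conclusions; they are illustrative, not an official support matrix):

```python
# Hypothetical compatibility matrix mirroring the slide's checklist.
# Each (component version, target) pair records whether the component,
# at that version, supports the target release.
SUPPORTS = {
    ("Nexus 1000V 4.2(1)SV2(2.1a)", "VMware 5.5U1"): True,
    ("Nexus 1000V 4.2(1)SV2(2.2)",  "VMware 5.5U1"): True,
    ("UCS 2.1(3)",                  "VMware 5.5U1"): True,
    ("UCS 2.2(2c)",                 "VMware 5.5U1"): True,
    ("NetApp 8.2P4",                "VMware 5.5U1"): True,
    ("NetApp 8.2.1",                "VMware 5.5U1"): True,
    ("Nexus 5548 6.0(2)N1(2a)",     "VMware 5.5U1"): True,
}

def can_upgrade_first(target, current_components):
    """The target can be upgraded first only if every component at its
    CURRENT version already supports the new release."""
    return all(SUPPORTS.get((c, target), False) for c in current_components)

current = ["Nexus 1000V 4.2(1)SV2(2.1a)", "UCS 2.1(3)",
           "NetApp 8.2P4", "Nexus 5548 6.0(2)N1(2a)"]
print(can_upgrade_first("VMware 5.5U1", current))  # True, per the slide
```

The same check, repeated for each candidate component, is how you arrive at a workable upgrade sequence; any unknown pair defaults to "not supported", which forces you back to the vendor interoperability matrices.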
Possible upgrade scenario
• Upgrade vCenter and ESXi
• Block the VEM upgrade
• Make a custom ISO with enic/fnic drivers
• Use the enic/fnic drivers from UCS 2.1(3)
• Upgrade the VSM, then upgrade the VEMs
• Upgrade UCS
• After upgrading the adapters, upgrade the VMware enic/fnic drivers
• Make sure to upgrade blade by blade
• Upgrade Nexus 5548
• Upgrade NetApp (can be done at any moment)
UCS-Director automation
UCS-Director automation
• UCS Director can automate the day-to-day tasks of running a FlexPOD stack
• Deploy an ESXi host with ONTAP
• ACI / NetApp: new tenant onboarding
• All common NetApp / UCS / VMware / storage networking / … tasks
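Tasks like these are typically driven through UCS Director's REST API, where a single endpoint takes the operation name and a JSON payload as query parameters and authenticates via an API-key header. A hedged sketch that only builds the request (the op name, workflow name, and parameter layout here follow the common UCS Director REST convention but should be checked against your UCSD version's API documentation before use):

```python
import json
from urllib.parse import urlencode

def build_ucsd_request(host, api_key, op_name, op_data):
    """Build the URL and headers for a UCS Director REST call.

    UCS Director exposes one REST endpoint; the operation and its JSON
    payload travel as query parameters, and authentication uses the
    X-Cloupia-Request-Key header. Nothing is sent here; this only
    assembles the request.
    """
    params = urlencode({
        "formatType": "json",
        "opName": op_name,
        "opData": json.dumps(op_data),
    })
    url = "https://%s/app/api/rest?%s" % (host, params)
    headers = {"X-Cloupia-Request-Key": api_key}
    return url, headers

# Illustrative call: submit a catalog workflow (workflow name is a placeholder).
url, headers = build_ucsd_request(
    "ucsd.example.com", "MY-API-KEY",
    "userAPISubmitWorkflowServiceRequest",
    {"param0": "Deploy ESXi Host", "param1": {"list": []}, "param2": -1},
)
print(url)
```

Pass the result to any HTTP client (e.g. `requests.get(url, headers=headers, verify=False)` in a lab) to actually execute the workflow.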
Key take-aways
Key take-aways
• Carefully review all requirements and configure them across the stack
• When troubleshooting, use a top-down approach
• Make an inventory of all paths involved
• Understand all CRCs, errors, and speeds along the entire path
• Upgrade the infrastructure step by step and make sure to understand all upgrade guides / prerequisites
• UCS-Director can assist in automating standard deployment tasks
Participate in the “My Favorite Speaker” Contest
• Promote your favorite speaker through Twitter and you could win $200 of Cisco Press products (@CiscoPress)
• Send a tweet and include
• Your favorite speaker’s Twitter handle @rsmeyers
• Two hashtags: #CLUS #MyFavoriteSpeaker
• You can submit an entry for more than one of your “favorite” speakers
• Don’t forget to follow @CiscoLive and @CiscoPress
• View the official rules at http://bit.ly/CLUSwin
Promote Your Favorite Speaker and You Could Be a Winner
Complete Your Online Session Evaluation
Don’t forget: Cisco Live sessions will be available for viewing on-demand after the event at CiscoLive.com/Online
• Give us your feedback to be entered into a Daily Survey Drawing. A daily winner will receive a $750 Amazon gift card.
• Complete your session surveys through the Cisco Live mobile app or your computer on Cisco Live Connect.
Continue Your Education
• Demos in the Cisco campus
• Walk-in Self-Paced Labs
• Table Topics
• Meet the Engineer 1:1 meetings
• Related sessions
Thank you