Posted on 13-Aug-2015
transcript
OpenFlow
Josef Ungerman, CCIE #6167
Stanford Clean Slate led to the development of…
What is OpenFlow? (per the Wikipedia definition)
"OpenFlow is a communications protocol that gives access to the forwarding plane of a network switch or router over the network."
Four parts to OpenFlow

OpenFlow Controller
Central administration and operations point for network elements.

Northbound API
Integral part of the Controller. A "network-enabled" application can use the Northbound API to request services from the network…

OpenFlow Device Agent
The agent runs on the network device, receives instructions from the Controller, and programs the device tables.

OpenFlow Protocol
"A mechanism for the OpenFlow Controller to communicate with OpenFlow Agents…"

Examples of OpenFlow Open Source Controllers

OpenFlow Agents
• Open Source – e.g. Indigo http://www.openflowhub.org/display/Indigo/Indigo+-+Open+Source+OpenFlow+Switches
• Vendor Specific – e.g. Cisco OnePK OF 1.3 agent (IOS, IOS-XE, IOS-XR, NX-OS)
Important lesson for today…
OpenFlow does not equal SDN.
OpenFlow is one flavor of Software Defined Networking (SDN).
OpenFlow Protocol in more detail

OpenFlow Protocol Versions

OpenFlow 1.0

[Diagram: OpenFlow Controller ↔ switch CPU; switch contains a Flow Table and the Switch Forwarding Engine]
An incoming packet arrives at the switch.**
**OpenFlow 1.0 supports a lookup into a single flow table.
Symmetric sync messages (Hello, Echo, Vendor…)
[Diagram: OpenFlow v1.0 switch – Flow Table above the Switch Forwarding Engine]
Fields from the packet header are used as the lookup key.**
**OpenFlow 1.0 supports a lookup into a single flow table.
Lookup key: header fields are used to build the lookup key.
[Diagram: OpenFlow Controller programs the switch flow table via the CPU]
If there is no match, the Controller programs the switch flow table.
[Diagram: packets leaving through the Switch Forwarding Engine]
The Forwarding Engine forwards the packets.**
**OpenFlow 1.0 supports a lookup into a single flow table.
OpenFlow v1.0 – Flow Table in more detail…

FLOW TABLE: HEADER FIELDS | COUNTERS | ACTIONS
A flow "entry" consists of one row in the Flow Table.
HEADER FIELDS – the "famous" OpenFlow 12-tuple:
1. Ingress Port
2. Source MAC
3. Dest MAC
4. Ether Type
5. VLAN ID
6. VLAN Priority
7. IP SRC
8. IP DEST
9. IP Protocol
10. IP TOS
11. TCP/UDP SRC (ICMP Type)
12. TCP/UDP DEST (ICMP Code)
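As a toy illustration, the 12-tuple match can be sketched in a few lines of Python. This is a minimal sketch, not a real OpenFlow implementation: the field names, the dict-based packet, and the "None means wildcard" convention are our own assumptions, not the spec's wire format.

```python
# Minimal sketch: build the v1.0 12-tuple lookup key and do a
# first-match flow-table lookup (None in a match = "match any").

FIELDS = ("in_port", "src_mac", "dst_mac", "eth_type", "vlan_id",
          "vlan_prio", "ip_src", "ip_dst", "ip_proto", "ip_tos",
          "l4_src", "l4_dst")

def make_key(pkt):
    """Extract the 12-tuple from a parsed packet (a dict here)."""
    return tuple(pkt.get(f) for f in FIELDS)

def lookup(flow_table, key):
    """Return the actions of the first matching entry, else None
    (a real switch would then punt the packet to the controller)."""
    for match, actions in flow_table:
        if all(m is None or m == k for m, k in zip(match, key)):
            return actions
    return None

table = [
    ((None,) * 6 + ("10.0.0.1", None, 6, None, None, 80), ["output:2"]),
    ((None,) * 12, ["drop"]),  # wildcard-all entry, catches the rest
]
pkt = {"in_port": 1, "ip_src": "10.0.0.1", "ip_proto": 6, "l4_dst": 80}
print(lookup(table, make_key(pkt)))  # ['output:2']
```

Real hardware does this with TCAMs rather than a linear scan, which is exactly the constraint discussed later in the deck.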
OpenFlow v1.0 – Flow Table in more detail…

FLOW TABLE: HEADER FIELDS | COUNTERS | ACTIONS

COUNTERS:
Per Table – Active Entries (32 bits), Packet Lookups (64 bits), Packet Matches (64 bits)
Per Flow – Received Packets (64 bits), Received Bytes (64 bits), Duration in seconds (32 bits), Duration in nanoseconds (32 bits)
Per Queue – Transmit Packets (64 bits), Transmit Bytes (64 bits), TX Overrun Errors (64 bits)
Per Port – Received Packets, Transmit Packets, Received Bytes, Transmit Bytes, Received Drops, Transmit Drops, Received Errors, Transmit Errors, Received Frame Alignment Errors, RX Overrun Errors, RX CRC Errors, Collisions (all 64 bits)
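The per-flow counters above are easy to picture as a small record updated on every match. A hedged sketch (names are ours; only the counter widths come from the table above):

```python
# Illustrative sketch of the per-flow counters an OpenFlow 1.0 switch
# keeps for each flow entry; widths noted in comments are wire widths.
import time
from dataclasses import dataclass, field

@dataclass
class FlowCounters:
    received_packets: int = 0   # 64-bit on the wire
    received_bytes: int = 0     # 64-bit on the wire
    installed_at: float = field(default_factory=time.monotonic)

    def on_match(self, pkt_len):
        """Update counters when a packet matches this flow entry."""
        self.received_packets += 1
        self.received_bytes += pkt_len

    def duration(self):
        """Seconds + nanoseconds split, as reported in flow stats."""
        d = time.monotonic() - self.installed_at
        return int(d), int((d - int(d)) * 1e9)

c = FlowCounters()
for length in (64, 1500, 1500):
    c.on_match(length)
print(c.received_packets, c.received_bytes)  # 3 3064
```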
OpenFlow v1.0 – Flow Table in more detail…

FLOW TABLE: HEADER FIELDS | COUNTERS | ACTIONS

ACTIONS: multiple actions are available to be programmed. Let us explore those in more detail…
Required actions supported by an "OpenFlow 1.0" switch
[Diagrams: OpenFlow Controller ↔ switch (CPU, Flow Table, Switch Forwarding Engine), one per action]

Required action #1 – Forward out all ports except the input port
Required action #2 – Redirect to the OpenFlow Controller
(In addition, there are other asynchronous switch-to-controller messages like this:
• Port-Status (up/down, STP state,…)
• Flow-Removed (idle, timeout)
• Error)
Required action #3 – Forward to the local CPU
Required action #4 – Perform action in the Flow Table ("inject" operation)
Required action #5 – Forward to the input port
Required action #6 – Forward to the destination port
Required action #7 – Drop the packet
Required actions (summary):
1. Forward out all ports except input port
2. Redirect to OpenFlow Controller
3. Forward to local forwarding stack (CPU)
4. Perform action in flow table
5. Forward to input port
6. Forward to destination port
7. Drop packet

Optional actions:
• Modify-Field (e.g. VLAN translation)
• Enqueue (QoS)
• Forward Normally (L2/L3)
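A toy dispatcher shows how an agent might map these required actions onto output ports. This is a sketch under our own naming assumptions (the constants are not the spec's `OFPP_*` enum values), and action #4 (re-inject into the flow table) is omitted for brevity:

```python
# Sketch of dispatching the v1.0 required actions to output targets.
FLOOD, CONTROLLER, LOCAL, IN_PORT, DROP = (
    "flood", "controller", "local", "in_port", "drop")

def apply_action(action, pkt, in_port, ports):
    """Return the list of targets the packet is sent to."""
    if action == FLOOD:          # 1: all ports except the input port
        return [p for p in ports if p != in_port]
    if action == CONTROLLER:     # 2: packet-in to the controller
        return ["controller"]
    if action == LOCAL:          # 3: local forwarding stack (CPU)
        return ["cpu"]
    if action == IN_PORT:        # 5: send back out the input port
        return [in_port]
    if action == DROP:           # 7: drop the packet
        return []
    return [action]              # 6: a specific destination port

print(apply_action(FLOOD, b"", 1, [1, 2, 3]))  # [2, 3]
```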
OpenFlow 1.1

[Diagram: OpenFlow Controller ↔ switch with Flow Table 1, Flow Table 2, … Flow Table n, a Group Table, the CPU, and the Switch Forwarding Engine]
An OpenFlow 1.1 switch consists of one or more flow tables and a group table.
The group table provides additional methods for forwarding, i.e. broadcast/multicast.
OpenFlow v1.1 pipeline processing

Matching starts at Table 0 and "may" continue to the next table.
The ingress packet enters with an empty Action Set = {}. Each flow table matches on ingress port + metadata + packet headers; the packet, input port, and metadata (plus the accumulated Action Set) are carried to the next table. After the last table, the Action Set is executed.
[Diagram: ingress packet → Table 0 → Table 1 → … → Table n → Execute Action Set]
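The table chaining above can be sketched as a loop that accumulates an action set. This is a toy under our own structure choices (dict-based instructions, lambda tables); the instruction names only loosely mirror the spec's Write-Actions / Write-Metadata / Goto-Table:

```python
# Toy v1.1 multi-table pipeline: each table's matching entry may write
# actions/metadata and name a later table; the action set is executed
# only once the pipeline ends (goto_table absent).

def run_pipeline(tables, pkt):
    action_set, metadata, table_id = {}, {}, 0
    while table_id is not None:
        instr = tables[table_id](pkt, metadata)   # match in this table
        action_set.update(instr.get("write_actions", {}))
        metadata.update(instr.get("write_metadata", {}))
        table_id = instr.get("goto_table")        # None ends the pipeline
    return action_set                             # then: execute it

tables = {
    0: lambda pkt, md: {"write_actions": {"set_vlan": 10}, "goto_table": 1},
    1: lambda pkt, md: {"write_actions": {"output": 3}},
}
print(run_pipeline(tables, b"pkt"))  # {'set_vlan': 10, 'output': 3}
```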
OpenFlow v1.1

Flow entries are matched in priority order; the first matching entry in a table is used.
[Diagram: Table 0 with Flow Entries 1–9; Table 1 … Table n]
OpenFlow v1.1

Actions in the flow table define the packet-processing options:
• Packet forwarding
• Packet modification
• Pipeline processing
• Group table processing
[Flow entry: MATCH FIELD | COUNTERS | ACTIONS]
OpenFlow v1.1 match fields (MATCH FIELD | COUNTERS | ACTIONS):
Ingress Port, Source MAC, Dest MAC, Ether Type, VLAN ID, VLAN Priority, IP SRC, IP DEST, IP Protocol, IP TOS, TCP/UDP SRC (ICMP Type), TCP/UDP DEST (ICMP Code), MPLS Label, MPLS Traffic Class

MPLS and VLAN Q-in-Q are now supported in version 1.1.
OpenFlow v1.1 defines two processing-pipeline options: OPENFLOW-ONLY and OPENFLOW-HYBRID.

OPENFLOW-ONLY SWITCH – all packets go through the OpenFlow processing pipeline.
OPENFLOW-HYBRID SWITCH – each packet is steered either to the OpenFlow processing pipeline or to the standard Ethernet processing pipeline ("OF or STD"), then to output.
OpenFlow 1.2

IPv6 is now supported for lookup in the flow table: both IPv4 and IPv6 flows are supported in the header-field lookup.
Header fields: Ingress Port, Source MAC, Dest MAC, Ether Type, VLAN ID, VLAN Priority, IP SRC, IP DEST, IP Protocol, IP TOS, TCP/UDP SRC (ICMP Type), TCP/UDP DEST (ICMP Code), MPLS Label, MPLS Traffic Class
OpenFlow 1.3

IPv6 extension headers are supported in OF 1.3 (IPv6 standard header + extension headers + data). Matching is allowed on the following conditions:
• Hop-by-Hop IPv6 extension header
• Router IPv6 extension header
• Fragmentation IPv6 extension header
• Destination Options IPv6 extension header
• Authentication IPv6 extension header
• Encrypted Security IPv6 extension header
• No Next Header
• IPv6 extension headers out of preferred order
• Unexpected IPv6 extension header
OpenFlow v1.3

[Diagram: switch with Flow Tables 1…n, Group Table, CPU, Switch Forwarding Engine, and a new Flow Meter Table]
An OpenFlow 1.3 switch now adds a "flow meter" table. A flow meter provides rate limiting (policing).
OpenFlow v1.3 – per-flow meters supported in OF 1.3

METER TABLE: METER IDENTIFIER | METER BANDS | COUNTERS
Each meter band: TYPE | RATE | COUNTERS | TYPE-SPECIFIC ARGUMENTS
A meter controls the rate of packets in a flow.
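In spirit, a meter band with a "drop" action behaves like a token bucket. A hedged toy sketch (real OF 1.3 meters work in kb/s or packets/s with configured burst sizes; the numbers and class here are ours):

```python
# Toy one-band rate limiter: a token bucket refilled at `rate` pkt/s.
class Meter:
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens, self.last = burst, 0.0

    def allow(self, now):
        """True if the packet passes, False if the drop band fires."""
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

m = Meter(rate=2, burst=2)                        # 2 pkt/s, burst of 2
results = [m.allow(t) for t in (0.0, 0.0, 0.0, 1.0)]
print(results)  # [True, True, False, True]
```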
OpenFlow v1.3 – auxiliary connections supported in OF 1.3

BEFORE: a single TCP connection between the OpenFlow Controller and the OpenFlow Switch.
AFTER: auxiliary connections, over UDP and DTLS, carry packet-in/packet-out messages between controller and switch.
Other OpenFlow v1.3 highlights…
• Match on the MPLS Bottom-of-Stack (BoS) bit – label stacking
• Provider Backbone Bridging (PBB) support – MAC-in-MAC
• Duration field added for statistics
• Support for tunnel encapsulations (e.g. GRE – Generic Routing Encapsulation)
• Ability to disable packet/byte counters on a per-flow basis
OpenFlow 1.3.x (v1.3.1)

Version-negotiation TLV supported in OF 1.3: version negotiation** is now built into a flexible TLV (Type-Length-Value) format used during switch/controller negotiation.

** Previously, negotiation could fail because not all versions were known by both sides.
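The idea behind the negotiation fix can be sketched as a set intersection: each side advertises the versions it supports and both pick the highest common one. This is a sketch of the concept only (wire encodings of the hello element are omitted; the version sets below are illustrative):

```python
# Sketch of bitmap-style version negotiation: advertise supported
# versions, intersect, pick the highest common one.
def negotiate(local_versions, peer_versions):
    common = set(local_versions) & set(peer_versions)
    if not common:
        return None          # negotiation fails: no shared version
    return max(common)       # highest version both sides support

switch = {0x01, 0x04}        # speaks OF 1.0 and OF 1.3
controller = {0x01, 0x03, 0x04}
print(hex(negotiate(switch, controller)))  # 0x4 → OpenFlow 1.3
```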
OpenFlow Protocol Summary

OpenFlow v1.0 – Initial standard; most prevalent in the market today
OpenFlow v1.1 – Added support for multiple flow tables and MPLS; defines two operating modes – Hybrid | Pure OpenFlow
OpenFlow v1.2 – Adds support for IPv6
OpenFlow v1.3 – Adds support for rate limiting | IPv6 extension headers | GRE; the version deemed production-ready
OpenFlow v1.3.1–1.3.4 – Adds negotiation TLVs, bug fixes
OpenFlow v1.4 – Extensibility, bundles, TCP port 6633 → 6653, improvements…
[Diagram: Cisco SDN architecture – application frameworks, management systems, controllers, … on top; device layers below: Management, Orchestration, Network Services, Control, Forwarding]
Operating systems – IOS / NX-OS / IOS-XR – with APIs (onePK) and data models (YANG).
Interfaces and protocols shown: OpenFlow, OpenStack Neutron, Puppet, onePK (C/Java/Python) plug-ins, "protocols" (BGP, PCEP,…), NETCONF/YANG, REST/RESTful (JSON/XML), ACI Fabric, OpFlex.
Example: OpenFlow vs. hardware capabilities – OpenFlow 1.3 match-field support

[Table: match fields supported by ASIC X version Y, per forwarding profile (L2-only; L2+L3 IPv4-only; L2+L3 IPv4+IPv6 dual-stack); support varies by profile. Fields listed: OXM_OF_IN_PORT, OXM_OF_IN_PHY_PORT, OXM_OF_METADATA, OXM_OF_ETH_DST, OXM_OF_ETH_SRC, OXM_OF_ETH_TYPE, OXM_OF_VLAN_VID, OXM_OF_VLAN_PCP, OXM_OF_IP_DSCP, OXM_OF_IP_ECN, OXM_OF_IP_PROTO, OXM_OF_IPV4_SRC, OXM_OF_IPV4_DST, OXM_OF_TCP_SRC, OXM_OF_TCP_DST, OXM_OF_UDP_SRC, OXM_OF_UDP_DST, OXM_OF_SCTP_SRC, OXM_OF_SCTP_DST, OXM_OF_ICMPV4_TYPE, OXM_OF_ICMPV4_CODE]
[Table continued, same profiles: OXM_OF_ARP_OP, OXM_OF_ARP_SPA, OXM_OF_ARP_TPA, OXM_OF_ARP_SHA, OXM_OF_ARP_THA, OXM_OF_IPV6_SRC, OXM_OF_IPV6_DST, OXM_OF_IPV6_FLABEL, OXM_OF_ICMPV6_TYPE, OXM_OF_ICMPV6_CODE, OXM_OF_IPV6_ND_TARGET, OXM_OF_IPV6_ND_SLL, OXM_OF_IPV6_ND_TLL, OXM_OF_MPLS_LABEL, OXM_OF_MPLS_TC, OXM_OF_MPLS_BOS, OXM_OF_PBB_ISID, OXM_OF_TUNNEL_ID, OXM_OF_IPV6_EXTHDR]
OpenFlow 1.3 set-actions support

Actions:
• Output Port – OFPP_IN_PORT, OFPP_NORMAL, OFPP_FLOOD, OFPP_ALL, OFPP_CONTROLLER, OFPP_LOCAL
• Set-Queue
• Drop
• Group
• Push-Tag/Pop-Tag – Push VLAN header, Pop VLAN header, Push MPLS header, Pop MPLS header, Push PBB header, Pop PBB header
• Change-TTL – Set MPLS TTL, Decrement MPLS TTL, Set IP TTL, Decrement IP TTL, Copy TTL outwards, Copy TTL inwards
OpenFlow & Hardware

• Parallel TCAM lookups – star lookup (e.g. EARL), pipeline lookup (e.g. K10) – TCAM4: 250M lookups/sec.
• Little or no flexibility – not possible to reprogram the ASIC to support OF logic (12-tuple, table chains, etc.) – can emulate some OF functions, but can't be fully compliant
• Missing features can't be added – older/cheaper ASICs may have no MPLS, no IPv6, sparse counters, simplistic QoS

Can SDN help to reuse old/cheap ASICs?

Example: pipelining L3 switch ASIC
[Diagram: FE ASIC (Forwarding Engine) with DRAM, TCAMs (headers only), SRAMs, Netflow TCAM; pipeline stages: map → L2 fwd → classify → police → L3 fwd → statistics → queue → map → police → classify; TM ASIC (Traffic Manager) – 16K queues, SRR (1-level shaping)]
• Flexible lookup stages (table chaining)
• Multiple flow tables with full 12-tuple matching – L2, L3, ACL, IPv4/IPv6… – a 12-tuple match requires a much bigger (expensive, complex) TCAM than an ACL-like match (MAC or FIB is 1-tuple) – example: Catalyst 3850 (UADP ASIC) has 17K TCAM entries for OF (vs. 80K for MAC or FIB)
• Group table with full action-list support – multicast, multipath forwarding, SPAN, …
• Apply-actions support using high-speed recirculation – tunneling, …
• Metadata support – labels, …
• Full per-flow statistics – flexible statistics-counter assignment
• Full meter-table support
• Cisco extensions using programmable packet parsing, programmable rewrite, regular-expression matching, static metadata for L1/L2/L3 configuration, advanced QoS
Another important lesson:
Software can't control what hardware can't deliver.
• Cisco Network Processors are natively OF 1.3 capable – complete programmability (C language) – optimized fast lookup memories, sTCAM – but higher power and cost than fixed ASICs (a full 12-tuple match would be pretty expensive)
• Examples – QFP – 60 Gbps (ASR1000); Typhoon – 120 Gbps (ASR9000); nPower X1 – 400 Gbps (CRS, NCS)
QFP (Quantum Flow Processor)
[Diagram: distribute & gather logic; resources & memory interconnect (complete packets in/out); processing pool of 256 engines (64 PPEs × 4 threads); TM ASIC – 128K queues, 5-level shaping; packet DRAM; on-chip resources; TCAM4; RLDRAM2 banks 0–7; fast memory access; clustering XC]
• Packet NPUs – Broadcom, Marvell,… – newer versions are OF 1.0 or 1.3 compliant – various NPU limits (no L2 and L3 match at the same time, limited IPv6 match, etc.) – smaller TCAM = cheaper, but limited OF 1.3 12-tuple table size (2K entries, etc.), IPv6 troubles
• Service NPUs – Cavium, Freescale/NetLogic,… – complete programmability, definitely OF ready – typically no TCAM, so software tree lookup (M-trie); low performance stability
• Intel x86 CPUs – complete programmability, definitely OF ready – 40G capable today, but they are general purpose – high power and high cost
Networking ASIC vs. x86 CPU

CRS: 2004: 130nm NPU, 40 Gbps; 2010: 65nm NPU, 140 Gbps; 2013: 40nm NPU, 400 Gbps; 2015: 20nm…
ASR9000: 2009: 90nm NPU, 120 Gbps per slot; 2011: 55nm NPU, 360 Gbps per slot; 2014: 28nm NPU, 800 Gbps per slot; …

[Chart: CPU-core (x86) feature-processing performance (10G / 5G / 1G) vs. features enabled – 1 feature: IP forwarding; 2 features: + MPLS label; 3 features: + Netflow; 'N' features: …; legend: no traffic management | basic QoS | hierarchical QoS]

Can I use Intel x86 as the forwarding engine?
• nPower X1 = 400 Gbps, 230 Mpps, 75 W (with IP, ACL, RPF, H-QoS)
• Xeon E5-2600v2 = 40 Gbps, 6–22 Mpps, 80 W (same features, no QoS)
• x86 has high power consumption (half of the chip is graphics ops, floating-point ops, etc.)

Today, a decent forwarding NPU/ASIC is ~10–20x faster, smaller, and more power-efficient than an equivalent CPU solution.
Conclusion:
Low bandwidth = CPU (low-volume, well-paid traffic → NFV)
High bandwidth = NPU/ASIC (high-volume, broadband-like traffic → switching & routing + SDN)
Myth #1: Cisco uses only internal silicon; that's why it's so expensive.

Not true. Cisco uses all hardware sources:
• Internal development – whenever it makes sense (clear criteria) – example: CRS/NCS forwarding NPU, ASR9K fabric, ASR900 ASIC… – a specific form: acquisition/spin-in (e.g. N7K/N9K)
• Merchant+ – a Cisco-only version with certain improvements (X years of exclusivity) – example: ASR9K Trident/Typhoon/Tomahawk NPU – another form of Merchant+: merchant + Cisco ASIC together (e.g. ACI/N9K)
• Merchant – Broadcom, Marvell, Vitesse,… – used if they fit our requirements (features, performance, strategy) – example: ASR901, ASR9000v, ME1200…

Value proposition: Cisco delivers best-in-class hardware. It has been like this for decades, and it is going to continue.
Yet another lesson:
Even in the SDN world, there will be (a) good, (b) good-enough, or (c) poor hardware.
Who controls OpenFlow?

OPEN NETWORKING FOUNDATION (ONF)
A non-profit consortium dedicated to "the transformation of networks through SDN", with a mission to "commercialize and promote SDN…as a disruptive approach to networking…"

ONF Board Members:
Deutsche Telekom : Facebook : Goldman Sachs : Yahoo : Google : Microsoft : NTT Communications : Verizon

ONF Members:
6WIND, A10 Networks, ADVA Optical Networking, Alcatel-Lucent, Aricent Group, Big Switch Networks, Broadcom, Brocade, Centec Networks, China Mobile, Ciena, Cisco, Citrix, CohesiveFT, Colt, CompTIA, Cyan, Dell/Force10, Elbrys, Ericsson, ETRI, Extreme Networks, EZchip, F5, France Telecom Orange, Freescale, Fujitsu, Gigamon, Hitachi, HP, Huawei, IBM, Infinera, Infoblox, Intel, IP Infusion, Ixia, Juniper Networks, KDDI, Korea Telecom, Level 3 Communications, LineRate Systems, LSI, Luxoft, Marvell, Mellanox, Metaswitch Networks, Midokura, NCL Communications, NEC, Netgear, Netronome, NetScout Systems, Nokia Siemens Networks, NoviFlow, Oracle, Overture Networks, PICA8, Plexxi Inc., Qosmos, Radware, Riverbed Technology, Samsung, SK Telecom, Spirent, Sunbay, Swisscom, Tail-f Systems, Telecom Italia, Telefónica, Tencent, Texas Instruments, Thales, Transmode, Turk Telekom / Argela, Vello Systems, Verisign, VMware/Nicira, Xpliant, ZTE Corporation
Is that a LAN-like centralized SDN OF deployment? Not really.
• B4 is a world-wide WAN
• The network runs IS-IS and BGP
• The OF agent is used to set up TE tunnels from a central controller (better tools are evolving for this – see IETF SPRING, www.segment-routing.net)

Urs Hölzle, Senior Vice President of Technology Infrastructure at Google, at the 2nd annual Open Networking Summit (April 2012): http://www.eetimes.com/electronics-news/4371179/Google-describes-its-OpenFlow-network
SDN WAN since 2011!
Original SDN idea: Clean Slate Project (Stanford University)

[Diagram: control-plane architectures compared – Traditional (distributed control plane; disconnected network and apps), Hybrid SDN (evolved control-plane architecture, examples), and Centralized SDN; each layer showing applications, control/network/services-plane components, and ASIC/data-plane components; underlay (physical) vs. overlay (tunnels)]

OpenFlow sweet spot (centralized SDN):
• NREN, education sector (Internet2)
• DC overlay (OVS – Open vSwitch)
• OpenDaylight/XNC add-on (e.g. SPAN)
THANK YOU FOR YOUR ATTENTION
Please rate this talk.