Date posted: 20-Aug-2015
Category: Technology
Uploaded by: mestery
LISP and NSH in Linux and Open vSwitch
Vina Ermagan, Lori Jakab, Pritesh Kothari, Kyle Mestery
Agenda
• Service Chaining Definition
• LISP and Control Plane Service Chaining
• Service Chaining in Data Plane: NSH
• Problem Statement
• Work Completed
• Next Steps
Service Chaining and Network Service Headers
• Service chaining is the broad term that describes delivering multiple services in a specific order
– Decouples network topology and services
– Supports dynamic insertion of services
– Common model for all types of services
• Composed of a data plane (the focus of this presentation) and control/policy planes
– Control and policy planes: will vary based on environment and requirements. The LISP control plane is one example. An area for innovation.
– Data plane: a common service header, shared between network platforms and services
LISP Background – Architecture
• Policy: traffic engineering, service chaining, load balancing, etc.
• Data plane: encapsulation header to build a multitenant overlay
• Control plane: mapping of the overlay address space to the underlying physical network, plus policy
• Open L3 overlay protocol (RFC 6830):
– Decouples the network control plane from the data plane
– Provides network programmability
– Control plane enables dynamic, on-demand tunneling
LISP Control Plane – Explicit Locator Path (ELP)
• Enables a hop-by-hop data plane path via a control plane ELP
• L: Lookup bit, indicating that the Re-encap Hop's address is an indirection and must be looked up in the mapping system
• P: Probe bit, indicating that the Re-encap Hop accepts reachability control messages
• S: Strict bit, indicating that the Re-encap Hop must be used
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
|          AFI = 16387          |     Rsvd1     |     Flags     |
|   Type = 10   |     Rsvd2     |            Length             |
|            AFI = x            |          Rsvd3          |L|P|S|
|                       Re-encap Hop 1 ...                      |
|            AFI = x            |          Rsvd3          |L|P|S|
|                       Re-encap Hop 2 ...                      |
[Diagram: the Net Virtualization Edge requests the RLOC for the destination EID from the LISP Mapping System and receives a response containing the ELP]
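As a concrete illustration, the ELP record layout above can be packed with Python's struct module. This is a sketch only: field order follows the slide's diagram (AFI, then Rsvd3 with the L/P/S bits, then the hop address), IPv4-only hops are assumed, and the reserved and flags fields are simply zeroed.

```python
import struct

def pack_elp(hops):
    """Pack a LISP ELP LCAF record per the slide's layout (a sketch).

    hops: list of (ipv4_addr_str, lookup, probe, strict) tuples.
    """
    AFI_LCAF = 16387   # LISP Canonical Address Format AFI
    AFI_IPV4 = 1
    TYPE_ELP = 10      # ELP LCAF type
    body = b""
    for addr, l, p, s in hops:
        # 13 bits of Rsvd3 are zero; L, P, S occupy the low three bits
        bits = (l << 2) | (p << 1) | s
        body += struct.pack("!HH", AFI_IPV4, bits)
        body += bytes(int(octet) for octet in addr.split("."))
    # AFI (16), Rsvd1 (8), Flags (8), Type (8), Rsvd2 (8), Length (16)
    hdr = struct.pack("!HBBBBH", AFI_LCAF, 0, 0, TYPE_ELP, 0, len(body))
    return hdr + body
```

Each IPv4 re-encap hop occupies eight bytes: two for the AFI, two for the reserved/L/P/S word, four for the address.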
LISP Background – Data Plane Header
• N: the nonce-present bit
• L: the 'Locator-Status-Bits' field-enabled bit
• E: the echo-nonce-request bit; used together with N=1 to test reachability
• V: the Map-Version present bit; indicates source and destination mapping versions, enabling map-cache coherency
• I: the Instance ID bit; used for multi-tenancy and traffic segregation
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
|N|L|E|V|I|-|-|-|             Nonce / Map-Version               |
|              Instance ID / Locator-Status-Bits                |

Encapsulation: L2 Header | L3 Header (protocol=17) | UDP Header (port#=LISP) | LISP | IP Header | Original Payload
(Endpoint ID: inner IP/MAC; Routing Locator: outer header)
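A minimal sketch of packing this 8-byte header with Python's struct module, assuming the N (nonce-present) and I (Instance ID) bits are set and the remaining flags are left clear:

```python
import struct

def pack_lisp_header(instance_id, nonce=0):
    """Pack the 8-byte LISP data plane header from the slide (a sketch).

    Sets N (nonce-present) and I (Instance ID); not a full
    implementation of RFC 6830 flag handling.
    """
    N, I = 0x80, 0x08              # bit positions in the N|L|E|V|I|-|-|- octet
    flags = N | I
    # Word 1: flags (8 bits) followed by the 24-bit nonce
    word1 = (flags << 24) | (nonce & 0xFFFFFF)
    # Word 2: with I=1, the high 24 bits carry the Instance ID
    word2 = (instance_id & 0xFFFFFF) << 8
    return struct.pack("!II", word1, word2)
```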
LISP Work Completed
• LISP work completed
– Basic encap/decap support, flow-based tunneling
• Upstreamed to OVS, but not Linux
– Wireshark dissector support as of the 1.10.x stable series
– Command line support in ovs-vsctl
• LISP work ongoing
– Adding generic layer 3 tunneling support to OVS
• pop_eth, push_eth, allowing a flow to be specified without an Ethernet header
• Prerequisite for Linux upstreaming
• ARP handling to add the appropriate Ethernet header in the case of push_eth
– Enable LISP-GPE
– LISP control plane in OVS
Service Chaining Data Plane Components
• Traffic Classifier
– Determines what traffic requires services and forms the logical start of a service path. A traffic classifier imposes the NSH header.
• Service Path
– A service path is the actual forwarding path used to realize a service chain. A service path identifier is carried in the NSH header.
• Service Overlay
– The network overlay used by the service path. NSH is overlay agnostic and can be used with existing DC overlays.
• Context
– Shared context, carried in the NSH header, enables network–service interaction and richer policy creation and enforcement. For example, classification from an edge device can be conveyed to a service.
NSH Header
• Simple, fixed-size header
– Six 32-bit words: a 2-word base header plus four 32-bit mandatory context (metadata) headers
• Transport agnostic
– VXLAN, LISP, NVGRE, MPLS, etc.
• Context headers carry metadata along the service path
– Significance determined by the control plane
– Innovate, create network value!
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
|                          Base Header                          |
|                          Base Header                          |
|                         Context Header                        |
|                         Context Header                        |
|                         Context Header                        |
|                         Context Header                        |
NSH Header: Expanded
• O: OAM bit; indicates the packet is an OAM packet and must be punted
• C: indicates that the context headers are in use and their allocation (if C=0, all context values are 0; the headers are still present, just unused)
• Protocol Type: the type of the original packet payload
• Service Index: provides loop detection and location within the service chain; can be combined with the Service Path Identifier to derive a unique value for forwarding
• Service Path Identifier: identifies a service path; used for service forwarding
• Context Headers: packet metadata
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
|O|C|-|-|-|-|-|-|      Protocol Type (16)       | Svc Index (8) |
|              Service Path Identifier (24)             |   -   |
|                   Network Platform Context                    |
|                    Network Shared Context                     |
|                   Service Platform Context                    |
|                    Service Shared Context                     |
|                   Original Packet Payload ...                 |
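The six-word layout above is straightforward to pack. The sketch below follows the draft-era format shown on this slide (not the final standardized NSH); the O and C bits are left clear and the context header values are illustrative.

```python
import struct

def pack_nsh(protocol_type, service_index, service_path_id,
             contexts=(0, 0, 0, 0)):
    """Pack the 24-byte NSH header per the slide's layout (a sketch).

    Base word 1: O|C|reserved (8), Protocol Type (16), Service Index (8).
    Base word 2: Service Path Identifier (24), reserved (8).
    Followed by four 32-bit mandatory context headers.
    """
    assert len(contexts) == 4
    flags = 0  # O=0, C=0: context headers present but unused
    word1 = (flags << 24) | ((protocol_type & 0xFFFF) << 8) | (service_index & 0xFF)
    word2 = (service_path_id & 0xFFFFFF) << 8
    return struct.pack("!II4I", word1, word2, *contexts)
```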
NSH + LISP
• LISP has no mechanism to signal the presence of a non-IP payload yet
– Use the UDP destination port to indicate an Ethernet payload: draft-smith-lisp-layer2-03
– Addressed by a new draft:
• http://www.ietf.org/id/draft-lewis-lisp-gpe-00.txt

Encapsulation: L2 Header | L3 Header (protocol=17) | UDP Header (port#=LISP) | LISP | NSH | Original Payload
Example: NSH + LISP

 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
|N|L|E|V|I|P|-|-|     Nonce / Map-Version / Protocol-Type       |
|              Instance ID / Locator-Status-Bits                |

The N, E, and V bits are set to 0 when P=1.
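Following the example above, a hedged sketch of building the LISP header with the P bit set. The bit position of P and the next-protocol value are assumptions taken from the slide's draft-era layout, not settled protocol constants.

```python
import struct

def pack_lisp_gpe(next_protocol, instance_id):
    """Pack a LISP header with the P bit set (a sketch of the slide's example).

    With P=1, the N, E, and V bits must be 0, and the low byte of the
    first word carries a next-protocol value in place of part of the nonce.
    """
    # Assumed bit positions in the N|L|E|V|I|P|-|- octet from the diagram
    P, I = 0x04, 0x08
    flags = P | I                   # N = E = V = 0, as required when P=1
    word1 = (flags << 24) | (next_protocol & 0xFF)
    word2 = (instance_id & 0xFFFFFF) << 8
    return struct.pack("!II", word1, word2)
```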
However … NSH is transport independent
• NSH headers are transport agnostic and can be used with any transport protocol
• Support for overlay and underlay encapsulations
– GRE, LISP, VXLAN, etc.
• The outer encapsulation must indicate the presence of the NSH header
– A new IEEE Ethertype will be allocated for NSH
The state of service insertion …
Network Service Insertion, Today
• Network services are ubiquitous
– Firewalls and load balancers are in almost every data center
• The current state of the art is topology dependent
– Very complex: VLAN stitching, Policy-Based Routing (PBR), etc.
– Static: no dynamic, horizontal, or vertical scaling; requires network changes
• Service chaining is accomplished via hop-by-hop switching/routing changes
• No way to share valuable information between the network and services, or between services
• Data centers are evolving, with physical and virtual workloads and services
– Primarily physical service insertion today
• Services and service deployments must adapt to hybrid and dynamic data centers
• Service chains must span and converge these hybrid environments
Changing Network Service Landscape

From → To
• Physical appliances → Virtual appliances
• Static services → Dynamic services
• Separate domains (physical vs. virtual) → Seamless physical and virtual interoperability
• Hop-by-hop service deployment → Chain of services
• Underlay networks → Dynamic overlays
• Topology-dependent insertion → Insertion based on resources and scheduling
• No shared context → Rich metadata
• Policy based on VLANs → Policy based on metadata
Service Chaining
• Service chaining is the broad term that describes delivering multiple services in a specific order
– Decouples network topology and services
– Supports dynamic insertion of services
– Common model for all types of services
• Composed of a data plane (the focus of this presentation) and control/policy planes
– Data plane: a common service header, shared between network platforms and services
– Control and policy planes: will vary based on environment and requirements. A key area for innovation.
Work we’ve done so far …
LISP Work Completed
• LISP work completed
– Basic encap/decap support, flow-based tunneling
• Upstreamed to OVS, but not Linux
– Wireshark dissector support as of the 1.10.x stable series
– Command line support in ovs-vsctl
• LISP work ongoing
– Adding generic layer 3 tunneling support to OVS
• pop_eth, push_eth, allowing a flow to be specified without an Ethernet header
• Prerequisite for Linux upstreaming
– Enable LISP-GPE
NSH Work Completed
• NSH work completed
– Initial encap/decap prototype code finished
• Works with the VXLAN code in upstream OVS
– Wireshark dissector code added
• Helps in debugging NSH with VXLAN
– OpenFlow support in OVS for flow matching on the NSH Service Path ID (nsp)
– Command line support in ovs-vsctl and ovs-ofctl for the above flow matching
• NSH work ongoing
– Move encap/decap code into the Linux kernel
– Allow for stacking NSH with other overlay protocols (e.g., GRE and LISP)
Next Steps: Upstreaming
• Push the encap/decap code to a GitHub repository and send patches to the ovs-dev mailing list
• Add support for setting the NSH service index via an out_nsi=flow parameter
– Push this up into OVS
• LISP is in the out-of-tree openvswitch kernel module
– No standalone LISP tunneling support in Linux
– LISP will be pushed to the in-tree openvswitch module once layer 3 tunneling support is generalized in OVS
• Currently we push/pop Ethernet headers in the LISP code
• LISP is seen as just another layer 2 port in OVS
End Goal
• Allow for an elastic, overlay-based service network
– NSH for services
– GRE/LISP/VXLAN for the overlay network
– Future unification of LISP and VXLAN via the GPE drafts
• Integration with OpenDaylight
– Allow for programming of NSH information into OVS
• Tie it all together with your elastic cloud platform
– CloudStack
– Eucalyptus
– OpenStack
Questions/Future Additions?
• Upstreaming?
• GRE support?
• NSH directly over UDP and/or IP support?
• Integration with OpenDaylight?
• What's next?
• LISP control plane support?
Questions?
Backup Slides
NSH + VXLAN
• VXLAN has no mechanism to signal the presence of a non-Ethernet payload yet
– Use the UDP destination port to indicate the new payload
– Addressed by:
• http://www.ietf.org/id/draft-quinn-vxlan-gpe-00.txt

NSH + VXLAN using a UDP port number:
L2 Header | L3 Header (protocol=17) | UDP Header (port#=nsh-vxlan) | VXLAN | NSH Svc Header (PT=0x6558) | Original L2 Frame
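The stacking shown above can be sketched by concatenating a VXLAN header, an NSH header, and the original L2 frame. The UDP destination port that would signal NSH-over-VXLAN is still a placeholder in the draft, so this sketch stops at the VXLAN layer; the service path and index values below are illustrative.

```python
import struct

def vxlan_nsh_frame(vni, inner_frame):
    """Sketch of VXLAN + NSH stacking per the slide (outer L2/L3/UDP omitted).

    VXLAN header: I flag set, 24-bit VNI. NSH header: Protocol Type
    0x6558 (Transparent Ethernet Bridging) so the payload is an L2 frame.
    """
    # VXLAN: flags word with I bit (0x08) in the top octet, then VNI << 8
    vxlan = struct.pack("!II", 0x08 << 24, (vni & 0xFFFFFF) << 8)
    # NSH base words: flags=0, PT=0x6558, illustrative service index 255
    nsh_word1 = (0x6558 << 8) | 255
    nsh_word2 = (1 & 0xFFFFFF) << 8            # illustrative service path id 1
    nsh = struct.pack("!II4I", nsh_word1, nsh_word2, 0, 0, 0, 0)
    return vxlan + nsh + inner_frame
```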
NSH + NVGRE
• NVGRE has a protocol field to indicate the payload type
– 0x6558 is defined for Ethernet in NVGRE
• NSH Ethertype carried in NVGRE
– The NSH protocol type then indicates the original payload type: IP or Ethernet

L2 Header | L3 Header (protocol=0x2F) | NVGRE Header (protocol=0xnsh) | Svc Header (PT=0x0800) | Original IPv4 Packet
L2 Header | L3 Header (protocol=0x2F) | NVGRE Header (protocol=0xnsh) | Svc Header (PT=0x6558) | Original Ethernet Frame
NSH + IP/UDP
• In non-overlay topologies, native IP encapsulation can be used

L2 Header | L3 Header (protocol=17) | UDP Header (port#=nsh) | Svc Header (PT=0x6558) | Original Ethernet Frame
L2 Header | L3 Header (protocol=17) | UDP Header (port#=nsh) | Svc Header (PT=0x0800) | Original IPv4 Packet
NSH + MPLS
• The presence of metadata within an MPLS packet must be indicated in the encapsulation
– The Generic Associated Channel Label (GAL) [RFC5586], with label value 13, is utilized for this purpose
• The GAL provides a method to identify that a packet contains a "Metadata Channel Header (MCH)" followed by a non-service payload
– draft-guichard-mpls-metadata-00 proposes an extension to [RFC5586] to allow the first nibble of the ACH to be set to 0000b, indicating that metadata follows