

Date post: 14-Dec-2015
Upload: jaheem-upton
DOT – Distributed OpenFlow Testbed
Transcript
Slide 1: DOT – Distributed OpenFlow Testbed

Slide 2: Motivation
- Mininet is currently the de facto tool for emulating an OpenFlow-enabled network.
- However, the size of the network and the amount of traffic are limited by the hardware resources of a single machine.
- Our recent experiments with Mininet show that it can cause flow serialization of otherwise parallel flows: many flows co-exist and compete for switch resources, as transmission rates are limited by the CPU.
- Moreover, the process for running parallel iperf servers and clients is not trivial.

Slide 3: Objective
- Run large-scale emulations of OpenFlow-enabled networks.
- Avoid or reduce the flow serialization and contention introduced by the emulation environment.
- Enable emulation of large amounts of traffic.

Slide 4: DOT Emulation
- An embedding algorithm partitions the logical network across multiple physical hosts.
- Intra-host virtual link: embedded inside a single host.
- Cross-host link: connects switches located on different hosts.
- A Gateway Switch (GS) is added to each active physical host to emulate the link delay of cross-host links.
- The network augmented with the GSs is called the physical network.
- The SDN controller operates on the logical network.

Slide 5: Embedding of Logical Network
- (Figure: an emulated network embedded across two physical machines, Physical Host 1 and Physical Host 2, joined by cross-host links.)

Slide 6: Embedding Cross-host Links
- (Figure: the physical embedding, showing the gateway switches, the segments a and b of the cross-host links, and a Virtual Switch (VS).)

Slide 7: SDN Controller's View
- (Figure: the SDN controller sees only the logical network.)

Slide 8: Software Stack of a DOT Node
- (Figure: VMs on virtual interfaces, virtual links between OpenFlow switches, and the physical link.)

Slide 9: Gateway Switch
- A DOT component; one gateway switch per active physical host.
- Attached to the physical NIC of the machine.
- Facilitates packet transfer between physical hosts.
- Enables emulation of the delays of cross-host links.
- Oblivious to the forwarding protocol used in the emulated network.

Slide 10: Simulating the Delay of Cross-host Links
- (Figure: the emulated network, showing only the cross-host links, and its physical embedding.)
- Only one of the segments of a cross-host link simulates its delay.

Slides 11-12: Simulating Delay
- (Figure: flows A->F, B->E, and D->E crossing the gateway switches.)
- When a packet is received at a gateway switch through its physical interface, the GS must identify the remote segment through which the packet was previously forwarded.
- GS2 then has to forward the packet through a particular link even when the next hop is the same (e.g., for B->E and D->E).

Slide 13: Solutions for Traffic Forwarding at the Gateway Switch
- MAC rewriting
- Tagging
- Tunnel with tag

Slide 14: Approach 1: MAC Rewriting
- Each GS maintains the IP-to-MAC address mapping of all VMs.
- When a packet arrives at a GS through a logical link, the GS replaces:
  - the source MAC with the MAC of the receiving port, which lets the remote GS identify the segment through which the packet was forwarded;
  - the destination MAC with the MAC of the destination physical host's NIC, which enables unicast of the packet through the physical switching fabric.
- When a GS receives a packet from the physical interface, it:
  - checks the source MAC to identify the segment through which it should forward the packet;
  - restores the source and destination MACs by inspecting the packet's IP address fields before forwarding.

Slides 15-28: Approach 1: MAC Rewriting (walkthrough)
- (Figures: a packet from VM2 to VM1 traverses the gateway switches; in the controller's view, GS1 owns segment ports P_B and P_C and physical NIC P_M1, while GS2 owns P_D and P_E and NIC P_M2.)
- GS1, outward traffic: if the receiving port is P_B, set srcMac=P_B and dstMac=P_M2; if it is P_C, set srcMac=P_C and dstMac=P_M2; output on P_M1.
- GS2, outward traffic: if the receiving port is P_D, set srcMac=P_D and dstMac=P_M1; if it is P_E, set srcMac=P_E and dstMac=P_M1; output on P_M2.
- GS1, inward traffic: if srcMac=P_D, output on P_B; if srcMac=P_E, output on P_C; restore the MACs by inspecting the IP addresses.
- GS2, inward traffic: if srcMac=P_B, output on P_D; if srcMac=P_C, output on P_E; restore the MACs by inspecting the IP addresses.

Slide 29: Approach 1: Advantages and Limitations
- Advantages: the packet size remains the same, and no change is required in the physical switching fabric.
- Limitation: each GS needs to maintain the IP-to-MAC address mappings of all VMs, which is not scalable.

Slide 30: Approach 2: Tunnel with Tag
- A unique id is assigned to each cross-host link.
- When a packet arrives at a GS through an internal logical link:
  - the GS encapsulates the packet with a tunneling protocol (e.g., GRE), with the destination address set to the IP address of the destination physical host;
  - a tag equal to the id of the cross-host link is assigned to the packet (using the tunnel id field of GRE).
- When a GS receives a packet from the physical interface:
  - it checks the tag (tunnel id) field to identify the outgoing segment;
  - it forwards the packet after decapsulating the tunnel header.
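The per-segment rules of the tunnel-with-tag approach map naturally onto Open vSwitch, which DOT uses for its gateway switches. The following is only an illustrative sketch, not DOT's actual configuration: the bridge name `br-gs`, the port numbers (1 and 2 for the segments toward P_B and P_C, 10 for the GRE port), and the remote host address 192.0.2.2 are all assumed.

```shell
# Attach a GRE tunnel port to the gateway bridge; key=flow lets flow
# rules set and match the tunnel id per packet.
ovs-vsctl add-port br-gs gre0 -- set interface gre0 \
    type=gre options:remote_ip=192.0.2.2 options:key=flow

# Outward traffic: tag each cross-host segment with its link id and
# send the packet into the tunnel.
ovs-ofctl add-flow br-gs "in_port=1,actions=set_tunnel:1,output:10"
ovs-ofctl add-flow br-gs "in_port=2,actions=set_tunnel:2,output:10"

# Inward traffic: the tunnel id identifies the outgoing segment; OVS
# has already stripped the GRE header when these rules match.
ovs-ofctl add-flow br-gs "tun_id=1,in_port=10,actions=output:1"
ovs-ofctl add-flow br-gs "tun_id=2,in_port=10,actions=output:2"
```

Note that this needs only one pair of rules per cross-host segment, which is consistent with the scalability argument made for this approach.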
Slides 31-35: Approach 2: Tunnel with Tag (walkthrough)
- (Figures: the two cross-host links are assigned ids #1 and #2; a packet from VM2 to VM1 is encapsulated with an outer header addressed between the physical NICs P_M2 and P_M1 and carrying tunnel id #1, then decapsulated at the remote GS.)
- GS1, outward traffic: if the receiving port is P_B, set tunnel id 1; if it is P_C, set tunnel id 2; use the tunnel to Machine 2.
- GS2, outward traffic: if the receiving port is P_D, set tunnel id 1; if it is P_E, set tunnel id 2; use the tunnel to Machine 1.
- GS1, inward traffic: if the tunnel id is 1, output on P_B; if it is 2, output on P_C.
- GS2, inward traffic: if the tunnel id is 1, output on P_D; if it is 2, output on P_E.

Slide 36: Approach 2: Advantages and Limitations
- Advantages: no change is required in the physical switching fabric; no GS needs to know the IP-to-MAC address mappings; the rule set in a GS is on the order of the number of cross-host links; in short, a scalable solution.
- Limitation: the tunnel header lowers the MTU.
- Because of the scalability issue with MAC rewriting, we chose this solution.

Slide 37: Emulating Bandwidth
- Bandwidth is configured for each logical link using the Linux tc command.
- The maximum bandwidth of a cross-host link is bounded by the physical switching capacity.
- The maximum bandwidth of an internal link is capped by the processing capability of the physical host.

Slide 38: DOT: Summary
- DOT can emulate an OpenFlow network with specific link delays, bandwidths, and traffic forwarding.
- General Open vSwitch instances forward traffic as instructed by the Floodlight controller.
- Gateway switches, also Open vSwitch instances, forward traffic based on pre-configured flow rules.

Slide 39: Technology Used So Far
- Open vSwitch, version 1.8; a rate limit is configured on each port.
- Floodlight controller, version 0.9, with custom modules added: Static Network Loader and ARP Resolver.
- Hypervisor: QEMU-KVM.
- Link delays are simulated using tc (Linux traffic control).
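The bandwidth and delay emulation described on slides 37 and 39 relies on Linux traffic control. A minimal sketch follows; the interface names (`veth-s1`, `veth-s2`) and the numbers (100 Mbit/s, 5 ms) are illustrative assumptions, not values from DOT.

```shell
# Cap the rate of one end of a virtual link with a token bucket filter.
tc qdisc add dev veth-s1 root tbf rate 100mbit burst 32kbit latency 400ms

# Emulate 5 ms of one-way delay on a cross-host segment with netem;
# only one segment of each cross-host link adds the delay.
tc qdisc add dev veth-s2 root netem delay 5ms
```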

