Date posted: 15-Aug-2015
Category: Technology
Uploaded by: piotr-kieszczynski
Docker - SDN
Who is this guy? But seriously - who is this guy?

Hello! I AM PIOTR KIESZCZYNSKI. I am here because I love giving presentations. You can find me at @pkieszcz
FEW WORDS ABOUT MYSELF

Work
◦ Linux since Mandrake 6
◦ Automate all the things (600+ semi-automated hosts…)
◦ AWS automation starts with CLI
◦ CI
◦ HPC (grid networks)
◦ Kerberos v5 (major error / minor error)
◦ System Administrator @ Seamless Poland
Personal stuff
◦ Sailing
◦ TV series
◦ Swimming
◦ Music festivals
ERS360 / TS / SEQR
https://seamless.se/
SEAMLESS POLAND
https://www.seqr.com/int/
SEQR
Network solutions for Docker
Docker networking is:
◦ Still in early stages (not anymore?!)
◦ The default network assigned is a port on the Linux bridge docker0
◦ docker inspect --format='{{.NetworkSettings}}' 53720b3581be
Network solutions for Docker
What network solutions do we have now?
◦ Docker-specific networking (--net=container, -p and socket)
◦ Bridge + DHCP + VLAN
◦ OVS
◦ Flannel
◦ Weave
◦ Project Calico
◦ SocketPlane
◦ More and more incoming…
◦ Docker 1.7 libnetwork
Docker0 bridge
◦ The default network is created automatically when no additional “--net” or “-P” options are specified
◦ Each container gets an IP address on the bridge subnet, assigned automatically by Docker
◦ Similar to the default setup in KVM or VirtualBox
◦ The host can reach the container via its IP on the bridge
◦ However, outside traffic cannot reach the container
Docker0 bridge
# iptables -L -t nat -n
…
Chain POSTROUTING (policy ACCEPT)
target     prot opt source           destination
MASQUERADE all  --  172.17.0.0/16    0.0.0.0/0
…

# brctl show
bridge name   bridge id           STP enabled   interfaces
docker0       8000.56847afe9799   no            veth05a3408
                                                vethd88b38d
Port mapping
◦ Provides access to the container from the outside by allocating a DNAT port in the range 49153-65535
◦ Uses the Linux bridge docker0, but adds iptables rules for the DNAT
◦ docker run -P -itd nginx
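Under the hood, each published port becomes a DNAT rule in the nat table's DOCKER chain. A hypothetical example of what such a rule can look like (the container address 172.17.0.2 and the 49153→80 mapping here are illustrative, not taken from the deck):

```
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 49153 -j DNAT --to-destination 172.17.0.2:80
```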
Host and container
◦ Give the container full access to the host network using --net=host
◦ docker run --net=host --name c1 -itd ubuntu
◦ docker exec c1 ifconfig eth0
◦ Give a new container YY full access to the network of container XX with --net=container:XX
◦ docker run --net=container:nginx --name c2 -itd ubuntu
◦ docker exec c2 ifconfig eth0
How it’s done “manually”
sudo mkdir -p /var/run/netns
sudo modprobe ip_nat_ftp nf_conntrack_ftp

# Create a bridge
start_bridge () {
  # args: BRIDGE_NAME
  sudo brctl addbr $1 &>/dev/null || return
  sudo ip link set $1 up
  echo Created bridge: $1
}
How it’s done “manually” #2

start_container () {
  hostname=$1 image=$2 port=$3
  container=${hostname%%.*}

  # the exit status of docker inspect tells us whether the container exists
  pid=$(docker inspect -f '{{.State.Pid}}' $container 2>/dev/null)
  if [ "$?" = "1" ]
  then
    if [ -n "$port" ]
    then netopts="--publish=$port:22"
    else netopts="--net=none"
    fi
    docker run --name=$container --hostname=$hostname \
      --dns=10.1.1.1 --dns-search=example.com "$netopts" \
      -d $image
  elif [ "$pid" = "0" ]
  then docker start $container >/dev/null
  else return
  fi

  pid=$(docker inspect -f '{{.State.Pid}}' $container)
  sudo rm -f /var/run/netns/$container
  sudo ln -s /proc/$pid/ns/net /var/run/netns/$container

  echo Container started: $container
}
How it’s done “manually” #3

create_interface () {
  # Given an interface name "www-eth0", create both an interface with
  # that name and also a peer that is connected to it. Place the peer
  # in the container "www" and give it the name "eth0" there.
  interface=$1
  container=${interface%%-*}
  short_name=${interface##*-}
  sudo ip link add $interface type veth peer name P &>/dev/null || return
  give_interface_to_container P $container $short_name
  echo Created interface: $interface
}

give_interface_to_container () {
  # args: OLD_NAME CONTAINER NEW_NAME
  sudo ip link set $1 netns $2
  sudo ip netns exec $2 ip link set dev $1 name $3
  sudo ip netns exec $2 ip link set $3 up
}
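The naming convention that create_interface relies on is plain shell parameter expansion: "www-eth0" names the host-side veth end, while its peer becomes "eth0" inside the container "www". A small standalone demo of just that parsing (the helper name is hypothetical, not from the deck):

```shell
# Hypothetical helper: split a veth name like "www-eth0" into the
# container part ("www") and the in-container interface name ("eth0"),
# mirroring the expansions used in create_interface.
split_interface_name () {
  interface=$1
  container=${interface%%-*}    # strip the longest "-*" suffix -> "www"
  short_name=${interface##*-}   # strip the longest "*-" prefix -> "eth0"
  echo "$container $short_name"
}

split_interface_name www-eth0   # prints: www eth0
split_interface_name db1-eth1   # prints: db1 eth1
```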
How it’s done “manually” #4

bridge_add_interface () {
  bridge=$1
  interface=$2
  sudo brctl addif $bridge $interface &>/dev/null || return
  sudo ip link set dev $interface up
  echo Bridged interface: $interface
}
Build it “manually”
#!/bin/bash
start_container example.com ubuntu
create_interface h1-eth1
bridge_add_interface homeA h1-eth1

sudo ip netns exec example ip addr add 10.11.1.1/32 dev eth0
sudo ip netns exec example ip route add 10.1.1.1/32 dev eth0
sudo ip netns exec example ip route add default via 10.1.1.1
Why it sucks “literally”
◦ BASH is for stuff that just “works”
◦ Doesn’t scale at all
◦ You have to manually change stuff
◦ No error handling
◦ IP “management”
◦ No need for reinventing the wheel
◦ Routing, NATs and VLANs
◦ This stuff won’t work on CoreOS (doh!)
◦ Many other possible reasons
CoreOS (cloud-init)
# bridge
- name: 20-br800.netdev
  runtime: true
  content: |
    [NetDev]
    Name=br800
    Kind=bridge

# vlan
- name: 00-vlan800.netdev
  runtime: true
  content: |
    [NetDev]
    Name=vlan800
    Kind=vlan

    [VLAN]
    Id=800
CoreOS (cloud-init) #2
# subinterface
- name: 10-eth1.network
  runtime: true
  content: |
    [Match]
    Name=eth1

    [Network]
    DHCP=yes
    VLAN=vlan800

# attach
- name: 30-attach.network
  runtime: true
  content: |
    [Match]
    Name=vlan800

    [Network]
    Bridge=br800
DHCP + VLAN + Bridge
vconfig add eth0 100
brctl addbr br100
brctl addif br100 eth0.100
ip link add c1-eth1 type veth peer name P

dhclient in the container (issue with --privileged)

or DOCKER_OPTS='-e lxc'
then docker run with --lxc-conf:
docker run \
  --net="none" \
  --lxc-conf="lxc.network.type = veth" \
  --lxc-conf="lxc.network.ipv4 = 192.168.20.30/24" \
  --lxc-conf="lxc.network.ipv4.gateway = 192.168.20.1" \
  --lxc-conf="lxc.network.link = br800" \
  --lxc-conf="lxc.network.name = eth0" \
  --lxc-conf="lxc.network.flags = up" \
  -d
DHCP issue?
Requires trunk!
auto eth0.200
iface eth0.200 inet static
  address 10.0.1.1
  netmask 255.255.255.0

iface eth0.201 inet static
  address 10.0.2.1
  netmask 255.255.255.0

iface eth0.202 inet static
  address 10.0.3.1
  netmask 255.255.255.0
DHCP issue?
For each subnet...
subnet 10.0.1.0 netmask 255.255.255.0 {
  range 10.0.1.10 10.0.1.20;
  # you might point to some other address within that subnet
  # that should be advertised as router -
  # it does not have to be your linux box
  option routers 10.0.1.1;
  option broadcast-address 10.0.1.255;
  authoritative;
}
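The broadcast-address value above is just the network address OR-ed with the inverted netmask. A minimal sketch of that derivation in plain shell arithmetic (the helper name is hypothetical):

```shell
# Hypothetical helper: compute the broadcast address for a subnet,
# octet by octet: network_octet OR (255 - mask_octet).
broadcast_addr () {
  IFS=. ; set -- $1 $2 ; unset IFS   # split "net mask" into 8 octets
  echo "$(( $1 | (255 - $5) )).$(( $2 | (255 - $6) )).$(( $3 | (255 - $7) )).$(( $4 | (255 - $8) ))"
}

broadcast_addr 10.0.1.0 255.255.255.0   # prints: 10.0.1.255
```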
Weave
Description
◦ Extra daemon
◦ Kinda slow
◦ Builds GRE tunnels between hosts
◦ Manual IP management
Weave
Run
weave launch
C=$(weave run 10.2.1.1/24 -t -i ubuntu)

weave launch $HOST1
C=$(weave run 10.2.1.2/24 -t -i ubuntu)
Description
◦ Supports policy
◦ No VLANs
◦ No subnets
◦ You have to specify IPs manually
Project Calico
Run
docker run -e CALICO_IP=XXX -itd ubuntu

./calicoctl node --ip=172.17.8.101 --name workload-a --tid busybox
./calicoctl profile add PROF_A
./calicoctl profile PROF_A add workload-a
Flannel (CoreOS)
Description
◦ Shipped with CoreOS
◦ Randomly attaches a subnet to each flannel host
◦ Overrides --bip for the docker daemon so every container is created in that subnet
◦ No VLAN support
◦ No extra parameters with docker run
◦ How is it related to the task?
Flannel (CoreOS)
Config
{
  "Network": "10.0.0.0/8",
  "SubnetLen": 24,
  "SubnetMin": "10.10.0.0",
  "SubnetMax": "10.99.0.0",
  "Backend": {
    "Type": "udp",
    "Port": 7890
  }
}
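With SubnetLen 24, the SubnetMin/SubnetMax pair above bounds how many /24s flannel can hand out. A rough sanity check in plain shell arithmetic (assumption: the range is inclusive and stepped in whole /24 blocks):

```shell
# Hypothetical helper: convert a dotted quad to a 32-bit integer so we
# can count the /24 blocks (256 addresses each) in the configured range.
ip_to_int () {
  IFS=. ; set -- $1 ; unset IFS
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

min=$(ip_to_int 10.10.0.0)
max=$(ip_to_int 10.99.0.0)
echo $(( (max - min) / 256 + 1 ))   # prints: 22785
```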
Description
◦ Built by a French Docker DevOps guy (jpetazzo)
◦ Supports some overrides
◦ Supports DHCP / VLAN
Pipework
Run
docker run -name web1 -d apache
pipework br1 web1 192.168.12.23/20
pipework br1 $CONTAINERID 192.168.4.25/[email protected]
pipework eth1 $CONTAINERID dhcp
pipework ovsbr0 $(docker run -d zerorpcworker) dhcp @10
Description
◦ Consul
◦ CoreOS support
◦ DHCP
◦ OVS
◦ VLANs
◦ Strange IP management
◦ (best solution for the task?)
SocketPlane
Run
socketplane network create web 10.2.0.0/16
socketplane run -n web -itd ubuntu
RPI fanbois
◦ The Hypriot team has done a GREAT job
◦ Easy Docker for your Raspberry Pi
◦ Contest (1000+ httpd on RPIv2)
◦ I’ll show you mine, if you show me yours
Fresh improvements
Docker 1.7 libnetwork (near and bright future included)
What libnetwork gives us
◦ https://github.com/docker/docker/issues/9983
◦ Container Network Model
◦ docker net tool (join/create/destroy…)