
Docker networking

Date post: 11-Apr-2017
Networking in Docker, by Alvaro Saurin, Senior Software Engineer @ SUSE
Transcript
Page 1: Docker networking

Networking in Docker

Alvaro Saurin, Senior Software Engineer @ SUSE

Page 2: Docker networking

network: modes

three network modes:

● host
● bridge
● overlay

Page 3: Docker networking

network: host

[diagram: container-01 sits directly on the host, sharing its interfaces (eth0, lo, ...)]

container has full access to host’s interfaces

do not do it!

Page 4: Docker networking

network: host

$ docker run --rm --name container-01 --net=host -ti busybox /bin/sh
/ # ifconfig
docker0   Link encap:Ethernet  HWaddr xx:xx:xx:xx:xx:xx
          inet addr:172.17.0.1  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::x/64 Scope:Link
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:19888 errors:0 dropped:0 overruns:0 frame:0
          TX packets:19314 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:3063342 (2.9 MiB)  TX bytes:29045336 (27.6 MiB)

eth0      Link encap:Ethernet  HWaddr xx:xx:xx:xx:xx:xx
          inet addr:192.168.1.121  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::x/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:135513 errors:0 dropped:0 overruns:0 frame:0
          TX packets:109723 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:102680118 (97.9 MiB)  TX bytes:22766730 (21.7 MiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:230 errors:0 dropped:0 overruns:0 frame:0
          TX packets:230 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:37871 (36.9 KiB)  TX bytes:37871 (36.9 KiB)

the container can see (and control) all the interfaces in our host

Page 5: Docker networking

network: bridge

[diagram: host with eth0; container-01 and container-02 plugged into the docker0 bridge, subnet 172.17.0.0/16]

● bridge network: an internal, virtual switch
● containers are plugged into that switch
● containers on the same bridge can talk to each other
● users can create multiple bridges

Page 6: Docker networking

network: bridge: on the host

$ docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "df8e30242635b2...",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Config": [ { "Subnet": "172.17.0.0/16" } ]
        },
        "Containers": {},
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        }
    }
]

inter-container communication (enable_icc) is enabled by default

all traffic going out of the bridge is masqueraded (enable_ip_masquerade)

Page 7: Docker networking

network: bridge: as seen by the host

$ ifconfig docker0
docker0   Link encap:Ethernet  HWaddr xx:xx:xx:xx:xx:xx
          inet addr:172.17.0.1  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::42:a2ff:fe10:ccf7/64 Scope:Link
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:7 errors:0 dropped:0 overruns:0 frame:0
          TX packets:30 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:480 (480.0 B)  TX bytes:5025 (5.0 KB)

$ ip route
default via 192.168.1.1 dev wlan0  proto static  metric 600
172.17.0.0/16 dev docker0  proto kernel  scope link  src 172.17.0.1
...

route for traffic from host to containers

docker0 is by default at 172.17.0.1

Page 8: Docker networking

network: bridge: as seen by the container

$ docker run --rm --name container-02 -ti busybox /bin/sh
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr xx:xx:xx:xx:xx:xx
          inet addr:172.17.0.2  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::42:acff:fe11:2/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:40 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:6558 (6.4 KiB)  TX bytes:508 (508.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

/ # exit

Page 9: Docker networking

network: multi-host

Objective:

connect containers on multiple hosts to the same network(s)

Page 10: Docker networking

network: multi-host: scenarios

[diagram: host-A, host-B and host-C, each with eth0, run container-01 through container-06, attached across three networks: frontend, application and database]

Page 11: Docker networking

network: multi-host: scenarios

[diagram: the same six containers and three networks (frontend, application, database), all on a single big host-A]

Page 12: Docker networking

network: multi-host: scenarios

[diagram: a single big host-A runs VM-1, VM-2 and VM-3, which host the six containers and the frontend, application and database networks]

Page 13: Docker networking

network: multi-host

Solution 1: create a common IP space (at container level), with routing and probably NAT

● assign a /24 subnet to each host
● set up IP routes between the subnets
● make sure the gateway is through the host's eth0

drivers available:
● Calico
● Flannel
● Romana
● ...
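The steps above can be sketched with Python's ipaddress module. This is a toy illustration of the scheme, not any particular driver's logic; the 10.0.0.0/16 pool, the host names and the host IPs are made up for the example.

```python
import ipaddress

def allocate_subnets(pool_cidr, hostnames):
    # carve one /24 per host out of a common container IP space
    subnets = ipaddress.ip_network(pool_cidr).subnets(new_prefix=24)
    return {host: next(subnets) for host in hostnames}

def routes_for(host, allocation, host_ips):
    # the `ip route` rules this host needs so its containers can reach
    # the other hosts' container subnets via those hosts' eth0 address
    return [f"ip route add {subnet} via {host_ips[other]}"
            for other, subnet in allocation.items() if other != host]

host_ips = {"host-A": "192.168.1.1", "host-B": "192.168.1.2"}
allocation = allocate_subnets("10.0.0.0/16", host_ips)
# host-A gets 10.0.0.0/24, host-B gets 10.0.1.0/24
for rule in routes_for("host-A", allocation, host_ips):
    print(rule)  # ip route add 10.0.1.0/24 via 192.168.1.2
```

A real driver does the same bookkeeping, plus distributing the allocation to every host (which is exactly the "share all this information" problem below).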

Page 14: Docker networking

network: multi-host: routing

[diagram: host-A (eth0 192.168.1.1) has docker0 at 10.0.9.1 on subnet 10.0.9.0/24, with container-01 (10.0.9.4) and container-02 (10.0.9.5); host-B (eth0 192.168.1.2) has docker0 at 10.0.10.1 on subnet 10.0.10.0/24, with container-03 (10.0.10.8) and container-04 (10.0.10.9)]

sometimes we need to change the source (SNAT) or destination (DNAT) IP

on host-A, routing rules say that 10.0.10.* goes through eth0

on host-B, routing rules say that 10.0.10.* goes through docker0

use iptables for separating 10.0.10.* from, for example, 10.0.178.*

we need to share all this information between the hosts in our cluster

Page 15: Docker networking

network: multi-host: routing

[diagram: host-A (eth0 192.168.1.1) has docker0 at 10.0.9.1 on subnet 10.0.9.0/24, with container-01 (10.0.9.4) and container-02 (10.0.9.5); host-B (eth0 192.168.1.2) has docker0 at 10.0.10.1 on subnet 10.0.10.0/24, with container-03 (10.0.10.8) and container-04 (10.0.10.9)]

information shared (+/-):
● who-is-where
● who-provides-what

info for container-03:
● at 192.168.1.2
● offers MySQL

(shared e.g. via BGP)

Page 16: Docker networking

network: multi-host: routing

[diagram: host-A (eth0 192.168.1.1) has docker0 at 10.0.9.1 on subnet 10.0.9.0/24, with container-01 (10.0.9.4) and container-02 (10.0.9.5); host-B (eth0 192.168.1.2) has docker0 at 10.0.10.1 on subnet 10.0.10.0/24, with container-03 (10.0.10.8) and container-04 (10.0.10.9)]

(or stored in a shared database)

information shared (+/-):
● who-is-where
● who-provides-what

info for container-03:
● at 192.168.1.2
● offers MySQL

Page 17: Docker networking

network: multi-host: overlay

Solution 2: create an overlay network

● create a parallel network for cross-host communication
● connect hosts with encapsulation tunnels
● connect containers to the virtual networks

something like a VPN

drivers available:
● Docker (natively)
● Flannel
● Weave
● ...

Page 18: Docker networking

network: multi-host: overlay

[diagram: host-A (eth0 192.168.1.1) and host-B (eth0 192.168.1.2), each with docker0 on the same subnet 10.0.9.0/24, running container-01/container-02 and container-03/container-04]

capture traffic leaving the host for some other container in 10.0.9.x

VXLAN encapsulation:

outer Ethernet header (src/dst) | outer IP header (src/dst) | outer UDP header | VXLAN header | inner Ethernet frame

VXLAN traffic
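The VXLAN header itself is only 8 bytes. A rough sketch of the encapsulation, following the RFC 7348 frame layout (the outer Ethernet/IP/UDP headers, which carry the hosts' addresses and UDP port 4789, are omitted here; the inner frame bytes are fake):

```python
import struct

def vxlan_header(vni):
    # 8 bytes: flags (0x08 = "VNI present"), 3 reserved bytes,
    # a 24-bit VXLAN Network Identifier, and 1 reserved byte
    return struct.pack("!B3s3sB", 0x08, b"\x00" * 3, vni.to_bytes(3, "big"), 0)

def encapsulate(inner_ether_frame, vni):
    # UDP payload = VXLAN header + original (inner) Ethernet frame;
    # the receiving host strips the header and delivers the frame to
    # the container attached to the virtual network identified by vni
    return vxlan_header(vni) + inner_ether_frame

packet = encapsulate(b"\xaa" * 14, vni=42)  # fake 14-byte inner frame
```

The VNI is what lets many virtual networks share one tunnel between the same pair of hosts.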

Page 19: Docker networking

network: multi-host: overlay

Docker (natively):
● VXLAN

Flannel:
● Proprietary (UDP)
● VXLAN

Weave:
● Proprietary (UDP)
● VXLAN

VXLAN is faster than proprietary UDP: traffic does not go up to user space

some hardware acceleration support for VXLAN...

but UDP can add encryption easily...

Page 20: Docker networking

network: multi-host: overlay

[diagram: host-A (eth0 192.168.1.1) and host-B (eth0 192.168.1.2), each with docker0 on the same subnet 10.0.9.0/24, running container-01/container-02 and container-03/container-04]

key-value store

information shared (+/-):
● who-is-where
● (who-provides-what)

info for container-03:
● at 192.168.1.2
● (offers MySQL)

Page 21: Docker networking

network: multi-host: overlay

[diagram: host-A (eth0 192.168.1.1) and host-B (eth0 192.168.1.2), each with docker0 on the same subnet 10.0.9.0/24, running container-01/container-02 and container-03/container-04]

key-value store, via libkv:
● consul
● zookeeper
● etcd

Page 22: Docker networking

network: multi-host: overlay

[diagram: host-A (eth0 192.168.1.1) and host-B (eth0 192.168.1.2), each with docker0 on the same subnet 10.0.9.0/24, running container-01/container-02 and container-03/container-04]

(Docker 1.9) point your Docker daemons to the same kv store with:

$ docker daemon --cluster-store=etcd://someIP:2379...

Page 23: Docker networking

network: multi-host: overlay

[diagram: host-A (eth0 192.168.1.1) and host-B (eth0 192.168.1.2), each with docker0 on the same subnet 10.0.9.0/24, running container-01/container-02 and container-03/container-04]

Then you can create a network:

$ docker network create -d overlay --subnet=10.0.9.0/24 backend
5bd26b642...

Page 24: Docker networking

network: multi-host: overlay

Then you can run a container attached to a network:

$ docker run --rm -ti --net=backend opensuse:leap /bin/sh
sh-4.2# ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:0A:00:09:02
          inet addr:10.0.9.2  Bcast:0.0.0.0  Mask:255.255.255.0
          inet6 addr: fe80::42:aff:fe00:902/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1450  Metric:1
          RX packets:15 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1206 (1.1 Kb)  TX bytes:648 (648.0 b)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

Page 25: Docker networking

network: multi-host: overlay

$ docker network inspect backend
[
    {
        "Name": "backend",
        "Id": "5bd26b642...",
        "Scope": "global",
        "Driver": "overlay",
        "IPAM": {
            "Driver": "default",
            "Config": [ { "Subnet": "10.0.9.0/24" } ]
        },
        "Containers": {
            "3c91228d...": {
                "EndpointID": "5de7be39333...",
                "MacAddress": "02:42:0a:00:09:02",
                "IPv4Address": "10.0.9.2/24",
                "IPv6Address": ""
            }
        },
        "Options": {}
    }
]

our container appears here

here is the subnet

Page 26: Docker networking

network: overlay vs routing

Routing, good:
● native performance
● security (*)
● control over the infrastructure

Routing, bad:
● VPN between clouds
● can run out of IP addresses

Overlay, good:
● better inter-cloud
● encrypted traffic (*)

Overlay, bad:
● lower performance
● IP multicast

(*) depends on the driver

Page 27: Docker networking

network: service discovery

[diagram: frontend container on host-A (192.168.1.1) and myapp container on host-B (192.168.1.2), both on the 10.0.9.0/24 overlay; the frontend asks: where is myapp?]

the frontend needs the IP address(es) of the myapp application

Page 28: Docker networking

network: service discovery

[diagram: frontend container on host-A (192.168.1.1) and myapp container on host-B (192.168.1.2), both on the 10.0.9.0/24 overlay; the frontend asks: where is myapp?]

● a more important problem with containers than with bare metal/VMs
● containers/hosts appear and disappear a lot
  ○ health checks
● DNS is not good enough...

Page 29: Docker networking

network: service discovery

[diagram: frontend container on host-A (192.168.1.1) and myapp container on host-B (192.168.1.2), both on the 10.0.9.0/24 overlay]

Docker < 1.11:
● updates /etc/hosts dynamically

Docker >= 1.11:
● internal DNS server

Page 30: Docker networking

network: load balancing

[diagram: frontend and a myapp instance on host-A (192.168.1.1), another myapp instance on host-B (192.168.1.2), both hosts on the 10.0.9.0/24 overlay]

(Docker 1.11) run a replica under the same network alias on each host:

on host-A:
$ docker run -d --net backend --net-alias myapp myapp

on host-B:
$ docker run -d --net backend --net-alias myapp myapp

Page 31: Docker networking

network: load balancing

[diagram: frontend and a myapp instance on host-A (192.168.1.1), another myapp instance on host-B (192.168.1.2), both hosts on the 10.0.9.0/24 overlay]

Docker < 1.11:
● do not even try it (it can corrupt /etc/hosts)

Docker >= 1.11:
● DNS-based load-balancing, with limitations

Page 32: Docker networking

network: load balancing

[diagram: host-B (192.168.1.2) with docker0 on 10.0.9.0/24, running a myapp instance]

Docker >= 1.11:
● DNS-based load-balancing

limitations:
● we have to return DNS responses with a short TTL, which means more load on the DNS server
  ○ and anyway, some clients ignore the TTL
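The TTL trade-off can be seen with a toy round-robin resolver. This is a sketch of the general mechanism, not Docker's embedded DNS; the addresses and the logical clock are invented for the example.

```python
import itertools

class RoundRobinDNS:
    # hands out the A records for an alias in rotation
    def __init__(self, addresses):
        self._next = itertools.cycle(addresses).__next__

    def resolve(self, name):
        return self._next()

class CachingClient:
    # honours the TTL: re-resolves only after the cached answer expires
    def __init__(self, dns, ttl):
        self.dns, self.ttl = dns, ttl
        self._cached = None  # (address, expiry time)

    def lookup(self, name, now):
        if self._cached and now < self._cached[1]:
            return self._cached[0]  # served from cache: no balancing happens
        address = self.dns.resolve(name)
        self._cached = (address, now + self.ttl)
        return address

dns = RoundRobinDNS(["10.0.9.2", "10.0.9.3"])
client = CachingClient(dns, ttl=5)
a = client.lookup("myapp", now=0)  # resolves: 10.0.9.2
b = client.lookup("myapp", now=1)  # still cached: 10.0.9.2
c = client.lookup("myapp", now=6)  # TTL expired, next backend: 10.0.9.3
```

A longer TTL means fewer queries but a more lopsided spread of requests, and a client that never expires its cache (i.e. ignores the TTL) would keep hitting 10.0.9.2 forever.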

Page 33: Docker networking

Questions?

Alvaro Saurin, Senior Software Engineer @ SUSE

