Performance Characteristics of Traditional VMs vs Docker Containers
DockerCon 14, June 9-10, 2014
San Francisco, CA
Boden Russell ([email protected])
Motivations: Computer Scientist
8/2/2014 2
Family / Innovation / Creativity / Revenue
Motivations: Enterprise
Revenue / Revenue / Revenue / Revenue
Increasing Revenue: Do More With Less
Reduce Total Cost of Ownership (TCO) and increase Return On Investment (ROI)
CAPEX
– Hardware costs: VM density (consolidation ratio); soft device integration; broad vendor compatibility
– Software licensing costs: software purchase price; support contracts
OPEX
– Disaster recovery
– Upgrade / maintenance expenses
– Power & cooling costs: reduced HW footprint
– Administration efficiency: automated operations; performance / response time
– Support & training costs
AGILITY
– Application delivery time: workflow complexity; toolset costs; skillset
– Planned / unplanned downtime
Scope for every factor above: hypervisor and cloud manager
*Not a complete or extensive list
About This Benchmark
Use case perspective
– As an OpenStack Cloud user I want an Ubuntu based VM with MySQL… Why would I choose docker LXC vs. a traditional hypervisor?
OpenStack “Cloudy” perspective
– LXC vs. traditional VM from a Cloudy (OpenStack) perspective
– VM operational times (boot, start, stop, snapshot)
– Compute node resource usage (per VM penalty); density factor
Guest runtime perspective
– CPU, memory, file I/O, MySQL OLTP, etc.
Why KVM?
– Exceptional performance
DISCLAIMER: The tests herein are semi-active litmus tests; no in-depth tuning, analysis, etc. was performed. More active testing is warranted. These results do not necessarily reflect your workload or exact performance, nor are they guaranteed to be statistically sound.
Docker in OpenStack
Havana
– Nova virt driver which integrates with docker REST API on backend
– Glance translator to integrate docker images with Glance
Icehouse
– Heat plugin for docker
Both options are still under development
Integration paths: the nova-docker virt driver, and the docker Heat plugin (resource type: DockerInc::Docker::Container)
Benchmark Environment Topology @ SoftLayer
Two identical deployments on SoftLayer bare metal, one per hypervisor (docker LXC and KVM):
– Controller node: glance api / reg; nova api / cond / etc; keystone; cinder api / sch / vol; rally
– Compute node: nova api / cond / etc; docker LXC or KVM; dstat
Benchmark Specs
Spec: Controller Node (4 CPU x 8G RAM) vs. Compute Node (16 CPU x 96G RAM)
– Environment: bare metal @ SoftLayer (both nodes)
– Motherboard: controller SuperMicro X8SIE-F (Intel Xeon QuadCore SingleProc SATA, 1 proc); compute SuperMicro X8DTU-F_R2 (Intel Xeon HexCore DualProc, 2 proc)
– CPU: controller Intel Xeon-Lynnfield 3470 Quadcore [2.93GHz]; compute (Intel Xeon-Westmere 5620 Quadcore [2.4GHz]) x 2
– Memory: controller (Kingston 4GB DDR3 2Rx8) x 2; compute (Kingston 16GB DDR3 2Rx4) x 6
– HDD (local): controller Western Digital WD Caviar RE3 WD5002ABYS [500GB], SATA II; compute Western Digital WD Caviar RE4 WD5003ABYX [500GB], SATA II
– NIC: eth0/eth1 @ 100 Mbps (both nodes)
– Operating system: Ubuntu 12.04 LTS 64-bit (both nodes)
– Kernel: controller 3.5.0-48-generic; compute 3.8.0-38-generic
– IO scheduler: deadline (both nodes)
– Hypervisors tested (compute): KVM 1.0 + virtio + KSM (memory deduplication); docker 0.10.0 + go1.2.1 + commit dc9c28f + AUFS
– OpenStack: trunk master via devstack; libvirt KVM nova driver / nova-docker virt driver
– OpenStack benchmark client: OpenStack project Rally (controller)
– Metrics collection: dstat (compute)
– Guest benchmark drivers: sysbench 0.4.12; mbw 1.1.1-2; iibench (py); netperf 2.5.0-1; blogbench 1.1; cpu_bench.py
– VM image, scenario 1 (KVM): official Ubuntu 12.04 image + MySQL, snapshotted and exported to qcow2 (1080 MB)
– VM image, scenario 2 (docker): guillermo/mysql (381.5 MB)
Hosted @ SoftLayer
STEADY STATE VM PACKING
OpenStack Cloudy Benchmark
Cloudy Performance: Steady State Packing
Benchmark scenario overview
– Pre-cache VM image on compute node prior to test
– Boot 15 VMs asynchronously in succession
– Wait for 5 minutes (to achieve steady-state on the compute node)
– Delete all 15 VMs asynchronously in succession
Benchmark driver
– cpu_bench.py
High level goals
– Understand compute node characteristics under steady-state conditions with 15 packed / active VMs
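The steps above can be sketched as a small driver. This is a minimal sketch, not the actual benchmark code: boot_vm() and delete_vm() are hypothetical stubs standing in for the real nova API calls, and the sleeps compress the 5-minute steady-state wait.

```python
# Sketch of the steady-state packing scenario. boot_vm()/delete_vm() are
# hypothetical stubs standing in for real nova boot/delete calls.
import time
from concurrent.futures import ThreadPoolExecutor

active_vms = []

def boot_vm(i):
    time.sleep(0.01)          # stand-in for: nova boot + wait for ACTIVE
    active_vms.append(i)

def delete_vm(i):
    active_vms.remove(i)      # stand-in for: nova delete

def run_packing_scenario(n=15, steady_state_s=0.1):
    # VM image is assumed pre-cached on the compute node before this runs
    with ThreadPoolExecutor(max_workers=n) as pool:
        list(pool.map(boot_vm, range(n)))    # boot asynchronously in succession
    peak = len(active_vms)                   # steady state: all n VMs active
    time.sleep(steady_state_s)               # 5 minutes in the real test
    with ThreadPoolExecutor(max_workers=n) as pool:
        list(pool.map(delete_vm, range(n)))  # delete asynchronously
    return peak, len(active_vms)

peak, remaining = run_packing_scenario()
```

dstat runs on the compute node for the whole window, so the interesting numbers are the host-side CPU, memory, and load samples between the boot burst and the delete burst.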
[Chart: Benchmark Visualization: active VMs (0-16) vs. time]
Cloudy Performance: Steady State Packing
[Chart: Docker: Compute Node CPU (full test duration). CPU usage in percent vs. time. Averages: usr 0.54, sys 0.17]
[Chart: KVM: Compute Node CPU (full test duration). CPU usage in percent vs. time. Averages: usr 7.64, sys 1.4]
Cloudy Performance: Steady State Packing
[Chart: Docker: Compute Node Steady-State CPU (segment: 31s-243s). Averages: usr 0.2, sys 0.03]
[Chart: KVM: Compute Node Steady-State CPU (segment: 95s-307s). Averages: usr 1.91, sys 0.36]
Cloudy Performance: Steady State Packing
[Chart: Docker / KVM: Compute Node Steady-State CPU (segment overlay; Docker 31s-243s, KVM 95s-307s). Docker averages: usr 0.2, sys 0.03. KVM averages: usr 1.91, sys 0.36]
Cloudy Performance: Steady State Packing
[Chart: Docker / KVM: Compute Node Used Memory (overlay). Docker: delta 734 MB, 49 MB per VM. KVM: delta 4387 MB, 292 MB per VM]
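The per-VM figures are just the used-memory delta divided by the 15 packed VMs; a quick worked check of the numbers above:

```python
# Per-VM memory penalty = used-memory delta over the run / number of packed VMs.
VMS = 15
docker_delta_mb = 734
kvm_delta_mb = 4387

docker_per_vm_mb = docker_delta_mb / VMS           # ~49 MB per VM
kvm_per_vm_mb = kvm_delta_mb / VMS                 # ~292 MB per VM
density_factor = kvm_per_vm_mb / docker_per_vm_mb  # ~6x memory headroom favoring docker
```

The density factor is the basis for the "do more with less" argument: for the same compute-node RAM, roughly six times as many of these idle guests fit under docker LXC as under KVM.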
Cloudy Performance: Steady State Packing
[Chart: Docker: Compute Node 1m Load Average (full test duration). Average: 0.15]
[Chart: KVM: Compute Node 1m Load Average (full test duration). Average: 35.9]
SERIALLY BOOT 15 VMS
OpenStack Cloudy Benchmark
Cloudy Performance: Serial VM Boot
Benchmark scenario overview
– Pre-cache VM image on compute node prior to test
– Boot VM
– Wait for VM to become ACTIVE
– Repeat the above steps for a total of 15 VMs
– Delete all VMs
Benchmark driver
– OpenStack Rally
High level goals
– Understand compute node characteristics under sustained VM boots
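A sketch of the serial-boot measurement loop follows; boot_and_wait_active() is a hypothetical stand-in for what Rally drives through the nova API, so only the timing structure is real.

```python
# Serial VM boot: boot one VM at a time, record each boot-to-ACTIVE latency,
# and report the average (the figure Rally summarizes per scenario).
import time

def boot_and_wait_active(i):
    time.sleep(0.01)   # hypothetical stub: nova boot + poll until ACTIVE

def serial_boot_benchmark(n=15):
    latencies = []
    for i in range(n):
        start = time.monotonic()
        boot_and_wait_active(i)
        latencies.append(time.monotonic() - start)
    return sum(latencies) / len(latencies)

avg_boot_s = serial_boot_benchmark()
```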
[Chart: Benchmark Visualization: active VMs vs. time]
Cloudy Performance: Serial VM Boot
[Chart: Average Server Boot Time: docker 3.53 s, KVM 5.78 s]
Cloudy Performance: Serial VM Boot
[Chart: Docker: Compute Node CPU. Averages: usr 1.39, sys 0.57]
[Chart: KVM: Compute Node CPU Usage. Averages: usr 13.45, sys 2.23]
Cloudy Performance: Serial VM Boot
[Chart: Docker / KVM: Compute Node CPU (unnormalized overlay)]
Cloudy Performance: Serial VM Boot
[Chart: Docker / KVM: Serial VM Boot Usr CPU (segment: 8s-58s), with linear fits. docker: y = 0.0095x + 1.008; KVM: y = 0.3582x + 1.0633]
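The trend lines are ordinary least-squares fits of usr CPU over the boot window. A minimal sketch of that fit is below; the data here is synthetic and merely shaped like the KVM fit, since the raw dstat samples are not reproduced in this deck.

```python
# Ordinary least-squares fit y = m*x + b, as used for the usr-CPU growth
# trend lines. xs/ys below are synthetic, not the actual dstat samples.
def least_squares(xs, ys):
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

xs = list(range(1, 51))                # one sample per second over the window
ys = [0.36 * x + 1.06 for x in xs]     # synthetic, shaped like the KVM fit
slope, intercept = least_squares(xs, ys)
```

The slopes are the useful comparison: per second of sustained booting, KVM's usr CPU grows roughly 38x faster than docker's (0.3582 vs. 0.0095).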
Cloudy Performance: Serial VM Boot
[Chart: Docker / KVM: Compute Node Memory Used (unnormalized overlay). Docker: delta 677 MB, 45 MB per VM. KVM: delta 2737 MB, 182 MB per VM]
Cloudy Performance: Serial VM Boot
[Chart: Docker / KVM: Serial VM Boot Memory Usage (segment: 1s-67s), with linear fits. docker: y = 1E+07x + 1E+09; KVM: y = 3E+07x + 1E+09]
Cloudy Performance: Serial VM Boot
[Chart: Docker: Compute Node 1m Load Average. Average: 0.25]
[Chart: KVM: Compute Node 1m Load Average. Average: 11.18]
SERIAL VM SOFT REBOOT
OpenStack Cloudy Benchmark
Cloudy Performance: Serial VM Reboot
Benchmark scenario overview
– Pre-cache VM image on compute node prior to test
– Boot a VM & wait for it to become ACTIVE
– Soft reboot the VM and wait for it to become ACTIVE; repeat the reboot a total of 5 times
– Delete VM
– Repeat the above for a total of 5 VMs
Benchmark driver
– OpenStack Rally
High level goals
– Understand compute node characteristics under sustained VM reboots
[Chart: Benchmark Visualization: active VMs vs. time]
Cloudy Performance: Serial VM Reboot
[Chart: Average Server Reboot Time: docker 2.58 s, KVM 124.43 s]
Cloudy Performance: Serial VM Reboot
[Chart: Average Server Delete Time: docker 3.57 s, KVM 3.48 s]
Cloudy Performance: Serial VM Reboot
[Chart: Docker: Compute Node CPU. Averages: usr 0.69, sys 0.26]
[Chart: KVM: Compute Node CPU. Averages: usr 0.84, sys 0.18]
Cloudy Performance: Serial VM Reboot
[Chart: Docker: Compute Node Used Memory. Delta: 48 MB]
[Chart: KVM: Compute Node Used Memory. Delta: 486 MB]
Cloudy Performance: Serial VM Reboot
[Chart: Docker: Compute Node 1m Load Average. Average: 0.4]
[Chart: KVM: Compute Node 1m Load Average. Average: 0.33]
SNAPSHOT VM TO IMAGE
OpenStack Cloudy Benchmark
Cloudy Performance: Snapshot VM To Image
Benchmark scenario overview
– Pre-cache VM image on compute node prior to test
– Boot a VM
– Wait for it to become ACTIVE
– Snapshot the VM
– Wait for image to become ACTIVE
– Delete VM
Benchmark driver
– OpenStack Rally
High level goals
– Understand cloudy ops times from a user perspective
Cloudy Performance: Snapshot VM To Image
[Chart: Average Snapshot Server Time: docker 36.89 s, KVM 48.02 s]
Cloudy Performance: Snapshot VM To Image
[Chart: Docker: Compute Node CPU. Averages: usr 0.42, sys 0.15]
[Chart: KVM: Compute Node CPU. Averages: usr 1.46, sys 1.0]
Cloudy Performance: Snapshot VM To Image
[Chart: KVM: Compute Node Used Memory. Delta: 114 MB]
[Chart: Docker: Compute Node Memory Used. Delta: 57 MB]
Cloudy Performance: Snapshot VM To Image
[Chart: Docker: Compute Node 1m Load Average. Average: 0.06]
[Chart: KVM: Compute Node 1m Load Average. Average: 0.47]
GUEST PERFORMANCE BENCHMARKS
Guest VM Benchmark
Guest Ops: Network
[Chart: Network Throughput in 10^6 bits/second: docker 940.26, KVM 940.56]
Guest Ops: Near Bare Metal Performance
Typical docker LXC performance is near par with bare metal
[Chart: linpack performance @ 45000: GFlops vs. vcpus. Bare metal: 220.77 GFlops; 220.5 @ 32 vcpu; 220.9 @ 31 vcpu]
[Chart: Memory Benchmark Performance (MiB/s): MEMCPY, DUMB, MCBLOCK for bare metal, docker, and KVM]
Guest Ops: Block I/O
Tested with [standard] AUFS
[Chart: Async I/O (dd if=/dev/zero of=/tmp/d4g bs=4G count=1): bare metal 845 MB/s, docker 822 MB/s]
[Chart: Sync Data Write (dd if=/dev/zero of=/tmp/d4g bs=4G count=1 oflag=dsync): bare metal 90.1 MB/s, docker 87.2 MB/s]
[Chart: Sync Data / Metadata Write (dd if=/dev/zero of=/tmp/d4g bs=4G count=1 oflag=sync): bare metal 89.2 MB/s, docker 89 MB/s]
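The three dd variants differ only in their write flags. A scaled-down Python equivalent is sketched below; the path and sizes are illustrative, and O_DSYNC falls back to O_SYNC where the platform does not expose it.

```python
# Async vs. dsync vs. sync writes, mirroring the dd oflag variants above.
import os
import time

def timed_write(path, extra_flags, total=1 << 20, bs=1 << 16):
    # returns throughput in bytes/second for `total` bytes of zeroes
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC | extra_flags)
    buf = b"\0" * bs
    start = time.monotonic()
    for _ in range(total // bs):
        os.write(fd, buf)
    os.close(fd)
    return total / max(time.monotonic() - start, 1e-9)

path = "/tmp/dd_sketch"
async_bps = timed_write(path, 0)                                  # page cache absorbs writes
dsync_bps = timed_write(path, getattr(os, "O_DSYNC", os.O_SYNC))  # flush data every write
sync_bps = timed_write(path, os.O_SYNC)                           # flush data + metadata
os.unlink(path)
```

On a spinning disk the synchronous variants collapse toward device latency, which is why the dd numbers above drop from ~845 MB/s to ~90 MB/s, and why AUFS overhead barely registers in the sync cases.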
Guest Ops: File I/O Random Read / Write
[Chart: Sysbench Synchronous File I/O Random Read/Write @ R/W Ratio of 1.50: total transferred in Kb/sec vs. threads (1-64), docker vs. KVM]
Guest Ops: MySQL OLTP
[Chart: MySQL OLTP Random Transactional R/W (60s): total transactions vs. threads (1-64), docker vs. KVM]
Guest Ops: MySQL Indexed Insertion
[Chart: MySQL Indexed Insertion @ 100K Intervals: seconds per 100K insertion batch vs. table size in rows (100K-1M), docker vs. KVM]
Cloud Management Impacts on docker LXC
[Chart: Docker: Boot Container, CLI vs. Nova Virt: docker cli 0.17 s, nova-docker 3.53 s]
Cloud management often caps true ops performance of LXC
Ubuntu MySQL Image Size
[Chart: Docker / KVM: Ubuntu MySQL image size in MB: docker 381.5, KVM 1080]
Out of the box JeOS images for docker are lightweight
In Summary
Near bare metal performance in the guest
Fast operations in the Cloud
– Often capped by Cloud management framework
Reduced resource consumption (CPU, MEM) on the compute node – greater density
Out of the box smaller image footprint
Parting Thoughts: Ecosystem Synergy
(TCO / agility factor table repeated from the earlier "Do More With Less" slide)
Displacement of enterprise players requires full stack solutions
References & Related Links
– http://www.slideshare.net/BodenRussell/realizing-linux-containerslxc
– http://bodenr.blogspot.com/2014/05/kvm-and-docker-lxc-benchmarking-with.html
– https://www.docker.io/
– http://sysbench.sourceforge.net/
– http://dag.wiee.rs/home-made/dstat/
– http://www.openstack.org/
– https://wiki.openstack.org/wiki/Rally
– https://wiki.openstack.org/wiki/Docker
– http://devstack.org/
– http://www.linux-kvm.org/page/Main_Page
– https://github.com/stackforge/nova-docker
– https://github.com/dotcloud/docker-registry
– http://www.netperf.org/netperf/
– http://www.tokutek.com/products/iibench/
– http://www.brendangregg.com/activebenchmarking.html
– http://wiki.openvz.org/Performance
– http://www.slideshare.net/jpetazzo/linux-containers-lxc-docker-and-security
Images:
– http://www.publicdomainpictures.net/view-image.php?image=11972&picture=dollars
– http://www.publicdomainpictures.net/view-image.php?image=1888&picture=zoom
– http://www.publicdomainpictures.net/view-image.php?image=6059&picture=ge-building
Thank You… Questions?