
inovex GmbH | Ludwig-Erhard-Allee 6 | 76131 Karlsruhe | Tel. +49 721 619021-0 | info@inovex.de | www.inovex.de

CoreOS integration for Foreman
Johannes Maximilian Scheuermann

1. A brief introduction to Docker

1.1 What is Docker?

Docker is a lightweight container virtualization platform. Docker helps you to develop, ship and run your applications anywhere. Because Docker is based on container virtualization and does not rely on a full-blown hypervisor with its own kernel and operating system, Docker containers run with less overhead than virtual machines and utilize your hardware better. Docker provides a flexible way to ship your containers, regardless of whether you run them locally on your workstation, on a server or in the cloud. Docker was initially released on 13 March 2013.

1.2 Docker components

The two major components of Docker are Docker itself and the Docker Hub, which provides a service for sharing and managing Docker containers. Docker uses a client-server architecture. The Docker daemon acts as a server and supports the creation, lifecycle management and distribution of Docker containers. The Docker client talks to the daemon through sockets or a RESTful API. You can run both on your local computer, or you can connect to a remote Docker daemon.
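
As a quick illustration of this client-server split, you can query the daemon with the stock Docker CLI; the remote address below is only a placeholder, not a value from this paper:

# Talk to the local Docker daemon over its Unix socket.
docker version

# Talk to a remote Docker daemon instead (example address; the daemon must
# be listening on a TCP socket for this to work).
export DOCKER_HOST=tcp://192.168.0.10:2375
docker info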

Docker images

Docker images are read-only templates which contain information about the container, e.g. which OS it is based on, which applications should be installed, etc. You can download existing images like Ubuntu or create your own.
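
For example (the tag is illustrative, assuming the public Docker Hub is reachable):

# Download an existing Ubuntu image from the public registry.
docker pull ubuntu:14.04

# List the images stored locally.
docker images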

Docker registries

A Docker registry manages your Docker images. You can use the public Docker registry, Docker Hub, or run your own private registry. If you use the public registry you have access to a huge collection of existing images.
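
As a rough sketch of the private-registry case, you can run the registry itself as a container and push images to it; the host name below is a placeholder:

# Start a private registry as a container, listening on port 5000.
docker run -d -p 5000:5000 --name registry registry

# Re-tag a local image for the private registry and push it there.
docker tag ubuntu:14.04 registry.example.com:5000/ubuntu:14.04
docker push registry.example.com:5000/ubuntu:14.04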

Docker containers

A Docker container provides the complete application runtime environment. Every container is created from a Docker image. A container can have different states: it can be run, started, stopped, moved or deleted. Each container is an isolated application instance. The level of isolation (and security) highly depends on the underlying technology (i.e. libcontainer).
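
This lifecycle maps directly onto CLI commands; a minimal example (image and container name are illustrative):

# Create and start a container from an image.
docker run -d --name web nginx

# List running containers, then stop, restart and finally delete the container.
docker ps
docker stop web
docker start web
docker rm -f web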

1.3 Benefits

Rapid application development: Containers minimize the overhead of deploying an application by providing only the minimal runtime requirements.

Portability: An application and all its dependencies are bundled into a single container which can be shipped around. The container is independent of the host's Linux kernel version, Linux distribution and deployment model. A container can be shipped to any machine that runs Docker and be executed without compatibility issues.

Version control and component reuse: Docker provides a version control system which allows you to inspect differences between versions and roll back to a previous version (see the example after Figure 1). Containers can reuse components, which makes them more lightweight.

Sharing: You can share your Docker images via a public repository or your own.

Lightweight: Docker images are normally quite small, which allows rapid delivery and reduces the time needed to deploy new application containers. A Docker container spins up in a few seconds, only a fraction of the time required to boot a VM.

And many more.

Figure 1: Containers vs VMs
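
The version-control aspect from the benefits list can be inspected with the history and diff subcommands; a small illustration, assuming the image and container from the earlier examples exist:

# Show the layers (intermediate versions) an image is built from.
docker history ubuntu:14.04

# Show which files changed in a running container compared to its image.
docker diff web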


2. CoreOS

2.1 What is CoreOS?

CoreOS is a fork of Chrome OS, and many new features have been added since. The initial release date of CoreOS was October 3, 2013. CoreOS is a Linux distribution for massive server deployments. It was re-architected to provide features for building modern infrastructure stacks. The new architecture allows you to run your services at scale and with high resilience, but with less overhead and a significantly smaller footprint compared to other Linux distributions.

2.2 Overview

"CoreOS is designed for security, consistency, and reliability. Instead of installing packages via yum or apt, CoreOS uses Linux containers to manage your services at a higher level of abstraction." (from the CoreOS documentation)

CoreOS can be run on nearly every x86 platform. You can run a CoreOS cluster across multiple cloud providers and your own machines. The three main building blocks of CoreOS are etcd, Docker and systemd.

Docker

The main building block of CoreOS is Docker as the container engine. To manage your Docker containers you can use fleet, which is shipped with CoreOS by default. The rationale behind moving functionality from the OS into containers is to keep the footprint of CoreOS small and to avoid relying on a complex package manager. This means that any software not included in CoreOS has to be deployed in its own container.


systemd and fleet

When you run Docker containers on CoreOS you should use fleet to schedule them. Fleet is a tool that presents the entire cluster as a single (systemd) init system. To interact with fleet you create systemd unit files. Fleet then schedules them onto the machines in the cluster based on rules, like declared conflicts and other encoded preferences. You can build quite complex architectures by combining such properties, for example by co-locating services that belong together. You can also define which service containers should not run on the same machine, based on properties like availability zone or region.
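
The paper does not show a fleet unit at this point, so the following is only a sketch of what such a unit file could look like; the unit name, image and Conflicts pattern are illustrative:

# hello.service - a fleet unit that wraps a Docker container.
[Unit]
Description=Hello World container
After=docker.service
Requires=docker.service

[Service]
ExecStartPre=-/usr/bin/docker rm -f hello
ExecStart=/usr/bin/docker run --name hello busybox /bin/sh -c "while true; do echo Hello World; sleep 1; done"
ExecStop=/usr/bin/docker stop hello

[X-Fleet]
# Never place two units matching this pattern on the same machine.
Conflicts=hello*.service

You would then hand the unit to the cluster with fleetctl submit hello.service and start it with fleetctl start hello.service.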

etcd

etcd is a distributed, highly available key-value store for shared configuration and service discovery. Every CoreOS host has a local endpoint for etcd. The benefit of etcd is its replicated state, which means every change is available on every node in the cluster. With etcd you don't need to hardcode links to other layers such as a database; you can simply fetch the information from etcd. Service discovery allows you to distribute your applications and scale them.

Figure 2: CoreOS - Three Tier App
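
A minimal illustration of that idea (the key name and the database URL are made up for the example):

# Store the database connection string under a well-known key ...
etcdctl set /services/database/url mysql://db.example.com:3306

# ... and read it back from any node in the cluster.
etcdctl get /services/database/url

# The same data is also reachable over etcd's HTTP API on the local endpoint.
curl http://127.0.0.1:4001/v2/keys/services/database/url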


2.3 Cloud config

CoreOS can be configured using a YAML-based configuration file. The configuration file is applied by the coreos-cloudinit process during setup. A cloud-config file must start with #cloud-config or #!. You can define zero or more of the following keys:

coreos
◦ etcd, configuration for etcd like addr, peer-addr, name etc.
◦ fleet, allows configuration of fleet; you can pass different metadata like role etc.
◦ flannel, with flannel you can set up an overlay network for your cluster.
◦ update, you can define different update strategies.
◦ units, defines a list of systemd units which are started after booting. This is useful for a simple network configuration or to mount storage.

ssh_authorized_keys, defines a list of SSH keys which are authorized for the core user.

hostname, defines the hostname.

users, you can add or modify the specified list of users. Each user object has a subset of fields.

write_files, you can define a set of files which will be created on the local filesystem.

manage_etc_hosts, manages the content of /etc/hosts.

You can provide a cloud-config file via a config-drive or via a URL passed to the kernel as an argument during boot, as we will do with Foreman. A very useful tool to validate your cloud-config is https://coreos.com/validate.
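
To tie the keys above together, here is a minimal, illustrative cloud-config; all values are placeholders:

#cloud-config
hostname: core-01
ssh_authorized_keys:
  - "ssh-rsa AAAA... user@example"
coreos:
  etcd:
    addr: 192.168.0.11:4001
    peer-addr: 192.168.0.11:7001
  fleet:
    metadata: "role=example"
  units:
    - name: etcd.service
      command: start
    - name: fleet.service
      command: start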

3. Foreman integration

We use Foreman in many other projects as a bare-metal or VM provisioner. Therefore it was natural to integrate CoreOS into Foreman in order to deploy a single CoreOS machine or a complete CoreOS cluster from scratch. As a proof of concept we used Foreman 1.6.1 on CentOS 6 to integrate CoreOS.

3.1 Code changes

If you want to integrate CoreOS support into your Foreman you have to amend and change some Ruby files of Foreman. Alternatively you can pull the current develop branch of Foreman from GitHub, as the changes have already been merged into the project and will be available in Foreman 1.8. A full list of all changes is available on GitHub in our pull request. If you installed Foreman using packages, the code resides in /usr/share/foreman. After you have applied the code changes you have to run these two commands:

foreman-rake db:migrate
foreman-rake db:seed

These commands will load the new settings into Foreman.

3.2 Template files

You will need a PXELinux template to boot and install CoreOS over PXE. A provision template is used to provide a valid cloud-config for the installation. You can find all files in our community-templates pull request. This second pull request has been merged, too.

PXELinux Template

A simple PXELinux template for foreman. All needed variables will be replaced and filled in

by foreman. Foreman will provide these settings to the new machine, which will boot over

PXE. If Foreman notices that the needed boot up files are missing it will download these

from http://release.release.core-os.net where $release will be replaced by the defined

release e.g. stable.

default coreos
label coreos
  kernel <%= @kernel %>
  append initrd=<%= @initrd %> cloud-config-url=<%= foreman_url('provision') %>

Provision Template

This is the main provision template for our unattended installation. It will start a systemd unit named coreos-bootstrap.service which installs CoreOS to your disk. We used the coreos-install script from CoreOS, which is available by default in every CoreOS image. With the X-Conflicts attribute we ensure that only one instance of this service runs on the machine.
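
The actual unit is defined in the pull request linked above; roughly, and only as a sketch (unit options, paths and the reboot step are simplified here, and the X-Conflicts handling mentioned above is omitted), it looks like this:

#cloud-config
coreos:
  units:
    - name: coreos-bootstrap.service
      command: start
      content: |
        [Unit]
        Description=Install CoreOS to disk
        Requires=network-online.target
        After=network-online.target

        [Service]
        Type=oneshot
        # coreos-install ships with every CoreOS image; -d selects the target
        # disk (the install-disk host parameter), -c the cloud-config that the
        # rest of this template writes to disk.
        ExecStart=/usr/bin/coreos-install -d <%= @host.params['install-disk'] %> -c /home/core/cloud-config.yml
        ExecStartPost=/usr/bin/systemctl reboot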


In the next section we define our SSH keys for remote access. You don't need this section for an unattended installation, but it is nice for debugging.

In the last section we generate the actual cloud-config which will be used after the installation is completed. In order to generate the content, the provision template utilises the coreos_cloudconfig snippet which we will detail below.

Cloud Config snippet

This cloud-config will be used on the next boot after the installation to configure our new CoreOS node. First we define our discovery token. CoreOS uses this token to find the other nodes in the cluster, which makes it easy to join an existing cluster. You can simply get a new discovery token with curl or by typing the URL into your browser. We will need this token again in a later section.

curl https://discovery.etcd.io/new

Next we define that etcd and fleet should be started. We also start the Docker TCP socket, which allows us to interact with the Docker daemon remotely. You can interact with the Docker daemon by specifying the Docker host:

sudo docker -H tcp://127.0.0.1:2375

In the last section we define our SSH keys to log in to our CoreOS nodes.

Host Group parameters

In the last step we need to define a host group for our CoreOS cluster and create some parameters. We called our host group "coreos cluster". Next we added the following attributes:

etcd_discovery_url: the discovery token that you get from the discovery service https://discovery.etcd.io/new

install-disk: /dev/vda, or /dev/sda if you want to install to bare metal

ssh_authorized_keys: your SSH key to log in to the CoreOS nodes


4. Small Cluster deployment

When you have finished all the steps successfully, it is pretty easy to create your first small CoreOS cluster. Just add some new hosts to Foreman and start your machines. If you use the alpha release you might sometimes hit a bug where CoreOS doesn't boot correctly. Just try to log in via SSH; if you are able to do this, everything is fine. When you are logged in on your CoreOS node you can check with top that CoreOS is installing itself on the disk.

After a successful installation you can log in to the CoreOS node again. Now you can verify that your CoreOS cluster is running correctly by listing all nodes in your cluster:

fleetctl list-machines

or you can take your discovery token and paste it into your browser. You should see all active nodes in your cluster. That's it, you are running a small cluster. Now you are able to run your fleet units.

Figure 3: Small Cluster Setup
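
Equivalently from the shell (replace <token> with the token from your etcd_discovery_url parameter; hello.service refers to the sketch from section 2.2):

# The discovery service lists the nodes that registered with your token.
curl https://discovery.etcd.io/<token>

# Schedule a first unit on the new cluster and check where it ended up.
fleetctl start hello.service
fleetctl list-units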


5. Easy Development / Testing Cluster

Next we want to create a dedicated etcd cluster and a bunch of worker nodes. A benefit of this setup is that your etcd cluster - in our case it is only a single node - can hand the workload over to the worker nodes, and you gain some CPU and RAM to play with.

First we modify the provision template like this:

- content: |
    <%= snippet @host.params['cloudconfig'] %>
  path: /home/core/cloud-config.yml

Snippets

This allows us to dynamically load different cloud-configs for different types of nodes. Now we create a snippet for our etcd cluster. Note that we need a static entry point for the etcd cluster; this will be the first etcd node we launch. We can also define the active cluster size, i.e. how many nodes should be actively involved in the etcd cluster. The preferred size of an etcd cluster is between 3 and 9 nodes, and you should always choose an odd number so the cluster can still reach a majority quorum. Every CoreOS node which is started with this cloud-config gets the role "etcd".

Figure 4: Easy Development/Testing Cluster


#cloud-config
<%#
kind: snippet
name: etcd_cloudconfig
%>
coreos:
  fleet:
    metadata: "role=etcd"
  etcd:
<% if @host.params['peer_address'] != @host.ip -%>
    peers: <%= @host.params['peer_address'] %>:7001
<% end -%>
    addr: <%= @host.ip %>:4001
    peer-addr: <%= @host.ip %>:7001
<% if @host.params['cluster_active_size'] -%>
    cluster-active-size: <%= @host.params['cluster_active_size'] %>
<% end -%>
  units:
    - name: etcd.service
      command: start
    - name: fleet.service
      command: start
<% if @host.params['ssh_authorized_keys'] -%>
ssh_authorized_keys:
<% @host.params['ssh_authorized_keys'].split(',').map(&:strip).each do |ssh_key| -%>
  - "<%= ssh_key %>"
<% end -%>
<% end -%>


In the next step we create a snippet for all worker nodes. We define that all new nodes get the role "worker", which can be used in fleet unit files. We also define a single etcd server, or a list of etcd servers, that the node will connect to.

#cloud-config
<%#
kind: snippet
name: worker_cloudconfig
%>
coreos:
  fleet:
    metadata: "role=worker"
<% if @host.params['etcd_servers'] -%>
    etcd_servers: <%= @host.params['etcd_servers'] %>
<% end -%>
  units:
    - name: etcd.service
      mask: true
    - name: fleet.service
      command: start
<% if @host.params['ssh_authorized_keys'] -%>
ssh_authorized_keys:
<% @host.params['ssh_authorized_keys'].split(',').map(&:strip).each do |ssh_key| -%>
  - "<%= ssh_key %>"
<% end -%>
<% end -%>
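
The role metadata set in these snippets can then be used to pin services to a class of nodes. A rough sketch of a fleet unit that should only run on worker nodes (the unit name and command are illustrative):

# worker-task.service
[Unit]
Description=Example service restricted to worker nodes

[Service]
ExecStart=/usr/bin/docker run --rm busybox sleep 86400

[X-Fleet]
# Only schedule onto machines whose fleet metadata contains role=worker.
MachineMetadata=role=worker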


Parameters

We define two new host groups which inherit from the coreos cluster host group we defined above. The first group is the etcd cluster group with the following parameters:

cloudconfig: etcd_cloudconfig

cluster_active_size: in my example I chose 3, but you can choose any number between 3 and 9.

peer_address: static IP address of the first etcd node you start.

In the second step we define the worker cluster host group, which also inherits from the coreos cluster host group:

cloudconfig: worker_cloudconfig

etcd_servers: list of all etcd servers; if you insert more than one, use a comma as separator, e.g. http://172.24.1.126:4001

Now we first have to start our etcd cluster and afterwards the worker nodes.

Production Cluster

It is the same setup as described above, except that we start at least three etcd nodes.

