Docker slides

Date posted: 19-Mar-2017
Uploaded by: jyotsna-raghuraman
Transcript

What is Docker?

• Application Files

• Dependencies

• Runtime

• OS

• System Libraries

• It is a system for packaging software together with everything it needs to run: the application files, dependencies, runtime, operating system, and system libraries; in other words, anything that can be installed on a server. This guarantees that the software always runs the same, regardless of its environment.

• It is lightweight, since all instances share the host kernel.

History

• 2008 – Introduction of Linux containers (LXC)

• 2013 – Released as an open source project by dotCloud based on Linux containers

• 2014 – Rocket (rkt) launched by CoreOS to challenge Docker

• 2015 – runC released by Docker (its universal container runtime, a collection of its infrastructure plumbing code)

Docker was released as an open source project by dotCloud, a platform as a service company, in 2013. Docker relies on Linux kernel features, such as namespaces and cgroups, to ensure resource isolation and to package an application along with its dependencies. This packaging of the dependencies enables an application to run as expected across different Linux operating systems—supporting a level of portability that allows a developer to write an application in any language and then easily move it from a laptop to a test or production server—regardless of the underlying Linux distribution. It’s this portability that’s piqued the interest of developers and systems administrators alike.

Terminology/Components

• Image - a file system and parameters to use at runtime. It doesn’t have state and never changes

• Container - a running instance of an image

• Docker Engine – a lightweight container runtime and tooling to build and run containers

Docker Engine leverages features of the Linux kernel (namespaces and control groups, originally via LXC) to sandbox processes from each other. It is a lightweight container runtime with robust tooling for building and running containers. Docker lets you package application code and dependencies together in an isolated container that shares the OS kernel of the host system. An in-host daemon communicates with the Docker client to execute commands to build, ship, and run containers.

Docker versus VM

How to set up Docker

• Install Docker Engine

• Create a Dockerfile - a recipe which describes the files, environment, and commands that make up an image.

• Build the image from the Dockerfile, and run it
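The three steps above can be sketched in a few commands. This is a minimal sketch: the Alpine base image, the hello.sh script, and the hello-world tag are illustrative assumptions, and the build/run commands need a running Docker daemon, so they are shown commented out.

```shell
# Step 2: create a minimal Dockerfile (base image and file names are illustrative)
cat > Dockerfile <<'EOF'
FROM alpine:3.18
COPY hello.sh /hello.sh
CMD ["/bin/sh", "/hello.sh"]
EOF

# A trivial script for the image to run
printf 'echo "Hello from Docker"\n' > hello.sh

# Step 3: build the image and run it (requires a running Docker daemon):
# docker build -t hello-world .
# docker run hello-world
```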

Environments: what platforms does Docker run on

• Linux:

• Any distribution running version 3.10+ of the Linux kernel

• Microsoft Windows:

• Windows Server 2016

• Windows 10

• Cloud:

• Amazon EC2

• Google Compute Engine

• Microsoft Azure

• Rackspace

Docker started with Linux. It now supports Windows Server 2016 and Windows 10, including Nano Server, a new headless deployment option for Windows Server 2016 (https://docs.microsoft.com/en-us/virtualization/windowscontainers/about/). On Windows it requires Hyper-V. Specific installation instructions are available for most Linux distributions, including RHEL, Ubuntu, SuSE, and many others.

Docker commands

• Build: docker build -t hello-world .

• Reads the Dockerfile in the current directory (the trailing dot names the build context) and processes its instructions one by one to build an image tagged hello-world

• Run: docker run hello-world

• checks to see if the hello-world software image exists

• downloads the image from the Docker Hub if required (more about the hub later)

• loads the image into the container and “runs” it

• Login to the cloud: docker login

• displays login prompt to Docker Hub (or AWS/Azure, etc.)

• Publish to cloud: docker push hello-world

• Pushes image to the cloud

Builds the image in layers
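As a reference, the commands above can be collected into one script. The image name hello-world and the Docker Hub username myuser are hypothetical, and the commands need a running Docker daemon, so the script is only written out here, not executed.

```shell
# Reference script for the build/run/login/push cycle described above.
# "hello-world" and the Docker Hub user "myuser" are illustrative.
cat > docker-lifecycle.sh <<'EOF'
#!/bin/sh
docker build -t hello-world .              # build the image, layer by layer
docker run hello-world                     # run a container from the image
docker login                               # authenticate to Docker Hub
docker tag hello-world myuser/hello-world  # namespace the image for pushing
docker push myuser/hello-world             # publish the image to the registry
EOF
chmod +x docker-lifecycle.sh
```

Note that images are pushed under a registry namespace, hence the docker tag step between login and push.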

Example Dockerfile

• FROM … (specifies base image name such as centos:centos7)

• RUN … (yum install, etc.)

• COPY … (files into image)

• ADD … (files into image – COPY is preferred because it is more transparent)

• CMD … (only one of these per Dockerfile)

A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image. Using docker build, users can create an automated build that executes several command-line instructions in succession. The main purpose of a CMD is to provide defaults for an executing container. Note: don't confuse RUN with CMD. RUN actually runs a command and commits the result; CMD does not execute anything at build time, but specifies the intended command for the image.
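Putting the instructions above together, a sketch of a complete Dockerfile; the httpd package and index.html file are illustrative assumptions.

```dockerfile
# Illustrative Dockerfile (package and file names are assumptions)
FROM centos:centos7
RUN yum install -y httpd        # runs at build time; the result is committed as a layer
COPY index.html /var/www/html/  # copy a file from the build context into the image
CMD ["httpd", "-DFOREGROUND"]   # not run at build time; default command for containers
```

RUN executes during docker build and its result becomes a layer of the image; CMD is merely recorded as the default command for containers started from the image.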

Docker projects

• Docker Store – purchase commercial software as docker images

• Docker Hub - a cloud-based registry service which allows you to link to code repositories, build your images and test them, store manually pushed images, and link to Docker Cloud so you can deploy images to your hosts.

• Docker Registry – server side scalable application to store and distribute docker images, allowing full ownership of the image distribution pipeline (can deploy to on-prem servers)

Docker Hub provides a centralized resource for container image discovery, distribution and change management, user and team collaboration, and workflow automation throughout the development pipeline. Its major features:

• Image Repositories: find and pull images from community and official libraries, and manage, push to, and pull from private image libraries to which you have access.

• Automated Builds: automatically create new images when you make changes to a source code repository.

• Webhooks: a feature of Automated Builds, Webhooks let you trigger actions after a successful push to a repository.

• Organizations: create work groups to manage access to image repositories.

• GitHub and Bitbucket Integration: add the Hub and your Docker images to your current workflows.

Multi-container applications

• Docker Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a Compose file to configure your application's services. Then, using a single command, you create and start all the services from your configuration.

• Compose works well for development, testing, and staging environments, as well as CI workflows.

Using Compose is basically a three-step process. First, define your app's environment with a Dockerfile so it can be reproduced anywhere. Second, define the services that make up your app in docker-compose.yml so they can be run together in an isolated environment. Lastly, run docker-compose up and Compose will start and run your entire app.
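The three-step process above might produce a Compose file like this hypothetical two-service app (a web service plus Redis); the service names and ports are illustrative assumptions.

```yaml
# docker-compose.yml: hypothetical two-service app (names and ports are illustrative)
version: "3"
services:
  web:
    build: .              # step 1: environment defined by the Dockerfile in this directory
    ports:
      - "5000:5000"
    depends_on:
      - redis             # start redis before web
  redis:
    image: redis:alpine   # pulled from Docker Hub
```

With this file in place, a single docker-compose up builds web, pulls redis, and starts both together.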

Host clustering and container scheduling (Docker Swarm)

• Useful for microservice administration

Docker Swarm - an orchestration tool providing clustering and scheduling capabilities for IT operations teams. Instead of having to communicate with each Docker Engine directly to build and run containers, you can cluster Docker Engines together into a single "virtual engine" that pools their resources, and communicate with a single Swarm master to execute commands. Competitor: Kubernetes, from Google.
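With the swarm mode built into modern Docker Engine, the "single virtual engine" idea maps to a handful of commands. The address, token variable, and nginx service below are placeholders, and the commands need Docker running on several reachable hosts, so they are written to a reference script here rather than executed.

```shell
# Reference script for forming a swarm (address, token, and service are placeholders).
cat > swarm-setup.sh <<'EOF'
#!/bin/sh
# On the manager node: initialize the swarm; this prints a worker join token
docker swarm init --advertise-addr 192.168.1.10

# On each worker node: join using the token the manager printed
docker swarm join --token "$WORKER_TOKEN" 192.168.1.10:2377

# Back on the manager: schedule a service; the swarm places the replicas
docker service create --name web --replicas 3 nginx
docker service ls
EOF
```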

Docker UI - Kitematic

The legacy Docker UI (bundled with Docker Toolbox), since replaced by Docker for Mac and Docker for Windows. It guides the user through installing the complete Docker environment, provisioning a VirtualBox VM. It allows searching for public images on Docker Hub, starting and stopping containers, viewing container process output logs, changing container settings, and switching between the Docker CLI and the GUI.

Docker in the cloud

• Docker Cloud – more managed

• AWS – EC2 container service

• Azure

• Google Cloud Platform

• Rackspace

Docker Cloud provides a hosted registry service with build and testing facilities for Dockerized application images; tools to help you set up and manage host infrastructure; and application lifecycle features to automate deploying (and redeploying) services created from images. One thing to note is that Docker Cloud is cloud-provider agnostic, which for us was a big deal: there is no reason to get locked into Amazon's or Microsoft's cloud. Hypothetically, you could move your application from cloud to cloud as prices change, or, as in our case, use multiple cloud providers for fault tolerance. Even though cloud provider outages are rare, they do occur. In this approach, the closest single point of failure is probably the Docker Cloud DNS service.

Typical Docker platform workflow

• Get your code and its dependencies into Docker containers:

• Write a Dockerfile that specifies the execution environment and pulls in your code.

• If your app depends on external applications (such as Redis, or MySQL), find them on a registry such as Docker Hub, and refer to them in a Docker Compose file, along with a reference to your application, so that they run simultaneously.

• Software providers also distribute paid software via the Docker Store.

• Build, then run your containers on a virtual host via Docker Machine as you develop.

• Configure networking and storage, if needed.

• Upload builds to a registry to collaborate with your team.

• To scale your solution across multiple hosts (VMs or physical machines), plan for how to set up the Swarm cluster and scale it to meet demand.

• Note: Universal Control Plane allows management of the Swarm cluster using a friendly UI

• Finally, deploy to your preferred cloud provider (or, for redundancy, multiple cloud providers) with Docker Cloud. Or, use Docker Datacenter, and deploy to on-premise hardware.

When to use Docker

• For microservice-type application development, each service in one or more containers

• Weak isolation between containers is acceptable

Containers are ideally suited to microservice-type application development -- an approach that allows more complex applications to be configured from basic building blocks, where each building block is deployed in a container and the constituent containers are linked together to form the cohesive application. The application's functionality can then be scaled by deploying more containers of the appropriate building blocks rather than entire new iterations of the full application.

When not to use Docker

• For monolithic applications

• Where high security and strict isolation are needed

What to love about it

• Great documentation

• Simplicity of setup for most use cases

• Speed of build after the initial one because of layer caching


And what not

• Limited portability, because containers run on top of the host OS (Linux containers under Docker cannot run on Windows servers without a VM)

• Limited tools to monitor and manage containers (getting better)

• Containers can grow out of hand and need to be cleaned up from time to time (unnecessary cloud computing costs)

Pricing

• Docker Engine is open source and free to use

• Pricing available for Docker Datacenter (Docker Containers as a Service), Docker Cloud, Commercially Supported Docker Engine

Open Container Initiative

• Started by Docker, CoreOS, and other industry partners to keep the packaging format universal

• Launched on June 22nd 2015

• Contains two specifications – runtime-spec and image-spec

• Workflow should support the simple UX expected of container engines – run an image with no additional arguments

The Open Container Initiative (OCI) is a lightweight, open governance structure (project), formed under the auspices of the Linux Foundation, for the express purpose of creating open industry standards around container formats and runtime. The OCI was launched on June 22nd 2015. The OCI currently contains two specifications: the Runtime Specification (runtime-spec) and the Image Specification (image-spec). The Runtime Specification outlines how to run a “filesystem bundle” that is unpacked on disk. At a high-level an OCI implementation would download an OCI Image then unpack that image into an OCI Runtime filesystem bundle. At this point the OCI Runtime Bundle would be run by an OCI Runtime.

This entire workflow should support the UX that users have come to expect from container engines like Docker and rkt: primarily, the ability to run an image with no additional arguments:

docker run example.com/org/app:v1.0.0

rkt run example.com/org/app,version=v1.0.0

Questions?

