APPUiO User Documentation Documentation Release 1.0 APPUiO Community Apr 21, 2020
User Documentation

1 Getting Started
    1.1 Web Console
    1.2 CLI
    1.3 APPUiO Sample Applications
    1.4 APPUiO - Techlab
    1.5 OpenShift Documentation

2 APPUiO Public Platform Specifics
    2.1 Versions
    2.2 URLs and Domains
    2.3 Persistent Storage
    2.4 Quotas and Limits
    2.5 Secure Docker Builds
    2.6 Let’s Encrypt Integration
    2.7 Email Gateway

3 APPUiO Secure Docker Builder
    3.1 Rationale
    3.2 Features
    3.3 User-customizable builder configuration
    3.4 Build VMs
    3.5 Build Hooks
    3.6 Multi-stage builds
    3.7 Known Issues

4 Backup as a Service
    4.1 What is Backup as a Service?
    4.2 Getting started
    4.3 Data restore
    4.4 How it works
    4.5 Current limitations
    4.6 Plans

5 Let’s Encrypt Integration
    5.1 Implementation details

6 How Tos
    6.1 How to access the OpenShift registry from outside
    6.2 How to run scheduled jobs on APPUiO
    6.3 How to access an internal service while developing
    6.4 How to use a private repository (on e.g. GitHub) to run S2I builds
    6.5 How to add a persistent volume to an application
    6.6 How to customize the build image/process

7 Non HTTP Services / TCP Ingress

8 Troubleshooting
    8.1 Build
    8.2 Deployment
    8.3 Application Logs

9 FAQ (Technical)
    9.1 Can I run Containers/Pods as root?
    9.2 What do we monitor?
    9.3 What do we backup?
    9.4 What DNS entries should I add to my custom domain?
    9.5 Which IP addresses are being used?
    9.6 How can I secure the access to my web application?
    9.7 Can I run a database on APPUiO?
    9.8 I get an error like ‘Failed Mount: MountVolume.NewMounter initialization failed for volume “gluster-pv123” : endpoints “glusterfs-cluster” not found’
    9.9 How do I kill a pod/container
    9.10 How do I work with a volume if my application crashes because of the data in the volume?
    9.11 How long do we keep application logs?
    9.12 Is OpenShift Service Catalog available to be used?
    9.13 How to pull an image from a private registry or private docker hub

10 Introduction
    10.1 Architecture of our shop application
    10.2 Structure of this documentation
    10.3 Where you can find the sources

11 General Concepts
    11.1 Containers
    11.2 Continuous Integration
    11.3 OpenShift / Kubernetes

12 Webserver
    12.1 Introduction
    12.2 Building a container
    12.3 Running the container
    12.4 Implementing a CI Pipeline
    12.5 Preparing the APPUiO project
    12.6 Pushing to the APPUiO registry
    12.7 Implementing a deployment strategy
    12.8 Advanced Deployments

13 API
    13.1 Introduction
    13.2 Building a container
    13.3 Running the container
    13.4 Implementing a CI pipeline

14 Users
    14.1 Introduction
    14.2 Building a container
    14.3 Running the container
    14.4 Implementing a CI Pipeline
    14.5 Deploying to APPUiO

15 Orders
    15.1 Introduction
    15.2 Building and running the container
    15.3 Integrating Jenkins with APPUiO
    15.4 Implementing a CI Pipeline
    15.5 Deploying to APPUiO

16 Using Helm Charts to Deploy Services
    16.1 Example: Postgres
    16.2 Compatibility
    16.3 Related Resources

17 Custom Applications
    17.1 Logging
    17.2 Exception Tracking
    17.3 Monitoring & Alerting

18 PHP Source to Image Sample

19 PHP Docker Build Sample App

20 PHP 7 with Apache Source to Image Example
    20.1 Build Builder
    20.2 Deploy App

21 PHP 7 with Nginx Source to Image Example
    21.1 Build Builder
    21.2 Deploy App

22 MySql Backup Image

23 PostgreSQL Backup Image (the manual way)

24 Spring Boot Application
    24.1 Dockerfile
    24.2 Deployment
    24.3 Configuration

25 Spring Boot Application in Wildfly
    25.1 Spring Boot Application
    25.2 OpenShift Build Process
    25.3 OpenShift Template

26 Spring Boot Application with Angular 2 Frontend
    26.1 Dockerfile
    26.2 Deployment
    26.3 Configuration

27 Java EE Source to Image
    27.1 Deployment via oc Client
    27.2 Deployment via webconsole
    27.3 Configuration
    27.4 Speed up your build

28 Docker Image

29 Deploy Static Content to APPUiO
    29.1 Apache based image
    29.2 Nginx-based image
    29.3 Continuous Integration: Trigger Rebuild

30 Binary Deployment in Wildfly
    30.1 Create a new project
    30.2 Create the deployment folder structure
    30.3 Create a new build using the Wildfly image
    30.4 Start the build
    30.5 Create a new app
    30.6 Expose the service as route

31 Node JS 6 Example
    31.1 How to deploy to APPUiO / OpenShift Cluster
    31.2 Database
    31.3 Configuration

32 MSSQL Server on APPUiO
    32.1 Quick Summary
    32.2 In Detail

33 More OpenShift Sample Apps

34 Prometheus
    34.1 Installation of prometheus
    34.2 Example Metrics from cadvisor

35 Unifi-Controller

36 License

37 Indices and tables


This is the place where everything related to APPUiO gets documented. Feel free to contribute using pull requests on our GitHub project.

To get started with an example, have a look at the very detailed Microservices Example.


Links:

    • GitHub project: https://github.com/appuio/docs
    • Microservices Example: en/latest/services/01_introduction.html


CHAPTER 1

    Getting Started

    1.1 Web Console

Log in to the platform here: Console

    1.2 CLI

You can download the OpenShift CLI client (oc) matching the OpenShift version currently running on APPUiO directly from APPUiO.

    • Windows

    • Mac OS X

    • Linux

Copy the oc client on your machine into a directory on your PATH.

    For example: ~/bin
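The install step can be sketched as follows, assuming the client archive has already been downloaded and extracted (the archive directory name is an example):

```shell
# Copy the extracted oc binary into ~/bin.
mkdir -p ~/bin
cp openshift-origin-client-tools-*/oc ~/bin/
chmod +x ~/bin/oc

# Make sure ~/bin is on your PATH, e.g. in ~/.bashrc:
export PATH="$HOME/bin:$PATH"

# Verify the client is found:
oc version
```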

    1.2.1 Prerequisites

For certain commands, e.g. oc new-app https://github.com/appuio/example-php-sti-helloworld.git, a locally installed Git client (git command) is required.

    1.2.2 Login

    oc login https://console.appuio.ch

    For more information please see Get Started with the CLI.


Links:

    • Console: https://console.appuio.ch/
    • Windows: https://github.com/openshift/origin/releases/download/v3.11.0/openshift-origin-client-tools-v3.11.0-0cbc58b-windows.zip
    • Mac OS X: https://github.com/openshift/origin/releases/download/v3.11.0/openshift-origin-client-tools-v3.11.0-0cbc58b-mac.zip
    • Linux: https://github.com/openshift/origin/releases/download/v3.11.0/openshift-origin-client-tools-v3.11.0-0cbc58b-linux-64bit.tar.gz
    • Get Started with the CLI: https://docs.openshift.com/container-platform/3.11/cli_reference/get_started_cli.html


    1.3 APPUiO Sample Applications

If you want to deploy your first “hello world” example, see OpenShift’s Developers: Web Console Walkthrough. Or dive right into some sample applications for APPUiO in our Application Tutorial section.

    1.4 APPUiO - Techlab

The APPUiO - OpenShift techlab provides a hands-on, step-by-step tutorial that allows you to get in touch with the basic concepts. Check out our German APPUiO Techlab.

The German techlab covers:

• Quick tour and basic concepts

    • Installing the OpenShift CLI

    • First steps on the platform (Source-to-Image deployment from GitHub)

    • Deploying a Docker image from Docker Hub

    • Creating routes

    • Scaling

    • Troubleshooting

    • Deploying a database

    • Code changes and redeployments

    • Attaching persistent storage

    • How to use application templates

    1.5 OpenShift Documentation

    Please find further documentation here: OpenShift docs


Links:

    • Developers: Web Console Walkthrough: https://docs.openshift.com/container-platform/3.11/getting_started/developers_console.html
    • APPUiO Techlab: https://github.com/appuio/techlab
    • OpenShift docs: https://docs.openshift.com/enterprise/latest/welcome/index.html

CHAPTER 2

    APPUiO Public Platform Specifics

APPUiO is based on OpenShift Container Platform. This page describes APPUiO-specific OpenShift configuration settings as well as features which were added to APPUiO and are not present in OpenShift.

    2.1 Versions

    • Operating System: Red Hat Enterprise Linux (RHEL) 7

    • OpenShift Container Platform: 3.11

    • Docker: 1.13.1

    You can download matching clients directly from APPUiO: Getting Started.

    2.2 URLs and Domains

    • Master URL: https://console.appuio.ch/

    • Metrics URL: https://metrics.appuio.ch/

    • Logging URL: https://logging.appuio.ch/

    • Application Domain: appuioapp.ch

    2.3 Persistent Storage

APPUiO currently uses GlusterFS-based persistent storage. For database data we provide Gluster volumes with storage class gluster-database, which makes use of optimized parameters to avoid instability. (Please set the storageClassName attribute in your PVC or StatefulSet manifest accordingly.) For now, volumes with the following sizes are available out of the box:

    • 1 GiB


Links:

    • https://console.appuio.ch/
    • https://metrics.appuio.ch/
    • https://logging.appuio.ch/
    • Optimized parameters: https://github.com/gluster/glusterfs/blob/release-7/extras/group-db-workload
    • PVC: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims
    • StatefulSet: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#components


    • 5 GiB

If you need larger volumes please contact us. All volumes can be accessed with ReadWriteOnce (RWO) and ReadWriteMany (RWX) access modes. Please see the official OpenShift documentation for more information.
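A claim for such a database volume might look like this (a sketch; the claim name is an example, and storageClassName selects the optimized database volumes described above):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mydb-data              # example name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi             # one of the out-of-the-box sizes
  storageClassName: gluster-database
```

Apply it with oc -n mynamespace apply -f pvc.yaml (namespace and file name are placeholders).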

    2.4 Quotas and Limits

The quotas are defined by the project size you ordered. The exact numbers can be found on the product page APPUiO Public Platform.

    2.5 Secure Docker Builds

Usually, Docker builds from a Dockerfile have to be disabled on multi-tenant platforms for security reasons. However, APPUiO uses its own implementation to securely run Docker builds in dedicated VMs: APPUiO Secure Docker Builder

    2.6 Let’s Encrypt Integration

Let’s Encrypt is a certificate authority that provides free SSL/TLS certificates, accepted by most of today’s browsers, via an automated process. APPUiO provides integration with Let’s Encrypt to automatically create, sign, install and renew certificates for your domains running on APPUiO: Let’s Encrypt Integration

    2.7 Email Gateway

To send emails to external entities, you should relay SMTP via the email gateway at mxout.appuio.ch.

    To include the APPUiO email gateway in your existing SPF policy, you can include or redirect to spf.appuio.ch.

    Example DNS record:

    @ IN TXT "v=spf1 ... include:spf.appuio.ch ~all"

Or, if you send emails for your domain exclusively from APPUiO:

    @ IN TXT "v=spf1 redirect=spf.appuio.ch"


Links:

    • https://control.vshn.net
    • Persistent volumes: https://docs.openshift.com/container-platform/3.11/dev_guide/persistent_volumes.html
    • APPUiO Public Platform: https://appuio.ch/public.html

CHAPTER 3

    APPUiO Secure Docker Builder

    3.1 Rationale

Docker builds from Dockerfiles need access to the Docker socket and are inherently insecure. For this reason most multi-tenant container platforms do not support Docker builds. While OpenShift Container Platform, on which APPUiO is based, improves the security of builds through the use of SELinux, they are still not secure enough to run on a multi-tenant platform. Indeed, we have disabled the custom build strategy (custom builders) on APPUiO for this reason.

    3.2 Features

However, since we regard building Docker images from Dockerfiles as a vital feature, APPUiO provides its own mechanism, the “APPUiO secure Docker builder”, to offer this. The APPUiO secure Docker builder has the following features:

    • It provides the same user experience as the OpenShift Container Platform Docker builder.

• Builds run in virtual machines dedicated to a single APPUiO project, which in turn run on dedicated hosts, i.e. outside of APPUiO’s OpenShift Container Platform, thereby providing full isolation between builds and customer containers as well as between builds from different customers.

    • Supports Docker cache for fast subsequent builds.

    • All communication between APPUiO’s OpenShift Container Platform and the dedicated build VMs is encrypted.

• To compensate for the loss of custom builders, it provides hooks that allow users to run a script before and/or after docker build.


Links:

    • Docker daemon attack surface: https://docs.docker.com/engine/security/security/#/docker-daemon-attack-surface
    • Securing builds: https://docs.openshift.org/latest/admin_guide/securing_builds.html
    • Custom build strategy: https://docs.openshift.com/container-platform/3.11/architecture/core_concepts/builds_and_image_streams.html#custom-build


    3.3 User-customizable builder configuration

The source secret attached to the build strategy of a build configuration can be used to configure the build. As usual for OpenShift secrets, values must be encoded using Base64.

    3.3.1 Example

$ oc export secrets example-source-auth
apiVersion: v1
kind: Secret
metadata:
  name: example-source-auth
type: Opaque
data:
  ssh-privatekey: LS0...Cg==
  ssh-known-hosts: Iwo=
  ssh-config: Iwo=

    The string Iwo= is #\n in Base64.
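You can reproduce and inspect such values with base64 (GNU coreutils shown; flags may differ on other systems):

```shell
# Encode a comment-only placeholder ('#' plus a newline):
printf '#\n' | base64
# → Iwo=

# Decode a secret value to see what it contains:
printf 'Iwo=' | base64 -d
# → #
```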

    3.3.2 ssh-privatekey

    Private SSH key; see OpenShift documentation.

    3.3.3 ssh-known-hosts

If this attribute is set to anything, including the empty string, strict host key checking is enabled (see StrictHostKeyChecking in ssh_config(5)). The host keys for the following hosting services are already included by default:

    • GitHub

    • GitLab

    • Atlassian Bitbucket

    Other host keys can be added in Base64 format. Example retrieval command:

$ ssh-keyscan git.example.net | base64
Z2l[...]wo=

    3.3.4 ssh-config

SSH configuration snippet; added after the built-in options. Useful to specify different configuration options for the SSH client (e.g. the Ciphers option; see ssh_config(5)).
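A hypothetical snippet (host name and cipher choice are examples), which would then be stored Base64-encoded in the ssh-config attribute:

```
Host git.example.net
    User git
    Ciphers aes256-ctr
```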

    3.4 Build VMs

The RHEL and Docker versions in the build VMs are identical to the ones on APPUiO’s OpenShift Container Platform.


Links:

    • Base64: https://en.wikipedia.org/wiki/Base64
    • SSH key authentication: https://docs.openshift.com/container-platform/3.11/dev_guide/builds/build_inputs.html#source-secrets-ssh-key-authentication
    • GitHub: https://github.com/
    • GitLab: https://about.gitlab.com/
    • Atlassian Bitbucket: https://bitbucket.org/


    3.5 Build Hooks

Users can add .d2i/pre_build and/or .d2i/post_build scripts to the source repository where their Dockerfile resides. The scripts

    • need to be executable and can be written in any language.

• have access to environment variables set in the BuildConfig object, the variables documented for custom OpenShift builder images, DOCKERFILE_PATH (relative or absolute path to the Dockerfile) and DOCKER_TAG (output Docker tag)

• pre_build is executed just before docker build and has read/write access to the Docker context, including the Dockerfile (use $DOCKERFILE_PATH; also passed as first argument); the output tag is given as the second argument

    • post_build is executed just after docker build and has access to the Docker context and the built image

    • are executed in the build VM as root

    3.5.1 Build Hook Example

Here you’ll find an example which uses a pre_build script to install Maven and then uses it to download a .war file from an artefact repository: https://github.com/appuio/appuio-docker-builder-example. The Dockerfile picks up the .war file downloaded by the pre_build script and adds it to the image with an ADD instruction. In a real project the ARTIFACT environment variable would be configured in a BuildConfig. The example uses JBoss EAP, which is only available to you if you ordered it. However, this approach also works with other base images.

    3.6 Multi-stage builds

    Note: As of September 2017 multi-stage builds are a beta feature included in the secure Docker builder.

Note: Multi-stage builds can’t be used when the source image for a build is overridden using .spec.strategy.dockerStrategy.from.name.

Docker 17.05 and newer support multi-stage builds, where build stages can be partially reused for further stages. An example Dockerfile from the Docker documentation:

    Listing 1: Dockerfile

FROM golang:1.7.3 as builder
WORKDIR /go/src/github.com/alexellis/href-counter/
RUN go get -d -v golang.org/x/net/html
COPY app.go .
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app .

FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY --from=builder /go/src/github.com/alexellis/href-counter/app .
CMD ["./app"]


Links:

    • Custom builder images: https://docs.openshift.com/container-platform/3.11/creating_images/custom.html#custom-builder-image
    • Build hook example: https://github.com/appuio/appuio-docker-builder-example
    • Docker strategy from: https://docs.openshift.com/container-platform/3.11/dev_guide/builds/build_strategies.html#docker-strategy-from
    • Multi-stage builds: https://docs.docker.com/engine/userguide/eng-image/multistage-build/


    3.7 Known Issues

• The OpenShift Container Platform Docker builder exposes environment variables via an ENV instruction at the end of the Dockerfile. This is not yet implemented in the APPUiO secure Docker builder.

    • Binary and image sources are currently not implemented.


Links:

    • Binary source: https://docs.openshift.com/container-platform/3.11/dev_guide/builds/build_inputs.html#binary-source
    • Image source: https://docs.openshift.com/container-platform/3.11/dev_guide/builds/build_inputs.html#image-source

CHAPTER 4

    Backup as a Service

    Beta Warning

This service is currently in beta and we are seeking feedback.

    Contents

    • Backup as a Service

    – What is Backup as a Service?

    – Getting started

    * Application aware backups

    – Data restore

    * Automatic restore

    * Manual restore

    – How it works

    – Current limitations

    – Plans

    4.1 What is Backup as a Service?

    On APPUiO we provide a managed backup service based on Restic.

Just create a backup object in the namespace you’d like to back up. It’s that easy. We take care of the rest: regularly running the backup job and monitoring if and how it is running.


    https://restic.readthedocs.io/


    4.2 Getting started

    Follow these steps to enable backup in your project:

1. Prepare an S3 endpoint which holds your backup data. We recommend cloudscale.ch object storage, but any other S3 endpoint should work.

    2. Store the endpoint credentials in a secret:

oc -n mynamespace create secret generic backup-credentials \
  --from-literal=username=myaccesskey \
  --from-literal=password=mysecretaccesskey

    3. Store an encryption password in a secret:

oc -n mynamespace create secret generic backup-repo \
  --from-literal=password=mybackupencryptionpassword

    4. Configure the backup by creating a backup object:

    oc -n mynamespace apply -f -
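As an illustration only, such a backup object might look roughly like this (a sketch: the field names are inferred from the Restore example in the Data restore section and from the secrets created above; the cron expression, endpoint and bucket are assumptions):

```yaml
apiVersion: backup.appuio.ch/v1alpha1
kind: Schedule
metadata:
  name: schedule-test          # example name
spec:
  backend:
    s3:
      endpoint: https://objects.cloudscale.ch
      bucket: mybackup         # example bucket
      accessKeyIDSecretRef:
        name: backup-credentials
        key: username
      secretAccessKeySecretRef:
        name: backup-credentials
        key: password
    repoPasswordSecretRef:
      name: backup-repo
      key: password
  backup:
    schedule: '0 1 * * *'      # assumed: daily at 01:00
```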


• You can always check the state and configuration of your backup by using oc -n mynamespace describe schedule.

• By default, all PVCs are included in the backup. By adding the annotation appuio.ch/backup=false to a PVC object, it will be excluded from the backup.

    4.2.1 Application aware backups

It’s possible to define annotations on pods with backup commands. These backup commands should create an application-aware backup and stream it to stdout. Since the backup command isn’t run in a shell, no environment variables are available; if needed, wrap the command using ‘/bin/bash -c’.

    Define an annotation on pod:

template:
  metadata:
    labels:
      app: postgres
    annotations:
      appuio.ch/backupcommand: '/bin/bash -c "pg_dump -U user -p 5432 -d dbname"'

With this annotation the operator will trigger that command inside the container and capture the stdout to a backup.

Tested with:

    • MariaDB

    • MongoDB

    But it should work with any command that has the ability to output the backup to stdout.

    4.3 Data restore

    There are two ways to restore your data once you need it.

    4.3.1 Automatic restore

    This kind of restore is managed via CRDs. These CRDs support two targets for restores:

    • S3 as tar.gz

    • To a new PVC (mostly untested though → permissions might need some more investigation)

    Example of a restore to S3 CRD:

apiVersion: backup.appuio.ch/v1alpha1
kind: Restore
metadata:
  name: restore-test
spec:
  restoreMethod:
    s3:
      endpoint: http://10.144.1.224:9000
      bucket: restoremini
      accessKeyIDSecretRef:
        name: backup-credentials
        key: username
      secretAccessKeySecretRef:
        name: backup-credentials
        key: password
  backend:
    s3:
      endpoint: http://10.144.1.224:9000
      bucket: baas
      accessKeyIDSecretRef:
        name: backup-credentials
        key: username
      secretAccessKeySecretRef:
        name: backup-credentials
        key: password
    repoPasswordSecretRef:
      name: backup-repo
      key: password

The S3 target is intended as a self-service download of a specific backup state. The PVC restore is intended as a form of disaster recovery. Future use could also include automated complete disaster recoveries to other namespaces/clusters as a way to verify the backups.

    4.3.2 Manual restore

    Restoring data currently has to be done manually from outside the cluster. You need Restic installed.

    1. Configure Restic to be able to access the S3 backend:

export RESTIC_REPOSITORY=s3:https://objects.cloudscale.ch/mybackup
export RESTIC_PASSWORD=mybackupencryptionpassword
export AWS_ACCESS_KEY_ID=myaccesskey
export AWS_SECRET_ACCESS_KEY=mysecretaccesskey

    2. List snapshots:

    restic snapshots

    3. Mount the snapshot:

    restic mount ~/mnt

    4. Copy the data to the volume on the cluster e.g. using the oc client:

    oc rsync ~/mnt/hosts/tobru-baas-test/latest/data/pvcname/ podname:/tmp/restore
    oc cp ~/mnt/hosts/tobru-baas-test/latest/data/pvcname/mylostfile.txt podname:/tmp

    Please refer to the Restic documentation for the various restore possibilities.

    4.4 How it works

    A cluster-wide Kubernetes Operator is responsible for processing the backup objects and handling the backup schedules. When it's time to do a backup, the operator scans the namespace for matching PVCs and creates a backup job in the corresponding namespace, mounting the matching PVCs under /data/. Restic then backs up the data from this location to the configured endpoint.


    https://restic.readthedocs.io/en/latest/050_restore.html


    4.5 Current limitations

    • Only supports data from PVCs with access mode ReadWriteMany at the moment

    • Backups are not actively monitored / alerted yet

    4.6 Plans

    • Active and automated monitoring by APPUiO staff

    • Backup of cluster objects (deployments, configmaps, ...)

    • In-Cluster data restore

    • Additional backends besides S3, by using the rclone backend of Restic

    • Open-Sourcing the Operator




    CHAPTER 5

    Let’s Encrypt Integration

    Let’s Encrypt is a certificate authority that provides free SSL/TLS certificates via an automated process. Their certificates are accepted by most of today’s browsers.

    APPUiO provides integration with Let’s Encrypt to automatically create, sign, install and renew certificates for your domains running on APPUiO.

    To create a certificate for one of your domains follow these steps:

    1. If you haven’t already done so, create a route for the fully qualified domain name (FQDN) your application should run under, e.g. www.example.org

    2. Add a CNAME record (important!) for the FQDN to the DNS of your domain pointing to cname.appuioapp.ch. E.g. in BIND: www IN CNAME cname.appuioapp.ch. (the trailing dot is required)

    3. Annotate your route: oc -n MYNAMESPACE annotate route ROUTE kubernetes.io/tls-acme=true
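The steps above can also be expressed declaratively; the following is a sketch of the resulting route object, where the route name and service name are hypothetical:

```yaml
# Hypothetical route with the annotation from step 3 already set
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: www-example-org
  annotations:
    kubernetes.io/tls-acme: "true"
spec:
  host: www.example.org
  to:
    kind: Service
    name: myapp
```

Once the annotation is present and the CNAME record resolves, the ACME controller takes care of issuing and installing the certificate.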

    Creating certificates for the default domain appuioapp.ch is neither needed nor supported, as APPUiO already has a wildcard certificate installed for *.appuioapp.ch. Without this wildcard certificate we would sooner or later hit the Let’s Encrypt rate limits on the appuioapp.ch domain.

    Important: Always create a route before pointing a DNS entry to APPUiO, and always remove the corresponding DNS entry before deleting a route for a domain of yours. Otherwise someone else could potentially create a route and a Let’s Encrypt certificate for your domain.

    Please note:

    1. APPUiO automatically renews certificates a few days before they expire

    2. Let’s Encrypt can only create domain-validated certificates, i.e. it’s not possible to add an organization name to a Let’s Encrypt certificate.

    5.1 Implementation details

    APPUiO uses the OpenShift ACME controller to provide the Let’s Encrypt integration.


    https://letsencrypt.org/
    https://letsencrypt.org/docs/rate-limits/
    https://en.wikipedia.org/wiki/Domain-validated_certificate
    https://github.com/tnozicka/openshift-acme


    The certificates are stored in the target Route object in the corresponding project. If you require the certificate to be stored as a Secret as well, add the acme.openshift.io/secret-name annotation, e.g. oc -n MYNAMESPACE annotate route ROUTE acme.openshift.io/secret-name=mysecretname


    CHAPTER 6

    How Tos

    Contents

    • How Tos

    – How to access the OpenShift registry from outside

    – How to run scheduled jobs on APPUiO

    – How to access an internal service while developing

    – How to use a private repository (on e.g. GitHub) to run S2I builds

    * 1. Create an SSH keypair

    * 2. Create a deploy key

    · GitHub

    · GitLab

    * 3. Save the private key in an OpenShift secret

    * 4. Create a new build config in OpenShift

    – How to add a persistent volume to an application

    * Create a volume from the Web-GUI

    – How to customize the build image/process

    6.1 How to access the OpenShift registry from outside

    To access the internal OpenShift registry from outside, you can use the following example:



    oc login https://console.appuio.ch
    oc whoami -t | docker login -u "$(oc whoami)" --password-stdin registry.appuio.ch
    docker pull busybox
    docker tag busybox registry.appuio.ch/MYPROJECT/busybox
    docker push registry.appuio.ch/MYPROJECT/busybox
    oc get imagestreams -n MYPROJECT

    6.2 How to run scheduled jobs on APPUiO

    Check out the APPUiO Cron Job Example.

    6.3 How to access an internal service while developing

    E.g. accessing a hosted PostgreSQL on port 5432 while developing locally.

    To access a service (a single pod, to be more specific) from your local machine, make sure you have installed the OpenShift CLI (as described in the official documentation).

    Login to the OpenShift CLI:

    $ oc login

    Get a list of your currently running pods:

    $ oc get pods
    NAME                    READY     STATUS    RESTARTS   AGE
    play-postgres-1-9ste1   1/1       Running   0          9s

    With the name of the pod running your service, run the oc port-forward command, also specifying the port you would like to access:

    $ oc port-forward play-postgres-1-9ste1 5432
    Forwarding from 127.0.0.1:5432 -> 5432
    Forwarding from [::1]:5432 -> 5432

    Your service may now be accessed via localhost:port. For more advanced usage of oc port-forward consider the official documentation.

    6.4 How to use a private repository (on e.g. GitHub) to run S2I builds

    6.4.1 1. Create an SSH keypair

    Create an SSH keypair without passphrase:

    $ ssh-keygen -t rsa -b 4096 -C "[email protected]"
    Generating public/private rsa key pair.
    Enter file in which to save the key: id_rsa
    Enter passphrase (empty for no passphrase):
    Enter same passphrase again:
    Your identification has been saved in id_rsa.
    Your public key has been saved in id_rsa.pub.


    https://github.com/appuio/example-cron-traditional
    https://docs.okd.io/3.11/cli_reference/get_started_cli.html
    https://docs.okd.io/3.11/dev_guide/port_forwarding.html


    The private key has been saved as id_rsa, the public key as id_rsa.pub. You will need both of them; store them in a secure location.

    6.4.2 2. Create a deploy key

    To allow the newly generated key to pull your repository, you have to specify the public key as a deploy key for your project. This can be done as shown below:

    GitHub

    GitLab



    For OpenShift to be able to access a private repository, the GitLab instance needs to be configured for SSH access.

    6.4.3 3. Save the private key in an OpenShift secret

    Add a new ssh secret to your OpenShift project, specifying the path of your ssh private key:

    oc secrets new-sshauth sshsecret --ssh-privatekey=id_rsa

    A new secret called sshsecret has been added. In order to allow OpenShift to pull your repository, the newly saved secret also has to be linked to the builder service account:

    oc secrets link builder sshsecret

    A more detailed explanation of this step can be found in the official documentation.

    6.4.4 4. Create a new build config in OpenShift

    Now that OpenShift knows your private key and the builder is able to use it, you can create a new S2I build configuration, specifying your private repository as a source.

    Create a new build config using the following command (while in your project’s directory with git remotes defined):

    oc new-build s2i-builder-image~SSH_REPO_URL --name="new-bc"

    The s2i-builder-image above specifies the S2I builder image OpenShift is going to use to build your application source. SSH_REPO_URL should be replaced with the path of your repository, for example “[email protected]:john/example_project.git”.

    As a final step, add the sshsecret to the newly created build config new-bc:

    oc set build-secret --source bc/new-bc sshsecret

    You should now be able to successfully run your source-to-image builds on OpenShift.

    All of those steps are also explained in the official documentation.

    6.5 How to add a persistent volume to an application

    As you know, the contents of the pod/container are discarded when deploying a new container and are not shared between concurrent application instances, so you need to save your application data either in a specific service (like S3 for files/objects, a database for data, etc.) or in a persistent volume that is attached to the container when started.

    6.5.1 Create a volume from the Web-GUI

    Click in the menu under “Storage”; there you’ll find all your existing Persistent Volume Claims. On the top-right there is the button to create a new claim.

    1. Set a unique name, e.g. yourappname-claim

    2. Choose if you need the volume only on one container (Single User) or simultaneously on multiple containers(Shared Access). A read-only volume can be used for special purposes, but you probably don’t need one.

    3. Enter a size, probably in GiB. This is the amount of storage that will be reserved for you and that you will be billed on.


    https://docs.openshift.org/3.11/dev_guide/builds.html#ssh-key-authentication


    4. Click Create

    You can then bind that claim to a deployment by clicking in the menu Applications → Deployments and choosing your deployment. Below the template and above the list of deployments there is the “Volumes” section with the “Add storage” option. Clicking that, you can choose which claim to use and where inside the pod the volume should be mounted.



    If your deployment/pod already has an “emptyDir” (= ephemeral) volume mounted (e.g. because you are deploying a docker image with a volume specified) you can replace that volume with your new claim using:

    oc volumes dc/yourappname --add --overwrite \
      --name=yourexistingvolumename \
      --type=persistentVolumeClaim \
      --claim-name=yourappname-claim

    6.6 How to customize the build image/process

    I tried to build https://github.com/arska/sslinfo using the default Python 3.5 builder through the Web GUI. Unfortunately, while installing my dependencies the following error message appeared that did not appear in my development environment:

    Collecting cryptography==2.1.4 (from -r requirements.txt (line 5))
      Downloading cryptography-2.1.4.tar.gz (441kB)
        Complete output from command python setup.py egg_info:
        error in cryptography setup command: Invalid environment marker: platform_python_implementation != 'PyPy'

    Quick googling pointed me to https://github.com/pyca/pyopenssl/issues/702 with the resolution being upgrading the pip and setuptools packages before installing the dependency.

    My first reaction was to customize the assemble stage of the source-to-image (S2I) process to first upgrade the installers before installing dependencies. This can be done by creating a shell script at .s2i/bin/assemble in the git repo that will be used instead of the one supplied by the build process, as described at Customizing S2I Images. As this is all open source I looked at the original (https://github.com/sclorg/s2i-python-container/blob/master/3.5/s2i/bin/assemble) to copy and modify it.

    Looking at the original source was a good idea: the code to upgrade the installers was already there, waiting to be executed if the environment variable UPGRADE_PIP_TO_LATEST was non-empty (https://github.com/sclorg/s2i-python-container/blob/master/3.5/s2i/bin/assemble#L31). So in the end I just had to add the environment variable UPGRADE_PIP_TO_LATEST=true in the build configuration and everything was well.


    https://github.com/arska/sslinfo
    https://github.com/pyca/pyopenssl/issues/702
    https://docs.openshift.com/container-platform/3.11/using_images/s2i_images/customizing_s2i_images.html
    https://github.com/sclorg/s2i-python-container/blob/master/3.5/s2i/bin/assemble
    https://github.com/sclorg/s2i-python-container/blob/master/3.5/s2i/bin/assemble#L31

    CHAPTER 7

    Non HTTP Services / TCP Ingress

    Accessing a TCP or UDP service without using the provided OpenShift router via the route object is possible via a Load Balancer type service.

    To use it just create a service with the type LoadBalancer. Example:

    apiVersion: v1
    kind: Service
    metadata:
      name: myapp
    spec:
      ports:
      - name: mytcpapp
        port: 5000
      type: LoadBalancer
      selector:
        app: myapp

    The cluster automatically assigns a unique external IPv4 address to this service. To see which IPv4 address has been assigned, go to the web console and navigate to “Applications -> Services”. The IP is displayed in the field “External IP”. Using the CLI is also possible: oc describe svc myapp.

    Note:

    • Only IPv4 is supported, IPv6 is not available for this service yet

    • Additional costs will apply for each external IP

    Relevant Readings / Resources

    Using a Load Balancer to Get Traffic into the Cluster [OpenShift Docs]


    https://docs.openshift.com/container-platform/3.11/dev_guide/expose_service/expose_internal_ip_load_balancer.html



    CHAPTER 8

    Troubleshooting

    If your app does not deploy or run, have a look at some tips here. If you don’t find your error, or find an error message you don’t understand, come to the community chat at https://community.appuio.ch and ask.

    The tips are structured in three categories: how to inspect the build process, how to inspect the deploy process, and how to inspect your running application, with some specific tips and error messages at the end of each part.

    8.1 Build

    Note that the build process of your application is just another pod that you can look at in the Web GUI or CLI. The build process will use computing resources in your quota just like your application does, so if there are no more available resources the build won’t run.

    8.1.1 How to get build logs in the Web GUI

    While the build is running there is a “view log” link in the overview. You can find all the logs of all the builds in the menu under Builds → Builds; select the build config of your application from the list. All the previous builds are listed, the newest (with the highest build number) at the top. You can click the build number to get more information about what triggered the build and inspect the build logs under the “Logs” tab.

    8.1.2 How to get build logs in the CLI

    oc logs buildconfig/yourappname for the latest or oc logs buildconfig/yourappname --previous for the penultimate build. You can abbreviate the command to oc logs bc/yourappname. To stream the logs in real-time during the build you can append the -f parameter: oc logs -f bc/yourappname.

    To access the build log history you get the log of the builder pod:

    1. Get the list of past builder pods with oc get pods. You want the pod to have a name like yourappname-123-build where 123 is the build number.

    2. Get the logs with oc logs yourappname-123-build


    https://community.appuio.ch


    8.1.3 Build Error: manifest blob unknown: blob unknown to registry

    Problem: The OpenShift project app and base image have the same name, causing OpenShift to use the same ImageStreamTag for source and destination.

    Pushed 13/13 layers, 100% complete
    Registry server Address:
    Registry server User Name: serviceaccount
    Registry server Email: [email protected]
    Registry server Password:
    error: build error: Failed to push image: errors:
    manifest blob unknown: blob unknown to registry
    manifest blob unknown: blob unknown to registry
    manifest blob unknown: blob unknown to registry
    manifest blob unknown: blob unknown to registry

    Solution: Use a different app name by passing --name to oc new-app.

    Since oc version 1.5/3.5 the new-app command throws an error if there is a conflict.

    8.1.4 Build Error: Error pushing to registry: Authentication is required

    Due to a known race condition when instantiating a template (https://github.com/openshift/origin/issues/4518) the first build can fail at pushing the resulting container. Just re-start the build process from the Web GUI or through the CLI with oc start-build yourappname.

    8.1.5 Build Warning: [DEPRECATION NOTICE] registry v2 schema1 support will be removed in an upcoming release.

    If pushing the built image to the APPUiO image registry fails for any reason, the docker code falls back to an earlier version of the registry schema to try again. This fallback causes the following deprecation warning to be emitted:

    [DEPRECATION NOTICE] registry v2 schema1 support will be removed in an upcoming release.
    Please contact admins of the registry.appuio.ch registry NOW to avoid future disruption.
    More information at https://docs.docker.com/registry/spec/deprecated-schema-v1/

    If you see this notice, try checking whether your image push failed for some other reason, such as your image tag quota being exceeded or perhaps an authentication problem. For more details on this warning message, see GitHub issue #942 on the docker/for-linux issue tracker.

    8.1.6 Build Resources

    The build resources count against a project’s terminating resources quota. To increase the resources for a build, specify them as documented in Setting Build Resources. Keep in mind that deploy pods also count against the same quota, which means that if a build uses up all of it, no deployment can run. Currently, the terminating resources quota can’t be changed. If you experience issues due to the build resources, please contact support.
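A minimal sketch of such a BuildConfig excerpt, with field names as documented in Setting Build Resources (the values are examples, not recommendations):

```yaml
# Hypothetical BuildConfig excerpt -- only the resources stanza is shown
spec:
  resources:
    limits:
      cpu: "500m"
      memory: "512Mi"
```

A build exceeding these limits is terminated; a build requesting more than the remaining terminating quota will not be scheduled.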


    https://github.com/openshift/origin/issues/4518
    https://github.com/docker/for-linux/issues/942
    https://docs.openshift.com/container-platform/3.11/dev_guide/builds/advanced_build_operations.html#build-resources
    https://control.vshn.net


    8.2 Deployment

    8.2.1 How to get deployment logs in the Web GUI

    While a deployment is running you’ll see the status in the overview. You can see all the events in the menu under Applications → Deployments; select the name of the deployment you want to inspect from the list. The history of the deployment is shown below the configuration; when you click on the deployment number you can inspect the deployment events in the “Events” tab.

    To see all events you can click on Monitoring in the menu; in the top-right there are “Events” and then “View Details”.

    8.2.2 How to get deployment logs in the CLI

    You get all the cluster events with oc get events.

    8.2.3 Deployment error: Error creating: pods “yourappname-123-” is forbidden: exceeded quota: compute-resources, requested: limits.cpu=500m, used: limits.cpu=1600m, limited: limits.cpu=2

    The deployment failed because the quota was enforced. In this example the CPU quota was reached: 500m CPU was requested while 1600m CPU was already in use, the limit being 2000m CPU (2000 millicores = 2 CPUs).
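The quota arithmetic from the error message can be checked directly; a sketch using the numbers above:

```shell
# Values taken from the example error message
quota=2000       # limits.cpu quota in millicores (2 CPUs)
used=1600        # millicores already used by running pods
requested=500    # millicores requested by the new pod

available=$((quota - used))
echo "available: ${available}m"
if [ "$requested" -gt "$available" ]; then
  echo "exceeded quota: requested ${requested}m, only ${available}m left"
fi
```

Since only 400m are left and 500m are requested, the pod is rejected.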

    You can change how much CPU/RAM your application requests on the deployment settings page: menu Applications → Deployments, choose your deployment, then “Actions” on the top-right and “Edit Resource Limits”. The default is 100m CPU requested, 500m CPU hard limit, 100MB RAM requested and 512MB RAM hard limit. You can tune this down depending on your application, e.g. to 50m CPU requested, 100m CPU limit, 50MB RAM requested, 100MB RAM limit.

    When changing the resource limits, a new deployment is started automatically to apply the new settings. If you were so close to your resource limit that the rolling deployment can’t start the new container before the old one is gone, you can either change the deployment strategy from “rolling” to “recreate” or (e.g. if you want downtime-less deployments and are usually within quota):

    1. Cancel the deployment (e.g. from the overview page)



    2. Manually scale the app to 0 pods

    3. Restart the deployment (e.g. from the overview page or from Applications → Deployments → yourappname → Deploy)



    4. Manually scale the app back to 1 pod

    You can change your global quota limit by upgrading your APPUiO.ch package.

    8.2.4 Deployment Error: Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount

    This error means there was a problem with attaching the requested persistent volume, which can be due to:

    1. No more storage available; please contact support

    2. There needs to be a “glusterfs-cluster” service in your project. The service is created automatically when your account is set up, but it can be deleted by the user. If you don’t have this service and you start using persistent volumes, please contact support or create the service yourself:

    oc create -f -


    1. Get the list of pods with oc get pods. You want the pod to have a name like yourappname-123-a1b2c3 where 123 is the build number and the last part is random.

    2. Get the log with oc logs yourappname-123-a1b2c3 or live-streamed with oc logs -f yourappname-123-a1b2c3

    8.3.3 More options

    See Custom Applications for suggestions on exception logging, monitoring and alerting.




    CHAPTER 9

    FAQ (Technical)

    9.1 Can I run Containers/Pods as root?

    This is not possible due to security restrictions.

    9.2 What do we monitor?

    The functionality of OpenShift and all involved services is completely monitored and operated by VSHN. Individual projects are not monitored out of the box, but Kubernetes already has health checks integrated and running. Also, replication controllers make sure that pods are running all the time. If you need more complex monitoring for your project, feel free to contact us at our Customer Portal.

    More information can also be found here: Application Health

    9.2.1 Route monitoring

    Certificates on application routes are monitored for validity. Users or cluster operators may set any of the following annotations on routes:

    • monitoring.appuio.ch/enable: Whether to monitor route at all (boolean, default true)

    • monitoring.appuio.ch/enable-after: Whether to monitor the route only after a specified point in time (time, no default).

    • monitoring.appuio.ch/verify-tls-certificate: Whether to verify X.509 certificate validity(boolean as string, default true).

    • monitoring.appuio.ch/not-after-remaining-warn, monitoring.appuio.ch/not-after-remaining-crit: Amount of time before reporting warning or critical status when the primary route certificate is about to expire (duration).

    Value formats:


    https://control.vshn.net
    https://docs.openshift.com/container-platform/3.11/dev_guide/application_health.html


    • Boolean: "false" or "true".

    • Time: Supported formats (see time.ParseInLocation):

    – 2006-01-02T15:04:05Z07:00

    – 2006-01-02T15:04Z07:00

    – 2006-01-02T15:04

    – 2006-01-02

    • Duration: Parsed using Go’s time.ParseDuration function, e.g. 168h30m.
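Putting the annotations above together, a route could look like the following sketch; the duration values are arbitrary examples, not defaults:

```yaml
# Hypothetical Route excerpt combining the monitoring annotations above
metadata:
  annotations:
    monitoring.appuio.ch/enable: "true"
    monitoring.appuio.ch/verify-tls-certificate: "true"
    monitoring.appuio.ch/not-after-remaining-warn: "720h"   # warn 30 days before expiry
    monitoring.appuio.ch/not-after-remaining-crit: "168h"   # critical 7 days before expiry
```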

    9.3 What do we backup?

    We back up all data relevant to run the OpenShift cluster. Application data itself is not in the default backup and is the responsibility of the user. However, we can provide a backup service for individual projects. Please contact us at our Customer Portal for more information.

    9.4 What DNS entries should I add to my custom domain?

    When creating an application route, the platform automatically generates a URL which is immediately accessible, e.g. http://django-psql-example-my-project.appuioapp.ch, due to wildcard DNS entries under *.appuioapp.ch. If you now want to have this application available under your own custom domain, follow these steps:

    1. Edit the route and change the hostname to your desired hostname, e.g. www.myapp.ch

    2. Point your DNS entry using a CNAME resource record type (important!) to cname.appuioapp.ch

    Always create a route before pointing a DNS entry to APPUiO, otherwise someone else could create a matching route and serve content under your domain.

    Note that you can’t use CNAME records in the apex domain (example.com, i.e. without www in front of it). If you need to use the apex domain for your application you have the following options:

    1. Redirect to a subdomain (e.g. example.com → www.example.com or app.example.com) with your DNS provider, and set up the subdomain with a CNAME

    2. Use ALIAS-records with your DNS-provider if they support them

    3. Enter 5.102.151.2 and 5.102.151.3 as A records
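The options above can be sketched as a BIND zone fragment; the zone and record names are hypothetical, and the A-record IPs are the ones listed in option 3 (which, as noted in the next section, may change):

```text
; Hypothetical BIND zone fragment for example.com
www  IN CNAME  cname.appuioapp.ch.
; the apex cannot be a CNAME; option 3 uses the two A records instead:
@    IN A      5.102.151.2
@    IN A      5.102.151.3
```

If your DNS provider supports ALIAS records (option 2), prefer those over hard-coded A records.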

    9.5 Which IP addresses are being used?

    Disclaimer: These addresses may change at any time. We do not recommend whitelisting by IP address. A better option is to use Transport Layer Security (TLS) with client certificates for authentication.

    Incoming connections for routes: 5.102.151.2, 5.102.151.3

    Outgoing connections from pods:

    Until May 12 2020: 5.102.147.130, 5.102.147.124, 2a06:c00:10:bc00::/56

    After May 12 2020: 5.102.151.22, 2a06:c00:10:bc00::/56


    https://golang.org/pkg/time/#ParseInLocation
    https://golang.org/pkg/time/#ParseDuration
    https://control.vshn.net
    https://en.wikipedia.org/wiki/Transport_Layer_Security


    9.6 How can I secure the access to my web application?

    OpenShift supports secure routes, and everything is prepared on APPUiO to have them secured easily. Just edit the route and change the termination type to edge. There is a default trusted certificate in place for *.appuioapp.ch which is used in this case. If you want to use your own certificate, see Routes.
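The relevant part of the route spec after that edit looks like this sketch; with no certificate fields set, the default *.appuioapp.ch certificate applies:

```yaml
# Route excerpt -- only the TLS stanza is shown
spec:
  tls:
    termination: edge
```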

    9.7 Can I run a database on APPUiO?

    Short answer: Yes. But we do discourage it. If you do, use the gluster-database storage class as described in Persistent Storage. See Using Helm Charts to Deploy Services for a convenient way to deploy a database service.

    We provide shared persistent storage using GlusterFS. Please make sure that the database you intend to use is capable of storing its data on a shared filesystem. We don’t recommend running production databases with GlusterFS as storage backend, because there is a risk of data corruption, and when that happens, your database will not run/start anymore. For highly-available and high-performance managed databases, please contact us at our Customer Portal.

    9.8 I get an error like ‘Failed Mount: MountVolume.NewMounter initialization failed for volume “gluster-pv123” : endpoints “glusterfs-cluster” not found’

    When you received your account there was a service called “glusterfs-cluster” pointing to the persistent storage endpoint. If you deleted it by accident you can re-create it with:

    oc create -f -


    Or run oc create -f https://raw.githubusercontent.com/appuio/docs/master/glusterfs-cluster.yaml

    Please note that the IP addresses above depend on which cluster you are on; these are valid for console.appuio.ch.

    9.9 How do I kill a pod/container

    If your container is hanging, either because your application is unresponsive or because the pod is in state “Terminating” for a long time, you can manually kill the pod:

    oc delete pod/mypod

    If it still hangs you can use more force:

    oc delete --grace-period=0 --force pod/mypod

    The same functionality is available in the Web GUI: Applications → Pods → Actions → Delete; there is a checkbox “Delete pod immediately without waiting for the processes to terminate gracefully” for applying more force.

    9.10 How do I work with a volume if my application crashes because of the data in the volume?

    If your application is unhappy with the data in a persistent volume you can connect to the application pod:

    oc rsh mypod

    to run commands inside the application container, e.g. to fix or delete the data. In the Web GUI this is Applications → Pods → mypod → Terminal.

    If your application crashes at startup this does not work, as there is no container to connect to; the container exits as soon as your application exits. If there is a shell included in your container image you can use oc debug to clone your deployment config including volumes for a one-off debugging container:

    oc debug deploymentconfig/prometheus

    If your container image does not include a shell, or you need special recovery tools, you can start another container image, mount the volume with the data and then use the tools in the other container image to fix the data manually. Unfortunately the oc run command does not support specifying a volume, so we have to create a deployment config with the volume for it to be mounted, and make sure our deployed container does not exit:

    1. Get the name of the persistent volume claim (PVC) that contains the data. In this example the application and deployment config (dc) name is ‘prometheus’:

    oc volume dc/prometheus

    This produces the following output:

    deploymentconfigs/prometheus
      configMap/prometheus-config as prometheus-config-1
        mounted at /etc/prometheus
      pvc/prometheus-data (allocated 1GiB) as prometheus-volume-1
        mounted at /prometheus



    You can see that pvc/prometheus-data is the persistent volume claim that is mounted at /prometheus for the application prometheus.

    2. Deploy the helper container (e.g. “busybox”, a minimal container containing a shell); if you need special tools to fix the data (e.g. to recover a database) you should use another container image containing these tools. Patch it not to exit and mount the volume at /mnt:

    # create a new deployment with a "busybox" shell container
    oc new-app busybox
    # patch the new deployment with a while-true-loop so the container keeps on running
    oc patch dc/busybox -p '{"spec":{"template":{"spec":{"containers":[{"name":"busybox","command":["sh"],"args":["-c","while [ 1 ]; do echo hello; sleep 1; done"]}]}}}}'
    # mount the persistent volume claim into the container at /mnt
    oc volume dc/busybox --add -m /mnt -t pvc --claim-name prometheus-data
    # wait for the new deployment with the mount to roll out

    Warning: The oc patch command above has a problem with escaping on Windows cmd/PowerShell. You can add the “command” and “args” keys and values in the Web GUI.

    3. Connect to your helper container and work in the volume:

    oc rsh dc/busybox
    cd /mnt/
    # congratulations, you are now in the volume you want to fix
    # you can now selectively delete/edit/clean the bad data

    4. Clean up the temporary deployment config afterwards:

    oc delete all -l app=busybox

    9.11 How long do we keep application logs?

    Application logs are stored in Elasticsearch and accessible via Kibana. All container logs are sent there but are only kept for 10 days.

    9.12 Is OpenShift Service Catalog available to be used?

    The OpenShift Service Catalog is neither supported nor available on APPUiO. The Template Service Broker and OpenShift Ansible Broker are likewise neither supported nor available. The Service Catalog was once available, but because Red Hat is removing its support from OpenShift, we decided to remove it from APPUiO.

    See Using Helm Charts to Deploy Services for an alternative.

    9.13 How to pull an image from a private registry or private Docker Hub repository

    To pull an image from a private container registry like Docker Hub private repositories, you need to create a secret to store the credentials and link it to be used for pulls in your project:


    oc create secret docker-registry myimagepullingsecretname \
      --docker-server=docker.io \
      --docker-username=myusername \
      --docker-password=mypassword \
      --docker-email=[email protected]

    oc secrets link default myimagepullingsecretname \
      --for=pull \
      --namespace=myproject
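    If the pull still fails, it can help to verify that the link took effect: the secret should appear under imagePullSecrets of the default service account. A sketch of the relevant excerpt of the service account (secret and project names as in the commands above, other fields elided):

    ```yaml
    # excerpt of: oc get serviceaccount default -o yaml
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: default
      namespace: myproject
    imagePullSecrets:
      - name: myimagepullingsecretname
    ```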


  • CHAPTER 10

    Introduction

    This documentation has been created with the intention of getting developers ready to automatically deploy their apps to the OpenShift container platform.

    We try to achieve this using an exemplary microservice application with the basic functionality of an online shop. Each microservice is continuously integrated and deployed to APPUiO (our public OpenShift platform), which allows for an independent description of the necessary pipeline as well as the most relevant concepts for the respective use case.

    Before we describe the architecture of our application in more detail, let us briefly summarize what the following chapters include (in order):

    General Concepts

    • Motivation for Docker and OpenShift/APPUiO

    • Motivation for Continuous Integration

    • Overview of CI tooling (GitLab CI and Jenkins)

    • Overview of Source2Image principles

    Webserver

    • Dockerizing a ReactJS application for OpenShift

    • Testing and bundling a ReactJS application

    • Continuous integration with GitLab CI

    • Deployment strategies for multiple environments

    • Tracking of OpenShift configuration alongside the codebase

    • Optimizing GitLab CI configurations using variables and templates

    API


    • Dockerizing a Scala Play! application

    • Testing and compiling a Scala Play! application

    • Continuous integration with GitLab CI

    • Using OpenShift Source2Image for building a Docker container

    • Creating a tailor-made Source2Image builder

    Users

    • Dockerizing an Elixir application for OpenShift

    • Testing and compiling an Elixir application

    • Building a container using Alpine build and runtime containers

    • Continuous integration with GitLab CI

    Orders

    • Testing a Python application

    • Continuous integration with Jenkins 2 and the OpenShift plugin

    • Creating a tailor-made Jenkins slave (runner)

    • Using the OpenShift Python builder for S2I

    10.1 Architecture of our shop application


    A first clear distinction in our application's architecture can be made between the frontend and the backend. The frontend contains only a single service, the Webserver. The Webserver is an instance of Nginx that serves static files (the compiled JS application).

    The backend consists of multiple microservices: the main endpoint (API) that is accessed from the frontend of the application, a service that handles user management and authentication (Users), a service that handles order management (Orders), and a service responsible for sending emails (Mailer). API, Users and Orders each manage their own database to enforce separation of concerns. The API connects to the other services via their respective REST endpoints whenever it needs a timely response.

    10.2 Structure of this documentation

    This documentation is structured such that we first make sure you know the most relevant topics and prerequisites for following along later on. The chapter about General Concepts provides a short motivation for concepts like Docker and OpenShift and guides you to useful resources if you need to deepen your knowledge about those topics.

    The following chapters each describe one of our services in more depth. We go into how a continuous integration pipeline might be built and how the respective service might be packaged for OpenShift, as well as several more advanced topics. We generally try to account for best practices like the 12-Factor App.

    10.3 Where you can find the sources

    The sources for all parts of this documentation, as well as for all the described examples, can be found on the APPUiO GitHub. The GitHub repositories are synchronized with our internal development repositories and represent the current state. The following lists contain all the public resources and repositories that have been created during the writing of this documentation:

    Documentation

    • https://github.com/appuio/docs in subdirectory services

    Microservices

    • Umbrella repository: https://github.com/appuio/shop-example

    • API: https://github.com/appuio/shop-example-api

    • Orders: https://github.com/appuio/shop-example-orders

    • Users (builder): https://github.com/appuio/shop-example-users-builder

    • Users (runtime): https://github.com/appuio/shop-example-users

    • Webserver: https://github.com/appuio/shop-example-webserver

    Misc

    • CI runner for SBT (hub): https://hub.docker.com/r/appuio/gitlab-runner-sbt

    • CI runner for SBT (sources): https://github.com/appuio/gitlab-runner-sbt

    • CI runner for OC (hub): https://hub.docker.com/r/appuio/gitlab-runner-oc

    • CI runner for OC (sources): https://github.com/appuio/gitlab-runner-oc


    • CI runner for Yarn (hub): https://hub.docker.com/r/appuio/gitlab-runner-yarn

    • CI runner for Yarn (sources): https://github.com/appuio/gitlab-runner-yarn

    • Vagrant box with necessary tools: https://github.com/appuio/shop-example-vagrant


  • CHAPTER 11

    General Concepts

    This chapter introduces some of the most important concepts that you need to know for the following chapters. We will briefly motivate each concept and provide you with the most relevant resources for getting started or deepening your knowledge on your own.

    11.1 Containers

    Containers allow us to package everything we need to run our application right alongside the application. They are similar to virtual machines but don't package an entire operating system, which makes them very lightweight. Instead, they build on top of the underlying operating system (most often Linux) and only contain what is specific to the application.

    Docker allows us to define what a container should look like using simple configuration files (called Dockerfiles). If we build such a configuration file, we get an image that can be run on any machine with the docker binary. The Docker Hub provides access to a vast number of images that have been created by others and are ready to be pulled and run.
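    For instance, building an image from a Dockerfile in the current directory and running it takes just two commands (the image name "myapp" is our own placeholder):

    ```shell
    # build the Dockerfile in the current directory into an image tagged "myapp"
    docker build -t myapp .
    # start a container from that image; --rm removes it again on exit
    docker run --rm myapp
    ```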

    The main advantage of containers is that they contain everything they need to run, which guarantees that they run the same on any machine (in local development as well as in production). This confidence is crucial if one is considering fully automated deployment strategies like Continuous Deployment.

    Relevant Readings / Resources

    1. What is Docker? [Docker Docs]

    2. Official Documentation [Docker Docs]

    3. Dockerfile Reference [Docker Docs]

    4. Dockerfile Best Practices [Docker Docs]

    5. Docker Hub


    11.1.1 Docker Compose

    Most of the time, an application will depend on other containers like databases, caches or other microservices. To coordinate the application and its dependencies while developing locally, we can leverage Docker and the Docker Compose toolkit.

    Docker Compose allows us to set up an overall service definition that can contain many interdependent services. The service definition is saved in a docker-compose.yml file that can be tracked alongside the source code.

    A service definition might look as follows:

    version: "2.1"
    services:
      # definition for the users service
      users:
        # build the Dockerfile in the current directory
        build: .
        # specify environment variables for the users service
        environment:
          SECRET_KEY: "abcd"
        # specify ports that the users service should publish
        ports:
          - "4000:4000"

      # definition for the associated database
      users-db:
        # specify the image the users-db should run
        image: postgres:9.5-alpine
        # specify environment variables for the users-db service
        environment:
          POSTGRES_USER: users
          POSTGRES_PASSWORD: secret

    On running docker-compose up --build, this configuration will build the users service and pull the PostgreSQL database image. It will then start up both services and expose them with a hostname corresponding to their name in the service definition. This means that the users service can connect to the database using the hostname users-db.

    We provide such docker-compose configuration files for every service independently, as well as in the form of an umbrella docker-compose file that allows starting up the entire application. The umbrella can be found at https://github.com/appuio/shop-example. Please make sure to also include the submodules (i.e. using git clone --recursive -j8 https://github.com/appuio/shop-example).

    Note: A problem with such simple configurations is that the database usually performs an initialization process before starting up (creating indices etc.). If both services are started simultaneously, the users service will be unable to connect to the database.

    To circumvent this, we need to have the users service wait for the database to finish its initialization. This topic will be addressed in later chapters, as it matters not only in local development but also once the services are deployed.
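    One simple way to implement such a wait is to poll the database's TCP port until it accepts connections before the application opens its own connection. A minimal Python sketch (function name and timeouts are our own, not taken from the shop application):

    ```python
    import socket
    import time

    def wait_for_port(host: str, port: int, timeout: float = 30.0, interval: float = 1.0) -> bool:
        """Poll a TCP port until it accepts connections or the timeout expires."""
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            try:
                # the service is up as soon as a TCP connection succeeds
                with socket.create_connection((host, port), timeout=interval):
                    return True
            except OSError:
                # not ready yet; wait a bit and retry
                time.sleep(interval)
        return False
    ```

    In a compose setup like the one above, the users service would call something like wait_for_port("users-db", 5432) at startup before connecting to PostgreSQL.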

    Relevant Readings / Resources

    1. Overview of Docker Compose [Docker Docs]


    11.2 Continuous Integration

    Modern continuous integration tools enable us to automate many tedious aspects of the software development lifecycle. We can configure these tools to automatically perform jobs like testing and compiling the application and deploying a new release.

    These tools work especially well in conjunction with containers, as we can have the tool build a container from our sources, test the container and possibly directly deploy the new version of the container. As we are confident that containers run the same in all environments, we can trust that the container built and tested in CI will also run wherever we deploy it.

    There are many CI tools around, all providing similar functionality, which can make choosing between them quite hard. To account for this diversity, we will use two very popular CI tools to continuously integrate our microservices: Jenkins and GitLab.

    Relevant Readings / Resources

    1. Continuous Integration [Wikipedia]

    2. Docker for CI/CD

    11.2.1 Jenkins

    Jenkins is the most popular open-source continuous integration solution. With a vast number of plugins available, it can be extended to fit almost any use case.

    To use Jenkins, you need to create a so-called Jenkinsfile that specifies all the jobs (the "pipeline") that Jenkins should execute. You also need to add a webhook to your source repository so that Jenkins gets notified of changes to the codebase.
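    To give an idea of the format, a minimal declarative Jenkinsfile might look as follows (the stage names and shell commands are illustrative, not taken from the Orders service):

    ```groovy
    pipeline {
        agent any
        stages {
            stage('Test') {
                steps {
                    // run the project's test suite
                    sh 'make test'
                }
            }
            stage('Build') {
                steps {
                    // build the deployable artifact
                    sh 'make build'
                }
            }
        }
    }
    ```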

    A real example of using Jenkins for continuous integration will be presented in the chapter on the Orders microservice.

    Relevant Readings / Resources

    1. Getting Started [Jenkins Docs]

    2. Jenkinsfile [Jenkins Docs]

    11.2.2 GitLab CI

    GitLab CI is a continuous integration solution provided by the popular Git repository manager GitLab. It is seamlessly integrated into the repository management functionality, which makes its usage very convenient. The downside is that it is only usable if GitLab is used for repository management. If you use GitHub or similar, you will need to find another solution (Jenkins, Travis CI, etc.).

    To use GitLab CI, simply create a .gitlab-ci.yml file with job definitions and store it in your source repository. GitLab CI will automatically execute your pipeline on any changes to the codebase.
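    As a rough illustration, a two-stage .gitlab-ci.yml might look like this (job names, image and commands are placeholders, not the pipeline used later in this documentation):

    ```yaml
    stages:
      - test
      - build

    test:
      stage: test
      image: node:8-alpine      # run the job inside this Docker image
      script:
        - yarn install
        - yarn test

    build:
      stage: build
      image: node:8-alpine
      script:
        - yarn build
      artifacts:
        paths:
          - build/              # keep the bundle for later jobs
    ```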

    We will see examples for using GitLab CI in the chapters about the Webserver, API and Users services.

    Relevant Readings / Resources

    1. Quick Start [GitLab Docs]


    2. Config with .gitlab-ci.yml [GitLab Docs]

    Usage with Docker

    A feature that we find especially useful is that jobs can be run inside a Docker container. Instead of having to install dependencies for testing, building, etc. during execution of our job, we can simply specify a Docker image that already includes all those dependencies and execute the job within this image. In many cases, this is as easy as using an officially maintained Docker image from the Hub.

    If we need a very specific configuration or dependencies while executing our job, we can build a tailor-made Docker image just for running the job. We will describe how to create a custom runner later on in this documentation.

    Relevant Readings / Resources

    1. Using Docker Images [GitLab Docs]

    11.3 OpenShift / Kubernetes

    Once you start using containers for more than small demo applications, you are bound to encounter challenges such as scalability and reliability. Docker is an excellent tool in itself, but as soon as an application consists of several containers that likely depend on each other, a need for orchestration arises.

    Orchestrators are pieces of software built to handle exactly those types of problems. An orchestrator organizes multiple services such that they appear as a single service to the outside, allows scaling of those services, handles load balancing and more. All of this can be done on a single machine as well as on a cluster of servers. A very popular orchestration software is Kubernetes (K8s), originally developed by Google.

    Adding another layer on top, Red Hat OpenShift provides a complete Platform-as-a-Service solution based on Kubernetes. It extends Kubernetes with features for application lifecycle management and DevOps and is easier to get started with. Our public cloud platform APPUiO runs on the OpenShift Container Platform, which is the enterprise version of OpenShift (with OpenShift Origin as an upstream).

    Relevant Readings / Resources

    1. User-Guide [Kubernetes Docs]

    2. What is K8S [Kubernetes Docs]

    3. Developer Guide [OpenShift Docs]

    4. APPUiO Documentation

    5. OpenShift Origin [GitHub]

    11.3.1 Source2Image

    Instead of writing a Dockerfile that extends some base image and building it with docker build, OpenShift introduces an alternative way of packaging applications into containers. The paradigm - which they call Source2Image, or S2I for short - suggests that, given your application's sources and a previously prepared builder image, you inject the sources into the builder container, run an assemble script inside the builder and commit the container. This creates a runnable version of your application, which you can start using another command.


    This works very well for dynamic languages like Python, where you don't need to compile the application beforehand. The OpenShift Container Platform already provides several such builder images (Python, PHP, Ruby, Node.js, etc.), so you would only need to inject your sources and your application would be ready to run. We will use this strategy for the deployment of our Python microservice later on.
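    With the standalone s2i command-line tool, the workflow described above can be sketched roughly like this (repository URL and image names are placeholders; the builder image shown is one of the community Python builders):

    ```shell
    # combine the sources with a Python builder image into a runnable image
    s2i build https://github.com/example/my-python-app centos/python-36-centos7 my-python-app
    # run the resulting image like any other container image
    docker run -p 8080:8080 my-python-app
    ```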

    For compiled languages like Java, this approach means that the compile-time dependencies would also be included in the runtime image, which could heavily bloat that image and pose a security risk. S2I would allow us to provide a runtime image for running the application after the builder image has assembled it. However, this is not yet fully implemented in OpenShift (it is still an experimental feature).

    There will also be cases where you can't find an S2I builder image that fits your use case. A possible solution is to create a custom builder that is tailor-made for the application. We will see how to use such a custom builder in the chapter about the API service.

    Relevant Readings / Resources

    1. Creating images with S2I [OpenShift Docs]

    2. Source-to-Image [GitHub]

    3. Community S2I builder images [GitHub]


  • CHAPTER 12

    Webserver

    12.1 Introduction

    The first part of our microservice architecture to be explained is the webserver. It is the first service the user connects to and one of only two services that are exposed to the user. The webserver consists of an instance of nginx (a high-performance webserver) serving the application's frontend (static files like HTML, CSS, JS and images).

    The frontend has been designed as a Single-Page App (SPA), which runs computations in the client's browser and only connects to the API when it needs to fetch data. This is a frequently used pattern in modern web applications, as APIs often also need to be accessible from native apps and other clients. The basic technologies used are React (a JavaScript framework), Webpack (a JavaScript bundler) and Yarn (package management).

    The webserver lends itself to some introductory explanations about continuous integration pipelines and Docker deployments to APPUiO (building on those in the General Concepts section), as its build/deployment pipeline is quite simple and it doesn't directly depend on any other service.

    Note: We won’t go into the implementation details, but you are welcome to have a look at the sources on GitHub.

    12.1.1 Goals for CI

    What we would like to achieve with our pipeline can be shortly summarized as follows:

    1. Run all of the application’s tests

    2. Build an optimized JavaScript bundle that can be served statically


    3. Build a docker container that can be run on APPUiO

    4. Push the newly built container directly to the APPUiO registry

    5. Update the application configuration on APPUiO

    6. Trigger a new deployment in APPUiO

    The following sections describe how this pipeline might be implemented using GitLab CI. Topics that will be covered include (among others):

    • Building and running the service as a docker container

    • Implementing a simple GitLab CI pipeline with caching and artifacts

    • Strategies when using multiple deployment environments (staging, prod etc.)

    • Preparing our APPUiO project such that we can deploy the service (routes, deployments etc.)

    • Extending our pipeline such that the APPUiO configuration is tracked alongside our source code

    • Adding health checks to the deployment of our service

    12.2 Building a container

    The first thing we need to achieve, so that we can later deploy our application to APPUiO, is packaging it into a Docker container. The Dockerfile for this is quite simple:

    Listing 1: docs_webserver/Dockerfile

    # extend the official nginx image from https://hub.docker.com/_/nginx/
    # use mainline as recommended by devs and alpine for reduced size
    FROM nginx:1.11-alpine

    # create new user with id 1001 and add to root group
    RUN adduser -S 1001 -G root

    # expose port 9000
    EXPOSE 9000

    # copy the custom nginx config to /etc/nginx
    COPY docker/nginx.conf /etc/nginx/nginx.conf

    # copy artifacts from the public folder into the html folder
    COPY build /usr/share/nginx/html

    # switch to user 1001 (non-root)
    USER 1001

    Most commands should be understandable by their respective comments (for a reference see #1).

    There is one very important concept we would like to emphasize: OpenShift enforces that the main process inside a container must be executed by an unnamed user with a numerical id (see #2). This is due to security concerns about the permissions of the root user inside a container, as it might break out and access the host. When the webserver is ultimately deployed to OpenShift, the platform will assign a random numerical id in place of the defined id 1001.
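    Because the assigned user id is random but always belongs to the root group, a common pattern from the OpenShift image creation guidelines is to make runtime-writable paths group-writable in the Dockerfile. The exact paths depend on the image; the ones below are typical for nginx but are our assumption here:

    ```dockerfile
    # allow the arbitrary runtime UID (a member of the root group)
    # to write where nginx needs to at runtime
    RUN chgrp -R 0 /var/cache/nginx /var/run && \
        chmod -R g=u /var/cache/nginx /var/run
    ```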

    Due to these security restrictions, the official nginx image has to be configured differently, as it normally wants to run as root (which would cause the deployment on OpenShift to fail

