Ceph Day Beijing: Containers and Ceph

Transcript
Page 1: Ceph Day Beijing: Containers and Ceph

Containers and Ceph Haomai 2015.06.04

Page 2: Ceph Day Beijing: Containers and Ceph

Hi, I’m Haomai Wang

❖ Joined the Ceph community in 2013

❖ GSOC 2015 Ceph Mentor

❖ Maintains KeyValueStore and AsyncMessenger

❖ Active in RBD, performance, and ObjectStore work

❖ New to containers!

[email protected]

Page 3: Ceph Day Beijing: Containers and Ceph

Agenda

❖ Motivation

❖ Block? File?

❖ CephFS Update

❖ Orchestration

❖ Summary

Page 4: Ceph Day Beijing: Containers and Ceph

Cloud Hodgepodge

❖ Compelling clouds offer options

❖ Compute

❖ VM (KVM, XEN …)

❖ Containers (LXC, Docker, OpenVZ)

❖ Storage

❖ Block

❖ File

❖ Object

❖ Key/Value

❖ NoSQL

❖ SQL


Page 5: Ceph Day Beijing: Containers and Ceph

Containers?

❖ Performance

❖ Shared kernel

❖ Fast boot

❖ Lower baseline overhead

❖ Better resource sharing

❖ Storage

❖ Shared kernel -> efficient IO

❖ Small image -> efficient deployment

❖ Emerging container host OSs

❖ CoreOS

❖ Atomic

❖ Snappy Ubuntu

❖ New app provisioning model

❖ Small, single-service containers

❖ Standalone execution environment

Page 6: Ceph Day Beijing: Containers and Ceph

Ceph Components

Page 7: Ceph Day Beijing: Containers and Ceph

Block/File

Page 8: Ceph Day Beijing: Containers and Ceph

VM + Block (RBD)

❖ Model (librbd path sketched below)

❖ Nova → libvirt → KVM → librbd.so

❖ Cinder → rbd.py → librbd.so

❖ Glance → rbd.py → librbd.so

❖ Pros

❖ proven

❖ decent performance, good security

❖ Cons

❖ performance could be better

❖ Status

❖ most common deployment model today (~44% in latest survey)
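
Not from the slides: a minimal sketch of the librbd path that rbd.py drives for Cinder and Glance, assuming the python-rados/python-rbd bindings, a cluster config at /etc/ceph/ceph.conf, and a pool named 'rbd' (pool and image names are examples).

    import rados
    import rbd

    # Connect to the cluster the same way rbd.py does under the hood.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('rbd')              # example pool name

    # Create a 4 GiB image and write a few bytes through librbd.
    rbd.RBD().create(ioctx, 'demo-image', 4 * 1024 ** 3)
    image = rbd.Image(ioctx, 'demo-image')
    image.write(b'hello from librbd', 0)           # data, offset
    image.close()

    ioctx.close()
    cluster.shutdown()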

Page 9: Ceph Day Beijing: Containers and Ceph

Container + Block (RBD)

❖ The model (host-side sketch below)

❖ libvirt-based LXC containers (or Docker)

❖ map kernel RBD on the host

❖ pass the host device to libvirt / the container

❖ Pros

❖ fast and efficient

❖ implements the existing Nova API

❖ Cons

❖ weaker security than VM

❖ Status

❖ lxc is maintained
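
Not from the slides: a rough host-side sketch of the kernel RBD model above, assuming the rbd CLI, Docker, and an existing image rbd/demo-image (all names are examples; libvirt-lxc would take the same device through its domain XML instead).

    import subprocess

    # Map the image through the kernel RBD client; rbd prints e.g. /dev/rbd0.
    dev = subprocess.check_output(
        ['rbd', 'map', 'rbd/demo-image', '--id', 'admin']).strip().decode()

    # Hand the block device to a container.
    subprocess.check_call(
        ['docker', 'run', '-d', '--device', '%s:%s' % (dev, dev),
         'ubuntu', 'sleep', 'infinity'])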

Page 10: Ceph Day Beijing: Containers and Ceph

Should containers follow VMs and use the mature block path (RBD)?

Page 11: Ceph Day Beijing: Containers and Ceph

Different App Provisioning Model

❖ Containers vs. virtualization

❖ Hardware abstraction

❖ Application centric

❖ Per-VM isolation, guest environment and lifecycle defined by the application

❖ Application isolation

❖ Density

❖ New provisioning model

❖ Micro-services

❖ Multi-instance, multi-version, maximal flexibility, minimal overhead

❖ Block

❖ Physical block abstraction

❖ Unknown user data layout

❖ Difficult to bind a block device to container(s)

Page 12: Ceph Day Beijing: Containers and Ceph

Data Aware

Page 13: Ceph Day Beijing: Containers and Ceph

Ceph Storage Layout

[Diagram: RADOS (a file-like object interface) spreads objects across OSDs, each backed by a block device. RBD presents a block device whose blocks are striped across RADOS objects; CephFS (strict POSIX) presents directories and files that are likewise stored as RADOS objects.]
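
To make the layout concrete, here is a toy illustration (not from the slides) of RBD's block-to-object striping, assuming the default 4 MiB object size; CephFS stripes file data over RADOS objects in the same spirit.

    OBJECT_SIZE = 4 * 1024 * 1024          # default RBD object size (4 MiB)

    def object_index(byte_offset):
        """Which RADOS object of the image a block offset falls into."""
        return byte_offset // OBJECT_SIZE

    # A 4 KiB write at offset 1 GiB lands in object 256 of the image.
    print(object_index(1 * 1024 ** 3))     # -> 256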

Page 14: Ceph Day Beijing: Containers and Ceph

Containers may like file more than block

Page 15: Ceph Day Beijing: Containers and Ceph

CephFS Update

Page 16: Ceph Day Beijing: Containers and Ceph

File Storage

❖ Familiar POSIX semantics (POSIX is a lingua franca)

❖ Fully shared volume – many clients can mount and share data

❖ Elastic storage – amount of data can grow/shrink without explicit provisioning

CephFS

Page 17: Ceph Day Beijing: Containers and Ceph

CephFS Architecture

❖ Inherits the resilience and scalability of RADOS

❖ Multiple metadata daemons (MDS) handling dynamically shared metadata

❖ FUSE & kernel clients: POSIX compatibility

❖ Extra features: Subtree snapshots, recursive statistics

Page 18: Ceph Day Beijing: Containers and Ceph

Detecting failures

❖ MDS

❖ “beacon” pings to RADOS MONs. Logic on MONs decides when to mark an MDS failed and promote another daemon to take its place

❖ Clients:

❖ “RenewCaps” pings to each MDS with which it has a session. Each MDS individually decides to drop a client's session (and release its capabilities) if renewals arrive too late.
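
A toy sketch (not from the slides) of the two timeouts described above; the grace values are stand-ins in the spirit of options like mds_beacon_grace and the MDS session timeout, and the code is illustrative only.

    BEACON_GRACE = 15.0     # seconds the mons wait for an MDS beacon (example value)
    SESSION_TIMEOUT = 60.0  # seconds an MDS waits for a client RenewCaps (example value)

    last_beacon = {}        # mds name  -> time of its last beacon
    last_renew = {}         # client id -> time of its last RenewCaps

    def failed_mds(now):
        """MDS daemons the mons would mark failed and replace with a standby."""
        return [m for m, t in last_beacon.items() if now - t > BEACON_GRACE]

    def stale_clients(now):
        """Client sessions an MDS would drop, releasing their capabilities."""
        return [c for c, t in last_renew.items() if now - t > SESSION_TIMEOUT]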

Page 19: Ceph Day Beijing: Containers and Ceph

The Now

❖ Priority

❖ Complete FSCK & repair tools

❖ Tenant Security/Auth

❖ Other work:

❖ Multi-MDS hardening

❖ Snapshot hardening

Page 20: Ceph Day Beijing: Containers and Ceph

Orchestration

Page 21: Ceph Day Beijing: Containers and Ceph

Existing VM & File

❖ NFS + cephfs.ko

❖ VirtFS/9p + cephfs.ko

Page 22: Ceph Day Beijing: Containers and Ceph

Nova-Docker & CephFS

❖ Model (host-side sketch below)

❖ host mounts CephFS directly

❖ mount --bind the share into the container namespace

❖ Pros

❖ best performance

❖ full CephFS semantics

❖ Cons

❖ relies on the container for security

❖ Status

❖ no prototype yet
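
Not from the slides: a host-side sketch of this model, assuming the CephFS kernel client, a monitor at mon1:6789, and Docker; paths, names, and the keyring location are examples.

    import subprocess

    # 1. The host mounts CephFS once via the kernel client.
    subprocess.check_call(
        ['mount', '-t', 'ceph', 'mon1:6789:/', '/mnt/cephfs',
         '-o', 'name=admin,secretfile=/etc/ceph/admin.secret'])

    # 2. Bind the share into the container's namespace; `docker run -v`
    #    performs the bind mount (mount --bind would do the same for lxc).
    subprocess.check_call(
        ['docker', 'run', '-d', '-v', '/mnt/cephfs/tenant-a:/mnt/share',
         'ubuntu', 'sleep', 'infinity'])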

Page 23: Ceph Day Beijing: Containers and Ceph

Kubernetes & CephFS

❖ Pure Kubernetes

❖ Volume drivers

❖ AWS EBS, Google block storage

❖ CephFS

❖ NFS

❖ …

❖ Status

❖ Under review (https://github.com/GoogleCloudPlatform/kubernetes/pull/6649)

❖ Drivers expect pre-existing volumes

❖ Expected deployment mode

❖ Pod (shared file volume; see the sketch below)

❖ Makes micro-services easy with shared storage
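
Not from the slides: a sketch of the expected pod shape, expressed as the Python dict you would hand to the API. The cephfs volume fields follow the pull request above, which was still under review, so the exact names may differ.

    pod = {
        'apiVersion': 'v1',
        'kind': 'Pod',
        'metadata': {'name': 'cephfs-demo'},
        'spec': {
            'containers': [{
                'name': 'web',
                'image': 'nginx',
                # Every container in the pod can mount the same shared volume.
                'volumeMounts': [{'name': 'shared',
                                  'mountPath': '/usr/share/nginx/html'}],
            }],
            'volumes': [{
                'name': 'shared',
                'cephfs': {                         # pre-existing CephFS share
                    'monitors': ['mon1:6789'],
                    'user': 'admin',
                    'secretFile': '/etc/ceph/admin.secret',
                },
            }],
        },
    }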

Page 24: Ceph Day Beijing: Containers and Ceph

Kubernetes on OpenStack

❖ Provision Nova VMs

❖ KVM or Ironic

❖ Atomic or CoreOS

❖ Kubernetes per tenant

❖ Provision storage devices (sketched below)

❖ Cinder for volumes

❖ Manila for shares

❖ Kubernetes binds them into the pod/container

❖ Status

❖ Prototype Cinder plugin for Kubernetes (https://github.com/spothanis/kubernetes/tree/cinder-vol-plugin)
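
Not from the slides: the provisioning half of that flow, sketched with python-cinderclient (credentials and endpoint are placeholders); the prototype plugin above would then attach the resulting volume and bind it into a pod, and Manila plays the equivalent role for file shares.

    from cinderclient import client

    # Authenticate against Keystone and ask Cinder for a 10 GiB volume, which
    # an RBD-backed Cinder deployment carves out of the Ceph cluster.
    cinder = client.Client('2', 'demo-user', 'demo-password', 'demo-tenant',
                           'http://keystone.example:5000/v2.0')
    volume = cinder.volumes.create(size=10, name='k8s-data')
    print(volume.id)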

