Ceph: de facto storage backend for OpenStack


OpenStack Summit 2013, Hong Kong

Whoamiđź’Ą SĂ©bastien Hanđź’Ą French Cloud Engineer working for eNovanceđź’Ą Daily job focused on Ceph and OpenStackđź’Ą Blogger

Personal blog: http://www.sebastien-han.fr/blog/Company blog: http://techs.enovance.com/

Worldwide offices coverage
We design, build and run clouds – anytime, anywhere

Ceph
What is it?

The project

âžś Unified distributed storage system

âžś Started in 2006 as a PhD by Sage Weil

âžś Open source under LGPL license

âžś Written in C++

âžś Build the future of storage on commodity hardware

Key features

âžś Self managing/healing

âžś Self balancing

âžś Painless scaling

âžś Data placement with CRUSH

Controlled Replication Under Scalable Hashing

âžś Pseudo-random placement algorithm

âžś Statistically uniform distribution

âžś Rule-based configuration

Overview

Building a Ceph cluster
General considerations

How to start?âžś Use case

• IO profile: bandwidth? IOPS? Mixed?
• Guaranteed IOs: how many IOPS or how much bandwidth do I want to deliver per client?
• Usage: do I run Ceph standalone or combined with a software solution?

➜ Amount of data (usable, not raw) – see the sizing sketch below
• Replica count
• Failure ratio: how much data am I willing to rebalance if a node fails?
• Do I have a data growth plan?

âžś Budget :-)

Things that you must not do

➜ Don't put RAID underneath your OSDs
• Ceph already manages the replication
• A degraded RAID hurts performance
• It reduces the usable space of the cluster

➜ Don't build high-density nodes with a tiny cluster
• Failure considerations: a lot of data to re-balance when one node goes down (see the arithmetic after this list)
• Potential full cluster

âžś Don't run Ceph on your hypervisors (unless you're broke)

State of the integration
Including Havana's best additions

Why is Ceph so good?

It unifies OpenStack components

Havana’s additions

➜ Complete refactor of the Cinder driver:

• librados and librbd usage
• Flatten volumes created from snapshots
• Clone depth
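
The same librados/librbd calls the refactored driver relies on are also exposed by the Python bindings. A minimal sketch of snapshot → clone → flatten, assuming python-rados/python-rbd, a reachable cluster, an existing format-2 image, and made-up pool/image names ('volumes', 'volume-src', 'volume-new'):

    import rados
    import rbd

    with rados.Rados(conffile='/etc/ceph/ceph.conf') as cluster:
        with cluster.open_ioctx('volumes') as ioctx:
            with rbd.Image(ioctx, 'volume-src') as src:
                src.create_snap('clone-point')
                src.protect_snap('clone-point')      # a clone parent must be a protected snapshot
            rbd.RBD().clone(ioctx, 'volume-src', 'clone-point',
                            ioctx, 'volume-new',
                            features=rbd.RBD_FEATURE_LAYERING)   # copy-on-write clone
            with rbd.Image(ioctx, 'volume-new') as new:
                new.flatten()                        # copy the parent data in, so no clone chain remains

Flattening is what keeps the clone depth bounded when volumes are created from snapshots.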

➜ Cinder backup with a Ceph backend:
• Backing up within the same Ceph pool (not recommended)
• Backing up between different Ceph pools
• Backing up between different Ceph clusters
• Support for RBD stripes
• Differentials
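
The "differentials" part can be illustrated with librbd's diff_iterate, which reports the extents that changed since a given snapshot; those extents are what a differential backup has to ship. A rough sketch (the snapshot and pool/image names are hypothetical):

    import rados
    import rbd

    changed = []

    def record(offset, length, exists):
        # exists=False means the extent was discarded since the snapshot
        changed.append((offset, length, exists))

    with rados.Rados(conffile='/etc/ceph/ceph.conf') as cluster:
        with cluster.open_ioctx('volumes') as ioctx:
            with rbd.Image(ioctx, 'volume-src') as image:
                image.diff_iterate(0, image.size(), 'last-backup', record)

    print(f"{len(changed)} changed extents to send to the backup pool or cluster")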

➜ Nova libvirt_images_type = rbd
• Directly boot all the VMs in Ceph
• Volume QoS

Today’s Havana integration

Is Havana the perfect stack?

…

Well, almost…

What’s missing?

âžś Direct URL download for Nova

• Already in the pipeline, probably for 2013.2.1

➜ Nova’s snapshots integration

• Ceph snapshot

https://github.com/jdurgin/nova/commits/havana-ephemeral-rbd

Icehouse and beyond
Future

Tomorrow’s integration

Icehouse roadmap

➜ Implement “bricks” for RBD

âžś Re-implement snapshotting function to use RBD snapshot

âžś RBD on Nova bare metal

âžś Volume migration support

âžś RBD stripes support

« J Â» potential roadmapâžś Manila support

Ceph, what’s coming up?
Roadmap

Firefly

âžś Tiering - cache pool overlay

âžś Erasure code

âžś Ceph OSD ZFS

âžś Full support of OpenStack Icehouse

Many thanks!

Questions?

Contact: sebastien@enovance.com
Twitter: @sebastien_han
IRC: leseb