
Using Ceph in OStack.de - Ceph Day Frankfurt

Transcript
Page 1: Using Ceph in OStack.de - Ceph Day Frankfurt

Burkhard Noltensmeier, teuto.net Netzdienste GmbH

Erkan Yanar, Consultant

Page 2: Using Ceph in OStack.de - Ceph Day Frankfurt

teuto.net Netzdienste GmbH

● 18 employees
● Linux systems house and web development
● Ubuntu Advantage Partner
● OpenStack Ceph Service
● Offices and data center in Bielefeld

Page 3: Using Ceph in OStack.de - Ceph Day Frankfurt

Why OpenStack?

Infrastructure as a Service
● cloud-init (automated instance provisioning)
● Network virtualization
● Multiple storage options
● Multiple APIs for automation

Page 4: Using Ceph in OStack.de - Ceph Day Frankfurt

● Closed beta since September 2013
● Updated to Havana in October
● Ubuntu Cloud Archive
● 20 compute nodes
● 5 Ceph nodes
● Additional monitoring with Graphite

Page 5: Using Ceph in OStack.de - Ceph Day Frankfurt

Provisioning and Orchestration

Page 6: Using Ceph in OStack.de - Ceph Day Frankfurt

OpenStack Storage Types

● Block storage
● Object storage
● Image repository
● Internal cluster storage
  – Temporary image store
  – Databases (MySQL Galera, MongoDB)

Page 7: Using Ceph in OStack.de - Ceph Day Frankfurt

Storage Requirements

● Scalability
● Redundancy
● Performance
● Efficient pooling

Page 8: Using Ceph in OStack.de - Ceph Day Frankfurt
Page 9: Using Ceph in OStack.de - Ceph Day Frankfurt

Key Facts for our Decision

● One Ceph cluster fits all OpenStack needs
● No "single point of failure"
● POSIX compatibility via RADOS Block Device
● Seamless scalability
● Commercial support by Inktank
● Open source (LGPL)

Page 10: Using Ceph in OStack.de - Ceph Day Frankfurt

RADOS Block Storage
● Live migration
● Efficient snapshots
● Different types of storage available (tiering)
● Cloning for fast restore or scaling (see the sketch below)
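
A minimal sketch of the snapshot/clone workflow the slide alludes to, using the standard rbd CLI; the pool and image names (volumes/myvol) are placeholders, not taken from the deck:

# Take a snapshot, protect it, then create a copy-on-write clone
# for fast restore or scale-out.
rbd snap create volumes/myvol@base
rbd snap protect volumes/myvol@base
rbd clone volumes/myvol@base volumes/myvol-restore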

Page 11: Using Ceph in OStack.de - Ceph Day Frankfurt

How to start

● Determine cluster size
  – an odd number of nodes so the monitors can reach quorum
● Start small with at least 5 nodes
● Either 8 or 12 disks per chassis
● One journal per disk
● 2 journal SSDs per chassis (see the sketch below)
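
The deck does not say how the OSDs were provisioned; as a rough sketch with the ceph-deploy syntax of that era (host:data-disk:journal), two data disks sharing journal partitions on one SSD could look like this — host and device names are assumptions:

# /dev/sdg is the journal SSD, one partition per OSD
ceph-deploy osd create ceph-node1:/dev/sdb:/dev/sdg1
ceph-deploy osd create ceph-node1:/dev/sdc:/dev/sdg2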

Page 12: Using Ceph in OStack.de - Ceph Day Frankfurt

Rough calculation

● 3 nodes, 8 disks per node, 2 replicas
● Net = gross / 2 replicas – 1 node (33%) = 33% of gross

Cluster gross
● 24 × 2 TB SATA disks, 100 IOPS each

Cluster net
● 15.8 terabytes, 790 IOPS

Page 13: Using Ceph in OStack.de - Ceph Day Frankfurt

Rough calculation

● 5 nodes, 8 disks per node, 3 replicas
● Net = gross / 3 replicas – 1 node (20%) = 27% of gross

Cluster gross
● 40 × 2 TB SATA disks, 100 IOPS each

Cluster net
● 21.3 terabytes, 1066 IOPS (worked example below)

Page 14: Using Ceph in OStack.de - Ceph Day Frankfurt

Ceph specifics

● Data is distributed throughout the cluster
● Unfortunately this destroys data locality
● Tradeoff between block size and IOPS: the bigger the blocks, the better the sequential performance
● Double write: SSD journals strongly advised
● Long-term fragmentation from small writes

Page 15: Using Ceph in OStack.de - Ceph Day Frankfurt

Operational Challenges

● Performance
● Availability
● QoS (Quality of Service)

Page 16: Using Ceph in OStack.de - Ceph Day Frankfurt

Ceph Monitoring in ostack

● Ensure quality with monitoring
● Easy spotting of congestion problems
● Event monitoring (e.g. disk failure)
● Capacity management

Page 17: Using Ceph in OStack.de - Ceph Day Frankfurt

What we did

● Disk monitoring with Icinga
● Collect data via the Ceph admin socket JSON interface
● Put it into Graphite (see the sketch below)
● Enrich it with metadata
  – OpenStack tenant
  – Ceph node
  – OSD
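
A hedged sketch of this pipeline, not the presenters' actual tooling: read the perf counters of one OSD from its admin socket, format two of them as Graphite's plaintext protocol and push them to carbon on port 2003. The counter names (op_r, op_w) and the Graphite host are assumptions:

# Requires jq; run on the Ceph node that hosts the OSD.
OSD=0
SOCK=/var/run/ceph/ceph-osd.${OSD}.asok
TS=$(date +%s)
ceph --admin-daemon "$SOCK" perf dump \
  | jq -r --arg ts "$TS" --arg osd "$OSD" \
      '.osd | "ceph.osd\($osd).op_r \(.op_r) \($ts)", "ceph.osd\($osd).op_w \(.op_w) \($ts)"' \
  | nc -w 1 graphite.example.com 2003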

Page 18: Using Ceph in OStack.de - Ceph Day Frankfurt
Page 19: Using Ceph in OStack.de - Ceph Day Frankfurt

Cumulative OSD performance

Page 20: Using Ceph in OStack.de - Ceph Day Frankfurt

Single OSD performance

Page 21: Using Ceph in OStack.de - Ceph Day Frankfurt

Sum by OpenStack tenant

Page 22: Using Ceph in OStack.de - Ceph Day Frankfurt

Verify Ceph Performance

● fio benchmark with fixed file size:
  fio --fsync=<n> --runtime=60 --size=1g --bs=<n> ...
● Different sync options: nosync, 1, 100
● Different Cinder QoS service options
● Block sizes: 64k, 512k, 1024k, 4096k
● 1 up to 4 VM clients
● Resulting in 500 benchmark runs (see the sketch below)
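
The slides do not show the benchmark harness; a minimal sketch of how such a sweep could be scripted from one VM client, using the fio options from the slide (job name, target directory and the randwrite workload are assumptions):

# One fio run per block size / fsync combination; fsync=0 stands for nosync.
for bs in 64k 512k 1024k 4096k; do
  for fsync in 0 1 100; do
    fio --name=qos-test --directory=/mnt/cinder-volume \
        --rw=randwrite --bs=$bs --size=1g \
        --fsync=$fsync --runtime=60 --time_based \
        --output=fio_${bs}_fsync${fsync}.log
  done
done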

Page 23: Using Ceph in OStack.de - Ceph Day Frankfurt
Page 24: Using Ceph in OStack.de - Ceph Day Frankfurt

Cinder Quality of Service

$ cinder qos-create high-iops consumer="front-end" \
    read_iops_sec=100 write_iops_sec=100 \
    read_bytes_sec=41943040 write_bytes_sec=41943040

$ cinder qos-create low-iops consumer="front-end" \
    read_iops_sec=50 write_iops_sec=50 \
    read_bytes_sec=20971520 write_bytes_sec=20971520

$ cinder qos-create ultra-low-iops consumer="front-end" \
    read_iops_sec=10 write_iops_sec=10 \
    read_bytes_sec=10485760 write_bytes_sec=10485760
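
Not shown on the slide, but needed before the limits apply: the QoS spec has to be associated with a Cinder volume type that instances then use. The IDs below are placeholders:

$ cinder type-create high-iops
$ cinder qos-associate <qos-spec-id> <volume-type-id>
$ cinder create --volume-type high-iops --display-name qos-test 10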

Page 25: Using Ceph in OStack.de - Ceph Day Frankfurt

Speed per Cinder QoS

Page 26: Using Ceph in OStack.de - Ceph Day Frankfurt

Does it scale?

Page 27: Using Ceph in OStack.de - Ceph Day Frankfurt

Effect of syncing Files

Page 28: Using Ceph in OStack.de - Ceph Day Frankfurt

Different block sizes with sync

Page 29: Using Ceph in OStack.de - Ceph Day Frankfurt

Ceph is somewhat complex, but

● reliable
● No unpleasant surprises (so far!)
● Monitoring is important for resource management and availability!

