
Ceph Deployment with Dell Crowbar - Ceph Day Frankfurt

Description:
Paul Brook and Michael Holzerland, Dell
Transcript
Page 1: Ceph Deployment with Dell Crowbar - Ceph Day Frankfurt

Wicked Easy Ceph Block Storage & OpenStack Deployment with Crowbar

Michael Holzerland:

[email protected]

Paul Brook

[email protected]

Twitter @paulbrookatdell

Page 2: Ceph Deployment with Dell Crowbar - Ceph Day Frankfurt


Agenda:

• Introduction: Inktank & Dell
• Dell Crowbar: automation, scale
• Best practice with a Ceph cluster
• Best practice networking
• Whitepapers
• Crowbar demo

Page 3: Ceph Deployment with Dell Crowbar - Ceph Day Frankfurt


Dell is a certified reseller of Inktank Services, Support and Training.

• Need to access and buy Inktank services & support?
• Inktank 1-year subscription packages
  – Inktank Pre-Production subscription
  – Gold (24x7) subscription
• Inktank Professional Services
  – Ceph Pro Services Starter Pack
  – Additional service days as options
• Ceph training from Inktank
  – Inktank Ceph100 Fundamentals Training
  – Inktank Ceph110 Operations and Tuning Training
  – Inktank Ceph120 Ceph and OpenStack Training

Page 4: Ceph Deployment with Dell Crowbar - Ceph Day Frankfurt

[Diagram: Dell OpenStack Cloud Solution, built from hardware (HW), software (SW, including "Crowbar" and CloudOps software), operations (OPS), services & consulting, and a reference architecture]

Page 6: Ceph Deployment with Dell Crowbar - Ceph Day Frankfurt

Data Center Solutions

Crowbar

Page 7: Ceph Deployment with Dell Crowbar - Ceph Day Frankfurt

4) Complementary products

[Diagram: Dell "Crowbar" Ops Management spanning the full stack: APIs, user access & ecosystem partners; cloud infrastructure; core components & operating systems; physical resources]

Page 8: Ceph Deployment with Dell Crowbar - Ceph Day Frankfurt

Barclamps! Automated and simple installation

[Diagram: numbered barclamps (#1 to #9) deploying the OpenStack components: Nova compute nodes and controller, Quantum, Cinder with block devices (SAN/NAS/DAS), the Swift proxy and store nodes (min. 3 nodes), Glance, Keystone, the Dashboard (UI), scheduler, database, and RabbitMQ, each exposing its API]

Page 9: Ceph Deployment with Dell Crowbar - Ceph Day Frankfurt

Crowbar landing page

• http://crowbar.github.io/


Page 10: Ceph Deployment with Dell Crowbar - Ceph Day Frankfurt

Best Practices

Page 11: Ceph Deployment with Dell Crowbar - Ceph Day Frankfurt

Object Storage Daemons (OSD)

• Allocate sufficient CPU cycles and memory per OSD
  – 2 GB of memory and 1 GHz of AMD or Xeon CPU cycles per OSD
  – Hyper-Threading can be used on Xeon Sandy Bridge and later
• Use SSDs as dedicated journal devices to improve random latency
  – Some workloads benefit from separate journal devices on SSDs
  – Rule of thumb: 6 OSDs per SSD
• No RAID controller – just JBOD
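
The sizing rules above can be turned into a quick host-level sanity check. Below is a minimal Python sketch (not from the deck; the function name and the example host are illustrative assumptions):

# Sanity-check a planned OSD host against the rules of thumb above:
# 2 GB RAM and 1 GHz of CPU per OSD, and at most 6 OSDs per journal SSD.
def check_osd_host(osd_count, ram_gb, cpu_ghz_total, journal_ssds):
    """Return a list of warnings for a planned OSD host (illustrative)."""
    warnings = []
    if ram_gb < 2 * osd_count:
        warnings.append(f"need >= {2 * osd_count} GB RAM for {osd_count} OSDs, have {ram_gb} GB")
    if cpu_ghz_total < osd_count:
        warnings.append(f"need >= {osd_count} GHz aggregate CPU, have {cpu_ghz_total:g} GHz")
    if journal_ssds and osd_count / journal_ssds > 6:
        warnings.append(f"{osd_count / journal_ssds:.1f} OSDs per journal SSD exceeds the 6:1 rule of thumb")
    return warnings

# Example host: 12 OSDs, 32 GB RAM, 4 cores at 1.8 GHz, 2 journal SSDs.
for w in check_osd_host(osd_count=12, ram_gb=32, cpu_ghz_total=4 * 1.8, journal_ssds=2):
    print(w)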

Page 12: Ceph Deployment with Dell Crowbar - Ceph Day Frankfurt

Ceph Cluster Monitors

• Best practice: deploy the monitor role on dedicated hardware
  – Not resource intensive, but critical
  – Using separate hardware ensures no contention for resources
• Make sure monitor processes are never starved for resources
  – If running a monitor process on shared hardware, fence off resources
• Deploy an odd number of monitors (3 or 5)
  – An odd number of monitors is needed for quorum voting
  – Clusters of fewer than 200 nodes work well with 3 monitors
  – Larger clusters may benefit from 5
  – The main reason to go to 7 is to gain redundancy across fault zones
• Add redundancy to monitor nodes as appropriate
  – Make sure the monitor nodes are distributed across fault zones
  – Consider refactoring fault zones if more than 7 monitors are needed
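
The "odd number of monitors" advice follows from simple quorum arithmetic. The short sketch below (plain Python, not the Ceph API) shows how many monitor failures each cluster size tolerates while keeping a strict majority:

# A monitor quorum needs a strict majority of monitors to be up.
def failures_tolerated(monitors):
    quorum = monitors // 2 + 1      # strict majority
    return monitors - quorum        # monitors that may fail

for n in (3, 4, 5, 7):
    print(f"{n} monitors: quorum = {n // 2 + 1}, tolerates {failures_tolerated(n)} failure(s)")

With 3 or 4 monitors you can still only lose one, which is why even counts add cost without adding resilience.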

Page 13: Ceph Deployment with Dell Crowbar - Ceph Day Frankfurt

Potential Dell Server Hardware Choices

• Rackable storage node – Dell PowerEdge R720XD or R515
  – Intel Xeon E5-2603 v2 or AMD C32 platform
  – 32 GB RAM
  – 2x 400 GB SSD drives (OS and, optionally, journals)
  – 12x 4 TB SATA drives
  – 2x 10GbE, 1x 1GbE, IPMI
• Bladed storage node – Dell PowerEdge C8000XD (disk) and PowerEdge C8220 (CPU)
  – 2x Xeon E5-2603 v2 CPUs, 32 GB RAM
  – 2x 400 GB SSD drives (OS and, optionally, journals)
  – 12x 4 TB NL-SAS drives
  – 2x 10GbE, 1x 1GbE, IPMI
• Monitor node – Dell PowerEdge R415
  – 2x 1 TB SATA
  – 1x 10GbE
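
For rough capacity planning, the rackable configuration can be written down as data and checked against the OSD rules of thumb from the earlier slide. A small sketch (the dictionary keys are illustrative; the numbers come from the list above):

# Rackable storage node, expressed as data and checked against the OSD guidance.
rackable_node = {
    "model": "PowerEdge R720XD / R515",
    "ram_gb": 32,
    "journal_ssds": 2,
    "data_drives": 12,
    "drive_tb": 4,
}

osds = rackable_node["data_drives"]                   # one OSD per data drive
raw_tb = osds * rackable_node["drive_tb"]             # 48 TB raw per node
ram_per_osd = rackable_node["ram_gb"] / osds          # ~2.7 GB per OSD (>= 2 GB target)
osds_per_ssd = osds / rackable_node["journal_ssds"]   # 6:1, matching the rule of thumb

print(f"{raw_tb} TB raw, {ram_per_osd:.1f} GB RAM per OSD, {osds_per_ssd:.0f} OSDs per journal SSD")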

Page 14: Ceph Deployment with Dell Crowbar - Ceph Day Frankfurt

Configure Networking within the Rack

• Each pod (e.g., a row of racks) contains two spine switches
• Each leaf switch is redundantly uplinked to each spine switch
• Spine switches are redundantly linked to each other with 2x 40GbE
• Each spine switch has three uplinks to other pods with 3x 40GbE (see the sketch after the diagram)


[Diagram: three racks, each with its nodes connected over 10GbE links to a high-speed top-of-rack (leaf) switch; each leaf switch uplinks over 40GbE to two high-speed end-of-row (spine) switches, which connect onward to other rows (pods)]
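
The leaf-spine wiring described above can be modelled in a few lines of Python; a sketch assuming three racks per pod (as in the diagram) with hypothetical switch names:

# Leaf-spine links for one pod: each leaf (top-of-rack) switch is redundantly
# uplinked to both spine switches; spines interconnect with 2x 40GbE and have
# 3x 40GbE uplinks to other pods. All names are illustrative.
def pod_links(racks=3, spines=("spine-1", "spine-2")):
    links = []
    for r in range(1, racks + 1):
        for spine in spines:
            links.append((f"leaf-{r}", spine, "40GbE uplink"))
    links.append((spines[0], spines[1], "2x 40GbE inter-spine"))
    for spine in spines:
        links.append((spine, "other pods", "3x 40GbE"))
    return links

for a, b, kind in pod_links():
    print(f"{a} <-> {b}: {kind}")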

Page 15: Ceph Deployment with Dell Crowbar - Ceph Day Frankfurt

Networking Overview

•Plan for low latency and high bandwidth

•Use 10GbE switches within the rack

•Use 40GbE uplinks between racks

•One option: Dell Force10 S4810 switches with port aggregation in the rack, and Force10 S6000 switches at the 40GbE aggregation level


Page 16: Ceph Deployment with Dell Crowbar - Ceph Day Frankfurt

Whitepapers!

Page 17: Ceph Deployment with Dell Crowbar - Ceph Day Frankfurt

Questions? - Demo -

