Red Hat Ceph Storage: Past, Present and Future

Transcript
Page 1: Red Hat Ceph Storage: Past, Present and Future

RED HAT CEPH STORAGE: PAST, PRESENT AND FUTURE

Neil Levine
June 25, 2015

Page 2: Red Hat Ceph Storage: Past, Present and Future

AGENDA

Red Hat Storage Overview

Past: Retrospective on Inktank acquisition; Red Hat Ceph Storage 1.2

Present: Red Hat Ceph Storage 1.3; RHEL-OSP with 1.3

Future: Red Hat Ceph Storage 2.0; OpenStack and Containers

Page 3: Red Hat Ceph Storage: Past, Present and Future

Open Software-Defined Storage is a fundamental reimagining of how storage infrastructure works.

It provides substantial economic and operational advantages, and it is ideally suited to a growing number of use cases.

TODAY → EMERGING → FUTURE: Cloud Infrastructure, Cloud-Native Apps, Analytics, Hyper-Convergence, Containers, ???, ???

OPEN, SOFTWARE-DEFINED STORAGE

Page 4: Red Hat Ceph Storage: Past, Present and Future

A RISING TIDE

“By 2020, between 70-80% of unstructured data will be held on lower-cost storage managed by SDS environments.”

“By 2019, 70% of existing storage array products will also be available as software-only versions.”

“By 2016, server-based storage solutions will lower storage hardware costs by 50% or more.”

Gartner: “IT Leaders Can Benefit From Disruptive Innovation in the Storage Industry”

Gartner: “Innovation Insight: Separating Hype From Hope for Software-Defined Storage”

Market size is projected to increase approximately 20% year-over-year between 2015 and 2019.

SDS-P MARKET SIZE BY SEGMENT (Block Storage, File Storage, Object Storage, Hyperconverged)

2013: $457B
2014: $592B
2015: $706B
2016: $859B
2017: $1,029B
2018: $1,195B
2019: $1,349B

Source: IDC

Software-Defined Storage is leading a shift in the global storage industry, with far-reaching effects.

Page 5: Red Hat Ceph Storage: Past, Present and Future

THE RED HAT STORAGE PORTFOLIO

Ceph management | Gluster management

Ceph data services | Gluster data services

OPEN SOURCE SOFTWARE on STANDARD HARDWARE

Shared-nothing, scale-out architecture provides durability and adapts to changing demands

Self-managing and self-healing features reduce operational overhead

Standards-based interfaces and full APIs ease integration with applications and systems

Supported by the experts at Red Hat

Page 6: Red Hat Ceph Storage: Past, Present and Future

RED HAT CEPH STORAGE

Powerful distributed storage for the cloud and beyond

Built from the ground up as a next-generation storage system, based on years of research and suitable for powering infrastructure platforms

Highly tunable, extensible, and configurable, with policy-based control and no single point of failure

Offers mature interfaces for block and object storage for the enterprise

TARGET USE CASES

Cloud Infrastructure
● VM storage with OpenStack Cinder, Glance & Nova
● Object storage for tenant apps

Rich Media and Archival
● S3-compatible object storage (see the sketch below)

Customer Highlight: Cisco
Cisco uses Red Hat Ceph Storage to deliver storage for next-generation cloud services
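As a rough sketch of what S3 compatibility looks like to an application, the snippet below uses Python's boto library against a Ceph Object Gateway endpoint. The host name, credentials, and bucket name are hypothetical placeholders, not values from this deck.

    import boto
    import boto.s3.connection

    # Placeholder credentials for a gateway user (e.g. one created
    # with radosgw-admin); the endpoint is hypothetical.
    conn = boto.connect_s3(
        aws_access_key_id='ACCESS_KEY',
        aws_secret_access_key='SECRET_KEY',
        host='rgw.example.com',
        is_secure=False,
        calling_format=boto.s3.connection.OrdinaryCallingFormat(),
    )

    # Ordinary S3 calls work unchanged against the gateway.
    bucket = conn.create_bucket('tenant-app-assets')
    key = bucket.new_key('hello.txt')
    key.set_contents_from_string('Hello from Ceph!')

    for obj in bucket.list():
        print(obj.name, obj.size)

Because the interface is standard S3, the same application code can later be pointed at any S3 endpoint, which is the portability argument behind this use case.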

Page 7: Red Hat Ceph Storage: Past, Present and Future

FOCUSED SET OF USE CASES

CLOUD INFRASTRUCTURE
● Virtual machine storage with OpenStack
● Object storage for tenant applications

RICH MEDIA AND ARCHIVAL
● Cost-effective storage for rich media streaming
● Active archives

SYNC AND SHARE
● File sync and share with ownCloud

ANALYTICS
● Big Data analytics with Hadoop
● Machine data analytics with Splunk

ENTERPRISE VIRTUALIZATION
● Storage for conventional virtualization with RHEV

Page 8: Red Hat Ceph Storage: Past, Present and Future

PAST

Page 9: Red Hat Ceph Storage: Past, Present and Future

TIMELINE

May 2014: Inktank acquisition & Ceph Firefly released
Jul 2014: Inktank Ceph Enterprise v1.2
Mar 2015: Ceph Hammer released
Jun 2015: Red Hat Ceph Storage v1.3

Page 10: Red Hat Ceph Storage: Past, Present and Future

DETAIL: RED HAT CEPH STORAGE V1.2

Off-line installer (MGMT): All required dependencies are now included within a local package repository, allowing deployment to non-Internet-connected storage nodes.

GUI management (MGMT): Administrators can now perform basic cluster administration tasks through Calamari, the Ceph visual interface.

Erasure coding (CORE): Erasure-coded storage back-ends are now available, providing durability with lower capacity requirements than traditional, replicated back-ends.

Cache tiering (CORE): A cache tier pool can now be designated as a writeback or read cache for an underlying storage pool in order to provide cost-effective performance. (Erasure coding and cache tiering are both illustrated in the sketch below.)

RADOS read-affinity (CORE): Clients can be configured to read objects from the closest replica, increasing performance and reducing network strain.

User and bucket quotas (OBJECT): The Ceph Object Gateway now supports and enforces quotas for users and buckets.

These features were introduced in version 1.2 of Red Hat Ceph Storage and have been supported by Red Hat since July 2014.
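The erasure coding and cache tiering features above map to monitor commands; below is a minimal sketch via the librados Python binding, assuming an admin keyring at the default conf path and hypothetical pool names (the exact command fields can vary by Ceph release).

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()

    def mon_cmd(**kwargs):
        # mon_command takes a JSON-encoded command; returns (ret, out, status).
        ret, out, status = cluster.mon_command(json.dumps(kwargs), b'')
        if ret != 0:
            raise RuntimeError(status)
        return out

    # Erasure-coded pool: durability at lower capacity cost than replication.
    mon_cmd(prefix='osd pool create', pool='ecpool', pg_num=128,
            pool_type='erasure')

    # Replicated pool placed in front of it as a writeback cache tier.
    mon_cmd(prefix='osd pool create', pool='cachepool', pg_num=128)
    mon_cmd(prefix='osd tier add', pool='ecpool', tierpool='cachepool')
    mon_cmd(prefix='osd tier cache-mode', pool='cachepool', mode='writeback')
    mon_cmd(prefix='osd tier set-overlay', pool='ecpool', overlaypool='cachepool')

    cluster.shutdown()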

Page 11: Red Hat Ceph Storage: Past, Present and Future

DELIVERING RED HAT CEPH STORAGE

BEFORE DURING AFTER

Page 12: Red Hat Ceph Storage: Past, Present and Future

DELIVERING RED HAT CEPH STORAGE

Bugzilla → Fork → Package → Doc → Test

Page 13: Red Hat Ceph Storage: Past, Present and Future

A GENUINE RED HAT PRODUCT

Page 14: Red Hat Ceph Storage: Past, Present and Future

CEPH SUCCESSES

Page 15: Red Hat Ceph Storage: Past, Present and Future

PRESENT

Page 16: Red Hat Ceph Storage: Past, Present and Future

RED HAT CEPH STORAGE 1.3

GA Today

Based on Ceph Hammer (0.94)

Core Themes: Robustness at Scale, Operational Efficiency, Performance

Page 17: Red Hat Ceph Storage: Past, Present and Future

Red Hat Ceph Storage 1.3 contains improved logic and algorithms that allow it to do the “right thing” for users with multi-petabyte clusters where hardware failure is normal:

ROBUSTNESS AT SCALE

Improved self-management for large clusters

● Improved automatic rebalancing logic, which prioritizes degraded over misplaced objects

● Rebalancing operations can be temporarily disabled so they don’t impact performance (see the sketch after this list)

● Time-scheduled scrubbing, to avoid disruption during peak times

● Sharding of object buckets to avoid hot-spots
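A minimal sketch of the rebalancing and scrub-window levers above, again via the librados Python binding; the flag name norebalance and the scrub-hour options are the upstream Ceph names from around the Hammer release, and the conf path is a placeholder.

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()

    def mon_cmd(**kwargs):
        ret, out, status = cluster.mon_command(json.dumps(kwargs), b'')
        if ret != 0:
            raise RuntimeError(status)

    # Pause rebalancing before planned maintenance...
    mon_cmd(prefix='osd set', key='norebalance')
    # ...do the disruptive work, then re-enable it.
    mon_cmd(prefix='osd unset', key='norebalance')

    # Time-scheduled scrubbing is a per-OSD setting; in ceph.conf:
    #   [osd]
    #   osd_scrub_begin_hour = 1   # only scrub between 01:00...
    #   osd_scrub_end_hour   = 5   # ...and 05:00 local time

    cluster.shutdown()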

Page 18: Red Hat Ceph Storage: Past, Present and Future

Ceph is a distributed system with lots of moving parts. Red Hat Ceph Storage 1.3 introduces features to help manage storage more efficiently.

OPERATIONAL EFFICIENCY

Making administration tasks easier

● Calamari now supports multiple users and clusters

● CRUSH management via Calamari API allows programmatic adjustment of placement policies

● Lightweight, embedded Civetweb server eases deployment of the Ceph Object Gateway

● Faster Ceph Block Device operations make resize, delete, and flatten quicker, while export parallelism makes backups faster (see the sketch below)
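For context, the block-device operations named above are the ones exposed through librbd; here is a small sketch using its Python binding, with hypothetical pool and image names.

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('rbd')  # default block-device pool

    try:
        # Grow an existing image to 20 GiB (resize is one of the sped-up paths).
        image = rbd.Image(ioctx, 'vm-disk-01')
        image.resize(20 * 1024 ** 3)
        image.close()

        # Flatten a clone, detaching it from its parent snapshot.
        clone = rbd.Image(ioctx, 'vm-disk-01-clone')
        clone.flatten()
        clone.close()

        # Delete an image that is no longer needed.
        rbd.RBD().remove(ioctx, 'scratch-disk')
    finally:
        ioctx.close()
        cluster.shutdown()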

Page 19: Red Hat Ceph Storage: Past, Present and Future

CEPH WITH SANDISK INFINIFLASH

Page 20: Red Hat Ceph Storage: Past, Present and Future

A number of performance tweaks improve the speed of Red Hat Ceph Storage 1.3 and increase I/O consistency:

PERFORMANCE

Speedier, more efficient distributed storage

● Optimizations for flash storage devices increase Ceph’s topline speed

● Read-ahead caching accelerates virtual machine booting in OpenStack (see the sketch after this list)

● Allocation hinting reduces XFS fragmentation to avoid performance degradation over time

● Cache hinting preserves the cache’s advantages and improves performance
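To make the read-ahead point concrete, client-side librbd settings can be supplied per connection; a sketch assuming the upstream option names rbd_cache and rbd_readahead_max_bytes (the conf path and values are placeholders).

    import rados

    # Options passed here override ceph.conf for this client only.
    cluster = rados.Rados(
        conffile='/etc/ceph/ceph.conf',
        conf={
            'rbd_cache': 'true',
            # Let read-ahead fetch up to 4 MiB ahead of the guest's reads.
            'rbd_readahead_max_bytes': str(4 * 1024 ** 2),
        },
    )
    cluster.connect()
    print('connected, fsid =', cluster.get_fsid())
    cluster.shutdown()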

(Chart: performance vs. scale)

Page 21: Red Hat Ceph Storage: Past, Present and Future

OTHER FEATURES

S3 Object Expiration

Swift Storage Policies

IPv6 Support

Local/Pyramid Codes

Page 22: Red Hat Ceph Storage: Past, Present and Future

RED HAT CEPH STORAGE 1.3.z

SELinux Support

Satellite Integration

Puppet-based Installer (Tech Preview)

Page 23: Red Hat Ceph Storage: Past, Present and Future

RHEL OPENSTACK PLATFORM w/CEPH

Jun 2014: RHEL-OSP 5 (Icehouse) with ICE 1.2
Feb 2015: RHEL-OSP 6 (Juno) with RHCS 1.2.3 & 1.3.0
Jul 2015: RHEL-OSP 7 (Kilo) with RHCS 1.3.0

Page 24: Red Hat Ceph Storage: Past, Present and Future

RHEL OPENSTACK PLATFORM w/CEPH

RHEL-OSP 5: Integrated SKU | Integrated Installer (Client)

RHEL-OSP 6: Ephemeral Volumes

RHEL-OSP 7: Integrated Installer (Client and Server) | Image Conversion

Page 25: Red Hat Ceph Storage: Past, Present and Future

FUTURE

Page 26: Red Hat Ceph Storage: Past, Present and Future

DETAIL: RED HAT CEPH STORAGE “TUFNELL”

Performance Consistency (CORE): More intelligent scrubbing policies and improved peering logic reduce the impact of common operations on overall cluster performance.

Guided Repair (CORE): More information about objects will be provided to help administrators perform repair operations on corrupted data.

New Backing Store (Tech Preview) (CORE): A new backend for OSDs to provide performance benefits on existing and modern drives (SSD, K/V).

New UI (MGMT): A new user interface with improved sorting and visibility of critical data.

Alerting (MGMT): Introduction of alerting features that notify administrators of critical issues via email or SMS.

These projects are currently active in the Ceph development community. They may be available and supported by Red Hat once they reach the necessary level of maturity.

Page 27: Red Hat Ceph Storage: Past, Present and Future

DETAIL: RED HAT CEPH STORAGE “TUFNELL”

iSCSI (BLOCK): Introduction of a highly available iSCSI interface for the Ceph Block Device, allowing integration with legacy systems.

Mirroring (BLOCK): Capabilities for managing virtual block devices in multiple regions, maintaining consistency through automated mirroring of incremental changes.

NFS (OBJECT): Access to objects stored in the Ceph Object Gateway via standard Network File System (NFS) endpoints, providing storage for legacy systems and applications.

Active/Active Multi-Site (OBJECT): Support for deployment of the Ceph Object Gateway across multiple sites in an active/active configuration (in addition to the currently available active/passive configuration).

These projects are currently active in the Ceph development community. They may be available and supported by Red Hat once they reach the necessary level of maturity.

Page 28: Red Hat Ceph Storage: Past, Present and Future

RHEL OPENSTACK PLATFORM w/CEPH

RHEL-OSP 8: QoS | Live Migration | Disaster Recovery

Containers: RBD Driver for Kubernetes | S3 Backend for OpenShift

Page 29: Red Hat Ceph Storage: Past, Present and Future
