Technical White Paper
FUJITSU Hyperscale Storage System ETERNUS CD10000

This white paper provides an overview of the main features supported by the FUJITSU Storage ETERNUS CD10000 system. It highlights their benefits, explains use cases and briefly describes each feature.

Content
■ Management Summary
■ Introduction
■ Distributed Scale-out Storage
■ ETERNUS CD10000 Architecture
■ ETERNUS CD10000 Hardware Architecture
■ ETERNUS CD10000 Software Architecture
■ ETERNUS CD10000 Management System
■ ETERNUS CD10000 Unified Storage
■ System platform approach


Management Summary

This white paper describes the technical functionality of the FUJITSU Storage ETERNUS CD10000 hyperscale system. ETERNUS CD10000 provides unlimited, modular scalability of storage capacity and performance at zero downtime for instant and cost-efficient online access to extensive data volumes. Integrating open-source Ceph software into a storage system delivered with end-to-end maintenance from Fujitsu enables IT organizations to fully benefit from open standards without implementation and operational risks. Providing hyper-scalable object, block and file storage of more than 50 petabytes of data in a cost-optimized way, ETERNUS CD10000 is the ideal storage for OpenStack users, service providers for cloud, IT and telecommunications, as well as media and broadcasting companies. Financial and public institutions with ever-growing document repositories, large-scale business analytics/big data applications, and organizations with comprehensive multimedia data are equally well served by ETERNUS CD10000.

The ETERNUS CD10000 offers:
■ Unlimited and flexible scalability of capacity and performance
■ Fast data access with zero downtime
■ Cost-efficient storage for extremely large data volumes


Introduction

Among the most discussed IT topics are undoubtedly cloud services, big data analytics, social business, mobile broadband and virtualization, all of which must cope with unexpected data growth. From 2013 to 2020, 90 % of IT industry growth will be driven by third-party platform technologies that, today, represent just 22 % of ICT spending. (Source: IDC 12/12)

Services will be built on innovative mixtures of cloud, mobile device apps, social technologies, big data and more.

All of these scenarios need storage systems that can handle massive, petabyte-scale demand by growing storage capacity continuously without hitting physical limits.

Traditional RAID scale-up systems face limits when crossing the petabyte divide and run into unsolvable challenges:
■ High RAID rebuild times, high risks
■ Exponentially rising costs for HA
■ Over- or under-provisioning due to unexpected data growth
■ Extremely long data migration times
■ Significant issues with (planned) downtimes
■ Performance issues

Some storage vendors have started offering scale-out storage solutions by clustering RAID systems, pushing the limits with additional scalability, performance and reliability. Such a scale-out concept does not deliver the desired quality and resilience because:
■ The RAID problems remain for this kind of scale-out storage cluster
■ Performance scalability is not linear and reaches saturation due to central data distribution mechanisms
■ Costs per capacity remain high due to purpose-built architectures plus additional hardware and software components

These topics inevitably result in a paradigm shift in the data center environment. This data center transformation for converged systems will account for over a third of enterprise cloud deployments by 2016.

The central question for IT decision-makers is to identify the best-practice solution for their data center environments. The goal of going for “cloud” is simple: a typical IaaS system should provide users with a range of capacities, such as computing power and disk space, with less effort for the service provider and maximum flexibility for the platform. A cloud storage solution of this kind gives users flexible computing power and disk space exactly when they need the services. IT service providers face the challenge of keeping their own set-ups scalable and of handling peak loads in this manner.

A seamless extension of a scale-out platform needs to be easy and possible at any time. Typical usage scenarios for scale-out platforms can be found at cloud service and telecommunication providers, companies with large R&D activities, public institutions with huge document repositories, financial institutions or media, broadcasting and streaming companies. The Fujitsu ETERNUS CD10000 offers an outstanding hassle-free system platform which meets the demands for efficient cloud IT infrastructure solutions as described above.


Distributed Scale-out Storage

Scale-out storage is rapidly emerging as a viable alternative for addressing a wide variety of enterprise use cases, because it allows companies to add storage on a “pay-as-you-grow” basis for incremental capacity and performance. Target-oriented cost calculation is a further issue that customers wish to clarify, especially before investing. A scale-out storage solution lets them base the calculation on the storage resources actually required rather than on expected future capacities.

The explosion of unstructured data, the need to provide differentiated services, and the availability of better functionality and professional support will drive the demand for scale-out file system storage in the near future.

The main perspectives are:
■ Build foundation for efficient cloud computing
■ Demand for scalable and resilient storage infrastructure
■ “Pay as you grow” based on business requirements
■ Answer to target-oriented cost calculation based on required storage resources
■ Need to provide differentiated services
■ Implement cost-effective fault tolerance
■ Professional support and maintenance

IT organizations which need to provide constant online access to data at petabyte scale will need to evaluate distributed, hyperscaling storage architectures. Essential evaluation criteria for such a distributed scale-out storage platform are:

■ Scalability
 - Practically unlimited scalability in terms of performance & capacity
 - No performance bottlenecks
 - No hot spots regarding data access
 - Zero planned and unplanned downtime
■ Reliability
 - Full redundancy
 - Self-healing functionalities
 - Geographical dispersion
 - Fast rebuild of failed disks and nodes
■ Manageability
 - Central management of huge storage amounts
 - Unified multi-protocol access (block, file and object)
 - Seamless introduction of new storage

Let’s take a look at traditional scale-up storage, which works with RAID sets for redundancy purposes (Figure 1: Classical scale-up, RAID storage):
■ System is divided into RAID groups providing efficient data protection
■ Issues with rebuild times of large-capacity disks
■ Protection against system failures requires external add-ons
■ Limited performance scalability inhibits full capacity utilization of systems
■ Issues with performance hot-spots


While traditional scale-up storage systems distribute volumes across sub-sets of spindles, scale-out systems use algorithms to distribute volumes across all or many spindles to provide maximum utilization of all system resources.

Turning to distributed scale-out storage, we quickly arrive at the design of the new paradigm: software-based storage. Data is allocated homogeneously across disks. Figure 2 shows the different storage nodes, which are likewise part of the data distribution. Objects, files or block data are allocated homogeneously, so disk or node failures can happen without losing data or affecting performance. The system thus offers very high fault tolerance, capacity scalability and data migration with zero downtime.

Figure 2: Distributed scale-out storage
■ Data is broadly distributed over disks and nodes, delivering high I/O speed even with slow disks
■ Protection against disk and node failures is achieved by creating 2, 3, 4, … n replicas of data
■ Distributed file system avoids central bottlenecks
■ Adding nodes provides linear performance scalability
■ Fault tolerance and online migration of nodes are part of the design → zero downtime


How to achieve high availability

The traditional scale-up environment typically uses a structure based on disk controllers working as a redundant system. The RAID levels are RAID 6, where 2 drives can fail, and RAID 1, 5 or 10, where 1 drive can fail without losing data. As disk capacity rises constantly, rebuild times of failed disks increase as well. Thus the probability that additional disk failures occur during rebuild phases also increases, which can lead to critical RAID situations or even data loss. Furthermore, scale-up systems have no inherent capabilities for bridging planned or unplanned downtimes. For complete business continuity, add-ons like storage virtualization layers are needed.

In the distributed scale-out architecture of ETERNUS CD10000, multiple storage nodes and drives can fail without impacting business continuity, because fault tolerance is inherent to the architecture. All nodes automatically coordinate the recovery in a short self-healing process. The system is fault tolerant by design, and planned downtime phases for maintenance or technology upgrades are no longer necessary. As storage nodes of a particular generation can be replaced with nodes of the next generation during operations, the lifecycle of the overall system can be extended and migration efforts reduced heavily.

High availability and scalability through the distributed scale-out model: Figure 3 shows the architectural differences between conventional scale-up and distributed scale-out storage systems and how controller and node resilience is achieved. The scalability of the distributed model is very high thanks to its node characteristics.

The next sections look at the architecture details of the distributed scale-out ETERNUS CD10000 system and how the technical design meets requirements.

Figure 3: Conventional vs. distributed resilience model

Conventional model (classic):
■ Dual redundancy
 - 1 node/controller can fail
 - 2 drives can fail (RAID 6)
 - HA only inside a pair

Distributed model (scale-out):
■ N-way resilience
 - Multiple nodes can fail
 - Multiple drives can fail
 - Nodes coordinate replication and recovery


ETERNUS CD10000 Architecture

Figure 4 shows the architectural building blocks of the ETERNUS CD10000 system. The system hardware consists of storage nodes, a fast InfiniBand backend network and a 10 Gb Ethernet frontend network, which is crucial for consistent network performance.

The internal system management is based on Ceph open-source software plus add-ons from Fujitsu that increase ease of use. The central management system operates the complete system platform with all its nodes from a single instance. The software design enables unified object, block and file access. Fujitsu also optionally supports user-specific interfaces for cloud storage, sync & share, archiving and file services.

Overview of key characteristics:
■ Up to 200 storage nodes connected, able to deliver up to 50 petabytes of capacity in minimum space
■ Host interface: 10 Gb Ethernet front-end network
■ Storage nodes interconnected via 40 Gb InfiniBand
■ Ceph storage software
■ Software enhancements for ease of management
■ Unified access for object, block and file storage
■ Central management system
■ Additional support for customer-specific user interfaces
■ End-to-end maintenance and support from Fujitsu for hardware and software (system platform approach)
■ Service packs and upgrades for secure lifecycle management after purchase

The next sections provide a detailed look at the building blocks.

Figure 4: ETERNUS CD10000 architecture building blocks – use-case-specific interfaces (cloud storage, sync and share, archive, file service), object, block and file level access, 10 GbE frontend network, Ceph storage system software and Fujitsu extensions, central management system, InfiniBand backend network, and ETERNUS CD10000 capacity and performance nodes.


ETERNUS CD10000 Hardware Architecture

Configuration schemes of ETERNUS CD10000 nodes: This section describes the architecture of the ETERNUS CD10000. In order to balance access speed, capacity and cost requirements, three different types of storage nodes are offered, which can be mixed on demand. A management node for central control, management and provisioning tasks is also part of the system design. The maximum number of server nodes is 200, enabling storage capacities of up to 50 petabytes.

Note: The basic configuration of the ETERNUS CD10000 system is 4+1, i.e. 1 management node and 4 basic nodes. The maximum number of storage nodes is 200. The node interconnect is based on a 2-channel 40 Gb InfiniBand QDR link. The various configuration details are shown in Figure 6.

Storage Node Configuration

Basic Configuration:
■ 1x Management Node + 4x Basic Node

Optional Configurations:
■ 1x Management Node + 2x Performance Node + 2x Capacity Node
■ 1x Management Node + 4x Performance Node
■ 1x Management Node + 4x Capacity Node

Maximum Upgrades:
■ Up to 200 nodes: mix of Basic Node / Performance Node / Capacity Node

Basic Storage Node – for storing data.

Technical specs:
■ 2x Intel XEON CPU
■ 128 GB RAM
■ Node interconnect: InfiniBand 2x 40 Gb (Mellanox IB HCA, 2-channel 40 Gb QDR)
■ Front-end interface: 10 GbE (2x 10GbE PCIe x8 D2755 SFP+)
■ PCIe SSD used as journal and cache
■ 16x 2.5“ 900 GB SAS 10k HDDs (2x for OS, 14x for user data)
■ Delivers 12.6 TB raw capacity using 2.5“ SAS disks (10k rpm)

The total raw capacity of each node is 12.6 TB of SAS HDD. The usable capacity depends on the number of replicas: typically usable capacity = raw capacity / number of replicas, e.g. 2 or 3.

Management Node – is responsible for collecting all the data logs of the running system. The management node acts as a “housekeeper” and stores all upcoming event logs from the system, but takes no further action in system operation. This server node is provided once per system. In case of a management node failure, the system seamlessly continues its operations. For redundancy purposes, installing 2 management nodes per system is recommended.

Technical specs:
■ 1x Intel XEON CPU
■ 64 GB RAM
■ Front-end interface: 2x 1 GbE onboard for administration, 4x 1 GbE for administration and management
■ 4x 2.5“ 900 GB SAS 10k HDDs (2x for OS, 2x for user data)
■ Delivers 3.6 TB raw capacity using 2.5“ SAS disks (10k rpm)

The total raw capacity of the node is 3.6 TB of SAS HDD.

Storage Performance Node – is designed to deliver fast access to data by using fast rotating SAS disks and PCIe-connected SSDs.
■ PCIe SSD with 800 GB capacity, also used as a storage tier
■ ETERNUS JX40 JBOD adds 21.6 TB of 2.5” SAS disks (10k rpm)
■ 34.2 TB total raw capacity using 2.5“ SAS disks (10k rpm)

The total raw capacity is 34.2 TB. Typically usable capacity = raw capacity / number of replicas, e.g. 2 or 3.

Storage Capacity Node – is designed to deliver high data density through a very compact disk housing. It can host high-capacity disks to store large data volumes in a cost-effective way.
■ 14x 900 GB SAS HDD plus 60x 3.5” 4 TB NL-SAS HDD in total
■ 252.6 TB total raw capacity using 3.5“ NL-SAS disks (7.2k rpm)

The total raw capacity is 252.6 TB. Typically usable capacity = raw capacity / number of replicas, e.g. 2 or 3.

Figure 6: ETERNUS CD10000 node configurations and node types
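To make the sizing arithmetic concrete, the short Python sketch below applies the usable-capacity rule of thumb quoted above (usable ≈ raw capacity / number of replicas) to the raw capacities of the three node types. It is only an illustration of the formula, not a sizing tool; the node counts and replica number in the example are assumptions.

```python
# Rough usable-capacity estimate: usable ~= raw capacity / number of replicas.
# Raw capacities per node type are taken from the node descriptions above (TB);
# the replica count (2 or 3) is whatever the pool is configured with.

RAW_TB = {
    "basic": 12.6,        # 14x 900 GB SAS for user data
    "performance": 34.2,  # SAS disks + PCIe SSD tier + JX40 JBOD
    "capacity": 252.6,    # 14x 900 GB SAS + 60x 4 TB NL-SAS
}

def usable_tb(node_type: str, count: int, replicas: int = 3) -> float:
    """Usable capacity in TB for `count` nodes of `node_type` with N replicas."""
    return RAW_TB[node_type] * count / replicas

# Example (assumed configuration): 4 capacity nodes with 3 replicas.
print(f"{usable_tb('capacity', 4, 3):.1f} TB usable")  # ~336.8 TB
```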


Storage node architecture: The design of the storage nodes is optimized to provide consistent high performance, even in failure situations.

This is achieved through:
■ A balanced I/O architecture
■ 1 GB/s throughput including all redundant data copies (replicas)
■ A very fast InfiniBand back-end network for fast distribution of data between nodes and for a fast rebuild of data redundancy / redistribution of data after hardware failures
■ PCIe-connected SSDs for fast access to journal and metadata, also acting as a data cache; this avoids latency in a distributed storage architecture

Figure 5 shows the data throughput in the ETERNUS CD10000 server nodes. This architecture ensures balanced I/O performance with around 1 gigabyte/s sustained speed per performance node, including all redundant data copies. Every component runs at the required speed, with a redundant InfiniBand network switch in the backend.

Figure 5: ETERNUS CD10000 storage node – internal architecture. File and block applications reach the storage node over the redundant 10 Gbit client interconnect (IP based); nodes communicate over the redundant 40 Gbit InfiniBand cluster interconnect (IP based). Inside the node, data flows at roughly 1–2 GB/s through memory, the PCIe SSD journal and the RAID controller to the SAS-attached OSDs.


ETERNUS CD10000 Software Architecture

Ceph storage software

Ceph is an open-source, software-defined storage platform designed to present object, block and file storage from a distributed x86 computer cluster. Ceph supports scalable clusters up to the exabyte level. Data is replicated over disks and nodes, providing system fault tolerance that is designed to be both self-healing and self-managing. Ceph helps to reduce investment and operational costs.

Ceph is closely integrated with the OpenStack project as a core storage element. OpenStack enables IT organizations to build private and public cloud environments with open-source software.

One of the underlying Ceph technologies is the CRUSH algorithm, which manages homogeneous data placement. CRUSH (Controlled Replication Under Scalable Hashing) is a method that determines how data is stored and retrieved by computing data storage locations.

It works with a pseudo-random and uniform (weighted) distribution to establish a homogeneous allocation of the data between all available disks and nodes. Adding new devices (HDDs or nodes) will thus not have any negative impact on data mapping. The new devices become part of the system without any bottleneck or hot-spot effects.
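As a loose illustration of this behaviour (not the actual CRUSH implementation), the Python sketch below uses weighted rendezvous hashing: each object is placed on the devices with the highest deterministic hash score, scaled by a weight, so adding a device only attracts its fair share of objects instead of reshuffling the whole mapping. The device names, weights and object name are illustrative assumptions.

```python
import hashlib
import math

def score(obj_id: str, device: str, weight: float) -> float:
    """Weighted rendezvous score: higher wins; devices win proportionally to weight."""
    h = int(hashlib.sha256(f"{obj_id}:{device}".encode()).hexdigest(), 16)
    u = (h + 1) / (2**256 + 1)        # deterministic pseudo-random value in (0, 1)
    return -weight / math.log(u)

def place(obj_id: str, devices: dict, replicas: int = 3) -> list:
    """Pick the `replicas` highest-scoring devices for this object."""
    ranked = sorted(devices, key=lambda d: score(obj_id, d, devices[d]), reverse=True)
    return ranked[:replicas]

# Weights roughly follow device capacity (example values only).
devices = {"osd.0": 1.0, "osd.1": 1.0, "osd.2": 1.0, "osd.3": 2.0}
print(place("volume-42/block-0007", devices))
# Adding "osd.4" later only moves the objects that now score highest on it;
# existing mappings for other objects stay unchanged.
```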

Because CRUSH is aware of the infrastructure, placement rules based on the physical topology (e.g. devices, servers, cabinets, rows, data centers) become very easy to express. This enables high performance and gives the system high resilience.

Figure 7 shows how the data is stored on object storage devices and distributed across various HDDs and server nodes. To simplify the description of the data distribution process, we use a sample object consisting of 4x 4 MB blocks. The 4 MB blocks are stored in the object store containers (HDDs) of the various server nodes. In this example a total of 3 copies (replicas) of the objects are stored and distributed on different containers, providing high data availability through redundancy. If a client asks for this 16 MB file, the CRUSH algorithm collects the respective data blocks from the storage nodes and rebuilds them into the object file. Block sizes are not fixed and can be individually defined.

Figure 7: A 16 MB object file is split into 4 MB blocks; the HDDs represent OSDs, and the CRUSH algorithm distributes the blocks and their replicas across server nodes 1, 2 and 3.
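The following sketch mirrors the Figure 7 example in code: a 16 MB object is cut into 4 MB blocks and each block is deterministically assigned to three different nodes by hashing, in the same spirit as the placement sketch above. The node names and block size are illustrative only; a real Ceph client delegates this mapping to CRUSH and the cluster map.

```python
import hashlib

BLOCK_SIZE = 4 * 1024 * 1024   # 4 MB blocks, as in the Figure 7 example
NODES = ["node1", "node2", "node3", "node4", "node5"]

def split_into_blocks(data: bytes, block_size: int = BLOCK_SIZE) -> list:
    """Cut an object into fixed-size blocks (the last one may be shorter)."""
    return [data[i:i + block_size] for i in range(0, len(data), block_size)]

def replica_nodes(obj_name: str, block_no: int, replicas: int = 3) -> list:
    """Deterministically pick `replicas` distinct nodes for one block."""
    ranked = sorted(
        NODES,
        key=lambda n: hashlib.sha256(f"{obj_name}/{block_no}@{n}".encode()).hexdigest(),
    )
    return ranked[:replicas]

obj = b"\0" * (16 * 1024 * 1024)          # a 16 MB sample object
for i, block in enumerate(split_into_blocks(obj)):
    print(f"block {i} ({len(block)} bytes) -> {replica_nodes('sample-object', i)}")
```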


In the event of a hardware failure (disks or nodes failing), the system seamlessly proceeds without downtime. As data copies are available at other locations within the system, data is served from the remaining HDDs, with no significant impact on reliability or performance. The system automatically recreates the data copies (replicas) that were lost in the component failure. Thanks to the highly distributed way data is stored, this takes only a fraction of the time needed to rebuild data onto a spare disk within a RAID group.

How Ceph storage software overcomes the central access bottleneck

With an algorithmically determined method of storing and retrieving data, Ceph avoids a single point of failure, avoids performance bottlenecks through extensive parallelism, and has no physical limit to its scalability; theoretically it scales to the exabyte level. CRUSH uses a map of the storage node cluster to pseudo-randomly store and retrieve data in OSDs across the complete system. It also uses intelligent data replication to ensure resiliency.

Here is an example of how a data cluster works in the distributed file area: Figure 8 shows a swarm (cluster) of fish, where the disks assume the role of the fish in the storage swarm. Each disk is represented by an OSD (Object Storage Device); the swarm represents the data. Clients can access the data directly, because the CRUSH algorithm knows the structure of the cluster and can grant access across many OSDs in a very performant and failure-resistant way without using a central instance (e.g. a gateway or management server). A single fish does not influence the whole population (cluster), creating fault tolerance by design.

Figure 9 shows how the individual instances interact. Several cluster monitors grant access to the object store (the fish swarm), managing all instances and the entire communication with external clients. The cluster’s metadata servers, known as MDSs, are responsible for namespace management, metadata operations and ensuring the security of the object store. The cluster’s object storage devices store all data and metadata, organizing the data into flexibly sized containers called objects. Within Ceph, RADOS (Reliable Autonomic Distributed Object Store) stores the client data and makes sure that the data in the object store cluster can be accessed.

Figure 8: A swarm of fish as an analogy for the OSD cluster. Figure 9: Interaction of clients, cluster monitors, metadata servers and object storage devices (RADOS) – clients perform file, block and object I/O against the OSDs and metadata operations against the metadata servers.


ETERNUS CD10000 Management System

Figure 10 shows the ETERNUS CD10000 software architecture and how clients access the object storage devices (OSDs). Cluster monitors (MONs), object storage devices (OSDs) and metadata servers (MDSs) are part of the Ceph software storage solution. The cluster monitors (MONs) are responsible for authenticating the clients; they also track cluster membership, the cluster state and the cluster map of the accessed data. The metadata servers (MDSs) handle the POSIX-compliant metadata and the rules for namespace management. The object storage devices (OSDs) store and organize all data and metadata in flexibly sized containers; a cluster can comprise OSDs numbering in the 10,000s.

Figure 10: Roles of the Ceph daemons

Cluster monitors (MONs):
■ Few per cluster (< 10)
■ Cluster membership
■ Authentication
■ Cluster state
■ Cluster map

Metadata servers (MDSs):
■ Few per cluster (10s)
■ For POSIX only
■ Namespace management
■ Metadata ops (open, stat, rename, …)

Object storage devices (OSDs):
■ 10,000s
■ Store all data/metadata
■ Organize all data in flexibly sized containers

Clients send bulk data traffic directly to the OSDs; they obtain topology and authentication from the MONs and use the MDSs for POSIX metadata only.


The basic system management of Ceph offers a generic command line interface that requires suitable skills in scripting and Linux administration.
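As a small, hedged illustration of what such scripting looks like, the snippet below wraps the standard Ceph CLI from Python and reads the cluster health from its JSON output. It assumes the `ceph` command and a configured cluster are available on the management host; the exact JSON keys can vary between Ceph releases.

```python
import json
import subprocess

def ceph_status() -> dict:
    """Call the standard `ceph status` command and parse its JSON output."""
    out = subprocess.run(
        ["ceph", "status", "--format", "json"],
        check=True, capture_output=True, text=True,
    )
    return json.loads(out.stdout)

if __name__ == "__main__":
    status = ceph_status()
    # Health is reported as e.g. HEALTH_OK / HEALTH_WARN / HEALTH_ERR.
    # Key names below match recent Ceph releases and may differ in older ones.
    print("cluster health:", status["health"]["status"])
    print("number of OSDs:", status["osdmap"]["num_osds"])
```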

Fujitsu has complemented the basic management functions of Ceph with a GUI-based deployment and configuration tool that enhances operational efficiency.

It delivers:
■ Central software deployment
■ Central network management
■ Central log file management
■ Central cluster management
■ Central configuration, administration and maintenance
■ SNMP integration of all nodes and network components

The GUI dashboard enables easy, hassle-free administration of the system, saving time and cost on complex storage administration procedures.

Figure 11 shows some extracts from the GUI dashboard.

Figure 11



ETERNUS CD10000 Unified Storage

Ceph interface architecture

Figure 12 shows the interfaces for unified object, block and file level access. The software core is the Ceph object store (RADOS), which stands for “Reliable Autonomic Distributed Object Store”.

RADOS is accessed through the following interfaces: librados, the Ceph Object Gateway, the Ceph Block Device and the Ceph File System.

Librados: These software libraries provide client applications with direct access to the RADOS object-based storage system and also provide a foundation for some of Ceph’s features. The librados interface libraries give applications direct access to RADOS, with support for C, C++, Java, Python, Ruby and PHP.
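For illustration, a minimal Python sketch using the librados binding (the `rados` module shipped with Ceph) is shown below. The configuration file path, pool name and object name are placeholders that depend on the actual cluster and are not defined by this white paper.

```python
import rados  # Python binding for librados, shipped with Ceph

# Connect to the cluster using a standard Ceph configuration file (path is an example).
cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()

# Open an I/O context on an existing pool (pool name is an example).
ioctx = cluster.open_ioctx("rbd")
try:
    ioctx.write_full("hello-object", b"hello from librados")   # store an object
    print(ioctx.read("hello-object"))                          # read it back
finally:
    ioctx.close()
    cluster.shutdown()
```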

Ceph Object Gateway (RGW): The Ceph Object Gateway offers access to objects, the most common paradigm for systems handling data. It presents a bucket-based RESTful interface compatible with, for example, the Amazon S3 (Simple Storage Service) and OpenStack Swift APIs.

Ceph Block Device (RBD): The Ceph Block Device offers block storage, e.g. for QEMU/KVM virtual machines. Block devices can be mounted on all operating systems. By design, Ceph automatically stripes and replicates the data across the cluster.

Ceph File System (CephFS): The Ceph File System allows direct file and directory access for applications. Clients mount the POSIX-compatible file system, e.g. using a Linux kernel client.

Customer values – Fujitsu blueprint scenarios: In addition to the generic interfaces, Fujitsu offers service concepts for using the system in standardized scenarios, e.g. OpenStack, Dropbox-like file sharing, or backup/archiving. These customized blueprint scenarios are not part of the system but can be individually designed and implemented according to the customer’s requirements.
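Since the gateway is S3-compatible, any standard S3 client can be pointed at it. The hedged sketch below uses the Python boto3 library; the endpoint URL, credentials and bucket name are placeholders, not values defined by the product.

```python
import boto3  # generic S3 client library; works against any S3-compatible endpoint

# Endpoint, credentials and bucket are examples only; in practice they come from
# the RADOS Gateway configuration and a user created on it.
s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.com:7480",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

s3.create_bucket(Bucket="demo-bucket")
s3.put_object(Bucket="demo-bucket", Key="hello.txt", Body=b"stored via the S3 API")
obj = s3.get_object(Bucket="demo-bucket", Key="hello.txt")
print(obj["Body"].read())
```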

Figure 12: CD10000 system and interfaces – customer values and Fujitsu blueprint scenarios on top of the unified interfaces, which all build on the Ceph object store (RADOS) and the CD10000 hardware:
■ Librados – a library allowing apps to access RADOS directly, with support for C, C++, Java, Python, Ruby and PHP
■ Ceph Object Gateway (RGW) – a bucket-based REST gateway, compatible with S3 and Swift, serving object apps
■ Ceph Block Device (RBD) – a reliable and fully distributed block device with a Linux kernel client and a QEMU/KVM driver, serving virtual disks for hosts/VMs
■ Ceph File System (CephFS) – a POSIX-compliant distributed file system with a Linux kernel client and FUSE support, serving files & directories for clients
■ Ceph Object Storage (RADOS) – a reliable, autonomic, distributed object store comprised of self-healing, self-managing, intelligent storage nodes


System platform approach

The ETERNUS CD10000 is delivered as a system platform. Based on the required customer configuration, all hardware and software components are delivered as a complete product. Fujitsu’s development teams ensure that the internal server, storage, networking and software functions work seamlessly together; this also covers the included Ceph open-source software. Benchmarking and tuning tests ensure the right sizing of all components, which is key to running a petabyte-scale storage system without performance bottlenecks and availability issues.

End-to-end maintenance services and lifecycle management are part of the system platform approach.

ETERNUS CD10000 maintenance services provide support for eliminating potential technical errors in all components.

Consistent upgrades are delivered to keep the overall system up to date whilst ensuring the interoperability of all components. This enables users to benefit from future enhancements of the Ceph software without compromising the stability of the system.

Lifecycle management is paramount especially for a software-defined, distributed storage system, because the core functionality lies in the software and hardware node generations can in principle be mixed. Thanks to the inherent data distribution mechanism, such hardware refreshes can be conducted completely online with automated data migration. A very long system lifetime can thus be achieved for the overall installation, helping to avoid big migration projects, which are not trivial at petabyte dimensions. Nevertheless, it all comes down to compatibility and right-sizing of hardware and software, and this is the key advantage of the ETERNUS CD10000 storage system: users can fully benefit from open-source software, yet get professional enterprise-class support and are relieved of complex evaluation, integration and maintenance tasks.

Figure 13: The ETERNUS CD10000 system platform approach – OpenStack storage system software (Ceph), Fujitsu add-on management functions and Fujitsu server & storage hardware, wrapped by Fujitsu maintenance services and Fujitsu professional services.


All rights reserved, including intellectual property rights. Technical data subject to modifications and delivery subject to availability. Any liability that the data and illustrations are complete, actual or correct is excluded. Designations may be trademarks and/or copyrights of the respective manufacturer, the use of which by third parties for their own purposes may infringe the rights of such owner. For further information see www.fujitsu.com/eternus_cd

Published by Fujitsu Limited
Copyright © 2014 Fujitsu Limited
www.fujitsu.com/eternus_cd

