Post on 18-Jul-2020
Deployment of SUSE Enterprise Storage with DellEMC PowerEdge
Kishore Gagrani: Global Product Director, PowerEdge Product Management @DellEMC, Kishore.Gagrani@Dell.com
David Byte: Sr. Technology Strategist, Alliances @SUSE, dbyte@suse.com
© Copyright 2019 Dell Inc.
Agenda
● Why Ceph
● Review of SUSE Enterprise Storage (SES)
● DellEMC's PowerEdge server portfolio
● Technical review of the lab-tested reference architecture for deploying Ceph using SES and PowerEdge
● Recommendations for PowerEdge and SES/Ceph when deploying a Ceph cluster
● Future potentials (not PoR, directional only)
Why Ceph?
● Very large developer community
● Near-infinite scalability
● Variety of uses:
○ Object storage (cost-effective) ■ Archiving, cold storage, RADOS
○ RBD block storage (cost/performance optimized) ■ Virtual machines/OpenStack
○ File storage ■ Data lakes, backup, distributed computing
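The three access methods above map to distinct Ceph interfaces. A minimal command-line sketch of exercising each one (the pool, image, and user names are illustrative, and every command assumes a running cluster with admin credentials):

```shell
# Object storage: create a RADOS Gateway user (uid/display-name are illustrative)
radosgw-admin user create --uid=demo --display-name="Demo User"

# Block storage: create a pool, then an RBD image suitable for a VM disk
ceph osd pool create vm-pool 128
rbd create vm-pool/vm-disk --size 10240

# File storage: list CephFS filesystems, then mount one on a client
ceph fs ls
mount -t ceph mon1:6789:/ /mnt/cephfs -o name=admin
```

These commands only demonstrate the three interfaces; a production deployment would set pool replication, placement-group counts, and client keyrings deliberately.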
SUSE Enterprise Storage
An intelligent software-defined storage management solution, powered by Ceph technology, that enables IT to transform their enterprise storage infrastructure to:
● Deliver a highly scalable and resilient environment with no single points of failure
● Reduce IT costs by using off-the-shelf servers and disk drives
● Automatically optimize and add storage when needed, without disruption
Key attributes: a unified cluster (object storage, block storage, and file system), open source software on x86, resilient and self-healing, high performance, massively scalable, public-cloud-like pricing, hardware flexibility, reduced IT costs.
Cluster roles: monitor nodes, management node, storage nodes.
DellEMC PowerEdge portfolio - a 40,000-foot view
● Intel-based portfolio of 1U/2U/4U, 1S, 2S, and 4S rack and tower servers
● AMD-based portfolio of 1U/2U, 1S, 2S rack servers
● Intel-based portfolio of modular servers and storage sleds
POWEREDGE - THE BEDROCK OF THE MODERN DATA CENTER
PowerEdge 14G Xeon: Rack/Tower
Rack servers:
● PowerEdge R240 and R340 (based on Xeon E-2100)
● PowerEdge R440 (2-socket value)
● PowerEdge R540 (2U/2-socket)
● PowerEdge R640 (1U/2-socket)
● PowerEdge R740/R740XD (2U/2-socket)
● PowerEdge R840, R940, and R940xa (4-socket)
● PowerEdge C4140 (1U/2-socket, 4 x GPU, either NVLink or PCIe)
Tower servers:
● PowerEdge T140 and T340 (based on Xeon E-2100)
● PowerEdge T440 and T640 (2-socket)
PowerEdge 14G AMD EPYC
● PowerEdge R7425 (2-socket/2U)
● PowerEdge R7415 (1-socket/2U)
● PowerEdge R6415 (1-socket/1U)
PowerEdge 14G Modular
● PowerEdge FX2 with FC640 sleds
● PowerEdge M1000e and VRTX with M640 blades
● PowerEdge MX (next-generation modular: NGM) with MX740c and MX840c sleds
● PowerEdge C6420
Ceph Reference Architecture Lab Testing
● The objective: implement, test, and document SUSE Enterprise Storage on PowerEdge hardware
● H/W
○ Bare minimum, as-is in the lab
○ Detailed BOM of servers in the whitepaper
● Networking
○ Bonded 25Gb for the private OSD network and 25Gb for the internal management network
● Software
○ SUSE Enterprise Storage and SLES 12 SP3
● SUSE “YES” certification test
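On the SES release that pairs with SLES 12 SP3, cluster deployment is driven by DeepSea, a set of Salt orchestrations run from the admin node. A rough sketch of the stage sequence (run as root on the Salt master; exact stages should be checked against the SES deployment guide for the release in use):

```shell
# DeepSea deployment stages (SES with DeepSea), run from the admin node
salt-run state.orch ceph.stage.0   # prep: update nodes, reboot if needed
salt-run state.orch ceph.stage.1   # discovery: collect hardware profiles
# assign roles by editing /srv/pillar/ceph/proposals/policy.cfg, then:
salt-run state.orch ceph.stage.2   # configure: generate the pillar data
salt-run state.orch ceph.stage.3   # deploy: monitors and OSDs
salt-run state.orch ceph.stage.4   # services: gateways, CephFS, etc.
```

The policy.cfg edit between stages 1 and 2 is where the role-to-node mapping (admin, monitor, OSD, gateway) is decided.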
The Lab H/W Reference Architecture
[Diagram: OSD nodes (PowerEdge R740XD) and monitor nodes (PowerEdge R640). The public network, served by a DellEMC S5248-F switch, connects the Solution Admin Host and a public client node; the private OSD network runs over a DellEMC S4112-ON switch.]
Ceph Architecture
Architectural Design Considerations
● Symmetric and redundant deployments are desirable
○ Dual ToR switches
○ Distribute nodes evenly across multiple racks
● Large deployments = spine/leaf; small = hub/spoke or mesh
● Size the IP subnet for current and future needs
○ Include clients, gateways, and storage nodes
○ Use a /23 vs. a /24 for larger future OSD expansion
● Do NOT route non-object storage protocols (NFS, CIFS, CephFS, RBD, iSCSI)
○ Adds latency
○ Many routing devices cannot handle the high throughput
○ Take advantage of Ceph's native client aggregation
● Use LACP bonding
● Understand the signaling rates of the various network topologies with respect to the cluster
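The /23-vs-/24 sizing point is simple arithmetic: a /24 leaves 254 usable host addresses, while a /23 leaves 510, roughly doubling the headroom for future OSD nodes, gateways, and clients. A quick shell sketch:

```shell
# Usable host addresses in an IPv4 subnet: 2^(32 - prefix) - 2
# (the network and broadcast addresses are excluded)
usable_hosts() {
  local prefix=$1
  echo $(( (1 << (32 - prefix)) - 2 ))
}

usable_hosts 24   # 254
usable_hosts 23   # 510
```

With a rack of dense OSD nodes plus gateways, monitors, and clients, a /24 can fill up faster than expected; renumbering a live cluster later is far more disruptive than allocating the /23 up front.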
Other Network Service Considerations
Reliable NTP service
● Plan multiple upstream NTP servers for proper NTP service
● Do not virtualize NTP servers
Utilize the SUSE SMT or RMT service for
● A local mirror of SUSE repositories
● Quality assurance: stage, test, and implement updates
Install a physical Solution Admin Host for
● PXE
● DHCP
● DNS
● SMT
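A minimal ntpd configuration illustrating the multiple-upstream-servers point (the server hostnames below are placeholders, not a recommendation of specific time sources):

```shell
# /etc/ntp.conf fragment: several independent upstream servers so the
# daemon can detect and discard a falseticker (hostnames are placeholders)
server ntp1.example.com iburst
server ntp2.example.com iburst
server ntp3.example.com iburst
driftfile /var/lib/ntp/drift/ntp.drift
```

Ceph monitors are sensitive to clock skew, which is why stable, non-virtualized time sources matter: a VM's clock can drift under host contention, and three or more upstreams let the client out-vote a single bad source.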
H/W Considerations
● Update BIOS firmware to the certified version or newer
● Set the BIOS to performance-optimized settings on all nodes
● Consider using SSD/NVMe for RocksDB/WAL offload
● On OSD nodes, configure all non-operating-system drives as RAID 0 / JBOD-mode devices
● Set the cache mode for the PowerEdge RAID Controller (PERC) to write-back
● Install the operating system (OS) on a mirrored pair of SSDs
● Ensure total OS drive capacity is greater than the host's memory
● Use BOSS for OS installation
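The PERC settings above can be applied from the OS with the perccli utility (syntax shared with Broadcom's storcli). A sketch, assuming controller 0; the enclosure:slot IDs are illustrative and should be confirmed with a show command first:

```shell
# inspect the controller, virtual disks, and attached drives first
perccli64 /c0 show

# create one single-drive RAID 0 virtual disk per data drive
# (32:0 is an illustrative enclosure:slot ID)
perccli64 /c0 add vd type=raid0 drives=32:0

# set write-back cache on all virtual disks
perccli64 /c0/vall set wrcache=wb
```

On PERC models that support a true JBOD/pass-through personality, exposing drives directly may be preferable to per-drive RAID 0; either way, each physical data drive must appear to Ceph as its own block device.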
S/W Deployment Considerations
● Update your cluster before doing anything else
● Plan to perform regular updates (bug fixes, enhancements, etc.)
● Evaluate tuning options: kernel, network, driver
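Network tuning commonly starts with socket buffer sizes. An illustrative sysctl fragment (the values are generic starting points, not SUSE-certified settings; benchmark before and after any change):

```shell
# /etc/sysctl.d/90-ceph-tuning.conf — illustrative starting points only
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
# apply without a reboot: sysctl --system
```

Larger buffers mainly help sustained 25Gb flows between OSD nodes; kernel and NIC-driver tuning should follow the same measure-change-measure loop.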
Future potentials
● Deployment automation using iDRAC APIs
● M&O automation with integrated SES and DellEMC tool sets
● Potential: all-SSD or NVMe for performance; tiered storage for cost-optimized performance
Summary
● Dell PowerEdge servers make a great platform for Ceph
● SUSE and Dell are ready to help your business address PB+ scale requirements today