© 2007 DataDirect Networks, Inc. All Rights Reserved.
Simple Petabyte-Level Scaling
Industry Leading Performance
SATAssure Data Integrity
Power & Space Efficiency
S2A Technology Update
A Tradition of HPC Data Leadership
2008 HPC User Forum Spring Meeting
Jeff Denworth, Director of Platform Solutions
DataDirect Networks

Target Applications
[Chart: Total Worldwide Digital Archive Capacity, 2007-2012. Source: Enterprise Strategy Group]
Servicing performance- and capacity-bound applications experiencing compounding storage management issues
These application profiles collide in the HPC data center.

Sample Data Points
– Defensive I/O: Multi-Core = More Memory = More Data to Checkpoint
  • Scalable, high-performance WRITE-INTENSIVE storage is required
  • Low-cost, high-density RAM (e.g., MetaRAM) will only compound this problem
– Productive I/O: Data Volume is Exploding
  • The CERN LHC will produce 27TB per day!
  • That data needs to be distributed to 7,000 researchers worldwide
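The defensive-I/O pressure can be sketched with back-of-envelope arithmetic: at a fixed storage bandwidth, checkpoint time grows linearly with the memory being dumped. The node count, memory size, and bandwidth below are illustrative assumptions, not figures from the slides:

```python
# Illustrative checkpoint sizing: more memory per node means more data
# to write at every checkpoint. All numbers here are hypothetical.

def checkpoint_minutes(nodes, mem_per_node_gb, write_bw_gbs):
    """Time to dump all node memory to storage, in minutes."""
    total_gb = nodes * mem_per_node_gb
    return total_gb / write_bw_gbs / 60.0

# Doubling memory per node doubles checkpoint time at fixed bandwidth,
# which is why high-density RAM compounds the defensive-I/O problem.
t1 = checkpoint_minutes(1024, 16, 6.0)   # 16 GB/node, 6 GB/s storage
t2 = checkpoint_minutes(1024, 32, 6.0)   # 32 GB/node, same storage
```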
SATA Usage & Capacity Increases: Vulnerability Implications
The Appeal of SATA Technology:
– Lower price per GB
– Larger capacity
– Lower power consumption
– Less heat than Enterprise drives
Feature                          Enterprise-Class         SATA
Rotational Speed                 15,000 RPM               7,200 RPM
Capacity                         400GB                    1TB (1,000 GB)
Drive Defect List                In hardware              On media
# of CPUs                        2 (one I/O, one mgmt)    1 for everything
Prone to silent data corruption  No                       Yes
Duty Cycle                       Continuous               Non-continuous
Average Failure Rate             Low                      High
The Trade-Off:
– Performance is 1/2 to 1/3 of SCSI
– SATA exacerbates storage issues:
  • Lower seek performance
  • Fewer mechanical reliability controls
  • Less robust electronics
– Rebuild times grow with capacity – not complementary to reliability
– Many failed drives are not even dead – sick/slow drives are common
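The rebuild-time point can be made concrete with simple arithmetic: at a fixed per-drive rebuild rate, rebuild time scales linearly with capacity. The rebuild rate below is an assumed illustrative figure, not a number from the slides:

```python
# Illustrative RAID rebuild time: reconstructing a failed drive onto a
# spare takes capacity / rebuild-rate, so bigger drives mean longer
# exposure windows. The 50 MB/s rate is a hypothetical assumption.

def rebuild_hours(capacity_gb, rebuild_mb_per_s):
    """Hours to reconstruct one failed drive onto a spare."""
    seconds = capacity_gb * 1000.0 / rebuild_mb_per_s
    return seconds / 3600.0

# A 1TB SATA drive takes 2.5x longer than a 400GB enterprise drive at
# the same rebuild rate (and sustained SATA rates are typically lower).
sata_hours = rebuild_hours(1000, 50)
enterprise_hours = rebuild_hours(400, 50)
```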
The Physics of a Spindle:
– Rebuild Time: Hours (Enterprise) vs. Hours-Days (SATA)
DataDirect Networks: 4th Largest in Linux TB Shipped '06
The Experience from a Leader:
– "No Trouble Found" RMAs: Infrequent (Enterprise) vs. Over 80% (SATA)
High Performance Computing: Traditional Storage Issues

We believe that traditional storage technologies are incapable of responding to the growing data challenge.

Issues with traditional storage in HPC:
– Systems deliver substantially lower write performance than read performance – often half the read rate
– Systems are not designed to provide predictable I/O and handle failures
– Striped storage environments see degrading performance as the cluster grows – Amdahl's Law
– Degraded shared-file performance as systems encounter sluggish / failed drives
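The Amdahl's-Law claim can be illustrated numerically: if even a small fraction of each I/O path is serialized (metadata operations, stripe coordination), aggregate speedup flattens as the cluster grows. The 2% serial fraction below is purely illustrative:

```python
# Amdahl's Law: speedup achievable with N parallel clients when a
# fraction `serial` of the work cannot be parallelized.
# The serial fraction used here is an illustrative assumption.

def amdahl_speedup(n_clients, serial):
    return 1.0 / (serial + (1.0 - serial) / n_clients)

# With just 2% serialized work, 1,024 clients achieve only ~47x, not
# 1,024x – striping alone does not deliver linear scaling.
speedup = amdahl_speedup(1024, 0.02)
```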
[Chart: System Performance vs. Capacity – peak system performance is not delivered in clustered environments]
DataDirect Networks S2A9900
6GB/s Read & Write Performance: 24GB/s Internal Bandwidth
8 x Active/Active Host Ports: FC4, FC8, 4xDDR-Infiniband
Scales to Support 1,200 Hard Drives
Up to 40,000 IOPS to disk
Mix SAS + SATA For Storage Tiering
600 Drives per Rack; 1.2PB in Two Racks
Zero-Compromise SATA Latency
No-Impact Drive Rebuilds
Journaled Fast Drive Rebuild
Sleep Mode for Optimum Power Efficiency
Real-Time Read Parity Checking
Non-Disruptive Drive Power Cycling reduces SATA drive maintenance by up to 90% – no more "No Trouble Found" RMAs. Combined with Journaled Rebuilding, it dramatically increases reliability.
8th Generation S2A Technology
Resolving Performance & Reliability Within the Box
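The "Real-Time Read Parity Checking" feature above means data is verified against parity on every read, rather than only during background scrubs, so silent corruption is caught before data is returned. A minimal XOR-parity sketch of that general idea (an assumed illustration, not DDN's actual DirectRAID code):

```python
# Sketch of read-path parity verification: recompute the XOR parity of
# a stripe's data blocks and compare it with the stored parity block.
# Illustrative only – not DataDirect's DirectRAID implementation.

def parity_of(data_blocks):
    """XOR all data blocks of a stripe together to form its parity block."""
    parity = bytearray(len(data_blocks[0]))
    for block in data_blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)

def verified_read(data_blocks, stored_parity):
    """Return the stripe's data, checking it against parity on every read."""
    if parity_of(data_blocks) != stored_parity:
        raise IOError("silent data corruption detected in stripe")
    return b"".join(data_blocks)
```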
2008 Winner: Visionary Product
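"Journaled Fast Drive Rebuild" refers to the general technique of logging which stripes change while a drive is offline, so that only those stripes are resynchronized when the drive returns instead of rebuilding its full capacity. A hypothetical sketch of that technique (names and structure are assumptions, not DDN's implementation):

```python
# Sketch of a rebuild journal: track dirty stripes during an outage so
# the resync touches only what changed. Illustrative assumption only.

class RebuildJournal:
    def __init__(self, n_stripes):
        self.n_stripes = n_stripes
        self.dirty = set()          # stripes written while drive offline

    def record_write(self, stripe):
        self.dirty.add(stripe)

    def stripes_to_rebuild(self):
        """Only dirty stripes need reconstruction – often a tiny fraction."""
        return sorted(self.dirty)

journal = RebuildJournal(n_stripes=1_000_000)
for s in (10, 42, 99):              # writes that occurred during the outage
    journal.record_write(s)
# A full rebuild would touch 1,000,000 stripes; this resync touches 3.
```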
DataDirect Networks Product Family

S2A9900 (Announced Today!)
– 6GB/s Write & Read, Sequential I/O Optimized
– 1,200 SAS and/or SATA Drives
– 1.2PB in Two Racks

S2A9700 (Announced Today!)
– 3GB/s Read & Write, Sequential I/O Optimized
– 1,200 SAS and/or SATA Drives
– 1.2PB in Two Racks

S2A9550
– 3GB/s Read & Write, Sequential I/O Optimized
– 960 Fibre-Channel or SATA Drives
– 960TB in Two Racks

Optional Solutions:
– S2A Clustered NAS
– S2A HPC Storage Solution
– S2A Shared SAN File System
– S2A VTL Solution
S2A Storage = The Gold Standard in HPC I/O
– Common Platform: Processing, Backup and Archiving
– OPEN storage application & file system flexibility
– DirectRAID Algorithms optimized for large data transfer
  • Writes as fast as reads; unrivaled real-world performance – with SATA
– Green Storage: SleepMode and up to 660 drives/rack
– The Performance & Capacity Storage Leader
  • 40 of the top 100 supercomputers choose S2A Technology
– Tight Integration with Parallel and Archival File Systems
  • Lustre, GPFS, PVFS, Quantum StorNext, CXFS + DMF, HPSS, etc.
Case Study: LLNL
Requirements:
• Multi-Cluster, Massively Scalable Site-Wide Lustre Platform
• Includes the world's fastest BlueGene/L
• 130GB/s GPFS System for the Capability System (Purple)
• Maximum SATA data integrity
Solution:
• 150GB/s+ S2A Systems providing scalable I/O and Data Availability
Sample Customers: Variety of Applications
Worldwide HPC Experience
Partners: