VIMMs Violin Memory, Inc. Page 1 of 15
Violin Intelligent Memory Modules (VIMMs):
Intelligent Flash Aggregation and Management
INTRODUCTION
Violin Memory delivers flash Memory Arrays that balance reliability, sustained throughput, spike-free low latency, and enterprise-class high availability. Violin has solved both the throughput (IOPS) and latency challenges by taking an integrated systems approach to the design of arrays built with its unique vRAID and VIMM technology.
This paper provides an overview of the Violin Intelligent Memory Modules (VIMMs) and the intelligent
flash management architecture, capabilities, and operation.
Violin bundles NAND flash into hot-swappable VIMMs. Each VIMM carries a hardware-based Flash Translation Layer that provides garbage collection, wear leveling, and error/fault management. VIMMs work in conjunction with vRAID, a hardware-based RAID algorithm specifically designed to deliver sustained performance with very low, spike-free latency.
The combination of VIMMs and vRAID guarantees the high availability, throughput, and long lifespan of the Violin Memory Array. The unique patent-pending vRAID and VIMMs are designed to overcome flash-technology issues (slow erase, reliability, and wear) and deliver an enterprise-grade flash array with a far greater life expectancy than typical Hard Disk Drive (HDD) arrays. The technology provides highly efficient aggregation of flash resources for scalability and high performance.
Additionally, vSHARE and vCLUSTER software deliver a robust, easy-to-use management layer supported by a Graphical User Interface (GUI). vCLUSTER provides web, email, syslog, and SNMP alerts for any VIMM failure or wear issue, and the GUI provides a simple flash lifetime indicator for the whole array. In the unlikely event of a problem, the system automatically notifies Violin, partners, and end-users of any fault or performance issue.
TABLE OF CONTENTS
INTRODUCTION 1
TABLE OF CONTENTS 1
FLASH TECHNOLOGY 2
ENTERPRISE REQUIREMENTS 3
VRAID 4
VIMMS 5
GARBAGE COLLECTION 6
FLASH ENDURANCE & WEAR LEVELING 7
FLASH ERROR HANDLING 8
FAIL-IN-PLACE 10
FLASH MANAGEMENT 11
SYSTEM MONITORING 13
CONCLUSION 14
CONTACT VIOLIN 15
FLASH TECHNOLOGY
The core memory technology used by Violin is NAND Flash, both Single-Level Cell (SLC) and Multi-Level Cell (MLC). SLC NAND flash is faster, less dense, and has a higher cost per GB relative to MLC. MLC technology stores 2 bits per cell and provides the density and cost structure that enables the substitution of flash for HDDs in many applications. MLC has lower endurance than SLC, meaning it will 'wear out' and become unusable after fewer erase cycles.
Flash technology has been widely used for many years in consumer devices such as mobile phones,
digital cameras and MP3 players. More recently, it has been used within SSDs for laptops and as
cache/memory extension cards in servers. These devices and their associated applications take
advantage of the small form factor and mobility of flash memory. However, there are several issues with
flash devices (chips):
Writes ("Flash programs") are sequential and relatively slow (MLC = 1,500 µsec).
Erases require a whole section ("Flash block") to be wiped and take considerable time (MLC can be 10,000 µsec). During this time, nothing can be read or written.
Reading can be very fast (<100 µsec) and either random or sequential. However, only a single page can be read at a time.
The page degrades with time and becomes more difficult to read.
Pages can be damaged by repeated reading and handling.
A single page can only be wiped so many times before it wears out. MLC typically has less than one third the erase cycles of SLC.
These issues are most obvious when measuring the sustained random Write performance of Flash SSDs. Performance is initially good when the pages are clean, but drops dramatically over a "Write Cliff" when pages have to be recycled in a process called Garbage Collection (or grooming). AnandTech's review of two higher-performing PC SSDs showed the significance of the Write Cliff with random 4K IOPS. Simple PCIe cards suffer from the same issue, especially when MLC Flash is used.
Figure 1: Flash Write Cliff
ENTERPRISE REQUIREMENTS
For flash to be viable in enterprise data-center-class products, it requires a very different set of attributes compared to a PC or consumer device. Reliability, availability, and sustained, predictable performance are required for the data center. Fortunately, NAND Flash issues can be overcome through proper design of large-scale Capacity Flash arrays.
Violin’s architecture enables thousands of flash devices to operate efficiently as a Flash Memory Array
which masks the device level issues and delivers reliable and sustained system level performance:
Writes are random and fast (20 µsec)
Erases are simple, fast, and run as a background process
Reading is accelerated through parallel access to billions of pages
The page remains in crisp digital form, with multiple copies for reliability
Repeated reading of the pages has no impact on the original
Violin’s architecture is based on a high-performance flash controller, distribution across many flash
VIMMs and a flash-specific vRAID algorithm. This results in a performance profile which is sustained for
Writes as well as Reads.
For maximum performance, the Violin arrays are populated with SLC VIMMs. However, the performance
of Violin arrays with MLC VIMMs also significantly exceeds the performance of high end HDD-based
storage systems as well as many SLC SSDs.
Figure 1 callouts. Empty drive: 1 user Write = 1 Flash Write; no Erases; no moving of data within Flash; 20K-50K IOPS. Full drive: 1 user Write = N Flash Writes; Erases required before new Writes; data moved within Flash for garbage collection; 1K-5K IOPS.
VRAID
RAID is an umbrella term for computer data storage schemes that divide and replicate data among multiple physical drives (HDDs) and present them as a single drive (LUN) to the operating system or application. Violin Memory stores data on flash and presents a volume to the operating system or application. Flash, however, is a very different storage technology from disk.
Violin employs a patent-pending vRAID algorithm that optimizes the important attributes of an enterprise
flash storage system:
Spike-free latency under load
Bandwidth
Storage efficiency
Reliability
Why not use RAID1, RAID5, or RAID6? These algorithms were designed for HDDs and employ a Read-modify-Write algorithm. For HDDs, Reads and Writes take the same amount of time, and this model works nicely. With Flash, Writes take much longer than Reads, and Erases may interrupt Reads. Bottom line: RAID5 with SSDs is dreadfully slow and not recommended. This is one of many examples of why building a flash-based storage system is very different from building an HDD-based system.
RAID1 is the other option. However, as this means mirroring the data, only 50% of the Flash is used for
data. Violin’s Flash RAID uses 80% of the Flash for data.
Data comes into the Violin array as blocks of any size from 512 Bytes to 4 MBytes, addressed by a Logical Block Address (LBA). The blocks are split into 4K blocks and striped across multiple vRAID groups, which increases bandwidth and reduces latency for larger blocks. Each vRAID group consists of 5 VIMMs: four for data and one for parity. Each 4K block is written as 5 x 1K pages (four data pages plus one parity page), each of which shares a Logical Page Address but resides on a different VIMM.
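The 4+1 layout can be illustrated with a short sketch. Violin's actual vRAID parity scheme is proprietary, so plain XOR parity (as in RAID-5) and these function names are illustrative assumptions:

```python
def stripe_4k_block(block):
    """Split a 4 KiB block into four 1 KiB data pages plus one parity
    page, one page per VIMM in a 4+1 vRAID group. Violin's actual
    parity scheme is not published; plain XOR parity is assumed here."""
    assert len(block) == 4096
    pages = [block[i * 1024:(i + 1) * 1024] for i in range(4)]
    parity = bytes(a ^ b ^ c ^ d for a, b, c, d in zip(*pages))
    return pages + [parity]

def rebuild_page(any_four_pages):
    """XOR any four of the five pages together to reconstruct the
    fifth, whether it is a data page or the parity page."""
    out = any_four_pages[0]
    for page in any_four_pages[1:]:
        out = bytes(a ^ b for a, b in zip(out, page))
    return out
```

Any four of the five pages suffice to reconstruct the fifth, which is what allows a read to skip over a VIMM that is busy erasing.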
Figure 2: vRAID uses 4+1 Parity
If any VIMM or Flash die/block fails, the data can be reconstructed using the Parity VIMM. If necessary, a
VIMM can be replaced and the RAID group rebuilt using one of the spare VIMMs in the system. Flash bit
errors are normally corrected using ECC protection provided across each 1K block.
vRAID reduces latency and latency spikes for 4K block reads in three ways:
1. Striping 4K across 5 VIMMs allows the Flash devices to be read and written in parallel with
increased bandwidth.
2. Flash RAID has a patent-pending capability to ensure that multi-millisecond Erases never block a
Read or Write and enables spike-free latency in mixed Read/Write environments. This is possible
because only four out of five VIMMs need to be read at any time.
3. The same approach is used to reduce the impact of flash Writes on latency. This is particularly important for MLC flash.
VIMMs
Data is written to and read from VIMMs. Each VIMM operates its own instance of a Flash Translation Layer (FTL). Data is written to a VIMM using a Logical Page Address (LPA), which the FTL assigns to a Physical Page Address (PPA) within the VIMM's Flash. Metadata is used to map between logical and physical addresses. Each VIMM includes:
High-performance logic-based flash memory controller
Management processor
DRAM (e.g. 3GB) for metadata
NAND Flash (e.g. 128/256/512GB) for storage
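The Flash Translation Layer's role can be sketched as follows. The log-structured write path is how FTLs generally behave; the tiny page counts and class interface are illustrative assumptions, since the real FTL runs in hardware with DRAM-resident metadata:

```python
class FlashTranslationLayer:
    """Minimal sketch of a per-VIMM FTL. Writes never overwrite a
    physical page in place: each write to a Logical Page Address (LPA)
    takes a fresh Physical Page Address (PPA) and marks the old PPA
    stale for later garbage collection. Page counts are illustrative."""

    def __init__(self, physical_pages):
        self.lpa_to_ppa = {}                       # DRAM-resident metadata
        self.free_ppas = list(range(physical_pages))
        self.stale = set()                         # superseded physical pages

    def write(self, lpa):
        ppa = self.free_ppas.pop(0)                # always a fresh page
        if lpa in self.lpa_to_ppa:
            self.stale.add(self.lpa_to_ppa[lpa])   # old copy is now stale
        self.lpa_to_ppa[lpa] = ppa
        return ppa

    def read(self, lpa):
        return self.lpa_to_ppa[lpa]                # metadata lookup, then flash read
```

Overwriting a logical page lands on a new physical page, leaving the old one stale; reclaiming those stale pages is exactly the garbage collection described in the next section.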
VIMMs are designed to enable the scalability of large arrays of flash memory. The advantages of this
architecture are significant:
Integrated garbage collection for sustained write performance
Low latency access to DRAM metadata and flash memory
Safe access and local storage of metadata for fault recovery
Integrated monitoring and management of flash memory health – bad blocks, bad die
Distributed ECC correction for maximum bandwidth
Hot swap and redundancy management
3-port design, so that single-port failures do not impact data access
Unlike most SSDs and PCIe cards, a failed Flash device does not cause a Violin Intelligent Memory
Module to lose data or be taken out of service. ECC and RAID protection are used to manage the data.
Figure 3: Violin Intelligent Memory Module (VIMM)
GARBAGE COLLECTION
Garbage collection, or grooming, is a well-known function used in both flash devices and log-structured file systems such as ZFS. It is the process of reclaiming previously used address space holding stale data to allow new writes of fresh data.
Data becomes stale when a new write is made to an address with existing data. The existing data
becomes stale and the new data is fresh. Because flash does not allow writes to random physical
addresses, the fresh data is not written over the stale data, but to another physical address. The stale
data is garbage collected at a later moment and made free for new writes.
The process of collecting the garbage involves reading a large section of data, identifying the fresh blocks
and then rewriting the fresh data to another place. The large section is then erased and by default all of
the stale data is removed from the section.
The garbage collection process consumes large amounts of flash bandwidth. For each flash block written,
many blocks have to be read, a few rewritten and then some flash erased. This process is typically done
in software on the server processor or on the flash controller.
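The collection step described above can be sketched as follows; the callback interface is illustrative only:

```python
def garbage_collect(block_pages, is_fresh, relocate, erase_block):
    """Sketch of one collection pass: copy the fresh pages out of the
    victim block, then erase the whole block so every stale page in it
    disappears at once. The callback interface is illustrative."""
    moved = 0
    for page in block_pages:
        if is_fresh(page):
            relocate(page)    # rewrite live data to another location
            moved += 1
    erase_block()             # reclaims all stale pages in the block
    return moved              # relocated pages = write amplification cost
```

The returned count is the extra write work the pass incurred, which is why garbage collection consumes flash bandwidth beyond the user's own writes.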
Violin has optimized the garbage collection process in several ways:
Distributed: Garbage collection is done on each VIMM in the system in a fully distributed
manner. There can be 84 engines simultaneously operating.
Hardware-based: The garbage collection process is implemented in a hardware state machine
and does not consume CPU cycles and DRAM on the host or on the flash controller.
Intelligent: The section of flash to be garbage collected (confusingly called a block) is selected by the algorithm. Instead of a simple round-robin scheme, a VIMM chooses a block that satisfies a set of criteria, including the amount of stale data and the number of times the block has been erased before.
This combination guarantees a high-bandwidth, highly efficient process.
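The "intelligent" selection criteria could be scored along these lines; the weighting between stale data and erase count is purely an assumption, since Violin's actual algorithm is not published:

```python
def pick_victim_block(blocks):
    """Choose which block to garbage-collect next. `blocks` is a list of
    (stale_pages, erase_count) tuples. Prefer blocks holding lots of
    stale data (cheap to reclaim) but penalize heavily erased blocks
    (wear leveling). The scoring weight is an illustrative assumption."""
    def score(stale_pages, erase_count):
        return stale_pages - 0.01 * erase_count
    return max(range(len(blocks)), key=lambda i: score(*blocks[i]))
```

Folding erase count into the victim choice is what couples garbage collection to wear leveling, so reclaiming space and spreading wear happen in the same decision.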
FLASH ENDURANCE & WEAR LEVELING
NAND flash does wear out with use. Each erase gradually reduces the ability of a flash cell to retain its charge for many years without exceeding a specified error rate. HDDs, by comparison, wear out with each revolution and typically need to be replaced after 4 to 5 years.
For a typical MLC NAND flash device, a single block may typically be erased 3,000 times before there is a
3% probability that that block cannot retain data for multiple years without exceeding a certain bit error
count.
For SLC technology, the number of erases increases to 100,000 under the same criteria.
Violin removes the risk of flash wear out through a combination of algorithms designed to overcome the
historical problems of Flash wear:
vRAID: vRAID stripes data across all RAID groups and hence all VIMMs. With this algorithm,
each LUN is striped across all RAID groups and VIMMs. A single VIMM does not get an
extremely high write load.
VIMMs: The Flash Translation Layer stripes data across all of the flash devices. There are 128
flash chips in each VIMM and all of them are used for all data and LUNs. No chip gets a higher
write load and the wear-leveling algorithm in a VIMM ensures all blocks are given their share of
writes and erases.
The combination of Violin's garbage collection algorithms, Flash vRAID, and Flash management is designed to squeeze much greater endurance out of the flash. Violin Memory Arrays ensure that no specific blocks are repeatedly used; instead, the wear is leveled across all blocks of all flash devices across all active VIMMs. There are more than 10,000 flash chips and 8 million blocks in a single system.
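The claim that no LUN or chip sees a disproportionate write load follows from wide striping, which can be sketched as a trivial placement function. The modulo mapping and group count are illustrative assumptions, not Violin's actual layout:

```python
def raid_group_for_lba(lba, n_raid_groups):
    """Place a LUN's 4K blocks round-robin across RAID groups so every
    group (and hence every VIMM) sees an even share of the write load.
    The modulo mapping is an illustrative assumption."""
    return lba % n_raid_groups
```

Even a write workload hammering a contiguous address range is spread evenly over every group under such a mapping, so no single VIMM accumulates wear faster than the rest.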
Violin extracts many more erase cycles out of flash with its flash management techniques. Unlike a consumer USB drive that gets written and then left in a desk drawer for a year, Violin arrays are powered on and can perform functions such as data scrubbing: data with errors can be rewritten to remove those errors. Using this approach, and with adequate ECC capability, the lifetime of flash can be extended by an order of magnitude. Instead of 3,000 to 5,000 erase cycles, independent tests have shown that MLC flash with 16-bit error correction can endure over 50,000 erase cycles.
Figure 4: Bit Errors vs. Erase Cycles for Generic MLC Flash
In addition, where blocks do wear out or whole flash devices fail, the Violin Fail-in-Place and RAID algorithms handle these failures without data loss or service interruption. In the extreme case of a VIMM failing, a spare VIMM is used and the failed VIMM can be hot-swapped.
By using these techniques, the Violin 3120 is capable of sustaining greater than 1,000GB per hour of writes for 5 years. This is about 24TB per day, or roughly 280MB/s, making it practically impossible to wear out the system within a normal 5-year life. In reality, the flash will last much longer than this, but this will only be measurable many years after the systems have been in production.
With SLC, the system has many more write cycles (100,000) and can sustain full-speed writes on a
continuous basis for a 5-10 year life. For example, a 3210 can allow 8TB of writes per hour or greater
than 2GB/s over a 10 year life. This allows the array to continuously operate at maximum performance.
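These figures are straightforward to verify with a quick unit conversion (decimal units assumed throughout):

```python
def sustained_write_budget(gb_per_hour):
    """Convert an hourly write budget into per-day and per-second rates,
    as a check on the endurance figures quoted above (decimal units)."""
    tb_per_day = gb_per_hour * 24 / 1000       # GB/hour -> TB/day
    mb_per_s = gb_per_hour * 1000 / 3600       # GB/hour -> MB/s
    return tb_per_day, mb_per_s
```

The 1,000GB/hour MLC figure works out to 24TB/day and roughly 278MB/s; the 8TB/hour SLC figure works out to just over 2.2GB/s.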
Under standard maintenance agreements, Violin will replace any modules that wear out as if they had
failed. This completely removes the operational risk from the user. A 3 year warranty is provided
independently of the maintenance agreements.
FLASH ERROR HANDLING
All storage media have errors. The storage system has the responsibility of mitigating those errors and avoiding data loss. The goal is to increase the Mean Time To Data Loss (MTTDL) to a level where it is not operationally significant.
Data loss occurs in storage systems for several reasons:
1. Uncorrectable media/bit errors; RAID-0 issue
2. Undetected media errors; insufficient ECC/CRC
3. Simultaneous uncorrectable media errors;
4. Uncorrectable errors during a RAID rebuild
Traditional disk drives have media errors that are protected by ECC. The goal for an enterprise drive is an Uncorrectable Error Rate (UER) of 10⁻¹⁶; i.e., on average, a user can read 10¹⁶ bits before a Read is unsuccessful or a bit is errored. Typically this target is achieved, and the primary cause of data loss for disk drives is whole-drive failure, though sectors can also fail in ways where data is corrupted and lost. With RAID 1/5/6 enabled, data loss most often occurs during a RAID rebuild. With 1TB drives, RAID rebuilds can take a whole day.
Violin's technology provides high levels of ECC and RAID data protection and delivers a Mean Time To Data Loss (MTTDL) that is orders of magnitude higher than a typical RAID-5 HDD storage system.
Flash memory has different failure modes compared to HDDs such as bit errors, bad blocks, bit-lines, and
failed die. Errors are caused by data aging, as well as by Reads and Writes of neighboring blocks. Given
that flash chip vendors focus on reducing the cost of flash memory for consumer applications like iPods
and cameras, the likelihood is that these failure modes will not be resolved at the device level. Media
error rates of worse than 10-5
(1 bit is errored on average for every 10KB read) are seen on some Flash
devices. Violin’s architecture handles these issues through a combination of ECC, data scrubbing and
vRAID.
Violin's ECC algorithms are tuned to the specific Flash technology. For example, the initial 512GB MLC Flash VIMM uses a flash type that requires 8-bit ECC per 512B to achieve a 10⁻¹⁶ UER. Violin provides 16 bits of ECC per 1024 Bytes, plus weekly data scrubbing, to achieve a UER (much) better than 10⁻¹⁸ before considering RAID.
Is this level of ECC sufficient? Theoretically it looks good, but the reality is that there are Flash failure
modes that cause large numbers of bit errors simultaneously. ECC cannot easily handle these cases and
RAID is needed. RAID improves the UER from 10⁻¹⁸ to better than 10⁻³⁰ for normal operation.
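As a rough plausibility check on these UER figures, the probability that a codeword exceeds the ECC correction limit can be estimated with a binomial model. The independent-error assumption and the raw bit error rate used below are illustrative; real flash errors are correlated, which is exactly why scrubbing and RAID are still needed:

```python
from math import lgamma, log, exp

def log_binom_pmf(k, n, p):
    """Log of the binomial pmf, computed via lgamma to avoid overflow."""
    return (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
            + k * log(p) + (n - k) * log(1 - p))

def p_uncorrectable(raw_ber, codeword_bits=1024 * 8, correctable=16, terms=50):
    """Probability that a 1024-byte codeword suffers more bit errors
    than the ECC can correct, assuming independent errors. Only the
    first `terms` tail terms are summed; later terms are negligible."""
    return sum(exp(log_binom_pmf(k, codeword_bits, raw_ber))
               for k in range(correctable + 1, correctable + 1 + terms))
```

With an assumed raw bit error rate of 10⁻⁵, the model puts the uncorrectable-codeword probability well below 10⁻³⁰, comfortably beyond the 10⁻¹⁸ target; correlated failure modes make the real figure worse, hence the additional vRAID layer.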
The real test for storage systems comes from handling a combination of media errors and module failures. As an example, 30 SATA drives in a RAID-5 configuration have long RAID rebuild times and high error rates that lead to an MTTDL of less than a few years.
The other limit on MTTDL is the probability of 2 module failures within a RAID group before a RAID rebuild completes. The MTBF of a VIMM is estimated at 200 years, about 20 times higher than a rotating HDD. RAID rebuild times for a Violin array are typically between 1 and 24 hours, depending on user load and memory type; the RAID-5 rebuild time for large HDDs is measured in days, especially under load.
The following table compares conservatively estimated MTTDL figures for the Violin array with other storage techniques and shows that it is competitive with RAID-6 storage systems and much better than RAID-5 HDD storage. Violin SLC systems have a higher MTTDL than Capacity Flash systems because of their lower RAID rebuild times (1 hour).
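The dominant effect in such comparisons can be seen in the classic double-failure MTTDL approximation. This simplified model ignores media errors encountered during rebuild (so it is optimistic) and is shown only to illustrate why rebuild time matters:

```python
def raid5_mttdl_hours(mttf_hours, mttr_hours, n_drives):
    """Classic RAID-5 double-failure MTTDL approximation:
    MTTDL ~= MTTF^2 / (N * (N-1) * MTTR). Ignores uncorrectable media
    errors during rebuild, so it overstates real-world MTTDL; used here
    only to show the leverage of short rebuild (MTTR) times."""
    return mttf_hours ** 2 / (n_drives * (n_drives - 1) * mttr_hours)
```

Because MTTR appears in the denominator, cutting rebuild time from days to an hour improves this estimate by the same large factor, which is the effect the Violin columns in the table reflect.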
Figure 5: Mean Time to Data Loss
FAIL-IN-PLACE
Flash devices fail either through usage (i.e., Erases) or through simple aging, expressed as an Annual Failure Rate (AFR). Violin manages these failures through a Fail-in-Place strategy that:
Minimizes service events
Minimizes RAID rebuilds
Minimizes data loss rates
Maximizes system availability
For a 160GB SSD with 40 flash devices and an AFR of 0.25% per device, the module-level AFR is about 10%, i.e. an MTBF of roughly 10 years, because any single flash device failure loses a large amount of data and causes an SSD failure. This is why SSD failure rates have been much higher than desired, typically greater than 1% per 100GB.
With a fail-in-place strategy, a typical Violin array with 10,000 flash devices will operate for over 5 years
without service before losing too many VIMMs to continue operation. This is achieved by using RAID with
spare Flash devices on each module and spare modules. The system intelligently manages flash failures
as follows:
Fewer than N devices fail in a VIMM: RAID is used to rebuild the data onto other flash devices on the same VIMM. No service event is required, and the event is logged. This approach reduces the VIMM AFR to less than 0.5%. In most other SSDs, every flash device failure is treated as a module failure, and the AFR of an SSD can be 10% or more.
N or more flash devices fail in a VIMM: The system takes the VIMM out of service. The value of N (minimum 3) increases with a lower format capacity, raising the expected availability and MTBF of the Violin system. The data from that VIMM is moved to a spare VIMM using the RAID algorithm to rebuild the data. This is done in the background without administrative intervention or any significant impact on access to user data. An alarm is raised to indicate that a VIMM has failed. The system may have one to four spares, so replacement of the module is not urgent and may be performed monthly or quarterly.
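The decision logic described above might be sketched as follows; the function and state names are illustrative, not Violin's actual implementation:

```python
def vimm_action(failed_devices, n_threshold, spare_vimm_available):
    """Sketch of the fail-in-place decision described above. N is the
    per-VIMM device-failure budget (minimum 3 per the text); the state
    names are illustrative, not Violin's actual implementation."""
    if failed_devices < n_threshold:
        return "rebuild-in-place"       # rebuild onto spare flash, log only
    if spare_vimm_available:
        return "migrate-to-spare-vimm"  # background rebuild, raise alarm
    return "vimm-offline"               # no spare: stop storing new data
```

Keeping the first branch local to the VIMM is what turns most flash device failures into log entries rather than service events.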
Figure 5 data: Mean Time To Data Loss (Months)

Capacity (TB)    | RAID-0 SATA/SSD | RAID-1 SATA | RAID-5 SATA | RAID-5 FC | RAID-6 SATA | Violin SLC | Violin Capacity Flash
8                | 6               | 289         | 187         | 2,256     | 925,926     | 1,127,151  | 550,599
15               | 5               | 134         | 80          | 975       | 269,360     | 563,576    | 275,299
30               | 3               | 50          | 27          | 327       | 69,444      | 281,788    | 137,650
Media Error Rate | 1.00E-15        | 1.00E-15    | 1.00E-15    | 1.00E-16  | 1.00E-15    | 1.00E-18   | 1.00E-18
FLASH MANAGEMENT
Unlike an HDD storage system, the Violin array does not have to manage complex queuing structures,
large and distributed caches, rotational disk performance, distribution of data across RAID groups and
complex Information Lifecycle Management (ILM) functions. These functions have evolved over time as
solutions to the performance limitations of HDDs and are not needed for flash.
The Violin array inherently removes HDD performance constraints, making it possible to manage large numbers of flash devices and modules. The goal is to keep the operation of the system simple and reliable.
Violin’s Flash ECC, RAID and Fail-in-place technologies have been designed to automate most of the
standard tasks involved in managing the flash memory. Why should a user have to monitor and act on the
HDD-based SMART statistics of every SSD when a system can automatically perform this function?
The Violin array manages the flash and user data with the following techniques:
Data protection: Robust ECC and CRC algorithms detect and correct bit errors in the system.
Data validation: Extra data is stored with each block of flash so that invalid data is detected
rather than passed on.
Data scrubbing: All data in the system is read on a weekly basis and scanned for errors. Any
errors found are then repaired. Violin does this without any noticeable impact on user
performance. It greatly reduces data loss rates and increases data endurance.
Flash wear leveling: Is the application reading and writing to the same addresses repeatedly
and wearing out the flash in that hot spot? The Violin array distributes this load to all the Flash
devices in all the modules of a system. No specific LUN is tied to a specific module and hence
active LUNs do not wear out specific flash devices.
Flash monitoring: All Read, Write, and error statistics are captured and reported. VIMMs behaving below specification are automatically taken out of service and the error events logged. Before that happens, the statistics and alarms will indicate unusual behavior.
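The scrubbing pass described in the list above can be sketched as follows, with an illustrative callback interface standing in for the real hardware path:

```python
def scrub(pages, read_with_ecc, rewrite):
    """Sketch of a weekly scrubbing pass: read every page, and wherever
    ECC had to correct bit errors, rewrite the corrected data so errors
    do not accumulate toward the correction limit. The callbacks are
    illustrative stand-ins for the hardware read/write path."""
    repaired = 0
    for page in pages:
        data, corrected_bits = read_with_ecc(page)
        if corrected_bits > 0:
            rewrite(page, data)    # fresh copy starts error-free
            repaired += 1
    return repaired
```

Rewriting pages before their error counts approach the ECC limit is what lets scrubbing convert slowly accumulating bit errors into routine background writes.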
Example CLI output for a show vimm command is shown in Figure 6. The system is designed to report all data so that anomalous behavior can be detected easily.
Figure 6: Show VIMM output

    admin-state          up
    oper-state           up
    vimm-state           active
    raid-group           0
    vimm-type            FLASH
    mem-type             MLC-NAND
    raw-capacity         549.7GB (512GiB)
    raw-capacity-bytes   549,755,813,888
    fmt-capacity         438.1GB (408GiB)
    fmt-capacity-bytes   438,086,664,192
    part-number          1000159A-C-02
    serial-number        18099R00000361
    mfg-date             20091013
    fw-date              Thu Aug 26 19:21:00 2010
    fw-version           0x409b
    sw-date              Tue Sep 21 15:29:00 2010
    sw-version           0x40fc
    is-programmed        true
    id-assigned          true
    write-verify         disabled
    environment
      temperature        59C (OK)
      sensor-1.2v        1.17
      sensor-1.8v        1.79
      sensor-2.5v        2.42
      sensor-3.3v        3.38
    run-time-stats
      run-time           0 years, 7 days 23:40:38
      stats-date         Thu Oct 28 20:24:54 2010
      format-date        Fri Oct 1 06:37:46 2010
      user-reads         2,014,398,764
      user-read-bytes    2,062,764,354,816
      user-writes        129,877,003
      user-write-bytes   132,994,051,072
    ecc-cor-counts
      eight-to-ten-bits  417
      eleven-to-thirteen-bits 0
      fourteen-plus-bits 0
      total-cor          3,496
    ecc-corrected        23,759 (rate: 1.08e-05)
    raid-corrected       0 (rate: 0.00e+00)
    blk-boot-fails       0
    blk-erase-fails      0
    blk-prog-fails       0
    blk-ecc-thresh       0
    blk-ecc-uncor        0
    erase-counts
      blk-erase-target   3,000
      blk-erase-avg      571.81
    flash-health
      failed-blocks      341 (0.13%)
      failed-die         0 (0.00%)
      perform-thresh     0.95% (OK)
      rebuild-thresh     0.63% (OK)
      critical-thresh    0.48% (OK)

The flash health indication provides the most useful information for monitoring any particular VIMM. Failed block and die counts indicate the percentage of flash that has been removed from service. As can be seen, there are three thresholds the system uses to monitor the health of the VIMM:

Performance Threshold: The VIMM above is 0.95% of the way to this threshold. At this level, there may be some reduction in system performance. Warnings are sent (syslog, email alerts,
SNMP traps) and replacement of the VIMM is recommended. Typically, the VIMM could operate for another year or two.
Rebuild Threshold: The VIMM above is 0.63% of the way to this threshold. At this level, the system will deliberately move data off this VIMM and onto an available spare VIMM. Alarms are sent and the VIMM should be replaced immediately.
Critical Threshold: The system may not have had spares to allow the rebuild to occur, and hence the VIMM has stayed in operation. At this point, the VIMM has insufficient spare flash to reliably store new data. The VIMM is taken offline, and if necessary the system will cease to accept new writes.
These thresholds are important steps in the process of managing flash. However, like many safeguard mechanisms, they are rarely called upon. In the 3 years Violin has been shipping flash arrays, no customer has worn out a VIMM. VIMMs have failed due to component failures, but typical enterprise use has resulted in wear rates of significantly less than 10% per year. It may be 10 years before Violin sees any VIMM wear out.
SYSTEM MONITORING
It would be tedious and unnecessary to check each VIMM and its health. Violin Memory Gateways
provide monitoring and alerting functions for the exceptional situations.
Figure 7: VIMM Status and Lifetime Indicators
The screen above shows the status of an array with an icon for each VIMM. The lower indicator on each VIMM shows its lifetime as a green/yellow/red indicator. Spares are slightly shaded to show that the system is protected.
On the top-middle right there is also a "lifetime dial" indicating the expected flash lifetime for the whole array. A new array may show 100% lifetime remaining; after a year, you may still see 98% remaining.
Figure 8: Array Lifetime Indicator
The array indicator is pessimistically estimated by looking at all VIMMs, ranking them and choosing the
nth VIMM, where n is the number of VIMMs that are needed for array operation. A single VIMM that has
worn out, or more likely has had some flash die failures, does not impact the expected lifetime of the
array.
CONCLUSION
Flash is a complex medium, but no more complex than HDDs, which have heads floating micrometers above platters spinning just below the speed of sound. The mechanical wear-out of HDDs is much more of an issue than the wear of flash.
With attention to all the issues and failure modes of flash, arrays can be built that provide highly reliable service, lower data-loss rates, and a long media life. Violin Memory Arrays achieve this goal while also delivering very low latency and predictably high IOPS.
CONTACT VIOLIN
For more information, contact:
Violin Memory, Inc. USA
685 Clyde Ave, Suite 100, Mountain View, CA 94043, USA
Ph: (888) 9-VIOLIN Ext 10 or (888) 964-6546 Ext 10
Violin Memory EMEA Ltd.
Quatro House, Lyon Way, Camberley, Surrey, GU16 7ER, UK
Ph: +44 1276 804620
Email: [email protected]
www.violin-memory.com