Data Distribution Algorithms for Reliable Parallel Storage on Flash Memories
Kathrin Peter
Zuse Institute Berlin
November 2008, MEMICS Workshop
Motivation
Nonvolatile storage
Flash memory: invented by Dr. Fujio Masuoka in 1984
Type of EEPROM
Usage in a RAID-like configuration instead of hard disk drives
Flash memory based storage: an alternative to hard disk drives?
Outline
1 Flash based memory
2 Parallel and distributed storage based on Flash memories
3 Discussion and Summary
Application of Flash memories
Pros and Cons
Lower power consumption
Higher access rates (in some cases)
Uniform access time for random access - no seeks
Robustness (extreme temperatures, vibration, shock)
Price
Limited erase cycles
Flash management
Model                    | 2.5" SATA 3.0 Gbps SSD  | 2.5" SATA 3.0 Gbps HDD
Mechanism type           | Solid NAND flash based  | Magnetic rotating platters
Density                  | 64 GByte                | 80 GByte
Weight                   | 73 g                    | 365 g
Active power consumption | 1 W                     | 3.86 W
Operating temperature    | 0°C-70°C                | 5°C-55°C
Acoustic noise           | None                    | 0.3 dB
Endurance                | MTBF > 2M hours         | MTBF < 0.7M hours
Avg. access time         | 0.1 ms                  | 17 ms
Read performance         | 100 MB/s                | 34 MB/s
Write performance        | 80 MB/s                 | 34 MB/s
Limited erase cycles and flash management
[Figure: memory cell structure - a floating gate transistor with control gate, floating gate, source (n+), drain (n+), p-substrate, and insulating oxide layers; NAND flash is organized into pages grouped into blocks, where a block is the erase unit.]
Memory cell: floating gate transistor
Retention and Endurance
Typical page size: 2112 B (= 2 KB + 64 B spare)
Typical block size: 64 pages = 128 KB
Mapping
Problem: How to map logical blocks to flash addresses?
Disadvantage of linear mapping (one-to-one mapping):
Frequently-used erase units wear out
Identity mapping requires lots of copying to and from RAM (fixed block size). Example:
[Figure: with identity mapping, an update copies the whole erase unit from flash memory to RAM, modifies the affected data, erases the flash block, and copies the modified unit back.]
Solution: Sophisticated block-to-flash mapping and moving blocks around: wear leveling, garbage collection
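The page-remapping idea behind such a mapping layer can be sketched as follows. This is an illustrative model, not the presented system; the class and method names (`SimpleFTL`, `write`) are hypothetical.

```python
PAGES_PER_BLOCK = 64  # typical geometry from the slides (64 pages = 128 KB)

class SimpleFTL:
    """Toy flash translation layer: updates go to a fresh page (no in-place
    writes); the previous physical page is only marked invalid."""

    def __init__(self, num_blocks):
        self.mapping = {}  # logical page -> (block, page)
        self.free = [(b, p) for b in range(num_blocks)
                     for p in range(PAGES_PER_BLOCK)]
        self.invalid = set()  # stale pages, reclaimable by garbage collection

    def write(self, logical_page):
        old = self.mapping.get(logical_page)
        if old is not None:
            self.invalid.add(old)  # out-of-place update: old copy is now stale
        new = self.free.pop(0)
        self.mapping[logical_page] = new
        return new
```

Writing the same logical page twice lands on two different physical pages, so no erase is needed on the update path; erases are deferred to garbage collection.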
Mapping - example
Mapping of virtual block 5 to physical block 0, page 3
[Figure: virtual-to-logical and logical-to-physical mapping example - an erase unit map translates logical erase units to physical erase units, and per-unit page maps translate logical pages to physical pages.]
Algorithms and Data Structures for Flash Memories, Gal, Toledo, 2004
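The two-level lookup of this example can be reconstructed roughly as below. The table contents and the unit size of 4 pages are assumptions, chosen only so that virtual block 5 resolves to physical block 0, page 3 as in the slide.

```python
PAGES_PER_UNIT = 4  # assumed unit size for this toy example

# erase unit map: logical erase unit -> physical erase unit (contents assumed)
erase_unit_map = {1: 0}
# per physical erase unit: logical page within the unit -> physical page
page_maps = {0: {1: 3}}

def translate(virtual_block):
    # First level: which logical erase unit, and which page within it?
    log_unit, log_page = divmod(virtual_block, PAGES_PER_UNIT)
    # Second level: logical-to-physical via the two mapping tables.
    phy_unit = erase_unit_map[log_unit]
    phy_page = page_maps[phy_unit][log_page]
    return phy_unit, phy_page

# translate(5) -> (0, 3): virtual block 5 maps to physical block 0, page 3
```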
Wear leveling - example
[Figure: blocks of the flash memory are partitioned into a hot pool and a cold pool, ordered by age within each pool; moving worn blocks into the cold pool stops their aging.]
On Efficient Wear Leveling for Large-Scale Flash-Memory Storage Systems, Li-Pin Chang, 2007
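The hot/cold idea might be sketched like this. It is a strong simplification of the cited dual-pool algorithm; the threshold value and all names are invented.

```python
THRESHOLD = 16  # hypothetical erase-count gap that triggers a swap

def maybe_swap(hot_pool, cold_pool):
    """hot_pool/cold_pool: dicts mapping block id -> erase count."""
    oldest_hot = max(hot_pool, key=hot_pool.get)
    youngest_cold = min(cold_pool, key=cold_pool.get)
    if hot_pool[oldest_hot] - cold_pool[youngest_cold] > THRESHOLD:
        # Put cold (rarely updated) data on the worn block to stop its aging,
        # and let the young block absorb the hot writes instead.
        hot_pool[youngest_cold] = cold_pool.pop(youngest_cold)
        cold_pool[oldest_hot] = hot_pool.pop(oldest_hot)
        return (oldest_hot, youngest_cold)
    return None
```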
Garbage collection - example
Free space drops below a threshold
Select blocks for reclamation
Copy all live pages to free pages somewhere else
Change mapping entry, update to new position
Erase block and allocate pages as free
[Figure: live pages of erase block x are copied to free pages in erase block y; the remaining pages of block x are invalid, and block x is erased.]
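The steps above can be sketched as follows; this is a minimal model, and all names and data structures are assumptions.

```python
PAGES_PER_BLOCK = 64  # geometry from the earlier slide

def collect(victim, mapping, free_pages, erase_counts):
    """Reclaim block `victim`. mapping: logical page -> (block, page);
    free_pages: list of free (block, page) slots; erase_counts: block -> count."""
    # Copy all live pages to free pages somewhere else and update the mapping.
    for logical, (blk, _pg) in list(mapping.items()):
        if blk == victim:
            mapping[logical] = free_pages.pop(0)
    # Erase the block and hand its pages back as free.
    erase_counts[victim] = erase_counts.get(victim, 0) + 1
    free_pages.extend((victim, p) for p in range(PAGES_PER_BLOCK))
```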
Facts on (local) wear leveling
Performance: Erase operation is slow
No in-place update
Controller: Efficient mapping and erase distribution
Flash memory in a RAID-like system
Higher aggregate bandwidth
Reliability (redundancy for fault tolerance)
Problem: uneven usage of flash memories - more writes to redundancy blocks when data is updated
[Figure: updates to data blocks accumulate access load on the redundancy blocks.]
flash-RAIM: Redundant Array of Independent flash Memories
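The uneven load is easy to see with a dedicated redundancy device (a RAID-4-style layout, used here purely as an illustration): every data update also rewrites the redundancy block. The sketch only counts writes, not real I/O, and the device names are invented.

```python
def update(stripe_writes, data_dev, parity_dev):
    stripe_writes[data_dev] += 1    # new version of the data block
    stripe_writes[parity_dev] += 1  # redundancy must be recomputed and rewritten

writes = {"mem1": 0, "mem2": 0, "mem3": 0}  # mem3 holds the redundancy blocks
for dev in ["mem1", "mem2", "mem1", "mem2", "mem1"]:
    update(writes, dev, "mem3")
# mem3 absorbed one write per update: {'mem1': 3, 'mem2': 2, 'mem3': 5}
```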
Uneven distribution of writes
flash-RAIM and wear leveling
There exist data distribution algorithms to place data evenly on a single memory
Goal: long lifetime of a single memory
We work on even distribution of writes across all memories in a memory array
Goal: even usage of cells on all flash memories, reliable data storage, high throughput
Method: global wear leveling
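One conceivable (and deliberately naive) form of global wear leveling - not the algorithm under development here - is to steer new allocations to the memory with the fewest accumulated erase cycles, so wear evens out across the whole array:

```python
def pick_memory(erase_cycles):
    """erase_cycles: memory id -> total erase count across its blocks."""
    # Allocate on the least-worn memory in the array.
    return min(erase_cycles, key=erase_cycles.get)

array = {"mem1": 900, "mem2": 450, "mem3": 700}  # hypothetical wear state
# pick_memory(array) -> 'mem2'
```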
Data Distribution Algorithms
Staggered Striping
[Figure: staggered striping of data (+) and redundancy (−) blocks across Memory 1, Memory 2, and Memory 3.]
Hierarchical dual-pool algorithm
Different starting points of the algorithms
Explicit placing of data
Data movement after storing and local wear leveling
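The staggered placement can be illustrated by rotating the redundancy slot from stripe to stripe, RAID-5-style, so that redundancy writes do not pile up on one memory. This is only one plausible reading of the figure, not the exact layout used.

```python
def layout(num_stripes, num_memories):
    """Return per-stripe rows of '+' (data) and '-' (redundancy) slots."""
    stripes = []
    for s in range(num_stripes):
        parity = s % num_memories  # rotate the redundancy slot per stripe
        row = ["-" if m == parity else "+" for m in range(num_memories)]
        stripes.append(row)
    return stripes

# layout(3, 3) -> [['-', '+', '+'], ['+', '-', '+'], ['+', '+', '-']]
```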
Discussion of future work
Extension of the simulator to evaluate the global wearleveling algorithms
Parameter study (trade-off)
Define metrics to compare algorithms:
Lifetime of the flash-RAIM
Speed
Overhead
Usage of traces
Summary
Use aggregated bandwidth and fault-tolerance
flash-RAIM
Simulator for evaluation
Next steps: Implementation and evaluation of the global wear leveling algorithm
Erase cycles on the data/redundancy memory
[Figure: number of erase cycles (0-100) over physical erase block address (approx. 1.1968e+06 to 1.1988e+06) - distribution of access frequencies for Disk 1 and Disk 6.]