
Chapter 5 Memory Hierarchy Design

Page 1: Chapter 5 Memory Hierarchy Design

Chapter 5 Memory Hierarchy Design

• Introduction
• Cache performance
• Advanced cache optimizations
• Memory technology and DRAM optimizations
• Virtual machines
• Conclusion

Page 2: Chapter 5 Memory Hierarchy Design

Many Levels in Memory Hierarchy

• Pipeline registers
• Register file
  (These register levels are invisible only to high-level-language programmers.)
• 1st-level cache (on-chip)
• 2nd-level cache (on same MCM as CPU) — there can also be a 3rd (or more) cache level here
  (The cache levels are usually made invisible to the programmer, even assembly programmers.)
• Physical memory (usually mounted on same board as CPU)
  (The caches and physical memory are our focus in Chapter 5.)
• Virtual memory (on hard disk, often in same enclosure as CPU)
• Disk files (on hard disk, often in same enclosure as CPU)
• Network-accessible disk files (often in the same building as the CPU)
• Tape backup/archive system (often in the same building as the CPU)
• Data warehouse: robotically accessed room full of shelves of tapes (usually on the same planet as the CPU)

Page 3: Chapter 5 Memory Hierarchy Design

Simple Hierarchy Example

• Note the many orders of magnitude change in characteristics between levels:

[Figure: capacity ratios between adjacent levels of ×128, ×8192, and ×200, and random-access-time ratios of ×4, ×100, and ×50,000; e.g., 2 GB physical memory and a 1 TB disk with ~10 ms access time.]

Page 4: Chapter 5 Memory Hierarchy Design

Why More on Memory Hierarchy?

[Figure: processor vs. memory performance, 1980–2010, log scale from 1 to 100,000; the processor–memory performance gap keeps growing.]

Page 5: Chapter 5 Memory Hierarchy Design

Three Types of Misses

• Compulsory
  – During a program, the very first access to a block will not be in the cache (unless prefetched)
• Capacity
  – The working set of blocks accessed by the program is too large to fit in the cache
• Conflict
  – Unless the cache is fully associative, blocks may sometimes be evicted too early (compared to a fully associative cache) because too many frequently accessed blocks map to the same limited set of frames

Page 6: Chapter 5 Memory Hierarchy Design

Misses by Type

• Conflict misses are significant in a direct-mapped cache.
• Going from direct-mapped to 2-way helps about as much as doubling the cache size.
• Going from direct-mapped to 4-way is better than doubling the cache size.

Page 7: Chapter 5 Memory Hierarchy Design

[Figure: misses by type, shown as a fraction of total misses.]

Page 8: Chapter 5 Memory Hierarchy Design

Cache Performance

• Consider memory delays in calculating CPU time:

  CPU time = IC × (CPI_execution + Memory stall cycles per instruction) × Cycle time
  Memory stall cycles per instruction = Memory accesses per instruction × Miss rate × Miss penalty

• Example: ideal CPI = 1, 1.5 references per instruction, miss rate = 2%, miss penalty = 100 CPU cycles:

  CPU time = IC × (1.0 + 1.5 × 2% × 100) × Cycle time = 4.0 × IC × Cycle time

  (A small code sketch of this calculation follows below.)

• An in-order pipeline is assumed
  – The lower the ideal CPI, the higher the relative impact of cache misses
  – Because the penalty is measured in CPU cycles, a faster cycle time means a larger miss penalty in cycles
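
As an illustration only (not from the original slides), here is a minimal C sketch of the CPU-time formula above, reproducing the example's numbers; all names are ours:

#include <stdio.h>

/* CPU time per the slide's formula, in the same unit as cycle_time.
   ic: instruction count; cpi_exec: ideal CPI; refs_per_inst: memory
   accesses per instruction; miss_rate: fraction; miss_penalty: cycles. */
double cpu_time(double ic, double cpi_exec, double refs_per_inst,
                double miss_rate, double miss_penalty, double cycle_time)
{
    double stall_cycles = refs_per_inst * miss_rate * miss_penalty;
    return ic * (cpi_exec + stall_cycles) * cycle_time;
}

int main(void)
{
    /* Slide example: CPI = 1, 1.5 refs/inst, 2% miss rate, 100-cycle penalty;
       with ic = 1 and cycle_time = 1, the result is the 4.0 coefficient. */
    printf("%.1f x IC x Cycle time\n",
           cpu_time(1.0, 1.0, 1.5, 0.02, 100.0, 1.0));
    return 0;
}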

Page 9: Chapter 5 Memory Hierarchy Design

Cache Performance Example

• Ideal-L1 CPI = 2.0, 1.5 refs/instruction, cache size = 64 KB, miss penalty = 75 ns, hit time = 1 clock cycle
• Compare the performance of two caches:
  – Direct-mapped (1-way): cycle time = 1 ns, miss rate = 1.4%
  – 2-way: cycle time = 1.25 ns, miss rate = 1.0%

  Miss penalty(1-way) = 75 ns / 1 ns = 75 cycles
  CPU time(1-way) = IC × (2.0 + 1.4% × 1.5 × 75) × 1 ns = 3.575 × IC ns

  Miss penalty(2-way) = 75 ns / 1.25 ns = 60 cycles
  CPU time(2-way) = IC × (2.0 + 1% × 1.5 × 60) × 1.25 ns = 3.625 × IC ns

Page 10: Chapter 5 Memory Hierarchy Design

Out-of-Order Processor

• Define a new "miss penalty" that accounts for overlap
  – Need to divide the total latency into memory latency and overlapped latency
  – Not straightforward

  Memory stall cycles / Instruction =
      (Misses / Instruction) × (Total miss latency − Overlapped miss latency)

• Example (from the previous slide)
  – Assume 30% of the 75 ns penalty can be overlapped, but with a longer (1.25 ns) cycle on the 1-way design due to OOO

  Miss penalty(1-way) = 75 ns × 70% / 1.25 ns = 42 cycles
  CPU time(1-way, OOO) = IC × (2.0 + 1.4% × 1.5 × 42) × 1.25 ns = 3.60 × IC ns

Page 11: Chapter 5 Memory Hierarchy Design

An Alternative Metric

  T_acc = T_hit + f_miss × T_+miss
  (Average memory access time) = (Hit time) + (Miss rate) × (Miss penalty)

• The times T_acc, T_hit, and T_+miss can be either:
  – Real time (e.g., nanoseconds) or a number of clock cycles
  – T_+miss means the extra (not total) time (or cycles) for a miss, in addition to T_hit, which is incurred by all accesses
• Average memory access time does not bring other instructions into the formula, so the same example gives different results:

  [Diagram: CPU → Cache → lower levels of hierarchy; the hit time covers CPU-to-cache, the miss penalty covers cache-to-lower-levels.]

  Average mem access time(1-way) = 1.0 ns + (1.4% × 75 × 1 ns) = 2.05 ns
  Average mem access time(2-way) = 1.25 ns + (1% × 60 × 1.25 ns) = 2.0 ns

Page 12: Chapter 5 Memory Hierarchy Design

Another View (Multi-cycle Cache)

• Instead of increasing the cycle time, the 2-way cache can take two cycles per cache access; the average memory access time then becomes:

  Average mem access time(2-way) = 2.0 × 1 ns + (1% × 75 × 1 ns) = 2.75 ns

  versus the longer-cycle version from the previous slide:

  Average mem access time(2-way) = 1.0 × 1.25 ns + (1% × 60 × 1.25 ns) = 2.0 ns

• In reality, not every second cycle of a 2-cycle load stalls a dependent instruction; if only 20% do (note the formula above assumes the extra cycle impacts all accesses):

  Average mem access time(2-way) = 1.2 × 1 ns + (1% × 75 × 1 ns) = 1.95 ns

• Note that a longer cycle time always impacts all instructions

Page 13: Chapter 5 Memory Hierarchy Design

Cache Performance

• Consider the cache performance equation:

  (Average memory access time) = (Hit time) + (Miss rate) × (Miss penalty)

  The product (Miss rate) × (Miss penalty) is the "amortized miss penalty".

• It follows that there are four basic ways to improve cache performance:
  – Reducing miss penalty
  – Reducing miss rate
  – Reducing miss penalty/rate via parallelism
  – Reducing hit time
• Note that by Amdahl's Law, there will be diminishing returns from reducing only the hit time or only the amortized miss penalty, rather than both together.

Page 14: Chapter 5 Memory Hierarchy Design

6 Basic Cache Optimizations

• Reducing hit time
  1. Giving reads priority over writes
     • E.g., a read can complete before earlier writes still waiting in the write buffer
  2. Avoiding address translation during cache indexing
• Reducing miss penalty
  3. Multilevel caches
• Reducing miss rate
  4. Larger block size (compulsory misses)
  5. Larger cache size (capacity misses)
  6. Higher associativity (conflict misses)

Page 15: Chapter 5 Memory Hierarchy Design

Multiple-Level Caches

• Avg mem access time = Hit time(L1) + Miss rate(L1) × Miss penalty(L1)
• Miss penalty(L1) = Hit time(L2) + Miss rate(L2) × Miss penalty(L2)
• Plugging the second equation into the first (see the sketch below):

  Avg mem access time = Hit time(L1) + Miss rate(L1) × (Hit time(L2) + Miss rate(L2) × Miss penalty(L2))
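
A minimal C sketch of this two-level expansion (illustrative only; the function name, parameters, and example numbers are ours, not from the slides):

#include <stdio.h>

/* Two-level average memory access time, per the slide's expansion.
   Times are in ns; miss rates are the local rates of each level. */
double amat2(double hit_l1, double miss_l1,
             double hit_l2, double miss_l2, double penalty_l2)
{
    double penalty_l1 = hit_l2 + miss_l2 * penalty_l2;  /* L1 miss penalty */
    return hit_l1 + miss_l1 * penalty_l1;
}

int main(void)
{
    /* Hypothetical numbers: 1 ns L1 hit, 4% L1 miss rate,
       10 ns L2 hit, 20% local L2 miss rate, 100 ns to memory. */
    printf("AMAT = %.2f ns\n", amat2(1.0, 0.04, 10.0, 0.20, 100.0));
    return 0;  /* 1 + 0.04 * (10 + 0.2 * 100) = 2.20 ns */
}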

Page 16: Chapter 5 Memory Hierarchy Design

Multi-level Cache Terminology

• "Local miss rate"
  – The miss rate of one hierarchy level by itself
  – # of misses at that level / # of accesses to that level
  – E.g., Miss rate(L1), Miss rate(L2)
• "Global miss rate"
  – The miss rate of a whole group of hierarchy levels
  – # of accesses going out of that group (to lower levels) / # of accesses into that group
  – Generally this is the product of the local miss rates at each level in the group (worked example below)
  – Global L2 miss rate = Miss rate(L1) × Local miss rate(L2)
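
For example (numbers illustrative): out of 1000 L1 accesses, a 4% L1 miss rate sends 40 accesses to L2; if L2's local miss rate is 20%, then 8 of those go below L2, so the global L2 miss rate is 8/1000 = 0.8% = 4% × 20%.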

Page 17: Chapter 5 Memory Hierarchy Design

Effect of 2-level Caching

• L2 size is usually much bigger than L1
  – Provides a reasonable hit rate
  – Decreases the miss penalty of the 1st-level cache
  – May increase the L2 miss penalty
• Multiple-level cache inclusion property
  – Inclusive cache: L1 is a subset of L2; simplifies the cache coherence mechanism, but effective cache size = L2
  – Exclusive cache: L1 and L2 contents are disjoint; increases the effective cache size to L1 + L2
  – Enforcing the inclusion property requires backward invalidation of L1 on an L2 replacement

Page 18: Chapter 5 Memory Hierarchy Design

11 Advanced Cache Optimizations

• Reducing hit time
  – Small and simple caches
  – Way prediction
  – Trace caches
• Increasing cache bandwidth
  – Pipelined caches
  – Multibanked caches
  – Nonblocking caches
• Reducing miss penalty
  – Critical word first
  – Merging write buffers
• Reducing miss rate
  – Compiler optimizations
• Reducing miss penalty or miss rate via parallelism
  – Hardware prefetching
  – Compiler prefetching

Page 19: Chapter 5 Memory Hierarchy Design

1. Fast Hit Times via Small and Simple Caches

• Indexing the tag memory and then comparing takes time
• A small cache helps hit time, since a smaller memory takes less time to index
  – E.g., the L1 caches are the same size across 3 generations of AMD microprocessors: K6, Athlon, and Opteron
  – Keeping the L2 cache small enough to fit on chip with the processor also avoids the time penalty of going off chip
• Simple direct mapping
  – The tag check can overlap with data transmission, since there is no way choice to make
• Access time estimates for 90 nm using the CACTI 4.0 model
  – Median access times relative to a direct-mapped cache are 1.32, 1.39, and 1.43 for 2-way, 4-way, and 8-way caches

[Figure: CACTI access time (ns) vs. cache size, 16 KB to 1 MB, for 1-way, 2-way, 4-way, and 8-way caches.]

Page 20: Chapter 5 Memory Hierarchy Design

2. Fast Hit Times via Way Prediction

• How can we combine the fast hit time of a direct-mapped cache with the lower conflict misses of a 2-way set-associative cache?
• Way prediction: keep extra bits in the cache to predict the "way" (block within the set) of the next cache access
  – The multiplexor is set early to select the desired block; only 1 tag comparison is performed that clock cycle, in parallel with reading the cache data
  – On a way misprediction, check the other blocks for matches in the next clock cycle
• Accuracy is about 85% (can be higher with more history)
• Drawback: the CPU pipeline is harder to design if a hit can take 1 or 2 cycles
  – Therefore used for instruction caches rather than data caches

[Diagram: a correctly predicted way gives the hit time; a way-miss costs an extra hit time; a cache miss costs the miss penalty.]

Page 21: Chapter 5 Memory Hierarchy Design

3. Fast Hit Times via Trace Cache (Pentium 4 only; and for the last time?)

• How to find more instruction-level parallelism? How to avoid repeated translation from x86 to micro-ops?
• Trace cache in the Pentium 4
• Caches dynamic traces of the executed instructions, rather than static sequences of instructions as determined by layout in memory
  – Built-in branch predictor
• Caches micro-ops rather than x86 instructions
  – Decode/translate from x86 to micro-ops only on a trace cache miss
• + Better utilizes long blocks (doesn't exit in the middle of a block or enter at a label in the middle of a block)
• − Complicated address mapping, since addresses are no longer aligned to power-of-2 multiples of the word size
• − Instructions may appear multiple times in multiple dynamic traces, due to different branch outcomes

Page 22: Chapter 5 Memory Hierarchy Design

4. Increasing Cache Bandwidth by Pipelining

• Pipeline the cache access to maintain bandwidth, at the cost of higher latency
• Instruction cache access pipeline stages:
  – 1: Pentium
  – 2: Pentium Pro through Pentium III
  – 4: Pentium 4
• Consequences: greater penalty on mispredicted branches, and more clock cycles between the issue of a load and the use of its data

Page 23: Chapter 5 Memory Hierarchy Design

5. Increasing Cache Bandwidth: Non-Blocking Caches

• A non-blocking (lockup-free) cache allows the data cache to continue to supply cache hits during a miss
  – Requires full/empty bits on registers or out-of-order execution
  – Requires multi-bank memories
• "Hit under miss" reduces the effective miss penalty by doing useful work during a miss instead of ignoring CPU requests
• "Hit under multiple miss" or "miss under miss" may further lower the effective miss penalty by overlapping multiple misses
  – Significantly increases the complexity of the cache controller, as there can be multiple outstanding memory accesses
  – Requires multiple memory banks (otherwise multiple outstanding misses cannot be serviced)
  – The Pentium Pro allows 4 outstanding memory misses

Page 24: Chapter 5 Memory Hierarchy Design

Nonblocking Cache Stats

[Figure: average stall time as a percentage of a blocking cache, for hit-under-1-miss, hit-under-2-misses, and hit-under-64-misses, with averages across benchmarks.]

Page 25: Chapter 5 Memory Hierarchy Design

Value of Hit Under Miss for SPEC

• FP programs on average: AMAT = 0.68 → 0.52 → 0.34 → 0.26
• Integer programs on average: AMAT = 0.24 → 0.20 → 0.19 → 0.19
• 8 KB data cache, direct-mapped, 32 B blocks, 16-cycle miss penalty, SPEC92

[Figure: average memory access time under "hit under n misses" (base, 0→1, 1→2, 2→64) for SPEC92 integer benchmarks (eqntott, espresso, xlisp, compress, mdljsp2) and floating-point benchmarks (ear, fpppp, tomcatv, swm256, doduc, su2cor, wave5, mdljdp2, hydro2d, alvinn, nasa7, spice2g6, ora).]

Page 26: Chapter 5 Memory Hierarchy Design

6. Increasing Cache Bandwidth via Multiple Banks

• Rather than treating the cache as a single monolithic block, divide it into independent banks that can support simultaneous accesses
  – E.g., the T1 ("Niagara") L2 has 4 banks
• Banking works best when accesses naturally spread themselves across the banks, so the mapping of addresses to banks affects the behavior of the memory system
• A simple mapping that works well is "sequential interleaving" (see the sketch below)
  – Spread block addresses sequentially across banks
  – E.g., with 4 banks, bank 0 has all blocks whose address modulo 4 is 0, bank 1 has all blocks whose address modulo 4 is 1, and so on

Page 27: Chapter 5 Memory Hierarchy Design

7. Reduce Miss Penalty: Early Restart and Critical Word First

• Don't wait for the full block before restarting the CPU
• Early restart — as soon as the requested word of the block arrives, send it to the CPU and let the CPU continue execution
  – Spatial locality means the CPU tends to want the next sequential word soon anyway, so the size of the benefit from early restart alone is unclear
• Critical word first — request the missed word first from memory and send it to the CPU as soon as it arrives; let the CPU continue execution while the rest of the words in the block are filled in
  – With the long blocks popular today, critical word first is widely used

Page 28: Chapter 5 Memory Hierarchy Design

8. Merging Write Buffer to Reduce Miss Penalty

• A write buffer allows the processor to continue while waiting for a write to memory to complete
• If the buffer contains modified blocks, the addresses can be checked to see whether the address of the new data matches the address of a valid write-buffer entry
• If so, the new data are combined with that entry
• This increases the block size of writes for write-through caches when writes go to sequential words or bytes, since multiword writes are more efficient to memory
• The Sun T1 (Niagara) processor, among many others, uses write merging

Page 29: Chapter 5 Memory Hierarchy Design

9. Reducing Misses by Compiler Optimizations

• McFarling [1989] reduced cache misses by 75% (8 KB direct-mapped cache, 4-byte blocks) in software
• Instructions
  – Reorder procedures in memory so as to reduce conflict misses
  – Use profiling to look at conflicts (using tools they developed)
• Data
  – Merging arrays: improve spatial locality by using a single array of compound elements instead of 2 separate arrays
  – Loop interchange: change the nesting of loops to access data in the order it is stored in memory
  – Loop fusion: combine 2 independent loops that have the same looping structure and some overlapping variables
  – Blocking: improve temporal locality by accessing "blocks" of data repeatedly instead of going down whole columns or rows

Page 30: Chapter 5 Memory Hierarchy Design

Merging Arrays Example

/* Before: 2 sequential arrays */
int val[SIZE];
int key[SIZE];

/* After: 1 array of structures */
struct merge {
    int val;
    int key;
};
struct merge merged_array[SIZE];

• Reduces conflicts between val & key; improves spatial locality when they are accessed in an interleaved fashion

Page 31: Chapter 5 Memory Hierarchy Design

Loop Interchange Example

/* Before */
for (k = 0; k < 100; k = k+1)
    for (j = 0; j < 100; j = j+1)
        for (i = 0; i < 5000; i = i+1)
            x[i][j] = 2 * x[i][j];

/* After */
for (k = 0; k < 100; k = k+1)
    for (i = 0; i < 5000; i = i+1)
        for (j = 0; j < 100; j = j+1)
            x[i][j] = 2 * x[i][j];

• Sequential accesses instead of striding through memory every 100 words; improved spatial locality

Page 32: Chapter 5 Memory Hierarchy Design

Loop Fusion Example

/* Before */
for (i = 0; i < N; i = i+1)
    for (j = 0; j < N; j = j+1)
        a[i][j] = 1/b[i][j] * c[i][j];
for (i = 0; i < N; i = i+1)
    for (j = 0; j < N; j = j+1)
        d[i][j] = a[i][j] + c[i][j];

/* After */
for (i = 0; i < N; i = i+1)
    for (j = 0; j < N; j = j+1) {
        a[i][j] = 1/b[i][j] * c[i][j];
        d[i][j] = a[i][j] + c[i][j];
    }

• 2 misses per access to a & c before vs. one miss per access after; improves temporal locality (a[i][j] and c[i][j] are reused while still in the cache)

Page 33: Chapter 5 Memory Hierarchy Design

Blocking Example

/* Before */
for (i = 0; i < N; i = i+1)
    for (j = 0; j < N; j = j+1) {
        r = 0;
        for (k = 0; k < N; k = k+1)
            r = r + y[i][k] * z[k][j];
        x[i][j] = r;
    }

• Two inner loops:
  – Read all N×N elements of z[]
  – Read N elements of 1 row of y[] repeatedly
  – Write N elements of 1 row of x[]
• Capacity misses are a function of N and the cache size:
  – 2N³ + N² words accessed (assuming no conflicts; otherwise more)
• Idea: compute on a B×B submatrix that fits in the cache

Page 34: Chapter 5 Memory Hierarchy Design

Blocking Example

/* After */
for (jj = 0; jj < N; jj = jj+B)
    for (kk = 0; kk < N; kk = kk+B)
        for (i = 0; i < N; i = i+1)
            for (j = jj; j < min(jj+B-1, N); j = j+1) {
                r = 0;
                for (k = kk; k < min(kk+B-1, N); k = k+1)
                    r = r + y[i][k] * z[k][j];
                x[i][j] = x[i][j] + r;
            }

• B is called the blocking factor
• Capacity misses drop from 2N³ + N² to 2N³/B + N²
• Conflict misses too?

Page 35: Chapter 5 Memory Hierarchy Design

Loop Blocking – Matrix Multiply

[Figure: before and after diagrams of the access patterns of a blocked matrix multiply; see the sketch below.]
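
The original before/after diagrams are images; the following C sketch is our reconstruction of the transformation they depict (matching the blocking code of the previous slides, not the original figure's code):

#define min(a, b) ((a) < (b) ? (a) : (b))

/* Before: naive x += y*z; every row of x streams all of z through
   the cache again */
void matmul_naive(int n, double **x, double **y, double **z)
{
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            for (int k = 0; k < n; k++)
                x[i][j] += y[i][k] * z[k][j];
}

/* After: blocked on j and k; each B x B tile of z is reused from the
   cache across a whole stripe of x */
void matmul_blocked(int n, int b, double **x, double **y, double **z)
{
    for (int jj = 0; jj < n; jj += b)
        for (int kk = 0; kk < n; kk += b)
            for (int i = 0; i < n; i++)
                for (int j = jj; j < min(jj + b, n); j++) {
                    double r = 0;
                    for (int k = kk; k < min(kk + b, n); k++)
                        r += y[i][k] * z[k][j];
                    x[i][j] += r;
                }
}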

Page 36: Chapter 5 Memory Hierarchy Design

Reducing Conflict Misses by Blocking

• In caches that are not fully associative, conflict misses depend on the blocking factor
  – Lam et al. [1991]: a blocking factor of 24 had one fifth the misses of a factor of 48, even though both fit in the cache

[Figure: miss rate (0 to 0.1) vs. blocking factor (0 to 150), for a fully associative cache and a direct-mapped cache.]

Page 37: Chapter 5 Memory Hierarchy Design

Summary of Compiler Optimizations to Reduce Cache Misses (by hand)

[Figure: performance improvement (1× to 3×) from merged arrays, loop interchange, loop fusion, and blocking, on compress, cholesky (nasa7), spice, mxm (nasa7), btrix (nasa7), tomcatv, gmty (nasa7), and vpenta (nasa7).]

Page 38: Chapter 5 Memory Hierarchy Design

10. Reducing Misses by Hardware Prefetching of Instructions & Data

• Prefetching relies on having extra memory bandwidth that can be used without penalty
• Instruction prefetching
  – Typically, the CPU fetches 2 blocks on a miss: the requested block and the next consecutive block
  – The requested block is placed in the instruction cache when it returns; the prefetched block is placed in an instruction stream buffer
• Data prefetching
  – The Pentium 4 can prefetch data into the L2 cache from up to 8 streams from 8 different 4 KB pages
  – Prefetching is invoked on 2 successive L2 cache misses to a page, if the distance between those cache blocks is < 256 bytes

[Figure: performance improvement from hardware prefetching on SPECint2000 and SPECfp2000 benchmarks, ranging from 1.16 to 1.97.]

Page 39: Chapter 5 Memory Hierarchy Design

11. Reducing Misses by Software Prefetching Data

• Data prefetch variants (a sketch using a compiler prefetch builtin follows below):
  – Load data into a register (HP PA-RISC loads)
  – Cache prefetch: load into the cache (MIPS IV, PowerPC, SPARC v9)
  – Special prefetching instructions cannot cause faults; a form of speculative execution
• Issuing prefetch instructions takes time
  – Is the cost of the prefetch issues < the savings in reduced misses?
  – Wider superscalar processors reduce the difficulty of finding issue bandwidth
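
As an illustration (not from the slides), a C sketch of software prefetching using the GCC/Clang __builtin_prefetch intrinsic; the prefetch distance of 16 elements is an assumed tuning parameter:

#define DIST 16  /* assumed prefetch distance, in elements */

double sum(const double *a, int n)
{
    double s = 0.0;
    for (int i = 0; i < n; i++) {
        if (i + DIST < n)
            __builtin_prefetch(&a[i + DIST]);  /* hint: fetch ahead; cannot fault */
        s += a[i];
    }
    return s;
}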

Page 40: Chapter 5 Memory Hierarchy Design

Compiler Optimization vs. Memory Hierarchy Search

• The compiler tries to figure out memory hierarchy optimizations statically
• New approach: "auto-tuners" first run variations of the program on the target computer to find the best combination of optimizations (blocking, padding, …) and algorithms, then produce C code to be compiled for that computer
• Auto-tuners are typically targeted at numerical methods
  – E.g., PHiPAC (BLAS), ATLAS (BLAS), Sparsity (sparse linear algebra), Spiral (DSP), FFTW
• Note: Figure 5.11 summarizes the impact of all these methods on cache performance and complexity.

Page 41: Chapter 5 Memory Hierarchy Design

Main Memory

• Some definitions:
  – Bandwidth (BW): bytes read or written per unit time
  – Latency: described by
    • Access time: delay between access initiation and completion
      – For reads: from presenting the address until the result is ready
    • Cycle time: minimum interval between separate requests to memory
  – Address lines: a separate CPU-to-memory bus to carry addresses (not usually counted in BW figures)
  – RAS (Row Access Strobe)
    • First half of the address, sent first
  – CAS (Column Access Strobe)
    • Second half of the address, sent second

Page 42: Chapter 5 Memory Hierarchy Design

RAS vs. CAS (saves address pins)

DRAM bit-cell array access:
1. RAS selects a row
2. Parallel readout of all row data
3. CAS selects a column to read
4. The selected bit is written to the memory bus

(A toy address-split sketch follows below.)
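
A toy C sketch of this address multiplexing (illustrative only; the 14/14 split is taken from the 256 Mbit organization on the next slide, and which half is row vs. column is our assumption):

#include <stdio.h>

/* Toy model: a 28-bit DRAM address is sent over 14 pins in two halves:
   the high 14 bits (row, with RAS) first, then the low 14 bits
   (column, with CAS). */
#define ROW_BITS 14
#define COL_BITS 14

int main(void)
{
    unsigned addr = 0x1234567u & ((1u << (ROW_BITS + COL_BITS)) - 1u);
    unsigned row  = addr >> COL_BITS;                 /* first half, RAS */
    unsigned col  = addr & ((1u << COL_BITS) - 1u);   /* second half, CAS */
    printf("addr=0x%07X -> row=0x%04X col=0x%04X\n", addr, row, col);
    return 0;
}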

Page 43: Chapter 5 Memory Hierarchy Design

Types of Memory

• DRAM (Dynamic Random Access Memory)
  – The cell design needs only 1 transistor per bit stored
  – Cell charges leak away and may dynamically (over time) drift from their initial levels
  – Requires periodic refreshing to correct the drift
    • E.g., every 8 ms
    • Time spent refreshing is kept to < 5% of bandwidth
• SRAM (Static Random Access Memory)
  – Cell voltages are statically (unchangingly) tied to power-supply references: no drift, no refresh
  – But needs 4–6 transistors per bit
• DRAM: 4–8× larger capacity, 8–16× slower, 8–16× cheaper per bit
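
As a worked example with illustrative numbers (not from the slides): if the array has 8192 rows and each row must be refreshed at least every 8 ms, the controller issues one row refresh about every 8 ms / 8192 ≈ 1 µs; if each refresh occupies the part for roughly 50 ns, the overhead is 50 ns / 1 µs = 5%, around the budget the slide mentions.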

Page 44: Chapter 5 Memory Hierarchy Design

Typical DRAM Organization (256 Mbit)

[Figure: 256 Mbit DRAM organization; the 28-bit address is split into a high 14 bits and a low 14 bits.]

Page 45: Chapter 5 Memory Hierarchy Design

Amdahl/Case Rule

• Memory size (and I/O bandwidth) should grow linearly with CPU speed
  – Typical: 1 MB of main memory and 1 Mbps of I/O bandwidth per 1 MIPS of CPU performance
• It takes a fairly constant ~8 seconds to scan the entire memory (if memory bandwidth = I/O bandwidth, 4 bytes/load, 1 load per 4 instructions, and latency is not a problem)
• Moore's Law:
  – DRAM size doubles every 18 months (up 60%/yr)
  – Tracks processor speed improvements
• Unfortunately, DRAM latency has decreased only 7%/yr! Latency is a big deal.

Page 46: Chapter 5 Memory Hierarchy Design

Some DRAM Trends

• Since 1998, the rate of increase in chip capacity has slowed to 2× per 2 years:
  – 128 Mb in 1998
  – 256 Mb in 2000
  – 512 Mb in 2002
• See Figure 5.13 for more up-to-date data

Page 47: Chapter 5 Memory Hierarchy Design

Improving DRAM Bandwidth

• Fast page mode: access the same row repeatedly
• Synchronous DRAM (SDRAM): clocked interface, with a controller
• Double data rate (DDR): transfer on both the rising and falling edges of the DRAM clock
  – DDR: 133–200 MHz, 2.5 V
  – DDR2: 266–400 MHz, 1.8 V
  – DDR3: 533–800 MHz, 1.5 V
  – See Figure 5.14 for the detailed naming convention and bandwidths


Recommended