Exploiting Locality in DRAM Xiaodong Zhang College of William and Mary
Page 1: Exploiting Locality in DRAM Xiaodong Zhang College of William and Mary.

Exploiting Locality in DRAM

Xiaodong Zhang

College of William and Mary

Page 2: Exploiting Locality in DRAM Xiaodong Zhang College of William and Mary.

Where is Locality in DRAM? DRAM is the center of the memory hierarchy:

High density and high capacity; low cost but slow access (compared to SRAM).

A cache miss has long been treated as a constant delay. This is wrong: non-uniform access latencies exist within DRAM.

The row buffer serves as a fast cache inside DRAM. Its access patterns have received little attention. Reusing row-buffer data minimizes DRAM latency.

Larger buffers in DRAM for more locality.

Page 3: Exploiting Locality in DRAM Xiaodong Zhang College of William and Mary.

Outline Exploiting locality in Row Buffers

Analysis of access patterns. A solution to eliminate conflict misses.

Cached DRAM (CDRAM) Design and its performance evaluation.

Large off-chip cache design by CDRAM Major problems of L3 caches. Address the problems by CDRAM.

Memory access scheduling A case for fine grain scheduling.

Page 4: Exploiting Locality in DRAM Xiaodong Zhang College of William and Mary.

(Figure: the memory hierarchy — CPU registers, TLB, L1, L2, L3 caches, bus-adapter/controller buffer, buffer cache, CPU–memory bus, I/O bus, I/O controller, disk and disk cache; the row buffer sits inside the DRAM.)

Locality Exploitation in Row Buffer

Page 5: Exploiting Locality in DRAM Xiaodong Zhang College of William and Mary.

Exploiting the Locality in Row Buffers

Zhang et al., Micro-33, 2000 (W&M)

Contributions of this work: it looked into the access patterns in row buffers, found the reason behind row-buffer misses, and proposed an effective solution to minimize them.

The interleaving technique in this paper was adopted by the Sun UltraSPARC IIIi processor series.

Page 6: Exploiting Locality in DRAM Xiaodong Zhang College of William and Mary.

DRAM Access = Latency + Bandwidth Time

(Figure: DRAM latency = precharge + row access + column access, followed by the bus bandwidth time; data moves from the DRAM core through the row buffer to the processor.)

Row buffer misses come from a sequence of accesses to different pages in the same bank.

Page 7: Exploiting Locality in DRAM Xiaodong Zhang College of William and Mary.

Nonuniform DRAM Access Latency

Case 1: Row buffer hit (20+ ns)

Case 2: Row buffer miss (core is precharged, 40+ ns)

Case 3: Row buffer miss (not precharged, ≈ 70 ns)

(Timing bars: case 1 = column access only; case 2 = row access + column access; case 3 = precharge + row access + column access.)
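The three cases above can be sketched as a tiny latency model. The 20/40/70 ns figures are the slide's approximate numbers; the 30 ns precharge term is chosen only so the cases add up, and is not a datasheet value:

```python
def dram_access_ns(row_buffer_hit: bool, precharged: bool) -> int:
    """Approximate DRAM access latency for the three row-buffer cases."""
    COL_ACCESS = 20   # case 1: data is already in the row buffer
    ROW_ACCESS = 20   # extra cost of loading a new row into the buffer
    PRECHARGE = 30    # extra cost when the bank is not yet precharged
    if row_buffer_hit:
        return COL_ACCESS                       # case 1: ~20 ns
    if precharged:
        return ROW_ACCESS + COL_ACCESS          # case 2: ~40 ns
    return PRECHARGE + ROW_ACCESS + COL_ACCESS  # case 3: ~70 ns
```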

Page 8: Exploiting Locality in DRAM Xiaodong Zhang College of William and Mary.

Amdahl’s Law applies in DRAM

Time (ns) to fetch a 128-byte cache block (latency + bandwidth time):

0.8 GB/s (PC100): 70 + 160
2.1 GB/s (PC2100): 70 + 60
6.4 GB/s (Rambus): 70 + 20

As the bandwidth improves, DRAM latency will decide the cache miss penalty.
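The bandwidth-time column follows from dividing the block size by the bus bandwidth (bytes divided by GB/s conveniently equals nanoseconds). A quick check of the slide's numbers, taking the 70 ns latency as given; note 128 B at 2.1 GB/s is ~61 ns, which the slide rounds to 60:

```python
def transfer_ns(block_bytes: int, gb_per_s: float) -> float:
    # bytes / (GB/s) = bytes / (bytes/ns) = ns
    return block_bytes / gb_per_s

LATENCY_NS = 70  # fixed latency portion from the slide
for name, bw in [("PC100", 0.8), ("PC2100", 2.1), ("Rambus", 6.4)]:
    total = LATENCY_NS + transfer_ns(128, bw)
    print(name, round(total))   # PC100 230, PC2100 131, Rambus 90
```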

Page 9: Exploiting Locality in DRAM Xiaodong Zhang College of William and Mary.

Row Buffer Locality Benefit

Objective: serve as many memory requests as possible without accessing the DRAM core.

Latency(row-buffer hit) < Latency(row-buffer miss)

Reduces latency by up to 67%.
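One consistent reading of the 67% figure, under assumed values of a 20 ns hit and a ~60 ns average miss (both hedged from the latency cases earlier): expected latency falls linearly with the row-buffer hit rate h, and a perfect hit rate cuts it by 1 − 20/60 ≈ 67%:

```python
def avg_latency_ns(h: float, hit_ns: float = 20.0, miss_ns: float = 60.0) -> float:
    """Expected DRAM latency given row-buffer hit rate h in [0, 1]."""
    return h * hit_ns + (1.0 - h) * miss_ns

# best case (all hits) versus worst case (all misses)
reduction = 1.0 - avg_latency_ns(1.0) / avg_latency_ns(0.0)
print(f"max reduction: {reduction:.0%}")   # max reduction: 67%
```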

Page 10: Exploiting Locality in DRAM Xiaodong Zhang College of William and Mary.

Row Buffer Misses are Surprisingly High

Standard configuration: conventional cache mapping, page interleaving for DRAM memories, 32 DRAM banks, 2 KB page size; SPEC95 and SPEC2000 workloads.

What is the reason behind this?

(Bar chart: row-buffer miss rates (%), 0–100, for benchmarks including tomcatv, hydro2d, mgrid, applu, compress, and ijpeg.)

Page 11: Exploiting Locality in DRAM Xiaodong Zhang College of William and Mary.

Conventional Page Interleaving

Bank 0: Page 0, Page 4, …
Bank 1: Page 1, Page 5, …
Bank 2: Page 2, Page 6, …
Bank 3: Page 3, Page 7, …

Address format: page index (r bits) | bank (k bits) | page offset (p bits)

Page 12: Exploiting Locality in DRAM Xiaodong Zhang College of William and Mary.

Conflict Sharing in Cache/DRAM

cache address: cache tag (t bits) | cache set index (s bits) | block offset (b bits)
memory address: page index (r bits) | bank (k bits) | page offset (p bits)

Cache-conflicting: same cache set index, different tags. Row-buffer conflicting: same bank index, different pages. Address mapping: the bank index lies within the cache set index bits. Property: for any addresses x and y, if x and y conflict in the cache, they also conflict in the row buffer.
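A small demonstration of the property, with hypothetical field widths: 2 KB pages (11 offset bits), 32 banks (5 bank bits), and an assumed cache whose set-index-plus-block-offset field spans 21 bits, so it covers the bank bits:

```python
PAGE_OFFSET_BITS = 11   # 2 KB page
BANK_BITS = 5           # 32 banks
CACHE_INDEX_BITS = 21   # set index + block offset of an assumed cache

def bank_index(addr: int) -> int:
    """Conventional interleaving: bank bits sit just above the page offset."""
    return (addr >> PAGE_OFFSET_BITS) & ((1 << BANK_BITS) - 1)

def cache_set_bits(addr: int) -> int:
    """Set-index + block-offset portion used for cache placement."""
    return addr & ((1 << CACHE_INDEX_BITS) - 1)

# Two addresses that differ only in the cache tag...
a = 0x12345678
b = a ^ (0x5 << CACHE_INDEX_BITS)
assert cache_set_bits(a) == cache_set_bits(b)  # cache-conflicting
assert bank_index(a) == bank_index(b)          # ...also row-buffer conflicting
```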

Page 13: Exploiting Locality in DRAM Xiaodong Zhang College of William and Mary.

Sources of Misses Symmetry: invariance in results under transformations.

Address mapping symmetry propagates conflicts from the cache address space to the memory address space:

• Cache-conflicting addresses/misses are also row-buffer conflicting addresses/misses.

• Cache write-back addresses conflict with the missed block.

• Upon a miss, if the replaced cache block is dirty, it must be written back to memory before the missed block is loaded.

• The conflict between the dirty-block address and the missed-block address causes a row-buffer miss.

• As replacements of dirty cache blocks occur in sequence, so do write-back conflicts in the row buffer.

Page 14: Exploiting Locality in DRAM Xiaodong Zhang College of William and Mary.

Breaking the Symmetry by Permutation-based Page Interleaving

(Figure: the k-bit bank index is XORed with k bits taken from the L2 cache tag portion of the address; the result is the new bank index. The page index and page offset are unchanged.)
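A minimal sketch of the permutation, with hypothetical field widths (2 KB pages, 32 banks, 21 cache-index bits). It checks both properties that follow: cache-conflicting addresses spread over different banks, while blocks inside one page keep a single bank:

```python
PAGE_OFFSET_BITS = 11
BANK_BITS = 5
CACHE_INDEX_BITS = 21
BANK_MASK = (1 << BANK_BITS) - 1

def conventional_bank(addr: int) -> int:
    return (addr >> PAGE_OFFSET_BITS) & BANK_MASK

def xor_bank(addr: int) -> int:
    """Permutation-based interleaving: XOR k L2-tag bits into the bank index."""
    tag_bits = (addr >> CACHE_INDEX_BITS) & BANK_MASK
    return conventional_bank(addr) ^ tag_bits

# Cache-conflicting addresses (same low bits, different tags) now spread
# over different banks...
conflicting = [0x00ABC000 + (t << CACHE_INDEX_BITS) for t in range(4)]
assert len({conventional_bank(a) for a in conflicting}) == 1
assert len({xor_bank(a) for a in conflicting}) == 4

# ...while addresses inside one page (same tag, same bank bits) still share
# a bank, preserving spatial locality.
page = [0x00ABC000 + off for off in range(0, 2048, 128)]
assert len({xor_bank(a) for a in page}) == 1
```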

Page 15: Exploiting Locality in DRAM Xiaodong Zhang College of William and Mary.

Permutation Property (1)

Conflicting addresses are distributed onto different banks

(Figure: L2-conflicting addresses share the bank-index bits (e.g., 1010) but differ in the tag bits (1000–1011). Conventional interleaving maps them all to the same bank; XORing the tag bits into the bank index yields four different banks.)

Page 16: Exploiting Locality in DRAM Xiaodong Zhang College of William and Mary.

Permutation Property (2)

The spatial locality of memory references is preserved.

(Figure: addresses within one page share both the tag bits (e.g., 1000) and the bank bits (1010), so conventional and permutation-based interleaving both map them to the same bank — spatial locality within a page is preserved.)

Page 17: Exploiting Locality in DRAM Xiaodong Zhang College of William and Mary.

Permutation Property (3) Pages are uniformly mapped onto ALL memory banks. P: page, C: the number of pages the (L2/L3) cache holds.

bank 0: 0, 4P, C+1P, C+5P, 2C+2P, 2C+6P, …
bank 1: 1P, 5P, C, C+4P, 2C+3P, 2C+7P, …
bank 2: 2P, 6P, C+3P, C+7P, 2C, 2C+4P, …
bank 3: 3P, 7P, C+2P, C+6P, 2C+1P, 2C+5P, …

Page 18: Exploiting Locality in DRAM Xiaodong Zhang College of William and Mary.

A Solution of ``Swap”

DEC architects ``swapped” partial bits of the L2 tag with partial bits of the page offset for the AlphaStation 600 5-series (Digital Technical Journal, 1995).

An optimal number of swapped bits was tested by Wong and Baer (Washington, 97).

We showed why this only partially solves the problem.

Page 19: Exploiting Locality in DRAM Xiaodong Zhang College of William and Mary.

Row-buffer Miss Rates

(Bar chart: row-buffer miss rates (%), 0–100, under cache-line, page, swap, and permutation interleaving.)

Page 20: Exploiting Locality in DRAM Xiaodong Zhang College of William and Mary.

Comparison of Memory Stall Times

(Bar chart: normalized memory stall time, 0–1.4, under cache-line, page, swap, and permutation interleaving.)

Page 21: Exploiting Locality in DRAM Xiaodong Zhang College of William and Mary.

Measuring IPC (#instructions per cycle)

(Bar chart: normalized IPC for tomcatv, swim, su2cor, hydro2d, mgrid, applu, turb3d, wave5, and TPC-C under cache-line, page, swap, and permutation interleaving.)

Page 22: Exploiting Locality in DRAM Xiaodong Zhang College of William and Mary.

Where to Break the Symmetry?

Breaking the symmetry at the bottom level (the DRAM address) is most effective:

It is far from the critical path (little overhead).

It reduces both address conflicts and write-back conflicts.

Our experiments confirm this (a 30% difference).

Page 23: Exploiting Locality in DRAM Xiaodong Zhang College of William and Mary.

Impact on Commercial Systems

We critically exposed the address mapping problem in the Compaq XP1000 series and provided an effective solution.

Our method has been adopted in the Sun UltraSPARC IIIi processor series, called XOR interleaving.

Chief architect Kevin Normoyle had intensive discussions with us about the adoption in 2001.

The results in the Micro-33 paper on ``conflict propagation” and ``write-back conflicts” are quoted in the Sun UltraSPARC product manuals.

Sun Microsystems has formally acknowledged our research contribution to their products.

Page 24: Exploiting Locality in DRAM Xiaodong Zhang College of William and Mary.

Outline Exploiting locality in Row Buffers

Analysis of access patterns. A solution to eliminate conflict misses.

Cached DRAM (CDRAM) Design and its performance evaluation.

Large off-chip cache design by CDRAM Major problems of L3 caches. Address the problems by CDRAM.

Memory access scheduling A case for fine grain scheduling.

Page 25: Exploiting Locality in DRAM Xiaodong Zhang College of William and Mary.

Can We Exploit More Locality in DRAM?

Cached DRAM: adding a small on-memory cache in the memory core.

Exploiting the locality in main memory by the cache.

High bandwidth between the cache and memory core.

Fast response to a single memory request that hits in the cache.

Pipelining multiple memory requests starting from the memory controller via the memory bus, the cache, and the DRAM core (if on-memory cache misses happen).

Page 26: Exploiting Locality in DRAM Xiaodong Zhang College of William and Mary.

Cached DRAM

(Figure: CPU → L1 cache → L2 cache → memory bus → cached DRAM, which places an on-memory cache in front of the DRAM core. The memory bus moves one cache line per bus cycle (low bandwidth); the internal path moves a full page per internal bus cycle (high bandwidth).)

Page 27: Exploiting Locality in DRAM Xiaodong Zhang College of William and Mary.

Improvement of IPC (# of instructions per cycle)

(Bar chart: IPC, 0–3, for SDRAM vs. CDRAM.)

Page 28: Exploiting Locality in DRAM Xiaodong Zhang College of William and Mary.

Cached DRAM vs. XOR Interleaving (16 × 4 KB on-memory cache for CDRAM; 32 × 2 KB row buffers for XOR interleaving among 32 banks)

(Bar chart: improvement over SDRAM (%), 0–40, for TPC-C, tomcatv, swim, su2cor, hydro2d, mgrid, applu, turb3d, and wave5 — cached DRAM vs. permutation-based page interleaving.)

Page 29: Exploiting Locality in DRAM Xiaodong Zhang College of William and Mary.

Pros and Cons of CDRAM over XOR Interleaving

Merits:

High hit rates in the on-memory cache due to high associativity.

The cache can be accessed simultaneously with the DRAM.

More cache blocks than the number of memory banks.

Limits:

Requires additional chip area in the DRAM core and additional management circuits.

Page 30: Exploiting Locality in DRAM Xiaodong Zhang College of William and Mary.

Outline Exploiting locality in Row Buffers

Analysis of access patterns. A solution to eliminate conflict misses.

Cached DRAM (CDRAM) Design and its performance evaluation.

Large off-chip cache design by CDRAM Major problems of L3 caches. Address the problems by CDRAM.

Memory access scheduling A case for fine grain scheduling.

Page 31: Exploiting Locality in DRAM Xiaodong Zhang College of William and Mary.

Large Off-chip Caches by CDRAM

Large, off-chip L3 caches are commonly used to reduce memory latency. They have limits for large, memory-intensive applications:

The size is still limited (less than 10 MB).

Access latency is large (10+ times that of an on-chip cache). The volume of L3 tags is large (tag checking time ∝ log(tag size)), and the tags are stored off-chip.

A study shows that L3 can degrade performance for some applications (DEC Report, 1996).

Page 32: Exploiting Locality in DRAM Xiaodong Zhang College of William and Mary.

Can CDRAM Address L3 Problems?

What happens if L3 is replaced by CDRAM?

The size of CDRAM is sufficiently large; but how could its average latency be comparable to, or even lower than, an L3 cache?

The challenge is to reduce the access latency of this huge ``off-chip cache”.

``Cached DRAM Cache” (CDC) addresses the L3 problem; Zhang et al., IEEE Transactions on Computers, 2004 (W&M).

Page 33: Exploiting Locality in DRAM Xiaodong Zhang College of William and Mary.

Cached DRAM Cache as L3 in Memory Hierarchy

(Figure: L1 instruction and data caches feed an L2 unified cache; the CDC tag cache and predictor are on-chip. The memory bus connects, in parallel, to the CDC — CDC-cache plus CDC-DRAM — and to the DRAM main memory.)

Page 34: Exploiting Locality in DRAM Xiaodong Zhang College of William and Mary.

How is the Access Latency Reduced?

The tags of the CDC cache are stored on-chip, demanding very little storage.

Hit rates in the CDC cache are high, due to the high locality of L2 miss streams.

Unlike L3, the CDC is not between L2 and DRAM. It is in parallel with the DRAM memory.

An L2 miss can go either to the CDC or to DRAM via different buses.

Data fetching in the CDC and in DRAM can be done independently.

A predictor is built on-chip using a global history register to determine whether an L2 miss will hit or miss in the CDC.

Its accuracy is quite high.
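A generic sketch of such a predictor — not the paper's exact design — using a global history register of recent CDC hit/miss outcomes to index a table of two-bit saturating counters:

```python
class HitPredictor:
    """History-indexed table of 2-bit saturating counters (illustrative)."""

    def __init__(self, history_bits: int = 8):
        self.history_bits = history_bits
        self.history = 0                        # global history register
        self.table = [1] * (1 << history_bits)  # counters start weakly "miss"

    def predict(self) -> bool:
        """True = predict the L2 miss will hit in the CDC."""
        return self.table[self.history] >= 2

    def update(self, hit: bool) -> None:
        """Train on the actual outcome, then shift it into the history."""
        ctr = self.table[self.history]
        self.table[self.history] = min(3, ctr + 1) if hit else max(0, ctr - 1)
        mask = (1 << self.history_bits) - 1
        self.history = ((self.history << 1) | int(hit)) & mask
```

On a predicted hit, the L2 miss would be routed to the CDC; on a predicted miss, directly to DRAM.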

Page 35: Exploiting Locality in DRAM Xiaodong Zhang College of William and Mary.

Advantages and Performance Gains

Unique advantages

Large capacity, equivalent to the DRAM size, and

Low average latency by (1) exploiting locality in CDC-cache, (2) fast on-chip tag checking for CDC-cache data, (3) accurate prediction of hit/miss in CDC.

Performance of SPEC2000

Outperforms the L3 organization by up to 51%.

Unlike L3, CDC does not degrade the performance of any application.

The average performance improvement is 25%.

Page 36: Exploiting Locality in DRAM Xiaodong Zhang College of William and Mary.

Performance Evaluation by SPEC2000fp

(Bar chart: speedup over the base system, 0–2, for 168.wupwise, 171.swim, 172.mgrid, 173.applu, and 179.art — CDC vs. SRAM L3.)

Page 37: Exploiting Locality in DRAM Xiaodong Zhang College of William and Mary.

Outline Exploiting locality in Row Buffers

Analysis of access patterns. A solution to eliminate conflict misses.

Cached DRAM (CDRAM) Design and its performance evaluation.

Large off-chip cache design by CDRAM Major problems of L3 caches. Address the problems by CDRAM.

Memory access scheduling A case for fine grain scheduling.

Page 38: Exploiting Locality in DRAM Xiaodong Zhang College of William and Mary.

(Figure: the memory controller between the CPU and main memory — a stream buffer unit (FIFOs), an address mapping unit, a memory scheduling unit, and a cache. Memory accesses arrive at the controller in the requested order and are issued to memory in an ``optimal” order.)

Page 39: Exploiting Locality in DRAM Xiaodong Zhang College of William and Mary.

Basic Functions of the Memory Controller

Where is it? It is hardware logic directly connected to the CPU; it generates the signals that control reads/writes and address mapping in the memory, and it interfaces with other system components (CPU, cache).

What does it do specifically?

Pipelining and buffering the requests.

Memory address mapping (e.g., XOR interleaving).

Reordering the memory accesses to improve performance.

Page 40: Exploiting Locality in DRAM Xiaodong Zhang College of William and Mary.

Complex Configuration of Memory Systems

Multi-channel memory systems (e.g., Rambus):

Each channel connects multiple memory devices.

Each device consists of multiple memory banks.

Concurrent operations occur among channels and banks.

How to utilize the rich multi-channel resources?

Maximize the concurrent operations.

Deliver a cache line with its critical sub-block first.

Page 41: Exploiting Locality in DRAM Xiaodong Zhang College of William and Mary.

Multi-channel Memory Systems

(Figure: the CPU/L1 and L2 connect to channels 0 … C−1; each channel hosts devices 0 … D−1; each device contains banks 0 … B−1.)

Page 42: Exploiting Locality in DRAM Xiaodong Zhang College of William and Mary.

Partitioning a Cache Line into Sub-blocks

A smaller sub-block size means shorter latency for critical sub-blocks.

Sub-block size = the minimal request length of the DRAM system (the smallest granularity available in a Direct Rambus system).

(Figure: a cache-miss request is split into multiple DRAM sub-requests to the same bank.)

Page 43: Exploiting Locality in DRAM Xiaodong Zhang College of William and Mary.

Mapping Sub-blocks onto Multi-channels

Evenly distributing sub-blocks to all channels aggregates bandwidth for each cache request.

(Figure: the sub-blocks of one cache-line fill request are striped across channel 0 and channel 1.)
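The striping can be sketched as a round-robin mapping. The 128 B line, 16 B sub-block, and 4-channel figures are taken from the experimental parameters later in the deck:

```python
LINE_BYTES, SUB_BYTES, CHANNELS = 128, 16, 4

def map_sub_blocks(line_addr: int):
    """Yield (channel, sub_block_addr) pairs for one cache-line fill,
    striping consecutive sub-blocks round-robin over all channels."""
    for i in range(LINE_BYTES // SUB_BYTES):
        yield i % CHANNELS, line_addr + i * SUB_BYTES

per_channel = {}
for ch, addr in map_sub_blocks(0x1000):
    per_channel.setdefault(ch, []).append(addr)
# each of the 4 channels carries 2 of the 8 sub-blocks
```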

Page 44: Exploiting Locality in DRAM Xiaodong Zhang College of William and Mary.

Priority Ranks of Sub-blocks

Read-bypass-write: a ``read” is on the critical path and tolerates less delay than a write; a memory ``write” can be overlapped with other operations.

Hit-first: row-buffer hits are served before the row is replaced.

Ranks for reads/writes:

Critical: critical load sub-requests of cache read misses. Load: non-critical load sub-requests of cache read misses. Store: load sub-requests of cache write misses.

In-order: other serial accesses.
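One plausible encoding of these ranks as a sort key; the precedence of hit-first relative to the read/write ranks is an assumption here, and the field names are illustrative:

```python
# Lower rank value = served earlier.
RANK = {"critical": 0, "load": 1, "store": 2, "in_order": 3}

def schedule(sub_requests):
    """Order sub-requests by rank, with row-buffer hits first within a rank."""
    return sorted(sub_requests,
                  key=lambda r: (RANK[r["rank"]], not r["row_hit"]))

reqs = [
    {"id": 1, "rank": "store",    "row_hit": False},
    {"id": 2, "rank": "critical", "row_hit": False},
    {"id": 3, "rank": "critical", "row_hit": True},
    {"id": 4, "rank": "load",     "row_hit": True},
]
order = [r["id"] for r in schedule(reqs)]   # [3, 2, 4, 1]
```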

Page 45: Exploiting Locality in DRAM Xiaodong Zhang College of William and Mary.

Existing Scheduling Methods for MC

Gang scheduling (Lin et al., HPCA'01, Michigan):

Upon a cache miss, all the channels are used to deliver the line.

Maximizes concurrent operations among the channels.

Effective for a single miss, but not for multiple misses (cache lines have to be delivered one by one).

No consideration of sub-block priority.

Burst scheduling (Cuppu et al., ISCA'01, Maryland): one cache line per channel, with the sub-blocks reordered within each.

Effective for multiple misses, but not for a single or small number of misses (under-utilizes the concurrency of the channels).

Page 46: Exploiting Locality in DRAM Xiaodong Zhang College of William and Mary.

Fine Grain Memory Access Scheduling

Zhu et al., HPCA'02 (W&M).

Sub-block-based, priority-driven scheduling.

All the channels are used at once.

High-priority sub-blocks are always delivered first.

The priority of each critical sub-block is the key.

Page 47: Exploiting Locality in DRAM Xiaodong Zhang College of William and Mary.

Advantages of Fine Grain Scheduling

(Figure: delivery of sub-blocks A0–A7 and B0–B7 of two cache lines. Gang scheduling uses all channels but no priority; burst scheduling uses priority within a channel but not all channels; fine-grain scheduling uses both priority and all channels.)

Page 48: Exploiting Locality in DRAM Xiaodong Zhang College of William and Mary.

Experimental Environment

Simulator: SimpleScalar 3.0b with an event-driven simulation of a multi-channel Direct Rambus DRAM system.

Benchmarks: SPEC CPU2000.

Key parameters: processor 2 GHz, 4-issue; MSHR 16 entries; L1 cache 4-way 64 KB I/D; L2 cache 4-way 1 MB, 128 B block; channels 2 or 4; devices 4 per channel; banks 32 per device; packet length 16 B; precharge 20 ns; row access 20 ns; column access 20 ns.

Page 49: Exploiting Locality in DRAM Xiaodong Zhang College of William and Mary.

Burst Phase in Miss Streams

Execution Time with Multiple Memory Accesses

(Bar chart: fraction (%) of execution time with multiple outstanding memory accesses.)

Page 50: Exploiting Locality in DRAM Xiaodong Zhang College of William and Mary.

Clustering of Multiple Accesses

(Plot: cumulative probability vs. number of concurrent accesses (2–32) for 179.art, 181.mcf, 171.swim, 187.facerec, and 178.galgel.)

Page 51: Exploiting Locality in DRAM Xiaodong Zhang College of William and Mary.

Percentages of Critical Sub-blocks

(Bar chart: fraction (%) of critical sub-blocks, 0–70, for 168.wupwise, 171.swim, 172.mgrid, 173.applu, 178.galgel, 179.art, 187.facerec, 188.ammp, 189.lucas, 301.apsi, 175.vpr, 176.gcc, 181.mcf, 256.bzip2, 300.twolf, and the average.)

Page 52: Exploiting Locality in DRAM Xiaodong Zhang College of William and Mary.

Waiting Time Distribution

(Plot for 179.art: cumulative probability of waiting time (cycles) under fine-grain (critical and non-critical), burst (critical and non-critical), and gang scheduling.)

Page 53: Exploiting Locality in DRAM Xiaodong Zhang College of William and Mary.

Critical Sub-block Distribution in Channels

(Plot: cumulative probability of the number of critical sub-requests per channel (4-channel) for 179.art, 173.applu, and 178.galgel under fine-grain and burst scheduling.)

Page 54: Exploiting Locality in DRAM Xiaodong Zhang College of William and Mary.

Performance Improvement: Fine Grain Over Gang Scheduling

(Bar chart: IPC improvement (%) of fine-grain over gang scheduling, 0–40, with 2 and 4 channels, across the SPEC CPU2000 benchmarks and their average.)

Page 55: Exploiting Locality in DRAM Xiaodong Zhang College of William and Mary.

Performance Improvement: Fine Grain Over Burst Scheduling

(Bar chart: IPC improvement (%) of fine-grain over burst scheduling, 0–45, with 2 and 4 channels, across the SPEC CPU2000 benchmarks and their average.)

Page 56: Exploiting Locality in DRAM Xiaodong Zhang College of William and Mary.

2-channel Fine Grain Vs. 4-channel Gang & Burst Scheduling

(Bar chart: IPC of 2-channel fine-grain vs. 4-channel gang and 4-channel burst scheduling, 0–2.5, across the SPEC CPU2000 benchmarks.)

Page 57: Exploiting Locality in DRAM Xiaodong Zhang College of William and Mary.

Summary of Memory Access Scheduling

Fine-grain priority scheduling — granularity: sub-block based; mapping: utilizes all the channels; scheduling policies: priority based.

It outperforms gang and burst scheduling by effectively utilizing the available bandwidth and concurrency, reducing the average waiting time of cache-miss requests, and reducing processor stall time on memory accesses.

Page 58: Exploiting Locality in DRAM Xiaodong Zhang College of William and Mary.

Conclusion

High locality exists in cache miss streams. Exploiting locality in row buffers can make a great performance difference.

Cached DRAM can further exploit the locality in DRAM.

CDCs can serve as large, low-overhead off-chip caches.

Memory access scheduling plays a critical role.

Exploiting locality in DRAM is unique work, with a direct, positive impact on commercial products.

The locality in DRAM had been ignored for a long time.

It also has an impact on the teaching of computer architecture and organization.
