Page 1: 1 Chapter Seven Large and Fast: Exploiting Memory Hierarchy.


Chapter Seven
Large and Fast: Exploiting Memory Hierarchy

Page 2: 1 Chapter Seven Large and Fast: Exploiting Memory Hierarchy.


• SRAM:

– value is stored on a pair of inverting gates

– very fast but takes up more space than DRAM (4 to 6 transistors)

• DRAM:

– value is stored as a charge on a capacitor (must be refreshed)

– very small but slower than SRAM (factor of 5 to 10)

7.1 Introduction

[Figure: an SRAM cell built from a pair of inverting gates (internal nodes A and B with complementary bit lines), and a DRAM cell consisting of a pass transistor controlled by the word line, a capacitor, and a bit line]

Page 3: 1 Chapter Seven Large and Fast: Exploiting Memory Hierarchy.


• There are three primary technologies used in building memory hierarchies:

1. DRAM (main memory)

2. SRAM (caches)

3. Magnetic Disk

Page 4: 1 Chapter Seven Large and Fast: Exploiting Memory Hierarchy.


Locality

• A principle that makes having a memory hierarchy a good idea

• If an item is referenced,

temporal locality: it will tend to be referenced again soon

spatial locality: nearby items will tend to be referenced soon.

Why does code have locality? (see the C sketch at the end of this page)

• Our initial focus: two levels (upper, lower)

– block: minimum unit of data

– hit: data requested is in the upper level

– miss: data requested is not in the upper level
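As an illustration of the locality question above, here is a minimal C sketch (array size and names are arbitrary): the loops walk the array sequentially, which is spatial locality, and the loop variables, the loop instructions, and the just-written data are reused soon afterward, which is temporal locality.

#include <stdio.h>

#define N 1024

int main(void) {
    int a[N];
    int sum = 0;

    /* Spatial locality: a[i] and a[i + 1] are adjacent in memory, so a
       sequential walk keeps hitting the same cache block. */
    for (int i = 0; i < N; i++)
        a[i] = i;

    /* Temporal locality: sum, i, and the loop instructions themselves are
       reused on every iteration, and the array is re-read soon after
       being written. */
    for (int i = 0; i < N; i++)
        sum += a[i];

    printf("sum = %d\n", sum);
    return 0;
}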

Page 5: 1 Chapter Seven Large and Fast: Exploiting Memory Hierarchy.


• Users want large and fast memories!

• Build memory as a hierarchy of levels (the fastest level is closest to the processor).

Page 6: 1 Chapter Seven Large and Fast: Exploiting Memory Hierarchy.


• Hit rate: the fraction of memory accesses found in the upper level

• Miss rate: the fraction of accesses not found in the upper level (1 - hit rate)

• Hit time: the time to access the upper level, including the time to determine hit or miss

• Miss penalty: the time to fetch a block from the lower level and deliver it to the upper level

Page 7: 1 Chapter Seven Large and Fast: Exploiting Memory Hierarchy.


• Two issues:

– How do we know if a data item is in the cache?

– If it is, how do we find it?

• Our first example:

– block size is one word of data

– "direct mapped"

For each item of data at the lower level, there is exactly one location in the cache where it might be.

e.g., lots of items at the lower level share locations in the upper level

7.2 The Basics of Caches

Page 8: 1 Chapter Seven Large and Fast: Exploiting Memory Hierarchy.


• Mapping: address is modulo the number of blocks in the cache

Direct Mapped Cache

[Figure: an eight-block direct-mapped cache (blocks 000 through 111); memory addresses 00001, 00101, 01001, 01101, 10001, 10101, 11001, 11101 each map to cache block (address modulo 8), i.e., the low three address bits]
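A minimal C sketch of this mapping rule, using the eight-block cache and the block addresses from the figure; the helper name cache_index is mine, not from the text.

#include <stdint.h>
#include <stdio.h>

/* The mapping rule on this slide:
   cache block = (block address) modulo (number of cache blocks). */
static unsigned cache_index(uint32_t block_address, unsigned num_blocks) {
    return (unsigned)(block_address % num_blocks);
}

int main(void) {
    /* The eight memory (block) addresses from the figure: binary 00001 ... 11101. */
    uint32_t addrs[] = {0x01, 0x05, 0x09, 0x0D, 0x11, 0x15, 0x19, 0x1D};
    for (int i = 0; i < 8; i++)
        printf("address %2u -> cache block %u\n", (unsigned)addrs[i],
               cache_index(addrs[i], 8));   /* the figure's cache has 8 blocks */
    return 0;
}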

Page 9: 1 Chapter Seven Large and Fast: Exploiting Memory Hierarchy.


Accessing a Cache

Example: An eight-word direct-mapped cache

Page 10: 1 Chapter Seven Large and Fast: Exploiting Memory Hierarchy.


• For MIPS:

What kind of locality are we taking advantage of?

Direct Mapped Cache

[Figure: a direct-mapped cache with 1024 one-word blocks for MIPS; the 32-bit address is split into a 20-bit tag (bits 31-12), a 10-bit index (bits 11-2), and a 2-bit byte offset (bits 1-0); the index selects one of the 1024 entries (0-1023), the stored tag is compared with the address tag and qualified by the valid bit to produce the Hit signal, and the entry supplies 32 bits of data]
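The field widths below follow the figure above (20-bit tag, 10-bit index, 2-bit byte offset). This is a minimal C sketch of how those fields would be extracted from a 32-bit byte address; it is a software illustration, not the MIPS hardware, and the example address is arbitrary.

#include <stdint.h>
#include <stdio.h>

/* Field widths from the figure: 1024 one-word blocks, 4-byte words. */
#define BYTE_OFFSET_BITS 2    /* bits 1-0   */
#define INDEX_BITS       10   /* bits 11-2  */
/* the remaining 20 bits (31-12) are the tag */

int main(void) {
    uint32_t addr = 0x12345678;   /* an arbitrary example byte address */

    uint32_t byte_offset = addr & ((1u << BYTE_OFFSET_BITS) - 1);
    uint32_t index = (addr >> BYTE_OFFSET_BITS) & ((1u << INDEX_BITS) - 1);
    uint32_t tag = addr >> (BYTE_OFFSET_BITS + INDEX_BITS);

    printf("tag = 0x%05x, index = %u, byte offset = %u\n",
           (unsigned)tag, (unsigned)index, (unsigned)byte_offset);
    return 0;
}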

Page 11: 1 Chapter Seven Large and Fast: Exploiting Memory Hierarchy.


Example: Bits in a Cache

How many total bits are required for a direct-mapped cache with 16 KB of data and 4-word blocks, assuming a 32-bit address?

---------------------------------------------

16 KB = 4K words = 2^12 words

Block size = 4 words (2^2), so there are 2^10 blocks

Each block has 4 x 32 = 128 bits of data plus a tag (32 - 10 - 2 - 2 = 18 bits) plus a valid bit

Thus:

the total cache size = 2^10 x (128 + (32 - 10 - 2 - 2) + 1) = 2^10 x 147 = 147 Kbits
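A small C sketch that reproduces the arithmetic of this example; the variable names are mine, and the index and offset widths are hard-coded to the example's parameters rather than computed from logarithms.

#include <stdio.h>

/* Recomputes the bit count from the example above for a direct-mapped cache. */
int main(void) {
    int addr_bits = 32;
    int data_kb = 16;                 /* 16 KB of data            */
    int words_per_block = 4;          /* 4-word (16-byte) blocks  */

    int total_words = data_kb * 1024 / 4;             /* 4K words           */
    int num_blocks = total_words / words_per_block;   /* 1024 = 2^10 blocks */

    int index_bits = 10;              /* log2(num_blocks)         */
    int block_offset_bits = 2;        /* log2(words_per_block)    */
    int byte_offset_bits = 2;
    int tag_bits = addr_bits - index_bits - block_offset_bits - byte_offset_bits;

    int bits_per_block = words_per_block * 32 + tag_bits + 1;   /* data + tag + valid */
    printf("tag = %d bits, block = %d bits, total = %d Kbits\n",
           tag_bits, bits_per_block, num_blocks * bits_per_block / 1024);
    return 0;
}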

Page 12: 1 Chapter Seven Large and Fast: Exploiting Memory Hierarchy.


Example: Mapping an Address to a Multiword Cache Block

Consider a cache with 64 blocks and a block size of 16 bytes. What block number does byte address 1200 map to?

------------------------------------

(Block address) modulo (Number of cache blocks)

where the address of the block is:

Block address = floor(Byte address / Bytes per block) = floor(1200 / 16) = 75

which maps to cache block number (75 modulo 64) = 11

Notice that this block address is the block containing all addresses between:

floor(Byte address / Bytes per block) x Bytes per block = 75 x 16 = 1200

and

floor(Byte address / Bytes per block) x Bytes per block + (Bytes per block - 1) = 1200 + 15 = 1215

Thus, with 16 bytes per block, byte address 1200 maps to cache block 11 and lies in the block spanning addresses 1200 through 1215.
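The same calculation as a minimal C sketch (variable names are illustrative):

#include <stdio.h>

/* The arithmetic from the example: byte address 1200, 16-byte blocks, 64 blocks. */
int main(void) {
    unsigned byte_address = 1200;
    unsigned bytes_per_block = 16;
    unsigned num_blocks = 64;

    unsigned block_address = byte_address / bytes_per_block;     /* 75             */
    unsigned cache_block   = block_address % num_blocks;         /* 75 mod 64 = 11 */
    unsigned first_byte    = block_address * bytes_per_block;    /* 1200           */
    unsigned last_byte     = first_byte + bytes_per_block - 1;   /* 1215           */

    printf("block address %u -> cache block %u (bytes %u-%u)\n",
           block_address, cache_block, first_byte, last_byte);
    return 0;
}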

Page 13: 1 Chapter Seven Large and Fast: Exploiting Memory Hierarchy.


• Taking advantage of spatial locality:

Direct Mapped Cache

[Figure: a direct-mapped cache with 256 entries and 16-word (512-bit) blocks; the address is split into an 18-bit tag (bits 31-14), an 8-bit index (bits 13-6), a 4-bit block offset (bits 5-2), and a 2-bit byte offset (bits 1-0); the tag comparison and valid bit produce the Hit signal, and a multiplexor selects the requested 32-bit word from the 512-bit block]

Page 14: 1 Chapter Seven Large and Fast: Exploiting Memory Hierarchy.


• Read hits

– this is what we want!

• Read misses

– stall the CPU, fetch block from memory, deliver to cache, restart

• Write hits:

– can replace data in cache and memory (write-through)

– write the data only into the cache (write-back the cache later)

• Write misses:

– read the entire block into the cache, then write the word

Hits vs. Misses
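To make the two write-hit policies above concrete, here is a toy, single-entry C sketch; it is an assumed software model for illustration, not the book's hardware design.

#include <stdint.h>
#include <stdbool.h>

/* Toy single-entry "cache" used only to contrast the two write-hit policies. */
struct line { bool valid; bool dirty; uint32_t tag; uint32_t data; };

/* Write-through: update the cache and memory on every write hit. */
void write_hit_through(struct line *l, uint32_t *memory_word, uint32_t value) {
    l->data = value;
    *memory_word = value;            /* memory is always up to date */
}

/* Write-back: update only the cache; memory is updated when the line is replaced. */
void write_hit_back(struct line *l, uint32_t value) {
    l->data = value;
    l->dirty = true;                 /* remember that memory is now stale */
}

void evict(struct line *l, uint32_t *memory_word) {
    if (l->valid && l->dirty)
        *memory_word = l->data;      /* write the block back on replacement */
    l->valid = false;
    l->dirty = false;
}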

Page 15: 1 Chapter Seven Large and Fast: Exploiting Memory Hierarchy.


• Make reading multiple words easier by using banks of memory

• It can get a lot more complicated...

Hardware Issues

[Figure: three memory organizations between the CPU, cache, bus, and memory: (a) one-word-wide memory organization, (b) wide memory organization with a multiplexor between the cache and the CPU, (c) interleaved memory organization with four banks (memory bank 0 through memory bank 3)]

Page 16: 1 Chapter Seven Large and Fast: Exploiting Memory Hierarchy.


• Increasing the block size tends to decrease miss rate:

• Use split caches because there is more spatial locality in code:

Performance

[Figure: miss rate (0%-40%) versus block size in bytes (4 to 256) for cache sizes of 1 KB, 8 KB, 16 KB, 64 KB, and 256 KB]

Program   Block size (words)   Instruction miss rate   Data miss rate   Effective combined miss rate
gcc       1                    6.1%                    2.1%             5.4%
gcc       4                    2.0%                    1.7%             1.9%
spice     1                    1.2%                    1.3%             1.2%
spice     4                    0.3%                    0.6%             0.4%

Page 17: 1 Chapter Seven Large and Fast: Exploiting Memory Hierarchy.


Performance

• Simplified model:

execution time = (execution cycles + stall cycles) x cycle time

stall cycles = # of instructions x miss ratio x miss penalty

• Two ways of improving performance:

– decreasing the miss ratio

– decreasing the miss penalty

What happens if we increase block size?
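A minimal C sketch of the simplified model above; the instruction count, miss ratio, miss penalty, and cycle time below are assumed example values, not figures from the slide.

#include <stdio.h>

/* Plugs illustrative numbers into the simplified model:
   stall cycles = instructions x miss ratio x miss penalty. */
int main(void) {
    double instructions  = 1e9;
    double base_cpi      = 1.0;
    double miss_ratio    = 0.05;    /* misses per instruction */
    double miss_penalty  = 100.0;   /* cycles                 */
    double cycle_time_ns = 0.5;

    double execution_cycles = instructions * base_cpi;
    double stall_cycles     = instructions * miss_ratio * miss_penalty;
    double time_ns          = (execution_cycles + stall_cycles) * cycle_time_ns;

    printf("execution time = %.3f s\n", time_ns * 1e-9);
    return 0;
}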

Page 18: 1 Chapter Seven Large and Fast: Exploiting Memory Hierarchy.


Compared to direct mapped, give a series of references that:

– results in a lower miss ratio using a 2-way set associative cache

– results in a higher miss ratio using a 2-way set associative cache

assuming we use the “least recently used” replacement strategy

Decreasing miss ratio with associativity

[Figure: cache organizations for eight blocks: one-way set associative (direct mapped, blocks 0-7), two-way set associative (sets 0-3), four-way set associative (sets 0-1), and eight-way set associative (fully associative); each way in a set holds a tag and its data]

Page 19: 1 Chapter Seven Large and Fast: Exploiting Memory Hierarchy.


An implementation

[Figure: implementation of a four-way set-associative cache with 256 sets; the address supplies a 22-bit tag and an 8-bit index, the index selects a set (0-255), the four stored tags are compared in parallel against the address tag and qualified by the valid bits, and a 4-to-1 multiplexor selects the data for the hit way]

Page 20: 1 Chapter Seven Large and Fast: Exploiting Memory Hierarchy.


Performance

[Figure: miss rate (0%-15%) versus associativity (one-way, two-way, four-way, eight-way) for cache sizes of 1 KB, 2 KB, 4 KB, 8 KB, 16 KB, 32 KB, 64 KB, and 128 KB]

Page 21: 1 Chapter Seven Large and Fast: Exploiting Memory Hierarchy.


Decreasing miss penalty with multilevel caches

• Add a second level cache:

– often primary cache is on the same chip as the processor

– use SRAMs to add another cache above primary memory (DRAM)

– miss penalty goes down if data is in 2nd level cache

• Example (see the sketch below):
– CPI of 1.0 on a 5 GHz machine with a 5% miss rate and 100 ns DRAM access
– Adding a 2nd level cache with 5 ns access time decreases the miss rate to main memory to 0.5%

• Using multilevel caches:

– try and optimize the hit time on the 1st level cache

– try and optimize the miss rate on the 2nd level cache
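A sketch of the calculation this example implies, assuming the 5% and 0.5% figures are misses per instruction (the usual reading of this example); the resulting CPIs are derived here, not stated on the slide.

#include <stdio.h>

int main(void) {
    double clock_ns      = 1.0 / 5.0;   /* 5 GHz -> 0.2 ns per cycle            */
    double base_cpi      = 1.0;
    double l1_miss_rate  = 0.05;        /* misses per instruction               */
    double mem_access_ns = 100.0;
    double l2_access_ns  = 5.0;
    double l2_miss_rate  = 0.005;       /* misses per instruction that also miss in L2 */

    double mem_penalty = mem_access_ns / clock_ns;   /* 500 cycles */
    double l2_penalty  = l2_access_ns / clock_ns;    /*  25 cycles */

    double cpi_no_l2   = base_cpi + l1_miss_rate * mem_penalty;
    double cpi_with_l2 = base_cpi + l1_miss_rate * l2_penalty
                                  + l2_miss_rate * mem_penalty;

    printf("CPI without L2 = %.2f, with L2 = %.2f, speedup = %.1fx\n",
           cpi_no_l2, cpi_with_l2, cpi_no_l2 / cpi_with_l2);
    return 0;
}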

Page 22: 1 Chapter Seven Large and Fast: Exploiting Memory Hierarchy.


Cache Complexities

• Not always easy to understand implications of caches:

[Figure: theoretical behavior of Radix sort vs. Quicksort, plotted against size (K items to sort, 4 to 4096)]

[Figure: observed behavior of Radix sort vs. Quicksort, plotted against size (K items to sort, 4 to 4096)]

Page 23: 1 Chapter Seven Large and Fast: Exploiting Memory Hierarchy.


Cache Complexities

• Here is why:

• Memory system performance is often a critical factor
– multilevel caches and pipelined processors make it harder to predict outcomes
– compiler optimizations to increase locality sometimes hurt ILP

• Difficult to predict best algorithm: need experimental data

[Figure: cache misses per item for Radix sort vs. Quicksort, plotted against size (K items to sort, 4 to 4096)]

Page 24: 1 Chapter Seven Large and Fast: Exploiting Memory Hierarchy.


Virtual Memory

• Main memory can act as a cache for the secondary storage (disk)

• Advantages:
– illusion of having more physical memory
– program relocation
– protection

[Figure: virtual addresses are mapped by address translation to physical addresses in main memory or to disk addresses]

Page 25: 1 Chapter Seven Large and Fast: Exploiting Memory Hierarchy.


Pages: virtual memory blocks

• Page faults: the data is not in memory, retrieve it from disk

– huge miss penalty, thus pages should be fairly large (e.g., 4KB)

– reducing page faults is important (LRU is worth the price)

– can handle the faults in software instead of hardware

– using write-through is too expensive, so we use write-back

[Figure: translation of a virtual address (virtual page number in bits 31-12, page offset in bits 11-0) to a physical address (physical page number in bits 29-12, page offset in bits 11-0)]
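A minimal C sketch of splitting a virtual address into a virtual page number and page offset for the 4 KB pages mentioned above; the example address is arbitrary.

#include <stdint.h>
#include <stdio.h>

#define PAGE_OFFSET_BITS 12   /* 4 KB pages, matching the figure */

int main(void) {
    uint32_t virtual_address = 0x00403A24;   /* arbitrary example */

    uint32_t vpn    = virtual_address >> PAGE_OFFSET_BITS;              /* bits 31-12 */
    uint32_t offset = virtual_address & ((1u << PAGE_OFFSET_BITS) - 1); /* bits 11-0  */

    printf("virtual page number = 0x%05x, page offset = 0x%03x\n",
           (unsigned)vpn, (unsigned)offset);
    return 0;
}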

Page 26: 1 Chapter Seven Large and Fast: Exploiting Memory Hierarchy.


Page Tables

[Figure: the page table maps each virtual page number either to a physical page in memory (valid bit = 1) or to a disk address in disk storage (valid bit = 0)]

Page 27: 1 Chapter Seven Large and Fast: Exploiting Memory Hierarchy.


Page Tables

[Figure: the page table register points to the start of the page table; the 20-bit virtual page number indexes the table, each entry holds a valid bit and an 18-bit physical page number (if the valid bit is 0 the page is not present in memory), and the physical page number is concatenated with the 12-bit page offset to form the physical address]
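A hedged software model of the lookup in the figure above; the struct layout, table size, and function names are illustrative assumptions, not the hardware organization.

#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

/* Minimal software model of the page-table lookup in the figure. */
struct pte { bool valid; uint32_t physical_page_number; };

#define PAGE_OFFSET_BITS 12
#define NUM_VIRTUAL_PAGES 1024         /* enough for this toy example */

struct pte page_table[NUM_VIRTUAL_PAGES];   /* the "page table register" would point here */

/* Returns true and fills *pa on success; false means a page fault must be handled. */
bool translate(uint32_t va, uint32_t *pa) {
    uint32_t vpn    = va >> PAGE_OFFSET_BITS;
    uint32_t offset = va & ((1u << PAGE_OFFSET_BITS) - 1);

    if (vpn >= NUM_VIRTUAL_PAGES || !page_table[vpn].valid)
        return false;                   /* page fault: page not present in memory */

    *pa = (page_table[vpn].physical_page_number << PAGE_OFFSET_BITS) | offset;
    return true;
}

int main(void) {
    page_table[3].valid = true;
    page_table[3].physical_page_number = 0x2A;

    uint32_t pa;
    if (translate((3u << PAGE_OFFSET_BITS) | 0x123, &pa))
        printf("physical address = 0x%x\n", (unsigned)pa);   /* 0x2A123 */
    return 0;
}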

Page 28: 1 Chapter Seven Large and Fast: Exploiting Memory Hierarchy.


Making Address Translation Fast

• A cache for address translations: translation lookaside buffer

[Figure: the TLB acts as a cache on the page table; each TLB entry holds a tag (virtual page number), valid, dirty, and reference bits, and a physical page address; on a TLB miss the full page table is consulted, whose entries point either to physical memory or to disk storage]

Typical values: 16-512 entries, miss rate: 0.01%-1%, miss penalty: 10-100 cycles
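A hedged software sketch of a fully associative TLB lookup following the entry fields in the figure (valid, dirty, ref, tag, physical page); the linear search and C types are illustrative only, since a real TLB compares all tags in parallel in hardware.

#include <stdint.h>
#include <stdbool.h>

#define TLB_ENTRIES 16
#define PAGE_OFFSET_BITS 12

struct tlb_entry {
    bool valid, dirty, ref;
    uint32_t tag;                  /* virtual page number */
    uint32_t physical_page;
};

static struct tlb_entry tlb[TLB_ENTRIES];

/* Returns true on a TLB hit; a miss would fall back to the page table. */
bool tlb_lookup(uint32_t va, uint32_t *pa) {
    uint32_t vpn = va >> PAGE_OFFSET_BITS;
    for (int i = 0; i < TLB_ENTRIES; i++) {
        if (tlb[i].valid && tlb[i].tag == vpn) {
            tlb[i].ref = true;     /* record the use for the replacement policy */
            *pa = (tlb[i].physical_page << PAGE_OFFSET_BITS)
                | (va & ((1u << PAGE_OFFSET_BITS) - 1));
            return true;
        }
    }
    return false;                  /* TLB miss: consult the page table */
}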

Page 29: 1 Chapter Seven Large and Fast: Exploiting Memory Hierarchy.


TLBs and caches

[Flowchart: processing a read or write through the TLB and cache]

Virtual address -> TLB access. On a TLB miss, raise a TLB miss exception; on a TLB hit, form the physical address.

Read: try to read the data from the cache; on a cache hit, deliver the data to the CPU; on a cache miss, stall while the block is read.

Write: if the write access bit is off, raise a write protection exception; otherwise try to write the data to the cache; on a cache miss, stall while the block is read; on a cache hit, write the data into the cache, update the dirty bit, and put the data and the address into the write buffer.

Page 30: 1 Chapter Seven Large and Fast: Exploiting Memory Hierarchy.


TLBs and Caches

[Figure: combined TLB and cache access: the virtual address is split into a virtual page number (20 bits) and a page offset (12 bits); the virtual page number is looked up in the TLB, whose entries hold valid and dirty bits, a tag, and a physical page number; on a TLB hit the physical page number and page offset form the physical address, which is then split into a physical address tag, cache index, block offset, and byte offset for the cache lookup, producing the cache hit signal and the data]

Page 31: 1 Chapter Seven Large and Fast: Exploiting Memory Hierarchy.


Modern Systems

Page 32: 1 Chapter Seven Large and Fast: Exploiting Memory Hierarchy.


Modern Systems

• Things are getting complicated!

Page 33: 1 Chapter Seven Large and Fast: Exploiting Memory Hierarchy.


• Processor speeds continue to increase very fast— much faster than either DRAM or disk access times

• Design challenge: dealing with this growing disparity

– Prefetching? 3rd level caches and more? Memory design?

Some Issues

[Figure: CPU and memory performance versus year (log scale, 1 to 100,000); CPU performance grows much faster than memory performance, so the gap widens each year]

