Memory
Chapter 7: Cache Memories
Memory Challenges
Ideally one desires a huge amount of very fast memory for little cost, but:
- Fast memory is expensive
- Cheap memory is slow
The solution on a fixed budget is a memory hierarchy:
- A small amount of very fast memory (think SRAM)
- A medium amount of slower memory (think DRAM)
- A large amount of slower-still memory (think disk)
Comparing:

Technology   Access Time   Cost/GB
SRAM         0.5 – 5 ns    $4,000 – $10,000
DRAM         50 – 70 ns    $100 – $200
Disk         5 – 20 ms     $0.50 – $2

Recall: we used 200 ps (0.2 ns) in our pipeline study. Why the difference?
The “Memory Wall”
The logic vs. DRAM speed gap continues to grow.
[Figure: clocks per instruction (core) and clocks per DRAM access (memory) on a log scale from 0.01 to 1000, plotted from VAX/1980 through PPro/1996 to 2010+]
Philosophically
How does one UTILIZE the very fast memory effectively?
Think “The Principle of Locality”:
- Temporal Locality (close in time): memory that has been accessed recently is likely to be accessed again soon
- Spatial Locality (close in location): memory that is close to recently accessed memory is likely to be accessed soon
Organize memory in blocks:
- Keep blocks likely to be used soon in the very fast memory
- Keep the next most likely blocks in medium-fast memory
- Keep blocks not likely to be used soon in slower memory
Hierarchical Memory Organization
Cache Memory
- What is a cache? A small amount of very high speed memory between main memory and the CPU.
- How is it organized? In a number of uniform-sized blocks of memory that have a high likelihood of being used.
- How is it kept “current”? When a block in main memory is more likely to be needed, that block replaces a block in the cache.
- How do we know a block is needed? An access fails to find the word in the cache.
- Where does it get placed in the cache? Likely in place of the least recently used block.
- How do we rate the performance of the cache? By hit rates and miss rates.
- Should there be separate instruction caches and data caches?
Hierarchical Memory Organization
- Registers are the fastest
- Cache is the fastest “memory” (SRAM)
- DRAM makes good main memory
- Disk is best for the rest (the majority)
The Memory Hierarchy
Increasing distance from the processor means increasing access time (and increasing relative size of the memory at each level):
Processor, L1$, L2$, Main Memory, Secondary Memory
Typical transfer unit between levels:
- Processor and L1$: 4-8 bytes (words)
- L1$ and L2$: 8-32 bytes (block)
- L2$ and Main Memory: 1 to 4 blocks
- Main Memory and Secondary Memory: 1,024+ bytes (disk sector = page)
Inclusive: what is in L1$ is a subset of what is in L2$, which is a subset of what is in Main Memory, which is a subset of what is in Secondary Memory.
Take advantage of the principle of locality to present the user with as much memory as is available in the cheapest technology, at the speed offered by the fastest technology.
The Memory Hierarchy: Pictorially
- Temporal Locality (locality in time): keep most recently accessed data items closer to the processor.
- Spatial Locality (locality in space): move blocks consisting of contiguous words to the upper levels.
[Figure: the processor exchanges words with an upper-level memory holding Blk X; the upper level exchanges blocks with a lower-level memory holding Blk Y]
The Memory Hierarchy: Terminology
- Hit: the data is in some block in the upper level (Blk X)
  - Hit Rate: the fraction of memory accesses found in the upper level
  - Hit Time: the time to access the upper level, which consists of RAM access time + the time to determine hit/miss
- Miss: the data is not in the upper level, so it must be retrieved from a block in the lower level (Blk Y)
  - Miss Rate = 1 - (Hit Rate)
  - Miss Penalty: the time to replace a block in the upper level + the time to deliver the block to the processor
- Hit Time << Miss Penalty
How is the Hierarchy Managed?
- registers to memory: by the compiler (or the programmer?)
- cache to main memory: by the cache controller hardware
- main memory to disks: by the operating system (virtual memory), with the virtual-to-physical address mapping assisted by the hardware (TLB), and by the programmer (files)
Two questions to answer (in hardware):
- Q1: How do we know if a data item is in the cache?
- Q2: If it is, how do we find it?
Direct Mapped Caching
For each item of data at the lower level, there is exactly one location in the cache where it might be, so lots of items at the lower level must share locations in the upper level.
Address mapping:
(block address) modulo (# of blocks in the cache)
First consider block sizes of one word.
Caching: A Simple First Example
[Figure: a four-block direct-mapped cache (Valid, Tag, Data fields; indexes 00, 01, 10, 11) backed by a 16-word main memory (word addresses 0000xx through 1111xx)]
Q2: How do we find it? Use the next 2 low-order memory address bits (the index) to determine which cache block, i.e., (block address) modulo (# of blocks in the cache).
Q1: Is it there? Compare the cache tag to the high-order 2 memory address bits to tell if the memory block is in the cache.
The two low-order bits define the byte in the word (32-bit words).
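To make the split concrete, here is a minimal C sketch (not from the slides; the constants match the four-block, one-word-per-block cache in the figure above) that decodes a byte address into tag, index, and byte offset:

```c
#include <stdio.h>

/* Direct-mapped cache with 4 one-word (32-bit) blocks:
 * bits 1..0 select the byte in the word, bits 3..2 are the index,
 * and the remaining high-order bits are the tag. */
int main(void) {
    unsigned addr = 0x34;                  /* example byte address */
    unsigned byte_off = addr & 0x3;        /* byte within the word */
    unsigned index    = (addr >> 2) & 0x3; /* which cache block */
    unsigned tag      = addr >> 4;         /* identifies the memory block */
    printf("addr=0x%02x -> tag=0x%x index=%u byte=%u\n",
           addr, tag, index, byte_off);
    return 0;
}
```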
Direct Mapped Cache
Consider the main memory word reference string: 0 1 2 3 4 3 4 15
Start with an empty cache, all blocks initially marked as not valid:
- 0: miss (Mem(0) installed at index 0 with tag 00)
- 1: miss (Mem(1) installed)
- 2: miss (Mem(2) installed)
- 3: miss (Mem(3) installed)
- 4: miss (4 maps to index 0; tag 01, Mem(4) replaces Mem(0))
- 3: hit
- 4: hit
- 15: miss (15 maps to index 3; tag 11, Mem(15) replaces Mem(3))
8 requests, 6 misses
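As a sanity check, a minimal simulation of this trace in C (an illustrative sketch, assuming the four-block, one-word-per-block cache above and word addresses) reproduces the 6 misses:

```c
#include <stdio.h>

#define NBLOCKS 4   /* direct-mapped, one word per block */

int main(void) {
    int valid[NBLOCKS] = {0}, tags[NBLOCKS] = {0};
    int trace[] = {0, 1, 2, 3, 4, 3, 4, 15};   /* word reference string */
    int n = sizeof trace / sizeof trace[0], misses = 0;

    for (int i = 0; i < n; i++) {
        int block = trace[i];      /* block address = word address here */
        int idx = block % NBLOCKS; /* (block address) mod (# of blocks) */
        int tag = block / NBLOCKS; /* remaining high-order bits */
        if (!valid[idx] || tags[idx] != tag) {
            misses++;              /* miss: install the block */
            valid[idx] = 1;
            tags[idx] = tag;
        }
    }
    printf("%d requests, %d misses\n", n, misses); /* 8 requests, 6 misses */
    return 0;
}
```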
MIPS Direct Mapped Cache Example
One word/block, cache size = 1K words
[Figure: the 32-bit address splits into a 20-bit Tag (bits 31-12), a 10-bit Index (bits 11-2), and a 2-bit Byte offset (bits 1-0); the index selects one of 1024 entries (Valid, Tag, Data); comparing the stored tag with the address tag produces Hit, and the 32-bit Data word is output]
What kind of locality are we taking advantage of?
Handling Cache Hits
- Read hits (I$ and D$): this is what we want; no challenges.
- Write hits (D$ only): what is the problem here? Two strategies:
  - Allow cache and memory to be inconsistent (write-back).
    - Write the data only into the cache block; write back the cache contents to the next level in the memory hierarchy when that cache block is “evicted”.
    - Need a dirty bit for each data cache block to tell if it needs to be written back to memory when it is evicted.
  - Require the cache and memory to be consistent (write-through).
    - Always write the data into both the cache block and the next level in the memory hierarchy, so no dirty bit is needed.
    - Writes run at the speed of the next level in the memory hierarchy (slow!), or use a write buffer so the processor only has to stall if the write buffer is full.
Read / Write Strategies
Write Hit Policy   Write Miss Policy
Write Through      Write Allocate
Write Through *    Write No Allocate *
Write Back *       Write Allocate *
Write Back         No Write Allocate

Definitions:
- Read Through: word read directly from memory
- No Read Through: word read from cache after the block is read from memory
- Write Through: word written to both cache and memory
- Write Back: word written only to cache
- Write Allocate: block is loaded on a write miss, followed by a write hit
- Write No Allocate: block is modified on a write miss but not loaded
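To illustrate the difference between the two write-hit policies, here is a rough C sketch (illustrative only; the structure and function names are invented for this example, not taken from the slides):

```c
#include <stdbool.h>
#include <stdint.h>

/* One cache line; the dirty bit is only meaningful for write-back. */
struct line { bool valid, dirty; uint32_t tag, data; };

/* Write-through hit: update the cache line AND memory (in practice the
 * memory update usually goes through a write buffer); no dirty bit needed. */
void write_through_hit(struct line *l, uint32_t val, uint32_t *mem_word) {
    l->data = val;
    *mem_word = val;
}

/* Write-back hit: update only the cache line and mark it dirty; memory is
 * brought up to date later, when the line is evicted. */
void write_back_hit(struct line *l, uint32_t val) {
    l->data = val;
    l->dirty = true;
}

/* Eviction in a write-back cache must flush a dirty line to memory first. */
void evict(struct line *l, uint32_t *mem_word) {
    if (l->valid && l->dirty)
        *mem_word = l->data;
    l->valid = l->dirty = false;
}
```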
Write Buffer for Write-Through Caching
A write buffer sits between the cache and main memory:
- Processor: writes data into the cache and the write buffer
- Memory controller: writes the contents of the write buffer to memory
The write buffer is just a FIFO:
- Typical number of entries: 4
- Works fine if the store frequency (w.r.t. time) << 1 / DRAM write cycle
Memory system designer’s nightmare: when the store frequency (w.r.t. time) approaches 1 / DRAM write cycle, the write buffer saturates.
- One solution is to use a write-back cache; another is to use an “L2” cache.
[Figure: Processor, Cache, DRAM, with the write buffer between the cache and DRAM]
Another Reference String Mapping
Consider the main memory word reference string: 0 4 0 4 0 4 0 4
Start with an empty cache, all blocks initially marked as not valid:
- 0: miss (Mem(0) installed at index 0 with tag 00)
- 4: miss (tag 01, Mem(4) replaces Mem(0))
- 0: miss (tag 00, Mem(0) replaces Mem(4))
- 4: miss (tag 01, Mem(4) replaces Mem(0))
- ... and so on for the remaining four references
8 requests, 8 misses
Ping-pong effect due to conflict misses: two memory locations that map into the same cache block.
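Feeding this trace into the simulator sketch shown earlier (replacing the trace array with {0, 4, 0, 4, 0, 4, 0, 4}) prints 8 requests, 8 misses: word addresses 0 and 4 both map to index 0, so each reference evicts the other.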
Sources of Cache Misses
- Compulsory (cold start or process migration; first reference): the first access to a block. A “cold” fact of life; not a whole lot you can do about it. If you are going to run millions of instructions, compulsory misses are insignificant.
- Conflict (collision): multiple memory locations are mapped to the same cache location. Solution 1: increase the cache size or block length. Solution 2: increase associativity.
- Capacity: the cache cannot contain all the blocks accessed by the program. Solution: increase the cache size.
What about the relationship between cache size and block length?
Handling Cache Misses
Read misses (I$ and D$):
- Stall the entire pipeline, fetch the block from the next level in the memory hierarchy, install it in the cache, and send the requested word to the processor; then let the pipeline resume.
Write misses (D$ only), three options:
1. Stall the pipeline, fetch the block from the next level in the memory hierarchy, install it in the cache (which may involve having to evict a dirty block if using a write-back cache), write the word from the processor to the cache, then let the pipeline resume; or
2. Write allocate (normally used in write-back caches): just write the word into the cache, updating both the tag and data; no need to check for a cache hit, no need to stall; or
3. No-write allocate (normally used in write-through caches with a write buffer): skip the cache write and just write the word to the write buffer (and eventually to the next memory level); no need to stall if the write buffer isn’t full; must invalidate the cache block since it will be inconsistent (now holding stale data).
Multiword Block Direct Mapped Cache
Four words/block, cache size = 1K words
[Figure: the 32-bit address splits into a 20-bit Tag (bits 31-12), an 8-bit Index (bits 11-4), a 2-bit Block offset (bits 3-2), and a 2-bit Byte offset (bits 1-0); the index selects one of 256 entries (Valid, Tag, four Data words); the tag comparison produces Hit, and the block offset selects the 32-bit Data word]
What kind of locality are we taking advantage of?
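The corresponding address split in C, following the same sketch style as before (the constants are assumptions matching this 1K-word, four-words-per-block configuration):

```c
#include <stdio.h>

/* 1K words / 4 words per block = 256 blocks:
 * bits 1..0 = byte offset, bits 3..2 = block (word) offset,
 * bits 11..4 = index, bits 31..12 = tag. */
int main(void) {
    unsigned addr = 0x12345678;              /* example byte address */
    unsigned byte_off  = addr & 0x3;
    unsigned block_off = (addr >> 2) & 0x3;  /* word within the block */
    unsigned index     = (addr >> 4) & 0xFF; /* one of 256 cache blocks */
    unsigned tag       = addr >> 12;
    printf("tag=0x%05x index=%u word=%u byte=%u\n",
           tag, index, block_off, byte_off);
    return 0;
}
```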
Taking Advantage of Spatial Locality
Let the cache block hold more than one word. Consider the same reference string with two words/block: 0 1 2 3 4 3 4 15
Start with an empty cache, all blocks initially marked as not valid:
- 0: miss (block Mem(1) Mem(0) installed with tag 00)
- 1: hit
- 2: miss (block Mem(3) Mem(2) installed with tag 00)
- 3: hit
- 4: miss (tag 01; block Mem(5) Mem(4) replaces Mem(1) Mem(0))
- 3: hit
- 4: hit
- 15: miss (tag 11; block Mem(15) Mem(14) replaces Mem(3) Mem(2))
8 requests, 4 misses
Miss Rate vs. Block Size vs. Cache Size
[Figure: miss rate (%) from 0 to 10 plotted against block size (8 to 256 bytes) for cache sizes of 8 KB, 16 KB, 64 KB, and 256 KB]
Miss rate goes up if the block size becomes a significant fraction of the cache size, because the number of blocks that can be held in the same size cache is smaller (increasing capacity misses).
Block Size Tradeoff
- A larger block size means a larger miss penalty: latency to the first word in the block + transfer time for the remaining words.
[Figure: as block size grows, Miss Penalty rises steadily; Miss Rate first falls (exploits spatial locality) and then rises (fewer blocks compromises temporal locality); Average Access Time therefore has a minimum, increasing at large block sizes due to the increased miss penalty and miss rate]
In general,
Average Memory Access Time = Hit Time + Miss Rate × Miss Penalty
Larger block sizes take advantage of spatial locality, but if the block size is too big relative to the cache size, the miss rate will go up.
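A quick worked instance of the formula, with illustrative numbers (the 1-cycle hit time, 5% miss rate, and 100-cycle miss penalty are assumptions, not values from the slides):

```c
#include <stdio.h>

int main(void) {
    double hit_time     = 1.0;    /* cycles, illustrative */
    double miss_rate    = 0.05;   /* 5%, illustrative */
    double miss_penalty = 100.0;  /* cycles, illustrative */

    /* AMAT = Hit Time + Miss Rate x Miss Penalty */
    double amat = hit_time + miss_rate * miss_penalty;
    printf("AMAT = %.1f cycles\n", amat);   /* 1 + 0.05 * 100 = 6.0 */
    return 0;
}
```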
Multiword Block Considerations
Read misses (I$ and D$):
- Processed the same as for single-word blocks: a miss returns the entire block from memory.
- The miss penalty grows as the block size grows. Mitigations:
  - Early restart: the datapath resumes execution as soon as the requested word of the block is returned.
  - Requested word first: the requested word is transferred from memory to the cache (and datapath) first.
- Nonblocking cache: allows the datapath to continue to access the cache while the cache is handling an earlier miss.
Write misses (D$):
- Can’t use write allocate as-is, or we will end up with a “garbled” block in the cache (e.g., for 4-word blocks: a new tag, one word of data from the new block, and three words of data from the old block), so we must fetch the block from memory first and pay the stall time.
Cache Summary
The Principle of Locality: a program is likely to access a relatively small portion of the address space at any instant of time.
- Temporal Locality: locality in time
- Spatial Locality: locality in space
Three major categories of cache misses:
- Compulsory misses: sad facts of life (example: cold start misses)
- Conflict misses: increase cache size and/or associativity (nightmare scenario: the ping-pong effect!)
- Capacity misses: increase cache size
Cache design space:
- total size, block size, associativity (replacement policy)
- write-hit policy (write-through, write-back)
- write-miss policy (write allocate, write buffers)
Measuring Cache Performance
Assuming cache hit costs are included as part of the normal CPU execution cycle, then
CPU time = IC × CPI × CC
         = IC × (CPIideal + Memory-stall cycles) × CC
where CPIideal + Memory-stall cycles = CPIstall.
Memory-stall cycles come from cache misses (a sum of read-stalls and write-stalls):
Read-stall cycles = reads/program × read miss rate × read miss penalty
Write-stall cycles = (writes/program × write miss rate × write miss penalty) + write buffer stalls
For write-through caches, we can simplify this to:
Memory-stall cycles = accesses/program × miss rate × miss penalty
Impacts of Cache Performance
- The relative cache penalty increases as processor performance improves (faster clock rate and/or lower CPI). Memory speed is unlikely to improve as fast as processor cycle time, and when calculating CPIstall, the cache miss penalty is measured in the processor clock cycles needed to handle a miss.
- The lower the CPIideal, the more pronounced the impact of stalls.
Example: a processor with a CPIideal of 2, a 100-cycle miss penalty, 36% load/store instructions, and 2% I$ and 4% D$ miss rates:
Memory-stall cycles = 2% × 100 + 36% × 4% × 100 = 3.44
So CPIstall = 2 + 3.44 = 5.44
What if CPIideal is reduced to 1? 0.5? 0.25?
What if the processor clock rate is doubled (doubling the miss penalty)?
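One way to answer these questions is to plug the numbers straight into the formula; a short sketch:

```c
#include <stdio.h>

int main(void) {
    double penalty = 100.0;   /* miss penalty, in processor cycles */
    double ls_frac = 0.36;    /* load/store fraction */
    double i_miss  = 0.02;    /* I$ miss rate */
    double d_miss  = 0.04;    /* D$ miss rate */

    /* Memory-stall cycles = 2% x 100 + 36% x 4% x 100 = 3.44 */
    double stalls = i_miss * penalty + ls_frac * d_miss * penalty;

    double ideals[] = {2.0, 1.0, 0.5, 0.25};
    for (int i = 0; i < 4; i++)
        printf("CPIideal = %.2f -> CPIstall = %.2f\n",
               ideals[i], ideals[i] + stalls);

    /* Doubling the clock rate doubles the miss penalty in cycles: */
    double stalls2x = i_miss * 2 * penalty + ls_frac * d_miss * 2 * penalty;
    printf("doubled clock: CPIstall = %.2f\n", 2.0 + stalls2x); /* 2 + 6.88 */
    return 0;
}
```

Note how the stall component stays fixed at 3.44 cycles while CPIideal shrinks (5.44, 4.44, 3.94, 3.69), so memory stalls account for an ever larger share of CPIstall.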
Reducing Cache Miss Rates #1: Allow More Flexible Block Placement
- In a direct-mapped cache, a memory block maps to exactly one cache block.
- At the other extreme, we could allow a memory block to be mapped to any cache block: a fully associative cache.
- A compromise is to divide the cache into sets, each of which consists of n “ways” (n-way set associative). A memory block maps to a unique set (specified by the index field) and can be placed in any way of that set (so there are n choices):
(block address) modulo (# of sets in the cache)
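A sketch of the set-index computation (the set count of 128 is an arbitrary illustration):

```c
#include <stdio.h>

#define NSETS 128   /* number of sets; illustrative */

int main(void) {
    unsigned block_addr = 0x4D2;         /* example block address */
    unsigned set = block_addr % NSETS;   /* (block address) mod (# of sets) */
    unsigned tag = block_addr / NSETS;   /* remaining high-order bits */
    /* In an n-way set associative cache, the block may live in any of the
     * n ways of this set, so all n tags in the set must be compared. */
    printf("block 0x%x -> set %u, tag 0x%x\n", block_addr, set, tag);
    return 0;
}
```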