Page 1: CES 524                                                          May 6

CES 524 May 6

• Eleven Advanced Cache Optimizations (Ch 5)• parallel architectures (Ch 4)

Slides adapted from Patterson, UC Berkeley

Page 2: CES 524                                                          May 6

Review: Basic Cache Optimizations

Reducing hit time

1. Giving Reads Priority over Writes
• E.g., a read completes before earlier writes still sitting in the write buffer

2. Lower associativity

Reducing Miss Penalty

3. Multilevel Caches

Reducing Miss Rate

4. Larger Block size (fewer Compulsory misses)

5. Larger Cache size (fewer Capacity misses)

6. Higher Associativity (fewer Conflict misses)

Page 3: CES 524                                                          May 6

Eleven Advanced Cache Optimizations

• Reducing hit time
1. Small and simple caches
2. Way prediction
3. Trace caches

• Increasing cache bandwidth
4. Pipelined caches
5. Multibanked caches
6. Nonblocking caches

• Reducing miss penalty
7. Critical word first
8. Merging write buffers

• Reducing miss rate
9. Compiler optimizations

• Reducing miss penalty or miss rate via parallelism
10. Hardware prefetching
11. Compiler prefetching

Page 4: CES 524                                                          May 6

1. Fast Hit Times via Small, Simple Caches

• Indexing the tag memory and then comparing takes time. A small cache helps hit time because a smaller memory takes less time to index to find the right set of block(s).
– E.g., the fast L1 caches stayed the same small size for three generations of AMD microprocessors: K6, Athlon, and Opteron
– Also, an L2 cache small enough to fit on-chip with the processor avoids the time penalty of going off chip (~10X longer data latency off-chip)

• Simple direct mapping
– Overlap the tag check with data transmission, since there is no way to choose (kill the data if the tag is bad)

• Access time estimates for 90 nm using the CACTI 4.0 model
– Median ratios of access time relative to direct-mapped caches are 1.32, 1.39, and 1.43 for 2-way, 4-way, and 8-way caches

[Figure: CACTI access time (ns) vs. cache size, 16 KB to 1 MB, for 1-way, 2-way, 4-way, and 8-way caches]

Page 5: CES 524                                                          May 6

2. Fast Hit Times via Way Prediction

• How can we combine the fast hit time of a direct-mapped cache with the lower conflict misses of a 2-way set-associative cache?

• Way prediction: keep extra bits in the cache to predict the "way" (block within the set) of the next cache access
– The multiplexor is set early to select the predicted block; only 1 tag comparison is performed that clock cycle, in parallel with reading the cache data
– On a mispredict, check the other blocks for matches in the next clock cycle

• Accuracy ~85%

• Drawback: hard to tune the CPU pipeline if the hit time varies between 1 and 2 cycles
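To make the mechanism concrete, here is a minimal software model (not from the slides) of a 2-way set-associative lookup with one way-prediction field per set; the set count, block size, tag handling, and the 1- vs. 2-cycle costs are all assumptions for illustration:

/* Illustrative model of way prediction for a 2-way set-associative cache. */
#include <stdint.h>
#include <stdbool.h>

#define SETS       256
#define BLOCK_BITS 6              /* assumed 64-byte blocks */

typedef struct {
    uint32_t tag[2];              /* simplified: full block address as tag */
    bool     valid[2];
    uint8_t  predicted_way;       /* the extra prediction bit per set */
} CacheSet;

static CacheSet cache[SETS];

/* Returns 1 if the predicted way hit (one tag compare, fast hit),
   2 if the other way hit (extra cycle), -1 on a miss (fill not modeled). */
int lookup(uint32_t addr)
{
    uint32_t set = (addr >> BLOCK_BITS) % SETS;
    uint32_t tag = addr >> BLOCK_BITS;
    CacheSet *s  = &cache[set];

    uint8_t w = s->predicted_way;
    if (s->valid[w] && s->tag[w] == tag)
        return 1;                             /* prediction correct */

    uint8_t other = (uint8_t)(1 - w);
    if (s->valid[other] && s->tag[other] == tag) {
        s->predicted_way = other;             /* retrain the predictor */
        return 2;                             /* hit, but one cycle slower */
    }
    return -1;                                /* miss */
}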

Page 6: CES 524                                                          May 6

4. Increase Cache Bandwidth by Pipelining

• Pipeline the cache access to maintain bandwidth, even though pipelining gives a higher latency for each individual access.

• Number of instruction cache access pipeline stages:

1 for Pentium

2 for Pentium Pro through Pentium III

4 for Pentium 4

- Greater penalty on mispredicted branches: the pipelined stream of memory addresses must restart at the new PC

- More clock cycles between the issue of a load and the availability of the loaded data

Page 7: CES 524                                                          May 6

5. Increase Cache Bandwidth: Non-Blocking Caches

• A non-blocking (lockup-free) cache allows the data cache to continue to supply cache hits during a miss
– Helps if the registers have Full/Empty bits (so execution can go on until the missed datum is actually used) or the CPU supports out-of-order completion
– Requires multi-banked memories behind the non-blocking cache

• "Hit under miss" reduces the effective miss penalty by doing useful work during a miss instead of ignoring CPU requests

• "Hit under multiple miss" or "miss under miss" may further lower the effective miss penalty by overlapping multiple misses
– Significantly increases the complexity of the cache controller, since there can be multiple outstanding memory accesses
– Requires multiple main memory banks (otherwise multiple misses cannot be serviced)
– The Pentium Pro allows 4 outstanding memory misses

Page 8: CES 524                                                          May 6

Value of Hit Under Miss for SPEC

Hit under i misses, for i = 0, 1, 2, 64:
• FP programs on average: AMAT = 0.68 -> 0.52 -> 0.34 -> 0.26
• Integer programs on average: AMAT = 0.24 -> 0.20 -> 0.19 -> 0.19
• 8 KB data cache, direct mapped, 32-byte blocks, 16-cycle miss penalty, SPEC 92

[Figure: "Hit under i Misses" — average memory access time per SPEC92 benchmark (integer: eqntott, espresso, xlisp, compress, mdljsp2; floating point: ear, fpppp, tomcatv, swm256, doduc, su2cor, wave5, mdljdp2, hydro2d, alvinn, nasa7, spice2g6, ora), comparing the base blocking cache with hit under 1, 2, and 64 misses. AMAT = average miss access time.]

Page 9: CES 524                                                          May 6

6: Increase Cache Bandwidth via Multiple Banks

• Rather than treat the cache as a single monolithic block, divide it into independent banks that can support simultaneous accesses

– E.g., the T1 ("Niagara") L2 has 4 banks

• Banking works best when accesses naturally spread themselves across the banks; the mapping of addresses to banks affects the behavior of the memory system

• A simple mapping that works well is "sequential interleaving": the next block of memory goes to the next bank of memory
– Spread memory block indices sequentially across banks
– E.g., with 4 banks, bank 0 has all blocks whose index modulo 4 is 0, bank 1 has all blocks whose index modulo 4 is 1, and so on (see the C sketch below)
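A minimal sketch of sequential interleaving in C; the 4-bank count matches the example above, and the helper names are ours:

#define N_BANKS 4

/* A block's bank is its index modulo the number of banks, so
   consecutive block indices land in consecutive banks. */
unsigned bank_of(unsigned block_index)         { return block_index % N_BANKS; }
unsigned row_within_bank(unsigned block_index) { return block_index / N_BANKS; }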

Page 10: CES 524                                                          May 6

7. Reduce Miss Penalty: Early Restart and Critical Word First

• Do not wait for the full block before restarting the CPU

• Early restart: as soon as the requested word of the block arrives, send it to the CPU and let the CPU continue execution
– Spatial locality means the CPU tends to want the next sequential word: the first access to a block is normally to the 1st word, but the next access is to the 2nd word, which may stall again, and so on, so the benefit of early restart alone is not clear

• Critical word first: request the missed word first from memory and send it to the CPU as soon as it arrives; let the CPU continue execution while the rest of the words in the block are filled in
– Long blocks are more popular today, so critical word first is widely used

Page 11: CES 524                                                          May 6

8. Merge Multiple Adjacent New Words in Write Buffer to Reduce Miss Penalty

• A write buffer lets the processor continue without waiting for the write to finish in the next lower memory/cache

• If the buffer holds blocks of modified words, not just a single word per entry, addresses can be checked to see whether the address of a newly written datum matches an address in an existing write buffer entry

• If so, the new datum is combined with that existing entry

• For write-through caches, merging increases the size of writes to lower memory from individual words to several sequential words, which allows more efficient use of the memory system

• The Sun T1 (Niagara) processor, among many others, uses write merging
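To make the merging behavior concrete, here is a small software model of a merging write buffer; the entry count, the 64-byte block size, and all names are assumptions for illustration, not details from the slides:

/* Illustrative model of a merging write buffer. */
#include <stdint.h>
#include <stdbool.h>

#define ENTRIES        4
#define WORDS_PER_BLK  8          /* 64-byte block of 8-byte words */

typedef struct {
    bool     valid;
    uint64_t block_addr;               /* block number of this entry */
    uint64_t data[WORDS_PER_BLK];
    uint8_t  word_valid;               /* one valid bit per word */
} WBEntry;

static WBEntry wb[ENTRIES];

/* Merge the new word into an existing entry for the same block if
   possible; otherwise allocate a free entry. Returns false when the
   buffer is full, i.e., the processor would have to stall. */
bool write_buffer_put(uint64_t addr, uint64_t value)
{
    uint64_t blk = addr / (8 * WORDS_PER_BLK);
    unsigned off = (unsigned)((addr / 8) % WORDS_PER_BLK);

    for (int i = 0; i < ENTRIES; i++)          /* try to merge */
        if (wb[i].valid && wb[i].block_addr == blk) {
            wb[i].data[off] = value;
            wb[i].word_valid |= (uint8_t)(1u << off);
            return true;
        }
    for (int i = 0; i < ENTRIES; i++)          /* else allocate */
        if (!wb[i].valid) {
            wb[i].valid = true;
            wb[i].block_addr = blk;
            wb[i].data[off] = value;
            wb[i].word_valid = (uint8_t)(1u << off);
            return true;
        }
    return false;
}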

Page 12: CES 524                                                          May 6

9. Reduce Misses by Compiler Optimizations

• McFarling [1989] used software to reduce cache misses by 75% for an 8 KB direct-mapped cache with 4-byte blocks

• Instructions
– Reorder procedures in memory so as to reduce conflict misses
– Profiling to look at conflicts (using tools they developed)

• Data
– Merging arrays: improve spatial locality by using a single array of compound elements instead of 2 separate arrays
– Loop interchange: change the nesting of loops to access data in the order in which it is stored in memory
– Loop fusion: combine 2 non-dependent loops that have the same looping structure, so each iteration makes more accesses to the common variables
– Blocking: improve temporal locality by repeatedly accessing "blocks" of data sized to fit in the cache, instead of going down whole columns or rows and moving from cache block to cache block rapidly

Page 13: CES 524                                                          May 6

Merging Arrays Example

/* Before: 2 sequential arrays */
int val[SIZE];
int key[SIZE];

/* After: 1 array of structures */
struct merge {
  int val;
  int key;
};
struct merge merged_array[SIZE];

Reduces conflicts between val & key and improves spatial locality

Page 14: CES 524                                                          May 6

Loop Interchange Example

/* Before */
for (k = 0; k < 100; k = k+1)
  for (j = 0; j < 100; j = j+1)
    for (i = 0; i < 5000; i = i+1)
      x[i][j] = 2 * x[i][j];

/* After, since x[i][j+1] follows x[i][j] in memory */
for (k = 0; k < 100; k = k+1)
  for (i = 0; i < 5000; i = i+1)
    for (j = 0; j < 100; j = j+1)
      x[i][j] = 2 * x[i][j];

Sequential accesses instead of striding through memory every 100 words; improves spatial locality

Page 15: CES 524                                                          May 6

Loop Fusion Example

/* Before */
for (i = 0; i < N; i = i+1)
  for (j = 0; j < N; j = j+1)
    a[i][j] = 1/b[i][j] * c[i][j];
for (i = 0; i < N; i = i+1)
  for (j = 0; j < N; j = j+1)
    d[i][j] = a[i][j] + c[i][j];

/* After */
for (i = 0; i < N; i = i+1)
  for (j = 0; j < N; j = j+1) {
    a[i][j] = 1/b[i][j] * c[i][j];
    d[i][j] = a[i][j] + c[i][j];
  }

Two misses per access to a & c before vs. one miss per access after; improves temporal locality

Page 16: CES 524                                                          May 6

Blocking Example

/* Before */
for (i = 0; i < N; i = i+1)
  for (j = 0; j < N; j = j+1) {
    r = 0;
    for (k = 0; k < N; k = k+1)
      r = r + y[i][k]*z[k][j];
    x[i][j] = r;
  }

• Two inner loops:
– Read all N x N elements of z[]
– Read N elements of 1 row of y[] repeatedly
– Write N elements of 1 row of x[]

• Capacity misses are a function of N and cache size:
– 2N^3 + N^2 words accessed (assuming no conflicts; otherwise …)

• Idea: compute on a B x B submatrix that fits in the cache

For large N, these long accesses repeatedly flush cache blocks that are needed again soon

Better way

Page 17: CES 524                                                          May 6

Blocking Example

/* After */
for (jj = 0; jj < N; jj = jj+B)
  for (kk = 0; kk < N; kk = kk+B)
    for (i = 0; i < N; i = i+1)
      for (j = jj; j < min(jj+B,N); j = j+1) {
        r = 0;
        for (k = kk; k < min(kk+B,N); k = k+1)
          r = r + y[i][k]*z[k][j];
        x[i][j] = x[i][j] + r;
      }

• B is called the blocking factor
• Capacity misses fall from 2N^3 + N^2 to 2N^3/B + N^2
• Do conflict misses fall also?

Page 18: CES 524                                                          May 6

Reduce Conflict Misses by Blocking

• Conflict misses in non-fully-associative caches vs. blocking size
– Lam et al. [1991] found that a blocking factor of 24 had one-fifth the misses of a factor of 48, despite both fitting in the cache

[Figure: miss rate vs. blocking factor (0 to 150) for a fully associative cache and a direct-mapped cache]

Page 19: CES 524                                                          May 6

Summary of Compiler Optimizations to Reduce Cache Misses (by hand)

[Figure: performance improvement (1x to 3x) from merged arrays, loop interchange, loop fusion, and blocking on compress, cholesky (nasa7), spice, mxm (nasa7), btrix (nasa7), tomcatv, gmty (nasa7), and vpenta (nasa7)]

Page 20: CES 524                                                          May 6

Reduce Misses by Hardware Prefetching

• Prefetching relies on having extra memory bandwidth that can be used without penalty, since some prefetched values go unused

• Instruction prefetching
– Typically, a CPU fetches 2 blocks on a miss: the requested block and the next consecutive block
– The requested block is placed in the instruction cache when it returns, and the prefetched block is placed into the instruction stream buffer

• Data prefetching
– The Pentium 4 can prefetch data into the L2 cache from up to 8 streams from 8 different 4 KB pages
– Prefetching is invoked whenever there are 2 successive L2 cache misses to 1 page, if the distance between those cache blocks is < 256 bytes

[Figure: performance improvement from hardware prefetching on SPECint2000 and SPECfp2000 benchmarks, ranging from 1.16 to 1.97]

Page 21: CES 524                                                          May 6

Reduce Misses by Software Prefetching Data

• Data prefetch
– Load data into a register (HP PA-RISC loads)
– Cache prefetch: load into the cache (MIPS IV, PowerPC, SPARC v9)
– Special prefetching instructions cannot cause faults; a form of speculative execution

• Issuing prefetch instructions takes time
– Is the cost of issuing prefetch instructions < the savings from reduced misses?
– A wider superscalar machine reduces the difficulty of finding the extra issue bandwidth
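As a concrete sketch of software prefetching, the loop below uses the GCC/Clang builtin __builtin_prefetch, which cannot fault; the builtin and the prefetch distance of 16 elements are assumptions for this example, not something named in the slides:

/* Prefetch a[i+16] while summing a[i]; prefetching past the end of the
   array is harmless because the prefetch instruction cannot fault. */
double sum_with_prefetch(const double *a, int n)
{
    double s = 0.0;
    for (int i = 0; i < n; i++) {
        __builtin_prefetch(&a[i + 16], 0 /* read */, 1 /* low temporal locality */);
        s += a[i];
    }
    return s;
}

Whether this pays off is exactly the trade-off above: the prefetch instructions consume issue slots, so the savings from reduced misses must exceed that cost.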

Page 22: CES 524                                                          May 6

Compiler Optimization vs. Memory Hierarchy Search

• Compiler tries to figure out memory hierarchy optimizations

• New approach: "auto-tuners" first run variations of the program on the target computer to find the best combinations of optimizations (blocking, padding, …) and algorithms, then produce C code to be compiled for that computer

• "Auto-tuners" targeted to numerical methods
– E.g., PHiPAC (BLAS), ATLAS (BLAS), Sparsity (sparse linear algebra), Spiral (DSP), FFTW

Page 23: CES 524                                                          May 6

Sparse Matrix – Search for Blocking

[Figure: Mflop/s for different register block sizes on a sparse matrix from a finite element problem; the reference (unblocked) code vs. the best block size found, 4x2 [Im, Yelick, Vuduc, 2005]]

Page 24: CES 524                                                          May 6

Why Matrix Multiplication?

• An important kernel in scientific problems– Appears in many linear algebra algorithms

– Closely related to other algorithms, e.g., transitive closure on a graph using Floyd-Warshall

• Optimization ideas can be used in other problems

• The best case for optimization payoffs

• The most-studied algorithm in high performance computing

Page 25: CES 524                                                          May 6

Matrix-multiply, optimized several ways

Speed of n-by-n matrix multiply on Sun Ultra-1/170, peak = 330 MFlops

Page 26: CES 524                                                          May 6

Outline

• Idealized and actual costs in modern processors

• Memory hierarchies• Case Study: Matrix Multiplication

– Simple cache model– Warm-up: Matrix-vector multiplication– Blocking algorithms– Other techniques

• Automatic Performance Tuning

Page 27: CES 524                                                          May 6

Note on Matrix Storage

• A matrix is a 2-D array of elements, but memory addresses are "1-D"

• Conventions for matrix layout
– By column, or "column major" (Fortran default): A(i,j) at A + i + j*n
– By row, or "row major" (C default): A(i,j) at A + i*n + j
– Recursive (later)

• Column major (for now)

[Figure: a matrix laid out in memory in column-major vs. row-major order; the blue row of the matrix is stored in the red cache lines. Figure source: Larry Carter, UCSD]
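The two layout conventions translate directly into index arithmetic. A small C sketch for an n x n matrix of doubles stored at A (the function names are illustrative, not from the slides):

/* Column major (Fortran default): A(i,j) lives at A + i + j*n */
double get_col_major(const double *A, int n, int i, int j) { return A[i + (long)j * n]; }

/* Row major (C default): A(i,j) lives at A + i*n + j */
double get_row_major(const double *A, int n, int i, int j) { return A[(long)i * n + j]; }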

Page 28: CES 524                                                          May 6

Computational Intensity: Key to algorithm efficiency

Machine Balance: Key to machine efficiency

Using a Simple Model of Memory to Optimize

• Assume just 2 levels in the hierarchy: fast (cache) and slow (DRAM)

• All data initially in slow memory
– m = number of memory elements (words) moved between fast and slow memory
– tm = time per slow memory operation
– f = number of arithmetic operations
– tf = time per arithmetic operation, tf << tm
– q = f / m = average number of flops per slow memory access

• Minimum possible time = f * tf when all data is in fast memory

• Actual time = f * tf + m * tm = f * tf * (1 + tm/tf * 1/q)

• Larger q means time closer to the minimum f * tf
– q >= tm/tf is needed to get at least half of peak speed

Page 29: CES 524                                                          May 6

Warm up: Matrix-vector multiplication

{implements y = y + A*x}

for i = 1:n

for j = 1:n

y(i) = y(i) + A(i,j)*x(j)

[Diagram: y(i) = y(i) + A(i,:) * x(:)]

Page 30: CES 524                                                          May 6

Warm up: Matrix-vector multiplication

{read x(1:n) into fast memory}

{read y(1:n) into fast memory}

for i = 1:n

{read row i of A into fast memory}

for j = 1:n

y(i) = y(i) + A(i,j)*x(j)

{write y(1:n) back to slow memory}

• m = number of slow memory refs = 3n + n^2

• f = number of arithmetic operations = 2n^2

• q = f / m ~ 2

• Matrix-vector multiplication limited by slow memory speed
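Written out in C with the column-major convention from the earlier slide (this translation is ours, not from the slides), the kernel is:

/* y = y + A*x for an n x n column-major matrix A.
   Note that for a fixed i, the walk across row i of A has stride n. */
void matvec(int n, const double *A, const double *x, double *y)
{
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            y[i] += A[i + (long)j * n] * x[j];   /* A(i,j) at A + i + j*n */
}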

Page 31: CES 524                                                          May 6

Modeling Matrix-Vector Multiplication

• Compute time for an n x n = 1000 x 1000 matrix

• Time = f * tf + m * tm = f * tf * (1 + tm/tf * 1/q)
       = 2*n^2 * tf * (1 + tm/tf * 1/2)

Machine | Clock (MHz) | Peak (Mflop/s) | Mem Lat Min (cycles) | Mem Lat Max (cycles) | Linesize (bytes) | t_m/t_f
Ultra 2i | 333 | 667 | 38 | 66 | 16 | 24.8
Ultra 3 | 900 | 1800 | 28 | 200 | 32 | 14.0
Pentium 3 | 500 | 500 | 25 | 60 | 32 | 6.3
Pentium3M | 800 | 800 | 40 | 60 | 32 | 10.0
Power3 | 375 | 1500 | 35 | 139 | 128 | 8.8
Power4 | 1300 | 5200 | 60 | 10000 | 128 | 15.0
Itanium1 | 800 | 3200 | 36 | 85 | 32 | 36.0
Itanium2 | 900 | 3600 | 11 | 60 | 64 | 5.5

(t_m/t_f is the machine balance: q must be at least this large to reach half of peak speed.)

Page 32: CES 524                                                          May 6

Simplifying Assumptions

• What simplifying assumptions did we make in this analysis?
– Ignored parallelism between memory and arithmetic within the processor
  » Sometimes the arithmetic term is dropped in this type of analysis
– Assumed fast memory was large enough to hold the three vectors
  » Reasonable if we are talking about any level of cache
  » Not if we are talking about registers (~32 words)
– Assumed the cost of a fast memory access is 0
  » Reasonable if we are talking about registers
  » Not necessarily if we are talking about cache (1-2 cycles for L1)
– Assumed memory latency is constant

• Could simplify even further by ignoring memory operations on the x and y vectors
– Mflop rate per element = 2 / (2*tf + tm)

Page 33: CES 524                                                          May 6

Validating the Model

• How well does the model predict actual performance?
– Actual DGEMV: the most highly optimized code for each platform

• The model is sufficient to compare across machines

• But it under-predicts on the most recent machines, due to the latency estimate

[Figure: predicted MFLOP/s (ignoring x, y), predicted DGEMV Mflop/s (with x, y), and actual DGEMV MFLOPS for Ultra 2i, Ultra 3, Pentium 3, Pentium3M, Power3, Power4, Itanium1, and Itanium2]

Page 34: CES 524                                                          May 6

Naïve Matrix Multiply

{implements C = C + A*B}

for i = 1 to n

for j = 1 to n

for k = 1 to n

C(i,j) = C(i,j) + A(i,k) * B(k,j)

[Diagram: C(i,j) = C(i,j) + A(i,:) * B(:,j)]

The algorithm performs 2*n^3 = O(n^3) flops and operates on 3*n^2 words of memory

q is potentially as large as 2*n^3 / (3*n^2) = O(n)

Page 35: CES 524                                                          May 6

Naïve Matrix Multiply

{implements C = C + A*B}
for i = 1 to n

{read row i of A into fast memory}

for j = 1 to n

{read C(i,j) into fast memory}

{read column j of B into fast memory}

for k = 1 to n

C(i,j) = C(i,j) + A(i,k) * B(k,j)

{write C(i,j) back to slow memory}

[Diagram: C(i,j) = C(i,j) + A(i,:) * B(:,j)]

Page 36: CES 524                                                          May 6

Naïve Matrix Multiply

• Number of slow memory references in the unblocked matrix multiply:

m = n^3     to read each column of B n times
  + n^2     to read each row of A once
  + 2n^2    to read and write each element of C once
  = n^3 + 3n^2

• So q = f / m = 2n^3 / (n^3 + 3n^2) ~ 2 for large n, no improvement over matrix-vector multiply

[Diagram: C(i,j) = C(i,j) + A(i,:) * B(:,j)]

Page 37: CES 524                                                          May 6

Matrix-multiply, optimized several ways

Speed of n-by-n matrix multiply on Sun Ultra-1/170, peak = 330 MFlops

Page 38: CES 524                                                          May 6

Naïve Matrix Multiply on RS/6000

[Figure: log cycles/flop vs. log problem size for naïve matrix multiply on an IBM RS/6000; the measured time grows like T = N^4.7]

O(N^3) performance would give constant cycles/flop; the measured performance looks like O(N^4.7)

Size 2000 took 5 days; size 12000 would take 1095 years

Slide source: Larry Carter, UCSD

Page 39: CES 524                                                          May 6

Naïve Matrix Multiply on RS/6000

Slide source: Larry Carter, UCSD

[Figure: the same log cycles/flop vs. log problem size plot, annotated with the regimes where there is a page miss every iteration, a TLB miss every iteration, a cache miss every 16 iterations, and a page miss every 512 iterations]

Page 40: CES 524                                                          May 6

Blocked (Tiled) Matrix Multiply

Consider A,B,C to be N-by-N matrices of b-by-b subblocks where b=n / N is called the block size

for i = 1 to N

for j = 1 to N

{read block C(i,j) into fast memory}

for k = 1 to N

{read block A(i,k) into fast memory}

{read block B(k,j) into fast memory}

C(i,j) = C(i,j) + A(i,k) * B(k,j) {do a matrix multiply on blocks}

{write block C(i,j) back to slow memory}

[Diagram: block C(i,j) = C(i,j) + A(i,k) * B(k,j)]
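A C version of the blocked multiply above, assuming row-major storage and a block size b that divides n evenly (both assumptions are ours, for brevity):

/* C = C + A*B for n x n row-major matrices, tiled with b x b blocks. */
void blocked_matmul(int n, int b, const double *A, const double *B, double *C)
{
    for (int ii = 0; ii < n; ii += b)
        for (int jj = 0; jj < n; jj += b)
            for (int kk = 0; kk < n; kk += b)
                /* C(ii:ii+b, jj:jj+b) += A(ii:ii+b, kk:kk+b) * B(kk:kk+b, jj:jj+b) */
                for (int i = ii; i < ii + b; i++)
                    for (int j = jj; j < jj + b; j++) {
                        double r = C[(long)i * n + j];
                        for (int k = kk; k < kk + b; k++)
                            r += A[(long)i * n + k] * B[(long)k * n + j];
                        C[(long)i * n + j] = r;
                    }
}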

Page 41: CES 524                                                          May 6

Blocked (Tiled) Matrix Multiply

Recall:
– m is the amount of memory traffic between slow and fast memory
– the matrix has n x n elements and N x N blocks, each of size b x b
– f is the number of floating point operations, 2n^3 for this problem
– q = f / m is our measure of algorithm efficiency in the memory system

So:
m = N*n^2   to read each block of B N^3 times (N^3 * b^2 = N^3 * (n/N)^2 = N*n^2)
  + N*n^2   to read each block of A N^3 times
  + 2n^2    to read and write each block of C once
  = (2N + 2) * n^2

So the computational intensity q = f / m = 2n^3 / ((2N + 2) * n^2) ~= n / N = b for large n

So we can improve performance by increasing the block size b

Can be much faster than matrix-vector multiply (q = 2)

Page 42: CES 524                                                          May 6

Using Analysis to Understand Machines

• The blocked algorithm has computational intensity q ~= b

• The larger the block size, the more efficient the algorithm will be

• Limit: all three blocks from A, B, and C must fit in fast memory (cache), so we cannot make the blocks arbitrarily large

• Assume your fast memory has size Mfast:
  3b^2 <= Mfast, so q ~= b <= sqrt(Mfast/3)

Machine | t_m/t_f | Required fast memory (KB)
Ultra 2i | 24.8 | 14.8
Ultra 3 | 14 | 4.7
Pentium 3 | 6.25 | 0.9
Pentium3M | 10 | 2.4
Power3 | 8.75 | 1.8
Power4 | 15 | 5.4
Itanium1 | 36 | 31.1
Itanium2 | 5.5 | 0.7

• To build a machine that runs matrix multiply at 1/2 the peak arithmetic speed of the machine, we need a fast memory of size Mfast >= 3b^2 ~= 3q^2 = 3(tm/tf)^2

• This size is reasonable for an L1 cache, but not for register sets

• Note: the analysis assumes it is possible to schedule the instructions perfectly
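As a quick check of the first row of the table (Ultra 2i; the 8-byte word size is our assumption): Mfast >= 3*(tm/tf)^2 = 3*(24.8)^2 ~ 1845 words ~ 14.8 KB, which matches the 14.8 KB entry above; that is comfortably within a typical L1 cache but far larger than a register file.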

Page 43: CES 524                                                          May 6

Limits to Optimizing Matrix Multiply

• The blocked algorithm changes the order in which values are accumulated into each C[i,j] by applying associativity

– Get slightly different answers from naïve code, because of roundoff - OK

• The previous analysis showed that the blocked algorithm has computational intensity:

q ~= b <= sqrt(Mfast/3)

• There is a lower bound result that says we cannot do any better than this (using only associativity)

Theorem (Hong & Kung, 1981): Any reorganization of this algorithm (that uses only associativity) is limited to q = O(sqrt(Mfast))

Page 44: CES 524                                                          May 6

Technique | Hit time | Bandwidth | Miss penalty | Miss rate | HW cost/complexity | Comment
Small and simple caches | + | | | – | 0 | Trivial; widely used
Way-predicting caches | + | | | | 1 | Used in Pentium 4
Trace caches | + | | | | 3 | Used in Pentium 4
Pipelined cache access | – | + | | | 1 | Widely used
Nonblocking caches | | + | + | | 3 | Widely used
Banked caches | | + | | | 1 | Used in L2 of Opteron and Niagara
Critical word first and early restart | | | + | | 2 | Widely used
Merging write buffer | | | + | | 1 | Widely used with write through
Compiler techniques to reduce cache misses | | | | + | 0 | Software is a challenge; some computers have compiler option
Hardware prefetching of instructions and data | | | + | + | 2 instr., 3 data | Many prefetch instructions; AMD Opteron prefetches data
Compiler-controlled prefetching | | | + | + | 3 | Needs nonblocking cache; in many CPUs

