Communication-Avoiding Algorithms
Jim Demmel, EECS & Math Departments, UC Berkeley
Why avoid communication? (1/3)
Algorithms have two costs (measured in time or energy):
1. Arithmetic (FLOPS)
2. Communication: moving data between
   – levels of a memory hierarchy (sequential case)
   – processors over a network (parallel case)

[Figure: sequential case — CPU ↔ cache ↔ DRAM; parallel case — CPUs with local DRAM connected over a network]
Why avoid communication? (2/3)
• Running time of an algorithm is the sum of 3 terms:
  – #flops * time_per_flop
  – #words_moved / bandwidth   (communication)
  – #messages * latency        (communication)
• time_per_flop << 1/bandwidth << latency, and the gaps are growing exponentially with time [FOSC]
• Avoid communication to save time
Annual improvements:
  Time_per_flop: 59%
  Bandwidth:  Network 26%, DRAM 23%
  Latency:    Network 15%, DRAM  5%
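To make the model concrete, here is a toy evaluation of the three-term runtime in Python; the machine parameters are illustrative stand-ins (made up), chosen only to respect time_per_flop << 1/bandwidth << latency:

```python
# Toy evaluation of the 3-term runtime model above; parameters are illustrative.
time_per_flop = 1e-10   # seconds per flop
inv_bandwidth = 1e-9    # seconds per word moved
latency       = 1e-6    # seconds per message

def runtime(flops, words_moved, messages):
    terms = (flops * time_per_flop,
             words_moved * inv_bandwidth,
             messages * latency)
    return sum(terms), terms   # total, plus each term for comparison

total, (t_flops, t_bw, t_lat) = runtime(flops=1e9, words_moved=1e8, messages=1e4)
print(total, t_flops, t_bw, t_lat)   # compare which term dominates
```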
Why Minimize Communication? (3/3)

[Figure: energy cost per operation, showing data movement dominating arithmetic — Source: John Shalf, LBL]
Minimize communication to save energy
Goals
• Redesign algorithms to avoid communication
  – Between all memory hierarchy levels
    • L1 ↔ L2 ↔ DRAM ↔ network, etc.
• Attain lower bounds if possible
  – Current algorithms often far from lower bounds
  – Large speedups and energy savings possible
President Obama cites Communication-Avoiding Algorithms in the FY 2012 Department of Energy Budget Request to Congress:

“New Algorithm Improves Performance and Accuracy on Extreme-Scale Computing Systems. On modern computer architectures, communication between processors takes longer than the performance of a floating point arithmetic operation by a given processor. ASCR researchers have developed a new method, derived from commonly used linear algebra methods, to minimize communications between processors and the memory hierarchy, by reformulating the communication patterns specified within the algorithm. This method has been implemented in the TRILINOS framework, a highly-regarded suite of software, which provides functionality for researchers around the world to solve large scale, complex multi-physics problems.”

FY 2010 Congressional Budget, Volume 4, FY2010 Accomplishments, Advanced Scientific Computing Research (ASCR), pages 65-67.

The algorithms cited: CA-GMRES (Hoemmen, Mohiyuddin, Yelick, JD) and “Tall-Skinny” QR (Grigori, Hoemmen, Langou, JD)
Outline
• Survey state of the art of CA (Comm-Avoiding) algorithms
  – TSQR: Tall-Skinny QR
  – CA O(n^3) 2.5D Matmul
  – CA Strassen Matmul
• Beyond linear algebra
  – Extending lower bounds to any algorithm with arrays
  – Communication-optimal N-body algorithm
• CA-Krylov methods
Collaborators and Supporters
• Michael Christ, Jack Dongarra, Ioana Dumitriu, David Gleich, Laura Grigori, Ming Gu, Olga Holtz, Julien Langou, Tom Scanlon, Kathy Yelick
• Grey Ballard, Austin Benson, Abhinav Bhatele, Aydin Buluc, Erin Carson, Maryam Dehnavi, Michael Driscoll, Evangelos Georganas, Nicholas Knight, Penporn Koanantakool, Ben Lipshitz, Oded Schwartz, Edgar Solomonik, Hua Xiang
• Other members of the ParLab, BEBOP, CACHE, EASI, FASTMath, MAGMA, PLASMA, TOPS projects
  – bebop.cs.berkeley.edu
• Thanks to NSF, DOE, UC Discovery, Intel, Microsoft, Mathworks, National Instruments, NEC, Nokia, NVIDIA, Samsung, Oracle
Outline
• Survey state of the art of CA (Comm-Avoiding) algorithms
  – TSQR: Tall-Skinny QR
  – CA O(n^3) 2.5D Matmul
  – CA Strassen Matmul
• Beyond linear algebra
  – Extending lower bounds to any algorithm with arrays
  – Communication-optimal N-body algorithm
• CA-Krylov methods
Summary of CA Linear Algebra
• “Direct” Linear Algebra
  – Lower bounds on communication for linear algebra problems like Ax=b, least squares, Ax = λx, SVD, etc.
  – Mostly not attained by algorithms in standard libraries
  – New algorithms that attain these lower bounds
    • Being added to libraries: Sca/LAPACK, PLASMA, MAGMA
    • Large speed-ups possible
  – Autotuning to find optimal implementation
• Ditto for “Iterative” Linear Algebra
Lower bound for all “n^3-like” linear algebra

• Let M = “fast” memory size (per processor). Then:
    #words_moved (per processor) = Ω( #flops (per processor) / M^(1/2) )
    #messages_sent ≥ #words_moved / largest_message_size, hence
    #messages_sent (per processor) = Ω( #flops (per processor) / M^(3/2) )
• Parallel case: assume either load or memory balanced
• Holds for:
  – Matmul, BLAS, LU, QR, eig, SVD, tensor contractions, …
  – Some whole programs (sequences of these operations, no matter how individual ops are interleaved, e.g. A^k)
  – Dense and sparse matrices (where #flops << n^3)
  – Sequential and parallel algorithms
  – Some graph-theoretic algorithms (e.g. Floyd-Warshall)

SIAM SIAG/Linear Algebra Prize, 2012 (Ballard, D., Holtz, Schwartz)
Can we attain these lower bounds?
• Do conventional dense algorithms as implemented in LAPACK and ScaLAPACK attain these bounds?
  – Often not
• If not, are there other algorithms that do?
  – Yes, for much of dense linear algebra
  – New algorithms, with new numerical properties, new ways to encode answers, new data structures
  – Not just loop transformations (need those too!)
• Only a few sparse algorithms so far
• Lots of work in progress
Outline
• Survey state of the art of CA (Comm-Avoiding) algorithms
  – TSQR: Tall-Skinny QR
  – CA O(n^3) 2.5D Matmul
  – CA Strassen Matmul
• Beyond linear algebra
  – Extending lower bounds to any algorithm with arrays
  – Communication-optimal N-body algorithm
• CA-Krylov methods
TSQR: QR of a Tall, Skinny matrix

Using MATLAB-style stacking notation [X; Y]:

W = [W0; W1; W2; W3] = [Q00·R00; Q10·R10; Q20·R20; Q30·R30]
                     = diag(Q00, Q10, Q20, Q30) · [R00; R10; R20; R30]

[R00; R10; R20; R30] = [Q01·R01; Q11·R11] = diag(Q01, Q11) · [R01; R11]

[R01; R11] = Q02·R02

Output = { Q00, Q10, Q20, Q30, Q01, Q11, Q02, R02 }
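A minimal serial sketch of this reduction in NumPy (the helper name tsqr and the single-level reduction are illustrative choices, not a library API; a real implementation reduces the R factors pairwise up a tree, as on the next slide):

```python
import numpy as np

def tsqr(W, nblocks=4):
    """Sketch of tree TSQR: local QRs per block, then one QR of the stacked R's.
    Returns the leaf factors, the reduction-level Q, and the final R."""
    blocks = np.array_split(W, nblocks, axis=0)
    # Leaf level: independent local QRs, no communication between blocks
    local = [np.linalg.qr(Wi) for Wi in blocks]       # (Q_i0, R_i0) pairs
    Rs = np.vstack([R for _, R in local])             # stack the small R factors
    # Reduction level: QR of the stacked R's (one level instead of a binary tree)
    Q_top, R_final = np.linalg.qr(Rs)
    return local, Q_top, R_final

# Usage: the final R agrees with ordinary QR of W up to row signs
W = np.random.randn(10000, 50)
_, _, R = tsqr(W)
R_ref = np.linalg.qr(W)[1]
assert np.allclose(np.abs(R), np.abs(R_ref))
```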
TSQR: An Architecture-Dependent Algorithm

[Figure: three reduction trees over W = [W0; W1; W2; W3]]
• Parallel: binary tree — (R00, R10, R20, R30) → (R01, R11) → R02
• Sequential: flat tree — R00 → R01 → R02 → R03, folding in one block at a time
• Dual core: hybrid of the two

Can choose reduction tree dynamically.
Multicore / Multisocket / Multirack / Multisite / Out-of-core: ?
TSQR Performance Results
• Parallel
  – Intel Clovertown: up to 8x speedup (8 cores, dual socket, 10M x 10)
  – Pentium III cluster, Dolphin Interconnect, MPICH: up to 6.7x speedup (16 procs, 100K x 200)
  – BlueGene/L: up to 4x speedup (32 procs, 1M x 50)
  – Tesla C2050 / Fermi: up to 13x (110,592 x 100)
  – Grid: 4x on 4 cities (Dongarra, Langou et al.)
  – Cloud: 1.6x slower than accessing the data twice (Gleich and Benson)
• Sequential
  – “Infinite speedup” for out-of-core on a PowerPC laptop
    • As little as 2x slowdown vs (predicted) infinite DRAM
    • LAPACK with virtual memory never finished
• SVD costs about the same
• Joint work with Grigori, Hoemmen, Langou, Anderson, Ballard, Keutzer, others

Data from Grey Ballard, Mark Hoemmen, Laura Grigori, Julien Langou, Jack Dongarra, Michael Anderson
Summary of dense parallel algorithms attaining communication lower bounds
• Assume n x n matrices on P processors
• Minimum memory per processor: M = O(n^2 / P)
• Recall lower bounds:
    #words_moved = Ω( (n^3/P) / M^(1/2) ) = Ω( n^2 / P^(1/2) )
    #messages    = Ω( (n^3/P) / M^(3/2) ) = Ω( P^(1/2) )
• Does ScaLAPACK attain these bounds?
  – For #words_moved: mostly, except the nonsymmetric eigenproblem
  – For #messages: asymptotically worse, except Cholesky
• New algorithms attain all bounds, up to polylog(P) factors
  – Cholesky, LU, QR, symmetric and nonsymmetric eigenproblems, SVD
Can we do better?
• Aren’t we already optimal?
• Why assume M = O(n^2/P), i.e. minimal?
  – Lower bound still true if more memory
  – Can we attain it?
Outline
• Survey state of the art of CA (Comm-Avoiding) algorithms
  – TSQR: Tall-Skinny QR
  – CA O(n^3) 2.5D Matmul
  – CA Strassen Matmul
• Beyond linear algebra
  – Extending lower bounds to any algorithm with arrays
  – Communication-optimal N-body algorithm
• CA-Krylov methods
SUMMA: n x n matmul on a P^(1/2) x P^(1/2) grid — (nearly) optimal using minimum memory M = O(n^2/P)

[Figure: C(i,j) += Σk A(i,k)·B(k,j); in step k, the block column Acol = A(:,k) is broadcast along processor rows and the block row Brow = B(k,:) along processor columns]
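A serial sketch of SUMMA’s k-loop in NumPy (the processor grid and broadcasts are simulated by slicing; summa_sim is an illustrative name):

```python
import numpy as np

# In step k each grid row "broadcasts" its block column A(:,k) and each grid
# column its block row B(k,:); every processor does one rank-b update.
def summa_sim(A, B, grid=4):
    n = A.shape[0]; b = n // grid
    C = np.zeros_like(A)
    for k in range(grid):
        Acol = A[:, k*b:(k+1)*b]      # broadcast along processor rows
        Brow = B[k*b:(k+1)*b, :]      # broadcast along processor columns
        C += Acol @ Brow              # local rank-b update on each processor
    return C

A = np.random.randn(64, 64); B = np.random.randn(64, 64)
assert np.allclose(summa_sim(A, B), A @ B)
```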
Using more than the minimum memory
• What if the matrix is small enough to fit c > 1 copies, so M = cn^2/P?
  – #words_moved = Ω( #flops / M^(1/2) ) = Ω( n^2 / (c^(1/2) P^(1/2)) )
  – #messages    = Ω( #flops / M^(3/2) ) = Ω( P^(1/2) / c^(3/2) )
• Can we attain this new lower bound?
2.5D Matrix Multiplication
• Assume we can fit cn^2/P data per processor, c > 1
• Processors form a (P/c)^(1/2) x (P/c)^(1/2) x c grid (example: P = 32, c = 2)
• Initially P(i,j,0) owns A(i,j) and B(i,j), each of size n(c/P)^(1/2) x n(c/P)^(1/2)

Algorithm:
(1) P(i,j,0) broadcasts A(i,j) and B(i,j) to P(i,j,k)
(2) Processors at level k perform 1/c-th of SUMMA, i.e. 1/c-th of Σm A(i,m)·B(m,j)
(3) Sum-reduce partial sums Σm A(i,m)·B(m,j) along the k-axis so that P(i,j,0) owns C(i,j)
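A serial sketch of the arithmetic in steps (2)–(3), with the communication elided (matmul_25d_sim is an illustrative name):

```python
import numpy as np

# Each of the c grid "levels" multiplies a disjoint 1/c-th of the summation
# index m, and the partial products are summed, mimicking the sum-reduce
# along the grid's third axis in step (3).
def matmul_25d_sim(A, B, c):
    chunks = np.array_split(np.arange(A.shape[1]), c)  # split index m over levels
    partials = [A[:, m] @ B[m, :] for m in chunks]     # step (2): 1/c-th of SUMMA each
    return sum(partials)                               # step (3): reduce along k-axis

A = np.random.randn(64, 64); B = np.random.randn(64, 64)
assert np.allclose(matmul_25d_sim(A, B, c=4), A @ B)
```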
2.5D Matmul on BG/P, 16K nodes / 64K cores (c = 16 copies)

[Figure: performance vs. the 2D algorithm — annotations show 12x and 2.7x speedups]

Distinguished Paper Award, EuroPar’11 (Solomonik, D.); SC’11 paper by Solomonik, Bhatele, D.
Perfect Strong Scaling – in Time and Energy
• Every time you add a processor, you should use its memory M too
• Start with minimal number of procs: PM = 3n^2
• Increase P by a factor of c ⇒ total memory increases by a factor of c
• Notation for timing model:
  – γ_T, β_T, α_T = secs per flop, per word_moved, per message of size m
  – T(cP) = n^3/(cP) · [γ_T + β_T/M^(1/2) + α_T/(m·M^(1/2))] = T(P)/c
• Notation for energy model:
  – γ_E, β_E, α_E = joules for the same operations
  – δ_E = joules per word of memory used per sec
  – ε_E = joules per sec for leakage, etc.
  – E(cP) = cP · { n^3/(cP) · [γ_E + β_E/M^(1/2) + α_E/(m·M^(1/2))] + δ_E·M·T(cP) + ε_E·T(cP) } = E(P)
• Perfect scaling extends to N-body, Strassen, …
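A numeric check sketch of the timing model, with made-up machine parameters (only the relative behavior matters):

```python
# 2D strong scaling lets M shrink as 3n^2/P, while 2.5D keeps each processor's
# M at the starting value by replicating data c = P/P0 times, which gives
# T(cP) = T(P0)/c exactly in this model.
gamma_T, beta_T, alpha_T, m = 1e-10, 1e-9, 1e-6, 1024
n, P0 = 4096, 64

def T(P, M):
    return (n**3 / P) * (gamma_T + beta_T / M**0.5 + alpha_T / (m * M**0.5))

for c in (1, 2, 4, 8):
    P = c * P0
    t_2d  = T(P, 3 * n**2 / P)    # minimal memory per processor
    t_25d = T(P, 3 * n**2 / P0)   # c-fold replicated memory
    print(f"c={c}: perfect-scaling ratio {t_25d * c / T(P0, 3 * n**2 / P0):.3f}, "
          f"2.5D/2D = {t_25d / t_2d:.3f}")
```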
Ongoing Work
• Lots more work on:
  – Algorithms:
    • LDL^T, QR with pivoting, other pivoting schemes, eigenproblems, …
    • All-pairs-shortest-paths, …
    • Both 2D (c=1) and 2.5D (c>1)
    • But only bandwidth may decrease with c>1, not latency
  – Platforms: multicore, cluster, GPU, cloud, heterogeneous, low-energy, …
  – Software: integration into Sca/LAPACK, PLASMA, MAGMA, …
• Integration into applications (on IBM BG/Q)
  – Qbox (with LLNL, IBM): molecular dynamics
  – CTF (with ANL): symmetric tensor contractions
Outline
• Survey state of the art of CA (Comm-Avoiding) algorithms
  – TSQR: Tall-Skinny QR
  – CA O(n^3) 2.5D Matmul
  – CA Strassen Matmul
• Beyond linear algebra
  – Extending lower bounds to any algorithm with arrays
  – Communication-optimal N-body algorithm
• CA-Krylov methods
Communication Lower Bounds for Strassen-like matmul algorithms

Classical O(n^3) matmul:           #words_moved = Ω( M·(n/M^(1/2))^3 / P )
vs. Strassen’s O(n^(lg 7)) matmul: #words_moved = Ω( M·(n/M^(1/2))^(lg 7) / P )
vs. Strassen-like O(n^ω) matmul:   #words_moved = Ω( M·(n/M^(1/2))^ω / P )

• Proof: graph expansion (different from classical matmul)
  – Strassen-like: DAG must be “regular” and connected
• Extends up to M = n^2 / P^(2/ω)
• Best Paper Prize (SPAA’11), Ballard, D., Holtz, Schwartz; to appear in JACM
• Is the lower bound attainable?
Communication Avoiding Parallel Strassen (CAPS)

CAPS:
  if enough memory and P ≥ 7 then
    BFS step   – run all 7 multiplies in parallel, each on P/7 processors; needs 7/4 as much memory
  else
    DFS step   – run all 7 multiplies sequentially, each on all P processors; needs 1/4 as much memory
  end if

The best way to interleave BFS and DFS steps is a tuning parameter.
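For reference, a minimal serial Strassen sketch showing the 7 recursive products that CAPS distributes (a BFS step runs them on disjoint processor groups, a DFS step runs them one after another on all processors):

```python
import numpy as np

def strassen(A, B, cutoff=64):
    """Serial Strassen for n-by-n matrices with n a power of 2."""
    n = A.shape[0]
    if n <= cutoff:
        return A @ B                      # classical matmul at the base
    h = n // 2
    A11, A12, A21, A22 = A[:h,:h], A[:h,h:], A[h:,:h], A[h:,h:]
    B11, B12, B21, B22 = B[:h,:h], B[:h,h:], B[h:,:h], B[h:,h:]
    # The 7 recursive multiplies (the BFS/DFS choice applies here in CAPS)
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    C = np.empty_like(A)
    C[:h,:h] = M1 + M4 - M5 + M7
    C[:h,h:] = M3 + M5
    C[h:,:h] = M2 + M4
    C[h:,h:] = M1 - M2 + M3 + M6
    return C

A = np.random.randn(256, 256); B = np.random.randn(256, 256)
assert np.allclose(strassen(A, B), A @ B)
```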
Performance Benchmarking, Strong Scaling Plot — Franklin (Cray XT4), n = 94080

[Figure: CAPS speedups of 24%–184% over previous Strassen-based algorithms]
Outline
• Survey state of the art of CA (Comm-Avoiding) algorithms
  – TSQR: Tall-Skinny QR
  – CA O(n^3) 2.5D Matmul
  – CA Strassen Matmul
• Beyond linear algebra
  – Extending lower bounds to any algorithm with arrays
  – Communication-optimal N-body algorithm
• CA-Krylov methods
Recall optimal sequential Matmul
• Naïve code:
    for i=1:n, for j=1:n, for k=1:n
      C(i,j) += A(i,k)*B(k,j)
• “Blocked” code:
    for i1 = 1:b:n, for j1 = 1:b:n, for k1 = 1:b:n
      for i2 = 0:b-1, for j2 = 0:b-1, for k2 = 0:b-1   … b x b matmul on each block triple
        i = i1+i2, j = j1+j2, k = k1+k2
        C(i,j) += A(i,k)*B(k,j)
• Thm: picking b = M^(1/2) attains the lower bound: #words_moved = Ω(n^3/M^(1/2))
• Where does the 1/2 come from?
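A runnable NumPy rendering of the blocked loop nest above (assumes b divides n; blocked_matmul is an illustrative name):

```python
import numpy as np

# With b ~ sqrt(M/3), each trio of b-by-b blocks of A, B, C fits in fast memory:
# ~2b^3 flops are done per ~3b^2 words moved, matching the M^(1/2) reuse factor.
def blocked_matmul(A, B, b):
    n = A.shape[0]
    C = np.zeros_like(A)
    for i1 in range(0, n, b):
        for j1 in range(0, n, b):
            for k1 in range(0, n, b):
                # one b x b matmul on a block triple
                C[i1:i1+b, j1:j1+b] += A[i1:i1+b, k1:k1+b] @ B[k1:k1+b, j1:j1+b]
    return C

A = np.random.randn(128, 128); B = np.random.randn(128, 128)
assert np.allclose(blocked_matmul(A, B, b=32), A @ B)
```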
New Thm applied to Matmul
• for i=1:n, for j=1:n, for k=1:n: C(i,j) += A(i,k)*B(k,j)
• Record array indices in matrix Δ:

        i  j  k
  Δ = [ 1  0  1 ]   A
      [ 0  1  1 ]   B
      [ 1  1  0 ]   C

• Solve LP for x = [x_i, x_j, x_k]^T: max 1^T·x s.t. Δ·x ≤ 1
  – Result: x = [1/2, 1/2, 1/2]^T, 1^T·x = 3/2 = s_HBL
• Thm: #words_moved = Ω(n^3/M^(s_HBL−1)) = Ω(n^3/M^(1/2)), attained by block sizes M^(x_i), M^(x_j), M^(x_k) = M^(1/2), M^(1/2), M^(1/2)
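The LP is small enough to solve directly; a sketch using scipy.optimize.linprog (any LP solver would do):

```python
import numpy as np
from scipy.optimize import linprog

# maximize 1^T x subject to Delta @ x <= 1, x >= 0 (linprog minimizes, so negate)
Delta = np.array([[1, 0, 1],    # A(i,k)
                  [0, 1, 1],    # B(k,j)
                  [1, 1, 0]])   # C(i,j)
res = linprog(c=-np.ones(3), A_ub=Delta, b_ub=np.ones(3))
print(res.x, -res.fun)          # -> [0.5 0.5 0.5], s_HBL = 1.5
```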
New Thm applied to Direct N-Body
• for i=1:n, for j=1:n: F(i) += force( P(i), P(j) )
• Record array indices in matrix Δ:

        i  j
  Δ = [ 1  0 ]   F
      [ 1  0 ]   P(i)
      [ 0  1 ]   P(j)

• Solve LP for x = [x_i, x_j]^T: max 1^T·x s.t. Δ·x ≤ 1
  – Result: x = [1, 1], 1^T·x = 2 = s_HBL
• Thm: #words_moved = Ω(n^2/M^(s_HBL−1)) = Ω(n^2/M^1), attained by block sizes M^(x_i), M^(x_j) = M^1, M^1
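A blocked direct n-body sketch matching the M x M tiling above (force is a toy 1-D stand-in; nbody_blocked is an illustrative name):

```python
import numpy as np

# Each pair of particle blocks of size b ~ M is loaded once and all b^2
# interactions are computed on it: Theta(M^2) flops per Theta(M) words moved.
def force(pi, pj):
    d = pj - pi
    return d / (np.abs(d)**3 + 1e-9)          # toy 1-D inverse-square force

def nbody_blocked(P, b):
    n = len(P)
    F = np.zeros(n)
    for i1 in range(0, n, b):
        for j1 in range(0, n, b):
            # all interactions between block i and block j, vectorized
            Pi = P[i1:i1+b, None]; Pj = P[None, j1:j1+b]
            F[i1:i1+b] += force(Pi, Pj).sum(axis=1)
    return F

P = np.random.randn(1024)
F = nbody_blocked(P, b=128)
```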
N-Body Speedups on IBM BG/P (Intrepid), 8K cores, 32K particles

[Figure: strong-scaling plot; up to 11.8x speedup]

Data: K. Yelick, E. Georganas, M. Driscoll, P. Koanantakool, E. Solomonik
New Thm applied to Random Code
• for i1=1:n, for i2=1:n, …, for i6=1:n
    A1(i1,i3,i6) += func1(A2(i1,i2,i4), A3(i2,i3,i5), A4(i3,i4,i6))
    A5(i2,i6)    += func2(A6(i1,i4,i5), A3(i3,i4,i6))
• Record array indices in matrix Δ:

          i1 i2 i3 i4 i5 i6
  Δ = [ 1  0  1  0  0  1 ]   A1
      [ 1  1  0  1  0  0 ]   A2
      [ 0  1  1  0  1  0 ]   A3
      [ 0  0  1  1  0  1 ]   A3, A4
      [ 0  1  0  0  0  1 ]   A5
      [ 1  0  0  1  1  0 ]   A6

• Solve LP for x = [x1,…,x6]^T: max 1^T·x s.t. Δ·x ≤ 1
  – Result: x = [2/7, 3/7, 1/7, 2/7, 3/7, 4/7], 1^T·x = 15/7 = s_HBL
• Thm: #words_moved = Ω(n^6/M^(s_HBL−1)) = Ω(n^6/M^(8/7)), attained by block sizes M^(2/7), M^(3/7), M^(1/7), M^(2/7), M^(3/7), M^(4/7)
Where do lower and matching upper bounds on communication come from? (1/3)
• Originally for C = A·B by Irony/Tiskin/Toledo (2004)
• Proof idea:
  – Suppose we can bound #useful_operations ≤ G doable with data in fast memory of size M
  – So to do F = #total_operations, we need to fill fast memory F/G times, and so #words_moved ≥ M·F/G
• Hard part: finding G
• Attaining the lower bound:
  – Need to “block” all operations to perform ~G operations on every chunk of M words of data
Proof of communication lower bound (2/3)

[Figure: the computation as a set of unit cubes in (i,j,k) space; the cube at (i,j,k) represents C(i,j) += A(i,k)·B(k,j), and its projections onto the three faces are the A entry A(i,k) (“A face”), B entry B(k,j) (“B face”), and C entry C(i,j) (“C face”) it touches]

• If we have at most M “A squares”, M “B squares”, and M “C squares”, how many cubes G can we have?
Proof of communication lower bound (3/3)

[Figure: a 3D set of cubes and its three 2D “shadows”; a black box with side lengths x, y, z is the extreme case]

• (i,k) is in the “A shadow” if (i,j,k) is in the 3D set; (j,k) is in the “B shadow” if (i,j,k) is in the 3D set; (i,j) is in the “C shadow” if (i,j,k) is in the 3D set
• For the black box:
    G = # cubes = volume = x·y·z = (xz · zy · yx)^(1/2) = (#A□s · #B□s · #C□s)^(1/2) ≤ M^(3/2)
• Thm (Loomis & Whitney, 1949): G = # cubes in 3D set = volume of 3D set
    ≤ (area(A shadow) · area(B shadow) · area(C shadow))^(1/2) ≤ M^(3/2)
Approach to generalizing lower bounds
• Matmul:
    for i=1:n, for j=1:n, for k=1:n: C(i,j) += A(i,k)*B(k,j)
  ⇒ for (i,j,k) in S = subset of Z^3, access locations indexed by (i,j), (i,k), (k,j)
• General case:
    for i1=1:n, for i2 = i1:m, …, for ik = i3:i4
      C(i1+2*i3-i7) = func(A(i2+3*i4, i1, i2, i1+i2, …), B(pnt(3*i4)), …)
      D(something else) = func(something else), …
  ⇒ for (i1,i2,…,ik) in S = subset of Z^k, access locations indexed by group homomorphisms, e.g.
      φ_C(i1,i2,…,ik) = (i1+2*i3-i7), φ_A(i1,i2,…,ik) = (i2+3*i4, i1, i2, i1+i2, …), …
• Can we bound #loop_iterations = #points in S, given bounds on #points in its images φ_C(S), φ_A(S), …?
General Communication Bound
• Given S a subset of Z^k and group homomorphisms φ1, φ2, …, bound |S| in terms of |φ1(S)|, |φ2(S)|, …, |φm(S)|
• Def: Hölder-Brascamp-Lieb LP (HBL-LP) for s1,…,sm:
    for all subgroups H ≤ Z^k:  rank(H) ≤ Σj sj·rank(φj(H))
• Thm (Christ/Tao/Carbery/Bennett): given s1,…,sm satisfying the HBL-LP,
    |S| ≤ Πj |φj(S)|^(sj)
• Thm: given a program with array refs given by the φj, choose the sj to minimize s_HBL = Σj sj subject to the HBL-LP. Then
    #words_moved = Ω( #iterations / M^(s_HBL − 1) )
Is this bound attainable? (1/2)
• But first: can we write it down?
  – Thm (bad news): reduces to Hilbert’s 10th problem over Q (conjectured to be undecidable)
  – Thm (good news): can write it down explicitly in many cases of interest (e.g. all φj = {subset of indices})
  – Thm (good news): easy to approximate
    • If you miss a constraint, the lower bound may be too large (i.e. s_HBL too small) but still worth trying to attain
    • Tarski-decidable to get a superset of constraints (may get s_HBL too large)
Is this bound attainable? (2/2)
• Depends on loop dependencies
• Best case: none, or reductions (matmul)
• Thm: when all φj = {subset of indices}, the dual of the HBL-LP gives optimal tile sizes:
    HBL-LP:      minimize 1^T·s  s.t.  s^T·Δ ≥ 1^T
    Dual-HBL-LP: maximize 1^T·x  s.t.  Δ·x ≤ 1
  Then for a sequential algorithm, tile index i_j by M^(x_j)
• Ex: Matmul: s = [1/2, 1/2, 1/2]^T = x
• Extends to unimodular transforms of indices
Ongoing Work
• Identify more decidable cases
  – Works for any 3 nested loops, or 3 different subscripts
• Automate generation of approximate LPs
• Extend “perfect scaling” results for time and energy by using extra memory
• Have yet to find a case where we cannot attain the lower bound – can we prove this?
• Incorporate into compilers
Outline
• Survey state of the art of CA (Comm-Avoiding) algorithms
  – TSQR: Tall-Skinny QR
  – CA O(n^3) 2.5D Matmul
  – CA Strassen Matmul
• Beyond linear algebra
  – Extending lower bounds to any algorithm with arrays
  – Communication-optimal N-body algorithm
• CA-Krylov methods
Avoiding Communication in Iterative Linear Algebra
• k steps of an iterative solver for sparse Ax=b or Ax=λx
  – Does k SpMVs with A and a starting vector
  – Many such “Krylov Subspace Methods”: Conjugate Gradients (CG), GMRES, Lanczos, Arnoldi, …
• Goal: minimize communication
  – Assume the matrix is “well-partitioned”
  – Serial implementation:
    • Conventional: O(k) moves of data from slow to fast memory
    • New: O(1) moves of data – optimal
  – Parallel implementation on p processors:
    • Conventional: O(k log p) messages (k SpMV calls, dot products)
    • New: O(log p) messages – optimal
• Lots of speedup possible (modeled and measured)
  – Price: some redundant computation
  – Challenges: poor partitioning, preconditioning, stability
Communication Avoiding Kernels: The Matrix Powers Kernel [Ax, A^2x, …, A^kx]
• Replace k iterations of y = A·x with [Ax, A^2x, …, A^kx]
• Works for any “well-partitioned” A
• Example: A tridiagonal, n = 32, k = 3

[Figure: rows x, A·x, A^2·x, A^3·x over columns 1, 2, 3, 4, …, 32; each entry of A^m·x depends on its neighbors in A^(m−1)·x]

• Sequential algorithm: sweep left to right in 4 steps (Step 1 … Step 4), computing all k vectors on each chunk before moving on, so A and x are read from slow memory only once
• Parallel algorithm: Proc 1 … Proc 4 each own a contiguous chunk of columns
  – Each processor communicates once with its neighbors, then works on its (overlapping) trapezoid of entries
• Same idea works for general sparse matrices:
  – Simple block-row partitioning → (hyper)graph partitioning
  – Top-to-bottom processing → Traveling Salesman Problem
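A sketch of the parallel version for a tridiagonal stencil (the (1,2,1) stencil apply_A stands in for the SpMV; the names are illustrative): each block owner fetches k ghost values per side once, then computes its pieces of all k vectors with no further communication, at the price of some redundant flops in the ghost zones.

```python
import numpy as np

def apply_A(v):
    p = np.pad(v, 1)                      # zero boundary conditions
    return p[:-2] + 2 * v + p[2:]         # tridiagonal (1, 2, 1) stencil

def powers_with_ghosts(x, lo, hi, k):
    glo, ghi = max(lo - k, 0), min(hi + k, len(x))  # one-time ghost fetch
    v = x[glo:ghi].copy()
    out = []
    for _ in range(k):
        v = apply_A(v)                    # redundant flops in the ghost zone
        out.append(v[lo - glo : hi - glo])
    return out                            # local pieces of A^m x, m = 1..k

# Check one block's results against the global computation
n, k, lo, hi = 32, 3, 8, 16
x = np.random.randn(n)
ref = [x]
for _ in range(k):
    ref.append(apply_A(ref[-1]))
loc = powers_with_ghosts(x, lo, hi, k)
for m in range(1, k + 1):
    assert np.allclose(loc[m - 1], ref[m][lo:hi])
```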
Minimizing Communication of GMRES to solve Ax=b
• GMRES: find x in span{b, Ab, …, A^k·b} minimizing ||Ax − b||_2

Standard GMRES:
  for i = 1 to k
    w = A · v(i-1)               … SpMV
    MGS(w, v(0), …, v(i-1))      … Modified Gram-Schmidt
    update v(i), H
  endfor
  solve LSQ problem with H

Communication-avoiding GMRES:
  W = [v, Av, A^2·v, …, A^k·v]   … matrix powers kernel
  [Q,R] = TSQR(W)                … “Tall Skinny QR”
  build H from R
  solve LSQ problem with H

• Sequential case: #words_moved decreases by a factor of k
• Parallel case: #messages decreases by a factor of k
• Oops – W is from the power method, precision lost!
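A sketch of the CA-GMRES kernel combination (dense A and np.linalg.qr stand in for the SpMV and TSQR kernels; ca_gmres_basis is an illustrative name, and building H is omitted):

```python
import numpy as np

# Build the Krylov basis W = [v, Av, ..., A^k v] with the matrix powers kernel,
# then orthogonalize it with one (TS)QR instead of k rounds of MGS.
def ca_gmres_basis(A, v, k):
    W = [v / np.linalg.norm(v)]
    for _ in range(k):
        W.append(A @ W[-1])                  # monomial Krylov basis
    Q, R = np.linalg.qr(np.column_stack(W))  # one communication-avoiding QR
    return Q, R

A = np.random.randn(100, 100) / 10
Q, R = ca_gmres_basis(A, np.random.randn(100), k=8)
# Monomial-basis caution: W behaves like the power method, so R becomes
# ill-conditioned as k grows -- hence the polynomial bases on the next slide.
print(np.linalg.cond(R))
```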
• The “monomial” basis [Ax, …, A^kx] fails to converge
• A different polynomial basis [p1(A)x, …, pk(A)x] does converge
Speedups of GMRES on 8-core Intel Clovertown [MHDY09]

[Figure: speedup bars; requires co-tuning the matrix powers and TSQR kernels]
CA-BiCGStab with Residual Replacement (RR) à la Van der Vorst and Ye

  Basis       Replacement iterations (count)
  Naive       74 (1)
  Monomial    [7, 15, 24, 31, …, 92, 97, 103] (17)
  Newton      [67, 98] (2)
  Chebyshev   68 (1)
Summary of Iterative Linear Algebra
• New lower bounds, optimal algorithms, big speedups in theory and practice
• Lots of other progress, open problems
  – Many different algorithms reorganized; more underway, more to be done
  – Need to recognize stable variants more easily
  – Preconditioning: hierarchically semiseparable matrices
  – Autotuning and synthesis: different kinds of “sparse matrices”
For more details
• Bebop.cs.berkeley.edu
• CS267 – Berkeley’s Parallel Computing Course
  – Live broadcast in Spring 2013: www.cs.berkeley.edu/~demmel
  – Prerecorded version planned in Spring 2013: www.xsede.org
    • Free supercomputer accounts to do homework!
Summary
Don’t Communic…
Time to redesign all linear algebra, n-body, … algorithms and software (and compilers)