Page 1 (source: bicmr.pku.edu.cn/~wenzw/courses/Lec_05_JacobiADMM.pdf)

Parallel Multi-Block ADMM with o(1/k) Convergence

Wotao Yin (UCLA Math)

W. Deng, M.-J. Lai, Z. Peng, and W. Yin, Parallel Multi-Block ADMM with

o(1/k) Convergence, UCLA CAM 13-64, 2013.

Advanced Workshop at Shanghai U.

1 / 22

Page 2

2 / 22

Page 3

Linearly Constrained Separable Problem

minimize    f_1(x_1) + ··· + f_N(x_N)
subject to  A_1 x_1 + ··· + A_N x_N = c,
            x_1 ∈ X_1, …, x_N ∈ X_N.

• f_i : R^{n_i} → (−∞, +∞] are convex functions; N ≥ 2.

• a.k.a. extended monotropic programming [Bertsekas, 2008]

• Examples:

• Linear programming
• Multi-agent network optimization
• Exchange problem
• Regularization model

3 / 22

Page 4

Parallel and Distributed Algorithms

Motivation:

• Data may be collected and stored in a distributed way

• Often difficult to minimize all the f_i's jointly

Strategy:

• Decompose the problem into N simpler and smaller subproblems

• Solve subproblems in parallel

• Coordinate by passing some information

4 / 22

Page 5

Dual Decomposition

(x_1^{k+1}, x_2^{k+1}, …, x_N^{k+1}) = argmin_{{x_i}} L(x_1, …, x_N, λ^k),

λ^{k+1} = λ^k − α_k ( Σ_{i=1}^N A_i x_i^{k+1} − c ),   α_k > 0.

• Lagrangian:

L(x_1, …, x_N, λ) = Σ_{i=1}^N f_i(x_i) − λ^T ( Σ_{i=1}^N A_i x_i − c )

• x-step has N decoupled x_i-subproblems, parallelizable (see the sketch below):

x_i^{k+1} = argmin_{x_i}  f_i(x_i) − ⟨λ^k, A_i x_i⟩,   for i = 1, 2, …, N

• Convergence rate: O(1/√k) (for general convex problems)

• Often slow convergence in practice
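A minimal NumPy sketch of dual decomposition, assuming the special case f_i(x_i) = ½‖C_i x_i − d_i‖² so that the x-step has a closed form; the function name, data layout, and fixed step size are illustrative choices, not taken from the paper.

import numpy as np

def dual_decomposition(C, d, A, c, alpha=0.05, iters=500):
    N = len(C)
    lam = np.zeros(c.shape[0])
    x = [np.zeros(Ai.shape[1]) for Ai in A]
    for _ in range(iters):
        # x-step: the N blocks decouple and could run in parallel; each block
        # solves C_i^T C_i x_i = C_i^T d_i + A_i^T lam (needs C_i of full column rank)
        x = [np.linalg.solve(C[i].T @ C[i], C[i].T @ d[i] + A[i].T @ lam)
             for i in range(N)]
        # dual step: gradient ascent on the dual with step size alpha
        lam = lam - alpha * (sum(A[i] @ x[i] for i in range(N)) - c)
    return x, lam

The fixed step size keeps the sketch simple; in practice the step-size choice is exactly what makes plain dual decomposition slow, as the last bullet notes.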

5 / 22

Page 6

Distributed ADMM [Bertsekas and Tsitsiklis, 1997; Boyd et al., 2010; Wang et al., 2013]

• Variable splitting:

min_{{x_i},{z_i}}  Σ_{i=1}^N f_i(x_i)
s.t.  A_i x_i − z_i = c/N,  i = 1, 2, …, N,
      Σ_{i=1}^N z_i = 0.

• Apply ADMM: alternately update {x_i} and {z_i}, then the multipliers {λ_i} (see the sketch below):

z_i^{k+1} = ( A_i x_i^k − c/N − λ_i^k/ρ ) − (1/N) Σ_{j=1}^N ( A_j x_j^k − c/N − λ_j^k/ρ ),   ∀i;

x_i^{k+1} = argmin_{x_i}  f_i(x_i) + (ρ/2) ‖ A_i x_i − z_i^{k+1} − c/N − λ_i^k/ρ ‖²,   ∀i;

λ_i^{k+1} = λ_i^k − ρ ( A_i x_i^{k+1} − z_i^{k+1} − c/N ),   ∀i.
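For concreteness, a hedged NumPy sketch of these variable-splitting ADMM updates with quadratic f_i(x_i) = ½‖C_i x_i − d_i‖²; the function name and parameter defaults are illustrative assumptions only.

import numpy as np

def vsadmm(C, d, A, c, rho=1.0, iters=500):
    N, m = len(C), c.shape[0]
    x = [np.zeros(Ai.shape[1]) for Ai in A]
    z = [np.zeros(m) for _ in range(N)]
    lam = [np.zeros(m) for _ in range(N)]
    for _ in range(iters):
        # z-step: subtract the block average so that sum_i z_i = 0
        w = [A[i] @ x[i] - c / N - lam[i] / rho for i in range(N)]
        w_bar = sum(w) / N
        z = [wi - w_bar for wi in w]
        # x-step: each block solves a small regularized least-squares problem
        x = [np.linalg.solve(C[i].T @ C[i] + rho * A[i].T @ A[i],
                             C[i].T @ d[i] + rho * A[i].T @ (z[i] + c / N + lam[i] / rho))
             for i in range(N)]
        # multiplier step, one lam_i per block
        lam = [lam[i] - rho * (A[i] @ x[i] - z[i] - c / N) for i in range(N)]
    return x, lam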

6 / 22

Page 7

Jacobi ADMM

• Augmented Lagrangian:

L_ρ(x_1, …, x_N, λ) = Σ_{i=1}^N f_i(x_i) − λ^T ( Σ_{i=1}^N A_i x_i − c ) + (ρ/2) ‖ Σ_{i=1}^N A_i x_i − c ‖²

• Do not introduce {z_i}; directly apply Jacobi-type block minimization:

x_i^{k+1} = argmin_{x_i}  L_ρ(x_1^k, …, x_{i−1}^k, x_i, x_{i+1}^k, …, x_N^k, λ^k)
          = argmin_{x_i}  f_i(x_i) + (ρ/2) ‖ A_i x_i + Σ_{j≠i} A_j x_j^k − c − λ^k/ρ ‖²

for i = 1, …, N in parallel;

λ^{k+1} = λ^k − ρ ( Σ_{i=1}^N A_i x_i^{k+1} − c ).

• Not necessarily convergent (even if N = 2)

• Need either conditions or modifications to converge

7 / 22

Page 8

A Sufficient Condition for Convergence

Theorem

Suppose that there exists δ > 0 such that

‖A_i^T A_j‖ ≤ δ  ∀ i ≠ j,   and   λ_min(A_i^T A_i) > 3(N − 1) δ  ∀ i.

Then Jacobi ADMM converges to a solution.

The assumption basically says:

• {A_i, i = 1, 2, …, N} are mutually “near-orthogonal”;

• every A_i has full column rank, with λ_min(A_i^T A_i) sufficiently large.

8 / 22

Page 9

Proximal Jacobi ADMM

1. for i = 1, …, N in parallel,

   x_i^{k+1} = argmin_{x_i}  f_i(x_i) + (ρ/2) ‖ A_i x_i + Σ_{j≠i} A_j x_j^k − c − λ^k/ρ ‖² + (1/2) ‖ x_i − x_i^k ‖²_{P_i};

2. λ^{k+1} = λ^k − γρ ( Σ_{i=1}^N A_i x_i^{k+1} − c ),   γ > 0.

• The added proximal term is critical to convergence.

• Some forms of P_i ⪰ 0 make subproblems easier to solve and more stable.

• Global o(1/k) convergence if P_i and γ are properly chosen.

• Suitable for parallel and distributed computing (see the sketch below).
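A hedged NumPy sketch of Proximal Jacobi ADMM with the standard proximal choice P_i = τ_i I, specialized to quadratic f_i(x_i) = ½‖C_i x_i − d_i‖² so each subproblem is one linear solve; the function name, the toy f_i, and the 1.01 safety factor on τ_i are assumptions for illustration.

import numpy as np

def prox_jacobi_admm(C, d, A, c, rho=1.0, gamma=1.0, iters=500):
    N = len(C)
    # tau_i from the simplified condition (C1): tau_i > rho*(N/(2-gamma) - 1)*||A_i||^2
    tau = [1.01 * rho * (N / (2.0 - gamma) - 1.0) * np.linalg.norm(Ai, 2) ** 2 for Ai in A]
    # the subproblem matrices are constant, so form them once and reuse them
    H = [C[i].T @ C[i] + rho * A[i].T @ A[i] + tau[i] * np.eye(A[i].shape[1]) for i in range(N)]
    Ctd = [C[i].T @ d[i] for i in range(N)]
    x = [np.zeros(Ai.shape[1]) for Ai in A]
    lam = np.zeros(c.shape[0])
    for _ in range(iters):
        Ax = [A[i] @ x[i] for i in range(N)]
        total = sum(Ax)
        # N independent x_i-updates (Jacobi: all use the old x^k) -> parallelizable
        x = [np.linalg.solve(H[i],
                             Ctd[i] - rho * A[i].T @ (total - Ax[i] - c - lam / rho) + tau[i] * x[i])
             for i in range(N)]
        lam = lam - gamma * rho * (sum(A[i] @ x[i] for i in range(N)) - c)
    return x, lam

Setting tau to zero would recover the plain Jacobi ADMM of the previous slide, which is not guaranteed to converge; the τ_i above follow the simplified form of condition (C1) introduced two slides later.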

9 / 22

Page 10

Little-o convergence

Lemma

If a sequence {a_k} ⊂ R obeys a_k ≥ 0 and Σ_{t=1}^∞ a_t < ∞, then:

1. (convergence) lim_{k→∞} a_k = 0;

2. (ergodic convergence) (1/k) Σ_{t=1}^k a_t = O(1/k);

3. (running best) min_{t≤k} {a_t} = o(1/k);

4. (non-ergodic convergence) if a_k is monotonically nonincreasing, then a_k = o(1/k).

(a short argument for items 3 and 4 is sketched below)
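The slide states the lemma without proof; the following short argument, standard and not taken from the paper, fills in items 3 and 4. Write T_k := Σ_{t>k} a_t for the tail sum, which tends to 0 by summability.

% why summability upgrades O(1/k) to o(1/k)
\begin{align*}
\text{(3)}\quad & \min_{t \le 2k} a_t \;\le\; \frac{1}{k}\sum_{t=k+1}^{2k} a_t \;\le\; \frac{T_k}{k},
  \qquad\text{so}\qquad 2k \cdot \min_{t\le 2k} a_t \;\le\; 2\,T_k \;\to\; 0, \\
& \text{and since } \min_{t\le k} a_t \text{ is nonincreasing in } k,\ \min_{t\le k} a_t = o(1/k). \\[4pt]
\text{(4)}\quad & \text{If } a_t \text{ is nonincreasing, then }
  k\, a_{2k} \;\le\; \sum_{t=k+1}^{2k} a_t \;\le\; T_k \;\to\; 0
  \quad\text{and}\quad (k+1)\, a_{2k+1} \;\le\; \sum_{t=k+1}^{2k+1} a_t \;\le\; T_k, \\
& \text{so both the even and the odd subsequences satisfy } a_k = o(1/k).
\end{align*}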

10 / 22

Page 11

Convergence of Proximal Jacobi ADMM

Sufficient condition: there exist ε_i > 0, i = 1, 2, …, N, such that

(C1)   P_i ⪰ ρ (1/ε_i − 1) A_i^T A_i,  i = 1, 2, …, N,   and   Σ_{i=1}^N ε_i < 2 − γ.

Simplification of (C1): set ε_i < (2 − γ)/N; then it suffices that

P_i ⪰ ρ ( N/(2 − γ) − 1 ) A_i^T A_i,   i = 1, 2, …, N.

• P_i = τ_i I (standard proximal method): τ_i > ρ ( N/(2 − γ) − 1 ) ‖A_i‖²

• P_i = τ_i I − ρ A_i^T A_i (prox-linear method): τ_i > ρ N/(2 − γ) ‖A_i‖²

(a small numeric check of these bounds follows below)
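A small helper, purely illustrative and not from the paper's code, that evaluates the two τ_i lower bounds above for given blocks A_i; it computes the spectral norm ‖A_i‖ explicitly, which is only sensible for moderately sized blocks.

import numpy as np

def tau_bounds(A_blocks, rho, gamma):
    # returns, per block: (bound for P_i = tau_i*I, bound for P_i = tau_i*I - rho*A_i^T A_i)
    N = len(A_blocks)
    assert 0 < gamma < 2, "the simplified condition assumes 0 < gamma < 2"
    out = []
    for Ai in A_blocks:
        norm_sq = np.linalg.norm(Ai, 2) ** 2        # ||A_i||^2 (spectral norm squared)
        out.append((rho * (N / (2 - gamma) - 1) * norm_sq,
                    rho * (N / (2 - gamma)) * norm_sq))
    return out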

11 / 22

Page 12

o(1/k) Convergence Rate

Notation:

G_x := blkdiag( P_1 + ρ A_1^T A_1, …, P_N + ρ A_N^T A_N ),    G'_x := G_x − ρ A^T A,

where A := [A_1, A_2, …, A_N].

Theorem

If G'_x ⪰ 0 and condition (C1) holds, then

‖x^k − x^{k+1}‖²_{G'_x} = o(1/k)   and   ‖λ^k − λ^{k+1}‖² = o(1/k).

Note: (x^{k+1}, λ^{k+1}) is optimal if ‖x^k − x^{k+1}‖²_{G'_x} = 0 and ‖λ^k − λ^{k+1}‖² = 0, so the quantity ‖u^k − u^{k+1}‖²_{G'} (with u = (x, λ)) serves as a measure of the convergence rate. The proof is similar to He and Yuan [2012] and He et al. [2013].

12 / 22

Page 13

Adaptive Parameter Tuning

• Condition (C1) may be rather conservative.

• Idea: adaptively adjust the matrices {P_i} while keeping guaranteed convergence.

Initialize with small P_i^0 ⪰ 0 (i = 1, 2, …, N) and a small η > 0;
for k = 1, 2, … do
    if h(u^{k−1}, u^k) > η · ‖u^{k−1} − u^k‖²_G then
        P_i^{k+1} ← P_i^k, ∀i;
    else
        Increase P_i:  P_i^{k+1} ← α_i P_i^k + β_i Q_i  (α_i > 1, β_i ≥ 0, Q_i ⪰ 0), ∀i;
        Restart:  u^k ← u^{k−1};

Note: h(u^k, u^{k+1}) can be computed at little extra cost in the algorithm.

• Often yields much smaller parameters {P_i} than those required by condition (C1), leading to substantially faster convergence in practice (a hedged sketch of one tuning step follows below).
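A hedged Python sketch of a single step of the tuning rule above. The quantity h(·,·) and the G-weighted norm are defined in the paper, not on this slide, so both are passed in as callables; the function name and interface are assumptions for illustration.

def adaptive_tuning_step(P, u_prev, u_cur, h, g_norm_sq, eta, alpha, beta, Q):
    # P, alpha, beta, Q are per-block lists; returns (P_next, u_next, restarted)
    if h(u_prev, u_cur) > eta * g_norm_sq(u_prev, u_cur):
        return P, u_cur, False                  # keep every P_i and accept the new iterate
    # otherwise increase every P_i and restart from the previous iterate
    P_next = [alpha[i] * P[i] + beta[i] * Q[i] for i in range(len(P))]
    return P_next, u_prev, True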

13 / 22

Page 14

Numerical Experiments

Compare several parallel splitting algorithms:

• Prox-JADMM: Proximal Jacobi ADMM [this work]

• VSADMM: distributed ADMM, variable splitting [Bertsekas, 2008]

• Corr-JADMM: Jacobian ADMM with correction steps [He et al., 2013]

They have roughly the same per-iteration cost (in terms of both computation

and communication).

14 / 22

Page 15

Exchange Problem

Consider a network of N agents that exchange n commodities.

min_{{x_i}}  Σ_{i=1}^N f_i(x_i)   s.t.   Σ_{i=1}^N x_i = 0.

• x_i ∈ R^n (i = 1, 2, …, N): quantities of commodities exchanged by agent i.

• f_i : R^n → R: cost function for agent i.

(a usage sketch for this problem follows below)
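As an illustration only, the prox_jacobi_admm sketch from the Page 9 notes (assumed to be in scope here) applies directly with A_i = I and c = 0; the problem sizes match the experiment on the next slide, while ρ, the iteration count, and the random data are arbitrary choices.

import numpy as np

np.random.seed(0)
n, N, p = 100, 100, 80
C = [np.random.randn(p, n) for _ in range(N)]    # quadratic costs f_i = 0.5*||C_i x_i - d_i||^2
d = [np.random.randn(p) for _ in range(N)]
A = [np.eye(n) for _ in range(N)]                # exchange problem: A_i = I, c = 0
c = np.zeros(n)

x, lam = prox_jacobi_admm(C, d, A, c, rho=0.01, gamma=1.0, iters=200)
print("constraint residual ||sum_i x_i||:", np.linalg.norm(sum(x)))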

15 / 22

Page 16

Numerical Result

Let f_i(x_i) := (1/2) ‖C_i x_i − d_i‖², with C_i ∈ R^{p×n} and d_i ∈ R^p.

[Two plots over 200 iterations comparing Prox-JADMM, VSADMM, and Corr-JADMM: objective value (left) and residual (right), both on a log scale.]

Figure: Exchange problem (n = 100, N = 100, p = 80).

16 / 22

Page 17

Basis Pursuit

Finding sparse solutions of an under-determined linear system:

min_x ‖x‖_1   s.t.   Ax = c

• x ∈ R^n, A ∈ R^{m×n} (m < n)

• Partition data into N blocks (see the sketch below):

x = [x_1, x_2, …, x_N],   A = [A_1, A_2, …, A_N],   f_i(x_i) = ‖x_i‖_1

• YALL1: a dual-ADMM solver for the basis pursuit problem.
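A hedged sketch, not YALL1 and not the authors' code, of Proximal Jacobi ADMM for basis pursuit with the prox-linear choice P_i = τ_i I − ρ A_i^T A_i; with this choice each x_i-update reduces to a soft-thresholding step. Function names, ρ, and the 1.01 safety factor on τ_i are illustrative assumptions.

import numpy as np

def shrink(v, kappa):
    # soft-thresholding: the proximal operator of kappa*||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

def prox_jadmm_bp(A_blocks, c, rho=1e-3, gamma=1.0, iters=1000):
    N = len(A_blocks)
    # tau_i from the simplified condition (C1), prox-linear case
    tau = [1.01 * rho * N / (2.0 - gamma) * np.linalg.norm(Ai, 2) ** 2 for Ai in A_blocks]
    x = [np.zeros(Ai.shape[1]) for Ai in A_blocks]
    lam = np.zeros(c.shape[0])
    for _ in range(iters):
        r = sum(Ai @ xi for Ai, xi in zip(A_blocks, x)) - c - lam / rho
        # N independent soft-thresholding steps -> parallelizable
        x = [shrink(x[i] - (rho / tau[i]) * A_blocks[i].T @ r, 1.0 / tau[i])
             for i in range(N)]
        lam = lam - gamma * rho * (sum(Ai @ xi for Ai, xi in zip(A_blocks, x)) - c)
    return np.concatenate(x), lam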

17 / 22

Page 18

Numerical Result

[Two plots of relative error vs. iteration, on a log scale, comparing Prox-JADMM, YALL1, VSADMM, and Corr-JADMM: (a) noise-free (σ = 0); (b) noise added (σ = 10^{-3}).]

Figure: ℓ1-problem (n = 1000, m = 300, k = 60).

18 / 22

Page 19

Amazon EC2

Tested solving two basis pursuit problems:

              m            n            k            Size
dataset 1     1.0 × 10^5   2.0 × 10^5   2.0 × 10^3   150 GB
dataset 2     1.5 × 10^5   3.0 × 10^5   3.0 × 10^3   337 GB

Environment:

• C code uses GSL and MPI, about 300 lines

• 10 instances from Amazon, each with 8 cores and 68GB RAM

• price: $17 each hour

19 / 22

Page 20

                         150GB Test                   337GB Test
                     Itr   Time(s)  Cost($)       Itr   Time(s)  Cost($)
Data generation       –     44.4     0.21          –     99.5     0.50
CPU per iteration     –      1.32      –           –      2.85      –
Comm. per iteration   –      0.07      –           –      0.15      –
Reach 10^-1          23     30.4     0.14         27     79.08    0.37
Reach 10^-2          30     39.4     0.18         39    113.68    0.53
Reach 10^-3          86    112.7     0.53         84    244.49    1.15
Reach 10^-4         234    307.9     1.45         89    259.24    1.22

20 / 22

Page 21

Summary

• It is feasible to extend ADMM from 2 blocks to 3 or more blocks

• Jacobi ADMM is good for problems with large and distributed data

• Gauss-Seidel ADMM is good for 3 or a few more blocks; Jacobi ADMM is good for many blocks

• Asynchronous subproblems become a real need (talk to Dong Qian)

21 / 22

Page 22

References

D. P. Bertsekas. Extended monotropic programming and duality. Journal of Optimization Theory and Applications, 139(2):209–225, 2008.

D. Bertsekas and J. Tsitsiklis. Parallel and Distributed Computation: Numerical Methods, Second Edition. Athena Scientific, 1997.

S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1):1–122, 2010.

X. F. Wang, M. Y. Hong, S. Q. Ma, and Z.-Q. Luo. Solving multiple-block separable convex minimization problems using two-block alternating direction method of multipliers. arXiv preprint arXiv:1308.5294, 2013.

B. S. He and X. M. Yuan. On non-ergodic convergence rate of Douglas–Rachford alternating direction method of multipliers. 2012.

B. S. He, L. S. Hou, and X. M. Yuan. On full Jacobian decomposition of the augmented Lagrangian method for separable convex programming. 2013.

22 / 22

