Image Reconstruction Tutorial
Part 1: Sparse optimization and learning approaches
Aleksandra Pizurica¹ and Bart Goossens²
¹Group for Artificial Intelligence and Sparse Modelling (GAIM)
²Image Processing and Interpretation (IPI) group
TELIN, Ghent University - imec
Yearly workshop FWO–WOG
Turning images into value through statistical parameter estimation
Ghent, Belgium, 20 September 2019
Outline
1 Model-based iterative reconstruction algorithms
Sparse optimization
Solution strategies: greedy methods vs. convex optimization
Optimization methods in sparse image reconstruction
2 Structured sparsity
Wavelet-tree sparsity
Markov Random Field (MRF) priors
3 Machine learning in image reconstruction
Main ideas and current trends
A. Pizurica and B. Goossens (UGent) Image Reconstruction Tutorial: Part 1 FWO-WOG TIVSPE 2019 2 / 45
A fairly general formulation
Reconstruct a signal (image) x ∈ X from data y ∈ Y where
y = T (x) + n
X and Y are Hilbert spaces, T : X → Y is the forward operator and n is noise.
A common model-driven approach is to minimize the negative log-likelihood L:
min_{x∈X} L(T(x), y)
This problem is typically ill-posed and leads to over-fitting. Variational regularization, also called model-based iterative reconstruction, seeks to minimize a regularized objective function
min_{x∈X} L(T(x), y) + τφ(x)
φ : X → R ∪ {−∞, +∞} is a regularization functional. τ ≥ 0 governs the influence of the a priori knowledge against the need to fit the data.
Linear inverse problems
Many image reconstruction problems can be formulated as a linear inverse problem.
A noisy indirect observation y of the original image x is then
y = Ax + n
Matrix A is the forward operator; x ∈ R^n and y, n ∈ R^m (or x ∈ C^n and y, n ∈ C^m).
Here, image pixels are stacked into vectors (raster scanning). In general, m ≠ n.
Some examples
CT: A is the system matrix modeling the X-ray transform
MRI: A is a (partially sampled) Fourier operator
OCT: A is the first Born approximation of the scattering
Compressed sensing: A is a measurement matrix (dense or sparse)
Linear inverse problems
For the linear inverse problem y = Ax + n, model-based reconstruction seeks to solve:
min_x (1/2)‖Ax − y‖₂² + τφ(x)   (Tikhonov formulation)
Alternatively,
min_x φ(x) subject to ‖Ax − y‖₂² ≤ ε   (Morozov formulation)
min_x ‖Ax − y‖₂² subject to φ(x) ≤ δ   (Ivanov formulation)
Under mild conditions, these are all equivalent [Figueiredo and Wright, 2013], and
which one is more convenient is problem-dependent.
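For intuition, the Tikhonov formulation with the (non-sparse) quadratic regularizer φ(x) = ‖x‖₂² admits a closed-form minimizer, x̂ = (AᵀA + 2τI)⁻¹Aᵀy, a useful baseline before the non-smooth sparse penalties discussed next. A minimal numpy sketch; the random matrix A and the data below are illustrative assumptions, not from the tutorial:

```python
import numpy as np

def tikhonov_quadratic(A, y, tau):
    """Closed-form minimizer of (1/2)||Ax - y||_2^2 + tau * ||x||_2^2.

    Setting the gradient A^T(Ax - y) + 2*tau*x to zero gives the
    normal equations (A^T A + 2*tau*I) x = A^T y.
    """
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + 2.0 * tau * np.eye(n), A.T @ y)

# Illustrative synthetic problem
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
x_true = rng.standard_normal(10)
y = A @ x_true + 0.01 * rng.standard_normal(20)
x_hat = tikhonov_quadratic(A, y, tau=1e-3)
```

With non-smooth φ (e.g. the ℓ₁ norm) no such closed form exists, which is what motivates the iterative methods in the rest of this part.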
Sparse optimization
A common assumption: x is sparse in a well-chosen transform domain.
We refer to a wavelet representation, meaning any wavelet-like multiscale representation, including curvelets and shearlets.
x = Ψθ, θ ∈ R^d, Ψ ∈ R^{n×d}
The columns of Ψ are the elements of a wavelet frame (an orthogonal basis or an overcomplete dictionary).
The main results hold for learned dictionaries, trained on a set of representative examples to yield optimally sparse representations for a particular class of images.
Compressed sensing
Consider y = Ax + n, with y, n ∈ R^m, x = Ψθ, x ∈ R^n, θ ∈ R^d, m < n
θ̂ = arg min_θ (1/2)‖AΨθ − y‖₂² + τφ(θ),  x̂ = Ψθ̂
Commonly: min_θ ‖θ‖₀ s.t. ‖AΨθ − y‖₂² ≤ ε  or  min_θ (1/2)‖AΨθ − y‖₂² + τ‖θ‖₁
[Candes et al., 2006], [Donoho, 2006], [Lustig et al., 2007]
Compressed sensing: recovery guarantees
Consider y = Ax + n, with y, n ∈ R^m, x = Ψθ, x ∈ R^n, m < n
Matrix Φ = AΨ has the K-restricted isometry property (K-RIP) with constant ε_K < 1 if for every K-sparse θ (having only K non-zero entries):
(1 − ε_K)‖θ‖₂² ≤ ‖Φθ‖₂² ≤ (1 + ε_K)‖θ‖₂²
Suppose matrix A ∈ R^{m×n} is formed by subsampling a given sampling operator A ∈ R^{n×n}. The mutual coherence between A and Ψ is:
μ(A, Ψ) = max_{i,j} |a_i^T ψ_j|
If m > C μ²(A, Ψ) K n log(n), for some constant C > 0, then
min_θ (1/2)‖AΨθ − y‖₂² + τ‖θ‖₁
recovers x with high probability, given that the K-RIP holds for Φ = AΨ.
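The mutual coherence is easy to evaluate for explicit matrices. The sketch below (a toy setup chosen for illustration, not from the slides) computes μ for point sampling (A = I) against a unitary DFT basis, the classical maximally incoherent pair with μ = 1/√n:

```python
import numpy as np

def mutual_coherence(A, Psi):
    """mu(A, Psi) = max_{i,j} |a_i^T psi_j|, with a_i the rows of A
    and psi_j the columns of Psi."""
    return np.max(np.abs(A @ Psi))

n = 64
A = np.eye(n)                              # point sampling in the image domain
Psi = np.fft.fft(np.eye(n)) / np.sqrt(n)   # unitary DFT basis
mu = mutual_coherence(A, Psi)              # equals 1/sqrt(n) here
```

Low coherence is what makes the sampling bound m > C μ² K n log(n) favourable: for this pair μ² = 1/n and the factor n cancels.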
Analysis vs. synthesis formulation
Consider y = Ax + n, with y ∈ R^m, x = Ψθ, x ∈ R^n, θ ∈ R^d
Synthesis approach:
min_θ (1/2)‖AΨθ − y‖₂² + τφ(θ)
or in a constrained form:
min_θ φ(θ) subject to ‖AΨθ − y‖₂² ≤ ε
Analysis approach (which also covers wavelet regularization):
min_x (1/2)‖Ax − y‖₂² + τφ(Px)
or in a constrained form:
min_x φ(Px) subject to ‖Ax − y‖₂² ≤ ε
P: a wavelet transform operator, or P = I (standard analysis)
Solution strategies: greedy methods vs. convex optimization
The solution strategy is problem-dependent. For non-convex problems like
min_x ‖x‖₀ subject to ‖Ax − y‖₂² ≤ ε
greedy algorithms can be applied, e.g.,
Matching Pursuit (MP) [Mallat and Zhang, 1993]
OMP [Tropp, 2004], CoSaMP [Needell and Tropp, 2009]
IHT [Blumensath and Davies, 2009]
or a convex relaxation can be used, leading to:
min_x ‖x‖₁ subject to ‖Ax − y‖₂² ≤ ε
min_x (1/2)‖Ax − y‖₂² + τ‖x‖₁
known as the LASSO [Tibshirani, 1996] or BPDN [Chen et al., 2001] problem.
Greedy methods: OMP
OMP algorithm for solving min_x ‖x‖₀ subject to Ax = y
Require: k = 1, r(1) = y, Λ(0) = ∅
1: repeat
2:   λ(k) = arg max_j |A_j^T r(k)|
3:   Λ(k) = Λ(k−1) ∪ {λ(k)}
4:   x(k) = arg min_x ‖A_{Λ(k)} x − y‖₂
5:   y(k) = A_{Λ(k)} x(k)
6:   r(k+1) = y − y(k)
7:   k = k + 1
8: until stopping criterion satisfied
A_j is the j-th column of A, and A_Λ a sub-matrix of A with the columns indicated in Λ.
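The steps above can be sketched compactly in numpy, with the least-squares step solved over the current support. The random test problem is an illustrative assumption, not part of the slides:

```python
import numpy as np

def omp(A, y, k_max, tol=1e-10):
    """Orthogonal Matching Pursuit (greedy support selection + least squares)."""
    n = A.shape[1]
    r = y.copy()                              # r(1) = y
    support = []                              # Lambda(0) = empty set
    x = np.zeros(n)
    for _ in range(k_max):
        j = int(np.argmax(np.abs(A.T @ r)))   # column most correlated with residual
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)  # LS over support
        x = np.zeros(n)
        x[support] = coef
        r = y - A @ x                         # residual w.r.t. the full LS fit
        if np.linalg.norm(r) < tol:
            break
    return x

# Illustrative noiseless recovery of a 3-sparse vector
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 60))
A /= np.linalg.norm(A, axis=0)                # unit-norm columns
x_true = np.zeros(60)
x_true[[3, 17, 42]] = [2.0, -1.5, 1.0]
y = A @ x_true
x_hat = omp(A, y, k_max=3)
```

The re-projection onto all selected columns at each step (rather than a single coefficient update) is what distinguishes OMP from plain Matching Pursuit.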
Proximal operator
Many state-of-the-art image reconstruction algorithms solve problems of the kind
min_x (1/2)‖Ax − y‖₂² + τφ(Px)
making use of the proximity operator, i.e., the Moreau proximal mapping [Combettes and Wajs, 2005]:
prox_{τφ}(y) = arg min_x (1/2)‖x − y‖₂² + τφ(x)
For certain choices of φ(x), this operator has a closed form, e.g.,
φ(x) = ‖x‖₁ → prox_{τℓ₁}(y) = soft(y, τ), component-wise soft thresholding
φ(x) = ‖x‖₀ → prox_{τℓ₀}(y) = hard(y, √(2τ)), component-wise hard thresholding
Another common regularization function is total variation (TV):
φ(x) = ‖x‖_TV → prox_{τTV}(y): Chambolle’s algorithm [Chambolle, 2004]
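The two closed-form proximal operators above take only a few lines each; a sketch following the soft/hard definitions on this slide (the input vector is an illustrative example):

```python
import numpy as np

def prox_l1(y, tau):
    """prox of tau*||.||_1: component-wise soft thresholding soft(y, tau)."""
    return np.sign(y) * np.maximum(np.abs(y) - tau, 0.0)

def prox_l0(y, tau):
    """prox of tau*||.||_0: component-wise hard thresholding hard(y, sqrt(2*tau))."""
    return np.where(np.abs(y) > np.sqrt(2.0 * tau), y, 0.0)

y = np.array([-3.0, -0.5, 0.2, 1.0, 2.5])
soft_out = prox_l1(y, 1.0)   # [-2.0, 0.0, 0.0, 0.0, 1.5]
hard_out = prox_l0(y, 0.5)   # threshold sqrt(2*0.5) = 1: [-3.0, 0.0, 0.0, 0.0, 2.5]
```

Note the qualitative difference: soft thresholding shrinks the surviving entries by τ (a bias), while hard thresholding keeps them untouched but is the prox of a non-convex penalty.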
Iterative shrinkage/thresholding (IST)
The standard algorithm for solving
min_x (1/2)‖Ax − y‖₂² + τφ(x)
is the iterative shrinkage/thresholding (IST) algorithm [Figueiredo and Nowak, 2003], [Daubechies et al., 2004]:
x(k+1) = prox_{τφ}( x(k) − (1/γ) A^H(Ax(k) − y) )
where A^H(Ax(k) − y) is the gradient of the data-fidelity term and 1/γ is the step size.
Its key ingredient is the proximity operator prox_{τφ}(y) = arg min_x (1/2)‖x − y‖₂² + τφ(x).
A general approach in [Daubechies et al., 2004]: φ(x) is a weighted ℓ_p norm of the coefficients of x with respect to a wavelet basis.
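For φ = ‖·‖₁, one IST iteration is a gradient step on the data-fidelity term followed by soft thresholding. A minimal sketch with γ set to the Lipschitz constant ‖A‖₂² (so the threshold becomes τ/γ); the test problem is an illustrative assumption:

```python
import numpy as np

def ist_l1(A, y, tau, n_iter=2000):
    """IST for min_x 0.5*||Ax - y||_2^2 + tau*||x||_1."""
    gamma = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of x -> A^T(Ax - y)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)             # gradient of the data-fidelity term
        z = x - grad / gamma                 # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - tau / gamma, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((20, 30))
x0 = np.zeros(30)
x0[[4, 9]] = [1.0, -2.0]
y = A @ x0
x_hat = ist_l1(A, y, tau=0.1)
```

At a minimizer the LASSO optimality condition |A^T(Ax̂ − y)|ᵢ ≤ τ holds component-wise, which is a convenient convergence check.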
Iterative shrinkage/thresholding (IST) and extensions
Several accelerated variants of IST exist:
TwIST [Bioucas-Dias and Figueiredo, 2007]
FISTA [Beck and Teboulle, 2009]
SpaRSA [Wright et al., 2009]
Variable splitting
A very old idea (going back to at least [Courant, 1943]): represent min_x f₁(x) + f₂(Gx) as
min_x f₁(x) + f₂(z) subject to Gx = z
The rationale: it may be easier to solve the constrained problem.
Variable splitting (VS) together with the augmented Lagrangian method (ALM) and nonlinear block Gauss-Seidel (NLBGS) leads to a form of the Alternating Direction Method of Multipliers (ADMM). It is this interpretation:
(VS + ALM + NLBGS) → ADMM
that we give in the next few slides, following [Afonso et al., 2010].
Variable splitting and Augmented Lagrangian Method
A very old idea (going back to at least [Courant, 1943]): represent min_x f₁(x) + f₂(Gx) as
min_x f₁(x) + f₂(z) subject to Gx = z
L_μ(x, z, λ) = f₁(x) + f₂(z) + λ^T(Gx − z) + (μ/2)‖Gx − z‖₂²
where the first three terms form the Lagrangian and the last term is the “augmentation”.
Basic augmented Lagrangian method (ALM), a.k.a. method of multipliers (MM):
(x(k), z(k)) = arg min_{(x,z)} L_μ(x, z, λ(k−1))
λ(k) = λ(k−1) + μ(Gx(k) − z(k))
Goes back to at least [Hestenes, 1969], [Powell, 1969].
After a simple “complete-the-squares” step, ALM/MM yields [Afonso et al., 2010]:
(x(k), z(k)) = arg min_{(x,z)} f₁(x) + f₂(z) + (μ/2)‖Gx − z − d(k−1)‖₂²
d(k) = d(k−1) − (Gx(k) − z(k))
ADMM as Variable splitting and ALM
Use variable splitting (VS) to represent min_x f₁(x) + f₂(Gx) as
min_x f₁(x) + f₂(z) subject to Gx = z
ALM/MM yields:
(x(k), z(k)) = arg min_{(x,z)} f₁(x) + f₂(z) + (μ/2)‖Gx − z − d(k−1)‖₂²   (P)
d(k) = d(k−1) − (Gx(k) − z(k))
Solving (P) with one step of NLBGS gives the “scaled” ADMM version [Boyd et al., 2011]:
x(k) = arg min_x f₁(x) + (μ/2)‖Gx − z(k−1) − d(k−1)‖₂²
z(k) = arg min_z f₂(z) + (μ/2)‖Gx(k) − z − d(k−1)‖₂²
d(k) = d(k−1) − (Gx(k) − z(k))
ADMM algorithm
ADMM algorithm for solving min_x f₁(x) + f₂(Gx)
Require: k = 1, μ > 0, z(0), d(0)
1: repeat
2:   x(k) = arg min_x f₁(x) + (μ/2)‖Gx − z(k−1) − d(k−1)‖₂²
3:   z(k) = arg min_z f₂(z) + (μ/2)‖Gx(k) − z − d(k−1)‖₂²
4:   d(k) = d(k−1) − (Gx(k) − z(k))
5:   k = k + 1
6: until stopping criterion is satisfied
Equivalent to the split-Bregman method [Goldstein and Osher, 2009].
Connections with Douglas-Rachford splitting [Setzer, 2009].
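As a concrete instance, take f₁(x) = (1/2)‖Ax − y‖₂², f₂ = τ‖·‖₁ and G = I; every ADMM step then has a closed form (a quadratic solve, a soft threshold and a multiplier update). A sketch under these assumptions, with a synthetic test problem:

```python
import numpy as np

def admm_l1(A, y, tau, mu=1.0, n_iter=1000):
    """Scaled ADMM for min_x 0.5*||Ax - y||_2^2 + tau*||x||_1 (G = I)."""
    n = A.shape[1]
    Q = np.linalg.inv(A.T @ A + mu * np.eye(n))   # factor once, reuse every iteration
    Aty = A.T @ y
    x = np.zeros(n); z = np.zeros(n); d = np.zeros(n)
    for _ in range(n_iter):
        x = Q @ (Aty + mu * (z + d))              # quadratic x-update
        w = x - d
        z = np.sign(w) * np.maximum(np.abs(w) - tau / mu, 0.0)  # prox of (tau/mu)||.||_1
        d = d - (x - z)                           # scaled multiplier update
    return z

rng = np.random.default_rng(4)
A = rng.standard_normal((20, 40))
x0 = np.zeros(40)
x0[[5, 12, 33]] = [1.5, -1.0, 2.0]
y = A @ x0
x_hat = admm_l1(A, y, tau=0.1)
```

Unlike IST, the x-update here is an exact solve of the regularized normal equations, which is why ADMM often converges in far fewer iterations when that solve is cheap.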
ADMM algorithm for linear inverse problems
Instantiate ADMM for our linear inverse problem: min_x ‖Ax − y‖₂² + τφ(Px)
Require: k = 1, μ > 0, z(0), d(0)
1: repeat
2:   x(k) = arg min_x ‖Ax − y‖₂² + (μ/2)‖Px − z(k−1) − d(k−1)‖₂²
3:   z(k) = arg min_z τφ(z) + (μ/2)‖Px(k) − z − d(k−1)‖₂² = prox_{τφ/μ}(Px(k) − d(k−1))
4:   d(k) = d(k−1) − (Px(k) − z(k))
5:   k = k + 1
6: until stopping criterion is satisfied
A variant of ADMM algorithm for more than two functions
Consider min_{x∈R^n} Σ_{j=1}^J g_j(H_j x) and map it onto the previous form min_x f₁(x) + f₂(Gx), with z = Gx:
f₁(x) = 0,  f₂(z) = Σ_{j=1}^J g_j(z_j),  G = [H₁; … ; H_J] ∈ R^{p×n},  z(k) = [z₁(k); … ; z_J(k)],  d(k) = [d₁(k); … ; d_J(k)]
The resulting updates are:
x(k) = ( Σ_{j=1}^J H_j^T H_j )^{−1} Σ_{j=1}^J H_j^T (z_j(k−1) + d_j(k−1))
z_j(k) = prox_{g_j/μ}( H_j x(k) − d_j(k−1) ),  j = 1, …, J
d(k) = d(k−1) − (Gx(k) − z(k))
C-SALSA [Afonso et al., 2011]
Example: MRI reconstruction with shearlet regularization
A transversal slice of a FLAIR sequence, resampled along a non-Cartesian trajectory based on an Archimedean spiral (sampling rate 15%) [Aelterman et al., 2011].
Example: CT reconstruction with shearlet regularization
x̂ = arg min_x (1/2)‖C^{−1}(Ax − y)‖₂² + τ‖Px‖₁
Matrix C is a “prewhitener” for the acquisition system [Vandeghinste et al., 2013].
Example: CT reconstruction with shearlet regularization
Top left: reference; Top right: SIRT; Bottom left: ADMM with TV regularization; Bottom right: ADMM with shearlet regularization [Vandeghinste et al., 2013].
Modelling structured sparsity
There are two main approaches to modelling structured sparsity in image reconstruction:
in the acquisition stage
in the reconstruction stage
In the following, we focus only on the second approach.
For improved design of sampling patterns/sampling trajectories that exploit structured sparsity, see [Roman et al., 2015], [Adcock et al., 2017], [Gozcu et al., 2018].
Wavelet tree sparsity
[Jacob et al., 2009], [He and Carin, 2009], [Rao et al., 2011].
Application to MRI [Chen and Huang, 2014].
Sparse reconstruction with Markov Random Field priors
Use a Markov Random Field (MRF) as a statistical model for the spatial clustering of important wavelet coefficients [Cevher et al., 2010], [Pizurica et al., 2011].
Sparse reconstruction with Markov Random Field priors
Consider (slightly simplified): y = Ax + n, with y, n ∈ R^m, x ∈ R^n, s_i ∈ {0, 1}
P(s) = (1/Z) exp( Σ_i α s_i + Σ_{⟨i,j⟩} β s_i s_j )
P(s, x, y) = P(y|x) P(x|s) P(s)
[x̂, ŝ] = arg max_{x,s}  Σ_{⟨i,j⟩} β s_i s_j + Σ_i [α s_i + log p(x_i|s_i)] − (1/(2σ²))‖y − Ax‖₂²
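To see why such a prior favours clustered supports: with β > 0, the pairwise term rewards neighbouring coefficients being jointly significant. A small illustration on a 4-neighbour lattice, using the convention P(s) ∝ exp(α Σ s_i + β Σ s_i s_j) that matches the MAP expression above; the support maps and parameter values are hypothetical:

```python
import numpy as np

def ising_log_prior(s, alpha, beta):
    """Unnormalized log P(s) = alpha * sum_i s_i + beta * sum_<i,j> s_i s_j,
    with <i,j> the horizontal/vertical neighbour pairs of a 2-D lattice."""
    pairs = np.sum(s[:-1, :] * s[1:, :]) + np.sum(s[:, :-1] * s[:, 1:])
    return alpha * np.sum(s) + beta * pairs

clustered = np.zeros((8, 8), dtype=int)
clustered[2:5, 2:5] = 1                      # 9 significant coefficients in one block
scattered = np.zeros((8, 8), dtype=int)      # 9 isolated significant coefficients
scattered[[0, 2, 4, 6, 0, 2, 4, 6, 1], [0, 2, 4, 6, 4, 6, 0, 2, 7]] = 1
# Same sparsity level, but with beta > 0 the clustered support is far more probable.
```

A plain ℓ₀/ℓ₁ prior assigns both configurations the same probability; the MRF breaks that tie in favour of the spatially coherent one.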
Sparse MRI reconstruction with MRF priors
Consider: y = Ax + n, with y, n ∈ C^m, x ∈ C^n, x = Ψθ, θ ∈ C^d, s_i ∈ {0, 1}
Let Ω_s = {i : s_i = 1}. Define a model for θ that conforms to the support s:
M_s = {θ ∈ C^d : supp(θ) = Ω_s}
Our objective is:
min_{x∈C^n} ‖Ax − y‖₂² subject to Px ∈ M_s
or equivalently:
min_{x∈C^n} ‖Ax − y‖₂² + ι_{Ω_s}(supp(Px))
where
ι_Q(q) = 0 if q ∈ Q, and +∞ otherwise
LaSB [Pizurica et al., 2011], GreeLa [Panic et al., 2016], LaSAL [Panic et al., 2017]
Example: CS-MRI with LaSB - early motivating results for using MRFs
SB (split-Bregman) and LaSB implemented with the same shearlet transform.
Example: CS-MRI with MRF priors
20% measurements, with variable-density random sampling
Left: zero fill (PSNR = 19.87 dB)
Middle: WaTMRI [Chen and Huang, 2014] (wavelet-tree; PSNR = 28.78 dB)
Right: LaSAL [Panic et al., 2017] (MRF-based; PSNR = 33.43 dB)
Example: CS-MRI with MRF priors
Reconstructions from 20% measurements, with radial sampling
Left: reconstructions; Right: error images; Top: LaSAL; Bottom: WaTMRI
Machine learning in image reconstruction
Covered in many recent workshops, special sessions and special issues of journals.
Three main directions have been proposed:
learned post-processing or learned denoisers;
learning a regularizer and using it in a classical variational regularization scheme;
learning the full reconstruction operator.
Learning fast approximations of sparse coding
Core idea: a time-unfolded version of an iterative reconstruction algorithm, such as IST, truncated to a fixed number of iterations.
Representatives: LISTA [Gregor and LeCun, 2010], [Moreau and Bruna, 2017]
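Schematically, the ISTA iteration can be written x(k+1) = soft(W_e y + S x(k), θ) with W_e = Aᵀ/γ and S = I − AᵀA/γ; it becomes a K-layer network once W_e, S and θ are treated as learnable parameters. The untrained forward pass below (a sketch with illustrative data, not the trained LISTA of the papers) therefore coincides with K ISTA iterations:

```python
import numpy as np

def soft(v, theta):
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def lista_forward(y, We, S, theta, n_layers):
    """Forward pass of unfolded ISTA: x <- soft(We @ y + S @ x, theta).
    In LISTA, (We, S, theta) are learned from data; here they are set to
    their ISTA values, so this is just ISTA truncated to n_layers steps."""
    x = soft(We @ y, theta)
    for _ in range(n_layers - 1):
        x = soft(We @ y + S @ x, theta)
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((20, 40))
A /= np.linalg.norm(A, axis=0)
gamma = np.linalg.norm(A, 2) ** 2          # step size from the Lipschitz constant
We = A.T / gamma
S = np.eye(40) - (A.T @ A) / gamma
x0 = np.zeros(40)
x0[0] = 2.0                                # 1-sparse test signal
y = A @ x0
x_hat = lista_forward(y, We, S, theta=0.05 / gamma, n_layers=10)
```

Training replaces the analytic (W_e, S, θ) with learned values, which is how LISTA reaches comparable accuracy in far fewer layers than ISTA needs iterations.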
Deep CNN models in image reconstruction
A central question is whether one can combine elements of model-driven and data-driven approaches for solving ill-posed inverse problems.
[McCann et al., 2017]
Summary
Sparse optimization is a fundamental concept in inverse problems such as image reconstruction.
This tutorial covered some basic components of sparse image recovery algorithms, including ADMM-based methods.
The concept of structured sparsity was highlighted, with particular attention to Markov Random Field priors in sparse image recovery.
A new frontier: machine learning in image reconstruction, with great potential and a huge variety of approaches.
Adcock, B., Hansen, A., Poon, C., and Roman, B. (2017).
Breaking the coherence barrier: A new theory for compressed sensing.
Forum of Mathematics, Sigma, 5(e4).
Aelterman, J., Luong, H. Q., Goossens, B., Pizurica, A., and Philips, W. (2011).
Augmented Lagrangian based reconstruction of non-uniformly sub-Nyquist
sampled MRI data.
Signal Processing, 91(12):2731–2742.
Afonso, M. V., Bioucas-Dias, J. M., and Figueiredo, M. A. (2010).
Fast image recovery using variable splitting and constrained optimization.
IEEE Transactions on Image Processing, 19(9):2345–2356.
Afonso, M. V., Bioucas-Dias, J. M., and Figueiredo, M. A. T. (2011).
An augmented Lagrangian approach to the constrained optimization formulation
of imaging inverse problems.
IEEE Transactions on Image Processing, 20(3):681–695.
Beck, A. and Teboulle, M. (2009).
A fast iterative shrinkage-thresholding algorithm for linear inverse problems.
SIAM Journal on Imaging Sciences, 2(1):183–202.
Bioucas-Dias, J. M. and Figueiredo, M. A. (2007).
A new twist: two-step iterative shrinkage/thresholding algorithms for image
restoration.
IEEE Transactions on Image Processing, 16(12):2992–3004.
Blumensath, T. and Davies, M. (2009).
Sampling theorems for signals from the union of finite-dimensional linear
subspaces.
IEEE Transactions on Information Theory, 55(4):1872–1882.
Boyd, S., Parikh, N., Chu, E., Peleato, B., and Eckstein, J. (2011).
Distributed optimization and statistical learning via the alternating direction
method of multipliers.
Foundations and Trends in Machine Learning, 3(1):1–122.
Candes, E. J., Romberg, J., and Tao, T. (2006).
Robust uncertainty principles: Exact signal reconstruction from highly incomplete
frequency information.
IEEE Transactions on Information Theory, 52(2):489–509.
Cevher, V., Indyk, P., Carin, L., and Baraniuk, R. G. (2010).
Sparse signal recovery and acquisition with graphical models.
IEEE Signal Processing Magazine, 27(6):92–103.
Chambolle, A. (2004).
An algorithm for total variation minimization and applications.
Journal of Mathematical Imaging and Vision, 20(1-2):89–97.
Chen, C. and Huang, J. (2014).
Exploiting the wavelet structure in compressed sensing MRI.
Magnetic Resonance Imaging, 32(10):1377–1389.
Chen, S., Donoho, D., and Saunders, M. (2001).
Atomic decomposition by basis pursuit.
SIAM Review, 43(1):129–159.
Combettes, P. L. and Wajs, V. R. (2005).
Signal recovery by proximal forward-backward splitting.
Multiscale Modeling & Simulation, 4(4):1168–1200.
Courant, R. (1943).
Variational methods for the solution of problems of equilibrium and vibrations.
Bulletin of the American Mathematical Society, 49(1):1–23.
Daubechies, I., Defrise, M., and De Mol, C. (2004).
An iterative thresholding algorithm for linear inverse problems with a sparsity
constraint.
Communications on Pure and Applied Mathematics, 57(11):1413–1457.
Donoho, D. L. (2006).
Compressed sensing.
IEEE Transactions on Information Theory, 52(4):1289–1306.
Figueiredo, M. and Nowak, R. (2003).
An EM algorithm for wavelet-based image restoration.
IEEE Transactions on Image Processing, 12(8):906–916.
Figueiredo, M. and Wright, S. (2013).
Applications of sparse optimization.
Goldstein, T. and Osher, S. (2009).
The split Bregman method for `1-regularized problems.
SIAM Journal on Imaging Sciences, 2(2):323–343.
Gozcu, B. et al. (2018).
Learning-based compressive MRI.
IEEE Transactions on Medical Imaging, 37(6).
Gregor, K. and LeCun, Y. (2010).
Learning fast approximations of sparse coding.
In Proceedings of the 27th International Conference on Machine Learning.
He, L. and Carin, L. (2009).
Exploiting structure in wavelet-based Bayesian compressive sensing.
IEEE Transactions on Signal Processing, 57(9):3488–3497.
Hestenes, M. R. (1969).
Multiplier and gradient methods.
Journal of Optimization Theory and Applications, 4(5):303–320.
Jacob, L., Obozinski, G., and Vert, J.-P. (2009).
Group Lasso with overlap and graph Lasso.
In Proceedings of the 26th International Conference on Machine Learning.
Lustig, M., Donoho, D., and Pauly, J. M. (2007).
Sparse MRI: The application of compressed sensing for rapid MR imaging.
Magnetic Resonance in Medicine, 58(6):1182–1195.
Mallat, S. and Zhang, Z. (1993).
Matching pursuits with time-frequency dictionaries.
IEEE Transactions on Signal Processing, 41(12):3397–3415.
McCann, M., Jin, K., and Unser, M. (2017).
Convolutional neural networks for inverse problems in imaging.
IEEE Signal Processing Magazine, 34(6):85–95.
Moreau, T. and Bruna, J. (2017).
Understanding trainable sparse coding via matrix factorization.
In Proceedings ICLR.
Needell, D. and Tropp, J. A. (2009).
CoSaMP: Iterative signal recovery from incomplete and inaccurate samples.
Applied and Computational Harmonic Analysis, 26(3):301–321.
Panic, M., Aelterman, J., Crnojevic, V., and Pizurica, A. (2017).
Sparse recovery in magnetic resonance imaging with a Markov random field prior.
IEEE Transactions on Medical Imaging, 36(10):2104–2115.
Panic, M., Vukobratovic, D., Crnojevic, V., and Pizurica, A. (2016).
Sparse MRI with a Markov random field prior for the subband coefficients.
In Proceedings of the third “international Traveling Workshop on Interactions between Sparse models and Technology” (iTWIST’16), pages 56–58.
Pizurica, A., Aelterman, J., Bai, F., Vanloocke, S., Luong, Q., Goossens, B., and
Philips, W. (2011).
On structured sparsity and selected applications in tomographic imaging.
In SPIE Conference on Wavelets and Sparsity XIV, volume 8138, pages
81381D–1–12.
Powell, M. (1969).
A method for nonlinear constraints in minimization problems.
In Fletcher, R., editor, Optimization, pages 283–298. Academic Press, New York.
Rao, N., Nowak, R., Wright, S., and Kingsbury, N. (2011).
Convex approaches to model wavelet sparsity patterns.
In IEEE international Conference on Image Processing.
Roman, B., Adcock, B., and Hansen, A. (2015).
On asymptotic structure in compressed sensing.
In Proceedings of the third “international Traveling Workshop on Interactions between Sparse models and Technology” (iTWIST’16).
Setzer, S. (2009).
Split Bregman algorithm, Douglas-Rachford splitting and frame shrinkage.
In International Conference on Scale Space and Variational Methods in Computer
Vision, pages 464–476. Springer.
Tibshirani, R. (1996).
Regression shrinkage and selection via the Lasso.
Journal of the Royal Statistical Society, Series B, 58(1):267–288.
Tropp, J. (2004).
Greed is good: Algorithmic results for sparse approximation.
IEEE Transactions on Information Theory, 50(10):2231–2242.
Vandeghinste, B., Goossens, B., Van Holen, R., Vanhove, C., Pizurica, A.,
Vandenberghe, S., and Staelens, S. (2013).
Iterative CT reconstruction using shearlet-based regularization.
IEEE Transactions on Nuclear Science, 60(5):3305–3317.
Wright, S. J., Nowak, R. D., and Figueiredo, M. A. (2009).
Sparse reconstruction by separable approximation.
IEEE Transactions on Signal Processing, 57(7):2479–2493.