Date posted: 17-May-2015
Category: Technology
Uploaded by: manchor-ko
Orthogonal Matching Pursuit and K-SVD for Sparse Encoding
Manny Ko, Senior Software Engineer, Imagination Technologies
Robin Green, SSDE, Microsoft Xbox ATG
Outline
• Signal Representation
• Orthonormal Bases vs. Frames
• Dictionaries
• The Sparse Signal Model
• Matching Pursuit
• Implementing Orthogonal Matching Pursuit
• Learned Dictionaries and K-SVD
• Image Processing with Learned Dictionaries
• OMP for GPUs
Representing Signals
• We represent most signals as linear combinations of things we already know, called a projection

[Figure: a signal expressed as basis₀ × α₀ + basis₁ × α₁ + basis₂ × α₂ + basis₃ × α₃ + ⋯]
Representing Signals
• Each function we use is a basis function and the scalar weights αᵢ are coefficients
• The reconstruction x̂ is an approximation to the original x
• We can measure and control the error ‖x − x̂‖₂

x(t) = Σᵢ₌₀ᴺ φᵢ(t) αᵢ
Orthonormal Bases (ONBs)
• The simplest way to represent signals is using a set of orthonormal basis functions

∫₋∞⁺∞ φᵢ(t) φⱼ(t) dt = 0 if i ≠ j, 1 if i = j
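As a quick numerical sketch (NumPy code added for illustration, not part of the original deck), the orthonormality condition can be checked for a discrete basis such as the DCT-II:

```python
import numpy as np

def dct_basis(N):
    # Rows are the orthonormal DCT-II basis functions phi_k sampled at N points.
    n = np.arange(N)
    C = np.cos(np.pi * (n + 0.5)[None, :] * n[:, None] / N) * np.sqrt(2.0 / N)
    C[0] /= np.sqrt(2.0)  # the DC row needs extra scaling to have unit norm
    return C

Phi = dct_basis(8)
# Discrete analogue of the integral condition: <phi_i, phi_j> = delta_ij
assert np.allclose(Phi @ Phi.T, np.eye(8))
```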
Example ONBs
• Fourier basis
  φₖ(t) = e^(i2πkt)
• Wavelets
  φⱼ,ₖ(t) = a^(−j/2) ψ(a^(−j)t − kb)
• Gabor functions
  φₖ,ₙ(t) = w(t − bn) e^(i2πkt)
• Contourlets
  φⱼ,ₖ,ₙ(t) = λⱼ,ₖ(t − 2^(j−1)n)
Benefits of ONB
• Analytic formulations
• Well-understood mathematical properties
• Fast algorithms for projection
Limitations
• Orthonormal bases are optimal only for specific synthetic signals; if your signal looks exactly like your basis, you only need one coefficient
• Limited expressiveness: all signals behave the same
• Real-world signals often take a lot of coefficients, and truncating the series leads to artifacts like aliasing
Smooth vs. Sharp
Haar Wavelet Basis
• Sharp edges
• Local support

Discrete Cosine Transform
• Smooth signals
• Global support
Overcomplete Bases
• Frames are overcomplete bases; there is now more than one way to represent a signal
• By relaxing the ONB rules on minimal span, we can better approximate signals using more coefficients

Φ = [φ₁ | φ₂ | φ₃] = [ 1 0 1 ; 0 1 −1 ]
Φ̃ = [φ̃₁ | φ̃₂ | φ̃₃] = [ 2 −1 −1 ; 0 1 0 ]
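To make the redundancy concrete, here is a small NumPy sketch (my addition, not from the deck) showing two different coefficient vectors that reconstruct the same signal exactly under a 2×3 frame:

```python
import numpy as np

# A 2x3 frame: three atoms spanning a 2-D space, so representations are not unique.
Phi = np.array([[1., 0.,  1.],
                [0., 1., -1.]])
x = np.array([1., 1.])

a1 = np.array([1., 1., 0.])   # use the first two atoms
a2 = np.array([0., 2., 1.])   # a different mix of atoms
assert np.allclose(Phi @ a1, x)
assert np.allclose(Phi @ a2, x)
```

This non-uniqueness is exactly what sparse coding exploits: among the many valid representations, we look for one with few nonzeros.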
Dictionaries
• A dictionary is an overcomplete basis made of atoms dᵢ
• A signal is represented using a linear combination of only a few atoms
• Atoms work best when zero-mean and normalized

Σᵢ dᵢ αᵢ = x, i.e. Dα = x
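A minimal sketch (my code, assuming the dictionary stores one atom per column) of the zero-mean, unit-norm conditioning the slide recommends:

```python
import numpy as np

def normalize_atoms(D):
    # Center each atom (column) to zero mean, then scale it to unit L2 norm.
    D = D - D.mean(axis=0, keepdims=True)
    return D / np.maximum(np.linalg.norm(D, axis=0), 1e-12)

D = normalize_atoms(np.random.default_rng(0).normal(size=(64, 256)))
assert np.allclose(np.linalg.norm(D, axis=0), 1.0)
assert np.allclose(D.mean(axis=0), 0.0)
```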
Dictionaries
[Figure: the matrix-vector product D α = x]
Mixed Dictionaries
• A dictionary of Haar + DCT gives the best of both worlds. But now, how do we pick which coefficients to use?
The Sparse Signal Model
[Figure: D α = x, where D is a fixed N × K dictionary, α a sparse vector of coefficients, and x the resulting signal]
The Sparse Signal Model
It's Simple: every result is built from a combination of a few atoms.
It's Rich: it's a general model; signals are a union of many low-dimensional parts.
It's Used Everywhere: the same model has been used for years in wavelets, JPEG compression, anything where we've been throwing away coefficients.
Solving for Sparsity
What is the minimum number of coefficients we can use?
1. Sparsity Constrained: keep adding atoms until we reach a maximum count K

α̂ = argmin_α ‖Dα − x‖₂²  s.t.  ‖α‖₀ ≤ K

2. Error Constrained: keep adding atoms until we reach a certain accuracy ε

α̂ = argmin_α ‖α‖₀  s.t.  ‖Dα − x‖₂² ≤ ε
NaΓ―ve Sparse Methods
• We could find α directly using exhaustive Least Squares over every possible support
• Given K = 1000 atoms and L = 10 coefficients, at one LS solve per nanosecond this would complete in ~8 million years

1. set L = 1
2. generate the set Ω of all possible supports of size L
3. for each support ω ∈ Ω, solve the Least Squares problem min_α ‖Dα − x‖₂² where supp(α) ⊆ ω
4. if the LS error ≤ ε, finish!
5. set L = L + 1
6. goto 2
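The "~8 million years" figure checks out; a quick sketch of the arithmetic (Python, added for verification):

```python
from math import comb

# Number of size-10 supports among 1000 atoms, at one LS solve per nanosecond.
n_supports = comb(1000, 10)
years = n_supports * 1e-9 / (3600 * 24 * 365)
print(f"{years:.2e} years")  # on the order of 8 million years
```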
Matching Pursuit
1. Set the residual r = x
2. Find the unselected atom that best matches the residual, k̂ = argmax_k |d_kᵀ r|, and set its coefficient α_k̂ = d_k̂ᵀ r
3. Recalculate the residual from the matched atoms: r = x − Dα
4. Repeat until ‖r‖ ≤ ε
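The loop above can be sketched directly in NumPy (my code, not the speakers'; it assumes unit-norm atoms in the columns of D):

```python
import numpy as np

def matching_pursuit(D, x, eps=1e-6, max_iter=100):
    # Greedy MP: repeatedly match the residual against single atoms.
    alpha = np.zeros(D.shape[1])
    r = x.astype(float).copy()
    for _ in range(max_iter):
        if np.linalg.norm(r) <= eps:
            break
        corr = D.T @ r
        k = int(np.argmax(np.abs(corr)))  # atom that best matches the residual
        alpha[k] += corr[k]               # accumulate its coefficient
        r = x - D @ alpha                 # recompute the residual
    return alpha
```

With an orthonormal D this converges in one pass per active atom; with correlated atoms it can revisit the same directions repeatedly, which is exactly the weakness the deck discusses next.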
Greedy Methods
[Figure: a few selected atoms of D combining to form x]
Problems with Matching Pursuit (MP)
β If the dictionary contains atoms that are very similar, they tend to match the residual over and over
β Similar atoms do not help the basis span the space of representable values quickly, wasting coefficients in a sparsity constrained solution
β Similar atoms may match strongly but will not have a large effect in reducing the absolute error in an error constrained solution
Orthogonal Matching Pursuit (OMP)
• Add an orthogonal projection to the residual calculation

1. set I := ∅, r ← x, γ ← 0
2. while (stopping test false) do
3.   k̂ ← argmax_k |d_kᵀ r|
4.   I ← (I, k̂)
5.   γ_I ← (D_I)⁺ x
6.   r ← x − D_I γ_I
7. end while
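A compact NumPy sketch of the loop (my code, not the speakers'; np.linalg.lstsq stands in for the pseudoinverse in step 5):

```python
import numpy as np

def omp(D, x, K):
    # Orthogonal Matching Pursuit, sparsity-constrained to K atoms.
    I, r, gamma = [], x.astype(float).copy(), np.zeros(0)
    for _ in range(K):
        k = int(np.argmax(np.abs(D.T @ r)))  # best-matching atom
        if k not in I:
            I.append(k)
        # Orthogonal projection: re-fit ALL selected coefficients at once.
        gamma, *_ = np.linalg.lstsq(D[:, I], x, rcond=None)
        r = x - D[:, I] @ gamma
    alpha = np.zeros(D.shape[1])
    alpha[I] = gamma
    return alpha
```

The re-fit in the projection step is what distinguishes OMP from plain MP: the residual becomes orthogonal to every selected atom, so none of them can be matched again.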
Uniqueness and Stability
• OMP has guaranteed reconstruction (provided the dictionary is overcomplete)
• By projecting the input into the range-space of the atoms, we know that the residual will be orthogonal to the selected atoms
• Unlike in Matching Pursuit (MP), that atom, and all similar ones, will not be reselected, so more of the space is spanned per iteration
Orthogonal Projection
• If the dictionary D were square, we could use an inverse
• Instead we use the pseudoinverse D⁺ = (DᵀD)⁻¹Dᵀ

[Figure: computing D⁺x via (DᵀD)⁻¹Dᵀ]
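A quick numerical check (my addition) that the normal-equations formula matches NumPy's SVD-based pseudoinverse when the selected atoms have full column rank:

```python
import numpy as np

rng = np.random.default_rng(1)
D_I = rng.normal(size=(8, 3))   # selected atoms: tall matrix, full column rank
pinv_normal_eq = np.linalg.inv(D_I.T @ D_I) @ D_I.T
assert np.allclose(pinv_normal_eq, np.linalg.pinv(D_I))
```

The two agree on this well-conditioned example; the explicit inverse becomes the fragile route when atoms are nearly collinear, which is the point of the next slide.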
Pseudoinverse is Fragile
• In floating point, the expression (DᵀD)⁻¹ is notoriously numerically troublesome; it is a classic example of floating-point instability
• Picture mapping all the points on a sphere using DᵀD, then inverting
Implementing the Pseudoinverse
• To avoid this, and to reduce the cost of inversion, note that DᵀD is always symmetric and positive definite
• We can break the matrix into two triangular matrices using the Cholesky decomposition A = LLᵀ
• An incremental Cholesky decomposition reuses the result of the previous iteration, adding a single new row and column each time:

L_new = [ L 0 ; wᵀ √(1 − wᵀw) ]  where  L w = D_Iᵀ d_k̂
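A sketch of the incremental update (my code; it works on the Gram matrix G = DᵀD, where G[k, k] = 1 for unit-norm atoms, matching the √(1 − wᵀw) on the slide):

```python
import numpy as np

def chol_append(L, G, I, k):
    # Extend the Cholesky factor of G[I, I] to also cover the new index k.
    w = np.linalg.solve(L, G[np.ix_(I, [k])]).ravel()
    n = L.shape[0]
    L_new = np.zeros((n + 1, n + 1))
    L_new[:n, :n] = L
    L_new[n, :n] = w
    L_new[n, n] = np.sqrt(G[k, k] - w @ w)  # equals sqrt(1 - w.w) for unit atoms
    return L_new
```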
OMP-Cholesky

1. set I := ∅, L ← [1], r ← x, γ ← 0, α ← Dᵀx, n ← 1
2. while (stopping test false) do
3.   k̂ ← argmax_k |d_kᵀ r|
4.   if n > 1 then
       solve L w = D_Iᵀ d_k̂ for w
       L ← [ L 0 ; wᵀ √(1 − wᵀw) ]
5.   I ← (I, k̂)
6.   solve L Lᵀ γ_I = α_I for γ_I
7.   r ← x − D_I γ_I
8.   n ← n + 1
9. end while
OMP compression of Barbara
[Images: OMP reconstructions using 2, 3, and 4 atoms]
Batch OMP (BOMP)
• By pre-computing matrices, Batch OMP can speed up OMP on large numbers (>1000) of inputs against one dictionary
• To avoid computing Dᵀr at each iteration, precompute Dᵀx and the Gram matrix G = DᵀD:

Dᵀr = Dᵀ(x − D_I (D_I)⁺ x)
    = Dᵀx − DᵀD_I (D_Iᵀ D_I)⁻¹ D_Iᵀ x
    = Dᵀx − G_I (G_I,I)⁻¹ (Dᵀx)_I
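The last line is the key identity; a numerical spot-check (my code, with a hypothetical selected support I):

```python
import numpy as np

rng = np.random.default_rng(2)
D = rng.normal(size=(12, 30))
D /= np.linalg.norm(D, axis=0)          # unit-norm atoms
x = rng.normal(size=12)
I = [3, 7, 19]                          # hypothetical selected support

G = D.T @ D                             # precomputed once per dictionary
p = D.T @ x                             # precomputed once per signal

r = x - D[:, I] @ (np.linalg.pinv(D[:, I]) @ x)
lhs = D.T @ r                           # the per-iteration quantity OMP needs
rhs = p - G[:, I] @ np.linalg.solve(G[np.ix_(I, I)], p[I])
assert np.allclose(lhs, rhs)            # no fresh D^T r product required
```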
Learned Dictionaries and K-SVD
β OMP works well for a fixed dictionary, but it would work better if we could optimize the dictionary to fit the data
Sourcing Enough Data
• For training you will need a large number of samples compared to the size of the dictionary
• Take blocks from all integer offsets on the pixel grid
1. Sparse Encode
• Sparse-encode all entries in X; collect these sparse vectors into an array A

[Figure: the training signals X and their sparse coefficient vectors collected into A]
2. Dictionary Update
• Find all x that use the atom in column d_k

[Figure: the columns of A that reference atom d_k]
2. Dictionary Update
• Find all x that use the atom in column d_k
• Calculate the error without d_k: E = X − Σ_{j≠k} d_j g_jᵀ
• Solve the LS problem:

d, g = argmin_{d,g} ‖E − d gᵀ‖_F²  s.t.  ‖d‖₂ = 1

• Update d_k with the new d and g_k with the new g
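The constrained LS problem is solved by a rank-1 SVD truncation (Eckart–Young); a minimal sketch (my code):

```python
import numpy as np

def atom_update(E):
    # Best rank-1 fit E ~ d g^T with ||d||_2 = 1:
    # d is the leading left singular vector, g the scaled right one.
    U, s, Vt = np.linalg.svd(E, full_matrices=False)
    return U[:, 0], s[0] * Vt[0]
```

This one-atom-at-a-time SVD update is where K-SVD gets its name.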
Atoms after K-SVD Update
How many iterations of update?

[Plot: total reconstruction error (200,000–600,000) vs. number of update iterations (0–70), comparing Batch OMP and K-SVD]
Sparse Image Compression
• As we have seen, we can control the number of atoms used per block
• We can also specify the exact size of the dictionary and optimize it for each data source
• The resulting coefficient stream can be coded using an entropy coder like Huffman or arithmetic coding
Domain Specific Compression
• Using just 550 bytes per image
1. Original
2. JPEG
3. JPEG2000
4. PCA
5. KSVD per block
Sparse Denoising
• Uniform noise is incompressible and OMP will reject it
• K-SVD can train a denoising dictionary from noisy image blocks

[Images: source, noisy image, and denoised result at 30.829 dB]
Sparse Inpainting
• Missing values in x mean missing rows in D
• Remove these rows and refit α to recover x; if α was sparse enough, the recovery will be perfect
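A toy NumPy sketch of the row-deletion idea (my code; for simplicity it assumes the sparse support is already known, whereas in practice OMP would find it):

```python
import numpy as np

rng = np.random.default_rng(3)
D = rng.normal(size=(16, 40))
D /= np.linalg.norm(D, axis=0)

support = [5, 12]                                # known 2-sparse support (assumed)
x_full = D[:, support] @ np.array([2.0, -1.0])   # the complete signal

keep = np.arange(16) != 4                        # pretend sample 4 is missing
alpha, *_ = np.linalg.lstsq(D[keep][:, support], x_full[keep], rcond=None)
x_rec = D[:, support] @ alpha                    # reconstruct ALL samples
assert np.allclose(x_rec, x_full)                # the missing value is recovered
```

Because the sparse code is overdetermined by the remaining rows, the deleted sample falls out of the reconstruction for free.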
Sparse Inpainting
Original 80% missing Result
Super Resolution
Super Resolution
The Original Bicubic Interpolation SR result
Block compression of Voxel grids
β βA Compression Domain output-sensitive volume rendering architecture based on sparse representation of voxel blocksβ Gobbetti, Guitian and Marton [2012]
β COVRA sparsely represents each voxel block as a dictionary of 8x8x8 blocks and three coefficients
β The voxel patch is reconstructed only inside the GPU shader so voxels are decompressed just-in-time
β Huge bandwidth improvements, larger models and faster rendering
Thank you to:
• Ron Rubinstein & Michael Elad
• Marc LeBrun
• Enrico Gobbetti