# Computational Photography and Image Manipulation CMPT 469 / 985, Fall 2019

Computational Photography and Image Manipulation
CMPT 469 / 985, Fall 2019

Week 12

Deconvolution and Noise

Slide credits

• Slides thanks to Ioannis Gkioulekas along with his acknowledgements.


Why are our images blurry?

• Lens imperfections.

• Camera shake.

• Scene motion.

• Depth defocus.

Lens imperfections

object distance D focus distance D’

• Ideal lens: A point maps to a point at a certain plane.

Lens imperfections

object distance D focus distance D’

• Ideal lens: A point maps to a point at a certain plane.
• Real lens: A point maps to a circle that has non-zero minimum radius among all planes.

Lens imperfections

object distance D focus distance D’

• Ideal lens: A point maps to a point at a certain plane.
• Real lens: A point maps to a circle that has non-zero minimum radius among all planes.

Shift-invariant blur.

blur kernel

Lens imperfections

What causes lens imperfections?

• Aberrations.

• Diffraction.

large aperture

small aperture

(Important note: Oblique aberrations like coma and distortion are not shift-invariant blur and we do not consider them here!)

Lens as an optical low-pass filter

object distance D focus distance D’

Point spread function (PSF): The blur kernel of a lens.
• “Diffraction-limited” PSF: No aberrations, only diffraction. Determined by aperture shape.

diffraction-limited PSF of a circular aperture (the blur kernel)

Lens as an optical low-pass filter

object distance D focus distance D’

Point spread function (PSF): The blur kernel of a lens.
• “Diffraction-limited” PSF: No aberrations, only diffraction. Determined by aperture shape.

Optical transfer function (OTF): The Fourier transform of the PSF. Equal to aperture shape.

diffraction-limited PSF of a circular aperture (the blur kernel); diffraction-limited OTF of a circular aperture

Lens as an optical low-pass filter

x ∗ c = b, where x is the image from a perfect lens, c is the imperfect lens PSF, and b is the image from the imperfect lens.

Lens as an optical low-pass filter

x ∗ c = b, where x is the image from a perfect lens, c is the imperfect lens PSF, and b is the image from the imperfect lens.

If we know c and b, can we recover x?

Deconvolution

x * c = b

Reminder: convolution is multiplication in the Fourier domain:

F(x) ⋅ F(c) = F(b)

Deconvolution is division in the Fourier domain:

F(xest) = F(b) / F(c)

After division, just do an inverse Fourier transform:

xest = F⁻¹( F(b) / F(c) )

Deconvolution

• The measured signal includes noise: b = c ∗ x + n (n is the noise term).

• The OTF (the Fourier transform of the PSF) is a low-pass filter, with zeros at high frequencies.

• When we divide by those (near-)zeros, we amplify the high-frequency noise.

Naïve deconvolution

b ∗ c⁻¹ = xest

Even tiny noise can make the results awful.
• Example for Gaussian noise of σ = 0.05.
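The noise amplification can be made concrete with a small 1-D sketch (the signal, kernel, and perturbation values below are hypothetical, chosen only for illustration): dividing spectra recovers the signal exactly without noise, but a tiny perturbation at the frequency where the kernel's spectrum is smallest gets amplified enormously.

```python
import numpy as np

# Toy 1-D signal and a 9-tap Gaussian-like blur kernel (hypothetical values).
n = 64
x = np.zeros(n)
x[20:28] = 1.0
c = np.exp(-np.arange(-4, 5) ** 2 / 4.5)
c /= c.sum()

# Blur via FFT: b = c * x (circular convolution).
C = np.fft.fft(c, n)
b = np.real(np.fft.ifft(np.fft.fft(x) * C))

# Naive deconvolution: divide spectra. Exact when there is no noise.
x_clean = np.real(np.fft.ifft(np.fft.fft(b) / C))

# A tiny Nyquist-frequency perturbation (amplitude 0.05) lands exactly
# where |C| is smallest, so it is amplified by roughly 1/|C(Nyquist)|.
b_noisy = b + 0.05 * np.cos(np.pi * np.arange(n))
x_noisy = np.real(np.fft.ifft(np.fft.fft(b_noisy) / C))

print(np.abs(x_clean - x).max())   # tiny: exact recovery up to rounding
print(np.abs(x_noisy - x).max())   # large: the 0.05 perturbation dominates
```

The deterministic cosine stands in for the worst-case component of white noise; random noise behaves the same way on average.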

Wiener Deconvolution

Apply the inverse kernel, with a noise-dependent damping factor so we do not divide by zero:

xest = F⁻¹( |F(c)|² / (|F(c)|² + 1/SNR(ω)) ⋅ F(b) / F(c) )

• Derived as the solution to a maximum-likelihood problem under a Gaussian noise assumption.
• Requires an estimate of the signal-to-noise ratio at each frequency:

SNR(ω) = (signal variance at ω) / (noise variance at ω)

Wiener Deconvolution

Apply the inverse kernel, with a noise-dependent damping factor so we do not divide by zero:

xest = F⁻¹( |F(c)|² / (|F(c)|² + 1/SNR(ω)) ⋅ F(b) / F(c) )

Intuitively:
• When SNR is high (low or no noise), just divide by the kernel.
• When SNR is low (high noise), just set to zero.
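A minimal sketch of the filter above, reusing the toy setup from the naive-deconvolution example (a constant 1/SNR and all numeric values are assumptions for illustration). Note that |F(c)|²/(|F(c)|² + 1/SNR) ⋅ 1/F(c) simplifies to conj(F(c))/(|F(c)|² + 1/SNR), which is what the code computes.

```python
import numpy as np

def wiener_deconv(b, c, inv_snr):
    """Wiener filter: F(x_est) = |C|^2/(|C|^2 + 1/SNR) * F(b)/F(c),
    implemented as conj(C)/(|C|^2 + 1/SNR) * F(b) (the same expression)."""
    C = np.fft.fft(c, b.size)
    B = np.fft.fft(b)
    H = np.conj(C) / (np.abs(C) ** 2 + inv_snr)
    return np.real(np.fft.ifft(H * B))

# Toy setup: blur a box signal, add a small Nyquist-frequency perturbation.
n = 64
x = np.zeros(n); x[20:28] = 1.0
c = np.exp(-np.arange(-4, 5) ** 2 / 4.5); c /= c.sum()
C = np.fft.fft(c, n)
b = np.real(np.fft.ifft(np.fft.fft(x) * C)) + 0.05 * np.cos(np.pi * np.arange(n))

x_naive = np.real(np.fft.ifft(np.fft.fft(b) / C))  # blows up at high frequencies
x_wiener = wiener_deconv(b, c, inv_snr=0.0025)     # constant 1/SNR (assumed)

print(np.abs(x_naive - x).max(), np.abs(x_wiener - x).max())
```

The damping term leaves low frequencies (where |C| is large) nearly untouched and suppresses the frequencies where division would explode.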

Deconvolution comparisons

naïve deconvolution Wiener deconvolution

Deconvolution comparisons

σ = 0.01 σ = 0.05 σ = 0.01

Derivation

Sensing model:

𝑏 = 𝑐 ∗ 𝑥 + 𝑛

Noise n is assumed to be zero-mean and independent of signal x.

Derivation

Sensing model:

𝑏 = 𝑐 ∗ 𝑥 + 𝑛

Noise n is assumed to be zero-mean and independent of signal x.

Fourier transform:

𝐵 = 𝐶 ⋅ 𝑋 + 𝑁

Derivation

Noise n is assumed to be zero-mean and independent of signal x.

Sensing model: 𝑏 = 𝑐 ∗ 𝑥 + 𝑛

Fourier transform (convolution becomes multiplication): 𝐵 = 𝐶 ⋅ 𝑋 + 𝑁

Problem statement: Find the function H(ω) that minimizes the expected error in the Fourier domain:

min𝐻 𝐸[ ‖𝑋 − 𝐻 ⋅ 𝐵‖² ]

Derivation

Replace B and re-arrange the loss:

min𝐻 𝐸[ ‖(1 − 𝐻𝐶) 𝑋 − 𝐻𝑁‖² ]

Expand the squares:

min𝐻 (1 − 𝐻𝐶)² 𝐸[‖𝑋‖²] − 2 (1 − 𝐻𝐶) 𝐻 𝐸[𝑋𝑁] + 𝐻² 𝐸[‖𝑁‖²]

Derivation

When handling the cross terms:
• Can I write the following?

𝐸[𝑋𝑁] = 𝐸[𝑋] ⋅ 𝐸[𝑁]

Derivation

When handling the cross terms:
• Can I write the following?

𝐸[𝑋𝑁] = 𝐸[𝑋] ⋅ 𝐸[𝑁]

Yes, because X and N are assumed independent.

• What is this expectation product equal to?

Derivation

When handling the cross terms:
• Can I write the following?

𝐸[𝑋𝑁] = 𝐸[𝑋] ⋅ 𝐸[𝑁]

Yes, because X and N are assumed independent.

• What is this expectation product equal to?

Zero, because N has zero mean.

Derivation

Replace B and re-arrange the loss:

min𝐻 𝐸[ ‖(1 − 𝐻𝐶) 𝑋 − 𝐻𝑁‖² ]

Expand the squares:

min𝐻 (1 − 𝐻𝐶)² 𝐸[‖𝑋‖²] − 2 (1 − 𝐻𝐶) 𝐻 𝐸[𝑋𝑁] + 𝐻² 𝐸[‖𝑁‖²]

Simplify (the cross-term is zero):

min𝐻 (1 − 𝐻𝐶)² 𝐸[‖𝑋‖²] + 𝐻² 𝐸[‖𝑁‖²]

How do we solve this optimization problem?

Derivation

Differentiate the loss with respect to H, set to zero, and solve for H:

∂loss/∂𝐻 = 0 ⇒ −2𝐶 (1 − 𝐻𝐶) 𝐸[‖𝑋‖²] + 2𝐻 𝐸[‖𝑁‖²] = 0

⇒ 𝐻 = 𝐶 𝐸[‖𝑋‖²] / ( 𝐶² 𝐸[‖𝑋‖²] + 𝐸[‖𝑁‖²] )

Divide both numerator and denominator by 𝐸[‖𝑋‖²], extract a factor of 1/C, and done!
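The closed form can be double-checked numerically for a single frequency (the per-frequency values C, E‖X‖², E‖N‖² below are arbitrary, picked only for this check): a brute-force grid minimizer of (1 − HC)² E‖X‖² + H² E‖N‖² should land on the same H.

```python
import numpy as np

# Arbitrary per-frequency values (hypothetical, for illustration only).
C, Sx, Sn = 0.7, 2.0, 0.3   # kernel value, signal power E|X|^2, noise power E|N|^2

# Brute-force minimize the per-frequency loss (1 - H C)^2 Sx + H^2 Sn over H.
H = np.linspace(-2.0, 2.0, 400001)
loss = (1.0 - H * C) ** 2 * Sx + H ** 2 * Sn
H_grid = H[np.argmin(loss)]

# Closed form from setting the derivative to zero.
H_closed = C * Sx / (C ** 2 * Sx + Sn)

print(H_grid, H_closed)   # both ≈ 1.09375
```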

Wiener Deconvolution

Apply the inverse kernel, with a noise-dependent damping factor so we do not divide by zero:

xest = F⁻¹( |F(c)|² / (|F(c)|² + 1/SNR(ω)) ⋅ F(b) / F(c) )

• Derived as the solution to a maximum-likelihood problem under a Gaussian noise assumption.
• Requires an estimate of the signal-to-noise ratio at each frequency:

SNR(ω) = (signal variance at ω) / (noise variance at ω)

Natural image and noise spectra

Natural images tend to have a spectrum that scales as 1/ω².
• This is a natural image statistic.

Natural image and noise spectra

Natural images tend to have a spectrum that scales as 1/ω².
• This is a natural image statistic.

Noise tends to have a flat spectrum, σ(ω) = constant.
• We call this white noise.

What is the SNR?

Natural image and noise spectra

Natural images tend to have a spectrum that scales as 1/ω².
• This is a natural image statistic.

Noise tends to have a flat spectrum, σ(ω) = constant.
• We call this white noise.

Therefore, we have that: SNR(ω) = 1/ω²

Wiener Deconvolution

Apply the inverse kernel, with a damping factor that now depends on frequency:

xest = F⁻¹( |F(c)|² / (|F(c)|² + ω²) ⋅ F(b) / F(c) )

• Derived as the solution to a maximum-likelihood problem under a Gaussian noise assumption.
• Uses the natural-image estimate of the signal-to-noise ratio at each frequency:

SNR(ω) = 1/ω²

Wiener Deconvolution

gradient regularization

For natural images and white noise, it can be re-written as the minimization problem:

minx ‖b – c ∗ x‖² + ‖∇x‖²

How can you prove this equivalence?
• Convert to the Fourier domain and repeat the proof for Wiener deconvolution.
• Intuitively: the ω² term in the denominator of the special Wiener filter is the squared magnitude of the Fourier transform of the gradient operator ∇, which is i⋅ω.

Deconvolution comparisons

blurry input | naive deconvolution | gradient regularization | original

Deconvolution comparisons

blurry input | naive deconvolution | gradient regularization | original

… and a proof-of-concept demonstration

noisy input | naive deconvolution | gradient regularization

Question

Can we undo lens blur by deconvolving a PNG or JPEG image without any preprocessing?

Question

Can we undo lens blur by deconvolving a PNG or JPEG image without any preprocessing?
• All the blur processes we discuss today happen optically (before capture by the sensor).
• The blur model is accurate only if our images are linear.

Are PNG or JPEG images linear?

Question

Can we undo lens blur by deconvolving a PNG or JPEG image without any preprocessing?
• All the blur processes we discuss today happen optically (before capture by the sensor).
• The blur model is accurate only if our images are linear.

Are PNG or JPEG images linear?
• No, because of gamma encoding.
• Before deblurring, you must linearize your images.
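A minimal linearization sketch, assuming a pure power-law gamma of 2.2 (real sRGB uses a piecewise curve, so treat this as an approximation): decode before deconvolving, re-encode for display afterwards.

```python
import numpy as np

GAMMA = 2.2  # assumed power-law; actual sRGB encoding is piecewise

def linearize(img):
    """Undo gamma encoding so pixel values are (approximately) proportional to light."""
    return np.clip(img, 0.0, 1.0) ** GAMMA

def apply_gamma(img):
    """Re-apply gamma encoding for display after processing in linear space."""
    return np.clip(img, 0.0, 1.0) ** (1.0 / GAMMA)

# Deblurring workflow: linearize -> deconvolve in linear space -> re-apply gamma.
v = np.linspace(0.0, 1.0, 11)
roundtrip = apply_gamma(linearize(v))
print(np.abs(roundtrip - v).max())  # round trip is lossless up to rounding
```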

The importance of linearity

blurry input deconvolution after linearization

deconvolution without linearization

original

Different gradient regularizations

• L2 gradient regularization (Tikhonov regularization, same as Wiener deconvolution):

minx ‖b – c ∗ x‖² + ‖∇x‖²

• L1 gradient regularization (sparsity regularization, same as total variation):

minx ‖b – c ∗ x‖² + ‖∇x‖1

• Lp, p < 1, gradient regularization (fractional regularization):

minx ‖b – c ∗ x‖² + ‖∇x‖0.8

All of these are motivated by natural image statistics. Active research area.

Comparison of gradient regularizations

input | squared gradient regularization | fractional gradient regularization

High quality images using cheap lenses

[Heide et al., “High-Quality Computational Imaging Through Simple Lenses,” TOG 2013]

Deconvolution

x ∗ c = b

If we know b and c, can we recover x?

How do we measure this?

PSF calibration

Take a photo of a point source

Image of PSF

Image with sharp lens | Image with cheap lens

Deconvolution

x ∗ c = b

If we know b and c, can we recover x?

Blind deconvolution

x ∗ c = b

If we know b, can we recover x and c?

Camera shake

Camera shake as a filter

x ∗ c = b, where x is the image from a static camera, c is the PSF from camera motion, and b is the image from a shaky camera.

If we know b, can we recover x and c?

Multiple possible solutions. How do we select the right one?

Use prior information

Among all the possible pairs of images and blur kernels, select the ones where:

• The image “looks like” a natural image.

• The kernel “looks like” a motion PSF.

Use prior information

Among all the possible pairs of images and blur kernels, select the ones where:

• The image “looks like” a natural image.

• The kernel “looks like” a motion PSF.

Shake kernel statistics

Gradients in natural images follow a characteristic “heavy-tail” distribution.

sharp natural image

blurry natural image

Can be approximated by ‖∇x‖0.8

Use prior information

Among all the possible pairs of images and blur kernels, select the ones where:

• The image “looks like” a natural image.

• The kernel “looks like” a motion PSF.

Gradients in natural images follow a characteristic “heavy-tail” distribution.

Shake kernels are very sparse, have continuous contours, and are always positive

How do we use this information for blind deconvolution?

Regularized blind deconvolution

Solve regularized least-squares optimization

minx,c ‖b – c ∗ x‖² + ‖∇x‖0.8 + ‖c‖1

What does each term in this summation correspond to?

Regularized blind deconvolution

Solve the regularized least-squares optimization:

minx,c ‖b – c ∗ x‖² + ‖∇x‖0.8 + ‖c‖1

data term | natural image prior | shake kernel prior

Note: Solving such optimization problems is complicated (no longer linear least squares).

A demonstration

input deconvolved image and kernel

This image looks worse than the original…

This doesn’t look like a plausible shake kernel…

Regularized blind deconvolution

Solve regularized least-squares optimization

minx,c ‖b – c ∗ x‖² + ‖∇x‖0.8 + ‖c‖1

loss function

Regularized blind deconvolution

Solve regularized least-squares optimization

minx,c ‖b – c ∗ x‖² + ‖∇x‖0.8 + ‖c‖1

[figure: loss function plotted against pixel intensity]

Where in this graph is the solution we find?

Regularized blind deconvolution

Solve regularized least-squares optimization

minx,c ‖b – c ∗ x‖² + ‖∇x‖0.8 + ‖c‖1

[figure: loss function plotted against pixel intensity, marking the optimal solution and many plausible solutions near it]

Rather than keep just the maximum, do a weighted average of all solutions.

A demonstration

input maximum-only

This image looks worse than the original…

average

More examples

Results on real shaky images

Results on real shaky images

Results on real shaky images

Results on real shaky images

More advanced motion deblurring

[Shah et al., High-quality Motion Deblurring from a Single Image, SIGGRAPH 2008]

Why are our images blurry?

• Lens imperfections.

• Camera shake.

• Scene motion.

• Depth defocus.

Can we solve all of these problems using (blind) deconvolution?

Why are our images blurry?

• Lens imperfections.

• Camera shake.

• Scene motion.

• Depth defocus.

Can we solve all of these problems using (blind) deconvolution?
• We can deal with (some) lens imperfections and camera shake, because their blur is shift invariant.
• We cannot deal with scene motion and depth defocus, because their blur is not shift invariant.

References

Basic reading:
• Szeliski textbook, Sections 3.4.3, 3.4.4, 10.1.4, 10.3.
• Fergus et al., “Removing camera shake from a single image,” SIGGRAPH 2006.
  The main motion deblurring and blind deconvolution paper we covered in this lecture.

Additional reading:
• Heide et al., “High-Quality Computational Imaging Through Simple Lenses,” TOG 2013.
  The paper on high-quality imaging using cheap lenses, which also has a great discussion of all matters relating to blurring from lens aberrations and modern deconvolution algorithms.

• Levin, “Blind Motion Deblurring Using Image Statistics,” NIPS 2006.
• Levin et al., “Image and depth from a conventional camera with a coded aperture,” SIGGRAPH 2007.
• Levin et al., “Understanding and evaluating blind deconvolution algorithms,” CVPR 2009 and PAMI 2011.
• Krishnan and Fergus, “Fast Image Deconvolution using Hyper-Laplacian Priors,” NIPS 2009.
• Levin et al., “Efficient Marginal Likelihood Optimization in Blind Deconvolution,” CVPR 2011.
  A sequence of papers developing the state of the art in blind deconvolution of natural images, including the use of Laplacian (sparsity) and hyper-Laplacian priors on gradients, analysis of different loss functions and maximum a-posteriori versus Bayesian estimates, the use of variational inference, and efficient optimization algorithms.

• Miskin and MacKay, “Ensemble Learning for Blind Image Separation and Deconvolution,” AICA 2000.
  The paper explaining the mathematics of how to compute Bayesian estimators using variational inference.

• Shah et al., “High-quality Motion Deblurring from a Single Image,” SIGGRAPH 2008.
  A more recent paper on motion deblurring.

Noise

Side-effects of increasing ISO

Image becomes very grainy because noise is amplified.
• Why does increasing ISO increase noise?

Sensor noise

A quick note

• We will only consider per-pixel noise.

• We will not consider cross-pixel noise effects (blooming, smearing, cross-talk, and so on).

Noise in images

Results in “grainy” appearance.

The (in-camera) image processing pipeline

• Noise is introduced in the green part.

analog front-end → RAW image (mosaiced, linear, 12-bit) → white balance → CFA demosaicing → denoising → color transforms → tone reproduction → compression → final RGB image (non-linear, 8-bit)

The noisy image formation process

What are the various parts?

The noisy image formation process

scene radiant flux Φ → sensor (exposure t, quantum efficiency α; dark current D) → analog voltage L → analog amplifier (gain g = k ⋅ ISO) → analog voltage G → analog-to-digital converter (ADC) → discrete signal I

The noisy image formation process

scene radiant flux Φ → sensor (exposure t, quantum efficiency α; dark current D) [introduces photon noise and dark noise] → analog voltage L → analog amplifier (gain g = k ⋅ ISO) [introduces read noise] → analog voltage G → analog-to-digital converter (ADC) [introduces ADC noise] → discrete signal I

• We will be ignoring saturation, but it can be modeled using a clipping operation.

[figure: simulated images with mean #photons/pixel of 0.001, 0.01, 0.1, 1, 10, 100]

Photon noise

A consequence of the discrete (quantum) nature of light.
• Photon detections are independent random events.
• Total number of detections is Poisson distributed.
• Also known as shot noise and Schottky noise.

Ndetections ∼ Poisson[t ⋅ α ⋅ Φ]

photon noise depends on scene flux and exposure

Dark noise

A consequence of “phantom detections” by the sensor.
• Electrons are randomly released without any photons.
• Total number of detections is Poisson distributed.
• Increases exponentially with sensor temperature (+6 °C ≈ doubling).

Ndetections ∼ Poisson[t ⋅ D]

dark noise depends on exposure but not on scene

Can you think of examples when dark noise is important?

• Very long exposures (astrophotography, pinhole camera).

Dark noise

A consequence of “phantom detections” by the sensor.
• Electrons are randomly released without any photons.
• Total number of detections is Poisson distributed.
• Increases exponentially with sensor temperature (+6 °C ≈ doubling).

Ndetections ∼ Poisson[t ⋅ D]

dark noise depends on exposure but not on scene

Can you think of ways to mitigate dark noise?

Can you think of examples when dark noise is important?

• Very long exposures (astrophotography, pinhole camera).

Dark noise

A consequence of “phantom detections” by the sensor.
• Electrons are randomly released without any photons.
• Total number of detections is Poisson distributed.
• Increases exponentially with sensor temperature (+6 °C ≈ doubling).

Ndetections ∼ Poisson[t ⋅ D]

dark noise depends on exposure but not on scene

Can you think of ways to mitigate dark noise?

• Cool the sensor.

• Average multiple shorter exposures.

Poisson distribution

Is it a continuous or discrete probability distribution?

Poisson distribution

Is it a continuous or discrete probability distribution?

• It is discrete.

How many parameters does it depend on?

Poisson distribution

Is it a continuous or discrete probability distribution?

• It is discrete.

What is its probability mass function?

How many parameters does it depend on?

• One parameter, the rate λ.

Poisson distribution

Is it a continuous or discrete probability distribution?

• It is discrete.

What is its probability mass function?

𝑁 ∼ Poisson(𝜆) ⇔ 𝑃(𝑁 = 𝑘; 𝜆) = 𝜆^𝑘 𝑒^(−𝜆) / 𝑘!

How many parameters does it depend on?

• One parameter, the rate λ.

What is its mean and variance?

Poisson distribution

Is it a continuous or discrete probability distribution?

• It is discrete.

What is its probability mass function?

𝑁 ∼ Poisson(𝜆) ⇔ 𝑃(𝑁 = 𝑘; 𝜆) = 𝜆^𝑘 𝑒^(−𝜆) / 𝑘!

How many parameters does it depend on?

• One parameter, the rate λ.

What is its mean and variance?

• Mean: μ(N) = λ

• Variance: σ(N)² = λ

The mean and variance of a Poisson random variable both equal the rate λ.

What is the distribution of the sum of two Poisson random variables?

Poisson distribution

Is it a continuous or discrete probability distribution?

• It is discrete.

What is its probability mass function?

𝑁 ∼ Poisson(𝜆) ⇔ 𝑃(𝑁 = 𝑘; 𝜆) = 𝜆^𝑘 𝑒^(−𝜆) / 𝑘!

How many parameters does it depend on?

• One parameter, the rate λ.

What is its mean and variance?

• Mean: μ(N) = λ

• Variance: σ(N)² = λ

The mean and variance of a Poisson random variable both equal the rate λ.

What is the distribution of the sum of two Poisson random variables?

𝑁1 ∼ Poisson(𝜆1), 𝑁2 ∼ Poisson(𝜆2) ⇒ 𝑁1 + 𝑁2 ∼ Poisson(𝜆1 + 𝜆2)
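The additivity property is easy to check empirically (the rates 3 and 5 below are arbitrary): the sum of the two samples should have mean and variance both close to 8, as a Poisson(8) variable would.

```python
import numpy as np

rng = np.random.default_rng(0)
n1 = rng.poisson(3.0, 100_000)   # N1 ~ Poisson(3)
n2 = rng.poisson(5.0, 100_000)   # N2 ~ Poisson(5)
s = n1 + n2                      # should behave like Poisson(3 + 5)

print(s.mean(), s.var())         # both close to 8
```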

Fundamental question

Why are photon noise and dark noise Poisson random variables?

The noisy image formation process

scene radiant flux Φ → sensor (exposure t, quantum efficiency α; dark current D) [introduces photon noise and dark noise] → analog voltage L → analog amplifier (gain g = k ⋅ ISO) [introduces read noise] → analog voltage G → analog-to-digital converter (ADC) [introduces ADC noise] → discrete signal I

• What is the distribution of the sensor readout L?

The distribution of the sensor readout

We know that the sensor readout is the sum of all released electrons:

𝐿 = 𝑁photon_detections + 𝑁phantom_detections

What is the distribution of photon detections?

The distribution of the sensor readout

We know that the sensor readout is the sum of all released electrons:

𝐿 = 𝑁photon_detections + 𝑁phantom_detections

What is the distribution of photon detections?

𝑁photon_detections ∼ Poisson(𝑡 ⋅ 𝛼 ⋅ Φ)

What is the distribution of phantom detections?

The distribution of the sensor readout

We know that the sensor readout is the sum of all released electrons:

𝐿 = 𝑁photon_detections + 𝑁phantom_detections

What is the distribution of photon detections?

𝑁photon_detections ∼ Poisson(𝑡 ⋅ 𝛼 ⋅ Φ)

What is the distribution of phantom detections?

𝑁phantom_detections ∼ Poisson(𝑡 ⋅ 𝐷)

What is the distribution of the sensor readout?

The distribution of the sensor readout

We know that the sensor readout is the sum of all released electrons:

𝐿 = 𝑁photon_detections + 𝑁phantom_detections

What is the distribution of photon detections?

𝑁photon_detections ∼ Poisson(𝑡 ⋅ 𝛼 ⋅ Φ)

What is the distribution of phantom detections?

𝑁phantom_detections ∼ Poisson(𝑡 ⋅ 𝐷)

What is the distribution of the sensor readout?

𝐿 ∼ Poisson(𝑡 ⋅ (𝛼 ⋅ Φ + 𝐷))

The noisy image formation process

scene radiant flux Φ → sensor (exposure t, quantum efficiency α; dark current D) [introduces photon noise and dark noise] → analog voltage 𝐿, 𝐿 ∼ Poisson(𝑡 ⋅ (𝛼 ⋅ Φ + 𝐷)) → analog amplifier (gain g = k ⋅ ISO) [introduces read noise] → analog voltage G → analog-to-digital converter (ADC) [introduces ADC noise] → discrete signal I

Read and ADC noise

A consequence of random voltage fluctuations before and after the amplifier.
• Both are independent of scene and exposure.
• Both are normally (zero-mean Gaussian) distributed.
• ADC noise includes quantization errors.

nread ∼ Normal(0, σread), nADC ∼ Normal(0, σADC)

Very important for dark pixels.

The noisy image formation process

scene radiant flux Φ → sensor (exposure t, quantum efficiency α; dark current D) [introduces photon noise and dark noise] → analog voltage 𝐿, 𝐿 ∼ Poisson(𝑡 ⋅ (𝛼 ⋅ Φ + 𝐷)) → analog amplifier (gain g = k ⋅ ISO) [introduces read noise] → analog voltage G → analog-to-digital converter (ADC) [introduces ADC noise] → discrete signal I

• How can we express the voltage G and discrete intensity I?

Expressions for the amplifier and ADC outputs

Both read noise and ADC noise are additive and zero-mean.

• How can we express the output of the amplifier?

Expressions for the amplifier and ADC outputs

Both read noise and ADC noise are additive and zero-mean.

• How can we express the output of the amplifier?

𝐺 = 𝐿 ⋅ 𝑔 + 𝑛read ⋅ 𝑔

• How can we express the output of the ADC?

don’t forget to account for the ISO-dependent gain

Expressions for the amplifier and ADC outputs

Both read noise and ADC noise are additive and zero-mean.

• How can we express the output of the amplifier?

𝐺 = 𝐿 ⋅ 𝑔 + 𝑛read ⋅ 𝑔

• How can we express the output of the ADC?

𝐼 = 𝐺 + 𝑛ADC

don’t forget to account for the ISO-dependent gain

The noisy image formation process

scene radiant flux Φ → sensor (exposure t, quantum efficiency α; dark current D) [introduces photon noise and dark noise] → analog voltage 𝐿, 𝐿 ∼ Poisson(𝑡 ⋅ (𝛼 ⋅ Φ + 𝐷)) → analog amplifier (gain g = k ⋅ ISO) [introduces read noise] → analog voltage 𝐺 = 𝐿 ⋅ 𝑔 + 𝑛read ⋅ 𝑔, 𝑛read ∼ Normal(0, 𝜎read) → analog-to-digital converter (ADC) [introduces ADC noise] → discrete signal 𝐼 = 𝐺 + 𝑛ADC, 𝑛ADC ∼ Normal(0, 𝜎ADC)

Putting it all together

Without saturation, the digital intensity equals:

𝐼 = 𝐿 ⋅ 𝑔 + 𝑛read ⋅ 𝑔 + 𝑛ADC

where 𝐿 ∼ Poisson(𝑡 ⋅ (𝛼 ⋅ Φ + 𝐷)), 𝑛read ∼ Normal(0, 𝜎read), 𝑛ADC ∼ Normal(0, 𝜎ADC)

What is the mean of the digital intensity (assuming no saturation)?

𝐸[𝐼] = ?

Putting it all together

Without saturation, the digital intensity equals:

𝐼 = 𝐿 ⋅ 𝑔 + 𝑛read ⋅ 𝑔 + 𝑛ADC

where 𝐿 ∼ Poisson(𝑡 ⋅ (𝛼 ⋅ Φ + 𝐷)), 𝑛read ∼ Normal(0, 𝜎read), 𝑛ADC ∼ Normal(0, 𝜎ADC)

What is the mean of the digital intensity (assuming no saturation)?

𝐸[𝐼] = 𝐸[𝐿] ⋅ 𝑔 + 𝐸[𝑛read] ⋅ 𝑔 + 𝐸[𝑛ADC] = ?

Putting it all together

Without saturation, the digital intensity equals:

𝐼 = 𝐿 ⋅ 𝑔 + 𝑛read ⋅ 𝑔 + 𝑛ADC

where 𝐿 ∼ Poisson(𝑡 ⋅ (𝛼 ⋅ Φ + 𝐷)), 𝑛read ∼ Normal(0, 𝜎read), 𝑛ADC ∼ Normal(0, 𝜎ADC)

What is the mean of the digital intensity (assuming no saturation)?

𝐸[𝐼] = 𝐸[𝐿] ⋅ 𝑔 + 𝐸[𝑛read] ⋅ 𝑔 + 𝐸[𝑛ADC] = 𝑡 ⋅ (𝛼 ⋅ Φ + 𝐷) ⋅ 𝑔

What is the variance of the digital intensity (assuming no saturation)?

𝜎[𝐼]² = ?

Putting it all together

Without saturation, the digital intensity equals:

𝐼 = 𝐿 ⋅ 𝑔 + 𝑛read ⋅ 𝑔 + 𝑛ADC

where 𝐿 ∼ Poisson(𝑡 ⋅ (𝛼 ⋅ Φ + 𝐷)), 𝑛read ∼ Normal(0, 𝜎read), 𝑛ADC ∼ Normal(0, 𝜎ADC)

What is the mean of the digital intensity (assuming no saturation)?

𝐸[𝐼] = 𝐸[𝐿] ⋅ 𝑔 + 𝐸[𝑛read] ⋅ 𝑔 + 𝐸[𝑛ADC] = 𝑡 ⋅ (𝛼 ⋅ Φ + 𝐷) ⋅ 𝑔

What is the variance of the digital intensity (assuming no saturation)?

𝜎[𝐼]² = 𝜎[𝐿 ⋅ 𝑔]² + 𝜎[𝑛read ⋅ 𝑔]² + 𝜎[𝑛ADC]² = ?

Putting it all together

Without saturation, the digital intensity equals:

𝐼 = 𝐿 ⋅ 𝑔 + 𝑛read ⋅ 𝑔 + 𝑛ADC

where 𝐿 ∼ Poisson(𝑡 ⋅ (𝛼 ⋅ Φ + 𝐷)), 𝑛read ∼ Normal(0, 𝜎read), 𝑛ADC ∼ Normal(0, 𝜎ADC)

What is the mean of the digital intensity (assuming no saturation)?

𝐸[𝐼] = 𝐸[𝐿] ⋅ 𝑔 + 𝐸[𝑛read] ⋅ 𝑔 + 𝐸[𝑛ADC] = 𝑡 ⋅ (𝛼 ⋅ Φ + 𝐷) ⋅ 𝑔

What is the variance of the digital intensity (assuming no saturation)?

𝜎[𝐼]² = 𝜎[𝐿 ⋅ 𝑔]² + 𝜎[𝑛read ⋅ 𝑔]² + 𝜎[𝑛ADC]² = 𝑡 ⋅ (𝛼 ⋅ Φ + 𝐷) ⋅ 𝑔² + 𝜎read² ⋅ 𝑔² + 𝜎ADC²
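The mean and variance formulas can be checked with a quick Monte-Carlo simulation of the sensing model (every parameter value below is made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sensor parameters, chosen only for illustration.
t, alpha, Phi, D = 0.01, 0.8, 1e5, 100.0   # exposure, QE, flux, dark current
g, s_read, s_adc = 2.0, 1.5, 0.5           # gain and noise standard deviations
N = 200_000                                # number of simulated pixels

# I = L*g + n_read*g + n_ADC with L ~ Poisson(t*(alpha*Phi + D)).
L = rng.poisson(t * (alpha * Phi + D), N)
I = L * g + rng.normal(0.0, s_read, N) * g + rng.normal(0.0, s_adc, N)

mean_pred = t * (alpha * Phi + D) * g
var_pred = t * (alpha * Phi + D) * g**2 + s_read**2 * g**2 + s_adc**2

print(I.mean(), mean_pred)   # empirical mean ≈ predicted mean
print(I.var(), var_pred)     # empirical variance ≈ predicted variance
```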

The noisy image formation process

scene radiant flux Φ → sensor (exposure t, quantum efficiency α; dark current D) → analog voltage 𝐿, 𝐿 ∼ Poisson(𝑡 ⋅ (𝛼 ⋅ Φ + 𝐷)) → analog amplifier (gain g = k ⋅ ISO) → analog voltage 𝐺 = 𝐿 ⋅ 𝑔 + 𝑛read ⋅ 𝑔, 𝑛read ∼ Normal(0, 𝜎read) → analog-to-digital converter (ADC) → discrete signal 𝐼 = 𝐺 + 𝑛ADC, 𝑛ADC ∼ Normal(0, 𝜎ADC)

discrete image intensity (with saturation): 𝐼 = min(𝐿 ⋅ 𝑔 + 𝑛read ⋅ 𝑔 + 𝑛ADC, 𝐼max), where 𝐼max is the saturation level

intensity mean and variance:
𝐸[𝐼] = 𝑡 ⋅ (𝛼 ⋅ Φ + 𝐷) ⋅ 𝑔
𝜎[𝐼]² = 𝑡 ⋅ (𝛼 ⋅ Φ + 𝐷) ⋅ 𝑔² + 𝜎read² ⋅ 𝑔² + 𝜎ADC²

Affine noise model

Combine read and ADC noise into a single additive noise term:

𝐼 = 𝐿 ⋅ 𝑔 + 𝑛add, where 𝑛add = 𝑛read ⋅ 𝑔 + 𝑛ADC

What is the distribution of the additive noise term?

Affine noise model

Combine read and ADC noise into a single additive noise term:

𝐼 = 𝐿 ⋅ 𝑔 + 𝑛add, where 𝑛add = 𝑛read ⋅ 𝑔 + 𝑛ADC

What is the distribution of the additive noise term?
• Sum of two independent, normal random variables:

𝑛add ∼ Normal(0, 𝜎add), where 𝜎add² = 𝜎read² ⋅ 𝑔² + 𝜎ADC²

Affine noise model

scene radiant flux Φ → sensor (exposure t, quantum efficiency α; dark current D) → analog voltage 𝐿, 𝐿 ∼ Poisson(𝑡 ⋅ (𝛼 ⋅ Φ + 𝐷)) → analog amplifier (gain g = k ⋅ ISO) → analog-to-digital converter (ADC) → discrete signal 𝐼 = 𝐿 ⋅ 𝑔 + 𝑛add, 𝑛add ∼ Normal(0, 𝜎add)

discrete image intensity (with saturation): 𝐼 = min(𝐿 ⋅ 𝑔 + 𝑛add, 𝐼max)

intensity mean and variance:
𝐸[𝐼] = 𝑡 ⋅ (𝛼 ⋅ Φ + 𝐷) ⋅ 𝑔
𝜎[𝐼]² = 𝑡 ⋅ (𝛼 ⋅ Φ + 𝐷) ⋅ 𝑔² + 𝜎add²

Some observations

Is image intensity an unbiased estimator of (scaled) scene radiant flux?

Some observations

Is image intensity an unbiased estimator of (scaled) scene radiant flux?

• No, because of dark noise (term 𝑡 ⋅ 𝐷 ⋅ 𝑔 in the mean).

When are photon noise and additive noise dominant?

Some observations

Is image intensity an unbiased estimator of (scaled) scene radiant flux?

• No, because of dark noise (term 𝑡 ⋅ 𝐷 ⋅ 𝑔 in the mean).

When are photon noise and additive noise dominant?

• Photon noise is dominant in very bright scenes (photon-limited scenarios).

• Additive noise is dominant in very dark scenes.

Can we ever completely remove noise?

Some observations

Is image intensity an unbiased estimator of (scaled) scene radiant flux?

• No, because of dark noise (term 𝑡 ⋅ 𝐷 ⋅ 𝑔 in the mean).

Can we ever completely remove noise?

• We cannot eliminate photon noise.

• Super-sensitive detectors have pure Poisson photon noise.

single-photon avalanche photodiode (SPAD)

When are photon noise and additive noise dominant?

• Photon noise is dominant in very bright scenes (photon-limited scenarios).

• Additive noise is dominant in very dark scenes.

Summary: noise regimes

regime | dominant noise | notes
bright pixels | photon noise | scene-dependent
dark pixels | read and ADC noise | scene-independent
low ISO | ADC noise | post-gain
high ISO | photon and read noise | pre-gain
long exposures | dark noise | thermal dependence

discrete image intensity (with saturation): 𝐼 = min(𝐿 ⋅ 𝑔 + 𝑛read ⋅ 𝑔 + 𝑛ADC, 𝐼max)

intensity mean and variance:
𝐸[𝐼] = 𝑡 ⋅ (𝛼 ⋅ Φ + 𝐷) ⋅ 𝑔
𝜎[𝐼]² = 𝑡 ⋅ (𝛼 ⋅ Φ + 𝐷) ⋅ 𝑔² + 𝜎read² ⋅ 𝑔² + 𝜎ADC²

Noise calibration

How can we estimate the various parameters?

scene radiant flux Φ → sensor (exposure t, quantum efficiency α; dark current D) → analog voltage 𝐿, 𝐿 ∼ Poisson(𝑡 ⋅ (𝛼 ⋅ Φ + 𝐷)) → analog amplifier (gain g = k ⋅ ISO) → analog voltage 𝐺 = 𝐿 ⋅ 𝑔 + 𝑛read ⋅ 𝑔, 𝑛read ∼ Normal(0, 𝜎read) → analog-to-digital converter (ADC) → discrete signal 𝐼 = 𝐺 + 𝑛ADC, 𝑛ADC ∼ Normal(0, 𝜎ADC)

discrete image intensity (with saturation): 𝐼 = min(𝐿 ⋅ 𝑔 + 𝑛read ⋅ 𝑔 + 𝑛ADC, 𝐼max), where 𝐼max is the saturation level

intensity mean and variance:
𝐸[𝐼] = 𝑡 ⋅ (𝛼 ⋅ Φ + 𝐷) ⋅ 𝑔
𝜎[𝐼]² = 𝑡 ⋅ (𝛼 ⋅ Φ + 𝐷) ⋅ 𝑔² + 𝜎read² ⋅ 𝑔² + 𝜎ADC²

Estimating the dark current

Can you think of a procedure for estimating the dark current D?

Estimating the dark current

Can you think of a procedure for estimating the dark current D?
• Capture multiple images with the sensor completely blocked and average to form the dark frame.

Why is the dark frame a valid estimator of the dark current D?

Estimating the dark current

Can you think of a procedure for estimating the dark current D?
• Capture multiple images with the sensor completely blocked and average to form the dark frame.

Why is the dark frame a valid estimator of the dark current D?
• By blocking the sensor, we effectively set Φ = 0.
• The average intensity becomes: 𝐸[𝐼] = 𝑡 ⋅ (𝛼 ⋅ 0 + 𝐷) ⋅ 𝑔 = 𝑡 ⋅ 𝐷 ⋅ 𝑔
• The dark frame needs to be computed separately for each ISO setting, unless we can also calibrate the gain g.

For the rest of these slides, we assume that we have calibrated D and removed it from captured images (by subtracting from them the dark frame).
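A simulated version of the dark-frame procedure (parameters are invented, and read/ADC noise is omitted for brevity): with the sensor blocked, averaging many captures converges to t ⋅ D ⋅ g.

```python
import numpy as np

rng = np.random.default_rng(0)
t, D, g = 1.0, 50.0, 2.0   # exposure, dark current, gain (hypothetical values)

# Sensor blocked: Phi = 0, so each pixel reads g * Poisson(t * D).
frames = g * rng.poisson(t * D, size=(100, 64, 64)).astype(float)

# Average many blocked captures to form the dark frame.
dark_frame = frames.mean(axis=0)
print(dark_frame.mean())   # converges to t * D * g = 100

# Subtracting dark_frame from real captures then removes the dark-current bias.
```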

Noise model after dark frame subtraction

scene radiant flux Φ → sensor (exposure t, quantum efficiency α) → analog voltage 𝐿, 𝐿 ∼ Poisson(𝑡 ⋅ 𝛼 ⋅ Φ) → analog amplifier (gain g = k ⋅ ISO) → analog voltage 𝐺 = 𝐿 ⋅ 𝑔 + 𝑛read ⋅ 𝑔, 𝑛read ∼ Normal(0, 𝜎read) → analog-to-digital converter (ADC) → discrete signal 𝐼 = 𝐺 + 𝑛ADC, 𝑛ADC ∼ Normal(0, 𝜎ADC)

discrete image intensity (with saturation): 𝐼 = min(𝐿 ⋅ 𝑔 + 𝑛read ⋅ 𝑔 + 𝑛ADC, 𝐼max), where 𝐼max is the saturation level

intensity mean and variance:
𝐸[𝐼] = 𝑡 ⋅ 𝛼 ⋅ Φ ⋅ 𝑔
𝜎[𝐼]² = 𝑡 ⋅ 𝛼 ⋅ Φ ⋅ 𝑔² + 𝜎read² ⋅ 𝑔² + 𝜎ADC²

Affine noise model after dark frame subtraction

scene radiant flux Φ → sensor (exposure t, quantum efficiency α) → analog voltage 𝐿, 𝐿 ∼ Poisson(𝑡 ⋅ 𝛼 ⋅ Φ) → analog amplifier (gain g = k ⋅ ISO) → analog-to-digital converter (ADC) → discrete signal 𝐼 = 𝐿 ⋅ 𝑔 + 𝑛add, 𝑛add ∼ Normal(0, 𝜎add)

discrete image intensity (with saturation): 𝐼 = min(𝐿 ⋅ 𝑔 + 𝑛add, 𝐼max)

intensity mean and variance:
𝐸[𝐼] = 𝑡 ⋅ 𝛼 ⋅ Φ ⋅ 𝑔
𝜎[𝐼]² = 𝑡 ⋅ 𝛼 ⋅ Φ ⋅ 𝑔² + 𝜎add²

Estimating the gain and additive noise variance

Can you think of a procedure for estimating these quantities?

Estimating the gain and additive noise variance

1. Capture a large number of images of a grayscale target.

Estimating the gain and additive noise variance

1. Capture a large number of images of a grayscale target.
2. Compute the empirical mean and variance for each pixel, then form a mean-variance plot.

What do you expect the measurements to look like?

Estimating the gain and additive noise variance

1. Capture a large number of images of a grayscale target.
2. Compute the empirical mean and variance for each pixel, then form a mean-variance plot.

𝐸[𝐼] = 𝑡 ⋅ 𝛼 ⋅ Φ ⋅ 𝑔 and 𝜎[𝐼]² = 𝑡 ⋅ 𝛼 ⋅ Φ ⋅ 𝑔² + 𝜎add² ⇒ 𝜎[𝐼]² = 𝐸[𝐼] ⋅ 𝑔 + 𝜎add²

Estimating the gain and additive noise variance

1. Capture a large number of images of a grayscale target.
2. Compute the empirical mean and variance for each pixel, then form a mean-variance plot: 𝜎[𝐼]² = 𝐸[𝐼] ⋅ 𝑔 + 𝜎add²
3. Fit a line and use its slope and intercept to estimate the gain and variance: the gain g equals the line slope, and 𝜎add² equals the line intercept.

How would you modify this procedure to separately estimate read and ADC noise?

Estimating the gain and additive noise variance

1. Capture a large number of images of a grayscale target.

2. Compute the empirical mean and variance for each pixel, then form a mean-variance plot.

mean

vari

ance 𝜎 𝐼 2 = 𝐸 𝐼 ⋅ 𝑔 + 𝜎add

2

3. Fit a line and use slope and intercept to estimate the gain and variance.

equal to line slope

equal to line intercept𝜎add

2

𝑔

How would you modify this procedure to separately estimate read and ADC noise?• Perform it for a few different ISO settings (i.e., gains g).
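The three calibration steps above can be sketched on simulated data. All parameter values here are illustrative assumptions; with a real camera, the stack would come from repeated RAW captures of a static grayscale target:

```python
import numpy as np

# Simulate a stack of noisy images of a static target under the affine noise
# model, then fit the mean-variance line sigma[I]^2 = E[I]*g + sigma_add^2.
rng = np.random.default_rng(1)

g_true, sigma_add_true = 2.5, 2.0
flux = rng.uniform(10, 100, size=(64, 64))   # expected photoelectrons per pixel

# 1. Capture a large number of images of the (grayscale) target.
n_images = 500
stack = (rng.poisson(flux, size=(n_images, 64, 64)) * g_true
         + rng.normal(0.0, sigma_add_true, size=(n_images, 64, 64)))

# 2. Empirical mean and variance per pixel -> points on the mean-variance plot.
means = stack.mean(axis=0).ravel()
variances = stack.var(axis=0, ddof=1).ravel()

# 3. Fit a line: the slope estimates g, the intercept estimates sigma_add^2.
g_est, var_add_est = np.polyfit(means, variances, 1)
print(g_est, var_add_est)   # slope ~ g_true = 2.5, intercept ~ sigma_add^2 = 4.0
```

The slope estimate is typically much more stable than the intercept, since the intercept is an extrapolation of the fitted line back to zero mean.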

Important note

Noise calibration should be performed with RAW images!

Optimal weights for HDR merging

Merging non-linear exposure stacks

1. Calibrate response curve.

2. Linearize images.

For each pixel:

3. Find “valid” images: 0.05 < pixel < 0.95, rejecting values below the lower threshold as too noisy and above the upper threshold as clipped.

4. Weight valid pixel values appropriately; the weight acts as a “confidence” that the pixel is not noisy or clipped.

5. Form a new pixel value as the weighted average of valid pixel values, each converted to radiance as (pixel value) / t_i.

Same steps as in the RAW case.

Note: many possible weighting schemes.
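Steps 3-5 above can be sketched as follows. This is a minimal version that assumes the images are already linearized to [0, 1] and uses a simple 0/1 validity weighting; the thresholds and exposure times are illustrative:

```python
import numpy as np

def merge_exposure_stack(images, exposure_times, lo=0.05, hi=0.95):
    """Merge linearized LDR images into an HDR radiance estimate."""
    images = np.asarray(images, dtype=np.float64)
    t = np.asarray(exposure_times, dtype=np.float64).reshape(-1, 1, 1)
    valid = (images > lo) & (images < hi)       # step 3: reject noisy/clipped pixels
    weights = valid.astype(np.float64)          # step 4: here, simple 0/1 weights
    radiance = images / t                       # step 5: per-image radiance estimates
    num = (weights * radiance).sum(axis=0)
    den = weights.sum(axis=0)
    return np.where(den > 0, num / np.maximum(den, 1.0), 0.0)

# Toy usage: a constant-radiance scene captured at three exposure times.
phi = 0.4
times = np.array([2.0, 1.0, 0.25])
stack = np.clip(phi * times[:, None, None] * np.ones((3, 2, 2)), 0, 1)
print(merge_exposure_stack(stack, times))   # every pixel recovers ~0.4
```

Each valid image gives an independent estimate of the same radiance after division by its exposure time, so the merge is simply their weighted average.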

• What are the optimal weights for merging an exposure stack?

RAW (linear) image formation model

Exposure times: t5, t4, t3, t2, t1

(Weighted) radiant flux for image pixel (x, y): α ⋅ Φ(x, y)

What weights should we use to merge these images, so that the resulting HDR image is an optimal estimator of the weighted radiant flux?
• Different images in the exposure stack will have different noise characteristics.

Simple estimation example

We have two noisy unbiased estimators x and y of the same quantity (e.g., a pixel’s intensity).
• The two estimators have variances σ[x]² and σ[y]².

Assume we form a new estimator from the convex combination of the other two:

z = a ⋅ x + (1 – a) ⋅ y

What criterion should we use in selecting the mixing weight a?
• We should select a to minimize the variance of estimator z. (Why? z remains unbiased for any a, so minimizing its variance also minimizes its mean squared error.)

What is the variance of z as a function of a? Assuming x and y are independent,

σ[z]² = a² ⋅ σ[x]² + (1 – a)² ⋅ σ[y]²

What value of a minimizes σ[z]²?

Simple estimation example

Simple optimization problem:

∂σ[z]² / ∂a = 0
∂(a² ⋅ σ[x]² + (1 – a)² ⋅ σ[y]²) / ∂a = 0
2 ⋅ a ⋅ σ[x]² − 2 ⋅ (1 – a) ⋅ σ[y]² = 0

a = σ[y]² / (σ[x]² + σ[y]²)  and  1 − a = σ[x]² / (σ[x]² + σ[y]²)

Simple estimation example

Putting it all together, the optimal linear combination of the two estimators is

z = (σ[x]² ⋅ σ[y]² / (σ[x]² + σ[y]²)) ⋅ ((1 / σ[x]²) ⋅ x + (1 / σ[y]²) ⋅ y)

where the leading factor is a normalization factor, and the weights are inversely proportional to variance.

This weighting scheme is called Fisher weighting, and the resulting estimator is the best linear unbiased estimator (BLUE).

More generally, for N estimators x_i with variances σ[x_i]²,

z = (1 / Σ_{i=1}^{N} (1 / σ[x_i]²)) ⋅ Σ_{i=1}^{N} (1 / σ[x_i]²) ⋅ x_i
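This general formula is a one-liner in code. A direct transcription; nothing here is HDR-specific, it merges any set of independent unbiased estimates of the same quantity:

```python
import numpy as np

def fisher_merge(estimates, variances):
    """Combine unbiased estimators with weights inversely proportional
    to their variances (Fisher weighting, the BLUE estimator)."""
    estimates = np.asarray(estimates, dtype=np.float64)
    w = 1.0 / np.asarray(variances, dtype=np.float64)   # weights 1 / sigma_i^2
    z = (w * estimates).sum() / w.sum()                 # weighted average
    var_z = 1.0 / w.sum()                               # variance of merged estimator
    return z, var_z

# Two estimators of the same quantity with different noise levels:
z, var_z = fisher_merge([10.0, 14.0], [1.0, 4.0])
print(z, var_z)   # z = 10.8, var = 0.8
```

Note that the merged variance (0.8) is lower than that of either input estimator, as expected from the derivation above.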

Video Textures

Arno Schödl

Richard Szeliski

David Salesin

Irfan Essa

Microsoft Research, Georgia Tech

[SIGGRAPH 2000]

Problem statement

video clip → video texture

Our approach

How do we find good transitions?

Finding good transitions

• Compute the L2 distance D_{i,j} between all pairs of frames (frame i vs. frame j).

• Similar frames make good transitions.

Markov chain representation

Each frame is a state (e.g., states 1, 2, 3, 4), and similar frames make good transitions.

Transition costs

Transition from i to j if the successor of i is like j.

Cost function: C_{i→j} = D_{i+1, j}

Transition probabilities

Probability for transition P_{i→j} is inversely related to cost:

P_{i→j} ∝ exp(−C_{i→j} / s²)

The parameter s controls the trade-off: a high s makes transitions more random, while a low s favors only the lowest-cost transitions.
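The distance, cost, and probability computations above can be sketched as follows; the random arrays here stand in for real video frames:

```python
import numpy as np

# Video-texture transition table: L2 frame distances D, costs
# C_{i->j} = D_{i+1, j}, and probabilities P_{i->j} ~ exp(-C / s^2).
rng = np.random.default_rng(2)
frames = rng.random((6, 8, 8))                 # 6 tiny grayscale "frames"

# L2 distance between all pairs of frames.
flat = frames.reshape(len(frames), -1)
D = np.linalg.norm(flat[:, None, :] - flat[None, :, :], axis=2)

# Transition from i to j is good if the successor of i looks like j.
C = D[1:, :]                                   # C[i, j] = D[i+1, j], i = 0..n-2

s = 1.0
P = np.exp(-C / s**2)
P /= P.sum(axis=1, keepdims=True)              # normalize each row into probabilities

print(P.shape)          # (5, 6): the last frame has no successor
print(P.sum(axis=1))    # each row sums to 1
```

At playback time, the next frame is sampled from the row of P corresponding to the current frame.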

Preserving dynamics


Cost for transition i→j

C_{i→j} = Σ_{k=−N}^{N−1} w_k ⋅ D_{i+k+1, j+k}

For example, with N = 2 the sum compares the subsequences around the transition using D_{i−1, j−2}, D_{i, j−1}, D_{i+1, j}, and D_{i+2, j+1}.
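This windowed cost can be sketched directly from the formula, assuming a precomputed distance matrix D and an illustrative (not necessarily the paper's) choice of weights w_k:

```python
import numpy as np

def windowed_cost(D, i, j, weights):
    """Dynamics-preserving transition cost:
    C_{i->j} = sum_{k=-N}^{N-1} w_k * D[i+k+1, j+k].
    `weights` has length 2N, indexed k = -N .. N-1."""
    N = len(weights) // 2
    return sum(w * D[i + k + 1, j + k]
               for w, k in zip(weights, range(-N, N)))

# Toy distance matrix and a symmetric weight window (N = 2).
D = np.arange(64, dtype=np.float64).reshape(8, 8)
weights = [1.0, 2.0, 2.0, 1.0]
print(windowed_cost(D, i=3, j=4, weights=weights))
# = 1*D[2,2] + 2*D[3,3] + 2*D[4,4] + 1*D[5,5] = 18 + 54 + 72 + 45 = 189
```

Because the sum reaches N frames before and after the transition, i and j must lie at least N frames away from the ends of the clip.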

Finding good loops

• Alternative to random transitions

• Precompute set of loops up front
