7/27/2019 ruido en fotografia digital.doc
1/28
The fundamental component of a digital camera sensor is the photosite. This is the part of the
sensor that actually detects light when you are taking an image. The photosite detects
light by converting as many of the photons that strike it as possible into
electrons [1]. These electrons are then stored until the exposure is completed. Once the
exposure is over, the charge at each photosite is measured, and the measurement is
converted into a digital value. This measurement process is called readout.
There are two common types of sensor in digital cameras: CCDs and CMOS sensors.
They differ in several significant respects.
In a CCD, the electrons in each photosite are transferred, in 'bucket-brigade' style, to
one corner of the chip, where the readout is performed. CCDs use a single amplifier (or
set of amplifiers) to read out the entire chip, so each photosite sends its charge to the
same amplifiers that all the other pixels use.
In CCDs, this readout circuitry sits on top of the photosites and partially obscures them,
so that some of the light falling on a sensor doesn't make it to the photosite to be
detected. Two methods have been devised to address this. One is shaving the CCD chip
until it is very thin, and then mounting it upside down so that light enters the CCD from
the bottom (so that the readout circuitry is then on the back of the chip, underneath the
photosites). CCDs in this configuration are called "back-illuminated" and are found only
on very expensive cameras. Another technique is to place a microlens above the
photosite and its adjacent readout circuitry, which redirects some of the light that would
otherwise strike the circuitry into the photosite instead.
In CMOS sensors, each photosite's amplifier and related circuitry sit adjacent to the
photosite, directly on the sensor. CMOS sensors therefore share with CCDs the problem
of a significant amount of sensor area being taken up by devices that are not sensitive
to light, but the problem is usually quite a bit worse. With CMOS sensors, the
microlens method is very commonly used to help overcome this.
Sources of noise in digital SLRs:
There are four main sources of noise in digital camera images:
Dark noise: Dark noise is an accumulation of heat-generated electrons in the
sensor, which end up in the photosites and contribute a snow-like appearance to
the image. The related term "dark current" refers to the rate of generation of these electrons, most of which come from boundaries between silicon and
silicon dioxide in the sensor.
Readout noise, aka bias noise: Constructing an image from the sensor's
photosites requires that the charge in each photosite be measured, and converted
to a digital value. Making this measurement is part of the process of "reading
out" the sensor. But doing so is an imperfect process. The amount of charge in
the photosite is too small to be measured without prior amplification, and this is
the main source of trouble: no perfect amplifier has been invented, and the
amplifiers used on digital imaging sensors add a little bit of noise, similar to
static in a radio signal, to the charge they are amplifying. The readout amplifier
in a sensor is the main contributor to readout noise.
Photon noise, aka Poisson noise: Photon noise is caused by the random timing of
photon arrivals at the sensor. If photons arrived at a constant rate, as though
they were being delivered to the photosite by a conveyor belt at an efficient
factory, then there would be no photon noise. But that isn't how it works.
Photons arrive at the photosite irregularly. One pixel might be lucky enough to
be hit with 100 photons in a given amount of time, while its neighbor only receives 80. If the photo is of an evenly illuminated surface, this photon noise
will show up as one pixel having an improperly low value compared to an
adjacent one.
Random noise: The remaining noise is traceable to erroneous fluctuations in
voltage or current in the camera's circuitry, to electromagnetic interference, and
who-knows-what. Random noise will vary from image to image and is a result
of many influences. One of the most significant might be random variation in
the way electronic components operate at different times, temperatures, and
conditions. Whatever the case, random noise is almost always infinitesimal - in
most modern digital cameras, random noise will not be detectable in an 8-bit
image; it may be barely measurable in a 16-bit image but will very rarely be visible in a conventional photo.
In addition to these sources of noise, variations in photosite sensitivity across the sensor,
as well as shadows cast on the sensor by dust and dirt, can appear to contribute "noise"
to the image in the form of snow or regions of greater or lesser apparent sensitivity.
Those interested in reducing this form of pseudo-noise may wish to research the topic of
flat fields, but I won't cover it here because these problems are mostly solved in digital
SLRs (as long as the sensor is clean).
Finally, if a cosmic ray strikes a sensor during an exposure, it can result in a very hot
pixel or a spurious streak in the image. This too might look like noise, but it isn't - it is a
legitimate detection of a high-energy particle by a sensor efficient at detecting high-
energy particles.
Characteristics of Photon, Dark, and Bias Noise:
Photon noise is pseudo-random. The arrival times of photons at a photosite describe a
Poisson distribution, and there is essentially nothing post-exposure that can be done
about photon noise. However, the impact of photon noise in the resulting image will be
greater with (a) fast shutter speeds, (b) dimly lit subjects, and/or (c) high amplification
of the signal. So to reduce the visibility of photon noise, longer exposure times, brighter illumination, and low ISO settings may help.
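Photon counts follow Poisson statistics, so the signal-to-noise ratio of an evenly lit patch grows as the square root of the mean photon count. A small illustrative simulation (the pixel count and photon rates here are arbitrary, not modeled on any particular sensor):

```python
import numpy as np

def photon_snr(mean_photons: float, n_pixels: int = 100_000, seed: int = 0) -> float:
    """Simulate an evenly lit patch of photosites and return its measured SNR.

    Photon arrivals are Poisson-distributed, so the SNR of the patch is
    approximately sqrt(mean_photons).
    """
    rng = np.random.default_rng(seed)
    counts = rng.poisson(mean_photons, n_pixels)
    return float(counts.mean() / counts.std())

# More light per photosite (longer exposure, brighter illumination, less
# amplification) means a higher SNR, i.e. less visible photon noise:
snr_dim = photon_snr(80)       # roughly sqrt(80), about 9
snr_bright = photon_snr(8000)  # roughly sqrt(8000), about 89
```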
Dark noise accumulates over time, and does so in a very convenient manner: an
exposure time twice as long can be expected to have roughly twice the amount of dark
noise. In part for this reason, long-exposure photographs are troublesome with some
digital cameras; but the increase of dark noise over time suggests a strategy for dealing
with the problem.
Dark noise, 32 minute exposure taken at 22 C. Taken with a Canon 10D, levels
adjusted to increase noise visibility and image resampled to a manageable size.
Dark noise, 62 minute exposure otherwise identical to the above. As theory predicts, the
dark noise in this image is almost exactly double that in the previous image in terms of
individual pixel values.
Dark noise is caused by heat-generated electrons making their way into the photosites,
so the temperature of the camera's sensor also affects the amount of dark noise in the images. As the temperature of the sensor goes up, dark noise increases. Different
sensors behave differently, but in general, increasing the temperature of a sensor by six
to ten degrees C will result in the dark noise in the resulting image doubling. While this
is a nonlinear effect, it is at least easy to describe mathematically.
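These two scaling rules, linear in exposure time and doubling every six to ten degrees C, can be combined into a tiny illustrative model. The parameter values below are made up for illustration, not measured from any camera:

```python
def dark_noise(exposure_s: float, temp_c: float,
               d0: float = 1.0, ref_temp_c: float = 22.0,
               doubling_deg_c: float = 8.0) -> float:
    """Expected dark-noise level in arbitrary units.

    Linear in exposure time; doubles every `doubling_deg_c` degrees above
    the reference temperature. All parameters are illustrative.
    """
    return d0 * exposure_s * 2.0 ** ((temp_c - ref_temp_c) / doubling_deg_c)

# Twice the exposure time gives twice the dark noise:
assert dark_noise(64 * 60, 22.0) == 2 * dark_noise(32 * 60, 22.0)
# One doubling step in sensor temperature (8 C here) also doubles it:
assert dark_noise(32 * 60, 30.0) == 2 * dark_noise(32 * 60, 22.0)
```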
Dark noise is not random; in fact, it is highly repeatable. A given photosite on a sensor
will accumulate almost exactly the same amount of dark noise from one exposure to the
next, as long as temperature and exposure duration do not vary.
Bias noise is also highly repeatable - but since it is a result of reading out the sensor, it
does not even depend on shooting conditions being the same. Practically the only variable affecting readout noise in a digital camera exposure is the amount of amplifier
gain. As long as the amplifier gain remains the same, readout noise will be nearly
identical from shot to shot. In general, doubling amplifier gain can be expected to
approximately double the amount of readout noise.
In digital cameras, photographers have nearly direct control over amplifier gain by
adjusting the ISO setting. Increasing ISO increases amplifier gain, and reducing ISO
reduces gain. As you would expect, bias noise in digital images is usually less
conspicuous when lower ISO settings are used. At any given ISO setting, the bias noise
is going to be very nearly the same from one image to the next.
Bias noise at 1600 ISO in a Canon 10D. Levels adjusted and image resampled to a
manageable size.
Bias noise at 3200 ISO in a Canon 10D, otherwise identical to above. Bias noise in this
frame is approximately double that in the previous frame.
Calibrating Images:
Since dark and bias noise are not random and are consistent from image to image, techniques have been developed to allow scientific and technical imagers to remove
these sources of noise from their images. This process is called "calibrating" the image.
Dealing with both dark and bias noise involves making two special images and
subtracting them from the photo. The first image is a bias frame - a zero-duration
exposure in which the sensor is reset and immediately read out, without any light falling
on the sensor and with no time gap between the reset and readout. The image that this
process creates is a snapshot of what the sensor's bias noise looks like, since the only
contribution to the resulting image is the readout amplifier's static.
The other special image is the dark frame. This is most commonly an exposure of the
same duration, taken at the same sensor temperature, as the photo. Since no light is allowed to fall on the sensor, the resulting image shows only an accumulation of dark
noise (plus bias noise - since to get the image you have to read out the sensor). For
various reasons, in most scientific imagery the bias and dark frames are generated as
separate steps and subtracted from the photo separately, in a defined sequence.
However, most digital cameras do not allow a zero-duration exposure without the use of
special software - such as testing software used by camera service departments, or
expensive software written specifically for science and engineering applications, which
might require that physical modifications be made to the camera to operate properly.
For this reason, most photographers who are calibrating their digital SLR images are
doing so with a single combined bias and dark frame, taken as an exposure at the same
ISO, shutter duration, and ambient temperature as the photograph.
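In array terms, the calibration is a per-pixel subtraction. A minimal numpy sketch, assuming the photo and the combined bias-dark frame are same-shape arrays of linear sensor values (the sample values are invented):

```python
import numpy as np

def calibrate(photo: np.ndarray, dark_bias: np.ndarray) -> np.ndarray:
    """Subtract a combined bias+dark frame from a photo, clipping at zero."""
    # Work in a signed type so the subtraction cannot wrap around,
    # then clip negative results to zero and restore the original dtype.
    diff = photo.astype(np.int32) - dark_bias.astype(np.int32)
    return np.clip(diff, 0, None).astype(photo.dtype)

# Hot pixels and the bias floor recorded in the calibration frame are removed:
photo = np.array([[120, 40], [500, 35]], dtype=np.uint16)
cal = np.array([[20, 38], [18, 40]], dtype=np.uint16)
result = calibrate(photo, cal)  # [[100, 2], [482, 0]]
```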
A bias frame (3200 ISO).
A bias frame shot approximately an hour later at the same ISO setting.
The result of subtracting the second bias frame from the first. Theory suggests that this process should result in a nearly black image, save for any remaining random noise.
The dramatic reduction in noise is clearly visible in this calibrated image. All three
images' levels have been adjusted identically.
Complicating Factors:
CMOS sensors allow the placement of both photosites and transistors on the sensor
itself. (CCDs cannot have any processing circuitry built into the sensor - just transfer
gates and the like, which are controlled by off-sensor control circuitry.) Because of this,
CMOS sensors generally have at least the readout amplifier built in to the photosite.
There may be other transistors as well, which perform other processing steps. It is now very common for a CMOS sensor to include noise-reduction circuitry directly on the
sensor alongside the readout amplifier. In some designs, a sort of small dummy
photosite, shaded from light, is used to quantify the likely dark noise level in the actual
photosite, and this quantity is subtracted during readout. In other designs, a constant -
corresponding to the tested dark current of the sensor - is subtracted from the photosite
value during readout. If anything like this is happening, expectations such as "dark noise
will double with twice the exposure duration" may turn out to be false.
In addition, this on-sensor circuitry can be designed to subtract the amount of bias noise
that the sensor designer expects will be contributed to that particular pixel. This is a
design-time decision, so bias noise may still be introduced due to manufacturing variations, erroneous expectations on the part of the designer, changes in other circuitry
at a later point in development that the designer decided not to compensate for, and so
forth. In any case, if bias noise is being addressed in a CMOS sensor camera - and it is
being aggressively dealt with in all known current DSLRs - the relationship between
ISO and readout noise in a particular camera's images might not be as simple or as
repeatable as expected.
Note that both of these kinds of on-sensor processing affect the camera's RAW image.
That is to say, the RAW image is not necessarily "exactly what the sensor detected," as
is often said. Instead, it is exactly what the sensor detected, plus or minus whatever
built-in, on-sensor processing is being done in that particular camera. The raw image
lacks any post-readout processing, of course - the point is that on CMOS sensors some
processing may be unavoidable and its effects will be present in the raw format image.
Of course, in-camera processing after readout alters the noise profile a great deal as
well. No JPEG image can have its noise reduced by the calibration steps described here
- the data is too drastically altered by compression to allow dark or bias subtraction to
work right. In-camera resampling, resizing, binning [2], sharpening, and noise reduction will all change the appearance of the noise in the image and the way it varies by
exposure time, temperature, and ISO setting.
Despite this, digital camera photos can often be beneficially calibrated to reduce noise.
Although aggressive noise reduction is probably occurring in any modern camera either
during or just after readout, the residuum of noise that is not addressed is often largely
non-random and consistent from shot to shot.
A ten-second exposure, taken with the lens covered, at ISO 3200 and 22 C.
An identical shot taken about two hours later.
The result of subtracting the second shot from the first. Theory predicts that this will
result in a nearly noise-free image, and that is clearly the case here. All three images'
levels have been set identically.
WORKFLOW:
In practical terms, for the average digital SLR user, a combined bias-dark frame is the
only practical calibration frame to apply to their images.
In Photoshop, there are probably a dozen ways to subtract one image from another. In
the workflow below, I will describe how to do it using layers. Use whatever method
works for you.
In the following, a "photo" is a picture of a subject that you want to calibrate. A
"calibration frame" is a special image of the camera's noise characteristics that you will
subtract from the photo - in this case a combined bias and dark frame.
Taking the photo and calibration frame:
1. Set the camera to take RAW format images.
2. Turn off any in-camera sharpening. (Contrary to popular opinion or general
rules of thumb, with some cameras the RAW image will be affected by in-
camera sharpening.)
3. Set the camera properly, paying special attention to ISO and exposure time.
4. Take the photo.
5. Put the dust cap on the lens.
6. Put the eyepiece cover, if available, on the eyepiece so that no light can get in
from the back end of the camera.
7. Double check that the ISO and exposure time settings are the same as used when
taking the photo in step 4. (Taking the calibration frame at a different lens
aperture is not recommended, since this can introduce random noise of a
different profile than that in the photo.)
8. Wrap the camera with a dark towel or other fabric. (This may be overkill if the
lens cap is good - use your judgment, but ensure no light reaches the sensor
while performing the following step.)
9. Take the exposure.
Calibrating:
1. Open the photo in your raw conversion software.
2. Select a white balance for the photo.
3. Make no other alterations in the raw conversion. In particular, do not modify
levels in such a way as to clip dark values, and do not allow the RAW converter
to apply sharpening.
4. Open your calibration frame in the raw conversion software and apply the same
conversion settings to it as will be used for the photo.
5. Convert the raw images to maximum bit depth TIFF files.
6. Load both the TIFF files in Photoshop.
7. Select the calibration frame.
8. Press "ctrl-a" or choose "All" from the Select menu to select all of the calibration
frame.
9. Press "ctrl-c" or choose "copy" from the Edit menu to copy the calibration frame
to the clipboard.
10. Select the photo.
11. Press "ctrl-v" or choose "paste" from the Edit menu to paste the calibration
frame into your photo as a new layer.
12. Close the calibration file.
13. Select the new layer in your photo - the one that was created by copying the
calibration frame (we will call this the calibration layer).
14. Open the Blending Options dialogue (or make the following adjustments at the
top of the Layers window).
15. Select "difference" for Blend Mode.
16. Select 100% for Opacity.
At this point, your photo is calibrated, and if the photo lacks significant amounts of
photon noise and random noise, it should look significantly better than it did before
calibration.
A 1:1 crop of a photo taken in poor lighting in a coffee shop at ISO 3200.
The same part of the photo after calibration.
You can now proceed however you like, as long as you follow a simple pair of rules:
1. If you use adjustment layers, place them above both the photo (background)
layer and the calibration layer. Putting an adjustment layer in between the two
will destroy the calibration.
2. If you make any destructive alterations to the image, flatten the image first. A
"destructive alteration" is anything in Photoshop that changes the image, for
which an adjustment layer is not available.
If you are not very familiar with Photoshop, or the two rules don't make a lot of sense, it
is probably best to just flatten the image immediately after calibration.
Of course, after you have done the calibration you can still apply various noise-
reduction algorithms to further attack photon noise and random noise, for example if
Noise Ninja or similar software is available.
If there are problems:
If you suddenly lost a lot of dynamic range in your image, and white stuff turned
gray, but at least the snowy noise disappeared, congratulations - this is expected. You are, after all, subtracting from the pixel values in the original image. This
means that getting the proper exposure in the first place is even more critical if
you want to maximize dynamic range. If you have a severely underexposed
image in which the brightest value is only halfway to the right in the histogram,
you can expect those highlights to move even farther to the left after calibration.
Calibration is not a good way to rescue impossible images; it can only help
reduce the appearance of noise in a well-exposed image that lacks significant photon and random noise.
If your hot (bright) grainy pixels have turned to dark grainy pixels, reduce the
opacity of the calibration layer. You might find that somewhat lesser opacity
results in a good calibration. If no opacity level does any good for your image, it
may be time to blame photon and random noise (possibly exacerbated by the
photo being badly underexposed?), and give up by moving on to Noise Ninja or
the like.
If you find that a few unusually hot pixels in the calibration frame are punching
dark holes in your photo after calibration, you might calibrate using a tool like
the Blackframe NR freeware (http://www.mediachance.com/digicam/blackframe.htm), which detects and corrects this condition.
If you see a moire pattern in your image after calibration - especially in dark portions of the photo - you can take the usual steps to filter this out. You might
protest that the moire wasn't there in your original photo. You are right; it was
obscured by noise. By calibrating out the noise, you are now showing just how
few bits you were using to represent that portion of the photo. You will just have
to deal with this, as it is one of the costs of having a nearly noise-free image.
If your image erupts with a case of what look like JPEG artifacts, clusters of
bright pixels, rivers of dark areas, bad halos, and the like, then something has
gone badly wrong. Possibly your calibration frame was taken with significantly
different camera settings or at a significantly different temperature. Possibly you
have turned on some mysterious (to the author) adaptive or heuristic noise-
reduction feature of the camera or RAW converter, which is making the noise
profile vary wildly. Maybe you mistakenly selected the wrong blending mode
for the calibration layer. Possibly you have tried to calibrate a JPEG. Frequently
when this happens the camera is found to be sharpening the image. Go back
through the process and try to figure out what went wrong. If you can't figure it
out, you might re-convert your RAW frames to linear TIFFs and see if
subtraction works with those.
Advanced calibration methods:
You can take multiple calibration frames if you want. Doing so reduces the impact of random noise in the calibration frame, and results in a better sample of
the repeating bias and dark noise. The way to do it is to take your multiple
calibration frames and "median combine" them. (This is not the same as using a
median filter in Photoshop.) The process of median combination makes a list of
each pixel value in each channel of each calibration frame at each pixel location,
and builds a new calibration frame by selecting the median value from that list
for that pixel and channel. Various software packages (Maxim DL, GIMP) allow
convenient median combination of TIFF images, but as far as I know Photoshop
is not one of them.
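Median combination as described above can be sketched in numpy; the frames here are tiny made-up arrays standing in for full calibration frames:

```python
import numpy as np

def median_combine(frames):
    """Build a master calibration frame from the per-pixel median of a stack."""
    stack = np.stack(frames, axis=0)  # shape: (n_frames, height, width)
    return np.median(stack, axis=0).astype(frames[0].dtype)

# An outlier in a single frame (random noise, a cosmic-ray hit) does not
# survive the median, while the repeatable dark/bias pattern does:
frames = [np.array([[10, 12]]), np.array([[11, 900]]), np.array([[10, 13]])]
master = median_combine(frames)  # [[10, 13]]
```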
You can take your calibration frame at a different exposure time, ISO, or
temperature if you like, and scale the frame accordingly. For example, you can take a calibration frame at the same ISO and temperature, but at twice the
exposure duration. You can then divide the pixel values in that image by two
before calibrating the photo. However, check "complicating factors" above for
why this might not work well.
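The scaling itself is a simple division, and it relies on dark noise really being linear in exposure time; as the text warns, on-sensor noise processing can break that assumption. A sketch with invented pixel values:

```python
import numpy as np

def scale_dark_frame(dark, frame_exposure_s, photo_exposure_s):
    """Rescale a dark frame taken at one exposure time to match another,
    assuming dark noise accumulates linearly with time."""
    return dark * (photo_exposure_s / frame_exposure_s)

# A dark frame shot at 20 s, scaled down to match a 10 s photo:
dark_20s = np.array([40.0, 8.0, 100.0])
dark_10s = scale_dark_frame(dark_20s, 20.0, 10.0)  # [20., 4., 50.]
```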
[1] There are a few sensor types in which photons result in a charge dissipation rather than an accumulation - but I want to limit the scope of this discussion so it is manageable; I'll just assume every sensor works the same way.
[2] Binning is technically impossible in most CMOS sensors, requiring a digital (and post-readout) simulation of binning to achieve. It is however very commonly available in CCDs.
Copyright 2004 Jeff Medkeff
Reader's Comments
Thanks for the exhaustive analysis Jeff. It is a very interesting approach to practical
noise reduction.
I have a question though. Why does the photo of the guy after calibration look lighter
than the one before it? It looks smoothed out, and the face looks less red than in the one
with the noise. Of course, the noisy pic has lots of red speckled noise, so the calibration
might have removed some red in the pic in addition to the red noise speckles.
Any ideas?
-- Gurpreet Singh Bhasin, December 22, 2004
1:1 Crop showing poor noise cancellation.
Thank you very much for this excellent article! It's just the kind of thing I'm always
looking to find more of.
I thought I understood this technique well, but I can't seem to reproduce your results. I
followed your procedure using an almost identical camera (Canon Digital Rebel) -
camera wrapped in cloth, no sharpening in camera or RAW converter, even linear 16-bit
tiffs. I've tried various settings, but for example:
I took two dark frames in a row using your procedure, 10 seconds each at ISO 1600. After a levels exaggeration, they look very much like the pictures of noise you posted
(lots of red splotches). The only problem is, the noise doesn't match up at all from one
frame to the other! Most all of the hot pixels cancel each other out, as expected, but the
"grainy" noise does not match up at all. If I subtract one frame from the other, the noise
doesn't go away, it just changes.
Subsequently, if I use this technique on a real photo, all hell breaks loose. It's definitely not working.
The attached example is a 1:1 crop of what I'm talking about. On the left is a dark frame
taken 10s at ISO 1600. On the right, I've subtracted the second dark frame (same
settings) from it using difference blending mode in Photoshop. Things are different, all
right, but they aren't much better!
I'm clearly missing something here, and I can't figure out what. Anyone want to clue me
in to why my noise varies so terribly much from one photo to the next?
-- David Little, December 22, 2004
Canon 10D, ISO 100, 4900 seconds
Dark frames should be subtracted from raw data prior to the colour demosaic
processing, if this is possible. This is because the demosaic can smear or disturb warm
pixel data. I've attached an example I took a while ago, with an ISO 100 exposure in
excess of an hour(!!) with a Canon 10D. From the top-left, clockwise, you can see the
unprocessed frame, the dark frame, the dark-subtracted frame, and, because it was an
under-exposed image (my camera battery died), a "gain" frame to make up for the loss.
-- Walang Pangalan, December 22, 2004
David, I guess noise varies from photo to photo because noise is a random occurrence. I
don't know if noise is predictable; if it were, then we could make all photos noise-proof.
-- Gurpreet Singh Bhasin, December 22, 2004
> It is now very common for a CMOS sensor to include noise-reduction
> circuitry directly on the sensor alongside the readout amplifier.
> In some designs, a sort of small dummy photosite, shaded from
> light, is used to quantify the likely dark noise level in the
> actual photosite, and this quantity is subtracted during readout.
I might add that some CCDs -- I'm not sure about the particular ones used in the
cameras of interest here -- have some number of light-shielded pixels at the end of each
row which can be used in a like manner. These pixels would be read out like other
pixels and then used as a measure of the bias + dark noise to be subtracted from the
image pixel values in that row.
It's also worth mentioning that dark noise can decrease dynamic range in very long
exposures since the pixels are thus partially filled up with thermal electrons, leaving less
space for the electrons coming from the (faint) image. This is why CCDs used in
astronomy are cooled -- to -30 or -40 Celsius in the 0.09MP camera I built some time
back. Anyone taking very long exposures with an uncooled camera would be advised to
try to come up with their full well potential (how many electrons each pixel can hold
before it's full) and the rate at which thermal electrons are generated to get an idea of the
limits and how much dynamic range is available at a given exposure time.
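The commenter's back-of-the-envelope calculation might look like this; the full-well capacity and dark-current figures are invented for illustration, not taken from any real sensor:

```python
def usable_well(full_well_e: float, dark_current_e_per_s: float,
                exposure_s: float) -> float:
    """Electrons left for the image after dark current partially fills the well."""
    return max(full_well_e - dark_current_e_per_s * exposure_s, 0.0)

# A hypothetical 40,000 e- well with 5 e-/s of dark current: after a one-hour
# exposure, barely more than half the well remains for the (faint) subject.
remaining = usable_well(40_000, 5.0, 3600)  # 22000.0
```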
To reduce readout noise in astronomical cameras the pixels are read very slowly to
allow the amplification electronics and the A/D converter to settle. I'd bet commercial
cameras involve compromises between noise and readout speed.
The other common technique used in astronomy to reduce noise even further is to take a
dark-light-dark sequence and average the two dark frames before subtracting from the
light frame. Besides improving the statistics it removes any linear drift in the
electronics.
-- Chris Wetherill, December 23, 2004
The Canon 20D has a dark frame feature built in that can be switched on and off for
long exposures. I assume the 20D is doing the calibration you describe in camera, but is
there any advantage to manually making a dark frame in addition to the 20d dark frame
processing?
-- Shaun O'Boyle, December 23, 2004
What a wonderful article, Jeff! I'm quite impressed by the technique. My mind is buzzing with ideas and comments.
While I think that this is a wonderfully effective technique, one limitation is the
inconvenient requirement of taking a dark photo at the same time (really the same
temperature) as the image to be calibrated. Perhaps there's another way. I think that it
would be possible to write a program that would characterize the noise from each RGB
pixel by a one-time calibration procedure that would take dark photos at each ISO
setting and a grid of temperatures. Given the exposure time (t), temperature (T), and ISO setting of the image to be calibrated, the program could predict the noise level at
each pixel--that is, it could automatically generate the calibration image. The calibration
image could be subtracted as you described so well above.
The noise model could be the sum of the two predictable sources that you mentioned,
calculated at each pixel. It could be completely characterized by just a few global
parameters and a manageable number of pixel specific parameters. Mathematically, it
might look something like this:
expected_noise(x,y,color,t,T,ISO) = (c1(x,y,color) + c2(x,y,color) * t) * exp(c3*T) +
c4(x,y,color) + c5(x,y,color) * exp(c6*ISO)
where x,y are the sensor pixel coordinates, color specifies the RGB channel, and
c1(x,y,color), c2(x,y,color), etc. are parameters to be defined momentarily.
The first two terms try to describe the dark noise: each pixel has some fixed noise
[c1(x,y,color)] and some noise that is linear in time [c2(x,y,color) * t], and both of these
are multiplied by an exponential temperature-dependent factor [exp(c3*T)]. I suspect
that the temperature-dependent part is independent of pixel coordinates (hence c3 is
only one parameter, not one for each pixel) since this noise probably arises from
spatially-independent material properties of the detector.
The last set of terms in the above equation attempt to model the bias noise, which is due
to the amplifiers. In general, one could imagine that the amplifiers have both offset and
gain errors. I therefore included a constant term [c4(x,y,color)] and a term exponential
in amplifier gain/ISO [c5(x,y,color) * exp(c6*ISO)]. I've assumed that all amplifiers are
created equal (that is, c6 is independent of pixel coordinates), but this may be erroneous.
Besides the global parameters (c3,c6), there are a lot of pixel-specific parameters that
have to be determined. In my model, the number of parameters is four (c1,c2,c4,c5)
times three (R,G,B) times the number of sensor pixels. So, for a 6MP Digital Rebel, that's
72 megaparameters! While this is a lot(!), I think that one could write a computer
program that would automatically determine all of them with reasonable accuracy and
in finite time. This program would take several dark photos at each ISO setting, and for
several temperatures. (In practice the temperature grid could be obtained by chilling the
camera in a freezer and taking the calibration photos as it slowly warms to ambient.)
The parameters would be determined by a least-squares fit. Additional calibration
images would more precisely determine the parameters. One could also see how
accurate the model is by comparing predicted and actual noise values by some statistical
measure (perhaps chi-square).
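To make the fitting step concrete, here is a minimal sketch (all numeric values invented) of the per-pixel fit for a single pixel/channel. If the global parameters c3 and c6 are held fixed, the model above becomes linear in c1, c2, c4, c5, so ordinary least squares recovers them:

```python
import numpy as np

# Assumed global coefficients (c3 for temperature, c6 for ISO); in a real
# calibration these would be fit jointly, but fixing them makes the model
# linear in the per-pixel parameters.
C3, C6 = 0.05, 0.002

def design_matrix(t, T, iso):
    """Columns multiply c1, c2, c4, c5 in the noise model above."""
    return np.column_stack([
        np.exp(C3 * T),         # c1 * exp(c3*T)
        t * np.exp(C3 * T),     # c2 * t * exp(c3*T)
        np.ones_like(t),        # c4
        np.exp(C6 * iso),       # c5 * exp(c6*ISO)
    ])

# Synthetic calibration grid: exposure times x temperatures x ISO settings.
tt, TT, ii = np.meshgrid([1., 30., 300., 900.],
                         [-10., 0., 10., 20., 25., 30.],
                         [100., 400., 1600.], indexing="ij")
t, T, iso = tt.ravel(), TT.ravel(), ii.ravel()

rng = np.random.default_rng(0)
true_params = np.array([2.0, 0.01, 29.0, 0.5])     # "true" c1, c2, c4, c5
A = design_matrix(t, T, iso)
observed = A @ true_params + rng.normal(0, 0.05, t.size)  # noisy dark data

fitted, *_ = np.linalg.lstsq(A, observed, rcond=None)     # recovered params
```

Repeating this fit for every pixel (and, in an outer loop, adjusting c3 and c6) is exactly the least-squares procedure described above.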
To implement the program outlined above, it would be nice if the camera recorded the
temperature in the EXIF. From a design standpoint, it would be easy to measure the temperature near the imager; many microcontrollers provide temperature almost as a
throwaway feature, and there are lots of tiny temperature sensing chips that could be
incorporated into future cameras. (Does anyone know of a camera that already reports
temperature in the EXIF?) On the other hand, I wonder if it would be possible to
calculate (!) the temperature that a given image was taken at based on the noise profile.
I envision calculating the distribution of some statistical measure of noise (perhaps the
correlation function of brightness between nearest neighbors?) and fitting the temperature to that distribution. Since dark noise is exponential in temperature, it should
be relatively easy to fit. Of course this distribution function partially depends on the
subject of the image, but it still seems possible. Perhaps the distribution could be
calculated only in the smoothest areas of the photo. Does this idea resonate with
anyone?
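As a toy illustration of the temperature-from-noise idea (every number here is invented): if the mean dark level really followed d0 * t * exp(c3 * T), then a measured dark level plus a known exposure time would pin down the temperature by inverting the exponential:

```python
import math

# Hypothetical dark-current model parameters (made up for illustration).
D0, C3 = 0.02, 0.05   # electrons/sec at 0 C, and temperature coefficient

def dark_level(t, T):
    """Mean dark signal for exposure time t (sec) at temperature T (C)."""
    return D0 * t * math.exp(C3 * T)

def infer_temperature(dark, t):
    """Invert the model: recover T from a measured mean dark level."""
    return math.log(dark / (D0 * t)) / C3

# Round trip: simulate the dark level at 18 C, then recover 18 C from it.
level = dark_level(1800, 18.0)
T_est = infer_temperature(level, 1800)
```

A real implementation would of course have to use a more robust statistic than the raw mean, estimated from smooth regions of the image as suggested above.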
I wonder if some of these ideas have been integrated into camera firmware, raw
processing programs, or noise removal programs. I haven't read much on the subject.
Does anyone know?
-- Andrew Howard, December 23, 2004
Isn't this the same as what many Canon DSLRs already do when you turn the NR on?
-- Tommy Huynh, December 23, 2004
While all these schemes are interesting, they don't address photon noise, which is
probably the major noise source for normal images. To reduce photon noise (and
random noise), the usual noise reduction programs are pretty effective, but dark noise
subtraction does nothing (as is pointed out in the article).
My brief experiments (with a 20D, not a 10D) seem to show that fixed dark noise is, in
fact, very low and in most normal images the effects of dark frame subtraction are subtle
at best, more often invisible.
It does slightly improve grossly underexposed (-3 stops) images where dark noise is a
significant fraction of the total signal, but even there it's not a huge effect and the
images are still very noisy, as expected.
-- Bob Atkins (www.bobatkins.com), December 24, 2004
Dark current is quite low in general use: short exposures at moderate temperatures.
You'll notice that he's getting really noticeable results in exposures over half an hour -
not a typical shot for most of us.
However it's real. The dark noise characteristics for sensors are often given in terms of
electrons/sec at a specific temperature, then they give a doubling rate in degrees C, that
is 'the dark current will double if you increase the temperature by this many degrees for
a given exposure length'. Of course, if you decrease the temperature by that much then
the dark current will decrease by a factor of 2. Most high-end scientific cameras are
cooled, often surprisingly cold.
With my 10D at about -25C and iso 3200 I saw roughly the same noise that I see at 22C
- on a typical daylit exposure, so most of that noise must be shot (photon) noise and
electronic (readout etc) noise.
On a 30 minute exposure, however, you should have averaged out the shot noise fairly
well - but the dark current will take over. Other electronic and random noise will be at roughly the same level.
Dark current accumulation is linear in time - that's why it's given in electrons/sec. This
means that with two measurements, say one at 1 minute and another at 30 minutes, you
should be able to interpolate and extrapolate very easily to get a dark reference for any
other time you'd like.
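That interpolation is a one-liner per pixel. A sketch, assuming the simple per-pixel model dark(t) = bias + rate * t (values below are synthetic, not real sensor data):

```python
import numpy as np

def dark_model(dark_a, t_a, dark_b, t_b):
    """Fit bias + rate*t per pixel from two dark frames at times t_a, t_b."""
    rate = (dark_b - dark_a) / (t_b - t_a)   # electrons/sec per pixel
    bias = dark_a - rate * t_a               # t=0 intercept (readout bias)
    return bias, rate

def predict_dark(bias, rate, t):
    """Synthesize a dark reference frame for any exposure time t."""
    return bias + rate * t

# Synthetic 2x2 sensor with known bias and rate; frames at 60 s and 1800 s.
true_bias = np.array([[29., 30.], [31., 28.]])
true_rate = np.array([[0.02, 0.05], [0.01, 0.03]])
d60 = true_bias + true_rate * 60
d1800 = true_bias + true_rate * 1800

bias, rate = dark_model(d60, 60, d1800, 1800)
d600 = predict_dark(bias, rate, 600)   # interpolated 600 s dark reference
```

In practice you would average several frames at each exposure time first, so the per-pixel rate estimate isn't dominated by random read noise.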
It's a worthwhile technique if you're in the habit of long exposures (say in
astrophotography where all the CCD stuff was originally developed).
There may be some confounding factors that I haven't delved into with consumer cameras where the image is the final product, and we're not making quantitative
measurements of how much light is striking the sensor. In the image processing, the
goal is often to get the image to 'look like film', so there may be a variety of look up
tables or functions (log?) applied to the pixel data. This would change the darkfield
correction to something other than simple subtraction, but subtraction may be good
enough for most applications.
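A quick numerical illustration of why the subtraction belongs before any gamma or tone curve (the gamma value here is just illustrative; real converters apply more complex curves):

```python
# Toy gamma encoding, standing in for the converter's tone curve.
GAMMA = 1 / 2.2

def encode(linear):
    return linear ** GAMMA

signal, dark = 100.0, 20.0
raw = signal + dark                  # what the sensor actually records

correct = encode(raw - dark)         # subtract in linear space, then encode
wrong = encode(raw) - encode(dark)   # subtract after the tone curve
```

The `correct` value equals `encode(signal)` exactly, while `wrong` lands well below it: after a non-linear curve, simple subtraction of the encoded dark frame over-corrects.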
-- Arunas Salkauskas, December 24, 2004
Nice article. Thanks, Jeff!
As others have noted, several DSLRs have this feature built-in. In fact, I think they do it
the right way: before raw conversion. Raw conversion can decrease the effectiveness of
this procedure in a few ways. You mentioned some, but I think there are at least 2 others
that are significant: the demosaicing algorithm may be adaptive, resulting in
inconsistent profiles in the converted images; and gamma correction and tone curve
application will introduce non-linearities that make dark frame subtraction inaccurate.
It'd be nice if there's a tool that performs this NR on raw files -- does anyone know of
one? I'm not aware of one, so I'm actually looking into writing one myself.
I don't know how much of a role demosaicing, etc. played, but I couldn't make this work with my D70 files & ACR. I only tested it with short duration (1/60s) exposures, so
that's probably the main reason.
-- Zhi-da Zhong, December 24, 2004
Very interesting article. I love low light photography, which has been a challenge using
my Nikon D70 for the last year. I have gotten better in my technique - one thing I
discovered is the special Noise Reduction for Long Exposures mode on my D70. That
mode seems to use the slow readout of the electrons that was mentioned above in the
article. The write time seems to be directly related to the exposure time. It does a good
job of cleaning up long exposures. This may be why the above poster had trouble
achieving useful results using his D70. The D70 seems to do a good job in-camera of
minimizing noise.
I do think I will try this technique on my next low-light photography assignment where
I cannot afford the extra delay of the in camera Noise Reduction technique. Once I pick
my exposure time, I should simply take the dark frame reference shot and shoot the rest of the shoot normally - correct? Everything else is done back at my Mac, post-shoot -
right?
Again, a very insightful article. It is an exciting time to be in photography - so many
new techniques are being developed which leverage the new possibilities of digital
capture.
-- Robert Campbell, December 24, 2004
That mode seems to use the slow readout of the electrons that was mentioned above in
the article.
Slow readout doesn't help reduce thermal noise (the main type of noise in long
exposures). The D70 actually does dark frame subtraction: same idea as the procedure
above, but applied to the raw data directly. That's why the "processing" time increases
with exposure time -- the camera needs to take a second exposure w/ the shutter closed.
-- Zhi-da Zhong, December 25, 2004
So the D70 already does the Dark Frame technique....would this mean I'd gain no
benefit from using the technique described in the article when I had the Noise Reduction
mode enabled? Or maybe double Dark Frame subtraction might do an even more
thorough job? Or just amplify the random noise left after the first Dark Frame done in-
camera?
I don't know that I'll ever take the time to try it myself, but it would be interesting to see
the effect of trying the built in Dark Frame versus the manual technique described here.
I have to give props to Nikon for including such a nifty feature on such an affordable
camera. I have been VERY pleased with my D70. If only it could meter AI lenses and had a mirror lock-up (and maybe a couple more dedicated buttons & dials), I wouldn't
even give a crap about the D2X!
-- Robert Campbell, December 26, 2004
One more point to consider is that if you simply subtract two images from each other
you are theoretically increasing the noise in the resulting image. So if you have no
significant dark-current problems then you will simply _amplify_ the random noise by
subtracting.
Why is this? Well, suppose the random noise in the image is normally distributed with a mean of 0. Then subtracting one dark image from another will give you an image
without the stationary (structured) noise, but what about the random noise? Well, since
it's all over the map (sometimes negative, sometimes positive) subtracting is basically
identical to adding - so you'll have increased the noise! It isn't doubled, but it'll increase
by a factor of root 2.
You can reduce the random noise by taking say 32 (or 256...or more) dark frames and averaging them together. Averaging them will retain the stationary component of the
noise but reduce the random noise by a factor of the square-root of the number of
frames averaged. (Adding the images together increases the noise by sqrt(n), then you
divide by n to get the average).
So subtracting a _single_ shot darkfield image is _only_ helpful if you have a LOT of
dark current compared with the random noise.
The recommended approach would otherwise be:
New Image = Old Image - Average of 32 Dark frames
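The sqrt(2) penalty and the sqrt(n) improvement are easy to verify numerically. A sketch with pure random read noise (mean 0, sigma = 10 ADU, synthetic data):

```python
import numpy as np

rng = np.random.default_rng(1)
sigma, n_frames, n_px = 10.0, 32, 200_000

image = rng.normal(0.0, sigma, n_px)        # random noise in the image
single_dark = rng.normal(0.0, sigma, n_px)  # one dark frame: fresh noise
stack = rng.normal(0.0, sigma, (n_frames, n_px))
master_dark = stack.mean(axis=0)            # noise reduced to sigma/sqrt(32)

std_single = (image - single_dark).std()    # ~ sigma * sqrt(2), i.e. worse
std_master = (image - master_dark).std()    # ~ sigma * sqrt(1 + 1/32)
```

With no fixed-pattern component at all, subtracting the single dark frame inflates the noise by about 41%, while subtracting the 32-frame master costs only about 1.5%.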
-- Arunas Salkauskas, December 28, 2004
I would like to get a quantitative feel for these effects. Does anybody have numbers of
how many photons will be collected per pixel at a given light intensity (and Fstop,
exposure time, ASA setting, output value in a 16-bit raw image)? In comparison what are
the estimated noise numbers in output values? Can anybody quote a source? Am I right
to presume that we are talking about a few (1-100) photons per pixel at say 1600 ASA
in the darker image areas of a hand held shot just giving detail? Thanks Walter
-- Walter Schroeder, December 29, 2004
A couple replies to what has been posted so far, but first, in the article I wrote this:
If you see a moire pattern in your image after calibration - especially in dark portions
of the photo ....
I meant to refer to posterization, not moire.
Now, to respond to various folks:
Why does the photo of the guy after calibration look lighter than the one before it?
It doesn't look particularly lighter to me, and in the originals the pixel values are
definitely lower, not higher. The smoothness is simply the result of sweeping away the
speckle.
but I can't seem to reproduce your results.
That's a common problem, and there are several possible reasons for it, some of which
I offered. Sometimes I can't reproduce these results on specific 10D images. 1Ds images
never seem to have a problem. This comment is probably on the right track:
Dark frames should be subtracted from raw data prior to the colour demosaic
processing, if this is possible.
And more on the same topic, very intelligently said:
the demosaicing algorithm [in raw conversion] may be adaptive, resulting in inconsistent profiles in the converted images; and gamma correction and tone curve
application will introduce non-linearities that make dark frame subtraction inaccurate.
The more playing around I do with my lower-end camera, the more convinced I am that
when I can't get a calibration, things went bad at the demosaicing step. My impression is
that when I apply conversion settings defined on one image to a second one in my
software, I should sidestep gamma and curve problems. But it could be my software
documentation is misleading. I wish I had the source code.
It'd be nice if there's a tool that performs this NR on raw files -- does anyone know of
one?
I have heard that a product called Sharp RAW from Logical Designs does raw
subtraction, but I have never used it. I in no way present this as a discouragement to
your writing your own software - this package does a lot of other stuff and probably
costs accordingly.
I guess noise varies from photo to photo because noise is a random occurrence. I don't
know if noise is predictive; if it were then we could make all photos noise proof.
To deal with noise, we aren't really all that interested in finding out if noise is random or
not; we are interested in whether it is randomly varying or not. Randomly varying noise
is impossible to calibrate or remove, and some sources of noise are randomly varying.
For example photon noise, though it behaves predictably, randomly varies from shot to
shot; while readout noise behaves predictably and does not vary from shot to shot (at
least it doesn't vary by much).
We are also interested, of course, in whether the residuum of remaining noise in an
image varies randomly, or not, after the in-camera NR gets done with it. Where in-
camera NR is efficient, we would expect calibration to be futile, and where it is
inefficient, we would expect it to be helpful.
I assume the 20D is doing the calibration you describe in camera, but is there any
advantage to manually making a dark frame in addition to the 20d dark frame
processing?
None that I can think of, except the possibility of speeding up your shooting time by
using a calibration frame from the bank instead of taking one on the spot.
I wonder if some of these ideas [about characterizing a sensor and then using a static
model for calibration thereafter] have been integrated into camera firmware, raw
processing programs, or noise removal programs. I haven't read much on the subject.
Does anyone know?
Yes, they have. Some if not all cameras do something like this between the exposure
and writing the raw file; in addition calibration frame libraries and scaling methods are
common amongst scientific users of imagery, and several programs are out there that
build noise profiles of a sensor to use in 'rough and ready' calibrations. Other algorithms
simply describe noise rather than characterize it, and remove it accordingly.
Isn't this the same as what many Canon DSLRs already do when you turn the NR on?
I would hope, and have every reason to believe, that the technique I provide is
significantly more crude than what happens in-camera. The only reason the technique
here is of any interest at all, in my opinion, is that at the present time only a minority of
the cameras in peoples' hands have the feature you describe.
So the D70 already does the Dark Frame technique .... maybe double Dark Frame
subtraction might do an even more thorough job?
Double dark frame subtraction will definitely not do a better job. If it does, you are probably doing something wrong.
In comparison what are the estimated noise numbers in output values?
In output values? Depends a lot upon how you convert the images to 16 bits, I'd
suppose.
For the 10D I'd say half the pixels are more and half less than 65 ADU and in the 20D
half are more and half are less than 40 ADU in terms of dark noise after 300 seconds at
room temperature. I've measured my 10D mean bias at 29 ADU and I've measured a
20D at 31. (This is insignificantly different.) I believe (contra Bob) that bias and pattern
noise predominates in normal shooting circumstances.
Can anybody quote a source?
Christian Buil has measured and published similar figures. I'm sorry that the
methodology there has a rather different emphasis than a conventional photographer
would like, since the application is different. You can also get a sense of just how
strongly oversimplified my article is from his analysis.
Am I right to presume that we are talking about a few (1-100) photons per pixel at say 1600 ASA in the darker image areas of a hand held shot just giving detail?
Yes. I think people who have tried to measure it have come up with something like a
gain of 3.5 electrons/ADU for the 10D and 3 electrons/ADU for the 20D at iso 400.
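Using those quoted gains, a rough back-of-envelope for the question above (treating the gain figure as approximate):

```python
import math

GAIN_10D = 3.5   # electrons per ADU at ISO 400, as quoted above (approximate)

def shot_noise_adu(signal_adu, gain):
    """Photon shot noise, expressed in ADU, for a given raw signal level."""
    electrons = signal_adu * gain        # convert signal to collected electrons
    return math.sqrt(electrons) / gain   # Poisson noise sqrt(N), back to ADU

# A deep-shadow pixel reading 10 ADU holds only ~35 electrons, so shot noise
# alone is about sqrt(35) ~ 5.9 electrons, i.e. ~1.7 ADU - a sizable fraction
# of the signal, and no dark frame subtraction can remove it.
electrons = 10 * GAIN_10D
noise_adu = shot_noise_adu(10, GAIN_10D)
```

So yes: in the darker areas of a high-ISO hand-held shot, you really are counting photons in the tens, and shot noise dominates there.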
-- Jeff Medkeff, January 6, 2005
I tried this and it seems to add noise. I don't know what I am missing here, but I shot a
photo of a coworker at his desk. My exposure was 1/20 at f4 and 1600 asa. I then put
the cap on, closed the cover on the eyepiece, and fired another shot right after it at the
same exposure. I opened them both using the raw converter in photoshop cs. I opened the second one and selected "same as previous conversion" for the settings. I did not use
any sharpening either in the camera or raw converter. When I pasted the black image
over the original and converted the layer to Difference mode, the noise got worse.
Image:Untitled-1.jpg
-- David Campbell, January 12, 2005
I have a Nikon Coolpix 4500. It has a noise reduction setting, which is a dark frame
subtraction method.
-- Asit Jain, January 17, 2005
I've done a little more looking into sources of success and failure in calibrating Canon
raw files from my 10D. Basically, I sent around six raw images and their respective
calibration frames to users of various software and checked the results. We've only tried
normal (not linear) conversions so far. Here's what I've learned:
Adobe CameraRAW: Never seems to work on any image; the raw conversion process is doing something to the calibration frame that is very different from
the image frame.
Canon software (incl. Breezebrowser): Works nicely on most images with dark
backgrounds (e.g., astrophotographs); not as nicely when there are broad areas
of lighter tones (e.g., aurora); and fails utterly on images with tonal values
similar to that of normal snapshots.
Phase One CaptureOne: Results similar to Adobe RAW.
Iris: Works on any image.
Pursuant to the above discussion about doing subtraction prior to raw conversion, I'd
note that Iris does just this. It is also freeware.
-- Jeff Medkeff, January 22, 2005
>> I tried this and it seems to add noise.
That's correct - at high ISO settings and short exposures, such a simple process will
actually increase the noise in the image. You've got almost no dark current in such a
short exposure, so when you subtract you're actually just doing an additive operation
with more noise.
The only place where you'll find dark image subtraction helping is where the noise you
see is actually always appearing in the same location and in the same amounts.
Electronic read noise and photon shot noise, which are the problems in high ISO
exposures, are not stationary in this way, so the only way you can correct for them is
with some sort of filtering.
-- Arunas Salkauskas, January 24, 2005
Last night I did some dark frame correlation tests on my 10D. I used David Coffin's
decompress program to get the actual sensor data from RAW files. I shot ten frames at
ISO=3200 at 1/4000s, 1/30s, 1s, 6s, and 30s, at freezer temp (-4 C?) and at room temp with the camera body's lens cover on. Then I treated the resulting images as a
~6,500,000 element vector, and ran dot products between the (ten frames averaged
together, then normalized) and individual frames.
For each individual frame, I subtracted the correlated noise.
Results:
- frames with subtracted correlated noise had a norm of just 7% of the original frames. I
take this to mean that about 93% of the noise can be eliminated. This was very
repeatable.
- there was very good correlation between frames at different temperatures and different
exposure times, but correlation within frames sets at the same temp/exposure was
marginally better. This suggests that most of the noise varies linearly with a function of
temperature and exposure.
- absolute values of sensor data did not appear to change much over this temperature variation.
Note that even if different pixels respond differently to temperature, there is no need to
know the temperature to subtract out the noise. Instead, we can compare the dot
products of several normalized dark frames at different temperatures and the picture to
be improved. The largest dot product signifies the frame closest to the noise profile of
the actual image, and that dark frame's correlated noise can be subtracted. As a bonus,
we get a rough measure of the camera's temperature when the picture was taken!
I'll try some further experiments later to check correlation between different ISO
settings.
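For anyone who wants to reproduce this, here is a stripped-down sketch of the projection step (synthetic data standing in for real 10D frames): treat the averaged dark frame as a unit-norm template, find each frame's coefficient by dot product, and subtract the scaled template.

```python
import numpy as np

rng = np.random.default_rng(2)
n_px = 10_000
pattern = rng.gamma(2.0, 5.0, n_px)   # synthetic fixed-pattern "hot pixel" map

# Ten dark frames: the same pattern (scaled), plus fresh random read noise.
frames = [1.5 * pattern + rng.normal(0.0, 1.0, n_px) for _ in range(10)]

template = np.mean(frames, axis=0)
template /= np.linalg.norm(template)   # normalized template vector

frame = frames[0]
coeff = frame @ template               # dot product: projection coefficient
residual = frame - coeff * template    # correlated (fixed) noise removed

fraction_left = np.linalg.norm(residual) / np.linalg.norm(frame)
```

With pattern noise dominating, the residual norm comes out at a small fraction of the original frame's norm, much as in the ~7% figure reported above.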
-- Iain McClatchie, February 18, 2005
Shaun: I assume the 20D is doing the calibration you describe in camera, but is there
any advantage to manually making a dark frame in addition to the 20d dark frame
processing?
Jeff: None that I can think of, except the possibility of speeding up your shooting time
by using a calibration frame from the bank instead of taking one on the spot.
Well, if the camera takes a single dark frame, and there is any nonrepeatable error in
taking that frame, this error gets added to your image. If instead you use a banked dark
frame shot which is the average of many actual dark frame exposures, you average
away these nonrepeatable errors and do not add this kind of error to the shot.
As for whether *both* kinds of dark frame subtraction can be useful, I think the answer
is: maybe. You could first subtract the banked image, hoping that this would take care
of the bulk of the fixed-pattern noise. You could then subtract the *correlated* noise of
the dark frame snapped directly after the actual image. This would find any noise
repeated from image to the immediate successor frame, which was not repeated in the
banked dark frame shots (such error would come from some hypothetical slowly-varying but random process).
As this successive frame would not have to take out the bulk of the fixed-pattern noise,
the weight would be much lower. This would contain the error added, but probably
sharply limit any benefit. My guess is that the random noise would probably swamp any
slow-varying error, and you would see no benefit (weight would be zero).
-- Iain McClatchie, February 18, 2005
© 2000-2005 Luminal Path Corporation and contributors. Contributed content used with permission.