Matt Johnson

Computer Science Senior Seminar Research Paper

University of Jamestown

Anti-Aliasing in Computer Graphics

Spring 2015

Page 2: ResearchPaper_Final

Introduction

When attempting to render images onto a screen in computer graphics, various unwanted

artifacts may show up. These artifacts include jagged edges, noise patterns, and missing details. The main reason they appear is a restriction present in all digital imagery: pixels. These artifacts are the result of what is called aliasing, and

can be further examined using Fourier analysis. The process of reducing the effect of these

artifacts on our eyes to make the image more appealing is called anti-aliasing. There are

numerous ways to go about anti-aliasing depending on hardware restrictions and needs of the

software that is performing the task. In this paper, I will explain these topics in further detail. First I will examine aliasing: what causes it and some of its effects. Then I will briefly

describe how a signal is converted into digital format and how this can cause problems to arise.

Next I will look at an approach to solve these problems. Finally I will touch on a handful of

recent research and anti-aliasing implementation methods.

Aliasing

In image digitization, the problem involves converting an image from a continuous space

to a discrete space, i.e. from analog to digital. Think of an object in the real world such as a

spherical ball: it is perfectly round. Now, if we try to represent that same ball on a computer

screen, we find that we can never get it to be perfectly round just as it is in the real world because

of the restriction that is caused by pixels. This is because pixels are arranged in a discrete grid, and each pixel can only display one color at a time. The software that is converting the image into a

digital format will see the boundary between the ball and its background crossing through some

pixels, where part of the pixel is occupied by the ball and part of it is occupied by its


background. Since each pixel can only have one color in it at a time, the software must decide to

color it with the color of the ball or the background. With no anti-aliasing, the result is a jagged,

stair-like pattern where a smooth, curved line should be. These effects are also known as

“jaggies.”

Fig. 1: Jaggies (Aliasing, n.d.)

Another common result of aliasing in computer graphics is something called a moiré

pattern. Moiré patterns are described as interference patterns that are produced by overlaying

similar, but slightly offset templates (Weisstein, n.d., Moiré Pattern). In the case of computer

graphics, the two templates considered are the desired image to display and the sampling pattern

used to display it. These patterns appear frequently when displaying circles on a computer screen. Fig.

2 shows some examples of moiré patterns found in concentric circles. Notice the phantom lines

that show up due to the discretization of the finite-sized pixels. These lines will even seem to

appear differently on screens with varying resolutions. It is clear that aliasing in computer

graphics can produce some undesirable results. So how is aliasing taken care of? The first step is

identifying the problem.


Fig. 2: Moiré patterns in concentric circles (Weisstein, n.d.)

Sampling and Reconstruction

As was mentioned earlier, images are displayed on a screen with pixels in discrete space,

and the display is a representation of the “real” image in continuous space. In order for the

conversion between the two to take place, the continuous-space image must be sampled. Now,

the way the image is sampled can vary depending on the method used to render the image from

3D into 2D, but the simplest form of sampling is called point sampling, where one sample of the

scene is taken per pixel on the screen. Since digital images are defined in 2D discrete space and

can be tricky to understand at first, I will start this discussion by looking at one-dimensional

signals in order to keep it relatively simple. I will, however, get into digital image signals after

the basic concepts have been touched on.

Fig. 3: The input signal multiplied by the comb function

yields the sampled function (Term, 2003)


To sample a signal, we must acquire values from the original signal at regular intervals. To accomplish this, we utilize the comb function:

\mathrm{comb}(x) = \sum_{n=-\infty}^{\infty} \delta(x - nX) \qquad (1)

The comb function is an infinite set of Dirac delta functions located at integer multiples of X.

The comb function is also referred to as an impulse train. To acquire the signal values, we

multiply the continuous input signal by the comb function. This results in the value of the

original signal at each of the sampling points.

f_s(x) = \sum_{n=-\infty}^{\infty} \delta(x - nX) \cdot f(nX) \qquad (2)

Fig. 3 shows the sampling process for a simple input signal. Now that the function has been

sampled, we have a sequence of numbers – called samples – and nothing more. This is because

we only have values defined at the sampling points; the original signal could have taken on any value in the space between them. This means that we need to essentially

guess what the value of the signal is between each sample. The process of filling in the values of

the signal in between each sample is known as interpolation, or reconstruction. Fig. 4 shows the

signal in Fig. 3 reconstructed from the samples using a simple linear interpolation, where the

values at the sample points are simply connected with a line to fill in the missing spaces. There

are other, more sophisticated methods of interpolation that are capable of producing smoother

reconstructions than linear interpolation, but the point here is to get the idea across.
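To make the point-sampling and linear-interpolation steps concrete, here is a small Python sketch (my own illustration; the signal, sampling interval, and helper names are assumptions, not taken from any cited source):

```python
import math

def point_sample(f, X, n_min, n_max):
    """Point sampling: record the signal's value at x = n*X for each n."""
    return [(n * X, f(n * X)) for n in range(n_min, n_max + 1)]

def lerp_reconstruct(samples, x):
    """Linear interpolation: connect adjacent samples with straight lines."""
    for (x0, y0), (x1, y1) in zip(samples, samples[1:]):
        if x0 <= x <= x1:
            t = (x - x0) / (x1 - x0)
            return (1 - t) * y0 + t * y1
    raise ValueError("x lies outside the sampled range")

# A 1 Hz sine sampled every 0.05 units (20 samples per cycle).
f = lambda x: math.sin(2 * math.pi * x)
samples = point_sample(f, X=0.05, n_min=0, n_max=20)
error = abs(lerp_reconstruct(samples, 0.125) - f(0.125))
print(error)  # small, but nonzero: linear interpolation is only a guess
```

The reconstruction is exact at the sample points and only approximate in between, which is precisely why the choice of interpolation filter matters.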


Fig. 4: Linear interpolation

What I have just gone over is the basic idea of reproducing an input signal so that it is

ready for output: sampling, then reconstruction. Again, the reason for this is that we need to

convert the continuous signal to a discrete signal so that we have an exact definition of it.

Continuing on, I will dive deeper into the signal processing and frequency domain analysis of the

computer graphics realm.

The Sampling Theorem

We now turn our attention to the sampling theorem and its significance. The Nyquist-

Shannon sampling theorem states that if a signal contains no frequencies higher than W, it can be completely reconstructed from sampling points that occur at a frequency of 2W or greater (Shannon, 1949). Conversely, for a given sample rate f_s, all frequencies greater than f_s/2 must be cut off in order to achieve this complete reconstruction. Since we will generally not be looking at situations in which we must use a constant sampling rate, we will just sum the sampling theorem up with this inequality:

\frac{f_s}{2} \geq W \qquad (3)


where f_s is the sampling frequency (or rate) and W is the highest frequency present in the signal. We call f_s/2 the Nyquist frequency and 2W the Nyquist sampling rate. We will see, in a little bit,

some of the consequences of not holding this inequality true. There are two ways to ensure that

this criterion is met. Either the sampling rate can be increased, or the signal can be bandlimited.

Bandlimiting a signal is the process of filtering out frequencies that are above a certain cutoff

frequency.
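The criterion is simple enough to state in code; the following sketch (the function names are my own) just encodes inequality (3):

```python
def satisfies_nyquist(fs, W):
    """True when fs/2 >= W, i.e. complete reconstruction is possible."""
    return fs / 2 >= W

def nyquist_sampling_rate(W):
    """The minimum sampling rate, 2*W, for a signal bandlimited to W."""
    return 2 * W

# CD audio samples at 44.1 kHz, so content up to 22.05 kHz is representable.
print(satisfies_nyquist(44100, 20000))  # True
print(satisfies_nyquist(44100, 22051))  # False
```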

Fig. 5: A sampling frequency below the Nyquist rate will result in

an alias in the interpolation (Digital Signals – Sampling and Quantization, n.d.)

In Fig. 5, the original signal (green) is sampled at a frequency below the Nyquist rate.

When the sampling values are connected again after interpolation, we notice what appears to be a

lower frequency wave (blue). This new, lower-frequency wave that results from undersampling the signal is called an alias of the original signal. If the two waves in Fig. 5 were both used as input signals, they would be sampled identically at the current sampling rate; even though they are not the same signal, the two reproduced signals would be exactly the same.
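This claim is easy to verify numerically: a sine above the Nyquist frequency and its low-frequency alias produce literally the same sample sequence (a quick check of my own; the particular frequencies are arbitrary):

```python
import math

fs = 10.0               # sampling rate
f_high = 9.0            # above the Nyquist frequency fs/2 = 5
f_alias = f_high - fs   # -1: the low-frequency alias that appears instead

# Point-sample both sines at t = n/fs.
hi = [math.sin(2 * math.pi * f_high * n / fs) for n in range(50)]
lo = [math.sin(2 * math.pi * f_alias * n / fs) for n in range(50)]
print(max(abs(a - b) for a, b in zip(hi, lo)))  # ~0: indistinguishable
```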

Fourier Analysis

We can go further and start using Fourier analysis to examine the input signal. Typically,

a one-dimensional signal is defined in the time domain, and a two-dimensional signal is defined


in the spatial domain. We can use the continuous Fourier Transform to convert and analyze these

functions in the frequency domain. The Fourier Transform of a function, f (x), is given as

\mathcal{F}[f(x)] = F(u) = \int_{-\infty}^{\infty} f(x)\, e^{-i 2\pi u x}\, dx \qquad (4)

where u is the independent variable in the frequency domain.

If we perform this operation on the input signal, we can determine exactly what frequencies are

included in it. The Fourier Transform is also invertible, meaning that we can move back and

forth between the x domain and the frequency domain.

\mathcal{F}^{-1}[F(u)] = f(x) = \int_{-\infty}^{\infty} F(u)\, e^{i 2\pi u x}\, du \qquad (5)

Equations (4) and (5) form a Fourier pair, usually denoted f(x) \leftrightarrow F(u). Fig. 6

shows the result of performing a Fourier Transform on a given signal. In the frequency domain,

it is possible to apply a filter to cut off any unwanted frequencies. If we are trying to bandlimit

the signal to ensure that no frequency is above the Nyquist frequency, we would apply a low-pass filter, i.e. a filter that only allows frequencies below the cutoff to pass through it. A filter of

this type would be applied in the frequency domain by multiplying its spectrum by the spectrum

(Fourier Transform) of the input image, then performing the inverse Fourier Transform on that

result.
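In discrete form, this whole pipeline (transform, cut the high bins, transform back) fits in a short sketch. This is a textbook DFT low-pass filter written by me purely for illustration; a real implementation would use an FFT library:

```python
import cmath
import math

def dft(x):
    """Discrete analogue of equation (4)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Discrete analogue of equation (5)."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def ideal_lowpass(x, cutoff):
    """Zero every bin above `cutoff` cycles per frame, then invert."""
    X = dft(x)
    N = len(X)
    # Bins k and N-k together represent the frequency of k cycles per frame.
    X = [X[k] if min(k, N - k) <= cutoff else 0 for k in range(N)]
    return [v.real for v in idft(X)]

# A 2-cycle sine plus an unwanted 14-cycle component, over 32 samples.
N = 32
sig = [math.sin(2 * math.pi * 2 * n / N) + 0.5 * math.sin(2 * math.pi * 14 * n / N)
       for n in range(N)]
clean = ideal_lowpass(sig, cutoff=5)  # only the 2-cycle sine survives
```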

Fig. 6: Fourier Transform (Term, 2003)

Convolution Theorem


Recall that, at some point, the signal has to be sampled. This means that when we

perform the transform in the input domain, we’re actually performing it on the product of the

input signal and the sampling function (comb function). This leads to some interesting features

of the Fourier Transform. According to the Convolution theorem, convolution in one domain is

equivalent to multiplication in the other domain. This statement holds true for both domains that

we are working with. Put into the form of two equations, this is what the theorem says:

\mathcal{F}[f * g] = \mathcal{F}[f] \cdot \mathcal{F}[g] \qquad (6)

\mathcal{F}[f \cdot g] = \mathcal{F}[f] * \mathcal{F}[g] \qquad (7)

The output of the convolution operator is a function that expresses the amount of overlap of one

function as it is shifted over another function (Weisstein, n.d., Convolution). It can be thought of

as “blending” one function with another. This means that we can compute the Fourier Transform

of two separate functions, multiply their spectra together, and compute that product’s inverse

Fourier Transform in order to get the convolution of the two original functions. This is actually

what is done in the real world when applying filters to signals, only the filter is usually already

defined in the frequency domain. So when we compute the Fourier Transform of our sampled

signal function (input signal multiplied by the comb function), we’re actually computing the

convolution of the Fourier Transforms of those functions separately. We’ve already seen the

Fourier Transform of an input signal in Fig. 6, so we’ll use that as an example again. The Fourier

Transform of the comb function (equation (1 )) from earlier is actually the exact same function,

only the spacing between each separate delta function is 1/X instead of X:

\sum_{n=-\infty}^{\infty} \delta(x - nX) \;\leftrightarrow\; \frac{1}{X} \sum_{n=-\infty}^{\infty} \delta\left(u - \frac{n}{X}\right) \qquad (8)


There is also a normalization constant present, but it is unimportant in our discussion, so I will

ignore it for now.
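The convolution operator itself is worth seeing in discrete form before moving on; the following sketch (mine, purely illustrative) shows the "blending" behavior directly by sliding a box filter across a step edge:

```python
def convolve(f, g):
    """Discrete (full) convolution: accumulate g slid across f."""
    out = [0.0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] += a * b
    return out

# Blending a hard step edge with a 3-tap box filter softens it into a ramp,
# which is exactly the overlap-as-you-slide picture described above.
step = [0, 0, 0, 1, 1, 1]
box = [1 / 3, 1 / 3, 1 / 3]
print(convolve(step, box))
```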

Fig. 7: Multiplication in input domain is convolution in frequency domain (Term, 2003)

Fig. 7 shows what this Fourier Transform looks like. The convolution of the Fourier Transform

of the continuous signal and the Fourier Transform of the comb function will just place replicas

of the Fourier Transform of the continuous signal at evenly spaced intervals. This is what the

equation of this sampled Fourier Transform looks like:

\mathcal{F}[f_s(x)] = F_s(u) = \frac{1}{X} \sum_{n=-\infty}^{\infty} \delta\left(u - \frac{n}{X}\right) * F(u) = \frac{1}{X} \sum_{n=-\infty}^{\infty} F\left(u - \frac{n}{X}\right) \qquad (9)

Sinc Reconstruction

Our goal at this point is to reconstruct the signal in the original domain. Since we’re in

the frequency domain, the next logical step would be to apply a filter. What we want to do is

filter out the replicas in the frequency domain. The ideal filter will let only the copy that is

centered at u=0 pass through. This filter has been determined to be the rectangle, or rect,

function. We will call this function H(u):

H(u) = \mathrm{rect}(uX) \qquad (10)

In order to actually apply the filter, we multiply it by the sampled Fourier Transform, F_s(u), in equation (9). Notice in Fig. 8 that this is actually the exact same function computed as F(u), the


Fourier Transform of the original input signal. Therefore, F(u) = F_s(u) \cdot H(u), which means that \mathcal{F}^{-1}[F_s(u) \cdot H(u)] = \mathcal{F}^{-1}[F(u)] = f(x). From the Convolution theorem, we can deduce that

f(x) = \mathcal{F}^{-1}[F_s(u) \cdot H(u)] = f_s(x) * \mathrm{sinc}\left(\frac{\pi x}{X}\right) \qquad (11)

where \mathrm{sinc}(\pi x) = \frac{\sin(\pi x)}{\pi x} and \mathcal{F}[\mathrm{sinc}(\pi x)] = \mathrm{rect}(u). Thus we end up with the equation

f(x) = \sum_{n=-\infty}^{\infty} f(nX) \cdot \delta(x - nX) * \mathrm{sinc}\left(\frac{\pi x}{X}\right) = \sum_{n=-\infty}^{\infty} f(nX) \cdot \mathrm{sinc}\left[\frac{\pi}{X}(x - nX)\right] \qquad (12)

Here we have a sinc reconstruction filter applied to the input signal defined in discrete space.

Note that there are no Fourier Transforms performed here. All that needs to be done is determine

f(nX) from the original signal, and find a sampling rate that satisfies the Nyquist criterion.

Although using the sinc reconstruction filter is ideal, it is not always practical to use in a real

application because the sinc function extends to infinity in both positive and negative directions

(Term, 2003). Other interpolation methods exist, and the nice thing is that to use a different reconstruction filter, one need only swap out the sinc function for something else.
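A direct, truncated implementation of equation (12) shows both the idea and the compromise: since the infinite sum is impossible, we keep only a finite window of samples (the function below and its parameters are my own illustration):

```python
import math

def sinc_reconstruct(f, X, x, n_terms=200):
    """Truncated form of equation (12): sum f(nX) * sinc((pi/X)(x - nX)).

    The exact filter needs every sample from -infinity to +infinity;
    truncation to n_terms samples per side is the practical concession."""
    total = 0.0
    for n in range(-n_terms, n_terms + 1):
        t = math.pi * (x - n * X) / X
        total += f(n * X) * (math.sin(t) / t if t != 0 else 1.0)
    return total

# A 1 Hz sine sampled at 10 Hz, reconstructed halfway between two samples.
f = lambda x: math.sin(2 * math.pi * x)
print(sinc_reconstruct(f, X=0.1, x=0.05), f(0.05))  # close, not exact
```

At the sample points themselves the truncated sum is still essentially exact, because every sinc term except one vanishes there.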

Fig. 8: A rectangle filter is used to get rid of replicas in frequency domain (Term, 2003)

Aliasing in the Frequency Domain


So what happens when the sampling rate does not satisfy the Nyquist criterion? One

could easily guess, at this point, that aliasing occurs. Let’s see what happens in the frequency

domain. Now if the sampling rate is decreased, the space between each sample is increased,

meaning that the X variable in the comb function becomes greater. Since a sampling space of X

corresponds to 1/X in the frequency domain, each “tooth” of the comb function in the frequency domain gets closer together (because 1/X gets smaller as X gets larger). This simultaneously puts

the replicas of F (u ) closer together. Fig. 9 shows what this looks like. These overlaps in the

frequency domain cause aliasing when trying to reconstruct the signal because the two adjacent

copies will interfere with the one in the center. Intuitively, this aliasing is taken care of by

increasing the sample rate or by filtering out high frequencies prior to sampling.

Fig. 9: Aliasing in the frequency domain is identified by overlap in replicas (Term, 2003)

Wrap-up

There are many ways to go about performing these anti-aliasing operations, so I will

leave it at this for now. The main idea to get across is that aliasing is caused by insufficient

sampling. Jaggies show up because shape boundaries cause discontinuities in the signal,

introducing infinitely high frequencies. Moiré patterns show up when the signal and the sampling

pattern aren’t matched up well enough. In a later portion of this paper, I will go over a few

implementation methods. Some of these methods include increasing the sample rate, pre-filtering, post-filtering, or a mixture. These methods can be software- and hardware-based. These

methods are chosen by developers based on the type of aliasing they wish to reduce most. There

are two main categories of aliasing: spatial aliasing and temporal aliasing. Spatial aliasing is seen

in an image when what is displayed doesn’t match the true image. Examples of this

include jaggies and moiré patterns. Temporal aliasing occurs in signals that are sampled with

time as a variable. In computer graphics, and specifically real-time environments such as video

games, this is seen when the framerate is below the optimal level. When rendering real-time

graphics, there is always a tradeoff between fidelity and framerate. The more detailed the scene

is, the longer it takes to render and display to the screen. With advances in hardware inside of

GPUs (graphics processing units), developers have been able to utilize methods that create video

games with exceptional quality while maintaining acceptable framerates.

Two-Dimensional Anti-Aliasing

Now that the basic ideas of aliasing and how it is prevented have been touched on, I will

move on to its explanation in the case of images. Up until now, I’ve been using one-dimensional

signals as examples. These are typically used when dealing with sound waves, and use time as

the independent variable. Hence, one-dimensional signals are typically defined in the time

domain as f (t). When dealing with images (computer graphics), we use two-dimensional

signals. These signals are typically defined in the spatial domain as f ( x , y ). That having been

said, the remainder of this paper will deal with signals and their spectra being in the spatial and

frequency domains, respectively.

The good news about two-dimensional signals is that processing them works the same

way as one-dimensional signals in the frequency domain. There are just a few conceptual


differences in both the spatial domain and the frequency domain. The 2D Fourier Transform with

its inverse:

\mathcal{F}[f(x, y)] = F(u, v) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f(x, y)\, e^{-i 2\pi (ux + vy)}\, dx\, dy \qquad (13)

\mathcal{F}^{-1}[F(u, v)] = f(x, y) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} F(u, v)\, e^{i 2\pi (ux + vy)}\, du\, dv \qquad (14)

Instead of a comb function, we will use the 2D equivalent known as the bed of nails function:

\mathrm{bed\;of\;nails}(x, y) = \sum_{n=-\infty}^{\infty} \sum_{m=-\infty}^{\infty} \delta(x - nX)\, \delta(y - mY) \qquad (15)

Fig. 10: Bed of nails function in the spatial domain (Term, 2003)

Fig. 11: Fourier Transform of a sampled image (Term, 2003)


In Fig. 10, the bed of nails function is shown, and in Fig. 11, the Fourier Transform of a

sampled image signal is shown. Again from here, we apply the reconstruction filter:

H(u, v) = \mathrm{rect}(uX) \cdot \mathrm{rect}(vY) \qquad (16)

We then come up with the equation for the input image:

f(x, y) = \sum_{n=-\infty}^{\infty} \sum_{m=-\infty}^{\infty} f(nX, mY) \cdot \mathrm{sinc}\left[\frac{\pi}{X}(x - nX)\right] \mathrm{sinc}\left[\frac{\pi}{Y}(y - mY)\right] \qquad (17)

Just as with one-dimensional signals, we need to make sure the samples agree with the sampling

theorem to prevent aliasing. This means that the sampling rates in both the x and the y directions

must be at least twice the maximum frequencies in their respective directions. Other than

these differences that I briefly described, two-dimensional anti-aliasing is the same as in one

dimension: the goal is to meet the criterion put forth in the sampling theorem. The similarities in

the way aliasing is dealt with between 1D signals and 2D signals are mostly due to the fact that

we can use the Fourier Transform on both of them. Although these signals are typically defined

in different domains, the conversion to the frequency domain allows us to approach these

problems in very similar ways.
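The separability of equation (17) is the practical payoff: the 2D filter is just a product of two 1D sinc filters, one per axis. A truncated sketch of my own (the test signal and sampling intervals are arbitrary choices):

```python
import math

def sinc(t):
    """sin(t)/t with the removable singularity at t = 0 filled in."""
    return math.sin(t) / t if t != 0 else 1.0

def reconstruct_2d(f, X, Y, x, y, n_terms=30):
    """Truncated form of equation (17): a separable 2D sinc reconstruction."""
    total = 0.0
    for n in range(-n_terms, n_terms + 1):
        wx = sinc(math.pi * (x - n * X) / X)       # x-axis filter weight
        for m in range(-n_terms, n_terms + 1):
            wy = sinc(math.pi * (y - m * Y) / Y)   # y-axis filter weight
            total += f(n * X, m * Y) * wx * wy
    return total

# A bandlimited 2D signal sampled on a 0.1 x 0.1 grid.
g = lambda x, y: math.cos(2 * math.pi * x) * math.cos(2 * math.pi * y)
val = reconstruct_2d(g, 0.1, 0.1, x=0.2, y=0.3)  # a grid point: exact
```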

Recent Research

Everything in this paper has led up to actually dealing with the problem of aliasing. A

great deal of research has been put into making computer generated images more appealing to

the human eye. I will now go over a handful of methods that have been published in recent journals. Since these methods are quite complex, I will spare most of the details and briefly explain each, making sure to get the basic idea across.


Introduction

Anti-aliasing techniques can be put into two categories: pre-filtering based and post-filtering based. Pre-filtering based methods focus on filtering out high frequencies prior to sampling in order to use fewer samples. Post-filtering based methods can be

further subdivided into hardware supported and post-process techniques (Jiang et al., 2014). Hardware

supported techniques have utilized the parallelism capabilities of GPUs to create complex

acceleration structures and perform many tasks at the same time. Post-processing techniques are

based on optimizing a reconstruction filter after the samples have already been taken. Fig. 12

shows a schematic of both a pre-filtering and a post-filtering anti-aliasing system.

Fig. 12: Schematic of pre-filtering (a) or post-filtering (b) anti-aliasing (Jiang et al., 2014)

In 1988, Mitchell and Netravali focused their research on reconstruction filters in computer graphics, arguing that prefiltering is not the correct approach there because it results in an implicit definition of the signal, so that explicit signal operations may not be performed. They then introduce two types of aliasing: prealiasing and postaliasing.

Prealiasing occurs as a consequence of undersampling, which causes overlap in the frequency

domain. Postaliasing occurs from poor reconstruction, where the filter in the frequency domain

may allow too much to pass through. They show the spatial effects of various different types of


filters since the sinc filter is not always ideal in every situation due to “ringing” caused by the

Gibbs phenomenon. This paper has pioneered much research in the field, and since then, many

more researchers have focused their efforts on reconstructing signals.

Morphological Anti-Aliasing

In the area of real-time computer graphics, Supersample Anti-Aliasing (SSAA) and

Multisample Anti-Aliasing (MSAA) have emerged as the gold standard solutions. SSAA works

by rendering the scene in a higher resolution than the display has, then downsampling to the

screen resolution. MSAA is an adaptive form of SSAA, and therefore performs faster at the cost of potentially lower quality. These methods can cause a lot of overhead due to their increased-resolution nature, and haven’t been used extensively due to hardware constraints.

Another drawback of these techniques is that deferred shading systems can’t really take

advantage of them. A technique called Morphological Anti-Aliasing (MLAA) was developed by

A. Reshetov in 2009, and sparked a lot more creative techniques. MLAA allows for anti-aliasing

as a post-processing step, and therefore can effectively be used in a deferred shading system.

MLAA works by identifying noticeably different pixels, defining separation lines with

silhouettes, and filtering color based on the pixels intersected by the silhouette lines. Fig. 13

shows an illustration of the main MLAA concepts, where lines b-c-d form a Z-shape and lines d-

e-f form a U-shape, and the bottom part shows how the color propagation works. The article

“Filtering Approaches for Real-Time Anti-Aliasing” describes this original MLAA method in

more detail as well as other, more advanced methods.
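The supersampling idea behind SSAA is simple to sketch: shade each pixel as the average of several subsamples of the underlying scene rather than a single point sample. The toy renderer below is my own illustration (an analytic disc stands in for a real scene) and shows how edge pixels pick up fractional coverage:

```python
def render(width, height, inside, ss=4):
    """SSAA sketch: average ss*ss subsamples per pixel instead of one."""
    image = []
    for py in range(height):
        row = []
        for px in range(width):
            hits = 0
            for sy in range(ss):
                for sx in range(ss):
                    # Subsample positions spread evenly inside the pixel.
                    x = px + (sx + 0.5) / ss
                    y = py + (sy + 0.5) / ss
                    hits += inside(x, y)
            row.append(hits / (ss * ss))  # coverage in [0, 1]
        image.append(row)
    return image

# A disc of radius 6 centered in a 16x16 image: interior pixels are solid,
# exterior pixels are empty, and boundary pixels land in between (gray).
disc = lambda x, y: (x - 8) ** 2 + (y - 8) ** 2 <= 36
img = render(16, 16, disc)
```

With ss=1 this degenerates to point sampling, and every pixel would be all-or-nothing: exactly the jagged staircase described at the start of the paper.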


Fig. 13: Main MLAA concepts (Jimenez et al., 2011)

Subpixel Reconstruction Anti-Aliasing

Another anti-aliasing method that is useful in deferred shading rendering systems is

called Subpixel Reconstruction Anti-Aliasing (SRAA). Outlined by Chajdas et al. (2011), it

combines single-pixel shading with subpixel visibility to create anti-aliased images without

increasing the shading cost. Its sampling scheme mixes two types of samples: it takes four randomly placed samples from a 4x4 grid inside each pixel. All four of these

samples are geometric samples and are stored in a geometry buffer, but one of these samples also

contains shading information. At each geometric sample, bilateral weights from neighboring

shading samples are computed. A neighboring sample with significantly different geometry is

probably across a geometric edge, and is given a low weight. Fig. 14 shows what this would look

like for one subpixel. This deferred shading anti-aliasing method leaves room for error, but the

focus is on rendering speed while maintaining acceptable quality.


Fig. 14: SRAA weight computation for a single subpixel (Chajdas et al., 2011)

Subpixel Morphological Anti-Aliasing

Jimenez et al. (2012) proposed a method for anti-aliasing that combines MLAA strategies

and SSAA/MSAA strategies called Subpixel Morphological Anti-Aliasing (SMAA). It is an

image-based, post-processing anti-aliasing technique that includes new features such as local

contrast analysis, more reliable edge detection, and a simple way to handle sharp geometric

features and diagonal lines. The pattern types extend the MLAA concept to include L-shapes in addition to Z- and U-shapes in order to handle sharp geometric features and to process diagonals. Temporal reprojection is also utilized to prevent residual artifacts in video games,

also called “ghosting”.

Adaptive Sampling

Since rendering speed is a huge concern for real-time environments, adaptive sampling is

very popular. Adaptive sampling will selectively choose areas of the image that require more

samples to fully capture the detail. Chen et al. (2011) developed an adaptive sampling method

for creating a depth-of-field effect in scenes. This method is aimed at getting rid of noise and other artifacts, such as discontinuities and a defocused foreground over a focused background. It uses a blur-size map to determine the sample density in certain areas. Then a


complex multiscale reconstruction filter is implemented. Fig. 15 shows the blur-size map’s role

in reconstruction. The sampling scheme is based on the Monte Carlo method, which means that the samples are randomly placed in order to reduce noise and other structured artifacts.
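The budget-allocation side of adaptive sampling can be sketched very simply. The code below is my own toy version, not Chen et al.'s actual algorithm: it merely distributes a fixed sample budget in proportion to a per-region importance map (standing in for the blur-size map):

```python
def allocate_samples(importance, budget, min_samples=1):
    """Give each region a share of the budget proportional to its
    importance value, but never fewer than min_samples."""
    total = sum(importance)
    return [max(min_samples, round(budget * w / total)) for w in importance]

# Four regions; the third is flagged as needing the most detail.
print(allocate_samples([1, 2, 6, 1], budget=100))  # [10, 20, 60, 10]
```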

Fig. 15: Blur-size map and image reconstruction (Chen et al., 2011)

Approximating the Convolution

Shen Li et al. (2011) even developed a method of anti-aliasing by analytically

approximating a convolution, and not actually computing it. The convolution is between a soft

shadow signal and a filter that has been mapped to shadow space (their technique was only being

used to render soft shadows). Their pseudo convolution involves temporarily interpreting two

different filters as ellipsoidal Gaussians, approximating the variance, then converting the result of the

convolution into a parallelogram box filter. There is a great amount of math involved in getting

the shadows just right at a low computational cost.

Dual-Buffered Filtering


Rousselle et al. (2012) proposed another adaptive Monte Carlo sampling scheme that

hinges on a state-of-the-art image denoising technique. The process includes adaptively distributing samples in the scene based on how much detail is needed, then denoising the image using a non-linear filter, and finally estimating the error of the rendering, leading to another

adaptive sampling step. The denoising filter used is a modification of the Non-Local (NL) Means

filter that computes an output pixel as a weighted sum of input pixels. The input pixels can come

from a large region in the input image. The modifications that Rousselle et al. use in their

technique are dual-buffered filtering, support for non-uniform variance, and symmetric distance

computation to better handle gradients.

Error Estimation

Yet another similar Monte Carlo rendering method aimed at reducing noise was proposed

by Tzu-Mao Li et al. (2012). It applies Stein’s Unbiased Risk Estimator (SURE) in adaptive

sampling and reconstruction to reduce noise. SURE is a general estimator for mean squared

error. The reconstruction kernels that they used were more effective because SURE is able to estimate the error better. With more reliable error estimation, adaptive sampling is also more

reliable.

Wavelet Rasterization

Manson and Schaefer (2011) represent signals with wavelets, exploiting the fact that wavelets are localized in both the spatial and frequency domains. Their work shows that using the


simplest wavelet, the Haar basis, is equivalent in quality to applying a box filter to the image.

Wavelets are superior in many cases because they can represent signals with discontinuities better. The use of wavelets also implicitly reduces the effect of the Gibbs phenomenon.
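The equivalence with box filtering is easy to see at one level of the transform: the Haar approximation coefficients are, up to a normalization constant, just pairwise averages, i.e. a width-2 box filter followed by downsampling. A minimal sketch of my own:

```python
def haar_approx(signal):
    """One level of Haar analysis, using the averaging (box) normalization:
    each coarse coefficient is the mean of a pair of neighboring values."""
    return [(a + b) / 2 for a, b in zip(signal[0::2], signal[1::2])]

# A signal with sharp discontinuities; averaging pairs is the box filter.
sig = [0, 0, 1, 1, 1, 0, 0, 0]
print(haar_approx(sig))  # [0.0, 1.0, 0.5, 0.0]
```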

Spherically Symmetric Filtering

Auzinger et al. (2012) proposed a way to perform anti-aliasing in two and three

dimensions through the use of a filter that is a spherically symmetric polynomial of any order.

They make the claim that, even though separable filtering is computationally less expensive, it can cause visible artifacts due to the angle-dependent nature of anisotropic effects.

They also compute the convolution of the image and the filter analytically. This method is used

for anti-aliased sampling of polytopes with a linear function defined on them, so its applicability is probably quite limited, though it can prove more useful depending on the setting.

Conclusion

The restriction of finite-sized pixels on a digital screen and the need for discretization of

the signal cause aliasing, leading to unwanted image artifacts. Fourier analysis in the frequency

domain helps us find aliasing and can help reduce its effects. The main idea is to sample a signal,

filter out any unwanted frequencies, and reconstruct the signal. In general, aliasing is caused by

insufficient sampling. The Nyquist-Shannon sampling theorem tells us that the sample rate of a

signal must be at least twice that of the highest frequency present in the signal in order to be able

to fully reconstruct it. I went through a derivation of a signal reconstruction equation using a sinc

filter. Although it may be an ideal reconstruction for anti-aliasing, it’s not always feasible to use

and problems may arise. Other types of filters exist. Although pre-filtering out high frequencies

prior to sampling will take care of the aliasing problem, it’s not ideal to use in computer graphics


because it’s prone to loss of detail. Post-filtering methods are better suited for imagery, leading

to the fact that most of the research done in the field has focused on image reconstruction. When

anti-aliasing in computer graphics, speed and quality must be considered. Supersample and

Multisample Anti-Aliasing are capable of producing high quality anti-aliased images, but are

very computationally expensive. The creation of Morphological Anti-Aliasing sparked a great

deal of work on the topic, and thus began the era of post-process anti-aliasing. As research

and advances in hardware continue into the future, aliasing will become less of a problem and

graphics will look even more realistic.

Fig. 16: Example from God of War III. Original on the left, anti-aliasing with MLAA on


the right. Notice the jagged edges in the original compared to the smoother edges in the AA

version. (Jimenez et al., 2011)

References

Aliasing [PDF document]. (n.d.). Retrieved from https://sisu.ut.ee/sites/default/files/imageprocessing/files/aliasing.pdf.

Auzinger, T., Guthe, M., & Jeschke, S. (2012). Analytic Anti-Aliasing of Linear Functions on Polytopes. Computer Graphics Forum, 31(2), pp. 335-344. doi: 10.1111/j.1467-8659.2012.03012.x.

Chajdas, M. G., McGuire, M. & Luebke, D. (2011). Subpixel reconstruction antialiasing for deferred shading. Symposium on interactive 3D graphics and games, pp. 15-22. doi: 10.1145/1944745.1944748.

Chen, J., Wang, B., Wang, Y., Overbeck, R. S., Yong, J., & Wang, W. (2011). Efficient Depth-of-Field Rendering with Adaptive Sampling and Multiscale Reconstruction. Computer Graphics Forum, 30(6), pp. 2667-2680. doi: 10.1111/j.1467-8659.2011.01854.x.

Digital Signals – Sampling and Quantization [PDF document]. (n.d.). Retrieved from http://www.rs-met.com/documents/tutorials/DigitalSignals.pdf.

Jiang, X., Sheng, B., Lin, W., Lu, W., & Ma, L. (2014). Image anti-aliasing techniques for Internet visual media processing: a review. Journal of Zhejiang University-SCIENCE C (Computers & Electronics), 15(9), pp. 717-728. doi: 10.1631/jzus.C1400100.

Jimenez, J., Gutierrez, D., Yang, J., Reshetov, A., Demoreuille, P., Berghoff, T., ... & Sousa, T. (2011). Filtering approaches for real-time anti-aliasing. ACM SIGGRAPH Courses, 2(3), 4. Retrieved from http://www.iryoku.com/aacourse/downloads/Filtering-Approaches-for-Real-Time-Anti-Aliasing.pdf.


Jimenez, J., Echevarria, J. I., Sousa, T., & Gutierrez, D. (2012). SMAA: enhanced subpixel morphological antialiasing. Computer Graphics Forum, 31(2), pp. 355-364. doi: 10.1111/j.1467-8659.2012.03014.x.

Li, S., Guennebaud, G., Yang, B., & Feng, J. (2011). Predicted Virtual Soft Shadow Maps with High Quality Filtering. Computer Graphics Forum, 30(2). Retrieved from https://hal.inria.fr/inria-00566223/document.

Li, T. M., Wu, Y. T., & Chuang, Y. Y. (2012). SURE-based optimization for adaptive sampling and reconstruction. ACM Transactions on Graphics, 31(6), Article 194. doi: 10.1145/2366145.2366213.

Manson, J., & Schaefer, S. (2011). Wavelet Rasterization. Computer Graphics Forum, 30(2), pp. 395-404. doi: 10.1111/j.1467-8659.2011.01887.x.

Mitchell, D., & Netravali, A. (1988). Reconstruction Filters in Computer Graphics. Computer Graphics, 22(4), pp. 221-228. doi: 10.1145/54852.378514.

Rousselle, F., Knaus, C., & Zwicker, M. (2012). Adaptive Rendering with Non-Local Means Filtering. ACM Transactions on Graphics, 31(6), Article 195 (November 2012), 11 pages. doi: 10.1145/2366145.2366214.

Shannon, C. E., (1949). Communication in the presence of noise. Proc. Institute of Radio Engineers, 37(1), pp. 10-21. Reprinted as classic paper in: Proc. IEEE, 86(2), (February 1998). Retrieved from http://web.stanford.edu/class/ee104/shannonpaper.pdf.

Term, H., Zisserman, A. (2003). Two-Dimensional Signal Analysis [PDF document]. Retrieved from lecture notes online: http://www.robots.ox.ac.uk/~az/lectures/sa/lect12.pdf.

Weisstein, E. (n.d.). Moiré Pattern. Retrieved from http://mathworld.wolfram.com/MoirePattern.html.

Weisstein, E. (n.d.). Convolution Theorem. Retrieved from http://mathworld.wolfram.com/ConvolutionTheorem.html.


Weisstein, E. (n.d.). Convolution. Retrieved from http://mathworld.wolfram.com/Convolution.html.
