
PHOTON MAPPING USING

HIERARCHICAL PHOTON MAPS

Samuel Mpawulo

A dissertation submitted in partial fulfilment of the requirements of Staffordshire University for the degree of Master of Science.

Supervised by Dr Claude C. Chibelushi

JANUARY 2005


Abstract

Photon mapping is a simple yet robust global illumination rendering algorithm developed by Henrik Wann Jensen. It is now used extensively in global illumination to render photorealistic images, and has quickly become the preferred algorithm for simulating caustics and illumination in volumetric media. Photon mapping is a two-pass algorithm: the first pass generates and stores illumination information in a data structure called a photon map, and the second pass is a rendering phase performed by a ray tracer that repeatedly queries the photon map. This research focuses on optimising the generation and querying of this photon map, which are the key factors in the performance of the algorithm.

This project not only presents a detailed discussion of the photon mapping algorithm, but also introduces the concept of hierarchical photon mapping. Hierarchical photon mapping is a unique modification to the traditional photon mapping algorithm, intended to optimise the computation of illumination. The hierarchical photon mapping algorithm splits the photon map into a two-tier hierarchical structure of multiple maps, based on the photon density and on the topology of the polygons within the scene. The use of multiple photon maps is not novel in itself; it is the manner in which the photon maps are split, and the arrangement of the maps into a two-tier hierarchy, that makes this project unique. As part of the project, the hierarchical photon mapping algorithm is fully described and implemented in a purpose-built test application. This application is used to compare the performance of hierarchical photon mapping with that of the more traditional single and multiple photon mapping paradigms. The project was able to show that hierarchical photon mapping offers several significant advantages and performance gains when used to render relatively complex scenes with high photon densities.


Contents

Chapter 1: Introduction.................................................................................................1
1.1 Project Background.............................................................................................1
1.2 Aim......................................................................................................................3
1.3 Objectives............................................................................................................4
1.4 Intellectual Challenge..........................................................................................4
1.5 Research Program................................................................................................4
1.5.1 Phase one.....................................................................................................5
1.5.2 Phase two.....................................................................................................5
1.5.3 Phase three...................................................................................................6
1.5.4 Phase four....................................................................................................7
1.6 Deliverables........................................................................................................7
1.7 Resources.............................................................................................................7
Chapter 2: Global Illumination...................................................................................9
2.1 The fundamentals of Global Illumination...........................................................9
2.1.1 Surface and Sub-Surface Scattering Functions.........................................10
2.1.1.1 BSSRDF and BRDF..............................................................................11
2.1.1.2 Reflection Model...................................................................................15
2.1.2 Light Scattering in Participating Media....................................................16
2.2 Rendering..........................................................................................................18
2.2.1 Image Based Rendering.............................................................................18
2.2.2 Ray Tracing...............................................................................................19
2.2.3 Finite Element Radiosity Technique.........................................................19
2.2.4 Hybrid and Multi-Pass Techniques...........................................................20
2.3 Photon Mapping Algorithm...............................................................................20
2.3.1 Photon Tracing..........................................................................................22
2.3.2 Computing Radiance from Photon maps...................................................24
2.3.3 The KD Tree..............................................................................................26
2.3.4 Multiple photon maps................................................................................29
Chapter 3: Hierarchical Photon Maps......................................................................31
3.1 Motivation.........................................................................................................31
3.2 Generating Shared Photon Maps.......................................................................32
3.3 Generating Local Photon Maps.........................................................................34
3.4 Summary...........................................................................................................39
Chapter 4: Experimental Details...............................................................................40
4.1 The Hierarchical Photon Mapping Test Application........................................40
4.2 The Experimental Objectives............................................................................40
4.3 The Experimental Method.................................................................................41
4.4 The Experiment Results....................................................................................44
Chapter 5: Conclusion................................................................................................48
References........................................................................................................................50
APPENDIX A - Raw test results....................................................................................55
Scene 1..........................................................................................................................55
Scene 2..........................................................................................................................57
Scene 3..........................................................................................................................59
APPENDIX B - Image Based Rendering......................................................................60
B.1 Light Fields and Lumigraphs............................................................................60
B.2 Image Warping..................................................................................................61


List of Figures

Figure 1 Radiance L..........................................................................................................12
Figure 2 Subsurface scattering as described by BSSRDF.................................................13
Figure 3 BRDF models the local reflection of light..........................................................14
Figure 4 Illumination map.................................................................................................21
Figure 5 Computing reflected radiance (Jensen, 2001).....................................................26
Figure 6 Spatial subdivision using a kd-tree.....................................................................27
Figure 7 Simplified 2D kd-tree for the spatial subdivision shown in figure 6..................28
Figure 8 Photon leakage at corners...................................................................................29
Figure 9 Generating shared photon maps..........................................................................34
Figure 12 Scene 1 rendered using hierarchical photon maps............................................41
Figure 13 Scene 2 rendered using hierarchical photon mapping......................................42
Figure 14 Scene 3, the egg chair rendered using ray tracing............................................42
Figure 15 Graph showing rendering performance for scene 1..........................................46
Figure 16 Graph showing rendering performance for scene 2..........................................46
Figure 17 Graph showing rendering performance results for test 2 performed on scene 3....47
Figure 18 Scene 1 rendered using hierarchical photon maps and 60,000 photons per triangle...............................55
Figure 19 Scene 3 rendered using 5,000 photons per triangle..........................................59
Figure 20 The plenoptic function......................................................................................60

List of Tables

Table 1 Complexity of multiple kd-trees..........................................................................32
Table 2 Description of tests performed............................................................................43
Table 3 Result summary for tests performed on scene 1..................................................45
Table 4 Result summary for tests performed on scene 2..................................................45
Table 5 Scene 1 test results using hierarchical photon maps............................................55
Table 6 Scene 1 test results using only multiple shared photon maps..............................56
Table 7 Scene 1 test results using a single global photon map.........................................56
Table 8 Scene 2 test results using hierarchical photon maps............................................57
Table 9 Scene 2 test results using a single global photon map.........................................58


Chapter 1: Introduction

The generation of realistic synthetic images in computer graphics has many useful applications in advertising, interior design, architecture, film production and manufacturing design, to mention just a few. These synthetic simulations provide an alternative to the construction of real, physical models, which are usually expensive, inflexible, slow to construct, and whose construction and use may present risks to human life. Synthetic scenes are very portable and can be replicated easily.

1.1 Project Background

Rendering computer graphics images demands considerable processing power, and this has led to a category of rendering methods based on empirical geometric approximations rather than pure physics. The Lambertian and Phong shading models are examples of such approximation techniques. These methods, however, fail to adequately address refraction and global illumination, both of which are vital for realistic rendering. Furthermore, the computational load depends on the complexity of the scene. Both models are discussed extensively by Watt (1993) and in various other publications.
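As a concrete illustration of these empirical models, the following is a minimal sketch of Lambertian diffuse plus Phong specular shading. It is a generic textbook formulation written for this discussion; the vector type and the coefficient names (kd, ks, shininess) are assumptions, not code from the dissertation's test application.

```cpp
#include <algorithm>
#include <cmath>

// Minimal sketch of the Lambertian (diffuse) and Phong (specular) local
// shading models. All vectors are assumed to be unit length.
struct Vec3 { double x, y, z; };

double dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Local reflected intensity for normal n, direction to light l and direction
// to viewer v:  kd * max(0, n.l) + ks * max(0, r.v)^shininess,
// where r is the mirror reflection of l about n.
double phong(const Vec3& n, const Vec3& l, const Vec3& v,
             double kd, double ks, double shininess) {
    double nDotL = dot(n, l);
    if (nDotL <= 0.0) return 0.0;            // light is behind the surface
    Vec3 r = {2.0 * nDotL * n.x - l.x,       // reflect l about n
              2.0 * nDotL * n.y - l.y,
              2.0 * nDotL * n.z - l.z};
    double spec = std::pow(std::max(0.0, dot(r, v)), shininess);
    return kd * nDotL + ks * spec;
}
```

Note that the cost of this evaluation is constant per shading point and entirely local: no other objects in the scene are consulted, which is precisely why such models cannot account for indirect illumination.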

The second category of rendering techniques is based on the physics of light propagation. These techniques can address refraction and global illumination to varying degrees; they include ray tracing, radiosity and photon mapping.

Ray tracing is a point-sampling technique that traces infinitesimal beams of light through a model. Basic ray tracing, as first introduced by Whitted (1980), traces light rays backwards from the observer to the light sources. Basic ray tracing is not a fully global illumination algorithm: it simulates only direct illumination and refraction, and does not account for all light paths within a scene. It is therefore unable to compute important effects such as caustics, motion blur, depth of field, indirect illumination or glossy reflection. Monte Carlo based ray tracing methods overcome this limitation by distributing the rays stochastically; however, a very large number of sample rays must be used to avoid variance, or noise, in the rendered images (Jensen, 2001 pp.4-5, 35-39).


Finite element radiosity techniques are an alternative to ray tracing in which the model is subdivided into small patches that form a basis for the distribution of light. It is assumed that the light reflected by each patch is constant and independent of direction; radiosity was therefore initially introduced for scenes with only diffuse (Lambertian) surfaces. Radiosity becomes very costly when used to simulate complex models or models with non-diffuse surfaces, mainly because the algorithm computes values for each patch in the model and the number of patches tends to grow with scene complexity (John, 2003 pp.63-65; Jensen, 2001 pp.5-6).

Photon mapping is the most recent of these techniques and is not as widely publicised. It was first introduced by Henrik Wann Jensen (1995). Photon mapping changes the way in which illumination is represented: instead of tightly coupling lighting information with the geometry of a scene, the information is stored separately in an independent structure, the photon map. Decoupling the photon map from the geometry simplifies the representation and makes it possible to represent lighting in very complex models. The photon mapping process occurs in two passes. The first pass casts photons from the light sources into the scene, and results in the creation of the photon map. The second pass uses a Monte Carlo ray-tracing-based rendering algorithm to collect the photons, compute irradiance and generate an image.
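The first (photon tracing) pass can be sketched as follows. This is a deliberately simplified, hypothetical example: a single point light, a single floor plane, and a flat std::vector standing in for the kd-tree-backed photon map. None of these names or choices come from the dissertation's implementation.

```cpp
#include <cstddef>
#include <random>
#include <vector>

// Hypothetical sketch of pass one: photons are emitted from a point light in
// random directions, and those that hit the floor plane z = 0 are stored in
// a flat "photon map".
struct Photon {
    double pos[3];    // hit position on a surface
    double power;     // flux carried by this photon (scalar for simplicity)
    double dir[3];    // incident direction
};

std::vector<Photon> tracePhotons(std::size_t count, double lightPower,
                                 unsigned seed) {
    std::mt19937 rng(seed);
    std::uniform_real_distribution<double> uni(-1.0, 1.0);
    std::vector<Photon> map;
    const double light[3] = {0.0, 0.0, 1.0};   // point light above the floor

    for (std::size_t i = 0; i < count; ++i) {
        // Sample a direction on the unit sphere by rejection sampling.
        double d[3];
        do {
            d[0] = uni(rng); d[1] = uni(rng); d[2] = uni(rng);
        } while (d[0] * d[0] + d[1] * d[1] + d[2] * d[2] > 1.0);
        if (d[2] >= 0.0) continue;             // misses the floor plane z = 0

        // Intersect the ray light + t*d with the plane z = 0.
        double t = -light[2] / d[2];
        Photon p;
        p.pos[0] = light[0] + t * d[0];
        p.pos[1] = light[1] + t * d[1];
        p.pos[2] = 0.0;
        // Each emitted photon carries an equal share of the light's power.
        p.power = lightPower / static_cast<double>(count);
        p.dir[0] = d[0]; p.dir[1] = d[1]; p.dir[2] = d[2];
        map.push_back(p);
    }
    return map;
}
```

A real implementation would intersect arbitrary geometry, typically scatter or absorb photons at surfaces using Russian roulette, and store the survivors in a kd-tree, but the overall shape of the pass is the same: emit, trace, store.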

The introduction of photon mapping added a new class of scenes to the repertoire of computer-generated imagery, including the illumination of smoke, clouds, underwater scenes and translucent materials, to mention just a few. Photon mapping can simulate global illumination in complex models with arbitrary Bidirectional Reflectance Distribution Functions, or BRDFs (Jensen, 2001). The key word here is “arbitrary”: although image-based rendering algorithms have been used to address global illumination with impressive results, the BRDF in such cases is predetermined; that is to say, the locations of all objects affecting the global illumination are already specified by the image.

Despite these benefits, photon mapping is to a great extent limited to walk-throughs and static scenes, due to the computational costs associated with generating the photon map and computing irradiance from it. Photon map algorithms for storing and querying photons are generally too slow for interactive purposes, and rebuilding the photon map for every frame does not amortise. Furthermore, photon mapping algorithms compute irradiance estimates from the photon density statistics of the area in question. This requires repeated traversal of the photon map in search of the nearest-neighbour photons each time a ray intercepts an object. It is for this reason that photon mapping is sometimes applied only to visualise caustics, where the photon density is usually rather high and density estimation can be applied with a fixed filter radius.
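The fixed-radius density estimation mentioned above can be sketched as follows: the irradiance at a point x is approximated by the total power of the photons found within a gather radius r of x, divided by the area pi*r^2 of the gather disc. The linear scan below stands in for the kd-tree query a real photon map would use, and the Photon layout is an assumption made for this example.

```cpp
#include <vector>

// Illustrative sketch of fixed-radius density estimation over a photon map.
struct Photon {
    double x, y, z;   // stored hit position on a surface
    double power;     // flux carried by the photon
};

double estimateIrradiance(const std::vector<Photon>& map,
                          double px, double py, double pz, double radius) {
    const double kPi = 3.14159265358979323846;
    const double r2 = radius * radius;
    double total = 0.0;
    for (const Photon& p : map) {
        double dx = p.x - px, dy = p.y - py, dz = p.z - pz;
        if (dx * dx + dy * dy + dz * dz <= r2)   // inside the gather sphere
            total += p.power;
    }
    return total / (kPi * r2);   // flux per unit area of the gather disc
}
```

The estimate is noisy when few photons fall inside the radius and blurred when the radius is large, which is why the technique works best in regions of high photon density such as caustics.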

In summary, the photon mapping algorithm consists of three problematic stages: the scattering, or shooting, of photons; the sorting and storage of the photons (photon map construction); and the search for the nearest-neighbour photons during the rendering stage. Interactive use of photon mapping was achieved by Günther, Wald and Slusallek (2004) through a combination of optimisation strategies. First and foremost, Günther and his colleagues limited the use of the photon mapping algorithm to the generation of caustics alone and intentionally neglected non-caustic illumination. Secondly, their proposal involves parallelising the algorithm and distributing the computational load among up to thirty-six central processing units (CPUs), an idea that was also pursued by Jensen (2000). Finally, they reduce the construction cost of the photon maps by using an unbalanced kd-tree structure to store the photons. A balanced kd-tree is generally believed to be faster to traverse during rendering, but balancing takes time; using unbalanced kd-trees also makes it possible to insert new photons into an existing tree without rebuilding an entirely new one.
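The trade-off described here can be illustrated with a minimal unbalanced kd-tree: points are inserted one at a time, cycling the splitting axis with depth, so new photons can be appended without a rebuild, at the price of a tree whose depth depends on insertion order. The node layout and function names are assumptions made for this sketch, not the structure used by Günther and his colleagues.

```cpp
#include <memory>

// Minimal un-balanced 3D kd-tree over point positions.
struct Node {
    double p[3];
    std::unique_ptr<Node> left, right;
};

// Insert by walking down the tree, comparing on axis = depth % 3.
// No rebalancing is performed, so insertion never touches existing nodes.
void insert(std::unique_ptr<Node>& root, const double p[3], int depth = 0) {
    if (!root) {
        root = std::make_unique<Node>();
        root->p[0] = p[0]; root->p[1] = p[1]; root->p[2] = p[2];
        return;
    }
    int axis = depth % 3;
    if (p[axis] < root->p[axis])
        insert(root->left, p, depth + 1);
    else
        insert(root->right, p, depth + 1);
}

// Nearest-neighbour query: descend towards the query point, then visit the
// far subtree only if the splitting plane is closer than the best distance.
void nearest(const Node* node, const double q[3], int depth,
             const Node*& best, double& bestD2) {
    if (!node) return;
    double dx = node->p[0] - q[0];
    double dy = node->p[1] - q[1];
    double dz = node->p[2] - q[2];
    double d2 = dx * dx + dy * dy + dz * dz;
    if (d2 < bestD2) { bestD2 = d2; best = node; }
    int axis = depth % 3;
    double diff = q[axis] - node->p[axis];
    const Node* nearSide = diff < 0 ? node->left.get() : node->right.get();
    const Node* farSide  = diff < 0 ? node->right.get() : node->left.get();
    nearest(nearSide, q, depth + 1, best, bestD2);
    if (diff * diff < bestD2)    // splitting plane may hide a closer point
        nearest(farSide, q, depth + 1, best, bestD2);
}
```

Because insertion never rebalances, a pathological insertion order can degrade queries towards linear time; the balanced alternative instead splits the photon set at the median along each axis at build time, which is exactly the construction cost that Günther and his colleagues sought to avoid.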

Larsen (2003), on the other hand, concentrated his efforts on optimising the third stage of the photon mapping process by introducing the use of multiple photon maps. Although Larsen was able to reduce the query time for nearest neighbours, his technique could not be applied directly to caustics or participating media, for reasons that are discussed in more detail in the following chapter. This research project attempts to address the same problem by introducing a unique two-tier hierarchical system of photon maps. The hierarchical photon map paradigm increases the number of photon maps by introducing alternative ways of splitting a global photon map.

1.2 Aim

The principal aim of this project is to analyse, design, implement and test a hierarchical, two-tier, multiple photon map algorithm that reduces the computational cost of irradiance estimates without loss of image quality.


1.3 Objectives

The main objectives of this study are:

1. To propose and design a storage structure and search algorithm that optimises the search for the nearest neighbour by using a multiple photon map environment.
2. To develop and build an application that implements the proposed algorithm and will serve as a test-bench for testing and demonstration purposes.
3. To analyse, compare, contrast and identify the benefits and shortcomings of using multiple photon maps in the proposed manner, as opposed to using the more traditional single photon map.
4. To identify new opportunities for, and threats to, the further development of the proposed algorithm.
5. To review the research methodology, the resulting outcome and the resources of this project.

1.4 Intellectual Challenge

The project fundamentally involves an understanding of the physics of how light interacts with objects, the psychophysics of how the eye perceives things, and radiometry. Furthermore, an in-depth study of computer graphics is mandatory, with particular attention to geometric modelling, data structures and rendering algorithms. It also demands good programming skills and a sound mathematical background. The researcher is required to acquire an enhanced understanding of the use of the photon mapping algorithm in image synthesis, which he will use to assess, select and/or create a viable combination of algorithms. This project introduces and tests a unique technique for the optimisation of photon mapping, and is certainly relevant for the award of an MSc in Computing.

1.5 Research Program

This research project will employ the experimental research methodology. Experimental research can be defined as “an attempt by the researcher to maintain control over all factors that may affect the result of an experiment” (Key, 1997). In doing this, the researcher attempts to determine or predict what may occur. The researcher manipulates a variable (anything that can vary) under highly controlled conditions to see if this produces (causes) any changes in a second variable. The variable, or variables, that the researcher manipulates is called the independent variable, while the second variable, the one measured for changes, is called the dependent variable. Independent variables are sometimes referred to as antecedent (preceding) conditions.

Experimental research, although very demanding of time and resources, often produces the soundest evidence concerning hypothesised cause-effect relationships (Gay, 1987). It is therefore the method of choice for scientific disciplines whenever it is practical and ethical to manipulate the antecedent conditions.

1.5.1 Phase one

The first phase of this project will consist of a literature review of current rendering and illumination models, highlighting their particular strengths and weaknesses. The purpose of this is to quickly acclimatise the researcher to existing developments in 3D image synthesis. It will include a study of recent developments in photon mapping, placing particular emphasis on the structure of the photon map itself and on operational issues. Finally, it will cover the most current developments in the use of multiple photon maps and explore research opportunities. This phase provides the platform from which the rest of the project will be launched.

The outcome of this phase will be the first two deliverables of this project. These are:

a) An in-depth discussion and comparison of current rendering techniques and illumination models.
b) An in-depth discussion of the most current developments in the optimisation of photon mapping and the use of multiple photon maps.

The completion of this phase is considered to be the first major project milestone.

1.5.2 Phase two

The second phase of the project comprises two sections. It covers the experimental design stage of the project, in which the problem and the proposed solution are revisited. A scientific and academic argument is then presented in support of the proposed algorithm, clearly spelling out the expected improvements. The algorithm itself is discussed in detail and defined using pseudo-code. The antecedent conditions, the dependent variables and the criteria of measurement to be adopted will be defined as well.

While the first section focuses on the algorithm, the latter part of this phase discusses the test application as a whole and the incorporation of the algorithm. This phase of the project will produce the third and fourth deliverables. These are:

a) A specification of a data structure for the storage of multiple photon maps and an algorithm for rapidly traversing these maps, with a full discussion of the criteria employed to select the data structure and a comparison with alternative structures.
b) A documented logical design, or blueprint, for the development of an application that will be used to test the proposed algorithm.

This phase will also contribute substantially to the sixth and seventh deliverables, which are produced by phase three of the project.

1.5.3 Phase three

In this phase of the project two distinct activities are carried out. First, the test application and several virtual test scenes of varying complexity are coded using a high-level programming language (probably C++). The application is based on the design developed in phase two of the project. Secondly, the developed code is used to test the hypotheses, and performance is measured and recorded based on the criteria defined earlier. The outcome is analysed critically and compared with the earlier-defined expectations.

The fifth and sixth deliverables are produced as a result of the completion of this phase. They are:

a) A computer application, written in an appropriate high-level programming language, that implements the algorithms that have been developed, altered and/or amalgamated in this project for computing irradiance estimates using hierarchical photon maps.
b) A set of test results, in the form of statistical data and charts, that will be used to establish the effectiveness of the hierarchical photon mapping algorithm.


1.5.4 Phase four

The fourth and final phase is the critical review and evaluation of the project as a whole. The objectives and aims will be revisited and discussed in depth, and the significance of the research will be analysed. The research methodology employed, the adequacy of the resources, the learning points, the mistakes made, the benefits of the project and the future research possibilities will also be analysed and discussed.

1.6 Deliverables

The project deliverables shall include:

1. An in-depth discussion of, and conclusions on, the current rendering techniques and illumination models.
2. An in-depth discussion of the most current developments in the optimisation of photon mapping and the use of multiple photon maps.
3. A specification of a data structure for the storage of multiple photon maps and an algorithm for rapidly traversing these maps, with a full discussion of the criteria employed to select the data structure and a comparison with alternative structures.
4. A documented logical design, or blueprint, for the development of an application that will be used to test the hypotheses.
5. A computer application, written in an appropriate high-level programming language, that implements the algorithms and methodologies that have been developed, altered and/or amalgamated in this project for the use of multiple photon maps.
6. A set of test results, in the form of statistical data and charts, that will be used to establish the validity of the hypothesis.
7. A report on the analysis of the results, with an indication of further research and development possibilities and trends.
8. An evaluation of the whole project.

1.7 Resources

The main resources required will be access to relevant literature on the specialised topics, which may be in the form of books, journals or internet websites made available through the Staffordshire University library.


The following journals are of particular relevance to the project: ACM Transactions on Graphics, IEEE Computer Graphics and Applications, and Graphical Models. Most of the referenced journals are available on the CiteSeer web site (www.citeseer.com).

The most often referenced books in this project are Jensen's 'Realistic Image Synthesis Using Photon Mapping' (Jensen, 2001) and Marlon John's 'Photon Mapping' (John, 2003), both of which relate specifically to photon mapping. John (2003) provides sample code for the photon map algorithm, and this is used as a starting point for the construction of the test application.

The hardware on which the application will be developed, and on which all tests will be performed, is an AMD Athlon 64 3000+ with 1 GB of DDR RAM running Windows XP Professional.

The application will be coded in C++ using the Visual Studio 6 development software provided by the University.

Finally, the researcher will make use of an experienced project supervisor with appropriate knowledge of computer graphics and research methods, who will provide the necessary guidance and advice.


Chapter 2: Global Illumination

This chapter reviews the fundamentals of global illumination and then discusses the various rendering approaches in reasonable detail. The intention is to give an understanding of the various approaches to rendering, and to briefly highlight and compare their specific strengths and weaknesses. The later sections of the chapter include an in-depth review of photon mapping, the use of multiple photon maps, and the opportunities they present in terms of optimisation.

2.1 The fundamentals of Global Illumination

Global illumination has become quite important in computer graphics. Its main goal is to compute all possible light interactions within a scene so as to obtain a truly photorealistic image. Global illumination mimics the subtleties of natural light by tracing light rays as they bounce between a collection of radiating objects, carrying the objects' diffuse colour properties with them; these are, in turn, transferred onto neighbouring objects. This type of indirect lighting is different from traditional computer-generated lights, which provide only local illumination and typically stop at the surfaces they reach. These inter-reflected light rays produce a much more realistic image, because light properly scatters and diffuses throughout the environment, creating much more accurate tones and shadows.

Global illumination rendering has been around for many years, but it was never incorporated into commercial production renderers due to the lengthy calculations required to produce the desired results. Now that computers have become much faster, this type of rendering is becoming somewhat feasible. Nevertheless, it is important to note that modern day global illumination renderers such as Brazil, VRay, FinalRender and Mental Ray can still take hours to produce a high-quality image (Rosenman, 2002).

Global illumination requires an understanding of light transport and of how each effect can be simulated in a mathematical model. The physics of light is currently explained using several models, based on historic developments (Saleh and Teich, 1991):


- Ray optics models light as independent rays travelling in optical media and following a set of geometric rules. Ray optics can be used to model most of the effects that we see, including refraction, reflection and image formation.
- Wave optics represents light as electromagnetic waves. In addition to all the phenomena that ray optics models, it can model interference and diffraction.
- Electromagnetic optics includes wave optics but also explains polarisation and dispersion.
- Photon optics assumes a particulate model of light. It provides the foundation for understanding the interaction of light and matter.

Computer graphics almost exclusively uses ray optics; even photon mapping, despite its name, uses ray optics. Because of this, the effects of diffraction, interference and polarisation are ignored (Jensen, 2001). Furthermore, light is assumed to travel at infinite speed, which means that the whole illumination model reaches a steady state as soon as the light sources are switched on.

2.1.1 Surface and Sub-Surface Scattering Functions

When light encounters an obstacle it is either scattered or absorbed. A rough surface will

reflect light in random directions depending on the inclination of the different points on

the surface. In such a case, it makes more sense to analyse the average behaviour over the

surface rather than that of specific points. Because the angle of incidence keeps changing

from point to point, it becomes irrelevant and the reflection model that is developed can

be assumed to be isotropic. This aggregate behaviour of spreading light in all directions

is called directional diffuse or glossy reflection (Shirley et al., 1997).

Traditional lighting models assume that any light leaving a surface will do so at exactly

the same point that it went in and that any subsurface transmission and scattering will

occur at this point. This however is only true for hard metallic surfaces that exhibit

specular reflection. It is certainly untrue for translucent materials like wax. Translucency is a material phenomenon where light travels through an object's surface rather than simply bouncing off it. Most non-metal surfaces exhibit a certain degree of translucency. Both Shirley et al. (1997) and Koutajoki (2002) agree that the calculations needed to simulate light transport in a volumetric medium are computationally heavy, and thus that it is impractical to adopt a full-blown volume model for translucent surfaces even though subsurface scattering is a volumetric phenomenon.


2.1.1.1 BSSRDF and BRDF

As already mentioned, subsurface scattering occurs when a beam of light enters some material at one point and then scatters around before leaving the surface at an entirely different location. This behaviour is described by the BSSRDF, or 'Bi-directional Surface Scattering Reflectance Distribution Function' (Jensen, 2001), which is best understood after the introduction of some basic terminology used in radiometry to describe light.

The basic quantity of light is the photon. The energy of a photon depends on its wavelength λ and can be represented as

e_\lambda = \frac{hc}{\lambda} \qquad (1)

where h ≈ 6.63×10⁻³⁴ Js is Planck's constant and c is the speed of light.
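As a quick sanity check of equation 1, the energy of a single photon of green light (λ = 500 nm) works out as:

```latex
e_\lambda = \frac{hc}{\lambda}
          = \frac{(6.63\times10^{-34}\,\mathrm{J\,s})(3.00\times10^{8}\,\mathrm{m/s})}{500\times10^{-9}\,\mathrm{m}}
          \approx 3.98\times10^{-19}\,\mathrm{J}
```

which illustrates why radiometry works with aggregate quantities such as flux rather than with individual photons.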

The spectral radiant energy, Q_λ, in n_λ photons with wavelength λ is

Q_\lambda = n_\lambda e_\lambda = n_\lambda \frac{hc}{\lambda} \qquad (2)

Radiant energy, Q, is the energy of a collection of photons. This is computed by integrating the spectral radiant energy over all possible wavelengths:

Q = \int_0^\infty Q_\lambda \, d\lambda \qquad (3)

Radiant flux, Φ, sometimes simply called flux, is the rate of flow of radiant energy:

\Phi = \frac{dQ}{dt} \qquad (4)

The radiant flux area density is the differential radiant flux per differential area at a surface, dΦ/dA. This is often separated into the radiant exitance M, which is the flux area density leaving a surface, also known as radiosity B; and the irradiance E, which is the flux area density arriving at a surface location x:


E(x) = \frac{d\Phi}{dA} \qquad (5)

The radiant intensity, I, is the radiant flux per unit solid angle, where ω represents a direction away from the surface:

I(\omega) = \frac{d\Phi}{d\omega} \qquad (6)

Radiance, L, is the radiant flux per unit solid angle per unit projected area (figure 1).

Figure 1 Radiance L

L(x, \omega) = \frac{d^2\Phi}{\cos\theta \, dA \, d\omega} = \int_0^\infty \frac{d^3 Q_\lambda}{\cos\theta \, dA \, d\omega \, dt} \, d\lambda \qquad (7)

The last term expresses radiance as an integral over wavelength of the flow of energy in

photons per projected differential area per differential solid angle per unit time.

Radiance is one of the most important quantities in global illumination. It can be used to

describe the intensity of light in a given direction at a given point in space.

If the radiance on a surface is known then flux can be computed by integrating the

radiance over all directions Ω and the area A.


\Phi = \int_A \int_\Omega L(x, \omega') \, (\vec{n} \cdot \omega') \, d\omega' \, dA \qquad (8)

where n is the normal of the surface at x and ω′ is the direction of the incoming radiance.

The BSSRDF relates the differential reflected radiance to the differential incident flux. If S is the BSSRDF, x′ is the point of entry of an incoming ray of light on the surface of some translucent medium, x is the point of exit of the out-going ray of light from the same medium, ω′ is the direction of the incoming ray and ω is the direction of the out-going ray, then S is dependent on x, ω, x′ and ω′ and can be represented as:

S(x, \omega, x', \omega') = \frac{dL_r(x, \omega)}{d\Phi_i(x', \omega')} \qquad (9)

where dL_r(x, ω) is the differential reflected radiance at x in the direction ω and dΦ_i(x′, ω′) is the differential incident flux (Jensen, 2001). Another way of interpreting this is to say the

BSSRDF is the ratio of rate of change in reflected radiance with incident flux. This

equation is the most generic description of light transport. Unfortunately it represents a

twelve dimensional function, which is extremely expensive to compute. It has therefore

been used in only a few papers in computer graphics and these papers all deal with

subsurface scattering.

Figure 2 Subsurface scattering as described by BSSRDF

The BRDF (Bi-directional Reflectance Distribution Function) was introduced by

Nicodemus et al. (1977) (cited by Jensen, 2001). It is in reality an approximation of the

BSSRDF. It assumes that reflected light will leave the surface at exactly the same point

that it struck the surface. This assumption reduces the BRDF to a six dimensional

function. It enables a series of simplifications that make rendering much more practical.


The BRDF, f_r, defines the relationship between the reflected radiance, dL_r, and the irradiance E:

f_r(x, \omega', \omega) = \frac{dL_r(x, \omega)}{dE_i(x, \omega')} \qquad (10)

Representing the BRDF as f_r(x, ω′, ω) emphasises the fact that the BRDF, f_r, is a function of x, the point of exit of the out-going ray of light, ω′, the direction of the incoming ray, and ω, the direction of the out-going ray. The irradiance differential term dE_i(x, ω′) in equation 10 can be replaced by its equivalent terms from equations 5 and 8. This results in the following BRDF equation:

f_r(x, \omega', \omega) = \frac{dL_r(x, \omega)}{L_i(x, \omega') \, (\vec{n} \cdot \omega') \, d\omega'} \qquad (11)

Normally, in computer graphics, the incident radiance at a surface location is known and it is the reflected radiance, L_r, that needs to be computed. The BRDF describes the local illumination model. By rearranging and then integrating both sides of the BRDF equation, the reflected radiance in all directions can be computed:

L_r(x, \omega) = \int_\Omega f_r(x, \omega', \omega) \, L_i(x, \omega') \, (\vec{n} \cdot \omega') \, d\omega' \qquad (12)

Figure 3 BRDF models the local reflection of light

A useful property of the BRDF is Helmholtz's law of reciprocity, which states that the

BRDF is independent of the direction in which light flows. This makes it possible for

global illumination models to trace light paths in both directions. Another important

property of the BRDF is that it obeys the principle of energy conservation. Therefore a

surface cannot reflect more light than it receives (Jensen, 2001).


Surfaces that reflect light in all directions, irrespective of the incident direction, are referred to as diffuse surfaces. This is typical of rough surfaces and of surfaces with subsurface scattering. Lambertian surfaces exhibit a special case of diffuse reflection in which the reflected direction is perfectly random; as a result, the reflected radiance is uniform in all directions regardless of the direction of the irradiance, and the BRDF is said to be constant.

Specular reflection, on the other hand, occurs when light hits a perfectly smooth surface and is reflected only in the mirror direction. Reflection can be considered perfectly specular for smooth dielectric surfaces like glass and water, and for polished metallic surfaces; however, most surfaces have some imperfections, and as a result light is reflected in a small cone around the mirror direction. This is called glossy reflection. For perfect mirror reflection the radiance is zero in all directions except the mirror direction.

2.1.1.2 Reflection Model

Materials normally reflect light in a complicated manner that cannot be described by the

simple Lambertian or the perfect specular reflection models. Several alternative

reflection models were developed to address this problem. Early models like the Phong

model were not based on physics or science but on phenomena and logic (Phong, 1975).

The Phong model actually results in a surface that reflects more light than it receives, but

this problem has since been addressed by Lewis (1994) who derived a normalising factor

for the Phong model. Despite notable improvements in the Phong model over the years, a

comprehensive model of how light reflects or transmits when it hits a surface, including

its subsurface interactions, needs to be developed.

The Torrance-Sparrow model was introduced to computer graphics by Blinn (1977). It is

one of the best known models based on the actual physical behaviour of light. This model

represents off-specular peaks quite well but lacks a proper diffuse term. The model by

Oren and Nayar (1994) is much better for rough surfaces, while models by Ward, and by

Poulin and Fournier (both cited by Jensen, 2001 p.25) are commonly used for anisotropic reflection, where the amount of light reflected depends on the orientation of the surface about its normal.

Lafortune et al. (1997) developed a model that supports importance sampling, but the model parameters are not intuitive and the model therefore needs to be fitted to a set of measured data, which may not always be easy to obtain in practice.


A simple, intuitive and empirical reflection model for physically plausible rendering was

proposed by Schlick (1993) and is well described by Jensen (2001). Jensen is apparently

impressed by the model and claims that it is computationally efficient. It also supports

importance sampling, which is useful for Monte Carlo rendering methods (Cook, 1986).

The Schlick model differs from the models discussed so far in that it is not derived from a physical theory of surface reflection. It is a mix of approximations to existing theoretical models and captures some intuition about light reflection behaviour.

A good rendering equation should be based on a reflection model that results in a

bidirectional reflectance distribution function (BRDF) that correctly predicts the diffuse,

the directional diffuse and the specular components of the reflected light at any surface of

interaction.

The rendering equation is the mathematical basis for all global illumination algorithms. It presents the equilibrium conditions of light transport at the interface between two media. The rendering equation can be used to compute the out-going radiance at any surface location. The out-going radiance, L_o, is the sum of the emitted radiance, L_e, and the reflected radiance, L_r:

L_o(x, \omega) = L_e(x, \omega) + L_r(x, \omega) \qquad (13)

The reflected radiance term can be replaced by equation 12:

L_o(x, \omega) = L_e(x, \omega) + \int_\Omega f_r(x, \omega', \omega) \, L_i(x, \omega') \, (\vec{n} \cdot \omega') \, d\omega' \qquad (14)

This is the rendering equation as used in Monte Carlo ray-tracing algorithms, including photon mapping (Jensen, 2001). The rendering equation, however, does not apply to participating media.

2.1.2 Light Scattering in Participating Media

The previous discussions assume that optical interactions occur only at an interface

between two media. This is true only in a vacuum. In the case of dusty air, clouds, smoke

and silty water optical interactions occur within the medium as light progresses through

it. Such materials are referred to as participating media. Glass is certainly a participating


medium so much so that the colour of glass is actually determined within the medium

and not at the surface as portrayed by most computer graphics applications.

When light interacts with a participating medium it can be either absorbed or scattered; the probability of each event is represented by an absorption coefficient and a scattering coefficient. A ray of light passing through such a medium therefore experiences a continuous change in its radiance. This change is due not only to absorption and out-scattering, which both result in a loss of radiance along the ray, but also to in-scattering of light. There could also be a gain in radiance due to emission from the medium if, for example, the medium were a flame. The phase function is used to describe the

distribution of scattered light within a participating medium, just like the BRDF does for

surfaces. The phase function depends only on the angle θ, between the incoming and

scattered rays and can be written as p(θ) where θ = 0 in the forward direction and θ = π

in the backwards direction.

A number of different phase functions are available for different types of media; however, in the case of typical glass (with homogeneous impurities) the scattering is isotropic, so the phase function is a constant irrespective of the value of θ. Most other materials are anisotropic and therefore have a phase function with a preferential scattering direction. The most commonly used phase function is the empirical Henyey-Greenstein phase function (Henyey and Greenstein, 1941), which was introduced to explain scattering by interstellar dust but has since been used to describe scattering in oceans, clouds, skin, stone and much more. However, for many applications the Henyey-Greenstein phase function is computationally costly and its exact shape is of little importance. Having observed this, Schlick (1993) presented his own simplification of the Henyey-Greenstein phase function that trades accuracy for computational efficiency.

The phase function forms the basis of the volumetric rendering equation, just as the BRDF is used to construct the rendering equation. Unlike the rendering equation, the volume rendering equation can only be solved using numerical integration. This is implemented by taking small uniform steps through the medium and performing the computations at each interval. This approach is called ray marching. In a non-homogeneous medium, or a medium with local variations in lighting, it is better to vary the length of the intervals to capture local changes more efficiently. This is called adaptive ray marching (Jensen, 2001 p.119).


A true global illumination solution needs to accommodate both the rendering equation and the volumetric rendering equation if it is to realistically render arbitrary scenes. The photon

mapping algorithm easily adapts to any rendering equation, and since photons can be

stored anywhere within a medium, it can be used to simulate participating media

phenomena. This is one of the main strengths of the algorithm and it will become

apparent in the next section of this chapter, when the different rendering algorithms are

discussed and compared.

2.2 Rendering

As mentioned earlier, the rendering paradigms can be placed in three categories. The first category comprises the methods based on geometrical approximations. These include the Gouraud and Phong shading models. Again, as previously mentioned, these methods do not address the problems of refraction and global illumination.

The second category of rendering methods is based on the physics of light and includes

methods based on radiosity, ray tracing and photon mapping. This category is most

relevant to this project and is discussed in more detail later on in the chapter.

The third category comprises image based rendering methods. These basically include

light-field rendering and image warping. These are new rendering paradigms that use

pre-acquired reference images of the scene in order to synthesise novel views. These

images may be actual photographs or images generated by other means.

2.2.1 Image Based Rendering

Image based rendering was introduced as a means of overcoming the tedious task of

creating synthetic models. Image based rendering has also been found to reduce the

computational costs considerably and is therefore very useful for interactive applications.

An additional benefit is that the computational cost is independent of the complexity of

the scene. Although image based rendering paradigms are not directly relevant to this

thesis they are an important and strong rival to photon mapping for simulating global

illumination. A more detailed description of this class of rendering paradigms is therefore

appropriately provided in appendix B.


2.2.2 Ray Tracing

Ray tracing was the first algorithm for photo-realistic rendering of a synthetic three

dimensional scene. It is particularly good for the simulation of specular reflections and

transparency. This is because a ray of light can apply the appropriate reflection model as

it progresses around the scene and through transparent objects. Ray tracing is a point

based rendering technique and therefore excels at curved surfaces (John, 2003 p. 58).

Basic ray tracing, as introduced by Whitted (1980), traces light rays backwards from the

observer to the light sources. This approach only handles mirror reflections, refractions

and direct illumination. Caustics, motion blur, depth of field and glossy reflection cannot be computed, despite their importance. These effects were simulated by extending ray tracing with Monte Carlo methods of ray distribution (Cook, 1986), in which rays are distributed stochastically so as to account for all light paths; however, the images generated suffer from variance, seen as noise. This noise is simply how information gaps manifest themselves in the resultant image. Eliminating noise requires bombarding the scene with an increased number of sample rays, which is computationally very expensive. Alternatively, one can seek out areas of visual importance and carefully distribute the rays so as to reduce this noise. Several methods of doing this have been proposed

including the Monte Carlo bi-directional ray tracing (Lafortune and Willems, 1993 pp.

95-104; Lafortune, 1996), in which rays are traced in both directions. One interesting

point about ray tracing is that it tends to separate the illumination from the geometry.

Rays are traced into the scene and return with some illumination value. This fact can be

used to build up independent illumination stores as is done by the irradiance caching

method which stores and reuses indirect illumination on diffuse surfaces.

2.2.3 Finite Element Radiosity Technique

Finite element radiosity is an alternative to ray tracing in which the total energy in a

scene is distributed among the surfaces within the scene. This is done by dividing up the

objects within the scene into elements or patches, which form a basis for the final light

distribution. Radiosity was initially introduced to render scenes with only diffuse

reflection (Lambertian) where the reflected light from each element is independent of the

direction of the incident ray or location of the viewer. As light leaves the light source, its

energy is splattered onto the elements in the scene. The elements store the illumination

information, but at the same time they act as secondary light sources and emit light


particles to neighbouring surfaces. This process is repeated until equilibrium is reached

(Goral et al., 1984).

Radiosity is a much better approach to computing global illumination than ray tracing

however it is very poor at computing specular reflections. This is because the intensity is

computed as an element average and not on a point-by-point basis. The algorithm

generates visible artefacts that can only be reduced by either simplifying the geometry of

the scene objects or by increasing the number of elements. It therefore does not produce

images with sharp illumination features. Radiosity in its basic form is certainly not

suitable for rendering effects like caustics or scenes where specular highlights are

expected (John, 2003 pp. 63-66).

2.2.4 Hybrid and Multi-Pass Techniques

Hybrid techniques have been developed with the aim of reaping the benefits of both ray

tracing and radiosity. The first hybrid technique used radiosity to generate the basic

image and then made a second pass using ray tracing to add specular reflections (Wallace et

al., 1987). The next set of hybrid techniques extended the contribution of ray tracing to

include computation of shadows as seen by the eye (Shirley, 1990). The more recent

hybrids reverse the roles by using ray tracing to generate the base image and limiting the

role of radiosity to the computation of indirect illumination on diffuse surfaces. This shift

in methodology was aimed at reducing the visible artefacts generated by the radiosity

algorithm (as pointed out in the earlier discussion on radiosity).

The main problem with the hybrid paradigms is that the use of radiosity limits the

complexity of the scene. Rushmeier et al. (1993) (cited by Jensen, 2001) proposed

a geometrical simplification technique in which the radiosity algorithm is used on a

separate simplified version of the scene that did not contain the finer geometrical details.

This technique worked well but was not very practical because the scene simplification

was usually done manually due to the lack of a generic algorithm or tool that could be used arbitrarily. Also, there is a need to analyse the effect of simplification on the resulting

global illumination in the scene.

2.3 Photon Mapping Algorithm

The photon mapping algorithm was invented by Henrik Wann Jensen. Photon mapping

decouples lighting from the geometry of scene and stores this data in a separate structure

called a photon map (Jensen, 1995; Jensen, 2001). The decoupling of the photon map


from the geometry simplifies the representation and allows complex scenes to be represented (Jensen and Christensen, 1995).

The idea of separating the illumination from the geometry of the scene is not entirely

new; illumination maps were first proposed by Arvo (1986) as a step in this direction.

Arvo’s illumination map is similar to a texture map but instead of displacement or

normal vector perturbations, it carries illumination information. The illumination map

consists of a rectangular array of data points imposed on a 1x1 square and mapped onto

the surface of the object. A ray of light hitting a surface increments the energy stored in

four neighbouring data points in a weighted manner as illustrated in figure 4.

Figure 4 Illumination map

Illumination mapping suffers from the same problems as finite element methods. The main problem is choosing the resolution of the map. It is also too costly to use illumination maps in complex models, and it is difficult to apply them to arbitrary surfaces, even when those surfaces can be parameterised.

Both illumination mapping and photon mapping are two-phase algorithms. The first

phase illuminates the scene and generates the maps. The second phase introduces the


viewer and renders the scene. Therefore light is first scattered around the scene, then the

scene is observed after all light has propagated through the scene.

Photon mapping, however, differs from illumination mapping in that photon maps store photons and not just illumination intensity. A photon has a direction, a location and a colour value. Illumination maps, on the other hand, rely entirely on the geometry of the objects for directional information. Secondly, the photons are not stored in a fixed grid system as is done with illumination maps; the distribution of photons is non-uniform and therefore much more detached from the geometry. The photon is in itself a self-contained data point.

2.3.1 Photon Tracing

In order to store the illumination separately from the geometry of the scene, a photon structure is required. To represent flux, each photon must carry information on the direction of flow of the energy, the amount of energy and the location of the photon. A typical photon structure expressed in C would be as follows:


struct photon
{
    float x, y, z;     // position ( 3 x 32 bit floats )
    float power[3];    // power ( 3 x 32 bit floats )
    float dir[3];      // incident direction ( 3 x 32 bit floats )
    short sortFlag;    // splitting-plane flag used in the kd-tree
};

In cases where memory is a concern, the photon can be compressed by representing the power as four bytes using Ward's shared-exponent RGB format (Ward, 1991) and by using spherical coordinates for the photon direction.
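A sketch of such a shared-exponent packing is shown below (function names are ours, and the exact byte layout of Ward's format may differ in detail): the largest of the three channels determines a common exponent, and each channel then stores an 8-bit mantissa, so the power fits in four bytes instead of twelve.

```c
#include <math.h>

/* Sketch of a shared-exponent packing in the spirit of Ward's format (function
 * names are ours, and the exact byte layout of the real format may differ):
 * the largest of the three channels fixes a common exponent, and each channel
 * then stores an 8-bit mantissa, so the power fits in four bytes. */

void rgbe_encode(const float rgb[3], unsigned char rgbe[4]) {
    float v = rgb[0];
    if (rgb[1] > v) v = rgb[1];
    if (rgb[2] > v) v = rgb[2];
    if (v < 1e-32f) {
        rgbe[0] = rgbe[1] = rgbe[2] = rgbe[3] = 0;
    } else {
        int e;
        float scale = frexpf(v, &e) * 256.0f / v;  /* v = m * 2^e, 0.5 <= m < 1 */
        rgbe[0] = (unsigned char)(rgb[0] * scale);
        rgbe[1] = (unsigned char)(rgb[1] * scale);
        rgbe[2] = (unsigned char)(rgb[2] * scale);
        rgbe[3] = (unsigned char)(e + 128);        /* biased shared exponent */
    }
}

void rgbe_decode(const unsigned char rgbe[4], float rgb[3]) {
    if (rgbe[3] == 0) {
        rgb[0] = rgb[1] = rgb[2] = 0.0f;
    } else {
        /* each channel is (mantissa + 0.5) * 2^(exponent - 128 - 8) */
        float f = ldexpf(1.0f, (int)rgbe[3] - (128 + 8));
        rgb[0] = ((float)rgbe[0] + 0.5f) * f;
        rgb[1] = ((float)rgbe[1] + 0.5f) * f;
        rgb[2] = ((float)rgbe[2] + 0.5f) * f;
    }
}
```

The round trip loses at most about 1/256 of the dominant channel, which is ample precision for photon powers.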

The first pass of the photon mapping algorithm is referred to by Jensen (2001) as photon

tracing. Photon tracing is the process of emitting photons from the light sources and

tracing them though the model. Photon tracing works in exactly the same way as ray

tracing. The difference is that the photons in photon tracing propagate flux whereas the

rays in ray tracing gather radiance. The interaction of a photon with an object may

therefore differ from the interaction of a ray with an object. For example when a ray is


refracted its radiance is changed based on the relative index of refraction, this does not

happen to photons.

The photon map represents the direct and indirect illumination. It is created as a result of photon tracing, which is the first pass of the algorithm: photons are scattered from the light sources into the scene and eventually stored in the photon map. The interaction of a photon with a point on a surface can be described as follows (John, 2003):

- When a photon meets a diffuse surface, its incident direction and energy are stored in the photon map at the point of intersection, and it is also reflected in a random direction. The photon map only stores diffuse interactions.

- When a photon meets a specular surface, the photon map is not updated. The photon is either reflected or refracted depending on the surface properties.

- If the surface is partially diffuse and partially specular, then a method must be used to randomly determine which component of the material carries more weight. One such method is Russian roulette.

- If a photon is absorbed by a surface, it is stored in the photon map and the photon's life is ended.

A very important technique in photon tracing is Russian roulette (Arvo and Kirk, 1990). Russian roulette is used to decide whether a photon should be reflected, absorbed or transmitted. It is then used to decide whether a reflected photon should be reflected diffusely or specularly; in other words, it determines which component of the material carries more weight. This is illustrated in the following pseudo code, where d is the reflectivity of the surface and Φ is the incoming photon power:

p = d                   // probability of reflection
ξ = random()            // ξ is a uniformly distributed random number in [0, 1)
if (ξ < p)
    reflect photon with power Φ
else
    absorb photon

Algorithm 1. Russian roulette algorithm.

The power of the Russian roulette algorithm becomes clearly apparent when one imagines a situation where 1000 photons are shot at a surface with a reflectivity of 0.5. We could reflect all 1000 photons with half the power, or reflect only half (500) of the photons with the full power. Russian roulette enables the latter option to be taken and is clearly a powerful technique for reducing the computational requirements of photon tracing (Shirley, 1990).
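The decision procedure, extended to a surface with both a diffuse and a specular component as described earlier, can be sketched in C (all names are ours; d + s ≤ 1 is assumed):

```c
/* Sketch (names are ours) of the Russian roulette decision extended to a
 * surface with diffuse reflectance d and specular reflectance s, d + s <= 1.
 * A single uniform random number xi in [0, 1) selects diffuse reflection with
 * probability d, specular reflection with probability s, and absorption
 * otherwise; the surviving photon keeps its full power. */

enum PhotonEvent { DIFFUSE_REFLECT, SPECULAR_REFLECT, ABSORB };

enum PhotonEvent russian_roulette(double d, double s, double xi) {
    if (xi < d)
        return DIFFUSE_REFLECT;
    else if (xi < d + s)
        return SPECULAR_REFLECT;
    else
        return ABSORB;
}
```

Because the surviving photon keeps its full power, all stored photons carry roughly equal power, which keeps the radiance estimate well conditioned.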

As already mentioned, photons that hit non-specular surfaces are stored in a global data structure called the photon map. Each photon can be stored several times along its path, and the information about a photon is stored at the point on the surface where it was absorbed. Initially the photon map is arranged as a simple flat array of photons; for efficiency reasons this array is reorganised into a balanced kd-tree structure before rendering.

2.3.2 Computing Radiance from Photon maps

The second pass of the photon mapping algorithm is the rendering phase. It is during this phase that the photon mapping algorithm uses Monte Carlo ray tracing to trace rays from the observer into the scene. The photon map that was generated earlier is queried for statistical information on the illumination in the scene. The photon map represents incoming flux, with each photon carrying just a fraction of the light source power. A photon hit in a region indicates that the region receives some illumination from the light source, but a single photon hit does not indicate how much light the region receives. The irradiance for the region can only be estimated from the photon density, ΔΦ/ΔA. "The irradiance estimate is determined by summing the closest photons found and dividing them by the area of the circle in which they are found" (John, 2003 p.260).

The above argument can also be demonstrated mathematically from the rendering

equation derived in section 2.1.1 (equation 14). Equation 12 expressed the reflected

radiance term as:


L_r(x, \omega) = \int_\Omega f_r(x, \omega', \omega) \, L_i(x, \omega') \, (\vec{n} \cdot \omega') \, d\omega' \qquad (15)

where L_r is the reflected radiance at x in direction ω, Ω is the hemisphere of incoming directions, n is the normal of the surface at x, f_r is the BRDF at x, and L_i is the incoming radiance. Since the photon map provides information about the incoming flux, the incoming radiance, L_i, needs to be rewritten and expressed in terms of flux. This can be done by using the derivatives of equation 8 (section 2.1.1.1):

L_i(x, \omega') = \frac{d^2\Phi_i(x, \omega')}{(\vec{n} \cdot \omega') \, d\omega' \, dA} \qquad (16)

Equation 16 can now replace the incoming radiance term in equation 15:

L_r(x, \omega) = \int_\Omega f_r(x, \omega', \omega) \, \frac{d^2\Phi_i(x, \omega')}{dA} \qquad (17)

The incoming flux is approximated from the photon map by locating the n photons that are nearest to x. Each photon p has the power ΔΦ_p(ω_p), so by assuming that each photon intersects the surface at x, the integral in equation 17 approximates to the summation expressed in equation 18:

L_r(x, \omega) \approx \sum_{p=1}^{n} f_r(x, \omega_p, \omega) \, \frac{\Delta\Phi_p(x, \omega_p)}{\Delta A} \qquad (18)

The procedure is akin to expanding a sphere around x until it contains n photons, then using these photons to estimate the radiance. The equation still contains ΔA, which is related to the density of photons around x. By assuming a locally flat surface around x, this area can be computed as:


\Delta A = \pi r^2 \qquad (19)

which is the area projected by the sphere onto the surface (Figure 5) with a radius r

equivalent to the distance from point x to the furthest photon of the set of n nearest

photons. The final result is an equation that can be used to compute the reflected radiance

at any surface location using the photon map:

L_r(x, \omega) \approx \frac{1}{\pi r^2} \sum_{p=1}^{n} f_r(x, \omega_p, \omega) \, \Delta\Phi_p(x, \omega_p) \qquad (20)

Figure 5 Computing reflected radiance (Jensen, 2001)

It is now clear why the photon map structure must be able to quickly locate the nearest

neighbours to any point within the scene. At the same time it should remain compact

since the intention is for it to store millions of photons. This immediately rules out the

use of simple structures like lists, vectors or multi-dimensional arrays, since searching for

the nearest neighbours through such structures is too costly. Three dimensional grid

structures are impractical because the distribution of photons is highly non-uniform

especially when simulating important effects like caustics that focus light in specific

areas of the scene.

2.3.3 The KD Tree

Given the requirement for efficiency, the kd-tree structure is a natural choice for a photon

map. A kd-tree is a spatial subdivision structure that can be used for orthogonal


range search. The first step in constructing a kd-tree is to define the spatial extent of the

problem; this is simply the maximum spatial extent of the particles in each spatial

dimension. For example in a two dimensional space the problem space is a rectangle

whose sides are the maximum separation between particles in each of the two

dimensions.

Figure 6 illustrates the use of a kd-tree algorithm for spatial subdivision of a two

dimensional space. In the illustration the maximum horizontal distance is that between

particles 2 and 6 and this defines the horizontal size of the problem space, while the

vertical distance between points 3 and 7 defines the vertical size. The original problem

space is the large rectangle labelled Node 1.

The next step is to recursively divide the space in the dimension for which the maximum

spatial separation is the greatest. Thus Node 1 is divided along the vertical dimension to form node 2 and node 3. The recursive division of nodes is continued until there is only

one particle in each cell. The results of each stage of the process can be stored in a tree

structure similar to that depicted in figure 7. Since the recursive spatial division terminates when there is only one particle in each cell, the original eight particles generate a tree with a total of fifteen nodes.

Figure 6 Spatial subdivision using a kd-tree


The kd-tree described here is a perfectly balanced one. It requires the number of photons to be a power of two, and the number of nodes in the tree is always 2N-1, where N is the number of photons (Appel, 1985).

Figure 7 Simplified 2D kd-tree for the spatial subdivision shown in figure 5

A three dimensional kd-tree is built in the same manner but includes a third dimension.

The problem space in this case would take the form of a cube.

During its second pass, the photon mapping algorithm uses Monte Carlo ray tracing to

render the scene. Rays are traced from the observer through each pixel of the view plane,

into the scene. The kd-tree (photon map) is used to find the nearest neighbours at the

points of intersection between rays and objects in the scene. Because the photon map is

now constructed spatially it returns the relevant photons much faster. This is because the

spatial arrangement allows the algorithm to rapidly narrow the problem space by

performing qualifying tests at group level. The search begins from the root, which is

labelled Node 1 in the given example. A qualifying test is applied to Nodes 2 and 3. If a node fails to qualify, all its sub-nodes are disqualified without the need to test each of their members (Bentley, 1975; Bentley, 1979).
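The construction and pruning just described can be sketched in C++. This is an illustrative 2D sketch under simplifying assumptions, not the project's implementation: it splits on the axis of greatest extent using a median sort, stores one point per node rather than one particle per leaf cell, and finds a single nearest neighbour instead of n.

```cpp
#include <algorithm>
#include <array>
#include <memory>
#include <vector>

using Point = std::array<double, 2>;

struct KdNode {
    Point point;
    int axis;  // splitting dimension
    std::unique_ptr<KdNode> left, right;
};

// Build a balanced kd-tree by median split on the axis of greatest extent,
// as in the figure 6 example.
std::unique_ptr<KdNode> build(std::vector<Point> pts) {
    if (pts.empty()) return nullptr;
    double extent[2];
    for (int d = 0; d < 2; ++d) {
        auto mm = std::minmax_element(
            pts.begin(), pts.end(),
            [d](const Point& a, const Point& b) { return a[d] < b[d]; });
        extent[d] = (*mm.second)[d] - (*mm.first)[d];
    }
    int axis = extent[1] > extent[0] ? 1 : 0;
    std::sort(pts.begin(), pts.end(),
              [axis](const Point& a, const Point& b) { return a[axis] < b[axis]; });
    std::size_t mid = pts.size() / 2;
    auto node = std::make_unique<KdNode>();
    node->point = pts[mid];
    node->axis = axis;
    node->left = build(std::vector<Point>(pts.begin(), pts.begin() + mid));
    node->right = build(std::vector<Point>(pts.begin() + mid + 1, pts.end()));
    return node;
}

double dist2(const Point& a, const Point& b) {
    double dx = a[0] - b[0], dy = a[1] - b[1];
    return dx * dx + dy * dy;
}

// Nearest-neighbour search with the group-level qualifying test: the far
// subtree is disqualified whenever the splitting plane lies further away
// than the best candidate found so far.
void nearest(const KdNode* node, const Point& q,
             const KdNode** best, double* bestD2) {
    if (!node) return;
    double d2 = dist2(node->point, q);
    if (d2 < *bestD2) { *bestD2 = d2; *best = node; }
    double diff = q[node->axis] - node->point[node->axis];
    const KdNode* nearSide = diff < 0 ? node->left.get() : node->right.get();
    const KdNode* farSide  = diff < 0 ? node->right.get() : node->left.get();
    nearest(nearSide, q, best, bestD2);
    if (diff * diff < *bestD2) nearest(farSide, q, best, bestD2);
}
```

In the photon map proper, the search maintains a max-heap of the n best candidates rather than a single best distance, but the pruning test is the same.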

The performance of the photon mapping algorithm is mainly dependent on the efficiency

of three activities:

1. The cost or speed of constructing and balancing the photon map. Unbalanced kd-

trees are faster to construct but are much slower at tracing the nearest neighbour.


2. The cost of identifying the object of intersection during ray tracing. This cost is

dependent on the number of objects in the scene but is increased further by the

presence of reflective and refractive materials.

3. The cost of computing irradiance estimates from the photon map, at the point of

intersection. This relates to how fast a set of nearest neighbouring photons can be identified.

This project is mainly concerned with optimisation aspects that relate directly to the construction of photon maps and the computation of irradiance estimates (points 1 and 3 above).

2.3.4 Multiple photon maps

The use of multiple photon maps is not entirely new. Usually three photon maps are

used, one for caustics, another for indirect illumination and a third one for participating

media. This gives flexibility to solve different parts of the rendering equation separately,

and keeps the global photon map small, allowing for faster searches. Larsen and Christensen used multiple photon maps to optimise the computation of irradiance estimates (Larsen, 2003). They proposed an algorithm that places photons in separate photon maps when adjacent surfaces have a large angle between them. This criterion for splitting the photon map has the added advantage of preventing undesirable leakage of irradiance at corners, as illustrated in figure 8.

Figure 8 Photon leakage at corners

The rule they chose for the photon map segregation is the following (Larsen, 2003):


1. If the angle between two adjacent polygons is below a predefined threshold (α), they should use the same photon map for storing photons and lookups on the nearest neighbours. An edge between two such polygons is classified as connected.

2. If the angle between two adjacent polygons is above the same predefined threshold (α), they should use different photon maps for storing photons and lookups on the nearest neighbours. An edge between two such polygons is classified as unconnected.
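The classification rule above reduces to a sign-and-threshold test on the angle between the two polygon normals. The sketch below is illustrative (the names are hypothetical, not Larsen's code):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

double dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

double length(const Vec3& v) { return std::sqrt(dot(v, v)); }

// An edge between two adjacent polygons is "connected" when the angle
// between their normals n1 and n2 is below the threshold alpha (radians).
bool edgeConnected(const Vec3& n1, const Vec3& n2, double alphaRadians) {
    double c = dot(n1, n2) / (length(n1) * length(n2));
    if (c > 1.0) c = 1.0;    // guard acos against rounding error
    if (c < -1.0) c = -1.0;
    return std::acos(c) < alphaRadians;
}
```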

Larsen was able to demonstrate significant improvements in performance; however, he noted that the criterion he used to split the photon maps was not suitable for scenes generated using fractal algorithms, because of the high number of polygons and sharp edges. The algorithm was not applied to caustics and volume photon maps because of difficulties in determining when to split the maps (Larsen, 2003). Furthermore, the number of maps used is contingent on the geometry of the scene.

The use of multiple photon maps has been shown by previous research to reduce computational costs, although the benefits are limited by the criteria used to split the photon maps. In the following chapters an alternative algorithm and splitting criterion will be proposed. The proposed algorithm provides a means of sub-dividing Larsen's multiple photon maps into even smaller units. This algorithm is the hierarchical photon mapping algorithm.


Chapter 3: Hierarchical Photon Maps.

3.1 Motivation

The idea of using hierarchical photon maps, as a means of optimising the computation of

irradiance estimates, was conceived by studying the complexity of the photon mapping

algorithm using the big O notation.

The big O notation was first introduced by the German number theorist Paul Bachmann in his 1894 book 'Analytische Zahlentheorie' and later popularised by another German number theorist, Edmund Landau, hence it is also called Landau's symbol. Big O

notation is used in complexity theory, computer science, and mathematics to describe the

asymptotic behaviour of functions. Big O notation is useful when analyzing algorithms

for efficiency. For example, the time (or the number of steps) it takes to complete a

problem of size n might be found to be:

T(n) = 4n^2 - 2n + 2 (21)

As n grows larger, the n^2 term becomes more dominant, so at some point all other terms

can be neglected. Furthermore, the actual value of the constants in the equation will

depend on the precise details of the implementation and the hardware it runs on. This

implies that these constants should also be neglected for any hardware independent

examination of algorithm complexity. The big O notation focuses on the dominant term, so one could say that equation 21 represents an algorithm with an order of n^2 time complexity, which is written as T(n) = O(n^2). Note that this notation doesn't indicate how quickly or slowly the algorithm actually executes for a given input.

The most common complexities, in order of increasing time, are: O(1), O(log N), O(N), O(N log N) and O(N^2), followed by other polynomial times such as O(N^3), and then the very slow algorithms, such as a brute-force solution of the travelling salesman problem, which takes O(N!) time (Norman, 1995).

Table 1 below uses big O notation values to reflect the complexities of using a varying

number of multiple photon maps. It is through the development and the examination of

this table of complexities that the idea of hierarchical photon maps was conceived.

The values have been computed for a scene with 137 objects and 137,000 photons. The

search and insertion of photons in a kd-tree is an O(log N) operation while the actual

construction is an O(N log N) operation. It is clear from these computations that the


computational time required to construct the photon maps reduces as the number of maps increases, as does the search time for a single photon map. However, the computational time required to search through just two of the smaller maps is far greater than that required to search a single global map. This imposes a serious restriction on the splitting of photon maps: all the nearest neighbouring photons must be placed within the same map. This probably explains why Larsen (2003) was unable to apply multiple photon maps to caustics and volume photon maps. Since the photon maps cannot be split beyond this point, the alternative is not to split the maps but to create sub-maps, or small extracts of the larger maps, and to develop an algorithm that decides when it is appropriate to search the larger map and when to search the sub-map. The algorithm should avoid searches across more than one map as much as possible, or any optimisation benefit will be lost. This is the basic principle behind the hierarchical photon mapping algorithm.

Table 1. Complexity of multiple kd-trees
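The style of calculation behind table 1 can be reproduced with a short sketch. This is an illustrative cost model, not the dissertation's code: costs are abstract step counts with constants neglected, and an even split of the N photons across k kd-trees is an assumption.

```cpp
#include <cmath>

// Construction cost of k balanced kd-trees holding N photons in total:
// k trees of N/k photons each, i.e. k * (N/k) * log2(N/k) = N * log2(N/k).
double constructionCost(double N, double k) {
    return N * std::log2(N / k);
}

// A lookup costs log2(N/k) per map searched; a search that must visit two
// maps therefore costs twice that of a search confined to one map.
double lookupCost(double N, double k, int mapsSearched) {
    return mapsSearched * std::log2(N / k);
}
```

With N = 137,000 photons in a single map, a lookup costs about 17.1 steps; split across 137 maps of 1,000 photons each, a lookup in one map costs about 10.0 steps, but a search that spans two maps costs about 19.9 steps, which is the restriction discussed above.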

3.2 Generating Shared Photon Maps.

The hierarchical photon mapping algorithm is based on a two tier photon map system.

The first tier consists of photon maps that are presumed to be independent of each other.

These shall be referred to as shared photon maps, since they are shared by several

polygons. The second tier consists of sub-sets of the shared photon maps. These sub-sets

are created as an alternative to actually splitting the shared maps. The second tier photon

maps shall be referred to as local photon maps.


During rendering, the shared photon map is queried as usual for the set of nearest

neighbours to each point of intersection between the scene objects and the rays traced

into the scene. The neighbours that are of interest are not necessarily the real closest neighbours to the intersection point but the set of neighbours that actually contribute to the irradiance estimate calculation, and therefore lie on surfaces that point in the same direction as the surface for which the irradiance is being calculated. This means that photons located on two perpendicular surfaces can be safely placed in two separate photon maps irrespective of their proximity to each other, without affecting the computed irradiance estimate and with the added benefit of eliminating photon leakage (see figure 8). The rules used to determine which polygons use the same photon map are similar to

those used by Larsen and Christensen (2003). They can be summarised as follows:

1. The polygons need to be adjacent to each other, or they need to share an adjacent neighbouring polygon.

2. The angle between the adjacent polygons must be below a predefined threshold. Note that the angle between two polygons is the angle between their normals.

Polygons that are found to share the same photon map shall be referred to as connected polygons. The photon maps that they share are the first tier maps of the hierarchy and shall henceforth be referred to as the shared maps. These shared maps are individually smaller than a single global map and, as illustrated in table 1, they are also faster to collectively construct and faster to individually query.

The following pseudo-code describes the part of the hierarchical photon mapping

algorithm that generates these shared photon maps:

for each polygon A in the scene:
    assign a unique map ID to polygon A if one is not already assigned, then
    flag polygon A as checked
    for each remaining unchecked polygon B in the scene:
        find out if polygon A is connected to polygon B and if so:
            1. assign A's ID to B and to any other polygon C whose ID
               already matches B's current ID
            2. flag or mark the connected edges of polygons A and B
finally, for each allocated map ID, create a shared photon map


Algorithm 2 Creating shared photon maps.
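Algorithm 2 is essentially a connected-component labelling pass over the polygons. The sketch below is an illustrative rendering of the ID-propagation step, not the project's code; for brevity the connectivity test (adjacency plus the normal-angle threshold) is assumed to be precomputed into a boolean matrix.

```cpp
#include <vector>

// connected[a][b] is assumed precomputed: true when polygons a and b are
// adjacent and the angle between their normals is below the threshold.
std::vector<int> assignSharedMapIds(
        const std::vector<std::vector<bool>>& connected) {
    int n = static_cast<int>(connected.size());
    std::vector<int> id(n, -1);
    for (int a = 0; a < n; ++a) {
        if (id[a] == -1) id[a] = a;      // assign a unique map ID if none yet
        for (int b = a + 1; b < n; ++b) {
            if (!connected[a][b]) continue;
            if (id[b] == -1) { id[b] = id[a]; continue; }
            int old = id[b];             // relabel B and every polygon C that
            for (int c = 0; c < n; ++c)  // already shares B's current ID
                if (id[c] == old) id[c] = id[a];
        }
    }
    return id;
}
```

One shared photon map would then be created per distinct ID remaining in the result.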


Each polygon is now associated with a shared photon map and several polygons may

share the same photon map. Most importantly, each lookup on the nearest neighbour

involves only one of the maps and results in the same set of nearest neighbours as a

lookup in a single global photon map.

Figure 9 demonstrates the number of shared photon maps that would be created for two

different objects. The predefined threshold angle in this case is ninety degrees. In the

case of the cube six shared photon maps would be generated while the second object

generates only three shared maps. Note that the number of shared photon maps is independent of the photon density of the maps or the number of polygons within the scene. It is also important to note that the number of shared photon maps cannot be increased.

Figure 9 Generating shared photon maps

3.3 Generating Local Photon Maps.

As demonstrated earlier the number of shared photon maps is controlled by the geometry

of the scene and cannot be arbitrarily increased. This is a major limitation of the

multiple photon mapping technique proposed by Larsen (2003). Furthermore the benefits

of using multiple photon maps are expected to be more apparent in regions of high

photon density and yet photon density is irrelevant when creating shared photon maps.

This is the reason for the introduction of a second tier of photon maps.

The second tier of photon maps holds copies of some, but not all, of the photons already stored in the shared photon maps. In this case any polygon that has more than a preset

minimum number of photons on its surface, stores a copy of these photons in its own


photon map. These photon maps differ from those discussed earlier, firstly because they are not shared, and secondly because they do not necessarily contain all the real neighbours for a given lookup. This means that searches that fail to retrieve the neighbours in the local photon map will have to be abandoned and repeated against the shared photon map. Repeating the search is certainly computationally expensive; however, it is presumed that the majority of the lookups in the local photon maps will not be repeated, and that the computational savings will outweigh the cost of abandoning and repeating a few lookups.

Nevertheless it is important to:

1. Limit the number of abandoned searches as much as possible.

2. Abandon searches from the local photon maps as early as possible within the

search and thus limit time wasting.

The figure below shows a polygon ABC. Point x is a point of intersection between the polygon and a ray traced into the scene during the rendering pass. The irradiance estimate is to be computed for this point; to do so, a circle is placed around the point of intersection and expanded until it includes the number of nearest neighbours requested. The distance between point x and the closest edge of the polygon is r. This radius will be referred to as the limiting radius.

The full set of nearest neighbours must all fall within the circle formed by the limiting radius r. Searches that expand beyond the limiting radius are abandoned because they extend beyond the limits of the local photon map, and therefore some neighbours may lie outside the local photon map. This is the case at point Y, where the search radius has extended into polygon ACD.

The grey coloured areas represent regions within the polygon for which searches should

be abandoned. The size and shape of this grey edge depends on the density of the photon

map, the photon distribution within the map and the number of nearest neighbours

requested.


Figure 10 Illustration of the constrained nearest neighbour search for connected polygons.


It is clear that the algorithm will need to include a function for quick determination of the

limiting radius. This computation will have to be carried out before each nearest

neighbour lookup. The algorithm can be optimised further by eliminating edges AB and

BC from the limiting radius test, if polygon ABC has no neighbours along these edges.

Furthermore, the edge elimination technique is also applicable in cases where the angle

between ABC and the neighbouring polygons exceeds the predefined threshold. The

exemption of the unconnected edges effectively reduces the grey areas of polygon ABC

and hence increases the number of searches that can be made on local photon maps

without the risk or the need to repeat the search on the shared maps.
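A quick limiting-radius function reduces to point-to-segment distances over the polygon's connected edges. The sketch below is illustrative with hypothetical names, and works in 2D within the polygon's plane; in the application the same test would run in 3D on each triangle. Skipping unconnected edges implements the edge-elimination optimisation described above.

```cpp
#include <algorithm>
#include <array>
#include <cmath>
#include <limits>
#include <vector>

using P2 = std::array<double, 2>;

// Shortest distance from point p to the segment a-b.
double pointToSegment(const P2& p, const P2& a, const P2& b) {
    double abx = b[0] - a[0], aby = b[1] - a[1];
    double apx = p[0] - a[0], apy = p[1] - a[1];
    double len2 = abx * abx + aby * aby;
    double t = len2 > 0.0
        ? std::clamp((apx * abx + apy * aby) / len2, 0.0, 1.0) : 0.0;
    double dx = apx - t * abx, dy = apy - t * aby;
    return std::sqrt(dx * dx + dy * dy);
}

struct Edge { P2 a, b; bool connected; };

// Limiting radius r: distance from the intersection point x to the closest
// connected edge. Unconnected edges are exempted from the test because no
// neighbouring photons can lie beyond them.
double limitingRadius(const P2& x, const std::vector<Edge>& edges) {
    double r = std::numeric_limits<double>::infinity();
    for (const Edge& e : edges)
        if (e.connected)
            r = std::min(r, pointToSegment(x, e.a, e.b));
    return r;
}
```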

As mentioned earlier the algorithm can also be optimised by ensuring that abandoned

searches are actually abandoned at the very early stages of the search. By introducing a

fixed filter radius, searches on local maps that have a higher risk of failure can be avoided

even before they are started. The fixed filter radius can be computed dynamically for

each polygon from the photon density ratio:

$\frac{n}{N} = \frac{\pi r'^2}{A}$ (22)

which leads to:

$r' = \sqrt{\frac{n A}{\pi N}}$ (23)

Figure 11 Searching local photon maps

where r' is the fixed filter radius, A is the polygon area, n is the number of nearest neighbour photons requested and N is the total number of photons in the photon map. The filter radius is inversely proportional to the photon density, and the area it represents is the area in which the probability of finding the nearest photons is highest. Searches on the local map are avoided if the filter radius r' is greater than the limiting radius r.
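Equation 23 and the avoidance test can be sketched as follows. This is an illustrative sketch with assumed names: the filter radius r' is chosen so that a disc of that radius would hold the n requested neighbours if the N photons in the map were spread uniformly over the polygon area A.

```cpp
#include <cmath>

const double kPi = 3.14159265358979323846;

// Equation 23: r' = sqrt(n * A / (pi * N)).
double fixedFilterRadius(double polygonArea, int nRequested, int nTotal) {
    return std::sqrt(polygonArea * nRequested / (kPi * nTotal));
}

// A lookup on the local map is attempted only when the filter radius does
// not exceed the limiting radius r of the intersection point.
bool searchLocalMap(double filterRadius, double limitingRadius) {
    return filterRadius <= limitingRadius;
}
```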

At this stage each polygon is already associated with a shared photon map and several polygons may share the same photon map, but now some polygons also have their own local photon maps. All photons in the local maps are duplicates of a few photons from the shared photon maps. This duplication of photons increases the required storage space by a considerable amount. The extra demand on storage can be reduced by storing references to photons within the photon maps rather than the actual photons themselves. The storage of references should also speed up the balancing of the kd-trees, which are the primary structure used by the photon maps within this project.

A re-examination of figure 9 above can be used to demonstrate the benefits of introducing the second tier of photon maps. The number of photon maps for each object was initially restricted to six and three respectively, restrictions based on the number of shared photon maps. The introduction of local photon maps raises these limits because each of the polygons can potentially have its own local photon map. The cube can now generate up to twelve local photon maps in addition to the six shared photon maps. The second object illustrated in figure 9 has its multiple photon map possibilities drastically increased by an additional thirty-two local photon maps. The actual number of photon maps generated will depend on the photon density and the threshold chosen for the creation of local maps.

The following pseudo-code describes the part of the hierarchical photon mapping

algorithm that generates the local photon maps:


for each light source in the scene:
    release photons into the scene
for each photon that intersects a polygon A with map ID x:
    find out if the photon is to be reflected, refracted or stored
    if the photon is to be stored:
        1. store a copy of the photon in polygon A's local map
        2. store another copy in the shared map with ID x
now for each polygon B in the scene:
    if the photon count in B's local map is less than the threshold:
        delete polygon B's local map
    else if the photon count in B's local map meets the threshold:
        compute the fixed filter radius for polygon B

Algorithm 3 Creating local photon maps

The following pseudo-code describes the manner in which the proposed algorithm computes irradiance using both tiers of photon maps:

for each point of intersection on a polygon:
    1. determine if the polygon has a local photon map and if so:
           calculate the limiting radius r
           if the filter radius r' is greater than the limiting radius r:
               do not search the local photon map
           else start the search for the n nearest neighbours in the local
           photon map, but if the search radius reaches the limit r:
               abandon the search in the local photon map
    2. if the polygon has no local photon map, or if a search in the local
       map has been abandoned:
           get the ID of the first tier photon map associated with this polygon
           search for the n nearest neighbours in this photon map
    3. for each of the nearest photons found, ensure that the photon
       contributes to surface illumination by checking that its direction
       faces the normal of the surface
    4. compute the irradiance estimate by adding up the power of the n
       photons and dividing this by the area of the search circle, a circle
       whose radius is equal to the distance to the furthest of the
       neighbours
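Step 3's contribution test can be sketched as a sign check between the stored photon's direction and the surface normal. The names are illustrative, and the convention that a stored photon's direction points towards the surface is an assumption of this sketch:

```cpp
struct V3 { double x, y, z; };

double dot3(const V3& a, const V3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// A photon contributes to the estimate only when it arrived from the side
// of the surface that the normal points towards, i.e. its incoming
// direction opposes the surface normal.
bool contributes(const V3& photonDirection, const V3& surfaceNormal) {
    return dot3(photonDirection, surfaceNormal) < 0.0;
}
```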

3.4 Summary

The hierarchical photon mapping algorithm has been described fully within this chapter. It uses a two tier system of multiple photon maps. The introduction of the

second tier of photon maps allows multiple photon maps to be used for caustics and not

just direct illumination. It also allows scenes to generate a much larger quantity of photon

maps. The hierarchical use of multiple photon maps is unique to this project and is

intended to address the limitations identified by Larsen (2003). Although the first tier

photon maps can be used on their own to compute irradiance at any point within the

scene, they are only referred to in the absence of local photon maps or when a search on a

local map is abandoned.

The local photon maps were introduced at the price of having to perform some lookups

twice. The mitigation measures to be taken have been discussed. They included the

introduction of the limiting radius and the filter radius. The limiting radius has been defined as the distance from point x on the polygon to the nearest edge of the same polygon, where x is the point of intersection between a ray generated during ray tracing and the

polygon. The limiting radius must be computed for each lookup. The fixed filter radius is computed once for each polygon; it demarcates the area in which the neighbouring photons would be found if the photon distribution were uniform.

The next chapter describes the test application and the details of the experiments performed to assess the performance of the hierarchical photon mapping algorithm.


Algorithm 4 Computing irradiance from local and shared photon maps.


Chapter 4: Experimental Details.

4.1 The Hierarchical Photon Mapping Test Application.

The test application is coded in C/C++. It basically comprises scene, object and photon map classes. The primitive classes (Sphere and Triangle) are used to define the model, while the material class defines the physical attributes and lighting factors for each primitive. The object class combines the primitive classes and the material class in one entity by inheritance. The objects are kept simple and comprise a single primitive which, in the context of this dissertation, is either a triangle or a sphere.

The application loads its scenes from a file in ASCII format and maintains an internal

image buffer for file or video output. The application is able to generate image files in

Portable Pixel Map (PPM) format.

The basic rendering code is provided by John (2003); however, this has been substantially modified to accommodate the multiple photon map algorithm, the ability to simulate light refraction, anti-aliasing functionality, texture mapping and several other functionalities that were missing from the basic code. The application can be switched to use a single photon map, multiple photon maps, or to run as a ray tracer without any photon mapping.

A timer class has been incorporated into the application. Its main function is to make a

log of the application settings and dependent variable values for each test run. These logs are subsequently used for performance analysis.

The source code for the application is available within Appendix C, while a complete

executable copy of the application including the sample scene files, output files,

installation instructions and operating instructions are available on the enclosed compact

disc.

4.2 The Experimental Objectives.

The objective of the experiment is to compare the performance of a single global photon

map, multiple shared photon maps without local maps and the complete hierarchical

photon maps. The performance areas to be compared are:

1. The cost or speed of constructing and balancing the photon maps.

2. The cost of computing irradiance estimates from the photon maps.


4.3 The Experimental Method.

Three different scenes have been purpose built for this project. All three scenes incorporate complex light transport models, including mirror reflections, refractions and multiple light sources. The first scene (scene 1), however, is geometrically simple in that the intersecting polygons are either parallel or perpendicular to each other. It comprises 149 triangles. The second scene is slightly more complex because it has hexagonally shaped objects.

The third scene, scene 3, is the most complex of the three. It comprises 2,916 triangles used to represent a curved chair in a room. The curved chair was generated using a fractal algorithm and therefore consists of many small triangles with small angles between connected triangles.

Figure 12 Scene 1 rendered using hierarchical photon maps


Figure 13 Scene 2 rendered using hierarchical photon mapping

Figure 14 Scene 3, the egg chair rendered using ray tracing


A set of tests is performed to compare the performance of the application in the three different application modes, namely the single global photon map mode, the multiple shared photon map mode and the multiple two tier photon map mode (hierarchical). These tests are performed in each mode on all three scenes, using a varying number of photons stored in the photon maps. The tests will demonstrate whether the use of multiple photon maps actually optimises performance; they will also establish the relationship between the optimisation gains and the number of photons scattered in the scene.

Table 2 Description of tests performed

Test     Photons scattered           Photons per pixel    Local photon map
                                     contribution         minimum size
Test 1   2,500 photons per object    25                   250
Test 2   5,000 photons per object    50                   500
Test 3   15,000 photons per object   150                  1,500
Test 4   30,000 photons per object   300                  3,000
Test 5   60,000 photons per object   600                  6,000

The local photon maps are generated using a minimum size setting equivalent to 10

times the number of photons required to compute the irradiance estimate for a pixel. For

instance in test 2, a minimum local photon map size of 500 is used and the number of

nearest neighbouring photons for a pixel contribution is set to 50. Reducing the local

photon map threshold increases the number of local maps but also increases the risk of

abandoned searches. The chosen ratio of number of searched photons per irradiance

computation to the minimum local map size is not necessarily optimal; however it serves

the purpose of maintaining consistency across the tests.

It is important to remember that the rendering stage of the photon mapping algorithm

includes a ray tracing component. No attempt has been made to optimise the ray tracing

process within the application; it may therefore take up a considerable portion of the

rendering time and distort the results. The ray tracing component can be quantified by

measuring the time taken to render the scene with an empty photon map. The time taken


to query the photon maps and compute irradiance estimates can then be computed by

deducting the ray tracing component from the overall time taken to render the scene.

Net cost of querying photon maps and computing irradiance estimates = cost of rendering with populated photon maps − cost of rendering with empty photon maps

Secondly it should be remembered that the main difference between hierarchical photon

mapping and multiple photon mapping, as proposed by Larsen (2003), is the presence of

local photon maps. Therefore shared photon maps without local maps are equivalent to

multiple photon maps as proposed by Larsen (2003).

4.4 The Experiment Results.

A complete set of raw results from all tests carried out can be found in appendix A of this

document. Tables 3 and 4 summarise the cost of querying the photon maps and of

calculating irradiance estimates in the various modes of operation for scene 1 and scene

2. For both these scenes the performance of the multiple photon mapping algorithms was

far superior to that of the traditional single global photon maps. The performance benefits

of multiple photon mapping increased with increasing number of photons stored in the

scenes.

There was little difference between the performance of hierarchical photon mapping and that of the plain multiple photon mapping algorithm. The lack of improvement is probably due to the

small number of local photon maps being used. The geometry of scene 1 favours the use

of shared photon maps and as such 80% of the photon maps generated were of the shared

type. Furthermore the cost of abandoning searches against the local maps appears to be

taking its toll. An 8% drop in performance is registered when Test 5 is performed on

scene 1.

The hierarchical photon mapping algorithm performed better against scene 2. This scene

has a few hexagonally shaped objects and as a result fewer shared photon maps are generated. Effectively the ratio of shared photon maps to local photon maps changes in


favour of the local photon maps and the shared maps now represent only 52% of all

photon maps in the scene. Despite the improved ratio of local photon maps to shared

photon maps, the performance of the hierarchical photon maps in scene 2 is only slightly

better than that of the multiple shared photon maps without local maps. This

improvement is only apparent when the number of photons in the scene is significantly

high as in test 4 and test 5 (figure 15).

Table 3 Result Summary for tests performed on scene 1

Table 4 Result Summary for tests performed on scene 2

The shared photon maps represent only 2% of the photon maps in scene 3. The scene has

2,916 triangles and when test 2 was performed, over 1,152,000 photons were stored. The

scene generates 529 local photon maps but only 12 shared photon maps. This is because

the angle between the triangles is no longer a good measure for when to split the photon

map. Under these circumstances the hierarchical photon maps are queried and irradiance

estimates are computed in less than half the time taken by the multiple shared photon

maps without local maps (figure 17). The manner in which the scene geometry dictates

the quantity of shared photon maps is well illustrated in chapter 3, figure 9.


[Line graph: cost of querying photon maps in seconds (0 to 3,000) against the number of stored photons (26,019; 50,513; 152,881; 384,119; 778,138), comparing hierarchical photon maps, multiple photon maps without local maps, and a single global photon map.]

Figure 15 Graph showing rendering performance for scene 1

[Line graph: cost of querying photon maps in seconds (0 to 3,000) against the number of stored photons (7,814; 16,178; 48,869; 97,588; 180,389), comparing hierarchical photon maps, multiple photon maps without local maps, and a single global photon map.]

Figure 16 Graph showing rendering performance for scene 2


[Bar chart: querying cost in seconds for test 2 on scene 3. Hierarchical photon mapping: 40 sec; Larsen's multiple photon mapping (no local maps): 85 sec.]

Figure 17 Graph showing rendering performance results test 2 performed on scene 3


Chapter 5: Conclusion.

This project has demonstrated that hierarchical photon mapping is an effective way of

optimizing the computation of irradiance estimates. The main benefit of the algorithm is

that it allows the photon map to be split almost arbitrarily and therefore extends the use

of multiple photon maps to a new level. Splitting the photon map into several photon

maps has the following advantages:

- It allows faster irradiance calculations.

- It allows faster balancing of the photon maps.

- It removes leakage of illumination at corners.

- It allows updates to be made to individual photon maps without modifying the whole structure.
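The balancing advantage follows from the O(n log n) cost of building a balanced kd-tree: splitting n photons across m maps replaces one n log n build with m builds of size n/m, a strictly smaller total. A back-of-envelope sketch under this cost model (the numbers are illustrative, not measurements from the project):

```python
import math

def balance_cost(n):
    # O(n log n) model for balancing a kd-tree photon map of n photons
    return n * math.log2(n) if n > 1 else 0.0

# one global map versus the same photons split across 16 smaller maps
n = 1 << 20                         # roughly one million photons
single = balance_cost(n)            # n * 20
split = 16 * balance_cost(n // 16)  # n * 16
print(split / single)               # 0.8, i.e. a 20% saving in this model
```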

The project also exposed some limitations of the hierarchical photon mapping algorithm.

It showed that in scenes that predominantly comprise flat, perpendicular surfaces, the

presence of local photon maps may actually reduce the overall efficiency of a multiple

photon map, as was the case in scene 1. Secondly, the algorithm requires a high number of

photons to trigger the generation of local maps, and a large number of local maps in

order to significantly improve the computation of irradiance. Thirdly, the algorithm

increases storage requirements, since the local photon maps duplicate part of the

shared photon maps. Finally, the algorithm is rather complex and requires the

incorporation of special mitigation techniques to guarantee its success (see chapter 3,

pages 35-37).

This project chose to use the polygon photon density as the basis for the creation of local

photon maps. This may not be appropriate for extremely complex scenes, especially those

generated by fractal algorithms, since too many of the polygons will individually fail to

qualify for a local photon map. In such cases it is preferable to group polygons into

objects or sub-objects and to use the group's photon density as the basis for creating the

local maps. This would allow a group of polygons to create and share a local map without

reverting to the shared maps. This line of thinking leads to the possibility of a third tier

within the photon map structure. All of these possibilities require further investigation.
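The grouping strategy suggested above can be sketched as follows. This is a minimal illustration rather than the project's code: the function and parameter names are hypothetical, and a simple photon-count threshold stands in for a full photon-density test. Note that the shared side keeps every photon, since (as noted above) local maps duplicate part of the shared maps.

```python
def assign_local_maps(groups, min_local_size):
    """Decide which polygon groups earn their own local photon map.
    `groups` maps a group name (polygon, object or sub-object) to the
    photons deposited on it; a group qualifies for a local map once it
    holds at least `min_local_size` photons."""
    local, shared = {}, []
    for name, photons in groups.items():
        if len(photons) >= min_local_size:
            local[name] = list(photons)  # duplicated into a local map
        shared.extend(photons)           # shared maps keep every photon
    return local, shared
```

Grouping at the object level rather than per polygon lets many small fractal polygons pool their photons and jointly pass the threshold, which is exactly the third-tier possibility raised above.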

The efficiency of the hierarchical photon mapping algorithm can be improved much

further by improving the computation of the fixed filter radius (see chapter 3, page 36).

The current computation assumes that the photons are evenly distributed across the

polygon, which is normally not the case. An inaccurate limiting radius may either

increase the number of costly abandoned searches performed on a local map, or cause the

application to search a shared map in circumstances where a search of a local map would

have been more efficient and produced the same result. One way of improving the

accuracy of the limiting radius is to create a separate photon map for caustic illumination

and to apply the hierarchical photon mapping algorithm to the indirect illumination only.

This is another area that requires further investigation.
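Under the even-distribution assumption criticised above, a limiting radius follows directly from the polygon area A and stored photon count N: a disc of area kA/N is expected to contain the k photons needed for an estimate, giving r = sqrt(kA/(piN)). The sketch below is illustrative only; the `slack` factor is a hypothetical loosening parameter, not something taken from the project.

```python
import math

def limiting_radius(polygon_area, stored_photons, k, slack=1.0):
    """Fixed filter radius under the even-distribution assumption:
    a disc of area k*A/N should, on average, contain the k photons
    needed for the irradiance estimate. `slack` (> 1) loosens the
    bound to reduce abandoned searches on unevenly lit polygons."""
    expected_disc_area = k * polygon_area / stored_photons
    return slack * math.sqrt(expected_disc_area / math.pi)
```

Because real photon distributions are uneven (caustics in particular concentrate photons), a radius computed this way can be far too small in bright regions, which is why separating caustic photons into their own map, as proposed above, would make the assumption more accurate.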

Finally, the hierarchical photon mapping algorithm requires fine-tuning for optimal

performance. This can only be achieved by investigating the relationship between the

performance of the algorithm and the ratio of the number of photons used to compute

irradiance to the minimum acceptable size of the local photon maps. This project used

a ratio of 1:10 for each of the tests carried out; however, there is no evidence that

this ratio maximises the use of local photon maps. This is again an area that

requires further investigation.

In general terms this research project achieved its main objectives; however, it is

important to point out that the algorithm was not tested as extensively as intended,

because too much time was spent building the test application and creating test

scenes. Secondly, the testing process was rather slow, particularly for scene 3, which

contains almost 3,000 polygons. These polygons are stored in a basic array structure

by the test application, and as a result the ray tracing and photon scattering

processes were not fully optimised. Nonetheless, the use of a real application

and a real scene gives an accurate indication of the performance optimisation and the

image quality that can be achieved; furthermore, the optimisation of ray tracing and

photon scattering is independent of the hierarchical photon mapping algorithm and

is considered a separate issue, outside the scope of this project.


References

Appel, A. W. (1985) An Efficient Program for Many-Body Simulation, SIAM Journal on

Scientific and Statistical Computing, vol.6, no.1, pp.85-103.

Adelson, E. H. and Bergen, J. R. (1991) The Plenoptic Function and Elements of Early

Vision, Computational Models of Visual Processing, [Online], MIT Press,

Cambridge, MA 1991. Available:

http://web.mit.edu/persci/people/adelson/pub_pdfs/elements91.pdf

Arvo J. (1986) Backward Ray Tracing. Developments in Ray Tracing, SIGGRAPH’86:

Seminar Notes, August 1986, vol.12.

Arvo, J. and Kirk, D. B. (1990) Particle Transport and Image Synthesis. Computer

Graphics, SIGGRAPH '90: Proceedings, August 1990, vol.24, no.4, pp.63-66.

Bentley, J. L. (1975) Multidimensional Binary Search Trees Used for Associative

Searching, Communications of the ACM, vol.18, no.9, pp.509-517.

Bentley, J. L. (1979) Multidimensional Binary Search Trees in Database Applications,

IEEE Transactions on Software Engineering, July 1979, vol.5, no.4, pp.333-340.

Blinn, J. F. (1977) Models of Light Reflection for Computer Synthesized Pictures,

Computer Graphics, SIGGRAPH '77: Proceedings, July 1977, vol.11, no.2, pp.192-

198.

Cook, R. L. (1986) Stochastic sampling in Computer Graphics, ACM Transactions on

Graphics, vol.5, no.1, pp. 55-72.

Gay, L. R. (1987). Educational research: Competencies for analysis and application (3rd

ed.). New York, Merrill.

Goral, C. M., Torrance, K. E., Greenberg, D. P. and Battaile, B. (1984) Modeling the

Interaction of Light Between Diffuse Surfaces. Computer Graphics, SIGGRAPH '84:

Proceedings, July 1984, pp.213-222.


Gortler, S.J., Grzeszcuk, R., Szelinski, R. and Cohen, M.F. (1996) The Lumigraph,

SIGGRAPH ’96: Proceedings, August 1996, pp.43-54.

Available:http://citeseer.ist.psu.edu/gortler96lumigraph.html

Heidrich, W., Lensch, H., Cohen, M. F. and Seidel, H.-P. (1999) Light Field Techniques For

Reflections And Refractions, Eurographics Rendering Workshop Proceedings,

[Online] June 1999, Eurographics, pp.1-11.

Available:http://citeseer.nj.nec.com/heidrich99light.html

Henyey, L.G. and Greenstein, J. L. (1941) Diffuse Radiation in the Galaxy. Astrophysics

Journal, vol.93, pp. 70-83.

Jensen, H. W. and Christensen, N. J. (1995) Photon Maps in Bi-directional Monte Carlo

Ray Tracing of Complex Objects, Computers and Graphics, vol. 19, no. 2, pp. 215 -

224.

Jensen, H. W. (2000) Parallel Global Illumination Using Photon Mapping, SIGGRAPH

2000: Course Notes, New York ACM Press, July 2000.

Jensen, H. W. (2001) Realistic Image Synthesis Using Photon Mapping, Massachusetts,

A. K. Peters Ltd.

John, M. (2003) Photon Mapping, Ohio, Premier Press.

Key, J. P. (1997) Research Design in Occupational Education, [Online], Oklahoma

State University, Available:

http://www.okstate.edu/ag/agedcm4h/academic/aged5980a/5980/newpage2.htm

Koutajoki, K. (2002) BSSRDF (Bi-directional Surface Scattering Distribution Function),

[Online], April 2002, Helsinki University of Technology, Available:

http://www.tml.hut.fi/Opinnot/Tik-111.500/ 2002/paperit/kalle_koutajoki.pdf

Lafortune, E., Foo, S.-C., Torrance, K. E. and Greenberg, D. P. (1997) Non-Linear

Approximation of Reflectance Functions, Computer Graphics, SIGGRAPH '97:

Proceedings, ed. Whitted, T., 1997, pp.117-126.


Lafortune, E. (1996) Mathematical Models and Monte Carlo Algorithms for Physically

Based Rendering, [Online], University of Leuven. Available:

http://www.cs.kuleuven.ac.be/cwis/research/graphics/ERICL/thesis/

Lafortune, E. and Willems, Y. D. (1993) Bi-directional path tracing, Compugraphics

’93 : Proceedings, pp.145-153.

Larsen, B. D. and Christensen, N. J. (2003) Optimizing Photon Mapping Using Multiple

Photon Maps for Irradiance Estimates, WSCG Poster: Proceedings, February

2003.

Levoy, M. and Hanrahan, P. (1996) Light Field Rendering, SIGGRAPH ’96:

Proceedings, August 1996, pp.31-42.

Lewis, R. (1994) Making Shaders More Physically Plausible, Fourth Eurographics

Workshop on Rendering Proceedings, eds Cohen, M. F., Puech, C. and Sillion, F.,

Eurographics, June 1993, pp. 7-9. Available:

http://citeseer.ist.psu.edu/cache/papers/cs/2942/http:zSzzSzwww.cs.ubc.cazSznestzSz

imagerzSzcontributionszSzhealeyzSzimager-trzSzpszSzlewis.cgf.94.pdf/

lewis94making.pdf

Lischinski, D. and Rappoport, A. (1998) Image-Based Rendering for Non-Diffuse

Synthetic Scenes, [Online], The Hebrew University, Available:

http://citeseer.nj.nec.com/lischinski98imagebased.html

Nicodemus, F., Richmond, J. C., Hsia, J. J., Ginsberg, I. W. and Limperis, T. (1977)

Geometric Considerations And Nomenclature For Reflectance. Monograph 161,

National Bureau of Standards (US).

Norman, M. G. and Moscato, P. (1995) The Euclidean Traveling Salesman Problem and

a Space-Filling Curve, Chaos, Solitons and Fractals, vol.6, pp.389-397.

Oren, M. and Nayar, S. K. (1994) Generalization of Lambert's Reflectance Model,

Computer Graphics, SIGGRAPH '94: Proceedings, ed. Glassner, A., July 1994,

pp.239-246.


Phong, B. T. (1975) Illumination for Computer Generated Pictures,

Communications of the ACM, vol.18, no.6, pp.311-317.

Rosenman, R. (2002) Global Illumination, [Online], 24 March 2002, Available:

http://www.richardrosenman.com/global.htm

Saleh, B. and Teich, M. (1991) The Fundamentals of Photonics, New York, John Wiley

and Sons

Schlick, C. (1993) A Customisable Reflectance Model for Everyday Rendering. Fourth

Eurographics Workshop on Rendering Proceedings, eds Cohen, M. F., Puech, C. and

Sillion, F., Eurographics, June 1993

Shirley, P. (2000) Realistic Ray Tracing, Massachusetts, A. K. Peters Ltd.

Shirley, P., Smits, B., Hu, H. and Lafortune, E. (1997) A Practitioners'

Assessment of Light Reflection Models. Pacific Graphics, October 1997, [Online], Available:

http://citeseer.nj.nec.com/shirley97practitioners.html [22 April 2003].

Shirley, P. (1990) A Ray Tracing Method For Illumination Calculation In Diffuse-Specular

Scenes. Graphics Interface Proceedings, May 1990, Canadian Information

Processing Society, pp.205-212.

Schirmacher, H. and Seidel, H. P. (2000) High-Quality Interactive Lumigraph Rendering

Through Warping, Graphics Interface Proceedings, May 2000, pp. 87-94, [Online]

Available: www.cs.ubc.ca/spider/heidrich/Papers/GI.00.pdf

Whitted, T. (1980) An Improved Illumination Model for Shaded Display,

Communications of the ACM, vol.23, no.6, pp.343-349.

Ward, G. (1991) Real Pixels, Graphics Gems II, eds Arvo, J., 1991, San Diego Academic

Press, pp. 80-83

Wallace, J. R., Cohen, M. F. and Greenberg, D. P. (1987) A Two-Pass Solution to the Rendering

Equation: A Synthesis of Ray Tracing and Radiosity Methods. SIGGRAPH '87:

Proceedings, July 1987, pp.311-320.


Watt, A. (1993) 3D Computer Graphics, 2nd edn, Wokingham, England, Addison-Wesley

Publishing Company.


APPENDIX A - Raw test results

Scene 1

Figure 18 Scene 1 rendered using hierarchical photon maps and 60,000 photons per triangle

Table 5 Scene 1 test results using hierarchical photon maps


Table 6 Scene 1 test results using only multiple shared photon maps

Table 7 Scene 1 test results using a single global photon map


Scene 2

Table 8 Scene 2 test results using hierarchical photon maps


Table Scene 2 test results using only multiple shared photon maps

Table 9 Scene 2 test results using a single global photon map


Scene 3

Figure 19 Scene 3 rendered using 5,000 photons per triangle. The blotches are due to the use of a low number of photons per triangle.


APPENDIX B - Image Based Rendering

Image based rendering is the main rival algorithm to photon mapping. It is of such

importance that a discussion of global illumination rendering would be incomplete

without a brief description of this approach.

Image based rendering was introduced as a means of overcoming the tedious task of

creating synthetic models. Image based rendering paradigms can generally be categorised

into two main groups: those that depend on image warping form the first

group, while those that are based on the interpolation of images form the second.

B.1 Light Fields and Lumigraphs

Figure 20 The plenoptic function

Light Fields and Lumigraphs were introduced independently by Levoy and Hanrahan (1996) and

Gortler et al. (1996). Both make use of the plenoptic function (Adelson and Bergen,

1991) to describe the directional radiance distribution in space, and both rely on the fact

that radiance is constant along any ray in empty space. The plenoptic function uses five

parameters to define a ray of light. Three of these parameters, Vx, Vy and Vz, locate the

position of the viewer; the other two, θ and φ, define the line-of-sight vector.


A simplification of the plenoptic function is the two-plane parameterisation, which is

able to define a ray using only four parameters. This 4D function is called the light field.

Each ray passing through the scene is represented by a pair of points (s, t) and (u, v) on

the two planes. A set of all (u, v) samples through a point (s0, t0) on the (s, t) plane is an

image created by a sheared perspective from (s0, t0). An image of the scene from a novel

viewpoint is created by computing the radiance of each ray from the specified viewpoint to

the scene, using quadrilinear interpolation of rays from the closest images.

What this simply means is that a picture taken of the left side of an object, and a picture

taken from the right side of the same object, can be interpolated to obtain an image of the

front of the object.
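The interpolation step described above can be sketched as follows, assuming the ray has already been intersected with the two planes to give continuous (s, t, u, v) coordinates. The function name and array layout are illustrative; a real light field would store colour samples rather than scalars.

```python
import math

def quadrilinear(L, s, t, u, v):
    """Quadrilinearly interpolate a discrete light field L[s][t][u][v]
    (radiance samples on the two-plane parameterisation) at continuous
    coordinates, blending the 16 surrounding samples."""
    def axis(x, n):
        # clamp to the last valid cell so x == n - 1 still interpolates
        i = min(int(math.floor(x)), n - 2)
        return i, x - i
    si, sf = axis(s, len(L))
    ti, tf = axis(t, len(L[0]))
    ui, uf = axis(u, len(L[0][0]))
    vi, vf = axis(v, len(L[0][0][0]))
    out = 0.0
    for ds in (0, 1):
        for dt in (0, 1):
            for du in (0, 1):
                for dv in (0, 1):
                    # product of the four 1D interpolation weights
                    w = ((1 - sf, sf)[ds] * (1 - tf, tf)[dt]
                         * (1 - uf, uf)[du] * (1 - vf, vf)[dv])
                    out += w * L[si + ds][ti + dt][ui + du][vi + dv]
    return out
```

The averaging over 16 samples is exactly the source of the blurring discussed below: the interpolated value can never be sharper than the samples it blends.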

Basic light field rendering naturally generates a blurred image because it involves

averaging, or linear interpolation, of the actual radiance; secondly, the lack of depth

information creates errors in the selection of the precise rays to interpolate. The lumigraph is

an improvement on the light field in that it incorporates some depth information and

therefore allows for depth correction (Gortler et al. 1996).

Image based rendering has also been found to reduce computational costs

considerably and is therefore very useful for interactive applications. An additional

benefit is that the computational cost is independent of the complexity of the scene.

Furthermore, this paradigm creates realistic images, especially if the original images are

photographs of real scenes.

The main drawback is that a large number of sample images are required in order to

obtain quality output; secondly, it is not easy to obtain depth information; and finally, the

blurring effect is detrimental to the representation of specular highlights. Although some

research has been done in an attempt to use this technique to render specular reflections

and refractions (Lischinski and Rappoport 1998; Heidrich et al. 1999), the results and

analysis are beyond the scope of this project.

B.2 Image Warping

Image warping is the process of taking individual pixels from one or more images and

projecting them onto the image plane for a new eye location. Image warping relies

heavily on depth information at a pixel-by-pixel level. This information describes the

pixel motion in the image plane per unit camera motion. Image warping does not involve

interpolation of intensities and therefore produces sharper images than light field

rendering. The most serious problem with the method is that holes may occur within the

novel image, due either to undersampling or to disocclusion of parts of the scene that were

previously occluded. Some hybrid paradigms have been proposed that merge light field

rendering and image warping (Schirmacher and Seidel 2000).
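The warping process, and the holes it produces, can be illustrated with a toy one-dimensional sketch. All names here are hypothetical, and the disparity model (camera shift divided by depth) is a deliberate simplification of the per-pixel motion described above.

```python
def warp(pixels, depth, cam_shift):
    """Forward-warp a 1D scanline of pixels to a novel view using
    per-pixel depth. Disparity is modelled as cam_shift / depth, so
    nearer pixels move further per unit camera motion. Positions that
    no source pixel lands on stay None: the 'holes' caused by
    undersampling or disocclusion."""
    out = [None] * len(pixels)
    for x, (colour, z) in enumerate(zip(pixels, depth)):
        nx = x + round(cam_shift / z)
        if 0 <= nx < len(out):
            out[nx] = colour  # a real warp would resolve overlaps with a z-buffer
    return out
```

Because pixels are moved rather than blended, the result is sharp where it is defined, which is the contrast with light field interpolation drawn above.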
