
PHOTON MAPPING USING HIERARCHICAL PHOTON MAPS

Samuel Mpawulo

A dissertation submitted in partial fulfilment of the requirements of Staffordshire University for the degree of Master of Science.

Supervised by Dr Claude C. Chibelushi

JANUARY 2005

Abstract

Photon mapping is a simple yet robust global illumination rendering algorithm that was developed by Henrik Wann Jensen in recent years. Photon mapping is now used extensively in global illumination to render photorealistic images and has quickly become the preferred algorithm for the simulation of caustics and illumination in volumetric media. The photon mapping algorithm is a two-pass algorithm that uses the first pass to generate and store illumination information in a data structure called a photon map. The second pass is the rendering phase and is performed by a ray-tracer that repetitively queries the photon map. This research is mainly focused on optimising the generation and querying of this photon map, which are the key factors in the performance of the algorithm. This project not only presents a detailed discussion of the photon mapping algorithm, but also introduces the concept of hierarchical photon mapping. Hierarchical photon mapping is a unique modification to the traditional photon mapping algorithm and is intended to optimise the computation of illumination. The hierarchical photon mapping algorithm splits the photon maps into a two-tier hierarchical structure of multiple maps, based on the photon density and on the topology of the polygons within the scene. The use of multiple photon maps is not novel in itself; it is the manner in which the photon maps are split and the arrangement of the photon maps into a two-tier hierarchy that makes this project unique. As part of the project the hierarchical photon mapping algorithm is fully described and implemented in a purpose-built test application. This application is used to compare the performance of hierarchical photon mapping to that of the more traditional single and multiple photon mapping paradigms. The project was able to show that hierarchical photon mapping has several significant advantages and performance gains when used to render relatively complex scenes with high photon densities.

Contents

Chapter 1: Introduction
  1.1 Project Background
  1.2 Aim
  1.3 Objectives
  1.4 Intellectual Challenge
  1.5 Research Program
    1.5.1 Phase one
    1.5.2 Phase two
    1.5.3 Phase three
    1.5.4 Phase four
  1.6 Deliverables
  1.7 Resources
Chapter 2: Global Illumination
  2.1 The fundamentals of Global Illumination
    2.1.1 Surface and Sub-Surface Scattering Functions
      2.1.1.1 BSSRDF and BRDF
      2.1.1.2 Reflection Model
    2.1.2 Light Scattering in Participating Media
  2.2 Rendering
    2.2.1 Image Based Rendering
    2.2.2 Ray Tracing
    2.2.3 Finite Element Radiosity Technique
    2.2.4 Hybrid and Multi-Pass Techniques
  2.3 Photon Mapping Algorithm
    2.3.1 Photon Tracing
    2.3.2 Computing Radiance from Photon Maps
    2.3.3 The KD Tree
    2.3.4 Multiple Photon Maps
Chapter 3: Hierarchical Photon Maps
  3.1 Motivation
  3.2 Generating Shared Photon Maps
  3.3 Generating Local Photon Maps
  3.4 Summary
Chapter 4: Experimental Details
  4.1 The Hierarchical Photon Mapping Test Application
  4.2 The Experimental Objectives
  4.3 The Experimental Method
  4.4 The Experiment Results
Chapter 5: Conclusion
References
APPENDIX A - Raw test results
  Scene 1
  Scene 2
  Scene 3
APPENDIX B - Image Based Rendering
  B.1 Light Fields and Lumigraphs
  B.2 Image Warping

List of Figures

Figure 1 Radiance L
Figure 2 Subsurface scattering as described by BSSRDF
Figure 3 BRDF models the local reflection of light
Figure 4 Illumination map
Figure 5 Computing reflected radiance (Jensen, 2001)
Figure 6 Spatial subdivision using a kd-tree
Figure 7 Simplified 2D kd-tree for the spatial subdivision shown in figure 5
Figure 8 Photon leakage at corners
Figure 9 Generating shared photon maps
Figure 10 Illustration of the constrained nearest neighbour search for connected polygons
Figure 11 Searching local photon maps
Figure 12 Scene 1 rendered using hierarchical photon maps
Figure 13 Scene 2 rendered using hierarchical photon mapping
Figure 14 Scene 3, the egg chair rendered using ray tracing
Figure 15 Graph showing rendering performance for scene 1
Figure 16 Graph showing rendering performance for scene 2
Figure 17 Graph showing rendering performance results for test 2 performed on scene 3
Figure 18 Scene 1 rendered using hierarchical photon maps and 60,000 photons per triangle
Figure 19 Scene 3 rendered using 5,000 photons per triangle
Figure 20 The plenoptic function

List of Tables

Table 1 Complexity of multiple kd-trees
Table 2 Description of tests performed
Table 3 Result summary for tests performed on scene 1
Table 4 Result summary for tests performed on scene 2
Table 5 Scene 1 test results using hierarchical photon maps
Table 6 Scene 1 test results using only multiple shared photon maps
Table 7 Scene 1 test results using a single global photon map
Table 8 Scene 2 test results using hierarchical photon maps
Table 9 Scene 2 test results using a single global photon map

Chapter 1: Introduction

The generation of realistic synthetic images in computer graphics has many useful applications in advertising, interior design, architecture, film production and manufacturing design, to mention just a few. These synthetic simulations provide an alternative to the construction of real, physical models, which are usually expensive, lack flexibility, need relatively long construction periods and whose construction and use may present risks to human life. Synthetic scenes are very portable and can be replicated easily.

1.1 Project Background

Computer graphics images tend to demand a lot of processor power and this has resulted in the development of a category of rendering methods that are based on empirical geometrical approximations rather than pure physics. The Lambertian and Phong shading models are examples of such approximation techniques. These methods however fail to adequately address the problems of refraction and global illumination, which are both vital for realistic rendering. Furthermore, the computational load is dependent on the complexity of the scene. Both models are extensively discussed by Allan Watt (1993) and various other publications. The second category of rendering techniques is based on the physics or science of the propagation of light. These are capable of addressing refraction and global illumination to varying degrees. They include ray tracing, radiosity and photon mapping. Ray tracing is a point-sampling technique that traces infinitesimal beams of light through a model. Basic ray tracing, as first introduced by Whitted (1980), traces light rays backwards from the observer to the light sources. Basic ray tracing is not a fully global illumination algorithm. It simulates only direct illumination and refractions, and does not account for all light paths within a scene. It is therefore unable to compute important effects such as caustics, motion blur, depth of field, indirect illumination or glossy reflection. Monte Carlo based ray tracing methods overcome this limitation by distributing the rays stochastically; however a very large number of sample rays must be used to avoid variance or noise in the rendered images (Jensen, 2001 pp.4-5, 35-39).


Finite element radiosity techniques are an alternative to ray tracing in which the model is subdivided into small patches that form a basis for the distribution of light. It is assumed that the light reflected by each patch is constant and independent of direction. Radiosity was therefore initially introduced for scenes with only diffuse (Lambertian) surfaces. Radiosity becomes very costly when used to simulate complex models or models with non-diffuse surfaces. This is mainly because the algorithm computes values for each patch in the model and the number of patches tends to increase with complexity (John, 2003 pp.63-65) (Jensen, 2001 pp.5-6).

Photon mapping is the most recent technique and is not as widely publicised. It was first introduced by Henrik Wann Jensen (1995). Photon mapping changes the way in which illumination is represented. Instead of tightly coupling lighting information with the geometry of a scene, the information is stored separately in an independent structure, the photon map. The decoupling of the photon map from the geometry simplifies the representation and makes it possible to represent lighting in very complex models. The photon mapping process occurs in two passes. The first pass casts photons from light sources into the scene, and results in the creation of the photon map. The second pass uses a Monte Carlo ray-tracing-based rendering algorithm to collect the photons, compute irradiance and generate an image. The introduction of photon mapping resulted in the addition of a new class of scenes to the repertoire of computer-generated imagery. These include illumination of smoke, clouds, under-water scenes, and of translucent materials, to mention just a few. Photon mapping can simulate global illumination in complex models with arbitrary Bidirectional Reflectance Distribution Functions or BRDFs (Jensen, 2001). The key word here is arbitrary. Although image mapping algorithms have been used to address global illumination with impressive results, the BRDF in such cases is predetermined; that is to say the locations of all objects affecting the global illumination are already specified by the image.

Despite these benefits, photon mapping is to a great extent limited to walk-through and static scenes due to the computational costs associated with the photon map generation and the computation of irradiance from the photon map. Photon map algorithms for storing and querying photons are generally too slow for interactive purposes, and rebuilding the photon map for every frame does not amortise. Furthermore, photon mapping algorithms compute irradiance estimates using the photon density statistics of the area in question. This requires repetitive traversal of the photon map in search of the nearest-neighbour photons each time a ray intersects an object. It is for this reason that photon mapping is sometimes applied only to visualise caustics, where the photon density is usually rather high and density estimation can be applied with a fixed filter radius. In summary, the photon mapping algorithm consists of three problematic stages: the scattering or shooting of photons, the sorting and storage of the photons (photon map construction), and the search for the nearest-neighbour photons during the rendering stage.

Interactive use of photon mapping was achieved by Günther, Wald and Slusallek (2004) through the integration of a synergy of optimisation strategies. First and foremost, Günther and his colleagues limited the use of the photon mapping algorithm to the generation of caustics alone and intentionally neglected non-caustic illumination. Secondly, their proposal involves the parallelisation of the algorithm and the distribution of the computational load amongst up to thirty-six central processing units (CPUs), an idea that was also pursued by Jensen (2000); and finally, they reduce the construction cost of the photon maps by using an unbalanced kd-tree structure to store the photons. A balanced kd-tree is generally believed to be faster to traverse during rendering; however, balancing requires time. Using unbalanced kd-trees also makes it possible to insert new photons into an existing tree without rebuilding an entirely new one. Larsen (2003), on the other hand, decided to concentrate his efforts on optimisation of the third stage of the photon mapping process by introducing the use of multiple photon maps. Although Larsen was able to successfully reduce the query time for nearest neighbours, his technique could not be applied directly to caustics or participating media for reasons that are discussed in more detail in the following chapter. This research project attempts to address the same problem by introducing a unique two-tier hierarchical system of photon maps to photon mapping. The hierarchical photon map paradigm increases the number of photon maps by introducing alternative ways of splitting a global photon map.

1.2 Aim

The principal aim of this project is to analyse, design, implement and test a hierarchical two-tier multiple photon map algorithm that reduces the computational cost of irradiance estimates without loss of image quality.

1.3 Objectives

The main objectives of this study are:

1. To propose and design a storage structure and search algorithm that optimises the search for the nearest neighbour by using a multiple photon map environment.
2. To develop and build an application that implements the proposed algorithm and that will serve as a test-bench for testing and demonstration purposes.
3. To analyse, compare, contrast and identify the benefits and shortcomings of using multiple photon maps in the proposed manner as opposed to using the more traditional single photon map.
4. To identify new opportunities and threats to the further development of the proposed algorithm.
5. To review the research methodology, the resulting outcome and the resources of this project.

1.4 Intellectual Challenge

The project fundamentally involves an understanding of the physics (how light interacts with objects), the psychophysics (how the eye perceives things), and radiometry. Furthermore, an in-depth study of computer graphics, with particular attention to geometric modelling, data structuring and rendering algorithms, is mandatory. It also demands good programming skills and a sound mathematical background. The researcher is required to master and acquire an enhanced understanding of the use of the photon mapping algorithm in image synthesis, which he will use to assess, select and/or create a viable synergy of algorithms. This project introduces and tests a unique technique for the optimisation of photon mapping. It is certainly relevant to the award of an MSc in Computing.

1.5 Research Program

This research project will employ the experimental research methodology. Experimental research can be defined as an attempt by the researcher to maintain control over all factors that may affect the result of an experiment (Key, 1997). In doing this, the researcher attempts to determine or predict what may occur. The researcher manipulates a variable (anything that can vary) under highly controlled conditions to see if this produces (causes) any changes in a second variable. The variable, or variables, that the researcher manipulates is called the independent variable, while the second variable, the one measured for changes, is called the dependent variable. Independent variables are sometimes referred to as antecedent (preceding) conditions. Experimental research, although very demanding of time and resources, often produces the soundest evidence concerning hypothesised cause-effect relationships (Gay, 1987). It is therefore the method of choice for scientific disciplines whenever it is practical and ethical to manipulate the antecedent conditions.

1.5.1 Phase one

The first phase of this project will consist of a literature review of current rendering and illumination models, highlighting their particular strengths and weaknesses. The purpose of this is to quickly acclimatise the researcher to the existing developments in 3D image synthesis. It will include a study of recent developments in photon mapping, placing particular emphasis on the structure of the photon map itself and operational issues. Finally it will cover the most current developments in the use of multiple photon maps and explore research opportunities. This chapter provides the platform from which the rest of the project will be launched. The outcome of this phase will be the first two deliverables of this project. These are:

a) An in-depth discussion and comparison of current rendering techniques and illumination models.
b) An in-depth discussion of the most current developments in the optimisation of photon mapping and the use of multiple photon maps.

The completion of this phase is considered to be the first major project milestone.

1.5.2 Phase two

The second phase of the project comprises two sections. It covers the experimental design stage of the project, in which the problem and the proposed solution are revisited. A scientific and academic argument is then presented in support of the proposed algorithm, clearly spelling out the expected improvements. The algorithm itself is discussed in detail and defined using pseudocode. The antecedent conditions, the dependent variables and the criteria of measurement to be adopted will be defined as well. While the first section focuses on the algorithm, the latter part of this phase discusses the test application as a whole and the incorporation of the algorithm. This phase of the project will produce the third and fourth deliverables. These are:

a) A specification of a data structure for the storage of multiple photon maps and an algorithm for rapidly traversing these maps, with a full discussion of the criteria employed to select the data structure and a comparison with alternative structures.
b) A documented logical design or blueprint for the development of an application that will be used to test the proposed algorithm.

This phase will also contribute substantially to the fifth and sixth deliverables that are produced by phase three of the project.

1.5.3 Phase three

In this phase of the project two distinct activities are carried out. First, the test application and several virtual test scenes of varying complexity are coded using a high level programming language (probably C++). The application is based on the design developed in phase two of the project. Secondly, the developed code is actually used to test the hypotheses and the performance is measured and recorded, based on the criteria defined earlier. The outcome is analysed critically and compared to the earlier defined expectations. The fifth and sixth deliverables are produced as a result of the completion of this phase. They are:

a) A computer application written in an appropriate high level programming language that implements the algorithms that have been developed, altered and/or amalgamated in this project for computing irradiance estimates using hierarchical photon maps.
b) A set of test results in the form of statistical data and charts that will be used to establish the effectiveness of the hierarchical photon mapping algorithm.


1.5.4 Phase four

The fourth and final phase is the critical review and evaluation of the project as a whole. The objectives and aims will be revisited and discussed in depth. The level of significance of the research will be analysed. The research methodology employed, the adequacy of the resources, the learning points, the mistakes made, the benefits of the project and the future research possibilities will also be analysed and discussed.

1.6 Deliverables

The project deliverables shall include:

1. An in-depth discussion and conclusion on the current rendering techniques and illumination models.
2. An in-depth discussion of the most current developments in the optimisation of photon mapping and the use of multiple photon maps.
3. A specification of a data structure for the storage of multiple photon maps and an algorithm for rapidly traversing these maps, with a full discussion of the criteria employed to select the data structure and a comparison with alternative structures.
4. A documented logical design or blueprint for the development of an application that will be used to test the hypotheses.
5. A computer application written in an appropriate high level programming language that implements the algorithms and methodologies that have been developed, altered and/or amalgamated in this project for the use of multiple photon maps.
6. A set of test results in the form of statistical data and charts that will be used to establish the validity of the hypothesis.
7. A report on the analysis of the results and an indication of further research development possibilities and trends.
8. An evaluation of the whole project.

1.7 Resources

The main resources required will be access to relevant literature on the specialised topics, which may be in the form of books, journals or internet websites that have been made available through the Staffordshire University library.


The following journals are of particular relevance to the project: ACM Transactions on Graphics, IEEE Computer Graphics and Applications, and Graphics Models. Most of the referenced journals are available on the Citeseer web site (www.citeseer.com). The most often referenced books in this project are Jensen's Realistic Image Synthesis Using Photon Mapping (Jensen, 2001) and John Marlon's Photon Mapping (John, 2003). These specifically relate to photon mapping. John (2003) provides sample code for the photon map algorithm and this is used as a starting point for the construction of the test application. The hardware on which the application will be developed and on which all tests will be performed is an AMD Athlon 64 3000+ with 1GB DDR RAM running Windows XP Professional. The application will be coded in C++ using the Visual Studio 6 development software provided by the University. Finally, the researcher will make use of the availability of an experienced project supervisor with appropriate knowledge of computer graphics and research methods, to provide the necessary guidance and advice.


Chapter 2: Global Illumination

This chapter reviews the fundamentals of global illumination and then proceeds to discuss the various rendering approaches in reasonable detail. The intention here is to give an understanding of the various approaches to rendering, and to briefly highlight and compare their specific strengths and weaknesses. The later sections of this chapter include an in-depth review of photon mapping, the use of multiple photon maps and the opportunities they present in terms of optimisation.

2.1 The fundamentals of Global Illumination

Global illumination has become quite important in computer graphics. Its main goal is to compute all possible light interactions within a scene so as to obtain a truly photorealistic image. Global illumination mimics the subtleties of natural light by tracing the light rays bouncing between a collection of radiating objects and carrying their diffuse colour properties with them. These are, in turn, transferred onto other neighbouring objects. This type of indirect lighting is different from traditional computer generated lights, which provide only local illumination and typically die upon reaching their subjects. The result of these inter-reflected light rays is a much more realistic image, due to light properly scattering and diffusing throughout the environment, thus creating much more accurate tones and shadows. Global illumination rendering has been around for many years but it was never incorporated into commercial production renderers due to the lengthy calculations required to produce the desired results. Now that computers have become much faster, this type of rendering is becoming somewhat feasible. Nevertheless it is important to note that modern day global illumination renderers such as Brazil, VRay, FinalRender and Mental Ray can still take hours to produce a high quality image (Rosenman, 2002). Global illumination requires an understanding of light transport and how each effect can be simulated in a mathematical model. The physics of light is currently explained using several models based on historic developments. These are (Saleh and Teich, 1991):


Ray optics models light as independent rays travelling in optical media and following a set of geometric rules. Ray optics can be used to model all the effects that we see, including refraction, reflection and image formation.

Wave optics represents light as electromagnetic waves. In addition to all the phenomena that ray optics models, it can model interference and diffraction.

Electromagnetic optics includes wave optics but also explains polarisation and dispersion.

Photon optics assumes a particulate model of light. It provides the foundation for understanding the interaction of light and matter.

Computer graphics almost exclusively uses ray optics; even photon mapping, despite its name, uses ray optics, and because of this the effects of diffraction, interference and polarisation are ignored (Jensen, 2001). Furthermore, light is assumed to have infinite speed; this means that the whole illumination model reaches a steady state as soon as the light sources are switched on.

2.1.1 Surface and Sub-Surface Scattering Functions

When light encounters an obstacle it is either scattered or absorbed. A rough surface will reflect light in random directions depending on the inclination of the different points on the surface. In such a case, it makes more sense to analyse the average behaviour over the surface rather than that of specific points. Because the angle of incidence keeps changing from point to point, it becomes irrelevant and the reflection model that is developed can be assumed to be isotropic. This aggregate behaviour of spreading light in all directions is called directional diffuse or glossy reflection (Shirley et al., 1997). Traditional lighting models assume that any light leaving a surface will do so at exactly the same point that it went in, and that any subsurface transmission and scattering will occur at this point. This however is only true for hard metallic surfaces that exhibit specular reflection. It is certainly untrue for translucent materials like wax. Translucency is a material phenomenon where light travels through the surface of an object rather than simply bouncing off the surface. Most non-metal surfaces contain a certain degree of translucency. Both Shirley et al. (1997) and Koutajoki (2002) agree that the calculations needed for generating the illumination of light in a volumetric medium are computationally heavy, and thus that it is impractical to adopt a full-blown volume model for translucent surfaces even though subsurface scattering is a volumetric phenomenon.

2.1.1.1 BSSRDF and BRDF

As already mentioned, subsurface scattering occurs when a beam of light enters some material at one point then scatters around before leaving the surface from an entirely different location. This behaviour is described by the BSSRDF or Bi-directional Surface Scattering Reflectance Distribution Function (Jensen, 2001), which is best understood after the introduction of some basic terminology used in radiometry to describe light. The basic quantity of light is the photon, and its energy, which is dependent on its wavelength, can be represented as

$e_\lambda = \frac{hc}{\lambda}$   (1)

where $h \approx 6.63 \times 10^{-34}\,\mathrm{Js}$ is Planck's constant and $c$ is the speed of light. The spectral radiant energy, $Q_\lambda$, in $n_\lambda$ photons with wavelength $\lambda$ is

$Q_\lambda = n_\lambda e_\lambda = n_\lambda \frac{hc}{\lambda}$   (2)
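As a quick illustration of equation 1, consider a photon of green light with $\lambda = 500\,\mathrm{nm}$:

$e_\lambda = \frac{6.63 \times 10^{-34}\,\mathrm{Js} \times 3.00 \times 10^8\,\mathrm{m/s}}{500 \times 10^{-9}\,\mathrm{m}} \approx 3.98 \times 10^{-19}\,\mathrm{J}$

The minuteness of this quantity makes it clear why rendering always deals with the aggregate energy of very large numbers of photons rather than with individual ones.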

Radiant energy, $Q$, is the energy of a collection of photons. This is computed by integrating the spectral energy over all possible wavelengths:

$Q = \int_0^\infty Q_\lambda \, d\lambda$   (3)

Radiant flux, $\Phi$, sometimes simply called flux, is the rate of flow of radiant energy:

$\Phi = \frac{dQ}{dt}$   (4)

The radiant flux area density is the differential radiant flux per differential area at a surface, $d\Phi/dA$. This is often separated into the radiant exitance $M$, which is the flux area density leaving a surface, also known as the radiosity $B$; and the irradiance $E$, which is the flux arriving at a surface location $x$:

$E(x) = \frac{d\Phi}{dA}$   (5)

The radiant intensity, $I$, is the radiant flux per unit solid angle, $d\vec\omega$, where $\vec\omega$ represents the direction away from the surface:

$I(\vec\omega) = \frac{d\Phi}{d\vec\omega}$   (6)

Radiance, $L$, is the radiant flux per unit solid angle per unit projected area (figure 1).

Figure 1 Radiance L

$L(x, \vec\omega) = \frac{d^2\Phi}{\cos\theta \, dA \, d\vec\omega} = \int_0^\infty \frac{d^4 n_\lambda}{\cos\theta \, dA \, d\vec\omega \, dt \, d\lambda} \frac{hc}{\lambda} \, d\lambda$   (7)

The last term expresses radiance as an integral over wavelength of the flow of energy in $n_\lambda$ photons per projected differential area per differential solid angle per unit time. Radiance is one of the most important quantities in global illumination. It can be used to describe the intensity of light in a given direction at a given point in space.

If the radiance on a surface is known, then the flux can be computed by integrating the radiance over all directions and over the area A:

$\Phi = \int_A \int_\Omega L(x, \vec\omega)(\vec n \cdot \vec\omega) \, d\vec\omega \, dA$   (8)

where $\vec n$ is the normal of the surface at $x$ and $\vec\omega$ is the direction of the incoming radiance.

The BSSRDF relates the differential reflected radiance to the differential incident flux. If $S$ is the BSSRDF, $x'$ is the point of entry of an incoming ray of light on the surface of some translucent medium, $x$ is the point of exit of the outgoing ray of light from the same medium, $\vec\omega'$ is the direction of the incoming ray and $\vec\omega$ is the direction of the outgoing ray, then $S$ is dependent on $x$, $\vec\omega$, $x'$ and $\vec\omega'$ and can be represented as:

$S(x', \vec\omega', x, \vec\omega) = \frac{dL(x, \vec\omega)}{d\Phi_i(x', \vec\omega')}$   (9)

where $dL$ is the differential reflected radiance at $x$ in the direction $\vec\omega$ and $d\Phi_i$ is the differential incident flux (Jensen, 2001). Another way of interpreting this is to say that the BSSRDF is the ratio of the rate of change of reflected radiance to the incident flux. This equation is the most generic description of light transport. Unfortunately it represents a twelve dimensional function, which is extremely expensive to compute. It has therefore been used in only a few papers in computer graphics, and these papers all deal with subsurface scattering.

Figure 2 Subsurface scattering as described by BSSRDF


The BRDF (Bi-directional Reflectance Distribution Function) was introduced by Nicodemus et al. (1977) (cited by Jensen, 2001). It is in reality an approximation of the BSSRDF. It assumes that reflected light will leave the surface at exactly the same point that it struck the surface. This assumption reduces the BRDF to a six dimensional function and enables a series of simplifications that make rendering much more practical. The BRDF, $f_r$, defines the relationship between the reflected radiance, $L_r$, and the irradiance $E$:

$f_r(x, \vec\omega', \vec\omega) = \frac{dL_r(x, \vec\omega)}{dE(x, \vec\omega')}$   (10)

Remember that representing the BRDF as $f_r(x, \vec\omega', \vec\omega)$ simply emphasizes the fact that the BRDF, $f_r$, is a function of $x$, the point of exit of the outgoing ray of light, $\vec\omega'$, the direction of the incoming ray, and $\vec\omega$, the direction of the outgoing ray. The irradiance differential term $dE(x, \vec\omega')$ in equation 10 can be replaced by its equivalent terms from equations 5 and 8. This results in the following BRDF equation:

$f_r(x, \vec\omega', \vec\omega) = \frac{dL_r(x, \vec\omega)}{L_i(x, \vec\omega')(\vec n \cdot \vec\omega') \, d\vec\omega'}$   (11)

Normally, in computer graphics, the incident radiance at a surface location is known and it is the reflected radiance, $L_r$, that needs to be computed. The BRDF describes the local illumination model. By rearranging and then integrating both sides of the BRDF equation, the reflected radiance in all directions can be computed:

$L_r(x, \vec\omega) = \int_\Omega f_r(x, \vec\omega', \vec\omega) \, dE(x, \vec\omega') = \int_\Omega f_r(x, \vec\omega', \vec\omega) L_i(x, \vec\omega')(\vec n \cdot \vec\omega') \, d\vec\omega'$   (12)

Figure 3 BRDF models the local reflection of light

A useful property of the BRDF is Helmholtz's law of reciprocity, which states that the BRDF is independent of the direction in which light flows. This makes it possible for global illumination models to trace light paths in both directions. Another important property of the BRDF is that it obeys the principle of energy conservation: a surface cannot reflect more light than it receives (Jensen, 2001).

Surfaces that reflect light in all directions, irrespective of the incident direction, are referred to as diffuse surfaces. This is typical of rough surfaces or surfaces with subsurface scattering. Lambertian surfaces exhibit a special case of diffuse reflection in which the reflected direction is perfectly random and, as a result, the radiance is uniform in all directions regardless of the irradiance. The BRDF is said to be constant.
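To make the constant BRDF concrete, the fragment below evaluates a Lambertian BRDF in C++ (an illustrative sketch; the function name is chosen here for exposition and is not from the test application). The division by $\pi$ is what enforces energy conservation: integrating $(\rho_d/\pi)\cos\theta$ over the hemisphere yields exactly the reflectivity $\rho_d$.

    #include <cassert>

    // Lambertian BRDF: the same value for every pair of incoming and
    // outgoing directions. rhoD is the diffuse reflectivity in [0,1];
    // dividing by pi normalises the BRDF so that the surface never
    // reflects more light than it receives.
    float lambertianBRDF(float rhoD) {
        assert(rhoD >= 0.0f && rhoD <= 1.0f);
        const float kPi = 3.14159265f;
        return rhoD / kPi;   // constant: independent of both directions
    }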

Specular reflection, on the other hand, occurs when light hits a perfectly smooth surface and is reflected only in the mirror direction. Specular reflection can be considered perfect for smooth dielectric surfaces like glass and water, or for metallic surfaces; however most surfaces have some imperfections and as a result light is reflected in a small cone around the mirror direction. This is called glossy reflection. For perfect mirror reflection the radiance is zero in all directions except the mirror direction.

2.1.1.2 Reflection Model

Materials normally reflect light in a complicated manner that cannot be described by the simple Lambertian or the perfect specular reflection models. Several alternative reflection models were developed to address this problem. Early models like the Phong model were not based on physics or science but on phenomena and logic (Phong, 1975). The Phong model actually results in a surface that reflects more light than it receives, but this problem has since been addressed by Lewis (1994), who derived a normalising factor for the Phong model. Despite notable improvements in the Phong model over the years, a comprehensive model of how light reflects or transmits when it hits a surface, including its subsurface interactions, has yet to be developed. The Torrance-Sparrow model was introduced to computer graphics by Blinn (1977). It is one of the best known models based on the actual physical behaviour of light. This model represents off-specular peaks quite well but lacks a proper diffuse term. The model by Oren and Nayar (1994) is much better for rough surfaces, while models by Ward, and by Poulin and Fournier (both cited by Jensen, 2001 p.25), are commonly used for anisotropic reflection, where the amount of light reflected is contingent on surface orientation. Lafortune et al. (1997) developed a model that supports importance sampling, but the model parameters are not intuitive and therefore the model needs to be mapped onto a set of measured data, which may not always be easy to obtain in practice. A simple, intuitive and empirical reflection model for physically plausible rendering was proposed by Schlick (1993) and is well described by Jensen (2001). Jensen is apparently impressed by the model and claims that it is computationally efficient. It also supports importance sampling, which is useful for Monte Carlo rendering methods (Cook, 1986).

The Schlick model is different from the models discussed so far in that it is not derived from a physical theory of surface reflection. It is a mix of approximations to existing theoretical models and has some intuition of light reflection behaviour. A good rendering equation should be based on a reflection model that results in a bidirectional reflectance distribution function (BRDF) that correctly predicts the diffuse, the directional diffuse and the specular components of the reflected light at any surface of interaction.

The rendering equation is the mathematical basis for all global illumination algorithms. It presents the equilibrium conditions of light transport at the interface of two media. The rendering equation can be used to compute the outgoing radiance at any surface location. The outgoing radiance, $L_o$, is the sum of the emitted radiance, $L_e$, and the reflected radiance, $L_r$:

$L_o(x, \vec\omega) = L_e(x, \vec\omega) + L_r(x, \vec\omega)$   (13)

The reflected radiance term can be replaced by equation 12:

$L_o(x, \vec\omega) = L_e(x, \vec\omega) + \int_\Omega f_r(x, \vec\omega', \vec\omega) L_i(x, \vec\omega')(\vec n \cdot \vec\omega') \, d\vec\omega'$   (14)

This is the rendering equation as used in Monte Carlo ray-tracing algorithms, including photon mapping (Jensen, 2001). The rendering equation, however, does not apply to participating media.
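In practice, Monte Carlo ray tracers estimate the integral in equation 14 stochastically. Sampling $N$ directions $\vec\omega'_i$ from a probability density $p(\vec\omega')$ over the hemisphere gives the standard estimator (a textbook identity, quoted here for illustration):

$L_r(x, \vec\omega) \approx \frac{1}{N} \sum_{i=1}^{N} \frac{f_r(x, \vec\omega'_i, \vec\omega) \, L_i(x, \vec\omega'_i) (\vec n \cdot \vec\omega'_i)}{p(\vec\omega'_i)}$

The estimator is unbiased for any $p$ that is non-zero wherever the integrand is non-zero; importance sampling, mentioned above in connection with the Schlick model, amounts to choosing $p$ roughly proportional to the integrand so as to reduce variance.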

2.1.2 Light Scattering in Participating Media

The previous discussions assume that optical interactions occur only at an interface between two media. This is true only in a vacuum. In the case of dusty air, clouds, smoke and silty water, optical interactions occur within the medium as light progresses through it. Such materials are referred to as participating media. Glass is certainly a participating medium, so much so that the colour of glass is actually determined within the medium and not at the surface, as portrayed by most computer graphics applications. When light interacts with a participating medium it can either be absorbed or scattered. The probability of either happening is represented by an absorption coefficient and a scattering coefficient. A ray of light passing through such a medium therefore experiences a continuous change in its radiance. This change in radiance is not only due to absorption and out-scattering, which both result in a loss of radiance along the ray, but also due to in-scattering of light. There could also be a gain in radiance due to emission from the medium, if the medium were a flame.

The phase function is used to describe the distribution of scattered light within a participating medium, just as the BRDF does for surfaces. The phase function depends only on the angle $\theta$ between the incoming and scattered rays and can be written as $p(\theta)$, where $\theta = 0$ in the forward direction and $\theta = \pi$ in the backward direction. A number of different phase functions are available for different types of media; however, in the case of typical glass (with homogeneous impurities) the scattering will be isotropic, and the phase function will therefore be a constant irrespective of the value of $\theta$. Most other materials are anisotropic and therefore have a phase function with a preferential scattering direction. The most commonly used phase function is the empirical Henyey-Greenstein phase function (Henyey and Greenstein, 1941), which was introduced to explain scattering by intergalactic dust but has since been used to describe scattering in oceans, clouds, skin, stone and many more. However, for many applications the Henyey-Greenstein phase function is computationally costly and its accurate shape is of less importance. Having observed this, Schlick (1993) presented his own simplification of the Henyey-Greenstein phase function that traded accuracy for computational efficiency; both functions are written out at the end of this section.

The phase function forms the basis of the volumetric rendering equation, just as the BRDF is used to create the rendering equation. Unlike the rendering equation, the volume rendering equation can only be solved using numerical integration. This is implemented by taking small uniform steps or intervals through the medium and making computations at these intervals. This approach is called ray marching. In non-homogeneous media, and media with local variations in lighting, it is better to vary the length of the intervals to capture local changes more efficiently. This is called adaptive ray marching (Jensen, 2001 p.119).

A true global illumination solution needs to accommodate both the rendering equation and the volumetric rendering equation if it is to realistically render arbitrary scenes. The photon mapping algorithm easily adapts to either rendering equation, and since photons can be stored anywhere within a medium, it can be used to simulate participating media phenomena.

This is one of the main strengths of the algorithm and it will become apparent in the next section of this chapter, when the different rendering algorithms are discussed and compared.
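For reference, and using the convention above in which $\theta = 0$ is the forward direction, the Henyey-Greenstein phase function and Schlick's approximation to it are commonly written as:

$p_{HG}(\theta) = \frac{1}{4\pi} \frac{1 - g^2}{(1 + g^2 - 2g\cos\theta)^{3/2}} \qquad p_{S}(\theta) = \frac{1}{4\pi} \frac{1 - k^2}{(1 - k\cos\theta)^2}$

Here $g \in (-1, 1)$ is the mean cosine of the scattering angle ($g > 0$ gives predominantly forward scattering, $g < 0$ backward scattering and $g = 0$ isotropic scattering), and $k$ plays the same role in Schlick's version; the mapping $k \approx 1.55g - 0.55g^3$ is often quoted for matching the two. Sign conventions for $\theta$ vary between authors, so these forms should be read against the convention stated above.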

2.2 Rendering

As mentioned earlier, the rendering paradigms can be placed in three categories. The first category comprises the methods based on geometrical approximations. These include the Gouraud and Phong shading models. Again, as previously mentioned, these methods do not address the problem of refraction and global illumination. The second category of rendering methods is based on the physics of light and includes methods based on radiosity, ray tracing and photon mapping. This category is the most relevant to this project and is discussed in more detail later in the chapter. The third category comprises image based rendering methods. These basically include light-field rendering and image warping. These are new rendering paradigms that use pre-acquired reference images of the scene in order to synthesise novel views. The images may be actual photographs or images generated by other means.

2.2.1 Image Based Rendering

Image based rendering was introduced as a means of overcoming the tedious task of creating synthetic models. Image based rendering has also been found to reduce the computational costs considerably and is therefore very useful for interactive applications. An additional benefit is that the computational cost is independent of the complexity of the scene. Although image based rendering paradigms are not directly relevant to this thesis, they are an important and strong rival to photon mapping for simulating global illumination. A more detailed description of this class of rendering paradigms is therefore provided in appendix B.

2.2.2 Ray Tracing

Ray tracing was the first algorithm for photo-realistic rendering of a synthetic three dimensional scene. It is particularly good for the simulation of specular reflections and transparency. This is because a ray of light can apply the appropriate reflection model as it progresses around the scene and through transparent objects. Ray tracing is a point based rendering technique and therefore excels at curved surfaces (John, 2003 p.58). Basic ray tracing, as introduced by Whitted (1980), traces light rays backwards from the observer to the light sources. This approach only handles mirror reflections, refractions and direct illumination. Caustics, motion blur, depth of field and glossy reflection cannot be computed, despite their importance. These effects were simulated by extending ray tracing with Monte Carlo methods of ray distribution (Cook, 1986), in which rays are distributed stochastically so as to account for all light paths; however the images generated suffer from variance, seen as noise. This noise is simply how information gaps are manifested in the resultant image. Eliminating noise requires bombarding the scene with an increased number of sample rays, and this is computationally very expensive. Alternatively, one could seek out areas of visual importance and carefully distribute the rays so as to reduce this noise. Several methods of doing this have been proposed, including Monte Carlo bi-directional ray tracing (Lafortune and Willems, 1993 pp. 95-104; Lafortune, 1996), in which rays are traced in both directions. One interesting point about ray tracing is that it tends to separate the illumination from the geometry. Rays are traced into the scene and return with some illumination value. This fact can be used to build up independent illumination stores, as is done by the irradiance caching method, which stores and reuses indirect illumination on diffuse surfaces.

2.2.3 Finite Element Radiosity Technique

Finite element radiosity is an alternative to ray tracing in which the total energy in a scene is distributed among the surfaces within the scene. This is done by dividing up the objects within the scene into elements or patches, which form a basis for the final light distribution. Radiosity was initially introduced to render scenes with only diffuse (Lambertian) reflection, where the reflected light from each element is independent of the direction of the incident ray or the location of the viewer. As light leaves the light source, its energy is splattered onto the elements in the scene. The elements store the illumination information, but at the same time they act as secondary light sources and emit light particles to neighbouring surfaces. This process is repeated until equilibrium is reached (Goral et al., 1984). Radiosity is a much better approach to computing global illumination than ray tracing; however it is very poor at computing specular reflections. This is because the intensity is computed as an element average and not on a point-by-point basis. The algorithm generates visible artefacts that can only be reduced by either simplifying the geometry of the scene objects or by increasing the number of elements. It therefore does not produce images with sharp illumination features. Radiosity in its basic form is certainly not suitable for rendering effects like caustics, or scenes where specular highlights are expected (John, 2003 pp. 63-66).
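The equilibrium described above is often written per patch. For a patch $i$ with emission $E_i$, reflectivity $\rho_i$ and form factors $F_{ij}$ (the fraction of the energy leaving patch $i$ that arrives at patch $j$), the classical radiosity equation is:

$B_i = E_i + \rho_i \sum_j F_{ij} B_j$

Solving this linear system for all patches at once, or iteratively by repeatedly shooting energy as described above, yields the view-independent radiosity $B_i$ of every patch; this formulation follows the standard radiosity literature and is quoted here for illustration.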

2.2.4 Hybrid and Multi-Pass Techniques

Hybrid techniques have been developed with the aim of reaping the benefits of both ray tracing and radiosity. The first hybrid technique used radiosity to generate the basic image and then made a second pass using ray tracing to add specular reflections (Wallace et al., 1987). The next set of hybrid techniques extended the contribution of ray tracing to include the computation of shadows as seen by the eye (Shirley, 1990). The more recent hybrids reverse the roles by using ray tracing to generate the base image and limiting the role of radiosity to the computation of indirect illumination on diffuse surfaces. This shift in methodology was aimed at reducing the visible artefacts generated by the radiosity algorithm (as pointed out in the earlier discussion on radiosity). The main problem with the hybrid paradigms is that the use of radiosity limits the complexity of the scene. Rushmeier et al. (1993) (cited by Jensen, 2001) proposed a geometrical simplification technique in which the radiosity algorithm is used on a separate, simplified version of the scene that does not contain the finer geometrical details. This technique worked well but was not very practical, because the scene simplification was usually done manually due to the lack of a generic algorithm or tool that could be used arbitrarily. Also, there is a need to analyse the effect of simplification on the resulting global illumination in the scene.

2.3 Photon Mapping Algorithm

The photon mapping algorithm was invented by Henrik Wann Jensen. Photon mapping decouples lighting from the geometry of the scene and stores this data in a separate structure called a photon map (Jensen, 1995; Jensen, 2001). The decoupling of the photon map from the geometry simplifies representation and allows for the representation of complex scenes (Jensen and Christensen, 1995). The idea of separating the illumination from the geometry of the scene is not entirely new; illumination maps were first proposed by Arvo (1986) as a step in this direction. Arvo's illumination map is similar to a texture map, but instead of displacement or normal vector perturbations, it carries illumination information. The illumination map consists of a rectangular array of data points imposed on a 1x1 square and mapped onto the surface of the object. A ray of light hitting a surface increments the energy stored in four neighbouring data points in a weighted manner, as illustrated in figure 4.

Figure 4 Illumination map

Illumination mapping suffers from the same problems as finite element methods. The main problem is in computing the resolution of the map. It is also too costly to use illumination maps in complex models, and finally it is too difficult to use them on arbitrary surfaces even if these can be parameterised.

Both illumination mapping and photon mapping are two-phase algorithms. The first phase illuminates the scene and generates the maps. The second phase introduces the viewer and renders the scene. Therefore light is first scattered around the scene, and then the scene is observed after all light has propagated through it. Photon mapping however differs from illumination mapping in that photon maps store photons and not just illumination intensity. A photon has a direction, a location and a colour value. The illumination maps, on the other hand, rely entirely on the geometry of the objects for directional information. Secondly, the photons are not stored in a fixed grid system as is done with illumination maps. The distribution of photons is non-uniform and therefore much more detached from the geometry. The photon in itself is a self-contained data point.

2.3.1 Photon Tracing

In order to save illumination separately from the geometry of the scene, the photon structure is required. To represent flux, each photon must carry information on the direction of flow of the energy, the amount of energy and the location of the photon. A typical photon structure expressed in C would be as follows:

    struct photon {
        float x, y, z;     // position ( 3 x 32 bit floats )
        float color[3];    // power ( 3 x 32 bit floats )
        float vector[3];   // incident direction ( 3 x 32 bit floats )
        short sortFlag;    // flag used in kd-tree
    };

In cases where memory is of concern, the photon can be compressed by representing the power as four bytes using Ward's shared-exponent RGB format (Ward, 1991) and by using spherical coordinates for the photon direction. The first pass of the photon mapping algorithm is referred to by Jensen (2001) as photon tracing. Photon tracing is the process of emitting photons from the light sources and tracing them through the model. Photon tracing works in exactly the same way as ray tracing. The difference is that the photons in photon tracing propagate flux whereas the rays in ray tracing gather radiance. The interaction of a photon with an object may therefore differ from the interaction of a ray with an object. For example, when a ray is refracted its radiance is changed based on the relative index of refraction; this does not happen to photons. The photon map represents the direct and indirect illumination. The photon map is created as a result of photon tracing, which is the first pass of the algorithm. Photons are scattered from light sources into the scene and eventually stored in a photon map. The nature of a photon's interaction with a point on a surface can be described as follows (John, 2003):

When a photon meets a diffuse surface, its incident direction and energy are stored in the photon map at the point of intersection, and the photon is also reflected in a random direction. The photon map only stores diffuse interactions. When a photon meets a specular surface the photon map is not updated; the photon is either reflected or refracted depending on the surface properties. If the surface is partially diffuse and partially specular, then a method must be used to randomly determine which component of the material carries more weight. One such method is Russian roulette. If a photon is absorbed by a surface, it is stored in the photon map and the photon's life is ended. A very important technique in photon tracing is Russian roulette (Arvo and Kirk, 1990). The Russian roulette technique is used to decide whether a photon should be reflected, absorbed or transmitted. It is then used to decide whether a reflected photon should be reflected diffusely or specularly. In other words, it determines which component of the material carries more weight. This is illustrated in the following pseudo code, where $\rho_d$ is the reflectivity of the surface and $\Phi$ is the incoming photon power:

p = d                    // probability of reflection
ξ = random()             // ξ uniformly distributed between 0 and 1
if (ξ < p)
    reflect photon with power Φ
else
    absorb photon

Algorithm 1. Russian roulette algorithm.

The power of the Russian roulette algorithm becomes clearly apparent when one imagines a situation where 1000 photons are shot at a surface with a reflectivity of 0.5. We could reflect all 1000 photons with half the power, or reflect only half (500) of the photons with the full power. Russian roulette enables the latter option to be taken and is clearly a powerful technique for reducing the computational requirements of photon tracing (Shirley, 1990).
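As a minimal C++ sketch, the decision reduces to a single comparison (assuming a uniform random number generator; rand() is used here purely for brevity):

#include <cstdlib>

// Russian roulette for a photon hitting a surface with reflectivity d:
// returns true if the photon is reflected with its full, unchanged
// power, and false if it is absorbed and its path terminated.
bool russian_roulette(float d)
{
    float xi = (float)std::rand() / (float)RAND_MAX; // uniform in [0, 1]
    return xi < d;
}

Averaged over many photons the reflected energy is still d times the incident energy, so the estimate remains unbiased while the number of traced paths is halved in the example above.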


As already mentioned, photons that hit non-specular surfaces are stored in a global data structure called a photon map. Each photon can be stored several times along its path. The information about a photon is stored at the point on the surface where it was absorbed. Initially the photon map is arranged as a simple flat array of photons. For efficiency reasons this array is reorganised into a balanced kd-tree structure before rendering.

2.3.2 Computing Radiance from Photon Maps

The second pass of the photon mapping algorithm is the rendering phase. It is during this phase that the photon mapping algorithm uses Monte Carlo ray tracing to trace rays from the observer into the scene. The photon map that was generated earlier is referenced for statistical information on the illumination in the scene. The photon map represents incoming flux, with each photon carrying just a fraction of the light source power. A photon hit in a region indicates that the region receives some illumination from the light source, but a single photon hit does not indicate how much light the region receives. The irradiance for the region can only be estimated from the photon density, dΦ/dA. The irradiance estimate is determined by summing the power of the closest photons found and dividing by the area of the circle in which they are found (John M., 2003, p. 260). The above argument can also be demonstrated mathematically from the rendering equation derived in section 2.1.1 (equation 14). Equation 12 expressed the reflected radiance term as:


$$ L_r(x, \omega) = \int_{\Omega_x} f_r(x, \omega', \omega)\, L_i(x, \omega')\, (n_x \cdot \omega')\, d\omega' \qquad (15) $$

where L_r is the reflected radiance at x in direction ω, Ω_x is the hemisphere of incoming directions, n_x is the normal of the surface at x, f_r is the BRDF at x, and L_i is the incoming radiance. Since the photon map provides information about the incoming flux, the incoming radiance, L_i, needs to be rewritten and expressed in terms of flux. This can be done by using the derivatives of equation 8 (section 2.1.1.1):

$$ L_i(x, \omega') = \frac{d^2 \Phi_i(x, \omega')}{(n_x \cdot \omega')\, d\omega'\, dA} \qquad (16) $$

Equation 16 can now replace the incoming radiance term in equation 15:

$$ L_r(x, \omega) = \int_{\Omega_x} f_r(x, \omega', \omega)\, \frac{d^2 \Phi_i(x, \omega')}{(n_x \cdot \omega')\, d\omega'\, dA}\, (n_x \cdot \omega')\, d\omega' = \int_{\Omega_x} f_r(x, \omega', \omega)\, \frac{d^2 \Phi_i(x, \omega')}{dA} \qquad (17) $$

The incoming flux is approximated from the photon map by locating the n photons that are nearest to x. Each photon p has the power ΔΦ_p(ω_p), so by assuming that the photon intersects the surface at x, the integral in equation 17 approximates to the summation expressed in equation 18.

$$ L_r(x, \omega) \approx \sum_{p=1}^{n} f_r(x, \omega_p, \omega)\, \frac{\Delta\Phi_p(x, \omega_p)}{\Delta A} \qquad (18) $$

The procedure is akin to expanding a sphere around x until it contains n photons, then using these photons to estimate the radiance. The equation still contains ΔA, which is related to the density of photons around x. By assuming a locally flat surface around x, this area can be computed as:

$$ \Delta A = \pi r^2 \qquad (19) $$

which is the area projected by the sphere onto the surface (Figure 5), with a radius r equivalent to the distance from point x to the furthest photon of the set of n nearest photons. The final result is an equation that can be used to compute the reflected radiance at any surface location using the photon map:

$$ L_r(x, \omega) \approx \frac{1}{\pi r^2} \sum_{p=1}^{n} f_r(x, \omega_p, \omega)\, \Delta\Phi_p(x, \omega_p) \qquad (20) $$

Figure 5 Computing reflected radiance (Jensen, 2001)
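To make the estimate concrete, the following C++ sketch evaluates equation 20 for a diffuse surface, assuming a nearest-neighbour query has already gathered the photons and returned the radius r of the enclosing sphere. The Lambertian BRDF ρ_d/π stands in for f_r, which is an illustrative assumption rather than part of the derivation above:

#include <cstddef>
#include <vector>

struct Photon { float power[3]; };

// Radiance estimate of equation 20: L = f_r * (sum of photon power)
// / (pi r^2), here with the diffuse BRDF f_r = rho_d / pi.
void radiance_estimate(const std::vector<Photon>& nearest, float r,
                       const float rho_d[3], float L[3])
{
    const float pi = 3.14159265f;
    for (int c = 0; c < 3; ++c) {
        float flux = 0.0f;
        for (std::size_t p = 0; p < nearest.size(); ++p)
            flux += nearest[p].power[c];        // sum of delta-Phi_p
        L[c] = (rho_d[c] / pi) * flux / (pi * r * r);
    }
}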

It is now clear why the photon map structure must be able to quickly locate the nearest neighbours of any point within the scene. At the same time it should remain compact, since the intention is for it to store millions of photons. This immediately rules out simple structures such as lists, vectors or multi-dimensional arrays, since searching for the nearest neighbours in such structures is too costly. Three-dimensional grid structures are impractical because the distribution of photons is highly non-uniform, especially when simulating important effects like caustics, which focus light in specific areas of the scene.


2.3.3 The KD Tree

Given the requirement for efficiency, the kd-tree structure is a natural choice for a photon map. A kd-tree is a spatial subdivision structure that can be used for orthogonal range searching. The first step in constructing a kd-tree is to define the spatial extent of the problem; this is simply the maximum spatial extent of the particles in each spatial dimension. For example, in a two-dimensional space the problem space is a rectangle whose sides are the maximum separation between particles in each of the two dimensions. Figure 6 illustrates the use of a kd-tree for the spatial subdivision of a two-dimensional space. In the illustration, the maximum horizontal distance is that between particles 2 and 6, and this defines the horizontal size of the problem space, while the vertical distance between points 3 and 7 defines the vertical size. The original problem space is the large rectangle labelled Node 1. The next step is to recursively divide the space in the dimension in which the spatial separation is greatest. Thus Node 1 is divided along the vertical dimension to form nodes 2 and 3. The recursive division of nodes is continued until there is only one particle in each cell. The results of each stage of the process can be stored in a tree structure similar to that depicted in figure 7. Since the recursive spatial division terminates when there is only one particle in each cell, the original eight particles generate a tree with a total of fifteen nodes.


Figure 6 Spatial subdivision using a kd-tree

The kd-tree described here is a perfectly balanced one. It requires the number of photons to be a power of two, and the number of nodes in the tree is always 2N − 1, where N is the number of photons (Appel, 1985).

Figure 7 Simplified 2D kd-tree for the spatial subdivision shown in figure 6

A three-dimensional kd-tree is built in the same manner but includes a third dimension. The problem space in this case takes the form of a box.
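A minimal C++ sketch of this median-split construction over an array of photon pointers is given below; it sketches the scheme just described, not the exact balancing code of the test application:

#include <algorithm>
#include <cstddef>
#include <vector>

struct Photon { float pos[3]; short plane; };

// Recursively balance the photons in [lo, hi): the median element on
// the axis of greatest extent becomes the root of the subtree, and the
// two halves are balanced in turn, until one photon remains per leaf.
void balance(std::vector<Photon*>& v, std::size_t lo, std::size_t hi)
{
    if (hi - lo <= 1) return;                 // at most one photon left
    float mn[3] = {  1e30f,  1e30f,  1e30f };
    float mx[3] = { -1e30f, -1e30f, -1e30f };
    for (std::size_t i = lo; i < hi; ++i)     // extent of this subset
        for (int a = 0; a < 3; ++a) {
            mn[a] = std::min(mn[a], v[i]->pos[a]);
            mx[a] = std::max(mx[a], v[i]->pos[a]);
        }
    int axis = 0;                             // axis of greatest extent
    for (int a = 1; a < 3; ++a)
        if (mx[a] - mn[a] > mx[axis] - mn[axis]) axis = a;
    std::size_t mid = (lo + hi) / 2;          // median split on that axis
    std::nth_element(v.begin() + lo, v.begin() + mid, v.begin() + hi,
        [axis](const Photon* a, const Photon* b)
        { return a->pos[axis] < b->pos[axis]; });
    v[mid]->plane = (short)axis;              // remember splitting plane
    balance(v, lo, mid);
    balance(v, mid + 1, hi);
}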


During its second pass, the photon mapping algorithm uses Monte Carlo ray tracing to render the scene. Rays are traced from the observer, through each pixel of the view plane, into the scene. The kd-tree (photon map) is used to find the nearest neighbours at the points of intersection between rays and objects in the scene. Because the photon map is constructed spatially, it returns the relevant photons much faster. This is because the spatial arrangement allows the algorithm to rapidly narrow the problem space by performing qualifying tests at group level. The search begins from the root, which is labelled Node 1 in the given example. A qualifying test is applied to Nodes 2 and 3. If a node fails to qualify, all its sub-nodes are disqualified without the need to test each member of the sub-nodes (Bentley, 1975; Bentley, 1979); a sketch of this pruning test is given after the list below.

The performance of the photon mapping algorithm is mainly dependent on the efficiency of three activities:

1. The cost or speed of constructing and balancing the photon map. Unbalanced kd-trees are faster to construct but are much slower at locating the nearest neighbours.

2. The cost of identifying the object of intersection during ray tracing. This cost is dependent on the number of objects in the scene but is increased further by the presence of reflective and refractive materials.

3. The cost of computing irradiance estimates from the photon map at the point of intersection. This relates to how fast a set of nearest neighbouring photons can be identified.

This project is mainly concerned with optimisation aspects that relate directly to the construction of photon maps and the computation of irradiance estimates (points 1 and 3 above).
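The following C++ sketch shows this group-level disqualification for the implicit balanced-array layout built earlier. It is shown for a single nearest photon; the full irradiance estimate keeps the n closest photons in a max-heap, and all names here are illustrative:

#include <cstddef>
#include <vector>

struct Photon { float pos[3]; short plane; };

// Search the subtree stored in [lo, hi); its root is the median element.
// r2 holds the current best squared distance and shrinks as the search
// proceeds; a far subtree is skipped entirely whenever the squared
// distance to the splitting plane already exceeds r2.
void locate(const std::vector<Photon*>& tree, std::size_t lo,
            std::size_t hi, const float q[3], float& r2, Photon*& best)
{
    if (lo >= hi) return;
    std::size_t mid = (lo + hi) / 2;
    Photon* p = tree[mid];
    float d2 = 0.0f;
    for (int a = 0; a < 3; ++a) {
        float d = q[a] - p->pos[a];
        d2 += d * d;
    }
    if (d2 < r2) { r2 = d2; best = p; }       // candidate update
    if (hi - lo == 1) return;                 // leaf: nothing to prune
    float s = q[p->plane] - p->pos[p->plane]; // signed plane distance
    if (s < 0.0f) {
        locate(tree, lo, mid, q, r2, best);   // near side first
        if (s * s < r2)                       // qualifying test at group
            locate(tree, mid + 1, hi, q, r2, best); // level: far side
    } else {
        locate(tree, mid + 1, hi, q, r2, best);
        if (s * s < r2)
            locate(tree, lo, mid, q, r2, best);
    }
}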

2.3.4 Multiple photon maps

The use of multiple photon maps is not entirely new. Usually three photon maps are used: one for caustics, another for indirect illumination and a third for participating media. This gives flexibility to solve different parts of the rendering equation separately, and keeps the global photon map small, allowing for faster searches. Larsen and Christensen used multiple photon maps to optimise the computation of irradiance estimates (Larsen, 2003). They proposed an algorithm that places photons in separate photon maps when adjacent surfaces have a large angle between them. This criterion for splitting the photon


map has the added advantage of preventing undesirable leakage of irradiance at corners as illustrated in figure 8.

Figure 8 Photon leakage at corners

The rule they chose for the photon map segregation is the following (Larsen, 2003):

1. If the angle between two adjacent polygons is below a predefined threshold (θ), they should use the same photon map for storing photons and for lookups on the nearest neighbours. An edge between two such polygons is classified as connected.

2. If the angle between two adjacent polygons is above the same predefined threshold (θ), they should use different photon maps for storing photons and for lookups on the nearest neighbours. An edge between two such polygons is classified as unconnected.

Larsen was able to demonstrate significant improvements in performance; however, he noted that the criterion he was using to split the photon maps was not suitable for scenes generated using fractal algorithms, because of the high number of polygons and sharp edges. The algorithm was not applied to caustics and volume photon maps because of difficulties in determining when to split the maps (Larsen, 2003). Furthermore, the number of multiple maps in use is contingent on the geometry of the scene. The use of multiple photon maps has been shown by previous research to reduce computational costs, although the benefits are limited by the criteria used to split the photon maps. In the following chapters an alternative algorithm and splitting criterion will be proposed. The proposed algorithm provides a means of sub-dividing Larsen's multiple


photon maps to even smaller units. This algorithm is the hierarchical photon mapping algorithm.


Chapter 3: Hierarchical Photon Maps

3.1 Motivation

The idea of using hierarchical photon maps as a means of optimising the computation of irradiance estimates was conceived by studying the complexity of the photon mapping algorithm using the big O notation. The big O notation was first introduced by the German number theorist Paul Bachmann in his 1894 book Analytische Zahlentheorie, and was later popularised by another German number theorist, Edmund Landau; hence it is also called Landau's symbol. Big O notation is used in complexity theory, computer science and mathematics to describe the asymptotic behaviour of functions, and it is useful when analysing algorithms for efficiency. For example, the time (or the number of steps) it takes to complete a problem of size n might be found to be:

T(n) = 4n² − 2n + 2    (21)

As n grows larger, the n² term becomes dominant, so at some point all other terms can be neglected. Furthermore, the actual values of the constants in the equation depend on the precise details of the implementation and the hardware it runs on, which implies that these constants should also be neglected in any hardware-independent examination of algorithm complexity. The big O notation focuses on the dominant term, so one could say that equation 21 represents an algorithm with order n² time complexity, written T(n) = O(n²). Note that this notation does not indicate how quickly or slowly the algorithm actually executes for a given input. The most common complexities, in order of increasing time, are O(1), O(log N), O(N), O(N log N) and O(N²), followed by other polynomial times such as O(N³), and then the very slow algorithms such as the brute-force travelling salesman problem, which takes O(N!) time (Norman, 1995). Table 1 below uses big O notation values to reflect the complexities of using a varying number of multiple photon maps. It is through the development and examination of this table of complexities that the idea of hierarchical photon maps was conceived. The values have been computed for a scene with 137 objects and 137,000 photons. The search for, and insertion of, a photon in a kd-tree is an O(log N) operation, while the actual construction is an O(N log N) operation. It is clear from these computations that the


computational time required to construct the photon maps reduces as the number of maps increases, as does the search time for a single photon map. However, the computational time required to search through just two of the smaller maps is far greater than that required to search a single global map. This imposes a serious restriction on the splitting of photon maps, namely that all the nearest neighbouring photons must be placed within the same map. This probably explains why Larsen (2003) was unable to apply multiple photon maps to caustics and volume photon maps. Since the photon maps cannot be split beyond this point, the alternative is not to split the maps but to create sub-maps, or small extracts of the larger maps, and to develop an algorithm that decides when it is appropriate to search the larger map and when to search the sub-map. The algorithm should avoid searches across more than one map as much as possible, or any optimisation benefit will be lost. This is the basic principle behind the hierarchical photon mapping algorithm.

Table 1. Complexity of multiple kd-trees
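As a worked illustration of the trade-off captured in Table 1 (using base-2 logarithms and ignoring the implementation-dependent constants): constructing one global map costs roughly N log₂ N = 137,000 × 17.1 ≈ 2.34 × 10⁶ steps, while constructing two half-size maps costs 2 × (N/2) log₂(N/2) = 137,000 × 16.1 ≈ 2.20 × 10⁶ steps, so construction does get cheaper as the maps are split. A single lookup, however, costs log₂ 137,000 ≈ 17.1 steps in the global map but 2 × log₂ 68,500 ≈ 32.1 steps if it must visit both halves, which is why a split only pays off when each lookup can be confined to one map.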

3.2 Generating Shared Photon Maps

The hierarchical photon mapping algorithm is based on a two-tier photon map system. The first tier consists of photon maps that are presumed to be independent of each other. These shall be referred to as shared photon maps, since they are shared by several polygons. The second tier consists of sub-sets of the shared photon maps. These sub-sets are created as an alternative to actually splitting the shared maps. The second-tier photon maps shall be referred to as local photon maps.

During rendering, the shared photon map is queried as usual for the set of nearest neighbours to each point of intersection between the scene objects and the rays traced into the scene. The neighbours that are of interest are not necessarily the true closest neighbours to the intersection point, but the set of neighbours that actually contribute to the irradiance estimate and therefore lie on surfaces that point in the same direction as the surface for which the irradiance is being calculated. This means that photons located on two perpendicular surfaces can safely be placed in two separate photon maps, irrespective of their proximity to each other, without affecting the computed irradiance estimate and with the added benefit of eliminating photon leakage (see figure 8). The rules used to determine which polygons use the same photon map are similar to those used by Larsen and Christensen (2003). They can be summarised as follows:

1. The polygons need to be adjacent to each other, or they need to share an adjacent neighbouring polygon.

2. The angle between the adjacent polygons must be below a predefined threshold. Note that the angle between two polygons is the angle between their normals.

Polygons that are found to share the same photon map shall be referred to as connected polygons. The photon maps that they share are the first-tier maps of the hierarchy and shall henceforth be referred to as the shared maps. These shared maps are individually smaller than a single global map and, as illustrated in table 1, they are also faster to construct collectively and faster to query individually. The following pseudo-code describes the part of the hierarchical photon mapping algorithm that generates these shared photon maps:

for each polygon A in the scene:
    assign a unique map ID to the polygon if not already assigned,
    then flag this polygon as checked
    for each remaining unchecked polygon B in the scene
        find out if polygon A is connected to polygon B and if so:
            1. assign A's ID to B and to any other polygon C whose ID
               already matches B's current ID
            2. flag or mark the connected edges of polygons A and B
finally, for each allocated map ID create a shared photon map

Algorithm 2 Creating shared photon maps.
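A minimal C++ sketch of this ID propagation is given below. The connected() predicate, which encapsulates the adjacency and normal-angle tests, is passed in as a callback because its implementation is scene-specific; all names are illustrative:

#include <vector>

// Assign a shared-map ID to each of n polygons. Polygons that end up
// with equal IDs will share one first-tier (shared) photon map.
std::vector<int> assign_map_ids(int n, bool (*connected)(int, int))
{
    std::vector<int> id(n);
    for (int a = 0; a < n; ++a) id[a] = a;     // unique initial IDs
    for (int a = 0; a < n; ++a)
        for (int b = a + 1; b < n; ++b)
            if (id[b] != id[a] && connected(a, b)) {
                int old = id[b];
                // step 1 of Algorithm 2: relabel B and every polygon C
                // whose ID already matches B's current ID
                for (int c = 0; c < n; ++c)
                    if (id[c] == old) id[c] = id[a];
            }
    return id;
}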


Each polygon is now associated with a shared photon map, and several polygons may share the same photon map. Most importantly, each lookup on the nearest neighbours involves only one of the maps and results in the same set of nearest neighbours as a lookup in a single global photon map. Figure 9 demonstrates the number of shared photon maps that would be created for two different objects, with a predefined threshold angle of ninety degrees. In the case of the cube, six shared photon maps would be generated, while the second object generates only three shared maps. Note that the number of shared photon maps is independent of the photon density of the maps and of the number of polygons within the scene. It is also important to note that the number of shared photon maps cannot be increased.

Figure 9 Generating shared photon maps

3.3 Generating Local Photon Maps

As demonstrated earlier, the number of shared photon maps is controlled by the geometry of the scene and cannot be arbitrarily increased. This is a major limitation of the multiple photon mapping technique proposed by Larsen (2003). Furthermore, the benefits of using multiple photon maps are expected to be most apparent in regions of high photon density, and yet photon density is irrelevant when creating shared photon maps. This is the reason for the introduction of a second tier of photon maps. The second tier of photon maps holds copies of some, but not all, of the photons already stored in the shared photon maps. Any polygon that has more than a preset minimum number of photons on its surface stores a copy of these photons in its own


photon map. These photon maps differ from those discussed earlier, firstly because they are not shared, and secondly because they do not necessarily contain all the true neighbours for a given lookup. This means that searches that fail to retrieve the neighbours in the local photon map will have to be abandoned and repeated against the shared photon map. Repeating a search is certainly computationally expensive; however, it is presumed that the majority of the lookups in the local photon maps will not be repeated and that the computational savings will outweigh the cost of abandoning and repeating a few lookups. Nevertheless it is important to:

1. Limit the number of abandoned searches as much as possible.

2. Abandon searches in the local photon maps as early as possible within the search, and thus limit time wasting.

Figure 10 below shows a polygon ABC. Point x is a point of intersection between the polygon and a ray traced into the scene during the rendering pass. The irradiance estimate is to be computed for this point; to do so, a circle is placed around the point of intersection and expanded until it includes the number of nearest neighbours requested. The distance between point x and the closest edge of the polygon is r. This radius will be referred to as the limiting radius. The full set of nearest neighbours must all fall within the circle formed by the limiting radius r. Searches that expand beyond the limiting radius are abandoned, because they extend beyond the limits of the local photon map and therefore some neighbours may lie outside it. This is the case at point Y, where the search radius has extended into polygon ACD.

Figure 10 Illustration of the constrained nearest neighbour search for connected polygons.

The grey-coloured areas represent regions within the polygon for which searches should be abandoned. The size and shape of this grey edge region depend on the density of the photon map, the photon distribution within the map and the number of nearest neighbours requested.

It is clear that the algorithm will need to include a function for the quick determination of the limiting radius. This computation will have to be carried out before each nearest-neighbour lookup. The algorithm can be optimised further by eliminating edges AB and BC from the limiting radius test if polygon ABC has no neighbours along these edges. Furthermore, the edge elimination technique is also applicable in cases where the angle between ABC and the neighbouring polygons exceeds the predefined threshold. The exemption of the unconnected edges effectively reduces the grey areas of polygon ABC and hence increases the number of searches that can be made on local photon maps without the risk of having to repeat the search on the shared maps.
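Since the limiting radius is recomputed before every lookup, it helps to see how cheap the test is. The following sketch computes it for a triangle in a 2D parameterisation of its supporting plane, skipping edges that have been eliminated as unconnected; the Vec2 type and all names are illustrative:

#include <algorithm>
#include <cmath>

struct Vec2 { float x, y; };

// distance from point p to the segment ab (degenerate edges not handled)
static float dist_point_segment(Vec2 p, Vec2 a, Vec2 b)
{
    float abx = b.x - a.x, aby = b.y - a.y;
    float t = ((p.x - a.x) * abx + (p.y - a.y) * aby)
            / (abx * abx + aby * aby);       // project p onto the edge
    t = std::max(0.0f, std::min(1.0f, t));   // clamp to the segment
    float dx = p.x - (a.x + t * abx), dy = p.y - (a.y + t * aby);
    return std::sqrt(dx * dx + dy * dy);
}

// limiting radius at p inside triangle v[0]v[1]v[2]; only edges whose
// 'connected' flag is set take part, unconnected edges being exempt
float limiting_radius(Vec2 p, const Vec2 v[3], const bool connected[3])
{
    float r = 1e30f;
    for (int e = 0; e < 3; ++e)
        if (connected[e])
            r = std::min(r, dist_point_segment(p, v[e], v[(e + 1) % 3]));
    return r;
}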

Figure 11. Searching local photon maps.

As mentioned earlier, the algorithm can also be optimised by ensuring that abandoned searches are abandoned at the very early stages of the search. By introducing a fixed filter radius, searches on local maps that have a higher risk of failure can be avoided before they are even started. The fixed filter radius can be computed for each polygon from the photon density ratio:

$$ \frac{\text{Area represented by fixed filter radius}}{\text{Area of the polygon}} = \frac{\text{Nearest neighbours requested}}{\text{Photons in the local photon map}} \qquad (22) $$

which leads to:

$$ r'^2 = \frac{A\, k_n}{\pi\, p_l} \qquad (23) $$

where r′ is the fixed filter radius, A is the polygon area, k_n is the number of nearest neighbour photons requested and p_l is the total number of photons in the local photon map.
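Equation 23 translates directly into code; a minimal sketch (names illustrative), computed once per polygon when its local map is created:

#include <cmath>

// fixed filter radius r' from equation 23:
//   pi * r'^2 / A = k_n / p_l   =>   r' = sqrt(A * k_n / (pi * p_l))
float filter_radius(float A /* polygon area */, int k_n, int p_l)
{
    return std::sqrt(A * (float)k_n / (3.14159265f * (float)p_l));
}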

The filter radius shrinks as the photon density grows, and the area it represents is the area in which the probability of finding the nearest photons is highest. Searches on the local map are avoided if the filter radius, r′, is greater than the limiting radius r. At this stage each polygon is already associated with a shared photon map, several polygons may share the same photon map, and some polygons now also have their own local photon maps. All photons in the local maps are duplicates of a few photons from the shared photon maps. This duplication of photons increases the required storage space by a considerable amount. The extra demand on storage can be reduced by storing references to photons within the photon maps rather than the actual photons themselves. The storage of references should also speed up the balancing of the kd-trees, which are the primary structure used by the photon maps within this project. A re-examination of figure 9 can be used to demonstrate the benefits of introducing the second tier of photon maps. The number of photon maps for each object was initially restricted to six and three respectively, restrictions based on the number of shared photon maps. The introduction of local photon maps raises these limits, because each of the polygons can potentially have its own local photon map. The cube can now potentially generate twelve local photon maps in addition to the six shared photon maps. The second object illustrated in figure 9 has its multiple photon map possibilities drastically increased by an additional thirty-two local photon maps. The actual number of photon maps generated will depend on the photon density and the threshold chosen for the creation of local maps. The following pseudo-code describes the part of the hierarchical photon mapping algorithm that generates the local photon maps:


for each light source in the scene
    release photons into the scene
for each photon that intersects a polygon A with map ID x
    find out if the photon is to be reflected, refracted or stored
    if the photon is to be stored
        1 - store a copy of the photon in polygon A's local map
        2 - store another copy in the shared map of ID x
now for each polygon B in the scene,
    if the photon count in B's local map is less than the threshold
        delete polygon B's local map
    or else, if the photon count in B's local map meets the threshold
        compute the fixed filter radius for polygon B

Algorithm 3. Creating local photon maps

The following pseudo-code describes the manner in which the proposed algorithm computes irradiance using both tiers of photon maps:

for each point of intersection on a polygon
    1 - determine if the polygon has a local photon map and if so:
            calculate the limiting radius r
            if the filter radius r' is greater than the limiting radius r
                do not search the local photon map
            or else
                start the search for the n nearest neighbours on the
                local photon map, but if the search radius reaches the
                limit r, abandon the search in the local photon map
    2 - if the polygon has no local photon map OR if a search in the
        local map has been abandoned:
            get the ID of the first tier photon map associated with
            this polygon
            start the search for the n nearest neighbours in this
            photon map
    3 - for each of the nearest photons found
            ensure that the photon contributes to surface illumination
            by checking that its direction faces the normal of the
            surface
    4 - compute the irradiance estimate by adding up the power of the
        n photons and dividing this by the area of the search circle.
        The search circle is a circle with a radius equal to the
        distance to the furthest of the neighbours

Algorithm 4 Computing irradiance from local and shared photon maps.
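A compact C++ sketch of this two-tier lookup follows. The nearest-neighbour search is passed in as a callback that reports failure when its radius would have to grow past the given bound, and every name here is illustrative rather than the test application's actual API:

struct PhotonMap;  // opaque kd-tree of photons (section 2.3.3)

// must return false if the search radius had to exceed max_radius
// before n photons were gathered -- the "abandon" condition above
typedef bool (*NearestFn)(const PhotonMap*, const float x[3],
                          int n, float max_radius);

bool hierarchical_lookup(const PhotonMap* local, const PhotonMap* shared,
                         NearestFn find_nearest, const float x[3], int n,
                         float limiting_r, float filter_r)
{
    // pre-filter: skip the local map when r' already exceeds r
    if (local != 0 && filter_r <= limiting_r)
        if (find_nearest(local, x, n, limiting_r))
            return true;               // the local map sufficed
    // otherwise fall back to an unbounded search on the shared map
    return find_nearest(shared, x, n, 1e30f);
}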

3.4 Summary

The hierarchical photon mapping algorithm has been described fully within this chapter. It uses a two-tier system of multiple photon maps. The introduction of the second tier of photon maps allows multiple photon maps to be used for caustics and not just direct illumination. It also allows scenes to generate a much larger number of photon maps. The hierarchical use of multiple photon maps is unique to this project and is intended to address the limitations identified by Larsen (2003). Although the first-tier photon maps can be used on their own to compute irradiance at any point within the scene, they are only referred to in the absence of local photon maps or when a search on a local map is abandoned. The local photon maps were introduced at the price of having to perform some lookups twice, and the mitigation measures have been discussed: the introduction of the limiting radius and the filter radius. The limiting radius has been defined as the distance from point x on the polygon to the nearest edge of the same polygon, where x is the point of intersection between the polygon and a ray generated during ray tracing. The limiting radius must be computed for each lookup. The fixed filter radius is computed once for each polygon; it demarcates the area in which the neighbouring photons would be found if the photon distribution were uniform. The next chapter describes the test application and the details of the experiments performed to assess the performance of the hierarchical photon mapping algorithm.


Chapter 4: Experimental Details

4.1 The Hierarchical Photon Mapping Test Application

The test application is coded in C/C++. It comprises scene, object and photon map classes. The primitive classes (Sphere and Triangle) are used to define the model, while the material class defines the physical attributes and lighting factors for each primitive. The object class combines the primitive classes and the material class in one entity by inheritance. The objects are kept simple and comprise a single primitive which, in the context of this dissertation, is either a triangle or a sphere. The application loads its scenes from a file in ASCII format and maintains an internal image buffer for file or video output. The application is able to generate image files in Portable Pixel Map (PPM) format. The basic rendering code is provided by John (2003); however, this has been drastically modified to accommodate the multiple photon map algorithm, the ability to simulate light refraction, anti-aliasing functionality, texture mapping and several other functionalities that were missing from the basic code. The application can be switched to use a single photon map, multiple photon maps, or to run as a ray tracer without any photon mapping. A timer class has been incorporated into the application. Its main function is to log the application settings and the dependent variable values for each test run. These logs are subsequently used for performance analysis. The source code for the application is available in Appendix C, while a complete executable copy of the application, including the sample scene files, output files, installation instructions and operating instructions, is available on the enclosed compact disc.

4.2 The Experimental Objectives

The objective of the experiment is to compare the performance of a single global photon map, multiple shared photon maps without local maps, and the complete hierarchical photon maps. The performance areas to be compared are:

1. The cost or speed of constructing and balancing the photon maps.

2. The cost of computing irradiance estimates from the photon maps.


4.3 The Experimental Method

Three different scenes have been purpose-built for this project. All three scenes incorporate complex light transport models, including mirror reflections, refractions and multiple light sources. The first scene (scene 1), however, is geometrically simple in that the intersecting polygons are either parallel or perpendicular to each other; it comprises 149 triangles. The second scene is slightly more complex because it contains hexagonal objects. The third scene, scene 3, is the most complex of the three. It comprises 2916 triangles used to represent a curved chair in a room. The curved chair was generated using a fractal algorithm and therefore consists of many small triangles with small angles between connected triangles.

Figure 12 Scene 1 rendered using hierarchical photon maps


Figure 13 Scene 2 rendered using hierarchical photon mapping

Figure 14 Scene 3, the egg chair rendered using ray tracing


A set of tests is performed to compare the performance of the application in the three different application modes, namely the single global photon map mode, the multiple shared photon map mode and the two-tier (hierarchical) photon map mode. These tests are performed in each mode, on all three scenes, using a varying number of photons stored in the photon maps. The tests will demonstrate whether the use of multiple photon maps actually optimises performance; they will also establish the relationship between the optimisation gains and the number of photons scattered in the scene.

Table 2 Description of tests performed

Test      Photons scattered            Photons per pixel    Local photon map
                                       contribution         minimum size
Test 1    2,500 photons per object     25                   250
Test 2    5,000 photons per object     50                   500
Test 3    15,000 photons per object    150                  1,500
Test 4    30,000 photons per object    300                  3,000
Test 5    60,000 photons per object    600                  6,000

The local photon maps are generated using a minimum photon count threshold, given for each test in the last column of table 2.
