
Global Illumination using Photon Maps

Fredrik Präntare ([email protected])
David Lindqvist (david isak [email protected])

October 2016

1 Abstract

This report presents a ray tracing algorithm that was implemented during the course Advanced Global Illumination and Rendering (TNCG15) at Linköping University. The ray tracing algorithm is based on a standardized two-pass global illumination technique which uses Monte Carlo ray tracing and photon maps. Two photon maps were used: one to optimize rendering by the use of different types of photons, and one to render caustics.

See figure 1 for an example of an image that was rendered using the ray tracer presented in this report.

Figure 1: An example of an image that was rendered using the ray tracer presented in this report.


2 Introduction

Photorealistic image rendering has been of interest in multiple different disciplines for many years, including visualization, research, entertainment, design and architecture. An important aspect in the creation of such images is to emulate¹ the complex characteristics of illumination. For instance, both direct and indirect lighting need to be considered in order to achieve realism. Without indirect lighting, the realism suffers greatly, as depicted by figure 2.

Figure 2: A comparison between direct (left) and global illumination (right).

Therefore, we need to be able to model the different lighting phenomena in order to render photorealistic images. However, a correct lighting model is not enough for practical reasons; we also need to be able to emulate a correct lighting model with great efficiency. There are many different methods that attempt to do that, such as ray tracing, radiosity and photon mapping. A generalization of many rendering methods, such as ray tracing and photon mapping, was formulated by J. T. Kajiya in 1986 [3]. The formulation, which was named the rendering equation, is given by equation 1.

I(x,x′) = g(x,x′) [ ε(x,x′) + ∫S ρ(x,x′,x′′) I(x′,x′′) dx′′ ]   (1)

¹A common misconception is that we need to simulate the complex characteristics of illumination. This is not the case, since all we really need is to make the results ”look” real. The underlying behaviour does not necessarily have to match the lighting model that we are emulating.

2

Page 3: Global Illumination using Photon Maps - WordPress.com · 2016-10-28 · Global Illumination using Photon Maps Fredrik Pr antare (prantare@gmail.com) David Lindqvist (david isak lindqvist@hotmail.com)

Where:

I(x,x′) is the intensity of light passing from point x′ to point x.
g(x,x′) is a distance factor calculated as 1/r², where r is the distance from point x′ to x.
ε(x,x′) is the intensity of emitted light from x′ to x.
ρ(x,x′,x′′) is the intensity of light scattered from x′′ to x by a patch of a surface at x′.

Many important rendering algorithms that are generalized by the rendering equation, including the methods mentioned above, attempt to solve it approximately.

2.1 Ray tracing

Ray tracing is a technique used to render images by tracing rays from the view point, through the camera plane, and into the scene. Ray tracing can be used to simulate and emulate many different aspects of light, such as reflections, transparency, refraction and specularity.

One of the most famous breakthroughs in offline computer generated global illumination was made by Turner Whitted in 1979. Whitted started to trace refractive, reflective and shadow rays through the scene, something that hadn't been done before. The new ray types made it possible to render objects with many different kinds of materials, such as glass and metal. Today, it is possible to use more advanced ray tracing schemes, such as Monte Carlo-based ray tracing methods. For example, distributed Monte Carlo ray tracing improved on Whitted's attempts at photorealistic rendering by introducing soft phenomena, such as soft shadows and blurry reflections. Distributed ray tracing accomplishes that by averaging multiple camera rays per pixel, and can thus also be used to reduce aliasing, which was harder (or even impossible) for earlier deterministic and undistributed ray tracing algorithms. [5, 4]

Ray tracing is a rather simple and elegant solution to many problems in rendering. Its main challenge lies in achieving computational efficiency, while still being able to render highly realistic images. Therefore, a lot of effort has been put into computational optimizations, such as improved data structures and specialized algorithms. One such important optimization is to avoid ray-surface intersection calculations that require the use of numerical solutions. A simple way to avoid such calculations is to use isosurfaces, such as spheres or metaballs², when possible. Such isosurfaces can be treated analytically, and ray-isosurface intersection points can thus be found without having to use numerical algorithms.

2.2 Radiosity

Another method that attempts to solve the rendering equation is radiosity. Radiosity methods were first applied to computer graphics in 1984 [1].

²Metaballs are organic-looking n-dimensional objects.


Radiosity is a global illumination method that typically (only) approximates diffuse lighting. Radiosity works by subdividing the scene into smaller surface areas called ”patches”. A form factor, which describes patch visibility, is calculated for each pair of patches. The form factors are then used to create a linear system of rendering equations. The radiosity of each patch is then given by solving this equation system.

In comparison to ray tracing, radiosity puts higher demands on the geometry of the scene, as the radiosity method may require very small patches in order to achieve realistic results. Also, radiosity can easily be precomputed since it is view independent. The precomputed radiosity can then be reused as light maps in real-time applications such as games.

2.3 Photon mapping

Photon mapping is another method that can be used to approximately solve the rendering equation. Henrik Wann Jensen presented a two-pass global illumination algorithm in 1996 that takes advantage of two separate photon maps. In photon mapping, rays are traced from the light sources instead of from the view point. Each photon has a position, a radiance value, and an incoming direction. The radiance value of a photon is affected by the materials of the surfaces that the photons collide with. The photons are stored in a data structure as they hit surfaces. Henrik Wann Jensen suggested the use of a k-d tree as the photon map data structure, although it is also possible to use a spatial hash map or octrees (for the same purpose). [2]

Henrik Wann Jensen improves the rendering performance by introducing shadow photons, direct photons³ and indirect photons⁴. Shadow photons are stored at surfaces behind the surfaces that are hit by direct photons, while direct photons are stored on surfaces directly lit by light sources. The shadow and direct photons are then used in the rendering step to decide whether a surface is in shadow or not. This reduces the number of required shadow rays, in some instances even drastically, since visibility computations can in some cases be reduced to counting photons in the vicinity of a ray-surface intersection point.

³Direct photons are photons that haven't bounced (yet).
⁴Indirect photons are photons that have bounced at least once.

3 Background and implementation

Our ray tracing algorithm is based on the two-pass global illumination method described by Jensen in his paper ”Global Illumination using Photon Maps” [2]. However, there are a few key differences worth mentioning. Our implementation supports parallelized rendering using OpenMP⁵ and takes advantage of stratified sampling when creating primary rays⁶. Also, our main photon map was mainly used to improve rendering performance by discarding unnecessary rays, whereas Jensen's algorithm uses indirect photons more effectively.

⁵OpenMP (Open Multi-Processing) is a programming interface that provides shared memory multiprocessing in C++, Fortran and C.
⁶A primary ray is a ray with its origin in the view point, cast through a pixel in the camera plane into the 3D scene.
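To illustrate the parallelized rendering mentioned above, a minimal sketch of an OpenMP-parallelized per-pixel render loop is shown below. This is our own illustrative sketch, not the report's actual code; the renderPixel callback is a hypothetical stand-in for tracing all primary rays of one pixel.

#include <cstddef>
#include <functional>
#include <vector>
#include <glm/glm.hpp>

// Renders an image by evaluating a per-pixel function in parallel using OpenMP.
// renderPixel(x, y) is assumed to trace all primary rays of pixel (x, y) and
// return the resulting color; it is a placeholder for the actual ray tracer.
void renderImage(std::vector<glm::dvec3>& pixels, int width, int height,
                 const std::function<glm::dvec3(int, int)>& renderPixel) {
    pixels.resize(static_cast<std::size_t>(width) * height);
    // Image rows are independent of each other, so they can be distributed
    // over the available cores; dynamic scheduling balances uneven rows.
    #pragma omp parallel for schedule(dynamic)
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            pixels[static_cast<std::size_t>(y) * width + x] = renderPixel(x, y);
        }
    }
}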

3.1 Ray-surface intersection

In ray tracing it is necessary to be able to determine the first object intersected by a ray. It is also necessary to be able to determine a close approximation to the point at which a given ray intersects a given object's surface (if there is a ray-surface intersection). In our case, ray-surface intersection is determined by manipulating the mathematical representations of spheres, triangles and rays. We start by defining the concept of a ray, and then move on to define the other two geometrical primitives. We then derive how to calculate the ray-sphere and ray-triangle intersection points using analytical methods.

3.1.1 The ray

A ray is the half of a line which starts at a given point called the origin, and goes off infinitely in a fixed direction. We can thus parametrize any given ray as stated by definition 3.1.

Definition 3.1. A ray R solely depends on its origin O and direction D, and is defined and parametrized using t and its equation:

R(t) = O + tD, where t ≥ 0.

3.1.2 The sphere

A sphere is a perfectly round geometrical object. Thus, any sphere has a fixed radius. In our case we are also concerned with spheres that can be placed at any position in the scene, and as a consequence we also need to take the center of the sphere into consideration. The sphere can therefore be defined, rather intuitively, using definition 3.2.

Definition 3.2. A sphere S solely depends on its center C and radius R, and is defined using its equation (in Cartesian coordinates):

|X − C|² = R², where X is a point on the sphere.

3.1.3 The triangle

A triangle can be described using its three end-point vertices, and is thus defined using definition 3.3.

Definition 3.3. A triangle T is defined as an unordered triple {V1, V2, V3} using its three unique vertices V1, V2 and V3.
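To make the definitions above concrete, they might be mapped to code roughly as follows. This is only an illustrative sketch (the struct and field names are our own), using glm vectors since our implementation relies on the glm library.

#include <glm/glm.hpp>

// Definition 3.1: a ray R(t) = O + tD with t >= 0.
struct Ray {
    glm::dvec3 origin;     // O
    glm::dvec3 direction;  // D, assumed to be normalized
};

// Definition 3.2: a sphere |X - C|^2 = R^2.
struct Sphere {
    glm::dvec3 center;  // C
    double radius;      // R
};

// Definition 3.3: a triangle defined by its three unique vertices.
struct Triangle {
    glm::dvec3 v1, v2, v3;
};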


3.1.4 Ray-sphere intersection

One way to solve the ray-sphere intersection problem is to solve the more generalized line-sphere intersection problem, and then determine if any of the line-sphere intersection points are on the ray. By solving the line-sphere intersection problem we get either zero, one or two intersection points. If there are zero line-sphere intersection points there cannot be a ray-sphere intersection point. If there are any line-sphere intersection points we can decide whether any of these points are on the ray. Any line-sphere intersection point on the ray is also a ray-sphere intersection point. Given a ray with the origin O, the normalized direction D, and the parametrization t, and a sphere with the center C and radius R, the ray-sphere intersection points are given by the points X that satisfy equations 2 and 3.

X = O + tD, t ≥ 0 (2)

|X − C|² = R² (3)

In order to determine the intersection points we therefore insert equation 2 into equation 3:

|O + tD − C|² = R²

⇒ (O + tD − C) · (O + tD − C) − R² = 0

⇒ t²(D · D) + 2tD · (O − C) + (O − C) · (O − C) − R² = 0

By substituting u = 2D · (O − C), v = (O − C) · (O − C) − R², and since D · D = 1, we get:

t² + ut + v = 0 ⇒ t = −u/2 ± √(u²/4 − v)

The ray-sphere intersection points are thus given by the points X that satisfy:

X = O + tD, where t = −u/2 ± √(u²/4 − v), (u²/4 − v) ≥ 0 and t ≥ 0.
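As an illustration, the closed-form solution above maps to code along the following lines. This is a sketch of ours rather than the report's actual code; it assumes a normalized direction D and returns the closest non-negative t if the ray hits the sphere.

#include <cmath>
#include <glm/glm.hpp>

// Ray-sphere intersection following the derivation above:
// t^2 + ut + v = 0 with u = 2 D.(O - C) and v = (O - C).(O - C) - R^2,
// so t = -u/2 +- sqrt(u^2/4 - v).
bool intersectSphere(const glm::dvec3& O, const glm::dvec3& D,
                     const glm::dvec3& C, double R, double& tHit) {
    const glm::dvec3 OC = O - C;
    const double u = 2.0 * glm::dot(D, OC);
    const double v = glm::dot(OC, OC) - R * R;
    const double discriminant = u * u / 4.0 - v;
    if (discriminant < 0.0) return false;    // the line misses the sphere entirely

    const double root = std::sqrt(discriminant);
    const double tNear = -u / 2.0 - root;
    const double tFar  = -u / 2.0 + root;
    if (tFar < 0.0) return false;            // both intersections lie behind the ray origin
    tHit = (tNear >= 0.0) ? tNear : tFar;    // closest intersection point on the ray
    return true;
}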

3.1.5 Ray-triangle intersection

There are many different ways to solve the ray-triangle intersection problem. One way to do so is to use the Möller-Trumbore ray-triangle intersection algorithm. Möller-Trumbore uses barycentric coordinates instead of Cartesian coordinates, so in order to use it we need to map our Cartesian triangle to barycentric coordinates. In barycentric coordinates (u, v), a triangle T can be described using its Cartesian vertices V1, V2, V3 by:

T(u, v) = (1 − u − v)V1 + uV2 + vV3, where u ≥ 0, v ≥ 0 and u + v ≤ 1.


By intersecting the triangle T(u, v) with a ray R(t) = O + tD we get:

(1 − u − v)V1 + uV2 + vV3 = O + tD

⇒ (−u − v)V1 + uV2 + vV3 − tD = O − V1

⇒ u(V2 − V1) + v(V3 − V1) − tD = O − V1

Next we substitute T = O − V1, E1 = V2 − V1 and E2 = V3 − V1, and get:

uE1 + vE2 − tD = T

By defining P = D × E2 and Q = T × E1, and using Cramer's rule to solve the equation above, we get:

t = (Q · E2) / (P · E1), u = (P · T) / (P · E1), v = (Q · D) / (P · E1)

The ray-triangle intersection points are thus given by the points X that satisfy:

X = O + tD, where t = (Q · E2) / (P · E1), t ≥ 0, u ≥ 0, v ≥ 0 and u + v ≤ 1.
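A compact sketch of the Möller-Trumbore test described above could look as follows (our own illustration, not the report's actual code). It uses the notation from the derivation and also rejects hits whose barycentric coordinates fall outside the triangle.

#include <cmath>
#include <glm/glm.hpp>

// Moller-Trumbore ray-triangle intersection with E1 = V2 - V1, E2 = V3 - V1,
// T = O - V1, P = D x E2 and Q = T x E1, as in the derivation above.
bool intersectTriangle(const glm::dvec3& O, const glm::dvec3& D,
                       const glm::dvec3& V1, const glm::dvec3& V2, const glm::dvec3& V3,
                       double& tHit) {
    const double kEpsilon = 1e-9;
    const glm::dvec3 E1 = V2 - V1;
    const glm::dvec3 E2 = V3 - V1;
    const glm::dvec3 P  = glm::cross(D, E2);
    const double det = glm::dot(P, E1);
    if (std::abs(det) < kEpsilon) return false;           // ray is parallel to the triangle plane

    const glm::dvec3 T = O - V1;
    const glm::dvec3 Q = glm::cross(T, E1);
    const double u = glm::dot(P, T) / det;
    const double v = glm::dot(Q, D) / det;
    if (u < 0.0 || v < 0.0 || u + v > 1.0) return false;  // intersection lies outside the triangle

    const double t = glm::dot(Q, E2) / det;
    if (t < 0.0) return false;                            // intersection is behind the ray origin
    tHit = t;
    return true;
}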

3.2 Ray tracing

In order to render an image, primary rays are cast into the scene, as depicted by figure 3.

Figure 3: Primary rays are cast from a view point, through pixels in the camera plane, and into the 3D scene.

Multiple primary rays are cast per pixel. This means that the resulting image will be anti-aliased and less prone to noise. Stratified grid sampling was used in order to reduce noise even further. The primary rays spawn shadow rays, indirect illumination rays, refraction rays and reflection rays whenever they hit a surface. The types of rays spawned depend on the material of the hit surface. For example, refraction rays are only spawned whenever a transparent surface is hit, and perfectly transparent or perfectly reflective objects never spawn shadow rays.
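The stratified grid sampling mentioned above can be sketched as follows: the pixel is divided into an n × n grid and one jittered sample is placed in each cell. The function below is our own illustration (the names and the pixel-coordinate convention are assumptions), producing sample positions that are then mapped onto the camera plane and used as targets of primary rays.

#include <cstddef>
#include <random>
#include <vector>
#include <glm/glm.hpp>

// Generates n * n stratified (jittered) sample positions inside pixel (px, py).
// One uniformly random sample is placed in each grid cell, which reduces noise
// compared to placing all samples uniformly at random over the whole pixel.
std::vector<glm::dvec2> stratifiedPixelSamples(int px, int py, int n, std::mt19937& rng) {
    std::uniform_real_distribution<double> uniform(0.0, 1.0);
    std::vector<glm::dvec2> samples;
    samples.reserve(static_cast<std::size_t>(n) * n);
    for (int i = 0; i < n; ++i) {
        for (int j = 0; j < n; ++j) {
            const double u = (i + uniform(rng)) / n;   // jittered offset within cell (i, j)
            const double v = (j + uniform(rng)) / n;
            samples.emplace_back(px + u, py + v);      // sample position in pixel coordinates
        }
    }
    return samples;
}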


3.2.1 Materials

In our implementation, the material is basically an abstraction for two things: a bidirectional reflectance distribution function (BRDF) and a set of physically-based parameters. There are two types of materials: Lambertian and Oren-Nayar materials, which use a Lambertian BRDF and an Oren-Nayar BRDF respectively. Every material also contains information about reflectivity, refractivity (transparency and refractive index), specularity, emissivity and surface color. Our abstraction layer makes it simple to create and reuse many different physically-based materials, such as glass, rough surfaces (using Oren-Nayar BRDFs), mirrors, soap bubbles and Lambertian surfaces. A few examples of such materials can be seen in figure 4.

Figure 4: An example of a few different materials that can be created by simply tweaking a few material parameters.
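As an illustration of the abstraction described above, a material could bundle a BRDF with its physically-based parameters roughly as follows. The interface and field names are our own sketch, not the report's actual code; the Oren-Nayar BRDF is omitted for brevity.

#include <memory>
#include <glm/glm.hpp>

// Minimal BRDF interface: evaluates the reflectance for an incoming/outgoing direction pair.
struct BRDF {
    virtual ~BRDF() = default;
    virtual glm::dvec3 evaluate(const glm::dvec3& in, const glm::dvec3& out,
                                const glm::dvec3& normal) const = 0;
};

// A perfectly diffuse (Lambertian) BRDF: a constant albedo / pi, independent of direction.
struct LambertianBRDF : BRDF {
    glm::dvec3 albedo;
    explicit LambertianBRDF(const glm::dvec3& a) : albedo(a) {}
    glm::dvec3 evaluate(const glm::dvec3&, const glm::dvec3&, const glm::dvec3&) const override {
        return albedo / 3.14159265358979323846;
    }
};

// A material bundles a BRDF with the physically-based parameters listed in the text.
struct Material {
    std::shared_ptr<BRDF> brdf;
    double reflectivity    = 0.0;   // fraction of light that is perfectly reflected
    double transparency    = 0.0;   // fraction of light that is refracted
    double refractiveIndex = 1.0;
    double specularity     = 0.0;
    glm::dvec3 emissivity{0.0};
    glm::dvec3 surfaceColor{1.0};
};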

3.2.2 The photon maps

We use two photon maps: the general photon map and the caustics photon map. The photon maps are created by ”shooting” photons from all the light sources into the scene. These photons are then let loose, bouncing around in the scene until Russian roulette decides otherwise. Whenever a photon hits a surface, it is stored in a k-d tree data structure. The k-d tree is rebalanced and optimized after all photons have been emitted.
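A stripped-down sketch of the photon tracing pass described above is given below. It is our own illustration under simplifying assumptions: the scene intersection query is passed in as a callback, the Russian roulette survival probability is an assumed constant, and the bounce direction is a placeholder (the real tracer samples a new direction based on the surface material). The collected photons would then be inserted into the k-d tree (kdtree++ in our implementation).

#include <functional>
#include <random>
#include <vector>
#include <glm/glm.hpp>

// A stored photon: position, incoming direction and carried radiance (see section 2.3).
struct Photon {
    glm::dvec3 position;
    glm::dvec3 direction;
    glm::dvec3 radiance;
};

// Scene query callback: returns true on a hit and fills position, normal and surface color.
using SceneHit = std::function<bool(const glm::dvec3& origin, const glm::dvec3& direction,
                                    glm::dvec3& position, glm::dvec3& normal,
                                    glm::dvec3& surfaceColor)>;

// Traces a single photon through the scene, storing one photon record per surface hit
// and terminating the random walk with Russian roulette.
void tracePhoton(glm::dvec3 origin, glm::dvec3 direction, glm::dvec3 radiance,
                 const SceneHit& intersectScene, std::vector<Photon>& out, std::mt19937& rng) {
    std::uniform_real_distribution<double> uniform(0.0, 1.0);
    const double survivalProbability = 0.7;  // assumed Russian roulette constant
    while (true) {
        glm::dvec3 position, normal, color;
        if (!intersectScene(origin, direction, position, normal, color)) return;
        out.push_back({position, direction, radiance});   // store the photon at the hit point
        if (uniform(rng) > survivalProbability) return;   // the photon is absorbed
        radiance *= color / survivalProbability;          // attenuate by the surface color
        direction = glm::reflect(direction, normal);      // placeholder bounce direction
        origin = position + 1e-6 * normal;                // offset to avoid self-intersection
    }
}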


3.2.2.1 The general photon map

The general photon map contains three kinds of photons: indirect, shadow and direct. The general photon map is mainly used to improve performance, and can in some cases also be used to approximate indirect lighting.

The indirect illumination photons can be used to approximate indirect lighting when there are few indirect lighting photons on a surface. Also, there is no need to send indirect illumination rays from a ray-surface intersection point if there are no indirect illumination photons nearby. Figure 5 is a direct visualization of the indirect illumination photons.

Figure 5: A visualization of the indirect illumination photons used in the general photon map.

The direct and shadow photons are then used in order to decide whether it's meaningful to send shadow rays or not. If there are many direct illumination photons, and no shadow photons, then the surface is simply lit by all light sources. If there are many shadow photons, and nearly no direct illumination photons, then the surface is not lit by any of the light sources. Shadow photons are sent through objects that occlude light, and stored whenever they hit a surface. The shadow and direct illumination photons are visualized in figures 6 and 7 respectively.


Figure 6: A visualization of the shadow photons used in the general photon map.

Figure 7: A visualization of the direct illumination photons used in the general photon map.

3.2.2.2 The caustics photon map

The caustics photon map contains only caustics photons. The caustics photons are cast from the light sources towards all (non-occluded) transparent objects, and then traced through the scene. Whenever a caustics photon has refracted through a transparent object, and then hits an opaque object (such as a diffuse surface), it is stored in the caustics photon map. The caustics photons are visualized in figure 8.


Figure 8: A visualization of the caustics photon map.

3.2.3 Direct illumination

Direct illumination is computed using shadow rays in combination with the general photon map. If a ray-surface intersection point is visible from a given light source, then that light source should contribute to the direct lighting of the ray-surface intersection point. This is done by casting a shadow ray toward each light source. If the shadow ray reaches the light source, we deduce that the surface is directly lit by that light source. If that is the case, then that light source will contribute to the illumination of the surface by using the surface's material (and indirectly the surface's BRDF).

If there are many shadow photons in the vicinity of the ray-surface intersection point, and nearly no direct illumination photons, then the surface is not lit by any of the light sources, and we can skip casting shadow rays. On the other hand, if there are many direct illumination photons, and few shadow photons, we can also skip casting shadow rays since we know that the ray-surface intersection point is directly illuminated by all light sources.
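The decision logic described above might be sketched as follows. The photon counts are assumed to come from a radius search in the general photon map, and the thresholds for ”many” and ”nearly no” photons are illustrative assumptions, not the values used in our implementation.

// Counts of direct and shadow photons found within a search radius around
// a ray-surface intersection point (obtained from the general photon map).
struct PhotonCounts {
    int direct = 0;
    int shadow = 0;
};

enum class VisibilityEstimate { FullyLit, FullyShadowed, Unknown };

// Classifies a point using nearby direct and shadow photons. Shadow rays only
// need to be cast when the estimate is Unknown.
VisibilityEstimate classifyVisibility(const PhotonCounts& counts) {
    const int many       = 32;  // assumed threshold for "many" photons
    const int nearlyNone = 2;   // assumed threshold for "nearly no" photons
    if (counts.direct >= many && counts.shadow <= nearlyNone)
        return VisibilityEstimate::FullyLit;        // lit by all light sources: skip shadow rays
    if (counts.shadow >= many && counts.direct <= nearlyNone)
        return VisibilityEstimate::FullyShadowed;   // in shadow: skip shadow rays
    return VisibilityEstimate::Unknown;             // ambiguous: fall back to shadow rays
}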

3.2.4 Indirect illumination and reflective rays

Indirect illumination is calculated by using a recursive ray trace. The ray trace is optimized by using the general photon map. The ray trace is rather complicated in the sense that it uses all the properties of the scene when deciding whether to shoot reflective, refractive or shadow rays. A ray stops the recursive trace either when it reaches a maximum depth or when it doesn't hit a surface. It is also possible to use Russian roulette in order to terminate ray traces.

The normalized direction D of any indirect illumination ray is calculated by using random cosine-weighted sampling from the hemisphere created in the direction of the surface's normal N (⇒ D · N > 0).

Another way to randomize the ray direction is to use the naïve method of creating a ray direction by rotating the normal. This method uses two uniform random generators for the azimuth and inclination angles, and then simply rotates the normal using those angles.
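A common way to implement the cosine-weighted hemisphere sampling described above is to sample a unit disk uniformly and project the sample up onto the hemisphere around the normal. The sketch below is our own illustration of that idea, not the report's actual code.

#include <cmath>
#include <random>
#include <glm/glm.hpp>

// Builds an arbitrary orthonormal basis (tangent, bitangent) around a unit normal n.
static void buildOrthonormalBasis(const glm::dvec3& n, glm::dvec3& tangent, glm::dvec3& bitangent) {
    const glm::dvec3 helper = (std::abs(n.x) > 0.9) ? glm::dvec3(0.0, 1.0, 0.0)
                                                    : glm::dvec3(1.0, 0.0, 0.0);
    tangent = glm::normalize(glm::cross(helper, n));
    bitangent = glm::cross(n, tangent);
}

// Returns a direction D drawn from a cosine-weighted distribution over the hemisphere
// around the normal N, so that D . N > 0 as required in the text.
glm::dvec3 sampleCosineWeightedHemisphere(const glm::dvec3& N, std::mt19937& rng) {
    std::uniform_real_distribution<double> uniform(0.0, 1.0);
    const double r1 = uniform(rng);
    const double r2 = uniform(rng);
    const double pi = 3.14159265358979323846;
    const double radius = std::sqrt(r1);          // uniform sample on the unit disk...
    const double phi = 2.0 * pi * r2;
    const double x = radius * std::cos(phi);
    const double y = radius * std::sin(phi);
    const double z = std::sqrt(1.0 - r1);         // ...projected up onto the hemisphere
    glm::dvec3 tangent, bitangent;
    buildOrthonormalBasis(N, tangent, bitangent);
    return glm::normalize(x * tangent + y * bitangent + z * N);
}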

A ray can also be perfectly reflected when intersecting with a surface that is either transparent or perfectly reflective. We define a perfectly reflected ray as the ray that has its origin in the ray-surface intersection point, and its direction R given by reflecting the incoming ray's normalized direction D over the surface's normal N. The reflected direction is thus given by equation 4.

R = 2I − D (4)

Where I = D − (D · N)N is an intermediary vector depicted in figure 9.

Figure 9: Reflection of a ray.
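Written out in code, equation 4 reduces to the familiar reflection formula D − 2(D · N)N, which is also what glm::reflect computes. The small function below is our own illustration, not the report's actual code.

#include <glm/glm.hpp>

// Perfect mirror reflection of the incoming (normalized) direction D over the normal N.
// Equation 4: R = 2I - D with I = D - (D . N)N, which simplifies to D - 2(D . N)N.
glm::dvec3 reflectDirection(const glm::dvec3& D, const glm::dvec3& N) {
    const glm::dvec3 I = D - glm::dot(D, N) * N;  // the intermediary (tangential) vector I
    return 2.0 * I - D;                           // equivalent to glm::reflect(D, N)
}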

Finally, when a ray terminates its recursive trace, its final radiance (and thus also its contribution to a surface) is calculated using the surface's material, using either Oren-Nayar or Lambertian BRDFs.

3.2.5 Refractive rays

A ray is refracted when intersecting with a surface that is transparent. The direction of a refracted ray is depicted in figure 10, and given by Snell's law:

sin(θ1) / sin(θ2) = n2 / n1

Where θ1 is the angle between the incoming direction D and the normal N1, θ2 is the angle between the normal N2 and the refracted ray R1, and n1 and n2 are the refractive indices of the two media.


Figure 10: Refraction of a ray.

Transparent surfaces may also reflect rays. In such cases we used Schlick's approximation to blend between reflected specular rays and refracted rays:

R(θ) = R0 + (1 − R0)(1 − cos θ)⁵

Where R0 = ((n1 − n2) / (n1 + n2))², θ is the angle between the direction of the incoming light and the normal of the transparent surface, and R is the approximated specular reflection coefficient.
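The sketch below illustrates how Snell's law and Schlick's approximation fit together when a ray hits a transparent surface. It is our own illustration under the stated conventions (D is the normalized incoming direction, N the surface normal facing against D, n1 the refractive index of the medium the ray travels in and n2 that of the medium it enters); the coefficient returned by schlickReflectance is then used to blend the contributions of the reflected and refracted rays.

#include <cmath>
#include <glm/glm.hpp>

// Schlick's approximation of the specular reflection coefficient R(theta).
double schlickReflectance(double cosTheta, double n1, double n2) {
    const double r0 = std::pow((n1 - n2) / (n1 + n2), 2.0);
    return r0 + (1.0 - r0) * std::pow(1.0 - cosTheta, 5.0);
}

// Refracts the normalized incoming direction D through a surface with normal N
// according to Snell's law. Returns false on total internal reflection.
bool refractDirection(const glm::dvec3& D, const glm::dvec3& N, double n1, double n2,
                      glm::dvec3& refracted) {
    const double eta = n1 / n2;
    const double cosI = -glm::dot(D, N);                   // cos(theta1), N assumed to face D
    const double sin2T = eta * eta * (1.0 - cosI * cosI);  // sin^2(theta2) from Snell's law
    if (sin2T > 1.0) return false;                          // total internal reflection
    const double cosT = std::sqrt(1.0 - sin2T);
    refracted = eta * D + (eta * cosI - cosT) * N;          // equivalent to glm::refract(D, N, eta)
    return true;
}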

3.2.6 Rendering of caustics

Caustics are directly visualized using the caustics photon map. This is done by first searching for all caustics photons within a given radius at the point where a ray intersects a surface. The radiance value for each photon is then calculated using either Oren-Nayar or Lambertian BRDFs, depending on the material of the surface. The final contribution F from the caustics photon map, at a ray-surface intersection point, can thus be calculated as follows:

F = (1 / (πr²)) Σ_{i=1}^{N} Ri

Where r is the radius of the given search area, N is the number of photons, and Ri is the radiance of photon i.
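The density estimate above could be coded roughly as follows. This is our own sketch: the photons within radius r are assumed to have already been gathered from the caustics photon map, and their radiance values Ri to have been weighted by the surface's BRDF as described in the text.

#include <vector>
#include <glm/glm.hpp>

// A caustics photon with its (BRDF-weighted) radiance contribution Ri.
struct CausticsPhoton {
    glm::dvec3 position;
    glm::dvec3 radiance;
};

// F = (1 / (pi r^2)) * sum of Ri over the N caustics photons found within radius r
// around the ray-surface intersection point.
glm::dvec3 causticsContribution(const std::vector<CausticsPhoton>& nearbyPhotons, double r) {
    const double pi = 3.14159265358979323846;
    glm::dvec3 sum(0.0);
    for (const CausticsPhoton& photon : nearbyPhotons) {
        sum += photon.radiance;
    }
    return sum / (pi * r * r);
}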


4 Results

We implemented a two-pass global illumination algorithm that combined ray tracing with two separate photon maps: the general photon map and the caustics photon map. The implementation was programmed in C++ from scratch, except for a k-d tree library named kdtree++, a shared memory multiprocessing API named OpenMP, and a rather popular mathematics library named glm. All test results were generated using an x64 Windows 7 PC with an Intel Core i5-4690K processor (4 cores at 3.5 GHz) and 8 GB RAM.

4.1 Parallelization

Rendering parallelization decreased rendering times by a factor higher than 2.0 when using a high number of rays on our 4-core machine, as shown by table 1. All of the parallelization tests were rendered using a maximum recursion depth of 4, and a resolution of 400x400 pixels.

Table 1: A comparison of unparallelized rendering times (URT) and parallelized rendering times (PRT).

Rays per pixel    URT               PRT               URT / PRT
1                 3 sec.            22 sec.           0.14
10                17 sec.           28 sec.           0.61
100               3 min. 8 sec.     1 min. 46 sec.    1.79
200               6 min. 2 sec.     3 min. 28 sec.    1.75
1 000             23 min. 11 sec.   10 min. 40 sec.   2.17

4.2 Caustics

In another test we varied the number of caustics photons. The measured rendering times of those tests are shown in table 2, and the resulting rendered images in figure 11. All of the caustics photon tests were rendered using 64 rays per pixel, a maximum recursion depth of 4, and a resolution of 400x400 pixels.

Table 2: Comparison of rendering times for different numbers of caustics photons.

Number of caustics photons    Rendering time
0                             26 sec.
1 000                         37 sec.
10 000                        44 sec.
100 000                       53 sec.


Figure 11: Comparison of rendering quality for 0, 1 000, 10 000 and 100 000 caustics photons.

4.3 Ray performance and quality

Varying the number of rays per pixel greatly affects both quality and performance, as shown by our tests presented in table 3 and by the rendered images in figure 12. The images were rendered using 100 000 caustics photons, a maximum recursion depth of 4, and a resolution of 400x400 pixels.


Table 3: Comparison of rendering times for different numbers of rays per pixel.

Rays per pixel    Rendering time
1                 16 sec.
10                20 sec.
100               1 min. 12 sec.
1 000             12 min. 43 sec.

Figure 12: Comparison of rendering quality for 1, 10, 100 and 1 000 rays per pixel.


4.4 Recursion depth performance and quality

The rendering times for varying (maximum) ray recursion depths can be seen in table 4, and the resulting images in figure 13. The images were rendered using 100 000 caustics photons, 64 rays per pixel, and a resolution of 400x400 pixels.

Table 4: Rendering times for varying maximum recursion depths.

Maximum recursion depth    Rendering time
1                          22 sec.
2                          33 sec.
3                          44 sec.
4                          53 sec.

Figure 13: Images rendered with maximum recursion depths of 1, 2, 3 and 4.


4.5 The Oren-Nayar BRDF

A final test was done by varying the roughness parameter of the same Oren-Nayar material. The results can be seen in figure 14. The images were rendered using 64 rays per pixel, and a resolution of 400x400 pixels.

Figure 14: Images rendered with the roughness parameter of the same Oren-Nayar material set to 0, 1, 10 and 100.


5 Discussion

Making a ray tracer is hard, and it's even harder to make an efficient ray tracer that can render photorealistic images. Most of the principles are rather intuitive and simple, but it's easy to miss important details, such as a simple dot product or ray direction (which happened to us numerous times). Most research papers don't delve into details, for good reasons, but that can make it hard to find good sources on how to do the basics correctly, which is not really helpful if you're new to the game.

After a hard struggle we managed to create something that we deem a success. Our implementation is successful in the sense that it handles different materials, smooth lighting phenomena and anti-aliasing gracefully (and efficiently), as can be seen in figure 15.

Figure 15: Our renderer handles different materials, smooth lighting phenomena and anti-aliasing with grace.


However, there is always room for improvement. For instance, the computational efficiency of ray-surface intersections could be improved by using some kind of scene subdivision. Octrees are pretty good at that, and there are very efficient AABB-ray intersection algorithms. It's even possible to go further by using non-uniform binary space partitioning trees. Another possible optimization is to use Russian roulette and priority-driven ray tracing to terminate unnecessary rays early.

There are many other aspects that could be improved. For example, in the images in figure 11 we can see that the caustics of the objects with lower refractivity are less noisy when using 100 000 photons than when using 10 000 photons. When looking at objects with higher refractivity, where the outgoing photons are more concentrated around a fixed point, the difference in noisiness is much smaller. This difference hints at a possible improvement: perhaps we could improve performance by shooting fewer caustics photons towards objects with higher refractivity, thus reducing the total number of caustics photons needed.

Lastly, the indirect photons could have been used far more often to approximate indirect lighting, and thus also to improve performance. We made a few attempts at that, but none of them were successful. We either had bad performance due to too many indirect photons, or bad quality due to too few indirect photons. In the end we had to go for something more conservative, and the indirect photons became wasted potential at best.

It’s evident that it’s possible to come up with optimizations, improvementsand changes no matter how many improvements and optimizations that areimplemented. In the end, it is the time and resources at hand that limits theimplementation. By using today’s computational capabilities we are able to domany interesting things that weren’t even remotely possible 30 years ago, andit’s rather mind-boggling to think about what we will be able to do 30 yearsfrom now. This concludes our report, and we hope that you had a pleasantreading.

References

[1] C. M. Goral, K. E. Torrance, D. P. Greenberg, and B. Battaile. Modeling the interaction of light between diffuse surfaces. ACM SIGGRAPH Computer Graphics, 18(3), 1984.

[2] Henrik Wann Jensen. Global Illumination using Photon Maps. Rendering Techniques '96 (Proceedings of the Seventh Eurographics Workshop on Rendering), 1996.

[3] James T. Kajiya. The rendering equation. ACM SIGGRAPH Computer Graphics, 20(4):143–150, 1986.

[4] Robert L. Cook, Thomas Porter, and Loren Carpenter. Distributed Ray Tracing. Computer Graphics, 18(3), 1984.


[5] Turner Whitted. An improved illumination model for shaded display. Communications of the ACM, 23(6):343–349, 1980.
