Single Sample Soft Shadows using Depth Maps

Stefan Brabec Hans-Peter Seidel

Max-Planck-Institut für Informatik

Abstract

In this paper we propose a new method for rendering soft shadows at interactive frame rates. Although the algorithm only uses information obtained from a single light source sample, it is capable of producing subjectively realistic penumbra regions. We do not claim that the proposed method is physically correct but rather that it is aesthetically correct. Since the algorithm operates on sampled representations of the scene, the shadow computation does not directly depend on the scene complexity. Having only a single depth and object ID map representing the pixels seen by the light source, we can approximate penumbrae by searching the neighborhood of pixels warped from the camera view for relevant blocker information.

We explain the basic technique in detail, showing how simple observations can yield satisfying results. We also address sampling issues relevant to the quality of the computed shadows, as well as speed-up techniques that are able to bring the performance up to interactive frame rates.

1 Introduction

One of the most problematic tasks in computer graphics is the accurate and efficient computation of soft shadows caused by extended light sources. Although there have been enormous efforts in this specific area, only a small subset of algorithms is really appropriate for interactive rendering applications.

In this paper we will present a way of computing soft shadows using only sampled images taken from the view of a point light source. This soft shadow algorithm can be seen as an extension of the classical shadow map algorithm for calculating hard shadows. Instead of computing only a binary value (shadowed or lit) for each pixel seen by the camera, our algorithm processes the neighborhood of the corresponding depth map entry to gather information about what the shadow might look like in the case of an area light source.

Even though the input data contains no information about the characteristics of an area light, the resulting shadows are nevertheless of very good quality and give the impression of a physically plausible computation. Using only a minimal amount of input data and a very compact algorithm, we can achieve extremely high computation speed. This way we can also utilize graphics hardware and specialized processor instruction sets.

2 Previous Work

Since a vast number of hard and soft shadow methods exist for general and very specific situations, we will only briefly discuss some methods here, focusing on those suitable for interactive and real-time applications, as well as on algorithms which are related to our method. As a good starting point we recommend Woo's survey on shadow algorithms [21].

In the field of hardware accelerated, interactive rendering, shadow algorithms are mainly categorized by the space in which the calculation takes place. One of the fundamental shadow algorithms, Crow's shadow volumes [5], processes the geometry of the scene. By extending occluder polygons to form semi-infinite volumes, so-called shadow volumes, shadowed pixels can be determined by simply testing if the pixel lies in at least one shadow volume. A hardware-accelerated implementation of Crow's shadow algorithm was later proposed by Heidmann [10]. McCool [15] presented an algorithm that reduces the often problematic geometry complexity of Crow's method by reconstructing shadow volumes from a sampled depth map. Complexity issues were also addressed by Chrysanthou and Slater [4]. They propose the use of BSP trees for efficient shadow volume calculations in dynamic scenes. Brotman and Badler [3] came up with a soft shadow version of Crow's algorithm where they generated shadow volumes for a number of light source samples and computed the overlap using a depth buffer algorithm. Discontinuity meshing, e.g. [14], is another exact way for computing soft shadows in object space. Here surfaces are subdivided in order to determine areas where the visible part of the area light source is constant.

Williams' shadow map algorithm [20] is the fundamental idea behind most methods working on sampled representations of the scene. The depths of visible pixels are computed for the view of the light source and stored away in a so-called depth or shadow map. In the final rendering pass, pixels seen by the camera are transformed to the light source coordinate system and tested against the precomputed depth values. A hardware-based shadow map technique was presented by Segal et al. [18].

Williams' original work suffered from sampling artifacts during the generation of the shadow map as well as when performing the shadow test. Reeves et al. [17] proposed a filtering method called percentage closer filtering which solved these problems and generates smooth, antialiased shadow edges. Reeves' approach is also often used to approximate penumbra regions by varying the filter kernel with respect to the projected footprint. This is somewhat similar to our approach but in general requires a very high resolution depth map in order to obtain soft shadows with reasonable quality.

Brabec et al. [2] showed how Reeves' filtering scheme can be efficiently mapped to hardware. Hourcade and Nicolas [12] also addressed the shadow map sampling problems and came up with a method using object identifiers (priority information) and prefiltering.

To compute soft shadow textures for receiver polygons, Herf and Heckbert [9] combined a number of hard shadow images using an accumulation buffer [7]. Although this method uses graphics hardware, it still requires a large number of light source samples to achieve smooth penumbra regions.

An approximate approach to soft shadowing was presented by Soler and Sillion [19] using convolution of blocker images. On modern hardware this method can utilize specialized DSP features to convolve images, leading to interactive rendering times. The main drawback of the method is the clustering of geometry, as the number of clusters is directly related to the amount of texture memory and convolution operations.

Heidrich et al. [11] showed that soft shadows for linear light sources can be computed using only a minimal number of light source samples. Depth maps are generated for each sample point and processed using an edge detection step. The resulting discontinuities are then triangulated and warped to a so-called visibility map, in which a percentage visibility value is stored. Although the method works very well for linear lights, it cannot directly be applied to the case of area light sources.

Keating and Max [13] used multi-layered depth images (MDIs) to approximate penumbra regions. This method is related to our algorithm because MDIs are obtained from only a single light source sample. However, in contrast to this multi-layer approach, our algorithm operates just on a single depth map taken from the view of the light source sample.

Agrawala et al. [1] efficiently adopted image-based methods to compute soft shadows. Although their coherence-based ray tracing method does not perform at interactive rates, they also presented an approach using layered attenuation maps, which can be used in interactive applications.

Figure 1: Computing penumbrae for a point light source.

A fast soft shadow method, especially suited for technical illustrations, was proposed by Gooch et al. [6]. Here the authors project the same shadow mask multiple times onto a series of stacked planes and translate and accumulate the results onto the receiver plane.

Haines [8] proposed a method for approximating soft shadows by first generating a hard shadow image on the receiver plane and then computing penumbra regions using distance information obtained from the occluder's silhouette edges. This paper is related to our work since it is also based on the work of Parker et al. [16], which will be explained in detail in Section 3.1. Drawbacks of Haines' method are that receivers need to be planar and that penumbra regions are only generated for regions outside the initial hard shadow.

3 Soft Shadow Generation using Depth Maps

3.1 Single Sample Soft Shadows

Parker et al. [16] showed how soft penumbra regions can be generated by defining an extended hull for each possible occluder object. By treating the inner object as opaque and having the opacity of the outer object fall off towards the outer boundary, one can dim the contribution of a point light source according to the relative distances of light, occluder and receiver. Figure 1 illustrates this scheme.

In order to avoid light leaks occurring for adjacent objects, the size of the inner object needs to be at least as large as the original occluder geometry. Although this causes relatively large umbra regions, which would not occur in a physically correct shadow computation, the approximation still produces reasonable-looking shadows as long as the occluder objects aren't too small compared to the simulated light source area. Parker et al. implemented this scheme using standard ray tracing. In this case, it is a comparatively easy task to compute the extended hulls for primitives like spheres and triangles, and ray intersection directly calculates the distances to the outer and inner boundaries, which are used to compute a corresponding attenuation factor.

Although it was shown that the algorithm only introduces about 20% of computation overhead (compared to normal ray tracing), it is still far from being suitable for interactive rendering. Especially when it comes to more complex scenes, too much computation is spent on extending the geometric primitives and computing attenuation factors that later will be discarded.

In the following sections we will show that this method can be adapted to work on sampled data (depth maps) in a much more efficient manner, while still achieving good shadow quality.

3.2 A Sampling Based Approach

Just like the traditional shadow map algorithm presented in [20], we start with the computation of two depth images, one taken from the view of the point light source and one taken from the camera. To compute hard shadows we simply have to compare the transformed z value of each frontmost pixel (as seen by the camera) to the corresponding entry in the light source depth map, according to the following algorithm:

foreach (x, y) {
    P  = (x, y, depth_camera[x, y])
    P' = warp_to_light(P)
    if (depth_light[P'_x, P'_y] < P'_z)
        pixel is blocked
    else
        pixel is lit
}
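For readers who prefer concrete code, the following C++ sketch shows one possible implementation of this hard shadow pass; the 4×4 camera-to-light warp matrix, the array layout, and all function names are illustrative assumptions rather than the authors' actual implementation.

// Minimal sketch of the hard shadow test described above. The warp matrix is assumed
// to map camera image coordinates plus depth, (x, y, depth, 1), directly to light image
// coordinates plus light-space depth (i.e. it already contains the viewport/projection
// concatenation). Depth maps are stored as row-major float arrays.
#include <array>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

Vec3 warpToLight(const std::array<float, 16>& m, float x, float y, float z)
{
    // Column-major 4x4 matrix-vector product followed by the perspective divide.
    float tx = m[0]*x + m[4]*y + m[8]*z  + m[12];
    float ty = m[1]*x + m[5]*y + m[9]*z  + m[13];
    float tz = m[2]*x + m[6]*y + m[10]*z + m[14];
    float tw = m[3]*x + m[7]*y + m[11]*z + m[15];
    return { tx / tw, ty / tw, tz / tw };
}

// Classic shadow map test: returns 1 for lit pixels, 0 for blocked ones.
std::vector<float> hardShadow(const std::vector<float>& depthCamera, int camW, int camH,
                              const std::vector<float>& depthLight, int lightW, int lightH,
                              const std::array<float, 16>& camToLight)
{
    std::vector<float> lit(size_t(camW) * camH, 1.0f);
    for (int y = 0; y < camH; ++y)
        for (int x = 0; x < camW; ++x) {
            Vec3 p = warpToLight(camToLight, float(x), float(y), depthCamera[y * camW + x]);
            int sx = int(std::lround(p.x)), sy = int(std::lround(p.y));
            if (sx < 0 || sy < 0 || sx >= lightW || sy >= lightH) continue; // outside the map
            if (depthLight[sy * lightW + sx] < p.z)                         // occluder in front
                lit[y * camW + x] = 0.0f;
        }
    return lit;
}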

To modify this method to add an outside penumbra region, we have to extend the else branch of the shadow test to determine if the pixel is really lit or lies in a penumbra region. According to the ray tracing scheme explained in the previous section, we have to trace back the ray from the surface point towards the light source and see if any outer object is intersected. If we consider the light source depth map as a collection of virtual layers, where each layer is a binary mask describing which pixels between the light and the layer got blocked by an occluder in between (hard shadow test result), we can simulate the intensity fall-off caused by an area light source by choosing the nearest layer to P'_z that is still in front, and compute the distance between (P'_x, P'_y) and the nearest blocked pixel in that specific layer. This is in a sense similar to Parker's method since finding the minimum distance corresponds to intersecting the outer hull and computing the distance to the inner boundary. The main difference is of course that we use a sampled representation containing all possible occluders rather than the exact geometry of only one occluder.

Figure 2: Projecting and searching for the nearest blocker pixel.

Figure 2 illustrates the search scheme using a very simple setup consisting of the umbra generated by an ellipsoid as an occluder and a ground plane as the receiver polygon. For a given point P which does not lie inside the umbra, we first warp P to the view of the light source (P'). Since the original point P was somewhere near the umbra, we find the transformed point P' in the neighborhood of the blocker image which causes the umbra. To calculate an attenuation factor for P, we start searching the neighborhood of P' until we either find a blocked pixel or a certain maximal search radius r_max is exceeded. The attenuation factor f is now simply the minimal distance r (or r = r_max if no blocking pixel is found) divided by the maximal radius r_max, so f = r / r_max rises from 0 (no illumination) to 1 (full illumination) as the distance to the blocker increases.

In other words, we can now generate smooth shadow penumbra regions of a given maximal extent r_max. To simulate the behavior of a real area light source, we now have to define which properties affect the size of the penumbra and how these can be realized with our search scheme. As widely known, the following two distances mainly define the extent of the penumbra (apart from other properties like the orientation of receiver and light source):

• the distance between occluder and receiver, and

• the distance between receiver and light source.

For our search scheme the distance between receiver and light source can be integrated by varying r_max according to the distance between a given surface point P and the light source position. Assuming a fixed occluder, a receiver far away from the light source will get a larger penumbra, whereas a receiver near to the light source will have a smaller r_max assigned.

Taking into account the distance between occluder and receiver is a little bit tricky: since finding an appropriate occluder is the stop criterion for our search routine, we do not know in advance what this distance will be. What we do know is that the occluder has to be inside the region determined by the maximal extent, which is computed using the distance between receiver and light source.

In other words, the final r_max may be less than the initial search radius. For our search routine this means that we first search up to the maximal extent until an occluder pixel is found, then re-scale the initial search radius by a factor computed using the distance between the surface point P and the found occluder pixel, and use this r_max as the denominator for computing the attenuation factor f.

Assuming that the position of the point light in light source space is located at (0, 0, 0) and that the light direction is along the z axis, we set the initial search radius

    r_max = r_scale * |P'_z| + r_bias ,

where r_scale and r_bias are user defined constants describing the area light effect (r_bias can be used to force a certain penumbra width even for receivers very near to the light source). Since shadow maps are usually generated for the very limited cut-off angle of spotlights, the difference of using P'_z instead of computing a Euclidean distance is negligible. We can now rewrite the hard shadow algorithm to produce soft shadows by simply adding this search function:

foreach (x, y) {
    P  = (x, y, depth_camera[x, y])
    P' = warp_to_light(P)
    if (depth_light[P'_x, P'_y] < P'_z)
        pixel is blocked
    else {
        f = search(P')
        modulate pixel by f
    }
}

search(P') {
    r     = 0
    r_max = r_scale * |P'_z| + r_bias
    while (r < r_max) {
        if ∃(s, t) : ‖(P'_x, P'_y) − (s, t)‖ = r ∧ depth_light[s, t] < P'_z {
            r_max *= r_shrink * (P'_z − depth_light[s, t])
            return clamp_{0,1}(r / r_max)
        } else
            increase r
    }
    return 1.0
}

In the first loop we iterate over all frontmost pixels as seen by the camera, performing the hard shadow test. For each lit pixel we start a search routine where we search the light source depth map in order to find a suitable blocker pixel at a minimal distance to the transformed surface point. If a blocker pixel is found we then re-scale the initial r_max by a factor computed using the distance between the surface point and the occluder pixel. A user-defined scaling factor r_shrink is used to give additional control over the effect of this distance.

As can be seen in the pseudo code, the described virtual layers are implicitly selected by processing only those pixels in the depth map where a blocker lies in front of the potential receiver (depth_light[s, t] < P'_z).

Up to now we have restricted ourselves to a very simple setup where the receiver was parallel to the light source image plane. This has the effect that P'_z remains constant during the soft shadow search, or in other words, the search takes place in a constant virtual layer. This is no longer the case if we consider an arbitrary receiver as depicted in Figure 3.

Figure 3: Wrong self shadowing due to layer crossing.

If we performed a search on the constant layer z < P'_z we would immediately end up in the receiver's own shadow, since the receiver plane may cross several of the virtual layers. This can be seen in the virtual layer image in Figure 3, where about two thirds of the layer contain blocked pixels belonging to the receiver polygon.

To solve this problem, we either have to divide the scene into disjunct occluders and receivers (which is e.g. suitable for games where a character moves in a static environment), which would make the algorithm only suitable for very special situations, or we need to supply more information to the search routine. To define an additional search criterion, which answers the question "does this blocker pixel belong to me?", we follow Hourcade's [12] approach and assign object IDs. These IDs are identification numbers grouping logical, spatially related objects of the scene.

It must be pointed out that all triangles belonging to a certain object in the scene must be assigned the same object ID, otherwise self shadowing artifacts would occur if the search exceeded the projected area of the triangle belonging to P. Of course there are situations where the ID approach also fails, e.g. if distinct objects are nearly adjacent, but for most real-time applications there should exist a reasonable grouping of objects.

3.3 Handling of Hard Shadow Regions

Up to now we have concentrated on the computation of the outer part of the hard shadow region and simply assumed that the hard shadow region is not lit at all. In the case of an area light source, which we would like to simulate, this is of course an indefensible assumption. What we would like to obtain is a penumbra region which also smooths this inner region. This can easily be achieved if we apply the same search technique to pixels that are initially blocked by an occluder. Instead of searching for the nearest blocker pixel within a given search radius we now have to search for the nearest pixel that is lit by the light source.

To combine this with the outer penumbra result we assume that outer and inner regions meet at an attenuation value of 0.5 (or some user defined constant). The final algorithm (including the object ID test) that produces penumbra regions can then be implemented according to the following pseudo code:

foreach (x, y) {
    P      = (x, y, depth_camera[x, y])
    P'     = warp_to_light(P)
    P'_ID  = id_camera[x, y]
    inner  = depth_light[P'_x, P'_y] < P'_z
    f      = search(P', inner)
    modulate pixel by f
}

search(P', inner) {
    r     = 0
    r_max = r_scale * |P'_z| + r_bias
    while (r < r_max) {
        if inner
            if ∃(s, t) : ‖(P'_x, P'_y) − (s, t)‖ = r ∧
                         depth_light[s, t] >= P'_z ∧ id_light[s, t] == P'_ID {
                r_max *= r_shrink * (depth_light[s, t] − P'_z)
                return 0.5 * clamp_{0,1}(r / r_max)
            }
        else
            if ∃(s, t) : ‖(P'_x, P'_y) − (s, t)‖ = r ∧
                         depth_light[s, t] < P'_z ∧ id_light[s, t] ≠ P'_ID {
                r_max *= r_shrink * (P'_z − depth_light[s, t])
                return 1.0 − 0.5 * clamp_{0,1}(r / r_max)
            }
        increase r
    }
    return inner ? 0.0 : 1.0
}
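A possible C++ translation of the search routine above is sketched below for a single warped pixel. It follows the pseudo code (outer and inner penumbra, object ID test, re-scaling of r_max), but the band-by-band scan, the map layout and the parameter names are assumptions made for illustration.

// Attenuation search for one pixel already warped to light space (px, py, pz) with
// object ID pid. "inner" means the pixel failed the hard shadow test. For inner pixels
// we look for the nearest lit pixel of the same object, for outer pixels for the
// nearest blocking pixel of a different object, scanning distance bands of width 1.
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

float searchAttenuation(float px, float py, float pz, std::uint8_t pid, bool inner,
                        const std::vector<float>& depthLight,
                        const std::vector<std::uint8_t>& idLight, int w, int h,
                        float rScale, float rBias, float rShrink)
{
    float rMax = rScale * std::fabs(pz) + rBias;
    for (float r = 1.0f; r < rMax; r += 1.0f) {
        float best = -1.0f, bestDepth = 0.0f;            // nearest hit in the current band
        int x0 = std::max(0, int(px - r)), x1 = std::min(w - 1, int(px + r));
        int y0 = std::max(0, int(py - r)), y1 = std::min(h - 1, int(py + r));
        for (int sy = y0; sy <= y1; ++sy)
            for (int sx = x0; sx <= x1; ++sx) {
                float d = std::hypot(float(sx) - px, float(sy) - py);
                if (d > r || d <= r - 1.0f) continue;    // visit only the current band
                float zl = depthLight[sy * w + sx];
                std::uint8_t idl = idLight[sy * w + sx];
                bool hit = inner ? (zl >= pz && idl == pid)   // lit pixel of the receiver
                                 : (zl <  pz && idl != pid);  // blocker of another object
                if (hit && (best < 0.0f || d < best)) { best = d; bestDepth = zl; }
            }
        if (best >= 0.0f) {
            rMax *= rShrink * std::fabs(bestDepth - pz); // re-scale r_max as in the paper
            float f = std::clamp(best / rMax, 0.0f, 1.0f);
            return inner ? 0.5f * f : 1.0f - 0.5f * f;
        }
    }
    return inner ? 0.0f : 1.0f;                          // umbra or fully lit
}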

3.4 Discussion

The presented algorithm is capable of producing perceptually pleasing, rather than physically correct, soft shadows using a total of four sampled images of the scene (two object ID maps, two depth maps). The behavior (extent) of the area light can be controlled by user defined constants. Using unique object IDs to group primitives into logical groups, soft shadows are computed for every occluder/receiver combination not sharing the same object ID.

4 Implementation

4.1 Generating the Input Data

Since our algorithm relies on sampled input data, graphics hardware can be used to generate the input data needed for the shadow computation. In a first step we render the scene as seen by the light source and encode object IDs as color values. For very complex scenes we either use all available color channels (RGBA) or restrict ourselves to one channel (alpha) and assign object IDs modulo 2^n (n bits of precision in the alpha channel). This gives us the depth map (z-buffer) and the object IDs of the frontmost pixels according to the light source view, which we transfer back to the host memory. We then repeat the same procedure for the camera view. If only the alpha channel is used for encoding the object IDs, we can combine this rendering pass with the rendering of the final scene (without shadows).
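One possible way to generate these maps with legacy OpenGL is sketched below; the flat-color ID pass, the hypothetical drawObject callback, and the two glReadPixels calls are illustrative assumptions, since the paper does not prescribe a particular API usage.

// Sketch: render the scene from the light with each object drawn in a flat color that
// encodes its ID in the alpha channel, then read back depth and IDs. Assumes a current
// GL context whose projection/modelview matrices are already set to the light's view.
#include <GL/gl.h>
#include <cstdint>
#include <vector>

void renderLightMaps(int w, int h, int numObjects, void (*drawObject)(int),
                     std::vector<float>& depthLight, std::vector<std::uint8_t>& idLight)
{
    glViewport(0, 0, w, h);
    glEnable(GL_DEPTH_TEST);
    glDisable(GL_LIGHTING);                   // flat colors only: the color encodes the ID
    glDisable(GL_TEXTURE_2D);
    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    for (int i = 0; i < numObjects; ++i) {
        GLubyte id = GLubyte((i + 1) & 0xff); // IDs modulo 2^8 when only alpha is used
        glColor4ub(id, id, id, id);
        drawObject(i);
    }

    // Transfer depth and object IDs back to host memory (two frame buffer reads).
    depthLight.resize(size_t(w) * h);
    idLight.resize(size_t(w) * h);
    glReadPixels(0, 0, w, h, GL_DEPTH_COMPONENT, GL_FLOAT, depthLight.data());
    glReadPixels(0, 0, w, h, GL_ALPHA, GL_UNSIGNED_BYTE, idLight.data());
}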

In cases where 8 bits are enough we could also use a special depth/stencil format available on newer NVIDIA GeForce cards. With this mode we simply encode IDs as stencil values and obtain a packed ID/depth map (8-bit stencil, 24-bit depth) using only one frame buffer read. Another benefit of this format is that memory accesses to ID/depth pairs are more cache friendly.

4.2 Shadow Computation

The actual shadow computation takes place on the host CPU. According to the pseudo code in Section 3.3, we iterate over all pixels seen by the camera and warp them to the light source coordinate system. Next we start searching for either the nearest blocker pixel (outer penumbra region) or the nearest pixel that is lit (inner penumbra region).

Figure 4: Computing distances at subpixel accuracy.

Since memory accesses (and the resulting cache misses) are the main bottleneck, we do not search circularly around the warped pixel but search linearly using an axis-aligned bounding box. Doing so we are actually computing more than needed, but this way we can utilize SIMD (single instruction, multiple data) features of the CPU, e.g. MMX, 3DNow, or SSE, which allows us to compute several r in parallel. If an appropriate blocking pixel is found (object ID test, minimal distance), we store the resulting attenuation factor for the given camera space pixel. If the search fails, a value of 1.0 or 0.0 is assigned (full illumination, hard shadow).
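The box-based variant can be written so that the inner loop is branch-light and independent per pixel, which lets the compiler vectorize it. The sketch below only illustrates this idea of scanning the whole square and keeping the minimum squared distance; all names are assumptions, and a real implementation would additionally apply the inner/outer and r_max re-scaling logic of Section 3.3.

// Sketch of the SIMD-friendly search: scan the whole axis-aligned square around the
// warped pixel and keep the minimum squared distance to any qualifying blocker, with
// no early exit. The loop body has no data-dependent control flow, so it can be
// auto-vectorized. Names and data layout are illustrative assumptions.
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

float minBlockerDistance(float px, float py, float pz, std::uint8_t pid, int rMax,
                         const std::vector<float>& depthLight,
                         const std::vector<std::uint8_t>& idLight, int w, int h)
{
    float best2 = float(rMax) * float(rMax);           // squared search limit
    int cx = int(px), cy = int(py);
    for (int sy = std::max(0, cy - rMax); sy <= std::min(h - 1, cy + rMax); ++sy)
        for (int sx = std::max(0, cx - rMax); sx <= std::min(w - 1, cx + rMax); ++sx) {
            bool blocker = depthLight[sy * w + sx] < pz && idLight[sy * w + sx] != pid;
            float dx = float(sx) - px, dy = float(sy) - py;
            float d2 = dx * dx + dy * dy;
            // Only qualifying pixels may lower the minimum.
            best2 = (blocker && d2 < best2) ? d2 : best2;
        }
    return std::sqrt(best2);                            // equals rMax if nothing was found
}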

At the end, the contribution of the point light source is modulated by the attenuation map using alpha blending.

4.3 Improvements

Subpixel Accuracy

When warping pixels from camera to light there are two ways to initialize the search routine. One would be to simply round (P'_x, P'_y) to the nearest integer position and compute distances using only integer operations. While this should give the maximum performance, the quality of the computed penumbrae would suffer from quantization artifacts. Consider the case where pixels representing a large area in camera screen space are warped to the same pixel in the light source depth map. Since all pixels will find the same blocker pixel at the same distance, a constant attenuation factor will be computed for the whole area. This can be avoided by not rounding to the nearest integer but performing the distance calculation at floating point precision. As depicted in Figure 4, we compute the distance of the warped pixel (grey) to the next blocker pixels, which lie at integer positions. Quantization artifacts can be further reduced if we also slightly jitter the integer positions of the blocker pixels. In practice we observed that the latter is only needed for very low resolution depth maps.
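A minimal sketch of such a subpixel distance computation is shown below; the hash-based jitter is our own illustrative choice, since the paper only states that the blocker positions are slightly jittered.

// Distance between the warped point (px, py), kept at floating point precision, and a
// blocker pixel at integer position (sx, sy), optionally jittered to break up
// quantization patterns on very low resolution depth maps.
#include <cmath>
#include <cstdint>

float blockerDistance(float px, float py, int sx, int sy, bool jitter)
{
    float bx = float(sx), by = float(sy);
    if (jitter) {
        // Cheap per-pixel hash mapped to [-0.5, 0.5); an assumption for illustration.
        std::uint32_t hsh = std::uint32_t(sx) * 73856093u ^ std::uint32_t(sy) * 19349663u;
        bx += (float(hsh & 0xffffu) / 65536.0f) - 0.5f;
        by += (float((hsh >> 16) & 0xffffu) / 65536.0f) - 0.5f;
    }
    return std::hypot(bx - px, by - py);   // distance at floating point precision
}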

Figure 5: Subdivision and interpolation.

Adaptive Sampling

Up to now we have only briefly discussed the cost of searching the depth map. Consider a scene where only 5% of the frontmost pixels are in hard shadow. To compute accurate penumbra regions we would need to perform neighborhood searches for 95% of the pixels in the worst case (the worst case occurs when all pixels are in the view of the light source). So for all completely lit pixels we have searched the largest region (r_max) without finding any blocker pixel. Even with a highly optimized search routine and depth maps of moderate size it would be very difficult to reach interactive frame rates.

Instead we propose an interpolation scheme that efficiently reduces the number of exhaustive searches needed for accurate shadowing. The interpolation starts by iterating over the columns of the camera depth map. In each iteration step, we take groups of 5 pixels and do the hard shadow test for all of them. Additionally, we also store the corresponding object IDs of the blockers, or, in the case of lit pixels, the object ID of the receiver pixel. Next, we perform a soft shadow search for the two border pixels in this group. As a criterion for the inner pixels we check if

• the object IDs are equal and

• the hard shadow test results are equal.

If this is true, we assume that there will be no dramatic shadow change within the pixel group and simply linearly interpolate the attenuation factors of the border pixels across the middle pixels. If the group test fails we refine by doing the soft shadow search for the middle pixel and subdivide the group into two three-pixel groups for which we repeat the group test, interpolation and subdivision.

Figure 5 shows an example of such an interpolation step. Let us assume that the object ID of pixel 3 differs from the rest. In the first phase we perform hard shadow tests for all pixels and soft shadow searches for the two border ones. Since the interpolation criterion fails (IDs not equal), the pixel group is refined by a soft shadow search for pixel 2 and subdivided into two groups. Pixel group (0, 1, 2) fulfills the criterion and an interpolated attenuation factor is assigned to pixel 1, whereas for pixel group (2, 3, 4) we need to compute the attenuation by search. As we will later also need object IDs for interpolated pixels, we simply use the object ID of one interpolant in that case. We repeat this for all pixel groups in this and every 4th column, leaving a gap of three pixels in the horizontal direction.

Having linearly interpolated over the columns we now process all rows in the same manner and fill up the horizontal gaps. This bi-linear interpolation mechanism is capable of reducing the number of expensive searches. In the best case, the searching is only done for one pixel in a 16 pixel block. Since this case normally occurs very often (e.g. in fully illuminated areas), we can achieve a great speed-up using the interpolation. On the other hand, quality loss is negligible or non-existent because of the very conservative refinement.

The size of the pixel group used for interpolation should depend on the image size. In experiments we observed that blocks of 4×4 pixels are a good speed/quality tradeoff when rendering images of moderate size (512×512, 800×600 pixels), whereas larger block sizes may introduce artifacts due to the non-perspectively correct interpolation.
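The 1D refinement step described above could be written recursively as in the following sketch, where evaluate(i) stands for the full soft shadow search of Section 3.3; the function and array names are assumptions for illustration.

// 1D adaptive refinement for one group of pixels along a column. att[] holds attenuation
// factors, ids[] object IDs, hard[] hard shadow test results; evaluate(i) performs the
// expensive soft shadow search for pixel i. The 2D scheme applies this first to columns
// (every 4th column), then to rows, filling the gaps.
#include <functional>
#include <vector>

void refineGroup(int lo, int hi,                      // inclusive group borders
                 std::vector<float>& att,
                 const std::vector<int>& ids,
                 const std::vector<bool>& hard,
                 const std::function<float(int)>& evaluate)
{
    if (hi - lo < 2) return;                          // nothing left between the borders
    bool uniform = true;
    for (int i = lo + 1; i <= hi; ++i)                // group test: same ID, same hard result
        uniform = uniform && (ids[i] == ids[lo]) && (hard[i] == hard[lo]);

    if (uniform) {                                    // linearly interpolate the border values
        for (int i = lo + 1; i < hi; ++i) {
            float t = float(i - lo) / float(hi - lo);
            att[i] = (1.0f - t) * att[lo] + t * att[hi];
        }
    } else {                                          // refine: search the middle, split in two
        int mid = (lo + hi) / 2;
        att[mid] = evaluate(mid);
        refineGroup(lo, mid, att, ids, hard, evaluate);
        refineGroup(mid, hi, att, ids, hard, evaluate);
    }
}

// Usage per 5-pixel group: att[lo] = evaluate(lo); att[hi] = evaluate(hi);
// refineGroup(lo, hi, att, ids, hard, evaluate);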

5 Results

We have implemented our soft shadow technique on an Intel Pentium 4 1.7 GHz computer equipped with an NVIDIA GeForce3 graphics card. Since the generation of depth and ID maps is done using graphics hardware, we get an additional overhead due to the two frame buffer reads needed to transfer the sampled images back to host memory.

Figure 7 shows the results of our soft shadow algorithm for a very simple scene consisting of one torus (occluder) and a receiver plane. We rendered the same scene three times, varying only the position and orientation of the occluder.

All images in Figure 7 were rendered using an image resolution of 512×512 pixels and a light depth/ID map resolution of 256×256 pixels. By default, we always use the full image resolution when computing the depth and ID map for the camera view. Frame rates for this scene are about 10-15 fps. Computing only the hard shadows (shadow test done on the host CPU), the scene can be rendered at about 30 fps.

In the left image r_max was set to 20 (a 20 pixel search radius) for the inner and outer penumbra. The receiver plane does not reach this maximum (due to the distance between receiver and light source). The average search radius for pixels on the receiver plane is about 16 pixels. The effect of increasing or decreasing r_max for this scene is plotted in Figure 6. It must be pointed out that the distance between occluder and receiver does not affect the initial search radius. Therefore the cost of computing soft shadows for the three images in Figure 7 is nearly constant.

Figure 6: Frame rates (fps) over search radius for the torus test scene (Figure 8).

In the left image artifacts can be seen (ring), where the inner and outer penumbra meet. This is because the attenuation factors for inner and outer regions are computed in a slightly different way (see Section 3.3). Theoretically this transition should be smooth.

Figure 8 (left) shows a more crowded example scene with objects placed at various heights. It can be seen that objects very near to the floor plane cast very sharp shadows, whereas the shadows from the three tori are much smoother. The other two images in Figure 8 show the scene with hard shadows and hard shadows with outer penumbra. Since our soft shadow algorithm is based on the shadow map technique, we are independent of the scene geometry, which means we can generate soft shadows for arbitrary geometry. There is no distinction between receiver and occluder objects (apart from the missing self shadowing due to the ID test).

Figure 9 shows two more complex scenes where we used our soft shadow algorithm for penumbra generation. In order to assign reasonable object IDs we simply group polygons using the tree structure obtained when parsing the scene file. This way all polygons sharing the same transformation and material node are assigned the same object ID. Both images were taken using a low-resolution light depth/ID map of 256×256 pixels and an image resolution of 512×512 pixels. In the right image we chose a very large cutoff angle for the spotlight, which would normally generate very coarse hard shadows. Here the subpixel accuracy explained in Section 4.3 efficiently smooths the shadows. Both images can be rendered at interactive frame rates (≈ 15 fps).

Note that all the timings strongly vary with the size of the penumbra, so changing the light position or altering r_max may speed up or slow down the computation, depending on the number of searches that have to be performed. When examining the shape of the penumbrae, one can observe that they do not perfectly correspond to the occluder shape. This is due to the circular nature of the search routine, which rounds off corners when searching for the minimal distance.

6 Conclusions and Future Work

In this paper we have shown how good-looking, soft penumbra regions can be generated using only information obtained from a single light source sample. Although the method is a very crude approximation, it gives a dramatic change in image quality while still being computationally efficient. We showed how the time consuming depth map search can be avoided for many regions by interpolating attenuation factors across blocks of pixels. Since the algorithm works on sampled representations of the scene, computation time depends mostly on the shadow sizes and image resolutions and not on geometric complexity, which makes the method suitable for general situations.

In its current state the algorithm still relies on a number of user parameters (r_max, r_shrink, etc.) which were introduced ad hoc. As future work we would like to hide these parameters and compute them based on one intuitive parameter (e.g. the radius of a spherical light source, defined in the scene's coordinate system). This way it would also be possible to compare our method to more accurate algorithms.

With real-time frame rates as a future goal, another focus will be on more sophisticated search algorithms that work on hierarchical and/or tiled depth maps, as well as investigating methods of pre-computed or cached distance information. Further speed improvements could also be achieved by using graphics hardware, e.g. interleaved frame buffer reads, as well as on the host CPU by using special processor instruction sets.

Another research direction will be the quality of shadows. Up to now we simply used a linear intensity fall-off, which of course is not correct. Assuming a diffuse spherical light and an occluder with a straight edge (similar to Parker's original algorithm), a better approximation would be a sinusoid as the attenuation function.

Finally, we have only slightly addressed aliasing issues that occur when working on sampled data. Our algorithm can work on very low-resolution image data since the search technique efficiently smooths blocky hard shadows. However, we expect an additional improvement in quality by using filtering schemes that also take into account the stamp size of the warped pixel or work on super-sampled depth maps.

Acknowledgements

We would like to thank Prof. Wolfgang Heidrich of the University of British Columbia, Canada, and the anonymous reviewers for valuable discussions and comments on this topic.

References

[1] Maneesh Agrawala, Ravi Ramamoorthi, Alan Heirich, and Laurent Moll. Efficient image-based methods for rendering soft shadows. Proceedings of SIGGRAPH 2000, pages 375–384, July 2000. ISBN 1-58113-208-5.

[2] Stefan Brabec and Hans-Peter Seidel. Hardware-accelerated rendering of antialiased shadows with shadow maps. To appear in: Computer Graphics International 2001, 2001.

[3] L. S. Brotman and N. I. Badler. Generating soft shadows with a depth buffer algorithm. IEEE Computer Graphics and Applications, 4(10):71–81, October 1984.

[4] Yiorgos Chrysanthou and Mel Slater. Shadow volume BSP trees for computation of shadows in dynamic scenes. 1995 Symposium on Interactive 3D Graphics, pages 45–50, April 1995. ISBN 0-89791-736-7.

[5] Franklin C. Crow. Shadow algorithms for computer graphics. In Computer Graphics (SIGGRAPH '77 Proceedings), pages 242–248, July 1977.

[6] Bruce Gooch, Peter-Pike J. Sloan, Amy Gooch, Peter S. Shirley, and Rich Riesenfeld. Interactive technical illustration. In 1999 ACM Symposium on Interactive 3D Graphics, pages 31–38. ACM SIGGRAPH, April 1999. ISBN 1-58113-082-1.

[7] Paul E. Haeberli and Kurt Akeley. The accumulation buffer: Hardware support for high-quality rendering. In Computer Graphics (SIGGRAPH '90 Proceedings), pages 309–318, August 1990.

[8] E. Haines. Soft planar shadows using plateaus. Journal of Graphics Tools, 6(1):19–27, 2001.


[9] Paul Heckbert and Michael Herf. Simulating soft shadows with graphics hardware. Technical Report CMU-CS-97-104, Carnegie Mellon University, January 1997.

[10] T. Heidmann. Real shadows real time. IRIS Universe, 18:28–31, November 1991.

[11] Wolfgang Heidrich, Stefan Brabec, and Hans-Peter Seidel. Soft shadow maps for linear lights. Rendering Techniques 2000: 11th Eurographics Workshop on Rendering, pages 269–280, June 2000. ISBN 3-211-83535-0.

[12] J. C. Hourcade and A. Nicolas. Algorithms for antialiased cast shadows. Computers & Graphics, 9(3):259–265, 1985.

[13] Brett Keating and Nelson Max. Shadow penumbras for complex objects by depth-dependent filtering of multi-layer depth images. In Rendering Techniques '99 (Proc. of Eurographics Rendering Workshop), pages 197–212, June 1999.

[14] Daniel Lischinski, Filippo Tampieri, and Donald P. Greenberg. Discontinuity meshing for accurate radiosity. IEEE Computer Graphics & Applications, 12(6):25–39, November 1992.

[15] Michael D. McCool. Shadow volume reconstruction from depth maps. ACM Transactions on Graphics, 19(1):1–26, January 2000.

[16] Steven Parker, Peter Shirley, and Brian Smits. Single sample soft shadows. Technical Report UUCS-98-019, Computer Science Department, University of Utah, 1998. Available from http://www.cs.utah.edu/vissim/bibliography/.

[17] William T. Reeves, David H. Salesin, and Robert L. Cook. Rendering antialiased shadows with depth maps. In Computer Graphics (SIGGRAPH '87 Proceedings), pages 283–291, July 1987.

[18] Marc Segal, Carl Korobkin, Rolf van Widenfelt, Jim Foran, and Paul Haeberli. Fast shadow and lighting effects using texture mapping. In Computer Graphics (SIGGRAPH '92 Proceedings), pages 249–252, July 1992.

[19] Cyril Soler and François X. Sillion. Fast calculation of soft shadow textures using convolution. In Computer Graphics (SIGGRAPH '98 Proceedings), pages 321–332, July 1998.

[20] Lance Williams. Casting curved shadows on curved surfaces. In Computer Graphics (SIGGRAPH '78 Proceedings), pages 270–274, August 1978.

[21] Andrew Woo, Pierre Poulin, and Alain Fournier. A survey of shadow algorithms. IEEE Computer Graphics & Applications, 10(6):13–32, November 1990.


Figure 7: A simple test scene showing the effect of varying distance between receiver and occluder.

Figure 8: A more crowded scene. Left: soft shadows, middle: hard shadows, right: hard shadows with outer penumbra.

Figure 9: Two more complex scenes rendered with our soft shadow algorithm.

