Appendix: Colour Images
Myszkowski et al. (pp. 5-18)
Fig. 1. The processing flow for inbetween frames
Walter et al. (pp. 19-30)
Fig. 2. Some frames from a render cache session. See main text for more detail
Fig. 8. Some example images captured from interactive sessions. Some approximation artifacts are visible but the overall image quality is good. All scenes are ray traced except the lower right, which is path traced. In the upper right image we have just moved the ice cream glass and you can see its shadow in the process of being updated on the table top
Bala et al. (pp. 31-44)
Plate A: Tunnels and Interpolant Dependencies
Fig. 1. 3 spheres. On the right, one interpolant for the red sphere is shown. The interpolant has a reflection tunnel to the ground and a direct tunnel to the light
Fig. 2. Scene edit. The green sphere is replaced by the yellow cube. On the right, a color-coded image shows the impact of the edit. Blue-gray pixels are interpolated. Green, yellow and magenta pixels fail due to the error predicate. Interpolants are only invalidated and rebuilt for the dark red pixels
Fig. 3. Museum scene, interpolant dependencies. Interpolants that depend on the reflective mirror
Plate B: Scene Edits for Museum Scene
Museum scene. Top to bottom: Edit-(a) delete top; Edit-(b) delete bottom; Edit-(c) add green bench
Westermann et al. (pp. 45-56)
Fig. 6. Different polygonal models have been converted into sets of distance volumes (bottom row) and rendered via 3D textures (top row)
Fig. 7. The outer and the inner mesh used to render the thin boundary region around the surface of the voxelized horse data set
Fig. 8. The shading of voxelized models on a per-pixel basis using Phong's illumination model
Max et al. (pp. 57-62)
Fig. 1. Maple forest, rendered by IBR
Fig. 2. Maple forest, rendered with polygons
Fig. 3. Mixed oak and maple forest, by IBR
Fig. 4. Closer view of maple and oak forest
Fig. 5. A close-up, showing leaf texture
Fig. 6. A long view of the whole forest
Udeshi and Hansen (pp. 63-76)
a Direct lighting. b Gray-scale shadows (25 light samples). d Ray-traced portion. e Final blended image
Fig. 10. Images showing different stages of rendering. These images were generated using four pipes
Fig. 11. Images rendered by different pipes. Each pipe contributes to both the shadow and indirect lighting generation
Fig. 12. Office scene rendered at over four frames a second using 8 pipes and software compositing
Premoze et al. (pp. 107-118)
Fig. 2. A rendering of the same data as used to generate Fig. 1, after processing using the techniques from this paper
b Image with explicit plant geometry
c Rendering for winter morning d Rendering for winter afternoon
e Late spring f Early summer
Fig. 7. Renderings of a 2 km by 2 km region in the Wasatch Mountains at different times of day and year
Rocchini et al. (pp. 119-130)
Fig. 1. An example of optimized frontier faces management: in the initial configuration on the left we have 1,137 frontier faces out of the total 10,600 faces; after optimization (on the right) we get only 790 frontier faces
Fig. 2. A comparison of the accuracy of the synthetic models (right) with respect to the original object images (left)
Marschner et al. (pp. 131-144)
Plate 1. A rendered image showing a scene containing objects made of the measured materials
Plate 2. Rendered images showing BRDFs measured from two different subjects
McAllister et al. (pp. 145-160)
Left: The scanning rig used to acquire environments. Middle: A range scan registered with a color image. White lines are silhouette edges. Right: An item buffer resulting from warping a source image to the viewpoint of another source image with Z comparison (McAllister et al.)
Upper Left: An environment made from two scans - one with Henry, one without. Lower Left: A plant scanned from four positions, composited using quadratic splats. Right: Library with plants and leather chairs, composited from two scans (McAllister et al.)
Fu et al. (pp. 161-174)
Fig. 13. Blending of two warped panoramas
Schaufler and Priglinger (pp. 175-186)
Fig. 12. The system for interactive frustum placement: The main window shows the object together with the frusta. One is highlighted for editing. In the lower left corner the view obtained using this frustum is given. Left: Displacement mapped globe. Middle: Scanned dragon head from the Stanford 3D Scanning Repository. Right: Polygons to be displaced into the dragon head (the image planes of the frusta)
Original geometry / warped image
Image noise due to depth test conflicts caused by samples from different reference images
Inappropriate modeling: geometry does not conform to the heightfield assumption
Fig. 13. Comparison of images obtained from original geometry and from warped reference images
Heidrich et al. (pp. 187-196)
Light field rendering with decoupled geometry and illumination, the latter being provided through an environment map. Top: color coded texture coordinates for the environment map, as extracted from the geometry light field. Bottom: final renderings. Compare Fig. 3
The glossy torus on the left is the result of a geometry rendering with a prefiltered environment map. The torus in the center represents the refraction part. Both parts have been weighted by the appropriate Fresnel term. The right image shows the combination of the two parts. Compare Fig. 4
Rendering of vector-quantized light fields. Left: uncompressed, center: reduced resolution on the (u, v)-plane, right: reduced resolution in all 4 dimensions
Keating and Max (pp. 197-212)
Fig. 5. Images from infinite light sources. A sunlight image is on the left and a 10 x sunlight image is on the right. Shadow computation times are 2.5s and 9.5s respectively
Fig. 6. Images from finite light sources. A 4 x 4 square light is used on the left and an 8 x 8 square light is used on the right. Shadow computation times are 11.8s and 20.6s respectively
Fig. 7. Deterministic rays on the left vs. random rays on the right. Notice the tradeoff between increased bias on the left and increased variance on the right. Both shadows computed in ~10s
Fig. 8. Left: A demonstration of self-shadowing using a dense tree in sunlight, shadow computed in 18.6s
Fig. 9. Right: Using filter weights to simulate multiple light sources with one filter
Schöffel and Pomi (pp. 225-234)
Fig. 4. Examples of predicted bounding volumes: The red wireframe box indicates the current position of the dynamic object, the blue box outlines the previous one. Predicted volumes are drawn in green. In the left example, two future positions have been calculated, while in the right image, five predicted positions are shown, taking into account object translation and rotation
Fig. 5. Dynamic shaft management: In the bottom row, links are displayed for those shafts that are stored (only links for the light sources are shown); the same scene without links is shown above. The chair is being moved from the right to the left. Left: Shafts at the old position of the chair. Middle: Shafts for the new position are added. Right: After line-space update, outdated shafts are removed by the garbage collector
Fig. 6. A more complex test environment. Only links from the light source to the floor that are affected by moving the seat are shown. Note that due to balancing, mesh resolution on the floor appears finer than link resolution
Damez and Sillion (pp. 235-246)
Fig. 4. Representative frames from example animations
Kautz and McCool (pp. 247-260)
Ray traced
OpenGL single-term ND. L2 norm
Fig. 12. HTSG copper, Poulin/Fournier's brushed metal, Lafortune/Willems' modified Phong, measured velvet, measured peacock feather, and measured grey vinyl
Fig. 13. Diffuse texture maps can be added to single-term separable decompositions for the specular highlight. Left to right: Ward's anisotropic BRDF; Ward's anisotropic BRDF (oriented orthogonally to the first example); (measured) varnished wood
Icart and Arquès (pp. 261-272)
Fig. 4. Directional diffuse intensity for e_min = 800 nm, e_max = 1000 nm. a, d σ = 100 nm, τ = 800 nm; b, e σ = 100 nm, τ = 1600 nm; c, f σ = 130 nm, τ = 1600 nm
Fig. 5. Directional diffuse intensity for σ = 100 nm, τ = 1 µm. a, d e_min = 0.8 µm, e_max = 1.2 µm; b, e e_min = 1.5 µm, e_max = 1.7 µm; c, f e_min = 0.8 µm, e_max = 1 µm
Fig. 6. Directional diffuse intensity for e_min = 800 nm, e_max = 1000 nm. a, d σ = 80 nm, τ = 800 nm; b, e σ = 120 nm, τ = 1200 nm; c, f σ = 150 nm, τ = 1500 nm
Fig. 7. a, b A piece of copper/copper oxide piping. c, d A piece of iron/iron oxide piping
Fig. 8. A set of iron saucepans coated with iron oxide thin films of different thicknesses. From left to right: e_min = e_max = 0 nm; e_min = 250 nm, e_max = 450 nm; e_min = 500 nm, e_max = 650 nm; e_min = 650 nm, e_max = 800 nm
Jensen et al. (pp. 273-281)
Fig. 6. Rock on sandy beach with different wetness functions. a Dry, b mixed wet and dry, c water covering the rock, d completely wet
Fig. 7. Cognac spilled on wood table
Mostefaoui et al. (pp. 283-292)
Fig. 6. Three tori showing from left to right the difference between considering: no effect, self-shadowing and self-shadowing plus inter-reflections
Fig. 7. A comparison between a texturing approach and an "explicit" approach
Fig. 8. A scene showing textures under different lighting conditions
Willmott et al. (pp. 293-304)
Fig. 7. Face cluster radiosity (FCR) and hierarchical radiosity with volume clustering (HRVC) algorithms applied to a detailed dragon model
Fig. 8. Face clusters on the Venus head model. A total of 8000 randomly coloured clusters are shown
Fig. 9. The museum scene: diffuse interreflection simulated with face cluster radiosity. The input scene contained 2,700,000 polygons. The solution plus post-processing took two minutes and 120 MB of memory to generate
van de Panne and Stewart (pp. 305-316)
Fig. 5. Clustering resulting from lossy compression. a Terrain dataset, b tunnels dataset, c rooms dataset
Fig. 6. Clustering resulting from lossless compression. a Terrain dataset, b tunnels dataset, c rooms dataset
Costa et al. (pp. 317-328)
Fig. 11. Empirical lighting design solution
Fig. 12. Lighting design solution for 1200 DLs (3)
Fig. 13. Lighting design solution for 3600 DLs (6)
Fig. 14. Lighting design solution for predefined DLs (5)
Loscos et al. (pp. 329-340)
Fig. 5. a The original radiance image (photo). b Original reprojected lighting conditions, displayed using the recomputed direct and indirect components. c A virtual light has been inserted into the scene; adding the light took 3.1 seconds (for 400 x 300 resolution). d A virtual object has been inserted into the scene with both lights on; adding the object required 1 sec. e Moving the virtual object requires 1.1 sec.
Fig. 6. Texture filling examples for real object removal. a Initial reflectance image. b The laptop is removed; it was removed entirely synthetically since no additional image was captured. c The original relit image. d The relit image after removal. Removal of the laptop took 0.7 sec., since generated textures are pre-computed for "removable" objects
Fig. 7. A second real object removal example. a The original relit image, b the relit image after removal of the door, which took 2.9 sec. for a resolution of 512 x 341. c A virtual chair has been added to the scene, requiring 3.4 sec., and d a virtual light added (needing 6.6 sec.)
reen (pp. 341-352)
Single model rendered in a range of styles
Sample non-photorealistic images (courtesy of W. Pritchard)
SpringerEurographics
Eduard Gröller,
Helwig Löffelmann,
William Ribarsky (eds.)
Data Visualization '99
Proceedings of the Joint EUROGRAPHICS and IEEE TCVG Symposium on Visualization in Vienna, Austria, May 26-28, 1999
1999. XII, 340 pages. 230 partly coloured figures. Softcover DM 118,-, öS 826,-, sFr 107,50
ISBN 3-211-83344-7. Eurographics
In the past decade, visualization has established its importance both in scientific research and in real-world applications. In this book, 21 research papers and 9 case studies report on the latest results in volume and flow visualization and information visualization. It is thus a valuable source of information not only for researchers but also for practitioners developing or using visualization applications.
SpringerWienNewYork
Michael Gervautz,
Axel Hildebrand,
Dieter Schmalstieg (eds.)
Virtual Environments '99
Proceedings of the Eurographics Workshop in Vienna, Austria, May 31-June 1, 1999
1999. X, 191 pages. 78 figures. Softcover DM 85,-, öS 595,-, sFr 77,50
ISBN 3-211-83347-1. Eurographics
The special focus of this volume is augmented reality. Problems such as real-time rendering, tracking, registration and occlusion of real and virtual objects, shading and lighting interaction, and interaction techniques in augmented environments are addressed. The papers collected in this book also cover levels of detail, distributed environments, systems and applications, and interaction techniques.
All prices are recommended retail prices
Sachsenplatz 4-6, P.O.Box 89, A-1201 Wien, Fax +43-1-330 24 26, e-mail: [email protected], Internet: http://www.springer.at
New York, NY 10010, 175 Fifth Avenue. D-14197 Berlin, Heidelberger Platz 3. Tokyo 113, 3-13, Hongo 3-chome, Bunkyo-ku
SpringerEurographics
Bruno Arnaldi,
Gerard Hegron (eds.)
Computer Animation
and Simulation '98
Proceedings of the Eurographics Workshop in Lisbon, Portugal, August 31-September 1, 1998
1999. VII, 126 pages. 82 figures. Softcover DM 85,-, öS 595,-, sFr 77,50
ISBN 3-211-83257-2. Eurographics
Contents:
• J.-D. Gascuel et al.: Simulating Landslides for Natural Disaster Prevention
• G. Besuievsky, X. Pueyo: A Dynamic Light Sources Algorithm for Radiosity Environments
• G. Moreau, S. Donikian: From Psychological and Real-Time Interaction Requirements to Behavioural Simulation
• N. Pazat, J.-L. Nougaret: Identification of Motion Models for Living Beings
• F. Faure: Interactive Solid Animation Using Linearized Displacement Constraints
• M. Kallmann, D. Thalmann: Modeling Objects for Interaction Tasks
• M. Teichmann, S. Teller: Assisted Articulation of Closed Polygonal Models
• S. Brandel, D. Bechmann, Y. Bertrand: STIGMA: a 4-dimensional Modeller for Animation
Martin Göbel,
Jürgen Landauer, Ulrich Lang,
Matthias Wapler (eds.)
Virtual Environments '98
Proceedings of the Eurographics Workshop in Stuttgart, Germany, June 16-18, 1998
1998. VIII, 335 pages. 206 partly coloured figures. Softcover DM 128,-, öS 896,-, sFr 116,50
ISBN 3-211-83233-5. Eurographics
Ten years after Virtual Environment research started with NASA's VIEW project, these techniques are now exploited in industry to speed up product development cycles, to ensure higher product quality, and to encourage early training on and for new products. The automotive industry in particular, but also the oil and gas industry, is driving the use of these techniques in its work. The papers in this volume reflect all the different tracks of the workshop: reviewed technical papers as research contributions; summaries of panels on VE applications in the automotive, medical, telecommunication and geoscience fields; a panel discussing VEs as the future workspace; and invited papers from experts reporting on VEs for the entertainment industry, for media arts, for supercomputing and for productivity enhancement. Short industrial case studies, reporting very briefly on ongoing industrial activities, complete this state-of-the-art snapshot.