
Reconstruction and Visualization of Planetary Nebulae

Authors: M. Magnor, G. Kindlmann, C. Hansen, N. Duric

Visualization II

Instructor: Jessica Crouch

Problem

• Would like to use 2D photographs to construct a 3D model of planetary nebulae (PNe)
– 3D model would allow fly-through and other dynamic visualizations

• How do you build a 3D model from 2D data?
– In general, an insufficiently constrained problem

http://www.nightskyinfo.com/planetary_nebulae/

http://www.chiro.org/LINKS/WALLPAPER/AQUILA_PLANETARY_NEBULAE.JPG

http://nssdc.gsfc.nasa.gov/photo_gallery/photogallery-astro-nebula.html

Motivation

• Education and entertainment of the public
– Visualization can be the fastest way to the “oooo”, “ahhh”

• Science is interesting, important, beautiful

• Good PR helps with funding…

• Facilitate scientific exploration by astrophysicists
– See the gas distributions predicted by different models
– Compare model images to real images

Motivation

• Prior work on 3D PNe visualization has focused on visualizing scientific models produced by astronomers

• Need a method to get 3D visualizations from observational data (telescope photographs)

Methods: Overview

• This is an inverse volume rendering problem
– Instead of developing a method to create an image of a volume, we need a method to create a volume from an image

• An optimization approach is described (sketched in code after this list)
– Guess the volume contents
– Render the hypothetical volume
– Compare the rendering to the photograph
– Make a better guess for the volume that (hopefully) reduces the difference between the rendering and the photograph
– Repeat until the photograph and rendered image match or other termination criteria are met
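As a rough illustration only (not the authors' code), this loop can be sketched in Python; render_fn, error_fn, and update_fn are hypothetical stand-ins for the rendering algorithm, error metric, and update rule described on the following slides:

    def inverse_volume_render(photo, volume, render_fn, error_fn, update_fn,
                              max_iters=1000, tol=1e-4):
        # Generic inverse-rendering loop: render a hypothetical volume,
        # compare it to the photograph, and refine the guess until the images
        # match or another termination criterion (here an iteration cap) is met.
        for _ in range(max_iters):
            rendered = render_fn(volume)        # render the current guess
            err = error_fn(rendered, photo)     # e.g. sum of squared differences
            if err < tol:                       # rendering matches the photograph
                break
            volume = update_fn(volume, rendered, photo)   # make a better guess
        return volume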

PNe Model

• How do you make a reasonable guess about the contents of the volume?
– Pick random values?
– Use everything you know about the structure of the volume to reduce the complexity of the problem

• Constrained Inverse Volume Rendering (CIVR)
– The more correct constraints you apply, the faster your optimization will converge

PNe Model

• What is known about the structure of PNe?
– Axisymmetric
• Rotationally symmetric about an axis
• Every slice through the volume that passes through the axis shows the same data

• Consequence: only need to guess the contents of a slice, then reconstruct the volume by rotating the slice around the axis
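A minimal numpy sketch of this constraint (illustrative only; the array layout is an assumption, with the 2D density map indexed by (height, radius) and the symmetry axis along z):

    import numpy as np

    def slice_to_volume(slice_rh, size):
        # Rotating the slice about the z axis means every voxel takes the
        # slice value at that voxel's height and distance from the axis.
        n_h, n_r = slice_rh.shape
        coords = np.arange(size) - (size - 1) / 2.0      # centered voxel coordinates
        x, y, z = np.meshgrid(coords, coords, coords, indexing="ij")
        r = np.sqrt(x**2 + y**2)                         # distance from the symmetry axis
        h = z + (size - 1) / 2.0                         # height along the axis, 0..size-1
        r_idx = np.clip(np.round(r).astype(int), 0, n_r - 1)
        h_idx = np.clip(np.round(h * (n_h - 1) / (size - 1)).astype(int), 0, n_h - 1)
        return slice_rh[h_idx, r_idx]                    # identical in every slice through the axis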

Model Justification

• Empirical evidence:
– Different shapes visible in photographs, all reasonably explained by different projections of axisymmetric volumes

Model Justification

• Why are PNe axisymmetric?

• Interacting solar wind theory:
– Old gas collected around equator
– High volume of new gas is deflected away from equator out toward poles

Photo-Ionization

• Wind blows off PNe surface (predominantly hydrogen)

• Atoms are ionized by UV photons

• As ions and electrons recombine to form stable atoms, they transition from a high energy state to successively lower energy states

• Each transition involves emission of a photon of a specific wavelength of light
– Result: we see different colors

Rendering Model

• PNe photo-ionization creates light that travels out into space unhindered

• Model volume as completely emissive
– Each voxel generates a certain amount of light in each wavelength
– Light is not attenuated as it travels through the volume
– Rendered color is the integral of all emissions along a ray
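Under this emission-only model, an orthographic rendering reduces to summing voxel emission along the view direction. A one-line numpy illustration (ignoring axis orientation and resampling for simplicity):

    import numpy as np

    def render_emissive(volume, view_axis=2):
        # No attenuation: each pixel is the discrete integral (sum) of the
        # emission of every voxel along the ray through that pixel.
        return volume.sum(axis=view_axis)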

Rendering Algorithm

• How many times must the model be rendered before convergence?
– Estimate: ~10^6

• How long will it take to do that many volume renderings?
– Rendering efficiency is critical

Rendering Algorithm

• Use GPU processing:
– Load 2D density map as a texture image
– Create a series of texture-mapped, viewport-filling parallel quadrilaterals
• Automatically generate Cartesian texture coordinates for the quads
• Use a fragment shader program to convert Cartesian (u, v, t) coordinates to cylindrical (theta, h, r) coordinates (see the CPU sketch after this list)
– Accumulate (add) the color contributions of each quad to get the final image
– Axis directions on the texture map correspond to height & radius of the axisymmetric volume
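The authors implement this with textured quads and a fragment shader; the following CPU-side numpy analogue (hypothetical, not their shader code, and assuming the symmetry axis lies in the image plane along v) shows the same idea of converting each fragment's Cartesian coordinates to cylindrical ones, sampling the 2D density map, and additively accumulating the quads:

    import numpy as np

    def render_axisymmetric(slice_rh, size):
        # Stack `size` viewport-filling quads along the view direction t;
        # for each fragment convert (u, v, t) to (height, radius), look up
        # the emission in the 2D density map, and accumulate.
        n_h, n_r = slice_rh.shape
        uv = np.arange(size) - (size - 1) / 2.0           # centered fragment coordinates
        u, v = np.meshgrid(uv, uv, indexing="ij")
        image = np.zeros((size, size))
        for t in uv:                                      # one quad per depth step
            r = np.sqrt(u**2 + t**2)                      # radius from the axis (axis along v)
            r_idx = np.clip(np.round(r).astype(int), 0, n_r - 1)
            h_idx = np.clip(np.round(v + (size - 1) / 2.0).astype(int), 0, n_h - 1)
            image += slice_rh[h_idx, r_idx]               # additive, emission-only blending
        return image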

Rendering Algorithm

• Hardware-supported texture mapping approach is much faster than ray casting
– Could be parallelized. How?

• 10 fps for 128 x 128 x 128 with an nVidia GeForce FX 3000 graphics card

Error Function

• Optimization functions try to pick a set of input parameters that minimize the value returned by the error function

– If the rendered image closely matches the photograph, the error function should return a small value

– If the rendered image poorly matches the photograph, the error function should return a large value

Error Function: SSD

• Sum of Squared Differences (SSD)
– Very commonly used to gauge image similarity
– Subtract image A from image B to get the difference image
– Square the difference image’s pixel values, and sum over the whole image
– SSD = 0 iff the images are identical

• Additional error penalty for negative emission values
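A small numpy sketch of such an objective (the penalty weight is a hypothetical choice, not a value from the paper):

    import numpy as np

    def ssd_error(rendered, photo, emission, penalty_weight=1.0):
        # Sum of squared differences between the rendered image and the photograph.
        diff = rendered.astype(float) - photo.astype(float)
        ssd = np.sum(diff ** 2)
        # Extra penalty for physically impossible negative emission values.
        negative_penalty = np.sum(np.minimum(emission, 0.0) ** 2)
        return ssd + penalty_weight * negative_penalty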

Optimization Algorithm

• Initial guess:
– Density map = 0 everywhere
– PN center is guessed based on the brightest spot in the photograph
– Axis orientation is guessed based on the first eigenvector of the photograph (sketched below)
• Intuition: the first eigenvector will be parallel to the primary swath of brightness
– Guess for axis inclination toward Earth is 0
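One plausible numpy reading of this initialization (illustrative only; the exact weighting and eigen-analysis the authors use is not specified here):

    import numpy as np

    def initial_guess(photo):
        # Center guess: the brightest spot in the photograph.
        cy, cx = np.unravel_index(np.argmax(photo), photo.shape)
        # Axis guess: leading eigenvector of the brightness-weighted covariance
        # of pixel coordinates, which tends to lie along the primary swath of brightness.
        ys, xs = np.indices(photo.shape)
        w = photo.astype(float).ravel()
        coords = np.stack([xs.ravel() - cx, ys.ravel() - cy], axis=0)
        cov = (coords * w) @ coords.T / w.sum()     # 2x2 weighted covariance
        eigvals, eigvecs = np.linalg.eigh(cov)
        axis = eigvecs[:, np.argmax(eigvals)]       # first (largest) eigenvector
        inclination = 0.0                           # initial guess: axis lies in the image plane
        return (cx, cy), axis, inclination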

Optimization Algorithm

• Given:
– A guess for the density map
– A guess for the axis direction (2 angles)
– A rendering algorithm
– An error metric (objective function)

• How do you intelligently improve your guess?
– Specifically, how do you make a guess that reduces the SSD?

Powell’s Optimization Method

• A conjugate gradient optimization algorithm
– Famous, widely used
– In Numerical Recipes in C

• Requires evaluation of the gradient of the error function
– Since we don’t have an analytical representation of the error function, the derivatives must be evaluated numerically
• For each model input parameter, evaluate for (parameter + Δ) and (parameter − Δ)
• Use the central difference formula to estimate the error derivative
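A minimal numpy sketch of the central-difference gradient estimate (the step size delta is a hypothetical choice; params is assumed to be a flat array of model parameters):

    import numpy as np

    def numerical_gradient(error_fn, params, delta=1e-3):
        # Central difference: d(error)/d(param_i) is approximated by
        # (error(param_i + delta) - error(param_i - delta)) / (2 * delta),
        # one model parameter at a time.
        params = np.asarray(params, dtype=float)
        grad = np.zeros_like(params)
        for i in range(params.size):
            plus, minus = params.copy(), params.copy()
            plus[i] += delta
            minus[i] -= delta
            grad[i] = (error_fn(plus) - error_fn(minus)) / (2.0 * delta)
        return grad

Note that each gradient evaluation costs two renderings per model parameter, which is why rendering speed and parameter count dominate the overall run time.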

Powell’s Optimization Method

• Try to take steps “downhill” on the error function
– You find local minima, if it works

Optimization Efficiency

• How does the resolution of the image impact the number of frames that must be volume rendered?
– How many parameters must be differentiated at each optimization iteration?

• Further improvement in efficiency:
– Take a multi-scale approach
• Compute the best coarse image
• After the coarse image converges, up-sample and optimize for a higher resolution

• Paper indicates a 4-level density map pyramid:
– 16 x 4 image
– 32 x 8 image
– 64 x 16 image
– 128 x 32 image
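A coarse-to-fine sketch in numpy, assuming each level's density map is optimized and then up-sampled to seed the next level (the optimize_fn interface is hypothetical):

    import numpy as np

    def upsample(density, shape):
        # Nearest-neighbour up-sampling by repeating rows and columns
        # (assumes the finer shape is an integer multiple of the coarser one).
        ry = shape[0] // density.shape[0]
        rx = shape[1] // density.shape[1]
        return np.repeat(np.repeat(density, ry, axis=0), rx, axis=1)

    def coarse_to_fine(photo, optimize_fn,
                       levels=((16, 4), (32, 8), (64, 16), (128, 32))):
        # Optimize the coarsest density map first, then up-sample the
        # converged result to initialize the next, finer pyramid level.
        density = optimize_fn(np.zeros(levels[0]), photo)
        for shape in levels[1:]:
            density = optimize_fn(upsample(density, shape), photo)
        return density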

Higher resolution model would require less noisy photographic data to produce a meaningful result. Earth’s atmospheric turbulence contributes to noise.

Run time: 1 Day

Optimize Per Color (Per Element)

• Results were generated for separate images of the hydrogen, oxygen, and nitrogen/sulfur gases

• Whole optimization process is repeated for each element type

• Final result is the sum of the three optimized volumes

Evaluation: Results for 3 PNe

• Left: Photograph; Right: Model viz.

• 2 new views

Evaluation: Results for 3 PNe

Evaluation: Results for 3 PNe

• Left: Photograph; Right: Model viz.

• 2 new views

Evaluation: Results for 3 PNe

Evaluation: Results for 3 PNe

• Left: Photograph; Right: Model viz.

• 2 new views

Evaluation: Results for 3 PNe

Final Visualization

• Typical rendering methods can be applied to the reconstructed volume
– Iso-surfaces, etc.

Evaluation: Error Visualization

• Local contrast image

Evaluation

• No data exists for the actual PNe gas distribution
– Can only evaluate for real PNe by demonstrating image match
– Is this a reliable way to evaluate the method?

Evaluation

• Method can be validated more rigorously when synthetic data is input
– Develop an artificial axisymmetric volume
– Render it to create one image
– Apply the reconstruction algorithm to the rendered image, and see how well the reconstructed volume matches the artificial volume

Evaluation

• Performance is tied to axis inclination angle
– Works well from 90° (perpendicular) down to 40° inclination
– Problem is too ill-defined from 0° to 40°
• At 0° the axis is parallel to the view direction

• Authors report the reconstructed density values for the synthetic data fall within the “optimization routine’s preset termination error threshold.”
– Would be nice to know what this is, and get a sense of the magnitude of the error in the reconstruction

Conclusion

• Nice results for a difficult problem

• Validation is nicely done, as thorough as possible given the nature of the problem

• Pretty slow, but ok for an off-line solution

• Better resolution would be nice:
– Paper gives 256 x 256 images of something that has a diameter of 1 light year = 5.9 trillion miles
– Must be missing important detail

• Authors note that ignoring light absorption by dust causes error in some situations

Discussion Questions

What changes would be necessary to account for light absorption by dust? What effect would they have on computational complexity?

Discussion Questions

Can you think of any other visualization problems that could benefit from inverse volume reconstruction?

Discussion Questions

Are there likely to be problems with the optimization algorithm getting stuck in local error function minima that are not global minima?