Perceptually Based Depth-Ordering Enhancement for Direct Volume Rendering


Description

Visualizations of complex volume data usually render selected parts of the volume semi-transparently to reveal inner structures or to provide context. This presents a challenge for volume rendering methods to produce images with unambiguous depth-ordering perception. Existing methods use visual cues such as halos and shadows to enhance depth perception. Along with other limitations, these methods introduce redundant information and require additional overhead. This paper presents a new approach to enhancing depth-ordering perception of volume-rendered images without using additional visual cues. We set up an energy function based on quantitative perception models to measure the quality of the images in terms of the effectiveness of depth-ordering and transparency perception as well as the faithfulness of the information revealed. Guided by the function, we use a conjugate gradient method to iteratively and judiciously enhance the results. Our method can complement existing systems for enhancing volume rendering results. Our work has appeared in IEEE TVCG and was selected for presentation at IEEE VIS 2013. Project page: http://research.microsoft.com/en-us/um/people/ycwu/projects/tvcg13_perception.html

Transcript

Perceptually Based Depth-Ordering Enhancement for Direct Volume Rendering

Lin Zheng, Yingcai Wu, and Kwan-Liu Ma

VIDI Research Group, UC Davis

Introduction

• Depth Perception: the visual ability to perceive the distance of 3D objects

• Depth cues:
  • Monocular cues: occlusion, size, shading
  • Binocular cues: stereopsis, binocular disparity

Introduction

• In many visualizations, the depth ordering is ambiguous.
• Especially when there is no interaction:
  • static images in magazines
  • posters

• Possible approaches:
  • perspective projection
  • halos, shadows, warm/cool colors

[Figure: the neghip dataset]

Related Work: Halos

• Enhancing Depth-Perception with Flexible Volumetric Halos, Stefan Bruckner and M. Eduard Gröller, 2007

• Depth-Dependent Halos: Illustrative Rendering of Dense Line Data, M. H. Everts et al., 2009

Related Work: Warm/Cool Color

• Color Design for Illustrative Visualization, L. Wang, J. Giesen, K.T. McDonnell, P. Zolliker, and K. Mueller, 2008

Perception Models

• Only change the inherent factors: luminance and opacity
• We introduce two major models for depth perception:
  • the X-junction Model
  • the Transmittance Anchoring Principle (TAP)

• The X-junction Model has limitations
• TAP can serve as a complement

X-Junction Model: C-configuration

• Which layer is in front, A or B?
• Luminance(s) > Luminance(r) > Luminance(p) > Luminance(q)
• The luminance decreases in a “C” configuration.

X-Junction Model: A-configuration

• Which layer is in front?
• Luminance(r) = Luminance(q)
• The luminance-decreasing order can be s > r = q > p or s > q = r > p
• A-ambiguity: the front layer cannot be determined

X-Junction Model: Z-configuration

• Luminance(s) > Luminance(r) > Luminance(q) > Luminance(p)
• The luminance decreases in a “Z” configuration
• Still ambiguous? Combine with the TAP model
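The three cases above can be told apart directly from the four region luminances. Below is a minimal illustrative sketch, not taken from the paper; the region names s, r, q, p, the argument order, and the tolerance eps are assumptions made here for illustration:

    def classify_x_junction(s, r, q, p, eps=1e-6):
        # Classify an X-junction by the luminance ordering of its four regions.
        # Orderings as stated on the slides:
        #   C-configuration: s > r > p > q
        #   Z-configuration: s > r > q > p
        #   A-ambiguity:     s > r = q > p (front layer cannot be determined)
        if s > r and abs(r - q) < eps and q > p:
            return "A-ambiguity"
        if s > r > p > q:
            return "C-configuration"
        if s > r > q > p:
            return "Z-configuration"
        return "other"

    # Example: these luminances form a C-configuration (s > r > p > q).
    print(classify_x_junction(s=0.9, r=0.7, q=0.3, p=0.5))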

Application of Perception Models

• TAP: the region with the highest contrast is perceived to be at the background
• Apply the X-junction Model and the TAP Model together
• Improve an A-ambiguity to a Z-configuration, then to a C-configuration

[Figure: A-ambiguity → Z-configuration → C-configuration]

Energy Function Design

• Three terms:
  • enhance the perceived depth ordering
  • keep the perceived transparency
  • keep the image faithfulness

[Equation: energy = depth-ordering term + transparency term + image-faithfulness term]
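A hedged sketch of the overall form such an energy could take (the symbols and weights below are illustrative assumptions, not the paper's exact notation):

    E(\mathbf{x}) \;=\; \lambda_d\, E_{\mathrm{depth}}(\mathbf{x})
                  \;+\; \lambda_t\, E_{\mathrm{trans}}(\mathbf{x})
                  \;+\; \lambda_f\, E_{\mathrm{faith}}(\mathbf{x})

where \mathbf{x} collects the adjustable luminance/opacity values and the \lambda weights balance the three goals.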

Energy Function Design

• Perceived depth ordering: determined by the configuration of the junction area
• A wrong C-configuration will not appear in a semi-transparent structure
• Four configurations (in DVR):
  • wrong Z-configuration
  • A-configuration (A-ambiguity)
  • correct Z-configuration
  • correct C-configuration

Energy Function Design

• Perceived transparency:
  • Metelli’s episcotister model: luminance of transparent layers
• Image faithfulness:
  • information entropy / conditional entropy
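For reference, Metelli's episcotister model relates the luminance seen through a transparent layer to the layer's transmittance; in a common formulation (the symbols here are illustrative, not necessarily the paper's notation):

    p \;=\; \alpha\, a \;+\; (1 - \alpha)\, t

where a is the background luminance, t the luminance of the transparent layer, \alpha its transmittance, and p the resulting luminance at the overlap. The faithfulness term compares the enhanced image against the original, e.g. via a conditional entropy such as H(I_{\mathrm{orig}} \mid I_{\mathrm{enh}}) (again, notation assumed).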

Optimization

[Flowchart: evaluate the energy, update luminance/opacity with a conjugate gradient step, check whether the result is optimal; if not, iterate]
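As a sketch of how such a loop could be driven with an off-the-shelf conjugate gradient optimizer (this is not the authors' implementation; energy_function and x0 are illustrative placeholders):

    import numpy as np
    from scipy.optimize import minimize

    def energy_function(x):
        # Placeholder energy: in the real system this would combine the
        # depth-ordering, transparency, and faithfulness terms for the
        # current luminance/opacity parameters x.
        return float(np.sum(x ** 2))

    x0 = np.array([0.5, 0.5, 0.5])                        # illustrative initial parameters
    result = minimize(energy_function, x0, method="CG")   # nonlinear conjugate gradient
    print(result.x, result.fun)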

User Study

• Design:
  • a between-subjects study (12 subjects)
  • 60 cases in total: 30 enhanced and 30 original
• Analysis: Fisher’s exact test
• Users were significantly more accurate on the enhanced cases (p = 0.0016)
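To illustrate the kind of test used (the 2×2 counts below are hypothetical, not the study's actual data):

    from scipy.stats import fisher_exact

    # Hypothetical contingency table:
    # rows = condition (enhanced, original), columns = (correct, incorrect)
    table = [[28, 2],
             [18, 12]]
    odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
    print(odds_ratio, p_value)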

[Figure: the task interface used in the study]

Results: neghip

• Although the difference is subtle, our user study shows that enhancement improves depth perception significantly

[Figure: initial vs. enhanced]

Results: neghip

[Figure: initial vs. enhanced]

Results: neghip

[Figure: initial vs. enhanced]

Results: vortex dataset

[Figure: initial vs. enhanced]

Results

[Figure: initial vs. enhanced]

Discussion

• + Easy to embed in existing visualization systems
• + Luminance as the visual cue:
  • a primary visual cue in visual psychology
  • does not introduce additional overhead

• - Limitations of the perception models:
  • deal with only two overlapping layers at a time
  • do not work for enclosing and separate structures
  • consistency problems with intertwined structures

Conclusion and Future Work

• Investigated how to perceptually enhance depth ordering
• Used perception models for quantitative measurement:
  • depth ordering (X-junction Model, TAP)
  • image quality (Metelli’s episcotister model)

• Designed an optimization framework for enhancing depth perception

• Conducted a user study showing the effectiveness of our approach
• Future work: animation

Thank you!

• This research has been sponsored in part by the US National Science Foundation (NSF) through grant CCF-0811422 and US Department of Energy (DOE) with award DE-SC0002289.