IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 1

Skeleton Cuts - An Efficient Segmentation Method for Volume Rendering

Dehui Xiang, Jie Tian, Fellow, IEEE, Fei Yang, Qi Yang, Xing Zhang, Qingde Li and Xin Liu

Abstract—Volume rendering has long been used as a key technique for volume data visualization, which works by using a transfer function to map color and opacity to each voxel. Many of the volume rendering approaches proposed so far for voxel classification have been limited to a single global transfer function, which is in general unable to properly visualize structures of interest. In this paper, we propose a localized volume data visualization approach which regards volume visualization as a combination of two mutually related processes: the segmentation of interesting structures, and their visualization using a locally designed transfer function for each individual structure of interest. A new interactive segmentation algorithm based on skeletons is presented to properly categorize structures of interest. In addition, a localized transfer function is subsequently introduced to assign visual parameters via interesting information such as intensity, thickness and distance. As can be seen from the experimental results, the proposed techniques make it possible to appropriately visualize structures of interest in highly complex volumetric medical data sets.

Index Terms—Volume rendering, classification, skeleton cuts, segmentation, localized transfer function


1 INTRODUCTION

VOLUME data rendering is one of the key tasks involved in the development of a medical data visualization system. In clinical practice, a properly rendered 3D image allows physicians or radiologists to observe and analyze meaningful and complex structures of human organs and tissues, so as to make a quick and accurate diagnosis of cancers, cerebrovascular diseases, cardiovascular diseases, infectious diseases, and so on.

Two crucial issues involved in volume rendering are the classification of volume data and the management of the visual parameters for each individual voxel element. Traditionally, this is achieved by specifying a single global transfer function [1], [2]. However, specifying such a transfer function to meet the requirements of a volume visualization task can be very difficult. On the one hand, it is not easy

• Dehui Xiang, Fei Yang and Xing Zhang are with the Institute of Automation, Chinese Academy of Sciences, and the Graduate University of Chinese Academy of Sciences, Beijing, 100190, China; Email: {dehui.xiang, xing.zhang, fei.yang}@ia.ac.cn, [email protected].

• Jie Tian is with the Medical Image Processing Group, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China. Telephone: 8610-82628760; Fax: 8610-62527995. Email: [email protected]; [email protected]. Website: http://www.mitk.net; http://www.3dmed.net.

• Qi Yang is with the Radiology Department of Xuanwu Hospital, Capital Medical University, No. 45 Changchun Street, Beijing, 100053, China; Email: [email protected].

• Qingde Li is with the Department of Computer Science, University of Hull, Hull, HU6 7RX, UK; Email: [email protected].

• Xin Liu is with the Paul C. Lauterbur Biomedical Imaging Center, Institute of Biomedical and Health Engineering, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518067, China; Email: [email protected].

to effectively and accurately perform the classification of volume data and the management of visual parameters in a low-dimensional transfer function domain. For instance, for the Computed Tomography Angiography (CTA) data set (256×256×256) shown in Fig. 1, it is very difficult to distinguish the aorta from the spine with just a 1D transfer function. As can be seen from the figure, many uninteresting tissues are also included because the intensity values of these materials overlap. The use of 2D transfer functions only improves differentiation fractionally, with additional information being used to visualize the boundaries between different materials (see the hepar and stomach in Fig. 1(c)). On the other hand, a higher-dimensional transfer function for classification is difficult for a user to operate; meanwhile, local features such as scalar value, gradient magnitude, etc. are often not intuitive enough to capture desirable structures [3], [4]. Users often have to work in the abstract transfer function domain to accomplish the two assignments simultaneously, which is usually frustrating and time consuming. Therefore, volume data classification and visual parameter management should be separated in the traditional volume rendering pipeline [1].

In this paper, we present a two-step technique for volume visualization to classify volume data, explore interesting information and decrease user interaction. The first step is to classify voxels and recognize structures of interest with skeleton cuts; the new algorithm is based on Euclidean distance transformation and skeleton extraction. The second step of our proposed volume rendering technique is to visualize the resulting structures using a locally designed transfer function.

Fig. 1. Volume rendering using different transfer functions. (a) A rendered image with a 1D transfer function; (b) 1D transfer function user interface (top) and 2D transfer function user interface (bottom); (c) A rendered image with a 2D transfer function; (d) Marking interesting structures on 2D slices; (e) A rendered image with localized transfer functions.

Fig. 2. A user's intention to choose interesting tissues in MR data (181 × 217 × 181). (a) Marking on 2D slices to get the right cerebrum; (b) A rendered right cerebrum (yellow) image with the right cerebellum; (c) Removing the unwanted right cerebellum on the 2D slices; (d) Only the right cerebrum is extracted.

There are several advantages to our proposed volume rendering framework. First and foremost is its effectiveness. The interactive segmentation process introduced in the technique can ensure the accuracy of volume data classification in volume rendering systems, which produces more believable volume rendered images, since it is more effective to integrate both the internal information of the volume data and the user interaction. Secondly, it offers easy user interaction: users only need to select their structures of interest and remove the uninteresting ones (see Fig. 2). Moreover, the design of the required transfer functions can now be based on each individually recognized object in the volume data, which makes the specification of a required transfer function much easier, since the intensity variation within a segment is relatively small. In addition, it is efficient: our advanced graph cuts algorithm significantly improves on the conventional graph cuts algorithm in quickly processing large volume data sets, and the running time and memory consumption required by the conventional segmentation algorithm are greatly reduced.

2 RELATED WORK

Volume rendering has long been an active research area in volume data visualization, and a tremendous amount of dedicated study has been conducted to develop techniques for volume data classification and visual parameter management.

Different Features. Most of the previous methods have been proposed mainly via designing a single global transfer function in a low-dimensional feature space. Levoy [1] introduced the gradient magnitude of volume data. Kindlmann et al. [5] extended this to the second derivative to capture information about boundaries. Lundstrom et al. [6] employed local histograms to detect and identify materials with similar intensities. Sereda et al. [7] presented a new transfer function generation method using the extended LH histograms [8], which can be used more easily to visualize boundaries. Johnson and Huang [9] introduced the local frequency distribution of intensity values in neighborhoods centered around each voxel. Many other features have also been introduced, such as the contour spectrum [10], local intensity of 3D structures [11], curvature [12] and spatial information [13]. More recently, the sizes of structures [14], the occlusion spectrum [15], patterns and textures [16] and the occurrence in time of the distribution of feature sizes and densities [17] have also been taken into account. These kinds of methods usually lead to an inaccurate classification of voxels for a complex data set, since it is in general difficult for common users to specify a global transfer function that classifies voxels precisely when the underlying structures in a given data set are rich and complex.

Classification or Segmentation. Many higher-dimensional classification or segmentation approaches have also been developed for volume rendering. Tzeng et al. [3] introduced a machine learning method for classification that considers multi-dimensional information, such as scalar value, scalar gradient magnitude, location, and neighboring values. The process requires long training times to provide high-quality classification over the entire volume. Weber et al. [18] segmented a volume via the contour tree into regions to explore the topology of interesting structures [19]. Huang and Ma [20], Owada et al. [21] and Hadwiger et al. [17] integrated the region growing method into volume

Fig. 3. An overview of our volume rendering framework. [Flowchart: Step 1, Preprocessing: Original Volume → Over-segmentation → Binary Volume → Euclidean Distance Transformation → Distance Volume → Skeleton Extraction → Skeletons. Step 2, Segmentation: Graph Construction → Skeleton Graph → Connection Metrics → Weighted Graph → Min-Cut/Max-Flow → Mask Volume. Step 3, Volume Rendering: Feature Extraction → Features → 1D/2D LTF → Boundary Filtering → Visual Parameters.]

visualization systems. This technique is unstable and not appropriate for isolating homologous structures.

User Interaction. The Design Gallery used an automatic approach to generate sparse, possible transfer functions based on automated analysis [22]. Though much less effort is required from users of this kind of volume rendering system, the automatic process may miss some important features. The volume cutout approach allows a user to directly interact with rendered images to achieve the segmentation task [23]; however, it can be time consuming for visualizing a large and complex volume data set. Salama and Kohlmann presented a high-level semantic model, together with an easy-to-use interface, for radiologists and physicians to design the required transfer function [4]. This method needs to create a transfer function template and is limited in visualizing pathological or abnormal tissues. Wu and Qu [24] introduced a genetic algorithm which allows a user to directly edit volume rendered images, but it needs to be appropriately initialized by the classification widgets in the 2D transfer function domain.

3 OVERVIEW OF OUR FRAMEWORK

In this paper, a new framework, shown in Fig. 3, is proposed for volume rendering. The novelty of our work lies in the introduction of a skeleton-based graph cuts algorithm for fast and high quality classification, and in the localized transfer function designed separately for each individual segmented object. Our approach regards classification for volume visualization as a two-step process: object segmentation and local visual parameter assignment. In the preprocessing stage, a coarse boundary is first extracted from the original volume data to generate a binary volume. For each voxel in the foreground, its Euclidean distance to the background is computed by Euclidean distance transformation; skeletons are then extracted based on the Euclidean distance field to represent the coarse object. In the segmentation stage, skeletons are linked to their neighbors to construct an unweighted skeleton graph; the weights between two skeletons (nodes) are then computed based on the intersected area of two subvolumes (called cells), the intensity means and the radii of inscribed spheres. The segmentation is subsequently achieved using the min-cut/max-flow algorithm. In the volume rendering stage, volume rendering is performed based on a set of localized transfer functions for users to analyze isolated structures. A lookup table is constructed locally for each individual structure of interest to map features, such as thickness and distance, to color and/or opacity. In addition, a new boundary filtering method implemented in CUDA is developed to compute the object boundaries at pixel resolution for properly rendering segmented structures.

4 VOLUME DECOMPOSITION

In computer graphics and computer vision, object recognition and representation play an important role in image understanding. For instance, image compression uses the representation of objects in digital images to store the original data in reduced space [25]. This is because, when the objects in a given image are properly represented, a data compression operation is able not only to effectively simplify its internal complex structures into a few simple components, but also to efficiently recover the original images.

The skeleton is a reversible descriptor allowing precise reconstruction of a shape in image representation. Skeletons provide meaningful cues for the description of discrete objects and enriched information for visualization and analysis, such as feature extraction, pattern recognition, geometric modeling, and registration. Skeletons are also ideal shape representations with efficient memory usage while preserving topological structure. Skeletonization [26] is an effective and intuitive feature detection technique for data abstraction due to its linearity and simplicity. In our approach, the decomposition of an over-segmented volume is used as a preprocessing step to obtain a set of informative cells and describe features in the regions; this is then followed by the search for interesting skeletons to visualize an object in the volume data with its visual parameter settings.

4.1 Skeletons

In the discrete space, skeletons of volumetric objects are, on the one hand, often defined by topology-preserving thinning algorithms which iteratively remove deletable object points until a thin, skeletal structure is left [27]. The elimination of deletable points is disallowed when the deletion would change the object's topology. On the other hand, skeletons are also defined based on distance transformation, in which case they are not necessarily connected with each other. In this paper, skeleton detection is carried out by distance transformation, and the extraction of skeletal points leads to an advanced segmentation algorithm: skeleton cuts.

Fig. 4. Skeletons of a 2D and a 3D object. (a) Curves in a rectangle are formed by numerous skeletons; (b) Skeletons (yellow balls) of a cuboid (light salmon) in a volume data set (80 × 48 × 48).

The skeleton of a shape is the locus of points that are equidistant from at least two boundary points. In other words, it is the set of centers of maximal disks (or spheres) that are inscribed in the object and touch its boundary at contact points. The skeleton of a regular 2D object (e.g., a rectangle) can consist of 1D curves (Fig. 4(a)); however, the skeleton of a regular 3D object (e.g., a cuboid) may consist of both curves and surfaces (Fig. 4(b)). Skeleton calculation for a region can be performed with either Euclidean spheres or spheres defined by other metrics, such as the Manhattan metric. The mapping from a given region onto the set of centers of maximal disks (or spheres), labeled by their corresponding radii, is called the distance transformation. A maximal disk (or sphere), e.g., the brown disk or the purple disk in the rectangle, is defined as a disk (or sphere) contained in the shape that is not entirely covered by another disk contained in the shape. The center of a maximal disk (or sphere) is a skeletal point of the region. The skeleton and the associated radii are therefore used to represent the original object boundary.

4.2 Euclidean Distance Transformation

The distance transformation is a powerful tool for skeletonization. We first introduce an algorithm to compute the Euclidean distance transformation of d-dimensional images in linear time [28]. This algorithm is time efficient and its results are exact for skeleton extraction. Taking 3D images as an example, let B denote the background region of an image F of size n × m × l. The problem is to assign to every grid point p(i, j, k) the distance to the nearest point in B. The squared distance transformation t(i, j, k) can be calculated by

$$t(i,j,k) = \min_{x,y,z}\left\{ (i-x)^2 + (j-y)^2 + (k-z)^2 \;\middle|\; 0 \le x \le m,\ 0 \le y \le n,\ 0 \le z \le l,\ p(x,y,z) \in B \right\} \quad (1)$$
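A brute-force sketch of Eq. (1) in Python may help make the definition concrete (function name and data layout are our own; the paper's algorithm [28] achieves linear time, which this quadratic loop does not attempt to reproduce):

```python
def squared_edt(volume):
    """Squared Euclidean distance transform per Eq. (1).

    volume: nested list indexed [i][j][k]; 0 marks a background voxel in B.
    Returns t with t[i][j][k] = squared distance to the nearest background
    voxel (0 for background voxels themselves). Assumes B is non-empty.
    """
    n, m, l = len(volume), len(volume[0]), len(volume[0][0])
    background = [(x, y, z)
                  for x in range(n) for y in range(m) for z in range(l)
                  if volume[x][y][z] == 0]
    t = [[[0] * l for _ in range(m)] for _ in range(n)]
    for i in range(n):
        for j in range(m):
            for k in range(l):
                # direct evaluation of the minimum in Eq. (1)
                t[i][j][k] = min((i - x) ** 2 + (j - y) ** 2 + (k - z) ** 2
                                 for (x, y, z) in background)
    return t
```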

4.3 Skeleton Extraction

Skeletons are extracted from a given distance transformation image and are required to restore the d-dimensional original one. For instance, a voxel p(x, y, z) in a 3D image can be called a skeletal point, after the result t(x, y, z) of the Euclidean distance transformation has been obtained, if there exists an element p(i, j, k) in the image satisfying [29]:

1) $(x-i)^2 + (y-j)^2 + (z-k)^2 < t(x, y, z)$;

2) $\max_{u,v,w}\left\{ t(u,v,w) - (u-x)^2 - (v-y)^2 - (w-z)^2 \;\middle|\; t(u,v,w) \in F \right\} = t(x,y,z) - (i-x)^2 - (j-y)^2 - (k-z)^2$.

The first condition ensures that all such elements p(i, j, k) are located within the disk centered at p(x, y, z) with radius $\sqrt{t(x,y,z)} > 0$; the second ensures that this radius is maximal. That is, the elements of the skeleton set are the pixels corresponding to the apexes of the elliptic paraboloids that constitute the upper envelope whose heights are given by the Euclidean distance transformation.
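The conditions above can be approximated in code. The sketch below keeps a voxel as skeletal when its maximal ball is not contained in any other voxel's ball, a simplified ball-containment test standing in for the exact upper-envelope criterion of [29]; all names are ours:

```python
import math

def skeletal_points(t):
    """Extract skeletal points from a squared-distance map t (nested [x][y][z]).

    A foreground voxel p is kept when its maximal inscribed ball is not
    entirely covered by the ball of any other voxel q, using the containment
    test sqrt(t(q)) >= dist(p, q) + sqrt(t(p)). This is a simplified stand-in
    for the paper's paraboloid-envelope conditions, not its exact algorithm.
    """
    pts = [(x, y, z)
           for x in range(len(t))
           for y in range(len(t[0]))
           for z in range(len(t[0][0]))
           if t[x][y][z] > 0]
    skeleton = []
    for p in pts:
        rp = math.sqrt(t[p[0]][p[1]][p[2]])
        covered = any(q != p and
                      math.sqrt(t[q[0]][q[1]][q[2]]) >= math.dist(p, q) + rp
                      for q in pts)
        if not covered:
            skeleton.append(p)
    return skeleton
```

For a 1 × 1 × 5 bar with background at both ends (t = 1, 4, 9, 4, 1 along the axis), only the central voxel survives, matching the intuition that the center of the largest inscribed ball is the skeletal point.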

5 SKELETON CUTS

Our choice of the graph cuts algorithm rests on its global optimization and its incorporation of the user's expertise. Firstly, global solutions to segmentation are generally preferable because of their stability. Min-cut/max-flow algorithms guarantee the minimization of the energy function when computing minimum cuts on graphs [30]. Secondly, general-purpose classification is a highly ambiguous problem. User guidance can be provided to reduce its ambiguities. Furthermore, the integration of a physician's professional knowledge is conducive to more accurate tissue identification (see Fig. 2). In addition, the resulting tissues can be modified to improve the accuracy if necessary. However, the running time and memory consumption of the conventional graph cuts algorithm limit its feasibility in many applications, such as 3D segmentation for volume rendering.

5.1 Introduction to Graph Cuts

In this section, we give a brief review of Boykov and Kolmogorov's graph cuts algorithm in computer vision [30]. A directed and weighted graph G = (V, E, Ω) consists of a set of nodes v ∈ V and a set of directed edges e ∈ E with nonnegative weights ω ∈ Ω. The nodes represent pixels or voxels in regular 2D or 3D grid data. Two special nodes, called terminals (the source S and the sink T), are abstracted from the user's labeled nodes and added to G, connected to elements of V. Every edge e in the graph is assigned a weight ω, which represents the similarity between the nodes it connects. Edges come in two types: n-links and t-links. n-links connect pixels or voxels with their neighbors (e.g., 4 to 8 neighbors in 2D, and 6 to 26 neighbors in 3D); t-links connect nodes with terminals. A graph cut for an image F, defined as a subset of


Fig. 5. Some examples of skeleton graphs. (a) The skeletons (yellow balls) of a cuboid (light salmon) and their connections (colored sticks), for the same data as Fig. 4(b), depicted by a ball-and-stick model, where the connection values range from 0 to 10; (b) The connections between skeletons of a coarse result (256 × 320 × 128) segmented with threshold value 87.9, depicted by the colored lines, where the connection values range from 0 to 2; (c) A zoomed-in view of the connections in (b).

edges in E, is a separation of the nodes V into two disjoint sets O (S ∈ O) and B (T ∈ B). Recognizing and partitioning an object with boundaries from an image is equivalent to finding an appropriate cut in the graph G. In the literature, this problem is regarded as the optimization of an energy function [31], finding the minimum-cost cut C among all feasible cuts $\mathcal{C}$:

$$C = \arg\min_{C \in \mathcal{C}} \sum_{e(p,q) \in C} \omega(p, q) \quad (2)$$

where ω(p, q) denotes the weight assigned to the edge e(p, q).
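Eq. (2) is minimized by a max-flow computation. As an illustration only, the sketch below uses a plain Edmonds-Karp max-flow (BFS augmenting paths) rather than the specialized algorithm of Boykov and Kolmogorov [30]; after the flow saturates, the nodes still reachable from the source form the object side O of the minimum cut:

```python
from collections import deque, defaultdict

def min_cut(edges, source, sink):
    """edges: list of (u, v, capacity) triples describing t-links and n-links.
    Runs Edmonds-Karp max-flow on the residual graph and returns the set O of
    nodes still reachable from `source` afterwards (the object side of the cut).
    """
    cap = defaultdict(int)
    adj = defaultdict(set)
    for u, v, c in edges:
        cap[(u, v)] += c
        adj[u].add(v)
        adj[v].add(u)          # residual (reverse) edges need adjacency too
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = {source: None}
        q = deque([source])
        while q and sink not in parent:
            u = q.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    q.append(v)
        if sink not in parent:
            break
        # collect the path and its bottleneck capacity, then augment
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        f = min(cap[e] for e in path)
        for u, v in path:
            cap[(u, v)] -= f
            cap[(v, u)] += f
    # nodes reachable from the source in the residual graph form O
    seen = {source}
    q = deque([source])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in seen and cap[(u, v)] > 0:
                seen.add(v)
                q.append(v)
    return seen
```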

5.2 Graph Construction

In this section, we provide algorithmic details about our skeleton network construction. We decompose an over-segmented volume into a set of cells represented by skeletons, which are detected by the algorithms described in Section 4. Indeed, given a set of voxels in the foreground, the Euclidean distance transformation value at a voxel p(i, j, k) corresponds to the radius of the largest sphere centered at p(i, j, k) contained in the foreground. A maximal sphere is a sphere contained in the foreground that is not entirely covered by another sphere contained in the foreground. Skeletons are the set of central voxels of the maximal spheres covering the foreground. Once the skeletons are extracted from the Euclidean distance field, each element p(i, j, k) in the foreground is scanned to find its skeleton p(x, y, z), so that these elements constitute a set called a skeleton cell SC(x, y, z). The new graph incorporates skeletons as nodes p(x, y, z) (see the yellow balls in Fig. 5(a)) and the intersections between two skeleton cells SC(x1, y1, z1) and SC(x2, y2, z2) as edges (see the links in Fig. 5(a)). The number of neighbors of a skeletal point depends on the skeleton cells which intersect its cell in this irregular system. Two special nodes, the source S and the sink T, are again designated as the labels of foreground and background.
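The cell and edge construction just described can be sketched as follows (a simplification with our own names: each foreground voxel joins the cell of its nearest skeleton, and the number of 6-connected voxel pairs on the shared border of two cells stands in for the intersected area that will later weight the edge):

```python
def build_skeleton_graph(foreground, skeletons):
    """foreground: set of (x, y, z) voxels; skeletons: list of (x, y, z) centers.

    Assigns every voxel to its nearest skeleton (forming its cell) and counts,
    for each pair of cells, the 6-connected voxel pairs on their shared border,
    our stand-in for the intersected area A(p, q). Returns (cell, area)."""
    def nearest(v):
        # index of the skeleton closest to voxel v (ties go to the lowest index)
        return min(range(len(skeletons)),
                   key=lambda s: sum((a - b) ** 2
                                     for a, b in zip(v, skeletons[s])))
    cell = {v: nearest(v) for v in foreground}
    area = {}
    for (x, y, z) in foreground:
        # look only in the +x/+y/+z directions so each pair is counted once
        for dx, dy, dz in ((1, 0, 0), (0, 1, 0), (0, 0, 1)):
            w = (x + dx, y + dy, z + dz)
            if w in foreground and cell[w] != cell[(x, y, z)]:
                key = tuple(sorted((cell[(x, y, z)], cell[w])))
                area[key] = area.get(key, 0) + 1
    return cell, area
```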

5.3 Connection Metrics

In this section, we concentrate on analyzing the internal features of the regions from which skeletons have been extracted. The conventional graph cuts algorithm optimizes the energy function (Equation (2)) at the pixel or voxel level, and assigns weights to n-links and t-links simply using scalar values [32], local intensity gradients, Laplacian zero-crossings, etc. In addition, the traditional graph cuts algorithm for categorization in volume data is limited both by its intense memory requirements and by its nonlinear time complexity. In order to keep the representation as simple as possible, the area over which neighboring skeleton cells touch each other, the radii of the maximal inscribed spheres, and the intensity means are taken into account in our framework.

In our system, the edges between pairs of nodes are bidirectional, except for the edges with terminals. Each pair is connected by two directed edges e(p, q) and e(q, p) with identical weight ω. If two connected nodes p and q are separated by an optimal cut, so that p is categorized as part of the object while q is categorized as part of the background, then one of the two weights ω is added to the energy function in Equation (2) while the other is neglected, and vice versa. As a matter of fact, one of the most important issues to be tackled to achieve a good cut is to assign proper weights to the edges. Hence, we exploit the features contained in the skeletons and propose a novel metric M to compute the connectivity of neighboring nodes p and q:

$$M(p, q) = A(p, q) \cdot S(p, q) \quad (3)$$

where nodes p and q represent two intersecting cells in the foreground; A(p, q) is the intersected area of p and q; and S(p, q) is the similarity between p and q. Given n measurements of features f1, f2, ..., fn, we can compute the similarity of two nodes:

$$S(p, q) = \exp\left( -\sum_{j=1}^{n} k_j \frac{(f_{jp} - f_{jq})^2}{E\left[(f_{jp} - f_{jq})^2\right]} \right) \quad (4)$$


where $k_j$ is the coefficient of the jth feature; $f_{jp}$ and $f_{jq}$ are the measurements of the jth feature at nodes p and q, respectively; and E[·] denotes the expectation. In this paper, we use two features to compute S(p, q): the intensity mean i of the voxels in a node (a skeleton cell) and the radius r of its maximal inscribed sphere. That is, we have:

$$S(p, q) = \exp\left( -k_i \frac{(i_p - i_q)^2}{E\left[(i_p - i_q)^2\right]} - k_r \frac{(\log r_p - \log r_q)^2}{E\left[(\log r_p - \log r_q)^2\right]} \right) \quad (5)$$

Given that i and r are independent random variables, we can get $E\left[(\log r_p - \log r_q)^2\right] = 2D[\log r]$ and $E\left[(i_p - i_q)^2\right] = \left(\frac{1}{N_p} + \frac{1}{N_q}\right) D[i]$, where D[·] is the variance, and $N_p$ and $N_q$ are the numbers of voxels in nodes p and q, respectively.
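Putting Eqs. (3)-(5) together with these expectations, an edge weight can be computed as below (the default coefficients k_i, k_r and the argument names are our assumptions; the variances D[i] and D[log r] are assumed to be estimated over all cells beforehand):

```python
import math

def connection_weight(area, i_p, i_q, r_p, r_q, var_i, var_logr,
                      n_p, n_q, k_i=1.0, k_r=1.0):
    """Edge weight M(p, q) = A(p, q) * S(p, q) following Eqs. (3)-(5).

    area: intersected area A(p, q); i_p, i_q: intensity means; r_p, r_q > 0:
    inscribed-sphere radii; var_i = D[i], var_logr = D[log r]; n_p, n_q:
    voxel counts of the two cells (used in the expectation of (i_p - i_q)^2).
    """
    e_i = (1.0 / n_p + 1.0 / n_q) * var_i    # E[(i_p - i_q)^2]
    e_r = 2.0 * var_logr                     # E[(log r_p - log r_q)^2]
    s = math.exp(-k_i * (i_p - i_q) ** 2 / e_i
                 - k_r * (math.log(r_p) - math.log(r_q)) ** 2 / e_r)
    return area * s
```

For two cells with identical intensity means and radii, S(p, q) = 1 and the weight reduces to the intersected area alone, as Eq. (3) suggests.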

Here, n-links are defined in an irregular neighboring system, which differs greatly from the traditional graph cuts algorithm: they express the connection of adjacent nodes with more features. Geometrically, the intersected area denotes the spatial relationship of the intersecting pairs. The intensity mean represents intensity information, since similar features should occur at approximately adjacent regions; meanwhile, the geometric modality describes the similarity of two neighboring regions in structural conformation. The properties of intersected area, intensity mean and radius can be explained with Fig. 5. Fig. 5(a) shows the connections between adjacent skeletons of a regular 3D object (a cuboid). Fig. 5(b) shows the connections between skeletons of a coarse result segmented by a threshold value. Fig. 5(c) shows a zoomed-in view of these connections. It shows that the connection value within an object decreases from the center to the boundary.

6 VOLUME RENDERING

6.1 Localized Transfer Function

In this section, we detail an interactive transfer function design method. To decrease user interaction [4], only a few visual parameters should have to be managed to operate the transfer functions. A volume of interest (VOI) isolated by skeleton cuts can be displayed realistically by a localized transfer function, and special features can be encoded via visual parameters. A localized transfer function can be defined as a mapping F : f_1, f_2, ..., f_n → \vec{V}, where f_1, f_2, ..., f_n are n types of features such as intensity, radius of the maximal inscribed spheres, gradient magnitude and curvature, and \vec{V} denotes visual parameters such as color \vec{c} or opacity o.

1) 1D localized transfer function. A basic 1D localized transfer function can be defined with the single feature intensity in the following form:

F(i, j) = P\{I < i \mid voxel \in SV_j\},  j = 1, 2, ..., N    (6)

which is the intensity distribution function P{·} of the jth subvolume SV_j with the random variable I and intensity value i; N is the total number of segmented structures. We can then obtain the final 1D localized color (opacity) transfer function \vec{c}_{1d}(i, j) (o_{1d}(i, j)) with only two control points \vec{c}_1(j) and \vec{c}_2(j) (similarly o_1(j) and o_2(j)) on the transfer function editor (see Fig. 10):

\vec{c}_{1d}(i, j) = F(i, j)\,\vec{c}_1(j) + (1 - F(i, j))\,\vec{c}_2(j)    (7)
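A minimal CPU sketch of Eqs. (6)-(7), assuming one subvolume's voxel intensities are available as a sorted array so that the empirical CDF can be read off directly. The helper names are ours, not the paper's.

```cpp
#include <algorithm>
#include <array>
#include <vector>

using Color = std::array<double, 3>;

// F(i, j) of Eq. (6): the fraction of voxels in subvolume SV_j whose
// intensity is strictly below i, read from a sorted intensity array.
double cdf(const std::vector<int>& sortedIntensities, int i) {
    auto it = std::lower_bound(sortedIntensities.begin(),
                               sortedIntensities.end(), i);
    return double(it - sortedIntensities.begin()) / sortedIntensities.size();
}

// Eq. (7): interpolate between the two user control colors c1 and c2
// by the CDF value, yielding the 1D localized color transfer function.
Color localized1D(const std::vector<int>& sortedIntensities, int i,
                  const Color& c1, const Color& c2) {
    double F = cdf(sortedIntensities, i);
    return { F * c1[0] + (1 - F) * c2[0],
             F * c1[1] + (1 - F) * c2[1],
             F * c1[2] + (1 - F) * c2[2] };
}
```

The same two-control-point scheme applies verbatim to opacity by replacing the color triples with scalars o_1(j) and o_2(j).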

2) 2D localized transfer function

• VOI Highlighting: We define a local neighborhood B(p, r) centered at the point p with radius r inside an object. For every point inside the region B(p, r), we define filtering operations, called low-pass, band-pass and high-pass, to map color or opacity to another feature, the radius r. This feature may seem similar to the size proposed by Correa and Ma [14], but the radius r proposed here is a Euclidean scale used to highlight particular regions of an object. For an accurately recognized object, the Euclidean distance transformation and skeleton extraction described in Section 4 are employed again to calculate the radii of the inscribed spheres and the skeletal points. Low-pass, band-pass and high-pass widgets are created for users to operate (see Fig. 12). Given a color \vec{c}_0(j) for highlighting the jth subvolume SV_j, we obtain the final 2D localized color transfer function:

\vec{c}_{2d}(i, j, r) = k_{rad}\,\vec{c}_0(j) + (1 - k_{rad})\,\vec{c}_{1d}(i, j)    (8)

where k_{rad} is the highlight weight given by the curve widget corresponding to the radius r.

• Non-VOI Diminishment: For a simple point pos in the complement of the interesting structures, its Euclidean distance d to the VOI is computed by the Euclidean distance transformation. The 2D localized opacity transfer function o_{2d}(i, j, d) is implemented by multiplying the 1D localized opacity transfer function o_{1d}(i, j) with the following function:

k_d = b_0 - b_1 \cdot d    (9)

where b_0 and b_1 are set by the user with a curve widget (see Fig. 13).
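The two 2D operations above, Eq. (8) and Eq. (9), can be sketched as follows. This is an illustrative CPU version: the step band-pass stands in for the curve widget, and the clamp of k_d to [0, 1] is an assumption we add so the result stays a valid opacity.

```cpp
#include <algorithm>
#include <array>

using Color = std::array<double, 3>;

// VOI highlighting (Eq. 8): blend the highlight color c0 into the 1D
// localized color c1d by a radius-driven weight kRad in [0, 1].
Color highlight(const Color& c0, const Color& c1d, double kRad) {
    return { kRad * c0[0] + (1 - kRad) * c1d[0],
             kRad * c0[1] + (1 - kRad) * c1d[1],
             kRad * c0[2] + (1 - kRad) * c1d[2] };
}

// An illustrative stand-in for the band-pass curve widget: full highlight
// weight for radii inside [rLo, rHi], none outside.
double bandPass(double r, double rLo, double rHi) {
    return (r >= rLo && r <= rHi) ? 1.0 : 0.0;
}

// Non-VOI diminishment (Eq. 9): scale the 1D localized opacity by
// kd = b0 - b1 * d, where d is the Euclidean distance to the VOI.
double diminishedOpacity(double o1d, double d, double b0, double b1) {
    double kd = std::clamp(b0 - b1 * d, 0.0, 1.0);  // clamp is our addition
    return o1d * kd;
}
```

Low-pass and high-pass variants of the widget differ only in which end of the radius range receives the full weight.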

6.2 Boundary Filtering

The results of segmentation are stored in an integer volume called a mask volume. Each integer value is an object ID corresponding to a segmented object. It is well known that, for rendering a


Fig. 6. A texture used for weight calculation in boundary interpolation.

segmented volume, a basic task is to compute the object boundaries at pixel resolution. One solution was given by Hadwiger et al. [33], which treats the mask volume as multiple binary volumes; a sample belongs to an object if the value interpolated from the corresponding binary volume is greater than 0.5. The implementation in that paper utilized vector operations and was very efficient on vector GPUs. However, most recent GPUs use a scalar architecture for better programmability, and on such platforms vector operations have little advantage. Moreover, the time required for the boundary filtering is linearly related to the number of objects; hence, an upper bound on the object number is usually predefined and is not scalable during execution. To overcome these limits, we develop our own boundary filtering scheme.

The purpose of the boundary filtering operation is to decide which localized transfer functions are to be used, and how the transferred values are to be blended, for a scalar value sampled along the viewing ray. This is because a viewing ray may intersect different objects, so different localized transfer functions may be involved in the calculation of the color corresponding to the viewing ray.

One solution to the boundary filtering problem is trilinear interpolation, which can be achieved by calculating the weights for each vertex according to the sampling location when a voxel is considered as a cuboid. To take advantage of the hardware interpolation functionality, we build a 2 × 2 × 2 texture with 4 channels, as shown in Fig. 6. For each sample, we first fetch the texture using the fractional part of the sample's voxel coordinate; then, with the z-coordinate flipped, we fetch it again. We thus obtain 8 values in total, which are exactly the weights for the vertices. Note that, depending on the platform specification, it may be necessary to add a 0.5 shift to the texture coordinates so that they fall into the linear part of the texture.
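For clarity, the eight weights that the two texture fetches produce are just the standard trilinear interpolation weights, which can be computed directly; the sketch below does so on the CPU. The vertex ordering (bit k of the index selecting the x/y/z side) is our assumption, not mandated by the paper.

```cpp
#include <array>

// The eight trilinear interpolation weights for a sample whose fractional
// voxel coordinate is (fx, fy, fz), each in [0, 1]. Weight k belongs to the
// cell vertex whose x/y/z offsets are the bits (k&1, (k>>1)&1, (k>>2)&1).
std::array<double, 8> trilinearWeights(double fx, double fy, double fz) {
    std::array<double, 8> w;
    for (int k = 0; k < 8; ++k) {
        double wx = (k & 1) ? fx : 1 - fx;
        double wy = (k & 2) ? fy : 1 - fy;
        double wz = (k & 4) ? fz : 1 - fz;
        w[k] = wx * wy * wz;  // weights over all 8 vertices sum to 1
    }
    return w;
}
```

The 2 × 2 × 2 texture trick in the text obtains the same eight products with two hardware-filtered fetches instead of this loop.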

Another crucial problem in the boundary filtering is to calculate the sum of the vertex weights for each related object. The default approach is to use an array of object weights to store the weights for all possible object IDs. However, performance drops when a large number of objects are defined,

Algorithm 1 Calculate Object Weights
Require: id[0] ... id[7], w0 ... w7
  BYTE: binMask ⇐ 11111111b
  while binMask > 0 do
    curID ⇐ id[ffs(binMask) − 1]
    weight ⇐ 0
    if id[0] = curID then
      weight ⇐ weight + w0
      binMask ⇐ binMask & 11111110b
    end if
    if id[1] = curID then
      weight ⇐ weight + w1
      binMask ⇐ binMask & 11111101b
    end if
    ...
    if id[7] = curID then
      weight ⇐ weight + w7
      binMask ⇐ binMask & 01111111b
    end if
    Process(curID, weight)
  end while

since the more objects there are, the bigger the array must be and the more time is consumed traversing it. We note that the eight vertices can refer to no more than eight objects, so it is possible to design an algorithm whose complexity is not related to the object number. The idea of our algorithm is to enumerate the object IDs one by one. The first object ID is obtained from any of the vertices; then, all the vertices with the same object ID are checked. If necessary, the second object ID is obtained from the first unchecked vertex, and all the vertices with the same object ID are checked, and so on. To efficiently find "the first unchecked vertex", we encode the checked status of the vertices into an 8-bit integer and need a map from this 8-bit integer to an "unchecked" bit index. Some recent graphics hardware provides an integer operation called "find first set bit", which returns the index of the first set bit beginning with the least significant bit, with bits numbered starting at one. In the CUDA C environment [34], this operation is implemented as an intrinsic function called "__ffs". Since what we need is an unchecked vertex index, we encode the checked status inversely in the bit mask. When "__ffs" is not available, an array with 256 elements can be built to serve as a lookup table. Note that the lookup table can be shared between multiple threads, so it remains a very cheap technique when many threads execute concurrently on the GPU. In the worst case, the eight vertices belong to eight different objects, and eight rounds are needed. However, such a case is unlikely in terms of probability. In most cases, all vertices belong to a single object, which is the case when a ray


Fig. 7. Comparisons of mandible segmentation accuracy. (a)-(b) A manual method; (c)-(d) Conventional graph cuts; (e)-(f) Skeleton cuts.

is traveling inside an object. The pseudocode of the algorithm is given in Algorithm 1, assuming an "ffs" function is present. id[0] ... id[7] denote the object IDs of the vertices, and w0 ... w7 denote the weights of the vertices. Object IDs and their corresponding weights are generated successively and can then be processed.
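Algorithm 1 can be written compactly in C++. The sketch below is ours, not the paper's GPU kernel: it uses GCC/Clang's __builtin_ffs in place of the CUDA intrinsic and folds the eight unrolled comparisons into a loop; "Process" becomes appending to a result list.

```cpp
#include <cstdint>
#include <utility>
#include <vector>

// Enumerate the distinct object IDs among the eight cell vertices and
// accumulate each ID's total weight, in time independent of how many
// objects exist in the whole volume (at most eight rounds).
std::vector<std::pair<int, double>>
objectWeights(const int id[8], const double w[8]) {
    std::vector<std::pair<int, double>> out;
    uint32_t binMask = 0xFF;  // bit k set means vertex k is still unchecked
    while (binMask) {
        // first unchecked vertex; __builtin_ffs numbers bits starting at 1
        int curID = id[__builtin_ffs(binMask) - 1];
        double weight = 0.0;
        for (int k = 0; k < 8; ++k) {
            if (id[k] == curID) {       // gather every vertex of this object
                weight += w[k];
                binMask &= ~(1u << k);  // mark the vertex as checked
            }
        }
        out.emplace_back(curID, weight);  // "Process(curID, weight)"
    }
    return out;
}
```

In the common case of a ray traveling inside one object, the while loop runs exactly once.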

We actually have multiple options for processing an object ID sorted out with its corresponding weight value. One option, as in [33], is to compare the weight with 0.5: if it is greater than 0.5, the loop is terminated, and the localized transfer function corresponding to that object ID is used exclusively. Alternatively, we can use accumulators to calculate the weighted sum of the values (opacities and opacity-weighted colors) transferred through the multiple localized transfer functions corresponding to the IDs sorted out.
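The two consumption options can be sketched in one routine. The lookupTF callback is a hypothetical helper standing in for evaluating one object's localized transfer function; when the exclusive option finds no object owning more than half the weight, this sketch falls back to the weighted sum, a choice of ours.

```cpp
#include <array>
#include <utility>
#include <vector>

// Opacity-weighted color (r, g, b) plus opacity a.
using RGBA = std::array<double, 4>;

RGBA blend(const std::vector<std::pair<int, double>>& pairs,
           RGBA (*lookupTF)(int), bool exclusive) {
    if (exclusive) {
        // Option 1 (as in [33]): an object holding more than half the
        // weight exclusively determines the sample's transfer function.
        for (const auto& p : pairs)
            if (p.second > 0.5) return lookupTF(p.first);
    }
    // Option 2: weighted sum of opacities and opacity-weighted colors
    // over all localized transfer functions sorted out for this sample.
    RGBA acc{0, 0, 0, 0};
    for (const auto& p : pairs) {
        RGBA v = lookupTF(p.first);
        for (int k = 0; k < 4; ++k) acc[k] += p.second * v[k];
    }
    return acc;
}
```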

In this algorithm, the number of objects does not need to be known in advance. Consequently, the limit on the object number is removed entirely, and no additional time is required when more objects are defined.

7 EXPERIMENTS AND EVALUATION

Volume rendering can assist physicians in disease diagnosis and pathology analysis. However, due to the diversity of imaging modalities and the amount of data generated by modern 3D imaging devices, it can take hours to perform classification or segmentation manually to ensure accurate results. Automatic and semi-automatic techniques aim at creating a user-friendly environment to ease and speed up the process. However, developing an automatic or semi-automatic segmentation technique that achieves satisfactory results has proved to be a tough task. This is because, firstly, organs and tissues usually exhibit high variability in size and shape. In addition,

Fig. 8. Examples of artery segmentation with different parameters ki and kr. (a) ki = 0, kr = 0; (b) ki = 17, kr = 0; (c) ki = 0, kr = 800.

Fig. 9. The plots of Ratio, κ(as), κ(ms), and κ(tv) against the change of ki and kr. (a) The variation of ki when kr = 0; (b) The variation of kr when ki = 0.

their appearance and geometry can be distorted and deformed by various diseases such as aneurysms, stenoses and traumas. All these factors make it difficult to analyze interesting structures in a medical volume data set. Accordingly, a large number of experiments have been carried out on medical datasets to evaluate the performance of our approach. All the algorithms involved in the technique are implemented in C++ as part of our medical imaging processing toolkit (MITK) and application system 3DMed [35]. All the experiments were run on a PC with an Intel Core 2 Quad Q8200 2.33 GHz CPU, 2 GB of RAM and an NVIDIA GeForce GTX 260 GPU with 896 MB of memory, and on a workstation with an Intel Xeon 3.2 GHz CPU and 16 GB of RAM. Volume rendering is achieved by ray casting and implemented in CUDA. The datasets were collected by our partners in clinical practice at the Radiology Department of Xuanwu Hospital, Capital Medical University, and from the Segmentation Challenge at MICCAI 2009.

7.1 Skeleton Cuts

7.1.1 Accuracy and Complexity

This section provides a quantitative analysis of the accuracy and complexity of Boykov and Funka-Lea's conventional graph cuts (CGC) [32] and skeleton cuts (SC). The datasets used in this experiment are the head&neck data obtained from the Segmentation Challenge at MICCAI 2009 (http://grand-challenge2009.bigr.nl/). The goal of the experiment is to segment the mandibles out of the data, which is an extremely challenging segmentation problem. The Dice similarity coefficient [36] was used to test the results of CGC and SC (see Fig. 7(b) and (c)) against


TABLE 1
The accuracy and complexity of CGC vs. SC.

Datasets       1            2           3           4           5
Size           180×220×100  200×240×80  200×230×80  170×210×80  190×240×80
CGC  κ(as)     0.8979       0.8813      0.886       0.8649      0.8703
     κ(ms)     1            1           1           1           0.8998
     κ(tv)     0.8482       0.844       0.8484      0.8322      0.8057
SC   κ(as)     0.9505       0.9081      0.9163      0.8951      0.9444
     κ(ms)     1            1           1           1           0.9399
     κ(tv)     0.9123       0.871       0.841       0.8815      0.9101
No.            74188        60522       57414       54447       51057
Ratio          1:53.4       1:63.4      1:64.1      1:52.5      1:71.4

the expert manual segmentation (see Fig. 7(a)). This criterion ranges from 0 to 1 and measures the total volumetric overlap κ(tv), computed as:

κ = \frac{2 |X \cap Y|}{|X| + |Y|},

where |·| is the number of voxels. The criterion is also evaluated on the axial slices where the manual delineations are present, and the average κ(as) and median κ(ms) are used to produce the slice overlap scores. The resulting statistics are given in Table 1. The number of skeletons generated in the process is shown in the ninth row; the ratio was computed between the number of skeletons and the number of voxels. These results were obtained by placing only the same seeds at the joints between the mandible and the skull. The experimental results demonstrate that skeleton cuts improve the segmentation accuracy and achieve a satisfactory segmentation; meanwhile, they reduce the segmentation complexity and enable the CGC method to work on fewer nodes. In addition, we have also studied how the parameters of the connection metric affect the segmentation results. Fig. 5(b) and (c) depict the connection between skeletons of an over-segmented result; its value ranges from 0 to 2. The same seeds were then painted on only one slice in this experiment. The intersected area was used to segment the arteries as a baseline (ki = 0, kr = 0). Fig. 9 shows the changes of Ratio, κ(as), κ(ms), and κ(tv) when additional features, namely the intensity mean and the radius, are taken into account. With these parameters, less user input is needed to remove unwanted structures. In fact, it is difficult to achieve a perfect segmentation of a given medical image in most practical circumstances. Therefore, with an improved version of the CGC implemented in our interactive volume rendering framework, it is very convenient for end users to modify the segmentation result flexibly when necessary, and the result can be further refined by placing additional seeds.
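The Dice coefficient used above is straightforward to compute from two binary masks; a minimal sketch (our code, with masks stored as 0/1 vectors):

```cpp
#include <vector>

// Dice similarity coefficient kappa = 2|X ∩ Y| / (|X| + |Y|) between two
// binary segmentation masks of equal length.
double dice(const std::vector<int>& X, const std::vector<int>& Y) {
    long inter = 0, nx = 0, ny = 0;
    for (std::size_t v = 0; v < X.size(); ++v) {
        nx += X[v];            // |X|
        ny += Y[v];            // |Y|
        inter += X[v] & Y[v];  // |X ∩ Y|
    }
    return 2.0 * inter / (nx + ny);
}
```

A value of 1 means the two masks coincide exactly; 0 means they are disjoint.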

7.1.2 Performance in Speed and Memory Usage

The performance in speed and memory usage of our advanced segmentation algorithm for various volume

Fig. 10. Comparison between CGC and SC in segmenting organs: (a) heart, (b) hepar. The threshold values were 1015 for the heart and 1020 for the hepar. The segmented results were rendered with the same 1D localized transfer functions.

data sets has also been tested. Fig. 10 shows the segmentation of two data sets: heart and hepar. The results in the left column were segmented with CGC, and those in the right column were segmented by the new version. As shown in the figures, the segmented organs obtained by the two algorithms appear similar. However, the experiments show that it is relatively easier to extract a complete coronary artery and to separate the heart from the hepar with SC than with CGC (see the blue arrows in Fig. 10(a) and (b)). Additionally, we tested the algorithm on three more data sets: pulmo, kidneys and artery. The performance of our newly proposed algorithm exceeds that of the CGC method in both time and memory usage (shown in Table 2; S1: Step 1 and S2: Step 2 in Fig. 3). The number of skeletons and the ratio between skeletons and voxels are also given in this table. We have also carried out experiments to test the performance at different


TABLE 2
Comparison of time and memory usage between CGC and SC.

                         CGC              SC
         Size            T(s)   M(Mb)    T(s)(S1+S2)       M(Mb)(S1+S2)         No.      Ratio
Heart    256×256×128     65.1   1512.6   3.9(3.7+0.2)      64.4(32.4+32.0)      55103    1:152.2
Hepar    256×256×128     168.2  1560.3   5.4(4.4+1.0)      69.9(32.6+36.3)      281269   1:29.8
Pulmo    256×256×128     113.2  1614.3   4.8(4.1+0.8)      62.5(33.2+29.3)      361299   1:23.2
Kidneys  512×512×314     \      \        33.3(30.8+2.5)    586.0(368.2+217.8)   1456066  1:56.5
Artery   512×512×957     \      \        186.7(184.2+2.5)  2082.0(1978.5+103.5) 822137   1:305.1

Fig. 11. Comparison of time and memory usage between the conventional graph cuts and our proposed version. (a) The processing time was tested at different volume sizes; (b) The memory usage was also tested at different volume sizes. These experiments demonstrate that our approach significantly reduces the processing time and memory consumption.

volume sizes. We further applied our approach to five heart data sets obtained by downsampling the CTA heart data (256 × 256 × 128) by factors of 2, 4, 8, 16 and 32. Fig. 11(a) shows the running times of CGC and SC; the performance of our proposed algorithm is much better than that of the conventional version. Moreover, as shown in Fig. 11(b), the memory expense has also been significantly reduced with our advanced graph cuts algorithm.

As studied by Boykov and Kolmogorov [30], the time complexity of graph cuts is O(|V| |E|^2 |C|); hence, methods that reduce the number of nodes |V|, the number of edges |E|, or the cost of the minimum cut |C| are advisable for improving the performance of standard graph cuts. With our skeleton cuts, the link values along the contour produced by an over-segmentation approach such as thresholding are set to zero. Also, most of the segmentation result will coincide with this contour, so the min-cut cost can be made very small. Additionally, the decomposition of the volume via skeleton extraction dramatically reduces the number of nodes and edges.

7.2 Volume Rendering

7.2.1 VOI Highlighting

In many clinical scenarios, VOI highlighting is an important illustration technique for analyzing the shape and geometry of interesting tissues. Users only need to visually choose a color and move a curve widget to

Fig. 12. Some examples of VOI highlighting. (a) A carotid artery stenosis is highlighted with the low-pass curve widget; (b) The pelvis is highlighted with the band-pass curve widget; (c) Intracranial aneurysms are highlighted with the high-pass curve widget.

design the required transfer function (see the bottom of the images in Fig. 12). With the construction of the Euclidean distance and the skeletons for the gray-level structure, we were able to find the radii of the maximal inscribed spheres and their corresponding locations in a rendered image. In practice, users can do this easily by simply adapting the parameter r with the curve widget; the structures are then automatically highlighted with a specific color. The proposed technique has also been applied to qualitatively display blood vessel anomalies such as stenoses and aneurysms [37]. In the first case, a low-pass technique was used to highlight a stenosis in CTA data (512 × 512 × 330) (see Fig. 12(a)). The curve widget was applied at upper band scales in order to detect the stenosis; this is done by adjusting the parameter r in the low-pass curve widget from min to max. Another application of our technique was the visualization of CT pelvis data (512 × 512 × 276) (see Fig. 12(b)), which illustrates that the proposed method can capture geometrical information. A third, high-pass technique was applied to visualize intracranial aneurysms. This experiment was carried out with a DSA data set (512 × 512 × 256), shown in Fig. 12(c). The high-pass curve widget allows users to determine the shape and rough size of the aneurysms, including the minimum and maximum radii of their maximal inscribed spheres, as well as the corresponding locations.


Fig. 13. VOI positioning by diminishing contextual tissues using the proposed transfer function. The upper rendered images demonstrate that the pulmonary vessels can be positioned by eliminating the surrounding organs. (a) The original rendered image; (b)-(d) Volume rendering effects obtained by scaling the Euclidean distance.

7.2.2 Non-VOI Diminishment

Our framework can be used not only for VOI highlighting and intuitive observation, but also as an interactive method for object positioning. In some practical situations, such as the diagnosis of pulmonary cancer, identifying the spatial relationship between a lesion and the pulmonary vessels is crucial for the differential diagnosis of malignant and benign pulmonary tumors, as well as for surgery planning. However, conventional approaches mainly let users subjectively select a view in which to perform the measurements in 2D or 3D space. By just using the original CT images or following the traditional way of quantification, radiologists find it difficult to determine in a selected view whether the pulmonary vessels and pleura are

pulled toward the lesion, an important sign that strongly suggests the possibility of pulmonary cancer. With non-VOI diminishment in volume rendering, radiologists can intuitively and quickly assess the spatial relationship between the lesion and its surrounding structures. As a kind of computer animation, this interaction process is easily understood in clinical practice by common users. Since segmentation is embedded in our volume rendering framework, a lookup table can be constructed to express the Euclidean distance between two tissues in the visualization. First, the Euclidean distance transformation is performed to compute the Euclidean distance between the focused tissue and every voxel in its context. Then, the shortest distance between the tissue of interest and other organs can easily be acquired in the localized transfer function domain. As a scale tool, our system provides this specialized transfer function to end users, who can use it to position the objects of interest in a volume. Another distinctive feature of our approach is its simplicity and intuitiveness of use.

Fig. 13 shows an application positioning the pulmonary vessels in a chest CTA data set (512 × 512 × 256). A user first selects the pulmonary vessels, and then their Euclidean distances to the kidneys, aorta and ribs are easily revealed in the proposed transfer function domain. In this application, the opacity is used to diminish contextual tissues and to estimate their distance to the VOI. We can also use color as an alternative means of encoding the distance.

8 DISCUSSION AND FUTURE WORK

In this paper, we presented a novel framework based on an advanced graph cuts segmentation algorithm and localized transfer functions. The proposed approach allows fast, high-quality classification of volume data, extracts important information about interesting structures, and decreases user interaction. However, our approach has some limitations, which are mainly associated with the limitations of the segmentation techniques used in our framework to generate a coarse contour of the interesting structures.

As shown by a large number of experiments, our approach works well for most real clinical data, but it may not produce desirable visualizations when the volume data sets are extremely noisy, because in this situation it is difficult to obtain the required segmentations. However, we are not much concerned with this extreme situation, as the inputs to our medical visualization system are mainly real medical data acquired with various kinds of scanners. In fact, when the noise exceeds a certain level, noise filtering techniques can be applied to the data set first. To obtain a desirable result and avoid this problem, we can use


the anisotropic diffusion filtering method proposed by Perona and Malik [38]. This algorithm preserves edges while reducing noise in interior regions. We can then obtain a well-segmented mask volume and fuse it into the original volume; in this way, we can visualize the original volume realistically via a localized transfer function. Another limitation of our approach concerns its applicability to scientific data. Our approach works well when the structures of interest have comparatively clear boundaries, which is generally the case for most types of volume medical data. In some cases, such as climatology simulations, the values in the volume data may vary smoothly across the entire domain, i.e., the boundaries between regions are fuzzy, and it is difficult to determine a coarse contour of the interesting regions via thresholding. This can be addressed by a fuzzy clustering approach [39], which categorizes voxels into a number of subgroups based on a certain degree of closeness or similarity in the fuzzy environment.

As shown in our work, our approach can easily be combined with previously used features, such as gradient magnitude and curvature, to obtain different information. In the future, it will also be possible to search for more feature information describing pathologies, such as blood vessel anomalies, in order to help physicians and radiologists diagnose diseases more intuitively and accurately via specialized localized transfer functions in our segmentation-based volume rendering framework.

ACKNOWLEDGMENTS

This paper is supported by the National Basic Research Program of China (973 Program) under Grants No. 2006CB705700 and 2011CB707700, the Hundred Talents Program of the Chinese Academy of Sciences, the Knowledge Innovation Project of the Chinese Academy of Sciences under Grants No. KSCX2-YW-R-262 and KGCX2-YW-129, and the National Natural Science Foundation of China under Grants No. 81042002, 81071218, 30873462 and 60910006. We would like to thank the anonymous reviewers for their valuable comments.

REFERENCES

[1] M. Levoy, "Display of surfaces from volume data," IEEE Computer Graphics and Applications, vol. 8, no. 3, pp. 29–37, 1988.
[2] G. Kindlmann and J. W. Durkin, "Semi-automatic generation of transfer functions for direct volume rendering," in VVS '98: Proceedings of the 1998 IEEE Symposium on Volume Visualization. New York, NY, USA: ACM, 1998, pp. 79–86.
[3] F.-Y. Tzeng, E. B. Lum, and K.-L. Ma, "An intelligent system approach to higher-dimensional classification of volume data," IEEE Transactions on Visualization and Computer Graphics, vol. 11, no. 3, pp. 273–284, 2005.
[4] C. Salama, M. Keller, and P. Kohlmann, "High-level user interfaces for transfer function design with semantics," IEEE Transactions on Visualization and Computer Graphics, vol. 12, no. 5, p. 1021, 2006.
[5] J. Kniss, G. Kindlmann, and C. Hansen, "Multidimensional transfer functions for interactive volume rendering," IEEE Transactions on Visualization and Computer Graphics, vol. 8, no. 3, pp. 270–285, 2002.
[6] C. Lundstrom, P. Ljung, and A. Ynnerman, "Local histograms for design of transfer functions in direct volume rendering," IEEE Transactions on Visualization and Computer Graphics, vol. 12, no. 6, pp. 1570–1579, 2006.
[7] P. Sereda, A. V. Bartroli, I. W. O. Serlie, and F. A. Gerritsen, "Visualization of boundaries in volumetric data sets using LH histograms," IEEE Transactions on Visualization and Computer Graphics, vol. 12, no. 2, pp. 208–218, 2006.
[8] I. Serlie, R. Truyen, J. Florie, F. Post, L. van Vliet, and F. Vos, "Computed cleansing for virtual colonoscopy using a three-material transition model," in Medical Image Computing and Computer-Assisted Intervention (MICCAI) Proceedings. Montreal, Canada: MICCAI Society, 2003, pp. 175–183.
[9] C. R. Johnson and J. Huang, "Distribution-driven visualization of volume data," IEEE Transactions on Visualization and Computer Graphics, vol. 15, no. 5, pp. 734–746, 2009.
[10] C. L. Bajaj, V. Pascucci, and D. R. Schikore, "The contour spectrum," in SIGGRAPH '97: ACM SIGGRAPH 97 Visual Proceedings. New York, NY, USA: ACM, 1997, p. 192.
[11] Y. Sato, C.-F. Westin, A. Bhalerao, S. Nakajima, N. Shiraga, S. Tamura, and R. Kikinis, "Tissue classification based on 3D local intensity structures for volume rendering," IEEE Transactions on Visualization and Computer Graphics, vol. 6, no. 2, pp. 160–180, 2000.
[12] G. Kindlmann, R. Whitaker, T. Tasdizen, and T. Moller, "Curvature-based transfer functions for direct volume rendering: Methods and applications," in VIS '03: Proceedings of the 14th IEEE Visualization 2003. Washington, DC, USA: IEEE Computer Society, 2003, pp. 513–520.
[13] S. Roettger, M. Bauer, and M. Stamminger, "Spatialized transfer functions," in Proceedings of the Eurographics / IEEE-VGTC Symposium on Visualization. Leeds, United Kingdom: Eurographics / IEEE VGTC, 2005, pp. 271–278.
[14] C. Correa and K.-L. Ma, "Size-based transfer functions: A new volume exploration technique," IEEE Transactions on Visualization and Computer Graphics, vol. 14, no. 6, pp. 1380–1387, 2008.
[15] C. Correa and K.-L. Ma, "The occlusion spectrum for volume classification and visualization," IEEE Transactions on Visualization and Computer Graphics, vol. 15, no. 6, pp. 1465–1472, 2009.
[16] J. J. Caban and P. Rheingans, "Texture-based transfer functions for direct volume rendering," IEEE Transactions on Visualization and Computer Graphics, vol. 14, no. 6, pp. 1364–1371, 2008.
[17] M. Hadwiger, F. Laura, C. Rezk-Salama, T. Hollt, G. Geier, and T. Pabel, "Interactive volume exploration for feature detection and quantification in industrial CT data," IEEE Transactions on Visualization and Computer Graphics, vol. 14, no. 6, pp. 1507–1514, 2008.
[18] G. H. Weber, S. E. Dillard, H. Carr, V. Pascucci, and B. Hamann, "Topology-controlled volume rendering," IEEE Transactions on Visualization and Computer Graphics, vol. 13, no. 2, pp. 330–341, 2007.
[19] I. Fujishiro, T. Azuma, and Y. Takeshima, "Automating transfer function design for comprehensible volume rendering based on 3D field topology analysis (case study)," in VIS '99: Proceedings of the Conference on Visualization '99. Los Alamitos, CA, USA: IEEE Computer Society Press, 1999, pp. 467–470.
[20] R. Huang and K.-L. Ma, "RGVis: Region growing based techniques for volume visualization," in Proceedings of the 11th Pacific Conference on Computer Graphics and Applications, 2003, pp. 355–363.
[21] S. Owada, F. Nielsen, and T. Igarashi, "Volume catcher," in Proceedings of the 2005 Symposium on Interactive 3D Graphics and Games. ACM, 2005, pp. 111–116.
[22] J. Marks, B. Andalman, P. Beardsley, W. Freeman, S. Gibson, J. Hodgins, T. Kang, B. Mirtich, H. Pfister, W. Ruml et al., "Design galleries: A general approach to setting parameters for computer graphics and animation," in Proceedings of the 24th Annual Conference on Computer Graphics and Interactive Techniques. ACM Press/Addison-Wesley Publishing Co., 1997, p. 400.


[23] X. Yuan, N. Zhang, M. Nguyen, and B. Chen, “Volume cutout,” The Visual Computer, vol. 21, no. 8, pp. 745–754, 2005.

[24] Y. Wu and H. Qu, “Interactive transfer function design based on editing direct volume rendered images,” IEEE Transactions on Visualization and Computer Graphics, pp. 1027–1040, 2007.

[25] G. Klette, “Topologic, geometric, or graph-theoretic properties of skeletal curves,” Ph.D. dissertation, Groningen University, 2007.

[26] D. Reniers, J. van Wijk, and A. Telea, “Computing multiscale curve and surface skeletons of genus 0 shapes using a global importance measure,” IEEE Transactions on Visualization and Computer Graphics, vol. 14, no. 2, pp. 355–368, 2008.

[27] R. L. Blanding, G. M. Turkiyyah, D. W. Storti, and M. A. Ganter, “Skeleton-based three-dimensional geometric morphing,” Computational Geometry: Theory and Applications, vol. 15, no. 1–3, pp. 129–148, 2000.

[28] A. Meijster, J. B. T. M. Roerdink, and W. H. Hesselink, “A general algorithm for computing distance transforms in linear time,” in Mathematical Morphology and its Applications to Image and Signal Processing. Kluwer Acad. Publ., 2000, pp. 331–340.

[29] D. Coeurjolly and A. Montanvert, “Optimal separable algorithms to compute the reverse Euclidean distance transformation and discrete medial axis in arbitrary dimension,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 3, pp. 437–448, 2007.

[30] Y. Boykov and V. Kolmogorov, “An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 9, pp. 1124–1137, 2004.

[31] H. Lombaert, Y. Sun, L. Grady, and C. Xu, “A multilevel banded graph cuts method for fast image segmentation,” in ICCV ’05: Proceedings of the Tenth IEEE International Conference on Computer Vision (ICCV’05) Volume 1. Washington, DC, USA: IEEE Computer Society, 2005, pp. 259–265.

[32] Y. Boykov and G. Funka-Lea, “Graph cuts and efficient N-D image segmentation,” International Journal of Computer Vision, vol. 70, no. 2, pp. 109–131, 2006.

[33] M. Hadwiger, C. Berger, and H. Hauser, “High-quality two-level volume rendering of segmented data sets on consumer graphics hardware,” in VIS ’03: Proceedings of the 14th IEEE Visualization 2003 (VIS’03). Washington, DC, USA: IEEE Computer Society, 2003, p. 40.

[34] NVIDIA, “NVIDIA CUDA programming guide version 2.3,” NVIDIA Corporation, July 2009.

[35] J. Tian, J. Xue, Y. Dai, J. Chen, and J. Zheng, “A novel software platform for medical image processing and analyzing,” IEEE Transactions on Information Technology in Biomedicine, vol. 12, no. 6, pp. 800–812, 2008.

[36] L. R. Dice, “Measures of the amount of ecologic association between species,” Ecology, pp. 297–302, 1945.

[37] D. Lesage, E. D. Angelini, I. Bloch, and G. Funka-Lea, “A review of 3D vessel lumen segmentation techniques: models, features and extraction schemes,” Medical Image Analysis, vol. 13, no. 6, pp. 819–845, 2009.

[38] P. Perona and J. Malik, “Scale-space and edge detection using anisotropic diffusion,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, no. 7, pp. 629–639, 1990.

[39] I. Gath and A. Geva, “Unsupervised optimal fuzzy clustering,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 11, no. 7, pp. 773–780, 1989.

Dehui Xiang received the B.E. degree in automation from Sichuan University, Sichuan, China, in 2007. He is currently a Ph.D. candidate in the Medical Image Processing Group at the Institute of Automation, Chinese Academy of Sciences, Beijing, China. His current research interests include volume rendering, medical image analysis, computer vision, and pattern recognition.

Jie Tian (M’02−SM’06−F’10) received the Ph.D. degree (with honors) in artificial intelligence from the Institute of Automation, Chinese Academy of Sciences, Beijing, China, in 1992.

From 1995 to 1996, he was a Postdoctoral Fellow at the Medical Image Processing Group, University of Pennsylvania, Philadelphia. Since 1997, he has been a Professor at the Institute of Automation, Chinese Academy of Sciences, where he conducts research in the Medical Image Processing Group. His research interests include medical image processing and analysis and pattern recognition.

Dr. Tian is the Beijing Chapter Chair of the IEEE Engineering in Medicine and Biology Society.

Fei Yang received the B.E. degree in biomedical engineering from Beijing Jiaotong University, Beijing, China, in 2009. He is currently a Ph.D. candidate in the Medical Image Processing Group at the Institute of Automation, Chinese Academy of Sciences, Beijing. His research interests include volume rendering, medical image analysis and GPU computing.

Qi Yang received the M.S. degree in medical imaging from Harbin Medical University, Heilongjiang, China, in 2006, and the Ph.D. degree in medical imaging from Xuanwu Hospital, Capital Medical University, Beijing, in 2009. The overall objective of his research is the development and clinical application of fast MR imaging techniques for the evaluation of the cardiovascular system. His specific research interests include developing fast MRI techniques to acquire high-resolution images of coronary arteries, characterizing the composition of atherosclerosis using MRI, and evaluating the utility of MR contrast agents in imaging the anatomy and function of the heart.

Xing Zhang received her M.S. degree in biomedical engineering from Beijing Jiaotong University, Beijing, China. She is currently a Ph.D. candidate at the Institute of Automation, Chinese Academy of Sciences. Her research interests are centered on medical image processing, including medical image segmentation using graph cuts and 3D medical image segmentation based on statistical shape models.

Qingde Li received the B.S. degree in mathematics from Beijing Normal University, Beijing, China, in 1982, and the Ph.D. degree in computer science from the University of Hull, Hull, U.K., in 2002.

From 1998 to 2002, he was a Professor in the School of Mathematics and Computer Science, Anhui Normal University, Anhui, China. He was a visiting scholar in the Department of Applied Statistics, University of Leeds, U.K., from Oct 1990 to May 1992, and in the Department of Computing, University of Bradford, Bradford, U.K., from Sept 1996 to Aug 1997. He is currently a lecturer in the Department of Computer Science, University of Hull, Hull, U.K. His most recent research interests include GPU-based volume data visualization, implicit geometric modeling, and computer graphics.

Xin Liu received his M.D. from the Medical School of Wuhan University, Wuhan, China, in 1988 and his Ph.D. in diagnostic radiology from PLA Postgraduate Medical School (PLA General Hospital), Beijing, China, in July 2006. He worked as a research associate in the Department of Radiology of Northwestern University, Chicago, USA, from August 2006 to October 2008.

Dr. Liu is currently a full-time professor at the Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences. His research interests include cardiovascular magnetic resonance imaging, noninvasive coronary angiography and vessel wall imaging, carotid atherosclerotic plaque characterization, and early detection of malignant tumors using MRI. He has published more than twenty peer-reviewed journal papers and international conference proceedings and one book chapter in Novel Techniques for Imaging the Heart, published by the American Heart Association.

