
Dynamic Surface Reconstruction from 4D-MR Images

Matthias Fenchel¹, Stefan Gumhold², Hans-Peter Seidel³

¹ Siemens AG Medical Solutions, Magnetic Resonance, Karl-Schall-Str. 4, 91052 Erlangen
² TU Dresden, SMT CGV, 01062 Dresden
³ Max-Planck-Institut für Informatik, Stuhlsatzenhausweg 85, 66123 Saarbrücken

Email: ¹ [email protected], ² [email protected], ³ [email protected]

VMV 2005, Erlangen, Germany, November 16–18, 2005

Abstract

In this work we present a new analysis technique for dynamic 3D images. The method allows the reconstruction of dynamic surfaces describing object boundaries in the 3D images over time. The only input of the user is a grey value threshold used for boundary detection and a bounding box around the surfaces of the object of interest. The output of our method is a dynamic surface mesh that tracks the object boundary over time.

We first extract a set of weighted points from the MR image data using a local boundary voting scheme. The dynamic surface is defined as an extremal surface on the resulting point-sampled boundary probability density. In this way a surface with adjustable smoothness is obtained that can be reconstructed using projection operators similar to the ones used for point set surfaces.

We extend the projection operators to 3-manifolds in 4D and obtain a much better tracking performance of the dynamic surface model. In contrast to computationally expensive active contour algorithms, the new algorithm can be easily parallelized. The proposed approach is applied to 4D-MR images of a human heart in motion.

1 Introduction

Image segmentation is an important task in many medical applications. Almost all computer-based medical methods include a segmentation step to infer relevant information from image data. A variety of segmentation approaches have been proposed, including model-based methods, like active contours, and pixel-based ones, e.g. watershed transformations or morphological segmentation.

We follow a probabilistic approach, where the probability density of object boundaries is derived from a local voting scheme. The density is sampled on a set of points whose weights measure the boundary probability. The object boundaries are defined as the surfaces of probability maxima in the direction of the gradient of the probability density. The problem of dynamic boundary segmentation therefore boils down to a space-time meshing problem. We solve this by an implicit meshing of the first time frame and a tracking procedure that is guided by the 3-manifold of gradient maxima residing in 4D space-time.

One important advantage is that the number of parameters that have to be adjusted is reduced to one intuitive intensity threshold and the surface component of interest. The proposed approach is applied to 4D space-time MR images of the human heart in motion.

The paper is structured as follows. First we introduce the estimation of the object boundary probabilities in Section 2. Section 3 describes the definition of the dynamic surface of maximal probability and introduces projection operators for almost orthogonal projection onto the dynamic surface. Section 4 describes the incremental meshing approach for the dynamic surface. The application to 4D cardiac MRI is presented in Section 5. Related work is stated at the beginning of each section.

2 Boundary Probability Density

In this work we aim at a tool to reconstruct surfaces from cardiac MR images. The measurement parameters of the given images make blood voxels appear at a higher intensity than tissue. The inner walls of the cardiac chambers are therefore marked by large discontinuities in image intensity, from high to low.

A variety of algorithms have been proposed for analyzing image intensity variation, including statistical methods [34, 11, 27, 12, 16], differential methods [30, 18, 24] and curve fitting methods [14, 17, 13, 28, 29]. Edge detection in noisy environments can be treated as an optimal linear filter design problem [35, 6, 31, 23, 32]. Canny [6] formulated edge detection as an optimization problem and defined an optimal filter, which can be efficiently approximated by the first derivative of a Gaussian function in the one-dimensional case. Canny's filter can be implemented recursively [8], which provides a more efficient way for image noise filtering and edge detection. Parametric models, which are often used for edge detection, restrict the types of step-edge geometries considered and cannot easily cope with nearby features.

The human heart contains many such nearby features, and the images can be locally poor in contrast while containing significant noise. Moreover, MR images tend to exhibit intensity biases: regions of the same tissue can appear at different intensities. For these reasons, a high-quality edge detector such as Canny's is necessary for our application. However, the Canny detector has many parameters that must be adjusted to obtain an optimal result, which cannot easily be done interactively for 3D or 4D image data. Furthermore, its generalization to higher dimensions is complicated. We therefore chose a different approach to infer boundary information from the given images, based on the following demands: only few input parameters should be required, adaptation to local contrast should be possible, detected boundaries should be narrow, and the generalization to higher dimensions should be straightforward.

2.1 2-Means Cluster Edge Detection in 2D

In this section we describe a boundary detector for edge detection in 2D which fulfills the demands stated in the introduction to this section. This allows a comparison with the Canny edge detector, as shown in Figure 4.

The boundary detector analyzes the local neighborhood of each image pixel, searching for step edges within the neighborhood. The pixels of the neighborhood are clustered into two sets according to their intensity values using a standard k-means clustering approach. Let N(p) denote the local neighborhood around pixel p and I(p) its intensity value. The 2-means clustering procedure partitions the pixels from N(p) into two clusters C_1 and C_2 by assigning a representative intensity value I_1 and I_2 to each cluster. The representatives are updated incrementally in a two-step procedure. First, all pixels p_j ∈ N(p) are assigned to the cluster C_i = arg min_i |I_i − I(p_j)| whose representative intensity is closest to the intensity value of the pixel. Then each representative value is updated to the mean intensity value I_i = |C_i|^{-1} Σ_{p ∈ C_i} I(p) of the pixels assigned to its cluster. This Lloyd iteration converges to a configuration that (locally) minimizes the squared distances of the pixel intensity values to their cluster representatives. In the simple 2-means clustering the Lloyd iteration converged after an average of three to four iterations. The 2-means clustering of the pixels results in a stable classification of the local neighborhood into interior (high intensity) and exterior (low intensity) components.

Figure 4 illustrates a typical local clustering result, with the interior cluster pixels framed in green and the exterior ones framed in red. One can easily see that the boundary shape can be quite complicated even within the local neighborhood without harming the classification of each pixel into inside and outside. The clustering is stable, which means that pixel classifications do not change if the neighborhood mask is moved to the surrounding pixels. The cluster boundary in the local neighborhood is not located on the pixels but on the edges between pixels. The boundary probability of the edges is accumulated by a local voting scheme that sweeps the neighborhood mask once over the image and counts, for each edge between two pixels, the number of transitions from one cluster to the other. To avoid false positive boundary edges, we introduce a significance test (the number of pixels in each of the clusters must be larger than ten percent of the number of pixels in the local neighborhood) and a step-height test (the difference between the cluster representative intensity values must be larger than a user-defined threshold τ) that a clustering has to pass before we vote for its pixel boundary edges.
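The following Python/NumPy sketch illustrates the local 2-means clustering and edge-voting scheme described above. The window radius, the ten-percent significance fraction and the step-height threshold are the parameters named in the text; the function names and default values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def two_means(values, iters=10):
    """Lloyd iteration for 2-means on scalar intensities.
    Returns boolean labels (True = high-intensity cluster) and the two means."""
    lo, hi = values.min(), values.max()
    for _ in range(iters):
        labels = np.abs(values - hi) < np.abs(values - lo)   # assign to the closer representative
        if labels.all() or (~labels).all():
            break
        new_hi, new_lo = values[labels].mean(), values[~labels].mean()
        if new_hi == hi and new_lo == lo:
            break
        hi, lo = new_hi, new_lo
    return labels, lo, hi

def vote_boundary_edges(image, radius=2, tau=20.0):
    """Accumulate boundary votes on the pixel edges (staggered grid).
    votes_v[y, x] counts cluster transitions between (y, x) and (y, x+1);
    votes_h[y, x] counts transitions between (y, x) and (y+1, x)."""
    h, w = image.shape
    votes_v = np.zeros((h, w - 1))
    votes_h = np.zeros((h - 1, w))
    min_cluster = 0.1 * (2 * radius + 1) ** 2          # significance test: >= 10% of the mask
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            win = image[y - radius:y + radius + 1, x - radius:x + radius + 1].astype(float)
            labels, lo, hi = two_means(win.ravel())
            if labels.sum() < min_cluster or (~labels).sum() < min_cluster:
                continue                                # significance test failed
            if hi - lo < tau:
                continue                                # step-height test failed
            lab = labels.reshape(win.shape)
            dv = lab[:, :-1] != lab[:, 1:]              # transitions across vertical edges
            dh = lab[:-1, :] != lab[1:, :]              # transitions across horizontal edges
            votes_v[y - radius:y + radius + 1, x - radius:x + radius] += dv
            votes_h[y - radius:y + radius, x - radius:x + radius + 1] += dh
    return votes_v, votes_h
```

The two vote arrays hold the accumulated boundary weights on the vertical and horizontal pixel edges, i.e. the staggered grid mentioned below.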

The final output of the boundary detector is a set of weighted edges, which we transform into a set of weighted points that can be used as input to the surface definition described in Section 3. The points simply inherit the weights of the edges and are located on the edge centers, which compose a staggered grid on the image. To compare the quality of the boundary detector with the Canny edge detector, we combine the staggered grid by adding, for each pixel, the square roots of the two weights of its top and right edge. The square root can be motivated by the fact that diagonal edges would otherwise receive too many counts because of the rasterization and thus obtain higher boundary strengths than straight edges.

The detected boundary is slightly blurred by this staggering and combining procedure and shifts half a pixel to the lower left, as can be seen in Figure 4 b). A comparison to the Canny edge detector is shown in c). Adapting the standard derivative, the Gaussian smoothing and the two threshold values of the Canny filter to obtain a good result was a difficult task. Our algorithm, in contrast, only takes the intuitive step-height threshold τ. Despite the blurring, the boundaries detected by the algorithm are reasonably good, i.e. they are comparable to Canny's edge detection, if not better; moreover, our boundary detector is much easier to handle. The generalization of our boundary detection approach to higher dimensions is straightforward: the dimension of the local neighborhood is increased, which results in a higher computational complexity, but the 2-means clustering procedure remains the same. The voting is finally done on the (hyper-)faces of the (hyper-)voxels.
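A short sketch of the staggered-grid combination used for the Canny comparison in Figure 4 b), assuming the vote arrays produced by the routine above; which edge counts as "top" is an indexing assumption.

```python
import numpy as np

def combine_staggered(votes_v, votes_h):
    """Per-pixel boundary strength: square root of the right-edge weight plus
    square root of the top-edge weight.  The square root compensates for the
    extra votes that rasterized diagonal boundaries collect."""
    h, w = votes_h.shape[0] + 1, votes_v.shape[1] + 1
    strength = np.zeros((h, w))
    strength[:, :-1] += np.sqrt(votes_v)   # right edge of each pixel
    strength[1:, :] += np.sqrt(votes_h)    # edge towards the previous row ("top", assumption)
    return strength
```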

3 Defining Dynamic Surfaces from Weighted Points

Recent work on the approximation of point-sampled surfaces provides powerful tools for the definition of approximating smooth surfaces. Levin et al. [21, 3, 22] defined a smooth surface that approximates a set of scattered data points as the fixed points of a projection operator. Inspired by Levin's work, Adamson and Alexa [1, 2] defined smooth approximating surfaces from an implicit definition. Shen et al. [33] defined implicit surfaces that approximate or interpolate point clouds and polygon soups. Amenta and Kil [4] pointed out the relation to extremal surfaces, which were previously used by Medioni et al. [26] to reconstruct surfaces from very noisy point and normal data.

In this work, we closely follow the implicit surface definition by Adamson and Alexa [2]. We apply it to the output of the boundary detector from Section 2 in order to compute a surface of local probability maxima in the gradient direction, where the boundary detector samples the probability density on a set of weighted points. In Section 3.2 the framework is generalized to dynamic point clouds sampled on several time slices. To facilitate the extraction of a 3D mesh representing a time slice of the dynamic surface, we propose a new version of the almost orthogonal projection operator in Section 3.3, which works for arbitrary subspaces of R^4. This operator projects points onto the dynamic surface subject to linear constraints. We extend this projection operator to a more robust version using damping factors.

3.1 Static Surfaces for Weighted Points

The boundary detector samples the boundary probability density on a set of weighted 3D points P = {(p_i, ω_i)}_i, p_i ∈ R^3, ω_i ∈ R. In order to define the surface of probability extrema, a gradient direction and a test for an extremum are necessary. Both ingredients can be derived from a weighted least-squares fitting plane similar to Adamson and Alexa [1] and Amenta and Kil [4]: for each point x in space we define a weighted least-squares fitting plane H(x), represented by a normal vector n(x) and its signed distance d(x) from the origin. We call x the reference point because the input points p_i are weighted by their distances r_i = ‖p_i − x‖ to x. The weighted least-squares plane is abbreviated as WLS plane. A positive, monotonically decreasing weighting function θ(r) is used to map the distances from the reference point to weights. A secondary weighting is done by the point weights ω_i. The WLS plane H(x) = (n(x), d(x)) is chosen to minimize the weighted least-squares energy function e(x, n, d)

    e(x, n, d) := [ Σ_i (n^t p_i − d)² ω_i θ(r_i) ] / [ Σ_i ω_i θ(r_i) ]                (1)

in the arguments n and d, with r_i := ‖p_i − x‖.

Formally, the WLS plane H(x) can be defined as

    H(x) = (n(x), d(x)) := arg min_{n,d} e(x, n, d).                (2)

Algorithmically, the WLS plane H(x) of each reference point x can be computed by the standard least-squares fitting procedure from the normal equations. Its normal can be considered a robust estimate of the gradient of the probability density. Note that, in general, the reference point is not located on its WLS plane itself, if the reference point is not a maximum of the probability density in the direction of the WLS normal. The surface S of maxima in gradient direction is then defined as the set of reference points in space that are contained in their WLS planes:

    S := { x | x ∈ H(x) }.                (3)

This definition is equivalent to the definition by Adamson and Alexa [1] but avoids the direct use of the weighted average point a(x): it is replaced by the distance d(x) = n^t a(x) of the WLS plane from the origin. The surface can therefore be interpreted as an implicit surface:

    S = { x | n(x)^t x − d(x) = 0 }.                (4)

It is a smooth manifold in the region of space in which the WLS plane is uniquely defined. This was shown by Adamson and Alexa in [1] for the case without point weights ω_i. The proof can easily be generalized to the weighted case by interpreting a point with weight ω_i ≈ n_i/N as n_i points at the same location, where N is a natural number that tends to infinity.
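As an illustration of equations (1)–(4), the following Python/NumPy sketch fits the WLS plane by a weighted covariance eigen-decomposition (equivalent to solving the normal equations under the constraint ‖n‖ = 1) and evaluates the implicit function of equation (4). The Gaussian choice of θ and its bandwidth are assumptions; the paper only requires θ to be positive and monotonically decreasing.

```python
import numpy as np

def theta(r, h=5.0):
    """Positive, monotonically decreasing distance weight (Gaussian, assumption)."""
    return np.exp(-(r / h) ** 2)

def wls_plane(x, points, weights, h=5.0):
    """Weighted least-squares plane H(x) = (n, d) minimizing eq. (1).
    points: (N, 3) boundary samples p_i, weights: (N,) boundary weights w_i."""
    r = np.linalg.norm(points - x, axis=1)
    w = weights * theta(r, h)
    wsum = w.sum()
    a = (w[:, None] * points).sum(axis=0) / wsum           # weighted average point a(x)
    centered = points - a
    cov = (w[:, None] * centered).T @ centered / wsum      # weighted covariance matrix
    eigval, eigvec = np.linalg.eigh(cov)
    n = eigvec[:, 0]                                       # smallest-eigenvalue direction
    d = n @ a                                              # plane distance d(x) = n^t a(x)
    return n, d

def implicit_value(x, points, weights, h=5.0):
    """Implicit function of eq. (4): f(x) = n(x)^t x - d(x); S is its zero set."""
    n, d = wls_plane(np.asarray(x, dtype=float), points, weights, h)
    return n @ x - d
```

The surface S of equations (3) and (4) is the zero set of implicit_value; the projection operators below search for these zeros.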

3.2 Dynamic Surfaces from Weighted Points

In the dynamic case, the input is a set P̄ of weighted points (p̄_i = (p_i, t_i), ω_i) in 4D space-time. Vectors and operators in space-time are marked by a bar over the symbol. All the definitions of the previous section generalize easily. For each reference point x̄ in space-time we define a WLS hyperplane H̄(x̄) from the distance-weighted WLS energy. The dynamic surface S̄ is then the 3-manifold in space-time consisting of all reference points contained in their WLS hyperplanes:

    S̄ := { x̄ | x̄ ∈ H̄(x̄) }.                (5)

3.3 Projection onto Lower-Dimensional Sub-Spaces

We did not extend the weighted least-squares approach to the fitting of higher-order polynomials as proposed by Levin [22], because the given surface definitions (3) and (5) allow further projection operators. A projection operator Π projects a point p located close to the [dynamic] surface onto the surface. In Section 4, we need an operator that projects onto the dynamic surface at a given time t = t_i for the extraction of dynamic meshes, which change in time. One cannot simply use the space-time version Π_⊥ of the projection operator, because it does not necessarily project onto the required hyperplane defined by t = t_0.

Instead we define a new projection operator Π_⊥|R, which operates on a linear sub-space R of R^4. This sub-space can be defined by any set of linear constraints. One can therefore restrict the operator to a 3D-plane in space-time, a 2D-plane in space or space-time, or a space-time line. The restricted projection operator Π_⊥|R takes a point p̄ from the sub-space R and projects it to the point x̄ on S̄ within R. The projection is almost orthogonal within R. In Section 4 we will use the time slices t = t_i as the linear constraint.

The only change in the restricted projection procedure is that we do not project the reference point x̄ onto the full WLS hyperplane H̄(x̄) but onto the subset of the hyperplane defined by the lower-dimensional space R, i.e. we project p̄ onto H̄(x̄) ∩ R. Together with a damping factor β, which gradually slows down the projection and thereby increases stability, the restricted projection is computed using the following iteration:

    procedure Π_⊥|R(p̄):
        x̄_0 := p̄ ∈ R
        repeat
            x̄_{i+1} := β · Π_{H̄(x̄_i) ∩ R}(p̄) + (1 − β) · x̄_i
        until convergence

Figure 1: Illustration of the projection operator Π_⊥|t=t_0 restricted to the time slice t = t_0.

Figure 1 illustrates the restricted projection operator for the case that R is the time slice t = t_0, shown in light grey. The WLS hyperplane is fitted in 4D space-time to the weighted points; it has the space-time normal n̄. The restriction of the WLS hyperplane to the time slice is the bold line, which represents a 2D-plane in the time slice t = t_0 with normal n. The point p̄ is projected along n onto this 2D-plane, resulting in the next reference point x̄. Since p̄ has to lie in the restriction space R, this projection is well defined. In the next iteration the hyperplane is fitted with distance weights relative to the new reference point x̄; again it is restricted to the time slice, and p̄ is projected onto the resulting 2D-plane.

If the iterative projection procedure converges, it is clear that the resulting point x̄ lies within R. What remains to be shown is that x̄ is also on the dynamic surface S̄. Let us assume that the procedure converged after i iterations. For any β > 0 the convergence criterion tells us that x̄_i must have been projected onto itself and is therefore located on the restriction of its own WLS plane, H̄(x̄_i) ∩ R. This implies that x̄_i is also on the hyperplane H̄(x̄_i) itself, which is exactly definition (5) of a point on S̄, q.e.d.
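The following Python sketch shows the restricted, damped projection for the special case R: t = t_0 used in Section 4. It fits the 4D WLS hyperplane with the same weighted covariance construction as in the 3D sketch above; the Gaussian weight, its bandwidth, the damping factor and the convergence tolerance are assumptions, and the spatial part of the hyperplane normal is assumed to be non-degenerate.

```python
import numpy as np

def wls_hyperplane(xbar, points4, weights, h=5.0):
    """4D space-time WLS hyperplane (nbar, dbar), analogous to the 3D WLS plane.
    points4: (N, 4) space-time samples (p_i, t_i), weights: (N,) boundary weights."""
    r = np.linalg.norm(points4 - xbar, axis=1)
    w = weights * np.exp(-(r / h) ** 2)
    a = (w[:, None] * points4).sum(axis=0) / w.sum()     # weighted average space-time point
    c = points4 - a
    cov = (w[:, None] * c).T @ c / w.sum()               # weighted 4x4 covariance
    nbar = np.linalg.eigh(cov)[1][:, 0]                  # smallest-eigenvalue direction
    return nbar, nbar @ a

def project_restricted(p, t0, points4, weights, beta=0.5, h=5.0, tol=1e-4, max_iter=100):
    """Damped projection of the spatial point p onto the dynamic surface,
    restricted to the time slice t = t0 (the operator Pi_perp|R with R: t = t0)."""
    pbar = np.append(p, t0)              # the point to be projected, kept inside R
    x = pbar.copy()                      # current reference point x_i
    for _ in range(max_iter):
        nbar, dbar = wls_hyperplane(x, points4, weights, h)
        n_s, n_t = nbar[:3], nbar[3]     # spatial / temporal normal components
        # project pbar onto H(x) intersected with {t = t0}:
        # the spatial part must satisfy n_s . y = dbar - n_t * t0
        # (assumes n_s is not close to zero, i.e. the hyperplane is not a pure time slice)
        offset = (n_s @ pbar[:3] - (dbar - n_t * t0)) / (n_s @ n_s)
        y = np.append(pbar[:3] - offset * n_s, t0)
        x_new = beta * y + (1.0 - beta) * x              # damped update of the reference point
        if np.linalg.norm(x_new - x) < tol:
            return x_new[:3]
        x = x_new
    return x[:3]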

4 Dynamic Meshing

Hoppe et al. [15] introduced the atomic connectivity operations used by most dynamic meshing approaches: edge-collapse, edge-split and edge-flip. They used them in the context of mesh optimization. Lachaud and Montanvert [20] extended Hoppe's basic operations with operations that merge and split surface components and therefore allow changes of the topology. They also propose the use of √3-subdivision with successive edge-flips to increase the mesh resolution globally. McInerney and Terzopoulos [25] propose a dynamic surface mesh called T-Snakes, whose connectivity is adapted based on a surrounding regular grid. T-Snakes also allow changes in topology.

Kobbelt et al. [19] make use of the three operations proposed by Hoppe et al. in a multi-resolution framework for meshes with changing connectivity but identical topology. First, all edges are adjusted to lengths within an interval [l_min, l_max] by applying edge-collapse and edge-split operations. In a second update stage, edges are flipped in order to bring the vertex valences as close to six as possible, i.e. an edge-flip is performed whenever it reduces the sum of squared differences between the vertex valences and the regular valence six.

Zhukov et al. [36] use the same update criteria for their deformable model. None of these approaches allows changes of the topology. Cheng et al. [7] base dynamic meshing on the skin surface by Edelsbrunner [9]. The skin is defined by a finite set of control spheres as the envelope of an infinite number of convex combinations of the spheres. Edge-collapse and edge-split operations are used to adjust the vertex density, which is given by a sampling of the skin according to the local curvature with a maximal meshing error ε.

4.1 Meshing the First Frame

There are different possibilities to create a mesh for the first frame. We chose a marching cubes approach, making use of the implicit surface definition given in equation (4). The advantage of the marching cubes algorithm is its ability to handle surfaces of arbitrary topology. To improve the mesh quality we extracted the first frame at high resolution and simplified it using the edge-collapse based mesh simplification approach proposed by Garland and Heckbert [10].

As input to the marching cubes algorithm we used a signed distance function derived from equation (4). The distance g(x) can be defined directly, up to its sign, as

    g(x) := ± | n(x)^t x − d(x) |.                (6)

The sign cannot be determined by the WLS fitting procedure alone, as both n and −n are solutions of the normal equations.


Figure 2: Calculation of the consistent normal orientation. a) The neighborhood and the preliminary normal. b) The transformation onto an axis along the normal direction; the least-squares fit is shown as an arrow. c) A positive slope of the least-squares fit results in a switch of the normal direction.

A consistent orientation of the normals can be achieved in our case by taking into account that in our MR images the insides of the heart chambers are regions of higher image intensity. Therefore, the correct normal must be oriented from the region of high intensity towards the region of low intensity, as shown in Figure 2 c). Again we use a voting scheme for a robust determination of the correct sign of the distance function g evaluated at the reference point x. First (Figure 2 a) we assume the normal n from the plane fitting procedure to be correct. Then we compute the signed distances of all voxel locations in the neighborhood of x and construct a 2D diagram of the intensity values of the voxels over their signed distances, as shown in Figure 2 b). If the normal is oriented correctly, the diagram should show a decreasing function, which can be checked with the slope of a least-squares fitted linear function, as illustrated by the arrow in Figure 2 b). Here the slope is positive, and therefore the normal must be oriented in the opposite direction, as shown in c).
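A minimal sketch of this sign determination, assuming (n, d) come from the WLS fit sketched in Section 3.1 and the volume is indexed as volume[z, y, x]; the neighborhood radius and the integer rounding of the reference point are assumptions.

```python
import numpy as np

def orient_normal(n, d, x, volume, radius=3):
    """Flip (n, d) if the image intensity increases along n around x.
    Inside (blood) is brighter than outside (tissue), so the intensity must
    decrease in normal direction for a correctly oriented boundary normal."""
    offs = np.stack(np.meshgrid(*([np.arange(-radius, radius + 1)] * 3),
                                indexing="ij"), axis=-1).reshape(-1, 3)
    vox = np.rint(x).astype(int) + offs                     # voxel positions (x, y, z)
    upper = np.array(volume.shape)[::-1] - 1                # volume indexed as [z, y, x]
    vox = np.clip(vox, 0, upper)
    s = vox @ n - d                                         # signed distances to H(x)
    intens = volume[vox[:, 2], vox[:, 1], vox[:, 0]].astype(float)
    slope = np.polyfit(s, intens, 1)[0]                     # least-squares line: intensity over distance
    if slope > 0:                                           # intensity increases along n -> flip
        return -n, -d
    return n, d
```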

The marching cubes algorithm is performed on a region of interest selected by a bounding box provided by the user. The extracted mesh is first broken down into its connected components. Usually, the component of interest is the one consisting of the largest number of triangles.


4.2 Time Evolution of Vertex Locations

The positions of the mesh vertices must be located on the 3-manifold in space-time. Given a mesh m_{t_i} at time t = t_i, to which positions p_v should the vertices be moved at time t = t_{i+1}? A simple approach would be to just transform the 3D position p̄_v = (p_v, t_i) of a vertex v to (p_v, t_{i+1}) and to perform a projection as defined in Section 3.3, starting at the space-time point (p_v, t_{i+1}) and converging to a vertex position p̄'_v = (p'_v, t_{i+1}).

However, this approach does not make use of the dynamic surface information that is available. A better idea is to use the space-time information at p̄_v = (p_v, t_i) to make a prediction of the probable position of the vertex at time t = t_{i+1} and to perform a projection afterwards, which is likely to converge the faster the better the prediction is.

Our approach for making a prediction is visualized in Figure 3 for a vertex point p̄_v = (p_v, t_i) on the dynamic surface at t = t_i. The hypersurface S(t = t_i) at t = t_i is shown as a dark curved line. We assume that the motion of a vertex is linear with respect to the time domain. The idea is to calculate the WLS hyperplane H̄, framed in grey, at p̄_v = (p_v, t_i) and to make use of the fact that it is close to the tangent hyperplane of the dynamic surface, since p̄_v = (p_v, t_i) is a point on the surface. If we project the unit vector in time direction onto this tangent hyperplane, we obtain a vector d̄, as visualized in the figure. Following the vector d̄ to the intersection with the hyperplane at t = t_{i+1} yields a prediction for the next position of the vertex. The subsequent projection then yields the actual position p̄'_v = (p'_v, t_{i+1}) on the 3-manifold.

Figure 3: Space-time view of the prediction operation. The 3-manifold is shown in dark grey and the hypersurface S(t = t_i) as the dark curved line. The tangent plane is framed in light grey, while the prediction vector d̄ is drawn bold.
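A sketch of the prediction step, reusing the wls_hyperplane and project_restricted helpers from the Section 3.3 example; the tangent hyperplane is assumed not to be parallel to the time axis, as noted in the comment.

```python
import numpy as np

def predict_vertex(p_v, t_i, t_next, points4, weights, h=5.0):
    """Linear space-time prediction of a surface vertex, followed by the
    restricted projection onto the time slice t = t_next."""
    xbar = np.append(p_v, t_i)
    nbar, _ = wls_hyperplane(xbar, points4, weights, h)   # ~ tangent hyperplane normal at (p_v, t_i)
    e_t = np.array([0.0, 0.0, 0.0, 1.0])
    d_vec = e_t - (e_t @ nbar) * nbar                     # unit time vector projected into the tangent hyperplane
    d_vec *= (t_next - t_i) / d_vec[3]                    # scale so the step ends on the slice t = t_next
                                                          # (assumes d_vec[3] != 0, i.e. the hyperplane
                                                          #  normal is not purely temporal)
    p_pred = (xbar + d_vec)[:3]                           # predicted spatial position
    return project_restricted(p_pred, t_next, points4, weights, h=h)
```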

4.3 Time Evolution of Mesh Connectivity

In this work we follow an approach similar to that of Zhukov et al. [36]. After providing an initial mesh, we maintain the mesh quality in the subsequent frames by applying the following operations where necessary:

1. If the length of an edge drops below a threshold value l_min, it is collapsed, provided the mesh connectivity allows a collapse.

2. If the length of an edge exceeds a threshold value l_max, it is split by midpoint insertion.

3. While there are vertices whose valence differs from the optimum of 6, edge flips are performed until the sum Σ_i (valence(v_i) − 6)² has reached a minimum.

4. After applying all dynamic meshing operations, all vertices are projected back onto the surface.

This approach keeps the mesh quality within reasonable limits. The constants l_min and l_max are initialized from statistics of the edge lengths of the initial mesh, i.e. l_min = c_min · l_average and l_max = c_max · l_average with the constants c_min = 0.25 and c_max = 1.5.
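A sketch of one such maintenance pass. The mesh API used here (edges(), length(), is_collapsible(), collapse(), split(), flip(), flip_affected_vertices(), valence(), position(), set_position()) is hypothetical; the pass only illustrates the order of the four steps listed above.

```python
def maintain_mesh(mesh, project, l_average, c_min=0.25, c_max=1.5):
    """One connectivity-maintenance pass per frame (hypothetical mesh API)."""
    l_min, l_max = c_min * l_average, c_max * l_average

    # 1. + 2. adjust edge lengths by collapses and midpoint splits
    for e in list(mesh.edges()):
        if mesh.length(e) < l_min and mesh.is_collapsible(e):
            mesh.collapse(e)
        elif mesh.length(e) > l_max:
            mesh.split(e)

    # 3. flip edges while the valence energy  sum_i (valence(v_i) - 6)^2  decreases
    def valence_energy(vertices):
        return sum((mesh.valence(v) - 6) ** 2 for v in vertices)

    improved = True
    while improved:
        improved = False
        for e in list(mesh.edges()):
            ring = mesh.flip_affected_vertices(e)   # the four vertices touched by a flip (hypothetical)
            before = valence_energy(ring)
            mesh.flip(e)
            if valence_energy(ring) >= before:      # flip did not help: undo it
                mesh.flip(e)
            else:
                improved = True

    # 4. project all vertices back onto the (dynamic) surface
    for v in mesh.vertices():
        mesh.set_position(v, project(mesh.position(v)))
```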

During the time propagation, as vertices are moved individually, triangles can flip and change their normal orientation, which results in invalid, twisted mesh connectivities. These triangle flips can be avoided either by using smaller temporal sampling steps, thus preventing them from happening at all, or by detecting flipped triangles and adapting the connectivity consistently. We have chosen the first alternative; the second is left for future work.

5 Application to 4D-MRI

5.1 Data Acquisition and Pre-Processing

25 frames of 25 slices were taken of the cardiac cycle of a 26-year-old male proband (cardiac data provided by courtesy of the Department of Radiology of the University Hospital of Tübingen). Each slice had an in-plane resolution of 156×192 pixels with a pixel spacing of 1.67 mm and a slice thickness of 5 mm. The slices were acquired without gaps, perpendicular to the central long axis of the heart. The data were prepared by assembling the slices according to their spatial and temporal locations into a regular 4D image.


5.2 Slice Enrichment

Since the resolution in the direction of the slice normal is less than half of the resolution in the other two directions, an interpolation step that introduces additional slices was necessary. Because the usual techniques of linear, sinc or spline interpolation did not yield satisfying results, an optical-flow-based interpolation scheme was chosen. In this approach, the optical flow between two neighboring slices is calculated according to Bergen et al. [5], resulting in two dense disparity maps with a vector for each pixel pointing to the pixel with the most probable correspondence in the neighboring slice. The interpolation method is visualized in Figure 5. Interpolation using optical flow is done by introducing a new image of identical size between the two slices. We then warp the image according to the disparity vectors given by the optical flow field and rescale the image. This method produces good results without over- or undersampling artifacts.
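A sketch of the warp-and-blend step, assuming both dense flow fields are already given (the hierarchical flow estimation of Bergen et al. [5] is not shown) and that the flow at (y, x) points from that pixel to its correspondence in the other slice. The half-flow backward-warping approximation and the equal blend are assumptions, not the authors' exact scheme.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def interpolate_slice(slice_a, slice_b, flow_ab, flow_ba):
    """Insert one slice halfway between slice_a and slice_b.
    flow_ab[y, x] = (dx, dy) displacement from a to its correspondence in b,
    flow_ba the reverse.  Each slice is warped halfway along its own flow
    (backward-sampling approximation) and the two warps are averaged."""
    h, w = slice_a.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    a_half = map_coordinates(slice_a,
                             [ys - 0.5 * flow_ab[..., 1], xs - 0.5 * flow_ab[..., 0]],
                             order=1, mode="nearest")
    b_half = map_coordinates(slice_b,
                             [ys - 0.5 * flow_ba[..., 1], xs - 0.5 * flow_ba[..., 0]],
                             order=1, mode="nearest")
    return 0.5 * (a_half + b_half)
```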

The slice enrichment process yields approximately isotropic voxels. Subpixel interpolation on the isotropic grid, which is necessary for the surface reconstruction process, is done with quadrilinear interpolation for performance reasons.
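For reference, quadrilinear interpolation is plain linear interpolation over the 2⁴ corners of a space-time cell. A compact sketch, assuming the 4D image is stored as a NumPy array indexed [t, z, y, x] with coordinates given in voxel units:

```python
import numpy as np

def quadrilinear(volume4d, x, y, z, t):
    """Quadrilinear (4D linear) interpolation on a regular space-time grid."""
    coords = np.array([t, z, y, x], dtype=float)
    i0 = np.floor(coords).astype(int)
    i0 = np.clip(i0, 0, np.array(volume4d.shape) - 2)    # keep i0 and i0+1 inside the volume
    f = coords - i0                                      # fractional parts in [0, 1]
    value = 0.0
    for corner in range(16):                             # the 2^4 corners of the space-time cell
        bits = np.array([(corner >> k) & 1 for k in range(4)])
        weight = np.prod(np.where(bits == 1, f, 1.0 - f))
        value += weight * volume4d[tuple(i0 + bits)]
    return value
```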

6 Conclusions and Future Work

We presented an algorithm capable of reconstructing smooth dynamic surfaces from 4D image input data. We reconstructed the surface of the left ventricle and atrium of a human heart with only little human interaction. The results are shown in Figure 6. Although the 4D-MR image data used as input are not of optimal quality, the method has proven stable. In clinical routine, however, the MR image input is expected to be of even worse quality. Future work should therefore be directed towards further improving the boundary detection. Moreover, the extraction of the first frame could be done by a method that allows more flexibility, such as an adaptive step size that reduces or increases the sampling rate according to the local curvature. Meshes of better quality would thus be obtained for the first frame, making the connectivity preprocessing step unnecessary. The remeshing step should be improved by introducing predefined quantitative surface error criteria for the collapse and splitting process. Future work should also consider connectivity-adapting schemes that allow an efficient resolution of triangles that have been inverted during the reconstruction process, as mentioned in Section 4.3. Furthermore, topology changes could be supported during the reconstruction process by searching for critical points. Last but not least, edge flips could be replaced by a sequence of insertions, vertex translations and collapses, avoiding popping effects and making the animation truly smooth.

References

[1] A. Adamson and M. Alexa. Approximating and intersecting surfaces from points. In Proceedings of the Eurographics Symposium on Geometry Processing, pages 230–239, 2003.

[2] A. Adamson and M. Alexa. On normals and projection operators for surfaces defined by point sets. In Proceedings of the Eurographics Symposium on Point-Based Graphics, pages 149–156, 2004.

[3] M. Alexa, J. Behr, D. Cohen-Or, S. Fleishman, D. Levin, and C. T. Silva. Point set surfaces. In IEEE Visualization 2001, pages 21–28, October 2001.

[4] N. Amenta and Y. J. Kil. Defining point-set surfaces. ACM Transactions on Graphics, 23(3):264–270, August 2004.

[5] J. R. Bergen, P. Anandan, K. J. Hanna, and R. Hingorani. Hierarchical model-based motion estimation. In ECCV 1992, pages 237–252, 1992.

[6] J. Canny. A computational approach to edge detection. IEEE Trans. Pattern Anal. Mach. Intell., PAMI-8:679–698, 1986.

[7] H.-L. Cheng, T. K. Dey, H. Edelsbrunner, and J. Sullivan. Dynamic skin triangulation. In Proceedings of the Twelfth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA-01), pages 47–56, New York, January 2001. ACM Press.

[8] R. Deriche. Optimal edge detection using recursive filtering. In Proceedings of the First International Conference on Computer Vision, London, 1987.

[9] H. Edelsbrunner. Deformable smooth surface design. Discrete and Computational Geometry, 21(1):87–115, January 1999.

[10] M. Garland and P. S. Heckbert. Surface simplification using quadric error metrics. Computer Graphics, 31 (Annual Conference Series):209–216, 1997.

[11] J. Haberstroh and L. Kurz. Line detection in noisy and structured background using Graeco-Latin squares. CVGIP: Graphical Models and Image Processing, 55:161–179, 1993.


[12] F. R. Hansen and H. Elliot. Image segmentation using simple Markov field models. Computer Graphics and Image Processing, 20:101–132, 1982.

[13] R. M. Haralick and L. Watson. A facet model for image data. Computer Graphics and Image Processing, 15:113–129, 1981.

[14] R. M. Haralick. Digital step edges from zero crossing of second directional derivatives. IEEE Trans. Pattern Anal. Mach. Intell., PAMI-6:58–68, 1984.

[15] H. Hoppe, T. DeRose, T. Duchamp, J. McDonald, and W. Stuetzle. Mesh optimization. Computer Graphics, 27 (Annual Conference Series):19–26, 1993.

[16] J. S. Huang and D. H. Tseng. Statistical theory of edge detection. Computer Vision, Graphics, and Image Processing, 43:337–346, 1988.

[17] M. H. Hueckel. An operator which locates edges in digitized pictures. Journal of the ACM, 18:113–125, 1971.

[18] R. Kirsch. Computer determination of the constituent structure of biological images. Computers and Biomedical Research, 4:314–328, 1971.

[19] L. P. Kobbelt, T. Bareuther, and H.-P. Seidel. Multiresolution shape deformations for meshes with dynamic vertex connectivity. Computer Graphics Forum, 19(3):249–260, August 2000.

[20] J.-O. Lachaud and A. Montanvert. Deformable meshes with automated topology changes for coarse-to-fine 3D surface extraction. Medical Image Analysis, 3(2):187–207, 1999.

[21] D. Levin. The approximation power of moving least-squares. Mathematics of Computation, 67(224):1517–1531, 1998.

[22] D. Levin. Mesh-independent surface interpolation. In Geometric Modeling for Scientific Visualization, pages 37–49. Springer-Verlag, 2003.

[23] B. S. Manjunath and R. Chellappa. A unified approach to boundary perception: edges, textures and illusory contours. IEEE Trans. Neural Networks, 4:96–108, 1993.

[24] D. Marr and E. Hildreth. Theory of edge detection. Proceedings of the Royal Society of London B, 207:187–217, 1980.

[25] T. McInerney and D. Terzopoulos. T-snakes: Topology adaptive snakes. Medical Image Analysis, 4(2):73–91, 2000.

[26] G. Medioni, M.-S. Lee, and C.-K. Tang. A Computational Framework for Segmentation and Grouping. Elsevier, 2000.

[27] N. E. Nahi and T. Assefi. Bayesian recursive image estimation. IEEE Trans. Comput., 7:734–738, 1972.

[28] V. S. Nalwa and T. O. Binford. On detecting edges. IEEE Trans. Pattern Anal. Mach. Intell., PAMI-6:58–68, 1984.

[29] V. S. Nalwa and T. O. Binford. On detecting edges. IEEE Trans. Pattern Anal. Mach. Intell., PAMI-8:699–714, 1986.

[30] J. M. S. Prewitt. Object enhancement and extraction. In B. S. Lipkin and A. Rosenfeld, editors, Picture Processing and Psychopictorics. Academic Press, New York, 1970.

[31] T. D. Sanger. Optimal unsupervised learning in a single-layer feedforward neural network. Neural Networks, 2:459–473, 1989.

[32] S. Sarkar and K. L. Boyer. On optimal infinite impulse response edge detection filters. IEEE Trans. Pattern Anal. Mach. Intell., PAMI-13:1154–1171, 1991.

[33] C. Shen, J. F. O'Brien, and J. R. Shewchuk. Interpolating and approximating implicit surfaces from polygon soup. In Proceedings of ACM SIGGRAPH 2004. ACM Press, August 2004.

[34] D. Stern and L. Kurz. Edge detection in correlated noise using Latin squares models. Pattern Recognition, 21:119–129, 1988.

[35] V. Torre and T. Poggio. On edge detection. IEEE Trans. Pattern Anal. Mach. Intell., PAMI-8:147–163, 1986.

[36] L. Zhukov, Z. Bao, I. Guskov, J. Wood, and D. Breen. Dynamic deformable models for 3D MRI heart segmentation. In Proceedings of SPIE Medical Imaging, 2002.



Figure 4: a) One 2D slice of the MRI data set from Section 5 with a local 2-means clustering shown with voxels framed red (low intensity) and voxels framed green (high intensity). b) Proposed edge detector with a threshold of 15; the staggered grids are merged for comparison. c) Canny edge detector with a standard derivative of 0.5, a lower threshold of 10, a higher threshold of 20 and a scaling factor of 10.


Figure 5: Interpolation between two slices a) and b) using optical-flow-based interpolation (c) and linear interpolation (d). c) shows higher contrast at the step edges between blood and tissue; d) shows a linear interpolation with blurred borders and lower contrast at the step edges.

Figure 6: The extracted mesh (red) blended with a volume rendering of the boundary detection result, shown for frames 0, 3, 6, 9, 15 and 21. Frame 6 shows the surface during the systole of the heart cycle, while frame 15 shows it during the diastole.
