
Neurocomputing 99 (2013) 98–110


Complex local phase based subjective surfaces (CLAPSS) and its application to DIC red blood cell image segmentation

Taoyi Chen a,b,*, Yong Zhang c, Changhong Wang b, Zhenshen Qu b, Fei Wang c, Tanveer Syeda-Mahmood c

a The 54th Research Institute of China Electronics Technology Group Corporation, Shijiazhuang, Hebei 050081, China
b Department of Control Science and Engineering, Harbin Institute of Technology, Harbin, Heilongjiang 150080, China
c Healthcare Informatics, IBM Almaden Research Center, San Jose, CA 95120, USA

Article info

Article history:
Received 11 October 2011
Received in revised form 12 March 2012
Accepted 12 June 2012
Communicated by Qi Li
Available online 17 July 2012

Keywords:

Cell segmentation

Subjective surfaces

Complex local phase

Red blood cell

Level set

* Corresponding author at: The 54th Research Institute of China Electronics Technology Group Corporation, Shijiazhuang, Hebei 050081, China. E-mail address: [email protected] (T. Chen).

http://dx.doi.org/10.1016/j.neucom.2012.06.015

Abstract

Differential Interference Contrast (DIC) microscopy is a common approach for studying the dynamics of cell behavior. Segmenting the shape of the erythrocyte (red blood cell) is the basis of quantitative analysis of its deformability and hence its filterability. Manual segmentation of individual cell shapes by human visual inspection requires a large amount of tedious work because it is time consuming and exhausting, which makes automatic cell image analysis essential in biology studies. In this paper, a novel level set based technique, called Complex Local Phase based Subjective Surfaces (CLAPSS), is proposed for the segmentation of DIC red blood cell microscopy images. Within the framework of a generalized version of subjective surfaces (GSUBSURF), a complex local phase based edge indicator function is introduced to replace the traditional gradient based edge detector for local image feature acquisition, which is the key to the evolution of the surface. In addition, we propose a new variation scheme for the stretching factor that achieves relatively accurate segmentation results even if the reference point is located near cell boundaries. Experiments show that the proposed method is more accurate and reliable than several existing methods.


1. Introduction

Red blood cells are the subject of some of the most important blood studies. A single drop of blood contains millions of red blood cells, which constantly travel through the human body delivering oxygen and removing waste. In Ref. [1], it was hypothesized that the filterability of blood is related to the shape of its red blood cells. Subsequently, shape variations of red blood cells have been reported to have significant relations with particular illnesses, such as Myalgic Encephalomyelitis (ME) and Multiple Sclerosis (MS).

As a non-fluorescence, interference based microscopy imaging modality, the DIC microscopy system is particularly suitable for long-term investigation of living biological specimens. This work therefore aims to design automated methods to segment cells in DIC images so that changes in cell shape and topology can be analyzed further.

Although DIC images are visually contrastive, automating cell segmentation and measurement in DIC images remains challenging, due to the following observations: (1) Most DIC red blood cell images have low contrast and variable cell shapes. (2) The dual-beam interference optics of DIC microscopes introduces non-uniform shadow-cast artifacts (uneven illumination). (3) The boundaries of some cells are vague or even missing.

A wide variety of algorithms has been proposed over the years to tackle the problem of cell image segmentation. These algorithms can be categorized as follows: traditional segmentation, graph cut based segmentation, and active contour models.

Traditional segmentation algorithms use methods such as thresholding [2,3], watershed [4,5], and edge detection [6,7]. Thresholding cannot separate cell pixels with low intensity contrast from the background, and cannot discriminate touching cells, because spatial relations are not embedded in basic thresholding techniques. The watershed algorithm regards the intensity of an image as a topographic surface and directly uses regional minima or ultimate eroded points as starting points. Although the original watershed can segment touching cells as long as the seeds are initialized properly, over-segmentation is likely to occur. Two approaches were subsequently proposed to address the over-segmentation of watershed: fragment merging [8] and marker-controlled watershed [9]. However, none of these approaches jointly analyzes all available spectral information [10]. Edge detection is built on gradient or intensity: image gradient values corresponding to significant edges are determined empirically. However, edge detection based methods are sensitive to variations in image illumination, blurring, and magnification. Since these traditional algorithms alone require significant user interaction to segment a wide range of red blood cell images, they have been combined with other segmentation techniques in hybrid schemes to minimize user intervention.

Image segmentation can also be treated as a graph cut problem [11], where a graph is constructed from the original image with a set of vertices (nodes) representing pixels and a set of edges connecting the nodes. The graph cut problem can be solved effectively using methods based on spectral graph theory [12] and combinatorial optimization theory [13,14]. However, graph cut based methods are unable to accurately segment complex cell structures where low contrast and intensity variation exist.

Active contour models, including region based [15–17] and edge based [18,19] models, require good initial contours that are close to the real boundaries and a set of well-tuned parameters; otherwise, intensity variation will lead to edge leaking or an early edge stop.

To summarize, although the aforementioned methods can generate reasonably accurate cell segmentation results for the specific applications for which they were designed, they still depend on several preprocessing or post-processing steps and need user interaction for parameter initialization.

Phase based methods use local phase information derived from the monogenic signal, which is a multidimensional extension of the analytic signal [20]. Phase information has been used in several applications in the literature, such as image registration [21] and segmentation [22]. To the best of our knowledge, however, there is little literature on DIC cell image segmentation, and the existing phase based techniques lack quantitative comparisons with classical gradient based methods in terms of segmentation accuracy.

In this paper, we present a novel level set based method, called Complex Local Phase based Subjective Surfaces (CLAPSS), to segment red blood cells from DIC microscopy images characterized by the above mentioned challenges. Neither the image itself nor its gradient information responds well to the wide and often unpredictable range of intensity changes found along red blood cell boundaries, and intensity based or gradient based local features may fail, especially in situations with a non-uniform background. We therefore propose a phase congruency based method to replace the traditional gradient based edge detector for local image feature acquisition, which is the key to the evolution of the surface. In addition, to make the method robust against off-center reference point selection, we propose a new variation scheme for the stretching factor within the framework of GSUBSURF, which helps achieve relatively accurate segmentation results even if the reference point is located near cell boundaries. In the experimental section, we show quantitative comparisons of the proposed segmentation framework with several existing methods, the robustness of the proposed stretching factor variation scheme under off-center reference point selections, and robustness against additive Gaussian noise. We also show preliminary segmentation results on another image modality, ultrasound left ventricle images, to demonstrate the potential of our method to extend to other applications.

The major contributions of the proposed method are as follows. (1) A more sensitive local image feature indicator is proposed to better represent image edges and to guide the hypersurface to the desired boundary more effectively during its evolution. (2) A new strategy for stretching factor variation is introduced to ensure the stability and graduality of the surface evolution in situations where the reference point is located near cell boundaries.

The remainder of this paper is organized as follows. Section 2 and Section 3 give overviews of the framework of subjective surfaces and of complex local phase measurement, respectively. Section 4 introduces the proposed method for DIC cell segmentation. In Section 5, we present the results obtained. Finally, we conclude this paper in Section 6.

2. Subjective surfaces

The subjective surfaces technique is particularly suitable for the segmentation of vague or even missing contours, because it allows the missing information to be reconstructed and integrated. The original subjective surface method was first proposed by Sarti et al. [23] and then extended in [24]. A generalized version of subjective surfaces applied to the segmentation of nuclei and membranes in 3-D time-lapse images of live embryos was proposed in [25]. In this section, we give overviews of the original subjective surfaces (see Section 2.1) and of a generalized version of subjective surfaces [25] (see Section 2.2).

2.1. Original subjective surfaces

The original subjective surfaces technique was proposed for detecting missing boundaries [23,24]. With this technique, a user-selected set of points from within the target is initialized and used as a reference surface. An evolution process is then applied from this reference surface based on local image features such as edges. The missing boundaries are completed by combining the results of both the level set evolution and the surface evolution.

To compute local image features, an edge indicator is introduced in the original subjective surfaces method in [23], as follows:

g = 1 / (1 + |∇(G_σ * I)| / η)    (1)

in which G_σ is a Gaussian kernel with standard deviation σ and * denotes convolution. η is related to the image contrast and acts as a scale factor by which the image gray levels are mapped into the g function. Such a local image feature indicator plays a very important role in object segmentation. Since the edge indicator is based on the image gradient, g is close to 1 in smooth regions and close to 0 where local features exist.
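For concreteness, a minimal sketch of this gradient based edge indicator is given below (Python, assuming a grayscale image array; the default σ and η follow the values later listed for the gradient based variant in Table 1, and the function name is ours):

import numpy as np
from scipy.ndimage import gaussian_gradient_magnitude

def gradient_edge_indicator(image, sigma=0.3, eta=0.1):
    # Eq. (1): g = 1 / (1 + |grad(G_sigma * I)| / eta)
    # sigma and eta default to the Table 1 values for Method 4; tune for other data.
    grad_mag = gaussian_gradient_magnitude(image.astype(float), sigma=sigma)
    return 1.0 / (1.0 + grad_mag / eta)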

To achieve edge integration for an input image I(x, y), a surface S(x, y, Φ) is defined over the two-dimensional Euclidean space. Assume an initial function Φ_0 = Φ_0(x, y); the evolution of Φ_0 along time t, Φ(x, y, t), is a hypersurface evolution based on mean curvature motion that minimizes the area of the surface. The evolution is defined by the following motion equation:

Φ_t = g H |∇Φ| + ∇g·∇Φ, with H = div(∇Φ / |∇Φ|)    (2)

in which Φ_t = ∂Φ/∂t, H is the Euclidean mean curvature, and g is the edge indicator.

The first term describes an evolution along the normal direction of the hypersurface, whose speed is controlled by both the mean curvature and the edge indicator. The second term describes an evolution along the velocity field and pushes the motion towards edges.

2.2. Generalized version of subjective surfaces

In Ref. [25], a generalized version of subjective surfaces (GSUBSURF) was proposed to extract the shapes of cell membranes and nuclei from 3-D time-lapse confocal images taken throughout early zebrafish embryogenesis.

The novelty of GSUBSURF mainly lies in two contributions compared with the original subjective surfaces [24]. One is that it assigns different weight factors (μ, ν) to the first and second terms of Eq. (2) to facilitate control of the evolution process; indeed, a higher weight on the second term ensures better clustering of different image gray levels on the object boundary and smoothness inside the object [25]. The second contribution is the introduction of a stretching factor that shifts the evolution model between two different dynamics (mean curvature flow of a level set and that of a graph).

Here is the evolution equation of GSUBSURF

Φ_t = μ g H |∇Φ|_α + ν ∇g·∇Φ, with |∇Φ|_α = √(α + |∇Φ|²)    (3)

in which |∇Φ|_α is the regularization of |∇Φ| with a stretching factor α. The mean curvature of the hypersurface, H, can be expressed with the stretching factor α as follows:

H = [(α + Φ_x²) Φ_yy + (α + Φ_y²) Φ_xx − 2 Φ_x Φ_y Φ_xy] / (α + Φ_x² + Φ_y²)^(3/2)    (4)

where Φ_x = ∂Φ/∂x, Φ_y = ∂Φ/∂y, Φ_xx = ∂²Φ/∂x², Φ_yy = ∂²Φ/∂y², and Φ_xy = ∂²Φ/∂x∂y.
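To make the roles of μ, ν and α concrete, the sketch below performs one explicit finite-difference update of the GSUBSURF flow, written in the expanded form that appears later as Eq. (9). It is only an illustrative explicit step, not the semi-implicit co-volume schemes of [37–39] that this framework actually relies on; parameter defaults follow Table 1 and the time step quoted in Section 5, and the function name is ours.

import numpy as np

def gsubsurf_step(phi, g, mu=0.1, nu=10.0, alpha=1.0, dt=0.1):
    # One explicit update of Phi_t = mu*g*H_alpha*|grad Phi|_alpha + nu*(grad g . grad Phi),
    # using the expanded numerator/denominator of Eqs. (3)-(4) (cf. Eq. (9)).
    g_y, g_x = np.gradient(g)
    phi_y, phi_x = np.gradient(phi)
    phi_yy, _ = np.gradient(phi_y)
    phi_xy, phi_xx = np.gradient(phi_x)
    num = ((alpha + phi_x**2) * phi_yy + (alpha + phi_y**2) * phi_xx
           - 2.0 * phi_x * phi_y * phi_xy)
    curvature_term = mu * g * num / (alpha + phi_x**2 + phi_y**2)
    advection_term = nu * (g_x * phi_x + g_y * phi_y)
    return phi + dt * (curvature_term + advection_term)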

3. Local image feature detection

The proposed local image feature detection is based on the monogenic signal, an approach to feature detection that has its roots in the complex local phase measurement of Morrone and Owens [26], subsequently refined in [27].

Using the over-complete log-Gabor complex wavelet transform [28], Morrone and Owens [26] proposed that local phase measurement at a point x = (x, y) can be quantified as the normalized, weighted summation of the cosine-weighted complex phase deviation from the average phase across a scales and b orientations.

The 2D phase congruency of a two-dimensional signal, i.e., an image, can be obtained at each image location x as follows:

r(x) = Σ_{O=1..b} Σ_{S=1..a} W_O(x) ⌊A_{S,O}(x) L_{S,O}(x) − T_O⌋ / ( Σ_{O=1..b} Σ_{S=1..a} A_{S,O}(x) + ε₀ )    (5)

where L_{S,O}(x) = cos(φ_{S,O}(x) − φ̄_O(x)), and O and S refer to the filter's orientation and scale, respectively. A_{S,O}(x) and φ_{S,O}(x) represent the complex amplitude and phase, respectively. T_O is a threshold acting as the expected influence of noise at a specific scale. ⌊·⌋ denotes that the enclosed quantity equals itself when its value is positive and zero otherwise. The term W_O(x) weights the frequency spread; it is used in [27] to cope with locations where the spread of filter responses is narrow (e.g., smoothed images). The numerator of the above fraction represents the total energy of the 2D signal at x. ε₀ is used to avoid an ill-conditioned local phase measure when all the Fourier amplitudes are very small.
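Given a bank of complex log-Gabor responses, Eq. (5) can be evaluated directly. The sketch below assumes the responses A_{S,O} e^{i φ_{S,O}}, the frequency-spread weights W_O and the noise thresholds T_O have already been computed (all inputs here are placeholders), and it takes the average phase φ̄_O as the phase of the responses summed over scales, which is one common choice rather than something specified in the text:

import numpy as np

def phase_congruency(responses, weights, thresholds, eps0=1e-4,
                     deviation=lambda w: np.cos(w) - np.abs(np.sin(w))):
    # responses: complex log-Gabor outputs, shape (n_scales, n_orients, H, W)
    # weights:   W_O(x), shape (n_orients, H, W); thresholds: T_O, shape (n_orients,)
    # deviation: the weighting L; default is Kovesi's cos(w) - |sin(w)| of Eq. (7)
    amp = np.abs(responses)                        # A_{S,O}(x)
    phase = np.angle(responses)                    # phi_{S,O}(x)
    mean_phase = np.angle(responses.sum(axis=0))   # phi_bar_O(x) (assumed definition)
    dev = phase - mean_phase[None]                 # phase deviation omega
    energy = amp * deviation(dev) - thresholds[None, :, None, None]
    energy = weights[None] * np.maximum(energy, 0.0)   # floor-bracket of Eq. (5)
    return energy.sum(axis=(0, 1)) / (amp.sum(axis=(0, 1)) + eps0)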

Fig. 1. Example DIC red blood cell image. (a) Original cell image, (b) segmentation result drawn by an expert and (c) histograms of pixel intensities inside and outside the object.

4. Method

The statistical character of DIC red blood cell images makes it unrealistic to incorporate region based priors. Level set segmentation frameworks that introduce a region based intensity constraint term only work when several assumptions are satisfied [29–31]: the overall image intensity distribution is assumed to be bimodal, such that it can be factored into underlying foreground and background distributions that are themselves unimodal and whose parameters can be recovered from standard parameterizations or lower-order sufficient statistics. A classification step can then produce a feature image in which pixels labeled as object lead to an expansion of the contour and pixels labeled as background lead to a contraction. Segmenting DIC red blood cell images by incorporating local intensity priors is therefore a challenging problem, because objects and background have similar histograms (see Figs. 1 and 2); the two curves in Fig. 1(c) and Fig. 2(c) overlap heavily, making intensity priors unsuitable in this case.

In recent years, researchers have also proposed enhancing the level set method with statistical shape priors to better steer the contour towards the expected shape [32–35]. Shape knowledge is highly valuable in situations where the surface is about to leak into other tissues or to exclude regions erroneously; shape priors provide constraints that help prevent such situations from occurring. Such prior shape information has also been shown to drastically improve segmentation results in the presence of noise or occlusion. A shape prior requires reference shapes of the desired objects. Given a set of reference shapes, a statistical shape dissimilarity measure is computed using approaches such as Principal Component Analysis [34] or non-parametric density estimation [30–36] to estimate the probability of a reference shape aligning with the object. DIC red blood cells have an elliptical shape with deformations, so segmentation accuracy could possibly be improved by incorporating statistical shape priors into the segmentation model.

In this section, however, we propose a complex local phase based subjective surfaces approach that uses only the existing boundary information, which keeps the unified framework much simpler. In Section 4.1, without incorporating any region or shape priors, we enhance the edge measurement to better steer the evolution of the surfaces and guide the hypersurface to the desired boundary more effectively. Once the edge indicator information is acquired, the missing-boundary completion capability of subjective surfaces completes the whole boundary detection. In Section 4.2, we introduce a new strategy for stretching factor variation, which integrates the influence of the pure level set flow and the diffusion motion and ensures the stability and graduality of the hypersurface evolution.

Numerical schemes for the discretization of subjective surfaces have been discussed in a number of references. For example, semi-implicit numerical schemes for the subjective surfaces method were introduced in [37] for the 2-D case and in [38] for the 3-D case.

Fig. 2. Example DIC red blood cell image. (a) Original cell image, (b) segmentation result drawn by an expert and (c) histograms of pixel intensities inside and outside the object.

Fig. 3. Comparisons of phase deviations for Morrone and Owens's model, Kovesi's model, Wong's model and the proposed model.


The semi-implicit numerical scheme for the GSUBSURF method was introduced in [39] and has been adopted in an image analysis chain [40] and in cell segmentation [41] for 3-D+time videos of zebrafish embryogenesis. The modifications we propose to the subjective surfaces technique do not alter the original numerical schemes introduced in [24,25].

It should also be noted that, in the framework of GSUBSURF, the added weight factors (μ, ν) strongly influence the segmentation and missing contour completion for images of early zebrafish embryogenesis. This promising property also works well in our segmentation experiments on DIC red blood cell images. We therefore adopt the same strategy of different weight factors [25] in our proposed method.

4.1. Phase deviation weighting function

Let ω represent the phase deviation between a phase angle φ_{S,O}(x) and the average phase angle φ̄_O(x):

ω = φ_{S,O}(x) − φ̄_O(x)    (6)

As in Ref. [26], the phase deviation weighting function is formulated as L(x) = cos(ω). Because it requires a significant difference between φ_{S,O}(x) and φ̄_O(x) before its value falls appreciably, the cosine of the phase deviation is a rather insensitive measure of complex local phase congruency. Subsequently, [27] proposed a more sensitive deviation function for phase measurement, modeled as follows:

L(x) = cos(ω) − |sin(ω)|    (7)

We propose a novel phase deviation weighting function based on the standard Gaussian exponential distribution, as follows:

L_N(x) = (2 / (σ_B √(2π))) exp(−ω² / (2σ_B²)) − (3/2) (|sin(ω/π)| + 1)    (8)

in which the standard deviation σ_B of the Gaussian distribution has a significant influence on the shape of the function L_N(x); we have found that σ_B = 0.3 gives a relatively good performance in measuring phase deviations.

We compare the proposed phase deviation weighting function with the Morrone and Owens model [26], the Kovesi model [27] and the Wong model [42] in Fig. 3. As can be seen, the proposed function gives a noticeably sensitive response to strong edge information, as indicated by the slope near the peak of L_N(x), so high phase coherence is highlighted and emphasized while low phase coherence is strongly suppressed [42]. Moreover, the proposed weighting function never decreases as the phase deviation |ω| decreases.

To further support this claim, in Fig. 4 we compare the feature detection and edge detection performance of Kovesi's and the proposed phase deviation models, as well as the commonly used gradient based method. Along the 1D profile of a DIC cell image indicated by the red lines, we plot the normalized gradient (Fig. 4(b)) and the two local phase measurements (Fig. 4(d) and (f)). Moreover, in Fig. 4(c), (e) and (g), the three edge indicators based on the corresponding feature measurements are also plotted. Features captured by the cross-sections include a thick ridge, a thin ridge, no boundary feature, and step changes. We have found that gradient based edge indicators perform poorly, in the sense that they give responses so sparse that the level set framework is attracted only to local minima, which may be far from the real cell boundary. On the other hand, using the proposed phase deviation method, the edge features are sharpened and enhanced for small phase deviations, while suppressed and weakened for large phase deviations (see Fig. 4(i) and (j)), giving the best edge detection performance compared with Kovesi's phase deviation model.

Given this new phase deviation weighting function, we can update Eq. (5) by replacing L(x) with L_N(x). After r is acquired, the final local edge indicator is calculated as g = 1/(1 + |r|).
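The sketch below shows how the proposed weighting (as reconstructed in Eq. (8), so its exact form is an assumption) can be dropped into the phase_congruency sketch given in Section 3 and turned into the phase based edge indicator g; the function names are placeholders:

import numpy as np

def l_proposed(omega, sigma_b=0.3):
    # Gaussian-based weighting of Eq. (8) as reconstructed; sigma_b = 0.3
    # is the value reported in the text.
    gauss = 2.0 / (sigma_b * np.sqrt(2.0 * np.pi)) * np.exp(-omega**2 / (2.0 * sigma_b**2))
    return gauss - 1.5 * (np.abs(np.sin(omega / np.pi)) + 1.0)

def phase_edge_indicator(responses, weights, thresholds):
    # r from Eq. (5) with L_N in place of L, then g = 1 / (1 + |r|)
    r = phase_congruency(responses, weights, thresholds, deviation=l_proposed)
    return 1.0 / (1.0 + np.abs(r))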

We can also see that the velocity field −∇g associated with the advection term in Eq. (2) plays an important role in the evolution. It aligns the hypersurface with the cell boundary, since it points towards the valley of g, the centerline of the cell boundary. In Fig. 5, we plot maps of the velocity field −∇g in the small box highlighted in Fig. 2(a) to compare the cell boundary alignment capability of the gradient based edge indicator, the Kovesi phase based edge indicator and the proposed phase based edge indicator. In view of the challenges mentioned in Section 1, direct application of the gradient based velocity field suffers from several disadvantages (see Fig. 5(a)). Firstly, besides the boundary, there are also local minima of g inside the cell, so g fails to push the hypersurface forward to the desired cell boundary, leading to an early edge stop. Secondly, where the cell boundary is vague, g continues to push the hypersurface outwards, resulting in boundary leaking. Thirdly, the capture range of −∇g needs to be expanded to yield a more sensitive result. As can be seen in Fig. 5(b) and (c), the two phase based velocity fields indeed overcome these disadvantages, and the proposed phase deviation model gives a more sensitive velocity field, enhanced near the real boundary and suppressed far away from it, guiding the hypersurface to the desired boundary more effectively.

Fig. 4. Comparisons of the performance of feature extraction and edge detection using the gradient based method, Kovesi's and the proposed phase deviation model, along the 1D profile indicated by the red lines. (a) Original cell image. (b) and (c) Normalized gradient and its edge indicator. (d) and (e) Local phase measurement and its edge indicator using Kovesi's phase deviation model. (f) and (g) Local phase measurement and its edge indicator using the proposed phase deviation model. (h) The 1D intensity profile of (a). (i) The 1D feature profiles of (b), (d) and (f). (j) The 1D edge indicator profiles of (c), (e) and (g). (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)

4.2. Stretching factor

Subjective surfaces models have the promising property that the only interactive part of the initialization step is the reference point selection; they do not require the initial contour needed by most other level set based segmentation frameworks. However, since we wish to avoid any user interaction and we are manipulating thousands of cells, the initialization of cell center reference points can be achieved automatically by the generalized Hough transform [43]. Each cell can then be processed with the proposed method separately from the others, making the process parallelizable. The problem with the 2-D generalized Hough transform is that it may not be able to detect a promising cell center for every cell. We therefore aim to design a new variation scheme for the stretching factor within the framework of GSUBSURF that helps achieve relatively accurate segmentation results even if the reference point is located far from the cell center, i.e., close to cell boundaries.

Fig. 5. Comparisons of the three different velocity fields −∇g in the small box in Fig. 2(a). (a) Gradient based velocity field with η = 0.1, σ = 0.3. (b) Phase based velocity field using Kovesi's phase deviation model with a = 4, b = 6. (c) Phase based velocity field using the proposed phase deviation model with a = 4, b = 6.

Fig. 6. Curves of the stretching factors in Ref. [24] (a) and the proposed model (b).

Following the analysis of the flow behavior between the two different dynamics discussed in [24], we rewrite Eq. (3), the motion equation of GSUBSURF, as follows:

Φ_t = μ g [(α + Φ_x²) Φ_yy + (α + Φ_y²) Φ_xx − 2 Φ_x Φ_y Φ_xy] / (α + Φ_x² + Φ_y²) + ν (g_x Φ_x + g_y Φ_y)    (9)

in which g_x = ∂g/∂x and g_y = ∂g/∂y. When the spatial derivatives Φ_x, Φ_y ≫ α, Eq. (9) is approximated by

Φ_t ≈ μ g [Φ_x² Φ_yy + Φ_y² Φ_xx − 2 Φ_x Φ_y Φ_xy] / (Φ_x² + Φ_y²) + ν (g_x Φ_x + g_y Φ_y)    (10)

Thus, the motion equation becomes the geodesic active contour flow [18]. The advantage of using the geodesic active contour flow is that missing edge information can be propagated by connecting existing edges with minimum length.

In regions where Φ_x, Φ_y ≪ α, Eq. (9) simplifies to

Φ_t ≈ μ g (Φ_xx + Φ_yy) + ν (g_x Φ_x + g_y Φ_y)    (11)

This is a pure diffusion equation, and this process smooths and flattens the points inside the object.

In Ref. [24], the authors raise a problem involving off-center reference point selection. If we consider a strongly off-center reference point, the adjacent structures near it may become predominant, resulting in a wrong segmentation (see [24], Fig. 7). These situations are mainly caused by the level set flow producing a false surface gradient or by the pure diffusion process being too strong.

Thus, it is clear that the position of the reference point influences the segmentation result, and this reveals how important the stretching factor is in the hypersurface evolution, as it shifts the evolution model between the two dynamics (pure level set flow and pure diffusion process).

To solve the problem of the off-center reference point, the authors of Ref. [25] introduce a stretching factor α that shifts the evolution model from the mean curvature flow of a graph (α = 1) to the mean curvature flow of a level set (α = 0). This solution means that GSUBSURF is designed so that the surface moves as a pure diffusion flow at the early stage and as a pure level set motion at the later stage. The authors also propose that the number of iterations for α = 1 should be smaller than that for α = 0, since the diffusive process is faster than the pure level set motion. However, how to quantify the ratio of the number of iterations for α = 1 to that for α = 0 remains unknown. Alternatively, an adaptive selection of the modeling parameter ε in the subjective surface method is addressed in Ref. [44]; there, the parameter is chosen according to the gradients to balance the level set and mean curvature principles.

In practice, the ratio of the number of iterations for α = 1 to that for α = 0 is chosen empirically, and an improper choice may lead to an inadequate diffusion process followed by an excessive level set flow, or vice versa. Moreover, the stretching factor changes abruptly, switching directly from α = 1 to α = 0 at the transition point, which may lead to an unstable evolution.

To remedy the drawback of the off-center reference point, at least partially, we propose a new variation scheme for the stretching factor during the evolution, which integrates the influence of the pure level set flow and the diffusion motion and ensures the stability and graduality of the hypersurface evolution.

Define T_i (1 ≤ T_i ≤ T) as the current iteration index, in which T (T ≠ 1) denotes the total number of iterations set in advance. Then r, defined as the ratio of the iterations already spent to (T − 1), is formulated as follows:

r = (T_i − 1) / (T − 1)    (12)

Fig. 7. Segmentation results of DIC red blood cell image #4 (isolated cell case). (a) Original image. (b) Original image after contrast enhancement. (c) and (d) Phase map using Kovesi's and proposed phase deviation weighting function, respectively. (e) Edge indicator using gradient based method. (f) and (g) Edge indicators of (c) and (d), respectively. (h) Initialization contour for Methods 1–3, respectively. (i)–(n) Segmentation contour results for Methods 1–6, respectively. (o) The surface at the end of evolution and a selection of a level set, indicated by red line. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)

Table 1. Parameter settings of Methods 1–7.

Approach              Parameter setting
Method 1              c = −1
Method 2              ν = 0.003, μ = 1, σ = 7
Method 3              Feature type: Yezzi; neighborhood type: circle; λ = 0.2, r = 9
Method 4              η = 0.1, σ = 0.3, μ = 0.1, ν = 10
Methods 5, 6 and 7    a = 4, b = 6, μ = 0.1, ν = 10

We plot the stretching factor α as a function of the ratio r in Fig. 6(a) for the variation strategy of the stretching factor in [25]. A time point r_0 (r_0 < 0.5), also called the transition point, must be set so that the number of iterations of the diffusion motion is smaller than that of the pure level set flow. The stretching factor variation proposed in [25] is modeled as

α = 1 if r ≤ r_0, and α = ε_α if r > r_0    (13)

in which ε_α is a very small value used to avoid ill-conditioning in the calculation of the mean curvature H.

Our strategy is to integrate the influence of the pure level set flow and the diffusion motion, making the diffusion motion predominant at the early stage and secondary at the later stage. This integration makes the whole evolution of the surfaces smoother, without any jump in the evolution speed, so the influence of the off-center reference point problem is weakened. We define the proposed stretching factor variation by

α = 1 − √r    (14)

From Fig. 6(b), we see that the absolute value of the slope of this function decreases monotonically. This leads to the effect that the pure diffusion process and the pure level set process evolve simultaneously, and that the level set flow evolves over a longer period than the diffusion motion, giving more influence to the local sharpening of discontinuities and to missing boundary completion.
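The two schedules can be written compactly as functions of the iteration ratio r of Eq. (12); the sketch below contrasts the step schedule of Eq. (13) with the proposed smooth schedule of Eq. (14). The concrete value of ε_α is an assumption, r_0 = 0.4 follows the setting reported in Section 5, and the function names are ours.

import numpy as np

def iteration_ratio(t_i, total):
    # Eq. (12): r = (T_i - 1) / (T - 1)
    return (t_i - 1.0) / (total - 1.0)

def alpha_step(r, r0=0.4, eps_alpha=1e-4):
    # Eq. (13): alpha = 1 for r <= r0, a small eps_alpha afterwards
    return np.where(r <= r0, 1.0, eps_alpha)

def alpha_proposed(r):
    # Eq. (14): alpha = 1 - sqrt(r), decaying smoothly from 1 to 0
    return 1.0 - np.sqrt(r)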

5. Experimental results

To evaluate the effectiveness of the proposed method, we use the Human Red Blood Cells DIC image set available from the Broad Bioimage Benchmark Collection (http://www.broad.mit.edu/bbbc) for the segmentation of DIC red blood cells. In this section, we illustrate the ability of the proposed approach in terms of cell segmentation accuracy (Section 5.1), robustness to off-center reference point selection (Section 5.2), and robustness against additive Gaussian noise (Section 5.3).

For brevity, we summarize the methods involved in this section:

• Method 1: Caselles's method [18].
• Method 2: Li's method [17].
• Method 3: Lankton's method [16].
• Method 4: GSUBSURF, with the gradient based edge indicator and the original stretching factor variation scheme.
• Method 5: GSUBSURF, with Kovesi's phase deviation weighting function and the proposed stretching factor variation scheme.
• Method 6: GSUBSURF, with the proposed phase deviation weighting function and the proposed stretching factor variation scheme, also referred to as CLAPSS.
• Method 7: GSUBSURF, with the proposed phase deviation weighting function and the original stretching factor variation scheme.

Methods 1–6 are employed for the validation of segmentation accuracy, and the comparison results for the off-center reference point selection test are generated between Method 6 and Method 7.

Fig. 8. Segmentation results of DIC red blood cell image #10 (touching cell case). (a) Original image. (b) Original image after contrast enhancement. (c) and (d) Phase map using Kovesi's and proposed phase deviation weighting function, respectively. (e) Edge indicator using gradient based method. (f) and (g) Edge indicators of (c) and (d), respectively. (h) Initialization contour for Methods 1–3, respectively. (i)–(n) Segmentation contour results for Methods 1–6, respectively. (o) The surface at the end of evolution and a selection of a level set, indicated by red line. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)

Table 1 lists the detailed parameter settings of Methods 1–7. The definitions of the relevant parameters for Methods 1–3 in Table 1 can be found at www.creatis.insalyon.fr/bernard/creaseg/userguide.html. We set the time step Δt = 0.1 for Methods 1–7 and 100 iterations for Methods 1–3. Following [25], we set 10,000 iterations for Methods 4–7. Moreover, r_0 = 0.4 is set for Methods 4 and 7.

5.1. Segmentation accuracy validation

We evaluate segmentation accuracy by qualitatively and quantitatively comparing the results of CLAPSS (Method 6) with the original GSUBSURF (Method 4), Kovesi's phase deviation weighting model (Method 5) and the geodesic active contour model (Method 1), as well as two recently proposed region based level set methods (Methods 2 and 3), since the methods mentioned above have all obtained relatively high success rates in segmentation accuracy and are thus comparable. Specifically, a total of ten cell images are tested, comprising both normal and abnormal cell shapes, isolated and touching cell cases, and varying levels of contrast enhancement; each image contains a single cell to be segmented.

Two similarity criteria, Dice and Hausdorff Distance, are employed for quantitative comparisons against the available ground truth. They are defined as follows: Dice = 2|A ∩ B| / (|A| + |B|), where A and B are the reference mask region and the result mask region of an algorithm, respectively; Hausdorff Distance = max(D(A, B), D(B, A)), where A and B are the reference contour and the result contour of an algorithm, and D(A, B) = max_{x∈A} (min_{y∈B} ||x − y||).
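These two measures are straightforward to compute from binary masks and contour point sets; a minimal sketch (with hypothetical input conventions: boolean masks and N×2 arrays of contour coordinates) is given below:

import numpy as np
from scipy.spatial.distance import cdist

def dice(mask_ref, mask_seg):
    # Dice = 2|A ∩ B| / (|A| + |B|) for binary masks A, B
    inter = np.logical_and(mask_ref, mask_seg).sum()
    return 2.0 * inter / (mask_ref.sum() + mask_seg.sum())

def hausdorff(contour_ref, contour_seg):
    # max( max_{x in A} min_{y in B} ||x - y||, max_{y in B} min_{x in A} ||x - y|| )
    d = cdist(contour_ref, contour_seg)
    return max(d.min(axis=1).max(), d.min(axis=0).max())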

Representative segmentation results for cells #4 and #10 using Methods 1–6 are provided in Fig. 7 and Fig. 8, respectively. Note that, for better visual inspection, the original images after contrast enhancement are shown in Fig. 7(b) and Fig. 8(b), respectively, and in the subsequent segmentation results. For fairness, we use the same initialization contour for Methods 1–3, as indicated in Fig. 7(h) and Fig. 8(h).

It should be noted that, compared with the phase based local edge indicators (see Fig. 7(f) and (g), as well as Fig. 8(f) and (g)), the edge representation based on the image gradient (see Fig. 7(e) and Fig. 8(e)) gives confusing and unreliable results. Due to the non-uniform shadow-cast artifacts (uneven illumination) caused by the imaging system, both high and low values exist in the edge representations in Fig. 7(e) and Fig. 8(e), which prevents the surface from reaching the real boundaries.

Fig. 9. Comparison results of Dice (a) and Hausdorff Distance (b) for ten testing cell images.

Moreover, the portions of cell boundaries segmented by Method 1 in both Fig. 7(i) and Fig. 8(i) show a small distance deviation from the real cell boundary, mainly due to the double contours (the internal and external sides of the cells) produced by the gradient based edge indicators. This does not occur in the gradient based Method 4, because the edge detection model in Method 4 has been improved to produce a single contour (see [25], Eq. (2)).

As for the comparison of Kovesi's and the proposed phase deviation weighting models, Fig. 7(n) and Fig. 8(n) show moderately accurate results that better estimate the real position of the cell boundaries, as confirmed by visual inspection. This is because, in the proposed phase deviation method, the edge features are sharpened and enhanced for small phase deviations, while suppressed and weakened for large phase deviations, so the hypersurfaces are attracted to more accurate edge locations. The overall best cell segmentation results are still achieved by Method 6 under visual inspection, due to the more sensitive edge extraction property of the proposed phase deviation weighting model, as well as the missing boundary completion capability of subjective surfaces.

Fig. 9 presents the detailed statistical results. Method 5 and Method 6 show better segmentation accuracy thanks to phase based local feature detection. As shown in Fig. 9(a), in terms of Dice, our method consistently achieves the highest scores for all ten cell cases (Dice = 1 corresponds to the best case). Examining Fig. 9(b), our method does not yield the smallest Hausdorff Distance for test case #1, but the best segmentation results are still obtained on average by our method, even though the cell case group has varying shapes and levels of contrast enhancement (Hausdorff Distance = 0 corresponds to the best case).

Fig. 10 shows five cases, each containing multiple DIC red blood cell images, and their corresponding segmentation results using Method 6. Moderately accurate estimates of the real position of the cell boundaries are still achieved, even though strong variations within the cell boundaries, changing illumination conditions across images and some clutter make the segmentation difficult.

Fig. 10. Segmentation results of DIC red blood cell images from the Broad Bioimage Benchmark Collection using Method 6. Left column: original cell images; right column: corresponding segmentation results.

We implemented Methods 1–3 and Method 6 in Matlab to compare computing time. For a sample image of size 60 × 60, the average computing times for Methods 1–3 and Method 6 are 1.0 s, 1.6 s, 1.5 s and 21 s, respectively, on a computer with a 2.60 GHz Intel Core 2 Duo CPU and 2 GB of memory. In general, our algorithm is measurably slower than the other three algorithms, although it achieves higher accuracy (Fig. 9).

5.2. Off-center reference point selection

To obtain an objective measure of the performance of the proposed stretching factor variation scheme, we perform cell segmentation under strongly off-center reference point selections. With a strongly off-center reference point, the adjacent structures near it may become predominant, resulting in a wrong segmentation. In our experiment we therefore choose cell image #8 (plotted in Fig. 11 after contrast enhancement for better visualization), as it includes touching cells and adjacent structures around the object. We choose three different locations, points 1, 2 and 3, as initial reference points. These three points have varying Euclidean distances from the center of the object cell: point 1 is almost at the center of the object cell, and point 3 is located very near the boundary of the object cell. The black line indicates the major axis, and the three points all lie on it, assuming the object cell has the shape of a standard ellipse.

We compare M6 and M7 for segmentation accuracy at the different reference point locations, since they share the same level set evolution framework except for the variation scheme of the stretching factor. The Dice measurements are summarized in Table 2; the value 0.870 matches the value indicated in Fig. 9(a) and is a bit larger than that obtained with M7 (0.853). As the Euclidean distance increases, the Dice measurements of both methods decrease. More specifically, with the proposed stretching factor variation scheme, Method 6 shows more robust behavior, since its Dice measurement decreases more slowly than that of Method 7.

Fig. 11. Selections of initial reference points with different locations in cell #8.

Table 2. Dice measurement of cell #8 for Method 6 and Method 7 at different reference points.

       Point 1    Point 2    Point 3
M7     0.853      0.730      0.593
M6     0.870      0.772      0.665


5.3. Robustness to noise

To evaluate the behavior of the proposed CLAPSS in the presence of noise, we perform cell segmentation on images from the Human Red Blood Cells DIC image set after adding additive Gaussian noise.

For this purpose we randomly select five cell images (including both normal and abnormal cell shapes, isolated and touching cell cases, and varying levels of contrast enhancement), each containing a single cell to be segmented. Since the original cell images have very low contrast, and the intensity levels of the background and foreground are both distributed within a very narrow range (see Fig. 1(c) and Fig. 2(c)), the images are highly sensitive to added Gaussian noise. Considering intensity levels from 0 to 1, we corrupt the five images with zero-mean additive Gaussian noise with variance ranging from 0.0001 to 0.001.

Fig. 12. Noise influence on the average precision, recall and F-score for five randomly selected DIC red blood cell images using CLAPSS.

Fig. 13. Results of the robustness procedure. Left: processed images. Right: segmentation result. (a) Original data (ground truth). (b) Noisy data (Variance 0.0001). (c) Noisy data (Variance 0.0005). (d) Noisy data (Variance 0.0010).

To obtain an objective measure of segmentation performance for our proposed approach, we use precision (or positive predictive value) and recall (or sensitivity) to evaluate the quality of the results. Precision and recall are defined as

precision = |A ∩ B| / |B|,    recall = |A ∩ B| / |A|    (15)

where A represents the reference mask region and B denotes the result mask region of an algorithm.

Precision and recall can be combined into a single measurement of segmentation accuracy by taking their harmonic mean, resulting in the F-score:

F = 2 × precision × recall / (precision + recall)    (16)
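Computed over binary masks, Eqs. (15) and (16) amount to a few lines; a minimal sketch (input names are placeholders) is:

import numpy as np

def precision_recall_fscore(mask_ref, mask_seg):
    # Eqs. (15)-(16): precision = |A ∩ B|/|B|, recall = |A ∩ B|/|A|,
    # F = 2 * precision * recall / (precision + recall)
    inter = np.logical_and(mask_ref, mask_seg).sum()
    precision = inter / mask_seg.sum()
    recall = inter / mask_ref.sum()
    return precision, recall, 2.0 * precision * recall / (precision + recall)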


Since we are testing the robustness of our method against additive Gaussian noise, all parameters are kept unchanged, as set in Table 1. Finally, we compute the segmentation accuracy measurements mentioned above for all five images and obtain their average values for each noise level.
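The noise corruption used in this test is straightforward to reproduce; a minimal sketch (the function name is ours, and clipping back to [0, 1] is an assumption, since the text only states that intensities lie in that range) is:

import numpy as np

def add_gaussian_noise(image, variance, rng=None):
    # Zero-mean additive Gaussian noise on an intensity range of [0, 1],
    # with variance between 0.0001 and 0.001 as in these experiments.
    rng = np.random.default_rng() if rng is None else rng
    noisy = image + rng.normal(0.0, np.sqrt(variance), size=image.shape)
    return np.clip(noisy, 0.0, 1.0)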

Fig. 12 shows the performance curves for cell shape segmentation using the metrics of precision, recall and F-score, for varying noise variances. We take the values of precision, recall and F-score at the noise level with variance = 0.0001 (0.8838, 0.985 and 0.9299) as the corresponding reference values. As the variance of the added noise increases, the largest variation amplitude is only 5.6% above the reference value for precision (variance = 0.0002), 5.64% below the reference value for recall (variance = 0.0008), and 2.72% below the reference value for F-score (variance = 0.001).

Fig. 13 shows the cell segmentation results for one of the five images with different levels of added noise. The cell shape detections remain considerably stable, even when the added noise leaves an image with limited information to the human eye. These experimental results demonstrate the strong robustness of the proposed complex local phase based feature detection in the CLAPSS framework against additive Gaussian noise.

6. Conclusions

In this paper, we propose a novel DIC red blood cell image segmentation method, called CLAPSS, which embeds complex local phase information and a new variation scheme for the stretching factor. A local image feature indicator is proposed to better represent the object boundary; it guides the hypersurface to the desired boundary more effectively during its evolution. The new variation scheme for the stretching factor helps ensure the stability and graduality of the surface evolution in situations where the reference point is located near cell boundaries. The experimental results show that the proposed method yields more accurate and reliable segmentation results than several typical state-of-the-art methods. In future work, a better numerical scheme will be designed to speed up the computation within the proposed framework of the generalized version of subjective surfaces.

References

[1] L.O. Simpson, Blood from healthy animals and humans contains nondiscocytic erythrocytes, Br. J. Haematol. 73 (1989) 561–564.
[2] N. Otsu, A threshold selection method from gray-level histograms, IEEE Trans. Syst. Man Cybern. 9 (1979) 62–66.
[3] K. Li, T. Kanade, Nonnegative mixed-norm preconditioning for microscopy image segmentation, presented at the Proceedings of the 21st International Conference on Information Processing in Medical Imaging, Williamsburg, Virginia, 2009.
[4] X. Yang, et al., Nuclei segmentation using marker-controlled watershed, tracking using mean-shift, and Kalman filter in time-lapse microscopy, IEEE Trans. Circuits Syst. 53 (2006) 2405–2414.
[5] O. Debeir, et al., Phase contrast image segmentation by weak watershed transform assembly, presented at the 5th IEEE International Symposium on Biomedical Imaging: From Nano to Macro, Paris, 2008.
[6] D. Marr, E. Hildreth, Theory of edge detection, Proc. R. Soc. London, Ser. B 207 (1980) 187–217.
[7] J. Canny, A computational approach to edge detection, IEEE Trans. Pattern Anal. Mach. Intell. 8 (1986) 679–698.
[8] P.S. Umesh Adiga, B.B. Chaudhuri, An efficient method based on watershed and rule-based merging for segmentation of 3-D histo-pathological images, Pattern Recognit. 34 (2001) 1449–1458.
[9] R.C. Gonzalez, R.E. Woods, Digital Image Processing, Prentice Hall, 2002.
[10] P. Quelhas, et al., Cell nuclei and cytoplasm joint segmentation using the sliding band filter, IEEE Trans. Med. Imaging 29 (2010) 1463–1473.
[11] H.-F. Yang, Y. Choe, Cell tracking and segmentation in electron microscopy images using graph cuts, presented at the Proceedings of the Sixth IEEE International Conference on Symposium on Biomedical Imaging: From Nano to Macro, Boston, Massachusetts, USA, 2009.

[12] J. Shi, J. Malik, Normalized cuts and image segmentation, IEEE Trans. Pattern Anal. Mach. Intell. 22 (2000) 888–905.
[13] V. Kolmogorov, R. Zabih, What energy functions can be minimized via graph cuts? IEEE Trans. Pattern Anal. Mach. Intell. 26 (2004) 147–159.
[14] Y. Boykov, G. Funka-Lea, Graph cuts and efficient N-D image segmentation, Int. J. Comput. Vision 70 (2006) 109–131.
[15] T.F. Chan, L.A. Vese, Active contours without edges, IEEE Trans. Image Process. 10 (2001) 266–277.
[16] S. Lankton, et al., Localizing region-based active contours, IEEE Trans. Image Process. 17 (2008).
[17] C.M. Li, et al., Minimization of region-scalable fitting energy for image segmentation, IEEE Trans. Image Process. 17 (2008) 1940–1949.
[18] V. Caselles, et al., Geodesic active contours, Int. J. Comput. Vision 22 (1997) 61–79.
[19] C. Li, et al., Level set evolution without re-initialization: a new variational formulation, Proc. IEEE Comput. Soc. Conf. Comput. Vision Pattern Recogn. 1 (2005) 430–436.
[20] M. Felsberg, G. Sommer, The monogenic signal, IEEE Trans. Acoust., Speech Signal Process. 49 (2001) 3136–3144.
[21] V. Grau, et al., Registration of multiview real-time 3-D echocardiographic sequences, IEEE Trans. Med. Imaging 26 (2007) 1154–1165.
[22] I. Hacihaliloglu, et al., Bone segmentation and fracture detection in ultrasound using 3D local phase features, presented at the Proceedings of the 11th International Conference on Medical Image Computing and Computer-Assisted Intervention—Part I, New York, NY, USA, 2008.

[23] A. Sarti, et al., Subjective surfaces: a method for completing missing boundaries, PNAS 97 (2000) 6258–6263.
[24] A. Sarti, et al., Subjective surfaces: a geometric model for boundary completion, Int. J. Comput. Vision 46 (2002) 201–221.
[25] C. Zanella, et al., Cells segmentation from 3-D confocal images of early zebrafish embryogenesis, IEEE Trans. Image Process. 19 (2010) 770–781.
[26] M.C. Morrone, R.A. Owens, Feature detection from local energy, Pattern Recognit. Lett. 6 (1987) 303–313.
[27] P. Kovesi, Phase congruency: a low-level image invariant, Psychol. Res. 64 (2000) 136–148.
[28] S. Fischer, et al., Self-invertible 2D log-Gabor wavelets, Int. J. Comput. Vision 75 (2007) 231–246.
[29] M. Rousson, D. Cremers, Efficient kernel density estimation of shape and intensity priors for level set segmentation, Med. Image Comput. Computer-Assisted Intervention (MICCAI 2005) 3750 (2) (2005) 757–764.
[30] A. Wimmer, et al., A generic probabilistic active shape model for organ segmentation, presented at the Proceedings of the 12th International Conference on Medical Image Computing and Computer-Assisted Intervention: Part II, London, UK, 2009.
[31] X. Xue, et al., PICE: prior information constrained evolution for 3-D and 4-D brain tumor segmentation, presented at the Proceedings of the 2010 IEEE International Conference on Biomedical Imaging: From Nano to Macro, Rotterdam, Netherlands, 2010.
[32] D. Cremers, et al., Nonlinear shape statistics in Mumford–Shah based segmentation, in: A. Heyden, et al. (Eds.), Computer Vision—ECCV 2002, vol. 2351, Springer, Berlin/Heidelberg, 2002, pp. 516–518.
[33] X. Bresson, et al., A variational model for object segmentation using boundary information and shape prior driven by the Mumford–Shah functional, Int. J. Comput. Vision 68 (2006) 145–162.

[34] D. Cremers, et al., Kernel density estimation and intrinsic alignment for shape priors in level set segmentation, Int. J. Comput. Vision 69 (2006) 335–351.
[35] D. Cremers, et al., A review of statistical approaches to level set segmentation: integrating color, texture, motion and shape, Int. J. Comput. Vision 72 (2007) 195–215.
[36] M. Farzinfar, et al., A novel approach for curve evolution in segmentation of medical images, Computerized Med. Imaging Graphics (2010).
[37] K. Mikula, et al., Co-volume level set method in subjective surface based medical image segmentation, Handbook of Biomedical Image Analysis (2005) 583–626.
[38] S. Corsaro, et al., Semi-implicit covolume method in 3D image segmentation, SIAM J. Sci. Comput. 28 (2006) 2248.
[39] K. Mikula, et al., 3D embryogenesis image segmentation by the generalized subjective surface method using the finite volume technique, in: Finite Volumes for Complex Applications V: Problems and Perspectives, London, 2008, pp. 585–592.
[40] P. Bourgine, et al., 4D embryogenesis image analysis using PDE methods of image processing, Kybernetika 46 (2010) 226–259.
[41] K. Mikula, et al., Segmentation of 3D cell membrane images by PDE methods and its applications, Comput. Biol. Med. (2011).
[42] A. Wong, Alignment of confocal scanning laser ophthalmoscopy photoreceptor images at different polarizations using complex phase relationships, IEEE Trans. Biomed. Eng. 56 (2009) 1831–1837.
[43] D.H. Ballard, Generalizing the Hough transform to detect arbitrary shapes, Pattern Recognit. 13 (1981) 111–122.
[44] Z. Kriva, Finite-volume level set method and its adaptive version in completing subjective contours, Kybernetika 43 (2007) 509–522.


Taoyi Chen received the Ph.D. degree in Electrical Engineering from Harbin Institute of Technology, China, in 2012. He is currently working at The 54th Research Institute of China Electronics Technology Group Corporation. He was also a research fellow at The Methodist Hospital Research Institute of Weill Cornell Medical College, USA, in 2009. His research interests include image processing and pattern recognition.

Yong Zhang received the B.E. and M.E. degrees in Electrical Engineering from Harbin Institute of Technology, China, and the Ph.D. degree in Electrical Engineering from West Virginia University, Morgantown, USA, in 2006. He did half of his Ph.D. research at the Brigham and Women's Hospital, Harvard Medical School, Boston, USA. He is a postdoctoral researcher at IBM Almaden Research Center, San Jose, CA, USA. His research interests include digital image and video processing, bioinformatics, medical imaging, and computer vision. He also has working experience in multimedia communication and embedded system design.

Changhong Wang received the Ph.D. degree in Electrical Engineering from Harbin Institute of Technology, China, in 1991. He is now a professor in the Department of Control Science and Engineering, Harbin Institute of Technology. His research interests include intelligent control and intelligent systems, inertial technology and its testing equipment, robotics, precision servo systems, and network control.

Zhenshen Qu received the Ph.D. degree in Electrical Engineering from Harbin Institute of Technology, China, in 2003. He is now an associate professor in the Department of Control Science and Engineering, Harbin Institute of Technology. His research interests include computer vision and its applications, especially in the fields of motion estimation and visual tracking.

Fei Wang received his B.Sc. degree in Electrical Engineering from the University of Science and Technology of China in 2001, and his M.Sc. (2003) and Ph.D. (2006) in Computer Science from the University of Florida. He is now a research staff member in the Health Informatics Department at IBM Almaden Research Center, currently working on the multimodal mining project for healthcare decision support at IBM. His work covers multiple aspects of medical image analysis, computer vision, machine learning, computer graphics and shape modeling. He is an associate editor of the International Journal of Image and Graphics (IJIG) and the International Journal of Advances in Optical Communication and Networks (IJAOCN), and has served as a reviewer for many major international conferences and journals. He has been a senior member of IEEE since 2011.

Tanveer Syeda-Mahmood: Dr. Tanveer Syeda-Mahmood is a research manager at IBM Almaden Research Center, where she heads a program on multimodal mining for healthcare data. Dr. Syeda-Mahmood graduated from the MIT AI Lab in 1993 with a Ph.D. in Computer Science. She worked as a Research Staff Member at Xerox Webster Research Center, Webster, NY, before joining IBM in 1998. Dr. Syeda-Mahmood led the image indexing program at Xerox Research and was one of the early originators of the field of content-based image and video retrieval. Currently, she is working on applications of content-based retrieval in healthcare. Over the past 28 years, her research interests have been in a variety of areas relating to artificial intelligence, including computer vision, image and video databases, medical image analysis, bioinformatics, and signal processing. She has over 140 refereed publications and over 55 issued patents.

Dr. Syeda-Mahmood has chaired numerous workshops and conferences, including several early workshops that helped establish the field of content-based retrieval and event recognition (Event'2001–2004) and medical decision support (MCBR-CDS'09, '11, '12). She was the program co-chair of CVPR 2008 and the general chair of the IEEE HISB 2011 conference. Dr. Syeda-Mahmood has received several awards over the years, including university gold medals for academic excellence, Outstanding Achievement Awards at Xerox and IBM, and an Outstanding Innovation Award at IBM. In 2012 she was named a Master Inventor. Dr. Syeda-Mahmood is an IEEE Fellow and a Member of the IBM Academy of Technology.

