+ All Categories
Home > Documents > Image Segmentation Using a Trimmed Likelihood Estimator in ... · ResearchArticle Image...

Image Segmentation Using a Trimmed Likelihood Estimator in ... · ResearchArticle Image...

Date post: 02-Oct-2020
Category:
Upload: others
View: 0 times
Download: 0 times
Share this document with a friend
18
Research Article Image Segmentation Using a Trimmed Likelihood Estimator in the Asymmetric Mixture Model Based on Generalized Gamma and Gaussian Distributions Yi Zhou and Hongqing Zhu School of Information Science & Engineering, East China University of Science and Technology, Shanghai 200237, China Correspondence should be addressed to Hongqing Zhu; [email protected] Received 24 July 2017; Revised 14 December 2017; Accepted 26 December 2017; Published 24 January 2018 Academic Editor: Xiangyu Meng Copyright © 2018 Yi Zhou and Hongqing Zhu. is is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Finite mixture model (FMM) is being increasingly used for unsupervised image segmentation. In this paper, a new finite mixture model based on a combination of generalized Gamma and Gaussian distributions using a trimmed likelihood estimator (GGMM- TLE) is proposed. GGMM-TLE combines the effectiveness of Gaussian distribution with the asymmetric capability of generalized Gamma distribution to provide superior flexibility for describing different shapes of observation data. Another advantage is that we consider the spatial information among neighbouring pixels by introducing Markov random field (MRF); thus, the proposed mixture model remains sufficiently robust with respect to different types and levels of noise. Moreover, this paper presents a new component-based confidence level ordering trimmed likelihood estimator, with a simple form, allowing GGMM-TLE to estimate the parameters aſter discarding the outliers. us, the proposed algorithm can effectively eliminate the disturbance of outliers. Furthermore, the paper proves the identifiability of the proposed mixture model in theory to guarantee that the parameter estimation procedures are well defined. Finally, an expectation maximization (EM) algorithm is included to estimate the parameters of GGMM-TLE by maximizing the log-likelihood function. Experiments on multiple public datasets demonstrate that GGMM-TLE achieves a superior performance compared with several existing methods in image segmentation tasks. 1. Introduction Segmenting an object from its background has an important role in machine learning and computer vision [1]. In recent years, several unsupervised image segmentation algorithms have been presented [2, 3]. Statistical approach, particularly finite mixture model (FMM), is one of the most widely known ones [4]. is can be used to model arbitrary univariate or multivariate observed data. In particular, modelling the probability density function of pixel attributes with Gaussian mixture model (GMM) has proved successful in segmenta- tion tasks [5]. It is mainly because the parameters of GMM can be easily estimated by maximizing the maximum likeli- hood (ML) of the observed data using the expectation maxi- mization (EM) algorithm. However, there remain limitations preventing GMM from achieving improved performance. e first challenge is sensitivity to noise, which is caused by the independence of the spatial relationship of pixels during parameter learning. e second challenge derives from the difficulty in fitting asymmetric observed data. In addition to these limitations, GMM is sensitive to outliers and can lead to excessive sensitivity to a small number of data points. Recent studies have attempted to overcome the above disadvantages. e existing schemes can be categorised as follows. (1) Schemes Based on Markov Random Field (MRF).A wide variety of approaches, especially Markov random field, have been introduced for resisting noise. ese schemes utilize MRF smoothness prior to modelling the joint prior distribution of pixel labels; hence, the spatial information of the pixels is considered via the contextual constraints of the neighbouring pixels [6, 7]. erefore, the MRF-based mixture model has stronger ability to resist noise. However, MRF suffers from the fact that parameter estimation is difficult and suffers high computational complexity. Recently, Hindawi Mathematical Problems in Engineering Volume 2018, Article ID 3468967, 17 pages https://doi.org/10.1155/2018/3468967
Transcript
Page 1: Image Segmentation Using a Trimmed Likelihood Estimator in ... · ResearchArticle Image Segmentation Using a Trimmed Likelihood Estimator in the Asymmetric Mixture Model Based on

Research ArticleImage Segmentation Using a Trimmed LikelihoodEstimator in the Asymmetric Mixture Model Based onGeneralized Gamma and Gaussian Distributions

Yi Zhou and Hongqing Zhu

School of Information Science amp Engineering East China University of Science and Technology Shanghai 200237 China

Correspondence should be addressed to Hongqing Zhu hqzhuecusteducn

Received 24 July 2017 Revised 14 December 2017 Accepted 26 December 2017 Published 24 January 2018

Academic Editor Xiangyu Meng

Copyright copy 2018 Yi Zhou and Hongqing ZhuThis is an open access article distributed under the Creative Commons AttributionLicense which permits unrestricted use distribution and reproduction in any medium provided the original work is properlycited

Finite mixture model (FMM) is being increasingly used for unsupervised image segmentation In this paper a new finite mixturemodel based on a combination of generalized Gamma and Gaussian distributions using a trimmed likelihood estimator (GGMM-TLE) is proposed GGMM-TLE combines the effectiveness of Gaussian distribution with the asymmetric capability of generalizedGamma distribution to provide superior flexibility for describing different shapes of observation data Another advantage is thatwe consider the spatial information among neighbouring pixels by introducing Markov random field (MRF) thus the proposedmixture model remains sufficiently robust with respect to different types and levels of noise Moreover this paper presents anew component-based confidence level ordering trimmed likelihood estimator with a simple form allowing GGMM-TLE toestimate the parameters after discarding the outliers Thus the proposed algorithm can effectively eliminate the disturbance ofoutliers Furthermore the paper proves the identifiability of the proposed mixture model in theory to guarantee that the parameterestimation procedures are well defined Finally an expectationmaximization (EM) algorithm is included to estimate the parametersof GGMM-TLE bymaximizing the log-likelihood function Experiments onmultiple public datasets demonstrate that GGMM-TLEachieves a superior performance compared with several existing methods in image segmentation tasks

1 Introduction

Segmenting an object from its background has an importantrole in machine learning and computer vision [1] In recentyears several unsupervised image segmentation algorithmshave been presented [2 3] Statistical approach particularlyfinitemixturemodel (FMM) is one of themostwidely knownones [4] This can be used to model arbitrary univariateor multivariate observed data In particular modelling theprobability density function of pixel attributes with Gaussianmixture model (GMM) has proved successful in segmenta-tion tasks [5] It is mainly because the parameters of GMMcan be easily estimated by maximizing the maximum likeli-hood (ML) of the observed data using the expectation maxi-mization (EM) algorithm However there remain limitationspreventing GMM from achieving improved performanceThe first challenge is sensitivity to noise which is caused bythe independence of the spatial relationship of pixels during

parameter learning The second challenge derives from thedifficulty in fitting asymmetric observed data In addition tothese limitations GMM is sensitive to outliers and can leadto excessive sensitivity to a small number of data points

Recent studies have attempted to overcome the abovedisadvantages The existing schemes can be categorised asfollows

(1) Schemes Based on Markov Random Field (MRF) Awide variety of approaches especially Markov random fieldhave been introduced for resisting noise These schemesutilize MRF smoothness prior to modelling the joint priordistribution of pixel labels hence the spatial informationof the pixels is considered via the contextual constraints ofthe neighbouring pixels [6 7] Therefore the MRF-basedmixture model has stronger ability to resist noise HoweverMRF suffers from the fact that parameter estimation isdifficult and suffers high computational complexity Recently

HindawiMathematical Problems in EngineeringVolume 2018 Article ID 3468967 17 pageshttpsdoiorg10115520183468967

2 Mathematical Problems in Engineering

mean template is employed along with a spatially vary-ing mixture model to alleviate the influence of noise inimage segmentation [8] It is a natural approach to preventnoise because it automatically filters noise using a meanfilter

(2) Schemes Based on Asymmetric Probability Distribution Ingeneral GMM does not fit well if the shapes of the observeddata are asymmetric [9] Indeed in many real applicationsthe intensity distribution of the observed data is not symmet-ric Thus it would seem that FMM with asymmetric distri-bution such as Gamma distribution [3] Weibull distribution[10] and Rayleigh distribution [11] could overcome thislimitation Another typical case is to obtain the asymmetricdistribution via the linear weighted aggregativemethod usingtwo ormore symmetric probability distributions One typicalexample is asymmetric Studentrsquos 119905 mixture model (NSMM)[12] where each component density is modelled with multi-ple Studentrsquos 119905 distributions Another example of this case isthe Bayesian-bounded asymmetric mixture model (BAMM)[13] which was developed by a subset of the authors [12] andother coauthors for unsupervised image segmentation Eachcomponent of their approach can model different shapes ofobserved data with different bounded support regions Aclose relative of this framework involves a bounded asym-metrical Studentrsquos 119905 mixture model [14] Peculiarly we notethat the mixture of two or more different distributions hascaused great concern and yet has developed rapidly in recentyears Typical algorithms includeZhou et alrsquos statisticalmodel[15] which is a mixture of 119870-distribution and lognormaldistribution This also includes De Angelis et al [16] whooffered a robust time interval measurement method basedon a Gaussian-uniform mixture model Browne et al [17]incorporated a multivariate Gaussian and uniform distribu-tions as the component density which allowed for superiormixture possessing These methods demonstrate a compet-itive performance in fitting different shapes of observeddata

(3) Schemes Based on Trimming Method In general forGMM-based algorithms the parameters are estimated by theML estimator through the EM algorithm However the MLestimator is overly sensitive to outliers and GMM cannotaddress outliers properly Therefore outliers seriously dete-riorate the performance of Gaussian-based clustering algo-rithms To overcome this shortcoming a common approachis to consider a mixture model with Studentrsquos 119905 distribution(SMM) which provides a longer-tailed alternative to theGaussian distribution [18] Therefore SMM is more robustto outliers than the GMM for heavier tails Another model-based method which presents a theoretically well-basedsegmentation criterion in presence of outliers is the trim-ming method [19] The main principle in trimming is tolocate and discard the outliers from the likelihood functionSegmentation results benefit from the trimming approachMuller and Neykov proposed the fast trimmed likelihoodestimator (FAST-TLE) [20] Galimzianova et al developedthe confidence level ordering trimmed likelihood estimator(CLO-TLE) [21] However they do not function effectively in

noisy samples especially when each group has a different sizeof observations

Motivated by the aforementioned considerations in thispaper we present a two-step procedure (GGMM-TLE)beginning with a component-based confidence level order-ing trimmed likelihood estimator Because the majority ofobserved data contains outliers it is necessary to discardthese in a previous step before robustly estimating parame-ters As a new algorithm the proposed technique considersthe components with lower mixture weights This avoidseliminating the samples belonging to the components witha small number of observations as the outliers Then wepropose a novel finite mixture model based on a mixture ofgeneralized Gamma and Gaussian distributions (GGMM)The proposed GGMM with Markov random fields hashigh flexibility and can be used to fit the asymmetric dataowing to the introduction of the asymmetric generalizedGamma distribution Moreover we theoretically prove theproperty of identifiability of the GGMM through the strat-egy presented by Atienza et al [22 23] which indicatesthat the GGMMrsquos mixture representation is unique Thisproperty is crucial to ensure that the parameter estimationproblem is well posed Therefore the proposed algorithmcan be effectively applied for segmenting images More-over by imposing spatial smoothness constraints amongneighbouring pixels using MRF the neighbouring pixelsshould have the same label Therefore the proposed modelreduces the segmentation sensitivity to noise in a stillimage We demonstrate through simulation study that theproposed framework is superior to other related methodsin terms of the misclassification ratio and Dice similaritycoefficient

The remainder of this paper is organized as followsSection 2 introduces the proposedmixturemodel in detail InSection 3 we prove the identifiability of the proposedmixturemodel The process of parameter learning is described inSection 4 The ordering method for likelihood trimming isreported in Section 5 Section 6 provides the experimentalresults and analysis Finally we conclude with a discussionin Section 7

2 Model Formulation

Assume a set of data 119883 = 119909119894 | 119894 = 1 2 119873 where each119909119894 denotes an observation at the 119894th pixel of an image 119873 isthe total number of pixels in an imageThe proposedmixturemodel assumes that the density function at pixel 119909119894 is givenby

119891 (119909119894 | Θ) = 119870sum119895=1

120587119895119870119895sum119896=1

ℏ119895119896119901119895119896 (119909119894 | 120579119895119896) (1)

where Θ = 120587119895 ℏ119895119896 120579119895119896 is the complete parameter set of theproposed mixture model 119870 denotes the number of mixturecomponents The prior 120587119895 represents the probability that theobservation 119909119894 belongs to the 119895th label and ℏ119895119896 is called theweighting factor they satisfy the following constraints

Mathematical Problems in Engineering 3

119870sum119895=1

120587119895 = 1119870119895sum119896=1

ℏ119895119896 = 10 le 120587119895 ℏ119895119896 le 1

(2)

In this paper we set 119870119895 = 2 thus 1199011198951(119909119894 | 1205791198951) is thegeneralized Gamma distribution defined by

1199011198951 (119909119894 | 1205791198951)= 10038161003816100381610038161003816V11989510038161003816100381610038161003816120590119895Γ (119896119895) (

119909119894120590119895)119896119895V119895minus1

expminus(119909119894120590119895)V119895 (3)

where 1205791198951 = V119895 119896119895 120590119895 is the parameter set of generalizedGamma distribution V119895 is the power parameter 119896119895 is theshape parameter 120590119895 is the scale parameter and Γ(sdot) denotesthe Gamma function The probability density function ofGaussian distribution 1199011198952(119909119894 | 1205791198952) is defined as

1199011198952 (119909119894 | 1205791198952)= 1radic2120587Σ119895 exp minus

12 (119909119894 minus 119906119895)119879 Σminus1119895 (119909119894 minus 119906119895) (4)

where 1205791198952 = 119906119895 Σ119895 is the parameter set of Gaussiandistribution 119906119895 is the mean and Σ119895 denotes the covariance

According to Bayesian rules we express the posteriorprobability density function of the proposed model as

119891 (Θ | 119883) prop 119891 (119883 | Θ) 119901 (Θ) (5)

To train the proposed mixture model based on the aboveformulation we define the following maximum a posteriorilog-likelihood function

119871 (Θ | 119883) = log (119891 (Θ | 119883))prop log 119891 (119883 | Θ) + log119901 (Θ) (6)

TheMarkov random field based on Gibbs distribution can becharacterized by

119901 (Θ) = 119885minus1 expminus119880 (Θ)119879 (7)

where 119879 and 119885 are temperature and normalizing constantsrespectively In the proposed approach a new energy func-tion of the following form is chosen to enforce spatialsmoothness

119880 (Θ) = minus 119873sum119894=1

119870sum119895=1

119866119894119895 log120587119895 (8)

where

119866119894119895 = exp(120573 sum119898isin119873119894

(119911119898119895 + 120587119895)) (9)

where 119873119894 is the neighbourhood of the 119894th pixel includingthe 119894th pixel itself for example 3 times 3 or 5 times 5 119911119898119895 denotesthe posterior probability Eventually we can formulate thesegmentation problem as a maximum a posteriori problemusing the log-likelihood function as

119871 (Θ | 119883) = 119873sum119894=1

log119870sum119895=1

120587119895 [ 2sum119896=1

ℏ119895119896119901119895119896 (119909119894 | 120579119895119896)]minus log119885 + 1119879

119873sum119894=1

119870sum119895=1

119866119894119895 log120587119895(10)

The above scheme contains two parts where the firstdenotes the proposed mixture model and the last is theMarkov model In general the EM algorithm is an efficientframework for estimating the mixture model parameters

3 Identifiability of the ProposedMixture Model

This section discusses the property of identifiability of theGGMM This property implies that GGMM can only beexpressed by specific components Obviously this property isimportant for finite mixture model because it can guaranteethe estimation procedures of the parameter set to be welldefined [3 22] The property of identifiability is described asfollows

We defined the following set

119865 = 119891 119891120579 (119909) = ℏ11199011 (119909 | V 119896 120590) + ℏ21199012 (119909 | 119906 Σ) (11)

119865 is the family of proposed distributions where V = 0119896 gt 0 120590 gt 0 ℏ1 + ℏ2 = 1 In this study 1199011(119909 |V 119896 120590) is the generalized Gamma distribution and 1199012(119909 |119906 Σ) is the Gaussian distribution the parameters of theproposed distribution are mutually independent The set ofthe proposed mixture model with 120587119895 satisfying (2) is119867119865 = 119867 119867 (119909) = 119870sum

119895=1

120587119895119891120579119895 (119909) 1198911205791le119895le119870 (119909) isin 119865 (12)

Theorem 1 The property of identifiability of 119867119865 means thatfor any two mixture models11986712 isin 119867119865 that is

1198671 = 1198701sum119895=1

12058711198951198911205791119895 (119909) 1198672 = 1198702sum

119895=1

12058721198951198911205792119895 (119909) (13)

if 1198671 = 1198672 then 1198701 = 1198702 and (1205871119895 1198911205791119895)1198701119895=1 = (12058721198951198911205792119895)1198702119895=1Proof According to [22] we prove that for a linear trans-form 119872 119891120579(119909) rarr 120601119891 with domain 119878(119891) Let 1198780(119891) =

4 Mathematical Problems in Engineering

119898 isin 119878(119891) 120601119891(119898) = 0 For a given point 1198980 and any twoproposed distributions 1198911 1198912 isin 119865 there exists a total order ≺on 119865 that satisfies

1198911 ≺ 1198912 lArrrArrlim119898rarr1198980

1206011198912 (119898)1206011198911 (119898) = 0(14)

Given the expression of the linear transform119872 is as follows

119872[119891120579 (119909)] 120601119891 (119898) = 119864 (119909119898) = int+infinminusinfin

119909119898119891120579 (119909) 119889119909 (15)

where 119891120579(119909) is the proposed density function 119891120579(119909) =ℏ11199011(119909 | V 119896 120590) + ℏ21199012(119909 | 119906 Σ) Let 1198921205791(119909) = 1199011(119909 | V 119896 120590)and ℎ1205792(119909) = 1199012(119909 | 119906 Σ) then 119891120579(119909) = ℏ11198921205791(119909) + ℏ2ℎ1205792(119909)Obviously if 1198921 ≺ 1198922 and ℎ1 ≺ ℎ2 we can obtain 1198911 ≺ 1198912According to (15) we have

120601119892 (119898) = 120590119898 Γ (119896 + 119898V)Γ (119896) 119898 isin (minus119896V +infin) (16)

where 1198780(119892) = (minus119896V +infin) and 1198980 = +infin To facilitatethe proof procedure this study utilizes Stirlingrsquos formula asfollows

Γ (119911 + 1) sim radic2120587119911 (119911119890 )119911 119911 997888rarr +infin (17)

Thus we have

120601119892 (119898) sim 120590119898Γ (119896)radic2120587(119898V )119896+119898Vminus12

sdot (1 + 119896 minus 1119898 V)119896+119898Vminus12 exp 1 minus 119896 minus 119898V sim radic2120587Γ (119896)

sdot exp 119898 log120590 minus 119898V

sdot exp (119896 + 119898Vminus 12) (log119898 minus log V)

(18)

The sign ldquosimrdquo indicates that the expressions on both sides areequivalent up to constant term when 119898 rarr +infin Hence for119898 rarr 1198980 we have

1206011198922 (119898)1206011198921 (119898) sim 119862 exp( 1V2minus 1V1)119898 log119898

+ [(log1205902 minus log1205901) minus ( 1V2 minus 1V1)

+ ( 1V2

log 1V2minus 1V1

log 1V1)]119898

+ (1198962 minus 1198961) log119898

(19)

where 119862 is a constant From (19) we can derive 1198921 ≺ 1198922 hArr[V2 gt V1] or [V2 = V1 1205902 lt 1205901] or [V2 = V1 1205902 = 1205901 1198962 lt 1198961]which is apparently a total order Analogously we have

120601ℎ (119898) = exp 119898119906 + 12Σ1198982 (20)

where 1198780(ℎ) = (minusinfin +infin) and 1198980 = +infin For 119898 rarr 1198980 wehave

120601ℎ2 (119898)120601ℎ1 (119898) = exp 12 (Σ2 minus Σ1)1198982 + (1199062 minus 1199061)119898 (21)

From (21) we can determine that ℎ1 ≺ ℎ2 hArr [Σ2 lt Σ1] or[Σ2 = Σ1 1199062 lt 1199061] which is clearly a total order Overall wehave 1198911 lt 1198912 there exists a total order ≺ on 119865 Hence we candraw the conclusion that the GGMMs are identifiable

4 Parameter Learning

The main task of this section is to estimate the completeparameter set In general the EM algorithm provides anefficient scheme for unsupervised segmentation using itera-tive updating and guarantees that the log-likelihood functionconverges to a local maximum Considering the complexityof (10) it is difficult to apply the EM algorithm directlyfor maximizing the log-likelihood function (10) Thereforewe employ Jensenrsquos inequality by defining the two hiddenvariables 119911119894119895 and 119910119894119895119896 which are respectively

119911(119905)119894119895= 120587119895 [ℏ11989511199011198951 (119909119894 | 1205791198951) + ℏ11989521199011198952 (119909119894 | 1205791198952)]sum119870119898=1 120587119898 [ℏ11989811199011198981 (119909119894 | 1205791198981) + ℏ11989821199011198982 (119909119894 | 1205791198982)]

(22)

119910(119905)119894119895119896 = ℏ119895119896119901119895119896 (119909119894 | 120579119895119896)sum119870119895119898=1 ℏ119895119898119901119895119898 (119909119894 | 120579119895119898) (23)

Clearly 119911119894119895 and 119910119894119895119896 satisfy the constraints sum119870119895=1 119911119894119895 = 1and sum119870119895

119896=1119910119894119895119896 = 1 Using Jensenrsquos inequality one has

log(sum119870119895=1 119890119895) ge log(sum119870119895=1 119911119894119895119890119895) ge sum119870119895=1(119911119894119895 log 119890119895) thus the log-likelihood function (10) can be rewritten as follows

119871 (119883 | Θ) ge 119873sum119894=1

119870sum119895=1

119911(119905)119894119895 log120587119895

+ log[[119870119895sum119896=1

ℏ119895119896119901119895119896 (119909119894 | 120579119895119896)]] minus log119885 + 1119879

Mathematical Problems in Engineering 5

sdot 119873sum119894=1

119870sum119895=1

119866119894119895 log120587119895 ge 119873sum119894=1

119870sum119895=1

119911(119905)119894119895 log120587119895+ 119870119895sum119896=1

119910(119905)119894119895119896 [log ℏ119895119896 + log119901119895119896 (119909119894 | 120579119895119896)] minus log119885+ 1119879119873sum119894=1

119870sum119895=1

119866119894119895 log120587119895(24)

Thus we can define the following new objective function119876(119883 | Θ) in terms of Jensenrsquos inequality

119876 (119883 | Θ) = 119873sum119894=1

119870sum119895=1

119911(119905)119894119895 log120587119895 + 119870119895sum119896=1

119910(119905)119894119895119896 [log ℏ119895119896+ log119901119895119896 (119909119894 | 120579119895119896)] minus log119885 + 1119879

119873sum119894=1

119870sum119895=1

119866119894119895 log120587119895= 119873sum119894=1

119870sum119895=1

119911(119905)119894119895 log120587119895+ 119910(119905)1198941198951 [log ℏ1198951 + log1199011198951 (119909119894 | 1205791198951)]+ 119910(119905)1198941198952 [log ℏ1198952 + log1199011198952 (119909119894 | 1205791198952)] minus log119885 + 1119879sdot 119873sum119894=1

119870sum119895=1

119866119894119895 log120587119895

(25)

To realize clustering we must maximize the log-likelihoodfunction in (10) which is equivalent to maximizing theobjective function in (25) In particular to estimate the priorprobability 120587119895 we take the partial derivative of the objectivefunction in (25) with respect to 120587119895 yielding

120597120597120587119895 [[119876 (119883 | Θ) minus 120582( 119870sum119895=1

120587119895 minus 1)]] = 0 (26)

where 120582 is the Lagrange multiplier in consideration of theconstraint sum119870119895=1 120587119895 = 1 we have

120587(119905+1)119895 = 119911(119905)119894119895 + 119866(119905)119894119895sum119870119898=1 119911(119905)119894119898 + 119866(119905)119894119898 (27)

Similarly to estimate the weighting factor ℏ119895119896 we take thepartial derivative of the objective function in (25)with respectto ℏ119895119896

120597120597ℏ119895119896 [[119876 (119883 | Θ) minus 120591( 119870119895sum119896=1

ℏ119895119896 minus 1)]] = 0 (28)

where 120591 is the Lagrange multiplier in consideration of theconstraint sum119870119895

119896=1ℏ119895119896 = 1 we have

ℏ(119905+1)119895119896 = sum119873119894=1 119911(119905)119894119895 119910(119905)119894119895119896sum119873119894=1 119911(119905)119894119895 (29)

In the following we estimate the power parameter V119895 Wecalculate the partial derivative of the objective function (25)with this power parameter V119895 as follows

120597119876 (119883 | Θ)120597V119895= 119873sum119894=1

119911(119905)119894119895 119910(119905)1198941198951 [(119896119895 minus (119909119894120590119895)V119895) log(119909119894120590119895) + 1

V119895]

(30)

The solution of 120597119876(119883 | Θ)120597V119895 = 0 yields the estimates of V119895as follows

V(119905+1)119895

= sum119873119894=1 119911(119905)119894119895 119910(119905)1198941198951 [(V(119905)119895 )2 (119909119894120590(119905)119895 )V(119905)119895 log2 (119909119894120590(119905)119895 ) + 1]sum119873119894=1 119911(119905)119894119895 119910(119905)1198941198951 [120593(119905)119895 (119909119894120590(119905)119895 )V(119905)119895 minus 119896(119905)119895 ] log (119909119894120590(119905)119895 ) (31)

where 120593(119905)119895 = V(119905)119895 log(119909119894120590(119905)119895 ) + 1 Then to derive the solutionof the shape parameter 119896119895 we must calculate the partialderivative 119876(119883 | Θ) with respect to it We have

120597119876 (119883 | Θ)120597119896119895= 119873sum119894=1

119911(119905)119894119895 119910(119905)1198941198951 [V119895 log(119909119894120590119895) minus Φ0 (119896119895)] (32)

It is clear to see that the solution 120597119876(119883 | Θ)120597119896119895 = 0 yieldsthe updates for shape parameter 119896119895 by

Φ0 (119896119895) = V(119905+1)119895 sum119873119894=1 119911(119905)119894119895 119910(119905)1198941198951 log (119909119894120590(119905)119895 )sum119873119894=1 119911(119905)119894119895 119910(119905)1198941198951 (33)

whereΦ0(sdot) is the Digamma function 119896(119905+1)119895 can be calculatedby solving (33) via the bisection method [23] In the samefashion to obtain the estimate of the scale parameter 120590119895 wemust derive the partial derivative of 119876(119883 | Θ) over it

120597119876 (119883 | Θ)120597120590119895 = V119895120590119895119873sum119894=1

119911(119905)119894119895 119910(119905)1198941198951 [(119909119894120590119895)V119895 minus 119896119895] (34)

6 Mathematical Problems in Engineering

Equating 120597119876(119883 | Θ)120597120590119895 to zero we can obtain the updateformulas for scale parameter 120590119895 by

120590(119905+1)119895 = [[[[sum119873119894=1 119911(119905)119894119895 119910(119905)1198941198951119909V(119905+1)119895119894 119896(119905+1)119895 sum119873119894=1 119911(119905)119894119895 119910(119905)1198941198951

]]]]

1V(119905+1)119895

(35)

By calculating the partial derivative of the objective functionin (25) with parameter set 1205791198952 = 119906119895 Σ119895 we can obtain theestimation of the parameters mean 119906119895 and covariance Σ119895

120597119876 (119883 | Θ)120597119906119895 = 119873sum119894=1

119911(119905)119894119895 119910(119905)1198941198952 (119909119894 minus 119906119895Σ119895 ) 120597119876 (119883 | Θ)120597Σ119895 = 119873sum

119894=1

119911(119905)119894119895 119910(119905)1198941198952(minus 12Σ119895 +(119909119894 minus 119906119895)22Σ2119895 )

(36)

Eventually the final updates for these two parameters can beobtained by

119906(119905+1)119895 = sum119873119894=1 119911(119905)119894119895 119910(119905)1198941198952119909119894sum119873119894=1 119911(119905)119894119895 119910(119905)1198941198952 (37)

Σ(119905+1)119895 = sum119873119894=1 119911(119905)119894119895 119910(119905)1198941198952 (119909119894 minus 119906(119905+1)119895 )2sum119873119894=1 119911(119905)119894119895 119910(119905)1198941198952 (38)

At this point the parameter learning procedure is complete

5 Ordering Method for Likelihood Trimming

For observed data with heavy outliers it is preferred todiscard the outliers and to estimate the parameters of theproposed mixture model using the remaining data Assumethat 119883 is a sample with 119873 observations 120578 is the trimmingfraction and 119883119872 is the subsample with a size 119872 =119873(1 minus 120578) Theoretically the trimming fraction 120578 should behigher than the real outlier fraction value After cutting theoutliers we estimate the model parameters by maximizingthe objective function 119876(119883119872 | Θ) in subsample 119883119872 Themost important step is to discard the outliers and select thesubsample This requires a specific ordering for all of theobservations in the sample Typically the number of outliersis unpredictableThus it is important for the proposedmodelto avoid allowing observed data that belongs to the labelswith a small number of observations falling into outliersThis study presents an effective component-based confidencelevel ordering method In the proposed GGMM-TLE wedo not calculate the density function value as in FAST-TLE[20] for every single observation Rather we only utilizethe concept of confidence level for these observations toeliminate the effects of mixture weights and sample scalesCombined with the posterior probability 119911(119905)119894119895 in (22) wecan order the observations that belong to the same groupseparately Thus it is more reasonable for ordering theobservation with GGMM-TLE Specifically we derive the

following increasing inequality based on component-basedconfidence level ordering

intΩ1(11990911988911 )

119870119895sum119896=1

ℏ11198961199011119896 (120596 | 1205791119896) 119889120596 le sdot sdot sdotle intΩ1(11990911988911198731

)

119870119895sum119896=1

ℏ11198961199011119896 (120596 | 1205791119896) 119889120596

intΩ119870(1199091198891198701 )

119870119895sum119896=1

ℏ119870119896119901119870119896 (120596 | 120579119870119896) 119889120596 le sdot sdot sdotle intΩ119870(119909119889119870119873119870

)

119870119895sum119896=1

ℏ119870119896119901119870119896 (120596 | 120579119870119896) 119889120596

(39)

where Ω119895(119909119894) = 120596 isin Ω119895 sum119870119895119896=1ℏ119895119896119901119895119896(120596 | 120579119895119896) gesum119870119895

119896=1ℏ119895119896119901119895119896(119909119894 | 120579119895119896) 119895 = (1 119870) Ω119895 is the 119895th

component determined by the posterior probability 119873119895 isthe number of observations belonging to the 119895th componentClearly 119873119895 satisfies sum119870119895=1119873119895 = 119873 119889119895 = (1198891198951 1198891198952 119889119895119873119895)is the ordering of sample indices of the 119895th component Bysorting and discarding each component individually with thesame trimming fraction 120578 we can obtain the subsample ofeach component 119883119872119895 = 119909119889119895119898119872119895119898=1 where119872119895 = 119873119895(1 minus 120578)Hence the total subsample119883119872 can be expressed by the unionof the subsample of each component119883119872119895

119883119872 = 1198831198721 cup 1198831198722 cup sdot sdot sdot cup 119883119872119870 (40)

Finally the parameters of the proposedmixturemodel can beestimatedwith subsample119883119872 and objective function119876(119883119872 |Θ) In the proposed GGMM-TLE by evaluating the intervalintegral rather than log-likelihood value of the observationswe can obtain superior performance compared with the clas-sical FAST-TLEThis is becausewe can order the observationswithin every individual label to retain the consistency oftrimming proportion of each label Therefore regardless ofthe mixture weights and sample scales all observations ofeach label are equally considered Finally combined with thesteps of GGMM in Section 2 we summarize the proceduresof GGMM-TLE as follows

Step 1 Input the trimming fraction 120578 Initialize the parameterset Θ = 120587119895 ℏ1198951 1205791198951 ℏ1198952 1205791198952 where 1205791198951 = V119895 119896119895 120590119895 and1205791198952 = 119906119895 Σ119895Step 2 Based on the current parameter set Θ(119905) evaluate theposterior probability 119911119894119895 using (22) compute the variable 119910119894119895119896using (23) and classify the observations

Step 3 Perform component-based confidence level orderingusing (39) to obtain subsample119883(119905+1)119872

Mathematical Problems in Engineering 7

Step 4 Compute the objective function 119876(119883(119905+1)119872 | Θ(119905)) interms of (25) If 119876(119883(119905+1)119872 | Θ(119905)) ge 119876(119883(119905)119872 | Θ(119905)) continue toStep 5 else increase the value of the trimming fraction belowthe predefined threshold and obtain new subsample 119883(119905+1)119872lowastuntil the following condition is satisfied 119876(119883(119905+1)119872lowast | Θ(119905)) ge119876(119883(119905+1)119872 | Θ(119905)) set119872 = 119872lowast If the condition is not satisfiedterminate the procedure

Step 5 Update the prior probability 120587119895 and weighting factorℏ119895119896 using (27) and (29) respectively Compute the powerparameter V119895 shape parameter 119896119895 scale parameter 120590119895 meanparameter 119906119895 and the covariance parameter Σ119895 by solving(31) (33) (35) (37) and (38) respectively

Step 6 Maximize the objective function 119876(119883(119905+1)119872 | Θ(119905))using (25) and obtain the new parameter set Θ(119905+1) Ifthe termination condition is satisfied end the iterationsOtherwise set 119905 = 119905 + 1 and return to Step 2

6 Experimental Results

This section experimentally evaluates the proposed GGMM-TLE by considering the problem of real-world image seg-mentation and compares GGMM-TLE with other relatedalgorithms All algorithms are initialized using 119896-means Theexperiments were developed in MATLAB R2012b and wereexecuted on a personal computer with Intel(R) Core(TM)I7-6500U CPU 25GHz 8GB RAM 64-bit To obtain anobjective evaluation of the proposed method this paper usestwo measure criteria the misclassification ratio (MCR) [24]andDice similarity coefficient (DSC) [25]The former has thefollowing form

MCR = number of mis-segmented pixelstotal number of pixels

(41)

MCR is widely used in the literature to evaluate segmentationperformance For MCR the smaller the value of the MCRthe higher the accuracy of the segmentation The popularoverlap-based metric DSC is also employed to evaluate theproposed mixture model

DSC (119878119886 119878119898) = 2 1003816100381610038161003816119878119886 cap 119878119898100381610038161003816100381610038161003816100381610038161198781198861003816100381610038161003816 + 10038161003816100381610038161198781198981003816100381610038161003816 (42)

where 119878119886 denotes the shape of the automatic segmentationand 119878119898 indicates the shape of the manual segmentationobtained from the algorithm output The range of DSC isfrom zero to one with one denoting ideal segmentation andzero indicating poor segmentation

61 Test of the Proposed Trimming Approach The firstexperiment presented herein validates the behaviour ofthe proposed GGMM-TLE For this purpose we generatedthree labels of inlier observations and one label of outliersThe inlier observations consisted of 10000 points from a3-component bivariate GMM with prior probability 1205871 =

1205872 = 1205873 = 13 The means and variances of this bivariateGMM are

1205831 = (0 4)119879 1205832 = (4 0)119879 1205833 = (minus4 4)119879 Σ1 = (1 11 2) Σ2 = (1 00 1) Σ3 = ( 1 minus05minus05 3 )

(43)

Labels 1 2 and 3 have 2500 3500 and 4000 pointsrespectively Random noise with 1000 points was addedin the plusmn10 rectangle This random noise is considered asoutliers For comparison apart from the proposed approachwe also included the performance of FAST-TLE [20] andCLO-TLE [21] The trimming fraction 120578 varied in the range[01 05] The segmentation results of the different methodsare presented in Figure 1 From Figure 1 we can observethat the classical GMM was sensitive to outliers resulting inpoor clustering performance in terms of visual interpretationIt is clear that the performance of the FAST-TLE methodwas less influenced by the outliers However it was alsounstable especially when the trimming fraction was highThe CLO-TLE ordering strategy exhibited a higher stabilityversus the previous two algorithmswith the use of confidencelevel orderingHowever there remainedmisclassified outliersdemonstrating that the fitting influence of CLO-TLE was notideal Conversely we determined that the best performancewas with the proposed GGMM-TLE this is because everyobservation of the individual groups was considered Thisfigure indicates that the GGMM-TLE can extract clear pointaccumulations from noise data

62 Segmentation of Noise-Degraded Images To demonstratethe feasibility of GGMM-TLE the following experimentused four real-world images (ldquoBoatrdquo ldquoCowrdquo ldquoHouserdquo andldquoManrdquo) from the semantic boundaries dataset (SBD) [26]for comparison These images were segmented into threelabels All these images were contaminated by Salt and Peppernoise with intensity 5 Figure 2 presents the visualization ofthe segmentation task with trimming fraction 02 where thesecond third and fourth columns correspond to the FAST-TLE CLO-TLE and GGMM-TLE algorithms respectivelyFigure 2 shows the detailed parts of the corresponding seg-mentation results using different approaches Note that theGGMM-TLE eliminates obviously the noise as predicatedWedemonstrate the log-likelihood function versus the numberof iteration under trimming fractions 02 for the different testimages in Figure 3 It can be clearly observed that the log-likelihood functions of FAST-TLE and CLO-TLE are similar

8 Mathematical Problems in Engineering

minus10 1086420minus2minus4minus6minus8minus10

minus8

minus6

minus4

minus2

0

2

4

6

8

10

= 01

= 03

= 05

(a)

minus10 1086420minus2minus4minus6minus8minus10

minus8

minus6

minus4

minus2

0

2

4

6

8

10

= 01

= 03

= 05

(b)

minus10 1086420minus2minus4minus6minus8minus10

minus8

minus6

minus4

minus2

0

2

4

6

8

10

= 01

= 03

= 05

(c)

minus10 1086420minus2minus4minus6minus8minus10

minus8

minus6

minus4

minus2

0

2

4

6

8

10

= 01

= 03

= 05

(d)

Figure 1 Noisy synthetic data clustering using the considered methods (a) Classical GMM (b) FAST-TLE method (c) CLO-TLE methodand (d) GGMM-TLE method

to that of the proposed method However a closer inspectionof the iteration ranges [5 15] indicates that the GGMM-TLE method can moderately improve the convergence rateWhen the iterations is low (119905 le 5) the convergence ratewith GGMM-TLE is the biggest one In the general casethe GGMM-TLE method converges after five iterations InFigure 4 the MCR plots of each test image against differenttrimming fractions are displayed This figure implies that theproposed scheme achieved superior segmentation accuracyas the basic scheme because the MCR of the GGMM-TLEmethod was the least of all test images

The proposed algorithm was also assessed on a clinicalMR image to label the white mass (WM) and grey mass(GM) For this purpose a real MR image slice 42 of IBSR2from the IBSR dataset [27] was randomly selected to evaluatethe performance of the proposed GGMM-TLE against FAST-TLE and CLO-TLE Salt and Pepper noise with intensity5 and 10 was considered in our experiment Figure 5presents the performance of these methods under 5 Saltand Pepper noise and different trimming fractions It isclear that FAST-TLE did not demonstrate improved resultsfor heavier outliers in the segmentation task The CLO-TLE

Mathematical Problems in Engineering 9

(a) (b) (c) (d)

Figure 2 Segmentation results obtained by different methods for four test images (trimming fractions 02) (a) Original image with 5 Saltand Pepper noise The figures from (b) to (d) show FAST-TLE CLO-TLE and GGMM-TLE methods respectively

tended to achieve superior performance with an increase ofthe trimming fraction and could maintain its stability andeffectiveness A closer inspection of Figure 5 indicates that thesegmentation accuracy of GGMM-TLE was visually higherthan the other methods It is due to the fact that the proposedGGMM-TLE utilizes the advantages of confidence level forthese observations so that the effects of mixture weights andsample scales are eliminatedTherefore with the increasing ofthe trimming fractions GGMM-TLE exhibits better stabilityto outliers than CLO-TLE As shown in Figure 5 this is

especially apparent in GGMM-TLE with high trimmingfractions Figure 6 displays the evaluation results using theMCR metric It can be observed that the GGMM-TLE hadthe lowest MCR value thus its segmentation results weresuperior to those of FAST-TLE and CLO-TLE

Further we executed these algorithms 20 times each timewith different initializationThen we computed the (average)performance in terms of the number of correctly classifieddata points and the DSC for this MR image including whitematter and grey matter Table 1 lists the mean values and the

10 Mathematical Problems in Engineering

GGMM-TLECLO-TLEFAST-TLE

times104

54

55

56

57

58

59

6Li

kelih

ood

func

tion

10 20 30 40 500Iterations

(a)

46

48

5

52

54

56

58

Like

lihoo

d fu

nctio

n

10 20 30 40 500Iterations

GGMM-TLECLO-TLEFAST-TLE

times104

(b)

4

42

44

46

48

5

52

54

56

Like

lihoo

d fu

nctio

n

10 20 30 40 500Iterations

GGMM-TLECLO-TLEFAST-TLE

times104

(c)

10 20 30 40 500Iterations

38

4

42

44

46

48

5

52

54

Like

lihoo

d fu

nctio

n

GGMM-TLECLO-TLEFAST-TLE

times104

(d)

Figure 3 Comparison of likelihood values for the different test images with trimming faction 02 (a) Test image ldquoBoatrdquo (b) test image ldquoCowrdquo(c) test image ldquoHouserdquo and (d) test image ldquoManrdquo

standard deviations of the DSC obtained from 20 executionsThe experiment results demonstrate that the accuracy wasmoderately improved compared with the other methods

To assess the robustness of the proposed GGMM-TLEat different levels of noise a set of real-world images fromthe Berkeley image dataset [28] was considered to comparethe performance of GMM [29] SMM [30] GΓMM [31]NSMM [12] and ACAP [8] The ground-truth informationwas freely obtained from the website [31] This was usedfor algorithm performance evaluation The experiment was

performed with noisy version images by adding Gaussiannoise (zero mean 001 variance) and Salt and Pepper noise(3) to the images as indicated in the first row of Figures 7and 8 The evaluated algorithms were initialized using the 119896-means algorithmThe number of label119870was set according tohuman visual inspection Figures 7 and 8 exhibit the resultsof image segmentation using the different methods Owingto the application of a mean filter we can observe that theperformance of ACAPwas superior to GMM SMM GΓMMand NSMM The results generated by the ACAP achieved

Mathematical Problems in Engineering 11

FAST-TLECLO-TLEGGMM-TLE

015 02 025 03 035 04 045 0501Trimming fraction

002

003

004

005

006

MCR

(a)

015 02 025 03 035 04 045 0501Trimming fraction

003

004

005

006

MCR

FAST-TLECLO-TLEGGMM-TLE

(b)

015 02 025 03 035 04 045 0501Trimming fraction

001

002

003

004

005

006

007

MCR

FAST-TLECLO-TLEGGMM-TLE

(c)

01 015 02 025 03 035 04 045 05001

002

003

004

005

006

007

Trimming fraction

MCR

FAST-TLECLO-TLEGGMM-TLE

(d)

Figure 4 Plot ofMCR of test images against different trimming fractions (a) Test image ldquoBoatrdquo (b) test image ldquoCowrdquo (c) test image ldquoHouserdquoand (d) test image ldquoManrdquo

Table 1 Average DSC of different methods with diverse trimming fractions Original MR image with 5 Salt and Pepper noise (mean plusmnstandard deviation)

Methods Trimming fraction (WM)01 02 03 04 05

FAST-TLE 07153 plusmn 04191 07269 plusmn 04785 07081 plusmn 02083 06771 plusmn 04762 06743 plusmn 05216CLO-TLE 08181 plusmn 02721 07968 plusmn 02814 07966 plusmn 03612 07748 plusmn 03264 07484 plusmn 03673GGMM-TLE 08994 plusmn 01023 08380 plusmn 01571 08177 plusmn 01238 08040 plusmn 00779 07803 plusmn 02317Methods Trimming fraction (GM)

01 02 03 04 05FAST-TLE 08367 plusmn 04020 08378 plusmn 04060 08313 plusmn 03553 08015 plusmn 03723 07829 plusmn 04062CLO-TLE 09107 plusmn 03841 08931 plusmn 02379 08944 plusmn 01540 08803 plusmn 02783 08593 plusmn 01761GGMM-TLE 09561 plusmn 01262 09227 plusmn 01040 09095 plusmn 01235 09002 plusmn 01156 08837 plusmn 01240

12 Mathematical Problems in Engineering

(a)

(b)

(c)

Figure 5 (a) to (c) display the segmentation results of MR image (slice 42 of IBSR2) using FAST-TLE CLO-TLE and GGMM-TLErespectively The trimming fractions from left to right are 01 02 03 04 and 05

FASTminusTLECLOminusTLEGGMMminusTLE

002

003

004

005

006

007

008

009

MCR

015 02 025 03 035 04 045 0501Trimming fraction

(a)

01

015

02

025

03

035

04

045

05

MCR

015 02 025 03 035 04 045 0501Trimming fraction

FAST-TLECLO-TLEGGMM-TLE

(b)

Figure 6 Trimming fractions versus classification accuracy under noise environment MCR of different methods for MR image (slice 42 ofIBSR2) (a) 5 Salt and Pepper noise and (b) 10 Salt and Pepper noise

Mathematical Problems in Engineering 13

Figure 7 Segmentation performance comparison for real-world images in the Gaussian noise environment (zero mean variance 001) Fromthe first row to the last noisy image GMM SMM GΓMM NSMM ACAP and GGMM-TLE respectively From the first column to the lastcolumn test image 24063 8068 241004 55067 and 35010 (Berkeley Dataset)

similar results to GGMM-TLE however its performance wasimpaired when there was an abundance of rich details forexample in test image 241004 (the sixth row of Figure 7)TheGGMM-TLE provided a moderately improved performanceunder different noisy conditions and eliminated the influenceof widely spread noise data This characteristic is endemicto MRF and trimmed likelihood estimator The resultingDSC is reported in Tables 2 and 3 providing a quantitativecomparison among the algorithms The DSC and standarddeviation indicate that the proposed method outperformedthe other methods by preserving the highest DSC

To further demonstrate the goodness of GGMM-TLEagainst different noise in Figure 9 we display the meanvalues and standard deviations of the MCR obtained fromtwenty runs on two Berkeley test images (24063 and 35010)under different noise environments Considering the MCRon average the ACAP effectively eliminated the effects ofnoise during the segmentation processing and demonstratedacceptable segmentation results We determined that classi-cal GMM SMM and GΓMM were severely influenced byGaussian noise and could not accurately separate a regionfrom the background In the majority of cases the NSMM

14 Mathematical Problems in Engineering

Figure 8 Segmentation performance comparison for real-world images in the Salt and Pepper noise environment (3) From the first rowto the last noisy image GMM SMM GΓMM NSMM ACAP and GGMM-TLE respectively From the first column to the last column testimage 24063 8068 241004 55067 and 35010 (Berkeley Dataset)

approach was superior to SMM and GMM yet continuedto be influenced by varying degrees of Salt and Pepperand Gaussian noise As expected compared to the otheralgorithms the GGMM-TLEwas stable and achieved the bestsegmentation results according to the quantitative criterion

7 Concluding Remarks

In this paper a robust estimation of the proposed GGMM-TLE using a trimmed likelihood estimator for real-worldimage segmentation was proposed GGMM-TLE with MRF

implements a mixture of generalized Gamma and Gaussiandistributions

The main contribution of this paper is the presentationof an asymmetric finite model GGMM-TLE based on MRFWith this model we have high flexibility to fit differentshapes of observed data Further this study discussed theproperty of identifiability of the proposed mixture modelguaranteeing that the estimation procedure for the parame-ters was correctly developed Then to ensure that GGMM-TLE was robust against heavy outliers the paper offeredan effective method to discard the outliers in advance and

Mathematical Problems in Engineering 15

Trimming fraction 02

2 5 8 10Salt amp Pepper noise ()

0

002

004

006

008

01

012

014

016

018

02

MCR

GMMSMM

NSMMACAPGGMM-TLEGΓMM

(a)

001 004 006 008Gaussian noise (zero mean variance)

Trimming fraction 02

GMMSMM

NSMMACAPGGMM-TLE

0

002

004

006

008

01

012

014

016

018

02

MCR

GΓMM(b)

2 5 8 10Salt amp Pepper noise ()

Trimming fraction 02

GMMSMM

NSMMACAPGGMM-TLE

0

002

004

006

008

01

012

014

016

018

02

MCR

GΓMM(c)

001 004 006 008Gaussian noise (zero mean variance)

Trimming fraction 02

0

002

004

006

008

01

012

014

016

018

02

MCR

GMMSMMGΓMM

NSMMACAPGGMM-TLE

(d)

Figure 9 Average MCR for different methods on two test images under noisy environment (Berkeley Dataset) ((a) and (b)) Test image24063 ((c) and (d)) test image 35010

therefore GGMM-TLE demonstrated superior performanceunder modelling with samples contaminated with unknownoutliers Finally combined with MRF GGMM-TLE consid-ered the spatial relationship between neighbouring pixels anddemonstrated a stronger ability to resist different noise Thesegmentation results of synthetic data and real-world imagesconfirmed that the proposed method demonstrated superiorcompetitivenessThemain limitation of this algorithm is that

the segmentation task requires component-based confidencelevel ordering which increases the computational cost

As a future work one direction is to obtain other finitemixture models by testing different probability density func-tions Another possible direction is to extend the presentedmethod to a higher dimension in a straightforward mannersuch as fMRI time-series clustering We plan to address thesetopics in a separate paper

16 Mathematical Problems in Engineering

Table 2AverageDSCof differentmethods for Berkeley test imageswith zeromeanGaussian noise variance 001 (meanplusmn standard deviation)Images Label GMM SMM GΓMM24063 3 086005 plusmn 032173 089173 plusmn 020353 088071 plusmn 0204728068 3 085737 plusmn 028634 087439 plusmn 027448 086005 plusmn 031348241004 4 085388 plusmn 032745 089018 plusmn 024834 087803 plusmn 02089555067 4 087931 plusmn 021733 088342 plusmn 021631 088013 plusmn 03055635010 4 084904 plusmn 033850 088463 plusmn 023632 087991 plusmn 018584Images Label NSMM ACAP GGMM-TLE24063 3 091825 plusmn 015734 095004 plusmn 014848 097122 plusmn 0148918068 3 089173 plusmn 028042 094485 plusmn 016055 096108 plusmn 010723241004 4 092006 plusmn 021634 094670 plusmn 013114 097061 plusmn 01482655067 4 092601 plusmn 017203 096893 plusmn 012051 097175 plusmn 01387635010 4 092183 plusmn 020847 095380 plusmn 013954 096219 plusmn 015664

Table 3 Average DSC of different methods for Berkeley test images with 3 Salt and Pepper noise (mean plusmn standard deviation)

Images Label GMM SMM GΓMM24063 3 087126 plusmn 030566 090972 plusmn 023385 088482 plusmn 0188088068 3 085234 plusmn 032531 088082 plusmn 030318 087627 plusmn 033601241004 4 087294 plusmn 035061 088385 plusmn 030458 089714 plusmn 02505455067 4 088099 plusmn 032392 090400 plusmn 025767 088517 plusmn 02109235010 4 085337 plusmn 025011 089075 plusmn 021839 087630 plusmn 016432Images Label NSMM ACAP GGMM-TLE24063 3 092079 plusmn 016049 096103 plusmn 023214 098394 plusmn 0075148068 3 090213 plusmn 025321 095329 plusmn 011539 097328 plusmn 011482241004 4 093487 plusmn 016806 096271 plusmn 012378 097117 plusmn 01368355067 4 093247 plusmn 018579 098076 plusmn 010690 098587 plusmn 00665535010 4 093752 plusmn 017804 096053 plusmn 010842 097420 plusmn 012722Conflicts of Interest

The authors declare that they have no conflicts of interest

Acknowledgments

This work was supported by the National Natural ScienceFoundation of China under Grant no 61371150

References

[1] A Saglam and N A Baykan ldquoSequential image segmentationbased on minimum spanning tree representationrdquo PatternRecognition Letters vol 87 pp 155ndash162 2017

[2] S Yin Y Qian andMGong ldquoUnsupervised hierarchical imagesegmentation through fuzzy entropy maximizationrdquo PatternRecognition vol 68 pp 245ndash259 2017

[3] H-C Li V A Krylov P-Z Fan J Zerubia and W J EmeryldquoUnsupervised learning of generalized gamma mixture modelwith application in statistical modeling of high-resolution SARimagesrdquo IEEE Transactions on Geoscience and Remote Sensingvol 54 no 4 pp 2153ndash2170 2016

[4] A Roy A Pal and U Garain ldquoJCLMM A finite mixturemodel for clustering of circular-linear data and its applicationto psoriatic plaque segmentationrdquo Pattern Recognition vol 66pp 160ndash173 2017

[5] Y Bar-Yosef and Y Bistritz ldquoGaussian mixture models reduc-tion by variational maximummutual informationrdquo IEEE Trans-actions on Signal Processing vol 63 no 6 pp 1557ndash1569 2015

[6] R Zhang D H Ye D Pal J-B Thibault K D Sauer and C Bouman ldquoA Gaussian mixture MRF for model-based iterativereconstruction with applications to low-dose X-ray CTrdquo IEEETransactions on Computational Imaging vol 2 no 3 pp 359ndash374 2016

[7] H Z Yerebakan and M Dundar ldquoPartially collapsed parallelGibbs sampler for Dirichlet process mixture modelsrdquo PatternRecognition Letters vol 90 pp 22ndash27 2017

[8] H Zhang QM JWu and TM Nguyen ldquoIncorporatingmeantemplate into finite mixture model for image segmentationrdquoIEEE Transactions on Neural Networks and Learning Systemsvol 24 no 2 pp 328ndash335 2013

[9] A Matza and Y Bistritz ldquoSkew Gaussian mixture models forspeaker recognitionrdquo IET Signal Processing vol 8 no 8 pp860ndash867 2014

[10] T-T Van Cao ldquoModelling of inhomogeneity in radar clutterusing weibull mixture densitiesrdquo IET Radar Sonar amp Naviga-tion vol 8 no 3 pp 180ndash194 2014

[11] Q Peng and L Zhao ldquoSAR image filtering based on the Cauchy-Rayleigh mixture modelrdquo IEEE Geoscience and Remote SensingLetters vol 11 no 5 pp 960ndash966 2014

[12] TMNguyen andQM JWu ldquoA nonsymmetricmixturemodelfor unsupervised image segmentationrdquo IEEE Transactions onCybernetics vol 43 no 2 pp 751ndash765 2013

Mathematical Problems in Engineering 17

[13] T M Nguyen Q M J Wu D Mukherjee and H ZhangldquoA bayesian bounded asymmetric mixture model with seg-mentation applicationrdquo IEEE Journal of Biomedical and HealthInformatics vol 18 no 1 pp 109ndash119 2014

[14] T M Nguyen and Q M J Wu ldquoBounded asymmetricalstudentrsquos-t mixture modelrdquo IEEE Transactions on Cyberneticsvol 44 no 6 pp 857ndash869 2014

[15] X Zhou R Peng and C Wang ldquoA two-component K-Lognormal mixture model and its parameter estimationmethodrdquo IEEE Transactions on Geoscience and Remote Sensingvol 53 no 5 pp 2640ndash2651 2015

[16] A De Angelis G De Angelis and P Carbone ldquoUsing Gaussian-Uniform Mixture Models for Robust Time-Interval Measure-mentrdquo IEEE Transactions on Instrumentation andMeasurementvol 64 no 12 pp 3545ndash3554 2015

[17] R P Browne P D McNicholas and M D Sparling ldquoModel-based learning using a mixture of mixtures of gaussian anduniform distributionsrdquo IEEE Transactions on Pattern Analysisand Machine Intelligence vol 34 no 4 pp 814ndash817 2012

[18] J Sun A Zhou S Keates and S Liao ldquoSimultaneous BayesianClustering and Feature Selection Through Studentrsquos t MixturesModelrdquo IEEE Transactions on Neural Networks and LearningSystems 2017

[19] N Neykov P Filzmoser R Dimova and P Neytchev ldquoRobustfitting of mixtures using the trimmed likelihood estimatorrdquoComputational Statistics amp Data Analysis vol 52 no 1 pp 299ndash308 2007

[20] C H Muller and N Neykov ldquoBreakdown points of trimmedlikelihood estimators and related estimators in generalizedlinear modelsrdquo Journal of Statistical Planning and Inference vol116 no 2 pp 503ndash519 2003

[21] A Galimzianova F Pernus B Likar and Z Spiclin ldquoRobustestimation of unbalanced mixture models on samples withoutliersrdquo IEEE Transactions on Pattern Analysis and MachineIntelligence vol 37 no 11 pp 2273ndash2285 2015

[22] N Atienza J Garcia-Heras and J M Munoz-Pichardo ldquoAnew condition for identifiability of finite mixture distributionsrdquoMetrika vol 63 no 2 pp 215ndash221 2006

[23] W R Press S A TeukolskyW T Vetterling and B P FlanneryNumerical Recipes in C The Art of Scientific Computing Cam-bridge University Press Cambridge UK 2002

[24] Y Zhang M Brady and S Smith ldquoSegmentation of brain MRimages through a hidden Markov random field model andthe expectation-maximization algorithmrdquo IEEE Transactionson Medical Imaging vol 20 no 1 pp 45ndash57 2001

[25] LHan J HHipwell B Eiben et al ldquoA nonlinear biomechanicalmodel based registration method for aligning prone and supinemr breast imagesrdquo IEEE Transactions on Medical Imaging vol33 no 3 pp 682ndash694 2014

[26] B Hariharan P Arbelaez L Bourdev S Maji and J MalikldquoSemantic contours from inverse detectorsrdquo in Proceedings ofthe 2011 IEEE International Conference on Computer VisionICCV 2011 pp 991ndash998 Spain November 2011

[27] IBSR [Online] Available httpwwwnitrcorgprojectsibsr[28] S Meignen and H Meignen ldquoOn the modeling of small sample

distributions with generalized Gaussian density in a maximumlikelihood frameworkrdquo IEEE Transactions on Image Processingvol 15 no 6 pp 1647ndash1652 2006

[29] G McLachlan and D Peel Finite Mixture Models JohnWiley ampSons New York NY USA 2000

[30] D Peel and G J McLachlan ldquoRobust mixture modelling usingthe t distributionrdquo Statistics and Computing vol 10 no 4 pp339ndash348 2000

[31] T Elguebaly and N Bouguila ldquoBayesian learning of finite gen-eralized Gaussianmixturemodels on imagesrdquo Signal Processingvol 91 no 4 pp 801ndash820 2011

Hindawiwwwhindawicom Volume 2018

MathematicsJournal of

Hindawiwwwhindawicom Volume 2018

Mathematical Problems in Engineering

Applied MathematicsJournal of

Hindawiwwwhindawicom Volume 2018

Probability and StatisticsHindawiwwwhindawicom Volume 2018

Journal of

Hindawiwwwhindawicom Volume 2018

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawiwwwhindawicom Volume 2018

OptimizationJournal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Engineering Mathematics

International Journal of

Hindawiwwwhindawicom Volume 2018

Operations ResearchAdvances in

Journal of

Hindawiwwwhindawicom Volume 2018

Function SpacesAbstract and Applied AnalysisHindawiwwwhindawicom Volume 2018

International Journal of Mathematics and Mathematical Sciences

Hindawiwwwhindawicom Volume 2018

Hindawi Publishing Corporation httpwwwhindawicom Volume 2013Hindawiwwwhindawicom

The Scientific World Journal

Volume 2018

Hindawiwwwhindawicom Volume 2018Volume 2018

Numerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisAdvances inAdvances in Discrete Dynamics in

Nature and SocietyHindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom

Dierential EquationsInternational Journal of

Volume 2018

Hindawiwwwhindawicom Volume 2018

Decision SciencesAdvances in

Hindawiwwwhindawicom Volume 2018

AnalysisInternational Journal of

Hindawiwwwhindawicom Volume 2018

Stochastic AnalysisInternational Journal of

Submit your manuscripts atwwwhindawicom

Page 2: Image Segmentation Using a Trimmed Likelihood Estimator in ... · ResearchArticle Image Segmentation Using a Trimmed Likelihood Estimator in the Asymmetric Mixture Model Based on

2 Mathematical Problems in Engineering

mean template is employed along with a spatially vary-ing mixture model to alleviate the influence of noise inimage segmentation [8] It is a natural approach to preventnoise because it automatically filters noise using a meanfilter

(2) Schemes Based on Asymmetric Probability Distribution Ingeneral GMM does not fit well if the shapes of the observeddata are asymmetric [9] Indeed in many real applicationsthe intensity distribution of the observed data is not symmet-ric Thus it would seem that FMM with asymmetric distri-bution such as Gamma distribution [3] Weibull distribution[10] and Rayleigh distribution [11] could overcome thislimitation Another typical case is to obtain the asymmetricdistribution via the linear weighted aggregativemethod usingtwo ormore symmetric probability distributions One typicalexample is asymmetric Studentrsquos 119905 mixture model (NSMM)[12] where each component density is modelled with multi-ple Studentrsquos 119905 distributions Another example of this case isthe Bayesian-bounded asymmetric mixture model (BAMM)[13] which was developed by a subset of the authors [12] andother coauthors for unsupervised image segmentation Eachcomponent of their approach can model different shapes ofobserved data with different bounded support regions Aclose relative of this framework involves a bounded asym-metrical Studentrsquos 119905 mixture model [14] Peculiarly we notethat the mixture of two or more different distributions hascaused great concern and yet has developed rapidly in recentyears Typical algorithms includeZhou et alrsquos statisticalmodel[15] which is a mixture of 119870-distribution and lognormaldistribution This also includes De Angelis et al [16] whooffered a robust time interval measurement method basedon a Gaussian-uniform mixture model Browne et al [17]incorporated a multivariate Gaussian and uniform distribu-tions as the component density which allowed for superiormixture possessing These methods demonstrate a compet-itive performance in fitting different shapes of observeddata

(3) Schemes Based on Trimming Method In general forGMM-based algorithms the parameters are estimated by theML estimator through the EM algorithm However the MLestimator is overly sensitive to outliers and GMM cannotaddress outliers properly Therefore outliers seriously dete-riorate the performance of Gaussian-based clustering algo-rithms To overcome this shortcoming a common approachis to consider a mixture model with Studentrsquos 119905 distribution(SMM) which provides a longer-tailed alternative to theGaussian distribution [18] Therefore SMM is more robustto outliers than the GMM for heavier tails Another model-based method which presents a theoretically well-basedsegmentation criterion in presence of outliers is the trim-ming method [19] The main principle in trimming is tolocate and discard the outliers from the likelihood functionSegmentation results benefit from the trimming approachMuller and Neykov proposed the fast trimmed likelihoodestimator (FAST-TLE) [20] Galimzianova et al developedthe confidence level ordering trimmed likelihood estimator(CLO-TLE) [21] However they do not function effectively in

noisy samples especially when each group has a different sizeof observations

Motivated by the aforementioned considerations in thispaper we present a two-step procedure (GGMM-TLE)beginning with a component-based confidence level order-ing trimmed likelihood estimator Because the majority ofobserved data contains outliers it is necessary to discardthese in a previous step before robustly estimating parame-ters As a new algorithm the proposed technique considersthe components with lower mixture weights This avoidseliminating the samples belonging to the components witha small number of observations as the outliers Then wepropose a novel finite mixture model based on a mixture ofgeneralized Gamma and Gaussian distributions (GGMM)The proposed GGMM with Markov random fields hashigh flexibility and can be used to fit the asymmetric dataowing to the introduction of the asymmetric generalizedGamma distribution Moreover we theoretically prove theproperty of identifiability of the GGMM through the strat-egy presented by Atienza et al [22 23] which indicatesthat the GGMMrsquos mixture representation is unique Thisproperty is crucial to ensure that the parameter estimationproblem is well posed Therefore the proposed algorithmcan be effectively applied for segmenting images More-over by imposing spatial smoothness constraints amongneighbouring pixels using MRF the neighbouring pixelsshould have the same label Therefore the proposed modelreduces the segmentation sensitivity to noise in a stillimage We demonstrate through simulation study that theproposed framework is superior to other related methodsin terms of the misclassification ratio and Dice similaritycoefficient

The remainder of this paper is organized as followsSection 2 introduces the proposedmixturemodel in detail InSection 3 we prove the identifiability of the proposedmixturemodel The process of parameter learning is described inSection 4 The ordering method for likelihood trimming isreported in Section 5 Section 6 provides the experimentalresults and analysis Finally we conclude with a discussionin Section 7

2 Model Formulation

Assume a set of data 119883 = 119909119894 | 119894 = 1 2 119873 where each119909119894 denotes an observation at the 119894th pixel of an image 119873 isthe total number of pixels in an imageThe proposedmixturemodel assumes that the density function at pixel 119909119894 is givenby

119891 (119909119894 | Θ) = 119870sum119895=1

120587119895119870119895sum119896=1

ℏ119895119896119901119895119896 (119909119894 | 120579119895119896) (1)

where Θ = 120587119895 ℏ119895119896 120579119895119896 is the complete parameter set of theproposed mixture model 119870 denotes the number of mixturecomponents The prior 120587119895 represents the probability that theobservation 119909119894 belongs to the 119895th label and ℏ119895119896 is called theweighting factor they satisfy the following constraints

Mathematical Problems in Engineering 3

119870sum119895=1

120587119895 = 1119870119895sum119896=1

ℏ119895119896 = 10 le 120587119895 ℏ119895119896 le 1

(2)

In this paper we set 119870119895 = 2 thus 1199011198951(119909119894 | 1205791198951) is thegeneralized Gamma distribution defined by

1199011198951 (119909119894 | 1205791198951)= 10038161003816100381610038161003816V11989510038161003816100381610038161003816120590119895Γ (119896119895) (

119909119894120590119895)119896119895V119895minus1

expminus(119909119894120590119895)V119895 (3)

where 1205791198951 = V119895 119896119895 120590119895 is the parameter set of generalizedGamma distribution V119895 is the power parameter 119896119895 is theshape parameter 120590119895 is the scale parameter and Γ(sdot) denotesthe Gamma function The probability density function ofGaussian distribution 1199011198952(119909119894 | 1205791198952) is defined as

1199011198952 (119909119894 | 1205791198952)= 1radic2120587Σ119895 exp minus

12 (119909119894 minus 119906119895)119879 Σminus1119895 (119909119894 minus 119906119895) (4)

where 1205791198952 = 119906119895 Σ119895 is the parameter set of Gaussiandistribution 119906119895 is the mean and Σ119895 denotes the covariance

According to Bayesian rules we express the posteriorprobability density function of the proposed model as

119891 (Θ | 119883) prop 119891 (119883 | Θ) 119901 (Θ) (5)

To train the proposed mixture model based on the aboveformulation we define the following maximum a posteriorilog-likelihood function

119871 (Θ | 119883) = log (119891 (Θ | 119883))prop log 119891 (119883 | Θ) + log119901 (Θ) (6)

TheMarkov random field based on Gibbs distribution can becharacterized by

119901 (Θ) = 119885minus1 expminus119880 (Θ)119879 (7)

where 119879 and 119885 are temperature and normalizing constantsrespectively In the proposed approach a new energy func-tion of the following form is chosen to enforce spatialsmoothness

119880 (Θ) = minus 119873sum119894=1

119870sum119895=1

119866119894119895 log120587119895 (8)

where

119866119894119895 = exp(120573 sum119898isin119873119894

(119911119898119895 + 120587119895)) (9)

where 119873119894 is the neighbourhood of the 119894th pixel includingthe 119894th pixel itself for example 3 times 3 or 5 times 5 119911119898119895 denotesthe posterior probability Eventually we can formulate thesegmentation problem as a maximum a posteriori problemusing the log-likelihood function as

119871 (Θ | 119883) = 119873sum119894=1

log119870sum119895=1

120587119895 [ 2sum119896=1

ℏ119895119896119901119895119896 (119909119894 | 120579119895119896)]minus log119885 + 1119879

119873sum119894=1

119870sum119895=1

119866119894119895 log120587119895(10)

The above scheme contains two parts where the firstdenotes the proposed mixture model and the last is theMarkov model In general the EM algorithm is an efficientframework for estimating the mixture model parameters

3 Identifiability of the ProposedMixture Model

This section discusses the property of identifiability of theGGMM This property implies that GGMM can only beexpressed by specific components Obviously this property isimportant for finite mixture model because it can guaranteethe estimation procedures of the parameter set to be welldefined [3 22] The property of identifiability is described asfollows

We defined the following set

119865 = 119891 119891120579 (119909) = ℏ11199011 (119909 | V 119896 120590) + ℏ21199012 (119909 | 119906 Σ) (11)

119865 is the family of proposed distributions where V = 0119896 gt 0 120590 gt 0 ℏ1 + ℏ2 = 1 In this study 1199011(119909 |V 119896 120590) is the generalized Gamma distribution and 1199012(119909 |119906 Σ) is the Gaussian distribution the parameters of theproposed distribution are mutually independent The set ofthe proposed mixture model with 120587119895 satisfying (2) is119867119865 = 119867 119867 (119909) = 119870sum

119895=1

120587119895119891120579119895 (119909) 1198911205791le119895le119870 (119909) isin 119865 (12)

Theorem 1 The property of identifiability of 119867119865 means thatfor any two mixture models11986712 isin 119867119865 that is

1198671 = 1198701sum119895=1

12058711198951198911205791119895 (119909) 1198672 = 1198702sum

119895=1

12058721198951198911205792119895 (119909) (13)

if 1198671 = 1198672 then 1198701 = 1198702 and (1205871119895 1198911205791119895)1198701119895=1 = (12058721198951198911205792119895)1198702119895=1Proof According to [22] we prove that for a linear trans-form 119872 119891120579(119909) rarr 120601119891 with domain 119878(119891) Let 1198780(119891) =

4 Mathematical Problems in Engineering

119898 isin 119878(119891) 120601119891(119898) = 0 For a given point 1198980 and any twoproposed distributions 1198911 1198912 isin 119865 there exists a total order ≺on 119865 that satisfies

1198911 ≺ 1198912 lArrrArrlim119898rarr1198980

1206011198912 (119898)1206011198911 (119898) = 0(14)

Given the expression of the linear transform119872 is as follows

119872[119891120579 (119909)] 120601119891 (119898) = 119864 (119909119898) = int+infinminusinfin

119909119898119891120579 (119909) 119889119909 (15)

where 119891120579(119909) is the proposed density function 119891120579(119909) =ℏ11199011(119909 | V 119896 120590) + ℏ21199012(119909 | 119906 Σ) Let 1198921205791(119909) = 1199011(119909 | V 119896 120590)and ℎ1205792(119909) = 1199012(119909 | 119906 Σ) then 119891120579(119909) = ℏ11198921205791(119909) + ℏ2ℎ1205792(119909)Obviously if 1198921 ≺ 1198922 and ℎ1 ≺ ℎ2 we can obtain 1198911 ≺ 1198912According to (15) we have

120601119892 (119898) = 120590119898 Γ (119896 + 119898V)Γ (119896) 119898 isin (minus119896V +infin) (16)

where 1198780(119892) = (minus119896V +infin) and 1198980 = +infin To facilitatethe proof procedure this study utilizes Stirlingrsquos formula asfollows

Γ (119911 + 1) sim radic2120587119911 (119911119890 )119911 119911 997888rarr +infin (17)

Thus we have

120601119892 (119898) sim 120590119898Γ (119896)radic2120587(119898V )119896+119898Vminus12

sdot (1 + 119896 minus 1119898 V)119896+119898Vminus12 exp 1 minus 119896 minus 119898V sim radic2120587Γ (119896)

sdot exp 119898 log120590 minus 119898V

sdot exp (119896 + 119898Vminus 12) (log119898 minus log V)

(18)

The sign ldquosimrdquo indicates that the expressions on both sides areequivalent up to constant term when 119898 rarr +infin Hence for119898 rarr 1198980 we have

1206011198922 (119898)1206011198921 (119898) sim 119862 exp( 1V2minus 1V1)119898 log119898

+ [(log1205902 minus log1205901) minus ( 1V2 minus 1V1)

+ ( 1V2

log 1V2minus 1V1

log 1V1)]119898

+ (1198962 minus 1198961) log119898

(19)

where 119862 is a constant From (19) we can derive 1198921 ≺ 1198922 hArr[V2 gt V1] or [V2 = V1 1205902 lt 1205901] or [V2 = V1 1205902 = 1205901 1198962 lt 1198961]which is apparently a total order Analogously we have

120601ℎ (119898) = exp 119898119906 + 12Σ1198982 (20)

where 1198780(ℎ) = (minusinfin +infin) and 1198980 = +infin For 119898 rarr 1198980 wehave

120601ℎ2 (119898)120601ℎ1 (119898) = exp 12 (Σ2 minus Σ1)1198982 + (1199062 minus 1199061)119898 (21)

From (21) we can determine that ℎ1 ≺ ℎ2 hArr [Σ2 lt Σ1] or[Σ2 = Σ1 1199062 lt 1199061] which is clearly a total order Overall wehave 1198911 lt 1198912 there exists a total order ≺ on 119865 Hence we candraw the conclusion that the GGMMs are identifiable

4 Parameter Learning

The main task of this section is to estimate the completeparameter set In general the EM algorithm provides anefficient scheme for unsupervised segmentation using itera-tive updating and guarantees that the log-likelihood functionconverges to a local maximum Considering the complexityof (10) it is difficult to apply the EM algorithm directlyfor maximizing the log-likelihood function (10) Thereforewe employ Jensenrsquos inequality by defining the two hiddenvariables 119911119894119895 and 119910119894119895119896 which are respectively

119911(119905)119894119895= 120587119895 [ℏ11989511199011198951 (119909119894 | 1205791198951) + ℏ11989521199011198952 (119909119894 | 1205791198952)]sum119870119898=1 120587119898 [ℏ11989811199011198981 (119909119894 | 1205791198981) + ℏ11989821199011198982 (119909119894 | 1205791198982)]

(22)

119910(119905)119894119895119896 = ℏ119895119896119901119895119896 (119909119894 | 120579119895119896)sum119870119895119898=1 ℏ119895119898119901119895119898 (119909119894 | 120579119895119898) (23)

Clearly 119911119894119895 and 119910119894119895119896 satisfy the constraints sum119870119895=1 119911119894119895 = 1and sum119870119895

119896=1119910119894119895119896 = 1 Using Jensenrsquos inequality one has

log(sum119870119895=1 119890119895) ge log(sum119870119895=1 119911119894119895119890119895) ge sum119870119895=1(119911119894119895 log 119890119895) thus the log-likelihood function (10) can be rewritten as follows

119871 (119883 | Θ) ge 119873sum119894=1

119870sum119895=1

119911(119905)119894119895 log120587119895

+ log[[119870119895sum119896=1

ℏ119895119896119901119895119896 (119909119894 | 120579119895119896)]] minus log119885 + 1119879

Mathematical Problems in Engineering 5

sdot 119873sum119894=1

119870sum119895=1

119866119894119895 log120587119895 ge 119873sum119894=1

119870sum119895=1

119911(119905)119894119895 log120587119895+ 119870119895sum119896=1

119910(119905)119894119895119896 [log ℏ119895119896 + log119901119895119896 (119909119894 | 120579119895119896)] minus log119885+ 1119879119873sum119894=1

119870sum119895=1

119866119894119895 log120587119895(24)

Thus we can define the following new objective function119876(119883 | Θ) in terms of Jensenrsquos inequality

119876 (119883 | Θ) = 119873sum119894=1

119870sum119895=1

119911(119905)119894119895 log120587119895 + 119870119895sum119896=1

119910(119905)119894119895119896 [log ℏ119895119896+ log119901119895119896 (119909119894 | 120579119895119896)] minus log119885 + 1119879

119873sum119894=1

119870sum119895=1

119866119894119895 log120587119895= 119873sum119894=1

119870sum119895=1

119911(119905)119894119895 log120587119895+ 119910(119905)1198941198951 [log ℏ1198951 + log1199011198951 (119909119894 | 1205791198951)]+ 119910(119905)1198941198952 [log ℏ1198952 + log1199011198952 (119909119894 | 1205791198952)] minus log119885 + 1119879sdot 119873sum119894=1

119870sum119895=1

119866119894119895 log120587119895

(25)

To realize clustering we must maximize the log-likelihoodfunction in (10) which is equivalent to maximizing theobjective function in (25) In particular to estimate the priorprobability 120587119895 we take the partial derivative of the objectivefunction in (25) with respect to 120587119895 yielding

120597120597120587119895 [[119876 (119883 | Θ) minus 120582( 119870sum119895=1

120587119895 minus 1)]] = 0 (26)

where 120582 is the Lagrange multiplier in consideration of theconstraint sum119870119895=1 120587119895 = 1 we have

120587(119905+1)119895 = 119911(119905)119894119895 + 119866(119905)119894119895sum119870119898=1 119911(119905)119894119898 + 119866(119905)119894119898 (27)

Similarly to estimate the weighting factor ℏ119895119896 we take thepartial derivative of the objective function in (25)with respectto ℏ119895119896

120597120597ℏ119895119896 [[119876 (119883 | Θ) minus 120591( 119870119895sum119896=1

ℏ119895119896 minus 1)]] = 0 (28)

where 120591 is the Lagrange multiplier in consideration of theconstraint sum119870119895

119896=1ℏ119895119896 = 1 we have

ℏ(119905+1)119895119896 = sum119873119894=1 119911(119905)119894119895 119910(119905)119894119895119896sum119873119894=1 119911(119905)119894119895 (29)

In the following we estimate the power parameter V119895 Wecalculate the partial derivative of the objective function (25)with this power parameter V119895 as follows

120597119876 (119883 | Θ)120597V119895= 119873sum119894=1

119911(119905)119894119895 119910(119905)1198941198951 [(119896119895 minus (119909119894120590119895)V119895) log(119909119894120590119895) + 1

V119895]

(30)

The solution of 120597119876(119883 | Θ)120597V119895 = 0 yields the estimates of V119895as follows

V(119905+1)119895

= sum119873119894=1 119911(119905)119894119895 119910(119905)1198941198951 [(V(119905)119895 )2 (119909119894120590(119905)119895 )V(119905)119895 log2 (119909119894120590(119905)119895 ) + 1]sum119873119894=1 119911(119905)119894119895 119910(119905)1198941198951 [120593(119905)119895 (119909119894120590(119905)119895 )V(119905)119895 minus 119896(119905)119895 ] log (119909119894120590(119905)119895 ) (31)

where 120593(119905)119895 = V(119905)119895 log(119909119894120590(119905)119895 ) + 1 Then to derive the solutionof the shape parameter 119896119895 we must calculate the partialderivative 119876(119883 | Θ) with respect to it We have

120597119876 (119883 | Θ)120597119896119895= 119873sum119894=1

119911(119905)119894119895 119910(119905)1198941198951 [V119895 log(119909119894120590119895) minus Φ0 (119896119895)] (32)

It is clear to see that the solution 120597119876(119883 | Θ)120597119896119895 = 0 yieldsthe updates for shape parameter 119896119895 by

Φ0 (119896119895) = V(119905+1)119895 sum119873119894=1 119911(119905)119894119895 119910(119905)1198941198951 log (119909119894120590(119905)119895 )sum119873119894=1 119911(119905)119894119895 119910(119905)1198941198951 (33)

whereΦ0(sdot) is the Digamma function 119896(119905+1)119895 can be calculatedby solving (33) via the bisection method [23] In the samefashion to obtain the estimate of the scale parameter 120590119895 wemust derive the partial derivative of 119876(119883 | Θ) over it

120597119876 (119883 | Θ)120597120590119895 = V119895120590119895119873sum119894=1

119911(119905)119894119895 119910(119905)1198941198951 [(119909119894120590119895)V119895 minus 119896119895] (34)

6 Mathematical Problems in Engineering

Equating 120597119876(119883 | Θ)120597120590119895 to zero we can obtain the updateformulas for scale parameter 120590119895 by

120590(119905+1)119895 = [[[[sum119873119894=1 119911(119905)119894119895 119910(119905)1198941198951119909V(119905+1)119895119894 119896(119905+1)119895 sum119873119894=1 119911(119905)119894119895 119910(119905)1198941198951

]]]]

1V(119905+1)119895

(35)

By calculating the partial derivative of the objective functionin (25) with parameter set 1205791198952 = 119906119895 Σ119895 we can obtain theestimation of the parameters mean 119906119895 and covariance Σ119895

120597119876 (119883 | Θ)120597119906119895 = 119873sum119894=1

119911(119905)119894119895 119910(119905)1198941198952 (119909119894 minus 119906119895Σ119895 ) 120597119876 (119883 | Θ)120597Σ119895 = 119873sum

119894=1

119911(119905)119894119895 119910(119905)1198941198952(minus 12Σ119895 +(119909119894 minus 119906119895)22Σ2119895 )

(36)

Eventually the final updates for these two parameters can beobtained by

119906(119905+1)119895 = sum119873119894=1 119911(119905)119894119895 119910(119905)1198941198952119909119894sum119873119894=1 119911(119905)119894119895 119910(119905)1198941198952 (37)

Σ(119905+1)119895 = sum119873119894=1 119911(119905)119894119895 119910(119905)1198941198952 (119909119894 minus 119906(119905+1)119895 )2sum119873119894=1 119911(119905)119894119895 119910(119905)1198941198952 (38)

At this point the parameter learning procedure is complete

5 Ordering Method for Likelihood Trimming

For observed data with heavy outliers it is preferred todiscard the outliers and to estimate the parameters of theproposed mixture model using the remaining data Assumethat 119883 is a sample with 119873 observations 120578 is the trimmingfraction and 119883119872 is the subsample with a size 119872 =119873(1 minus 120578) Theoretically the trimming fraction 120578 should behigher than the real outlier fraction value After cutting theoutliers we estimate the model parameters by maximizingthe objective function 119876(119883119872 | Θ) in subsample 119883119872 Themost important step is to discard the outliers and select thesubsample This requires a specific ordering for all of theobservations in the sample Typically the number of outliersis unpredictableThus it is important for the proposedmodelto avoid allowing observed data that belongs to the labelswith a small number of observations falling into outliersThis study presents an effective component-based confidencelevel ordering method In the proposed GGMM-TLE wedo not calculate the density function value as in FAST-TLE[20] for every single observation Rather we only utilizethe concept of confidence level for these observations toeliminate the effects of mixture weights and sample scalesCombined with the posterior probability 119911(119905)119894119895 in (22) wecan order the observations that belong to the same groupseparately Thus it is more reasonable for ordering theobservation with GGMM-TLE Specifically we derive the

following increasing inequality based on component-basedconfidence level ordering

intΩ1(11990911988911 )

119870119895sum119896=1

ℏ11198961199011119896 (120596 | 1205791119896) 119889120596 le sdot sdot sdotle intΩ1(11990911988911198731

)

119870119895sum119896=1

ℏ11198961199011119896 (120596 | 1205791119896) 119889120596

intΩ119870(1199091198891198701 )

119870119895sum119896=1

ℏ119870119896119901119870119896 (120596 | 120579119870119896) 119889120596 le sdot sdot sdotle intΩ119870(119909119889119870119873119870

)

119870119895sum119896=1

ℏ119870119896119901119870119896 (120596 | 120579119870119896) 119889120596

(39)

where Ω119895(119909119894) = 120596 isin Ω119895 sum119870119895119896=1ℏ119895119896119901119895119896(120596 | 120579119895119896) gesum119870119895

119896=1ℏ119895119896119901119895119896(119909119894 | 120579119895119896) 119895 = (1 119870) Ω119895 is the 119895th

component determined by the posterior probability 119873119895 isthe number of observations belonging to the 119895th componentClearly 119873119895 satisfies sum119870119895=1119873119895 = 119873 119889119895 = (1198891198951 1198891198952 119889119895119873119895)is the ordering of sample indices of the 119895th component Bysorting and discarding each component individually with thesame trimming fraction 120578 we can obtain the subsample ofeach component 119883119872119895 = 119909119889119895119898119872119895119898=1 where119872119895 = 119873119895(1 minus 120578)Hence the total subsample119883119872 can be expressed by the unionof the subsample of each component119883119872119895

119883119872 = 1198831198721 cup 1198831198722 cup sdot sdot sdot cup 119883119872119870 (40)

Finally the parameters of the proposedmixturemodel can beestimatedwith subsample119883119872 and objective function119876(119883119872 |Θ) In the proposed GGMM-TLE by evaluating the intervalintegral rather than log-likelihood value of the observationswe can obtain superior performance compared with the clas-sical FAST-TLEThis is becausewe can order the observationswithin every individual label to retain the consistency oftrimming proportion of each label Therefore regardless ofthe mixture weights and sample scales all observations ofeach label are equally considered Finally combined with thesteps of GGMM in Section 2 we summarize the proceduresof GGMM-TLE as follows

Step 1 Input the trimming fraction 120578 Initialize the parameterset Θ = 120587119895 ℏ1198951 1205791198951 ℏ1198952 1205791198952 where 1205791198951 = V119895 119896119895 120590119895 and1205791198952 = 119906119895 Σ119895Step 2 Based on the current parameter set Θ(119905) evaluate theposterior probability 119911119894119895 using (22) compute the variable 119910119894119895119896using (23) and classify the observations

Step 3 Perform component-based confidence level orderingusing (39) to obtain subsample119883(119905+1)119872

Mathematical Problems in Engineering 7

Step 4 Compute the objective function 119876(119883(119905+1)119872 | Θ(119905)) interms of (25) If 119876(119883(119905+1)119872 | Θ(119905)) ge 119876(119883(119905)119872 | Θ(119905)) continue toStep 5 else increase the value of the trimming fraction belowthe predefined threshold and obtain new subsample 119883(119905+1)119872lowastuntil the following condition is satisfied 119876(119883(119905+1)119872lowast | Θ(119905)) ge119876(119883(119905+1)119872 | Θ(119905)) set119872 = 119872lowast If the condition is not satisfiedterminate the procedure

Step 5 Update the prior probability 120587119895 and weighting factorℏ119895119896 using (27) and (29) respectively Compute the powerparameter V119895 shape parameter 119896119895 scale parameter 120590119895 meanparameter 119906119895 and the covariance parameter Σ119895 by solving(31) (33) (35) (37) and (38) respectively

Step 6 Maximize the objective function 119876(119883(119905+1)119872 | Θ(119905))using (25) and obtain the new parameter set Θ(119905+1) Ifthe termination condition is satisfied end the iterationsOtherwise set 119905 = 119905 + 1 and return to Step 2

6 Experimental Results

This section experimentally evaluates the proposed GGMM-TLE by considering the problem of real-world image seg-mentation and compares GGMM-TLE with other relatedalgorithms All algorithms are initialized using 119896-means Theexperiments were developed in MATLAB R2012b and wereexecuted on a personal computer with Intel(R) Core(TM)I7-6500U CPU 25GHz 8GB RAM 64-bit To obtain anobjective evaluation of the proposed method this paper usestwo measure criteria the misclassification ratio (MCR) [24]andDice similarity coefficient (DSC) [25]The former has thefollowing form

MCR = number of mis-segmented pixelstotal number of pixels

(41)

MCR is widely used in the literature to evaluate segmentationperformance For MCR the smaller the value of the MCRthe higher the accuracy of the segmentation The popularoverlap-based metric DSC is also employed to evaluate theproposed mixture model

DSC (119878119886 119878119898) = 2 1003816100381610038161003816119878119886 cap 119878119898100381610038161003816100381610038161003816100381610038161198781198861003816100381610038161003816 + 10038161003816100381610038161198781198981003816100381610038161003816 (42)

where 119878119886 denotes the shape of the automatic segmentationand 119878119898 indicates the shape of the manual segmentationobtained from the algorithm output The range of DSC isfrom zero to one with one denoting ideal segmentation andzero indicating poor segmentation

61 Test of the Proposed Trimming Approach The firstexperiment presented herein validates the behaviour ofthe proposed GGMM-TLE For this purpose we generatedthree labels of inlier observations and one label of outliersThe inlier observations consisted of 10000 points from a3-component bivariate GMM with prior probability 1205871 =

1205872 = 1205873 = 13 The means and variances of this bivariateGMM are

1205831 = (0 4)119879 1205832 = (4 0)119879 1205833 = (minus4 4)119879 Σ1 = (1 11 2) Σ2 = (1 00 1) Σ3 = ( 1 minus05minus05 3 )

(43)

Labels 1 2 and 3 have 2500 3500 and 4000 pointsrespectively Random noise with 1000 points was addedin the plusmn10 rectangle This random noise is considered asoutliers For comparison apart from the proposed approachwe also included the performance of FAST-TLE [20] andCLO-TLE [21] The trimming fraction 120578 varied in the range[01 05] The segmentation results of the different methodsare presented in Figure 1 From Figure 1 we can observethat the classical GMM was sensitive to outliers resulting inpoor clustering performance in terms of visual interpretationIt is clear that the performance of the FAST-TLE methodwas less influenced by the outliers However it was alsounstable especially when the trimming fraction was highThe CLO-TLE ordering strategy exhibited a higher stabilityversus the previous two algorithmswith the use of confidencelevel orderingHowever there remainedmisclassified outliersdemonstrating that the fitting influence of CLO-TLE was notideal Conversely we determined that the best performancewas with the proposed GGMM-TLE this is because everyobservation of the individual groups was considered Thisfigure indicates that the GGMM-TLE can extract clear pointaccumulations from noise data

62 Segmentation of Noise-Degraded Images To demonstratethe feasibility of GGMM-TLE the following experimentused four real-world images (ldquoBoatrdquo ldquoCowrdquo ldquoHouserdquo andldquoManrdquo) from the semantic boundaries dataset (SBD) [26]for comparison These images were segmented into threelabels All these images were contaminated by Salt and Peppernoise with intensity 5 Figure 2 presents the visualization ofthe segmentation task with trimming fraction 02 where thesecond third and fourth columns correspond to the FAST-TLE CLO-TLE and GGMM-TLE algorithms respectivelyFigure 2 shows the detailed parts of the corresponding seg-mentation results using different approaches Note that theGGMM-TLE eliminates obviously the noise as predicatedWedemonstrate the log-likelihood function versus the numberof iteration under trimming fractions 02 for the different testimages in Figure 3 It can be clearly observed that the log-likelihood functions of FAST-TLE and CLO-TLE are similar

8 Mathematical Problems in Engineering

minus10 1086420minus2minus4minus6minus8minus10

minus8

minus6

minus4

minus2

0

2

4

6

8

10

= 01

= 03

= 05

(a)

minus10 1086420minus2minus4minus6minus8minus10

minus8

minus6

minus4

minus2

0

2

4

6

8

10

= 01

= 03

= 05

(b)

minus10 1086420minus2minus4minus6minus8minus10

minus8

minus6

minus4

minus2

0

2

4

6

8

10

= 01

= 03

= 05

(c)

minus10 1086420minus2minus4minus6minus8minus10

minus8

minus6

minus4

minus2

0

2

4

6

8

10

= 01

= 03

= 05

(d)

Figure 1 Noisy synthetic data clustering using the considered methods (a) Classical GMM (b) FAST-TLE method (c) CLO-TLE methodand (d) GGMM-TLE method

to that of the proposed method However a closer inspectionof the iteration ranges [5 15] indicates that the GGMM-TLE method can moderately improve the convergence rateWhen the iterations is low (119905 le 5) the convergence ratewith GGMM-TLE is the biggest one In the general casethe GGMM-TLE method converges after five iterations InFigure 4 the MCR plots of each test image against differenttrimming fractions are displayed This figure implies that theproposed scheme achieved superior segmentation accuracyas the basic scheme because the MCR of the GGMM-TLEmethod was the least of all test images

The proposed algorithm was also assessed on a clinicalMR image to label the white mass (WM) and grey mass(GM) For this purpose a real MR image slice 42 of IBSR2from the IBSR dataset [27] was randomly selected to evaluatethe performance of the proposed GGMM-TLE against FAST-TLE and CLO-TLE Salt and Pepper noise with intensity5 and 10 was considered in our experiment Figure 5presents the performance of these methods under 5 Saltand Pepper noise and different trimming fractions It isclear that FAST-TLE did not demonstrate improved resultsfor heavier outliers in the segmentation task The CLO-TLE

Mathematical Problems in Engineering 9

(a) (b) (c) (d)

Figure 2 Segmentation results obtained by different methods for four test images (trimming fractions 02) (a) Original image with 5 Saltand Pepper noise The figures from (b) to (d) show FAST-TLE CLO-TLE and GGMM-TLE methods respectively

tended to achieve superior performance with an increase ofthe trimming fraction and could maintain its stability andeffectiveness A closer inspection of Figure 5 indicates that thesegmentation accuracy of GGMM-TLE was visually higherthan the other methods It is due to the fact that the proposedGGMM-TLE utilizes the advantages of confidence level forthese observations so that the effects of mixture weights andsample scales are eliminatedTherefore with the increasing ofthe trimming fractions GGMM-TLE exhibits better stabilityto outliers than CLO-TLE As shown in Figure 5 this is

especially apparent in GGMM-TLE with high trimmingfractions Figure 6 displays the evaluation results using theMCR metric It can be observed that the GGMM-TLE hadthe lowest MCR value thus its segmentation results weresuperior to those of FAST-TLE and CLO-TLE

Further we executed these algorithms 20 times each timewith different initializationThen we computed the (average)performance in terms of the number of correctly classifieddata points and the DSC for this MR image including whitematter and grey matter Table 1 lists the mean values and the

10 Mathematical Problems in Engineering

GGMM-TLECLO-TLEFAST-TLE

times104

54

55

56

57

58

59

6Li

kelih

ood

func

tion

10 20 30 40 500Iterations

(a)

46

48

5

52

54

56

58

Like

lihoo

d fu

nctio

n

10 20 30 40 500Iterations

GGMM-TLECLO-TLEFAST-TLE

times104

(b)

4

42

44

46

48

5

52

54

56

Like

lihoo

d fu

nctio

n

10 20 30 40 500Iterations

GGMM-TLECLO-TLEFAST-TLE

times104

(c)

10 20 30 40 500Iterations

38

4

42

44

46

48

5

52

54

Like

lihoo

d fu

nctio

n

GGMM-TLECLO-TLEFAST-TLE

times104

(d)

Figure 3 Comparison of likelihood values for the different test images with trimming faction 02 (a) Test image ldquoBoatrdquo (b) test image ldquoCowrdquo(c) test image ldquoHouserdquo and (d) test image ldquoManrdquo

standard deviations of the DSC obtained from 20 executionsThe experiment results demonstrate that the accuracy wasmoderately improved compared with the other methods

To assess the robustness of the proposed GGMM-TLEat different levels of noise a set of real-world images fromthe Berkeley image dataset [28] was considered to comparethe performance of GMM [29] SMM [30] GΓMM [31]NSMM [12] and ACAP [8] The ground-truth informationwas freely obtained from the website [31] This was usedfor algorithm performance evaluation The experiment was

performed with noisy version images by adding Gaussiannoise (zero mean 001 variance) and Salt and Pepper noise(3) to the images as indicated in the first row of Figures 7and 8 The evaluated algorithms were initialized using the 119896-means algorithmThe number of label119870was set according tohuman visual inspection Figures 7 and 8 exhibit the resultsof image segmentation using the different methods Owingto the application of a mean filter we can observe that theperformance of ACAPwas superior to GMM SMM GΓMMand NSMM The results generated by the ACAP achieved

Mathematical Problems in Engineering 11

FAST-TLECLO-TLEGGMM-TLE

015 02 025 03 035 04 045 0501Trimming fraction

002

003

004

005

006

MCR

(a)

015 02 025 03 035 04 045 0501Trimming fraction

003

004

005

006

MCR

FAST-TLECLO-TLEGGMM-TLE

(b)

015 02 025 03 035 04 045 0501Trimming fraction

001

002

003

004

005

006

007

MCR

FAST-TLECLO-TLEGGMM-TLE

(c)

01 015 02 025 03 035 04 045 05001

002

003

004

005

006

007

Trimming fraction

MCR

FAST-TLECLO-TLEGGMM-TLE

(d)

Figure 4 Plot ofMCR of test images against different trimming fractions (a) Test image ldquoBoatrdquo (b) test image ldquoCowrdquo (c) test image ldquoHouserdquoand (d) test image ldquoManrdquo

Table 1 Average DSC of different methods with diverse trimming fractions Original MR image with 5 Salt and Pepper noise (mean plusmnstandard deviation)

Methods Trimming fraction (WM)01 02 03 04 05

FAST-TLE 07153 plusmn 04191 07269 plusmn 04785 07081 plusmn 02083 06771 plusmn 04762 06743 plusmn 05216CLO-TLE 08181 plusmn 02721 07968 plusmn 02814 07966 plusmn 03612 07748 plusmn 03264 07484 plusmn 03673GGMM-TLE 08994 plusmn 01023 08380 plusmn 01571 08177 plusmn 01238 08040 plusmn 00779 07803 plusmn 02317Methods Trimming fraction (GM)

01 02 03 04 05FAST-TLE 08367 plusmn 04020 08378 plusmn 04060 08313 plusmn 03553 08015 plusmn 03723 07829 plusmn 04062CLO-TLE 09107 plusmn 03841 08931 plusmn 02379 08944 plusmn 01540 08803 plusmn 02783 08593 plusmn 01761GGMM-TLE 09561 plusmn 01262 09227 plusmn 01040 09095 plusmn 01235 09002 plusmn 01156 08837 plusmn 01240

12 Mathematical Problems in Engineering

(a)

(b)

(c)

Figure 5 (a) to (c) display the segmentation results of MR image (slice 42 of IBSR2) using FAST-TLE CLO-TLE and GGMM-TLErespectively The trimming fractions from left to right are 01 02 03 04 and 05

FASTminusTLECLOminusTLEGGMMminusTLE

002

003

004

005

006

007

008

009

MCR

015 02 025 03 035 04 045 0501Trimming fraction

(a)

01

015

02

025

03

035

04

045

05

MCR

015 02 025 03 035 04 045 0501Trimming fraction

FAST-TLECLO-TLEGGMM-TLE

(b)

Figure 6 Trimming fractions versus classification accuracy under noise environment MCR of different methods for MR image (slice 42 ofIBSR2) (a) 5 Salt and Pepper noise and (b) 10 Salt and Pepper noise

Mathematical Problems in Engineering 13

Figure 7 Segmentation performance comparison for real-world images in the Gaussian noise environment (zero mean variance 001) Fromthe first row to the last noisy image GMM SMM GΓMM NSMM ACAP and GGMM-TLE respectively From the first column to the lastcolumn test image 24063 8068 241004 55067 and 35010 (Berkeley Dataset)

similar results to GGMM-TLE however its performance wasimpaired when there was an abundance of rich details forexample in test image 241004 (the sixth row of Figure 7)TheGGMM-TLE provided a moderately improved performanceunder different noisy conditions and eliminated the influenceof widely spread noise data This characteristic is endemicto MRF and trimmed likelihood estimator The resultingDSC is reported in Tables 2 and 3 providing a quantitativecomparison among the algorithms The DSC and standarddeviation indicate that the proposed method outperformedthe other methods by preserving the highest DSC

To further demonstrate the goodness of GGMM-TLEagainst different noise in Figure 9 we display the meanvalues and standard deviations of the MCR obtained fromtwenty runs on two Berkeley test images (24063 and 35010)under different noise environments Considering the MCRon average the ACAP effectively eliminated the effects ofnoise during the segmentation processing and demonstratedacceptable segmentation results We determined that classi-cal GMM SMM and GΓMM were severely influenced byGaussian noise and could not accurately separate a regionfrom the background In the majority of cases the NSMM

14 Mathematical Problems in Engineering

Figure 8 Segmentation performance comparison for real-world images in the Salt and Pepper noise environment (3) From the first rowto the last noisy image GMM SMM GΓMM NSMM ACAP and GGMM-TLE respectively From the first column to the last column testimage 24063 8068 241004 55067 and 35010 (Berkeley Dataset)

approach was superior to SMM and GMM yet continuedto be influenced by varying degrees of Salt and Pepperand Gaussian noise As expected compared to the otheralgorithms the GGMM-TLEwas stable and achieved the bestsegmentation results according to the quantitative criterion

7 Concluding Remarks

In this paper a robust estimation of the proposed GGMM-TLE using a trimmed likelihood estimator for real-worldimage segmentation was proposed GGMM-TLE with MRF

implements a mixture of generalized Gamma and Gaussiandistributions

The main contribution of this paper is the presentationof an asymmetric finite model GGMM-TLE based on MRFWith this model we have high flexibility to fit differentshapes of observed data Further this study discussed theproperty of identifiability of the proposed mixture modelguaranteeing that the estimation procedure for the parame-ters was correctly developed Then to ensure that GGMM-TLE was robust against heavy outliers the paper offeredan effective method to discard the outliers in advance and

Mathematical Problems in Engineering 15

Trimming fraction 02

2 5 8 10Salt amp Pepper noise ()

0

002

004

006

008

01

012

014

016

018

02

MCR

GMMSMM

NSMMACAPGGMM-TLEGΓMM

(a)

001 004 006 008Gaussian noise (zero mean variance)

Trimming fraction 02

GMMSMM

NSMMACAPGGMM-TLE

0

002

004

006

008

01

012

014

016

018

02

MCR

GΓMM(b)

2 5 8 10Salt amp Pepper noise ()

Trimming fraction 02

GMMSMM

NSMMACAPGGMM-TLE

0

002

004

006

008

01

012

014

016

018

02

MCR

GΓMM(c)

001 004 006 008Gaussian noise (zero mean variance)

Trimming fraction 02

0

002

004

006

008

01

012

014

016

018

02

MCR

GMMSMMGΓMM

NSMMACAPGGMM-TLE

(d)

Figure 9 Average MCR for different methods on two test images under noisy environment (Berkeley Dataset) ((a) and (b)) Test image24063 ((c) and (d)) test image 35010

therefore GGMM-TLE demonstrated superior performanceunder modelling with samples contaminated with unknownoutliers Finally combined with MRF GGMM-TLE consid-ered the spatial relationship between neighbouring pixels anddemonstrated a stronger ability to resist different noise Thesegmentation results of synthetic data and real-world imagesconfirmed that the proposed method demonstrated superiorcompetitivenessThemain limitation of this algorithm is that

the segmentation task requires component-based confidencelevel ordering which increases the computational cost

As a future work one direction is to obtain other finitemixture models by testing different probability density func-tions Another possible direction is to extend the presentedmethod to a higher dimension in a straightforward mannersuch as fMRI time-series clustering We plan to address thesetopics in a separate paper

16 Mathematical Problems in Engineering

Table 2AverageDSCof differentmethods for Berkeley test imageswith zeromeanGaussian noise variance 001 (meanplusmn standard deviation)Images Label GMM SMM GΓMM24063 3 086005 plusmn 032173 089173 plusmn 020353 088071 plusmn 0204728068 3 085737 plusmn 028634 087439 plusmn 027448 086005 plusmn 031348241004 4 085388 plusmn 032745 089018 plusmn 024834 087803 plusmn 02089555067 4 087931 plusmn 021733 088342 plusmn 021631 088013 plusmn 03055635010 4 084904 plusmn 033850 088463 plusmn 023632 087991 plusmn 018584Images Label NSMM ACAP GGMM-TLE24063 3 091825 plusmn 015734 095004 plusmn 014848 097122 plusmn 0148918068 3 089173 plusmn 028042 094485 plusmn 016055 096108 plusmn 010723241004 4 092006 plusmn 021634 094670 plusmn 013114 097061 plusmn 01482655067 4 092601 plusmn 017203 096893 plusmn 012051 097175 plusmn 01387635010 4 092183 plusmn 020847 095380 plusmn 013954 096219 plusmn 015664

Table 3 Average DSC of different methods for Berkeley test images with 3 Salt and Pepper noise (mean plusmn standard deviation)

Images Label GMM SMM GΓMM24063 3 087126 plusmn 030566 090972 plusmn 023385 088482 plusmn 0188088068 3 085234 plusmn 032531 088082 plusmn 030318 087627 plusmn 033601241004 4 087294 plusmn 035061 088385 plusmn 030458 089714 plusmn 02505455067 4 088099 plusmn 032392 090400 plusmn 025767 088517 plusmn 02109235010 4 085337 plusmn 025011 089075 plusmn 021839 087630 plusmn 016432Images Label NSMM ACAP GGMM-TLE24063 3 092079 plusmn 016049 096103 plusmn 023214 098394 plusmn 0075148068 3 090213 plusmn 025321 095329 plusmn 011539 097328 plusmn 011482241004 4 093487 plusmn 016806 096271 plusmn 012378 097117 plusmn 01368355067 4 093247 plusmn 018579 098076 plusmn 010690 098587 plusmn 00665535010 4 093752 plusmn 017804 096053 plusmn 010842 097420 plusmn 012722Conflicts of Interest

The authors declare that they have no conflicts of interest

Acknowledgments

This work was supported by the National Natural ScienceFoundation of China under Grant no 61371150

References

[1] A Saglam and N A Baykan ldquoSequential image segmentationbased on minimum spanning tree representationrdquo PatternRecognition Letters vol 87 pp 155ndash162 2017

[2] S Yin Y Qian andMGong ldquoUnsupervised hierarchical imagesegmentation through fuzzy entropy maximizationrdquo PatternRecognition vol 68 pp 245ndash259 2017

[3] H-C Li V A Krylov P-Z Fan J Zerubia and W J EmeryldquoUnsupervised learning of generalized gamma mixture modelwith application in statistical modeling of high-resolution SARimagesrdquo IEEE Transactions on Geoscience and Remote Sensingvol 54 no 4 pp 2153ndash2170 2016

[4] A Roy A Pal and U Garain ldquoJCLMM A finite mixturemodel for clustering of circular-linear data and its applicationto psoriatic plaque segmentationrdquo Pattern Recognition vol 66pp 160ndash173 2017

[5] Y Bar-Yosef and Y Bistritz ldquoGaussian mixture models reduc-tion by variational maximummutual informationrdquo IEEE Trans-actions on Signal Processing vol 63 no 6 pp 1557ndash1569 2015

[6] R Zhang D H Ye D Pal J-B Thibault K D Sauer and C Bouman ldquoA Gaussian mixture MRF for model-based iterativereconstruction with applications to low-dose X-ray CTrdquo IEEETransactions on Computational Imaging vol 2 no 3 pp 359ndash374 2016

[7] H Z Yerebakan and M Dundar ldquoPartially collapsed parallelGibbs sampler for Dirichlet process mixture modelsrdquo PatternRecognition Letters vol 90 pp 22ndash27 2017

[8] H Zhang QM JWu and TM Nguyen ldquoIncorporatingmeantemplate into finite mixture model for image segmentationrdquoIEEE Transactions on Neural Networks and Learning Systemsvol 24 no 2 pp 328ndash335 2013

[9] A Matza and Y Bistritz ldquoSkew Gaussian mixture models forspeaker recognitionrdquo IET Signal Processing vol 8 no 8 pp860ndash867 2014

[10] T-T Van Cao ldquoModelling of inhomogeneity in radar clutterusing weibull mixture densitiesrdquo IET Radar Sonar amp Naviga-tion vol 8 no 3 pp 180ndash194 2014

[11] Q Peng and L Zhao ldquoSAR image filtering based on the Cauchy-Rayleigh mixture modelrdquo IEEE Geoscience and Remote SensingLetters vol 11 no 5 pp 960ndash966 2014

[12] TMNguyen andQM JWu ldquoA nonsymmetricmixturemodelfor unsupervised image segmentationrdquo IEEE Transactions onCybernetics vol 43 no 2 pp 751ndash765 2013

Mathematical Problems in Engineering 17

[13] T M Nguyen Q M J Wu D Mukherjee and H ZhangldquoA bayesian bounded asymmetric mixture model with seg-mentation applicationrdquo IEEE Journal of Biomedical and HealthInformatics vol 18 no 1 pp 109ndash119 2014

[14] T M Nguyen and Q M J Wu ldquoBounded asymmetricalstudentrsquos-t mixture modelrdquo IEEE Transactions on Cyberneticsvol 44 no 6 pp 857ndash869 2014

[15] X Zhou R Peng and C Wang ldquoA two-component K-Lognormal mixture model and its parameter estimationmethodrdquo IEEE Transactions on Geoscience and Remote Sensingvol 53 no 5 pp 2640ndash2651 2015

[16] A De Angelis G De Angelis and P Carbone ldquoUsing Gaussian-Uniform Mixture Models for Robust Time-Interval Measure-mentrdquo IEEE Transactions on Instrumentation andMeasurementvol 64 no 12 pp 3545ndash3554 2015

[17] R P Browne P D McNicholas and M D Sparling ldquoModel-based learning using a mixture of mixtures of gaussian anduniform distributionsrdquo IEEE Transactions on Pattern Analysisand Machine Intelligence vol 34 no 4 pp 814ndash817 2012

[18] J Sun A Zhou S Keates and S Liao ldquoSimultaneous BayesianClustering and Feature Selection Through Studentrsquos t MixturesModelrdquo IEEE Transactions on Neural Networks and LearningSystems 2017

[19] N Neykov P Filzmoser R Dimova and P Neytchev ldquoRobustfitting of mixtures using the trimmed likelihood estimatorrdquoComputational Statistics amp Data Analysis vol 52 no 1 pp 299ndash308 2007

[20] C H Muller and N Neykov ldquoBreakdown points of trimmedlikelihood estimators and related estimators in generalizedlinear modelsrdquo Journal of Statistical Planning and Inference vol116 no 2 pp 503ndash519 2003

[21] A Galimzianova F Pernus B Likar and Z Spiclin ldquoRobustestimation of unbalanced mixture models on samples withoutliersrdquo IEEE Transactions on Pattern Analysis and MachineIntelligence vol 37 no 11 pp 2273ndash2285 2015

[22] N Atienza J Garcia-Heras and J M Munoz-Pichardo ldquoAnew condition for identifiability of finite mixture distributionsrdquoMetrika vol 63 no 2 pp 215ndash221 2006

[23] W R Press S A TeukolskyW T Vetterling and B P FlanneryNumerical Recipes in C The Art of Scientific Computing Cam-bridge University Press Cambridge UK 2002

[24] Y Zhang M Brady and S Smith ldquoSegmentation of brain MRimages through a hidden Markov random field model andthe expectation-maximization algorithmrdquo IEEE Transactionson Medical Imaging vol 20 no 1 pp 45ndash57 2001

[25] LHan J HHipwell B Eiben et al ldquoA nonlinear biomechanicalmodel based registration method for aligning prone and supinemr breast imagesrdquo IEEE Transactions on Medical Imaging vol33 no 3 pp 682ndash694 2014

[26] B Hariharan P Arbelaez L Bourdev S Maji and J MalikldquoSemantic contours from inverse detectorsrdquo in Proceedings ofthe 2011 IEEE International Conference on Computer VisionICCV 2011 pp 991ndash998 Spain November 2011

[27] IBSR [Online] Available httpwwwnitrcorgprojectsibsr[28] S Meignen and H Meignen ldquoOn the modeling of small sample

distributions with generalized Gaussian density in a maximumlikelihood frameworkrdquo IEEE Transactions on Image Processingvol 15 no 6 pp 1647ndash1652 2006

[29] G McLachlan and D Peel Finite Mixture Models JohnWiley ampSons New York NY USA 2000

[30] D Peel and G J McLachlan ldquoRobust mixture modelling usingthe t distributionrdquo Statistics and Computing vol 10 no 4 pp339ndash348 2000

[31] T Elguebaly and N Bouguila ldquoBayesian learning of finite gen-eralized Gaussianmixturemodels on imagesrdquo Signal Processingvol 91 no 4 pp 801ndash820 2011

Hindawiwwwhindawicom Volume 2018

MathematicsJournal of

Hindawiwwwhindawicom Volume 2018

Mathematical Problems in Engineering

Applied MathematicsJournal of

Hindawiwwwhindawicom Volume 2018

Probability and StatisticsHindawiwwwhindawicom Volume 2018

Journal of

Hindawiwwwhindawicom Volume 2018

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawiwwwhindawicom Volume 2018

OptimizationJournal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Engineering Mathematics

International Journal of

Hindawiwwwhindawicom Volume 2018

Operations ResearchAdvances in

Journal of

Hindawiwwwhindawicom Volume 2018

Function SpacesAbstract and Applied AnalysisHindawiwwwhindawicom Volume 2018

International Journal of Mathematics and Mathematical Sciences

Hindawiwwwhindawicom Volume 2018

Hindawi Publishing Corporation httpwwwhindawicom Volume 2013Hindawiwwwhindawicom

The Scientific World Journal

Volume 2018

Hindawiwwwhindawicom Volume 2018Volume 2018

Numerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisAdvances inAdvances in Discrete Dynamics in

Nature and SocietyHindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom

Dierential EquationsInternational Journal of

Volume 2018

Hindawiwwwhindawicom Volume 2018

Decision SciencesAdvances in

Hindawiwwwhindawicom Volume 2018

AnalysisInternational Journal of

Hindawiwwwhindawicom Volume 2018

Stochastic AnalysisInternational Journal of

Submit your manuscripts atwwwhindawicom

Page 3: Image Segmentation Using a Trimmed Likelihood Estimator in ... · ResearchArticle Image Segmentation Using a Trimmed Likelihood Estimator in the Asymmetric Mixture Model Based on

Mathematical Problems in Engineering 3

119870sum119895=1

120587119895 = 1119870119895sum119896=1

ℏ119895119896 = 10 le 120587119895 ℏ119895119896 le 1

(2)

In this paper we set 119870119895 = 2 thus 1199011198951(119909119894 | 1205791198951) is thegeneralized Gamma distribution defined by

1199011198951 (119909119894 | 1205791198951)= 10038161003816100381610038161003816V11989510038161003816100381610038161003816120590119895Γ (119896119895) (

119909119894120590119895)119896119895V119895minus1

expminus(119909119894120590119895)V119895 (3)

where 1205791198951 = V119895 119896119895 120590119895 is the parameter set of generalizedGamma distribution V119895 is the power parameter 119896119895 is theshape parameter 120590119895 is the scale parameter and Γ(sdot) denotesthe Gamma function The probability density function ofGaussian distribution 1199011198952(119909119894 | 1205791198952) is defined as

1199011198952 (119909119894 | 1205791198952)= 1radic2120587Σ119895 exp minus

12 (119909119894 minus 119906119895)119879 Σminus1119895 (119909119894 minus 119906119895) (4)

where 1205791198952 = 119906119895 Σ119895 is the parameter set of Gaussiandistribution 119906119895 is the mean and Σ119895 denotes the covariance

According to Bayesian rules we express the posteriorprobability density function of the proposed model as

119891 (Θ | 119883) prop 119891 (119883 | Θ) 119901 (Θ) (5)

To train the proposed mixture model based on the aboveformulation we define the following maximum a posteriorilog-likelihood function

119871 (Θ | 119883) = log (119891 (Θ | 119883))prop log 119891 (119883 | Θ) + log119901 (Θ) (6)

TheMarkov random field based on Gibbs distribution can becharacterized by

119901 (Θ) = 119885minus1 expminus119880 (Θ)119879 (7)

where 119879 and 119885 are temperature and normalizing constantsrespectively In the proposed approach a new energy func-tion of the following form is chosen to enforce spatialsmoothness

119880 (Θ) = minus 119873sum119894=1

119870sum119895=1

119866119894119895 log120587119895 (8)

where

119866119894119895 = exp(120573 sum119898isin119873119894

(119911119898119895 + 120587119895)) (9)

where 119873119894 is the neighbourhood of the 119894th pixel includingthe 119894th pixel itself for example 3 times 3 or 5 times 5 119911119898119895 denotesthe posterior probability Eventually we can formulate thesegmentation problem as a maximum a posteriori problemusing the log-likelihood function as

119871 (Θ | 119883) = 119873sum119894=1

log119870sum119895=1

120587119895 [ 2sum119896=1

ℏ119895119896119901119895119896 (119909119894 | 120579119895119896)]minus log119885 + 1119879

119873sum119894=1

119870sum119895=1

119866119894119895 log120587119895(10)

The above scheme contains two parts where the firstdenotes the proposed mixture model and the last is theMarkov model In general the EM algorithm is an efficientframework for estimating the mixture model parameters

3 Identifiability of the ProposedMixture Model

This section discusses the property of identifiability of theGGMM This property implies that GGMM can only beexpressed by specific components Obviously this property isimportant for finite mixture model because it can guaranteethe estimation procedures of the parameter set to be welldefined [3 22] The property of identifiability is described asfollows

We defined the following set

119865 = 119891 119891120579 (119909) = ℏ11199011 (119909 | V 119896 120590) + ℏ21199012 (119909 | 119906 Σ) (11)

119865 is the family of proposed distributions where V = 0119896 gt 0 120590 gt 0 ℏ1 + ℏ2 = 1 In this study 1199011(119909 |V 119896 120590) is the generalized Gamma distribution and 1199012(119909 |119906 Σ) is the Gaussian distribution the parameters of theproposed distribution are mutually independent The set ofthe proposed mixture model with 120587119895 satisfying (2) is119867119865 = 119867 119867 (119909) = 119870sum

119895=1

120587119895119891120579119895 (119909) 1198911205791le119895le119870 (119909) isin 119865 (12)

Theorem 1 The property of identifiability of 119867119865 means thatfor any two mixture models11986712 isin 119867119865 that is

1198671 = 1198701sum119895=1

12058711198951198911205791119895 (119909) 1198672 = 1198702sum

119895=1

12058721198951198911205792119895 (119909) (13)

if 1198671 = 1198672 then 1198701 = 1198702 and (1205871119895 1198911205791119895)1198701119895=1 = (12058721198951198911205792119895)1198702119895=1Proof According to [22] we prove that for a linear trans-form 119872 119891120579(119909) rarr 120601119891 with domain 119878(119891) Let 1198780(119891) =

4 Mathematical Problems in Engineering

119898 isin 119878(119891) 120601119891(119898) = 0 For a given point 1198980 and any twoproposed distributions 1198911 1198912 isin 119865 there exists a total order ≺on 119865 that satisfies

1198911 ≺ 1198912 lArrrArrlim119898rarr1198980

1206011198912 (119898)1206011198911 (119898) = 0(14)

Given the expression of the linear transform119872 is as follows

119872[119891120579 (119909)] 120601119891 (119898) = 119864 (119909119898) = int+infinminusinfin

119909119898119891120579 (119909) 119889119909 (15)

where 119891120579(119909) is the proposed density function 119891120579(119909) =ℏ11199011(119909 | V 119896 120590) + ℏ21199012(119909 | 119906 Σ) Let 1198921205791(119909) = 1199011(119909 | V 119896 120590)and ℎ1205792(119909) = 1199012(119909 | 119906 Σ) then 119891120579(119909) = ℏ11198921205791(119909) + ℏ2ℎ1205792(119909)Obviously if 1198921 ≺ 1198922 and ℎ1 ≺ ℎ2 we can obtain 1198911 ≺ 1198912According to (15) we have

120601119892 (119898) = 120590119898 Γ (119896 + 119898V)Γ (119896) 119898 isin (minus119896V +infin) (16)

where 1198780(119892) = (minus119896V +infin) and 1198980 = +infin To facilitatethe proof procedure this study utilizes Stirlingrsquos formula asfollows

Γ (119911 + 1) sim radic2120587119911 (119911119890 )119911 119911 997888rarr +infin (17)

Thus we have

120601119892 (119898) sim 120590119898Γ (119896)radic2120587(119898V )119896+119898Vminus12

sdot (1 + 119896 minus 1119898 V)119896+119898Vminus12 exp 1 minus 119896 minus 119898V sim radic2120587Γ (119896)

sdot exp 119898 log120590 minus 119898V

sdot exp (119896 + 119898Vminus 12) (log119898 minus log V)

(18)

The sign ldquosimrdquo indicates that the expressions on both sides areequivalent up to constant term when 119898 rarr +infin Hence for119898 rarr 1198980 we have

1206011198922 (119898)1206011198921 (119898) sim 119862 exp( 1V2minus 1V1)119898 log119898

+ [(log1205902 minus log1205901) minus ( 1V2 minus 1V1)

+ ( 1V2

log 1V2minus 1V1

log 1V1)]119898

+ (1198962 minus 1198961) log119898

(19)

where 119862 is a constant From (19) we can derive 1198921 ≺ 1198922 hArr[V2 gt V1] or [V2 = V1 1205902 lt 1205901] or [V2 = V1 1205902 = 1205901 1198962 lt 1198961]which is apparently a total order Analogously we have

120601ℎ (119898) = exp 119898119906 + 12Σ1198982 (20)

where 1198780(ℎ) = (minusinfin +infin) and 1198980 = +infin For 119898 rarr 1198980 wehave

120601ℎ2 (119898)120601ℎ1 (119898) = exp 12 (Σ2 minus Σ1)1198982 + (1199062 minus 1199061)119898 (21)

From (21) we can determine that ℎ1 ≺ ℎ2 hArr [Σ2 lt Σ1] or[Σ2 = Σ1 1199062 lt 1199061] which is clearly a total order Overall wehave 1198911 lt 1198912 there exists a total order ≺ on 119865 Hence we candraw the conclusion that the GGMMs are identifiable

4 Parameter Learning

The main task of this section is to estimate the completeparameter set In general the EM algorithm provides anefficient scheme for unsupervised segmentation using itera-tive updating and guarantees that the log-likelihood functionconverges to a local maximum Considering the complexityof (10) it is difficult to apply the EM algorithm directlyfor maximizing the log-likelihood function (10) Thereforewe employ Jensenrsquos inequality by defining the two hiddenvariables 119911119894119895 and 119910119894119895119896 which are respectively

119911(119905)119894119895= 120587119895 [ℏ11989511199011198951 (119909119894 | 1205791198951) + ℏ11989521199011198952 (119909119894 | 1205791198952)]sum119870119898=1 120587119898 [ℏ11989811199011198981 (119909119894 | 1205791198981) + ℏ11989821199011198982 (119909119894 | 1205791198982)]

(22)

119910(119905)119894119895119896 = ℏ119895119896119901119895119896 (119909119894 | 120579119895119896)sum119870119895119898=1 ℏ119895119898119901119895119898 (119909119894 | 120579119895119898) (23)

Clearly 119911119894119895 and 119910119894119895119896 satisfy the constraints sum119870119895=1 119911119894119895 = 1and sum119870119895

119896=1119910119894119895119896 = 1 Using Jensenrsquos inequality one has

log(sum119870119895=1 119890119895) ge log(sum119870119895=1 119911119894119895119890119895) ge sum119870119895=1(119911119894119895 log 119890119895) thus the log-likelihood function (10) can be rewritten as follows

119871 (119883 | Θ) ge 119873sum119894=1

119870sum119895=1

119911(119905)119894119895 log120587119895

+ log[[119870119895sum119896=1

ℏ119895119896119901119895119896 (119909119894 | 120579119895119896)]] minus log119885 + 1119879

Mathematical Problems in Engineering 5

sdot 119873sum119894=1

119870sum119895=1

119866119894119895 log120587119895 ge 119873sum119894=1

119870sum119895=1

119911(119905)119894119895 log120587119895+ 119870119895sum119896=1

119910(119905)119894119895119896 [log ℏ119895119896 + log119901119895119896 (119909119894 | 120579119895119896)] minus log119885+ 1119879119873sum119894=1

119870sum119895=1

119866119894119895 log120587119895(24)

Thus we can define the following new objective function119876(119883 | Θ) in terms of Jensenrsquos inequality

119876 (119883 | Θ) = 119873sum119894=1

119870sum119895=1

119911(119905)119894119895 log120587119895 + 119870119895sum119896=1

119910(119905)119894119895119896 [log ℏ119895119896+ log119901119895119896 (119909119894 | 120579119895119896)] minus log119885 + 1119879

119873sum119894=1

119870sum119895=1

119866119894119895 log120587119895= 119873sum119894=1

119870sum119895=1

119911(119905)119894119895 log120587119895+ 119910(119905)1198941198951 [log ℏ1198951 + log1199011198951 (119909119894 | 1205791198951)]+ 119910(119905)1198941198952 [log ℏ1198952 + log1199011198952 (119909119894 | 1205791198952)] minus log119885 + 1119879sdot 119873sum119894=1

119870sum119895=1

119866119894119895 log120587119895

(25)

To realize clustering we must maximize the log-likelihoodfunction in (10) which is equivalent to maximizing theobjective function in (25) In particular to estimate the priorprobability 120587119895 we take the partial derivative of the objectivefunction in (25) with respect to 120587119895 yielding

120597120597120587119895 [[119876 (119883 | Θ) minus 120582( 119870sum119895=1

120587119895 minus 1)]] = 0 (26)

where 120582 is the Lagrange multiplier in consideration of theconstraint sum119870119895=1 120587119895 = 1 we have

120587(119905+1)119895 = 119911(119905)119894119895 + 119866(119905)119894119895sum119870119898=1 119911(119905)119894119898 + 119866(119905)119894119898 (27)

Similarly to estimate the weighting factor ℏ119895119896 we take thepartial derivative of the objective function in (25)with respectto ℏ119895119896

120597120597ℏ119895119896 [[119876 (119883 | Θ) minus 120591( 119870119895sum119896=1

ℏ119895119896 minus 1)]] = 0 (28)

where 120591 is the Lagrange multiplier in consideration of theconstraint sum119870119895

119896=1ℏ119895119896 = 1 we have

ℏ(119905+1)119895119896 = sum119873119894=1 119911(119905)119894119895 119910(119905)119894119895119896sum119873119894=1 119911(119905)119894119895 (29)

In the following we estimate the power parameter V119895 Wecalculate the partial derivative of the objective function (25)with this power parameter V119895 as follows

120597119876 (119883 | Θ)120597V119895= 119873sum119894=1

119911(119905)119894119895 119910(119905)1198941198951 [(119896119895 minus (119909119894120590119895)V119895) log(119909119894120590119895) + 1

V119895]

(30)

The solution of 120597119876(119883 | Θ)120597V119895 = 0 yields the estimates of V119895as follows

V(119905+1)119895

= sum119873119894=1 119911(119905)119894119895 119910(119905)1198941198951 [(V(119905)119895 )2 (119909119894120590(119905)119895 )V(119905)119895 log2 (119909119894120590(119905)119895 ) + 1]sum119873119894=1 119911(119905)119894119895 119910(119905)1198941198951 [120593(119905)119895 (119909119894120590(119905)119895 )V(119905)119895 minus 119896(119905)119895 ] log (119909119894120590(119905)119895 ) (31)

where 120593(119905)119895 = V(119905)119895 log(119909119894120590(119905)119895 ) + 1 Then to derive the solutionof the shape parameter 119896119895 we must calculate the partialderivative 119876(119883 | Θ) with respect to it We have

120597119876 (119883 | Θ)120597119896119895= 119873sum119894=1

119911(119905)119894119895 119910(119905)1198941198951 [V119895 log(119909119894120590119895) minus Φ0 (119896119895)] (32)

It is clear to see that the solution 120597119876(119883 | Θ)120597119896119895 = 0 yieldsthe updates for shape parameter 119896119895 by

Φ0 (119896119895) = V(119905+1)119895 sum119873119894=1 119911(119905)119894119895 119910(119905)1198941198951 log (119909119894120590(119905)119895 )sum119873119894=1 119911(119905)119894119895 119910(119905)1198941198951 (33)

whereΦ0(sdot) is the Digamma function 119896(119905+1)119895 can be calculatedby solving (33) via the bisection method [23] In the samefashion to obtain the estimate of the scale parameter 120590119895 wemust derive the partial derivative of 119876(119883 | Θ) over it

120597119876 (119883 | Θ)120597120590119895 = V119895120590119895119873sum119894=1

119911(119905)119894119895 119910(119905)1198941198951 [(119909119894120590119895)V119895 minus 119896119895] (34)

6 Mathematical Problems in Engineering

Equating 120597119876(119883 | Θ)120597120590119895 to zero we can obtain the updateformulas for scale parameter 120590119895 by

120590(119905+1)119895 = [[[[sum119873119894=1 119911(119905)119894119895 119910(119905)1198941198951119909V(119905+1)119895119894 119896(119905+1)119895 sum119873119894=1 119911(119905)119894119895 119910(119905)1198941198951

]]]]

1V(119905+1)119895

(35)

By calculating the partial derivative of the objective functionin (25) with parameter set 1205791198952 = 119906119895 Σ119895 we can obtain theestimation of the parameters mean 119906119895 and covariance Σ119895

120597119876 (119883 | Θ)120597119906119895 = 119873sum119894=1

119911(119905)119894119895 119910(119905)1198941198952 (119909119894 minus 119906119895Σ119895 ) 120597119876 (119883 | Θ)120597Σ119895 = 119873sum

119894=1

119911(119905)119894119895 119910(119905)1198941198952(minus 12Σ119895 +(119909119894 minus 119906119895)22Σ2119895 )

(36)

Eventually the final updates for these two parameters can beobtained by

119906(119905+1)119895 = sum119873119894=1 119911(119905)119894119895 119910(119905)1198941198952119909119894sum119873119894=1 119911(119905)119894119895 119910(119905)1198941198952 (37)

Σ(119905+1)119895 = sum119873119894=1 119911(119905)119894119895 119910(119905)1198941198952 (119909119894 minus 119906(119905+1)119895 )2sum119873119894=1 119911(119905)119894119895 119910(119905)1198941198952 (38)

At this point the parameter learning procedure is complete

5 Ordering Method for Likelihood Trimming

For observed data with heavy outliers it is preferred todiscard the outliers and to estimate the parameters of theproposed mixture model using the remaining data Assumethat 119883 is a sample with 119873 observations 120578 is the trimmingfraction and 119883119872 is the subsample with a size 119872 =119873(1 minus 120578) Theoretically the trimming fraction 120578 should behigher than the real outlier fraction value After cutting theoutliers we estimate the model parameters by maximizingthe objective function 119876(119883119872 | Θ) in subsample 119883119872 Themost important step is to discard the outliers and select thesubsample This requires a specific ordering for all of theobservations in the sample Typically the number of outliersis unpredictableThus it is important for the proposedmodelto avoid allowing observed data that belongs to the labelswith a small number of observations falling into outliersThis study presents an effective component-based confidencelevel ordering method In the proposed GGMM-TLE wedo not calculate the density function value as in FAST-TLE[20] for every single observation Rather we only utilizethe concept of confidence level for these observations toeliminate the effects of mixture weights and sample scalesCombined with the posterior probability 119911(119905)119894119895 in (22) wecan order the observations that belong to the same groupseparately Thus it is more reasonable for ordering theobservation with GGMM-TLE Specifically we derive the

following increasing inequality based on component-basedconfidence level ordering

intΩ1(11990911988911 )

119870119895sum119896=1

ℏ11198961199011119896 (120596 | 1205791119896) 119889120596 le sdot sdot sdotle intΩ1(11990911988911198731

)

119870119895sum119896=1

ℏ11198961199011119896 (120596 | 1205791119896) 119889120596

intΩ119870(1199091198891198701 )

119870119895sum119896=1

ℏ119870119896119901119870119896 (120596 | 120579119870119896) 119889120596 le sdot sdot sdotle intΩ119870(119909119889119870119873119870

)

119870119895sum119896=1

ℏ119870119896119901119870119896 (120596 | 120579119870119896) 119889120596

(39)

where Ω119895(119909119894) = 120596 isin Ω119895 sum119870119895119896=1ℏ119895119896119901119895119896(120596 | 120579119895119896) gesum119870119895

119896=1ℏ119895119896119901119895119896(119909119894 | 120579119895119896) 119895 = (1 119870) Ω119895 is the 119895th

component determined by the posterior probability 119873119895 isthe number of observations belonging to the 119895th componentClearly 119873119895 satisfies sum119870119895=1119873119895 = 119873 119889119895 = (1198891198951 1198891198952 119889119895119873119895)is the ordering of sample indices of the 119895th component Bysorting and discarding each component individually with thesame trimming fraction 120578 we can obtain the subsample ofeach component 119883119872119895 = 119909119889119895119898119872119895119898=1 where119872119895 = 119873119895(1 minus 120578)Hence the total subsample119883119872 can be expressed by the unionof the subsample of each component119883119872119895

119883119872 = 1198831198721 cup 1198831198722 cup sdot sdot sdot cup 119883119872119870 (40)

Finally the parameters of the proposedmixturemodel can beestimatedwith subsample119883119872 and objective function119876(119883119872 |Θ) In the proposed GGMM-TLE by evaluating the intervalintegral rather than log-likelihood value of the observationswe can obtain superior performance compared with the clas-sical FAST-TLEThis is becausewe can order the observationswithin every individual label to retain the consistency oftrimming proportion of each label Therefore regardless ofthe mixture weights and sample scales all observations ofeach label are equally considered Finally combined with thesteps of GGMM in Section 2 we summarize the proceduresof GGMM-TLE as follows

Step 1 Input the trimming fraction 120578 Initialize the parameterset Θ = 120587119895 ℏ1198951 1205791198951 ℏ1198952 1205791198952 where 1205791198951 = V119895 119896119895 120590119895 and1205791198952 = 119906119895 Σ119895Step 2 Based on the current parameter set Θ(119905) evaluate theposterior probability 119911119894119895 using (22) compute the variable 119910119894119895119896using (23) and classify the observations

Step 3 Perform component-based confidence level orderingusing (39) to obtain subsample119883(119905+1)119872

Mathematical Problems in Engineering 7

Step 4 Compute the objective function 119876(119883(119905+1)119872 | Θ(119905)) interms of (25) If 119876(119883(119905+1)119872 | Θ(119905)) ge 119876(119883(119905)119872 | Θ(119905)) continue toStep 5 else increase the value of the trimming fraction belowthe predefined threshold and obtain new subsample 119883(119905+1)119872lowastuntil the following condition is satisfied 119876(119883(119905+1)119872lowast | Θ(119905)) ge119876(119883(119905+1)119872 | Θ(119905)) set119872 = 119872lowast If the condition is not satisfiedterminate the procedure

Step 5 Update the prior probability 120587119895 and weighting factorℏ119895119896 using (27) and (29) respectively Compute the powerparameter V119895 shape parameter 119896119895 scale parameter 120590119895 meanparameter 119906119895 and the covariance parameter Σ119895 by solving(31) (33) (35) (37) and (38) respectively

Step 6 Maximize the objective function 119876(119883(119905+1)119872 | Θ(119905))using (25) and obtain the new parameter set Θ(119905+1) Ifthe termination condition is satisfied end the iterationsOtherwise set 119905 = 119905 + 1 and return to Step 2

6 Experimental Results

This section experimentally evaluates the proposed GGMM-TLE by considering the problem of real-world image seg-mentation and compares GGMM-TLE with other relatedalgorithms All algorithms are initialized using 119896-means Theexperiments were developed in MATLAB R2012b and wereexecuted on a personal computer with Intel(R) Core(TM)I7-6500U CPU 25GHz 8GB RAM 64-bit To obtain anobjective evaluation of the proposed method this paper usestwo measure criteria the misclassification ratio (MCR) [24]andDice similarity coefficient (DSC) [25]The former has thefollowing form

MCR = number of mis-segmented pixelstotal number of pixels

(41)

MCR is widely used in the literature to evaluate segmentationperformance For MCR the smaller the value of the MCRthe higher the accuracy of the segmentation The popularoverlap-based metric DSC is also employed to evaluate theproposed mixture model

DSC (119878119886 119878119898) = 2 1003816100381610038161003816119878119886 cap 119878119898100381610038161003816100381610038161003816100381610038161198781198861003816100381610038161003816 + 10038161003816100381610038161198781198981003816100381610038161003816 (42)

where 119878119886 denotes the shape of the automatic segmentationand 119878119898 indicates the shape of the manual segmentationobtained from the algorithm output The range of DSC isfrom zero to one with one denoting ideal segmentation andzero indicating poor segmentation

61 Test of the Proposed Trimming Approach The firstexperiment presented herein validates the behaviour ofthe proposed GGMM-TLE For this purpose we generatedthree labels of inlier observations and one label of outliersThe inlier observations consisted of 10000 points from a3-component bivariate GMM with prior probability 1205871 =

1205872 = 1205873 = 13 The means and variances of this bivariateGMM are

1205831 = (0 4)119879 1205832 = (4 0)119879 1205833 = (minus4 4)119879 Σ1 = (1 11 2) Σ2 = (1 00 1) Σ3 = ( 1 minus05minus05 3 )

(43)

Labels 1 2 and 3 have 2500 3500 and 4000 pointsrespectively Random noise with 1000 points was addedin the plusmn10 rectangle This random noise is considered asoutliers For comparison apart from the proposed approachwe also included the performance of FAST-TLE [20] andCLO-TLE [21] The trimming fraction 120578 varied in the range[01 05] The segmentation results of the different methodsare presented in Figure 1 From Figure 1 we can observethat the classical GMM was sensitive to outliers resulting inpoor clustering performance in terms of visual interpretationIt is clear that the performance of the FAST-TLE methodwas less influenced by the outliers However it was alsounstable especially when the trimming fraction was highThe CLO-TLE ordering strategy exhibited a higher stabilityversus the previous two algorithmswith the use of confidencelevel orderingHowever there remainedmisclassified outliersdemonstrating that the fitting influence of CLO-TLE was notideal Conversely we determined that the best performancewas with the proposed GGMM-TLE this is because everyobservation of the individual groups was considered Thisfigure indicates that the GGMM-TLE can extract clear pointaccumulations from noise data

62 Segmentation of Noise-Degraded Images To demonstratethe feasibility of GGMM-TLE the following experimentused four real-world images (ldquoBoatrdquo ldquoCowrdquo ldquoHouserdquo andldquoManrdquo) from the semantic boundaries dataset (SBD) [26]for comparison These images were segmented into threelabels All these images were contaminated by Salt and Peppernoise with intensity 5 Figure 2 presents the visualization ofthe segmentation task with trimming fraction 02 where thesecond third and fourth columns correspond to the FAST-TLE CLO-TLE and GGMM-TLE algorithms respectivelyFigure 2 shows the detailed parts of the corresponding seg-mentation results using different approaches Note that theGGMM-TLE eliminates obviously the noise as predicatedWedemonstrate the log-likelihood function versus the numberof iteration under trimming fractions 02 for the different testimages in Figure 3 It can be clearly observed that the log-likelihood functions of FAST-TLE and CLO-TLE are similar

8 Mathematical Problems in Engineering

minus10 1086420minus2minus4minus6minus8minus10

minus8

minus6

minus4

minus2

0

2

4

6

8

10

= 01

= 03

= 05

(a)

minus10 1086420minus2minus4minus6minus8minus10

minus8

minus6

minus4

minus2

0

2

4

6

8

10

= 01

= 03

= 05

(b)

minus10 1086420minus2minus4minus6minus8minus10

minus8

minus6

minus4

minus2

0

2

4

6

8

10

= 01

= 03

= 05

(c)

minus10 1086420minus2minus4minus6minus8minus10

minus8

minus6

minus4

minus2

0

2

4

6

8

10

= 01

= 03

= 05

(d)

Figure 1 Noisy synthetic data clustering using the considered methods (a) Classical GMM (b) FAST-TLE method (c) CLO-TLE methodand (d) GGMM-TLE method

to that of the proposed method However a closer inspectionof the iteration ranges [5 15] indicates that the GGMM-TLE method can moderately improve the convergence rateWhen the iterations is low (119905 le 5) the convergence ratewith GGMM-TLE is the biggest one In the general casethe GGMM-TLE method converges after five iterations InFigure 4 the MCR plots of each test image against differenttrimming fractions are displayed This figure implies that theproposed scheme achieved superior segmentation accuracyas the basic scheme because the MCR of the GGMM-TLEmethod was the least of all test images

The proposed algorithm was also assessed on a clinicalMR image to label the white mass (WM) and grey mass(GM) For this purpose a real MR image slice 42 of IBSR2from the IBSR dataset [27] was randomly selected to evaluatethe performance of the proposed GGMM-TLE against FAST-TLE and CLO-TLE Salt and Pepper noise with intensity5 and 10 was considered in our experiment Figure 5presents the performance of these methods under 5 Saltand Pepper noise and different trimming fractions It isclear that FAST-TLE did not demonstrate improved resultsfor heavier outliers in the segmentation task The CLO-TLE

Mathematical Problems in Engineering 9

(a) (b) (c) (d)

Figure 2 Segmentation results obtained by different methods for four test images (trimming fractions 02) (a) Original image with 5 Saltand Pepper noise The figures from (b) to (d) show FAST-TLE CLO-TLE and GGMM-TLE methods respectively

tended to achieve superior performance with an increase ofthe trimming fraction and could maintain its stability andeffectiveness A closer inspection of Figure 5 indicates that thesegmentation accuracy of GGMM-TLE was visually higherthan the other methods It is due to the fact that the proposedGGMM-TLE utilizes the advantages of confidence level forthese observations so that the effects of mixture weights andsample scales are eliminatedTherefore with the increasing ofthe trimming fractions GGMM-TLE exhibits better stabilityto outliers than CLO-TLE As shown in Figure 5 this is

especially apparent in GGMM-TLE with high trimmingfractions Figure 6 displays the evaluation results using theMCR metric It can be observed that the GGMM-TLE hadthe lowest MCR value thus its segmentation results weresuperior to those of FAST-TLE and CLO-TLE

Further we executed these algorithms 20 times each timewith different initializationThen we computed the (average)performance in terms of the number of correctly classifieddata points and the DSC for this MR image including whitematter and grey matter Table 1 lists the mean values and the

10 Mathematical Problems in Engineering

GGMM-TLECLO-TLEFAST-TLE

times104

54

55

56

57

58

59

6Li

kelih

ood

func

tion

10 20 30 40 500Iterations

(a)

46

48

5

52

54

56

58

Like

lihoo

d fu

nctio

n

10 20 30 40 500Iterations

GGMM-TLECLO-TLEFAST-TLE

times104

(b)

4

42

44

46

48

5

52

54

56

Like

lihoo

d fu

nctio

n

10 20 30 40 500Iterations

GGMM-TLECLO-TLEFAST-TLE

times104

(c)

10 20 30 40 500Iterations

38

4

42

44

46

48

5

52

54

Like

lihoo

d fu

nctio

n

GGMM-TLECLO-TLEFAST-TLE

times104

(d)

Figure 3 Comparison of likelihood values for the different test images with trimming faction 02 (a) Test image ldquoBoatrdquo (b) test image ldquoCowrdquo(c) test image ldquoHouserdquo and (d) test image ldquoManrdquo

standard deviations of the DSC obtained from 20 executionsThe experiment results demonstrate that the accuracy wasmoderately improved compared with the other methods

To assess the robustness of the proposed GGMM-TLEat different levels of noise a set of real-world images fromthe Berkeley image dataset [28] was considered to comparethe performance of GMM [29] SMM [30] GΓMM [31]NSMM [12] and ACAP [8] The ground-truth informationwas freely obtained from the website [31] This was usedfor algorithm performance evaluation The experiment was

performed with noisy version images by adding Gaussiannoise (zero mean 001 variance) and Salt and Pepper noise(3) to the images as indicated in the first row of Figures 7and 8 The evaluated algorithms were initialized using the 119896-means algorithmThe number of label119870was set according tohuman visual inspection Figures 7 and 8 exhibit the resultsof image segmentation using the different methods Owingto the application of a mean filter we can observe that theperformance of ACAPwas superior to GMM SMM GΓMMand NSMM The results generated by the ACAP achieved

Mathematical Problems in Engineering 11

FAST-TLECLO-TLEGGMM-TLE

015 02 025 03 035 04 045 0501Trimming fraction

002

003

004

005

006

MCR

(a)

015 02 025 03 035 04 045 0501Trimming fraction

003

004

005

006

MCR

FAST-TLECLO-TLEGGMM-TLE

(b)

015 02 025 03 035 04 045 0501Trimming fraction

001

002

003

004

005

006

007

MCR

FAST-TLECLO-TLEGGMM-TLE

(c)

01 015 02 025 03 035 04 045 05001

002

003

004

005

006

007

Trimming fraction

MCR

FAST-TLECLO-TLEGGMM-TLE

(d)

Figure 4 Plot ofMCR of test images against different trimming fractions (a) Test image ldquoBoatrdquo (b) test image ldquoCowrdquo (c) test image ldquoHouserdquoand (d) test image ldquoManrdquo

Table 1 Average DSC of different methods with diverse trimming fractions Original MR image with 5 Salt and Pepper noise (mean plusmnstandard deviation)

Methods Trimming fraction (WM)01 02 03 04 05

FAST-TLE 07153 plusmn 04191 07269 plusmn 04785 07081 plusmn 02083 06771 plusmn 04762 06743 plusmn 05216CLO-TLE 08181 plusmn 02721 07968 plusmn 02814 07966 plusmn 03612 07748 plusmn 03264 07484 plusmn 03673GGMM-TLE 08994 plusmn 01023 08380 plusmn 01571 08177 plusmn 01238 08040 plusmn 00779 07803 plusmn 02317Methods Trimming fraction (GM)

01 02 03 04 05FAST-TLE 08367 plusmn 04020 08378 plusmn 04060 08313 plusmn 03553 08015 plusmn 03723 07829 plusmn 04062CLO-TLE 09107 plusmn 03841 08931 plusmn 02379 08944 plusmn 01540 08803 plusmn 02783 08593 plusmn 01761GGMM-TLE 09561 plusmn 01262 09227 plusmn 01040 09095 plusmn 01235 09002 plusmn 01156 08837 plusmn 01240

12 Mathematical Problems in Engineering

(a)

(b)

(c)

Figure 5 (a) to (c) display the segmentation results of MR image (slice 42 of IBSR2) using FAST-TLE CLO-TLE and GGMM-TLErespectively The trimming fractions from left to right are 01 02 03 04 and 05

FASTminusTLECLOminusTLEGGMMminusTLE

002

003

004

005

006

007

008

009

MCR

015 02 025 03 035 04 045 0501Trimming fraction

(a)

01

015

02

025

03

035

04

045

05

MCR

015 02 025 03 035 04 045 0501Trimming fraction

FAST-TLECLO-TLEGGMM-TLE

(b)

Figure 6 Trimming fractions versus classification accuracy under noise environment MCR of different methods for MR image (slice 42 ofIBSR2) (a) 5 Salt and Pepper noise and (b) 10 Salt and Pepper noise

Mathematical Problems in Engineering 13

Figure 7 Segmentation performance comparison for real-world images in the Gaussian noise environment (zero mean variance 001) Fromthe first row to the last noisy image GMM SMM GΓMM NSMM ACAP and GGMM-TLE respectively From the first column to the lastcolumn test image 24063 8068 241004 55067 and 35010 (Berkeley Dataset)

similar results to GGMM-TLE however its performance wasimpaired when there was an abundance of rich details forexample in test image 241004 (the sixth row of Figure 7)TheGGMM-TLE provided a moderately improved performanceunder different noisy conditions and eliminated the influenceof widely spread noise data This characteristic is endemicto MRF and trimmed likelihood estimator The resultingDSC is reported in Tables 2 and 3 providing a quantitativecomparison among the algorithms The DSC and standarddeviation indicate that the proposed method outperformedthe other methods by preserving the highest DSC

To further demonstrate the goodness of GGMM-TLEagainst different noise in Figure 9 we display the meanvalues and standard deviations of the MCR obtained fromtwenty runs on two Berkeley test images (24063 and 35010)under different noise environments Considering the MCRon average the ACAP effectively eliminated the effects ofnoise during the segmentation processing and demonstratedacceptable segmentation results We determined that classi-cal GMM SMM and GΓMM were severely influenced byGaussian noise and could not accurately separate a regionfrom the background In the majority of cases the NSMM

14 Mathematical Problems in Engineering

Figure 8 Segmentation performance comparison for real-world images in the Salt and Pepper noise environment (3) From the first rowto the last noisy image GMM SMM GΓMM NSMM ACAP and GGMM-TLE respectively From the first column to the last column testimage 24063 8068 241004 55067 and 35010 (Berkeley Dataset)

approach was superior to SMM and GMM yet continuedto be influenced by varying degrees of Salt and Pepperand Gaussian noise As expected compared to the otheralgorithms the GGMM-TLEwas stable and achieved the bestsegmentation results according to the quantitative criterion

7 Concluding Remarks

In this paper a robust estimation of the proposed GGMM-TLE using a trimmed likelihood estimator for real-worldimage segmentation was proposed GGMM-TLE with MRF

implements a mixture of generalized Gamma and Gaussiandistributions

The main contribution of this paper is the presentationof an asymmetric finite model GGMM-TLE based on MRFWith this model we have high flexibility to fit differentshapes of observed data Further this study discussed theproperty of identifiability of the proposed mixture modelguaranteeing that the estimation procedure for the parame-ters was correctly developed Then to ensure that GGMM-TLE was robust against heavy outliers the paper offeredan effective method to discard the outliers in advance and

Mathematical Problems in Engineering 15

Trimming fraction 02

2 5 8 10Salt amp Pepper noise ()

0

002

004

006

008

01

012

014

016

018

02

MCR

GMMSMM

NSMMACAPGGMM-TLEGΓMM

(a)

001 004 006 008Gaussian noise (zero mean variance)

Trimming fraction 02

GMMSMM

NSMMACAPGGMM-TLE

0

002

004

006

008

01

012

014

016

018

02

MCR

GΓMM(b)

2 5 8 10Salt amp Pepper noise ()

Trimming fraction 02

GMMSMM

NSMMACAPGGMM-TLE

0

002

004

006

008

01

012

014

016

018

02

MCR

GΓMM(c)

001 004 006 008Gaussian noise (zero mean variance)

Trimming fraction 02

0

002

004

006

008

01

012

014

016

018

02

MCR

GMMSMMGΓMM

NSMMACAPGGMM-TLE

(d)

Figure 9 Average MCR for different methods on two test images under noisy environment (Berkeley Dataset) ((a) and (b)) Test image24063 ((c) and (d)) test image 35010

therefore GGMM-TLE demonstrated superior performanceunder modelling with samples contaminated with unknownoutliers Finally combined with MRF GGMM-TLE consid-ered the spatial relationship between neighbouring pixels anddemonstrated a stronger ability to resist different noise Thesegmentation results of synthetic data and real-world imagesconfirmed that the proposed method demonstrated superiorcompetitivenessThemain limitation of this algorithm is that

the segmentation task requires component-based confidencelevel ordering which increases the computational cost

As a future work one direction is to obtain other finitemixture models by testing different probability density func-tions Another possible direction is to extend the presentedmethod to a higher dimension in a straightforward mannersuch as fMRI time-series clustering We plan to address thesetopics in a separate paper

16 Mathematical Problems in Engineering

Table 2AverageDSCof differentmethods for Berkeley test imageswith zeromeanGaussian noise variance 001 (meanplusmn standard deviation)Images Label GMM SMM GΓMM24063 3 086005 plusmn 032173 089173 plusmn 020353 088071 plusmn 0204728068 3 085737 plusmn 028634 087439 plusmn 027448 086005 plusmn 031348241004 4 085388 plusmn 032745 089018 plusmn 024834 087803 plusmn 02089555067 4 087931 plusmn 021733 088342 plusmn 021631 088013 plusmn 03055635010 4 084904 plusmn 033850 088463 plusmn 023632 087991 plusmn 018584Images Label NSMM ACAP GGMM-TLE24063 3 091825 plusmn 015734 095004 plusmn 014848 097122 plusmn 0148918068 3 089173 plusmn 028042 094485 plusmn 016055 096108 plusmn 010723241004 4 092006 plusmn 021634 094670 plusmn 013114 097061 plusmn 01482655067 4 092601 plusmn 017203 096893 plusmn 012051 097175 plusmn 01387635010 4 092183 plusmn 020847 095380 plusmn 013954 096219 plusmn 015664

Table 3 Average DSC of different methods for Berkeley test images with 3 Salt and Pepper noise (mean plusmn standard deviation)

Images Label GMM SMM GΓMM24063 3 087126 plusmn 030566 090972 plusmn 023385 088482 plusmn 0188088068 3 085234 plusmn 032531 088082 plusmn 030318 087627 plusmn 033601241004 4 087294 plusmn 035061 088385 plusmn 030458 089714 plusmn 02505455067 4 088099 plusmn 032392 090400 plusmn 025767 088517 plusmn 02109235010 4 085337 plusmn 025011 089075 plusmn 021839 087630 plusmn 016432Images Label NSMM ACAP GGMM-TLE24063 3 092079 plusmn 016049 096103 plusmn 023214 098394 plusmn 0075148068 3 090213 plusmn 025321 095329 plusmn 011539 097328 plusmn 011482241004 4 093487 plusmn 016806 096271 plusmn 012378 097117 plusmn 01368355067 4 093247 plusmn 018579 098076 plusmn 010690 098587 plusmn 00665535010 4 093752 plusmn 017804 096053 plusmn 010842 097420 plusmn 012722Conflicts of Interest

The authors declare that they have no conflicts of interest

Acknowledgments

This work was supported by the National Natural ScienceFoundation of China under Grant no 61371150

References

[1] A Saglam and N A Baykan ldquoSequential image segmentationbased on minimum spanning tree representationrdquo PatternRecognition Letters vol 87 pp 155ndash162 2017

[2] S Yin Y Qian andMGong ldquoUnsupervised hierarchical imagesegmentation through fuzzy entropy maximizationrdquo PatternRecognition vol 68 pp 245ndash259 2017

[3] H-C Li V A Krylov P-Z Fan J Zerubia and W J EmeryldquoUnsupervised learning of generalized gamma mixture modelwith application in statistical modeling of high-resolution SARimagesrdquo IEEE Transactions on Geoscience and Remote Sensingvol 54 no 4 pp 2153ndash2170 2016

[4] A Roy A Pal and U Garain ldquoJCLMM A finite mixturemodel for clustering of circular-linear data and its applicationto psoriatic plaque segmentationrdquo Pattern Recognition vol 66pp 160ndash173 2017

[5] Y Bar-Yosef and Y Bistritz ldquoGaussian mixture models reduc-tion by variational maximummutual informationrdquo IEEE Trans-actions on Signal Processing vol 63 no 6 pp 1557ndash1569 2015

[6] R Zhang D H Ye D Pal J-B Thibault K D Sauer and C Bouman ldquoA Gaussian mixture MRF for model-based iterativereconstruction with applications to low-dose X-ray CTrdquo IEEETransactions on Computational Imaging vol 2 no 3 pp 359ndash374 2016

[7] H Z Yerebakan and M Dundar ldquoPartially collapsed parallelGibbs sampler for Dirichlet process mixture modelsrdquo PatternRecognition Letters vol 90 pp 22ndash27 2017

[8] H Zhang QM JWu and TM Nguyen ldquoIncorporatingmeantemplate into finite mixture model for image segmentationrdquoIEEE Transactions on Neural Networks and Learning Systemsvol 24 no 2 pp 328ndash335 2013

[9] A Matza and Y Bistritz ldquoSkew Gaussian mixture models forspeaker recognitionrdquo IET Signal Processing vol 8 no 8 pp860ndash867 2014

[10] T-T Van Cao ldquoModelling of inhomogeneity in radar clutterusing weibull mixture densitiesrdquo IET Radar Sonar amp Naviga-tion vol 8 no 3 pp 180ndash194 2014

[11] Q Peng and L Zhao ldquoSAR image filtering based on the Cauchy-Rayleigh mixture modelrdquo IEEE Geoscience and Remote SensingLetters vol 11 no 5 pp 960ndash966 2014

[12] TMNguyen andQM JWu ldquoA nonsymmetricmixturemodelfor unsupervised image segmentationrdquo IEEE Transactions onCybernetics vol 43 no 2 pp 751ndash765 2013

Mathematical Problems in Engineering 17

[13] T M Nguyen Q M J Wu D Mukherjee and H ZhangldquoA bayesian bounded asymmetric mixture model with seg-mentation applicationrdquo IEEE Journal of Biomedical and HealthInformatics vol 18 no 1 pp 109ndash119 2014

[14] T M Nguyen and Q M J Wu ldquoBounded asymmetricalstudentrsquos-t mixture modelrdquo IEEE Transactions on Cyberneticsvol 44 no 6 pp 857ndash869 2014

[15] X Zhou R Peng and C Wang ldquoA two-component K-Lognormal mixture model and its parameter estimationmethodrdquo IEEE Transactions on Geoscience and Remote Sensingvol 53 no 5 pp 2640ndash2651 2015

[16] A De Angelis G De Angelis and P Carbone ldquoUsing Gaussian-Uniform Mixture Models for Robust Time-Interval Measure-mentrdquo IEEE Transactions on Instrumentation andMeasurementvol 64 no 12 pp 3545ndash3554 2015

[17] R P Browne P D McNicholas and M D Sparling ldquoModel-based learning using a mixture of mixtures of gaussian anduniform distributionsrdquo IEEE Transactions on Pattern Analysisand Machine Intelligence vol 34 no 4 pp 814ndash817 2012

[18] J Sun A Zhou S Keates and S Liao ldquoSimultaneous BayesianClustering and Feature Selection Through Studentrsquos t MixturesModelrdquo IEEE Transactions on Neural Networks and LearningSystems 2017

[19] N Neykov P Filzmoser R Dimova and P Neytchev ldquoRobustfitting of mixtures using the trimmed likelihood estimatorrdquoComputational Statistics amp Data Analysis vol 52 no 1 pp 299ndash308 2007

[20] C H Muller and N Neykov ldquoBreakdown points of trimmedlikelihood estimators and related estimators in generalizedlinear modelsrdquo Journal of Statistical Planning and Inference vol116 no 2 pp 503ndash519 2003

[21] A Galimzianova F Pernus B Likar and Z Spiclin ldquoRobustestimation of unbalanced mixture models on samples withoutliersrdquo IEEE Transactions on Pattern Analysis and MachineIntelligence vol 37 no 11 pp 2273ndash2285 2015

[22] N Atienza J Garcia-Heras and J M Munoz-Pichardo ldquoAnew condition for identifiability of finite mixture distributionsrdquoMetrika vol 63 no 2 pp 215ndash221 2006

[23] W R Press S A TeukolskyW T Vetterling and B P FlanneryNumerical Recipes in C The Art of Scientific Computing Cam-bridge University Press Cambridge UK 2002

[24] Y Zhang M Brady and S Smith ldquoSegmentation of brain MRimages through a hidden Markov random field model andthe expectation-maximization algorithmrdquo IEEE Transactionson Medical Imaging vol 20 no 1 pp 45ndash57 2001

[25] LHan J HHipwell B Eiben et al ldquoA nonlinear biomechanicalmodel based registration method for aligning prone and supinemr breast imagesrdquo IEEE Transactions on Medical Imaging vol33 no 3 pp 682ndash694 2014

[26] B Hariharan P Arbelaez L Bourdev S Maji and J MalikldquoSemantic contours from inverse detectorsrdquo in Proceedings ofthe 2011 IEEE International Conference on Computer VisionICCV 2011 pp 991ndash998 Spain November 2011

[27] IBSR [Online] Available httpwwwnitrcorgprojectsibsr[28] S Meignen and H Meignen ldquoOn the modeling of small sample

distributions with generalized Gaussian density in a maximumlikelihood frameworkrdquo IEEE Transactions on Image Processingvol 15 no 6 pp 1647ndash1652 2006

[29] G McLachlan and D Peel Finite Mixture Models JohnWiley ampSons New York NY USA 2000

[30] D Peel and G J McLachlan ldquoRobust mixture modelling usingthe t distributionrdquo Statistics and Computing vol 10 no 4 pp339ndash348 2000

[31] T Elguebaly and N Bouguila ldquoBayesian learning of finite gen-eralized Gaussianmixturemodels on imagesrdquo Signal Processingvol 91 no 4 pp 801ndash820 2011

Hindawiwwwhindawicom Volume 2018

MathematicsJournal of

Hindawiwwwhindawicom Volume 2018

Mathematical Problems in Engineering

Applied MathematicsJournal of

Hindawiwwwhindawicom Volume 2018

Probability and StatisticsHindawiwwwhindawicom Volume 2018

Journal of

Hindawiwwwhindawicom Volume 2018

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawiwwwhindawicom Volume 2018

OptimizationJournal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Engineering Mathematics

International Journal of

Hindawiwwwhindawicom Volume 2018

Operations ResearchAdvances in

Journal of

Hindawiwwwhindawicom Volume 2018

Function SpacesAbstract and Applied AnalysisHindawiwwwhindawicom Volume 2018

International Journal of Mathematics and Mathematical Sciences

Hindawiwwwhindawicom Volume 2018

Hindawi Publishing Corporation httpwwwhindawicom Volume 2013Hindawiwwwhindawicom

The Scientific World Journal

Volume 2018

Hindawiwwwhindawicom Volume 2018Volume 2018

Numerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisAdvances inAdvances in Discrete Dynamics in

Nature and SocietyHindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom

Dierential EquationsInternational Journal of

Volume 2018

Hindawiwwwhindawicom Volume 2018

Decision SciencesAdvances in

Hindawiwwwhindawicom Volume 2018

AnalysisInternational Journal of

Hindawiwwwhindawicom Volume 2018

Stochastic AnalysisInternational Journal of

Submit your manuscripts atwwwhindawicom

Page 4: Image Segmentation Using a Trimmed Likelihood Estimator in ... · ResearchArticle Image Segmentation Using a Trimmed Likelihood Estimator in the Asymmetric Mixture Model Based on

4 Mathematical Problems in Engineering

119898 isin 119878(119891) 120601119891(119898) = 0 For a given point 1198980 and any twoproposed distributions 1198911 1198912 isin 119865 there exists a total order ≺on 119865 that satisfies

1198911 ≺ 1198912 lArrrArrlim119898rarr1198980

1206011198912 (119898)1206011198911 (119898) = 0(14)

Given the expression of the linear transform119872 is as follows

119872[119891120579 (119909)] 120601119891 (119898) = 119864 (119909119898) = int+infinminusinfin

119909119898119891120579 (119909) 119889119909 (15)

where 119891120579(119909) is the proposed density function 119891120579(119909) =ℏ11199011(119909 | V 119896 120590) + ℏ21199012(119909 | 119906 Σ) Let 1198921205791(119909) = 1199011(119909 | V 119896 120590)and ℎ1205792(119909) = 1199012(119909 | 119906 Σ) then 119891120579(119909) = ℏ11198921205791(119909) + ℏ2ℎ1205792(119909)Obviously if 1198921 ≺ 1198922 and ℎ1 ≺ ℎ2 we can obtain 1198911 ≺ 1198912According to (15) we have

120601119892 (119898) = 120590119898 Γ (119896 + 119898V)Γ (119896) 119898 isin (minus119896V +infin) (16)

where 1198780(119892) = (minus119896V +infin) and 1198980 = +infin To facilitatethe proof procedure this study utilizes Stirlingrsquos formula asfollows

Γ (119911 + 1) sim radic2120587119911 (119911119890 )119911 119911 997888rarr +infin (17)

Thus we have

120601119892 (119898) sim 120590119898Γ (119896)radic2120587(119898V )119896+119898Vminus12

sdot (1 + 119896 minus 1119898 V)119896+119898Vminus12 exp 1 minus 119896 minus 119898V sim radic2120587Γ (119896)

sdot exp 119898 log120590 minus 119898V

sdot exp (119896 + 119898Vminus 12) (log119898 minus log V)

(18)

The sign ldquosimrdquo indicates that the expressions on both sides areequivalent up to constant term when 119898 rarr +infin Hence for119898 rarr 1198980 we have

1206011198922 (119898)1206011198921 (119898) sim 119862 exp( 1V2minus 1V1)119898 log119898

+ [(log1205902 minus log1205901) minus ( 1V2 minus 1V1)

+ ( 1V2

log 1V2minus 1V1

log 1V1)]119898

+ (1198962 minus 1198961) log119898

(19)

where 119862 is a constant From (19) we can derive 1198921 ≺ 1198922 hArr[V2 gt V1] or [V2 = V1 1205902 lt 1205901] or [V2 = V1 1205902 = 1205901 1198962 lt 1198961]which is apparently a total order Analogously we have

120601ℎ (119898) = exp 119898119906 + 12Σ1198982 (20)

where 1198780(ℎ) = (minusinfin +infin) and 1198980 = +infin For 119898 rarr 1198980 wehave

120601ℎ2 (119898)120601ℎ1 (119898) = exp 12 (Σ2 minus Σ1)1198982 + (1199062 minus 1199061)119898 (21)

From (21) we can determine that ℎ1 ≺ ℎ2 hArr [Σ2 lt Σ1] or[Σ2 = Σ1 1199062 lt 1199061] which is clearly a total order Overall wehave 1198911 lt 1198912 there exists a total order ≺ on 119865 Hence we candraw the conclusion that the GGMMs are identifiable

4 Parameter Learning

The main task of this section is to estimate the completeparameter set In general the EM algorithm provides anefficient scheme for unsupervised segmentation using itera-tive updating and guarantees that the log-likelihood functionconverges to a local maximum Considering the complexityof (10) it is difficult to apply the EM algorithm directlyfor maximizing the log-likelihood function (10) Thereforewe employ Jensenrsquos inequality by defining the two hiddenvariables 119911119894119895 and 119910119894119895119896 which are respectively

119911(119905)119894119895= 120587119895 [ℏ11989511199011198951 (119909119894 | 1205791198951) + ℏ11989521199011198952 (119909119894 | 1205791198952)]sum119870119898=1 120587119898 [ℏ11989811199011198981 (119909119894 | 1205791198981) + ℏ11989821199011198982 (119909119894 | 1205791198982)]

(22)

119910(119905)119894119895119896 = ℏ119895119896119901119895119896 (119909119894 | 120579119895119896)sum119870119895119898=1 ℏ119895119898119901119895119898 (119909119894 | 120579119895119898) (23)

Clearly 119911119894119895 and 119910119894119895119896 satisfy the constraints sum119870119895=1 119911119894119895 = 1and sum119870119895

119896=1119910119894119895119896 = 1 Using Jensenrsquos inequality one has

log(sum119870119895=1 119890119895) ge log(sum119870119895=1 119911119894119895119890119895) ge sum119870119895=1(119911119894119895 log 119890119895) thus the log-likelihood function (10) can be rewritten as follows

119871 (119883 | Θ) ge 119873sum119894=1

119870sum119895=1

119911(119905)119894119895 log120587119895

+ log[[119870119895sum119896=1

ℏ119895119896119901119895119896 (119909119894 | 120579119895119896)]] minus log119885 + 1119879

Mathematical Problems in Engineering 5

sdot 119873sum119894=1

119870sum119895=1

119866119894119895 log120587119895 ge 119873sum119894=1

119870sum119895=1

119911(119905)119894119895 log120587119895+ 119870119895sum119896=1

119910(119905)119894119895119896 [log ℏ119895119896 + log119901119895119896 (119909119894 | 120579119895119896)] minus log119885+ 1119879119873sum119894=1

119870sum119895=1

119866119894119895 log120587119895(24)

Thus we can define the following new objective function119876(119883 | Θ) in terms of Jensenrsquos inequality

119876 (119883 | Θ) = 119873sum119894=1

119870sum119895=1

119911(119905)119894119895 log120587119895 + 119870119895sum119896=1

119910(119905)119894119895119896 [log ℏ119895119896+ log119901119895119896 (119909119894 | 120579119895119896)] minus log119885 + 1119879

119873sum119894=1

119870sum119895=1

119866119894119895 log120587119895= 119873sum119894=1

119870sum119895=1

119911(119905)119894119895 log120587119895+ 119910(119905)1198941198951 [log ℏ1198951 + log1199011198951 (119909119894 | 1205791198951)]+ 119910(119905)1198941198952 [log ℏ1198952 + log1199011198952 (119909119894 | 1205791198952)] minus log119885 + 1119879sdot 119873sum119894=1

119870sum119895=1

119866119894119895 log120587119895

(25)

To realize clustering we must maximize the log-likelihoodfunction in (10) which is equivalent to maximizing theobjective function in (25) In particular to estimate the priorprobability 120587119895 we take the partial derivative of the objectivefunction in (25) with respect to 120587119895 yielding

120597120597120587119895 [[119876 (119883 | Θ) minus 120582( 119870sum119895=1

120587119895 minus 1)]] = 0 (26)

where 120582 is the Lagrange multiplier in consideration of theconstraint sum119870119895=1 120587119895 = 1 we have

120587(119905+1)119895 = 119911(119905)119894119895 + 119866(119905)119894119895sum119870119898=1 119911(119905)119894119898 + 119866(119905)119894119898 (27)

Similarly to estimate the weighting factor ℏ119895119896 we take thepartial derivative of the objective function in (25)with respectto ℏ119895119896

120597120597ℏ119895119896 [[119876 (119883 | Θ) minus 120591( 119870119895sum119896=1

ℏ119895119896 minus 1)]] = 0 (28)

where 120591 is the Lagrange multiplier in consideration of theconstraint sum119870119895

119896=1ℏ119895119896 = 1 we have

ℏ(119905+1)119895119896 = sum119873119894=1 119911(119905)119894119895 119910(119905)119894119895119896sum119873119894=1 119911(119905)119894119895 (29)

In the following we estimate the power parameter V119895 Wecalculate the partial derivative of the objective function (25)with this power parameter V119895 as follows

120597119876 (119883 | Θ)120597V119895= 119873sum119894=1

119911(119905)119894119895 119910(119905)1198941198951 [(119896119895 minus (119909119894120590119895)V119895) log(119909119894120590119895) + 1

V119895]

(30)

The solution of 120597119876(119883 | Θ)120597V119895 = 0 yields the estimates of V119895as follows

V(119905+1)119895

= sum119873119894=1 119911(119905)119894119895 119910(119905)1198941198951 [(V(119905)119895 )2 (119909119894120590(119905)119895 )V(119905)119895 log2 (119909119894120590(119905)119895 ) + 1]sum119873119894=1 119911(119905)119894119895 119910(119905)1198941198951 [120593(119905)119895 (119909119894120590(119905)119895 )V(119905)119895 minus 119896(119905)119895 ] log (119909119894120590(119905)119895 ) (31)

where 120593(119905)119895 = V(119905)119895 log(119909119894120590(119905)119895 ) + 1 Then to derive the solutionof the shape parameter 119896119895 we must calculate the partialderivative 119876(119883 | Θ) with respect to it We have

120597119876 (119883 | Θ)120597119896119895= 119873sum119894=1

119911(119905)119894119895 119910(119905)1198941198951 [V119895 log(119909119894120590119895) minus Φ0 (119896119895)] (32)

It is clear to see that the solution 120597119876(119883 | Θ)120597119896119895 = 0 yieldsthe updates for shape parameter 119896119895 by

Φ0 (119896119895) = V(119905+1)119895 sum119873119894=1 119911(119905)119894119895 119910(119905)1198941198951 log (119909119894120590(119905)119895 )sum119873119894=1 119911(119905)119894119895 119910(119905)1198941198951 (33)

whereΦ0(sdot) is the Digamma function 119896(119905+1)119895 can be calculatedby solving (33) via the bisection method [23] In the samefashion to obtain the estimate of the scale parameter 120590119895 wemust derive the partial derivative of 119876(119883 | Θ) over it

120597119876 (119883 | Θ)120597120590119895 = V119895120590119895119873sum119894=1

119911(119905)119894119895 119910(119905)1198941198951 [(119909119894120590119895)V119895 minus 119896119895] (34)

6 Mathematical Problems in Engineering

Equating 120597119876(119883 | Θ)120597120590119895 to zero we can obtain the updateformulas for scale parameter 120590119895 by

120590(119905+1)119895 = [[[[sum119873119894=1 119911(119905)119894119895 119910(119905)1198941198951119909V(119905+1)119895119894 119896(119905+1)119895 sum119873119894=1 119911(119905)119894119895 119910(119905)1198941198951

]]]]

1V(119905+1)119895

(35)

By calculating the partial derivative of the objective functionin (25) with parameter set 1205791198952 = 119906119895 Σ119895 we can obtain theestimation of the parameters mean 119906119895 and covariance Σ119895

120597119876 (119883 | Θ)120597119906119895 = 119873sum119894=1

119911(119905)119894119895 119910(119905)1198941198952 (119909119894 minus 119906119895Σ119895 ) 120597119876 (119883 | Θ)120597Σ119895 = 119873sum

119894=1

119911(119905)119894119895 119910(119905)1198941198952(minus 12Σ119895 +(119909119894 minus 119906119895)22Σ2119895 )

(36)

Eventually the final updates for these two parameters can beobtained by

119906(119905+1)119895 = sum119873119894=1 119911(119905)119894119895 119910(119905)1198941198952119909119894sum119873119894=1 119911(119905)119894119895 119910(119905)1198941198952 (37)

Σ(119905+1)119895 = sum119873119894=1 119911(119905)119894119895 119910(119905)1198941198952 (119909119894 minus 119906(119905+1)119895 )2sum119873119894=1 119911(119905)119894119895 119910(119905)1198941198952 (38)

At this point the parameter learning procedure is complete

5 Ordering Method for Likelihood Trimming

For observed data with heavy outliers it is preferred todiscard the outliers and to estimate the parameters of theproposed mixture model using the remaining data Assumethat 119883 is a sample with 119873 observations 120578 is the trimmingfraction and 119883119872 is the subsample with a size 119872 =119873(1 minus 120578) Theoretically the trimming fraction 120578 should behigher than the real outlier fraction value After cutting theoutliers we estimate the model parameters by maximizingthe objective function 119876(119883119872 | Θ) in subsample 119883119872 Themost important step is to discard the outliers and select thesubsample This requires a specific ordering for all of theobservations in the sample Typically the number of outliersis unpredictableThus it is important for the proposedmodelto avoid allowing observed data that belongs to the labelswith a small number of observations falling into outliersThis study presents an effective component-based confidencelevel ordering method In the proposed GGMM-TLE wedo not calculate the density function value as in FAST-TLE[20] for every single observation Rather we only utilizethe concept of confidence level for these observations toeliminate the effects of mixture weights and sample scalesCombined with the posterior probability 119911(119905)119894119895 in (22) wecan order the observations that belong to the same groupseparately Thus it is more reasonable for ordering theobservation with GGMM-TLE Specifically we derive the

following increasing inequality based on component-basedconfidence level ordering

intΩ1(11990911988911 )

119870119895sum119896=1

ℏ11198961199011119896 (120596 | 1205791119896) 119889120596 le sdot sdot sdotle intΩ1(11990911988911198731

)

119870119895sum119896=1

ℏ11198961199011119896 (120596 | 1205791119896) 119889120596

intΩ119870(1199091198891198701 )

119870119895sum119896=1

ℏ119870119896119901119870119896 (120596 | 120579119870119896) 119889120596 le sdot sdot sdotle intΩ119870(119909119889119870119873119870

)

119870119895sum119896=1

ℏ119870119896119901119870119896 (120596 | 120579119870119896) 119889120596

(39)

where Ω119895(119909119894) = 120596 isin Ω119895 sum119870119895119896=1ℏ119895119896119901119895119896(120596 | 120579119895119896) gesum119870119895

119896=1ℏ119895119896119901119895119896(119909119894 | 120579119895119896) 119895 = (1 119870) Ω119895 is the 119895th

component determined by the posterior probability 119873119895 isthe number of observations belonging to the 119895th componentClearly 119873119895 satisfies sum119870119895=1119873119895 = 119873 119889119895 = (1198891198951 1198891198952 119889119895119873119895)is the ordering of sample indices of the 119895th component Bysorting and discarding each component individually with thesame trimming fraction 120578 we can obtain the subsample ofeach component 119883119872119895 = 119909119889119895119898119872119895119898=1 where119872119895 = 119873119895(1 minus 120578)Hence the total subsample119883119872 can be expressed by the unionof the subsample of each component119883119872119895

119883119872 = 1198831198721 cup 1198831198722 cup sdot sdot sdot cup 119883119872119870 (40)

Finally the parameters of the proposedmixturemodel can beestimatedwith subsample119883119872 and objective function119876(119883119872 |Θ) In the proposed GGMM-TLE by evaluating the intervalintegral rather than log-likelihood value of the observationswe can obtain superior performance compared with the clas-sical FAST-TLEThis is becausewe can order the observationswithin every individual label to retain the consistency oftrimming proportion of each label Therefore regardless ofthe mixture weights and sample scales all observations ofeach label are equally considered Finally combined with thesteps of GGMM in Section 2 we summarize the proceduresof GGMM-TLE as follows

Step 1 Input the trimming fraction 120578 Initialize the parameterset Θ = 120587119895 ℏ1198951 1205791198951 ℏ1198952 1205791198952 where 1205791198951 = V119895 119896119895 120590119895 and1205791198952 = 119906119895 Σ119895Step 2 Based on the current parameter set Θ(119905) evaluate theposterior probability 119911119894119895 using (22) compute the variable 119910119894119895119896using (23) and classify the observations

Step 3 Perform component-based confidence level orderingusing (39) to obtain subsample119883(119905+1)119872

Mathematical Problems in Engineering 7

Step 4 Compute the objective function 119876(119883(119905+1)119872 | Θ(119905)) interms of (25) If 119876(119883(119905+1)119872 | Θ(119905)) ge 119876(119883(119905)119872 | Θ(119905)) continue toStep 5 else increase the value of the trimming fraction belowthe predefined threshold and obtain new subsample 119883(119905+1)119872lowastuntil the following condition is satisfied 119876(119883(119905+1)119872lowast | Θ(119905)) ge119876(119883(119905+1)119872 | Θ(119905)) set119872 = 119872lowast If the condition is not satisfiedterminate the procedure

Step 5 Update the prior probability 120587119895 and weighting factorℏ119895119896 using (27) and (29) respectively Compute the powerparameter V119895 shape parameter 119896119895 scale parameter 120590119895 meanparameter 119906119895 and the covariance parameter Σ119895 by solving(31) (33) (35) (37) and (38) respectively

Step 6 Maximize the objective function 119876(119883(119905+1)119872 | Θ(119905))using (25) and obtain the new parameter set Θ(119905+1) Ifthe termination condition is satisfied end the iterationsOtherwise set 119905 = 119905 + 1 and return to Step 2

6 Experimental Results

This section experimentally evaluates the proposed GGMM-TLE by considering the problem of real-world image seg-mentation and compares GGMM-TLE with other relatedalgorithms All algorithms are initialized using 119896-means Theexperiments were developed in MATLAB R2012b and wereexecuted on a personal computer with Intel(R) Core(TM)I7-6500U CPU 25GHz 8GB RAM 64-bit To obtain anobjective evaluation of the proposed method this paper usestwo measure criteria the misclassification ratio (MCR) [24]andDice similarity coefficient (DSC) [25]The former has thefollowing form

MCR = number of mis-segmented pixelstotal number of pixels

(41)

MCR is widely used in the literature to evaluate segmentationperformance For MCR the smaller the value of the MCRthe higher the accuracy of the segmentation The popularoverlap-based metric DSC is also employed to evaluate theproposed mixture model

DSC (119878119886 119878119898) = 2 1003816100381610038161003816119878119886 cap 119878119898100381610038161003816100381610038161003816100381610038161198781198861003816100381610038161003816 + 10038161003816100381610038161198781198981003816100381610038161003816 (42)

where 119878119886 denotes the shape of the automatic segmentationand 119878119898 indicates the shape of the manual segmentationobtained from the algorithm output The range of DSC isfrom zero to one with one denoting ideal segmentation andzero indicating poor segmentation

61 Test of the Proposed Trimming Approach The firstexperiment presented herein validates the behaviour ofthe proposed GGMM-TLE For this purpose we generatedthree labels of inlier observations and one label of outliersThe inlier observations consisted of 10000 points from a3-component bivariate GMM with prior probability 1205871 =

1205872 = 1205873 = 13 The means and variances of this bivariateGMM are

1205831 = (0 4)119879 1205832 = (4 0)119879 1205833 = (minus4 4)119879 Σ1 = (1 11 2) Σ2 = (1 00 1) Σ3 = ( 1 minus05minus05 3 )

(43)

Labels 1 2 and 3 have 2500 3500 and 4000 pointsrespectively Random noise with 1000 points was addedin the plusmn10 rectangle This random noise is considered asoutliers For comparison apart from the proposed approachwe also included the performance of FAST-TLE [20] andCLO-TLE [21] The trimming fraction 120578 varied in the range[01 05] The segmentation results of the different methodsare presented in Figure 1 From Figure 1 we can observethat the classical GMM was sensitive to outliers resulting inpoor clustering performance in terms of visual interpretationIt is clear that the performance of the FAST-TLE methodwas less influenced by the outliers However it was alsounstable especially when the trimming fraction was highThe CLO-TLE ordering strategy exhibited a higher stabilityversus the previous two algorithmswith the use of confidencelevel orderingHowever there remainedmisclassified outliersdemonstrating that the fitting influence of CLO-TLE was notideal Conversely we determined that the best performancewas with the proposed GGMM-TLE this is because everyobservation of the individual groups was considered Thisfigure indicates that the GGMM-TLE can extract clear pointaccumulations from noise data

62 Segmentation of Noise-Degraded Images To demonstratethe feasibility of GGMM-TLE the following experimentused four real-world images (ldquoBoatrdquo ldquoCowrdquo ldquoHouserdquo andldquoManrdquo) from the semantic boundaries dataset (SBD) [26]for comparison These images were segmented into threelabels All these images were contaminated by Salt and Peppernoise with intensity 5 Figure 2 presents the visualization ofthe segmentation task with trimming fraction 02 where thesecond third and fourth columns correspond to the FAST-TLE CLO-TLE and GGMM-TLE algorithms respectivelyFigure 2 shows the detailed parts of the corresponding seg-mentation results using different approaches Note that theGGMM-TLE eliminates obviously the noise as predicatedWedemonstrate the log-likelihood function versus the numberof iteration under trimming fractions 02 for the different testimages in Figure 3 It can be clearly observed that the log-likelihood functions of FAST-TLE and CLO-TLE are similar

8 Mathematical Problems in Engineering

minus10 1086420minus2minus4minus6minus8minus10

minus8

minus6

minus4

minus2

0

2

4

6

8

10

= 01

= 03

= 05

(a)

minus10 1086420minus2minus4minus6minus8minus10

minus8

minus6

minus4

minus2

0

2

4

6

8

10

= 01

= 03

= 05

(b)

minus10 1086420minus2minus4minus6minus8minus10

minus8

minus6

minus4

minus2

0

2

4

6

8

10

= 01

= 03

= 05

(c)

minus10 1086420minus2minus4minus6minus8minus10

minus8

minus6

minus4

minus2

0

2

4

6

8

10

= 01

= 03

= 05

(d)

Figure 1 Noisy synthetic data clustering using the considered methods (a) Classical GMM (b) FAST-TLE method (c) CLO-TLE methodand (d) GGMM-TLE method

to that of the proposed method However a closer inspectionof the iteration ranges [5 15] indicates that the GGMM-TLE method can moderately improve the convergence rateWhen the iterations is low (119905 le 5) the convergence ratewith GGMM-TLE is the biggest one In the general casethe GGMM-TLE method converges after five iterations InFigure 4 the MCR plots of each test image against differenttrimming fractions are displayed This figure implies that theproposed scheme achieved superior segmentation accuracyas the basic scheme because the MCR of the GGMM-TLEmethod was the least of all test images

The proposed algorithm was also assessed on a clinicalMR image to label the white mass (WM) and grey mass(GM) For this purpose a real MR image slice 42 of IBSR2from the IBSR dataset [27] was randomly selected to evaluatethe performance of the proposed GGMM-TLE against FAST-TLE and CLO-TLE Salt and Pepper noise with intensity5 and 10 was considered in our experiment Figure 5presents the performance of these methods under 5 Saltand Pepper noise and different trimming fractions It isclear that FAST-TLE did not demonstrate improved resultsfor heavier outliers in the segmentation task The CLO-TLE

Mathematical Problems in Engineering 9

(a) (b) (c) (d)

Figure 2 Segmentation results obtained by different methods for four test images (trimming fractions 02) (a) Original image with 5 Saltand Pepper noise The figures from (b) to (d) show FAST-TLE CLO-TLE and GGMM-TLE methods respectively

tended to achieve superior performance with an increase ofthe trimming fraction and could maintain its stability andeffectiveness A closer inspection of Figure 5 indicates that thesegmentation accuracy of GGMM-TLE was visually higherthan the other methods It is due to the fact that the proposedGGMM-TLE utilizes the advantages of confidence level forthese observations so that the effects of mixture weights andsample scales are eliminatedTherefore with the increasing ofthe trimming fractions GGMM-TLE exhibits better stabilityto outliers than CLO-TLE As shown in Figure 5 this is

especially apparent in GGMM-TLE with high trimmingfractions Figure 6 displays the evaluation results using theMCR metric It can be observed that the GGMM-TLE hadthe lowest MCR value thus its segmentation results weresuperior to those of FAST-TLE and CLO-TLE

Further we executed these algorithms 20 times each timewith different initializationThen we computed the (average)performance in terms of the number of correctly classifieddata points and the DSC for this MR image including whitematter and grey matter Table 1 lists the mean values and the

10 Mathematical Problems in Engineering

GGMM-TLECLO-TLEFAST-TLE

times104

54

55

56

57

58

59

6Li

kelih

ood

func

tion

10 20 30 40 500Iterations

(a)

46

48

5

52

54

56

58

Like

lihoo

d fu

nctio

n

10 20 30 40 500Iterations

GGMM-TLECLO-TLEFAST-TLE

times104

(b)

4

42

44

46

48

5

52

54

56

Like

lihoo

d fu

nctio

n

10 20 30 40 500Iterations

GGMM-TLECLO-TLEFAST-TLE

times104

(c)

10 20 30 40 500Iterations

38

4

42

44

46

48

5

52

54

Like

lihoo

d fu

nctio

n

GGMM-TLECLO-TLEFAST-TLE

times104

(d)

Figure 3 Comparison of likelihood values for the different test images with trimming faction 02 (a) Test image ldquoBoatrdquo (b) test image ldquoCowrdquo(c) test image ldquoHouserdquo and (d) test image ldquoManrdquo

standard deviations of the DSC obtained from 20 executionsThe experiment results demonstrate that the accuracy wasmoderately improved compared with the other methods

To assess the robustness of the proposed GGMM-TLEat different levels of noise a set of real-world images fromthe Berkeley image dataset [28] was considered to comparethe performance of GMM [29] SMM [30] GΓMM [31]NSMM [12] and ACAP [8] The ground-truth informationwas freely obtained from the website [31] This was usedfor algorithm performance evaluation The experiment was

performed with noisy version images by adding Gaussiannoise (zero mean 001 variance) and Salt and Pepper noise(3) to the images as indicated in the first row of Figures 7and 8 The evaluated algorithms were initialized using the 119896-means algorithmThe number of label119870was set according tohuman visual inspection Figures 7 and 8 exhibit the resultsof image segmentation using the different methods Owingto the application of a mean filter we can observe that theperformance of ACAPwas superior to GMM SMM GΓMMand NSMM The results generated by the ACAP achieved

Mathematical Problems in Engineering 11

FAST-TLECLO-TLEGGMM-TLE

015 02 025 03 035 04 045 0501Trimming fraction

002

003

004

005

006

MCR

(a)

015 02 025 03 035 04 045 0501Trimming fraction

003

004

005

006

MCR

FAST-TLECLO-TLEGGMM-TLE

(b)

015 02 025 03 035 04 045 0501Trimming fraction

001

002

003

004

005

006

007

MCR

FAST-TLECLO-TLEGGMM-TLE

(c)

01 015 02 025 03 035 04 045 05001

002

003

004

005

006

007

Trimming fraction

MCR

FAST-TLECLO-TLEGGMM-TLE

(d)

Figure 4 Plot ofMCR of test images against different trimming fractions (a) Test image ldquoBoatrdquo (b) test image ldquoCowrdquo (c) test image ldquoHouserdquoand (d) test image ldquoManrdquo

Table 1 Average DSC of different methods with diverse trimming fractions Original MR image with 5 Salt and Pepper noise (mean plusmnstandard deviation)

Methods Trimming fraction (WM)01 02 03 04 05

FAST-TLE 07153 plusmn 04191 07269 plusmn 04785 07081 plusmn 02083 06771 plusmn 04762 06743 plusmn 05216CLO-TLE 08181 plusmn 02721 07968 plusmn 02814 07966 plusmn 03612 07748 plusmn 03264 07484 plusmn 03673GGMM-TLE 08994 plusmn 01023 08380 plusmn 01571 08177 plusmn 01238 08040 plusmn 00779 07803 plusmn 02317Methods Trimming fraction (GM)

01 02 03 04 05FAST-TLE 08367 plusmn 04020 08378 plusmn 04060 08313 plusmn 03553 08015 plusmn 03723 07829 plusmn 04062CLO-TLE 09107 plusmn 03841 08931 plusmn 02379 08944 plusmn 01540 08803 plusmn 02783 08593 plusmn 01761GGMM-TLE 09561 plusmn 01262 09227 plusmn 01040 09095 plusmn 01235 09002 plusmn 01156 08837 plusmn 01240

12 Mathematical Problems in Engineering

(a)

(b)

(c)

Figure 5 (a) to (c) display the segmentation results of MR image (slice 42 of IBSR2) using FAST-TLE CLO-TLE and GGMM-TLErespectively The trimming fractions from left to right are 01 02 03 04 and 05

FASTminusTLECLOminusTLEGGMMminusTLE

002

003

004

005

006

007

008

009

MCR

015 02 025 03 035 04 045 0501Trimming fraction

(a)

01

015

02

025

03

035

04

045

05

MCR

015 02 025 03 035 04 045 0501Trimming fraction

FAST-TLECLO-TLEGGMM-TLE

(b)

Figure 6 Trimming fractions versus classification accuracy under noise environment MCR of different methods for MR image (slice 42 ofIBSR2) (a) 5 Salt and Pepper noise and (b) 10 Salt and Pepper noise

Mathematical Problems in Engineering 13

Figure 7 Segmentation performance comparison for real-world images in the Gaussian noise environment (zero mean variance 001) Fromthe first row to the last noisy image GMM SMM GΓMM NSMM ACAP and GGMM-TLE respectively From the first column to the lastcolumn test image 24063 8068 241004 55067 and 35010 (Berkeley Dataset)

similar results to GGMM-TLE however its performance wasimpaired when there was an abundance of rich details forexample in test image 241004 (the sixth row of Figure 7)TheGGMM-TLE provided a moderately improved performanceunder different noisy conditions and eliminated the influenceof widely spread noise data This characteristic is endemicto MRF and trimmed likelihood estimator The resultingDSC is reported in Tables 2 and 3 providing a quantitativecomparison among the algorithms The DSC and standarddeviation indicate that the proposed method outperformedthe other methods by preserving the highest DSC

To further demonstrate the goodness of GGMM-TLEagainst different noise in Figure 9 we display the meanvalues and standard deviations of the MCR obtained fromtwenty runs on two Berkeley test images (24063 and 35010)under different noise environments Considering the MCRon average the ACAP effectively eliminated the effects ofnoise during the segmentation processing and demonstratedacceptable segmentation results We determined that classi-cal GMM SMM and GΓMM were severely influenced byGaussian noise and could not accurately separate a regionfrom the background In the majority of cases the NSMM

14 Mathematical Problems in Engineering

Figure 8 Segmentation performance comparison for real-world images in the Salt and Pepper noise environment (3) From the first rowto the last noisy image GMM SMM GΓMM NSMM ACAP and GGMM-TLE respectively From the first column to the last column testimage 24063 8068 241004 55067 and 35010 (Berkeley Dataset)

approach was superior to SMM and GMM yet continuedto be influenced by varying degrees of Salt and Pepperand Gaussian noise As expected compared to the otheralgorithms the GGMM-TLEwas stable and achieved the bestsegmentation results according to the quantitative criterion

7 Concluding Remarks

In this paper a robust estimation of the proposed GGMM-TLE using a trimmed likelihood estimator for real-worldimage segmentation was proposed GGMM-TLE with MRF

implements a mixture of generalized Gamma and Gaussiandistributions

The main contribution of this paper is the presentationof an asymmetric finite model GGMM-TLE based on MRFWith this model we have high flexibility to fit differentshapes of observed data Further this study discussed theproperty of identifiability of the proposed mixture modelguaranteeing that the estimation procedure for the parame-ters was correctly developed Then to ensure that GGMM-TLE was robust against heavy outliers the paper offeredan effective method to discard the outliers in advance and

Mathematical Problems in Engineering 15

Trimming fraction 02

2 5 8 10Salt amp Pepper noise ()

0

002

004

006

008

01

012

014

016

018

02

MCR

GMMSMM

NSMMACAPGGMM-TLEGΓMM

(a)

001 004 006 008Gaussian noise (zero mean variance)

Trimming fraction 02

GMMSMM

NSMMACAPGGMM-TLE

0

002

004

006

008

01

012

014

016

018

02

MCR

GΓMM(b)

2 5 8 10Salt amp Pepper noise ()

Trimming fraction 02

GMMSMM

NSMMACAPGGMM-TLE

0

002

004

006

008

01

012

014

016

018

02

MCR

GΓMM(c)

001 004 006 008Gaussian noise (zero mean variance)

Trimming fraction 02

0

002

004

006

008

01

012

014

016

018

02

MCR

GMMSMMGΓMM

NSMMACAPGGMM-TLE

(d)

Figure 9 Average MCR for different methods on two test images under noisy environment (Berkeley Dataset) ((a) and (b)) Test image24063 ((c) and (d)) test image 35010

therefore GGMM-TLE demonstrated superior performanceunder modelling with samples contaminated with unknownoutliers Finally combined with MRF GGMM-TLE consid-ered the spatial relationship between neighbouring pixels anddemonstrated a stronger ability to resist different noise Thesegmentation results of synthetic data and real-world imagesconfirmed that the proposed method demonstrated superiorcompetitivenessThemain limitation of this algorithm is that

the segmentation task requires component-based confidencelevel ordering which increases the computational cost

As a future work one direction is to obtain other finitemixture models by testing different probability density func-tions Another possible direction is to extend the presentedmethod to a higher dimension in a straightforward mannersuch as fMRI time-series clustering We plan to address thesetopics in a separate paper

16 Mathematical Problems in Engineering

Table 2AverageDSCof differentmethods for Berkeley test imageswith zeromeanGaussian noise variance 001 (meanplusmn standard deviation)Images Label GMM SMM GΓMM24063 3 086005 plusmn 032173 089173 plusmn 020353 088071 plusmn 0204728068 3 085737 plusmn 028634 087439 plusmn 027448 086005 plusmn 031348241004 4 085388 plusmn 032745 089018 plusmn 024834 087803 plusmn 02089555067 4 087931 plusmn 021733 088342 plusmn 021631 088013 plusmn 03055635010 4 084904 plusmn 033850 088463 plusmn 023632 087991 plusmn 018584Images Label NSMM ACAP GGMM-TLE24063 3 091825 plusmn 015734 095004 plusmn 014848 097122 plusmn 0148918068 3 089173 plusmn 028042 094485 plusmn 016055 096108 plusmn 010723241004 4 092006 plusmn 021634 094670 plusmn 013114 097061 plusmn 01482655067 4 092601 plusmn 017203 096893 plusmn 012051 097175 plusmn 01387635010 4 092183 plusmn 020847 095380 plusmn 013954 096219 plusmn 015664

Table 3 Average DSC of different methods for Berkeley test images with 3 Salt and Pepper noise (mean plusmn standard deviation)

Images Label GMM SMM GΓMM24063 3 087126 plusmn 030566 090972 plusmn 023385 088482 plusmn 0188088068 3 085234 plusmn 032531 088082 plusmn 030318 087627 plusmn 033601241004 4 087294 plusmn 035061 088385 plusmn 030458 089714 plusmn 02505455067 4 088099 plusmn 032392 090400 plusmn 025767 088517 plusmn 02109235010 4 085337 plusmn 025011 089075 plusmn 021839 087630 plusmn 016432Images Label NSMM ACAP GGMM-TLE24063 3 092079 plusmn 016049 096103 plusmn 023214 098394 plusmn 0075148068 3 090213 plusmn 025321 095329 plusmn 011539 097328 plusmn 011482241004 4 093487 plusmn 016806 096271 plusmn 012378 097117 plusmn 01368355067 4 093247 plusmn 018579 098076 plusmn 010690 098587 plusmn 00665535010 4 093752 plusmn 017804 096053 plusmn 010842 097420 plusmn 012722Conflicts of Interest

The authors declare that they have no conflicts of interest

Acknowledgments

This work was supported by the National Natural ScienceFoundation of China under Grant no 61371150

References

[1] A Saglam and N A Baykan ldquoSequential image segmentationbased on minimum spanning tree representationrdquo PatternRecognition Letters vol 87 pp 155ndash162 2017

[2] S Yin Y Qian andMGong ldquoUnsupervised hierarchical imagesegmentation through fuzzy entropy maximizationrdquo PatternRecognition vol 68 pp 245ndash259 2017

[3] H-C Li V A Krylov P-Z Fan J Zerubia and W J EmeryldquoUnsupervised learning of generalized gamma mixture modelwith application in statistical modeling of high-resolution SARimagesrdquo IEEE Transactions on Geoscience and Remote Sensingvol 54 no 4 pp 2153ndash2170 2016

[4] A Roy A Pal and U Garain ldquoJCLMM A finite mixturemodel for clustering of circular-linear data and its applicationto psoriatic plaque segmentationrdquo Pattern Recognition vol 66pp 160ndash173 2017

[5] Y Bar-Yosef and Y Bistritz ldquoGaussian mixture models reduc-tion by variational maximummutual informationrdquo IEEE Trans-actions on Signal Processing vol 63 no 6 pp 1557ndash1569 2015

[6] R Zhang D H Ye D Pal J-B Thibault K D Sauer and C Bouman ldquoA Gaussian mixture MRF for model-based iterativereconstruction with applications to low-dose X-ray CTrdquo IEEETransactions on Computational Imaging vol 2 no 3 pp 359ndash374 2016

[7] H Z Yerebakan and M Dundar ldquoPartially collapsed parallelGibbs sampler for Dirichlet process mixture modelsrdquo PatternRecognition Letters vol 90 pp 22ndash27 2017

[8] H Zhang QM JWu and TM Nguyen ldquoIncorporatingmeantemplate into finite mixture model for image segmentationrdquoIEEE Transactions on Neural Networks and Learning Systemsvol 24 no 2 pp 328ndash335 2013

[9] A Matza and Y Bistritz ldquoSkew Gaussian mixture models forspeaker recognitionrdquo IET Signal Processing vol 8 no 8 pp860ndash867 2014

[10] T-T Van Cao ldquoModelling of inhomogeneity in radar clutterusing weibull mixture densitiesrdquo IET Radar Sonar amp Naviga-tion vol 8 no 3 pp 180ndash194 2014

[11] Q Peng and L Zhao ldquoSAR image filtering based on the Cauchy-Rayleigh mixture modelrdquo IEEE Geoscience and Remote SensingLetters vol 11 no 5 pp 960ndash966 2014

[12] TMNguyen andQM JWu ldquoA nonsymmetricmixturemodelfor unsupervised image segmentationrdquo IEEE Transactions onCybernetics vol 43 no 2 pp 751ndash765 2013

Mathematical Problems in Engineering 17

[13] T M Nguyen Q M J Wu D Mukherjee and H ZhangldquoA bayesian bounded asymmetric mixture model with seg-mentation applicationrdquo IEEE Journal of Biomedical and HealthInformatics vol 18 no 1 pp 109ndash119 2014

[14] T M Nguyen and Q M J Wu ldquoBounded asymmetricalstudentrsquos-t mixture modelrdquo IEEE Transactions on Cyberneticsvol 44 no 6 pp 857ndash869 2014

[15] X Zhou R Peng and C Wang ldquoA two-component K-Lognormal mixture model and its parameter estimationmethodrdquo IEEE Transactions on Geoscience and Remote Sensingvol 53 no 5 pp 2640ndash2651 2015

[16] A De Angelis G De Angelis and P Carbone ldquoUsing Gaussian-Uniform Mixture Models for Robust Time-Interval Measure-mentrdquo IEEE Transactions on Instrumentation andMeasurementvol 64 no 12 pp 3545ndash3554 2015

[17] R P Browne P D McNicholas and M D Sparling ldquoModel-based learning using a mixture of mixtures of gaussian anduniform distributionsrdquo IEEE Transactions on Pattern Analysisand Machine Intelligence vol 34 no 4 pp 814ndash817 2012

[18] J Sun A Zhou S Keates and S Liao ldquoSimultaneous BayesianClustering and Feature Selection Through Studentrsquos t MixturesModelrdquo IEEE Transactions on Neural Networks and LearningSystems 2017

[19] N Neykov P Filzmoser R Dimova and P Neytchev ldquoRobustfitting of mixtures using the trimmed likelihood estimatorrdquoComputational Statistics amp Data Analysis vol 52 no 1 pp 299ndash308 2007

[20] C H Muller and N Neykov ldquoBreakdown points of trimmedlikelihood estimators and related estimators in generalizedlinear modelsrdquo Journal of Statistical Planning and Inference vol116 no 2 pp 503ndash519 2003

[21] A Galimzianova F Pernus B Likar and Z Spiclin ldquoRobustestimation of unbalanced mixture models on samples withoutliersrdquo IEEE Transactions on Pattern Analysis and MachineIntelligence vol 37 no 11 pp 2273ndash2285 2015

[22] N Atienza J Garcia-Heras and J M Munoz-Pichardo ldquoAnew condition for identifiability of finite mixture distributionsrdquoMetrika vol 63 no 2 pp 215ndash221 2006

[23] W R Press S A TeukolskyW T Vetterling and B P FlanneryNumerical Recipes in C The Art of Scientific Computing Cam-bridge University Press Cambridge UK 2002

[24] Y Zhang M Brady and S Smith ldquoSegmentation of brain MRimages through a hidden Markov random field model andthe expectation-maximization algorithmrdquo IEEE Transactionson Medical Imaging vol 20 no 1 pp 45ndash57 2001

[25] LHan J HHipwell B Eiben et al ldquoA nonlinear biomechanicalmodel based registration method for aligning prone and supinemr breast imagesrdquo IEEE Transactions on Medical Imaging vol33 no 3 pp 682ndash694 2014

[26] B Hariharan P Arbelaez L Bourdev S Maji and J MalikldquoSemantic contours from inverse detectorsrdquo in Proceedings ofthe 2011 IEEE International Conference on Computer VisionICCV 2011 pp 991ndash998 Spain November 2011

[27] IBSR [Online] Available httpwwwnitrcorgprojectsibsr[28] S Meignen and H Meignen ldquoOn the modeling of small sample

distributions with generalized Gaussian density in a maximumlikelihood frameworkrdquo IEEE Transactions on Image Processingvol 15 no 6 pp 1647ndash1652 2006

[29] G McLachlan and D Peel Finite Mixture Models JohnWiley ampSons New York NY USA 2000

[30] D Peel and G J McLachlan ldquoRobust mixture modelling usingthe t distributionrdquo Statistics and Computing vol 10 no 4 pp339ndash348 2000

[31] T Elguebaly and N Bouguila ldquoBayesian learning of finite gen-eralized Gaussianmixturemodels on imagesrdquo Signal Processingvol 91 no 4 pp 801ndash820 2011

Hindawiwwwhindawicom Volume 2018

MathematicsJournal of

Hindawiwwwhindawicom Volume 2018

Mathematical Problems in Engineering

Applied MathematicsJournal of

Hindawiwwwhindawicom Volume 2018

Probability and StatisticsHindawiwwwhindawicom Volume 2018

Journal of

Hindawiwwwhindawicom Volume 2018

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawiwwwhindawicom Volume 2018

OptimizationJournal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Engineering Mathematics

International Journal of

Hindawiwwwhindawicom Volume 2018

Operations ResearchAdvances in

Journal of

Hindawiwwwhindawicom Volume 2018

Function SpacesAbstract and Applied AnalysisHindawiwwwhindawicom Volume 2018

International Journal of Mathematics and Mathematical Sciences

Hindawiwwwhindawicom Volume 2018

Hindawi Publishing Corporation httpwwwhindawicom Volume 2013Hindawiwwwhindawicom

The Scientific World Journal

Volume 2018

Hindawiwwwhindawicom Volume 2018Volume 2018

Numerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisAdvances inAdvances in Discrete Dynamics in

Nature and SocietyHindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom

Dierential EquationsInternational Journal of

Volume 2018

Hindawiwwwhindawicom Volume 2018

Decision SciencesAdvances in

Hindawiwwwhindawicom Volume 2018

AnalysisInternational Journal of

Hindawiwwwhindawicom Volume 2018

Stochastic AnalysisInternational Journal of

Submit your manuscripts atwwwhindawicom

Page 5: Image Segmentation Using a Trimmed Likelihood Estimator in ... · ResearchArticle Image Segmentation Using a Trimmed Likelihood Estimator in the Asymmetric Mixture Model Based on

Mathematical Problems in Engineering 5

sdot 119873sum119894=1

119870sum119895=1

119866119894119895 log120587119895 ge 119873sum119894=1

119870sum119895=1

119911(119905)119894119895 log120587119895+ 119870119895sum119896=1

119910(119905)119894119895119896 [log ℏ119895119896 + log119901119895119896 (119909119894 | 120579119895119896)] minus log119885+ 1119879119873sum119894=1

119870sum119895=1

119866119894119895 log120587119895(24)

Thus we can define the following new objective function119876(119883 | Θ) in terms of Jensenrsquos inequality

119876 (119883 | Θ) = 119873sum119894=1

119870sum119895=1

119911(119905)119894119895 log120587119895 + 119870119895sum119896=1

119910(119905)119894119895119896 [log ℏ119895119896+ log119901119895119896 (119909119894 | 120579119895119896)] minus log119885 + 1119879

119873sum119894=1

119870sum119895=1

119866119894119895 log120587119895= 119873sum119894=1

119870sum119895=1

119911(119905)119894119895 log120587119895+ 119910(119905)1198941198951 [log ℏ1198951 + log1199011198951 (119909119894 | 1205791198951)]+ 119910(119905)1198941198952 [log ℏ1198952 + log1199011198952 (119909119894 | 1205791198952)] minus log119885 + 1119879sdot 119873sum119894=1

119870sum119895=1

119866119894119895 log120587119895

(25)

To realize clustering we must maximize the log-likelihoodfunction in (10) which is equivalent to maximizing theobjective function in (25) In particular to estimate the priorprobability 120587119895 we take the partial derivative of the objectivefunction in (25) with respect to 120587119895 yielding

120597120597120587119895 [[119876 (119883 | Θ) minus 120582( 119870sum119895=1

120587119895 minus 1)]] = 0 (26)

where 120582 is the Lagrange multiplier in consideration of theconstraint sum119870119895=1 120587119895 = 1 we have

120587(119905+1)119895 = 119911(119905)119894119895 + 119866(119905)119894119895sum119870119898=1 119911(119905)119894119898 + 119866(119905)119894119898 (27)

Similarly to estimate the weighting factor ℏ119895119896 we take thepartial derivative of the objective function in (25)with respectto ℏ119895119896

120597120597ℏ119895119896 [[119876 (119883 | Θ) minus 120591( 119870119895sum119896=1

ℏ119895119896 minus 1)]] = 0 (28)

where 120591 is the Lagrange multiplier in consideration of theconstraint sum119870119895

119896=1ℏ119895119896 = 1 we have

ℏ(119905+1)119895119896 = sum119873119894=1 119911(119905)119894119895 119910(119905)119894119895119896sum119873119894=1 119911(119905)119894119895 (29)

In the following we estimate the power parameter V119895 Wecalculate the partial derivative of the objective function (25)with this power parameter V119895 as follows

120597119876 (119883 | Θ)120597V119895= 119873sum119894=1

119911(119905)119894119895 119910(119905)1198941198951 [(119896119895 minus (119909119894120590119895)V119895) log(119909119894120590119895) + 1

V119895]

(30)

The solution of 120597119876(119883 | Θ)120597V119895 = 0 yields the estimates of V119895as follows

V(119905+1)119895

= sum119873119894=1 119911(119905)119894119895 119910(119905)1198941198951 [(V(119905)119895 )2 (119909119894120590(119905)119895 )V(119905)119895 log2 (119909119894120590(119905)119895 ) + 1]sum119873119894=1 119911(119905)119894119895 119910(119905)1198941198951 [120593(119905)119895 (119909119894120590(119905)119895 )V(119905)119895 minus 119896(119905)119895 ] log (119909119894120590(119905)119895 ) (31)

where 120593(119905)119895 = V(119905)119895 log(119909119894120590(119905)119895 ) + 1 Then to derive the solutionof the shape parameter 119896119895 we must calculate the partialderivative 119876(119883 | Θ) with respect to it We have

120597119876 (119883 | Θ)120597119896119895= 119873sum119894=1

119911(119905)119894119895 119910(119905)1198941198951 [V119895 log(119909119894120590119895) minus Φ0 (119896119895)] (32)

It is clear to see that the solution 120597119876(119883 | Θ)120597119896119895 = 0 yieldsthe updates for shape parameter 119896119895 by

Φ0 (119896119895) = V(119905+1)119895 sum119873119894=1 119911(119905)119894119895 119910(119905)1198941198951 log (119909119894120590(119905)119895 )sum119873119894=1 119911(119905)119894119895 119910(119905)1198941198951 (33)

whereΦ0(sdot) is the Digamma function 119896(119905+1)119895 can be calculatedby solving (33) via the bisection method [23] In the samefashion to obtain the estimate of the scale parameter 120590119895 wemust derive the partial derivative of 119876(119883 | Θ) over it

120597119876 (119883 | Θ)120597120590119895 = V119895120590119895119873sum119894=1

119911(119905)119894119895 119910(119905)1198941198951 [(119909119894120590119895)V119895 minus 119896119895] (34)

6 Mathematical Problems in Engineering

Equating 120597119876(119883 | Θ)120597120590119895 to zero we can obtain the updateformulas for scale parameter 120590119895 by

120590(119905+1)119895 = [[[[sum119873119894=1 119911(119905)119894119895 119910(119905)1198941198951119909V(119905+1)119895119894 119896(119905+1)119895 sum119873119894=1 119911(119905)119894119895 119910(119905)1198941198951

]]]]

1V(119905+1)119895

(35)

By calculating the partial derivative of the objective functionin (25) with parameter set 1205791198952 = 119906119895 Σ119895 we can obtain theestimation of the parameters mean 119906119895 and covariance Σ119895

120597119876 (119883 | Θ)120597119906119895 = 119873sum119894=1

119911(119905)119894119895 119910(119905)1198941198952 (119909119894 minus 119906119895Σ119895 ) 120597119876 (119883 | Θ)120597Σ119895 = 119873sum

119894=1

119911(119905)119894119895 119910(119905)1198941198952(minus 12Σ119895 +(119909119894 minus 119906119895)22Σ2119895 )

(36)

Eventually the final updates for these two parameters can beobtained by

119906(119905+1)119895 = sum119873119894=1 119911(119905)119894119895 119910(119905)1198941198952119909119894sum119873119894=1 119911(119905)119894119895 119910(119905)1198941198952 (37)

Σ(119905+1)119895 = sum119873119894=1 119911(119905)119894119895 119910(119905)1198941198952 (119909119894 minus 119906(119905+1)119895 )2sum119873119894=1 119911(119905)119894119895 119910(119905)1198941198952 (38)

At this point the parameter learning procedure is complete

5 Ordering Method for Likelihood Trimming

For observed data with heavy outliers it is preferred todiscard the outliers and to estimate the parameters of theproposed mixture model using the remaining data Assumethat 119883 is a sample with 119873 observations 120578 is the trimmingfraction and 119883119872 is the subsample with a size 119872 =119873(1 minus 120578) Theoretically the trimming fraction 120578 should behigher than the real outlier fraction value After cutting theoutliers we estimate the model parameters by maximizingthe objective function 119876(119883119872 | Θ) in subsample 119883119872 Themost important step is to discard the outliers and select thesubsample This requires a specific ordering for all of theobservations in the sample Typically the number of outliersis unpredictableThus it is important for the proposedmodelto avoid allowing observed data that belongs to the labelswith a small number of observations falling into outliersThis study presents an effective component-based confidencelevel ordering method In the proposed GGMM-TLE wedo not calculate the density function value as in FAST-TLE[20] for every single observation Rather we only utilizethe concept of confidence level for these observations toeliminate the effects of mixture weights and sample scalesCombined with the posterior probability 119911(119905)119894119895 in (22) wecan order the observations that belong to the same groupseparately Thus it is more reasonable for ordering theobservation with GGMM-TLE Specifically we derive the

following increasing inequality based on component-basedconfidence level ordering

intΩ1(11990911988911 )

119870119895sum119896=1

ℏ11198961199011119896 (120596 | 1205791119896) 119889120596 le sdot sdot sdotle intΩ1(11990911988911198731

)

119870119895sum119896=1

ℏ11198961199011119896 (120596 | 1205791119896) 119889120596

intΩ119870(1199091198891198701 )

119870119895sum119896=1

ℏ119870119896119901119870119896 (120596 | 120579119870119896) 119889120596 le sdot sdot sdotle intΩ119870(119909119889119870119873119870

)

119870119895sum119896=1

ℏ119870119896119901119870119896 (120596 | 120579119870119896) 119889120596

(39)

where Ω119895(119909119894) = 120596 isin Ω119895 sum119870119895119896=1ℏ119895119896119901119895119896(120596 | 120579119895119896) gesum119870119895

119896=1ℏ119895119896119901119895119896(119909119894 | 120579119895119896) 119895 = (1 119870) Ω119895 is the 119895th

component determined by the posterior probability 119873119895 isthe number of observations belonging to the 119895th componentClearly 119873119895 satisfies sum119870119895=1119873119895 = 119873 119889119895 = (1198891198951 1198891198952 119889119895119873119895)is the ordering of sample indices of the 119895th component Bysorting and discarding each component individually with thesame trimming fraction 120578 we can obtain the subsample ofeach component 119883119872119895 = 119909119889119895119898119872119895119898=1 where119872119895 = 119873119895(1 minus 120578)Hence the total subsample119883119872 can be expressed by the unionof the subsample of each component119883119872119895

119883119872 = 1198831198721 cup 1198831198722 cup sdot sdot sdot cup 119883119872119870 (40)

Finally the parameters of the proposedmixturemodel can beestimatedwith subsample119883119872 and objective function119876(119883119872 |Θ) In the proposed GGMM-TLE by evaluating the intervalintegral rather than log-likelihood value of the observationswe can obtain superior performance compared with the clas-sical FAST-TLEThis is becausewe can order the observationswithin every individual label to retain the consistency oftrimming proportion of each label Therefore regardless ofthe mixture weights and sample scales all observations ofeach label are equally considered Finally combined with thesteps of GGMM in Section 2 we summarize the proceduresof GGMM-TLE as follows

Step 1 Input the trimming fraction 120578 Initialize the parameterset Θ = 120587119895 ℏ1198951 1205791198951 ℏ1198952 1205791198952 where 1205791198951 = V119895 119896119895 120590119895 and1205791198952 = 119906119895 Σ119895Step 2 Based on the current parameter set Θ(119905) evaluate theposterior probability 119911119894119895 using (22) compute the variable 119910119894119895119896using (23) and classify the observations

Step 3 Perform component-based confidence level orderingusing (39) to obtain subsample119883(119905+1)119872

Mathematical Problems in Engineering 7

Step 4 Compute the objective function 119876(119883(119905+1)119872 | Θ(119905)) interms of (25) If 119876(119883(119905+1)119872 | Θ(119905)) ge 119876(119883(119905)119872 | Θ(119905)) continue toStep 5 else increase the value of the trimming fraction belowthe predefined threshold and obtain new subsample 119883(119905+1)119872lowastuntil the following condition is satisfied 119876(119883(119905+1)119872lowast | Θ(119905)) ge119876(119883(119905+1)119872 | Θ(119905)) set119872 = 119872lowast If the condition is not satisfiedterminate the procedure

Step 5 Update the prior probability 120587119895 and weighting factorℏ119895119896 using (27) and (29) respectively Compute the powerparameter V119895 shape parameter 119896119895 scale parameter 120590119895 meanparameter 119906119895 and the covariance parameter Σ119895 by solving(31) (33) (35) (37) and (38) respectively

Step 6 Maximize the objective function 119876(119883(119905+1)119872 | Θ(119905))using (25) and obtain the new parameter set Θ(119905+1) Ifthe termination condition is satisfied end the iterationsOtherwise set 119905 = 119905 + 1 and return to Step 2

6 Experimental Results

This section experimentally evaluates the proposed GGMM-TLE by considering the problem of real-world image seg-mentation and compares GGMM-TLE with other relatedalgorithms All algorithms are initialized using 119896-means Theexperiments were developed in MATLAB R2012b and wereexecuted on a personal computer with Intel(R) Core(TM)I7-6500U CPU 25GHz 8GB RAM 64-bit To obtain anobjective evaluation of the proposed method this paper usestwo measure criteria the misclassification ratio (MCR) [24]andDice similarity coefficient (DSC) [25]The former has thefollowing form

MCR = number of mis-segmented pixelstotal number of pixels

(41)

MCR is widely used in the literature to evaluate segmentationperformance For MCR the smaller the value of the MCRthe higher the accuracy of the segmentation The popularoverlap-based metric DSC is also employed to evaluate theproposed mixture model

DSC (119878119886 119878119898) = 2 1003816100381610038161003816119878119886 cap 119878119898100381610038161003816100381610038161003816100381610038161198781198861003816100381610038161003816 + 10038161003816100381610038161198781198981003816100381610038161003816 (42)

where 119878119886 denotes the shape of the automatic segmentationand 119878119898 indicates the shape of the manual segmentationobtained from the algorithm output The range of DSC isfrom zero to one with one denoting ideal segmentation andzero indicating poor segmentation

61 Test of the Proposed Trimming Approach The firstexperiment presented herein validates the behaviour ofthe proposed GGMM-TLE For this purpose we generatedthree labels of inlier observations and one label of outliersThe inlier observations consisted of 10000 points from a3-component bivariate GMM with prior probability 1205871 =

1205872 = 1205873 = 13 The means and variances of this bivariateGMM are

1205831 = (0 4)119879 1205832 = (4 0)119879 1205833 = (minus4 4)119879 Σ1 = (1 11 2) Σ2 = (1 00 1) Σ3 = ( 1 minus05minus05 3 )

(43)

Labels 1 2 and 3 have 2500 3500 and 4000 pointsrespectively Random noise with 1000 points was addedin the plusmn10 rectangle This random noise is considered asoutliers For comparison apart from the proposed approachwe also included the performance of FAST-TLE [20] andCLO-TLE [21] The trimming fraction 120578 varied in the range[01 05] The segmentation results of the different methodsare presented in Figure 1 From Figure 1 we can observethat the classical GMM was sensitive to outliers resulting inpoor clustering performance in terms of visual interpretationIt is clear that the performance of the FAST-TLE methodwas less influenced by the outliers However it was alsounstable especially when the trimming fraction was highThe CLO-TLE ordering strategy exhibited a higher stabilityversus the previous two algorithmswith the use of confidencelevel orderingHowever there remainedmisclassified outliersdemonstrating that the fitting influence of CLO-TLE was notideal Conversely we determined that the best performancewas with the proposed GGMM-TLE this is because everyobservation of the individual groups was considered Thisfigure indicates that the GGMM-TLE can extract clear pointaccumulations from noise data

62 Segmentation of Noise-Degraded Images To demonstratethe feasibility of GGMM-TLE the following experimentused four real-world images (ldquoBoatrdquo ldquoCowrdquo ldquoHouserdquo andldquoManrdquo) from the semantic boundaries dataset (SBD) [26]for comparison These images were segmented into threelabels All these images were contaminated by Salt and Peppernoise with intensity 5 Figure 2 presents the visualization ofthe segmentation task with trimming fraction 02 where thesecond third and fourth columns correspond to the FAST-TLE CLO-TLE and GGMM-TLE algorithms respectivelyFigure 2 shows the detailed parts of the corresponding seg-mentation results using different approaches Note that theGGMM-TLE eliminates obviously the noise as predicatedWedemonstrate the log-likelihood function versus the numberof iteration under trimming fractions 02 for the different testimages in Figure 3 It can be clearly observed that the log-likelihood functions of FAST-TLE and CLO-TLE are similar

8 Mathematical Problems in Engineering

minus10 1086420minus2minus4minus6minus8minus10

minus8

minus6

minus4

minus2

0

2

4

6

8

10

= 01

= 03

= 05

(a)

minus10 1086420minus2minus4minus6minus8minus10

minus8

minus6

minus4

minus2

0

2

4

6

8

10

= 01

= 03

= 05

(b)

minus10 1086420minus2minus4minus6minus8minus10

minus8

minus6

minus4

minus2

0

2

4

6

8

10

= 01

= 03

= 05

(c)

minus10 1086420minus2minus4minus6minus8minus10

minus8

minus6

minus4

minus2

0

2

4

6

8

10

= 01

= 03

= 05

(d)

Figure 1 Noisy synthetic data clustering using the considered methods (a) Classical GMM (b) FAST-TLE method (c) CLO-TLE methodand (d) GGMM-TLE method

to that of the proposed method However a closer inspectionof the iteration ranges [5 15] indicates that the GGMM-TLE method can moderately improve the convergence rateWhen the iterations is low (119905 le 5) the convergence ratewith GGMM-TLE is the biggest one In the general casethe GGMM-TLE method converges after five iterations InFigure 4 the MCR plots of each test image against differenttrimming fractions are displayed This figure implies that theproposed scheme achieved superior segmentation accuracyas the basic scheme because the MCR of the GGMM-TLEmethod was the least of all test images

The proposed algorithm was also assessed on a clinicalMR image to label the white mass (WM) and grey mass(GM) For this purpose a real MR image slice 42 of IBSR2from the IBSR dataset [27] was randomly selected to evaluatethe performance of the proposed GGMM-TLE against FAST-TLE and CLO-TLE Salt and Pepper noise with intensity5 and 10 was considered in our experiment Figure 5presents the performance of these methods under 5 Saltand Pepper noise and different trimming fractions It isclear that FAST-TLE did not demonstrate improved resultsfor heavier outliers in the segmentation task The CLO-TLE

Mathematical Problems in Engineering 9

(a) (b) (c) (d)

Figure 2 Segmentation results obtained by different methods for four test images (trimming fractions 02) (a) Original image with 5 Saltand Pepper noise The figures from (b) to (d) show FAST-TLE CLO-TLE and GGMM-TLE methods respectively

tended to achieve superior performance with an increase ofthe trimming fraction and could maintain its stability andeffectiveness A closer inspection of Figure 5 indicates that thesegmentation accuracy of GGMM-TLE was visually higherthan the other methods It is due to the fact that the proposedGGMM-TLE utilizes the advantages of confidence level forthese observations so that the effects of mixture weights andsample scales are eliminatedTherefore with the increasing ofthe trimming fractions GGMM-TLE exhibits better stabilityto outliers than CLO-TLE As shown in Figure 5 this is

especially apparent in GGMM-TLE with high trimmingfractions Figure 6 displays the evaluation results using theMCR metric It can be observed that the GGMM-TLE hadthe lowest MCR value thus its segmentation results weresuperior to those of FAST-TLE and CLO-TLE

Further we executed these algorithms 20 times each timewith different initializationThen we computed the (average)performance in terms of the number of correctly classifieddata points and the DSC for this MR image including whitematter and grey matter Table 1 lists the mean values and the

10 Mathematical Problems in Engineering

GGMM-TLECLO-TLEFAST-TLE

times104

54

55

56

57

58

59

6Li

kelih

ood

func

tion

10 20 30 40 500Iterations

(a)

46

48

5

52

54

56

58

Like

lihoo

d fu

nctio

n

10 20 30 40 500Iterations

GGMM-TLECLO-TLEFAST-TLE

times104

(b)

4

42

44

46

48

5

52

54

56

Like

lihoo

d fu

nctio

n

10 20 30 40 500Iterations

GGMM-TLECLO-TLEFAST-TLE

times104

(c)

10 20 30 40 500Iterations

38

4

42

44

46

48

5

52

54

Like

lihoo

d fu

nctio

n

GGMM-TLECLO-TLEFAST-TLE

times104

(d)

Figure 3 Comparison of likelihood values for the different test images with trimming faction 02 (a) Test image ldquoBoatrdquo (b) test image ldquoCowrdquo(c) test image ldquoHouserdquo and (d) test image ldquoManrdquo

standard deviations of the DSC obtained from 20 executionsThe experiment results demonstrate that the accuracy wasmoderately improved compared with the other methods

To assess the robustness of the proposed GGMM-TLEat different levels of noise a set of real-world images fromthe Berkeley image dataset [28] was considered to comparethe performance of GMM [29] SMM [30] GΓMM [31]NSMM [12] and ACAP [8] The ground-truth informationwas freely obtained from the website [31] This was usedfor algorithm performance evaluation The experiment was

performed with noisy version images by adding Gaussiannoise (zero mean 001 variance) and Salt and Pepper noise(3) to the images as indicated in the first row of Figures 7and 8 The evaluated algorithms were initialized using the 119896-means algorithmThe number of label119870was set according tohuman visual inspection Figures 7 and 8 exhibit the resultsof image segmentation using the different methods Owingto the application of a mean filter we can observe that theperformance of ACAPwas superior to GMM SMM GΓMMand NSMM The results generated by the ACAP achieved

Mathematical Problems in Engineering 11

FAST-TLECLO-TLEGGMM-TLE

015 02 025 03 035 04 045 0501Trimming fraction

002

003

004

005

006

MCR

(a)

015 02 025 03 035 04 045 0501Trimming fraction

003

004

005

006

MCR

FAST-TLECLO-TLEGGMM-TLE

(b)

015 02 025 03 035 04 045 0501Trimming fraction

001

002

003

004

005

006

007

MCR

FAST-TLECLO-TLEGGMM-TLE

(c)

01 015 02 025 03 035 04 045 05001

002

003

004

005

006

007

Trimming fraction

MCR

FAST-TLECLO-TLEGGMM-TLE

(d)

Figure 4 Plot ofMCR of test images against different trimming fractions (a) Test image ldquoBoatrdquo (b) test image ldquoCowrdquo (c) test image ldquoHouserdquoand (d) test image ldquoManrdquo

Table 1 Average DSC of different methods with diverse trimming fractions Original MR image with 5 Salt and Pepper noise (mean plusmnstandard deviation)

Methods Trimming fraction (WM)01 02 03 04 05

FAST-TLE 07153 plusmn 04191 07269 plusmn 04785 07081 plusmn 02083 06771 plusmn 04762 06743 plusmn 05216CLO-TLE 08181 plusmn 02721 07968 plusmn 02814 07966 plusmn 03612 07748 plusmn 03264 07484 plusmn 03673GGMM-TLE 08994 plusmn 01023 08380 plusmn 01571 08177 plusmn 01238 08040 plusmn 00779 07803 plusmn 02317Methods Trimming fraction (GM)

01 02 03 04 05FAST-TLE 08367 plusmn 04020 08378 plusmn 04060 08313 plusmn 03553 08015 plusmn 03723 07829 plusmn 04062CLO-TLE 09107 plusmn 03841 08931 plusmn 02379 08944 plusmn 01540 08803 plusmn 02783 08593 plusmn 01761GGMM-TLE 09561 plusmn 01262 09227 plusmn 01040 09095 plusmn 01235 09002 plusmn 01156 08837 plusmn 01240

12 Mathematical Problems in Engineering

(a)

(b)

(c)

Figure 5 (a) to (c) display the segmentation results of MR image (slice 42 of IBSR2) using FAST-TLE CLO-TLE and GGMM-TLErespectively The trimming fractions from left to right are 01 02 03 04 and 05

FASTminusTLECLOminusTLEGGMMminusTLE

002

003

004

005

006

007

008

009

MCR

015 02 025 03 035 04 045 0501Trimming fraction

(a)

01

015

02

025

03

035

04

045

05

MCR

015 02 025 03 035 04 045 0501Trimming fraction

FAST-TLECLO-TLEGGMM-TLE

(b)

Figure 6 Trimming fractions versus classification accuracy under noise environment MCR of different methods for MR image (slice 42 ofIBSR2) (a) 5 Salt and Pepper noise and (b) 10 Salt and Pepper noise

Mathematical Problems in Engineering 13

Figure 7 Segmentation performance comparison for real-world images in the Gaussian noise environment (zero mean variance 001) Fromthe first row to the last noisy image GMM SMM GΓMM NSMM ACAP and GGMM-TLE respectively From the first column to the lastcolumn test image 24063 8068 241004 55067 and 35010 (Berkeley Dataset)

similar results to GGMM-TLE however its performance wasimpaired when there was an abundance of rich details forexample in test image 241004 (the sixth row of Figure 7)TheGGMM-TLE provided a moderately improved performanceunder different noisy conditions and eliminated the influenceof widely spread noise data This characteristic is endemicto MRF and trimmed likelihood estimator The resultingDSC is reported in Tables 2 and 3 providing a quantitativecomparison among the algorithms The DSC and standarddeviation indicate that the proposed method outperformedthe other methods by preserving the highest DSC

To further demonstrate the goodness of GGMM-TLEagainst different noise in Figure 9 we display the meanvalues and standard deviations of the MCR obtained fromtwenty runs on two Berkeley test images (24063 and 35010)under different noise environments Considering the MCRon average the ACAP effectively eliminated the effects ofnoise during the segmentation processing and demonstratedacceptable segmentation results We determined that classi-cal GMM SMM and GΓMM were severely influenced byGaussian noise and could not accurately separate a regionfrom the background In the majority of cases the NSMM

14 Mathematical Problems in Engineering

Figure 8 Segmentation performance comparison for real-world images in the Salt and Pepper noise environment (3) From the first rowto the last noisy image GMM SMM GΓMM NSMM ACAP and GGMM-TLE respectively From the first column to the last column testimage 24063 8068 241004 55067 and 35010 (Berkeley Dataset)

approach was superior to SMM and GMM yet continuedto be influenced by varying degrees of Salt and Pepperand Gaussian noise As expected compared to the otheralgorithms the GGMM-TLEwas stable and achieved the bestsegmentation results according to the quantitative criterion

7 Concluding Remarks

In this paper a robust estimation of the proposed GGMM-TLE using a trimmed likelihood estimator for real-worldimage segmentation was proposed GGMM-TLE with MRF

implements a mixture of generalized Gamma and Gaussiandistributions

The main contribution of this paper is the presentationof an asymmetric finite model GGMM-TLE based on MRFWith this model we have high flexibility to fit differentshapes of observed data Further this study discussed theproperty of identifiability of the proposed mixture modelguaranteeing that the estimation procedure for the parame-ters was correctly developed Then to ensure that GGMM-TLE was robust against heavy outliers the paper offeredan effective method to discard the outliers in advance and

Mathematical Problems in Engineering 15

Trimming fraction 02

2 5 8 10Salt amp Pepper noise ()

0

002

004

006

008

01

012

014

016

018

02

MCR

GMMSMM

NSMMACAPGGMM-TLEGΓMM

(a)

001 004 006 008Gaussian noise (zero mean variance)

Trimming fraction 02

GMMSMM

NSMMACAPGGMM-TLE

0

002

004

006

008

01

012

014

016

018

02

MCR

GΓMM(b)

2 5 8 10Salt amp Pepper noise ()

Trimming fraction 02

GMMSMM

NSMMACAPGGMM-TLE

0

002

004

006

008

01

012

014

016

018

02

MCR

GΓMM(c)

001 004 006 008Gaussian noise (zero mean variance)

Trimming fraction 02

0

002

004

006

008

01

012

014

016

018

02

MCR

GMMSMMGΓMM

NSMMACAPGGMM-TLE

(d)

Figure 9 Average MCR for different methods on two test images under noisy environment (Berkeley Dataset) ((a) and (b)) Test image24063 ((c) and (d)) test image 35010

therefore GGMM-TLE demonstrated superior performanceunder modelling with samples contaminated with unknownoutliers Finally combined with MRF GGMM-TLE consid-ered the spatial relationship between neighbouring pixels anddemonstrated a stronger ability to resist different noise Thesegmentation results of synthetic data and real-world imagesconfirmed that the proposed method demonstrated superiorcompetitivenessThemain limitation of this algorithm is that

the segmentation task requires component-based confidencelevel ordering which increases the computational cost

As a future work one direction is to obtain other finitemixture models by testing different probability density func-tions Another possible direction is to extend the presentedmethod to a higher dimension in a straightforward mannersuch as fMRI time-series clustering We plan to address thesetopics in a separate paper

16 Mathematical Problems in Engineering

Table 2AverageDSCof differentmethods for Berkeley test imageswith zeromeanGaussian noise variance 001 (meanplusmn standard deviation)Images Label GMM SMM GΓMM24063 3 086005 plusmn 032173 089173 plusmn 020353 088071 plusmn 0204728068 3 085737 plusmn 028634 087439 plusmn 027448 086005 plusmn 031348241004 4 085388 plusmn 032745 089018 plusmn 024834 087803 plusmn 02089555067 4 087931 plusmn 021733 088342 plusmn 021631 088013 plusmn 03055635010 4 084904 plusmn 033850 088463 plusmn 023632 087991 plusmn 018584Images Label NSMM ACAP GGMM-TLE24063 3 091825 plusmn 015734 095004 plusmn 014848 097122 plusmn 0148918068 3 089173 plusmn 028042 094485 plusmn 016055 096108 plusmn 010723241004 4 092006 plusmn 021634 094670 plusmn 013114 097061 plusmn 01482655067 4 092601 plusmn 017203 096893 plusmn 012051 097175 plusmn 01387635010 4 092183 plusmn 020847 095380 plusmn 013954 096219 plusmn 015664

Table 3 Average DSC of different methods for Berkeley test images with 3 Salt and Pepper noise (mean plusmn standard deviation)

Images Label GMM SMM GΓMM24063 3 087126 plusmn 030566 090972 plusmn 023385 088482 plusmn 0188088068 3 085234 plusmn 032531 088082 plusmn 030318 087627 plusmn 033601241004 4 087294 plusmn 035061 088385 plusmn 030458 089714 plusmn 02505455067 4 088099 plusmn 032392 090400 plusmn 025767 088517 plusmn 02109235010 4 085337 plusmn 025011 089075 plusmn 021839 087630 plusmn 016432Images Label NSMM ACAP GGMM-TLE24063 3 092079 plusmn 016049 096103 plusmn 023214 098394 plusmn 0075148068 3 090213 plusmn 025321 095329 plusmn 011539 097328 plusmn 011482241004 4 093487 plusmn 016806 096271 plusmn 012378 097117 plusmn 01368355067 4 093247 plusmn 018579 098076 plusmn 010690 098587 plusmn 00665535010 4 093752 plusmn 017804 096053 plusmn 010842 097420 plusmn 012722Conflicts of Interest

The authors declare that they have no conflicts of interest

Acknowledgments

This work was supported by the National Natural ScienceFoundation of China under Grant no 61371150

References

[1] A Saglam and N A Baykan ldquoSequential image segmentationbased on minimum spanning tree representationrdquo PatternRecognition Letters vol 87 pp 155ndash162 2017

[2] S Yin Y Qian andMGong ldquoUnsupervised hierarchical imagesegmentation through fuzzy entropy maximizationrdquo PatternRecognition vol 68 pp 245ndash259 2017

[3] H-C Li V A Krylov P-Z Fan J Zerubia and W J EmeryldquoUnsupervised learning of generalized gamma mixture modelwith application in statistical modeling of high-resolution SARimagesrdquo IEEE Transactions on Geoscience and Remote Sensingvol 54 no 4 pp 2153ndash2170 2016

[4] A Roy A Pal and U Garain ldquoJCLMM A finite mixturemodel for clustering of circular-linear data and its applicationto psoriatic plaque segmentationrdquo Pattern Recognition vol 66pp 160ndash173 2017

[5] Y Bar-Yosef and Y Bistritz ldquoGaussian mixture models reduc-tion by variational maximummutual informationrdquo IEEE Trans-actions on Signal Processing vol 63 no 6 pp 1557ndash1569 2015

[6] R Zhang D H Ye D Pal J-B Thibault K D Sauer and C Bouman ldquoA Gaussian mixture MRF for model-based iterativereconstruction with applications to low-dose X-ray CTrdquo IEEETransactions on Computational Imaging vol 2 no 3 pp 359ndash374 2016

[7] H Z Yerebakan and M Dundar ldquoPartially collapsed parallelGibbs sampler for Dirichlet process mixture modelsrdquo PatternRecognition Letters vol 90 pp 22ndash27 2017

[8] H Zhang QM JWu and TM Nguyen ldquoIncorporatingmeantemplate into finite mixture model for image segmentationrdquoIEEE Transactions on Neural Networks and Learning Systemsvol 24 no 2 pp 328ndash335 2013

[9] A Matza and Y Bistritz ldquoSkew Gaussian mixture models forspeaker recognitionrdquo IET Signal Processing vol 8 no 8 pp860ndash867 2014

[10] T-T Van Cao ldquoModelling of inhomogeneity in radar clutterusing weibull mixture densitiesrdquo IET Radar Sonar amp Naviga-tion vol 8 no 3 pp 180ndash194 2014

[11] Q Peng and L Zhao ldquoSAR image filtering based on the Cauchy-Rayleigh mixture modelrdquo IEEE Geoscience and Remote SensingLetters vol 11 no 5 pp 960ndash966 2014

[12] TMNguyen andQM JWu ldquoA nonsymmetricmixturemodelfor unsupervised image segmentationrdquo IEEE Transactions onCybernetics vol 43 no 2 pp 751ndash765 2013

Mathematical Problems in Engineering 17

[13] T M Nguyen Q M J Wu D Mukherjee and H ZhangldquoA bayesian bounded asymmetric mixture model with seg-mentation applicationrdquo IEEE Journal of Biomedical and HealthInformatics vol 18 no 1 pp 109ndash119 2014

[14] T M Nguyen and Q M J Wu ldquoBounded asymmetricalstudentrsquos-t mixture modelrdquo IEEE Transactions on Cyberneticsvol 44 no 6 pp 857ndash869 2014

[15] X Zhou R Peng and C Wang ldquoA two-component K-Lognormal mixture model and its parameter estimationmethodrdquo IEEE Transactions on Geoscience and Remote Sensingvol 53 no 5 pp 2640ndash2651 2015

[16] A De Angelis G De Angelis and P Carbone ldquoUsing Gaussian-Uniform Mixture Models for Robust Time-Interval Measure-mentrdquo IEEE Transactions on Instrumentation andMeasurementvol 64 no 12 pp 3545ndash3554 2015

[17] R P Browne P D McNicholas and M D Sparling ldquoModel-based learning using a mixture of mixtures of gaussian anduniform distributionsrdquo IEEE Transactions on Pattern Analysisand Machine Intelligence vol 34 no 4 pp 814ndash817 2012

[18] J Sun A Zhou S Keates and S Liao ldquoSimultaneous BayesianClustering and Feature Selection Through Studentrsquos t MixturesModelrdquo IEEE Transactions on Neural Networks and LearningSystems 2017

[19] N Neykov P Filzmoser R Dimova and P Neytchev ldquoRobustfitting of mixtures using the trimmed likelihood estimatorrdquoComputational Statistics amp Data Analysis vol 52 no 1 pp 299ndash308 2007

[20] C H Muller and N Neykov ldquoBreakdown points of trimmedlikelihood estimators and related estimators in generalizedlinear modelsrdquo Journal of Statistical Planning and Inference vol116 no 2 pp 503ndash519 2003

[21] A Galimzianova F Pernus B Likar and Z Spiclin ldquoRobustestimation of unbalanced mixture models on samples withoutliersrdquo IEEE Transactions on Pattern Analysis and MachineIntelligence vol 37 no 11 pp 2273ndash2285 2015

[22] N Atienza J Garcia-Heras and J M Munoz-Pichardo ldquoAnew condition for identifiability of finite mixture distributionsrdquoMetrika vol 63 no 2 pp 215ndash221 2006

[23] W R Press S A TeukolskyW T Vetterling and B P FlanneryNumerical Recipes in C The Art of Scientific Computing Cam-bridge University Press Cambridge UK 2002

[24] Y Zhang M Brady and S Smith ldquoSegmentation of brain MRimages through a hidden Markov random field model andthe expectation-maximization algorithmrdquo IEEE Transactionson Medical Imaging vol 20 no 1 pp 45ndash57 2001

[25] LHan J HHipwell B Eiben et al ldquoA nonlinear biomechanicalmodel based registration method for aligning prone and supinemr breast imagesrdquo IEEE Transactions on Medical Imaging vol33 no 3 pp 682ndash694 2014

[26] B Hariharan P Arbelaez L Bourdev S Maji and J MalikldquoSemantic contours from inverse detectorsrdquo in Proceedings ofthe 2011 IEEE International Conference on Computer VisionICCV 2011 pp 991ndash998 Spain November 2011

[27] IBSR [Online] Available httpwwwnitrcorgprojectsibsr[28] S Meignen and H Meignen ldquoOn the modeling of small sample

distributions with generalized Gaussian density in a maximumlikelihood frameworkrdquo IEEE Transactions on Image Processingvol 15 no 6 pp 1647ndash1652 2006

[29] G McLachlan and D Peel Finite Mixture Models JohnWiley ampSons New York NY USA 2000

[30] D Peel and G J McLachlan ldquoRobust mixture modelling usingthe t distributionrdquo Statistics and Computing vol 10 no 4 pp339ndash348 2000

[31] T Elguebaly and N Bouguila ldquoBayesian learning of finite gen-eralized Gaussianmixturemodels on imagesrdquo Signal Processingvol 91 no 4 pp 801ndash820 2011

Hindawiwwwhindawicom Volume 2018

MathematicsJournal of

Hindawiwwwhindawicom Volume 2018

Mathematical Problems in Engineering

Applied MathematicsJournal of

Hindawiwwwhindawicom Volume 2018

Probability and StatisticsHindawiwwwhindawicom Volume 2018

Journal of

Hindawiwwwhindawicom Volume 2018

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawiwwwhindawicom Volume 2018

OptimizationJournal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Engineering Mathematics

International Journal of

Hindawiwwwhindawicom Volume 2018

Operations ResearchAdvances in

Journal of

Hindawiwwwhindawicom Volume 2018

Function SpacesAbstract and Applied AnalysisHindawiwwwhindawicom Volume 2018

International Journal of Mathematics and Mathematical Sciences

Hindawiwwwhindawicom Volume 2018

Hindawi Publishing Corporation httpwwwhindawicom Volume 2013Hindawiwwwhindawicom

The Scientific World Journal

Volume 2018

Hindawiwwwhindawicom Volume 2018Volume 2018

Numerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisAdvances inAdvances in Discrete Dynamics in

Nature and SocietyHindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom

Dierential EquationsInternational Journal of

Volume 2018

Hindawiwwwhindawicom Volume 2018

Decision SciencesAdvances in

Hindawiwwwhindawicom Volume 2018

AnalysisInternational Journal of

Hindawiwwwhindawicom Volume 2018

Stochastic AnalysisInternational Journal of

Submit your manuscripts atwwwhindawicom

Page 6: Image Segmentation Using a Trimmed Likelihood Estimator in ... · ResearchArticle Image Segmentation Using a Trimmed Likelihood Estimator in the Asymmetric Mixture Model Based on

6 Mathematical Problems in Engineering

Equating 120597119876(119883 | Θ)120597120590119895 to zero we can obtain the updateformulas for scale parameter 120590119895 by

120590(119905+1)119895 = [[[[sum119873119894=1 119911(119905)119894119895 119910(119905)1198941198951119909V(119905+1)119895119894 119896(119905+1)119895 sum119873119894=1 119911(119905)119894119895 119910(119905)1198941198951

]]]]

1V(119905+1)119895

(35)

By calculating the partial derivative of the objective functionin (25) with parameter set 1205791198952 = 119906119895 Σ119895 we can obtain theestimation of the parameters mean 119906119895 and covariance Σ119895

120597119876 (119883 | Θ)120597119906119895 = 119873sum119894=1

119911(119905)119894119895 119910(119905)1198941198952 (119909119894 minus 119906119895Σ119895 ) 120597119876 (119883 | Θ)120597Σ119895 = 119873sum

119894=1

119911(119905)119894119895 119910(119905)1198941198952(minus 12Σ119895 +(119909119894 minus 119906119895)22Σ2119895 )

(36)

Eventually the final updates for these two parameters can beobtained by

119906(119905+1)119895 = sum119873119894=1 119911(119905)119894119895 119910(119905)1198941198952119909119894sum119873119894=1 119911(119905)119894119895 119910(119905)1198941198952 (37)

Σ(119905+1)119895 = sum119873119894=1 119911(119905)119894119895 119910(119905)1198941198952 (119909119894 minus 119906(119905+1)119895 )2sum119873119894=1 119911(119905)119894119895 119910(119905)1198941198952 (38)

At this point the parameter learning procedure is complete

5 Ordering Method for Likelihood Trimming

For observed data with heavy outliers it is preferred todiscard the outliers and to estimate the parameters of theproposed mixture model using the remaining data Assumethat 119883 is a sample with 119873 observations 120578 is the trimmingfraction and 119883119872 is the subsample with a size 119872 =119873(1 minus 120578) Theoretically the trimming fraction 120578 should behigher than the real outlier fraction value After cutting theoutliers we estimate the model parameters by maximizingthe objective function 119876(119883119872 | Θ) in subsample 119883119872 Themost important step is to discard the outliers and select thesubsample This requires a specific ordering for all of theobservations in the sample Typically the number of outliersis unpredictableThus it is important for the proposedmodelto avoid allowing observed data that belongs to the labelswith a small number of observations falling into outliersThis study presents an effective component-based confidencelevel ordering method In the proposed GGMM-TLE wedo not calculate the density function value as in FAST-TLE[20] for every single observation Rather we only utilizethe concept of confidence level for these observations toeliminate the effects of mixture weights and sample scalesCombined with the posterior probability 119911(119905)119894119895 in (22) wecan order the observations that belong to the same groupseparately Thus it is more reasonable for ordering theobservation with GGMM-TLE Specifically we derive the

following increasing inequality based on component-basedconfidence level ordering

intΩ1(11990911988911 )

119870119895sum119896=1

ℏ11198961199011119896 (120596 | 1205791119896) 119889120596 le sdot sdot sdotle intΩ1(11990911988911198731

)

119870119895sum119896=1

ℏ11198961199011119896 (120596 | 1205791119896) 119889120596

intΩ119870(1199091198891198701 )

119870119895sum119896=1

ℏ119870119896119901119870119896 (120596 | 120579119870119896) 119889120596 le sdot sdot sdotle intΩ119870(119909119889119870119873119870

)

119870119895sum119896=1

ℏ119870119896119901119870119896 (120596 | 120579119870119896) 119889120596

(39)

where Ω119895(119909119894) = 120596 isin Ω119895 sum119870119895119896=1ℏ119895119896119901119895119896(120596 | 120579119895119896) gesum119870119895

119896=1ℏ119895119896119901119895119896(119909119894 | 120579119895119896) 119895 = (1 119870) Ω119895 is the 119895th

component determined by the posterior probability 119873119895 isthe number of observations belonging to the 119895th componentClearly 119873119895 satisfies sum119870119895=1119873119895 = 119873 119889119895 = (1198891198951 1198891198952 119889119895119873119895)is the ordering of sample indices of the 119895th component Bysorting and discarding each component individually with thesame trimming fraction 120578 we can obtain the subsample ofeach component 119883119872119895 = 119909119889119895119898119872119895119898=1 where119872119895 = 119873119895(1 minus 120578)Hence the total subsample119883119872 can be expressed by the unionof the subsample of each component119883119872119895

119883119872 = 1198831198721 cup 1198831198722 cup sdot sdot sdot cup 119883119872119870 (40)

Finally the parameters of the proposedmixturemodel can beestimatedwith subsample119883119872 and objective function119876(119883119872 |Θ) In the proposed GGMM-TLE by evaluating the intervalintegral rather than log-likelihood value of the observationswe can obtain superior performance compared with the clas-sical FAST-TLEThis is becausewe can order the observationswithin every individual label to retain the consistency oftrimming proportion of each label Therefore regardless ofthe mixture weights and sample scales all observations ofeach label are equally considered Finally combined with thesteps of GGMM in Section 2 we summarize the proceduresof GGMM-TLE as follows

Step 1 Input the trimming fraction 120578 Initialize the parameterset Θ = 120587119895 ℏ1198951 1205791198951 ℏ1198952 1205791198952 where 1205791198951 = V119895 119896119895 120590119895 and1205791198952 = 119906119895 Σ119895Step 2 Based on the current parameter set Θ(119905) evaluate theposterior probability 119911119894119895 using (22) compute the variable 119910119894119895119896using (23) and classify the observations

Step 3 Perform component-based confidence level orderingusing (39) to obtain subsample119883(119905+1)119872

Mathematical Problems in Engineering 7

Step 4 Compute the objective function 119876(119883(119905+1)119872 | Θ(119905)) interms of (25) If 119876(119883(119905+1)119872 | Θ(119905)) ge 119876(119883(119905)119872 | Θ(119905)) continue toStep 5 else increase the value of the trimming fraction belowthe predefined threshold and obtain new subsample 119883(119905+1)119872lowastuntil the following condition is satisfied 119876(119883(119905+1)119872lowast | Θ(119905)) ge119876(119883(119905+1)119872 | Θ(119905)) set119872 = 119872lowast If the condition is not satisfiedterminate the procedure

Step 5 Update the prior probability 120587119895 and weighting factorℏ119895119896 using (27) and (29) respectively Compute the powerparameter V119895 shape parameter 119896119895 scale parameter 120590119895 meanparameter 119906119895 and the covariance parameter Σ119895 by solving(31) (33) (35) (37) and (38) respectively

Step 6 Maximize the objective function 119876(119883(119905+1)119872 | Θ(119905))using (25) and obtain the new parameter set Θ(119905+1) Ifthe termination condition is satisfied end the iterationsOtherwise set 119905 = 119905 + 1 and return to Step 2

6 Experimental Results

This section experimentally evaluates the proposed GGMM-TLE by considering the problem of real-world image seg-mentation and compares GGMM-TLE with other relatedalgorithms All algorithms are initialized using 119896-means Theexperiments were developed in MATLAB R2012b and wereexecuted on a personal computer with Intel(R) Core(TM)I7-6500U CPU 25GHz 8GB RAM 64-bit To obtain anobjective evaluation of the proposed method this paper usestwo measure criteria the misclassification ratio (MCR) [24]andDice similarity coefficient (DSC) [25]The former has thefollowing form

MCR = number of mis-segmented pixelstotal number of pixels

(41)

MCR is widely used in the literature to evaluate segmentationperformance For MCR the smaller the value of the MCRthe higher the accuracy of the segmentation The popularoverlap-based metric DSC is also employed to evaluate theproposed mixture model

DSC (119878119886 119878119898) = 2 1003816100381610038161003816119878119886 cap 119878119898100381610038161003816100381610038161003816100381610038161198781198861003816100381610038161003816 + 10038161003816100381610038161198781198981003816100381610038161003816 (42)

where 119878119886 denotes the shape of the automatic segmentationand 119878119898 indicates the shape of the manual segmentationobtained from the algorithm output The range of DSC isfrom zero to one with one denoting ideal segmentation andzero indicating poor segmentation

61 Test of the Proposed Trimming Approach The firstexperiment presented herein validates the behaviour ofthe proposed GGMM-TLE For this purpose we generatedthree labels of inlier observations and one label of outliersThe inlier observations consisted of 10000 points from a3-component bivariate GMM with prior probability 1205871 =

1205872 = 1205873 = 13 The means and variances of this bivariateGMM are

1205831 = (0 4)119879 1205832 = (4 0)119879 1205833 = (minus4 4)119879 Σ1 = (1 11 2) Σ2 = (1 00 1) Σ3 = ( 1 minus05minus05 3 )

(43)

Labels 1 2 and 3 have 2500 3500 and 4000 pointsrespectively Random noise with 1000 points was addedin the plusmn10 rectangle This random noise is considered asoutliers For comparison apart from the proposed approachwe also included the performance of FAST-TLE [20] andCLO-TLE [21] The trimming fraction 120578 varied in the range[01 05] The segmentation results of the different methodsare presented in Figure 1 From Figure 1 we can observethat the classical GMM was sensitive to outliers resulting inpoor clustering performance in terms of visual interpretationIt is clear that the performance of the FAST-TLE methodwas less influenced by the outliers However it was alsounstable especially when the trimming fraction was highThe CLO-TLE ordering strategy exhibited a higher stabilityversus the previous two algorithmswith the use of confidencelevel orderingHowever there remainedmisclassified outliersdemonstrating that the fitting influence of CLO-TLE was notideal Conversely we determined that the best performancewas with the proposed GGMM-TLE this is because everyobservation of the individual groups was considered Thisfigure indicates that the GGMM-TLE can extract clear pointaccumulations from noise data

62 Segmentation of Noise-Degraded Images To demonstratethe feasibility of GGMM-TLE the following experimentused four real-world images (ldquoBoatrdquo ldquoCowrdquo ldquoHouserdquo andldquoManrdquo) from the semantic boundaries dataset (SBD) [26]for comparison These images were segmented into threelabels All these images were contaminated by Salt and Peppernoise with intensity 5 Figure 2 presents the visualization ofthe segmentation task with trimming fraction 02 where thesecond third and fourth columns correspond to the FAST-TLE CLO-TLE and GGMM-TLE algorithms respectivelyFigure 2 shows the detailed parts of the corresponding seg-mentation results using different approaches Note that theGGMM-TLE eliminates obviously the noise as predicatedWedemonstrate the log-likelihood function versus the numberof iteration under trimming fractions 02 for the different testimages in Figure 3 It can be clearly observed that the log-likelihood functions of FAST-TLE and CLO-TLE are similar

8 Mathematical Problems in Engineering

minus10 1086420minus2minus4minus6minus8minus10

minus8

minus6

minus4

minus2

0

2

4

6

8

10

= 01

= 03

= 05

(a)

minus10 1086420minus2minus4minus6minus8minus10

minus8

minus6

minus4

minus2

0

2

4

6

8

10

= 01

= 03

= 05

(b)

minus10 1086420minus2minus4minus6minus8minus10

minus8

minus6

minus4

minus2

0

2

4

6

8

10

= 01

= 03

= 05

(c)

minus10 1086420minus2minus4minus6minus8minus10

minus8

minus6

minus4

minus2

0

2

4

6

8

10

= 01

= 03

= 05

(d)

Figure 1 Noisy synthetic data clustering using the considered methods (a) Classical GMM (b) FAST-TLE method (c) CLO-TLE methodand (d) GGMM-TLE method

to that of the proposed method However a closer inspectionof the iteration ranges [5 15] indicates that the GGMM-TLE method can moderately improve the convergence rateWhen the iterations is low (119905 le 5) the convergence ratewith GGMM-TLE is the biggest one In the general casethe GGMM-TLE method converges after five iterations InFigure 4 the MCR plots of each test image against differenttrimming fractions are displayed This figure implies that theproposed scheme achieved superior segmentation accuracyas the basic scheme because the MCR of the GGMM-TLEmethod was the least of all test images

The proposed algorithm was also assessed on a clinicalMR image to label the white mass (WM) and grey mass(GM) For this purpose a real MR image slice 42 of IBSR2from the IBSR dataset [27] was randomly selected to evaluatethe performance of the proposed GGMM-TLE against FAST-TLE and CLO-TLE Salt and Pepper noise with intensity5 and 10 was considered in our experiment Figure 5presents the performance of these methods under 5 Saltand Pepper noise and different trimming fractions It isclear that FAST-TLE did not demonstrate improved resultsfor heavier outliers in the segmentation task The CLO-TLE

Mathematical Problems in Engineering 9

(a) (b) (c) (d)

Figure 2 Segmentation results obtained by different methods for four test images (trimming fractions 02) (a) Original image with 5 Saltand Pepper noise The figures from (b) to (d) show FAST-TLE CLO-TLE and GGMM-TLE methods respectively

tended to achieve superior performance with an increase ofthe trimming fraction and could maintain its stability andeffectiveness A closer inspection of Figure 5 indicates that thesegmentation accuracy of GGMM-TLE was visually higherthan the other methods It is due to the fact that the proposedGGMM-TLE utilizes the advantages of confidence level forthese observations so that the effects of mixture weights andsample scales are eliminatedTherefore with the increasing ofthe trimming fractions GGMM-TLE exhibits better stabilityto outliers than CLO-TLE As shown in Figure 5 this is

especially apparent in GGMM-TLE with high trimmingfractions Figure 6 displays the evaluation results using theMCR metric It can be observed that the GGMM-TLE hadthe lowest MCR value thus its segmentation results weresuperior to those of FAST-TLE and CLO-TLE

Further we executed these algorithms 20 times each timewith different initializationThen we computed the (average)performance in terms of the number of correctly classifieddata points and the DSC for this MR image including whitematter and grey matter Table 1 lists the mean values and the

10 Mathematical Problems in Engineering

GGMM-TLECLO-TLEFAST-TLE

times104

54

55

56

57

58

59

6Li

kelih

ood

func

tion

10 20 30 40 500Iterations

(a)

46

48

5

52

54

56

58

Like

lihoo

d fu

nctio

n

10 20 30 40 500Iterations

GGMM-TLECLO-TLEFAST-TLE

times104

(b)

4

42

44

46

48

5

52

54

56

Like

lihoo

d fu

nctio

n

10 20 30 40 500Iterations

GGMM-TLECLO-TLEFAST-TLE

times104

(c)

10 20 30 40 500Iterations

38

4

42

44

46

48

5

52

54

Like

lihoo

d fu

nctio

n

GGMM-TLECLO-TLEFAST-TLE

times104

(d)

Figure 3 Comparison of likelihood values for the different test images with trimming faction 02 (a) Test image ldquoBoatrdquo (b) test image ldquoCowrdquo(c) test image ldquoHouserdquo and (d) test image ldquoManrdquo

standard deviations of the DSC obtained from 20 executionsThe experiment results demonstrate that the accuracy wasmoderately improved compared with the other methods

To assess the robustness of the proposed GGMM-TLEat different levels of noise a set of real-world images fromthe Berkeley image dataset [28] was considered to comparethe performance of GMM [29] SMM [30] GΓMM [31]NSMM [12] and ACAP [8] The ground-truth informationwas freely obtained from the website [31] This was usedfor algorithm performance evaluation The experiment was

performed with noisy version images by adding Gaussiannoise (zero mean 001 variance) and Salt and Pepper noise(3) to the images as indicated in the first row of Figures 7and 8 The evaluated algorithms were initialized using the 119896-means algorithmThe number of label119870was set according tohuman visual inspection Figures 7 and 8 exhibit the resultsof image segmentation using the different methods Owingto the application of a mean filter we can observe that theperformance of ACAPwas superior to GMM SMM GΓMMand NSMM The results generated by the ACAP achieved

Mathematical Problems in Engineering 11

FAST-TLECLO-TLEGGMM-TLE

015 02 025 03 035 04 045 0501Trimming fraction

002

003

004

005

006

MCR

(a)

015 02 025 03 035 04 045 0501Trimming fraction

003

004

005

006

MCR

FAST-TLECLO-TLEGGMM-TLE

(b)

015 02 025 03 035 04 045 0501Trimming fraction

001

002

003

004

005

006

007

MCR

FAST-TLECLO-TLEGGMM-TLE

(c)

01 015 02 025 03 035 04 045 05001

002

003

004

005

006

007

Trimming fraction

MCR

FAST-TLECLO-TLEGGMM-TLE

(d)

Figure 4 Plot ofMCR of test images against different trimming fractions (a) Test image ldquoBoatrdquo (b) test image ldquoCowrdquo (c) test image ldquoHouserdquoand (d) test image ldquoManrdquo

Table 1 Average DSC of different methods with diverse trimming fractions Original MR image with 5 Salt and Pepper noise (mean plusmnstandard deviation)

Methods Trimming fraction (WM)01 02 03 04 05

FAST-TLE 07153 plusmn 04191 07269 plusmn 04785 07081 plusmn 02083 06771 plusmn 04762 06743 plusmn 05216CLO-TLE 08181 plusmn 02721 07968 plusmn 02814 07966 plusmn 03612 07748 plusmn 03264 07484 plusmn 03673GGMM-TLE 08994 plusmn 01023 08380 plusmn 01571 08177 plusmn 01238 08040 plusmn 00779 07803 plusmn 02317Methods Trimming fraction (GM)

01 02 03 04 05FAST-TLE 08367 plusmn 04020 08378 plusmn 04060 08313 plusmn 03553 08015 plusmn 03723 07829 plusmn 04062CLO-TLE 09107 plusmn 03841 08931 plusmn 02379 08944 plusmn 01540 08803 plusmn 02783 08593 plusmn 01761GGMM-TLE 09561 plusmn 01262 09227 plusmn 01040 09095 plusmn 01235 09002 plusmn 01156 08837 plusmn 01240

12 Mathematical Problems in Engineering

(a)

(b)

(c)

Figure 5 (a) to (c) display the segmentation results of MR image (slice 42 of IBSR2) using FAST-TLE CLO-TLE and GGMM-TLErespectively The trimming fractions from left to right are 01 02 03 04 and 05

FASTminusTLECLOminusTLEGGMMminusTLE

002

003

004

005

006

007

008

009

MCR

015 02 025 03 035 04 045 0501Trimming fraction

(a)

01

015

02

025

03

035

04

045

05

MCR

015 02 025 03 035 04 045 0501Trimming fraction

FAST-TLECLO-TLEGGMM-TLE

(b)

Figure 6 Trimming fractions versus classification accuracy under noise environment MCR of different methods for MR image (slice 42 ofIBSR2) (a) 5 Salt and Pepper noise and (b) 10 Salt and Pepper noise

Mathematical Problems in Engineering 13

Figure 7 Segmentation performance comparison for real-world images in the Gaussian noise environment (zero mean variance 001) Fromthe first row to the last noisy image GMM SMM GΓMM NSMM ACAP and GGMM-TLE respectively From the first column to the lastcolumn test image 24063 8068 241004 55067 and 35010 (Berkeley Dataset)

similar results to GGMM-TLE however its performance wasimpaired when there was an abundance of rich details forexample in test image 241004 (the sixth row of Figure 7)TheGGMM-TLE provided a moderately improved performanceunder different noisy conditions and eliminated the influenceof widely spread noise data This characteristic is endemicto MRF and trimmed likelihood estimator The resultingDSC is reported in Tables 2 and 3 providing a quantitativecomparison among the algorithms The DSC and standarddeviation indicate that the proposed method outperformedthe other methods by preserving the highest DSC

To further demonstrate the goodness of GGMM-TLEagainst different noise in Figure 9 we display the meanvalues and standard deviations of the MCR obtained fromtwenty runs on two Berkeley test images (24063 and 35010)under different noise environments Considering the MCRon average the ACAP effectively eliminated the effects ofnoise during the segmentation processing and demonstratedacceptable segmentation results We determined that classi-cal GMM SMM and GΓMM were severely influenced byGaussian noise and could not accurately separate a regionfrom the background In the majority of cases the NSMM

14 Mathematical Problems in Engineering

Figure 8 Segmentation performance comparison for real-world images in the Salt and Pepper noise environment (3) From the first rowto the last noisy image GMM SMM GΓMM NSMM ACAP and GGMM-TLE respectively From the first column to the last column testimage 24063 8068 241004 55067 and 35010 (Berkeley Dataset)

approach was superior to SMM and GMM yet continuedto be influenced by varying degrees of Salt and Pepperand Gaussian noise As expected compared to the otheralgorithms the GGMM-TLEwas stable and achieved the bestsegmentation results according to the quantitative criterion

7 Concluding Remarks

In this paper a robust estimation of the proposed GGMM-TLE using a trimmed likelihood estimator for real-worldimage segmentation was proposed GGMM-TLE with MRF

implements a mixture of generalized Gamma and Gaussiandistributions

The main contribution of this paper is the presentationof an asymmetric finite model GGMM-TLE based on MRFWith this model we have high flexibility to fit differentshapes of observed data Further this study discussed theproperty of identifiability of the proposed mixture modelguaranteeing that the estimation procedure for the parame-ters was correctly developed Then to ensure that GGMM-TLE was robust against heavy outliers the paper offeredan effective method to discard the outliers in advance and

Mathematical Problems in Engineering 15

Trimming fraction 02

2 5 8 10Salt amp Pepper noise ()

0

002

004

006

008

01

012

014

016

018

02

MCR

GMMSMM

NSMMACAPGGMM-TLEGΓMM

(a)

001 004 006 008Gaussian noise (zero mean variance)

Trimming fraction 02

GMMSMM

NSMMACAPGGMM-TLE

0

002

004

006

008

01

012

014

016

018

02

MCR

GΓMM(b)

2 5 8 10Salt amp Pepper noise ()

Trimming fraction 02

GMMSMM

NSMMACAPGGMM-TLE

0

002

004

006

008

01

012

014

016

018

02

MCR

GΓMM(c)

001 004 006 008Gaussian noise (zero mean variance)

Trimming fraction 02

0

002

004

006

008

01

012

014

016

018

02

MCR

GMMSMMGΓMM

NSMMACAPGGMM-TLE

(d)

Figure 9 Average MCR for different methods on two test images under noisy environment (Berkeley Dataset) ((a) and (b)) Test image24063 ((c) and (d)) test image 35010

therefore GGMM-TLE demonstrated superior performanceunder modelling with samples contaminated with unknownoutliers Finally combined with MRF GGMM-TLE consid-ered the spatial relationship between neighbouring pixels anddemonstrated a stronger ability to resist different noise Thesegmentation results of synthetic data and real-world imagesconfirmed that the proposed method demonstrated superiorcompetitivenessThemain limitation of this algorithm is that

the segmentation task requires component-based confidencelevel ordering which increases the computational cost

As a future work one direction is to obtain other finitemixture models by testing different probability density func-tions Another possible direction is to extend the presentedmethod to a higher dimension in a straightforward mannersuch as fMRI time-series clustering We plan to address thesetopics in a separate paper

16 Mathematical Problems in Engineering

Table 2AverageDSCof differentmethods for Berkeley test imageswith zeromeanGaussian noise variance 001 (meanplusmn standard deviation)Images Label GMM SMM GΓMM24063 3 086005 plusmn 032173 089173 plusmn 020353 088071 plusmn 0204728068 3 085737 plusmn 028634 087439 plusmn 027448 086005 plusmn 031348241004 4 085388 plusmn 032745 089018 plusmn 024834 087803 plusmn 02089555067 4 087931 plusmn 021733 088342 plusmn 021631 088013 plusmn 03055635010 4 084904 plusmn 033850 088463 plusmn 023632 087991 plusmn 018584Images Label NSMM ACAP GGMM-TLE24063 3 091825 plusmn 015734 095004 plusmn 014848 097122 plusmn 0148918068 3 089173 plusmn 028042 094485 plusmn 016055 096108 plusmn 010723241004 4 092006 plusmn 021634 094670 plusmn 013114 097061 plusmn 01482655067 4 092601 plusmn 017203 096893 plusmn 012051 097175 plusmn 01387635010 4 092183 plusmn 020847 095380 plusmn 013954 096219 plusmn 015664

Table 3 Average DSC of different methods for Berkeley test images with 3 Salt and Pepper noise (mean plusmn standard deviation)

Images Label GMM SMM GΓMM24063 3 087126 plusmn 030566 090972 plusmn 023385 088482 plusmn 0188088068 3 085234 plusmn 032531 088082 plusmn 030318 087627 plusmn 033601241004 4 087294 plusmn 035061 088385 plusmn 030458 089714 plusmn 02505455067 4 088099 plusmn 032392 090400 plusmn 025767 088517 plusmn 02109235010 4 085337 plusmn 025011 089075 plusmn 021839 087630 plusmn 016432Images Label NSMM ACAP GGMM-TLE24063 3 092079 plusmn 016049 096103 plusmn 023214 098394 plusmn 0075148068 3 090213 plusmn 025321 095329 plusmn 011539 097328 plusmn 011482241004 4 093487 plusmn 016806 096271 plusmn 012378 097117 plusmn 01368355067 4 093247 plusmn 018579 098076 plusmn 010690 098587 plusmn 00665535010 4 093752 plusmn 017804 096053 plusmn 010842 097420 plusmn 012722Conflicts of Interest

The authors declare that they have no conflicts of interest

Acknowledgments

This work was supported by the National Natural ScienceFoundation of China under Grant no 61371150

References

[1] A Saglam and N A Baykan ldquoSequential image segmentationbased on minimum spanning tree representationrdquo PatternRecognition Letters vol 87 pp 155ndash162 2017

[2] S Yin Y Qian andMGong ldquoUnsupervised hierarchical imagesegmentation through fuzzy entropy maximizationrdquo PatternRecognition vol 68 pp 245ndash259 2017

[3] H-C Li V A Krylov P-Z Fan J Zerubia and W J EmeryldquoUnsupervised learning of generalized gamma mixture modelwith application in statistical modeling of high-resolution SARimagesrdquo IEEE Transactions on Geoscience and Remote Sensingvol 54 no 4 pp 2153ndash2170 2016

[4] A Roy A Pal and U Garain ldquoJCLMM A finite mixturemodel for clustering of circular-linear data and its applicationto psoriatic plaque segmentationrdquo Pattern Recognition vol 66pp 160ndash173 2017

[5] Y Bar-Yosef and Y Bistritz ldquoGaussian mixture models reduc-tion by variational maximummutual informationrdquo IEEE Trans-actions on Signal Processing vol 63 no 6 pp 1557ndash1569 2015

[6] R Zhang D H Ye D Pal J-B Thibault K D Sauer and C Bouman ldquoA Gaussian mixture MRF for model-based iterativereconstruction with applications to low-dose X-ray CTrdquo IEEETransactions on Computational Imaging vol 2 no 3 pp 359ndash374 2016

[7] H Z Yerebakan and M Dundar ldquoPartially collapsed parallelGibbs sampler for Dirichlet process mixture modelsrdquo PatternRecognition Letters vol 90 pp 22ndash27 2017

[8] H Zhang QM JWu and TM Nguyen ldquoIncorporatingmeantemplate into finite mixture model for image segmentationrdquoIEEE Transactions on Neural Networks and Learning Systemsvol 24 no 2 pp 328ndash335 2013

[9] A Matza and Y Bistritz ldquoSkew Gaussian mixture models forspeaker recognitionrdquo IET Signal Processing vol 8 no 8 pp860ndash867 2014

[10] T-T Van Cao ldquoModelling of inhomogeneity in radar clutterusing weibull mixture densitiesrdquo IET Radar Sonar amp Naviga-tion vol 8 no 3 pp 180ndash194 2014

[11] Q Peng and L Zhao ldquoSAR image filtering based on the Cauchy-Rayleigh mixture modelrdquo IEEE Geoscience and Remote SensingLetters vol 11 no 5 pp 960ndash966 2014

[12] TMNguyen andQM JWu ldquoA nonsymmetricmixturemodelfor unsupervised image segmentationrdquo IEEE Transactions onCybernetics vol 43 no 2 pp 751ndash765 2013

Mathematical Problems in Engineering 17

[13] T M Nguyen Q M J Wu D Mukherjee and H ZhangldquoA bayesian bounded asymmetric mixture model with seg-mentation applicationrdquo IEEE Journal of Biomedical and HealthInformatics vol 18 no 1 pp 109ndash119 2014

[14] T M Nguyen and Q M J Wu ldquoBounded asymmetricalstudentrsquos-t mixture modelrdquo IEEE Transactions on Cyberneticsvol 44 no 6 pp 857ndash869 2014

[15] X Zhou R Peng and C Wang ldquoA two-component K-Lognormal mixture model and its parameter estimationmethodrdquo IEEE Transactions on Geoscience and Remote Sensingvol 53 no 5 pp 2640ndash2651 2015

[16] A De Angelis G De Angelis and P Carbone ldquoUsing Gaussian-Uniform Mixture Models for Robust Time-Interval Measure-mentrdquo IEEE Transactions on Instrumentation andMeasurementvol 64 no 12 pp 3545ndash3554 2015

[17] R P Browne P D McNicholas and M D Sparling ldquoModel-based learning using a mixture of mixtures of gaussian anduniform distributionsrdquo IEEE Transactions on Pattern Analysisand Machine Intelligence vol 34 no 4 pp 814ndash817 2012

[18] J Sun A Zhou S Keates and S Liao ldquoSimultaneous BayesianClustering and Feature Selection Through Studentrsquos t MixturesModelrdquo IEEE Transactions on Neural Networks and LearningSystems 2017

[19] N Neykov P Filzmoser R Dimova and P Neytchev ldquoRobustfitting of mixtures using the trimmed likelihood estimatorrdquoComputational Statistics amp Data Analysis vol 52 no 1 pp 299ndash308 2007

[20] C H Muller and N Neykov ldquoBreakdown points of trimmedlikelihood estimators and related estimators in generalizedlinear modelsrdquo Journal of Statistical Planning and Inference vol116 no 2 pp 503ndash519 2003

[21] A Galimzianova F Pernus B Likar and Z Spiclin ldquoRobustestimation of unbalanced mixture models on samples withoutliersrdquo IEEE Transactions on Pattern Analysis and MachineIntelligence vol 37 no 11 pp 2273ndash2285 2015

[22] N Atienza J Garcia-Heras and J M Munoz-Pichardo ldquoAnew condition for identifiability of finite mixture distributionsrdquoMetrika vol 63 no 2 pp 215ndash221 2006

[23] W R Press S A TeukolskyW T Vetterling and B P FlanneryNumerical Recipes in C The Art of Scientific Computing Cam-bridge University Press Cambridge UK 2002

[24] Y Zhang M Brady and S Smith ldquoSegmentation of brain MRimages through a hidden Markov random field model andthe expectation-maximization algorithmrdquo IEEE Transactionson Medical Imaging vol 20 no 1 pp 45ndash57 2001

[25] LHan J HHipwell B Eiben et al ldquoA nonlinear biomechanicalmodel based registration method for aligning prone and supinemr breast imagesrdquo IEEE Transactions on Medical Imaging vol33 no 3 pp 682ndash694 2014

[26] B Hariharan P Arbelaez L Bourdev S Maji and J MalikldquoSemantic contours from inverse detectorsrdquo in Proceedings ofthe 2011 IEEE International Conference on Computer VisionICCV 2011 pp 991ndash998 Spain November 2011

[27] IBSR [Online] Available httpwwwnitrcorgprojectsibsr[28] S Meignen and H Meignen ldquoOn the modeling of small sample

distributions with generalized Gaussian density in a maximumlikelihood frameworkrdquo IEEE Transactions on Image Processingvol 15 no 6 pp 1647ndash1652 2006

[29] G McLachlan and D Peel Finite Mixture Models JohnWiley ampSons New York NY USA 2000

[30] D Peel and G J McLachlan ldquoRobust mixture modelling usingthe t distributionrdquo Statistics and Computing vol 10 no 4 pp339ndash348 2000

[31] T Elguebaly and N Bouguila ldquoBayesian learning of finite gen-eralized Gaussianmixturemodels on imagesrdquo Signal Processingvol 91 no 4 pp 801ndash820 2011

Hindawiwwwhindawicom Volume 2018

MathematicsJournal of

Hindawiwwwhindawicom Volume 2018

Mathematical Problems in Engineering

Applied MathematicsJournal of

Hindawiwwwhindawicom Volume 2018

Probability and StatisticsHindawiwwwhindawicom Volume 2018

Journal of

Hindawiwwwhindawicom Volume 2018

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawiwwwhindawicom Volume 2018

OptimizationJournal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Engineering Mathematics

International Journal of

Hindawiwwwhindawicom Volume 2018

Operations ResearchAdvances in

Journal of

Hindawiwwwhindawicom Volume 2018

Function SpacesAbstract and Applied AnalysisHindawiwwwhindawicom Volume 2018

International Journal of Mathematics and Mathematical Sciences

Hindawiwwwhindawicom Volume 2018

Hindawi Publishing Corporation httpwwwhindawicom Volume 2013Hindawiwwwhindawicom

The Scientific World Journal

Volume 2018

Hindawiwwwhindawicom Volume 2018Volume 2018

Numerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisAdvances inAdvances in Discrete Dynamics in

Nature and SocietyHindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom

Dierential EquationsInternational Journal of

Volume 2018

Hindawiwwwhindawicom Volume 2018

Decision SciencesAdvances in

Hindawiwwwhindawicom Volume 2018

AnalysisInternational Journal of

Hindawiwwwhindawicom Volume 2018

Stochastic AnalysisInternational Journal of

Submit your manuscripts atwwwhindawicom

Page 7: Image Segmentation Using a Trimmed Likelihood Estimator in ... · ResearchArticle Image Segmentation Using a Trimmed Likelihood Estimator in the Asymmetric Mixture Model Based on

Mathematical Problems in Engineering 7

Step 4 Compute the objective function 119876(119883(119905+1)119872 | Θ(119905)) interms of (25) If 119876(119883(119905+1)119872 | Θ(119905)) ge 119876(119883(119905)119872 | Θ(119905)) continue toStep 5 else increase the value of the trimming fraction belowthe predefined threshold and obtain new subsample 119883(119905+1)119872lowastuntil the following condition is satisfied 119876(119883(119905+1)119872lowast | Θ(119905)) ge119876(119883(119905+1)119872 | Θ(119905)) set119872 = 119872lowast If the condition is not satisfiedterminate the procedure

Step 5 Update the prior probability 120587119895 and weighting factorℏ119895119896 using (27) and (29) respectively Compute the powerparameter V119895 shape parameter 119896119895 scale parameter 120590119895 meanparameter 119906119895 and the covariance parameter Σ119895 by solving(31) (33) (35) (37) and (38) respectively

Step 6 Maximize the objective function 119876(119883(119905+1)119872 | Θ(119905))using (25) and obtain the new parameter set Θ(119905+1) Ifthe termination condition is satisfied end the iterationsOtherwise set 119905 = 119905 + 1 and return to Step 2

6 Experimental Results

This section experimentally evaluates the proposed GGMM-TLE by considering the problem of real-world image seg-mentation and compares GGMM-TLE with other relatedalgorithms All algorithms are initialized using 119896-means Theexperiments were developed in MATLAB R2012b and wereexecuted on a personal computer with Intel(R) Core(TM)I7-6500U CPU 25GHz 8GB RAM 64-bit To obtain anobjective evaluation of the proposed method this paper usestwo measure criteria the misclassification ratio (MCR) [24]andDice similarity coefficient (DSC) [25]The former has thefollowing form

MCR = number of mis-segmented pixelstotal number of pixels

(41)

MCR is widely used in the literature to evaluate segmentationperformance For MCR the smaller the value of the MCRthe higher the accuracy of the segmentation The popularoverlap-based metric DSC is also employed to evaluate theproposed mixture model

DSC (119878119886 119878119898) = 2 1003816100381610038161003816119878119886 cap 119878119898100381610038161003816100381610038161003816100381610038161198781198861003816100381610038161003816 + 10038161003816100381610038161198781198981003816100381610038161003816 (42)

where 119878119886 denotes the shape of the automatic segmentationand 119878119898 indicates the shape of the manual segmentationobtained from the algorithm output The range of DSC isfrom zero to one with one denoting ideal segmentation andzero indicating poor segmentation

61 Test of the Proposed Trimming Approach The firstexperiment presented herein validates the behaviour ofthe proposed GGMM-TLE For this purpose we generatedthree labels of inlier observations and one label of outliersThe inlier observations consisted of 10000 points from a3-component bivariate GMM with prior probability 1205871 =

1205872 = 1205873 = 13 The means and variances of this bivariateGMM are

1205831 = (0 4)119879 1205832 = (4 0)119879 1205833 = (minus4 4)119879 Σ1 = (1 11 2) Σ2 = (1 00 1) Σ3 = ( 1 minus05minus05 3 )

(43)

Labels 1 2 and 3 have 2500 3500 and 4000 pointsrespectively Random noise with 1000 points was addedin the plusmn10 rectangle This random noise is considered asoutliers For comparison apart from the proposed approachwe also included the performance of FAST-TLE [20] andCLO-TLE [21] The trimming fraction 120578 varied in the range[01 05] The segmentation results of the different methodsare presented in Figure 1 From Figure 1 we can observethat the classical GMM was sensitive to outliers resulting inpoor clustering performance in terms of visual interpretationIt is clear that the performance of the FAST-TLE methodwas less influenced by the outliers However it was alsounstable especially when the trimming fraction was highThe CLO-TLE ordering strategy exhibited a higher stabilityversus the previous two algorithmswith the use of confidencelevel orderingHowever there remainedmisclassified outliersdemonstrating that the fitting influence of CLO-TLE was notideal Conversely we determined that the best performancewas with the proposed GGMM-TLE this is because everyobservation of the individual groups was considered Thisfigure indicates that the GGMM-TLE can extract clear pointaccumulations from noise data

62 Segmentation of Noise-Degraded Images To demonstratethe feasibility of GGMM-TLE the following experimentused four real-world images (ldquoBoatrdquo ldquoCowrdquo ldquoHouserdquo andldquoManrdquo) from the semantic boundaries dataset (SBD) [26]for comparison These images were segmented into threelabels All these images were contaminated by Salt and Peppernoise with intensity 5 Figure 2 presents the visualization ofthe segmentation task with trimming fraction 02 where thesecond third and fourth columns correspond to the FAST-TLE CLO-TLE and GGMM-TLE algorithms respectivelyFigure 2 shows the detailed parts of the corresponding seg-mentation results using different approaches Note that theGGMM-TLE eliminates obviously the noise as predicatedWedemonstrate the log-likelihood function versus the numberof iteration under trimming fractions 02 for the different testimages in Figure 3 It can be clearly observed that the log-likelihood functions of FAST-TLE and CLO-TLE are similar

8 Mathematical Problems in Engineering

minus10 1086420minus2minus4minus6minus8minus10

minus8

minus6

minus4

minus2

0

2

4

6

8

10

= 01

= 03

= 05

(a)

minus10 1086420minus2minus4minus6minus8minus10

minus8

minus6

minus4

minus2

0

2

4

6

8

10

= 01

= 03

= 05

(b)

minus10 1086420minus2minus4minus6minus8minus10

minus8

minus6

minus4

minus2

0

2

4

6

8

10

= 01

= 03

= 05

(c)

minus10 1086420minus2minus4minus6minus8minus10

minus8

minus6

minus4

minus2

0

2

4

6

8

10

= 01

= 03

= 05

(d)

Figure 1 Noisy synthetic data clustering using the considered methods (a) Classical GMM (b) FAST-TLE method (c) CLO-TLE methodand (d) GGMM-TLE method

to that of the proposed method However a closer inspectionof the iteration ranges [5 15] indicates that the GGMM-TLE method can moderately improve the convergence rateWhen the iterations is low (119905 le 5) the convergence ratewith GGMM-TLE is the biggest one In the general casethe GGMM-TLE method converges after five iterations InFigure 4 the MCR plots of each test image against differenttrimming fractions are displayed This figure implies that theproposed scheme achieved superior segmentation accuracyas the basic scheme because the MCR of the GGMM-TLEmethod was the least of all test images

The proposed algorithm was also assessed on a clinicalMR image to label the white mass (WM) and grey mass(GM) For this purpose a real MR image slice 42 of IBSR2from the IBSR dataset [27] was randomly selected to evaluatethe performance of the proposed GGMM-TLE against FAST-TLE and CLO-TLE Salt and Pepper noise with intensity5 and 10 was considered in our experiment Figure 5presents the performance of these methods under 5 Saltand Pepper noise and different trimming fractions It isclear that FAST-TLE did not demonstrate improved resultsfor heavier outliers in the segmentation task The CLO-TLE

Mathematical Problems in Engineering 9

(a) (b) (c) (d)

Figure 2 Segmentation results obtained by different methods for four test images (trimming fractions 02) (a) Original image with 5 Saltand Pepper noise The figures from (b) to (d) show FAST-TLE CLO-TLE and GGMM-TLE methods respectively

tended to achieve superior performance with an increase ofthe trimming fraction and could maintain its stability andeffectiveness A closer inspection of Figure 5 indicates that thesegmentation accuracy of GGMM-TLE was visually higherthan the other methods It is due to the fact that the proposedGGMM-TLE utilizes the advantages of confidence level forthese observations so that the effects of mixture weights andsample scales are eliminatedTherefore with the increasing ofthe trimming fractions GGMM-TLE exhibits better stabilityto outliers than CLO-TLE As shown in Figure 5 this is

especially apparent in GGMM-TLE with high trimmingfractions Figure 6 displays the evaluation results using theMCR metric It can be observed that the GGMM-TLE hadthe lowest MCR value thus its segmentation results weresuperior to those of FAST-TLE and CLO-TLE

Further we executed these algorithms 20 times each timewith different initializationThen we computed the (average)performance in terms of the number of correctly classifieddata points and the DSC for this MR image including whitematter and grey matter Table 1 lists the mean values and the

10 Mathematical Problems in Engineering

GGMM-TLECLO-TLEFAST-TLE

times104

54

55

56

57

58

59

6Li

kelih

ood

func

tion

10 20 30 40 500Iterations

(a)

46

48

5

52

54

56

58

Like

lihoo

d fu

nctio

n

10 20 30 40 500Iterations

GGMM-TLECLO-TLEFAST-TLE

times104

(b)

4

42

44

46

48

5

52

54

56

Like

lihoo

d fu

nctio

n

10 20 30 40 500Iterations

GGMM-TLECLO-TLEFAST-TLE

times104

(c)

10 20 30 40 500Iterations

38

4

42

44

46

48

5

52

54

Like

lihoo

d fu

nctio

n

GGMM-TLECLO-TLEFAST-TLE

times104

(d)

Figure 3 Comparison of likelihood values for the different test images with trimming faction 02 (a) Test image ldquoBoatrdquo (b) test image ldquoCowrdquo(c) test image ldquoHouserdquo and (d) test image ldquoManrdquo

standard deviations of the DSC obtained from 20 executionsThe experiment results demonstrate that the accuracy wasmoderately improved compared with the other methods

To assess the robustness of the proposed GGMM-TLEat different levels of noise a set of real-world images fromthe Berkeley image dataset [28] was considered to comparethe performance of GMM [29] SMM [30] GΓMM [31]NSMM [12] and ACAP [8] The ground-truth informationwas freely obtained from the website [31] This was usedfor algorithm performance evaluation The experiment was

performed with noisy version images by adding Gaussiannoise (zero mean 001 variance) and Salt and Pepper noise(3) to the images as indicated in the first row of Figures 7and 8 The evaluated algorithms were initialized using the 119896-means algorithmThe number of label119870was set according tohuman visual inspection Figures 7 and 8 exhibit the resultsof image segmentation using the different methods Owingto the application of a mean filter we can observe that theperformance of ACAPwas superior to GMM SMM GΓMMand NSMM The results generated by the ACAP achieved

Mathematical Problems in Engineering 11

FAST-TLECLO-TLEGGMM-TLE

015 02 025 03 035 04 045 0501Trimming fraction

002

003

004

005

006

MCR

(a)

015 02 025 03 035 04 045 0501Trimming fraction

003

004

005

006

MCR

FAST-TLECLO-TLEGGMM-TLE

(b)

015 02 025 03 035 04 045 0501Trimming fraction

001

002

003

004

005

006

007

MCR

FAST-TLECLO-TLEGGMM-TLE

(c)

01 015 02 025 03 035 04 045 05001

002

003

004

005

006

007

Trimming fraction

MCR

FAST-TLECLO-TLEGGMM-TLE

(d)

Figure 4 Plot ofMCR of test images against different trimming fractions (a) Test image ldquoBoatrdquo (b) test image ldquoCowrdquo (c) test image ldquoHouserdquoand (d) test image ldquoManrdquo

Table 1 Average DSC of different methods with diverse trimming fractions Original MR image with 5 Salt and Pepper noise (mean plusmnstandard deviation)

Methods Trimming fraction (WM)01 02 03 04 05

FAST-TLE 07153 plusmn 04191 07269 plusmn 04785 07081 plusmn 02083 06771 plusmn 04762 06743 plusmn 05216CLO-TLE 08181 plusmn 02721 07968 plusmn 02814 07966 plusmn 03612 07748 plusmn 03264 07484 plusmn 03673GGMM-TLE 08994 plusmn 01023 08380 plusmn 01571 08177 plusmn 01238 08040 plusmn 00779 07803 plusmn 02317Methods Trimming fraction (GM)

01 02 03 04 05FAST-TLE 08367 plusmn 04020 08378 plusmn 04060 08313 plusmn 03553 08015 plusmn 03723 07829 plusmn 04062CLO-TLE 09107 plusmn 03841 08931 plusmn 02379 08944 plusmn 01540 08803 plusmn 02783 08593 plusmn 01761GGMM-TLE 09561 plusmn 01262 09227 plusmn 01040 09095 plusmn 01235 09002 plusmn 01156 08837 plusmn 01240

12 Mathematical Problems in Engineering

(a)

(b)

(c)

Figure 5 (a) to (c) display the segmentation results of MR image (slice 42 of IBSR2) using FAST-TLE CLO-TLE and GGMM-TLErespectively The trimming fractions from left to right are 01 02 03 04 and 05

FASTminusTLECLOminusTLEGGMMminusTLE

002

003

004

005

006

007

008

009

MCR

015 02 025 03 035 04 045 0501Trimming fraction

(a)

01

015

02

025

03

035

04

045

05

MCR

015 02 025 03 035 04 045 0501Trimming fraction

FAST-TLECLO-TLEGGMM-TLE

(b)

Figure 6 Trimming fractions versus classification accuracy under noise environment MCR of different methods for MR image (slice 42 ofIBSR2) (a) 5 Salt and Pepper noise and (b) 10 Salt and Pepper noise

Mathematical Problems in Engineering 13

Figure 7 Segmentation performance comparison for real-world images in the Gaussian noise environment (zero mean variance 001) Fromthe first row to the last noisy image GMM SMM GΓMM NSMM ACAP and GGMM-TLE respectively From the first column to the lastcolumn test image 24063 8068 241004 55067 and 35010 (Berkeley Dataset)

similar results to GGMM-TLE however its performance wasimpaired when there was an abundance of rich details forexample in test image 241004 (the sixth row of Figure 7)TheGGMM-TLE provided a moderately improved performanceunder different noisy conditions and eliminated the influenceof widely spread noise data This characteristic is endemicto MRF and trimmed likelihood estimator The resultingDSC is reported in Tables 2 and 3 providing a quantitativecomparison among the algorithms The DSC and standarddeviation indicate that the proposed method outperformedthe other methods by preserving the highest DSC

To further demonstrate the goodness of GGMM-TLEagainst different noise in Figure 9 we display the meanvalues and standard deviations of the MCR obtained fromtwenty runs on two Berkeley test images (24063 and 35010)under different noise environments Considering the MCRon average the ACAP effectively eliminated the effects ofnoise during the segmentation processing and demonstratedacceptable segmentation results We determined that classi-cal GMM SMM and GΓMM were severely influenced byGaussian noise and could not accurately separate a regionfrom the background In the majority of cases the NSMM

14 Mathematical Problems in Engineering

Figure 8 Segmentation performance comparison for real-world images in the Salt and Pepper noise environment (3) From the first rowto the last noisy image GMM SMM GΓMM NSMM ACAP and GGMM-TLE respectively From the first column to the last column testimage 24063 8068 241004 55067 and 35010 (Berkeley Dataset)

approach was superior to SMM and GMM yet continuedto be influenced by varying degrees of Salt and Pepperand Gaussian noise As expected compared to the otheralgorithms the GGMM-TLEwas stable and achieved the bestsegmentation results according to the quantitative criterion

7 Concluding Remarks

In this paper a robust estimation of the proposed GGMM-TLE using a trimmed likelihood estimator for real-worldimage segmentation was proposed GGMM-TLE with MRF

implements a mixture of generalized Gamma and Gaussiandistributions

The main contribution of this paper is the presentationof an asymmetric finite model GGMM-TLE based on MRFWith this model we have high flexibility to fit differentshapes of observed data Further this study discussed theproperty of identifiability of the proposed mixture modelguaranteeing that the estimation procedure for the parame-ters was correctly developed Then to ensure that GGMM-TLE was robust against heavy outliers the paper offeredan effective method to discard the outliers in advance and

Mathematical Problems in Engineering 15

Trimming fraction 02

2 5 8 10Salt amp Pepper noise ()

0

002

004

006

008

01

012

014

016

018

02

MCR

GMMSMM

NSMMACAPGGMM-TLEGΓMM

(a)

001 004 006 008Gaussian noise (zero mean variance)

Trimming fraction 02

GMMSMM

NSMMACAPGGMM-TLE

0

002

004

006

008

01

012

014

016

018

02

MCR

GΓMM(b)

2 5 8 10Salt amp Pepper noise ()

Trimming fraction 02

GMMSMM

NSMMACAPGGMM-TLE

0

002

004

006

008

01

012

014

016

018

02

MCR

GΓMM(c)

001 004 006 008Gaussian noise (zero mean variance)

Trimming fraction 02

0

002

004

006

008

01

012

014

016

018

02

MCR

GMMSMMGΓMM

NSMMACAPGGMM-TLE

(d)

Figure 9 Average MCR for different methods on two test images under noisy environment (Berkeley Dataset) ((a) and (b)) Test image24063 ((c) and (d)) test image 35010

therefore GGMM-TLE demonstrated superior performanceunder modelling with samples contaminated with unknownoutliers Finally combined with MRF GGMM-TLE consid-ered the spatial relationship between neighbouring pixels anddemonstrated a stronger ability to resist different noise Thesegmentation results of synthetic data and real-world imagesconfirmed that the proposed method demonstrated superiorcompetitivenessThemain limitation of this algorithm is that

the segmentation task requires component-based confidencelevel ordering which increases the computational cost

As a future work one direction is to obtain other finitemixture models by testing different probability density func-tions Another possible direction is to extend the presentedmethod to a higher dimension in a straightforward mannersuch as fMRI time-series clustering We plan to address thesetopics in a separate paper

16 Mathematical Problems in Engineering

Table 2AverageDSCof differentmethods for Berkeley test imageswith zeromeanGaussian noise variance 001 (meanplusmn standard deviation)Images Label GMM SMM GΓMM24063 3 086005 plusmn 032173 089173 plusmn 020353 088071 plusmn 0204728068 3 085737 plusmn 028634 087439 plusmn 027448 086005 plusmn 031348241004 4 085388 plusmn 032745 089018 plusmn 024834 087803 plusmn 02089555067 4 087931 plusmn 021733 088342 plusmn 021631 088013 plusmn 03055635010 4 084904 plusmn 033850 088463 plusmn 023632 087991 plusmn 018584Images Label NSMM ACAP GGMM-TLE24063 3 091825 plusmn 015734 095004 plusmn 014848 097122 plusmn 0148918068 3 089173 plusmn 028042 094485 plusmn 016055 096108 plusmn 010723241004 4 092006 plusmn 021634 094670 plusmn 013114 097061 plusmn 01482655067 4 092601 plusmn 017203 096893 plusmn 012051 097175 plusmn 01387635010 4 092183 plusmn 020847 095380 plusmn 013954 096219 plusmn 015664

Table 3 Average DSC of different methods for Berkeley test images with 3 Salt and Pepper noise (mean plusmn standard deviation)

Images Label GMM SMM GΓMM24063 3 087126 plusmn 030566 090972 plusmn 023385 088482 plusmn 0188088068 3 085234 plusmn 032531 088082 plusmn 030318 087627 plusmn 033601241004 4 087294 plusmn 035061 088385 plusmn 030458 089714 plusmn 02505455067 4 088099 plusmn 032392 090400 plusmn 025767 088517 plusmn 02109235010 4 085337 plusmn 025011 089075 plusmn 021839 087630 plusmn 016432Images Label NSMM ACAP GGMM-TLE24063 3 092079 plusmn 016049 096103 plusmn 023214 098394 plusmn 0075148068 3 090213 plusmn 025321 095329 plusmn 011539 097328 plusmn 011482241004 4 093487 plusmn 016806 096271 plusmn 012378 097117 plusmn 01368355067 4 093247 plusmn 018579 098076 plusmn 010690 098587 plusmn 00665535010 4 093752 plusmn 017804 096053 plusmn 010842 097420 plusmn 012722Conflicts of Interest

The authors declare that they have no conflicts of interest

Acknowledgments

This work was supported by the National Natural ScienceFoundation of China under Grant no 61371150

References

[1] A Saglam and N A Baykan ldquoSequential image segmentationbased on minimum spanning tree representationrdquo PatternRecognition Letters vol 87 pp 155ndash162 2017

[2] S Yin Y Qian andMGong ldquoUnsupervised hierarchical imagesegmentation through fuzzy entropy maximizationrdquo PatternRecognition vol 68 pp 245ndash259 2017

[3] H-C Li V A Krylov P-Z Fan J Zerubia and W J EmeryldquoUnsupervised learning of generalized gamma mixture modelwith application in statistical modeling of high-resolution SARimagesrdquo IEEE Transactions on Geoscience and Remote Sensingvol 54 no 4 pp 2153ndash2170 2016

[4] A Roy A Pal and U Garain ldquoJCLMM A finite mixturemodel for clustering of circular-linear data and its applicationto psoriatic plaque segmentationrdquo Pattern Recognition vol 66pp 160ndash173 2017

[5] Y Bar-Yosef and Y Bistritz ldquoGaussian mixture models reduc-tion by variational maximummutual informationrdquo IEEE Trans-actions on Signal Processing vol 63 no 6 pp 1557ndash1569 2015

[6] R Zhang D H Ye D Pal J-B Thibault K D Sauer and C Bouman ldquoA Gaussian mixture MRF for model-based iterativereconstruction with applications to low-dose X-ray CTrdquo IEEETransactions on Computational Imaging vol 2 no 3 pp 359ndash374 2016

[7] H Z Yerebakan and M Dundar ldquoPartially collapsed parallelGibbs sampler for Dirichlet process mixture modelsrdquo PatternRecognition Letters vol 90 pp 22ndash27 2017

[8] H Zhang QM JWu and TM Nguyen ldquoIncorporatingmeantemplate into finite mixture model for image segmentationrdquoIEEE Transactions on Neural Networks and Learning Systemsvol 24 no 2 pp 328ndash335 2013

[9] A Matza and Y Bistritz ldquoSkew Gaussian mixture models forspeaker recognitionrdquo IET Signal Processing vol 8 no 8 pp860ndash867 2014

[10] T-T Van Cao ldquoModelling of inhomogeneity in radar clutterusing weibull mixture densitiesrdquo IET Radar Sonar amp Naviga-tion vol 8 no 3 pp 180ndash194 2014

[11] Q Peng and L Zhao ldquoSAR image filtering based on the Cauchy-Rayleigh mixture modelrdquo IEEE Geoscience and Remote SensingLetters vol 11 no 5 pp 960ndash966 2014

[12] TMNguyen andQM JWu ldquoA nonsymmetricmixturemodelfor unsupervised image segmentationrdquo IEEE Transactions onCybernetics vol 43 no 2 pp 751ndash765 2013

Mathematical Problems in Engineering 17

[13] T M Nguyen Q M J Wu D Mukherjee and H ZhangldquoA bayesian bounded asymmetric mixture model with seg-mentation applicationrdquo IEEE Journal of Biomedical and HealthInformatics vol 18 no 1 pp 109ndash119 2014

[14] T M Nguyen and Q M J Wu ldquoBounded asymmetricalstudentrsquos-t mixture modelrdquo IEEE Transactions on Cyberneticsvol 44 no 6 pp 857ndash869 2014

[15] X Zhou R Peng and C Wang ldquoA two-component K-Lognormal mixture model and its parameter estimationmethodrdquo IEEE Transactions on Geoscience and Remote Sensingvol 53 no 5 pp 2640ndash2651 2015

[16] A De Angelis G De Angelis and P Carbone ldquoUsing Gaussian-Uniform Mixture Models for Robust Time-Interval Measure-mentrdquo IEEE Transactions on Instrumentation andMeasurementvol 64 no 12 pp 3545ndash3554 2015

[17] R P Browne P D McNicholas and M D Sparling ldquoModel-based learning using a mixture of mixtures of gaussian anduniform distributionsrdquo IEEE Transactions on Pattern Analysisand Machine Intelligence vol 34 no 4 pp 814ndash817 2012

[18] J Sun A Zhou S Keates and S Liao ldquoSimultaneous BayesianClustering and Feature Selection Through Studentrsquos t MixturesModelrdquo IEEE Transactions on Neural Networks and LearningSystems 2017

[19] N Neykov P Filzmoser R Dimova and P Neytchev ldquoRobustfitting of mixtures using the trimmed likelihood estimatorrdquoComputational Statistics amp Data Analysis vol 52 no 1 pp 299ndash308 2007

[20] C H Muller and N Neykov ldquoBreakdown points of trimmedlikelihood estimators and related estimators in generalizedlinear modelsrdquo Journal of Statistical Planning and Inference vol116 no 2 pp 503ndash519 2003

[21] A Galimzianova F Pernus B Likar and Z Spiclin ldquoRobustestimation of unbalanced mixture models on samples withoutliersrdquo IEEE Transactions on Pattern Analysis and MachineIntelligence vol 37 no 11 pp 2273ndash2285 2015

[22] N Atienza J Garcia-Heras and J M Munoz-Pichardo ldquoAnew condition for identifiability of finite mixture distributionsrdquoMetrika vol 63 no 2 pp 215ndash221 2006

[23] W R Press S A TeukolskyW T Vetterling and B P FlanneryNumerical Recipes in C The Art of Scientific Computing Cam-bridge University Press Cambridge UK 2002

[24] Y Zhang M Brady and S Smith ldquoSegmentation of brain MRimages through a hidden Markov random field model andthe expectation-maximization algorithmrdquo IEEE Transactionson Medical Imaging vol 20 no 1 pp 45ndash57 2001

[25] LHan J HHipwell B Eiben et al ldquoA nonlinear biomechanicalmodel based registration method for aligning prone and supinemr breast imagesrdquo IEEE Transactions on Medical Imaging vol33 no 3 pp 682ndash694 2014

[26] B Hariharan P Arbelaez L Bourdev S Maji and J MalikldquoSemantic contours from inverse detectorsrdquo in Proceedings ofthe 2011 IEEE International Conference on Computer VisionICCV 2011 pp 991ndash998 Spain November 2011

[27] IBSR [Online] Available httpwwwnitrcorgprojectsibsr[28] S Meignen and H Meignen ldquoOn the modeling of small sample

distributions with generalized Gaussian density in a maximumlikelihood frameworkrdquo IEEE Transactions on Image Processingvol 15 no 6 pp 1647ndash1652 2006

[29] G McLachlan and D Peel Finite Mixture Models JohnWiley ampSons New York NY USA 2000

[30] D Peel and G J McLachlan ldquoRobust mixture modelling usingthe t distributionrdquo Statistics and Computing vol 10 no 4 pp339ndash348 2000

[31] T Elguebaly and N Bouguila ldquoBayesian learning of finite gen-eralized Gaussianmixturemodels on imagesrdquo Signal Processingvol 91 no 4 pp 801ndash820 2011

Hindawiwwwhindawicom Volume 2018

MathematicsJournal of

Hindawiwwwhindawicom Volume 2018

Mathematical Problems in Engineering

Applied MathematicsJournal of

Hindawiwwwhindawicom Volume 2018

Probability and StatisticsHindawiwwwhindawicom Volume 2018

Journal of

Hindawiwwwhindawicom Volume 2018

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawiwwwhindawicom Volume 2018

OptimizationJournal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Engineering Mathematics

International Journal of

Hindawiwwwhindawicom Volume 2018

Operations ResearchAdvances in

Journal of

Hindawiwwwhindawicom Volume 2018

Function SpacesAbstract and Applied AnalysisHindawiwwwhindawicom Volume 2018

International Journal of Mathematics and Mathematical Sciences

Hindawiwwwhindawicom Volume 2018

Hindawi Publishing Corporation httpwwwhindawicom Volume 2013Hindawiwwwhindawicom

The Scientific World Journal

Volume 2018

Hindawiwwwhindawicom Volume 2018Volume 2018

Numerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisAdvances inAdvances in Discrete Dynamics in

Nature and SocietyHindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom

Dierential EquationsInternational Journal of

Volume 2018

Hindawiwwwhindawicom Volume 2018

Decision SciencesAdvances in

Hindawiwwwhindawicom Volume 2018

AnalysisInternational Journal of

Hindawiwwwhindawicom Volume 2018

Stochastic AnalysisInternational Journal of

Submit your manuscripts atwwwhindawicom

Page 8: Image Segmentation Using a Trimmed Likelihood Estimator in ... · ResearchArticle Image Segmentation Using a Trimmed Likelihood Estimator in the Asymmetric Mixture Model Based on

8 Mathematical Problems in Engineering

minus10 1086420minus2minus4minus6minus8minus10

minus8

minus6

minus4

minus2

0

2

4

6

8

10

= 01

= 03

= 05

(a)

minus10 1086420minus2minus4minus6minus8minus10

minus8

minus6

minus4

minus2

0

2

4

6

8

10

= 01

= 03

= 05

(b)

minus10 1086420minus2minus4minus6minus8minus10

minus8

minus6

minus4

minus2

0

2

4

6

8

10

= 01

= 03

= 05

(c)

minus10 1086420minus2minus4minus6minus8minus10

minus8

minus6

minus4

minus2

0

2

4

6

8

10

= 01

= 03

= 05

(d)

Figure 1 Noisy synthetic data clustering using the considered methods (a) Classical GMM (b) FAST-TLE method (c) CLO-TLE methodand (d) GGMM-TLE method

to that of the proposed method However a closer inspectionof the iteration ranges [5 15] indicates that the GGMM-TLE method can moderately improve the convergence rateWhen the iterations is low (119905 le 5) the convergence ratewith GGMM-TLE is the biggest one In the general casethe GGMM-TLE method converges after five iterations InFigure 4 the MCR plots of each test image against differenttrimming fractions are displayed This figure implies that theproposed scheme achieved superior segmentation accuracyas the basic scheme because the MCR of the GGMM-TLEmethod was the least of all test images

The proposed algorithm was also assessed on a clinicalMR image to label the white mass (WM) and grey mass(GM) For this purpose a real MR image slice 42 of IBSR2from the IBSR dataset [27] was randomly selected to evaluatethe performance of the proposed GGMM-TLE against FAST-TLE and CLO-TLE Salt and Pepper noise with intensity5 and 10 was considered in our experiment Figure 5presents the performance of these methods under 5 Saltand Pepper noise and different trimming fractions It isclear that FAST-TLE did not demonstrate improved resultsfor heavier outliers in the segmentation task The CLO-TLE

Mathematical Problems in Engineering 9

(a) (b) (c) (d)

Figure 2 Segmentation results obtained by different methods for four test images (trimming fractions 02) (a) Original image with 5 Saltand Pepper noise The figures from (b) to (d) show FAST-TLE CLO-TLE and GGMM-TLE methods respectively

tended to achieve superior performance with an increase ofthe trimming fraction and could maintain its stability andeffectiveness A closer inspection of Figure 5 indicates that thesegmentation accuracy of GGMM-TLE was visually higherthan the other methods It is due to the fact that the proposedGGMM-TLE utilizes the advantages of confidence level forthese observations so that the effects of mixture weights andsample scales are eliminatedTherefore with the increasing ofthe trimming fractions GGMM-TLE exhibits better stabilityto outliers than CLO-TLE As shown in Figure 5 this is

especially apparent in GGMM-TLE with high trimmingfractions Figure 6 displays the evaluation results using theMCR metric It can be observed that the GGMM-TLE hadthe lowest MCR value thus its segmentation results weresuperior to those of FAST-TLE and CLO-TLE

Further we executed these algorithms 20 times each timewith different initializationThen we computed the (average)performance in terms of the number of correctly classifieddata points and the DSC for this MR image including whitematter and grey matter Table 1 lists the mean values and the

10 Mathematical Problems in Engineering

GGMM-TLECLO-TLEFAST-TLE

times104

54

55

56

57

58

59

6Li

kelih

ood

func

tion

10 20 30 40 500Iterations

(a)

46

48

5

52

54

56

58

Like

lihoo

d fu

nctio

n

10 20 30 40 500Iterations

GGMM-TLECLO-TLEFAST-TLE

times104

(b)

4

42

44

46

48

5

52

54

56

Like

lihoo

d fu

nctio

n

10 20 30 40 500Iterations

GGMM-TLECLO-TLEFAST-TLE

times104

(c)

10 20 30 40 500Iterations

38

4

42

44

46

48

5

52

54

Like

lihoo

d fu

nctio

n

GGMM-TLECLO-TLEFAST-TLE

times104

(d)

Figure 3 Comparison of likelihood values for the different test images with trimming faction 02 (a) Test image ldquoBoatrdquo (b) test image ldquoCowrdquo(c) test image ldquoHouserdquo and (d) test image ldquoManrdquo

standard deviations of the DSC obtained from 20 executionsThe experiment results demonstrate that the accuracy wasmoderately improved compared with the other methods

To assess the robustness of the proposed GGMM-TLEat different levels of noise a set of real-world images fromthe Berkeley image dataset [28] was considered to comparethe performance of GMM [29] SMM [30] GΓMM [31]NSMM [12] and ACAP [8] The ground-truth informationwas freely obtained from the website [31] This was usedfor algorithm performance evaluation The experiment was

performed with noisy version images by adding Gaussiannoise (zero mean 001 variance) and Salt and Pepper noise(3) to the images as indicated in the first row of Figures 7and 8 The evaluated algorithms were initialized using the 119896-means algorithmThe number of label119870was set according tohuman visual inspection Figures 7 and 8 exhibit the resultsof image segmentation using the different methods Owingto the application of a mean filter we can observe that theperformance of ACAPwas superior to GMM SMM GΓMMand NSMM The results generated by the ACAP achieved

Mathematical Problems in Engineering 11

FAST-TLECLO-TLEGGMM-TLE

015 02 025 03 035 04 045 0501Trimming fraction

002

003

004

005

006

MCR

(a)

015 02 025 03 035 04 045 0501Trimming fraction

003

004

005

006

MCR

FAST-TLECLO-TLEGGMM-TLE

(b)

015 02 025 03 035 04 045 0501Trimming fraction

001

002

003

004

005

006

007

MCR

FAST-TLECLO-TLEGGMM-TLE

(c)

01 015 02 025 03 035 04 045 05001

002

003

004

005

006

007

Trimming fraction

MCR

FAST-TLECLO-TLEGGMM-TLE

(d)

Figure 4 Plot ofMCR of test images against different trimming fractions (a) Test image ldquoBoatrdquo (b) test image ldquoCowrdquo (c) test image ldquoHouserdquoand (d) test image ldquoManrdquo

Table 1 Average DSC of different methods with diverse trimming fractions Original MR image with 5 Salt and Pepper noise (mean plusmnstandard deviation)

Methods Trimming fraction (WM)01 02 03 04 05

FAST-TLE 07153 plusmn 04191 07269 plusmn 04785 07081 plusmn 02083 06771 plusmn 04762 06743 plusmn 05216CLO-TLE 08181 plusmn 02721 07968 plusmn 02814 07966 plusmn 03612 07748 plusmn 03264 07484 plusmn 03673GGMM-TLE 08994 plusmn 01023 08380 plusmn 01571 08177 plusmn 01238 08040 plusmn 00779 07803 plusmn 02317Methods Trimming fraction (GM)

01 02 03 04 05FAST-TLE 08367 plusmn 04020 08378 plusmn 04060 08313 plusmn 03553 08015 plusmn 03723 07829 plusmn 04062CLO-TLE 09107 plusmn 03841 08931 plusmn 02379 08944 plusmn 01540 08803 plusmn 02783 08593 plusmn 01761GGMM-TLE 09561 plusmn 01262 09227 plusmn 01040 09095 plusmn 01235 09002 plusmn 01156 08837 plusmn 01240

12 Mathematical Problems in Engineering

(a)

(b)

(c)

Figure 5 (a) to (c) display the segmentation results of MR image (slice 42 of IBSR2) using FAST-TLE CLO-TLE and GGMM-TLErespectively The trimming fractions from left to right are 01 02 03 04 and 05

FASTminusTLECLOminusTLEGGMMminusTLE

002

003

004

005

006

007

008

009

MCR

015 02 025 03 035 04 045 0501Trimming fraction

(a)

01

015

02

025

03

035

04

045

05

MCR

015 02 025 03 035 04 045 0501Trimming fraction

FAST-TLECLO-TLEGGMM-TLE

(b)

Figure 6 Trimming fractions versus classification accuracy under noise environment MCR of different methods for MR image (slice 42 ofIBSR2) (a) 5 Salt and Pepper noise and (b) 10 Salt and Pepper noise

Mathematical Problems in Engineering 13

Figure 7 Segmentation performance comparison for real-world images in the Gaussian noise environment (zero mean variance 001) Fromthe first row to the last noisy image GMM SMM GΓMM NSMM ACAP and GGMM-TLE respectively From the first column to the lastcolumn test image 24063 8068 241004 55067 and 35010 (Berkeley Dataset)

similar results to GGMM-TLE however its performance wasimpaired when there was an abundance of rich details forexample in test image 241004 (the sixth row of Figure 7)TheGGMM-TLE provided a moderately improved performanceunder different noisy conditions and eliminated the influenceof widely spread noise data This characteristic is endemicto MRF and trimmed likelihood estimator The resultingDSC is reported in Tables 2 and 3 providing a quantitativecomparison among the algorithms The DSC and standarddeviation indicate that the proposed method outperformedthe other methods by preserving the highest DSC

To further demonstrate the goodness of GGMM-TLEagainst different noise in Figure 9 we display the meanvalues and standard deviations of the MCR obtained fromtwenty runs on two Berkeley test images (24063 and 35010)under different noise environments Considering the MCRon average the ACAP effectively eliminated the effects ofnoise during the segmentation processing and demonstratedacceptable segmentation results We determined that classi-cal GMM SMM and GΓMM were severely influenced byGaussian noise and could not accurately separate a regionfrom the background In the majority of cases the NSMM

14 Mathematical Problems in Engineering

Figure 8 Segmentation performance comparison for real-world images in the Salt and Pepper noise environment (3) From the first rowto the last noisy image GMM SMM GΓMM NSMM ACAP and GGMM-TLE respectively From the first column to the last column testimage 24063 8068 241004 55067 and 35010 (Berkeley Dataset)

approach was superior to SMM and GMM yet continuedto be influenced by varying degrees of Salt and Pepperand Gaussian noise As expected compared to the otheralgorithms the GGMM-TLEwas stable and achieved the bestsegmentation results according to the quantitative criterion

7 Concluding Remarks

In this paper a robust estimation of the proposed GGMM-TLE using a trimmed likelihood estimator for real-worldimage segmentation was proposed GGMM-TLE with MRF

implements a mixture of generalized Gamma and Gaussiandistributions

The main contribution of this paper is the presentationof an asymmetric finite model GGMM-TLE based on MRFWith this model we have high flexibility to fit differentshapes of observed data Further this study discussed theproperty of identifiability of the proposed mixture modelguaranteeing that the estimation procedure for the parame-ters was correctly developed Then to ensure that GGMM-TLE was robust against heavy outliers the paper offeredan effective method to discard the outliers in advance and

Mathematical Problems in Engineering 15

Trimming fraction 02

2 5 8 10Salt amp Pepper noise ()

0

002

004

006

008

01

012

014

016

018

02

MCR

GMMSMM

NSMMACAPGGMM-TLEGΓMM

(a)

001 004 006 008Gaussian noise (zero mean variance)

Trimming fraction 02

GMMSMM

NSMMACAPGGMM-TLE

0

002

004

006

008

01

012

014

016

018

02

MCR

GΓMM(b)

2 5 8 10Salt amp Pepper noise ()

Trimming fraction 02

GMMSMM

NSMMACAPGGMM-TLE

0

002

004

006

008

01

012

014

016

018

02

MCR

GΓMM(c)

001 004 006 008Gaussian noise (zero mean variance)

Trimming fraction 02

0

002

004

006

008

01

012

014

016

018

02

MCR

GMMSMMGΓMM

NSMMACAPGGMM-TLE

(d)

Figure 9 Average MCR for different methods on two test images under noisy environment (Berkeley Dataset) ((a) and (b)) Test image24063 ((c) and (d)) test image 35010

therefore GGMM-TLE demonstrated superior performanceunder modelling with samples contaminated with unknownoutliers Finally combined with MRF GGMM-TLE consid-ered the spatial relationship between neighbouring pixels anddemonstrated a stronger ability to resist different noise Thesegmentation results of synthetic data and real-world imagesconfirmed that the proposed method demonstrated superiorcompetitivenessThemain limitation of this algorithm is that

the segmentation task requires component-based confidencelevel ordering which increases the computational cost

As a future work one direction is to obtain other finitemixture models by testing different probability density func-tions Another possible direction is to extend the presentedmethod to a higher dimension in a straightforward mannersuch as fMRI time-series clustering We plan to address thesetopics in a separate paper

16 Mathematical Problems in Engineering

Table 2AverageDSCof differentmethods for Berkeley test imageswith zeromeanGaussian noise variance 001 (meanplusmn standard deviation)Images Label GMM SMM GΓMM24063 3 086005 plusmn 032173 089173 plusmn 020353 088071 plusmn 0204728068 3 085737 plusmn 028634 087439 plusmn 027448 086005 plusmn 031348241004 4 085388 plusmn 032745 089018 plusmn 024834 087803 plusmn 02089555067 4 087931 plusmn 021733 088342 plusmn 021631 088013 plusmn 03055635010 4 084904 plusmn 033850 088463 plusmn 023632 087991 plusmn 018584Images Label NSMM ACAP GGMM-TLE24063 3 091825 plusmn 015734 095004 plusmn 014848 097122 plusmn 0148918068 3 089173 plusmn 028042 094485 plusmn 016055 096108 plusmn 010723241004 4 092006 plusmn 021634 094670 plusmn 013114 097061 plusmn 01482655067 4 092601 plusmn 017203 096893 plusmn 012051 097175 plusmn 01387635010 4 092183 plusmn 020847 095380 plusmn 013954 096219 plusmn 015664

Table 3 Average DSC of different methods for Berkeley test images with 3 Salt and Pepper noise (mean plusmn standard deviation)

Images Label GMM SMM GΓMM24063 3 087126 plusmn 030566 090972 plusmn 023385 088482 plusmn 0188088068 3 085234 plusmn 032531 088082 plusmn 030318 087627 plusmn 033601241004 4 087294 plusmn 035061 088385 plusmn 030458 089714 plusmn 02505455067 4 088099 plusmn 032392 090400 plusmn 025767 088517 plusmn 02109235010 4 085337 plusmn 025011 089075 plusmn 021839 087630 plusmn 016432Images Label NSMM ACAP GGMM-TLE24063 3 092079 plusmn 016049 096103 plusmn 023214 098394 plusmn 0075148068 3 090213 plusmn 025321 095329 plusmn 011539 097328 plusmn 011482241004 4 093487 plusmn 016806 096271 plusmn 012378 097117 plusmn 01368355067 4 093247 plusmn 018579 098076 plusmn 010690 098587 plusmn 00665535010 4 093752 plusmn 017804 096053 plusmn 010842 097420 plusmn 012722Conflicts of Interest

The authors declare that they have no conflicts of interest

Acknowledgments

This work was supported by the National Natural ScienceFoundation of China under Grant no 61371150

References

[1] A Saglam and N A Baykan ldquoSequential image segmentationbased on minimum spanning tree representationrdquo PatternRecognition Letters vol 87 pp 155ndash162 2017

[2] S Yin Y Qian andMGong ldquoUnsupervised hierarchical imagesegmentation through fuzzy entropy maximizationrdquo PatternRecognition vol 68 pp 245ndash259 2017

[3] H-C Li V A Krylov P-Z Fan J Zerubia and W J EmeryldquoUnsupervised learning of generalized gamma mixture modelwith application in statistical modeling of high-resolution SARimagesrdquo IEEE Transactions on Geoscience and Remote Sensingvol 54 no 4 pp 2153ndash2170 2016

[4] A Roy A Pal and U Garain ldquoJCLMM A finite mixturemodel for clustering of circular-linear data and its applicationto psoriatic plaque segmentationrdquo Pattern Recognition vol 66pp 160ndash173 2017

[5] Y Bar-Yosef and Y Bistritz ldquoGaussian mixture models reduc-tion by variational maximummutual informationrdquo IEEE Trans-actions on Signal Processing vol 63 no 6 pp 1557ndash1569 2015

[6] R Zhang D H Ye D Pal J-B Thibault K D Sauer and C Bouman ldquoA Gaussian mixture MRF for model-based iterativereconstruction with applications to low-dose X-ray CTrdquo IEEETransactions on Computational Imaging vol 2 no 3 pp 359ndash374 2016

[7] H Z Yerebakan and M Dundar ldquoPartially collapsed parallelGibbs sampler for Dirichlet process mixture modelsrdquo PatternRecognition Letters vol 90 pp 22ndash27 2017

[8] H Zhang QM JWu and TM Nguyen ldquoIncorporatingmeantemplate into finite mixture model for image segmentationrdquoIEEE Transactions on Neural Networks and Learning Systemsvol 24 no 2 pp 328ndash335 2013

[9] A Matza and Y Bistritz ldquoSkew Gaussian mixture models forspeaker recognitionrdquo IET Signal Processing vol 8 no 8 pp860ndash867 2014

[10] T-T Van Cao ldquoModelling of inhomogeneity in radar clutterusing weibull mixture densitiesrdquo IET Radar Sonar amp Naviga-tion vol 8 no 3 pp 180ndash194 2014

[11] Q Peng and L Zhao ldquoSAR image filtering based on the Cauchy-Rayleigh mixture modelrdquo IEEE Geoscience and Remote SensingLetters vol 11 no 5 pp 960ndash966 2014

[12] TMNguyen andQM JWu ldquoA nonsymmetricmixturemodelfor unsupervised image segmentationrdquo IEEE Transactions onCybernetics vol 43 no 2 pp 751ndash765 2013

Mathematical Problems in Engineering 17

[13] T M Nguyen Q M J Wu D Mukherjee and H ZhangldquoA bayesian bounded asymmetric mixture model with seg-mentation applicationrdquo IEEE Journal of Biomedical and HealthInformatics vol 18 no 1 pp 109ndash119 2014

[14] T M Nguyen and Q M J Wu ldquoBounded asymmetricalstudentrsquos-t mixture modelrdquo IEEE Transactions on Cyberneticsvol 44 no 6 pp 857ndash869 2014

[15] X Zhou R Peng and C Wang ldquoA two-component K-Lognormal mixture model and its parameter estimationmethodrdquo IEEE Transactions on Geoscience and Remote Sensingvol 53 no 5 pp 2640ndash2651 2015

[16] A De Angelis G De Angelis and P Carbone ldquoUsing Gaussian-Uniform Mixture Models for Robust Time-Interval Measure-mentrdquo IEEE Transactions on Instrumentation andMeasurementvol 64 no 12 pp 3545ndash3554 2015

[17] R P Browne P D McNicholas and M D Sparling ldquoModel-based learning using a mixture of mixtures of gaussian anduniform distributionsrdquo IEEE Transactions on Pattern Analysisand Machine Intelligence vol 34 no 4 pp 814ndash817 2012

[18] J Sun A Zhou S Keates and S Liao ldquoSimultaneous BayesianClustering and Feature Selection Through Studentrsquos t MixturesModelrdquo IEEE Transactions on Neural Networks and LearningSystems 2017

[19] N Neykov P Filzmoser R Dimova and P Neytchev ldquoRobustfitting of mixtures using the trimmed likelihood estimatorrdquoComputational Statistics amp Data Analysis vol 52 no 1 pp 299ndash308 2007

[20] C H Muller and N Neykov ldquoBreakdown points of trimmedlikelihood estimators and related estimators in generalizedlinear modelsrdquo Journal of Statistical Planning and Inference vol116 no 2 pp 503ndash519 2003

[21] A Galimzianova F Pernus B Likar and Z Spiclin ldquoRobustestimation of unbalanced mixture models on samples withoutliersrdquo IEEE Transactions on Pattern Analysis and MachineIntelligence vol 37 no 11 pp 2273ndash2285 2015

[22] N Atienza J Garcia-Heras and J M Munoz-Pichardo ldquoAnew condition for identifiability of finite mixture distributionsrdquoMetrika vol 63 no 2 pp 215ndash221 2006

[23] W R Press S A TeukolskyW T Vetterling and B P FlanneryNumerical Recipes in C The Art of Scientific Computing Cam-bridge University Press Cambridge UK 2002

[24] Y Zhang M Brady and S Smith ldquoSegmentation of brain MRimages through a hidden Markov random field model andthe expectation-maximization algorithmrdquo IEEE Transactionson Medical Imaging vol 20 no 1 pp 45ndash57 2001

[25] LHan J HHipwell B Eiben et al ldquoA nonlinear biomechanicalmodel based registration method for aligning prone and supinemr breast imagesrdquo IEEE Transactions on Medical Imaging vol33 no 3 pp 682ndash694 2014

[26] B Hariharan P Arbelaez L Bourdev S Maji and J MalikldquoSemantic contours from inverse detectorsrdquo in Proceedings ofthe 2011 IEEE International Conference on Computer VisionICCV 2011 pp 991ndash998 Spain November 2011

[27] IBSR [Online] Available httpwwwnitrcorgprojectsibsr[28] S Meignen and H Meignen ldquoOn the modeling of small sample

distributions with generalized Gaussian density in a maximumlikelihood frameworkrdquo IEEE Transactions on Image Processingvol 15 no 6 pp 1647ndash1652 2006

[29] G McLachlan and D Peel Finite Mixture Models JohnWiley ampSons New York NY USA 2000

[30] D Peel and G J McLachlan ldquoRobust mixture modelling usingthe t distributionrdquo Statistics and Computing vol 10 no 4 pp339ndash348 2000

[31] T Elguebaly and N Bouguila ldquoBayesian learning of finite gen-eralized Gaussianmixturemodels on imagesrdquo Signal Processingvol 91 no 4 pp 801ndash820 2011

Hindawiwwwhindawicom Volume 2018

MathematicsJournal of

Hindawiwwwhindawicom Volume 2018

Mathematical Problems in Engineering

Applied MathematicsJournal of

Hindawiwwwhindawicom Volume 2018

Probability and StatisticsHindawiwwwhindawicom Volume 2018

Journal of

Hindawiwwwhindawicom Volume 2018

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawiwwwhindawicom Volume 2018

OptimizationJournal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Engineering Mathematics

International Journal of

Hindawiwwwhindawicom Volume 2018

Operations ResearchAdvances in

Journal of

Hindawiwwwhindawicom Volume 2018

Function SpacesAbstract and Applied AnalysisHindawiwwwhindawicom Volume 2018

International Journal of Mathematics and Mathematical Sciences

Hindawiwwwhindawicom Volume 2018

Hindawi Publishing Corporation httpwwwhindawicom Volume 2013Hindawiwwwhindawicom

The Scientific World Journal

Volume 2018

Hindawiwwwhindawicom Volume 2018Volume 2018

Numerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisAdvances inAdvances in Discrete Dynamics in

Nature and SocietyHindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom

Dierential EquationsInternational Journal of

Volume 2018

Hindawiwwwhindawicom Volume 2018

Decision SciencesAdvances in

Hindawiwwwhindawicom Volume 2018

AnalysisInternational Journal of

Hindawiwwwhindawicom Volume 2018

Stochastic AnalysisInternational Journal of

Submit your manuscripts atwwwhindawicom

Page 9: Image Segmentation Using a Trimmed Likelihood Estimator in ... · ResearchArticle Image Segmentation Using a Trimmed Likelihood Estimator in the Asymmetric Mixture Model Based on

Mathematical Problems in Engineering 9

(a) (b) (c) (d)

Figure 2 Segmentation results obtained by different methods for four test images (trimming fractions 02) (a) Original image with 5 Saltand Pepper noise The figures from (b) to (d) show FAST-TLE CLO-TLE and GGMM-TLE methods respectively

tended to achieve superior performance with an increase ofthe trimming fraction and could maintain its stability andeffectiveness A closer inspection of Figure 5 indicates that thesegmentation accuracy of GGMM-TLE was visually higherthan the other methods It is due to the fact that the proposedGGMM-TLE utilizes the advantages of confidence level forthese observations so that the effects of mixture weights andsample scales are eliminatedTherefore with the increasing ofthe trimming fractions GGMM-TLE exhibits better stabilityto outliers than CLO-TLE As shown in Figure 5 this is

especially apparent in GGMM-TLE with high trimmingfractions Figure 6 displays the evaluation results using theMCR metric It can be observed that the GGMM-TLE hadthe lowest MCR value thus its segmentation results weresuperior to those of FAST-TLE and CLO-TLE

Further we executed these algorithms 20 times each timewith different initializationThen we computed the (average)performance in terms of the number of correctly classifieddata points and the DSC for this MR image including whitematter and grey matter Table 1 lists the mean values and the

10 Mathematical Problems in Engineering

GGMM-TLECLO-TLEFAST-TLE

times104

54

55

56

57

58

59

6Li

kelih

ood

func

tion

10 20 30 40 500Iterations

(a)

46

48

5

52

54

56

58

Like

lihoo

d fu

nctio

n

10 20 30 40 500Iterations

GGMM-TLECLO-TLEFAST-TLE

times104

(b)

4

42

44

46

48

5

52

54

56

Like

lihoo

d fu

nctio

n

10 20 30 40 500Iterations

GGMM-TLECLO-TLEFAST-TLE

times104

(c)

10 20 30 40 500Iterations

38

4

42

44

46

48

5

52

54

Like

lihoo

d fu

nctio

n

GGMM-TLECLO-TLEFAST-TLE

times104

(d)

Figure 3 Comparison of likelihood values for the different test images with trimming faction 02 (a) Test image ldquoBoatrdquo (b) test image ldquoCowrdquo(c) test image ldquoHouserdquo and (d) test image ldquoManrdquo

standard deviations of the DSC obtained from 20 executionsThe experiment results demonstrate that the accuracy wasmoderately improved compared with the other methods

To assess the robustness of the proposed GGMM-TLEat different levels of noise a set of real-world images fromthe Berkeley image dataset [28] was considered to comparethe performance of GMM [29] SMM [30] GΓMM [31]NSMM [12] and ACAP [8] The ground-truth informationwas freely obtained from the website [31] This was usedfor algorithm performance evaluation The experiment was

performed with noisy version images by adding Gaussiannoise (zero mean 001 variance) and Salt and Pepper noise(3) to the images as indicated in the first row of Figures 7and 8 The evaluated algorithms were initialized using the 119896-means algorithmThe number of label119870was set according tohuman visual inspection Figures 7 and 8 exhibit the resultsof image segmentation using the different methods Owingto the application of a mean filter we can observe that theperformance of ACAPwas superior to GMM SMM GΓMMand NSMM The results generated by the ACAP achieved

Mathematical Problems in Engineering 11

FAST-TLECLO-TLEGGMM-TLE

015 02 025 03 035 04 045 0501Trimming fraction

002

003

004

005

006

MCR

(a)

015 02 025 03 035 04 045 0501Trimming fraction

003

004

005

006

MCR

FAST-TLECLO-TLEGGMM-TLE

(b)

015 02 025 03 035 04 045 0501Trimming fraction

001

002

003

004

005

006

007

MCR

FAST-TLECLO-TLEGGMM-TLE

(c)

01 015 02 025 03 035 04 045 05001

002

003

004

005

006

007

Trimming fraction

MCR

FAST-TLECLO-TLEGGMM-TLE

(d)

Figure 4 Plot ofMCR of test images against different trimming fractions (a) Test image ldquoBoatrdquo (b) test image ldquoCowrdquo (c) test image ldquoHouserdquoand (d) test image ldquoManrdquo

Table 1 Average DSC of different methods with diverse trimming fractions Original MR image with 5 Salt and Pepper noise (mean plusmnstandard deviation)

Methods Trimming fraction (WM)01 02 03 04 05

FAST-TLE 07153 plusmn 04191 07269 plusmn 04785 07081 plusmn 02083 06771 plusmn 04762 06743 plusmn 05216CLO-TLE 08181 plusmn 02721 07968 plusmn 02814 07966 plusmn 03612 07748 plusmn 03264 07484 plusmn 03673GGMM-TLE 08994 plusmn 01023 08380 plusmn 01571 08177 plusmn 01238 08040 plusmn 00779 07803 plusmn 02317Methods Trimming fraction (GM)

01 02 03 04 05FAST-TLE 08367 plusmn 04020 08378 plusmn 04060 08313 plusmn 03553 08015 plusmn 03723 07829 plusmn 04062CLO-TLE 09107 plusmn 03841 08931 plusmn 02379 08944 plusmn 01540 08803 plusmn 02783 08593 plusmn 01761GGMM-TLE 09561 plusmn 01262 09227 plusmn 01040 09095 plusmn 01235 09002 plusmn 01156 08837 plusmn 01240

12 Mathematical Problems in Engineering

(a)

(b)

(c)

Figure 5 (a) to (c) display the segmentation results of MR image (slice 42 of IBSR2) using FAST-TLE CLO-TLE and GGMM-TLErespectively The trimming fractions from left to right are 01 02 03 04 and 05

FASTminusTLECLOminusTLEGGMMminusTLE

002

003

004

005

006

007

008

009

MCR

015 02 025 03 035 04 045 0501Trimming fraction

(a)

01

015

02

025

03

035

04

045

05

MCR

015 02 025 03 035 04 045 0501Trimming fraction

FAST-TLECLO-TLEGGMM-TLE

(b)

Figure 6 Trimming fractions versus classification accuracy under noise environment MCR of different methods for MR image (slice 42 ofIBSR2) (a) 5 Salt and Pepper noise and (b) 10 Salt and Pepper noise

Mathematical Problems in Engineering 13

Figure 7 Segmentation performance comparison for real-world images in the Gaussian noise environment (zero mean variance 001) Fromthe first row to the last noisy image GMM SMM GΓMM NSMM ACAP and GGMM-TLE respectively From the first column to the lastcolumn test image 24063 8068 241004 55067 and 35010 (Berkeley Dataset)

similar results to GGMM-TLE however its performance wasimpaired when there was an abundance of rich details forexample in test image 241004 (the sixth row of Figure 7)TheGGMM-TLE provided a moderately improved performanceunder different noisy conditions and eliminated the influenceof widely spread noise data This characteristic is endemicto MRF and trimmed likelihood estimator The resultingDSC is reported in Tables 2 and 3 providing a quantitativecomparison among the algorithms The DSC and standarddeviation indicate that the proposed method outperformedthe other methods by preserving the highest DSC

To further demonstrate the goodness of GGMM-TLEagainst different noise in Figure 9 we display the meanvalues and standard deviations of the MCR obtained fromtwenty runs on two Berkeley test images (24063 and 35010)under different noise environments Considering the MCRon average the ACAP effectively eliminated the effects ofnoise during the segmentation processing and demonstratedacceptable segmentation results We determined that classi-cal GMM SMM and GΓMM were severely influenced byGaussian noise and could not accurately separate a regionfrom the background In the majority of cases the NSMM

14 Mathematical Problems in Engineering

Figure 8 Segmentation performance comparison for real-world images in the Salt and Pepper noise environment (3) From the first rowto the last noisy image GMM SMM GΓMM NSMM ACAP and GGMM-TLE respectively From the first column to the last column testimage 24063 8068 241004 55067 and 35010 (Berkeley Dataset)

approach was superior to SMM and GMM yet continuedto be influenced by varying degrees of Salt and Pepperand Gaussian noise As expected compared to the otheralgorithms the GGMM-TLEwas stable and achieved the bestsegmentation results according to the quantitative criterion

7 Concluding Remarks

In this paper a robust estimation of the proposed GGMM-TLE using a trimmed likelihood estimator for real-worldimage segmentation was proposed GGMM-TLE with MRF

implements a mixture of generalized Gamma and Gaussiandistributions

The main contribution of this paper is the presentationof an asymmetric finite model GGMM-TLE based on MRFWith this model we have high flexibility to fit differentshapes of observed data Further this study discussed theproperty of identifiability of the proposed mixture modelguaranteeing that the estimation procedure for the parame-ters was correctly developed Then to ensure that GGMM-TLE was robust against heavy outliers the paper offeredan effective method to discard the outliers in advance and

Mathematical Problems in Engineering 15

Trimming fraction 02

2 5 8 10Salt amp Pepper noise ()

0

002

004

006

008

01

012

014

016

018

02

MCR

GMMSMM

NSMMACAPGGMM-TLEGΓMM

(a)

001 004 006 008Gaussian noise (zero mean variance)

Trimming fraction 02

GMMSMM

NSMMACAPGGMM-TLE

0

002

004

006

008

01

012

014

016

018

02

MCR

GΓMM(b)

2 5 8 10Salt amp Pepper noise ()

Trimming fraction 02

GMMSMM

NSMMACAPGGMM-TLE

0

002

004

006

008

01

012

014

016

018

02

MCR

GΓMM(c)

001 004 006 008Gaussian noise (zero mean variance)

Trimming fraction 02

0

002

004

006

008

01

012

014

016

018

02

MCR

GMMSMMGΓMM

NSMMACAPGGMM-TLE

(d)

Figure 9 Average MCR for different methods on two test images under noisy environment (Berkeley Dataset) ((a) and (b)) Test image24063 ((c) and (d)) test image 35010

therefore GGMM-TLE demonstrated superior performanceunder modelling with samples contaminated with unknownoutliers Finally combined with MRF GGMM-TLE consid-ered the spatial relationship between neighbouring pixels anddemonstrated a stronger ability to resist different noise Thesegmentation results of synthetic data and real-world imagesconfirmed that the proposed method demonstrated superiorcompetitivenessThemain limitation of this algorithm is that

the segmentation task requires component-based confidencelevel ordering which increases the computational cost

As a future work one direction is to obtain other finitemixture models by testing different probability density func-tions Another possible direction is to extend the presentedmethod to a higher dimension in a straightforward mannersuch as fMRI time-series clustering We plan to address thesetopics in a separate paper

16 Mathematical Problems in Engineering

Table 2AverageDSCof differentmethods for Berkeley test imageswith zeromeanGaussian noise variance 001 (meanplusmn standard deviation)Images Label GMM SMM GΓMM24063 3 086005 plusmn 032173 089173 plusmn 020353 088071 plusmn 0204728068 3 085737 plusmn 028634 087439 plusmn 027448 086005 plusmn 031348241004 4 085388 plusmn 032745 089018 plusmn 024834 087803 plusmn 02089555067 4 087931 plusmn 021733 088342 plusmn 021631 088013 plusmn 03055635010 4 084904 plusmn 033850 088463 plusmn 023632 087991 plusmn 018584Images Label NSMM ACAP GGMM-TLE24063 3 091825 plusmn 015734 095004 plusmn 014848 097122 plusmn 0148918068 3 089173 plusmn 028042 094485 plusmn 016055 096108 plusmn 010723241004 4 092006 plusmn 021634 094670 plusmn 013114 097061 plusmn 01482655067 4 092601 plusmn 017203 096893 plusmn 012051 097175 plusmn 01387635010 4 092183 plusmn 020847 095380 plusmn 013954 096219 plusmn 015664

Table 3 Average DSC of different methods for Berkeley test images with 3 Salt and Pepper noise (mean plusmn standard deviation)

Images Label GMM SMM GΓMM24063 3 087126 plusmn 030566 090972 plusmn 023385 088482 plusmn 0188088068 3 085234 plusmn 032531 088082 plusmn 030318 087627 plusmn 033601241004 4 087294 plusmn 035061 088385 plusmn 030458 089714 plusmn 02505455067 4 088099 plusmn 032392 090400 plusmn 025767 088517 plusmn 02109235010 4 085337 plusmn 025011 089075 plusmn 021839 087630 plusmn 016432Images Label NSMM ACAP GGMM-TLE24063 3 092079 plusmn 016049 096103 plusmn 023214 098394 plusmn 0075148068 3 090213 plusmn 025321 095329 plusmn 011539 097328 plusmn 011482241004 4 093487 plusmn 016806 096271 plusmn 012378 097117 plusmn 01368355067 4 093247 plusmn 018579 098076 plusmn 010690 098587 plusmn 00665535010 4 093752 plusmn 017804 096053 plusmn 010842 097420 plusmn 012722Conflicts of Interest

The authors declare that they have no conflicts of interest

Acknowledgments

This work was supported by the National Natural ScienceFoundation of China under Grant no 61371150

References

[1] A Saglam and N A Baykan ldquoSequential image segmentationbased on minimum spanning tree representationrdquo PatternRecognition Letters vol 87 pp 155ndash162 2017

[2] S Yin Y Qian andMGong ldquoUnsupervised hierarchical imagesegmentation through fuzzy entropy maximizationrdquo PatternRecognition vol 68 pp 245ndash259 2017

[3] H-C Li V A Krylov P-Z Fan J Zerubia and W J EmeryldquoUnsupervised learning of generalized gamma mixture modelwith application in statistical modeling of high-resolution SARimagesrdquo IEEE Transactions on Geoscience and Remote Sensingvol 54 no 4 pp 2153ndash2170 2016

[4] A Roy A Pal and U Garain ldquoJCLMM A finite mixturemodel for clustering of circular-linear data and its applicationto psoriatic plaque segmentationrdquo Pattern Recognition vol 66pp 160ndash173 2017

[5] Y Bar-Yosef and Y Bistritz ldquoGaussian mixture models reduc-tion by variational maximummutual informationrdquo IEEE Trans-actions on Signal Processing vol 63 no 6 pp 1557ndash1569 2015

[6] R Zhang D H Ye D Pal J-B Thibault K D Sauer and C Bouman ldquoA Gaussian mixture MRF for model-based iterativereconstruction with applications to low-dose X-ray CTrdquo IEEETransactions on Computational Imaging vol 2 no 3 pp 359ndash374 2016

[7] H Z Yerebakan and M Dundar ldquoPartially collapsed parallelGibbs sampler for Dirichlet process mixture modelsrdquo PatternRecognition Letters vol 90 pp 22ndash27 2017

[8] H Zhang QM JWu and TM Nguyen ldquoIncorporatingmeantemplate into finite mixture model for image segmentationrdquoIEEE Transactions on Neural Networks and Learning Systemsvol 24 no 2 pp 328ndash335 2013

[9] A Matza and Y Bistritz ldquoSkew Gaussian mixture models forspeaker recognitionrdquo IET Signal Processing vol 8 no 8 pp860ndash867 2014

[10] T-T Van Cao ldquoModelling of inhomogeneity in radar clutterusing weibull mixture densitiesrdquo IET Radar Sonar amp Naviga-tion vol 8 no 3 pp 180ndash194 2014

[11] Q Peng and L Zhao ldquoSAR image filtering based on the Cauchy-Rayleigh mixture modelrdquo IEEE Geoscience and Remote SensingLetters vol 11 no 5 pp 960ndash966 2014

[12] TMNguyen andQM JWu ldquoA nonsymmetricmixturemodelfor unsupervised image segmentationrdquo IEEE Transactions onCybernetics vol 43 no 2 pp 751ndash765 2013

Mathematical Problems in Engineering 17

[13] T M Nguyen Q M J Wu D Mukherjee and H ZhangldquoA bayesian bounded asymmetric mixture model with seg-mentation applicationrdquo IEEE Journal of Biomedical and HealthInformatics vol 18 no 1 pp 109ndash119 2014

[14] T M Nguyen and Q M J Wu ldquoBounded asymmetricalstudentrsquos-t mixture modelrdquo IEEE Transactions on Cyberneticsvol 44 no 6 pp 857ndash869 2014

[15] X Zhou R Peng and C Wang ldquoA two-component K-Lognormal mixture model and its parameter estimationmethodrdquo IEEE Transactions on Geoscience and Remote Sensingvol 53 no 5 pp 2640ndash2651 2015

[16] A De Angelis G De Angelis and P Carbone ldquoUsing Gaussian-Uniform Mixture Models for Robust Time-Interval Measure-mentrdquo IEEE Transactions on Instrumentation andMeasurementvol 64 no 12 pp 3545ndash3554 2015

[17] R P Browne P D McNicholas and M D Sparling ldquoModel-based learning using a mixture of mixtures of gaussian anduniform distributionsrdquo IEEE Transactions on Pattern Analysisand Machine Intelligence vol 34 no 4 pp 814ndash817 2012

[18] J Sun A Zhou S Keates and S Liao ldquoSimultaneous BayesianClustering and Feature Selection Through Studentrsquos t MixturesModelrdquo IEEE Transactions on Neural Networks and LearningSystems 2017

[19] N Neykov P Filzmoser R Dimova and P Neytchev ldquoRobustfitting of mixtures using the trimmed likelihood estimatorrdquoComputational Statistics amp Data Analysis vol 52 no 1 pp 299ndash308 2007

[20] C H Muller and N Neykov ldquoBreakdown points of trimmedlikelihood estimators and related estimators in generalizedlinear modelsrdquo Journal of Statistical Planning and Inference vol116 no 2 pp 503ndash519 2003

[21] A Galimzianova F Pernus B Likar and Z Spiclin ldquoRobustestimation of unbalanced mixture models on samples withoutliersrdquo IEEE Transactions on Pattern Analysis and MachineIntelligence vol 37 no 11 pp 2273ndash2285 2015

[22] N Atienza J Garcia-Heras and J M Munoz-Pichardo ldquoAnew condition for identifiability of finite mixture distributionsrdquoMetrika vol 63 no 2 pp 215ndash221 2006

[23] W R Press S A TeukolskyW T Vetterling and B P FlanneryNumerical Recipes in C The Art of Scientific Computing Cam-bridge University Press Cambridge UK 2002

[24] Y Zhang M Brady and S Smith ldquoSegmentation of brain MRimages through a hidden Markov random field model andthe expectation-maximization algorithmrdquo IEEE Transactionson Medical Imaging vol 20 no 1 pp 45ndash57 2001

[25] LHan J HHipwell B Eiben et al ldquoA nonlinear biomechanicalmodel based registration method for aligning prone and supinemr breast imagesrdquo IEEE Transactions on Medical Imaging vol33 no 3 pp 682ndash694 2014

[26] B Hariharan P Arbelaez L Bourdev S Maji and J MalikldquoSemantic contours from inverse detectorsrdquo in Proceedings ofthe 2011 IEEE International Conference on Computer VisionICCV 2011 pp 991ndash998 Spain November 2011

[27] IBSR [Online] Available httpwwwnitrcorgprojectsibsr[28] S Meignen and H Meignen ldquoOn the modeling of small sample

distributions with generalized Gaussian density in a maximumlikelihood frameworkrdquo IEEE Transactions on Image Processingvol 15 no 6 pp 1647ndash1652 2006

[29] G McLachlan and D Peel Finite Mixture Models JohnWiley ampSons New York NY USA 2000

[30] D Peel and G J McLachlan ldquoRobust mixture modelling usingthe t distributionrdquo Statistics and Computing vol 10 no 4 pp339ndash348 2000

[31] T Elguebaly and N Bouguila ldquoBayesian learning of finite gen-eralized Gaussianmixturemodels on imagesrdquo Signal Processingvol 91 no 4 pp 801ndash820 2011

Hindawiwwwhindawicom Volume 2018

MathematicsJournal of

Hindawiwwwhindawicom Volume 2018

Mathematical Problems in Engineering

Applied MathematicsJournal of

Hindawiwwwhindawicom Volume 2018

Probability and StatisticsHindawiwwwhindawicom Volume 2018

Journal of

Hindawiwwwhindawicom Volume 2018

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawiwwwhindawicom Volume 2018

OptimizationJournal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Engineering Mathematics

International Journal of

Hindawiwwwhindawicom Volume 2018

Operations ResearchAdvances in

Journal of

Hindawiwwwhindawicom Volume 2018

Function SpacesAbstract and Applied AnalysisHindawiwwwhindawicom Volume 2018

International Journal of Mathematics and Mathematical Sciences

Hindawiwwwhindawicom Volume 2018

Hindawi Publishing Corporation httpwwwhindawicom Volume 2013Hindawiwwwhindawicom

The Scientific World Journal

Volume 2018

Hindawiwwwhindawicom Volume 2018Volume 2018

Numerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisAdvances inAdvances in Discrete Dynamics in

Nature and SocietyHindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom

Dierential EquationsInternational Journal of

Volume 2018

Hindawiwwwhindawicom Volume 2018

Decision SciencesAdvances in

Hindawiwwwhindawicom Volume 2018

AnalysisInternational Journal of

Hindawiwwwhindawicom Volume 2018

Stochastic AnalysisInternational Journal of

Submit your manuscripts atwwwhindawicom

Page 10: Image Segmentation Using a Trimmed Likelihood Estimator in ... · ResearchArticle Image Segmentation Using a Trimmed Likelihood Estimator in the Asymmetric Mixture Model Based on

10 Mathematical Problems in Engineering

GGMM-TLECLO-TLEFAST-TLE

times104

54

55

56

57

58

59

6Li

kelih

ood

func

tion

10 20 30 40 500Iterations

(a)

46

48

5

52

54

56

58

Like

lihoo

d fu

nctio

n

10 20 30 40 500Iterations

GGMM-TLECLO-TLEFAST-TLE

times104

(b)

4

42

44

46

48

5

52

54

56

Like

lihoo

d fu

nctio

n

10 20 30 40 500Iterations

GGMM-TLECLO-TLEFAST-TLE

times104

(c)

10 20 30 40 500Iterations

38

4

42

44

46

48

5

52

54

Like

lihoo

d fu

nctio

n

GGMM-TLECLO-TLEFAST-TLE

times104

(d)

Figure 3 Comparison of likelihood values for the different test images with trimming faction 02 (a) Test image ldquoBoatrdquo (b) test image ldquoCowrdquo(c) test image ldquoHouserdquo and (d) test image ldquoManrdquo

standard deviations of the DSC obtained from 20 executionsThe experiment results demonstrate that the accuracy wasmoderately improved compared with the other methods

To assess the robustness of the proposed GGMM-TLEat different levels of noise a set of real-world images fromthe Berkeley image dataset [28] was considered to comparethe performance of GMM [29] SMM [30] GΓMM [31]NSMM [12] and ACAP [8] The ground-truth informationwas freely obtained from the website [31] This was usedfor algorithm performance evaluation The experiment was

performed with noisy version images by adding Gaussiannoise (zero mean 001 variance) and Salt and Pepper noise(3) to the images as indicated in the first row of Figures 7and 8 The evaluated algorithms were initialized using the 119896-means algorithmThe number of label119870was set according tohuman visual inspection Figures 7 and 8 exhibit the resultsof image segmentation using the different methods Owingto the application of a mean filter we can observe that theperformance of ACAPwas superior to GMM SMM GΓMMand NSMM The results generated by the ACAP achieved

Mathematical Problems in Engineering 11

FAST-TLECLO-TLEGGMM-TLE

015 02 025 03 035 04 045 0501Trimming fraction

002

003

004

005

006

MCR

(a)

015 02 025 03 035 04 045 0501Trimming fraction

003

004

005

006

MCR

FAST-TLECLO-TLEGGMM-TLE

(b)

015 02 025 03 035 04 045 0501Trimming fraction

001

002

003

004

005

006

007

MCR

FAST-TLECLO-TLEGGMM-TLE

(c)

01 015 02 025 03 035 04 045 05001

002

003

004

005

006

007

Trimming fraction

MCR

FAST-TLECLO-TLEGGMM-TLE

(d)

Figure 4 Plot ofMCR of test images against different trimming fractions (a) Test image ldquoBoatrdquo (b) test image ldquoCowrdquo (c) test image ldquoHouserdquoand (d) test image ldquoManrdquo

Table 1 Average DSC of different methods with diverse trimming fractions Original MR image with 5 Salt and Pepper noise (mean plusmnstandard deviation)

Methods Trimming fraction (WM)01 02 03 04 05

FAST-TLE 07153 plusmn 04191 07269 plusmn 04785 07081 plusmn 02083 06771 plusmn 04762 06743 plusmn 05216CLO-TLE 08181 plusmn 02721 07968 plusmn 02814 07966 plusmn 03612 07748 plusmn 03264 07484 plusmn 03673GGMM-TLE 08994 plusmn 01023 08380 plusmn 01571 08177 plusmn 01238 08040 plusmn 00779 07803 plusmn 02317Methods Trimming fraction (GM)

01 02 03 04 05FAST-TLE 08367 plusmn 04020 08378 plusmn 04060 08313 plusmn 03553 08015 plusmn 03723 07829 plusmn 04062CLO-TLE 09107 plusmn 03841 08931 plusmn 02379 08944 plusmn 01540 08803 plusmn 02783 08593 plusmn 01761GGMM-TLE 09561 plusmn 01262 09227 plusmn 01040 09095 plusmn 01235 09002 plusmn 01156 08837 plusmn 01240

12 Mathematical Problems in Engineering

(a)

(b)

(c)

Figure 5 (a) to (c) display the segmentation results of MR image (slice 42 of IBSR2) using FAST-TLE CLO-TLE and GGMM-TLErespectively The trimming fractions from left to right are 01 02 03 04 and 05

FASTminusTLECLOminusTLEGGMMminusTLE

002

003

004

005

006

007

008

009

MCR

015 02 025 03 035 04 045 0501Trimming fraction

(a)

01

015

02

025

03

035

04

045

05

MCR

015 02 025 03 035 04 045 0501Trimming fraction

FAST-TLECLO-TLEGGMM-TLE

(b)

Figure 6 Trimming fractions versus classification accuracy under noise environment MCR of different methods for MR image (slice 42 ofIBSR2) (a) 5 Salt and Pepper noise and (b) 10 Salt and Pepper noise

Mathematical Problems in Engineering 13

Figure 7 Segmentation performance comparison for real-world images in the Gaussian noise environment (zero mean variance 001) Fromthe first row to the last noisy image GMM SMM GΓMM NSMM ACAP and GGMM-TLE respectively From the first column to the lastcolumn test image 24063 8068 241004 55067 and 35010 (Berkeley Dataset)

similar results to GGMM-TLE however its performance wasimpaired when there was an abundance of rich details forexample in test image 241004 (the sixth row of Figure 7)TheGGMM-TLE provided a moderately improved performanceunder different noisy conditions and eliminated the influenceof widely spread noise data This characteristic is endemicto MRF and trimmed likelihood estimator The resultingDSC is reported in Tables 2 and 3 providing a quantitativecomparison among the algorithms The DSC and standarddeviation indicate that the proposed method outperformedthe other methods by preserving the highest DSC

To further demonstrate the goodness of GGMM-TLEagainst different noise in Figure 9 we display the meanvalues and standard deviations of the MCR obtained fromtwenty runs on two Berkeley test images (24063 and 35010)under different noise environments Considering the MCRon average the ACAP effectively eliminated the effects ofnoise during the segmentation processing and demonstratedacceptable segmentation results We determined that classi-cal GMM SMM and GΓMM were severely influenced byGaussian noise and could not accurately separate a regionfrom the background In the majority of cases the NSMM

14 Mathematical Problems in Engineering

Figure 8 Segmentation performance comparison for real-world images in the Salt and Pepper noise environment (3) From the first rowto the last noisy image GMM SMM GΓMM NSMM ACAP and GGMM-TLE respectively From the first column to the last column testimage 24063 8068 241004 55067 and 35010 (Berkeley Dataset)

approach was superior to SMM and GMM yet continuedto be influenced by varying degrees of Salt and Pepperand Gaussian noise As expected compared to the otheralgorithms the GGMM-TLEwas stable and achieved the bestsegmentation results according to the quantitative criterion

7 Concluding Remarks

In this paper a robust estimation of the proposed GGMM-TLE using a trimmed likelihood estimator for real-worldimage segmentation was proposed GGMM-TLE with MRF

implements a mixture of generalized Gamma and Gaussiandistributions

The main contribution of this paper is the presentationof an asymmetric finite model GGMM-TLE based on MRFWith this model we have high flexibility to fit differentshapes of observed data Further this study discussed theproperty of identifiability of the proposed mixture modelguaranteeing that the estimation procedure for the parame-ters was correctly developed Then to ensure that GGMM-TLE was robust against heavy outliers the paper offeredan effective method to discard the outliers in advance and

Mathematical Problems in Engineering 15

Trimming fraction 02

2 5 8 10Salt amp Pepper noise ()

0

002

004

006

008

01

012

014

016

018

02

MCR

GMMSMM

NSMMACAPGGMM-TLEGΓMM

(a)

001 004 006 008Gaussian noise (zero mean variance)

Trimming fraction 02

GMMSMM

NSMMACAPGGMM-TLE

0

002

004

006

008

01

012

014

016

018

02

MCR

GΓMM(b)

2 5 8 10Salt amp Pepper noise ()

Trimming fraction 02

GMMSMM

NSMMACAPGGMM-TLE

0

002

004

006

008

01

012

014

016

018

02

MCR

GΓMM(c)

001 004 006 008Gaussian noise (zero mean variance)

Trimming fraction 02

0

002

004

006

008

01

012

014

016

018

02

MCR

GMMSMMGΓMM

NSMMACAPGGMM-TLE

(d)

Figure 9 Average MCR for different methods on two test images under noisy environment (Berkeley Dataset) ((a) and (b)) Test image24063 ((c) and (d)) test image 35010

therefore GGMM-TLE demonstrated superior performanceunder modelling with samples contaminated with unknownoutliers Finally combined with MRF GGMM-TLE consid-ered the spatial relationship between neighbouring pixels anddemonstrated a stronger ability to resist different noise Thesegmentation results of synthetic data and real-world imagesconfirmed that the proposed method demonstrated superiorcompetitivenessThemain limitation of this algorithm is that

the segmentation task requires component-based confidencelevel ordering which increases the computational cost

As a future work one direction is to obtain other finitemixture models by testing different probability density func-tions Another possible direction is to extend the presentedmethod to a higher dimension in a straightforward mannersuch as fMRI time-series clustering We plan to address thesetopics in a separate paper

16 Mathematical Problems in Engineering

Table 2AverageDSCof differentmethods for Berkeley test imageswith zeromeanGaussian noise variance 001 (meanplusmn standard deviation)Images Label GMM SMM GΓMM24063 3 086005 plusmn 032173 089173 plusmn 020353 088071 plusmn 0204728068 3 085737 plusmn 028634 087439 plusmn 027448 086005 plusmn 031348241004 4 085388 plusmn 032745 089018 plusmn 024834 087803 plusmn 02089555067 4 087931 plusmn 021733 088342 plusmn 021631 088013 plusmn 03055635010 4 084904 plusmn 033850 088463 plusmn 023632 087991 plusmn 018584Images Label NSMM ACAP GGMM-TLE24063 3 091825 plusmn 015734 095004 plusmn 014848 097122 plusmn 0148918068 3 089173 plusmn 028042 094485 plusmn 016055 096108 plusmn 010723241004 4 092006 plusmn 021634 094670 plusmn 013114 097061 plusmn 01482655067 4 092601 plusmn 017203 096893 plusmn 012051 097175 plusmn 01387635010 4 092183 plusmn 020847 095380 plusmn 013954 096219 plusmn 015664

Table 3 Average DSC of different methods for Berkeley test images with 3 Salt and Pepper noise (mean plusmn standard deviation)

Images Label GMM SMM GΓMM24063 3 087126 plusmn 030566 090972 plusmn 023385 088482 plusmn 0188088068 3 085234 plusmn 032531 088082 plusmn 030318 087627 plusmn 033601241004 4 087294 plusmn 035061 088385 plusmn 030458 089714 plusmn 02505455067 4 088099 plusmn 032392 090400 plusmn 025767 088517 plusmn 02109235010 4 085337 plusmn 025011 089075 plusmn 021839 087630 plusmn 016432Images Label NSMM ACAP GGMM-TLE24063 3 092079 plusmn 016049 096103 plusmn 023214 098394 plusmn 0075148068 3 090213 plusmn 025321 095329 plusmn 011539 097328 plusmn 011482241004 4 093487 plusmn 016806 096271 plusmn 012378 097117 plusmn 01368355067 4 093247 plusmn 018579 098076 plusmn 010690 098587 plusmn 00665535010 4 093752 plusmn 017804 096053 plusmn 010842 097420 plusmn 012722Conflicts of Interest

The authors declare that they have no conflicts of interest

Acknowledgments

This work was supported by the National Natural ScienceFoundation of China under Grant no 61371150

References

[1] A Saglam and N A Baykan ldquoSequential image segmentationbased on minimum spanning tree representationrdquo PatternRecognition Letters vol 87 pp 155ndash162 2017

[2] S Yin Y Qian andMGong ldquoUnsupervised hierarchical imagesegmentation through fuzzy entropy maximizationrdquo PatternRecognition vol 68 pp 245ndash259 2017

[3] H-C Li V A Krylov P-Z Fan J Zerubia and W J EmeryldquoUnsupervised learning of generalized gamma mixture modelwith application in statistical modeling of high-resolution SARimagesrdquo IEEE Transactions on Geoscience and Remote Sensingvol 54 no 4 pp 2153ndash2170 2016

[4] A Roy A Pal and U Garain ldquoJCLMM A finite mixturemodel for clustering of circular-linear data and its applicationto psoriatic plaque segmentationrdquo Pattern Recognition vol 66pp 160ndash173 2017

[5] Y Bar-Yosef and Y Bistritz ldquoGaussian mixture models reduc-tion by variational maximummutual informationrdquo IEEE Trans-actions on Signal Processing vol 63 no 6 pp 1557ndash1569 2015

[6] R Zhang D H Ye D Pal J-B Thibault K D Sauer and C Bouman ldquoA Gaussian mixture MRF for model-based iterativereconstruction with applications to low-dose X-ray CTrdquo IEEETransactions on Computational Imaging vol 2 no 3 pp 359ndash374 2016

[7] H Z Yerebakan and M Dundar ldquoPartially collapsed parallelGibbs sampler for Dirichlet process mixture modelsrdquo PatternRecognition Letters vol 90 pp 22ndash27 2017

[8] H Zhang QM JWu and TM Nguyen ldquoIncorporatingmeantemplate into finite mixture model for image segmentationrdquoIEEE Transactions on Neural Networks and Learning Systemsvol 24 no 2 pp 328ndash335 2013

[9] A Matza and Y Bistritz ldquoSkew Gaussian mixture models forspeaker recognitionrdquo IET Signal Processing vol 8 no 8 pp860ndash867 2014

[10] T-T Van Cao ldquoModelling of inhomogeneity in radar clutterusing weibull mixture densitiesrdquo IET Radar Sonar amp Naviga-tion vol 8 no 3 pp 180ndash194 2014

[11] Q Peng and L Zhao ldquoSAR image filtering based on the Cauchy-Rayleigh mixture modelrdquo IEEE Geoscience and Remote SensingLetters vol 11 no 5 pp 960ndash966 2014

[12] TMNguyen andQM JWu ldquoA nonsymmetricmixturemodelfor unsupervised image segmentationrdquo IEEE Transactions onCybernetics vol 43 no 2 pp 751ndash765 2013

Mathematical Problems in Engineering 17

[13] T M Nguyen Q M J Wu D Mukherjee and H ZhangldquoA bayesian bounded asymmetric mixture model with seg-mentation applicationrdquo IEEE Journal of Biomedical and HealthInformatics vol 18 no 1 pp 109ndash119 2014

[14] T M Nguyen and Q M J Wu ldquoBounded asymmetricalstudentrsquos-t mixture modelrdquo IEEE Transactions on Cyberneticsvol 44 no 6 pp 857ndash869 2014

[15] X Zhou R Peng and C Wang ldquoA two-component K-Lognormal mixture model and its parameter estimationmethodrdquo IEEE Transactions on Geoscience and Remote Sensingvol 53 no 5 pp 2640ndash2651 2015

[16] A De Angelis G De Angelis and P Carbone ldquoUsing Gaussian-Uniform Mixture Models for Robust Time-Interval Measure-mentrdquo IEEE Transactions on Instrumentation andMeasurementvol 64 no 12 pp 3545ndash3554 2015

[17] R P Browne P D McNicholas and M D Sparling ldquoModel-based learning using a mixture of mixtures of gaussian anduniform distributionsrdquo IEEE Transactions on Pattern Analysisand Machine Intelligence vol 34 no 4 pp 814ndash817 2012

[18] J Sun A Zhou S Keates and S Liao ldquoSimultaneous BayesianClustering and Feature Selection Through Studentrsquos t MixturesModelrdquo IEEE Transactions on Neural Networks and LearningSystems 2017

[19] N Neykov P Filzmoser R Dimova and P Neytchev ldquoRobustfitting of mixtures using the trimmed likelihood estimatorrdquoComputational Statistics amp Data Analysis vol 52 no 1 pp 299ndash308 2007

[20] C H Muller and N Neykov ldquoBreakdown points of trimmedlikelihood estimators and related estimators in generalizedlinear modelsrdquo Journal of Statistical Planning and Inference vol116 no 2 pp 503ndash519 2003

[21] A Galimzianova F Pernus B Likar and Z Spiclin ldquoRobustestimation of unbalanced mixture models on samples withoutliersrdquo IEEE Transactions on Pattern Analysis and MachineIntelligence vol 37 no 11 pp 2273ndash2285 2015

[22] N Atienza J Garcia-Heras and J M Munoz-Pichardo ldquoAnew condition for identifiability of finite mixture distributionsrdquoMetrika vol 63 no 2 pp 215ndash221 2006

[23] W R Press S A TeukolskyW T Vetterling and B P FlanneryNumerical Recipes in C The Art of Scientific Computing Cam-bridge University Press Cambridge UK 2002

[24] Y Zhang M Brady and S Smith ldquoSegmentation of brain MRimages through a hidden Markov random field model andthe expectation-maximization algorithmrdquo IEEE Transactionson Medical Imaging vol 20 no 1 pp 45ndash57 2001

[25] LHan J HHipwell B Eiben et al ldquoA nonlinear biomechanicalmodel based registration method for aligning prone and supinemr breast imagesrdquo IEEE Transactions on Medical Imaging vol33 no 3 pp 682ndash694 2014

[26] B Hariharan P Arbelaez L Bourdev S Maji and J MalikldquoSemantic contours from inverse detectorsrdquo in Proceedings ofthe 2011 IEEE International Conference on Computer VisionICCV 2011 pp 991ndash998 Spain November 2011

[27] IBSR [Online] Available httpwwwnitrcorgprojectsibsr[28] S Meignen and H Meignen ldquoOn the modeling of small sample

distributions with generalized Gaussian density in a maximumlikelihood frameworkrdquo IEEE Transactions on Image Processingvol 15 no 6 pp 1647ndash1652 2006

[29] G McLachlan and D Peel Finite Mixture Models JohnWiley ampSons New York NY USA 2000

[30] D Peel and G J McLachlan ldquoRobust mixture modelling usingthe t distributionrdquo Statistics and Computing vol 10 no 4 pp339ndash348 2000

[31] T Elguebaly and N Bouguila ldquoBayesian learning of finite gen-eralized Gaussianmixturemodels on imagesrdquo Signal Processingvol 91 no 4 pp 801ndash820 2011

Hindawiwwwhindawicom Volume 2018

MathematicsJournal of

Hindawiwwwhindawicom Volume 2018

Mathematical Problems in Engineering

Applied MathematicsJournal of

Hindawiwwwhindawicom Volume 2018

Probability and StatisticsHindawiwwwhindawicom Volume 2018

Journal of

Hindawiwwwhindawicom Volume 2018

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawiwwwhindawicom Volume 2018

OptimizationJournal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Engineering Mathematics

International Journal of

Hindawiwwwhindawicom Volume 2018

Operations ResearchAdvances in

Journal of

Hindawiwwwhindawicom Volume 2018

Function SpacesAbstract and Applied AnalysisHindawiwwwhindawicom Volume 2018

International Journal of Mathematics and Mathematical Sciences

Hindawiwwwhindawicom Volume 2018

Hindawi Publishing Corporation httpwwwhindawicom Volume 2013Hindawiwwwhindawicom

The Scientific World Journal

Volume 2018

Hindawiwwwhindawicom Volume 2018Volume 2018

Numerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisAdvances inAdvances in Discrete Dynamics in

Nature and SocietyHindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom

Dierential EquationsInternational Journal of

Volume 2018

Hindawiwwwhindawicom Volume 2018

Decision SciencesAdvances in

Hindawiwwwhindawicom Volume 2018

AnalysisInternational Journal of

Hindawiwwwhindawicom Volume 2018

Stochastic AnalysisInternational Journal of

Submit your manuscripts atwwwhindawicom

Page 11: Image Segmentation Using a Trimmed Likelihood Estimator in ... · ResearchArticle Image Segmentation Using a Trimmed Likelihood Estimator in the Asymmetric Mixture Model Based on

Mathematical Problems in Engineering 11

FAST-TLECLO-TLEGGMM-TLE

015 02 025 03 035 04 045 0501Trimming fraction

002

003

004

005

006

MCR

(a)

015 02 025 03 035 04 045 0501Trimming fraction

003

004

005

006

MCR

FAST-TLECLO-TLEGGMM-TLE

(b)

015 02 025 03 035 04 045 0501Trimming fraction

001

002

003

004

005

006

007

MCR

FAST-TLECLO-TLEGGMM-TLE

(c)

01 015 02 025 03 035 04 045 05001

002

003

004

005

006

007

Trimming fraction

MCR

FAST-TLECLO-TLEGGMM-TLE

(d)

Figure 4 Plot ofMCR of test images against different trimming fractions (a) Test image ldquoBoatrdquo (b) test image ldquoCowrdquo (c) test image ldquoHouserdquoand (d) test image ldquoManrdquo

Table 1 Average DSC of different methods with diverse trimming fractions Original MR image with 5 Salt and Pepper noise (mean plusmnstandard deviation)

Methods Trimming fraction (WM)01 02 03 04 05

FAST-TLE 07153 plusmn 04191 07269 plusmn 04785 07081 plusmn 02083 06771 plusmn 04762 06743 plusmn 05216CLO-TLE 08181 plusmn 02721 07968 plusmn 02814 07966 plusmn 03612 07748 plusmn 03264 07484 plusmn 03673GGMM-TLE 08994 plusmn 01023 08380 plusmn 01571 08177 plusmn 01238 08040 plusmn 00779 07803 plusmn 02317Methods Trimming fraction (GM)

01 02 03 04 05FAST-TLE 08367 plusmn 04020 08378 plusmn 04060 08313 plusmn 03553 08015 plusmn 03723 07829 plusmn 04062CLO-TLE 09107 plusmn 03841 08931 plusmn 02379 08944 plusmn 01540 08803 plusmn 02783 08593 plusmn 01761GGMM-TLE 09561 plusmn 01262 09227 plusmn 01040 09095 plusmn 01235 09002 plusmn 01156 08837 plusmn 01240

12 Mathematical Problems in Engineering

(a)

(b)

(c)

Figure 5 (a) to (c) display the segmentation results of MR image (slice 42 of IBSR2) using FAST-TLE CLO-TLE and GGMM-TLErespectively The trimming fractions from left to right are 01 02 03 04 and 05

FASTminusTLECLOminusTLEGGMMminusTLE

002

003

004

005

006

007

008

009

MCR

015 02 025 03 035 04 045 0501Trimming fraction

(a)

01

015

02

025

03

035

04

045

05

MCR

015 02 025 03 035 04 045 0501Trimming fraction

FAST-TLECLO-TLEGGMM-TLE

(b)

Figure 6 Trimming fractions versus classification accuracy under noise environment MCR of different methods for MR image (slice 42 ofIBSR2) (a) 5 Salt and Pepper noise and (b) 10 Salt and Pepper noise

Mathematical Problems in Engineering 13

Figure 7 Segmentation performance comparison for real-world images in the Gaussian noise environment (zero mean variance 001) Fromthe first row to the last noisy image GMM SMM GΓMM NSMM ACAP and GGMM-TLE respectively From the first column to the lastcolumn test image 24063 8068 241004 55067 and 35010 (Berkeley Dataset)

similar results to GGMM-TLE however its performance wasimpaired when there was an abundance of rich details forexample in test image 241004 (the sixth row of Figure 7)TheGGMM-TLE provided a moderately improved performanceunder different noisy conditions and eliminated the influenceof widely spread noise data This characteristic is endemicto MRF and trimmed likelihood estimator The resultingDSC is reported in Tables 2 and 3 providing a quantitativecomparison among the algorithms The DSC and standarddeviation indicate that the proposed method outperformedthe other methods by preserving the highest DSC

To further demonstrate the goodness of GGMM-TLEagainst different noise in Figure 9 we display the meanvalues and standard deviations of the MCR obtained fromtwenty runs on two Berkeley test images (24063 and 35010)under different noise environments Considering the MCRon average the ACAP effectively eliminated the effects ofnoise during the segmentation processing and demonstratedacceptable segmentation results We determined that classi-cal GMM SMM and GΓMM were severely influenced byGaussian noise and could not accurately separate a regionfrom the background In the majority of cases the NSMM

14 Mathematical Problems in Engineering

Figure 8 Segmentation performance comparison for real-world images in the Salt and Pepper noise environment (3) From the first rowto the last noisy image GMM SMM GΓMM NSMM ACAP and GGMM-TLE respectively From the first column to the last column testimage 24063 8068 241004 55067 and 35010 (Berkeley Dataset)

approach was superior to SMM and GMM yet continuedto be influenced by varying degrees of Salt and Pepperand Gaussian noise As expected compared to the otheralgorithms the GGMM-TLEwas stable and achieved the bestsegmentation results according to the quantitative criterion

7 Concluding Remarks

In this paper a robust estimation of the proposed GGMM-TLE using a trimmed likelihood estimator for real-worldimage segmentation was proposed GGMM-TLE with MRF

implements a mixture of generalized Gamma and Gaussiandistributions

The main contribution of this paper is the presentationof an asymmetric finite model GGMM-TLE based on MRFWith this model we have high flexibility to fit differentshapes of observed data Further this study discussed theproperty of identifiability of the proposed mixture modelguaranteeing that the estimation procedure for the parame-ters was correctly developed Then to ensure that GGMM-TLE was robust against heavy outliers the paper offeredan effective method to discard the outliers in advance and

Mathematical Problems in Engineering 15

Trimming fraction 02

2 5 8 10Salt amp Pepper noise ()

0

002

004

006

008

01

012

014

016

018

02

MCR

GMMSMM

NSMMACAPGGMM-TLEGΓMM

(a)

001 004 006 008Gaussian noise (zero mean variance)

Trimming fraction 02

GMMSMM

NSMMACAPGGMM-TLE

0

002

004

006

008

01

012

014

016

018

02

MCR

GΓMM(b)

2 5 8 10Salt amp Pepper noise ()

Trimming fraction 02

GMMSMM

NSMMACAPGGMM-TLE

0

002

004

006

008

01

012

014

016

018

02

MCR

GΓMM(c)

001 004 006 008Gaussian noise (zero mean variance)

Trimming fraction 02

0

002

004

006

008

01

012

014

016

018

02

MCR

GMMSMMGΓMM

NSMMACAPGGMM-TLE

(d)

Figure 9 Average MCR for different methods on two test images under noisy environment (Berkeley Dataset) ((a) and (b)) Test image24063 ((c) and (d)) test image 35010

therefore GGMM-TLE demonstrated superior performanceunder modelling with samples contaminated with unknownoutliers Finally combined with MRF GGMM-TLE consid-ered the spatial relationship between neighbouring pixels anddemonstrated a stronger ability to resist different noise Thesegmentation results of synthetic data and real-world imagesconfirmed that the proposed method demonstrated superiorcompetitivenessThemain limitation of this algorithm is that

the segmentation task requires component-based confidencelevel ordering which increases the computational cost

As a future work one direction is to obtain other finitemixture models by testing different probability density func-tions Another possible direction is to extend the presentedmethod to a higher dimension in a straightforward mannersuch as fMRI time-series clustering We plan to address thesetopics in a separate paper

16 Mathematical Problems in Engineering

Table 2AverageDSCof differentmethods for Berkeley test imageswith zeromeanGaussian noise variance 001 (meanplusmn standard deviation)Images Label GMM SMM GΓMM24063 3 086005 plusmn 032173 089173 plusmn 020353 088071 plusmn 0204728068 3 085737 plusmn 028634 087439 plusmn 027448 086005 plusmn 031348241004 4 085388 plusmn 032745 089018 plusmn 024834 087803 plusmn 02089555067 4 087931 plusmn 021733 088342 plusmn 021631 088013 plusmn 03055635010 4 084904 plusmn 033850 088463 plusmn 023632 087991 plusmn 018584Images Label NSMM ACAP GGMM-TLE24063 3 091825 plusmn 015734 095004 plusmn 014848 097122 plusmn 0148918068 3 089173 plusmn 028042 094485 plusmn 016055 096108 plusmn 010723241004 4 092006 plusmn 021634 094670 plusmn 013114 097061 plusmn 01482655067 4 092601 plusmn 017203 096893 plusmn 012051 097175 plusmn 01387635010 4 092183 plusmn 020847 095380 plusmn 013954 096219 plusmn 015664

Table 3 Average DSC of different methods for Berkeley test images with 3 Salt and Pepper noise (mean plusmn standard deviation)

Images Label GMM SMM GΓMM24063 3 087126 plusmn 030566 090972 plusmn 023385 088482 plusmn 0188088068 3 085234 plusmn 032531 088082 plusmn 030318 087627 plusmn 033601241004 4 087294 plusmn 035061 088385 plusmn 030458 089714 plusmn 02505455067 4 088099 plusmn 032392 090400 plusmn 025767 088517 plusmn 02109235010 4 085337 plusmn 025011 089075 plusmn 021839 087630 plusmn 016432Images Label NSMM ACAP GGMM-TLE24063 3 092079 plusmn 016049 096103 plusmn 023214 098394 plusmn 0075148068 3 090213 plusmn 025321 095329 plusmn 011539 097328 plusmn 011482241004 4 093487 plusmn 016806 096271 plusmn 012378 097117 plusmn 01368355067 4 093247 plusmn 018579 098076 plusmn 010690 098587 plusmn 00665535010 4 093752 plusmn 017804 096053 plusmn 010842 097420 plusmn 012722Conflicts of Interest

The authors declare that they have no conflicts of interest

Acknowledgments

This work was supported by the National Natural ScienceFoundation of China under Grant no 61371150

References

[1] A Saglam and N A Baykan ldquoSequential image segmentationbased on minimum spanning tree representationrdquo PatternRecognition Letters vol 87 pp 155ndash162 2017

[2] S Yin Y Qian andMGong ldquoUnsupervised hierarchical imagesegmentation through fuzzy entropy maximizationrdquo PatternRecognition vol 68 pp 245ndash259 2017

[3] H-C Li V A Krylov P-Z Fan J Zerubia and W J EmeryldquoUnsupervised learning of generalized gamma mixture modelwith application in statistical modeling of high-resolution SARimagesrdquo IEEE Transactions on Geoscience and Remote Sensingvol 54 no 4 pp 2153ndash2170 2016

[4] A Roy A Pal and U Garain ldquoJCLMM A finite mixturemodel for clustering of circular-linear data and its applicationto psoriatic plaque segmentationrdquo Pattern Recognition vol 66pp 160ndash173 2017

[5] Y Bar-Yosef and Y Bistritz ldquoGaussian mixture models reduc-tion by variational maximummutual informationrdquo IEEE Trans-actions on Signal Processing vol 63 no 6 pp 1557ndash1569 2015

[6] R Zhang D H Ye D Pal J-B Thibault K D Sauer and C Bouman ldquoA Gaussian mixture MRF for model-based iterativereconstruction with applications to low-dose X-ray CTrdquo IEEETransactions on Computational Imaging vol 2 no 3 pp 359ndash374 2016

[7] H Z Yerebakan and M Dundar ldquoPartially collapsed parallelGibbs sampler for Dirichlet process mixture modelsrdquo PatternRecognition Letters vol 90 pp 22ndash27 2017

[8] H Zhang QM JWu and TM Nguyen ldquoIncorporatingmeantemplate into finite mixture model for image segmentationrdquoIEEE Transactions on Neural Networks and Learning Systemsvol 24 no 2 pp 328ndash335 2013

[9] A Matza and Y Bistritz ldquoSkew Gaussian mixture models forspeaker recognitionrdquo IET Signal Processing vol 8 no 8 pp860ndash867 2014

[10] T-T Van Cao ldquoModelling of inhomogeneity in radar clutterusing weibull mixture densitiesrdquo IET Radar Sonar amp Naviga-tion vol 8 no 3 pp 180ndash194 2014

[11] Q Peng and L Zhao ldquoSAR image filtering based on the Cauchy-Rayleigh mixture modelrdquo IEEE Geoscience and Remote SensingLetters vol 11 no 5 pp 960ndash966 2014

[12] TMNguyen andQM JWu ldquoA nonsymmetricmixturemodelfor unsupervised image segmentationrdquo IEEE Transactions onCybernetics vol 43 no 2 pp 751ndash765 2013

Mathematical Problems in Engineering 17

[13] T M Nguyen Q M J Wu D Mukherjee and H ZhangldquoA bayesian bounded asymmetric mixture model with seg-mentation applicationrdquo IEEE Journal of Biomedical and HealthInformatics vol 18 no 1 pp 109ndash119 2014

[14] T M Nguyen and Q M J Wu ldquoBounded asymmetricalstudentrsquos-t mixture modelrdquo IEEE Transactions on Cyberneticsvol 44 no 6 pp 857ndash869 2014

[15] X Zhou R Peng and C Wang ldquoA two-component K-Lognormal mixture model and its parameter estimationmethodrdquo IEEE Transactions on Geoscience and Remote Sensingvol 53 no 5 pp 2640ndash2651 2015

[16] A De Angelis G De Angelis and P Carbone ldquoUsing Gaussian-Uniform Mixture Models for Robust Time-Interval Measure-mentrdquo IEEE Transactions on Instrumentation andMeasurementvol 64 no 12 pp 3545ndash3554 2015

[17] R P Browne P D McNicholas and M D Sparling ldquoModel-based learning using a mixture of mixtures of gaussian anduniform distributionsrdquo IEEE Transactions on Pattern Analysisand Machine Intelligence vol 34 no 4 pp 814ndash817 2012

[18] J Sun A Zhou S Keates and S Liao ldquoSimultaneous BayesianClustering and Feature Selection Through Studentrsquos t MixturesModelrdquo IEEE Transactions on Neural Networks and LearningSystems 2017

[19] N Neykov P Filzmoser R Dimova and P Neytchev ldquoRobustfitting of mixtures using the trimmed likelihood estimatorrdquoComputational Statistics amp Data Analysis vol 52 no 1 pp 299ndash308 2007

[20] C H Muller and N Neykov ldquoBreakdown points of trimmedlikelihood estimators and related estimators in generalizedlinear modelsrdquo Journal of Statistical Planning and Inference vol116 no 2 pp 503ndash519 2003

[21] A Galimzianova F Pernus B Likar and Z Spiclin ldquoRobustestimation of unbalanced mixture models on samples withoutliersrdquo IEEE Transactions on Pattern Analysis and MachineIntelligence vol 37 no 11 pp 2273ndash2285 2015

[22] N Atienza J Garcia-Heras and J M Munoz-Pichardo ldquoAnew condition for identifiability of finite mixture distributionsrdquoMetrika vol 63 no 2 pp 215ndash221 2006

[23] W R Press S A TeukolskyW T Vetterling and B P FlanneryNumerical Recipes in C The Art of Scientific Computing Cam-bridge University Press Cambridge UK 2002

[24] Y Zhang M Brady and S Smith ldquoSegmentation of brain MRimages through a hidden Markov random field model andthe expectation-maximization algorithmrdquo IEEE Transactionson Medical Imaging vol 20 no 1 pp 45ndash57 2001

[25] LHan J HHipwell B Eiben et al ldquoA nonlinear biomechanicalmodel based registration method for aligning prone and supinemr breast imagesrdquo IEEE Transactions on Medical Imaging vol33 no 3 pp 682ndash694 2014

[26] B Hariharan P Arbelaez L Bourdev S Maji and J MalikldquoSemantic contours from inverse detectorsrdquo in Proceedings ofthe 2011 IEEE International Conference on Computer VisionICCV 2011 pp 991ndash998 Spain November 2011

[27] IBSR [Online] Available httpwwwnitrcorgprojectsibsr[28] S Meignen and H Meignen ldquoOn the modeling of small sample

distributions with generalized Gaussian density in a maximumlikelihood frameworkrdquo IEEE Transactions on Image Processingvol 15 no 6 pp 1647ndash1652 2006

[29] G McLachlan and D Peel Finite Mixture Models JohnWiley ampSons New York NY USA 2000

[30] D Peel and G J McLachlan ldquoRobust mixture modelling usingthe t distributionrdquo Statistics and Computing vol 10 no 4 pp339ndash348 2000

[31] T Elguebaly and N Bouguila ldquoBayesian learning of finite gen-eralized Gaussianmixturemodels on imagesrdquo Signal Processingvol 91 no 4 pp 801ndash820 2011

Hindawiwwwhindawicom Volume 2018

MathematicsJournal of

Hindawiwwwhindawicom Volume 2018

Mathematical Problems in Engineering

Applied MathematicsJournal of

Hindawiwwwhindawicom Volume 2018

Probability and StatisticsHindawiwwwhindawicom Volume 2018

Journal of

Hindawiwwwhindawicom Volume 2018

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawiwwwhindawicom Volume 2018

OptimizationJournal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Engineering Mathematics

International Journal of

Hindawiwwwhindawicom Volume 2018

Operations ResearchAdvances in

Journal of

Hindawiwwwhindawicom Volume 2018

Function SpacesAbstract and Applied AnalysisHindawiwwwhindawicom Volume 2018

International Journal of Mathematics and Mathematical Sciences

Hindawiwwwhindawicom Volume 2018

Hindawi Publishing Corporation httpwwwhindawicom Volume 2013Hindawiwwwhindawicom

The Scientific World Journal

Volume 2018

Hindawiwwwhindawicom Volume 2018Volume 2018

Numerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisAdvances inAdvances in Discrete Dynamics in

Nature and SocietyHindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom

Dierential EquationsInternational Journal of

Volume 2018

Hindawiwwwhindawicom Volume 2018

Decision SciencesAdvances in

Hindawiwwwhindawicom Volume 2018

AnalysisInternational Journal of

Hindawiwwwhindawicom Volume 2018

Stochastic AnalysisInternational Journal of

Submit your manuscripts atwwwhindawicom

Page 12: Image Segmentation Using a Trimmed Likelihood Estimator in ... · ResearchArticle Image Segmentation Using a Trimmed Likelihood Estimator in the Asymmetric Mixture Model Based on

12 Mathematical Problems in Engineering

(a)

(b)

(c)

Figure 5 (a) to (c) display the segmentation results of MR image (slice 42 of IBSR2) using FAST-TLE CLO-TLE and GGMM-TLErespectively The trimming fractions from left to right are 01 02 03 04 and 05

FASTminusTLECLOminusTLEGGMMminusTLE

002

003

004

005

006

007

008

009

MCR

015 02 025 03 035 04 045 0501Trimming fraction

(a)

01

015

02

025

03

035

04

045

05

MCR

015 02 025 03 035 04 045 0501Trimming fraction

FAST-TLECLO-TLEGGMM-TLE

(b)

Figure 6 Trimming fractions versus classification accuracy under noise environment MCR of different methods for MR image (slice 42 ofIBSR2) (a) 5 Salt and Pepper noise and (b) 10 Salt and Pepper noise

Mathematical Problems in Engineering 13

Figure 7 Segmentation performance comparison for real-world images in the Gaussian noise environment (zero mean variance 001) Fromthe first row to the last noisy image GMM SMM GΓMM NSMM ACAP and GGMM-TLE respectively From the first column to the lastcolumn test image 24063 8068 241004 55067 and 35010 (Berkeley Dataset)

similar results to GGMM-TLE however its performance wasimpaired when there was an abundance of rich details forexample in test image 241004 (the sixth row of Figure 7)TheGGMM-TLE provided a moderately improved performanceunder different noisy conditions and eliminated the influenceof widely spread noise data This characteristic is endemicto MRF and trimmed likelihood estimator The resultingDSC is reported in Tables 2 and 3 providing a quantitativecomparison among the algorithms The DSC and standarddeviation indicate that the proposed method outperformedthe other methods by preserving the highest DSC

To further demonstrate the goodness of GGMM-TLEagainst different noise in Figure 9 we display the meanvalues and standard deviations of the MCR obtained fromtwenty runs on two Berkeley test images (24063 and 35010)under different noise environments Considering the MCRon average the ACAP effectively eliminated the effects ofnoise during the segmentation processing and demonstratedacceptable segmentation results We determined that classi-cal GMM SMM and GΓMM were severely influenced byGaussian noise and could not accurately separate a regionfrom the background In the majority of cases the NSMM

14 Mathematical Problems in Engineering

Figure 8 Segmentation performance comparison for real-world images in the Salt and Pepper noise environment (3) From the first rowto the last noisy image GMM SMM GΓMM NSMM ACAP and GGMM-TLE respectively From the first column to the last column testimage 24063 8068 241004 55067 and 35010 (Berkeley Dataset)

approach was superior to SMM and GMM yet continuedto be influenced by varying degrees of Salt and Pepperand Gaussian noise As expected compared to the otheralgorithms the GGMM-TLEwas stable and achieved the bestsegmentation results according to the quantitative criterion

7 Concluding Remarks

In this paper a robust estimation of the proposed GGMM-TLE using a trimmed likelihood estimator for real-worldimage segmentation was proposed GGMM-TLE with MRF

implements a mixture of generalized Gamma and Gaussiandistributions

The main contribution of this paper is the presentationof an asymmetric finite model GGMM-TLE based on MRFWith this model we have high flexibility to fit differentshapes of observed data Further this study discussed theproperty of identifiability of the proposed mixture modelguaranteeing that the estimation procedure for the parame-ters was correctly developed Then to ensure that GGMM-TLE was robust against heavy outliers the paper offeredan effective method to discard the outliers in advance and

Mathematical Problems in Engineering 15

Trimming fraction 02

2 5 8 10Salt amp Pepper noise ()

0

002

004

006

008

01

012

014

016

018

02

MCR

GMMSMM

NSMMACAPGGMM-TLEGΓMM

(a)

001 004 006 008Gaussian noise (zero mean variance)

Trimming fraction 02

GMMSMM

NSMMACAPGGMM-TLE

0

002

004

006

008

01

012

014

016

018

02

MCR

GΓMM(b)

2 5 8 10Salt amp Pepper noise ()

Trimming fraction 02

GMMSMM

NSMMACAPGGMM-TLE

0

002

004

006

008

01

012

014

016

018

02

MCR

GΓMM(c)

001 004 006 008Gaussian noise (zero mean variance)

Trimming fraction 02

0

002

004

006

008

01

012

014

016

018

02

MCR

GMMSMMGΓMM

NSMMACAPGGMM-TLE

(d)

Figure 9 Average MCR for different methods on two test images under noisy environment (Berkeley Dataset) ((a) and (b)) Test image24063 ((c) and (d)) test image 35010

therefore GGMM-TLE demonstrated superior performanceunder modelling with samples contaminated with unknownoutliers Finally combined with MRF GGMM-TLE consid-ered the spatial relationship between neighbouring pixels anddemonstrated a stronger ability to resist different noise Thesegmentation results of synthetic data and real-world imagesconfirmed that the proposed method demonstrated superiorcompetitivenessThemain limitation of this algorithm is that

the segmentation task requires component-based confidencelevel ordering which increases the computational cost

As a future work one direction is to obtain other finitemixture models by testing different probability density func-tions Another possible direction is to extend the presentedmethod to a higher dimension in a straightforward mannersuch as fMRI time-series clustering We plan to address thesetopics in a separate paper

16 Mathematical Problems in Engineering

Table 2AverageDSCof differentmethods for Berkeley test imageswith zeromeanGaussian noise variance 001 (meanplusmn standard deviation)Images Label GMM SMM GΓMM24063 3 086005 plusmn 032173 089173 plusmn 020353 088071 plusmn 0204728068 3 085737 plusmn 028634 087439 plusmn 027448 086005 plusmn 031348241004 4 085388 plusmn 032745 089018 plusmn 024834 087803 plusmn 02089555067 4 087931 plusmn 021733 088342 plusmn 021631 088013 plusmn 03055635010 4 084904 plusmn 033850 088463 plusmn 023632 087991 plusmn 018584Images Label NSMM ACAP GGMM-TLE24063 3 091825 plusmn 015734 095004 plusmn 014848 097122 plusmn 0148918068 3 089173 plusmn 028042 094485 plusmn 016055 096108 plusmn 010723241004 4 092006 plusmn 021634 094670 plusmn 013114 097061 plusmn 01482655067 4 092601 plusmn 017203 096893 plusmn 012051 097175 plusmn 01387635010 4 092183 plusmn 020847 095380 plusmn 013954 096219 plusmn 015664

Table 3 Average DSC of different methods for Berkeley test images with 3 Salt and Pepper noise (mean plusmn standard deviation)

Images Label GMM SMM GΓMM24063 3 087126 plusmn 030566 090972 plusmn 023385 088482 plusmn 0188088068 3 085234 plusmn 032531 088082 plusmn 030318 087627 plusmn 033601241004 4 087294 plusmn 035061 088385 plusmn 030458 089714 plusmn 02505455067 4 088099 plusmn 032392 090400 plusmn 025767 088517 plusmn 02109235010 4 085337 plusmn 025011 089075 plusmn 021839 087630 plusmn 016432Images Label NSMM ACAP GGMM-TLE24063 3 092079 plusmn 016049 096103 plusmn 023214 098394 plusmn 0075148068 3 090213 plusmn 025321 095329 plusmn 011539 097328 plusmn 011482241004 4 093487 plusmn 016806 096271 plusmn 012378 097117 plusmn 01368355067 4 093247 plusmn 018579 098076 plusmn 010690 098587 plusmn 00665535010 4 093752 plusmn 017804 096053 plusmn 010842 097420 plusmn 012722Conflicts of Interest

The authors declare that they have no conflicts of interest

Acknowledgments

This work was supported by the National Natural ScienceFoundation of China under Grant no 61371150

References

[1] A Saglam and N A Baykan ldquoSequential image segmentationbased on minimum spanning tree representationrdquo PatternRecognition Letters vol 87 pp 155ndash162 2017

[2] S Yin Y Qian andMGong ldquoUnsupervised hierarchical imagesegmentation through fuzzy entropy maximizationrdquo PatternRecognition vol 68 pp 245ndash259 2017

[3] H-C Li V A Krylov P-Z Fan J Zerubia and W J EmeryldquoUnsupervised learning of generalized gamma mixture modelwith application in statistical modeling of high-resolution SARimagesrdquo IEEE Transactions on Geoscience and Remote Sensingvol 54 no 4 pp 2153ndash2170 2016

[4] A Roy A Pal and U Garain ldquoJCLMM A finite mixturemodel for clustering of circular-linear data and its applicationto psoriatic plaque segmentationrdquo Pattern Recognition vol 66pp 160ndash173 2017

[5] Y Bar-Yosef and Y Bistritz ldquoGaussian mixture models reduc-tion by variational maximummutual informationrdquo IEEE Trans-actions on Signal Processing vol 63 no 6 pp 1557ndash1569 2015

[6] R Zhang D H Ye D Pal J-B Thibault K D Sauer and C Bouman ldquoA Gaussian mixture MRF for model-based iterativereconstruction with applications to low-dose X-ray CTrdquo IEEETransactions on Computational Imaging vol 2 no 3 pp 359ndash374 2016

[7] H Z Yerebakan and M Dundar ldquoPartially collapsed parallelGibbs sampler for Dirichlet process mixture modelsrdquo PatternRecognition Letters vol 90 pp 22ndash27 2017

[8] H Zhang QM JWu and TM Nguyen ldquoIncorporatingmeantemplate into finite mixture model for image segmentationrdquoIEEE Transactions on Neural Networks and Learning Systemsvol 24 no 2 pp 328ndash335 2013

[9] A Matza and Y Bistritz ldquoSkew Gaussian mixture models forspeaker recognitionrdquo IET Signal Processing vol 8 no 8 pp860ndash867 2014

[10] T-T Van Cao ldquoModelling of inhomogeneity in radar clutterusing weibull mixture densitiesrdquo IET Radar Sonar amp Naviga-tion vol 8 no 3 pp 180ndash194 2014

[11] Q Peng and L Zhao ldquoSAR image filtering based on the Cauchy-Rayleigh mixture modelrdquo IEEE Geoscience and Remote SensingLetters vol 11 no 5 pp 960ndash966 2014

[12] TMNguyen andQM JWu ldquoA nonsymmetricmixturemodelfor unsupervised image segmentationrdquo IEEE Transactions onCybernetics vol 43 no 2 pp 751ndash765 2013

Mathematical Problems in Engineering 17

[13] T M Nguyen Q M J Wu D Mukherjee and H ZhangldquoA bayesian bounded asymmetric mixture model with seg-mentation applicationrdquo IEEE Journal of Biomedical and HealthInformatics vol 18 no 1 pp 109ndash119 2014

[14] T M Nguyen and Q M J Wu ldquoBounded asymmetricalstudentrsquos-t mixture modelrdquo IEEE Transactions on Cyberneticsvol 44 no 6 pp 857ndash869 2014

[15] X Zhou R Peng and C Wang ldquoA two-component K-Lognormal mixture model and its parameter estimationmethodrdquo IEEE Transactions on Geoscience and Remote Sensingvol 53 no 5 pp 2640ndash2651 2015

[16] A De Angelis G De Angelis and P Carbone ldquoUsing Gaussian-Uniform Mixture Models for Robust Time-Interval Measure-mentrdquo IEEE Transactions on Instrumentation andMeasurementvol 64 no 12 pp 3545ndash3554 2015

[17] R P Browne P D McNicholas and M D Sparling ldquoModel-based learning using a mixture of mixtures of gaussian anduniform distributionsrdquo IEEE Transactions on Pattern Analysisand Machine Intelligence vol 34 no 4 pp 814ndash817 2012

[18] J Sun A Zhou S Keates and S Liao ldquoSimultaneous BayesianClustering and Feature Selection Through Studentrsquos t MixturesModelrdquo IEEE Transactions on Neural Networks and LearningSystems 2017

[19] N Neykov P Filzmoser R Dimova and P Neytchev ldquoRobustfitting of mixtures using the trimmed likelihood estimatorrdquoComputational Statistics amp Data Analysis vol 52 no 1 pp 299ndash308 2007

[20] C H Muller and N Neykov ldquoBreakdown points of trimmedlikelihood estimators and related estimators in generalizedlinear modelsrdquo Journal of Statistical Planning and Inference vol116 no 2 pp 503ndash519 2003

[21] A Galimzianova F Pernus B Likar and Z Spiclin ldquoRobustestimation of unbalanced mixture models on samples withoutliersrdquo IEEE Transactions on Pattern Analysis and MachineIntelligence vol 37 no 11 pp 2273ndash2285 2015

[22] N Atienza J Garcia-Heras and J M Munoz-Pichardo ldquoAnew condition for identifiability of finite mixture distributionsrdquoMetrika vol 63 no 2 pp 215ndash221 2006

[23] W R Press S A TeukolskyW T Vetterling and B P FlanneryNumerical Recipes in C The Art of Scientific Computing Cam-bridge University Press Cambridge UK 2002

[24] Y Zhang M Brady and S Smith ldquoSegmentation of brain MRimages through a hidden Markov random field model andthe expectation-maximization algorithmrdquo IEEE Transactionson Medical Imaging vol 20 no 1 pp 45ndash57 2001

[25] LHan J HHipwell B Eiben et al ldquoA nonlinear biomechanicalmodel based registration method for aligning prone and supinemr breast imagesrdquo IEEE Transactions on Medical Imaging vol33 no 3 pp 682ndash694 2014

[26] B Hariharan P Arbelaez L Bourdev S Maji and J MalikldquoSemantic contours from inverse detectorsrdquo in Proceedings ofthe 2011 IEEE International Conference on Computer VisionICCV 2011 pp 991ndash998 Spain November 2011

[27] IBSR [Online] Available httpwwwnitrcorgprojectsibsr[28] S Meignen and H Meignen ldquoOn the modeling of small sample

distributions with generalized Gaussian density in a maximumlikelihood frameworkrdquo IEEE Transactions on Image Processingvol 15 no 6 pp 1647ndash1652 2006

[29] G McLachlan and D Peel Finite Mixture Models JohnWiley ampSons New York NY USA 2000

[30] D Peel and G J McLachlan ldquoRobust mixture modelling usingthe t distributionrdquo Statistics and Computing vol 10 no 4 pp339ndash348 2000

[31] T Elguebaly and N Bouguila ldquoBayesian learning of finite gen-eralized Gaussianmixturemodels on imagesrdquo Signal Processingvol 91 no 4 pp 801ndash820 2011

Hindawiwwwhindawicom Volume 2018

MathematicsJournal of

Hindawiwwwhindawicom Volume 2018

Mathematical Problems in Engineering

Applied MathematicsJournal of

Hindawiwwwhindawicom Volume 2018

Probability and StatisticsHindawiwwwhindawicom Volume 2018

Journal of

Hindawiwwwhindawicom Volume 2018

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawiwwwhindawicom Volume 2018

OptimizationJournal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Engineering Mathematics

International Journal of

Hindawiwwwhindawicom Volume 2018

Operations ResearchAdvances in

Journal of

Hindawiwwwhindawicom Volume 2018

Function SpacesAbstract and Applied AnalysisHindawiwwwhindawicom Volume 2018

International Journal of Mathematics and Mathematical Sciences

Hindawiwwwhindawicom Volume 2018

Hindawi Publishing Corporation httpwwwhindawicom Volume 2013Hindawiwwwhindawicom

The Scientific World Journal

Volume 2018

Hindawiwwwhindawicom Volume 2018Volume 2018

Numerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisAdvances inAdvances in Discrete Dynamics in

Nature and SocietyHindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom

Dierential EquationsInternational Journal of

Volume 2018

Hindawiwwwhindawicom Volume 2018

Decision SciencesAdvances in

Hindawiwwwhindawicom Volume 2018

AnalysisInternational Journal of

Hindawiwwwhindawicom Volume 2018

Stochastic AnalysisInternational Journal of

Submit your manuscripts atwwwhindawicom

Page 13: Image Segmentation Using a Trimmed Likelihood Estimator in ... · ResearchArticle Image Segmentation Using a Trimmed Likelihood Estimator in the Asymmetric Mixture Model Based on

Mathematical Problems in Engineering 13

Figure 7 Segmentation performance comparison for real-world images in the Gaussian noise environment (zero mean variance 001) Fromthe first row to the last noisy image GMM SMM GΓMM NSMM ACAP and GGMM-TLE respectively From the first column to the lastcolumn test image 24063 8068 241004 55067 and 35010 (Berkeley Dataset)

similar results to GGMM-TLE however its performance wasimpaired when there was an abundance of rich details forexample in test image 241004 (the sixth row of Figure 7)TheGGMM-TLE provided a moderately improved performanceunder different noisy conditions and eliminated the influenceof widely spread noise data This characteristic is endemicto MRF and trimmed likelihood estimator The resultingDSC is reported in Tables 2 and 3 providing a quantitativecomparison among the algorithms The DSC and standarddeviation indicate that the proposed method outperformedthe other methods by preserving the highest DSC

To further demonstrate the goodness of GGMM-TLEagainst different noise in Figure 9 we display the meanvalues and standard deviations of the MCR obtained fromtwenty runs on two Berkeley test images (24063 and 35010)under different noise environments Considering the MCRon average the ACAP effectively eliminated the effects ofnoise during the segmentation processing and demonstratedacceptable segmentation results We determined that classi-cal GMM SMM and GΓMM were severely influenced byGaussian noise and could not accurately separate a regionfrom the background In the majority of cases the NSMM

14 Mathematical Problems in Engineering

Figure 8 Segmentation performance comparison for real-world images in the Salt and Pepper noise environment (3) From the first rowto the last noisy image GMM SMM GΓMM NSMM ACAP and GGMM-TLE respectively From the first column to the last column testimage 24063 8068 241004 55067 and 35010 (Berkeley Dataset)

approach was superior to SMM and GMM yet continuedto be influenced by varying degrees of Salt and Pepperand Gaussian noise As expected compared to the otheralgorithms the GGMM-TLEwas stable and achieved the bestsegmentation results according to the quantitative criterion

7 Concluding Remarks

In this paper a robust estimation of the proposed GGMM-TLE using a trimmed likelihood estimator for real-worldimage segmentation was proposed GGMM-TLE with MRF

implements a mixture of generalized Gamma and Gaussiandistributions

The main contribution of this paper is the presentationof an asymmetric finite model GGMM-TLE based on MRFWith this model we have high flexibility to fit differentshapes of observed data Further this study discussed theproperty of identifiability of the proposed mixture modelguaranteeing that the estimation procedure for the parame-ters was correctly developed Then to ensure that GGMM-TLE was robust against heavy outliers the paper offeredan effective method to discard the outliers in advance and

Mathematical Problems in Engineering 15

Trimming fraction 02

2 5 8 10Salt amp Pepper noise ()

0

002

004

006

008

01

012

014

016

018

02

MCR

GMMSMM

NSMMACAPGGMM-TLEGΓMM

(a)

001 004 006 008Gaussian noise (zero mean variance)

Trimming fraction 02

GMMSMM

NSMMACAPGGMM-TLE

0

002

004

006

008

01

012

014

016

018

02

MCR

GΓMM(b)

2 5 8 10Salt amp Pepper noise ()

Trimming fraction 02

GMMSMM

NSMMACAPGGMM-TLE

0

002

004

006

008

01

012

014

016

018

02

MCR

GΓMM(c)

001 004 006 008Gaussian noise (zero mean variance)

Trimming fraction 02

0

002

004

006

008

01

012

014

016

018

02

MCR

GMMSMMGΓMM

NSMMACAPGGMM-TLE

(d)

Figure 9 Average MCR for different methods on two test images under noisy environment (Berkeley Dataset) ((a) and (b)) Test image24063 ((c) and (d)) test image 35010

therefore GGMM-TLE demonstrated superior performanceunder modelling with samples contaminated with unknownoutliers Finally combined with MRF GGMM-TLE consid-ered the spatial relationship between neighbouring pixels anddemonstrated a stronger ability to resist different noise Thesegmentation results of synthetic data and real-world imagesconfirmed that the proposed method demonstrated superiorcompetitivenessThemain limitation of this algorithm is that

the segmentation task requires component-based confidencelevel ordering which increases the computational cost

As a future work one direction is to obtain other finitemixture models by testing different probability density func-tions Another possible direction is to extend the presentedmethod to a higher dimension in a straightforward mannersuch as fMRI time-series clustering We plan to address thesetopics in a separate paper

16 Mathematical Problems in Engineering

Table 2AverageDSCof differentmethods for Berkeley test imageswith zeromeanGaussian noise variance 001 (meanplusmn standard deviation)Images Label GMM SMM GΓMM24063 3 086005 plusmn 032173 089173 plusmn 020353 088071 plusmn 0204728068 3 085737 plusmn 028634 087439 plusmn 027448 086005 plusmn 031348241004 4 085388 plusmn 032745 089018 plusmn 024834 087803 plusmn 02089555067 4 087931 plusmn 021733 088342 plusmn 021631 088013 plusmn 03055635010 4 084904 plusmn 033850 088463 plusmn 023632 087991 plusmn 018584Images Label NSMM ACAP GGMM-TLE24063 3 091825 plusmn 015734 095004 plusmn 014848 097122 plusmn 0148918068 3 089173 plusmn 028042 094485 plusmn 016055 096108 plusmn 010723241004 4 092006 plusmn 021634 094670 plusmn 013114 097061 plusmn 01482655067 4 092601 plusmn 017203 096893 plusmn 012051 097175 plusmn 01387635010 4 092183 plusmn 020847 095380 plusmn 013954 096219 plusmn 015664

Table 3 Average DSC of different methods for Berkeley test images with 3 Salt and Pepper noise (mean plusmn standard deviation)

Images Label GMM SMM GΓMM24063 3 087126 plusmn 030566 090972 plusmn 023385 088482 plusmn 0188088068 3 085234 plusmn 032531 088082 plusmn 030318 087627 plusmn 033601241004 4 087294 plusmn 035061 088385 plusmn 030458 089714 plusmn 02505455067 4 088099 plusmn 032392 090400 plusmn 025767 088517 plusmn 02109235010 4 085337 plusmn 025011 089075 plusmn 021839 087630 plusmn 016432Images Label NSMM ACAP GGMM-TLE24063 3 092079 plusmn 016049 096103 plusmn 023214 098394 plusmn 0075148068 3 090213 plusmn 025321 095329 plusmn 011539 097328 plusmn 011482241004 4 093487 plusmn 016806 096271 plusmn 012378 097117 plusmn 01368355067 4 093247 plusmn 018579 098076 plusmn 010690 098587 plusmn 00665535010 4 093752 plusmn 017804 096053 plusmn 010842 097420 plusmn 012722Conflicts of Interest

The authors declare that they have no conflicts of interest

Acknowledgments

This work was supported by the National Natural ScienceFoundation of China under Grant no 61371150

References

[1] A Saglam and N A Baykan ldquoSequential image segmentationbased on minimum spanning tree representationrdquo PatternRecognition Letters vol 87 pp 155ndash162 2017

[2] S Yin Y Qian andMGong ldquoUnsupervised hierarchical imagesegmentation through fuzzy entropy maximizationrdquo PatternRecognition vol 68 pp 245ndash259 2017

[3] H-C Li V A Krylov P-Z Fan J Zerubia and W J EmeryldquoUnsupervised learning of generalized gamma mixture modelwith application in statistical modeling of high-resolution SARimagesrdquo IEEE Transactions on Geoscience and Remote Sensingvol 54 no 4 pp 2153ndash2170 2016

[4] A Roy A Pal and U Garain ldquoJCLMM A finite mixturemodel for clustering of circular-linear data and its applicationto psoriatic plaque segmentationrdquo Pattern Recognition vol 66pp 160ndash173 2017

[5] Y Bar-Yosef and Y Bistritz ldquoGaussian mixture models reduc-tion by variational maximummutual informationrdquo IEEE Trans-actions on Signal Processing vol 63 no 6 pp 1557ndash1569 2015

[6] R Zhang D H Ye D Pal J-B Thibault K D Sauer and C Bouman ldquoA Gaussian mixture MRF for model-based iterativereconstruction with applications to low-dose X-ray CTrdquo IEEETransactions on Computational Imaging vol 2 no 3 pp 359ndash374 2016

[7] H Z Yerebakan and M Dundar ldquoPartially collapsed parallelGibbs sampler for Dirichlet process mixture modelsrdquo PatternRecognition Letters vol 90 pp 22ndash27 2017

[8] H Zhang QM JWu and TM Nguyen ldquoIncorporatingmeantemplate into finite mixture model for image segmentationrdquoIEEE Transactions on Neural Networks and Learning Systemsvol 24 no 2 pp 328ndash335 2013

[9] A Matza and Y Bistritz ldquoSkew Gaussian mixture models forspeaker recognitionrdquo IET Signal Processing vol 8 no 8 pp860ndash867 2014

[10] T-T Van Cao ldquoModelling of inhomogeneity in radar clutterusing weibull mixture densitiesrdquo IET Radar Sonar amp Naviga-tion vol 8 no 3 pp 180ndash194 2014

[11] Q Peng and L Zhao ldquoSAR image filtering based on the Cauchy-Rayleigh mixture modelrdquo IEEE Geoscience and Remote SensingLetters vol 11 no 5 pp 960ndash966 2014

[12] TMNguyen andQM JWu ldquoA nonsymmetricmixturemodelfor unsupervised image segmentationrdquo IEEE Transactions onCybernetics vol 43 no 2 pp 751ndash765 2013

Mathematical Problems in Engineering 17

[13] T M Nguyen Q M J Wu D Mukherjee and H ZhangldquoA bayesian bounded asymmetric mixture model with seg-mentation applicationrdquo IEEE Journal of Biomedical and HealthInformatics vol 18 no 1 pp 109ndash119 2014

[14] T M Nguyen and Q M J Wu ldquoBounded asymmetricalstudentrsquos-t mixture modelrdquo IEEE Transactions on Cyberneticsvol 44 no 6 pp 857ndash869 2014

[15] X Zhou R Peng and C Wang ldquoA two-component K-Lognormal mixture model and its parameter estimationmethodrdquo IEEE Transactions on Geoscience and Remote Sensingvol 53 no 5 pp 2640ndash2651 2015

[16] A De Angelis G De Angelis and P Carbone ldquoUsing Gaussian-Uniform Mixture Models for Robust Time-Interval Measure-mentrdquo IEEE Transactions on Instrumentation andMeasurementvol 64 no 12 pp 3545ndash3554 2015

[17] R P Browne P D McNicholas and M D Sparling ldquoModel-based learning using a mixture of mixtures of gaussian anduniform distributionsrdquo IEEE Transactions on Pattern Analysisand Machine Intelligence vol 34 no 4 pp 814ndash817 2012

[18] J Sun A Zhou S Keates and S Liao ldquoSimultaneous BayesianClustering and Feature Selection Through Studentrsquos t MixturesModelrdquo IEEE Transactions on Neural Networks and LearningSystems 2017

[19] N Neykov P Filzmoser R Dimova and P Neytchev ldquoRobustfitting of mixtures using the trimmed likelihood estimatorrdquoComputational Statistics amp Data Analysis vol 52 no 1 pp 299ndash308 2007

[20] C H Muller and N Neykov ldquoBreakdown points of trimmedlikelihood estimators and related estimators in generalizedlinear modelsrdquo Journal of Statistical Planning and Inference vol116 no 2 pp 503ndash519 2003

[21] A Galimzianova F Pernus B Likar and Z Spiclin ldquoRobustestimation of unbalanced mixture models on samples withoutliersrdquo IEEE Transactions on Pattern Analysis and MachineIntelligence vol 37 no 11 pp 2273ndash2285 2015

[22] N Atienza J Garcia-Heras and J M Munoz-Pichardo ldquoAnew condition for identifiability of finite mixture distributionsrdquoMetrika vol 63 no 2 pp 215ndash221 2006

[23] W R Press S A TeukolskyW T Vetterling and B P FlanneryNumerical Recipes in C The Art of Scientific Computing Cam-bridge University Press Cambridge UK 2002

[24] Y Zhang M Brady and S Smith ldquoSegmentation of brain MRimages through a hidden Markov random field model andthe expectation-maximization algorithmrdquo IEEE Transactionson Medical Imaging vol 20 no 1 pp 45ndash57 2001

[25] LHan J HHipwell B Eiben et al ldquoA nonlinear biomechanicalmodel based registration method for aligning prone and supinemr breast imagesrdquo IEEE Transactions on Medical Imaging vol33 no 3 pp 682ndash694 2014

[26] B Hariharan P Arbelaez L Bourdev S Maji and J MalikldquoSemantic contours from inverse detectorsrdquo in Proceedings ofthe 2011 IEEE International Conference on Computer VisionICCV 2011 pp 991ndash998 Spain November 2011

[27] IBSR [Online] Available httpwwwnitrcorgprojectsibsr[28] S Meignen and H Meignen ldquoOn the modeling of small sample

distributions with generalized Gaussian density in a maximumlikelihood frameworkrdquo IEEE Transactions on Image Processingvol 15 no 6 pp 1647ndash1652 2006

[29] G McLachlan and D Peel Finite Mixture Models JohnWiley ampSons New York NY USA 2000

[30] D Peel and G J McLachlan ldquoRobust mixture modelling usingthe t distributionrdquo Statistics and Computing vol 10 no 4 pp339ndash348 2000

[31] T Elguebaly and N Bouguila ldquoBayesian learning of finite gen-eralized Gaussianmixturemodels on imagesrdquo Signal Processingvol 91 no 4 pp 801ndash820 2011

Hindawiwwwhindawicom Volume 2018

MathematicsJournal of

Hindawiwwwhindawicom Volume 2018

Mathematical Problems in Engineering

Applied MathematicsJournal of

Hindawiwwwhindawicom Volume 2018

Probability and StatisticsHindawiwwwhindawicom Volume 2018

Journal of

Hindawiwwwhindawicom Volume 2018

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawiwwwhindawicom Volume 2018

OptimizationJournal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Engineering Mathematics

International Journal of

Hindawiwwwhindawicom Volume 2018

Operations ResearchAdvances in

Journal of

Hindawiwwwhindawicom Volume 2018

Function SpacesAbstract and Applied AnalysisHindawiwwwhindawicom Volume 2018

International Journal of Mathematics and Mathematical Sciences

Hindawiwwwhindawicom Volume 2018

Hindawi Publishing Corporation httpwwwhindawicom Volume 2013Hindawiwwwhindawicom

The Scientific World Journal

Volume 2018

Hindawiwwwhindawicom Volume 2018Volume 2018

Numerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisAdvances inAdvances in Discrete Dynamics in

Nature and SocietyHindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom

Dierential EquationsInternational Journal of

Volume 2018

Hindawiwwwhindawicom Volume 2018

Decision SciencesAdvances in

Hindawiwwwhindawicom Volume 2018

AnalysisInternational Journal of

Hindawiwwwhindawicom Volume 2018

Stochastic AnalysisInternational Journal of

Submit your manuscripts atwwwhindawicom

Page 14: Image Segmentation Using a Trimmed Likelihood Estimator in ... · ResearchArticle Image Segmentation Using a Trimmed Likelihood Estimator in the Asymmetric Mixture Model Based on

14 Mathematical Problems in Engineering

Figure 8 Segmentation performance comparison for real-world images in the Salt and Pepper noise environment (3) From the first rowto the last noisy image GMM SMM GΓMM NSMM ACAP and GGMM-TLE respectively From the first column to the last column testimage 24063 8068 241004 55067 and 35010 (Berkeley Dataset)

approach was superior to SMM and GMM yet continuedto be influenced by varying degrees of Salt and Pepperand Gaussian noise As expected compared to the otheralgorithms the GGMM-TLEwas stable and achieved the bestsegmentation results according to the quantitative criterion

7 Concluding Remarks

In this paper a robust estimation of the proposed GGMM-TLE using a trimmed likelihood estimator for real-worldimage segmentation was proposed GGMM-TLE with MRF

implements a mixture of generalized Gamma and Gaussiandistributions

The main contribution of this paper is the presentationof an asymmetric finite model GGMM-TLE based on MRFWith this model we have high flexibility to fit differentshapes of observed data Further this study discussed theproperty of identifiability of the proposed mixture modelguaranteeing that the estimation procedure for the parame-ters was correctly developed Then to ensure that GGMM-TLE was robust against heavy outliers the paper offeredan effective method to discard the outliers in advance and

Mathematical Problems in Engineering 15

Trimming fraction 02

2 5 8 10Salt amp Pepper noise ()

0

002

004

006

008

01

012

014

016

018

02

MCR

GMMSMM

NSMMACAPGGMM-TLEGΓMM

(a)

001 004 006 008Gaussian noise (zero mean variance)

Trimming fraction 02

GMMSMM

NSMMACAPGGMM-TLE

0

002

004

006

008

01

012

014

016

018

02

MCR

GΓMM(b)

2 5 8 10Salt amp Pepper noise ()

Trimming fraction 02

GMMSMM

NSMMACAPGGMM-TLE

0

002

004

006

008

01

012

014

016

018

02

MCR

GΓMM(c)

001 004 006 008Gaussian noise (zero mean variance)

Trimming fraction 02

0

002

004

006

008

01

012

014

016

018

02

MCR

GMMSMMGΓMM

NSMMACAPGGMM-TLE

(d)

Figure 9 Average MCR for different methods on two test images under noisy environment (Berkeley Dataset) ((a) and (b)) Test image24063 ((c) and (d)) test image 35010

therefore GGMM-TLE demonstrated superior performanceunder modelling with samples contaminated with unknownoutliers Finally combined with MRF GGMM-TLE consid-ered the spatial relationship between neighbouring pixels anddemonstrated a stronger ability to resist different noise Thesegmentation results of synthetic data and real-world imagesconfirmed that the proposed method demonstrated superiorcompetitivenessThemain limitation of this algorithm is that

the segmentation task requires component-based confidencelevel ordering which increases the computational cost

As a future work one direction is to obtain other finitemixture models by testing different probability density func-tions Another possible direction is to extend the presentedmethod to a higher dimension in a straightforward mannersuch as fMRI time-series clustering We plan to address thesetopics in a separate paper

16 Mathematical Problems in Engineering

Table 2AverageDSCof differentmethods for Berkeley test imageswith zeromeanGaussian noise variance 001 (meanplusmn standard deviation)Images Label GMM SMM GΓMM24063 3 086005 plusmn 032173 089173 plusmn 020353 088071 plusmn 0204728068 3 085737 plusmn 028634 087439 plusmn 027448 086005 plusmn 031348241004 4 085388 plusmn 032745 089018 plusmn 024834 087803 plusmn 02089555067 4 087931 plusmn 021733 088342 plusmn 021631 088013 plusmn 03055635010 4 084904 plusmn 033850 088463 plusmn 023632 087991 plusmn 018584Images Label NSMM ACAP GGMM-TLE24063 3 091825 plusmn 015734 095004 plusmn 014848 097122 plusmn 0148918068 3 089173 plusmn 028042 094485 plusmn 016055 096108 plusmn 010723241004 4 092006 plusmn 021634 094670 plusmn 013114 097061 plusmn 01482655067 4 092601 plusmn 017203 096893 plusmn 012051 097175 plusmn 01387635010 4 092183 plusmn 020847 095380 plusmn 013954 096219 plusmn 015664

Table 3 Average DSC of different methods for Berkeley test images with 3 Salt and Pepper noise (mean plusmn standard deviation)

Images Label GMM SMM GΓMM24063 3 087126 plusmn 030566 090972 plusmn 023385 088482 plusmn 0188088068 3 085234 plusmn 032531 088082 plusmn 030318 087627 plusmn 033601241004 4 087294 plusmn 035061 088385 plusmn 030458 089714 plusmn 02505455067 4 088099 plusmn 032392 090400 plusmn 025767 088517 plusmn 02109235010 4 085337 plusmn 025011 089075 plusmn 021839 087630 plusmn 016432Images Label NSMM ACAP GGMM-TLE24063 3 092079 plusmn 016049 096103 plusmn 023214 098394 plusmn 0075148068 3 090213 plusmn 025321 095329 plusmn 011539 097328 plusmn 011482241004 4 093487 plusmn 016806 096271 plusmn 012378 097117 plusmn 01368355067 4 093247 plusmn 018579 098076 plusmn 010690 098587 plusmn 00665535010 4 093752 plusmn 017804 096053 plusmn 010842 097420 plusmn 012722Conflicts of Interest

The authors declare that they have no conflicts of interest

Acknowledgments

This work was supported by the National Natural ScienceFoundation of China under Grant no 61371150

References

[1] A Saglam and N A Baykan ldquoSequential image segmentationbased on minimum spanning tree representationrdquo PatternRecognition Letters vol 87 pp 155ndash162 2017

[2] S Yin Y Qian andMGong ldquoUnsupervised hierarchical imagesegmentation through fuzzy entropy maximizationrdquo PatternRecognition vol 68 pp 245ndash259 2017

[3] H-C Li V A Krylov P-Z Fan J Zerubia and W J EmeryldquoUnsupervised learning of generalized gamma mixture modelwith application in statistical modeling of high-resolution SARimagesrdquo IEEE Transactions on Geoscience and Remote Sensingvol 54 no 4 pp 2153ndash2170 2016

[4] A Roy A Pal and U Garain ldquoJCLMM A finite mixturemodel for clustering of circular-linear data and its applicationto psoriatic plaque segmentationrdquo Pattern Recognition vol 66pp 160ndash173 2017

[5] Y Bar-Yosef and Y Bistritz ldquoGaussian mixture models reduc-tion by variational maximummutual informationrdquo IEEE Trans-actions on Signal Processing vol 63 no 6 pp 1557ndash1569 2015

[6] R Zhang D H Ye D Pal J-B Thibault K D Sauer and C Bouman ldquoA Gaussian mixture MRF for model-based iterativereconstruction with applications to low-dose X-ray CTrdquo IEEETransactions on Computational Imaging vol 2 no 3 pp 359ndash374 2016

[7] H Z Yerebakan and M Dundar ldquoPartially collapsed parallelGibbs sampler for Dirichlet process mixture modelsrdquo PatternRecognition Letters vol 90 pp 22ndash27 2017

[8] H Zhang QM JWu and TM Nguyen ldquoIncorporatingmeantemplate into finite mixture model for image segmentationrdquoIEEE Transactions on Neural Networks and Learning Systemsvol 24 no 2 pp 328ndash335 2013

[9] A Matza and Y Bistritz ldquoSkew Gaussian mixture models forspeaker recognitionrdquo IET Signal Processing vol 8 no 8 pp860ndash867 2014

[10] T-T Van Cao ldquoModelling of inhomogeneity in radar clutterusing weibull mixture densitiesrdquo IET Radar Sonar amp Naviga-tion vol 8 no 3 pp 180ndash194 2014

[11] Q Peng and L Zhao ldquoSAR image filtering based on the Cauchy-Rayleigh mixture modelrdquo IEEE Geoscience and Remote SensingLetters vol 11 no 5 pp 960ndash966 2014

[12] TMNguyen andQM JWu ldquoA nonsymmetricmixturemodelfor unsupervised image segmentationrdquo IEEE Transactions onCybernetics vol 43 no 2 pp 751ndash765 2013

Mathematical Problems in Engineering 17

[13] T M Nguyen Q M J Wu D Mukherjee and H ZhangldquoA bayesian bounded asymmetric mixture model with seg-mentation applicationrdquo IEEE Journal of Biomedical and HealthInformatics vol 18 no 1 pp 109ndash119 2014

[14] T M Nguyen and Q M J Wu ldquoBounded asymmetricalstudentrsquos-t mixture modelrdquo IEEE Transactions on Cyberneticsvol 44 no 6 pp 857ndash869 2014

[15] X Zhou R Peng and C Wang ldquoA two-component K-Lognormal mixture model and its parameter estimationmethodrdquo IEEE Transactions on Geoscience and Remote Sensingvol 53 no 5 pp 2640ndash2651 2015

[16] A De Angelis G De Angelis and P Carbone ldquoUsing Gaussian-Uniform Mixture Models for Robust Time-Interval Measure-mentrdquo IEEE Transactions on Instrumentation andMeasurementvol 64 no 12 pp 3545ndash3554 2015

[17] R P Browne P D McNicholas and M D Sparling ldquoModel-based learning using a mixture of mixtures of gaussian anduniform distributionsrdquo IEEE Transactions on Pattern Analysisand Machine Intelligence vol 34 no 4 pp 814ndash817 2012

[18] J Sun A Zhou S Keates and S Liao ldquoSimultaneous BayesianClustering and Feature Selection Through Studentrsquos t MixturesModelrdquo IEEE Transactions on Neural Networks and LearningSystems 2017

[19] N Neykov P Filzmoser R Dimova and P Neytchev ldquoRobustfitting of mixtures using the trimmed likelihood estimatorrdquoComputational Statistics amp Data Analysis vol 52 no 1 pp 299ndash308 2007

[20] C H Muller and N Neykov ldquoBreakdown points of trimmedlikelihood estimators and related estimators in generalizedlinear modelsrdquo Journal of Statistical Planning and Inference vol116 no 2 pp 503ndash519 2003

[21] A Galimzianova F Pernus B Likar and Z Spiclin ldquoRobustestimation of unbalanced mixture models on samples withoutliersrdquo IEEE Transactions on Pattern Analysis and MachineIntelligence vol 37 no 11 pp 2273ndash2285 2015

[22] N Atienza J Garcia-Heras and J M Munoz-Pichardo ldquoAnew condition for identifiability of finite mixture distributionsrdquoMetrika vol 63 no 2 pp 215ndash221 2006

[23] W R Press S A TeukolskyW T Vetterling and B P FlanneryNumerical Recipes in C The Art of Scientific Computing Cam-bridge University Press Cambridge UK 2002

[24] Y Zhang M Brady and S Smith ldquoSegmentation of brain MRimages through a hidden Markov random field model andthe expectation-maximization algorithmrdquo IEEE Transactionson Medical Imaging vol 20 no 1 pp 45ndash57 2001

[25] LHan J HHipwell B Eiben et al ldquoA nonlinear biomechanicalmodel based registration method for aligning prone and supinemr breast imagesrdquo IEEE Transactions on Medical Imaging vol33 no 3 pp 682ndash694 2014

[26] B Hariharan P Arbelaez L Bourdev S Maji and J MalikldquoSemantic contours from inverse detectorsrdquo in Proceedings ofthe 2011 IEEE International Conference on Computer VisionICCV 2011 pp 991ndash998 Spain November 2011

[27] IBSR [Online] Available httpwwwnitrcorgprojectsibsr[28] S Meignen and H Meignen ldquoOn the modeling of small sample

distributions with generalized Gaussian density in a maximumlikelihood frameworkrdquo IEEE Transactions on Image Processingvol 15 no 6 pp 1647ndash1652 2006

[29] G McLachlan and D Peel Finite Mixture Models JohnWiley ampSons New York NY USA 2000

[30] D Peel and G J McLachlan ldquoRobust mixture modelling usingthe t distributionrdquo Statistics and Computing vol 10 no 4 pp339ndash348 2000

[31] T Elguebaly and N Bouguila ldquoBayesian learning of finite gen-eralized Gaussianmixturemodels on imagesrdquo Signal Processingvol 91 no 4 pp 801ndash820 2011

Hindawiwwwhindawicom Volume 2018

MathematicsJournal of

Hindawiwwwhindawicom Volume 2018

Mathematical Problems in Engineering

Applied MathematicsJournal of

Hindawiwwwhindawicom Volume 2018

Probability and StatisticsHindawiwwwhindawicom Volume 2018

Journal of

Hindawiwwwhindawicom Volume 2018

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawiwwwhindawicom Volume 2018

OptimizationJournal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Engineering Mathematics

International Journal of

Hindawiwwwhindawicom Volume 2018

Operations ResearchAdvances in

Journal of

Hindawiwwwhindawicom Volume 2018

Function SpacesAbstract and Applied AnalysisHindawiwwwhindawicom Volume 2018

International Journal of Mathematics and Mathematical Sciences

Hindawiwwwhindawicom Volume 2018

Hindawi Publishing Corporation httpwwwhindawicom Volume 2013Hindawiwwwhindawicom

The Scientific World Journal

Volume 2018

Hindawiwwwhindawicom Volume 2018Volume 2018

Numerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisAdvances inAdvances in Discrete Dynamics in

Nature and SocietyHindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom

Dierential EquationsInternational Journal of

Volume 2018

Hindawiwwwhindawicom Volume 2018

Decision SciencesAdvances in

Hindawiwwwhindawicom Volume 2018

AnalysisInternational Journal of

Hindawiwwwhindawicom Volume 2018

Stochastic AnalysisInternational Journal of

Submit your manuscripts atwwwhindawicom

Page 15: Image Segmentation Using a Trimmed Likelihood Estimator in ... · ResearchArticle Image Segmentation Using a Trimmed Likelihood Estimator in the Asymmetric Mixture Model Based on

Mathematical Problems in Engineering 15

Trimming fraction 02

2 5 8 10Salt amp Pepper noise ()

0

002

004

006

008

01

012

014

016

018

02

MCR

GMMSMM

NSMMACAPGGMM-TLEGΓMM

(a)

001 004 006 008Gaussian noise (zero mean variance)

Trimming fraction 02

GMMSMM

NSMMACAPGGMM-TLE

0

002

004

006

008

01

012

014

016

018

02

MCR

GΓMM(b)

2 5 8 10Salt amp Pepper noise ()

Trimming fraction 02

GMMSMM

NSMMACAPGGMM-TLE

0

002

004

006

008

01

012

014

016

018

02

MCR

GΓMM(c)

001 004 006 008Gaussian noise (zero mean variance)

Trimming fraction 02

0

002

004

006

008

01

012

014

016

018

02

MCR

GMMSMMGΓMM

NSMMACAPGGMM-TLE

(d)

Figure 9 Average MCR for different methods on two test images under noisy environment (Berkeley Dataset) ((a) and (b)) Test image24063 ((c) and (d)) test image 35010

therefore GGMM-TLE demonstrated superior performanceunder modelling with samples contaminated with unknownoutliers Finally combined with MRF GGMM-TLE consid-ered the spatial relationship between neighbouring pixels anddemonstrated a stronger ability to resist different noise Thesegmentation results of synthetic data and real-world imagesconfirmed that the proposed method demonstrated superiorcompetitivenessThemain limitation of this algorithm is that

the segmentation task requires component-based confidencelevel ordering which increases the computational cost

As a future work one direction is to obtain other finitemixture models by testing different probability density func-tions Another possible direction is to extend the presentedmethod to a higher dimension in a straightforward mannersuch as fMRI time-series clustering We plan to address thesetopics in a separate paper

16 Mathematical Problems in Engineering

Table 2AverageDSCof differentmethods for Berkeley test imageswith zeromeanGaussian noise variance 001 (meanplusmn standard deviation)Images Label GMM SMM GΓMM24063 3 086005 plusmn 032173 089173 plusmn 020353 088071 plusmn 0204728068 3 085737 plusmn 028634 087439 plusmn 027448 086005 plusmn 031348241004 4 085388 plusmn 032745 089018 plusmn 024834 087803 plusmn 02089555067 4 087931 plusmn 021733 088342 plusmn 021631 088013 plusmn 03055635010 4 084904 plusmn 033850 088463 plusmn 023632 087991 plusmn 018584Images Label NSMM ACAP GGMM-TLE24063 3 091825 plusmn 015734 095004 plusmn 014848 097122 plusmn 0148918068 3 089173 plusmn 028042 094485 plusmn 016055 096108 plusmn 010723241004 4 092006 plusmn 021634 094670 plusmn 013114 097061 plusmn 01482655067 4 092601 plusmn 017203 096893 plusmn 012051 097175 plusmn 01387635010 4 092183 plusmn 020847 095380 plusmn 013954 096219 plusmn 015664

Table 3 Average DSC of different methods for Berkeley test images with 3 Salt and Pepper noise (mean plusmn standard deviation)

Images Label GMM SMM GΓMM24063 3 087126 plusmn 030566 090972 plusmn 023385 088482 plusmn 0188088068 3 085234 plusmn 032531 088082 plusmn 030318 087627 plusmn 033601241004 4 087294 plusmn 035061 088385 plusmn 030458 089714 plusmn 02505455067 4 088099 plusmn 032392 090400 plusmn 025767 088517 plusmn 02109235010 4 085337 plusmn 025011 089075 plusmn 021839 087630 plusmn 016432Images Label NSMM ACAP GGMM-TLE24063 3 092079 plusmn 016049 096103 plusmn 023214 098394 plusmn 0075148068 3 090213 plusmn 025321 095329 plusmn 011539 097328 plusmn 011482241004 4 093487 plusmn 016806 096271 plusmn 012378 097117 plusmn 01368355067 4 093247 plusmn 018579 098076 plusmn 010690 098587 plusmn 00665535010 4 093752 plusmn 017804 096053 plusmn 010842 097420 plusmn 012722Conflicts of Interest

The authors declare that they have no conflicts of interest

Acknowledgments

This work was supported by the National Natural ScienceFoundation of China under Grant no 61371150

References

[1] A Saglam and N A Baykan ldquoSequential image segmentationbased on minimum spanning tree representationrdquo PatternRecognition Letters vol 87 pp 155ndash162 2017

[2] S Yin Y Qian andMGong ldquoUnsupervised hierarchical imagesegmentation through fuzzy entropy maximizationrdquo PatternRecognition vol 68 pp 245ndash259 2017

[3] H-C Li V A Krylov P-Z Fan J Zerubia and W J EmeryldquoUnsupervised learning of generalized gamma mixture modelwith application in statistical modeling of high-resolution SARimagesrdquo IEEE Transactions on Geoscience and Remote Sensingvol 54 no 4 pp 2153ndash2170 2016

[4] A Roy A Pal and U Garain ldquoJCLMM A finite mixturemodel for clustering of circular-linear data and its applicationto psoriatic plaque segmentationrdquo Pattern Recognition vol 66pp 160ndash173 2017

[5] Y Bar-Yosef and Y Bistritz ldquoGaussian mixture models reduc-tion by variational maximummutual informationrdquo IEEE Trans-actions on Signal Processing vol 63 no 6 pp 1557ndash1569 2015

[6] R Zhang D H Ye D Pal J-B Thibault K D Sauer and C Bouman ldquoA Gaussian mixture MRF for model-based iterativereconstruction with applications to low-dose X-ray CTrdquo IEEETransactions on Computational Imaging vol 2 no 3 pp 359ndash374 2016

[7] H Z Yerebakan and M Dundar ldquoPartially collapsed parallelGibbs sampler for Dirichlet process mixture modelsrdquo PatternRecognition Letters vol 90 pp 22ndash27 2017

[8] H Zhang QM JWu and TM Nguyen ldquoIncorporatingmeantemplate into finite mixture model for image segmentationrdquoIEEE Transactions on Neural Networks and Learning Systemsvol 24 no 2 pp 328ndash335 2013

[9] A Matza and Y Bistritz ldquoSkew Gaussian mixture models forspeaker recognitionrdquo IET Signal Processing vol 8 no 8 pp860ndash867 2014

[10] T-T Van Cao ldquoModelling of inhomogeneity in radar clutterusing weibull mixture densitiesrdquo IET Radar Sonar amp Naviga-tion vol 8 no 3 pp 180ndash194 2014

[11] Q Peng and L Zhao ldquoSAR image filtering based on the Cauchy-Rayleigh mixture modelrdquo IEEE Geoscience and Remote SensingLetters vol 11 no 5 pp 960ndash966 2014

[12] TMNguyen andQM JWu ldquoA nonsymmetricmixturemodelfor unsupervised image segmentationrdquo IEEE Transactions onCybernetics vol 43 no 2 pp 751ndash765 2013

Mathematical Problems in Engineering 17

[13] T M Nguyen Q M J Wu D Mukherjee and H ZhangldquoA bayesian bounded asymmetric mixture model with seg-mentation applicationrdquo IEEE Journal of Biomedical and HealthInformatics vol 18 no 1 pp 109ndash119 2014

[14] T M Nguyen and Q M J Wu ldquoBounded asymmetricalstudentrsquos-t mixture modelrdquo IEEE Transactions on Cyberneticsvol 44 no 6 pp 857ndash869 2014

[15] X Zhou R Peng and C Wang ldquoA two-component K-Lognormal mixture model and its parameter estimationmethodrdquo IEEE Transactions on Geoscience and Remote Sensingvol 53 no 5 pp 2640ndash2651 2015

[16] A De Angelis G De Angelis and P Carbone ldquoUsing Gaussian-Uniform Mixture Models for Robust Time-Interval Measure-mentrdquo IEEE Transactions on Instrumentation andMeasurementvol 64 no 12 pp 3545ndash3554 2015

[17] R P Browne P D McNicholas and M D Sparling ldquoModel-based learning using a mixture of mixtures of gaussian anduniform distributionsrdquo IEEE Transactions on Pattern Analysisand Machine Intelligence vol 34 no 4 pp 814ndash817 2012

[18] J Sun A Zhou S Keates and S Liao ldquoSimultaneous BayesianClustering and Feature Selection Through Studentrsquos t MixturesModelrdquo IEEE Transactions on Neural Networks and LearningSystems 2017

[19] N Neykov P Filzmoser R Dimova and P Neytchev ldquoRobustfitting of mixtures using the trimmed likelihood estimatorrdquoComputational Statistics amp Data Analysis vol 52 no 1 pp 299ndash308 2007

[20] C H Muller and N Neykov ldquoBreakdown points of trimmedlikelihood estimators and related estimators in generalizedlinear modelsrdquo Journal of Statistical Planning and Inference vol116 no 2 pp 503ndash519 2003

[21] A Galimzianova F Pernus B Likar and Z Spiclin ldquoRobustestimation of unbalanced mixture models on samples withoutliersrdquo IEEE Transactions on Pattern Analysis and MachineIntelligence vol 37 no 11 pp 2273ndash2285 2015

[22] N Atienza J Garcia-Heras and J M Munoz-Pichardo ldquoAnew condition for identifiability of finite mixture distributionsrdquoMetrika vol 63 no 2 pp 215ndash221 2006

[23] W R Press S A TeukolskyW T Vetterling and B P FlanneryNumerical Recipes in C The Art of Scientific Computing Cam-bridge University Press Cambridge UK 2002

[24] Y Zhang M Brady and S Smith ldquoSegmentation of brain MRimages through a hidden Markov random field model andthe expectation-maximization algorithmrdquo IEEE Transactionson Medical Imaging vol 20 no 1 pp 45ndash57 2001

[25] LHan J HHipwell B Eiben et al ldquoA nonlinear biomechanicalmodel based registration method for aligning prone and supinemr breast imagesrdquo IEEE Transactions on Medical Imaging vol33 no 3 pp 682ndash694 2014

[26] B Hariharan P Arbelaez L Bourdev S Maji and J MalikldquoSemantic contours from inverse detectorsrdquo in Proceedings ofthe 2011 IEEE International Conference on Computer VisionICCV 2011 pp 991ndash998 Spain November 2011

[27] IBSR [Online] Available httpwwwnitrcorgprojectsibsr[28] S Meignen and H Meignen ldquoOn the modeling of small sample

distributions with generalized Gaussian density in a maximumlikelihood frameworkrdquo IEEE Transactions on Image Processingvol 15 no 6 pp 1647ndash1652 2006

[29] G McLachlan and D Peel Finite Mixture Models JohnWiley ampSons New York NY USA 2000

[30] D Peel and G J McLachlan ldquoRobust mixture modelling usingthe t distributionrdquo Statistics and Computing vol 10 no 4 pp339ndash348 2000

[31] T Elguebaly and N Bouguila ldquoBayesian learning of finite gen-eralized Gaussianmixturemodels on imagesrdquo Signal Processingvol 91 no 4 pp 801ndash820 2011

Hindawiwwwhindawicom Volume 2018

MathematicsJournal of

Hindawiwwwhindawicom Volume 2018

Mathematical Problems in Engineering

Applied MathematicsJournal of

Hindawiwwwhindawicom Volume 2018

Probability and StatisticsHindawiwwwhindawicom Volume 2018

Journal of

Hindawiwwwhindawicom Volume 2018

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawiwwwhindawicom Volume 2018

OptimizationJournal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Engineering Mathematics

International Journal of

Hindawiwwwhindawicom Volume 2018

Operations ResearchAdvances in

Journal of

Hindawiwwwhindawicom Volume 2018

Function SpacesAbstract and Applied AnalysisHindawiwwwhindawicom Volume 2018

International Journal of Mathematics and Mathematical Sciences

Hindawiwwwhindawicom Volume 2018

Hindawi Publishing Corporation httpwwwhindawicom Volume 2013Hindawiwwwhindawicom

The Scientific World Journal

Volume 2018

Hindawiwwwhindawicom Volume 2018Volume 2018

Numerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisAdvances inAdvances in Discrete Dynamics in

Nature and SocietyHindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom

Dierential EquationsInternational Journal of

Volume 2018

Hindawiwwwhindawicom Volume 2018

Decision SciencesAdvances in

Hindawiwwwhindawicom Volume 2018

AnalysisInternational Journal of

Hindawiwwwhindawicom Volume 2018

Stochastic AnalysisInternational Journal of

Submit your manuscripts atwwwhindawicom

Page 16: Image Segmentation Using a Trimmed Likelihood Estimator in ... · ResearchArticle Image Segmentation Using a Trimmed Likelihood Estimator in the Asymmetric Mixture Model Based on

16 Mathematical Problems in Engineering

Table 2AverageDSCof differentmethods for Berkeley test imageswith zeromeanGaussian noise variance 001 (meanplusmn standard deviation)Images Label GMM SMM GΓMM24063 3 086005 plusmn 032173 089173 plusmn 020353 088071 plusmn 0204728068 3 085737 plusmn 028634 087439 plusmn 027448 086005 plusmn 031348241004 4 085388 plusmn 032745 089018 plusmn 024834 087803 plusmn 02089555067 4 087931 plusmn 021733 088342 plusmn 021631 088013 plusmn 03055635010 4 084904 plusmn 033850 088463 plusmn 023632 087991 plusmn 018584Images Label NSMM ACAP GGMM-TLE24063 3 091825 plusmn 015734 095004 plusmn 014848 097122 plusmn 0148918068 3 089173 plusmn 028042 094485 plusmn 016055 096108 plusmn 010723241004 4 092006 plusmn 021634 094670 plusmn 013114 097061 plusmn 01482655067 4 092601 plusmn 017203 096893 plusmn 012051 097175 plusmn 01387635010 4 092183 plusmn 020847 095380 plusmn 013954 096219 plusmn 015664

Table 3 Average DSC of different methods for Berkeley test images with 3 Salt and Pepper noise (mean plusmn standard deviation)

Images Label GMM SMM GΓMM24063 3 087126 plusmn 030566 090972 plusmn 023385 088482 plusmn 0188088068 3 085234 plusmn 032531 088082 plusmn 030318 087627 plusmn 033601241004 4 087294 plusmn 035061 088385 plusmn 030458 089714 plusmn 02505455067 4 088099 plusmn 032392 090400 plusmn 025767 088517 plusmn 02109235010 4 085337 plusmn 025011 089075 plusmn 021839 087630 plusmn 016432Images Label NSMM ACAP GGMM-TLE24063 3 092079 plusmn 016049 096103 plusmn 023214 098394 plusmn 0075148068 3 090213 plusmn 025321 095329 plusmn 011539 097328 plusmn 011482241004 4 093487 plusmn 016806 096271 plusmn 012378 097117 plusmn 01368355067 4 093247 plusmn 018579 098076 plusmn 010690 098587 plusmn 00665535010 4 093752 plusmn 017804 096053 plusmn 010842 097420 plusmn 012722Conflicts of Interest

The authors declare that they have no conflicts of interest

Acknowledgments

This work was supported by the National Natural ScienceFoundation of China under Grant no 61371150

References

[1] A Saglam and N A Baykan ldquoSequential image segmentationbased on minimum spanning tree representationrdquo PatternRecognition Letters vol 87 pp 155ndash162 2017

[2] S Yin Y Qian andMGong ldquoUnsupervised hierarchical imagesegmentation through fuzzy entropy maximizationrdquo PatternRecognition vol 68 pp 245ndash259 2017

[3] H-C Li V A Krylov P-Z Fan J Zerubia and W J EmeryldquoUnsupervised learning of generalized gamma mixture modelwith application in statistical modeling of high-resolution SARimagesrdquo IEEE Transactions on Geoscience and Remote Sensingvol 54 no 4 pp 2153ndash2170 2016

[4] A Roy A Pal and U Garain ldquoJCLMM A finite mixturemodel for clustering of circular-linear data and its applicationto psoriatic plaque segmentationrdquo Pattern Recognition vol 66pp 160ndash173 2017

[5] Y Bar-Yosef and Y Bistritz ldquoGaussian mixture models reduc-tion by variational maximummutual informationrdquo IEEE Trans-actions on Signal Processing vol 63 no 6 pp 1557ndash1569 2015

[6] R Zhang D H Ye D Pal J-B Thibault K D Sauer and C Bouman ldquoA Gaussian mixture MRF for model-based iterativereconstruction with applications to low-dose X-ray CTrdquo IEEETransactions on Computational Imaging vol 2 no 3 pp 359ndash374 2016

[7] H Z Yerebakan and M Dundar ldquoPartially collapsed parallelGibbs sampler for Dirichlet process mixture modelsrdquo PatternRecognition Letters vol 90 pp 22ndash27 2017

[8] H Zhang QM JWu and TM Nguyen ldquoIncorporatingmeantemplate into finite mixture model for image segmentationrdquoIEEE Transactions on Neural Networks and Learning Systemsvol 24 no 2 pp 328ndash335 2013

[9] A Matza and Y Bistritz ldquoSkew Gaussian mixture models forspeaker recognitionrdquo IET Signal Processing vol 8 no 8 pp860ndash867 2014

[10] T-T Van Cao ldquoModelling of inhomogeneity in radar clutterusing weibull mixture densitiesrdquo IET Radar Sonar amp Naviga-tion vol 8 no 3 pp 180ndash194 2014

[11] Q Peng and L Zhao ldquoSAR image filtering based on the Cauchy-Rayleigh mixture modelrdquo IEEE Geoscience and Remote SensingLetters vol 11 no 5 pp 960ndash966 2014

[12] TMNguyen andQM JWu ldquoA nonsymmetricmixturemodelfor unsupervised image segmentationrdquo IEEE Transactions onCybernetics vol 43 no 2 pp 751ndash765 2013

Mathematical Problems in Engineering 17

[13] T M Nguyen Q M J Wu D Mukherjee and H ZhangldquoA bayesian bounded asymmetric mixture model with seg-mentation applicationrdquo IEEE Journal of Biomedical and HealthInformatics vol 18 no 1 pp 109ndash119 2014

[14] T M Nguyen and Q M J Wu ldquoBounded asymmetricalstudentrsquos-t mixture modelrdquo IEEE Transactions on Cyberneticsvol 44 no 6 pp 857ndash869 2014

[15] X Zhou R Peng and C Wang ldquoA two-component K-Lognormal mixture model and its parameter estimationmethodrdquo IEEE Transactions on Geoscience and Remote Sensingvol 53 no 5 pp 2640ndash2651 2015

[16] A De Angelis G De Angelis and P Carbone ldquoUsing Gaussian-Uniform Mixture Models for Robust Time-Interval Measure-mentrdquo IEEE Transactions on Instrumentation andMeasurementvol 64 no 12 pp 3545ndash3554 2015

[17] R P Browne P D McNicholas and M D Sparling ldquoModel-based learning using a mixture of mixtures of gaussian anduniform distributionsrdquo IEEE Transactions on Pattern Analysisand Machine Intelligence vol 34 no 4 pp 814ndash817 2012

[18] J Sun A Zhou S Keates and S Liao ldquoSimultaneous BayesianClustering and Feature Selection Through Studentrsquos t MixturesModelrdquo IEEE Transactions on Neural Networks and LearningSystems 2017

[19] N Neykov P Filzmoser R Dimova and P Neytchev ldquoRobustfitting of mixtures using the trimmed likelihood estimatorrdquoComputational Statistics amp Data Analysis vol 52 no 1 pp 299ndash308 2007

[20] C H Muller and N Neykov ldquoBreakdown points of trimmedlikelihood estimators and related estimators in generalizedlinear modelsrdquo Journal of Statistical Planning and Inference vol116 no 2 pp 503ndash519 2003

[21] A Galimzianova F Pernus B Likar and Z Spiclin ldquoRobustestimation of unbalanced mixture models on samples withoutliersrdquo IEEE Transactions on Pattern Analysis and MachineIntelligence vol 37 no 11 pp 2273ndash2285 2015

[22] N Atienza J Garcia-Heras and J M Munoz-Pichardo ldquoAnew condition for identifiability of finite mixture distributionsrdquoMetrika vol 63 no 2 pp 215ndash221 2006

[23] W R Press S A TeukolskyW T Vetterling and B P FlanneryNumerical Recipes in C The Art of Scientific Computing Cam-bridge University Press Cambridge UK 2002

[24] Y Zhang M Brady and S Smith ldquoSegmentation of brain MRimages through a hidden Markov random field model andthe expectation-maximization algorithmrdquo IEEE Transactionson Medical Imaging vol 20 no 1 pp 45ndash57 2001

[25] LHan J HHipwell B Eiben et al ldquoA nonlinear biomechanicalmodel based registration method for aligning prone and supinemr breast imagesrdquo IEEE Transactions on Medical Imaging vol33 no 3 pp 682ndash694 2014

[26] B Hariharan P Arbelaez L Bourdev S Maji and J MalikldquoSemantic contours from inverse detectorsrdquo in Proceedings ofthe 2011 IEEE International Conference on Computer VisionICCV 2011 pp 991ndash998 Spain November 2011

[27] IBSR [Online] Available httpwwwnitrcorgprojectsibsr[28] S Meignen and H Meignen ldquoOn the modeling of small sample

distributions with generalized Gaussian density in a maximumlikelihood frameworkrdquo IEEE Transactions on Image Processingvol 15 no 6 pp 1647ndash1652 2006

[29] G McLachlan and D Peel Finite Mixture Models JohnWiley ampSons New York NY USA 2000

[30] D Peel and G J McLachlan ldquoRobust mixture modelling usingthe t distributionrdquo Statistics and Computing vol 10 no 4 pp339ndash348 2000

[31] T Elguebaly and N Bouguila ldquoBayesian learning of finite gen-eralized Gaussianmixturemodels on imagesrdquo Signal Processingvol 91 no 4 pp 801ndash820 2011

Hindawiwwwhindawicom Volume 2018

MathematicsJournal of

Hindawiwwwhindawicom Volume 2018

Mathematical Problems in Engineering

Applied MathematicsJournal of

Hindawiwwwhindawicom Volume 2018

Probability and StatisticsHindawiwwwhindawicom Volume 2018

Journal of

Hindawiwwwhindawicom Volume 2018

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawiwwwhindawicom Volume 2018

OptimizationJournal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Engineering Mathematics

International Journal of

Hindawiwwwhindawicom Volume 2018

Operations ResearchAdvances in

Journal of

Hindawiwwwhindawicom Volume 2018

Function SpacesAbstract and Applied AnalysisHindawiwwwhindawicom Volume 2018

International Journal of Mathematics and Mathematical Sciences

Hindawiwwwhindawicom Volume 2018

Hindawi Publishing Corporation httpwwwhindawicom Volume 2013Hindawiwwwhindawicom

The Scientific World Journal

Volume 2018

Hindawiwwwhindawicom Volume 2018Volume 2018

Numerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisAdvances inAdvances in Discrete Dynamics in

Nature and SocietyHindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom

Dierential EquationsInternational Journal of

Volume 2018

Hindawiwwwhindawicom Volume 2018

Decision SciencesAdvances in

Hindawiwwwhindawicom Volume 2018

AnalysisInternational Journal of

Hindawiwwwhindawicom Volume 2018

Stochastic AnalysisInternational Journal of

Submit your manuscripts atwwwhindawicom

Page 17: Image Segmentation Using a Trimmed Likelihood Estimator in ... · ResearchArticle Image Segmentation Using a Trimmed Likelihood Estimator in the Asymmetric Mixture Model Based on

Mathematical Problems in Engineering 17

[13] T M Nguyen Q M J Wu D Mukherjee and H ZhangldquoA bayesian bounded asymmetric mixture model with seg-mentation applicationrdquo IEEE Journal of Biomedical and HealthInformatics vol 18 no 1 pp 109ndash119 2014

[14] T M Nguyen and Q M J Wu ldquoBounded asymmetricalstudentrsquos-t mixture modelrdquo IEEE Transactions on Cyberneticsvol 44 no 6 pp 857ndash869 2014

[15] X Zhou R Peng and C Wang ldquoA two-component K-Lognormal mixture model and its parameter estimationmethodrdquo IEEE Transactions on Geoscience and Remote Sensingvol 53 no 5 pp 2640ndash2651 2015

[16] A De Angelis G De Angelis and P Carbone ldquoUsing Gaussian-Uniform Mixture Models for Robust Time-Interval Measure-mentrdquo IEEE Transactions on Instrumentation andMeasurementvol 64 no 12 pp 3545ndash3554 2015

[17] R P Browne P D McNicholas and M D Sparling ldquoModel-based learning using a mixture of mixtures of gaussian anduniform distributionsrdquo IEEE Transactions on Pattern Analysisand Machine Intelligence vol 34 no 4 pp 814ndash817 2012

[18] J Sun A Zhou S Keates and S Liao ldquoSimultaneous BayesianClustering and Feature Selection Through Studentrsquos t MixturesModelrdquo IEEE Transactions on Neural Networks and LearningSystems 2017

[19] N Neykov P Filzmoser R Dimova and P Neytchev ldquoRobustfitting of mixtures using the trimmed likelihood estimatorrdquoComputational Statistics amp Data Analysis vol 52 no 1 pp 299ndash308 2007

[20] C H Muller and N Neykov ldquoBreakdown points of trimmedlikelihood estimators and related estimators in generalizedlinear modelsrdquo Journal of Statistical Planning and Inference vol116 no 2 pp 503ndash519 2003

[21] A Galimzianova F Pernus B Likar and Z Spiclin ldquoRobustestimation of unbalanced mixture models on samples withoutliersrdquo IEEE Transactions on Pattern Analysis and MachineIntelligence vol 37 no 11 pp 2273ndash2285 2015

[22] N Atienza J Garcia-Heras and J M Munoz-Pichardo ldquoAnew condition for identifiability of finite mixture distributionsrdquoMetrika vol 63 no 2 pp 215ndash221 2006

[23] W R Press S A TeukolskyW T Vetterling and B P FlanneryNumerical Recipes in C The Art of Scientific Computing Cam-bridge University Press Cambridge UK 2002

[24] Y Zhang M Brady and S Smith ldquoSegmentation of brain MRimages through a hidden Markov random field model andthe expectation-maximization algorithmrdquo IEEE Transactionson Medical Imaging vol 20 no 1 pp 45ndash57 2001

[25] LHan J HHipwell B Eiben et al ldquoA nonlinear biomechanicalmodel based registration method for aligning prone and supinemr breast imagesrdquo IEEE Transactions on Medical Imaging vol33 no 3 pp 682ndash694 2014

[26] B Hariharan P Arbelaez L Bourdev S Maji and J MalikldquoSemantic contours from inverse detectorsrdquo in Proceedings ofthe 2011 IEEE International Conference on Computer VisionICCV 2011 pp 991ndash998 Spain November 2011

[27] IBSR [Online] Available httpwwwnitrcorgprojectsibsr[28] S Meignen and H Meignen ldquoOn the modeling of small sample

distributions with generalized Gaussian density in a maximumlikelihood frameworkrdquo IEEE Transactions on Image Processingvol 15 no 6 pp 1647ndash1652 2006

[29] G McLachlan and D Peel Finite Mixture Models JohnWiley ampSons New York NY USA 2000

[30] D Peel and G J McLachlan ldquoRobust mixture modelling usingthe t distributionrdquo Statistics and Computing vol 10 no 4 pp339ndash348 2000

[31] T Elguebaly and N Bouguila ldquoBayesian learning of finite gen-eralized Gaussianmixturemodels on imagesrdquo Signal Processingvol 91 no 4 pp 801ndash820 2011

Hindawiwwwhindawicom Volume 2018

MathematicsJournal of

Hindawiwwwhindawicom Volume 2018

Mathematical Problems in Engineering

Applied MathematicsJournal of

Hindawiwwwhindawicom Volume 2018

Probability and StatisticsHindawiwwwhindawicom Volume 2018

Journal of

Hindawiwwwhindawicom Volume 2018

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawiwwwhindawicom Volume 2018

OptimizationJournal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Engineering Mathematics

International Journal of

Hindawiwwwhindawicom Volume 2018

Operations ResearchAdvances in

Journal of

Hindawiwwwhindawicom Volume 2018

Function SpacesAbstract and Applied AnalysisHindawiwwwhindawicom Volume 2018

International Journal of Mathematics and Mathematical Sciences

Hindawiwwwhindawicom Volume 2018

Hindawi Publishing Corporation httpwwwhindawicom Volume 2013Hindawiwwwhindawicom

The Scientific World Journal

Volume 2018

Hindawiwwwhindawicom Volume 2018Volume 2018

Numerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisAdvances inAdvances in Discrete Dynamics in

Nature and SocietyHindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom

Dierential EquationsInternational Journal of

Volume 2018

Hindawiwwwhindawicom Volume 2018

Decision SciencesAdvances in

Hindawiwwwhindawicom Volume 2018

AnalysisInternational Journal of

Hindawiwwwhindawicom Volume 2018

Stochastic AnalysisInternational Journal of

Submit your manuscripts atwwwhindawicom

Page 18: Image Segmentation Using a Trimmed Likelihood Estimator in ... · ResearchArticle Image Segmentation Using a Trimmed Likelihood Estimator in the Asymmetric Mixture Model Based on

Hindawiwwwhindawicom Volume 2018

MathematicsJournal of

Hindawiwwwhindawicom Volume 2018

Mathematical Problems in Engineering

Applied MathematicsJournal of

Hindawiwwwhindawicom Volume 2018

Probability and StatisticsHindawiwwwhindawicom Volume 2018

Journal of

Hindawiwwwhindawicom Volume 2018

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawiwwwhindawicom Volume 2018

OptimizationJournal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Engineering Mathematics

International Journal of

Hindawiwwwhindawicom Volume 2018

Operations ResearchAdvances in

Journal of

Hindawiwwwhindawicom Volume 2018

Function SpacesAbstract and Applied AnalysisHindawiwwwhindawicom Volume 2018

International Journal of Mathematics and Mathematical Sciences

Hindawiwwwhindawicom Volume 2018

Hindawi Publishing Corporation httpwwwhindawicom Volume 2013Hindawiwwwhindawicom

The Scientific World Journal

Volume 2018

Hindawiwwwhindawicom Volume 2018Volume 2018

Numerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisAdvances inAdvances in Discrete Dynamics in

Nature and SocietyHindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom

Dierential EquationsInternational Journal of

Volume 2018

Hindawiwwwhindawicom Volume 2018

Decision SciencesAdvances in

Hindawiwwwhindawicom Volume 2018

AnalysisInternational Journal of

Hindawiwwwhindawicom Volume 2018

Stochastic AnalysisInternational Journal of

Submit your manuscripts atwwwhindawicom


Recommended