
Research Article
Rician Noise Removal via a Learned Dictionary

Jian Lu,1 Jiapeng Tian,1 Lixin Shen,2 Qingtang Jiang,3 Xueying Zeng,4 and Yuru Zou1

1Shenzhen Key Laboratory of Advanced Machine Learning and Applications, College of Mathematics and Statistics, Shenzhen University, Shenzhen 518060, China
2Department of Mathematics, Syracuse University, Syracuse, NY 13244, USA
3Department of Mathematics and Computer Science, University of Missouri-St. Louis, St. Louis, MO 63121, USA
4School of Mathematical Sciences, Ocean University of China, Qingdao 266100, China

Correspondence should be addressed to Yuru Zou; yrzou@163.com

Received 16 September 2018; Revised 9 January 2019; Accepted 3 February 2019; Published 18 February 2019

Academic Editor: Elisa Francomano

Copyright © 2019 Jian Lu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

This paper proposes a new effective model for denoising images with Rician noise. The sparse representations of images have been shown to be efficient approaches for image processing. Inspired by this, we learn a dictionary from the noisy image and then combine the MAP model with it for Rician noise removal. For solving the proposed model, the primal-dual algorithm is applied and its convergence is studied. The computational results show that the proposed method is promising in restoring images with Rician noise.

1. Introduction

Image denoising is one of the most fundamental issues in image processing [1, 2]. In the past decades, many methods have been proposed for removing Gaussian white noise [3, 4], impulse noise [5-9], Poisson noise [10-12], and multiplicative noise [13-21]. With Magnetic Resonance Imaging (MRI) being widely used, people have become increasingly concerned with another important type of noise, Rician noise. In this paper, we mainly study Rician noise and propose a new model that removes it more efficiently. Mathematically, the image y degraded by Rician noise can be given by

y = \sqrt{(x + \eta_1)^2 + \eta_2^2},   (1)

where x is the original image and \eta_1, \eta_2 \sim N(0, \sigma^2). Our goal is to recover the unknown true image x from the degraded image y as well as possible.
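For readers who want to reproduce the degradation model (1), the following is a minimal NumPy sketch (our own illustration, not the authors' code) that synthesizes a Rician-corrupted image from a clean one; the synthetic test image and the value sigma = 10 are placeholders.

```python
import numpy as np

def add_rician_noise(x, sigma, rng=None):
    """Degrade a clean image x by Rician noise following model (1):
    y = sqrt((x + eta1)^2 + eta2^2), with eta1, eta2 ~ N(0, sigma^2)."""
    rng = np.random.default_rng() if rng is None else rng
    eta1 = rng.normal(0.0, sigma, size=x.shape)
    eta2 = rng.normal(0.0, sigma, size=x.shape)
    return np.sqrt((x + eta1) ** 2 + eta2 ** 2)

# Example: corrupt a synthetic 64x64 intensity ramp with sigma = 10.
clean = np.tile(np.linspace(0.0, 255.0, 64), (64, 1))
noisy = add_rician_noise(clean, sigma=10.0)
```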

In recent years, many methods have been proposed for denoising images corrupted by Rician noise, such as filtering methods, wavelet methods, nonconvex methods, and convex methods. Based on the anisotropic diffusion process proposed by Perona and Malik [22], Gerig et al. [23] presented a nonlinear anisotropic filtering method for Rician noise removal that preserves edge information, although some other details may be lost. Building on the nonlocal means algorithm, Prima et al. [24] proposed nonlocal means variants for denoising diffusion-weighted and diffusion tensor MR images. Manjón et al. [25] also proposed a new filter based on nonlocal means (below we call it the NLM model) to reduce the noise in MR images. Inspired by it, Wiest-Daesslé et al. [26] adapted the nonlocal means filter to data corrupted by Rician noise and presented a nonlocal means filtering method that better respects image details and structures. In [27], Nowak came up with a wavelet domain filtering method which adapts to variations in both the signal and the noise. From a single Rician-distributed image, Foi [28] presented a stable and fast iterative procedure for robustly estimating the noise level and also proposed a variance-stabilization method for efficiently removing Rician noise. In [29], Wood and Johnson proposed a wavelet packet denoising method for Rician noise removal at low SNR. Wavelet methods are efficient at removing Rician noise while preserving image details and edges, but the problem that small dots disturb the image analysis process remains unresolved.



In the meantime, the maximum a posteriori (MAP) estimation model was proposed; it is derived from the statistics of the noise-free image and includes a data fidelity term. The MAP model is as follows:

\arg\min_{x} \frac{1}{2\sigma^{2}}\int_{\Omega} x^{2}\,dt - \int_{\Omega} \log I_{0}\!\left(\frac{xy}{\sigma^{2}}\right) dt + \lambda \int_{\Omega} |Dx|\,dt,   (2)

where I_0 is the modified Bessel function of the first kind with order zero [30]. However, the objective is nonconvex, which leads to a difficult problem to solve. In view of the MAP model, the GTV model [31] was put forward by Getreuer et al.; it is a convex approximation of the MAP model and can be solved easily, but the fidelity term of the GTV model is a complicated piecewise function. Chen and Zeng [32] proposed a new convex model that adds a statistical property of Rician noise to the MAP model, leading to a strictly convex model under a mild condition that can be solved easily by the primal-dual algorithm; below we call it the CZ model.

In this paper, we study Rician noise and propose a new, reasonable, and efficient model for Rician noise removal. As we know, natural images have a vital feature, sparseness, and dictionary learning is widely used for image denoising. Dictionary learning has been demonstrated to be efficient for removing various kinds of noise. Aharon and Elad [33, 34] proposed the K-SVD algorithm for designing a dictionary with sparse representation, and it has proven effective for additive white Gaussian noise removal. Inspired by the K-SVD algorithm, Huang et al. [15] proposed a new model that combines the "AA" model [35] and the K-SVD algorithm to remove multiplicative noise and also presented a log - l0 minimization approach to solve it. Xiao et al. [11] and Ma et al. [10] also proposed new models via dictionary learning for Poisson noise removal and Poisson image deblurring. In addition, Liu et al. [36] applied a two-level Bregman method to dictionary updating and proposed an efficient algorithm for reconstructing MR images. In [37], integrating total variation (TV) and dictionary learning, Liu et al. also proposed adaptive dictionary learning in the sparse gradient domain for image recovery. Similarly, integrating total generalized variation and adaptive dictionary learning, Lu et al. [38] presented a novel dictionary learning model for MRI reconstruction. Therefore, we attempt to apply sparse representation and dictionary learning to the MAP model for Rician noise removal. Owing to the nonconvexity of the MAP model, we add the sparse representation term to overcome this drawback, so that the classical primal-dual algorithm can be used to solve the model.

The outline of our paper is as follows. In Section 2, we first briefly introduce dictionary learning and sparse representation, and then we propose the new model that combines the MAP model with a sparse representation term. We also give and elaborate the two-step algorithm for solving our model. In Section 3, we demonstrate with numerical results that our model outperforms the other methods for Rician noise removal. Finally, we draw our conclusion in Section 4.

2. Our Proposed Model

Generally speaking, we consider that every signal instance from the family can be represented as a linear combination of a few columns from a redundant dictionary. For the degraded image y \in R^{\sqrt{N}\times\sqrt{N}}, we take an image patch of size \sqrt{n}\times\sqrt{n} and order it lexicographically as a column vector Y \in R^{n}. We define a dictionary D \in R^{n\times k} to construct the Sparseland model, where k > n implies that the dictionary D is redundant. Meanwhile, we assume for now that the dictionary D is known and fixed. Then the column vector Y can be sparsely and linearly represented by a few atoms selected from the dictionary D. That is to say, there is a sparse solution of the following problem:

\hat{\alpha} = \arg\min_{\alpha} \|\alpha\|_{0} \quad \text{subject to} \quad D\alpha \approx Y.   (3)

That is, \|\hat{\alpha}\|_{0} \ll n, where \|\alpha\|_{0} denotes the number of nonzero entries in \alpha, and the \|\cdot\|_{0} term is used to constrain the sparsity of the representation.

For simplicity, we substitute \|Y - D\alpha\|_{2} \le \epsilon for D\alpha \approx Y. Also, replacing the constraint with a penalty term, we obtain the equivalent problem of (3):

\hat{\alpha} = \arg\min_{\alpha} \|Y - D\alpha\|_{2}^{2} + \mu\|\alpha\|_{0},   (4)

and a suitable choice of \mu makes problem (4) equivalent to problem (3).
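As an illustration of how the l0-penalized problem (4) is typically attacked in practice, here is a minimal greedy orthogonal matching pursuit (OMP) sketch in NumPy. It is our own didactic implementation, not the authors' code; the stopping rule (a fixed number of atoms or an error tolerance eps) and the toy dictionary in the usage example are assumptions.

```python
import numpy as np

def omp(D, Y, max_atoms, eps=1e-6):
    """Greedy OMP: approximately solve min ||alpha||_0 s.t. ||Y - D*alpha||_2 <= eps."""
    n, k = D.shape
    support, coef = [], np.zeros(0)
    residual = Y.copy()
    for _ in range(max_atoms):
        if np.linalg.norm(residual) <= eps:
            break
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # re-fit the coefficients on the current support by least squares
        coef, *_ = np.linalg.lstsq(D[:, support], Y, rcond=None)
        residual = Y - D[:, support] @ coef
    alpha = np.zeros(k)
    alpha[support] = coef
    return alpha

# Usage example: recover a 3-sparse vector from a random unit-norm dictionary.
rng = np.random.default_rng(0)
D = rng.normal(size=(64, 256))
D /= np.linalg.norm(D, axis=0)
alpha_true = np.zeros(256)
alpha_true[[3, 50, 100]] = [1.0, -2.0, 0.5]
alpha_hat = omp(D, D @ alpha_true, max_atoms=3)
```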

Now we consider the entire image y, that is, all the patches of the image y, and construct the sparse representation model for the noisy image y. First, we index the image y with \Omega = \{1, 2, \dots, \sqrt{N}\}^{2}; then the image patches of size \sqrt{n}\times\sqrt{n} located in \Omega can be indexed by \Gamma = \{1, 2, \dots, \sqrt{N}-\sqrt{n}+1\}^{2}. Going from patches to the whole image, problem (4) becomes the following problem:

\hat{\alpha}_{ij} = \arg\min_{\alpha_{ij}} \sum_{(i,j)\in\Gamma} \|R_{ij}y - D\alpha_{ij}\|_{2}^{2} + \sum_{(i,j)\in\Gamma} \mu_{ij}\|\alpha_{ij}\|_{0},   (5)

where R_{ij} is an n \times N matrix that extracts the (i, j)-th patch from the image.
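The operator R_{ij} in (5) is simply patch extraction. The short sketch below (our own illustration, with an assumed patch size of \sqrt{n} = 8 and a 32x32 canvas) shows the action of R_{ij} and of its transpose; accumulating the transposes of all-ones patches also visualizes the patch-coverage weights W = \sum R_{ij}^{T} R_{ij} that appear later in the proof of Proposition 1.

```python
import numpy as np

def extract_patch(image, i, j, patch_size=8):
    """Action of R_ij: take the patch with top-left corner (i, j) and
    stack it lexicographically as a column vector of length n = patch_size**2."""
    return image[i:i + patch_size, j:j + patch_size].reshape(-1)

def put_patch_back(canvas, vec, i, j, patch_size=8):
    """Action of R_ij^T: add the vectorized patch back into an image-sized array."""
    canvas[i:i + patch_size, j:j + patch_size] += vec.reshape(patch_size, patch_size)
    return canvas

# Example: each entry of `weights` counts how many patches cover that pixel.
img = np.zeros((32, 32))
weights = np.zeros_like(img)
for i in range(32 - 8 + 1):
    for j in range(32 - 8 + 1):
        put_patch_back(weights, np.ones(64), i, j)
```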

Looking back at the assumption that the dictionary D is known and fixed, the following question arises: how do we choose and settle on the dictionary? In [33], Aharon et al. put forward the K-SVD algorithm for designing the dictionary: given an initial dictionary, they applied the singular value decomposition (SVD) to update it. In [34], Elad and Aharon applied the MAP estimator to problem (5) and compared different dictionaries (the overcomplete DCT, a globally trained dictionary, and an adaptive dictionary trained on patches from the noisy image); their results show that the adaptive dictionary training performs best.

After studying the sparse representation and dictionary learning of the K-SVD algorithm [33], we are inspired to apply sparse representation to the MAP model (2).


Thus we propose a new model for Rician noise removal, which is as follows:

\arg\min_{x\in S(\Omega)} \frac{\beta}{2\sigma^{2}}\int_{\Omega} x^{2}\,dt - \beta\int_{\Omega} \log I_{0}\!\left(\frac{yx}{\sigma^{2}}\right) dt + \frac{1}{2}\sum_{(i,j)} \|D\alpha_{ij} - R_{ij}x\|^{2} + \sum_{(i,j)} \mu_{ij}\|\alpha_{ij}\|_{0} + \lambda\int_{\Omega} |Dx|\,dt,   (6)

where S(\Omega) := \{v \in BV(\Omega) : 0 \le v \le 255\} and \int_{\Omega}|Dx|\,dt is the total variation (TV) of x. The first and second terms of our model stem from the MAP model and constitute the data fidelity induced by the statistical properties of Rician noise; the third and fourth terms are inspired by the sparse representation; and the last TV regularization term makes the denoised image smooth.

In model (6), there are three unknown variables: the noise-free image x that we need to solve for, the dictionary D, and the sparse coefficients \alpha_{ij}. Similar to [33, 34], in order to solve problem (6) effectively and efficiently, we use the following two-step algorithm:

(1) Based on the degraded image y, we give the initial dictionary and obtain the sparse representation coefficients \alpha_{ij}; then we use \alpha_{ij} to learn the dictionary D and update the corresponding coefficients \alpha_{ij}.

(2) We use the primal-dual algorithm to obtain the recovered image.

2.1. Dictionary Learning. In the first step of our algorithm, we use the degraded image y to train a dictionary, and all the image patches can then be sparsely represented by the trained dictionary with the corresponding coefficients \alpha_{ij}. The whole process uses the orthogonal matching pursuit (OMP) and K-SVD algorithms [33, 34], and the procedure solves the following optimization problem:

(\hat{D}, \hat{\alpha}_{ij}) = \arg\min_{D,\,\alpha_{ij}} \frac{1}{2}\sum_{(i,j)} \|R_{ij}x - D\alpha_{ij}\|^{2} + \sum_{(i,j)} \mu_{ij}\|\alpha_{ij}\|_{0}.   (7)

For solving this difficult problem, we use the following specific steps (a small NumPy sketch of the coding and dictionary-update steps is given after the procedure):

(1) Initialization: set x = y and D = the overcomplete DCT dictionary.

(2) Iteration: For m = 0 to N do

(a) Given x and D, we obtain the sparse representation coefficients \alpha_{ij} by solving the following problem:

\hat{\alpha}_{ij} = \arg\min_{\alpha_{ij}} \frac{1}{2}\sum_{(i,j)} \|R_{ij}x - D\alpha_{ij}\|^{2} + \sum_{(i,j)} \mu_{ij}\|\alpha_{ij}\|_{0},   (8)

which can be solved efficiently and effectively by the orthogonal matching pursuit (OMP) method [33, 34].

(b) Given x and \alpha_{ij}, we update the dictionary D = [d_1, d_2, \dots, d_k] column by column [33, 34]. For each column d_l, l = 1, \dots, k, we update it as follows:

(i) Collect the set of patches represented by d_l and denote it by \zeta_l = \{(i, j) \mid \alpha_{ij}(l) \neq 0\}.

(ii) For each index (i, j) \in \zeta_l, compute the corresponding representation error

e_{ij}^{l} = R_{ij}x - \sum_{m \neq l} d_{m}\,\alpha_{ij}(m),   (9)

and then use the columns \{e_{ij}^{l}\}_{(i,j)\in\zeta_l} to form a matrix E_l.

(iii) Apply the singular value decomposition (SVD) to E_l and obtain E_l = U\Delta V^{T}.

Let the first column of U be the new d_l, and multiply the first column of V by \Delta(1, 1) to update \{\alpha_{ij}(l)\}_{(i,j)\in\zeta_l}.

End for.

Here the dictionary training and sparse representation are completed.
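To make steps (a) and (b) concrete, the following is a compact sketch of one K-SVD-style sweep over a matrix whose columns are the vectorized patches R_{ij}x. It reuses the omp routine sketched earlier and is only an illustration of update (9) and the rank-one SVD step, not the authors' MATLAB implementation; the sparsity level max_atoms is a placeholder parameter.

```python
import numpy as np

def ksvd_sweep(D, patches, max_atoms):
    """One sparse-coding pass (step a) followed by the column-by-column
    dictionary update (step b). `patches` has shape (n, num_patches)."""
    n, k = D.shape
    # step (a): sparse-code every patch with OMP (omp as sketched above)
    A = np.column_stack([omp(D, patches[:, p], max_atoms)
                         for p in range(patches.shape[1])])
    # step (b): update each atom d_l together with the coefficients that use it
    for l in range(k):
        users = np.nonzero(A[l, :])[0]        # zeta_l: patches using atom l
        if users.size == 0:
            continue
        A[l, users] = 0.0                     # error without atom l, cf. eq. (9)
        E_l = patches[:, users] - D @ A[:, users]
        U, S, Vt = np.linalg.svd(E_l, full_matrices=False)
        D[:, l] = U[:, 0]                     # new atom: first left singular vector
        A[l, users] = S[0] * Vt[0, :]         # updated coefficients on zeta_l
    return D, A
```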

2.2. Primal-Dual Algorithm. After the first step of our algorithm, we obtain the sparse dictionary representation D\alpha_{ij} of each patch R_{ij}y, and we now minimize (6) with respect to x, that is,

\arg\min_{x\in S(\Omega)} \frac{\beta}{2\sigma^{2}}\int_{\Omega} x^{2}\,dt - \beta\int_{\Omega} \log I_{0}\!\left(\frac{yx}{\sigma^{2}}\right) dt + \frac{1}{2}\sum_{(i,j)} \|D\alpha_{ij} - R_{ij}x\|^{2} + \lambda\int_{\Omega} |Dx|\,dt.   (10)

Proposition 1. Let y be a bounded function such that \inf_{\Omega} y > 0. Then the objective function in (10) is strictly convex under the constraint \sigma^{4}/\beta > (\sup_{\Omega} y)^{2}.

Proof. Using the notations W = \sum_{(i,j)\in\Gamma} R_{ij}^{T}R_{ij} and M = \sum_{(i,j)\in\Gamma} R_{ij}^{T}D\alpha_{ij}, model (10) can be rewritten as

\arg\min_{x\in S(\Omega)} \frac{\beta}{2\sigma^{2}}\langle x, x\rangle - \beta\left\langle \log I_{0}\!\left(\frac{yx}{\sigma^{2}}\right), 1_{\Omega}\right\rangle + \frac{1}{2}\langle Wx, x\rangle - \langle M, x\rangle + \lambda\|x\|_{TV}.   (11)

We define g(x) = (\beta/2\sigma^{2})\langle x, x\rangle - \beta\langle \log I_{0}(yx/\sigma^{2}), 1_{\Omega}\rangle + (1/2)\langle Wx, x\rangle - \langle M, x\rangle (x > 0) and h(x) = \log I_{0}(x) (x > 0). According to [32], we have 0 < h''(x) = 1 - (1/x)(I_{1}(x)/I_{0}(x)) - (I_{1}(x)/I_{0}(x))^{2} < 1. Then g''(x) = \beta/\sigma^{2} - \beta[\log I_{0}(yx/\sigma^{2})]'' + W > \beta/\sigma^{2} - \beta(y^{2}/\sigma^{4}) + W. Since the diagonal entries of W satisfy 1 \le W_{ii} \le n, g''(x) > 0 holds when \sigma^{4}/\beta > (\sup_{\Omega} y)^{2}, and g(x) is strictly convex. We know that x \mapsto \|x\|_{TV} is convex, so the objective function in (11) is strictly convex; that is, the objective function in (10) is strictly convex.
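The key inequality 0 < h''(x) < 1 used above can be spot-checked numerically with SciPy's exponentially scaled Bessel functions. The following is only a sanity check on a finite grid (and relies on the expression for h'' as written above), not a proof.

```python
import numpy as np
from scipy.special import i0e, i1e  # exponentially scaled I0, I1 (avoid overflow)

x = np.linspace(1e-6, 500.0, 100_000)
r = i1e(x) / i0e(x)                 # I1(x)/I0(x); the exponential scalings cancel
h2 = 1.0 - r / x - r**2             # h''(x) for h(x) = log I0(x)
assert np.all(h2 > 0.0) and np.all(h2 < 1.0)
```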

Referring to Proposition 1, we know that model (10) is convex, so we can apply the primal-dual algorithm to solve the minimization problem; this algorithm enjoys a nice saddle-point structure and performs well on nonsmooth convex optimization [39-41]. The procedure is summarized in Algorithm 1 below. Moreover, we discuss a bias correction technique and the convergence of our algorithm.


Algorithm 1.
1. Let \kappa and \tau be given. Set x^{0} = y, \bar{x}^{0} = y, and p^{0} = (0, 0)^{T} \in R^{2d}.
2. Update p^{n}, x^{n}, \bar{x}^{n} iteratively as follows:
   (a) p^{n+1} = \arg\max_{p\in P} \lambda\langle \bar{x}^{n}, \operatorname{div} p\rangle - \frac{1}{2\kappa}\|p - p^{n}\|_{2}^{2}
   (b) x^{n+1} = \arg\min_{x\in S(\Omega)} F(x) - \lambda\langle x, \operatorname{div} p^{n+1}\rangle + \frac{1}{2\tau}\|x - x^{n}\|_{2}^{2}
   (c) \bar{x}^{n+1} = 2x^{n+1} - x^{n}
until some stopping criterion is satisfied.


Briefly, we define F(x) := (\beta/2\sigma^{2})\|x\|^{2} - \beta\langle \log I_{0}(yx/\sigma^{2}), 1\rangle + (1/2)\sum \|D\alpha_{ij} - R_{ij}x\|^{2}, and model (10) can be written in the following form:

\min_{0\le x\le 255} F(x) + \lambda\|\nabla x\|_{1}.   (12)

Because the TV term has the duality property, the optimization problem (12) can be written in the following primal-dual form:

\max_{p\in P}\ \min_{0\le x\le 255} F(x) - \lambda\langle x, \operatorname{div} p\rangle,   (13)

where P = \{p \in R^{2d} : \max_{i\in\{1,\dots,d\}} (p_{i}^{2} + p_{i+d}^{2})^{1/2} \le 1\}. Here p is the dual variable and \operatorname{div} = -\nabla^{T}. The specific algorithm is given in Algorithm 1. In particular, the solution of the dual problem in step 2(a) can be expressed as

p_{i}^{n+1} = \pi_{1}\!\left(\lambda\kappa(\nabla \bar{x}^{n})_{i} + p_{i}^{n}\right), \quad i = 1, \dots, 2d,   (14)

where \pi_{1}(q_{i}) = q_{i}/\max(1, |q_{i}|), \pi_{1}(q_{d+i}) = q_{d+i}/\max(1, |q_{i}|), and |q_{i}| = \sqrt{q_{i}^{2} + q_{i+d}^{2}}, i = 1, \dots, d. Moreover, the solution of the primal problem in step 2(b) can be obtained by Newton's method.

Inspired by [39], we provide a bias correction technique for (13) so that the mean of the restored image x^{*} equals that of the observed image y. In numerical practice, the following step is implemented in order to preserve the mean of y:

x_{i}^{n} := \frac{\sum_{j=1}^{d} y_{j}}{\sum_{j=1}^{d}\max(x_{j}^{n}, 0)}\,\max(x_{i}^{n}, 0), \quad i = 1, \dots, d,   (15)

after updating x^{n} by Newton's method; this also ensures that x^{n} \ge 0. Referring to Theorem 1 in [42], we obtain the convergence properties of our algorithm as follows.

Proposition 2. Suppose that \kappa\tau\lambda^{2}\|\nabla\|^{2} < 1. Then the iterates (x^{n}, p^{n}) of our algorithm converge to a saddle point of (13).

Inspired by [42], the convergence condition of our algorithm can be simplified to \kappa\tau\lambda^{2} < 1/8, based on the fact that \|\nabla\|^{2} \le 8 with unit spacing between pixels. For simplicity, in our numerical experiments we let \kappa = 8/\lambda and \tau = 0.015/\lambda and adjust only \lambda; then \kappa\tau\lambda^{2} = 0.12 < 1/8, so the convergence condition of the algorithm is satisfied.
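To illustrate how steps 2(a)-(c) of Algorithm 1, the dual projection (14), and the mean-preserving correction (15) fit together, here is a schematic NumPy loop. The proximal step 2(b) is only stubbed out (in the paper it is solved by Newton's method, and F depends on the learned dictionary), so the prox_F argument, the iteration count, and the clipping to [0, 255] are placeholders in this sketch, not the authors' implementation.

```python
import numpy as np

def grad(x):
    """Forward-difference gradient; returns an array of shape (2, H, W)."""
    g = np.zeros((2,) + x.shape)
    g[0, :-1, :] = x[1:, :] - x[:-1, :]
    g[1, :, :-1] = x[:, 1:] - x[:, :-1]
    return g

def div(p):
    """Discrete divergence, the negative adjoint of grad (div = -grad^T)."""
    d = np.zeros(p.shape[1:])
    d[:-1, :] += p[0, :-1, :]; d[1:, :] -= p[0, :-1, :]
    d[:, :-1] += p[1, :, :-1]; d[:, 1:] -= p[1, :, :-1]
    return d

def project_unit_ball(p):
    """pi_1 in (14): pixelwise projection onto |p_i| <= 1."""
    norm = np.maximum(1.0, np.sqrt(p[0] ** 2 + p[1] ** 2))
    return p / norm

def bias_correction(x, y):
    """Equation (15): rescale so that the mean of x matches the mean of y."""
    xp = np.maximum(x, 0.0)
    return xp * (y.sum() / xp.sum())

def primal_dual(y, prox_F, lam, n_iter=100):
    kappa, tau = 8.0 / lam, 0.015 / lam              # parameter choice used above
    x, x_bar = y.copy(), y.copy()
    p = np.zeros((2,) + y.shape)
    for _ in range(n_iter):
        p = project_unit_ball(p + lam * kappa * grad(x_bar))           # step 2(a), eq. (14)
        x_new = np.clip(prox_F(x + tau * lam * div(p), tau), 0, 255)   # step 2(b), stubbed
        x_new = bias_correction(x_new, y)                              # eq. (15)
        x_bar = 2.0 * x_new - x                                        # step 2(c)
        x = x_new
    return x
```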

Before presenting numerical results, we give the basic property of model (10) rewritten in the continuous setting, that is,

\inf_{x\in S(\Omega)} \frac{\beta}{2\sigma^{2}}\int_{\Omega} x^{2}\,dt - \beta\int_{\Omega}\log I_{0}\!\left(\frac{yx}{\sigma^{2}}\right)dt + \frac{1}{2}\sum_{s\in I} \|D\alpha_{s} - R_{\Omega_{s}}x\|^{2} + \lambda\|x\|_{TV},   (16)

where (\Omega_{s})_{s\in I} is a finite set of small patches covering \Omega and R_{\Omega_{s}} is the restriction to \Omega_{s}.

Theorem 3. Let \Omega be a bounded open subset of R^{2} with Lipschitz boundary. Let y be a positive bounded function. For x satisfying x \in BV(\Omega), x > 0, let

E(x) = \frac{\beta}{2\sigma^{2}}\int_{\Omega} x^{2}\,dt - \beta\int_{\Omega}\log I_{0}\!\left(\frac{yx}{\sigma^{2}}\right)dt + \frac{1}{2}\sum_{s\in I}\|D\alpha_{s} - R_{\Omega_{s}}x\|^{2} + \lambda\|x\|_{TV},   (17)

where \lambda > 0. Suppose that (\Omega_{s})_{s\in I} is a finite local coverage of \Omega and \sum_{s\in I} \|D\alpha_{s}\|^{2} < +\infty. Then E(x) has a unique minimizer when \sigma^{4}/\beta > (\sup_{\Omega} y)^{2}.

Proof. Let E(x) = \lambda\|x\|_{TV} + J(x) with J(x) = (\beta/2\sigma^{2})\int_{\Omega} x^{2}\,dt - \beta\int_{\Omega}\log I_{0}(yx/\sigma^{2})\,dt + (1/2)\sum_{s} \|D\alpha_{s} - R_{\Omega_{s}}x\|^{2}. By Proposition 1, we know that J(x) is convex; thus E(x) is convex and bounded from below. Consider a minimizing sequence (x^{n}) for E(x). Then \|x^{n}\|_{TV} and J(x^{n}) are bounded from above; hence (x^{n}) is bounded in BV(\Omega). As BV(\Omega) is compactly embedded in L^{1}(\Omega), there is x_{0} \in BV(\Omega) such that a subsequence (x^{n_k}) converges to x_{0} in L^{1}(\Omega), and we may assume that x^{n_k} \to x_{0} pointwise almost everywhere. As the BV seminorm is lower semicontinuous, \|x_{0}\|_{TV} \le \liminf \|x^{n_k}\|_{TV}. Since J(x^{n_k}) is bounded from below, by Fatou's Lemma we have J(x_{0}) \le \liminf J(x^{n_k}). Thus E(x_{0}) \le \liminf E(x^{n_k}) = \inf_{BV(\Omega)} E(x), and x_{0} minimizes E(x). Consequently, E(x) admits at least one minimizer x_{0}. Since E(x) is strictly convex, E(x) admits a unique minimizer.


Table 1: Parameter \lambda values for all testing algorithms.

          sigma = 10                            sigma = 15                            sigma = 20
Images    MAP  GTV  CZ     Alg.1 (beta=0.1)     MAP  GTV  CZ     Alg.1 (beta=0.1)     MAP  GTV  CZ     Alg.1 (beta=1)
Lena      15   15   0.07   0.01                 20   20   0.055  0.02                 25   25   0.045  0.02
Barbara   22   23   0.045  0.01                 30   30   0.035  0.02                 37   37   0.03   0.02
House     14   14   0.075  0.01                 19   19   0.055  0.5                  24   24   0.045  1.0
Monarch   15   15   0.065  0.01                 20   20   0.05   0.02                 30   30   0.04   0.02
Brain     17   15   0.065  0.01                 25   22   0.05   0.01                 35   22   0.045  0.01
Mouse     17   17   0.06   0.01                 24   24   0.05   0.01                 31   32   0.035  0.01

3. Numerical Experiments

In this section, we present numerical experiments to evaluate the approximation accuracy and computational efficiency of our proposed algorithm. We compare our method on the denoising cases with the MAP model (2), the GTV model [31], the CZ model [32], and the NLM model [25]. Here we give a brief overview of the models that we compare against.

GTV model:

\arg\min_{x} \lambda\int_{\Omega} G_{\sigma}(x, y)\,dt + \int_{\Omega}|Dx|\,dt,   (18)

where

G_{\sigma}(x, y) = \begin{cases} H_{\sigma}(x), & \text{if } x \ge c\sigma,\\ H_{\sigma}(c\sigma) + H_{\sigma}'(c\sigma)(x - c\sigma), & \text{if } x \le c\sigma,\end{cases}

H_{\sigma}'(x) = \frac{x}{\sigma^{2}} - \frac{y}{\sigma^{2}}\,B\!\left(\frac{xy}{\sigma^{2}}\right),

B(s) \equiv \frac{I_{1}(s)}{I_{0}(s)} \approx \frac{s^{3} + 0.950037\,s^{2} + 2.38944\,s}{s^{3} + 1.48937\,s^{2} + 2.57541\,s + 4.65314},   (19)

and c = 0.8426, with I_{1} the modified Bessel function of the first kind with order one [30]. Because the last relation is only an approximate equality, the final explicit restoration model cannot be written down.
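The rational approximation of B(s) = I_1(s)/I_0(s) in (19) can be compared against SciPy's Bessel routines with a few lines. This is just a sanity check of the formula as transcribed, not part of any of the compared solvers, and the evaluation grid is an arbitrary choice.

```python
import numpy as np
from scipy.special import i0e, i1e

def B_rational(s):
    """Rational approximation of I1(s)/I0(s) used by the GTV model, eq. (19)."""
    num = s**3 + 0.950037 * s**2 + 2.38944 * s
    den = s**3 + 1.48937 * s**2 + 2.57541 * s + 4.65314
    return num / den

s = np.linspace(0.0, 50.0, 2001)
exact = i1e(s) / i0e(s)                      # overflow-safe Bessel ratio
max_abs_err = np.max(np.abs(B_rational(s) - exact))
```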

CZ model:

\arg\min_{x\in S(\Omega)} \frac{1}{2\sigma^{2}}\int_{\Omega} x^{2}\,dt - \int_{\Omega}\log I_{0}\!\left(\frac{xy}{\sigma^{2}}\right)dt + \frac{1}{\sigma}\int_{\Omega}\left(\sqrt{x} - \sqrt{y}\right)^{2} dt + \lambda\int_{\Omega}|Dx|\,dt,   (20)

where S(\Omega) := \{v \in BV(\Omega) : 0 \le v \le 255\}. Here BV(\Omega) [35, 43] is the subspace of functions x \in L^{1}(\Omega) for which the following quantity is finite:

J(x) = \sup\left\{\int_{\Omega} x(t)\operatorname{div}(\xi(t))\,dt \;:\; \xi\in C_{0}^{\infty}(\Omega;\mathbb{R}^{2}),\ \|\xi\|_{L^{\infty}(\Omega;\mathbb{R}^{2})}\le 1\right\}.   (21)

NLM model:

Before introducing the NLM model, we describe the notation that will be used. For a user-defined radius R_{sim}, we define a square neighborhood window centered around pixel i as N_{i}. A Gaussian-weighted Euclidean distance between the pixels of two neighborhoods is defined as

L(m, n) = G_{\rho}\,\|y(N_{m}) - y(N_{n})\|^{2}_{R_{sim}},   (22)

where G_{\rho} is a normalized Gaussian weighting function with zero mean and standard deviation \rho (usually set to 1). By giving more weight to pixels near the center, G_{\rho} penalizes pixels far from the center of the neighborhood window. Based on the similarity between the neighborhoods N_{m} and N_{n} of pixels m and n, we calculate the similarity weight f(m, n) as

f(m, n) = \frac{1}{C(m)}\,e^{-L(m,n)/h^{2}}, \qquad C(m) = \sum_{\forall n} e^{-L(m,n)/h^{2}},   (23)

where C(m) is the normalizing constant and h is an exponential decay control parameter.

Then, given an image y, the NLM method computes the filtered value at a pixel m by the following formula:

NLM(y(m)) = \sum_{\forall n\in y} f(m, n)\,y(n), \qquad 0 \le f(m, n) \le 1, \quad \sum_{\forall n\in y} f(m, n) = 1.   (24)

The parameter \lambda values of the MAP model, the GTV model, the CZ model, and our proposed model are listed in Table 1. In order to preserve the mean of the observed image, the bias correction technique (15) is utilized for the CZ method and our method. All the experiments are performed under Windows 7 and MATLAB R2017a, running on a PC equipped with a 2.90 GHz CPU and 4 GB RAM.

In our tests, we choose the images "Lena" and "Barbara" with size 512 x 512, "House" and "Monarch" with size 256 x 256, "Brain" with size 181 x 217, and "Mouse intestine" (briefly, we call it "Mouse" throughout this paper) with size 248 x 254, which are shown in Figure 1. We evaluate the quality of the recovered images obtained from the various denoising algorithms by using the structural similarity index (SSIM) [44] and the peak-signal-to-noise ratio (PSNR) [45], defined by

\mathrm{PSNR}(x, \hat{x}) = 20\log_{10}\!\left(\frac{255\sqrt{MN}}{\|\hat{x} - x\|_{2}}\right),   (25)

where M x N is the image size and x and \hat{x} denote the original image and the recovered image, respectively.
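As a reference for how such figures are usually computed, the following helper evaluates the PSNR of (25) in NumPy (an SSIM implementation is available, e.g., in scikit-image). This is our own utility with an assumed peak value of 255, not the authors' evaluation script.

```python
import numpy as np

def psnr(x, x_hat, peak=255.0):
    """Peak signal-to-noise ratio in dB, eq. (25):
    20*log10(peak*sqrt(MN) / ||x_hat - x||_2), i.e. 10*log10(peak^2 / MSE)."""
    err = np.linalg.norm((np.asarray(x_hat, float) - np.asarray(x, float)).ravel())
    if err == 0:
        return np.inf
    return 20.0 * np.log10(peak * np.sqrt(np.asarray(x).size) / err)
```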


Table 2: The comparison of different denoising methods (PSNR in dB, SSIM, and CPU time in seconds).

Images    Methods       |  sigma = 10            |  sigma = 15            |  sigma = 20
                        |  PSNR   SSIM   Time    |  PSNR   SSIM   Time    |  PSNR   SSIM   Time
Lena      Noisy         |  28.15  0.874  --      |  24.65  0.773  --      |  22.18  0.680  --
          MAP           |  34.13  0.957  160.00  |  32.27  0.935  210.65  |  30.98  0.915  249.88
          GTV           |  34.10  0.957  13.38   |  32.21  0.935  13.38   |  30.90  0.915  13.35
          CZ            |  34.14  0.956  4.11    |  32.24  0.933  5.04    |  30.94  0.911  5.24
          NLM           |  34.82  0.961  4.55    |  32.74  0.939  41.47   |  31.00  0.914  41.16
          Algorithm 1   |  35.39  0.964  480.01  |  33.36  0.945  433.97  |  31.59  0.924  443.05
Barbara   Noisy         |  28.16  0.913  --      |  24.66  0.839  --      |  22.21  0.765  --
          MAP           |  31.10  0.950  207.92  |  28.55  0.913  245.68  |  26.90  0.876  285.53
          GTV           |  31.09  0.950  13.14   |  28.52  0.913  13.16   |  26.85  0.875  13.16
          CZ            |  31.07  0.948  4.41    |  28.49  0.910  4.91    |  26.83  0.873  5.73
          NLM           |  33.32  0.968  4.79    |  31.10  0.947  40.90   |  29.16  0.921  40.00
          Algorithm 1   |  34.43  0.972  852.05  |  32.06  0.956  609.78  |  29.94  0.932  547.64
House     Noisy         |  28.15  0.605  --      |  24.63  0.445  --      |  22.14  0.346  --
          MAP           |  34.42  0.883  34.62   |  32.65  0.859  42.28   |  31.36  0.838  57.97
          GTV           |  34.33  0.882  3.43    |  32.46  0.858  3.35    |  31.08  0.839  3.35
          CZ            |  34.35  0.881  0.85    |  32.55  0.858  1.00    |  31.29  0.836  1.15
          NLM           |  35.08  0.886  10.22   |  33.23  0.846  10.22   |  31.39  0.802  10.14
          Algorithm 1   |  35.99  0.902  253.91  |  34.25  0.876  183.07  |  32.74  0.858  168.34
Monarch   Noisy         |  28.13  0.735  --      |  24.67  0.611  --      |  22.19  0.517  --
          MAP           |  33.04  0.934  52.41   |  30.54  0.907  68.69   |  29.02  0.865  83.11
          GTV           |  32.99  0.934  3.44    |  30.44  0.907  3.43    |  28.87  0.865  3.42
          CZ            |  33.06  0.936  0.99    |  30.57  0.905  1.20    |  29.04  0.885  1.39
          NLM           |  32.56  0.940  10.08   |  30.75  0.908  10.15   |  29.24  0.871  10.16
          Algorithm 1   |  33.87  0.951  803.41  |  31.35  0.930  487.84  |  29.60  0.909  349.56
Brain     Noisy         |  27.16  0.657  --      |  23.57  0.549  --      |  21.13  0.467  --
          MAP           |  32.95  0.835  21.23   |  30.08  0.769  21.39   |  27.97  0.719  23.12
          GTV           |  33.63  0.952  2.57    |  31.09  0.917  2.14    |  29.41  0.889  2.17
          CZ            |  33.67  0.950  0.56    |  31.10  0.919  0.64    |  29.21  0.891  0.69
          NLM           |  35.15  0.962  6.16    |  32.07  0.925  6.12    |  29.72  0.885  5.99
          Algorithm 1   |  35.40  0.967  194.60  |  32.47  0.942  129.26  |  30.07  0.911  108.05
Mouse     Noisy         |  28.18  0.828  --      |  24.67  0.706  --      |  22.13  0.592  --
          MAP           |  31.99  0.930  52.87   |  29.71  0.890  65.66   |  28.01  0.847  63.97
          GTV           |  31.94  0.929  3.92    |  29.60  0.886  3.91    |  27.89  0.843  4.29
          CZ            |  31.95  0.929  1.17    |  29.58  0.887  1.31    |  27.86  0.842  1.52
          NLM           |  32.49  0.935  9.69    |  29.84  0.888  9.72    |  27.55  0.833  9.72
          Algorithm 1   |  33.71  0.952  700.02  |  30.63  0.914  397.23  |  28.12  0.865  295.06

First, we choose the test image "Lena", degrade it by Rician noise with \sigma = 10, and use the different methods to recover the noisy image. The numerical results are listed in Table 2, where the first column gives the test image names, the second column gives the method names (here, Algorithm 1 denotes our method presented in Section 2), and columns 3-5 (resp., 6-8 and 9-11) are the PSNR (dB), SSIM, and CPU time (s) for \sigma = 10 (resp., \sigma = 15 and \sigma = 20). We observe that the PSNR values of the images restored by Algorithm 1 are higher than those of the MAP, GTV, CZ, and NLM methods. Moreover, one can easily see that the PSNR values of the images restored by Algorithm 1 are always the highest among all five methods for all images. As far as "Barbara" is concerned, the PSNR value of our method is more than 3 dB higher than that of the CZ model at \sigma = 10.


Figure 1: The original images. (a) Lena, (b) Barbara, (c) House, (d) Monarch, (e) Brain, (f) Mouse.

Figure 2: Results of "Lena" by different methods. (a) Degraded images (row 1: sigma = 10, row 2: sigma = 15, row 3: sigma = 20). (b) MAP method (row 1: lambda = 15, row 2: lambda = 20, row 3: lambda = 25). (c) GTV method (row 1: lambda = 15, row 2: lambda = 20, row 3: lambda = 25). (d) CZ method (row 1: lambda = 0.07, row 2: lambda = 0.055, row 3: lambda = 0.045). (e) NLM method (rows 1-3 are the denoised images for sigma = 10, 15, 20; row 4: patch with sigma = 20). (f) Ours (row 1: lambda = 0.01, row 2: lambda = 0.02, row 3: lambda = 0.02).

The behavior of the SSIM values is largely consistent with that of the PSNR values. However, the SSIM values combine a luminance comparison, a contrast comparison, and a structure comparison, which leads to differences at some points.

The images are corrupted by Rician noise with \sigma = 10, \sigma = 15, and \sigma = 20, respectively, and the images restored by the above five algorithms for "Lena", "Barbara", "House", and "Monarch" are shown in Figures 2-5, respectively. We clearly find that much noise remains in the results of the MAP, GTV, and CZ methods.


Figure 3: Results of "Barbara" by different methods. (a) Degraded images (row 1: sigma = 10, row 2: sigma = 15, row 3: sigma = 20; row 4: patch with sigma = 20). (b) MAP method (row 1: lambda = 22, row 2: lambda = 30, row 3: lambda = 37; row 4: patch with sigma = 20). (c) GTV method (row 1: lambda = 23, row 2: lambda = 30, row 3: lambda = 37; row 4: patch with sigma = 20). (d) CZ method (row 1: lambda = 0.045, row 2: lambda = 0.035, row 3: lambda = 0.03; row 4: patch with sigma = 20). (e) NLM method (rows 1-3 are the denoised images for sigma = 10, 15, 20; row 4: patch with sigma = 20). (f) Ours (row 1: lambda = 0.01, row 2: lambda = 0.02, row 3: lambda = 0.02; row 4: patch with sigma = 20).

Figure 4: Results of "House" by different methods. (a) Degraded images (row 1: sigma = 10, row 2: sigma = 15, row 3: sigma = 20). (b) MAP method (row 1: lambda = 14, row 2: lambda = 19, row 3: lambda = 24). (c) GTV method (row 1: lambda = 14, row 2: lambda = 19, row 3: lambda = 24). (d) CZ method (row 1: lambda = 0.075, row 2: lambda = 0.055, row 3: lambda = 0.045). (e) NLM method (rows 1-3 are the denoised images for sigma = 10, 15, 20; row 4: patch with sigma = 20). (f) Ours (row 1: lambda = 0.01, row 2: lambda = 0.5, row 3: lambda = 1.0).


Figure 5: Results of "Monarch" by different methods. (a) Degraded images (row 1: sigma = 10, row 2: sigma = 15, row 3: sigma = 20). (b) MAP method (row 1: lambda = 15, row 2: lambda = 20, row 3: lambda = 30). (c) GTV method (row 1: lambda = 15, row 2: lambda = 20, row 3: lambda = 30). (d) CZ method (row 1: lambda = 0.065, row 2: lambda = 0.05, row 3: lambda = 0.04). (e) NLM method (rows 1-3 are the denoised images for sigma = 10, 15, 20; row 4: patch with sigma = 20). (f) Ours (row 1: lambda = 0.01, row 2: lambda = 0.02, row 3: lambda = 0.02).

Figure 6: Results of "Brain" with sigma = 10 by different methods. (a) is the original "Brain" image and (g) is the degraded image with sigma = 10. (b)-(f) are the denoised images of the MAP method (lambda = 17), GTV method (lambda = 15), CZ method (lambda = 0.065), NLM method, and ours (lambda = 0.01), and (h)-(l) are the residuals of those methods, respectively.

For example, in row 4 of Figure 3, we can see from the background that the "Barbara" image restored by our method is clearer than those restored by the other methods, and the textures of Barbara's trousers and scarf are kept better in the image restored by our method. The images restored by our method also preserve more significant details than the MAP, GTV, CZ, and NLM methods on the hat details of the "Lena" image. What is more, the flowers at the lower left of "Monarch" also indicate that our method is superior to the other methods, because our method is patch-based, a different framework from the TV-based methods (i.e., the MAP method and the CZ method). The images obtained from our method provide smoother regions and better shape preservation (e.g., the backgrounds in "Lena" and "Barbara").

Figures 6-11 show the results of experiments on the "Brain" and "Mouse" images. In particular, for the "Brain" and "Mouse" images, we not only present the recovered images but also use residuals to make a comparison with the original images.


Figure 7: Results of "Brain" with sigma = 15 by different methods. (a) is the original "Brain" image and (g) is the degraded image with sigma = 15. (b)-(f) are the denoised images of the MAP method (lambda = 25), GTV method (lambda = 22), CZ method (lambda = 0.05), NLM method, and ours (lambda = 0.01), and (h)-(l) are the residuals of those methods, respectively.

Figure 8: Results of "Brain" with sigma = 20 by different methods. (a) is the original "Brain" image and (g) is the degraded image with sigma = 20. (b)-(f) are the denoised images of the MAP method (lambda = 35), GTV method (lambda = 22), CZ method (lambda = 0.045), NLM method, and ours (lambda = 0.01), and (h)-(l) are the residuals of those methods, respectively.

In each of these figures, the first row shows the original image and the images recovered by the MAP, GTV, CZ, NLM, and our methods, and the second row shows the degraded image with the corresponding \sigma and the residuals of those methods, respectively. In Figures 6-8, the images recovered by the MAP, GTV, CZ, and NLM methods are still unclear, so we conclude, both visually and from the residuals, that the images recovered by our method are the best. As for the "Mouse" image, the recovered results are very similar; in this case, we use the residuals to judge which result is better. In general, the blurrier the residual is, the better the corresponding recovered image is. We find that the outline in the residual of our method is the blurriest.

In conclusion, the results of our numerical experiments demonstrate that our proposed method performs better than the MAP, GTV, CZ, and NLM methods.

4. Conclusion

In this paper, we proposed a new effective model via a learned dictionary for denoising images with Rician noise. More specifically, based on dictionary training, we add a dictionary penalty term to the original nonconvex MAP model to establish a new denoising model and develop a two-step algorithm to solve it.


Figure 9: Results of "Mouse" with sigma = 10 by different methods. (a) is the original "Mouse" image and (g) is the degraded image with sigma = 10. (b)-(f) are the denoised images of the MAP method (lambda = 17), GTV method (lambda = 17), CZ method (lambda = 0.06), NLM method, and ours (lambda = 0.01), and (h)-(l) are the residuals of those methods, respectively.

Figure 10: Results of "Mouse" with sigma = 15 by different methods. (a) is the original "Mouse" image and (g) is the degraded image with sigma = 15. (b)-(f) are the denoised images of the MAP method (lambda = 24), GTV method (lambda = 24), CZ method (lambda = 0.05), NLM method, and ours (lambda = 0.01), and (h)-(l) are the residuals of those methods, respectively.

Figure 11: Results of "Mouse" with sigma = 20 by different methods. (a) is the original "Mouse" image and (g) is the degraded image with sigma = 20. (b)-(f) are the denoised images of the MAP method (lambda = 31), GTV method (lambda = 32), CZ method (lambda = 0.035), NLM method, and ours (lambda = 0.01), and (h)-(l) are the residuals of those methods, respectively.

We also carry out experiments on various images to demonstrate the effectiveness of our model. The numerical experiments show that our proposed model is promising in denoising images with Rician noise.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest.

Acknowledgments

The authors would like to thank one of the authors of[32] for providing the source code of CZ method Thiswork was supported in part by the National Natural ScienceFoundation of China under Grants 11871348 61872429 and61373087 by the Natural Science Foundation of GuangdongChina under Grants 2015A030313550 and 2015A030313557by the HD Video RampD Platform for Intelligent Analy-sis and Processing in Guangdong Engineering TechnologyResearch Centre of Colleges and Universities (no GCZX-A1409) by Natural Science Foundation of Shenzhen underGrant JCYJ20170818091621856 and by the Guangdong Key



References

[1] J.-F. Cai, B. Dong, and Z. Shen, "Image restoration: a wavelet frame based model for piecewise smooth functions and beyond," Applied and Computational Harmonic Analysis, vol. 41, no. 1, pp. 94-138, 2016.
[2] R. H. Chan and J. Ma, "A multiplicative iterative algorithm for box-constrained penalized likelihood image restoration," IEEE Transactions on Image Processing, vol. 21, no. 7, pp. 3168-3181, 2012.
[3] C. A. Micchelli, L. Shen, and Y. Xu, "Proximity algorithms for image models: denoising," Inverse Problems, vol. 27, no. 4, Article ID 045009, 30 pages, 2011.
[4] Y. Wang, J. Yang, W. Yin, and Y. Zhang, "A new alternating minimization algorithm for total variation image reconstruction," SIAM Journal on Imaging Sciences, vol. 1, no. 3, pp. 248-272, 2008.
[5] Y. Dong, M. Hintermüller, and M. Neri, "An efficient primal-dual method for L1-TV image restoration," SIAM Journal on Imaging Sciences, vol. 2, no. 4, pp. 1168-1189, 2009.
[6] J. Lu, K. Qiao, L. Shen, and Y. Zou, "Fixed-point algorithms for a TVL1 image restoration model," International Journal of Computer Mathematics, vol. 95, no. 9, pp. 1829-1844, 2018.
[7] J. Lu, Z. Ye, and Y. Zou, "Huber fractal image coding based on a fitting plane," IEEE Transactions on Image Processing, vol. 22, no. 1, pp. 134-145, 2013.
[8] C. A. Micchelli, L. Shen, Y. Xu, and X. Zeng, "Proximity algorithms for the L1/TV image denoising model," Advances in Computational Mathematics, vol. 38, no. 2, pp. 401-426, 2013.
[9] J. Yang, Y. Zhang, and W. Yin, "An efficient TVL1 algorithm for deblurring multichannel images corrupted by impulsive noise," SIAM Journal on Scientific Computing, vol. 31, no. 4, pp. 2842-2865, 2009.
[10] L. Ma, L. Moisan, J. Yu, and T. Zeng, "A dictionary learning approach for Poisson image deblurring," IEEE Transactions on Medical Imaging, vol. 32, no. 7, pp. 1277-1289, 2013.
[11] Y. Xiao and T. Zeng, "Poisson noise removal via learned dictionary," in Proceedings of the 17th IEEE International Conference on Image Processing (ICIP), pp. 1177-1180, 2010.
[12] H. Zhang, Y. Dong, and Q. Fan, "Wavelet frame based Poisson noise removal and image deblurring," Signal Processing, vol. 137, pp. 363-372, 2017.
[13] Y. Dong and T. Zeng, "A convex variational model for restoring blurred images with multiplicative noise," SIAM Journal on Imaging Sciences, vol. 6, no. 3, pp. 1598-1625, 2013.
[14] Y. Huang, M. K. Ng, and Y. Wen, "A new total variation method for multiplicative noise removal," SIAM Journal on Imaging Sciences, vol. 2, no. 1, pp. 20-40, 2009.
[15] Y.-M. Huang, L. Moisan, M. K. Ng, and T. Zeng, "Multiplicative noise removal via a learned dictionary," IEEE Transactions on Image Processing, vol. 21, no. 11, pp. 4534-4543, 2012.
[16] Z. Jin and X. Yang, "A variational model to remove the multiplicative noise in ultrasound images," Journal of Mathematical Imaging and Vision, vol. 39, no. 1, pp. 62-74, 2011.
[17] M. Kang, S. Yun, and H. Woo, "Two-level convex relaxed variational model for multiplicative denoising," SIAM Journal on Imaging Sciences, vol. 6, no. 2, pp. 875-903, 2013.
[18] J. Lu, L. Shen, C. Xu, and Y. Xu, "Multiplicative noise removal in imaging: an exp-model and its fixed-point proximity algorithm," Applied and Computational Harmonic Analysis, vol. 41, no. 2, pp. 518-539, 2016.
[19] J. Lu, L. Shen, C. Xu, and Y. Xu, "Multiplicative noise removal with a sparsity-aware optimization model," Inverse Problems and Imaging, vol. 11, no. 6, pp. 949-974, 2017.
[20] J. Lu, Z. Yang, L. Shen, Z. Lu, H. Yang, and C. Xu, "A framelet algorithm for de-blurring images corrupted by multiplicative noise," Applied Mathematical Modelling, vol. 62, pp. 51-61, 2018.
[21] J. Lu, Y. Chen, Y. Zou, and L. Shen, "A new total variation model for restoring blurred and speckle noisy images," International Journal of Wavelets, Multiresolution and Information Processing, vol. 15, no. 2, 19 pages, 2017.
[22] P. Perona and J. Malik, "Scale-space and edge detection using anisotropic diffusion," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, no. 7, pp. 629-639, 1990.
[23] G. Gerig, O. Kübler, R. Kikinis, and F. A. Jolesz, "Nonlinear anisotropic filtering of MRI data," IEEE Transactions on Medical Imaging, vol. 11, no. 2, pp. 221-232, 1992.
[24] S. Prima, S. P. Morrissey, and C. Barillot, "Non-local means variants for denoising of diffusion-weighted and diffusion tensor MRI," in Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 344-351, Springer, Berlin, Germany, 2007.
[25] J. V. Manjón, J. Carbonell-Caballero, J. J. Lull, G. García-Martí, L. Martí-Bonmatí, and M. Robles, "MRI denoising using Non-Local Means," Medical Image Analysis, vol. 12, no. 4, pp. 514-523, 2008.
[26] N. Wiest-Daesslé, S. Prima, P. Coupé, S. P. Morrissey, and C. Barillot, "Rician noise removal by non-local means filtering for low signal-to-noise ratio MRI: applications to DT-MRI," Medical Image Computing and Computer-Assisted Intervention, vol. 5242, no. 2, pp. 171-179, 2008.
[27] R. D. Nowak, "Wavelet-based Rician noise removal for magnetic resonance imaging," IEEE Transactions on Image Processing, vol. 8, no. 10, pp. 1408-1419, 1999.
[28] A. Foi, "Noise estimation and removal in MR imaging: the variance-stabilization approach," in Proceedings of the 8th IEEE International Symposium on Biomedical Imaging: From Nano to Macro (ISBI), pp. 1809-1814, IEEE, Chicago, IL, USA, 2011.
[29] J. C. Wood and K. M. Johnson, "Wavelet packet denoising of magnetic resonance images: importance of Rician noise at low SNR," Magnetic Resonance in Medicine, vol. 41, no. 3, pp. 631-635, 1999.
[30] F. Bowman, Introduction to Bessel Functions, Dover Publications, Mineola, NY, USA, 2012.
[31] P. Getreuer, M. Tong, and L. A. Vese, "A variational model for the restoration of MR images corrupted by blur and Rician noise," in Proceedings of the 7th International Conference on Advances in Visual Computing, Part I, Lecture Notes in Computer Science, pp. 686-698, Springer, Las Vegas, NV, USA, 2011.
[32] L. Chen and T. Zeng, "A convex variational model for restoring blurred images with large Rician noise," Journal of Mathematical Imaging and Vision, vol. 53, no. 1, pp. 92-111, 2015.
[33] M. Aharon, M. Elad, and A. M. Bruckstein, "K-SVD: an algorithm for designing overcomplete dictionaries for sparse representation," IEEE Transactions on Signal Processing, vol. 54, no. 11, pp. 4311-4322, 2006.
[34] M. Elad and M. Aharon, "Image denoising via sparse and redundant representations over learned dictionaries," IEEE Transactions on Image Processing, vol. 15, no. 12, pp. 3736-3745, 2006.
[35] G. Aubert and J. Aujol, "A variational approach to removing multiplicative noise," SIAM Journal on Applied Mathematics, vol. 68, no. 4, pp. 925-946, 2008.
[36] Q. Liu, S. Wang, K. Yang, J. Luo, Y. Zhu, and D. Liang, "Highly undersampled magnetic resonance image reconstruction using two-level Bregman method with dictionary updating," IEEE Transactions on Medical Imaging, vol. 32, no. 7, pp. 1290-1301, 2013.
[37] Q. Liu, S. Wang, L. Ying, X. Peng, Y. Zhu, and D. Liang, "Adaptive dictionary learning in sparse gradient domain for image recovery," IEEE Transactions on Image Processing, vol. 22, no. 12, pp. 4652-4663, 2013.
[38] H. Lu, J. Wei, Q. Liu, Y. Wang, and X. Deng, "A dictionary learning method with total generalized variation for MRI reconstruction," International Journal of Biomedical Imaging, vol. 2016, Article ID 7512471, 13 pages, 2016.
[39] A. Chambolle and T. Pock, "A first-order primal-dual algorithm for convex problems with applications to imaging," Journal of Mathematical Imaging and Vision, vol. 40, no. 1, pp. 120-145, 2011.
[40] E. Esser, X. Zhang, and T. F. Chan, "A general framework for a class of first order primal-dual algorithms for convex optimization in imaging science," SIAM Journal on Imaging Sciences, vol. 3, no. 4, pp. 1015-1046, 2010.
[41] N. Komodakis and J.-C. Pesquet, "Playing with duality: an overview of recent primal-dual approaches for solving large-scale optimization problems," IEEE Signal Processing Magazine, vol. 32, no. 6, pp. 31-54, 2015.
[42] A. Chambolle, "An algorithm for total variation minimization and applications," Journal of Mathematical Imaging and Vision, vol. 20, no. 1-2, pp. 89-97, 2004.
[43] L. Ambrosio, N. Fusco, and D. Pallara, Functions of Bounded Variation and Free Discontinuity Problems, Oxford University Press, London, 2000.
[44] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: from error visibility to structural similarity," IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600-612, 2004.
[45] J. Gibson and A. Bovik, Handbook of Image and Video Processing, Academic Press, 2000.

Hindawiwwwhindawicom Volume 2018

MathematicsJournal of

Hindawiwwwhindawicom Volume 2018

Mathematical Problems in Engineering

Applied MathematicsJournal of

Hindawiwwwhindawicom Volume 2018

Probability and StatisticsHindawiwwwhindawicom Volume 2018

Journal of

Hindawiwwwhindawicom Volume 2018

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawiwwwhindawicom Volume 2018

OptimizationJournal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Engineering Mathematics

International Journal of

Hindawiwwwhindawicom Volume 2018

Operations ResearchAdvances in

Journal of

Hindawiwwwhindawicom Volume 2018

Function SpacesAbstract and Applied AnalysisHindawiwwwhindawicom Volume 2018

International Journal of Mathematics and Mathematical Sciences

Hindawiwwwhindawicom Volume 2018

Hindawi Publishing Corporation httpwwwhindawicom Volume 2013Hindawiwwwhindawicom

The Scientific World Journal

Volume 2018

Hindawiwwwhindawicom Volume 2018Volume 2018

Numerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisAdvances inAdvances in Discrete Dynamics in

Nature and SocietyHindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom

Dierential EquationsInternational Journal of

Volume 2018

Hindawiwwwhindawicom Volume 2018

Decision SciencesAdvances in

Hindawiwwwhindawicom Volume 2018

AnalysisInternational Journal of

Hindawiwwwhindawicom Volume 2018

Stochastic AnalysisInternational Journal of

Submit your manuscripts atwwwhindawicom

Page 2: ResearchArticle Rician Noise Removal via a Learned Dictionary

2 Mathematical Problems in Engineering

a posteriori (MAP) estimation model was proposed which isconsidered from the feature of noise-free image and includesthe data fidelity term TheMAP model is as follows

argmin119909

121205902 int

Ω1199092119889119905 minus int

Ωlog 1198680 (119909119910

1205902 ) 119889119905+ 120582int

Ω|119863119909| 119889119905

(2)

where 1198680 is the modified Bessel function of the first kind withorder zero [30] But it is a nonconvex function and leads to adifficult problem to solve In view of the MAP model GTVmodel [31] was put forward by Getreuer et al which is aconvex approximation of the MAP model and can be easilysolved but the fidelity item of GTV model is a complicatedpiecewise function Chen [32] proposed a new convex modelthat added a statistical property of Ricain noise into the MAPmodel leading to a new strictly convex model under mildcondition that can be easily solved by primal-dual algorithmand below we call it CZ model

In this paper we study the Rician noise and propose anew reasonable and efficient model for Rician noise removalAs we know natural images have a vital feature that issparseness and dictionary learning is being widely used forimage denoising Dictionary learning has been demonstratedto be efficient for various noise removal Aharon and Elad [3334] proposed the K-SVD algorithm for designing dictionarywith sparse representation and it is proven to be effectivefor additive white Gaussian noise removal Inspired by theK-SVD algorithm Huang [15] proposed a new model thatcombined the ldquoAArdquo model [35] and K-SVD algorithm toremove multiplication noise and also presented a log minus1198970 minimization approach to solve it Xiao [11] and Ma[10] also proposed new model via dictionary learning forPoisson noise removal and Poisson image deblurring Inaddition Liu et al [36] applied two-level Bregmanmethod todictionary updating and proposed an efficient algorithm forreconstructing MR images In [37] integrating total variation(TV) and dictionary learning Liu et al also proposed anovel gradient for image recovery Similarly integrating totalgeneralized variation and adaptive dictionary learning Luet al [38] presented a novel dictionary learning model forMRI reconstruction So we attempt to apply the sparserepresentation and dictionary learning into the MAP modelfor Rician noise removal Owing to the nonconvexity ofthe MAP model we add the sparse representation term toovercome the drawback so we can use the classical primal-dual algorithm to solve the model

The following is the outline of our paper In Section 2we first briefly introduce the dictionary learning and sparserepresentation and then we propose the new model thatcombines the MAP model with sparse representation termAlso we give and elaborate the two-step algorithm for solvingour model In Section 3 we demonstrate that our modeloutperforms the othermethods for Rician noise removal withnumerical results In the end we draw our conclusion inSection 4

2 Our Proposed Model

Generally speaking, we consider that every signal instance from the family can be represented as a linear combination of a few columns from a redundant dictionary. For the degraded image y ∈ R^(√N×√N), regarding an image patch of size √n × √n, we order it lexicographically as a column vector Y ∈ Rⁿ. We define a dictionary D ∈ R^(n×k) to construct the Sparseland model, where k > n implies that the dictionary D is redundant. Meanwhile, we should also make the assumption that the dictionary D is known and fixed. Then the column vector Y can be sparsely linearly represented by a few atoms selected from the dictionary D; that is to say, there is a sparse solution of the following problem:

\[\hat{\alpha}=\mathop{\arg\min}_{\alpha}\ \|\alpha\|_{0}\quad\text{subject to}\quad \mathbf{D}\alpha\approx\mathbf{Y}.\qquad(3)\]

That is, ‖α̂‖₀ ≪ n, where ‖α‖₀ denotes the number of nonzero entries in α, and ‖·‖₀ is used to constrain the sparsity of the representation.

For simplicity, we substitute ‖Y − Dα‖₂ ≤ ε for Dα ≈ Y. Also, replacing the constraint with a penalty term, we can get the equivalent problem of (3):

\[\hat{\alpha}=\mathop{\arg\min}_{\alpha}\ \|\mathbf{Y}-\mathbf{D}\alpha\|_{2}^{2}+\mu\|\alpha\|_{0},\qquad(4)\]

and a suitable choice of μ can make problem (3) equivalent to problem (4).
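To make the role of problem (4) concrete, the following is a minimal greedy sketch (in Python with NumPy) of orthogonal matching pursuit, the standard way to approximate the ℓ₀-penalized problem for a single vector Y. The function name and stopping rule (a fixed sparsity level plus a residual tolerance) are illustrative and not the exact routine used in our experiments.

```python
import numpy as np

def omp(D, Y, max_atoms, tol=1e-6):
    """Greedy approximation of  argmin_a ||Y - D a||_2^2  s.t.  ||a||_0 <= max_atoms."""
    n, k = D.shape
    residual = Y.copy()
    support = []
    alpha = np.zeros(k)
    for _ in range(max_atoms):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # re-fit the coefficients on the current support by least squares
        coef, *_ = np.linalg.lstsq(D[:, support], Y, rcond=None)
        alpha[:] = 0.0
        alpha[support] = coef
        residual = Y - D @ alpha
        if np.linalg.norm(residual) <= tol:
            break
    return alpha

# toy usage: a random dictionary with unit-norm columns and a 3-sparse signal
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)
true_alpha = np.zeros(256)
true_alpha[[3, 17, 200]] = [1.0, -0.5, 2.0]
Y = D @ true_alpha
alpha_hat = omp(D, Y, max_atoms=3)
print(np.nonzero(alpha_hat)[0])   # typically recovers the support {3, 17, 200}
```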

Now we consider the entire image y, that is, all the patches of the image y. Then we can construct the sparse representation model for the noisy image y. First we index the image y with Ω = {1, 2, ..., √N}²; then the image patches of size √n × √n located in Ω can be indexed by Γ = {1, 2, ..., √N − √n + 1}². Passing from patches to the whole image, problem (4) becomes the following problem:

\[\hat{\alpha}_{ij}=\mathop{\arg\min}_{\alpha_{ij}}\ \sum_{(i,j)\in\Gamma}\big\|\mathbf{R}_{ij}y-\mathbf{D}\alpha_{ij}\big\|_{2}^{2}+\sum_{(i,j)\in\Gamma}\mu_{ij}\big\|\alpha_{ij}\big\|_{0},\qquad(5)\]

where R_ij is an n × N matrix that extracts the (i, j) patch from the image.
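In implementation, R_ij is never formed as an explicit n × N matrix; it simply reads one √n × √n block out of the image and stacks it as a column, and its transpose adds a patch back into the image. A minimal sketch of this pair of operations follows (the helper names are ours, for illustration only):

```python
import numpy as np

def extract_patch(image, i, j, n_sqrt):
    """R_ij y: read the sqrt(n) x sqrt(n) patch at (i, j) as a length-n column vector."""
    return image[i:i + n_sqrt, j:j + n_sqrt].reshape(-1)

def put_patch_back(accum, counts, patch_vec, i, j, n_sqrt):
    """R_ij^T applied to a patch vector: add it back at (i, j), keeping a count for averaging."""
    accum[i:i + n_sqrt, j:j + n_sqrt] += patch_vec.reshape(n_sqrt, n_sqrt)
    counts[i:i + n_sqrt, j:j + n_sqrt] += 1.0

# example: cover a small image with all overlapping 8x8 patches (the index set Gamma)
y = np.random.default_rng(1).random((32, 32))
n_sqrt = 8
patches = [extract_patch(y, i, j, n_sqrt)
           for i in range(y.shape[0] - n_sqrt + 1)
           for j in range(y.shape[1] - n_sqrt + 1)]
print(len(patches), patches[0].shape)   # (32-8+1)^2 = 625 patches of length 64
```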

Looking back at the assumption that the dictionary D is known and fixed, we have the following question: how do we choose and settle the dictionary? In [33], Aharon et al. put forward the K-SVD algorithm for designing dictionaries: given an initial dictionary, the singular value decomposition (SVD) is applied to update it. In [34], Elad and Aharon applied the MAP estimator to problem (5) and compared different dictionaries (the overcomplete DCT, a globally trained dictionary, and an adaptive dictionary trained on patches from the noisy image); their results show that the adaptive dictionary trained on the noisy image performs best.

After studying the sparse representation and dictionary learning of the K-SVD algorithm [33], we are inspired to apply sparse representation to the MAP model (2), and thus we


propose a new model for Rician noise removal, which is as follows:

\[\mathop{\arg\min}_{x\in S(\Omega)}\ \frac{\beta}{2\sigma^{2}}\int_{\Omega}x^{2}\,dt-\beta\int_{\Omega}\log I_{0}\!\left(\frac{yx}{\sigma^{2}}\right)dt+\frac{1}{2}\sum\big\|\mathbf{D}\alpha_{ij}-\mathbf{R}_{ij}x\big\|^{2}+\sum\mu_{ij}\big\|\alpha_{ij}\big\|_{0}+\lambda\int_{\Omega}\lvert Dx\rvert\,dt,\qquad(6)\]

where S(Ω) := {v ∈ BV(Ω) : 0 ≤ v ≤ 255} and ∫_Ω |Dx| dt is the total variation (TV) of x. The first and second terms of our model stem from the MAP model and form the data fidelity arising from the statistical properties of Rician noise; the third and fourth terms are inspired by the sparse representation. The last TV regularization term makes the denoised image smooth.

In model (6) there are three unknown variables: the noise-free image x that we need to solve for, the dictionary D, and the sparse coefficients α_ij. Similar to [33, 34], in order to solve problem (6) effectively and efficiently, we use the following two-step algorithm.

(1) Based on the degraded image y, we give the initial dictionary and get the sparse representation coefficients α_ij; then we use α_ij to learn the dictionary D and update the corresponding coefficients α_ij.

(2) Use the primal-dual algorithm to get the recovered image that we want.

2.1 Dictionary Learning. In the first step of our algorithm, we use the degraded image y to train a dictionary, and all the image patches can be sparsely represented by the trained dictionary with the corresponding coefficients α_ij. The whole process just uses the orthogonal matching pursuit (OMP) and K-SVD algorithms [33, 34], and the procedure is to solve the following optimization problem:

\[\hat{\mathbf{D}},\hat{\alpha}_{ij}=\mathop{\arg\min}_{\mathbf{D},\alpha_{ij}}\ \frac{1}{2}\sum_{(i,j)}\big\|\mathbf{R}_{ij}x-\mathbf{D}\alpha_{ij}\big\|^{2}+\sum_{(i,j)}\mu_{ij}\big\|\alpha_{ij}\big\|_{0}.\qquad(7)\]

To solve this problem, we take the following specific steps.

(1) Initialization. Set x = y and D = the overcomplete DCT dictionary.

(2) Iteration. For m = 0 to N do:
(a) Given x and D, we get the sparse representation coefficients α_ij by solving the following problem:

\[\hat{\alpha}_{ij}=\mathop{\arg\min}_{\alpha_{ij}}\ \frac{1}{2}\sum_{(i,j)}\big\|\mathbf{R}_{ij}x-\mathbf{D}\alpha_{ij}\big\|^{2}+\sum_{(i,j)}\mu_{ij}\big\|\alpha_{ij}\big\|_{0},\qquad(8)\]

and we can efficiently and effectively solve model (8) by using the orthogonal matching pursuit (OMP) method [33, 34].
(b) Given x and α_ij, we can update the dictionary D = [d₁, d₂, ..., d_k] column by column [33, 34]. For each column d_l, l = 1, ..., k, we update it as follows.
(i) For the patches represented by d_l, we record their indices and denote the set by ζ_l = {(i, j) | α_ij(l) ≠ 0}.

(ii) For each index (i, j) ∈ ζ_l, we compute the corresponding representation error through

\[e_{ij}^{l}=\mathbf{R}_{ij}x-\sum_{m\neq l}d_{m}\,\alpha_{ij}(m),\qquad(9)\]

and then we use the columns {e_ij^l}_(i,j)∈ζ_l to define a matrix E_l.
(iii) For the matrix E_l, we apply the singular value decomposition (SVD) to E_l and get E_l = UΔVᵀ. We take the first column of U as the updated d_l, and multiply the first column of V by Δ(1,1) to update {α_ij(l)}_(i,j)∈ζ_l.

End for.
Here the dictionary training and sparse representation are completed.
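The per-column update in step (b) amounts to a rank-one approximation of the error matrix E_l. The following is a compact sketch of steps (i)-(iii) for one column, assuming the patches R_ij x are stored as the columns of a matrix X_patches and the coefficients α_ij as the columns of A; the variable and function names are ours, chosen only for illustration.

```python
import numpy as np

def ksvd_update_column(D, A, X_patches, l):
    """Update dictionary column d_l and the corresponding row of A via a rank-1 SVD.

    D: n x k dictionary, A: k x P coefficients, X_patches: n x P patch matrix
    (each column is one patch R_ij x)."""
    omega = np.nonzero(A[l, :])[0]               # zeta_l: patches that actually use d_l
    if omega.size == 0:
        return D, A
    A[l, omega] = 0.0                            # remove the contribution of d_l
    E_l = X_patches[:, omega] - D @ A[:, omega]  # the errors e^l_ij of eq. (9), as columns
    U, S, Vt = np.linalg.svd(E_l, full_matrices=False)
    D[:, l] = U[:, 0]                            # new d_l: first left singular vector
    A[l, omega] = S[0] * Vt[0, :]                # new alpha_ij(l): Delta(1,1) times first column of V
    return D, A

# toy usage with random data, just to show the shapes involved
rng = np.random.default_rng(2)
n, k, P = 64, 128, 500
D = rng.standard_normal((n, k))
D /= np.linalg.norm(D, axis=0)
A = np.where(rng.random((k, P)) < 0.02, rng.standard_normal((k, P)), 0.0)
X_patches = D @ A + 0.01 * rng.standard_normal((n, P))
D, A = ksvd_update_column(D, A, X_patches, l=0)
```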

2.2 Primal-Dual Algorithm. After the first step of our algorithm, we have the sparse dictionary representation Dα_ij of each patch R_ij y, and we now minimize (6) with respect to x, that is,

\[\mathop{\arg\min}_{x\in S(\Omega)}\ \frac{\beta}{2\sigma^{2}}\int_{\Omega}x^{2}\,dt-\beta\int_{\Omega}\log I_{0}\!\left(\frac{yx}{\sigma^{2}}\right)dt+\frac{1}{2}\sum\big\|\mathbf{D}\alpha_{ij}-\mathbf{R}_{ij}x\big\|^{2}+\lambda\int_{\Omega}\lvert Dx\rvert\,dt.\qquad(10)\]

Proposition 1. Let y be a bounded function such that inf_Ω y > 0; then the objective function in (10) is strictly convex under the constraint σ⁴/β > (sup_Ω y)².

Proof. Using the notations W = Σ_(i,j)∈Γ R_ij^T R_ij and M = Σ_(i,j)∈Γ R_ij^T Dα_ij, model (10) can be rewritten as

\[\mathop{\arg\min}_{x\in S(\Omega)}\ \frac{\beta}{2\sigma^{2}}\langle x,x\rangle-\beta\left\langle\log I_{0}\!\left(\frac{yx}{\sigma^{2}}\right),1_{\Omega}\right\rangle+\frac{1}{2}\langle Wx,x\rangle-\langle M,x\rangle+\lambda\|x\|_{TV}.\qquad(11)\]

Also, we define g(x) = (β/2σ²)⟨x, x⟩ − β⟨log I₀(yx/σ²), 1_Ω⟩ + (1/2)⟨Wx, x⟩ − ⟨M, x⟩ (x > 0) and h(x) = log I₀(x) (x > 0). According to [32], we have 0 < h''(x) = 1 − (1/x)(I₁(x)/I₀(x)) − (I₁(x)/I₀(x))² < 1. Then g''(x) = β/σ² − β[log I₀(yx/σ²)]'' + W > β/σ² − β(y²/σ⁴) + W. Since 1 ≤ W_ij ≤ n, g''(x) > 0 holds when σ⁴/β > (sup_Ω y)², and g(x) is strictly convex. We know that x ↦ ‖x‖_TV is convex, so the objective function (11) is strictly convex, that is, the objective function of (10) is strictly convex.
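As a quick numerical illustration of this condition with the parameter values used later (Table 1), and assuming the observed image y is stored with gray values in [0, 255], take β = 0.1 and σ = 10; then

\[\frac{\sigma^{4}}{\beta}=\frac{10^{4}}{0.1}=10^{5}>255^{2}=65025\ge\Big(\sup_{\Omega}y\Big)^{2},\]

so the strict convexity condition is comfortably satisfied; the other (σ, β) pairs used in our experiments can be checked in the same way.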

Referring to Proposition 1, we know that model (10) is convex, so we can apply the primal-dual algorithm to solve the minimization problem; this algorithm enjoys nice saddle-point structures and has good performance on nonsmooth


1. Let κ and τ be given. Set x⁰ = y, x̄⁰ = y, and p⁰ = (0, 0)ᵀ ∈ R^(2d).
2. Update pⁿ, xⁿ, x̄ⁿ iteratively as follows:
(a) \(p^{n+1}=\mathop{\arg\max}_{p\in P}\ \lambda\langle\bar{x}^{n},\operatorname{div}p\rangle-\frac{1}{2\kappa}\|p-p^{n}\|_{2}^{2}\)
(b) \(x^{n+1}=\mathop{\arg\min}_{x\in S(\Omega)}\ F(x)-\lambda\langle x,\operatorname{div}p^{n+1}\rangle+\frac{1}{2\tau}\|x-x^{n}\|_{2}^{2}\)
(c) \(\bar{x}^{n+1}=2x^{n+1}-x^{n}\)
until some stopping criterion is satisfied.

Algorithm 1

convex optimization [39–41]. Moreover, we discuss a bias correction technique and the convergence of our algorithm.

Briefly, we define F(x) := (β/2σ²)‖x‖² − β⟨log I₀(yx/σ²), 1⟩ + (1/2)Σ‖Dα_ij − R_ij x‖², and model (10) can be written in the following form:

\[\min_{0\le x\le 255}\ F(x)+\lambda\|\nabla x\|_{1}.\qquad(12)\]

Because the TV term has the duality property, we can write the primal-dual formulation of the optimization problem (12) in the following form:

\[\max_{p\in P}\ \min_{0\le x\le 255}\ F(x)-\lambda\langle x,\operatorname{div}p\rangle,\qquad(13)\]

where P = {p ∈ R^(2d) : max_{i∈{1,...,d}} |(p_i² + p_{i+d}²)^(1/2)| ≤ 1}. Here p is the dual variable and div = −∇ᵀ. The specific algorithm is given as Algorithm 1.

In particular, the solution of the dual problem 2(a) can be expressed as

\[p_{i}^{n+1}=\pi_{1}\big(\lambda\kappa(\nabla\bar{x}^{n})_{i}+p_{i}^{n}\big),\quad i=1,\ldots,2d,\qquad(14)\]

where π₁(q_i) = q_i/max(1, |q_i|), π₁(q_{d+i}) = q_{d+i}/max(1, |q_i|), and |q_i| = √(q_i² + q_{i+d}²), i = 1, ..., d. Moreover, the solution of the primal problem 2(b) can be obtained by Newton's method.
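For concreteness, the following is a minimal Python/NumPy sketch of one sweep of Algorithm 1: the dual step implements the projection formula (14), while the primal step is written with a generic placeholder (solve_primal, standing in for the Newton iteration on problem 2(b)); the gradient and divergence are the usual forward-difference/adjoint pair. The function names and the toy inner solver are ours, not the exact implementation used in our experiments.

```python
import numpy as np

def grad(x):
    """Forward-difference gradient; returns an array of shape (2, H, W)."""
    gx = np.zeros_like(x); gy = np.zeros_like(x)
    gx[:, :-1] = x[:, 1:] - x[:, :-1]
    gy[:-1, :] = x[1:, :] - x[:-1, :]
    return np.stack([gx, gy])

def div(p):
    """Divergence, the negative adjoint of grad."""
    px, py = p
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[:, 0] = px[:, 0]; dx[:, 1:-1] = px[:, 1:-1] - px[:, :-2]; dx[:, -1] = -px[:, -2]
    dy[0, :] = py[0, :]; dy[1:-1, :] = py[1:-1, :] - py[:-2, :]; dy[-1, :] = -py[-2, :]
    return dx + dy

def dual_step(p, x_bar, lam, kappa):
    """Eq. (14): p <- pi_1(p + lam*kappa*grad(x_bar)), i.e. projection onto P."""
    q = p + lam * kappa * grad(x_bar)
    norm = np.maximum(1.0, np.sqrt(q[0] ** 2 + q[1] ** 2))
    return q / norm

def primal_dual(y, lam, solve_primal, n_iter=100):
    kappa, tau = 8.0 / lam, 0.015 / lam        # parameter choice used in the paper
    x = y.copy(); x_bar = y.copy()
    p = np.zeros((2,) + y.shape)
    for _ in range(n_iter):
        p = dual_step(p, x_bar, lam, kappa)    # step 2(a)
        x_new = solve_primal(x, p, lam, tau)   # step 2(b): Newton solve of the proximal problem
        x_new = np.clip(x_new, 0.0, 255.0)     # keep x in S(Omega)
        x_bar = 2.0 * x_new - x                # step 2(c): extrapolation
        x = x_new
    return x

# illustrative primal placeholder: a pure proximal/TV step with no data term, only to make this runnable
toy_solver = lambda x, p, lam, tau: x + tau * lam * div(p)
restored = primal_dual(np.full((16, 16), 100.0), lam=0.02, solve_primal=toy_solver, n_iter=5)
```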

Inspired by [39], we provide a bias correction technique for (13) so that the mean of the restored image x* equals that of the observed image y. In numerical practice, the following step is implemented in order to preserve the mean of y:

\[x_{i}^{n}:=\frac{\sum_{j=1}^{d}y_{j}}{\sum_{j=1}^{d}\max(x_{j}^{n},0)}\,\max(x_{i}^{n},0),\quad i=1,\ldots,d,\qquad(15)\]

after updating xⁿ by Newton's method; this also ensures that xⁿ ≥ 0.

Referring to Theorem 1 in [42], we obtain the convergence properties of our algorithm as follows.

Proposition 2. Suppose that κτλ²‖∇‖² < 1. Then the iterates (xⁿ, pⁿ) of our algorithm converge to a saddle point of (13).

Inspired by [42], the convergence condition of our algorithm can be simplified to κτλ² < 1/8, based on the fact that ‖∇‖² ≤ 8 with unit spacing between pixels. For simplicity, in our numerical experiments we let κ = 8/λ and τ = 0.015/λ and just adjust λ; the convergence condition of the algorithm is then satisfied.
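The mean-preserving correction (15) applied after each Newton update is a one-line rescaling; a minimal sketch (the function name and the zero-division guard are ours):

```python
import numpy as np

def bias_correction(x, y):
    """Eq. (15): rescale the nonnegative part of x so that its mean matches that of y."""
    x_pos = np.maximum(x, 0.0)
    scale = y.sum() / max(x_pos.sum(), np.finfo(float).eps)  # guard against division by zero
    return scale * x_pos

# example: after the correction, the means agree
y = np.full((8, 8), 120.0)
x = np.random.default_rng(3).normal(110.0, 5.0, size=(8, 8))
print(bias_correction(x, y).mean(), y.mean())
```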

Before we present numerical results, we give the basic property of model (10) rewritten in the continuous setting, that is,

\[\inf_{x\in S(\Omega)}\ \frac{\beta}{2\sigma^{2}}\int_{\Omega}x^{2}\,dt-\beta\int_{\Omega}\log I_{0}\!\left(\frac{yx}{\sigma^{2}}\right)dt+\frac{1}{2}\sum\big\|\mathbf{D}\alpha_{s}-\mathbf{R}_{\Omega_{s}}x\big\|^{2}+\lambda\|x\|_{TV},\qquad(16)\]

where (Ω_s)_{s∈I} is a finite set of small patches covering Ω and R_{Ω_s} is the restriction on Ω_s.

Theorem 3. Let Ω be a bounded open subset of R² with Lipschitz boundary. Let y be a positive bounded function. For x satisfying x ∈ BV(Ω), x > 0, let

\[E(x)=\frac{\beta}{2\sigma^{2}}\int_{\Omega}x^{2}\,dt-\beta\int_{\Omega}\log I_{0}\!\left(\frac{yx}{\sigma^{2}}\right)dt+\frac{1}{2}\sum\big\|\mathbf{D}\alpha_{s}-\mathbf{R}_{\Omega_{s}}x\big\|^{2}+\lambda\|x\|_{TV},\qquad(17)\]

where λ > 0. Suppose that (Ω_s)_{s∈I} is a finite local coverage of Ω and Σ_{s∈I} ‖Dα_s‖² < +∞. Then E(x) has a unique minimizer when σ⁴/β > (sup_Ω y)².

Proof. Let E(x) = λ‖x‖_TV + J(x), where J(x) = (β/2σ²)∫_Ω x² dt − β∫_Ω log I₀(yx/σ²) dt + (1/2)Σ‖Dα_ij − R_ij x‖². By Proposition 1 we know that J(x) is convex. Thus E(x) is convex and bounded from below. Consider a minimizing sequence (x_n) for E(x). Then ‖x_n‖_TV and J(x_n) are bounded from above. Hence (x_n) is bounded in BV(Ω). As BV is compactly embedded in L¹(Ω), there is x₀ ∈ BV such that a subsequence (x_{n_k}) converges to x₀ in L¹(Ω) and in BV(Ω), and we may assume that x_{n_k} → x₀ pointwise almost everywhere. As the BV seminorm is lower semicontinuous, ‖x₀‖_TV ≤ lim inf ‖x_{n_k}‖_TV. Since J is bounded from below, by Fatou's Lemma we have J(x₀) ≤ lim inf J(x_{n_k}). Thus E(x₀) ≤ lim inf E(x_{n_k}) = inf_{BV(Ω)} E(x), and x₀ minimizes E(x). Consequently, E(x) admits at least one minimizer x₀. Since E(x) is a strictly convex function, E(x) admits a unique minimizer.


Table 1: Parameter λ values for all testing algorithms.

Images    σ = 10: MAP, GTV, CZ, Algorithm 1 (β = 0.1)   σ = 15: MAP, GTV, CZ, Algorithm 1 (β = 0.1)   σ = 20: MAP, GTV, CZ, Algorithm 1 (β = 1)
Lena      15, 15, 0.07,  0.01                            20, 20, 0.055, 0.02                            25, 25, 0.045, 0.02
Barbara   22, 23, 0.045, 0.01                            30, 30, 4.91,  0.02                            37, 37, 0.03,  0.02
House     14, 14, 0.075, 0.01                            19, 19, 1.00,  0.5                             24, 24, 0.045, 1.0
Monarch   15, 15, 0.065, 0.01                            20, 20, 0.05,  0.02                            30, 30, 0.04,  0.02
Brain     17, 15, 0.065, 0.01                            25, 22, 0.05,  0.01                            35, 22, 0.045, 0.01
Mouse     17, 17, 0.06,  0.01                            24, 24, 0.05,  0.01                            31, 32, 0.035, 0.01

3 Numerical Experiments

In this section, we present our numerical experiments to evaluate the approximation accuracy and computational efficiency of our proposed algorithm. We compare our method for the denoising cases with the MAP model (2), the GTV model [31], the CZ model [32], and the NLM model [25]. Here we give a brief overview of the models we compare with.

GTV model:

\[\mathop{\arg\min}_{x}\ \lambda\int_{\Omega}G_{\sigma}(x,y)\,dt+\int_{\Omega}\lvert Dx\rvert\,dt,\qquad(18)\]

\[G_{\sigma}(x,y)=\begin{cases}H_{\sigma}(x), & \text{if } x\ge c\sigma,\\ H_{\sigma}(c\sigma)+H_{\sigma}'(c\sigma)(x-c\sigma), & \text{if } x\le c\sigma,\end{cases}\]
\[H_{\sigma}'(x)=\frac{x}{\sigma^{2}}-\frac{y}{\sigma^{2}}\,B\!\left(\frac{xy}{\sigma^{2}}\right),\]
\[B(s)\equiv\frac{I_{1}(s)}{I_{0}(s)}\approx\frac{s^{3}+0.950037\,s^{2}+2.38944\,s}{s^{3}+1.48937\,s^{2}+2.57541\,s+4.65314},\qquad(19)\]

where c = 0.8426 and I₁ is the modified Bessel function of the first kind with order one [30]. Because the last relation is only an approximate equality, we cannot write down the final explicit restoration model.
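The cubic rational approximation of B(s) = I₁(s)/I₀(s) in (19) is easy to tabulate against the exact Bessel ratio; a small check, using SciPy's exponentially scaled modified Bessel functions (the helper names are ours):

```python
import numpy as np
from scipy.special import i0e, i1e   # exponentially scaled I0, I1 (numerically stable for large s)

def B_exact(s):
    return i1e(s) / i0e(s)           # the exponential scaling factors cancel in the ratio

def B_approx(s):
    """Rational approximation of I1(s)/I0(s) used by the GTV model, eq. (19)."""
    num = s**3 + 0.950037 * s**2 + 2.38944 * s
    den = s**3 + 1.48937 * s**2 + 2.57541 * s + 4.65314
    return num / den

s = np.linspace(0.0, 50.0, 6)
print(np.max(np.abs(B_exact(s) - B_approx(s))))   # the maximum difference is small on this range
```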

CZ model:

\[\mathop{\arg\min}_{x\in S(\Omega)}\ \frac{1}{2\sigma^{2}}\int_{\Omega}x^{2}\,dt-\int_{\Omega}\log I_{0}\!\left(\frac{xy}{\sigma^{2}}\right)dt+\frac{1}{\sigma}\int_{\Omega}\big(\sqrt{x}-\sqrt{y}\big)^{2}\,dt+\lambda\int_{\Omega}\lvert Dx\rvert\,dt,\qquad(20)\]

where S(Ω) := {v ∈ BV(Ω) : 0 ≤ v ≤ 255}. Here BV(Ω) [35, 43] is the subspace of functions x ∈ L¹(Ω) for which the following quantity is finite:

\[J(x)=\sup\left\{\int_{\Omega}x(t)\operatorname{div}(\xi(t))\,dt\ \Big|\ \xi\in C_{0}^{\infty}(\Omega,\mathbb{R}^{2}),\ \|\xi\|_{L^{\infty}(\Omega,\mathbb{R}^{2})}\le 1\right\}.\qquad(21)\]

NLM model. Before introducing the NLM model, we describe the notation that will be used. For a user-defined radius R_sim, we define a square neighborhood window centered around pixel i as N_i, and a Gaussian-weighted Euclidean distance between the pixels of two neighborhoods is defined as

\[L(m,n)=G_{\rho}\,\big\|y(N_{m})-y(N_{n})\big\|_{R_{sim}}^{2},\qquad(22)\]

where G_ρ is a normalized Gaussian weighting function with zero mean and standard deviation ρ (usually set to 1). By giving more weight to pixels near the center, we can use G_ρ to penalize pixels far from the center of the neighborhood window. Based on the similarity between the neighborhoods N_m and N_n of pixels m and n, we calculate the similarity f(m, n) as

\[f(m,n)=\frac{1}{C(m)}\,e^{-L(m,n)/h^{2}},\qquad C(m)=\sum_{\forall n}e^{-L(m,n)/h^{2}},\qquad(23)\]

where C(m) is the normalizing constant and h is an exponential decay control parameter.

Then, given an image y, using the NLM method we can calculate the filtered value at a pixel m by the following formula:

\[NLM(y(m))=\sum_{\forall n\in y}f(m,n)\,y(n),\qquad 0\le f(m,n)\le 1,\quad \sum_{\forall n\in y}f(m,n)=1.\qquad(24)\]
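A direct (unoptimized) sketch of the NLM filter defined by (22)-(24) follows, restricted for brevity to full-image search and reflective border handling; variable names follow the text above, but the implementation details are simplified relative to [25].

```python
import numpy as np

def nlm_denoise(y, R_sim=2, h=10.0, rho=1.0):
    """Non-local means: weighted average of pixels with similar neighborhoods, eqs. (22)-(24)."""
    H, W = y.shape
    pad = np.pad(y, R_sim, mode="reflect")
    # Gaussian kernel G_rho over the (2*R_sim+1)^2 neighborhood window
    ax = np.arange(-R_sim, R_sim + 1)
    gx, gy = np.meshgrid(ax, ax)
    G = np.exp(-(gx**2 + gy**2) / (2.0 * rho**2))
    G /= G.sum()
    # stack all neighborhoods y(N_i) as rows
    nbhds = np.array([
        pad[i:i + 2 * R_sim + 1, j:j + 2 * R_sim + 1].ravel()
        for i in range(H) for j in range(W)
    ])
    out = np.empty(H * W)
    for m in range(H * W):
        L = ((nbhds - nbhds[m])**2 * G.ravel()).sum(axis=1)   # eq. (22)
        f = np.exp(-L / h**2)                                  # eq. (23), before normalization
        f /= f.sum()                                           # divide by C(m)
        out[m] = f @ y.ravel()                                 # eq. (24)
    return out.reshape(H, W)

# tiny example (full-image search is O(N^2), so keep the image small)
img = np.random.default_rng(4).normal(128.0, 10.0, size=(16, 16))
print(nlm_denoise(img).shape)
```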

The parameter λ values of the MAP model, GTV model, CZ model, and our proposed model are listed in Table 1. In order to preserve the mean of the observed image, the bias correction technique (15) is utilized for the CZ method and our method. All the experiments are performed under Windows 7 and MATLAB R2017a, running on a PC equipped with a 2.90 GHz CPU and 4 GB RAM.

In our tests, we choose the images "Lena" and "Barbara" with size 512 × 512, "House" and "Monarch" with size 256 × 256, "Brain" with size 181 × 217, and "Mouse intestine" (briefly, we call it "Mouse" throughout this paper) with size 248 × 254, which are shown in Figure 1. We evaluate the quality of the recovered images obtained from the various denoising algorithms by using the structural similarity index (SSIM) [44] and the peak signal-to-noise ratio (PSNR) [45], defined by

\[\mathrm{PSNR}(\hat{x},x)=20\log_{10}\left(\frac{255^{2}}{\|\hat{x}-x\|^{2}}\right),\qquad(25)\]


Table 2: The comparison of different denoising methods. Each group of three columns lists PSNR (dB), SSIM, and CPU time (s); "--" means not applicable.

Images    Methods        σ = 10                     σ = 15                     σ = 20
                         PSNR   SSIM   Time         PSNR   SSIM   Time         PSNR   SSIM   Time
Lena      Noisy          28.15  0.874  --           24.65  0.773  --           22.18  0.680  --
          MAP            34.13  0.957  160.00       32.27  0.935  210.65       30.98  0.915  249.88
          GTV            34.10  0.957  13.38        32.21  0.935  13.38        30.90  0.915  13.35
          CZ             34.14  0.956  4.11         32.24  0.933  5.04         30.94  0.911  5.24
          NLM            34.82  0.961  4.55         32.74  0.939  41.47        31.00  0.914  41.16
          Algorithm 1    35.39  0.964  480.01       33.36  0.945  433.97       31.59  0.924  443.05
Barbara   Noisy          28.16  0.913  --           24.66  0.839  --           22.21  0.765  --
          MAP            31.10  0.950  207.92       28.55  0.913  245.68       26.90  0.876  285.53
          GTV            31.09  0.950  13.14        28.52  0.913  13.16        26.85  0.875  13.16
          CZ             31.07  0.948  4.41         28.49  0.910  4.91         26.83  0.873  5.73
          NLM            33.32  0.968  4.79         31.10  0.947  40.90        29.16  0.921  40.00
          Algorithm 1    34.43  0.972  852.05       32.06  0.956  609.78       29.94  0.932  547.64
House     Noisy          28.15  0.605  --           24.63  0.445  --           22.14  0.346  --
          MAP            34.42  0.883  34.62        32.65  0.859  42.28        31.36  0.838  57.97
          GTV            34.33  0.882  3.43         32.46  0.858  3.35         31.08  0.839  3.35
          CZ             34.35  0.881  0.85         32.55  0.858  1.00         31.29  0.836  1.15
          NLM            35.08  0.886  10.22        33.23  0.846  10.22        31.39  0.802  10.14
          Algorithm 1    35.99  0.902  253.91       34.25  0.876  183.07       32.74  0.858  168.34
Monarch   Noisy          28.13  0.735  --           24.67  0.611  --           22.19  0.517  --
          MAP            33.04  0.934  52.41        30.54  0.907  68.69        29.02  0.865  83.11
          GTV            32.99  0.934  3.44         30.44  0.907  3.43         28.87  0.865  3.42
          CZ             33.06  0.936  0.99         30.57  0.905  1.20         29.04  0.885  1.39
          NLM            32.56  0.940  10.08        30.75  0.908  10.15        29.24  0.871  10.16
          Algorithm 1    33.87  0.951  803.41       31.35  0.930  487.84       29.60  0.909  349.56
Brain     Noisy          27.16  0.657  --           23.57  0.549  --           21.13  0.467  --
          MAP            32.95  0.835  21.23        30.08  0.769  21.39        27.97  0.719  23.12
          GTV            33.63  0.952  2.57         31.09  0.917  2.14         29.41  0.889  2.17
          CZ             33.67  0.950  0.56         31.10  0.919  0.64         29.21  0.891  0.69
          NLM            35.15  0.962  6.16         32.07  0.925  6.12         29.72  0.885  5.99
          Algorithm 1    35.40  0.967  194.60       32.47  0.942  129.26       30.07  0.911  108.05
Mouse     Noisy          28.18  0.828  --           24.67  0.706  --           22.13  0.592  --
          MAP            31.99  0.930  52.87        29.71  0.890  65.66        28.01  0.847  63.97
          GTV            31.94  0.929  3.92         29.60  0.886  3.91         27.89  0.843  4.29
          CZ             31.95  0.929  1.17         29.58  0.887  1.31         27.86  0.842  1.52
          NLM            32.49  0.935  9.69         29.84  0.888  9.72         27.55  0.833  9.72
          Algorithm 1    33.71  0.952  700.02       30.63  0.914  397.23       28.12  0.865  295.06

where x and x̂ denote the original image and the recovered image, respectively.
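For reference, PSNR can be computed in a couple of lines; the sketch below uses the conventional mean-squared-error normalization (equivalent to (25) up to the normalization convention), and SSIM can be taken from an off-the-shelf implementation such as scikit-image, which follows [44]. This is an illustrative helper, not the exact evaluation script used for Table 2.

```python
import numpy as np

def psnr(x_true, x_rec, peak=255.0):
    """Peak signal-to-noise ratio in dB, using the conventional MSE normalization."""
    mse = np.mean((x_true.astype(float) - x_rec.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# example: Gaussian perturbation with standard deviation 10 gives roughly 28 dB
rng = np.random.default_rng(5)
clean = rng.integers(0, 256, size=(64, 64)).astype(float)
noisy = np.clip(clean + rng.normal(0, 10, size=clean.shape), 0, 255)
print(round(psnr(clean, noisy), 2))
```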

First, we choose the test image "Lena", degrade it by Rician noise with σ = 10, and use the different methods to recover the noisy image. The numerical results are listed in Table 2, where the first column gives the test image names, the second column gives the method names (here Algorithm 1 denotes our method presented in Section 2), and columns 3-5 (resp. 6-8 and 9-11) are the PSNR (dB), SSIM, and CPU time (s) for σ = 10 (resp. σ = 15, σ = 20). We observe that the PSNR values of the images restored by Algorithm 1 are higher than those of the MAP, GTV, CZ, and NLM methods. Moreover, one can easily see that the PSNR values of the images restored by Algorithm 1 are always the highest among all five methods for all images. As far as "Barbara" is concerned, the PSNR value of our method is more than 3


Figure 1: The original images. (a) Lena, (b) Barbara, (c) House, (d) Monarch, (e) Brain, (f) Mouse.

Figure 2: Results of "Lena" by different methods. (a) Degraded images (row 1 with σ = 10, row 2 with σ = 15, row 3 with σ = 20). (b) MAP method (row 1: λ = 15, row 2: λ = 20, row 3: λ = 25). (c) GTV method (row 1: λ = 15, row 2: λ = 20, row 3: λ = 25). (d) CZ method (row 1: λ = 0.07, row 2: λ = 0.055, row 3: λ = 0.045). (e) NLM method (rows 1-3 are the denoised images for σ = 10, 15, 20; row 4: patch with σ = 20). (f) Ours (row 1: λ = 0.01, row 2: λ = 0.02, row 3: λ = 0.02).

dB higher than that of the CZ model at σ = 10. The behavior of the SSIM values is almost consistent with that of the PSNR values; however, the SSIM value is composed of luminance, contrast, and structure comparisons, which makes a difference at some points.

The images are corrupted by the Rician distribution with σ = 10, σ = 15, and σ = 20, respectively, and the images restored by the above five algorithms for "Lena", "Barbara", "House", and "Monarch" are shown in Figures 2-5, respectively. We clearly find that much noise remains in the


Figure 3: Results of "Barbara" by different methods. (a) Degraded images (row 1 with σ = 10, row 2 with σ = 15, row 3 with σ = 20); row 4: patch with σ = 20. (b) MAP method (row 1: λ = 22, row 2: λ = 30, row 3: λ = 37); row 4: patch with σ = 20. (c) GTV method (row 1: λ = 23, row 2: λ = 30, row 3: λ = 37); row 4: patch with σ = 20. (d) CZ method (row 1: λ = 0.045, row 2: λ = 0.035, row 3: λ = 0.03); row 4: patch with σ = 20. (e) NLM method (rows 1-3 are the denoised images for σ = 10, 15, 20; row 4: patch with σ = 20). (f) Ours (row 1: λ = 0.01, row 2: λ = 0.02, row 3: λ = 0.02); row 4: patch with σ = 20.

Figure 4: Results of "House" by different methods. (a) Degraded images (row 1 with σ = 10, row 2 with σ = 15, row 3 with σ = 20). (b) MAP method (row 1: λ = 14, row 2: λ = 19, row 3: λ = 24). (c) GTV method (row 1: λ = 14, row 2: λ = 19, row 3: λ = 24). (d) CZ method (row 1: λ = 0.075, row 2: λ = 0.055, row 3: λ = 0.045). (e) NLM method (rows 1-3 are the denoised images for σ = 10, 15, 20; row 4: patch with σ = 20). (f) Ours (row 1: λ = 0.01, row 2: λ = 0.5, row 3: λ = 1.0).


Figure 5: Results of "Monarch" by different methods. (a) Degraded images (row 1 with σ = 10, row 2 with σ = 15, row 3 with σ = 20). (b) MAP method (row 1: λ = 15, row 2: λ = 20, row 3: λ = 30). (c) GTV method (row 1: λ = 15, row 2: λ = 20, row 3: λ = 30). (d) CZ method (row 1: λ = 0.065, row 2: λ = 0.05, row 3: λ = 0.04). (e) NLM method (rows 1-3 are the denoised images for σ = 10, 15, 20; row 4: patch with σ = 20). (f) Ours (row 1: λ = 0.01, row 2: λ = 0.02, row 3: λ = 0.02).

Figure 6: Results of "Brain" with σ = 10 by different methods. (a) is the original "Brain" image and (g) is the degraded image with σ = 10; (b)-(f) are the denoised images of the MAP method (λ = 17), GTV method (λ = 15), CZ method (λ = 0.065), NLM method, and ours (λ = 0.01), and (h)-(l) are the residuals of those methods, respectively.

results of the MAP, GTV, and CZ methods. For example, in row 4 of Figure 3 we can see from the background that the "Barbara" image restored by our method is clearer than those restored by the other methods, and the textures of Barbara's trousers and scarf are kept better in the image restored by our method. The images restored by our method also preserve more significant details than the MAP, GTV, CZ, and NLM methods on the hat details of the "Lena" image. What is more, the flowers at the lower left of "Monarch" also indicate that our method is superior to the other methods, because our method is a patch-based method, which is a different framework from TV-based methods (i.e., the MAP method and the CZ method). The images obtained from our method provide smoother regions and better shape preservation (e.g., the backgrounds in "Lena" and "Barbara").

Figures 6-11 show the results of experiments on the "Brain" and "Mouse" images. In particular, for the "Brain" and "Mouse" images we not only present the recovered images but also


Figure 7: Results of "Brain" with σ = 15 by different methods. (a) is the original "Brain" image and (g) is the degraded image with σ = 15; (b)-(f) are the denoised images of the MAP method (λ = 25), GTV method (λ = 22), CZ method (λ = 0.05), NLM method, and ours (λ = 0.01), and (h)-(l) are the residuals of those methods, respectively.

Figure 8: Results of "Brain" with σ = 20 by different methods. (a) is the original "Brain" image and (g) is the degraded image with σ = 20; (b)-(f) are the denoised images of the MAP method (λ = 35), GTV method (λ = 22), CZ method (λ = 0.045), NLM method, and ours (λ = 0.01), and (h)-(l) are the residuals of those methods, respectively.

use residuals to make a comparison with the original images. The first row shows the original image and the images recovered by the MAP, GTV, CZ, and NLM methods and our method; the second row shows the degraded image with the corresponding σ and the residuals of those methods, respectively. In Figures 6-8, the images recovered by the MAP, GTV, CZ, and NLM methods are still unclear, so we can conclude, both visually and from the residuals, that the images recovered by our method are the best. As for the "Mouse" image, the recovered results are very similar; in this case we use the residuals to see which result is better. In general, the blurrier the residual is, the better the corresponding recovered image is. We can find that the outline in the residual of our method is the blurriest.

In conclusion, the results of our numerical experiments demonstrate that our proposed method performs better than the MAP, GTV, CZ, and NLM methods.

4 Conclusion

In this paper, we proposed a new effective model via a learned dictionary for denoising images with Rician noise. More specifically, based on dictionary training, we add a dictionary penalty term to the original nonconvex MAP model to establish a new denoising model, and we develop a two-step algorithm to solve it. We also carry out


Figure 9: Results of "Mouse" with σ = 10 by different methods. (a) is the original "Mouse" image and (g) is the degraded image with σ = 10; (b)-(f) are the denoised images of the MAP method (λ = 17), GTV method (λ = 17), CZ method (λ = 0.06), NLM method, and ours (λ = 0.01), and (h)-(l) are the residuals of those methods, respectively.

Figure 10: Results of "Mouse" with σ = 15 by different methods. (a) is the original "Mouse" image and (g) is the degraded image with σ = 15; (b)-(f) are the denoised images of the MAP method (λ = 24), GTV method (λ = 24), CZ method (λ = 0.05), NLM method, and ours (λ = 0.01), and (h)-(l) are the residuals of those methods, respectively.

experiments on various images to demonstrate the effectiveness of our model. The numerical experiments show that our proposed model is promising in denoising images with Rician noise.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest.

Acknowledgments

The authors would like to thank one of the authors of [32] for providing the source code of the CZ method. This work was supported in part by the National Natural Science Foundation of China under Grants 11871348, 61872429, and 61373087; by the Natural Science Foundation of Guangdong, China, under Grants 2015A030313550 and 2015A030313557; by the HD Video R&D Platform for Intelligent Analysis and Processing in Guangdong Engineering Technology Research Centre of Colleges and Universities (no. GCZX-A1409); by the Natural Science Foundation of Shenzhen under Grant JCYJ20170818091621856; and by the Guangdong Key


Figure 11: Results of "Mouse" with σ = 20 by different methods. (a) is the original "Mouse" image and (g) is the degraded image with σ = 20; (b)-(f) are the denoised images of the MAP method (λ = 31), GTV method (λ = 32), CZ method (λ = 0.035), NLM method, and ours (λ = 0.01), and (h)-(l) are the residuals of those methods, respectively.

Laboratory of Intelligent Information Processing, Shenzhen University, China (518060).

References

[1] J.-F. Cai, B. Dong, and Z. Shen, "Image restoration: a wavelet frame based model for piecewise smooth functions and beyond," Applied and Computational Harmonic Analysis, vol. 41, no. 1, pp. 94–138, 2016.
[2] R. H. Chan and J. Ma, "A multiplicative iterative algorithm for box-constrained penalized likelihood image restoration," IEEE Transactions on Image Processing, vol. 21, no. 7, pp. 3168–3181, 2012.
[3] C. A. Micchelli, L. Shen, and Y. Xu, "Proximity algorithms for image models: denoising," Inverse Problems, vol. 27, no. 4, Article ID 045009, p. 30, 2011.
[4] Y. Wang, J. Yang, W. Yin, and Y. Zhang, "A new alternating minimization algorithm for total variation image reconstruction," SIAM Journal on Imaging Sciences, vol. 1, no. 3, pp. 248–272, 2008.
[5] Y. Dong, M. Hintermüller, and M. Neri, "An efficient primal-dual method for L1-TV image restoration," SIAM Journal on Imaging Sciences, vol. 2, no. 4, pp. 1168–1189, 2009.
[6] J. Lu, K. Qiao, L. Shen, and Y. Zou, "Fixed-point algorithms for a TVL1 image restoration model," International Journal of Computer Mathematics, vol. 95, no. 9, pp. 1829–1844, 2018.
[7] J. Lu, Z. Ye, and Y. Zou, "Huber fractal image coding based on a fitting plane," IEEE Transactions on Image Processing, vol. 22, no. 1, pp. 134–145, 2013.
[8] C. A. Micchelli, L. Shen, Y. Xu, and X. Zeng, "Proximity algorithms for the L1/TV image denoising model," Advances in Computational Mathematics, vol. 38, no. 2, pp. 401–426, 2013.
[9] J. Yang, Y. Zhang, and W. Yin, "An efficient TVL1 algorithm for deblurring multichannel images corrupted by impulsive noise," SIAM Journal on Scientific Computing, vol. 31, no. 4, pp. 2842–2865, 2009.
[10] L. Ma, L. Moisan, J. Yu, and T. Zeng, "A dictionary learning approach for Poisson image deblurring," IEEE Transactions on Medical Imaging, vol. 32, no. 7, pp. 1277–1289, 2013.
[11] Y. Xiao and T. Zeng, "Poisson noise removal via learned dictionary," in Proceedings of the 17th IEEE International Conference on Image Processing (ICIP), pp. 1177–1180, 2010.
[12] H. Zhang, Y. Dong, and Q. Fan, "Wavelet frame based Poisson noise removal and image deblurring," Signal Processing, vol. 137, pp. 363–372, 2017.
[13] Y. Dong and T. Zeng, "A convex variational model for restoring blurred images with multiplicative noise," SIAM Journal on Imaging Sciences, vol. 6, no. 3, pp. 1598–1625, 2013.
[14] Y. Huang, M. K. Ng, and Y. Wen, "A new total variation method for multiplicative noise removal," SIAM Journal on Imaging Sciences, vol. 2, no. 1, pp. 20–40, 2009.
[15] Y.-M. Huang, L. Moisan, M. K. Ng, and T. Zeng, "Multiplicative noise removal via a learned dictionary," IEEE Transactions on Image Processing, vol. 21, no. 11, pp. 4534–4543, 2012.
[16] Z. Jin and X. Yang, "A variational model to remove the multiplicative noise in ultrasound images," Journal of Mathematical Imaging and Vision, vol. 39, no. 1, pp. 62–74, 2011.
[17] M. Kang, S. Yun, and H. Woo, "Two-level convex relaxed variational model for multiplicative denoising," SIAM Journal on Imaging Sciences, vol. 6, no. 2, pp. 875–903, 2013.
[18] J. Lu, L. Shen, C. Xu, and Y. Xu, "Multiplicative noise removal in imaging: an exp-model and its fixed-point proximity algorithm," Applied and Computational Harmonic Analysis, vol. 41, no. 2, pp. 518–539, 2016.
[19] J. Lu, L. Shen, C. Xu, and Y. Xu, "Multiplicative noise removal with a sparsity-aware optimization model," Inverse Problems and Imaging, vol. 11, no. 6, pp. 949–974, 2017.
[20] J. Lu, Z. Yang, L. Shen, Z. Lu, H. Yang, and C. Xu, "A framelet algorithm for de-blurring images corrupted by multiplicative noise," Applied Mathematical Modelling, vol. 62, pp. 51–61, 2018.
[21] J. Lu, Y. Chen, Y. Zou, and L. Shen, "A new total variation model for restoring blurred and speckle noisy images," International Journal of Wavelets, Multiresolution and Information Processing, vol. 15, no. 2, 19 pages, 2017.


[22] P. Perona and J. Malik, "Scale-space and edge detection using anisotropic diffusion," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, no. 7, pp. 629–639, 1990.
[23] G. Gerig, O. Kübler, R. Kikinis, and F. A. Jolesz, "Nonlinear anisotropic filtering of MRI data," IEEE Transactions on Medical Imaging, vol. 11, no. 2, pp. 221–232, 1992.
[24] S. Prima, S. P. Morrissey, and C. Barillot, "Non-local means variants for denoising of diffusion-weighted and diffusion tensor MRI," in Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 344–351, Springer, Berlin, Germany, 2007.
[25] J. V. Manjón, J. Carbonell-Caballero, J. J. Lull, G. García-Martí, L. Martí-Bonmatí, and M. Robles, "MRI denoising using Non-Local Means," Medical Image Analysis, vol. 12, no. 4, pp. 514–523, 2008.
[26] N. Wiest-Daesslé, S. Prima, P. Coupé, S. P. Morrissey, and C. Barillot, "Rician noise removal by non-local means filtering for low signal-to-noise ratio MRI: applications to DT-MRI," Medical Image Computing and Computer-Assisted Intervention, vol. 5242, no. 2, pp. 171–179, 2008.
[27] R. D. Nowak, "Wavelet-based Rician noise removal for magnetic resonance imaging," IEEE Transactions on Image Processing, vol. 8, no. 10, pp. 1408–1419, 1999.
[28] A. Foi, "Noise estimation and removal in MR imaging: the variance-stabilization approach," in Proceedings of the 8th IEEE International Symposium on Biomedical Imaging: From Nano to Macro (ISBI), pp. 1809–1814, IEEE, Chicago, IL, USA, 2011.
[29] J. C. Wood and K. M. Johnson, "Wavelet packet denoising of magnetic resonance images: importance of Rician noise at low SNR," Magnetic Resonance in Medicine, vol. 41, no. 3, pp. 631–635, 1999.
[30] F. Bowman, Introduction to Bessel Functions, Dover Publications, Mineola, NY, USA, 2012.
[31] P. Getreuer, M. Tong, and L. A. Vese, "A variational model for the restoration of MR images corrupted by blur and Rician noise," in Proceedings of the 7th International Conference on Advances in Visual Computing, Part I, Lecture Notes in Computer Science, pp. 686–698, Springer, Las Vegas, NV, USA, 2011.
[32] L. Chen and T. Zeng, "A convex variational model for restoring blurred images with large Rician noise," Journal of Mathematical Imaging and Vision, vol. 53, no. 1, pp. 92–111, 2015.
[33] M. Aharon, M. Elad, and A. M. Bruckstein, "K-SVD: an algorithm for designing overcomplete dictionaries for sparse representation," IEEE Transactions on Signal Processing, vol. 54, no. 11, pp. 4311–4322, 2006.
[34] M. Elad and M. Aharon, "Image denoising via sparse and redundant representations over learned dictionaries," IEEE Transactions on Image Processing, vol. 15, no. 12, pp. 3736–3745, 2006.
[35] G. Aubert and J. Aujol, "A variational approach to removing multiplicative noise," SIAM Journal on Applied Mathematics, vol. 68, no. 4, pp. 925–946, 2008.
[36] Q. Liu, S. Wang, K. Yang, J. Luo, Y. Zhu, and D. Liang, "Highly undersampled magnetic resonance image reconstruction using two-level Bregman method with dictionary updating," IEEE Transactions on Medical Imaging, vol. 32, no. 7, pp. 1290–1301, 2013.
[37] Q. Liu, S. Wang, L. Ying, X. Peng, Y. Zhu, and D. Liang, "Adaptive dictionary learning in sparse gradient domain for image recovery," IEEE Transactions on Image Processing, vol. 22, no. 12, pp. 4652–4663, 2013.
[38] H. Lu, J. Wei, Q. Liu, Y. Wang, and X. Deng, "A dictionary learning method with total generalized variation for MRI reconstruction," International Journal of Biomedical Imaging, vol. 2016, Article ID 7512471, 13 pages, 2016.
[39] A. Chambolle and T. Pock, "A first-order primal-dual algorithm for convex problems with applications to imaging," Journal of Mathematical Imaging and Vision, vol. 40, no. 1, pp. 120–145, 2011.
[40] E. Esser, X. Zhang, and T. F. Chan, "A general framework for a class of first order primal-dual algorithms for convex optimization in imaging science," SIAM Journal on Imaging Sciences, vol. 3, no. 4, pp. 1015–1046, 2010.
[41] N. Komodakis and J.-C. Pesquet, "Playing with duality: an overview of recent primal-dual approaches for solving large-scale optimization problems," IEEE Signal Processing Magazine, vol. 32, no. 6, pp. 31–54, 2015.
[42] A. Chambolle, "An algorithm for total variation minimization and applications," Journal of Mathematical Imaging and Vision, vol. 20, no. 1-2, pp. 89–97, 2004.
[43] L. Ambrosio, N. Fusco, and D. Pallara, Functions of Bounded Variation and Free Discontinuity Problems, Oxford University Press, London, 2000.
[44] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: from error visibility to structural similarity," IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600–612, 2004.
[45] J. Gibson and A. Bovik, Handbook of Image and Video Processing, Academic Press, 2000.

Hindawiwwwhindawicom Volume 2018

MathematicsJournal of

Hindawiwwwhindawicom Volume 2018

Mathematical Problems in Engineering

Applied MathematicsJournal of

Hindawiwwwhindawicom Volume 2018

Probability and StatisticsHindawiwwwhindawicom Volume 2018

Journal of

Hindawiwwwhindawicom Volume 2018

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawiwwwhindawicom Volume 2018

OptimizationJournal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Engineering Mathematics

International Journal of

Hindawiwwwhindawicom Volume 2018

Operations ResearchAdvances in

Journal of

Hindawiwwwhindawicom Volume 2018

Function SpacesAbstract and Applied AnalysisHindawiwwwhindawicom Volume 2018

International Journal of Mathematics and Mathematical Sciences

Hindawiwwwhindawicom Volume 2018

Hindawi Publishing Corporation httpwwwhindawicom Volume 2013Hindawiwwwhindawicom

The Scientific World Journal

Volume 2018

Hindawiwwwhindawicom Volume 2018Volume 2018

Numerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisAdvances inAdvances in Discrete Dynamics in

Nature and SocietyHindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom

Dierential EquationsInternational Journal of

Volume 2018

Hindawiwwwhindawicom Volume 2018

Decision SciencesAdvances in

Hindawiwwwhindawicom Volume 2018

AnalysisInternational Journal of

Hindawiwwwhindawicom Volume 2018

Stochastic AnalysisInternational Journal of

Submit your manuscripts atwwwhindawicom

Page 3: ResearchArticle Rician Noise Removal via a Learned Dictionary

Mathematical Problems in Engineering 3

propose a new model for Rician noise removal which is asfollows

arg min119909isin119878(Ω)

12057321205902 int

Ω1199092119889119905 minus 120573int

Ωlog 1198680 (119910119909

1205902 )119889119905+ 1

2 sum10038171003817100381710038171003817D120572119894119895 minusR119894119895119909100381710038171003817100381710038172 + sum120583119894119895 10038171003817100381710038171003817120572119894119895100381710038171003817100381710038170+ 120582int

Ω|119863119909| 119889119905

(6)

where 119878(Ω) fl V isin 119861119881(Ω) 0 le V le 255 and intΩ|119863119909|119889119905

is the total variation (TV) of 119909 The first and second termsof our model stem from the MAP model which is data-fidelity caused by the statistical properties of Rician noiseand the third and forth terms are inspired by the sparserepresentation The last TV regularization term can make thedenoised image smooth

In model (6) there are three unknown variables thenoise-free image 119909 that we need to solve the dictionary Dand the sparse coefficients 120572119894119895 Similar to [33 34] in order tosolve problem (6) effectively and efficiently here we have thefollowing two-step algorithm

(1) Based on the degraded image 119910 we give the initialdictionary and get the sparse representation coefficients 120572119894119895then use 120572119894119895 to learn dictionary D and update correspondingcoefficients 120572119894119895

(2) Use the primal-dual algorithm to get the recoveredimage that we wanted

21 Dictionary Learning In the first step of our algorithmwe will use the degraded image 119910 to train a dictionary and allthe image patches can be sparsely represented by the traineddictionary with the corresponding coefficients 120572119894119895 The wholeprocess is just using the orthogonal matching pursuit (OMP)and K-SVD algorithms [33 34] and the procedure is to solvethe following optimization problem

D 119894119895 = arg minD120572119894119895

12 sum(119894119895)

10038171003817100381710038171003817R119894119895119909 minus D120572119894119895100381710038171003817100381710038172

+ sum(119894119895)

120583119894119895 10038171003817100381710038171003817120572119894119895100381710038171003817100381710038170 (7)

For solving the difficult problem we have the followingspecific steps

(1) Initialization Set 119909 = 119910 D = overcomplete DCTdictionary

(2) Iteration 119865119900119903 119898 = 0 119905119900 119873 119889119900(119886) Given 119909 and D we get the sparse representationcoefficients 120572119894119895 through solving the following problem

119894119895 = argmin120572119894119895

12sum(119894119895)

10038171003817100381710038171003817R119894119895119909 minus D120572119894119895100381710038171003817100381710038172 + sum(119894119895)

120583119894119895 10038171003817100381710038171003817120572119894119895100381710038171003817100381710038170 (8)

and we can efficiently and effectively solve model (8) by usingthe orthogonal matching pursuit (OMP) method [33 34](119887) Given 119909 and 120572119894119895 we can update the dictionary D =[1198891 1198892 119889119896] column by column [33 34] For each column119889119897 119897 = 1 119896 we update it as follows(119894) For those patches represented by119889119897 wewrite down anddenote it by 120577119897 = (119894 119895) | 120572119894119895(119897) = 0

(119894119894) For each index (119894 119895) isin 120577119897 we compute the correspond-ing representation error through

119890119897119894119895 = R119894119895119909 minus sum119898 =119897

119889119898120572119894119895 (119898) (9)

and then we use the columns 119890119897119894119895(119894119895)isin120577119897 to define a matrix E119897(119894119894119894) For the matrix E119897 we apply the singular valuedecomposition (SVD) into E119897 and get E119897 = UΔVT

Let the first column of U be 119889119897 to update 119889119897 and multiplythe first column of V by Δ(1 1) to update 120572119894119895(119897)(119894119895)isin120577119897

End forHere the dictionary training and sparse representation

are completed

22 Primal-Dual Algorithm After the first step of our algo-rithm we get the spare dictionary representation D120572119894119895 fromeach patch R119894119895119910 and we now minimize (6) with respect to 119909that is

arg min119909isin119878(Ω)

12057321205902 int

Ω1199092119889119905 minus 120573int

Ωlog 1198680 (119910119909

1205902 )119889119905+ 1

2 sum10038171003817100381710038171003817D120572119894119895 minus R119894119895119909100381710038171003817100381710038172 + 120582intΩ|119863119909| 119889119905

(10)

Proposition 1 Let 119910 be a bounded function such thatinfΩ 119910 gt 0 then the objective function in (10) is strictly convexwith the constraint 1205904120573 gt (supΩ 119910)2Proof Using the notations 119882 = sum(119894119895)isinΓR

119879119894119895R119894119895 and 119872 =

sum(119894119895)isinΓR119879119894119895D120572119894119895 model (10) can be rewritten as

arg min119909isin119878(Ω)

12057321205902 ⟨119909 119909⟩ minus 120573⟨log 1198680 (119910119909

1205902 ) 1Ω⟩+ 1

2 ⟨119882119909 119909⟩ minus ⟨119872 119909⟩ + 120582 119909119879119881 (11)

Also we define 119892(119909) = (12057321205902)⟨119909 119909⟩minus120573⟨log 1198680(1199101199091205902) 1Ω⟩+(12)⟨119882119909 119909⟩ minus ⟨119872 119909⟩(119909 gt 0) and ℎ(119909) = log 1198680(119909)(119909 gt0) According to [32] we have 0 lt ℎ10158401015840(119909) = 1 minus(2119909)(1198681(119909)1198680(119909)) minus (1198681(119909)1198680(119909))2 lt 1 Then 11989210158401015840(119909) = 1205731205902 minus120573[log 1198680(1199101199091205902)]10158401015840 + 119882 gt 1205731205902 minus 120573(11991021205904) + 119882 Due to1 le 119882119894119895 le 119899 11989210158401015840(119909) gt 0 holds when 1205904120573 gt (supΩ 119910)2and 119892(119909) is strictly convex We know that 119909 997888rarr 119909119879119881 isconvex so the objective function (11) is strictly convex thatis function (10) is strictly convex

Referring to Proposition 1 we know that model (10) isconvex then we can apply the primal-dual algorithm forsolving the minimization problem which enjoys nice saddle-point structures and has good performance on nonsmooth

4 Mathematical Problems in Engineering

1 Let 120581 and 120591 be given Set 1199090 = 119910 1199090 = 119910 and 1199010 = (0 0)119879 isin R21198892 Update 119901119899 119909119899 119909119899 iteratively as follows

(a) 119901119899+1 = argmax119901isin119875

120582⟨119909119899 div 119901⟩ minus 12120581119901 minus 11990111989922

(b) 119909119899+1 = arg min119909isin119878(Ω)

119865(119909) minus 120582⟨119909 div 119901119899+1⟩ + 12120591119909 minus 11990911989922

(c) 119909119899+1 = 2119909119899+1 minus 119909119899until some stop criterion is satisfied

Algorithm 1

convex optimization [39ndash41] Moreover we discuss a biascorrection technique and the convergence of our algorithm

Briefly we define 119865(119909) fl (12057321205902)1199092 minus 120573⟨log 1198680(1199101199091205902) 1⟩ + (12)sum D120572119894119895 minus R1198941198951199092 and model (10) can betranslated to the following format

min0le119909le255

119865 (119909) + 120582 nabla1199091 (12)

Because the 119879119881-model has the duality property we canchange the primal-dual formulation of optimization problemof (12) into the following format

max119901isin119875

min0le119909le255

119865 (119909) minus 120582 ⟨119909 div 119901⟩ (13)

where 119875 = 119901 isin R2119889 max119894isin1119889|(1199012119894 + 1199012

119894+119889)12| le 1Here 119901 is the dual variable and div = minusnabla119879 The specific

algorithm is as in Algorithm 1In particular the solution of the dual problem 2(a) can be

expressed as

119901119899+1119894 = 1205871 (120582120581 (nabla119909119899)119894 + 119901119899

119894 ) 119894 = 1 2119889 (14)

where 1205871(119902119894) = 119902119894max(1 |119902119894|) 1205871(119902119889+119894) = 119902119889+119894max(1 |119902119894|)and |119902119894| = radic1199022119894 + 1199022119894+119889 119894 = 1 119889 Moreover the solution ofthe primal problem2(b) can be obtained byNewtonrsquosmethod

Inspired by [39] we provide a bias correction techniquefor (13) so that the mean of the restored image 119909lowast equals thatof the observed image 119910 In numerical practice the followingstep is implemented in order to preserve the mean of 119910

119909119899119894 fl sum119889119895=1 119910119895

sum119889119895=1max (119909119899119895 0) max (119909119899119894 0) 119894 = 1 119889 (15)

after updating 119909119899 by Newtonmethod which also ensures that119909119899 ge 0Referring to Theorem 1 in [42] we will get the conver-

gence properties of our algorithm as follows

Proposition 2 Suppose that 1205811205911205822nabla2 lt 1 en the iterates(119909119899 119901119899) of our algorithm converge to a saddle point of (13)

Inspired by [42] the condition of convergence of ouralgorithm can be simplified into 1205811205911205822 lt 18 based on thefact that nabla2 le 8 with the unit spacing size between pixelsFor simplicity in our numerical experiments we let 120581 = 8120582

and 120591 = 0015120582 and just adjust 120582 and the convergence of thealgorithm is satisfied

Before we present numerical results here we give thebasic property of model (10) rewritten in the continuoussettings that is

inf119909isin119878(Ω)

12057321205902 int

Ω1199092119889119905 minus 120573int

Ωlog 1198680 (119910119909

1205902 )119889119905+ 1

2 sum10038171003817100381710038171003817D120572119904 minusRΩ119904119909100381710038171003817100381710038172 + 120582 119909119879119881

(16)

where (Ω119904)119904isin119868 is a finite set of small patches covering Ω andRΩ119904

is the restriction onΩ119904

Theorem 3 Let Ω be a bounded open subset of R2 withLipschitz boundary Let 119910 be a positive bounded function For119909 satisfying 119909 isin 119861119881(Ω) 119909 gt 0 let

119864 (119909) = 12057321205902 int

Ω1199092119889119905 minus 120573int

Ωlog 1198680 (119910119909

1205902 ) 119889119905+ 1

2 sum10038171003817100381710038171003817D120572119904 minusRΩ119904119909100381710038171003817100381710038172 + 120582 119909119879119881

(17)

where 120582 gt 0 Suppose that (Ω119904)119904isin119868 is a finite local coverage ofΩ andsuminfin

119904isin119868 1198631205721199042 lt +infin en 119864(119909) has a unique minimizerwhen 1205904120573 gt (supΩ 119910)2Proof Let119864(119909) = 120582119909119879119881+119869(119909) and 119869(119909) = (12057321205902) int

Ω1199092119889119905minus

120573intΩlog 1198680(1199101199091205902)119889119905 + (12)sum D120572119894119895 minus R1198941198951199092 By Proposi-

tion 1 we know that 119869(119909) is convex Thus 119864(119909) is convexand bounded from below Consider a minimizer sequence(119909119899) for 119864(119909) Then 119909119899119879119881 and 119869(119909119899) are bounded fromabove Hence (119909119899) is bounded in 119861119881(Ω) As 119861119881 is compactin 1198711(Ω) there is 1199090 isin 119861119881 such that a subsequence (119909119899119896)converges 1199090 in 1198711(Ω) and 119861119881(Ω) and we may assumethat 119909119899119896 997888rarr 1199090 pointwise almost everywhere As the 119861119881norm is lsc 1199090119879119881 le lim inf 119909119899119896119879119881 Since 119869(119909119899119896) isbounded from below by Fatoursquos Lemma we have 119869(1199090) lelim inf 119869(119909119899119896) Thus 119864(1199090) le lim inf 119864(119909119899119896) = inf119861119881(Ω)119864(119909)and 1199090 minimizes 119864(119909) Consequently 119864(119909) admits at leastone minimizer 1199090 119864(119909) is strictly convex function hence119864(119909) admits a unique minimizer

Mathematical Problems in Engineering 5

Table 1 Parameter 120582 values for all testing algorithms

Images 120590 = 10 120590 = 15 120590 = 20MAP GTV CZ Algorithm 1 (120573=01) MAP GTV CZ Algorithm 1 (120573=01) MAP GTV CZ Algorithm 1 (120573=1)

Lena 15 15 007 001 20 20 0055 002 25 25 0045 002Barbara 22 23 0045 001 30 30 491 002 37 37 003 002House 14 14 0075 001 19 19 100 05 24 24 0045 10Monarch 15 15 0065 001 20 20 005 002 30 30 004 002Brain 17 15 0065 001 25 22 005 001 35 22 0045 001Mouse 17 17 006 001 24 24 005 001 31 32 0035 001

3 Numerical Experiments

In this section we will present our numerical experimentsto evaluate the approximation accuracy and computationalefficiency of our proposed algorithm We compare ourmethod for the denoising cases with the MAP model (2) theGTV model [31] the CZ model [32] and the NLM model[25] Here we have a brief overview of the models that wecompared

GTV model

argmin119909

120582intΩ119866120590 (119909 119910) + int

Ω|119863119909| 119889119905 (18)

119866120590 (119909 119910) = 119867120590 (119909) if 119909 ge 119888120590119867120590 (119888120590) + 1198671015840

120590 (119888120590) (119909 minus 119888120590) if 119909 le 1198881205901198671015840

120590 (119909) = 1199091205902 minus 119910

1205902119861(1199091199101205902 )

119861 (119904) equiv 1198681 (119904)1198680 (119904)asymp 1199043 + 09500371199042 + 238944119904

1199043 + 1489371199042 + 257541119904 + 465314

(19)

where 119888 = 08426 1198681 is the modified Bessel function of thefirst kind with order one [30] Because it is an approximateequality we can not write down the final explicit restorationmodel

CZ model

arg min119909isin119878(Ω)

121205902 int

Ω1199092119889119905 minus int

Ωlog 1198680 (119909119910

1205902 )119889119905+ 1

120590 intΩ(radic119909 minus radic119910)2 119889119905 + 120582int

Ω|119863119909| 119889119905

(20)

where 119878(Ω) fl ] isin 119861119881(Ω) 0 le ] le 255 And because119861119881(Ω) [35 43] is the subspace of function 119909 isin 1198711(Ω) thefollowing quantity is finite

119869 (119909) = supintΩ119909 (119905) div (120585 (119905)) 119889119905 | 120585

isin 119862infin0 (ΩR2) 10038171003817100381710038171205851003817100381710038171003817119871infin(ΩR2) le 1

(21)

NLMmodelBefore introducing the NLM model we have a descrip-

tion of the sign that will be used For a user-defined radius

119877119904119894119898 we define a square neighborhood window centeredaround pixel 119894 as 119873119894 And a Gaussian weighted Euclidiandistance of all the pixels of each neighborhood is defined as

119871 (119898 119899) = 1198661205881003817100381710038171003817119910 (119873119898) minus 119910 (119873119899)10038171003817100381710038172119877119904119894119898 (22)

where 119866120588 is a normalized Gaussian weighting functionwith zero mean and 120588 standard deviation (usually set to1) By giving more weight to pixels near the center wecan use 119866120588 to penalize pixels far from the center of theneighborhood window And based on the similarity betweenthe neighborhoods119873119898 and119873119899 of pixels119898 and 119899 we calculatethe similarity 119891(119898 119899) as

119891 (119898 119899) = 1119862 (119898)119890minus119871(119898119899)ℎ

2 119862 (119898) = sum

forall119899

119890minus119871(119898119899)ℎ2 (23)

119862(119898) is the normalizing constant and ℎ is a exponentialdecay control parameter

Then given an image 119910 using the NLM method we cancalculate the filtered value at a point 119898 by the followingformula119873119871119872(119910 (119898)) = sum

forall119899isin119910

119891 (119898 119899) 119910 (119899)

0 le 119891 (119898 119899) le 1 sumforall119899isin119910

119891 (119898 119899) = 1 (24)

The parameter 120582 values of MAP model GTV modelCZ model and our proposed model are listed in Table 1In order to preserve the mean of the observed image thebias correction technique (15) is utilized for the CZ methodand our method All the experiments are performed underWindows 7 andMATLAB R2017a running on a PC equippedwith 290GHz CPU and 4G RAM

In our tests we choose the images "Lena" and "Barbara" with size 512 × 512, "House" and "Monarch" with size 256 × 256, "Brain" with size 181 × 217, and "Mouse intestine" (briefly, we call it "Mouse" throughout this paper) with size 248 × 254, which are shown in Figure 1. We evaluate the quality of the recovered images obtained from the various denoising algorithms by using the structural similarity index (SSIM) [44] and the peak signal-to-noise ratio (PSNR) [45], defined by

\[
\mathrm{PSNR}(x, \hat{x}) = 20 \log_{10}\left( \frac{255^{2}}{\|\hat{x} - x\|_{2}} \right), \tag{25}
\]


Table 2. The comparison of different denoising methods (PSNR in dB, SSIM, CPU time in seconds).

Images     Methods         σ = 10                    σ = 15                    σ = 20
                           PSNR   SSIM   Time        PSNR   SSIM   Time        PSNR   SSIM   Time
Lena       Noisy           28.15  0.874    -         24.65  0.773    -         22.18  0.680    -
           MAP             34.13  0.957  160.00      32.27  0.935  210.65      30.98  0.915  249.88
           GTV             34.10  0.957   13.38      32.21  0.935   13.38      30.90  0.915   13.35
           CZ              34.14  0.956    4.11      32.24  0.933    5.04      30.94  0.911    5.24
           NLM             34.82  0.961    4.55      32.74  0.939   41.47      31.00  0.914   41.16
           Algorithm 1     35.39  0.964  480.01      33.36  0.945  433.97      31.59  0.924  443.05
Barbara    Noisy           28.16  0.913    -         24.66  0.839    -         22.21  0.765    -
           MAP             31.10  0.950  207.92      28.55  0.913  245.68      26.90  0.876  285.53
           GTV             31.09  0.950   13.14      28.52  0.913   13.16      26.85  0.875   13.16
           CZ              31.07  0.948    4.41      28.49  0.910    4.91      26.83  0.873    5.73
           NLM             33.32  0.968    4.79      31.10  0.947   40.90      29.16  0.921   40.00
           Algorithm 1     34.43  0.972  852.05      32.06  0.956  609.78      29.94  0.932  547.64
House      Noisy           28.15  0.605    -         24.63  0.445    -         22.14  0.346    -
           MAP             34.42  0.883   34.62      32.65  0.859   42.28      31.36  0.838   57.97
           GTV             34.33  0.882    3.43      32.46  0.858    3.35      31.08  0.839    3.35
           CZ              34.35  0.881    0.85      32.55  0.858    1.00      31.29  0.836    1.15
           NLM             35.08  0.886   10.22      33.23  0.846   10.22      31.39  0.802   10.14
           Algorithm 1     35.99  0.902  253.91      34.25  0.876  183.07      32.74  0.858  168.34
Monarch    Noisy           28.13  0.735    -         24.67  0.611    -         22.19  0.517    -
           MAP             33.04  0.934   52.41      30.54  0.907   68.69      29.02  0.865   83.11
           GTV             32.99  0.934    3.44      30.44  0.907    3.43      28.87  0.865    3.42
           CZ              33.06  0.936    0.99      30.57  0.905    1.20      29.04  0.885    1.39
           NLM             32.56  0.940   10.08      30.75  0.908   10.15      29.24  0.871   10.16
           Algorithm 1     33.87  0.951  803.41      31.35  0.930  487.84      29.60  0.909  349.56
Brain      Noisy           27.16  0.657    -         23.57  0.549    -         21.13  0.467    -
           MAP             32.95  0.835   21.23      30.08  0.769   21.39      27.97  0.719   23.12
           GTV             33.63  0.952    2.57      31.09  0.917    2.14      29.41  0.889    2.17
           CZ              33.67  0.950    0.56      31.10  0.919    0.64      29.21  0.891    0.69
           NLM             35.15  0.962    6.16      32.07  0.925    6.12      29.72  0.885    5.99
           Algorithm 1     35.40  0.967  194.60      32.47  0.942  129.26      30.07  0.911  108.05
Mouse      Noisy           28.18  0.828    -         24.67  0.706    -         22.13  0.592    -
           MAP             31.99  0.930   52.87      29.71  0.890   65.66      28.01  0.847   63.97
           GTV             31.94  0.929    3.92      29.60  0.886    3.91      27.89  0.843    4.29
           CZ              31.95  0.929    1.17      29.58  0.887    1.31      27.86  0.842    1.52
           NLM             32.49  0.935    9.69      29.84  0.888    9.72      27.55  0.833    9.72
           Algorithm 1     33.71  0.952  700.02      30.63  0.914  397.23      28.12  0.865  295.06

where x and x̂ denote the original image and the recovered image, respectively.
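In the experiments below, each test image is first degraded by Rician noise of level σ and then recovered by the different methods. For completeness, a minimal Python sketch of this degradation step and of a standard PSNR/SSIM evaluation (via scikit-image, peak value 255) is given here; the library PSNR uses the usual mean-squared-error normalization, which may differ slightly from the scaling printed in (25), and the function names and random seed are our own choices.

import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def add_rician_noise(x, sigma, seed=0):
    # y = sqrt((x + n1)^2 + n2^2) with n1, n2 ~ N(0, sigma^2): the standard Rician degradation.
    rng = np.random.default_rng(seed)
    n1 = rng.normal(0.0, sigma, x.shape)
    n2 = rng.normal(0.0, sigma, x.shape)
    return np.sqrt((x + n1)**2 + n2**2)

def evaluate(x, x_hat):
    # PSNR (dB) and SSIM between the original x and the recovered x_hat, values in [0, 255].
    psnr = peak_signal_noise_ratio(x.astype(float), x_hat.astype(float), data_range=255)
    ssim = structural_similarity(x.astype(float), x_hat.astype(float), data_range=255)
    return psnr, ssim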

First, we choose the test image "Lena", degrade it by Rician noise with σ = 10, and use the different methods to recover the noisy image. The numerical results are enumerated in Table 2, where the first column gives the test image names, the second column gives the method names (here, Algorithm 1 denotes our method presented in Section 2), and columns 3-5 (resp. 6-8 and 9-11) are the PSNR (dB), SSIM, and CPU time (s) for σ = 10 (resp. σ = 15, σ = 20). We observe that the PSNR values of the images restored by Algorithm 1 are higher than those of the MAP, GTV, CZ, and NLM methods. Moreover, one can easily see that the PSNR values of the images restored by Algorithm 1 are always the highest among the five methods for all images. As far as "Barbara" is concerned, the PSNR value of our method is more than 3


Figure 1. The original images. (a) Lena, (b) Barbara, (c) House, (d) Monarch, (e) Brain, (f) Mouse.

Figure 2. Results of "Lena" by different methods. (a) Degraded images (row 1: σ = 10; row 2: σ = 15; row 3: σ = 20). (b) MAP method (row 1: λ = 15; row 2: λ = 20; row 3: λ = 25). (c) GTV method (row 1: λ = 15; row 2: λ = 20; row 3: λ = 25). (d) CZ method (row 1: λ = 0.07; row 2: λ = 0.055; row 3: λ = 0.045). (e) NLM method (rows 1-3 are the denoised images for σ = 10, 15, 20; row 4: patch with σ = 20). (f) Ours (row 1: λ = 0.01; row 2: λ = 0.02; row 3: λ = 0.02).

dB higher than that of the CZ model at σ = 10. The behavior of the SSIM values is almost consistent with that of the PSNR values. However, the SSIM value is composed of a luminance comparison, a contrast comparison, and a structure comparison, which makes for differences at some points.
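To illustrate the three comparisons that SSIM combines, the following small Python sketch evaluates them globally on a pair of images; the actual index of [44] aggregates the same quantities over local windows, and the constants k1, k2 below are the common defaults, not values taken from this paper.

import numpy as np

def ssim_components(a, b, L=255, k1=0.01, k2=0.03):
    # Global (single-window) luminance, contrast, and structure comparisons.
    c1, c2 = (k1 * L)**2, (k2 * L)**2
    c3 = c2 / 2
    mu_a, mu_b = a.mean(), b.mean()
    sa, sb = a.std(), b.std()
    sab = ((a - mu_a) * (b - mu_b)).mean()
    luminance = (2 * mu_a * mu_b + c1) / (mu_a**2 + mu_b**2 + c1)
    contrast = (2 * sa * sb + c2) / (sa**2 + sb**2 + c2)
    structure = (sab + c3) / (sa * sb + c3)
    return luminance, contrast, structure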

The images are corrupted, respectively, by the Rician distribution with σ = 10, σ = 15, and σ = 20, and the restored images of the above five algorithms for the images "Lena", "Barbara", "House", and "Monarch" are shown in Figures 2-5, respectively. We clearly find that much noise remains in the


Figure 3. Results of "Barbara" by different methods. (a) Degraded images (row 1: σ = 10; row 2: σ = 15; row 3: σ = 20; row 4: patch with σ = 20). (b) MAP method (row 1: λ = 22; row 2: λ = 30; row 3: λ = 37; row 4: patch with σ = 20). (c) GTV method (row 1: λ = 23; row 2: λ = 30; row 3: λ = 37; row 4: patch with σ = 20). (d) CZ method (row 1: λ = 0.045; row 2: λ = 0.035; row 3: λ = 0.03; row 4: patch with σ = 20). (e) NLM method (rows 1-3 are the denoised images for σ = 10, 15, 20; row 4: patch with σ = 20). (f) Ours (row 1: λ = 0.01; row 2: λ = 0.02; row 3: λ = 0.02; row 4: patch with σ = 20).

Figure 4. Results of "House" by different methods. (a) Degraded images (row 1: σ = 10; row 2: σ = 15; row 3: σ = 20). (b) MAP method (row 1: λ = 14; row 2: λ = 19; row 3: λ = 24). (c) GTV method (row 1: λ = 14; row 2: λ = 19; row 3: λ = 24). (d) CZ method (row 1: λ = 0.075; row 2: λ = 0.055; row 3: λ = 0.045). (e) NLM method (rows 1-3 are the denoised images for σ = 10, 15, 20; row 4: patch with σ = 20). (f) Ours (row 1: λ = 0.01; row 2: λ = 0.5; row 3: λ = 1.0).


Figure 5. Results of "Monarch" by different methods. (a) Degraded images (row 1: σ = 10; row 2: σ = 15; row 3: σ = 20). (b) MAP method (row 1: λ = 15; row 2: λ = 20; row 3: λ = 30). (c) GTV method (row 1: λ = 15; row 2: λ = 20; row 3: λ = 30). (d) CZ method (row 1: λ = 0.065; row 2: λ = 0.05; row 3: λ = 0.04). (e) NLM method (rows 1-3 are the denoised images for σ = 10, 15, 20; row 4: patch with σ = 20). (f) Ours (row 1: λ = 0.01; row 2: λ = 0.02; row 3: λ = 0.02).

Figure 6. Results of "Brain" with σ = 10 by different methods. (a) is the original "Brain" image and (g) is the degraded image with σ = 10; (b)-(f) are the denoised images of the MAP method (λ = 17), GTV method (λ = 15), CZ method (λ = 0.065), NLM method, and ours (λ = 0.01); (h)-(l) are the residuals of those methods, respectively.

results of the MAP, GTV, and CZ methods. For example, in row 4 of Figure 3 we can see from the background that the "Barbara" image restored by our method is clearer than those restored by the other methods, and the textures of Barbara's trousers and scarf are better preserved in the image restored by our method. The images restored by our method also preserve more significant details than the MAP, GTV, CZ, and NLM methods on the hat tidbits of the "Lena" image. What is more, the flowers at the lower left of "Monarch" also indicate that our method is superior to the other methods, because our method is a patch-based method, which is a different framework from the TV-based methods (i.e., the MAP method and the CZ method). The images obtained from our method provide smoother regions and better shape preservation (e.g., the backgrounds in "Lena" and "Barbara").

Figures 6-11 are the results of experiments on the "Brain" and "Mouse" images. In particular, for the "Brain" and "Mouse" images we not only present the recovered images but also


Figure 7. Results of "Brain" with σ = 15 by different methods. (a) is the original "Brain" image and (g) is the degraded image with σ = 15; (b)-(f) are the denoised images of the MAP method (λ = 25), GTV method (λ = 22), CZ method (λ = 0.05), NLM method, and ours (λ = 0.01); (h)-(l) are the residuals of those methods, respectively.

Figure 8. Results of "Brain" with σ = 20 by different methods. (a) is the original "Brain" image and (g) is the degraded image with σ = 20; (b)-(f) are the denoised images of the MAP method (λ = 35), GTV method (λ = 22), CZ method (λ = 0.045), NLM method, and ours (λ = 0.01); (h)-(l) are the residuals of those methods, respectively.

use residuals to make a comparison with the original images. The first row shows the original image and the images recovered by the MAP method, GTV method, CZ method, NLM method, and our method. The second row shows the degraded image with the corresponding σ and the residuals of those methods, respectively. In Figures 6-8, the images recovered by the MAP, GTV, CZ, and NLM methods are still unclear, so we can conclude that the images recovered by our method are the best, judging both visually and from the residuals. As for the "Mouse" image, the recovered results are very similar; in this case we use the residuals to see which result is better. In general, the blurrier the residual is, the better the corresponding recovered image is. We can find that the outline of the residual produced by our method is the blurriest.
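As a rough way of quantifying this visual criterion, one can measure how much structure is left in a residual, for example via its mean absolute gradient. The following small Python sketch is only illustrative and is not part of the evaluation protocol used in this paper.

import numpy as np

def residual_structure(a, b):
    # Residual between two images (e.g., a recovered image and its reference) and its
    # mean absolute gradient; a smaller value indicates a blurrier, less structured residual.
    r = a.astype(float) - b.astype(float)
    gx = np.diff(r, axis=1)
    gy = np.diff(r, axis=0)
    return r, np.abs(gx).mean() + np.abs(gy).mean()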

In conclusion, the results of our numerical experiments demonstrate that our proposed method performs better than the MAP method, GTV method, CZ method, and NLM method.

4. Conclusion

In this paper we proposed a new effective model via a learned dictionary for denoising images with Rician noise. More specifically, based on dictionary training, we add a dictionary penalty term to the original nonconvex MAP model to establish a new denoising model and develop a two-step algorithm to solve it. We also carry out


Figure 9. Results of "Mouse" with σ = 10 by different methods. (a) is the original "Mouse" image and (g) is the degraded image with σ = 10; (b)-(f) are the denoised images of the MAP method (λ = 17), GTV method (λ = 17), CZ method (λ = 0.06), NLM method, and ours (λ = 0.01); (h)-(l) are the residuals of those methods, respectively.

Figure 10. Results of "Mouse" with σ = 15 by different methods. (a) is the original "Mouse" image and (g) is the degraded image with σ = 15; (b)-(f) are the denoised images of the MAP method (λ = 24), GTV method (λ = 24), CZ method (λ = 0.05), NLM method, and ours (λ = 0.01); (h)-(l) are the residuals of those methods, respectively.

experiments on various images to demonstrate the effectiveness of our model. The numerical experiments show that our proposed model is promising in denoising images with Rician noise.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest

Acknowledgments

The authors would like to thank one of the authors of [32] for providing the source code of the CZ method. This work was supported in part by the National Natural Science Foundation of China under Grants 11871348, 61872429, and 61373087; by the Natural Science Foundation of Guangdong, China, under Grants 2015A030313550 and 2015A030313557; by the HD Video R&D Platform for Intelligent Analysis and Processing in Guangdong Engineering Technology Research Centre of Colleges and Universities (no. GCZX-A1409); by the Natural Science Foundation of Shenzhen under Grant JCYJ20170818091621856; and by the Guangdong Key


Figure 11. Results of "Mouse" with σ = 20 by different methods. (a) is the original "Mouse" image and (g) is the degraded image with σ = 20; (b)-(f) are the denoised images of the MAP method (λ = 31), GTV method (λ = 32), CZ method (λ = 0.035), NLM method, and ours (λ = 0.01); (h)-(l) are the residuals of those methods, respectively.

Laboratory of Intelligent Information Processing, Shenzhen University, China (518060).

References

[1] J.-F. Cai, B. Dong, and Z. Shen, "Image restoration: a wavelet frame based model for piecewise smooth functions and beyond," Applied and Computational Harmonic Analysis, vol. 41, no. 1, pp. 94–138, 2016.
[2] R. H. Chan and J. Ma, "A multiplicative iterative algorithm for box-constrained penalized likelihood image restoration," IEEE Transactions on Image Processing, vol. 21, no. 7, pp. 3168–3181, 2012.
[3] C. A. Micchelli, L. Shen, and Y. Xu, "Proximity algorithms for image models: denoising," Inverse Problems, vol. 27, no. 4, Article ID 045009, p. 30, 2011.
[4] Y. Wang, J. Yang, W. Yin, and Y. Zhang, "A new alternating minimization algorithm for total variation image reconstruction," SIAM Journal on Imaging Sciences, vol. 1, no. 3, pp. 248–272, 2008.
[5] Y. Dong, M. Hintermuller, and M. Neri, "An efficient primal-dual method for l1-TV image restoration," SIAM Journal on Imaging Sciences, vol. 2, no. 4, pp. 1168–1189, 2009.
[6] J. Lu, K. Qiao, L. Shen, and Y. Zou, "Fixed-point algorithms for a TVL1 image restoration model," International Journal of Computer Mathematics, vol. 95, no. 9, pp. 1829–1844, 2018.
[7] J. Lu, Z. Ye, and Y. Zou, "Huber fractal image coding based on a fitting plane," IEEE Transactions on Image Processing, vol. 22, no. 1, pp. 134–145, 2013.
[8] C. A. Micchelli, L. Shen, Y. Xu, and X. Zeng, "Proximity algorithms for the L1/TV image denoising model," Advances in Computational Mathematics, vol. 38, no. 2, pp. 401–426, 2013.
[9] J. Yang, Y. Zhang, and W. Yin, "An efficient TVL1 algorithm for deblurring multichannel images corrupted by impulsive noise," SIAM Journal on Scientific Computing, vol. 31, no. 4, pp. 2842–2865, 2009.
[10] L. Ma, L. Moisan, J. Yu, and T. Zeng, "A dictionary learning approach for Poisson image deblurring," IEEE Transactions on Medical Imaging, vol. 32, no. 7, pp. 1277–1289, 2013.
[11] Y. Xiao and T. Zeng, "Poisson noise removal via learned dictionary," in Proceedings of the 17th IEEE International Conference on Image Processing (ICIP), pp. 1177–1180, 2010.
[12] H. Zhang, Y. Dong, and Q. Fan, "Wavelet frame based Poisson noise removal and image deblurring," Signal Processing, vol. 137, pp. 363–372, 2017.
[13] Y. Dong and T. Zeng, "A convex variational model for restoring blurred images with multiplicative noise," SIAM Journal on Imaging Sciences, vol. 6, no. 3, pp. 1598–1625, 2013.
[14] Y. Huang, M. K. Ng, and Y. Wen, "A new total variation method for multiplicative noise removal," SIAM Journal on Imaging Sciences, vol. 2, no. 1, pp. 20–40, 2009.
[15] Y.-M. Huang, L. Moisan, M. K. Ng, and T. Zeng, "Multiplicative noise removal via a learned dictionary," IEEE Transactions on Image Processing, vol. 21, no. 11, pp. 4534–4543, 2012.
[16] Z. Jin and X. Yang, "A variational model to remove the multiplicative noise in ultrasound images," Journal of Mathematical Imaging and Vision, vol. 39, no. 1, pp. 62–74, 2011.
[17] M. Kang, S. Yun, and H. Woo, "Two-level convex relaxed variational model for multiplicative denoising," SIAM Journal on Imaging Sciences, vol. 6, no. 2, pp. 875–903, 2013.
[18] J. Lu, L. Shen, C. Xu, and Y. Xu, "Multiplicative noise removal in imaging: an exp-model and its fixed-point proximity algorithm," Applied and Computational Harmonic Analysis, vol. 41, no. 2, pp. 518–539, 2016.
[19] J. Lu, L. Shen, C. Xu, and Y. Xu, "Multiplicative noise removal with a sparsity-aware optimization model," Inverse Problems and Imaging, vol. 11, no. 6, pp. 949–974, 2017.
[20] J. Lu, Z. Yang, L. Shen, Z. Lu, H. Yang, and C. Xu, "A framelet algorithm for de-blurring images corrupted by multiplicative noise," Applied Mathematical Modelling, vol. 62, pp. 51–61, 2018.
[21] J. Lu, Y. Chen, Y. Zou, and L. Shen, "A new total variation model for restoring blurred and speckle noisy images," International Journal of Wavelets, Multiresolution and Information Processing, vol. 15, no. 2, 19 pages, 2017.


[22] P. Perona and J. Malik, "Scale-space and edge detection using anisotropic diffusion," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, no. 7, pp. 629–639, 1990.
[23] G. Gerig, O. Kubler, R. Kikinis, and F. A. Jolesz, "Nonlinear anisotropic filtering of MRI data," IEEE Transactions on Medical Imaging, vol. 11, no. 2, pp. 221–232, 1992.
[24] S. Prima, S. P. Morrissey, and C. Barillot, "Non-local means variants for denoising of diffusion-weighted and diffusion tensor MRI," in Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 344–351, Springer, Berlin, Germany, 2007.
[25] J. V. Manjon, J. Carbonell-Caballero, J. J. Lull, G. Garcia-Marti, L. Marti-Bonmati, and M. Robles, "MRI denoising using Non-Local Means," Medical Image Analysis, vol. 12, no. 4, pp. 514–523, 2008.
[26] N. Wiest-Daessle, S. Prima, P. Coupe, S. P. Morrissey, and C. Barillot, "Rician noise removal by non-local means filtering for low signal-to-noise ratio MRI: applications to DT-MRI," Medical Image Computing and Computer-Assisted Intervention, vol. 5242, no. 2, pp. 171–179, 2008.
[27] R. D. Nowak, "Wavelet-based Rician noise removal for magnetic resonance imaging," IEEE Transactions on Image Processing, vol. 8, no. 10, pp. 1408–1419, 1999.
[28] A. Foi, "Noise estimation and removal in MR imaging: the variance-stabilization approach," in Proceedings of the 8th IEEE International Symposium on Biomedical Imaging: From Nano to Macro (ISBI), pp. 1809–1814, IEEE, Chicago, IL, USA, 2011.
[29] J. C. Wood and K. M. Johnson, "Wavelet packet denoising of magnetic resonance images: importance of Rician noise at low SNR," Magnetic Resonance in Medicine, vol. 41, no. 3, pp. 631–635, 1999.
[30] F. Bowman, Introduction to Bessel Functions, Dover Publications, Mineola, NY, USA, 2012.
[31] P. Getreuer, M. Tong, and L. A. Vese, "A variational model for the restoration of MR images corrupted by blur and Rician noise," in Proceedings of the 7th International Conference on Advances in Visual Computing, vol. Part I of Lecture Notes in Computer Science, pp. 686–698, Springer, Las Vegas, NV, USA, 2011.
[32] L. Chen and T. Zeng, "A convex variational model for restoring blurred images with large Rician noise," Journal of Mathematical Imaging and Vision, vol. 53, no. 1, pp. 92–111, 2015.
[33] M. Aharon, M. Elad, and A. M. Bruckstein, "K-SVD: an algorithm for designing overcomplete dictionaries for sparse representation," IEEE Transactions on Signal Processing, vol. 54, no. 11, pp. 4311–4322, 2006.
[34] M. Elad and M. Aharon, "Image denoising via sparse and redundant representations over learned dictionaries," IEEE Transactions on Image Processing, vol. 15, no. 12, pp. 3736–3745, 2006.
[35] G. Aubert and J. Aujol, "A variational approach to removing multiplicative noise," SIAM Journal on Applied Mathematics, vol. 68, no. 4, pp. 925–946, 2008.
[36] Q. Liu, S. Wang, K. Yang, J. Luo, Y. Zhu, and D. Liang, "Highly undersampled magnetic resonance image reconstruction using two-level Bregman method with dictionary updating," IEEE Transactions on Medical Imaging, vol. 32, no. 7, pp. 1290–1301, 2013.
[37] Q. Liu, S. Wang, L. Ying, X. Peng, Y. Zhu, and D. Liang, "Adaptive dictionary learning in sparse gradient domain for image recovery," IEEE Transactions on Image Processing, vol. 22, no. 12, pp. 4652–4663, 2013.
[38] H. Lu, J. Wei, Q. Liu, Y. Wang, and X. Deng, "A dictionary learning method with total generalized variation for MRI reconstruction," International Journal of Biomedical Imaging, vol. 2016, Article ID 7512471, 13 pages, 2016.
[39] A. Chambolle and T. Pock, "A first-order primal-dual algorithm for convex problems with applications to imaging," Journal of Mathematical Imaging and Vision, vol. 40, no. 1, pp. 120–145, 2011.
[40] E. Esser, X. Zhang, and T. F. Chan, "A general framework for a class of first order primal-dual algorithms for convex optimization in imaging science," SIAM Journal on Imaging Sciences, vol. 3, no. 4, pp. 1015–1046, 2010.
[41] N. Komodakis and J.-C. Pesquet, "Playing with duality: an overview of recent primal-dual approaches for solving large-scale optimization problems," IEEE Signal Processing Magazine, vol. 32, no. 6, pp. 31–54, 2015.
[42] A. Chambolle, "An algorithm for total variation minimization and applications," Journal of Mathematical Imaging and Vision, vol. 20, no. 1-2, pp. 89–97, 2004.
[43] L. Ambrosio, N. Fusco, and D. Pallara, Functions of Bounded Variation and Free Discontinuity Problems, Oxford University Press, London, 2000.
[44] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: from error visibility to structural similarity," IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600–612, 2004.
[45] J. Gibson and A. Bovik, Handbook of Image and Video Processing, Academic Press, 2000.



3 Numerical Experiments

In this section we will present our numerical experimentsto evaluate the approximation accuracy and computationalefficiency of our proposed algorithm We compare ourmethod for the denoising cases with the MAP model (2) theGTV model [31] the CZ model [32] and the NLM model[25] Here we have a brief overview of the models that wecompared

GTV model

argmin119909

120582intΩ119866120590 (119909 119910) + int

Ω|119863119909| 119889119905 (18)

119866120590 (119909 119910) = 119867120590 (119909) if 119909 ge 119888120590119867120590 (119888120590) + 1198671015840

120590 (119888120590) (119909 minus 119888120590) if 119909 le 1198881205901198671015840

120590 (119909) = 1199091205902 minus 119910

1205902119861(1199091199101205902 )

119861 (119904) equiv 1198681 (119904)1198680 (119904)asymp 1199043 + 09500371199042 + 238944119904

1199043 + 1489371199042 + 257541119904 + 465314

(19)

where 119888 = 08426 1198681 is the modified Bessel function of thefirst kind with order one [30] Because it is an approximateequality we can not write down the final explicit restorationmodel

CZ model

arg min119909isin119878(Ω)

121205902 int

Ω1199092119889119905 minus int

Ωlog 1198680 (119909119910

1205902 )119889119905+ 1

120590 intΩ(radic119909 minus radic119910)2 119889119905 + 120582int

Ω|119863119909| 119889119905

(20)

where 119878(Ω) fl ] isin 119861119881(Ω) 0 le ] le 255 And because119861119881(Ω) [35 43] is the subspace of function 119909 isin 1198711(Ω) thefollowing quantity is finite

119869 (119909) = supintΩ119909 (119905) div (120585 (119905)) 119889119905 | 120585

isin 119862infin0 (ΩR2) 10038171003817100381710038171205851003817100381710038171003817119871infin(ΩR2) le 1

(21)

NLMmodelBefore introducing the NLM model we have a descrip-

tion of the sign that will be used For a user-defined radius

119877119904119894119898 we define a square neighborhood window centeredaround pixel 119894 as 119873119894 And a Gaussian weighted Euclidiandistance of all the pixels of each neighborhood is defined as

119871 (119898 119899) = 1198661205881003817100381710038171003817119910 (119873119898) minus 119910 (119873119899)10038171003817100381710038172119877119904119894119898 (22)

where 119866120588 is a normalized Gaussian weighting functionwith zero mean and 120588 standard deviation (usually set to1) By giving more weight to pixels near the center wecan use 119866120588 to penalize pixels far from the center of theneighborhood window And based on the similarity betweenthe neighborhoods119873119898 and119873119899 of pixels119898 and 119899 we calculatethe similarity 119891(119898 119899) as

119891 (119898 119899) = 1119862 (119898)119890minus119871(119898119899)ℎ

2 119862 (119898) = sum

forall119899

119890minus119871(119898119899)ℎ2 (23)

119862(119898) is the normalizing constant and ℎ is a exponentialdecay control parameter

Then given an image 119910 using the NLM method we cancalculate the filtered value at a point 119898 by the followingformula119873119871119872(119910 (119898)) = sum

forall119899isin119910

119891 (119898 119899) 119910 (119899)

0 le 119891 (119898 119899) le 1 sumforall119899isin119910

119891 (119898 119899) = 1 (24)

The parameter 120582 values of MAP model GTV modelCZ model and our proposed model are listed in Table 1In order to preserve the mean of the observed image thebias correction technique (15) is utilized for the CZ methodand our method All the experiments are performed underWindows 7 andMATLAB R2017a running on a PC equippedwith 290GHz CPU and 4G RAM

In our tests we choose images ldquoLenardquo and ldquoBarbarardquo withsize of 512 times 512 ldquoHouserdquo and ldquoMonarchrdquo with size of 256 times256 ldquoBrainrdquo with size of 181 times 217 and ldquoMouse intestinerdquo(briefly we call it ldquoMouserdquo throughout this paper) with sizeof 248 times 254 which are shown in Figure 1 We evaluate thequality of recovered images obtained from various denoisingalgorithms by using the structural similarity index (SSIM)[44] and the peal-signal-to-noise ratio (PSNR) [45] definedby

PSNR (119909 119909) = 20 log10 ( 2552119909 minus 1199092) (25)

6 Mathematical Problems in Engineering

Table 2 The comparison of different denoising methods

Images Methods 120590 = 10 120590 = 15 120590 = 20PSNR SIM Time PSNR SSIM Time PSNR SIM Time

Lena

Noisy 2815 0874 2465 0773 2218 0680MAP 3413 0957 16000 3227 0935 21065 3098 0915 24988GTV 3410 0957 1338 3221 0935 1338 3090 0915 1335CZ 3414 0956 411 3224 0933 504 3094 0911 524NLM 3482 0961 455 3274 0939 4147 3100 0914 4116

Algorithm 1 3539 0964 48001 3336 0945 43397 3159 0924 44305

Barbara

Noisy 2816 0913 2466 0839 2221 0765MAP 3110 0950 20792 2855 0913 24568 2690 0876 28553GTV 3109 0950 1314 2852 0913 1316 2685 0875 1316CZ 3107 0948 441 2849 0910 491 2683 0873 573NLM 3332 0968 479 3110 0947 4090 2916 0921 4000

Algorithm 1 3443 0972 85205 3206 0956 60978 2994 0932 54764

House

Noisy 2815 0605 2463 0445 2214 0346MAP 3442 0883 3462 3265 0859 4228 3136 0838 5797GTV 3433 0882 343 3246 0858 335 3108 0839 335CZ 3435 0881 085 3255 0858 100 3129 0836 115NLM 3508 0886 1022 3323 0846 1022 3139 0802 1014

Algorithm 1 3599 0902 25391 3425 0876 18307 3274 0858 16834

Monarch

Noisy 2813 0735 2467 0611 2219 0517MAP 3304 0934 5241 3054 0907 6869 2902 0865 8311GTV 3299 0934 344 3044 0907 343 2887 0865 342CZ 3306 0936 099 3057 0905 120 2904 0885 139NLM 3256 0940 1008 3075 0908 1015 2924 0871 1016

Algorithm 1 3387 0951 80341 3135 0930 48784 2960 0909 34956

Brain

Noisy 2716 0657 2357 0549 2113 0467MAP 3295 0835 2123 3008 0769 2139 2797 0719 2312GTV 3363 0952 257 3109 0917 214 2941 0889 217CZ 3367 0950 056 3110 0919 064 2921 0891 069NLM 3515 0962 616 3207 0925 612 2972 0885 599

Algorithm 1 3540 0967 19460 3247 0942 12926 3007 0911 10805

Mouse

Noisy 2818 0828 2467 0706 2213 0592MAP 3199 0930 5287 2971 0890 6566 2801 0847 6397GTV 3194 0929 392 2960 0886 391 2789 0843 429CZ 3195 0929 117 2958 0887 131 2786 0842 152NLM 3249 0935 969 2984 0888 972 2755 0833 972

Algorithm 1 3371 0952 70002 3063 0914 39723 2812 0865 29506

where 119909 and 119909 denote the original image and the recoveredimage respectively

First we choose the test image of ldquoLenardquo and degrade it byRician noise with120590 = 10 and use differentmethods to recoverthe noisy image The numerical results are enumerated inTable 2 where the first column gives the test image names thesecond column gives the method names (here Algorithm 1denotes our method presented in Section 2) and columns

3-5 (resp 6-8 and 9-11) are the PSNR (dB) SSIM and CPUtime (s) for 120590 = 10 (resp 120590 = 15 120590 = 20) We observethat the PSNR values of the restored images by Algorithm 1are higher than those of MAP GTV CZ and NLM methodMoreover you can easily find that the PSNR values of therestored images by Algorithm 1 are always the highest amongall those five methods for all images As far as ldquoBarbarardquo isconcerned the PSNR value of our method is more than 3

Mathematical Problems in Engineering 7

(a) Lena (b) Barbara (c) House

(d) Monarch (e) Brain (f) Mouse

Figure 1 The original images

(a) (b) (c) (d) (e) (f)

Figure 2 Results of ldquoLenardquo by different methods (a) Degraded images (row 1 with 120590 = 10 row 2 with 120590 = 15 row 3 with 120590 = 20) (b) MAPmethod (row 1 120582 = 15 row 2 120582 = 20 row 3 120582 = 25) (c) GTV method (row 1 120582 = 15 row 2 120582 = 20 row 3 120582 = 25) (d) CZ method (row 1120582 = 007 row 2 120582 = 0055 row 3 120582 = 0045) (e) NLMmethod (row 1-row 3 are denoised image of 120590 = 10 15 20 row 4 patch with 120590 = 20)(f) ours (row 1 120582 = 001 row 2 120582 = 002 row 3 120582 = 002)

dB higher than CZ model at 120590 = 10 The characteristicsof SSIM values are almost consistent with those of PSNRvalues However the SSIM values are composed of luminancecomparison contrast comparison and structure comparisonwhich makes differences at some points

The images are corrupted respectively by Rician distri-bution with 120590 = 10 120590 = 15 and 120590 = 20 and the restoredimages of the above five algorithms for the images ldquoLenardquoldquoBarbarardquo ldquoHouserdquo and ldquoMonarchrdquo are shown inFigures 2ndash5respectively We clearly find that much noise remains in the

8 Mathematical Problems in Engineering

(a) (b) (c) (d) (e) (f)

Figure 3 Results of ldquoBarbarardquo by different methods (a) Degraded images (row 1 with 120590 = 10 row 2 with 120590 = 15 row 3 with 120590 = 20) row4 patch with 120590 = 20 (b) MAP method (row 1 120582 = 22 row 2 120582 = 30 row 3 120582 = 37) row 4 patch with 120590 = 20 (c) GTV method (row 1120582 = 23 row 2 120582 = 30 row 3 120582 = 37) row 4 patch with 120590 = 20 (d) CZ method (row 1 120582 = 0045 row 2 120582 = 0035 row 3 120582 = 003) row 4patch with 120590 = 20 (e) NLM method (row 1-row 3 are denoised image of 120590 = 10 15 20 row 4 patch with 120590 = 20) (f) ours (row 1 120582 = 001row 2 120582 = 002 row 3 120582 = 002) row 4 patch with 120590 = 20

(a) (b) (c) (d) (e) (f)

Figure 4 Results of ldquoHouserdquo by different methods (a) Degraded images (row 1 with 120590 = 10 row 2 with 120590 = 15 row 3 with 120590 = 20) (b)MAP method (row 1 120582 = 14 row 2 120582 = 19 row 3 120582 = 24) (c) GTV method (row 1 120582 = 14 row 2 120582 = 19 row 3 120582 = 24) (d) CZ method(row 1 120582 = 0075 row 2 120582 = 0055 row 3 120582 = 0045) (e) NLM method (row 1-row 3 are denoised image of 120590 = 10 15 20 row 4 patch with120590 = 20) (f) ours (row 1 120582 = 001 row 2 120582 = 05 row 3 120582 = 10)

Mathematical Problems in Engineering 9

(a) (b) (c) (d) (e) (f)

Figure 5 Results of ldquoMonarchrdquo by different methods (a) Degraded images (row 1 with 120590 = 10 row 2 with 120590 = 15 row 3 with 120590 = 20) (b)MAP method (row 1120582 = 15 row 2 120582 = 20 row 3 120582 = 30) (c) GTV method (row 1120582 = 15 row 2 120582 = 20 row 3 120582 = 30) (d) CZ method(row 1 120582 = 0065 row 2 120582 = 005 row 3 120582 = 004) (e) NLM method (row 1-row 3 are denoised image of 120590 = 10 15 20 row 4 patch with120590 = 20) (f) ours (row 1 120582 = 001 row 2 120582 = 002 row 3 120582 = 002)

(a) (b) (c) (d) (e) (f)

(g) (h) (i) (j) (k) (l)

Figure 6 Results of ldquoBrainrdquo with 120590 = 10 by different methods (a) is the original ldquoBrainrdquo image and (g) is the degraded image with 120590 = 10(b)-(f) are the denoised images ofMAPmethod (120582 = 17) GTVmethod (120582 = 15) CZmethod (120582 = 0065) NLMmethod and ours (120582 = 001)and (h)-(l) are residuals of those methods respectively

results of MAP GTV and CZmethod For example in Row 4of Figure 3 we can see from the background that ldquoBarbarardquoimage restored by our method is more clearer than thoserestored by other methods and textures of Barbararsquos trousersand scarf are kept better in the restored image by ourmethodThe restored images by our method also preserve moresignificant details thanMAPGTV CZ andNLMmethods onthe hat tidbits of the ldquoLenardquo image What is more the flowersof the lower left of ldquoMonarchrdquo also indicate that our method

is superior to other methods because our method is a patch-based method which is a different framework with TV-basedmethods (ie MAP method and CZ method) The imagesobtained from our method provide smoother regions andbetter shape preservation (eg the backgrounds in ldquoLenardquoand ldquoBarbarardquo)

Figures 6ndash11 are results of experiments on ldquoBrainrdquo andldquoMouserdquo images In particular for the ldquoBrainrdquo and ldquoMouserdquoimages we not only present the recovered images but also

10 Mathematical Problems in Engineering

(a) (b) (c) (d) (e) (f)

(g) (h) (i) (j) (k) (l)

Figure 7 Results of Brain with 120590 = 15 by different methods (a) is the original ldquoBrainrdquo image and (g) is the degraded image with 120590 = 15(b)-(f) are the denoised images of MAPmethod (120582 = 25) GTVmethod (120582 = 22) CZmethod (120582 = 005) NLMmethod and ours (120582 = 001)and (h)-(l) are residuals of those methods respectively

(a) (b) (c) (d) (e) (f)

(g) (h) (i) (j) (k) (l)

Figure 8 Results of Brain with 120590 = 20 by different methods (a) is the original ldquoBrainrdquo image and (g) is the degraded image with 120590 = 20(b)-(f) are the denoised images ofMAPmethod (120582 = 35) GTVmethod (120582 = 22) CZmethod (120582 = 0045) NLMmethod and ours (120582 = 001)and (h)-(l) are residuals of those methods respectively

use residuals to make a comparison with the original imagesThe first row is original image and recovered images ofMAP method GTV method CZ method NLM methodand our method The second row is degraded image withvarious 120590 and residuals of those methods respectively InFigures 6ndash8 the recovered images of of MAP method GTVmethod CZ method and NLM method are still unclear sowe can conclude that the recovered images by ourmethod arebest through our visions and residuals As for the ldquoMouserdquoimage the recovered results are very similar in this casewe use residuals to see which result is better In general theblurrier residual is the better according recovered image isWe can find that the outline of residual by our method isblurriest

In conclusion the results of our numerical experimentsdemonstrate that our proposed method performs better thanthe MAP method GTV method CZ method and NLMmethod

4 Conclusion

In this paper we proposed a new effective model via alearned dictionary for denoising images with Rician noiseMore specifically based on the dictionary training we adda dictionary penalty term to the original nonconvex MAPmodel to establish a new denoising model and develop atwo-step algorithm to solve our model Also we carry out

Mathematical Problems in Engineering 11

(a) (b) (c) (d) (e) (f)

(g) (h) (i) (j) (k) (l)

Figure 9 Results of ldquoMouserdquo with 120590 = 10 by different methods (a) is the original ldquoMouserdquo image and (g) is the degraded image with 120590 = 10(b)-(f) are the denoised images of MAPmethod (120582 = 17) GTVmethod (120582 = 17) CZmethod (120582 = 006) NLMmethod and ours (120582 = 001)and (h)-(l) are residuals of those methods respectively

(a) (b) (c) (d) (e) (f)

(g) (h) (i) (j) (k) (l)

Figure 10 Results of ldquoMouserdquo with 120590 = 15 by different methods (a) is the original ldquoMouserdquo image and (g) is the degraded image with 120590 = 15(b)-(f) are the denoised images of MAPmethod (120582 = 24) GTV method (120582 = 24) CZ method (120582 = 005) NLM method and ours (120582 = 001)and (h)-(l) are residuals of those methods respectively

experiments on various images to demonstrate the effective-ness of our modelThe numerical experiments show that ourproposedmodel is promising in denoising imageswithRiciannoise

Data Availability

The data used to support the findings of this study areavailable from the corresponding author upon request

Conflicts of Interest

The authors declare that there are no conflicts of interest

Acknowledgments

The authors would like to thank one of the authors of[32] for providing the source code of CZ method Thiswork was supported in part by the National Natural ScienceFoundation of China under Grants 11871348 61872429 and61373087 by the Natural Science Foundation of GuangdongChina under Grants 2015A030313550 and 2015A030313557by the HD Video RampD Platform for Intelligent Analy-sis and Processing in Guangdong Engineering TechnologyResearch Centre of Colleges and Universities (no GCZX-A1409) by Natural Science Foundation of Shenzhen underGrant JCYJ20170818091621856 and by the Guangdong Key

12 Mathematical Problems in Engineering

(a) (b) (c) (d) (e) (f)

(g) (h) (i) (j) (k) (l)

Figure 11 Results of ldquoMouserdquo with 120590 = 20 by different methods (a) is the original ldquoMouserdquo image and (g) is the degraded image with 120590 = 20(b)-(f) are the denoised images of MAPmethod (120582 = 31) GTV method (120582 = 32) CZ method (120582 = 0035) NLMmethod and ours (120582 = 001)and (h)-(l) are residuals of those methods respectively

Laboratory of Intelligent Information Processing ShenzhenUniversity China (518060)

References

[1] J-F Cai B Dong and Z Shen ldquoImage restoration A waveletframe based model for piecewise smooth functions andbeyondrdquoApplied andComputational Harmonic Analysis vol 41no 1 pp 94ndash138 2016

[2] R H Chan and J Ma ldquoA multiplicative iterative algorithm forbox-constrained penalized likelihood image restorationrdquo IEEETransactions on Image Processing vol 21 no 7 pp 3168ndash31812012

[3] C A Micchelli L Shen and Y Xu ldquoProximity algorithmsfor image models Denoisingrdquo Inverse Problems vol 27 no 4Article ID 045009 p 30 2011

[4] YWang J Yang W Yin and Y Zhang ldquoA new alternatingmin-imization algorithm for total variation image reconstructionrdquoSIAM Journal on Imaging Sciences vol 1 no 3 pp 248ndash2722008

[5] Y Dong M Hintermuller and M Neri ldquoAn efficient primal-dual method for l1-TV image restorationrdquo SIAM Journal onImaging Sciences vol 2 no 4 pp 1168ndash1189 2009

[6] J Lu K Qiao L Shen and Y Zou ldquoFixed-point algorithmsfor a TVL1 image restoration modelrdquo International Journal ofComputer Mathematics vol 95 no 9 pp 1829ndash1844 2018

[7] J Lu Z Ye and Y Zou ldquoHuber fractal image coding based ona fitting planerdquo IEEE Transactions on Image Processing vol 22no 1 pp 134ndash145 2013

[8] C A Micchelli L Shen Y Xu and X Zeng ldquoProximityalgorithms for the L1TV image denoising modelrdquo Advances inComputational Mathematics vol 38 no 2 pp 401ndash426 2013

[9] J Yang Y Zhang and W Yin ldquoAn efficient TVL1 algorithm fordeblurring multichannel images corrupted by impulsive noiserdquoSIAM Journal on Scientific Computing vol 31 no 4 pp 2842ndash2865 2009

[10] L Ma L Moisan J Yu and T Zeng ldquoA dictionary learningapproach for poisson image deblurringrdquo IEEE Transactions onMedical Imaging vol 32 no 7 pp 1277ndash1289 2013

[11] Y Xiao and T Zeng ldquoPoisson noise removal via learned dictio-naryrdquo in Proceedings of the 17th IEEE International Conferenceon Image Processing (ICIP) pp 1177ndash1180 2010

[12] H Zhang Y Dong and Q Fan ldquoWavelet frame based Poissonnoise removal and image deblurringrdquo Signal Processing vol 137pp 363ndash372 2017

[13] Y Dong and T Zeng ldquoA convex variational model for restoringblurred images with multiplicative noiserdquo SIAM Journal onImaging Sciences vol 6 no 3 pp 1598ndash1625 2013

[14] Y Huang M K Ng and YWen ldquoA new total variation methodfor multiplicative noise removalrdquo SIAM Journal on ImagingSciences vol 2 no 1 pp 20ndash40 2009

[15] Y-M Huang L Moisan M K Ng and T Zeng ldquoMultiplicativenoise removal via a learned dictionaryrdquo IEEE Transactions onImage Processing vol 21 no 11 pp 4534ndash4543 2012

[16] Z Jin and X Yang ldquoA variational model to remove the multi-plicative noise in ultrasound imagesrdquo Journal of MathematicalImaging and Vision vol 39 no 1 pp 62ndash74 2011

[17] M Kang S Yun and H Woo ldquoTwo-level convex relaxedvariational model for multiplicative denoisingrdquo SIAM Journalon Imaging Sciences vol 6 no 2 pp 875ndash903 2013

[18] J Lu L Shen C Xu and Y Xu ldquoMultiplicative noise removalin imaging An exp-model and its fixed-point proximity algo-rithmrdquo Applied and Computational Harmonic Analysis vol 41no 2 pp 518ndash539 2016

[19] J Lu L Shen C Xu and Y Xu ldquoMultiplicative noise removalwith a sparsity-aware optimization modelrdquo Inverse Problemsand Imaging vol 11 no 6 pp 949ndash974 2017

[20] J Lu Z Yang L Shen Z Lu H Yang and C Xu ldquoA frameletalgorithm for de-blurring images corrupted by multiplicativenoiserdquoApplied Mathematical Modelling vol 62 pp 51ndash61 2018

[21] J Lu Y Chen Y Zou and L Shen ldquoA new total variationmodelfor restoring blurred and speckle noisy imagesrdquo InternationalJournal of Wavelets Multiresolution and Information Processingvol 15 no 2 19 pages 2017

Mathematical Problems in Engineering 13

[22] P Perona and J Malik ldquoScale-space and edge detection usinganisotropic diffusionrdquo IEEE Transactions on Pattern Analysisand Machine Intelligence vol 12 no 7 pp 629ndash639 1990

[23] G Gerig O Kubler R Kikinis and F A Jolesz ldquoNonlinearanisotropic filtering ofMRI datardquo IEEE Transactions onMedicalImaging vol 11 no 2 pp 221ndash232 1992

[24] S Prima S P Morrissey and C Barillot ldquoNon-local meansvariants for denoising of diffusion-weighted and diffusiontensor MRIrdquo in Proceedings of the International Conference onMedical Image Computing and Computer-Assisted Interventionpp 344ndash351 Springer Berlin Germany 2007

[25] J V Manjon J Carbonell-Caballero J J Lull G Garcıa-MartiL Martı-Bonmati and M Robles ldquoMRI denoising using Non-LocalMeansrdquoMedical Image Analysis vol 12 no 4 pp 514ndash5232008

[26] N Wiest-Daessle S Prima P Coupe S P Morrissey and CBarillot ldquoRician noise removal by non-local means filteringfor low signal-to-noise ratio MRI Applications to DT-MRIrdquoMedical Image Computing and Computer-Assisted Interventionvol 5242 no 2 pp 171ndash179 2008

[27] R D Nowak ldquoWavelet-based Rician noise removal for mag-netic resonance imagingrdquo IEEE Transactions on Image Process-ing vol 8 no 10 pp 1408ndash1419 1999

[28] A Foi ldquoNoise estimation and removal in MR imaging Thevariance-stabilization approachrdquo in Proceedings of the 8th IEEEInternational Symposium on Biomedical Imaging From Nano toMacro (ISBI) pp 1809ndash1814 IEEE Chicago IL USA 2011

[29] J C Wood and K M Johnson ldquoWavelet packet denoising ofmagnetic resonance images Importance of Rician noise at lowSNRrdquo Magnetic Resonance in Medicine vol 41 no 3 pp 631ndash635 1999

[30] F Bowman Introduction to Bessel functions Dover PublicationsMineola NY USA 2012

[31] P Getreuer M Tong and L A Vese ldquoA variational model forthe restoration of MR images corrupted by blur and Riciannoiserdquo in Proceedings of the 7th international conference onAdvances in visual computing vol Part I of Lecture Notes inComputer Science pp 686ndash698 Springer Las Vegas NV USA2011

[32] L Chen and T Zeng ldquoA convex variational model for restoringblurred imageswith large Rician noiserdquo Journal ofMathematicalImaging and Vision vol 53 no 1 pp 92ndash111 2015

[33] M Aharon M Elad and A M Bruckstein ldquoK-SVD Analgorithm for designing overcomplete dictionaries for sparserepresentationrdquo IEEE Transactions on Signal Processing vol 54no 11 pp 4311ndash4322 2006

[34] M Elad and M Aharon ldquoImage denoising via sparse andredundant representations over learned dictionariesrdquo IEEETransactions on Image Processing vol 15 no 12 pp 3736ndash37452006

[35] G Aubert and J Aujol ldquoA variational approach to removingmultiplicative noiserdquo SIAM Journal on Applied Mathematicsvol 68 no 4 pp 925ndash946 2008

[36] Q Liu S Wang K Yang J Luo Y Zhu and D Liang ldquoHighlyundersampled magnetic resonance image reconstruction usingtwo-level Bregman method with dictionary updatingrdquo IEEETransactions on Medical Imaging vol 32 no 7 pp 1290ndash13012013

[37] Q Liu S Wang L Ying X Peng Y Zhu and D LiangldquoAdaptive dictionary learning in sparse gradient domain forimage recoveryrdquo IEEE Transactions on Image Processing vol 22no 12 pp 4652ndash4663 2013

[38] H Lu J Wei Q Liu Y Wang and X Deng ldquoA dictionarylearning method with total generalized variation for mri recon-structionrdquo International Journal of Biomedical Imaging vol2016 Article ID 7512471 13 pages 2016

[39] A Chambolle and T Pock ldquoA first-order primal-dual algorithmfor convex problems with applications to imagingrdquo Journal ofMathematical Imaging and Vision vol 40 no 1 pp 120ndash1452011

[40] E Esser X Zhang and T F Chan ldquoA general frameworkfor a class of first order primal-dual algorithms for convexoptimization in imaging sciencerdquo SIAM Journal on ImagingSciences vol 3 no 4 pp 1015ndash1046 2010

[41] N Komodakis and J-C Pesquet ldquoPlaying with duality Anoverview of recent primaldual approaches for solving large-scale optimization problemsrdquo IEEE Signal Processing Magazinevol 32 no 6 pp 31ndash54 2015

[42] A Chambolle ldquoAn algorithm for total variation minimizationand applicationsrdquo Journal of Mathematical Imaging and Visionvol 20 no 1-2 pp 89ndash97 2004

[43] L Ambrosio N Fusco and D Pallara Functions of BoundedVariation and Free Discontinuity Problem Oxford UniversityPress London 2000

[44] Z Wang A C Bovik H R Sheikh and E P SimoncellildquoImage quality assessment From error visibility to structuralsimilarityrdquo IEEE Transactions on Image Processing vol 13 no4 pp 600ndash612 2004

[45] J Gibson and A Bovik Handbook of Image and Video Process-ing Academic Press 2000

Hindawiwwwhindawicom Volume 2018

MathematicsJournal of

Hindawiwwwhindawicom Volume 2018

Mathematical Problems in Engineering

Applied MathematicsJournal of

Hindawiwwwhindawicom Volume 2018

Probability and StatisticsHindawiwwwhindawicom Volume 2018

Journal of

Hindawiwwwhindawicom Volume 2018

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawiwwwhindawicom Volume 2018

OptimizationJournal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Engineering Mathematics

International Journal of

Hindawiwwwhindawicom Volume 2018

Operations ResearchAdvances in

Journal of

Hindawiwwwhindawicom Volume 2018

Function SpacesAbstract and Applied AnalysisHindawiwwwhindawicom Volume 2018

International Journal of Mathematics and Mathematical Sciences

Hindawiwwwhindawicom Volume 2018

Hindawi Publishing Corporation httpwwwhindawicom Volume 2013Hindawiwwwhindawicom

The Scientific World Journal

Volume 2018

Hindawiwwwhindawicom Volume 2018Volume 2018

Numerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisAdvances inAdvances in Discrete Dynamics in

Nature and SocietyHindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom

Dierential EquationsInternational Journal of

Volume 2018

Hindawiwwwhindawicom Volume 2018

Decision SciencesAdvances in

Hindawiwwwhindawicom Volume 2018

AnalysisInternational Journal of

Hindawiwwwhindawicom Volume 2018

Stochastic AnalysisInternational Journal of

Submit your manuscripts atwwwhindawicom

Page 5: ResearchArticle Rician Noise Removal via a Learned Dictionary

Mathematical Problems in Engineering 5

Table 1 Parameter 120582 values for all testing algorithms

Images 120590 = 10 120590 = 15 120590 = 20MAP GTV CZ Algorithm 1 (120573=01) MAP GTV CZ Algorithm 1 (120573=01) MAP GTV CZ Algorithm 1 (120573=1)

Lena 15 15 007 001 20 20 0055 002 25 25 0045 002Barbara 22 23 0045 001 30 30 491 002 37 37 003 002House 14 14 0075 001 19 19 100 05 24 24 0045 10Monarch 15 15 0065 001 20 20 005 002 30 30 004 002Brain 17 15 0065 001 25 22 005 001 35 22 0045 001Mouse 17 17 006 001 24 24 005 001 31 32 0035 001

3 Numerical Experiments

In this section we will present our numerical experimentsto evaluate the approximation accuracy and computationalefficiency of our proposed algorithm We compare ourmethod for the denoising cases with the MAP model (2) theGTV model [31] the CZ model [32] and the NLM model[25] Here we have a brief overview of the models that wecompared

GTV model

argmin119909

120582intΩ119866120590 (119909 119910) + int

Ω|119863119909| 119889119905 (18)

119866120590 (119909 119910) = 119867120590 (119909) if 119909 ge 119888120590119867120590 (119888120590) + 1198671015840

120590 (119888120590) (119909 minus 119888120590) if 119909 le 1198881205901198671015840

120590 (119909) = 1199091205902 minus 119910

1205902119861(1199091199101205902 )

119861 (119904) equiv 1198681 (119904)1198680 (119904)asymp 1199043 + 09500371199042 + 238944119904

1199043 + 1489371199042 + 257541119904 + 465314

(19)

where 119888 = 08426 1198681 is the modified Bessel function of thefirst kind with order one [30] Because it is an approximateequality we can not write down the final explicit restorationmodel

CZ model:

\arg\min_{x \in S(\Omega)}\; \frac{1}{2\sigma^{2}} \int_{\Omega} x^{2}\, dt
- \int_{\Omega} \log I_{0}\!\left(\frac{xy}{\sigma^{2}}\right) dt
+ \frac{1}{\sigma} \int_{\Omega} \left(\sqrt{x} - \sqrt{y}\right)^{2} dt
+ \lambda \int_{\Omega} |Dx|\, dt,    (20)

where S(\Omega) := \{ v \in BV(\Omega) : 0 \le v \le 255 \}. Here BV(\Omega) [35, 43] is the subspace of functions x \in L^{1}(\Omega) for which the following quantity is finite:

J(x) = \sup \left\{ \int_{\Omega} x(t)\, \mathrm{div}\big(\xi(t)\big)\, dt \;:\; \xi \in C_{0}^{\infty}(\Omega, \mathbb{R}^{2}),\ \|\xi\|_{L^{\infty}(\Omega, \mathbb{R}^{2})} \le 1 \right\}.    (21)
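Both (18) and (20) contain the total variation term ∫_Ω |Dx| dt; for a digital image this is normally evaluated with finite differences. The sketch below shows the standard isotropic discretization (an assumption about common practice, not the authors' implementation):

```python
import numpy as np


def isotropic_tv(x):
    # Discrete isotropic total variation: sum of gradient magnitudes, using
    # forward differences with replicated (Neumann) boundary handling.
    dx = np.diff(x, axis=1, append=x[:, -1:])  # horizontal differences
    dy = np.diff(x, axis=0, append=x[-1:, :])  # vertical differences
    return float(np.sum(np.sqrt(dx**2 + dy**2)))
```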

NLM model:

Before introducing the NLM model, we describe the notation that will be used. For a user-defined radius R_{sim}, we define a square neighborhood window centered around pixel i as N_i. A Gaussian-weighted Euclidean distance between the pixels of two such neighborhoods is defined as

L(m, n) = G_{\rho} \left\| y(N_{m}) - y(N_{n}) \right\|^{2}_{R_{sim}},    (22)

where G_{\rho} is a normalized Gaussian weighting function with zero mean and standard deviation \rho (usually set to 1). By giving more weight to pixels near the center, G_{\rho} penalizes pixels far from the center of the neighborhood window. Based on the similarity between the neighborhoods N_m and N_n of pixels m and n, the similarity f(m, n) is computed as

f(m, n) = \frac{1}{C(m)}\, e^{-L(m,n)/h^{2}}, \qquad C(m) = \sum_{\forall n} e^{-L(m,n)/h^{2}},    (23)

where C(m) is the normalizing constant and h is an exponential decay control parameter.

Then, given an image y, the NLM method computes the filtered value at a pixel m by

\mathrm{NLM}\big(y(m)\big) = \sum_{\forall n \in y} f(m, n)\, y(n), \qquad 0 \le f(m, n) \le 1, \quad \sum_{\forall n \in y} f(m, n) = 1.    (24)
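To make the weights in (22)-(24) concrete, here is a deliberately naive Python sketch of the NLM filter (our illustration only; the parameters r_sim, r_search, h, and rho are hypothetical defaults, and the actual filter of [25] includes further refinements such as block-wise processing and Rician bias handling):

```python
import numpy as np


def nlm_denoise(y, r_sim=3, r_search=7, h=10.0, rho=1.0):
    # Naive non-local means following Eqs. (22)-(24); quadratic in the search
    # window size, so it only illustrates the formulas and is not meant to be fast.
    y = y.astype(float)
    pad = r_sim
    yp = np.pad(y, pad, mode="reflect")
    # Normalized Gaussian weights G_rho over the (2*r_sim+1)^2 patch.
    ax = np.arange(-r_sim, r_sim + 1)
    gx, gy = np.meshgrid(ax, ax)
    G = np.exp(-(gx**2 + gy**2) / (2.0 * rho**2))
    G /= G.sum()
    H, W = y.shape
    out = np.zeros_like(y)
    for i in range(H):
        for j in range(W):
            Ni = yp[i:i + 2 * pad + 1, j:j + 2 * pad + 1]
            acc, w_sum = 0.0, 0.0
            for m in range(max(0, i - r_search), min(H, i + r_search + 1)):
                for n in range(max(0, j - r_search), min(W, j + r_search + 1)):
                    Nm = yp[m:m + 2 * pad + 1, n:n + 2 * pad + 1]
                    L = np.sum(G * (Ni - Nm) ** 2)   # Eq. (22)
                    w = np.exp(-L / h**2)            # unnormalized weight from Eq. (23)
                    acc += w * y[m, n]
                    w_sum += w
            out[i, j] = acc / w_sum                  # Eq. (24); weights sum to one
    return out
```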

The parameter λ values of the MAP model, the GTV model, the CZ model, and our proposed model are listed in Table 1. In order to preserve the mean of the observed image, the bias correction technique (15) is utilized for the CZ method and our method. All the experiments are performed under Windows 7 and MATLAB R2017a, running on a PC equipped with a 2.90 GHz CPU and 4 GB RAM.

In our tests, we choose the images "Lena" and "Barbara" with size 512 × 512, "House" and "Monarch" with size 256 × 256, "Brain" with size 181 × 217, and "Mouse intestine" (briefly called "Mouse" throughout this paper) with size 248 × 254, which are shown in Figure 1. We evaluate the quality of the recovered images obtained from the various denoising algorithms by using the structural similarity index (SSIM) [44] and the peak signal-to-noise ratio (PSNR) [45], defined by

\mathrm{PSNR}(x, \hat{x}) = 20 \log_{10} \!\left( \frac{255^{2}}{\|x - \hat{x}\|^{2}} \right),    (25)


Table 2: The comparison of different denoising methods (each cell reports PSNR (dB) / SSIM / CPU time (s)).

Images    Methods        σ = 10                   σ = 15                   σ = 20
Lena      Noisy          28.15 / 0.874 / --       24.65 / 0.773 / --       22.18 / 0.680 / --
          MAP            34.13 / 0.957 / 160.00   32.27 / 0.935 / 210.65   30.98 / 0.915 / 249.88
          GTV            34.10 / 0.957 / 13.38    32.21 / 0.935 / 13.38    30.90 / 0.915 / 13.35
          CZ             34.14 / 0.956 / 4.11     32.24 / 0.933 / 5.04     30.94 / 0.911 / 5.24
          NLM            34.82 / 0.961 / 4.55     32.74 / 0.939 / 41.47    31.00 / 0.914 / 41.16
          Algorithm 1    35.39 / 0.964 / 480.01   33.36 / 0.945 / 433.97   31.59 / 0.924 / 443.05
Barbara   Noisy          28.16 / 0.913 / --       24.66 / 0.839 / --       22.21 / 0.765 / --
          MAP            31.10 / 0.950 / 207.92   28.55 / 0.913 / 245.68   26.90 / 0.876 / 285.53
          GTV            31.09 / 0.950 / 13.14    28.52 / 0.913 / 13.16    26.85 / 0.875 / 13.16
          CZ             31.07 / 0.948 / 4.41     28.49 / 0.910 / 4.91     26.83 / 0.873 / 5.73
          NLM            33.32 / 0.968 / 4.79     31.10 / 0.947 / 40.90    29.16 / 0.921 / 40.00
          Algorithm 1    34.43 / 0.972 / 852.05   32.06 / 0.956 / 609.78   29.94 / 0.932 / 547.64
House     Noisy          28.15 / 0.605 / --       24.63 / 0.445 / --       22.14 / 0.346 / --
          MAP            34.42 / 0.883 / 34.62    32.65 / 0.859 / 42.28    31.36 / 0.838 / 57.97
          GTV            34.33 / 0.882 / 3.43     32.46 / 0.858 / 3.35     31.08 / 0.839 / 3.35
          CZ             34.35 / 0.881 / 0.85     32.55 / 0.858 / 1.00     31.29 / 0.836 / 1.15
          NLM            35.08 / 0.886 / 10.22    33.23 / 0.846 / 10.22    31.39 / 0.802 / 10.14
          Algorithm 1    35.99 / 0.902 / 253.91   34.25 / 0.876 / 183.07   32.74 / 0.858 / 168.34
Monarch   Noisy          28.13 / 0.735 / --       24.67 / 0.611 / --       22.19 / 0.517 / --
          MAP            33.04 / 0.934 / 52.41    30.54 / 0.907 / 68.69    29.02 / 0.865 / 83.11
          GTV            32.99 / 0.934 / 3.44     30.44 / 0.907 / 3.43     28.87 / 0.865 / 3.42
          CZ             33.06 / 0.936 / 0.99     30.57 / 0.905 / 1.20     29.04 / 0.885 / 1.39
          NLM            32.56 / 0.940 / 10.08    30.75 / 0.908 / 10.15    29.24 / 0.871 / 10.16
          Algorithm 1    33.87 / 0.951 / 803.41   31.35 / 0.930 / 487.84   29.60 / 0.909 / 349.56
Brain     Noisy          27.16 / 0.657 / --       23.57 / 0.549 / --       21.13 / 0.467 / --
          MAP            32.95 / 0.835 / 21.23    30.08 / 0.769 / 21.39    27.97 / 0.719 / 23.12
          GTV            33.63 / 0.952 / 2.57     31.09 / 0.917 / 2.14     29.41 / 0.889 / 2.17
          CZ             33.67 / 0.950 / 0.56     31.10 / 0.919 / 0.64     29.21 / 0.891 / 0.69
          NLM            35.15 / 0.962 / 6.16     32.07 / 0.925 / 6.12     29.72 / 0.885 / 5.99
          Algorithm 1    35.40 / 0.967 / 194.60   32.47 / 0.942 / 129.26   30.07 / 0.911 / 108.05
Mouse     Noisy          28.18 / 0.828 / --       24.67 / 0.706 / --       22.13 / 0.592 / --
          MAP            31.99 / 0.930 / 52.87    29.71 / 0.890 / 65.66    28.01 / 0.847 / 63.97
          GTV            31.94 / 0.929 / 3.92     29.60 / 0.886 / 3.91     27.89 / 0.843 / 4.29
          CZ             31.95 / 0.929 / 1.17     29.58 / 0.887 / 1.31     27.86 / 0.842 / 1.52
          NLM            32.49 / 0.935 / 9.69     29.84 / 0.888 / 9.72     27.55 / 0.833 / 9.72
          Algorithm 1    33.71 / 0.952 / 700.02   30.63 / 0.914 / 397.23   28.12 / 0.865 / 295.06

where x and \hat{x} denote the original image and the recovered image, respectively.
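For completeness, a rough sketch of how a test image can be degraded with Rician noise of level σ and then scored is given below (a hypothetical illustration in NumPy, not the authors' MATLAB code; the PSNR routine uses the conventional MSE-based normalization for 8-bit images, which we assume is the convention behind (25)):

```python
import numpy as np


def add_rician_noise(x, sigma, seed=0):
    # Rician degradation: magnitude of the clean image perturbed by two
    # independent zero-mean Gaussian fields of standard deviation sigma.
    rng = np.random.default_rng(seed)
    n1 = rng.normal(0.0, sigma, x.shape)
    n2 = rng.normal(0.0, sigma, x.shape)
    return np.sqrt((x + n1) ** 2 + n2 ** 2)


def psnr(x, x_hat):
    # Conventional MSE-based PSNR (dB) for 8-bit images.
    mse = np.mean((np.asarray(x, float) - np.asarray(x_hat, float)) ** 2)
    return 10.0 * np.log10(255.0**2 / mse)
```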

First, we choose the test image "Lena", degrade it by Rician noise with σ = 10, and use the different methods to recover the noisy image. The numerical results are enumerated in Table 2, where the first column gives the test image names, the second column gives the method names (here Algorithm 1 denotes our method presented in Section 2), and the remaining columns report the PSNR (dB), SSIM, and CPU time (s) for σ = 10, σ = 15, and σ = 20, respectively. We observe that the PSNR values of the images restored by Algorithm 1 are higher than those of the MAP, GTV, CZ, and NLM methods. Moreover, one can easily see that the PSNR values of the images restored by Algorithm 1 are always the highest among all five methods for all images. As far as "Barbara" is concerned, the PSNR value of our method is more than 3 dB higher than that of the CZ model at σ = 10.


Figure 1: The original images. (a) Lena, (b) Barbara, (c) House, (d) Monarch, (e) Brain, (f) Mouse.

Figure 2: Results of "Lena" by different methods. (a) Degraded images (row 1: σ = 10; row 2: σ = 15; row 3: σ = 20). (b) MAP method (row 1: λ = 15; row 2: λ = 20; row 3: λ = 25). (c) GTV method (row 1: λ = 15; row 2: λ = 20; row 3: λ = 25). (d) CZ method (row 1: λ = 0.07; row 2: λ = 0.055; row 3: λ = 0.045). (e) NLM method (rows 1-3: denoised images for σ = 10, 15, 20; row 4: patch with σ = 20). (f) Ours (row 1: λ = 0.01; row 2: λ = 0.02; row 3: λ = 0.02).

The characteristics of the SSIM values are almost consistent with those of the PSNR values. However, the SSIM value combines a luminance comparison, a contrast comparison, and a structure comparison, which leads to differences at some points.
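For reference, the SSIM index of [44] combines these three comparisons; in its commonly used simplified form (with the three exponents set to one) it reads

\mathrm{SSIM}(x, \hat{x}) = \frac{\left(2\mu_{x}\mu_{\hat{x}} + C_{1}\right)\left(2\sigma_{x\hat{x}} + C_{2}\right)}{\left(\mu_{x}^{2} + \mu_{\hat{x}}^{2} + C_{1}\right)\left(\sigma_{x}^{2} + \sigma_{\hat{x}}^{2} + C_{2}\right)},

where the means μ, variances σ², and covariance σ_{x x̂} are computed over local sliding windows and C_1, C_2 are small constants that stabilize the division.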

The images are corrupted, respectively, by the Rician distribution with σ = 10, σ = 15, and σ = 20, and the images restored by the above five algorithms for "Lena", "Barbara", "House", and "Monarch" are shown in Figures 2-5, respectively. We clearly find that much noise remains in the results of the MAP, GTV, and CZ methods.


Figure 3: Results of "Barbara" by different methods. (a) Degraded images (row 1: σ = 10; row 2: σ = 15; row 3: σ = 20; row 4: patch with σ = 20). (b) MAP method (row 1: λ = 22; row 2: λ = 30; row 3: λ = 37; row 4: patch with σ = 20). (c) GTV method (row 1: λ = 23; row 2: λ = 30; row 3: λ = 37; row 4: patch with σ = 20). (d) CZ method (row 1: λ = 0.045; row 2: λ = 0.035; row 3: λ = 0.03; row 4: patch with σ = 20). (e) NLM method (rows 1-3: denoised images for σ = 10, 15, 20; row 4: patch with σ = 20). (f) Ours (row 1: λ = 0.01; row 2: λ = 0.02; row 3: λ = 0.02; row 4: patch with σ = 20).

Figure 4: Results of "House" by different methods. (a) Degraded images (row 1: σ = 10; row 2: σ = 15; row 3: σ = 20). (b) MAP method (row 1: λ = 14; row 2: λ = 19; row 3: λ = 24). (c) GTV method (row 1: λ = 14; row 2: λ = 19; row 3: λ = 24). (d) CZ method (row 1: λ = 0.075; row 2: λ = 0.055; row 3: λ = 0.045). (e) NLM method (rows 1-3: denoised images for σ = 10, 15, 20; row 4: patch with σ = 20). (f) Ours (row 1: λ = 0.01; row 2: λ = 0.5; row 3: λ = 1.0).


Figure 5: Results of "Monarch" by different methods. (a) Degraded images (row 1: σ = 10; row 2: σ = 15; row 3: σ = 20). (b) MAP method (row 1: λ = 15; row 2: λ = 20; row 3: λ = 30). (c) GTV method (row 1: λ = 15; row 2: λ = 20; row 3: λ = 30). (d) CZ method (row 1: λ = 0.065; row 2: λ = 0.05; row 3: λ = 0.04). (e) NLM method (rows 1-3: denoised images for σ = 10, 15, 20; row 4: patch with σ = 20). (f) Ours (row 1: λ = 0.01; row 2: λ = 0.02; row 3: λ = 0.02).

Figure 6: Results of "Brain" with σ = 10 by different methods. (a) is the original "Brain" image and (g) is the degraded image with σ = 10; (b)-(f) are the denoised images of the MAP method (λ = 17), the GTV method (λ = 15), the CZ method (λ = 0.065), the NLM method, and ours (λ = 0.01); and (h)-(l) are the residuals of those methods, respectively.

For example, in Row 4 of Figure 3, we can see from the background that the "Barbara" image restored by our method is clearer than those restored by the other methods, and the textures of Barbara's trousers and scarf are kept better in the image restored by our method. The images restored by our method also preserve more significant details than those of the MAP, GTV, CZ, and NLM methods on the hat tidbits of the "Lena" image. What is more, the flowers at the lower left of "Monarch" also indicate that our method is superior to the other methods, because our method is patch based, which is a different framework from the TV-based methods (i.e., the MAP method and the CZ method). The images obtained from our method provide smoother regions and better shape preservation (e.g., the backgrounds in "Lena" and "Barbara").

Figures 6-11 show the results of the experiments on the "Brain" and "Mouse" images. In particular, for the "Brain" and "Mouse" images, we not only present the recovered images but also use residuals to make a comparison with the original images.


Figure 7: Results of "Brain" with σ = 15 by different methods. (a) is the original "Brain" image and (g) is the degraded image with σ = 15; (b)-(f) are the denoised images of the MAP method (λ = 25), the GTV method (λ = 22), the CZ method (λ = 0.05), the NLM method, and ours (λ = 0.01); and (h)-(l) are the residuals of those methods, respectively.

Figure 8: Results of "Brain" with σ = 20 by different methods. (a) is the original "Brain" image and (g) is the degraded image with σ = 20; (b)-(f) are the denoised images of the MAP method (λ = 35), the GTV method (λ = 22), the CZ method (λ = 0.045), the NLM method, and ours (λ = 0.01); and (h)-(l) are the residuals of those methods, respectively.

In each of these figures, the first row shows the original image and the images recovered by the MAP method, the GTV method, the CZ method, the NLM method, and our method; the second row shows the degraded image with the corresponding σ and the residuals of those methods, respectively. In Figures 6-8, the images recovered by the MAP, GTV, CZ, and NLM methods are still unclear, so we can conclude from both visual inspection and the residuals that the images recovered by our method are the best. As for the "Mouse" image, the recovered results are very similar; in this case, we use the residuals to see which result is better. In general, the blurrier the residual is, the better the corresponding recovered image is. We can find that the outline of the residual produced by our method is the blurriest.

In conclusion, the results of our numerical experiments demonstrate that our proposed method performs better than the MAP method, the GTV method, the CZ method, and the NLM method.

4. Conclusion

In this paper, we proposed a new effective model via a learned dictionary for denoising images with Rician noise. More specifically, based on dictionary training, we add a dictionary penalty term to the original nonconvex MAP model to establish a new denoising model, and we develop a two-step algorithm to solve it. We also carry out experiments on various images to demonstrate the effectiveness of our model.


Figure 9: Results of "Mouse" with σ = 10 by different methods. (a) is the original "Mouse" image and (g) is the degraded image with σ = 10; (b)-(f) are the denoised images of the MAP method (λ = 17), the GTV method (λ = 17), the CZ method (λ = 0.06), the NLM method, and ours (λ = 0.01); and (h)-(l) are the residuals of those methods, respectively.

Figure 10: Results of "Mouse" with σ = 15 by different methods. (a) is the original "Mouse" image and (g) is the degraded image with σ = 15; (b)-(f) are the denoised images of the MAP method (λ = 24), the GTV method (λ = 24), the CZ method (λ = 0.05), the NLM method, and ours (λ = 0.01); and (h)-(l) are the residuals of those methods, respectively.

The numerical experiments show that our proposed model is promising in denoising images with Rician noise.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest

Acknowledgments

The authors would like to thank one of the authors of [32] for providing the source code of the CZ method. This work was supported in part by the National Natural Science Foundation of China under Grants 11871348, 61872429, and 61373087; by the Natural Science Foundation of Guangdong, China, under Grants 2015A030313550 and 2015A030313557; by the HD Video R&D Platform for Intelligent Analysis and Processing in Guangdong Engineering Technology Research Centre of Colleges and Universities (no. GCZX-A1409); by the Natural Science Foundation of Shenzhen under Grant JCYJ20170818091621856; and by the Guangdong Key Laboratory of Intelligent Information Processing, Shenzhen University, China (518060).


Figure 11: Results of "Mouse" with σ = 20 by different methods. (a) is the original "Mouse" image and (g) is the degraded image with σ = 20; (b)-(f) are the denoised images of the MAP method (λ = 31), the GTV method (λ = 32), the CZ method (λ = 0.035), the NLM method, and ours (λ = 0.01); and (h)-(l) are the residuals of those methods, respectively.


References

[1] J.-F. Cai, B. Dong, and Z. Shen, "Image restoration: a wavelet frame based model for piecewise smooth functions and beyond," Applied and Computational Harmonic Analysis, vol. 41, no. 1, pp. 94-138, 2016.

[2] R. H. Chan and J. Ma, "A multiplicative iterative algorithm for box-constrained penalized likelihood image restoration," IEEE Transactions on Image Processing, vol. 21, no. 7, pp. 3168-3181, 2012.

[3] C. A. Micchelli, L. Shen, and Y. Xu, "Proximity algorithms for image models: denoising," Inverse Problems, vol. 27, no. 4, Article ID 045009, p. 30, 2011.

[4] Y. Wang, J. Yang, W. Yin, and Y. Zhang, "A new alternating minimization algorithm for total variation image reconstruction," SIAM Journal on Imaging Sciences, vol. 1, no. 3, pp. 248-272, 2008.

[5] Y. Dong, M. Hintermüller, and M. Neri, "An efficient primal-dual method for L1-TV image restoration," SIAM Journal on Imaging Sciences, vol. 2, no. 4, pp. 1168-1189, 2009.

[6] J. Lu, K. Qiao, L. Shen, and Y. Zou, "Fixed-point algorithms for a TVL1 image restoration model," International Journal of Computer Mathematics, vol. 95, no. 9, pp. 1829-1844, 2018.

[7] J. Lu, Z. Ye, and Y. Zou, "Huber fractal image coding based on a fitting plane," IEEE Transactions on Image Processing, vol. 22, no. 1, pp. 134-145, 2013.

[8] C. A. Micchelli, L. Shen, Y. Xu, and X. Zeng, "Proximity algorithms for the L1/TV image denoising model," Advances in Computational Mathematics, vol. 38, no. 2, pp. 401-426, 2013.

[9] J. Yang, Y. Zhang, and W. Yin, "An efficient TVL1 algorithm for deblurring multichannel images corrupted by impulsive noise," SIAM Journal on Scientific Computing, vol. 31, no. 4, pp. 2842-2865, 2009.

[10] L. Ma, L. Moisan, J. Yu, and T. Zeng, "A dictionary learning approach for Poisson image deblurring," IEEE Transactions on Medical Imaging, vol. 32, no. 7, pp. 1277-1289, 2013.

[11] Y. Xiao and T. Zeng, "Poisson noise removal via learned dictionary," in Proceedings of the 17th IEEE International Conference on Image Processing (ICIP), pp. 1177-1180, 2010.

[12] H. Zhang, Y. Dong, and Q. Fan, "Wavelet frame based Poisson noise removal and image deblurring," Signal Processing, vol. 137, pp. 363-372, 2017.

[13] Y. Dong and T. Zeng, "A convex variational model for restoring blurred images with multiplicative noise," SIAM Journal on Imaging Sciences, vol. 6, no. 3, pp. 1598-1625, 2013.

[14] Y. Huang, M. K. Ng, and Y. Wen, "A new total variation method for multiplicative noise removal," SIAM Journal on Imaging Sciences, vol. 2, no. 1, pp. 20-40, 2009.

[15] Y.-M. Huang, L. Moisan, M. K. Ng, and T. Zeng, "Multiplicative noise removal via a learned dictionary," IEEE Transactions on Image Processing, vol. 21, no. 11, pp. 4534-4543, 2012.

[16] Z. Jin and X. Yang, "A variational model to remove the multiplicative noise in ultrasound images," Journal of Mathematical Imaging and Vision, vol. 39, no. 1, pp. 62-74, 2011.

[17] M. Kang, S. Yun, and H. Woo, "Two-level convex relaxed variational model for multiplicative denoising," SIAM Journal on Imaging Sciences, vol. 6, no. 2, pp. 875-903, 2013.

[18] J. Lu, L. Shen, C. Xu, and Y. Xu, "Multiplicative noise removal in imaging: an exp-model and its fixed-point proximity algorithm," Applied and Computational Harmonic Analysis, vol. 41, no. 2, pp. 518-539, 2016.

[19] J. Lu, L. Shen, C. Xu, and Y. Xu, "Multiplicative noise removal with a sparsity-aware optimization model," Inverse Problems and Imaging, vol. 11, no. 6, pp. 949-974, 2017.

[20] J. Lu, Z. Yang, L. Shen, Z. Lu, H. Yang, and C. Xu, "A framelet algorithm for de-blurring images corrupted by multiplicative noise," Applied Mathematical Modelling, vol. 62, pp. 51-61, 2018.

[21] J. Lu, Y. Chen, Y. Zou, and L. Shen, "A new total variation model for restoring blurred and speckle noisy images," International Journal of Wavelets, Multiresolution and Information Processing, vol. 15, no. 2, 19 pages, 2017.

[22] P. Perona and J. Malik, "Scale-space and edge detection using anisotropic diffusion," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, no. 7, pp. 629-639, 1990.

[23] G. Gerig, O. Kübler, R. Kikinis, and F. A. Jolesz, "Nonlinear anisotropic filtering of MRI data," IEEE Transactions on Medical Imaging, vol. 11, no. 2, pp. 221-232, 1992.

[24] S. Prima, S. P. Morrissey, and C. Barillot, "Non-local means variants for denoising of diffusion-weighted and diffusion tensor MRI," in Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 344-351, Springer, Berlin, Germany, 2007.

[25] J. V. Manjón, J. Carbonell-Caballero, J. J. Lull, G. García-Martí, L. Martí-Bonmatí, and M. Robles, "MRI denoising using Non-Local Means," Medical Image Analysis, vol. 12, no. 4, pp. 514-523, 2008.

[26] N. Wiest-Daessle, S. Prima, P. Coupé, S. P. Morrissey, and C. Barillot, "Rician noise removal by non-local means filtering for low signal-to-noise ratio MRI: applications to DT-MRI," Medical Image Computing and Computer-Assisted Intervention, vol. 5242, no. 2, pp. 171-179, 2008.

[27] R. D. Nowak, "Wavelet-based Rician noise removal for magnetic resonance imaging," IEEE Transactions on Image Processing, vol. 8, no. 10, pp. 1408-1419, 1999.

[28] A. Foi, "Noise estimation and removal in MR imaging: the variance-stabilization approach," in Proceedings of the 8th IEEE International Symposium on Biomedical Imaging: From Nano to Macro (ISBI), pp. 1809-1814, IEEE, Chicago, IL, USA, 2011.

[29] J. C. Wood and K. M. Johnson, "Wavelet packet denoising of magnetic resonance images: importance of Rician noise at low SNR," Magnetic Resonance in Medicine, vol. 41, no. 3, pp. 631-635, 1999.

[30] F. Bowman, Introduction to Bessel Functions, Dover Publications, Mineola, NY, USA, 2012.

[31] P. Getreuer, M. Tong, and L. A. Vese, "A variational model for the restoration of MR images corrupted by blur and Rician noise," in Proceedings of the 7th International Conference on Advances in Visual Computing, Part I of Lecture Notes in Computer Science, pp. 686-698, Springer, Las Vegas, NV, USA, 2011.

[32] L. Chen and T. Zeng, "A convex variational model for restoring blurred images with large Rician noise," Journal of Mathematical Imaging and Vision, vol. 53, no. 1, pp. 92-111, 2015.

[33] M. Aharon, M. Elad, and A. M. Bruckstein, "K-SVD: an algorithm for designing overcomplete dictionaries for sparse representation," IEEE Transactions on Signal Processing, vol. 54, no. 11, pp. 4311-4322, 2006.

[34] M. Elad and M. Aharon, "Image denoising via sparse and redundant representations over learned dictionaries," IEEE Transactions on Image Processing, vol. 15, no. 12, pp. 3736-3745, 2006.

[35] G. Aubert and J. Aujol, "A variational approach to removing multiplicative noise," SIAM Journal on Applied Mathematics, vol. 68, no. 4, pp. 925-946, 2008.

[36] Q. Liu, S. Wang, K. Yang, J. Luo, Y. Zhu, and D. Liang, "Highly undersampled magnetic resonance image reconstruction using two-level Bregman method with dictionary updating," IEEE Transactions on Medical Imaging, vol. 32, no. 7, pp. 1290-1301, 2013.

[37] Q. Liu, S. Wang, L. Ying, X. Peng, Y. Zhu, and D. Liang, "Adaptive dictionary learning in sparse gradient domain for image recovery," IEEE Transactions on Image Processing, vol. 22, no. 12, pp. 4652-4663, 2013.

[38] H. Lu, J. Wei, Q. Liu, Y. Wang, and X. Deng, "A dictionary learning method with total generalized variation for MRI reconstruction," International Journal of Biomedical Imaging, vol. 2016, Article ID 7512471, 13 pages, 2016.

[39] A. Chambolle and T. Pock, "A first-order primal-dual algorithm for convex problems with applications to imaging," Journal of Mathematical Imaging and Vision, vol. 40, no. 1, pp. 120-145, 2011.

[40] E. Esser, X. Zhang, and T. F. Chan, "A general framework for a class of first order primal-dual algorithms for convex optimization in imaging science," SIAM Journal on Imaging Sciences, vol. 3, no. 4, pp. 1015-1046, 2010.

[41] N. Komodakis and J.-C. Pesquet, "Playing with duality: an overview of recent primal-dual approaches for solving large-scale optimization problems," IEEE Signal Processing Magazine, vol. 32, no. 6, pp. 31-54, 2015.

[42] A. Chambolle, "An algorithm for total variation minimization and applications," Journal of Mathematical Imaging and Vision, vol. 20, no. 1-2, pp. 89-97, 2004.

[43] L. Ambrosio, N. Fusco, and D. Pallara, Functions of Bounded Variation and Free Discontinuity Problems, Oxford University Press, London, 2000.

[44] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: from error visibility to structural similarity," IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600-612, 2004.

[45] J. Gibson and A. Bovik, Handbook of Image and Video Processing, Academic Press, 2000.

Hindawiwwwhindawicom Volume 2018

MathematicsJournal of

Hindawiwwwhindawicom Volume 2018

Mathematical Problems in Engineering

Applied MathematicsJournal of

Hindawiwwwhindawicom Volume 2018

Probability and StatisticsHindawiwwwhindawicom Volume 2018

Journal of

Hindawiwwwhindawicom Volume 2018

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawiwwwhindawicom Volume 2018

OptimizationJournal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Engineering Mathematics

International Journal of

Hindawiwwwhindawicom Volume 2018

Operations ResearchAdvances in

Journal of

Hindawiwwwhindawicom Volume 2018

Function SpacesAbstract and Applied AnalysisHindawiwwwhindawicom Volume 2018

International Journal of Mathematics and Mathematical Sciences

Hindawiwwwhindawicom Volume 2018

Hindawi Publishing Corporation httpwwwhindawicom Volume 2013Hindawiwwwhindawicom

The Scientific World Journal

Volume 2018

Hindawiwwwhindawicom Volume 2018Volume 2018

Numerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisAdvances inAdvances in Discrete Dynamics in

Nature and SocietyHindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom

Dierential EquationsInternational Journal of

Volume 2018

Hindawiwwwhindawicom Volume 2018

Decision SciencesAdvances in

Hindawiwwwhindawicom Volume 2018

AnalysisInternational Journal of

Hindawiwwwhindawicom Volume 2018

Stochastic AnalysisInternational Journal of

Submit your manuscripts atwwwhindawicom

Page 6: ResearchArticle Rician Noise Removal via a Learned Dictionary

6 Mathematical Problems in Engineering

Table 2 The comparison of different denoising methods

Images Methods 120590 = 10 120590 = 15 120590 = 20PSNR SIM Time PSNR SSIM Time PSNR SIM Time

Lena

Noisy 2815 0874 2465 0773 2218 0680MAP 3413 0957 16000 3227 0935 21065 3098 0915 24988GTV 3410 0957 1338 3221 0935 1338 3090 0915 1335CZ 3414 0956 411 3224 0933 504 3094 0911 524NLM 3482 0961 455 3274 0939 4147 3100 0914 4116

Algorithm 1 3539 0964 48001 3336 0945 43397 3159 0924 44305

Barbara

Noisy 2816 0913 2466 0839 2221 0765MAP 3110 0950 20792 2855 0913 24568 2690 0876 28553GTV 3109 0950 1314 2852 0913 1316 2685 0875 1316CZ 3107 0948 441 2849 0910 491 2683 0873 573NLM 3332 0968 479 3110 0947 4090 2916 0921 4000

Algorithm 1 3443 0972 85205 3206 0956 60978 2994 0932 54764

House

Noisy 2815 0605 2463 0445 2214 0346MAP 3442 0883 3462 3265 0859 4228 3136 0838 5797GTV 3433 0882 343 3246 0858 335 3108 0839 335CZ 3435 0881 085 3255 0858 100 3129 0836 115NLM 3508 0886 1022 3323 0846 1022 3139 0802 1014

Algorithm 1 3599 0902 25391 3425 0876 18307 3274 0858 16834

Monarch

Noisy 2813 0735 2467 0611 2219 0517MAP 3304 0934 5241 3054 0907 6869 2902 0865 8311GTV 3299 0934 344 3044 0907 343 2887 0865 342CZ 3306 0936 099 3057 0905 120 2904 0885 139NLM 3256 0940 1008 3075 0908 1015 2924 0871 1016

Algorithm 1 3387 0951 80341 3135 0930 48784 2960 0909 34956

Brain

Noisy 2716 0657 2357 0549 2113 0467MAP 3295 0835 2123 3008 0769 2139 2797 0719 2312GTV 3363 0952 257 3109 0917 214 2941 0889 217CZ 3367 0950 056 3110 0919 064 2921 0891 069NLM 3515 0962 616 3207 0925 612 2972 0885 599

Algorithm 1 3540 0967 19460 3247 0942 12926 3007 0911 10805

Mouse

Noisy 2818 0828 2467 0706 2213 0592MAP 3199 0930 5287 2971 0890 6566 2801 0847 6397GTV 3194 0929 392 2960 0886 391 2789 0843 429CZ 3195 0929 117 2958 0887 131 2786 0842 152NLM 3249 0935 969 2984 0888 972 2755 0833 972

Algorithm 1 3371 0952 70002 3063 0914 39723 2812 0865 29506

where 119909 and 119909 denote the original image and the recoveredimage respectively

First we choose the test image of ldquoLenardquo and degrade it byRician noise with120590 = 10 and use differentmethods to recoverthe noisy image The numerical results are enumerated inTable 2 where the first column gives the test image names thesecond column gives the method names (here Algorithm 1denotes our method presented in Section 2) and columns

3-5 (resp 6-8 and 9-11) are the PSNR (dB) SSIM and CPUtime (s) for 120590 = 10 (resp 120590 = 15 120590 = 20) We observethat the PSNR values of the restored images by Algorithm 1are higher than those of MAP GTV CZ and NLM methodMoreover you can easily find that the PSNR values of therestored images by Algorithm 1 are always the highest amongall those five methods for all images As far as ldquoBarbarardquo isconcerned the PSNR value of our method is more than 3

Mathematical Problems in Engineering 7

(a) Lena (b) Barbara (c) House

(d) Monarch (e) Brain (f) Mouse

Figure 1 The original images

(a) (b) (c) (d) (e) (f)

Figure 2 Results of ldquoLenardquo by different methods (a) Degraded images (row 1 with 120590 = 10 row 2 with 120590 = 15 row 3 with 120590 = 20) (b) MAPmethod (row 1 120582 = 15 row 2 120582 = 20 row 3 120582 = 25) (c) GTV method (row 1 120582 = 15 row 2 120582 = 20 row 3 120582 = 25) (d) CZ method (row 1120582 = 007 row 2 120582 = 0055 row 3 120582 = 0045) (e) NLMmethod (row 1-row 3 are denoised image of 120590 = 10 15 20 row 4 patch with 120590 = 20)(f) ours (row 1 120582 = 001 row 2 120582 = 002 row 3 120582 = 002)

dB higher than CZ model at 120590 = 10 The characteristicsof SSIM values are almost consistent with those of PSNRvalues However the SSIM values are composed of luminancecomparison contrast comparison and structure comparisonwhich makes differences at some points

The images are corrupted respectively by Rician distri-bution with 120590 = 10 120590 = 15 and 120590 = 20 and the restoredimages of the above five algorithms for the images ldquoLenardquoldquoBarbarardquo ldquoHouserdquo and ldquoMonarchrdquo are shown inFigures 2ndash5respectively We clearly find that much noise remains in the

8 Mathematical Problems in Engineering

(a) (b) (c) (d) (e) (f)

Figure 3 Results of ldquoBarbarardquo by different methods (a) Degraded images (row 1 with 120590 = 10 row 2 with 120590 = 15 row 3 with 120590 = 20) row4 patch with 120590 = 20 (b) MAP method (row 1 120582 = 22 row 2 120582 = 30 row 3 120582 = 37) row 4 patch with 120590 = 20 (c) GTV method (row 1120582 = 23 row 2 120582 = 30 row 3 120582 = 37) row 4 patch with 120590 = 20 (d) CZ method (row 1 120582 = 0045 row 2 120582 = 0035 row 3 120582 = 003) row 4patch with 120590 = 20 (e) NLM method (row 1-row 3 are denoised image of 120590 = 10 15 20 row 4 patch with 120590 = 20) (f) ours (row 1 120582 = 001row 2 120582 = 002 row 3 120582 = 002) row 4 patch with 120590 = 20

(a) (b) (c) (d) (e) (f)

Figure 4 Results of ldquoHouserdquo by different methods (a) Degraded images (row 1 with 120590 = 10 row 2 with 120590 = 15 row 3 with 120590 = 20) (b)MAP method (row 1 120582 = 14 row 2 120582 = 19 row 3 120582 = 24) (c) GTV method (row 1 120582 = 14 row 2 120582 = 19 row 3 120582 = 24) (d) CZ method(row 1 120582 = 0075 row 2 120582 = 0055 row 3 120582 = 0045) (e) NLM method (row 1-row 3 are denoised image of 120590 = 10 15 20 row 4 patch with120590 = 20) (f) ours (row 1 120582 = 001 row 2 120582 = 05 row 3 120582 = 10)

Mathematical Problems in Engineering 9

(a) (b) (c) (d) (e) (f)

Figure 5 Results of ldquoMonarchrdquo by different methods (a) Degraded images (row 1 with 120590 = 10 row 2 with 120590 = 15 row 3 with 120590 = 20) (b)MAP method (row 1120582 = 15 row 2 120582 = 20 row 3 120582 = 30) (c) GTV method (row 1120582 = 15 row 2 120582 = 20 row 3 120582 = 30) (d) CZ method(row 1 120582 = 0065 row 2 120582 = 005 row 3 120582 = 004) (e) NLM method (row 1-row 3 are denoised image of 120590 = 10 15 20 row 4 patch with120590 = 20) (f) ours (row 1 120582 = 001 row 2 120582 = 002 row 3 120582 = 002)

(a) (b) (c) (d) (e) (f)

(g) (h) (i) (j) (k) (l)

Figure 6 Results of ldquoBrainrdquo with 120590 = 10 by different methods (a) is the original ldquoBrainrdquo image and (g) is the degraded image with 120590 = 10(b)-(f) are the denoised images ofMAPmethod (120582 = 17) GTVmethod (120582 = 15) CZmethod (120582 = 0065) NLMmethod and ours (120582 = 001)and (h)-(l) are residuals of those methods respectively

results of MAP GTV and CZmethod For example in Row 4of Figure 3 we can see from the background that ldquoBarbarardquoimage restored by our method is more clearer than thoserestored by other methods and textures of Barbararsquos trousersand scarf are kept better in the restored image by ourmethodThe restored images by our method also preserve moresignificant details thanMAPGTV CZ andNLMmethods onthe hat tidbits of the ldquoLenardquo image What is more the flowersof the lower left of ldquoMonarchrdquo also indicate that our method

is superior to other methods because our method is a patch-based method which is a different framework with TV-basedmethods (ie MAP method and CZ method) The imagesobtained from our method provide smoother regions andbetter shape preservation (eg the backgrounds in ldquoLenardquoand ldquoBarbarardquo)

Figures 6ndash11 are results of experiments on ldquoBrainrdquo andldquoMouserdquo images In particular for the ldquoBrainrdquo and ldquoMouserdquoimages we not only present the recovered images but also

10 Mathematical Problems in Engineering

(a) (b) (c) (d) (e) (f)

(g) (h) (i) (j) (k) (l)

Figure 7 Results of Brain with 120590 = 15 by different methods (a) is the original ldquoBrainrdquo image and (g) is the degraded image with 120590 = 15(b)-(f) are the denoised images of MAPmethod (120582 = 25) GTVmethod (120582 = 22) CZmethod (120582 = 005) NLMmethod and ours (120582 = 001)and (h)-(l) are residuals of those methods respectively

(a) (b) (c) (d) (e) (f)

(g) (h) (i) (j) (k) (l)

Figure 8 Results of Brain with 120590 = 20 by different methods (a) is the original ldquoBrainrdquo image and (g) is the degraded image with 120590 = 20(b)-(f) are the denoised images ofMAPmethod (120582 = 35) GTVmethod (120582 = 22) CZmethod (120582 = 0045) NLMmethod and ours (120582 = 001)and (h)-(l) are residuals of those methods respectively

use residuals to make a comparison with the original imagesThe first row is original image and recovered images ofMAP method GTV method CZ method NLM methodand our method The second row is degraded image withvarious 120590 and residuals of those methods respectively InFigures 6ndash8 the recovered images of of MAP method GTVmethod CZ method and NLM method are still unclear sowe can conclude that the recovered images by ourmethod arebest through our visions and residuals As for the ldquoMouserdquoimage the recovered results are very similar in this casewe use residuals to see which result is better In general theblurrier residual is the better according recovered image isWe can find that the outline of residual by our method isblurriest

In conclusion the results of our numerical experimentsdemonstrate that our proposed method performs better thanthe MAP method GTV method CZ method and NLMmethod

4 Conclusion

In this paper we proposed a new effective model via alearned dictionary for denoising images with Rician noiseMore specifically based on the dictionary training we adda dictionary penalty term to the original nonconvex MAPmodel to establish a new denoising model and develop atwo-step algorithm to solve our model Also we carry out

Mathematical Problems in Engineering 11

(a) (b) (c) (d) (e) (f)

(g) (h) (i) (j) (k) (l)

Figure 9 Results of ldquoMouserdquo with 120590 = 10 by different methods (a) is the original ldquoMouserdquo image and (g) is the degraded image with 120590 = 10(b)-(f) are the denoised images of MAPmethod (120582 = 17) GTVmethod (120582 = 17) CZmethod (120582 = 006) NLMmethod and ours (120582 = 001)and (h)-(l) are residuals of those methods respectively

(a) (b) (c) (d) (e) (f)

(g) (h) (i) (j) (k) (l)

Figure 10 Results of ldquoMouserdquo with 120590 = 15 by different methods (a) is the original ldquoMouserdquo image and (g) is the degraded image with 120590 = 15(b)-(f) are the denoised images of MAPmethod (120582 = 24) GTV method (120582 = 24) CZ method (120582 = 005) NLM method and ours (120582 = 001)and (h)-(l) are residuals of those methods respectively

experiments on various images to demonstrate the effective-ness of our modelThe numerical experiments show that ourproposedmodel is promising in denoising imageswithRiciannoise

Data Availability

The data used to support the findings of this study areavailable from the corresponding author upon request

Conflicts of Interest

The authors declare that there are no conflicts of interest

Acknowledgments

The authors would like to thank one of the authors of[32] for providing the source code of CZ method Thiswork was supported in part by the National Natural ScienceFoundation of China under Grants 11871348 61872429 and61373087 by the Natural Science Foundation of GuangdongChina under Grants 2015A030313550 and 2015A030313557by the HD Video RampD Platform for Intelligent Analy-sis and Processing in Guangdong Engineering TechnologyResearch Centre of Colleges and Universities (no GCZX-A1409) by Natural Science Foundation of Shenzhen underGrant JCYJ20170818091621856 and by the Guangdong Key

12 Mathematical Problems in Engineering

(a) (b) (c) (d) (e) (f)

(g) (h) (i) (j) (k) (l)

Figure 11 Results of ldquoMouserdquo with 120590 = 20 by different methods (a) is the original ldquoMouserdquo image and (g) is the degraded image with 120590 = 20(b)-(f) are the denoised images of MAPmethod (120582 = 31) GTV method (120582 = 32) CZ method (120582 = 0035) NLMmethod and ours (120582 = 001)and (h)-(l) are residuals of those methods respectively

Laboratory of Intelligent Information Processing ShenzhenUniversity China (518060)

References

[1] J-F Cai B Dong and Z Shen ldquoImage restoration A waveletframe based model for piecewise smooth functions andbeyondrdquoApplied andComputational Harmonic Analysis vol 41no 1 pp 94ndash138 2016

[2] R H Chan and J Ma ldquoA multiplicative iterative algorithm forbox-constrained penalized likelihood image restorationrdquo IEEETransactions on Image Processing vol 21 no 7 pp 3168ndash31812012

[3] C A Micchelli L Shen and Y Xu ldquoProximity algorithmsfor image models Denoisingrdquo Inverse Problems vol 27 no 4Article ID 045009 p 30 2011

[4] YWang J Yang W Yin and Y Zhang ldquoA new alternatingmin-imization algorithm for total variation image reconstructionrdquoSIAM Journal on Imaging Sciences vol 1 no 3 pp 248ndash2722008

[5] Y Dong M Hintermuller and M Neri ldquoAn efficient primal-dual method for l1-TV image restorationrdquo SIAM Journal onImaging Sciences vol 2 no 4 pp 1168ndash1189 2009

[6] J Lu K Qiao L Shen and Y Zou ldquoFixed-point algorithmsfor a TVL1 image restoration modelrdquo International Journal ofComputer Mathematics vol 95 no 9 pp 1829ndash1844 2018

[7] J Lu Z Ye and Y Zou ldquoHuber fractal image coding based ona fitting planerdquo IEEE Transactions on Image Processing vol 22no 1 pp 134ndash145 2013

[8] C A Micchelli L Shen Y Xu and X Zeng ldquoProximityalgorithms for the L1TV image denoising modelrdquo Advances inComputational Mathematics vol 38 no 2 pp 401ndash426 2013

[9] J Yang Y Zhang and W Yin ldquoAn efficient TVL1 algorithm fordeblurring multichannel images corrupted by impulsive noiserdquoSIAM Journal on Scientific Computing vol 31 no 4 pp 2842ndash2865 2009

[10] L Ma L Moisan J Yu and T Zeng ldquoA dictionary learningapproach for poisson image deblurringrdquo IEEE Transactions onMedical Imaging vol 32 no 7 pp 1277ndash1289 2013

[11] Y Xiao and T Zeng ldquoPoisson noise removal via learned dictio-naryrdquo in Proceedings of the 17th IEEE International Conferenceon Image Processing (ICIP) pp 1177ndash1180 2010

[12] H Zhang Y Dong and Q Fan ldquoWavelet frame based Poissonnoise removal and image deblurringrdquo Signal Processing vol 137pp 363ndash372 2017

[13] Y Dong and T Zeng ldquoA convex variational model for restoringblurred images with multiplicative noiserdquo SIAM Journal onImaging Sciences vol 6 no 3 pp 1598ndash1625 2013

[14] Y Huang M K Ng and YWen ldquoA new total variation methodfor multiplicative noise removalrdquo SIAM Journal on ImagingSciences vol 2 no 1 pp 20ndash40 2009

[15] Y-M Huang L Moisan M K Ng and T Zeng ldquoMultiplicativenoise removal via a learned dictionaryrdquo IEEE Transactions onImage Processing vol 21 no 11 pp 4534ndash4543 2012

[16] Z Jin and X Yang ldquoA variational model to remove the multi-plicative noise in ultrasound imagesrdquo Journal of MathematicalImaging and Vision vol 39 no 1 pp 62ndash74 2011

[17] M Kang S Yun and H Woo ldquoTwo-level convex relaxedvariational model for multiplicative denoisingrdquo SIAM Journalon Imaging Sciences vol 6 no 2 pp 875ndash903 2013

[18] J Lu L Shen C Xu and Y Xu ldquoMultiplicative noise removalin imaging An exp-model and its fixed-point proximity algo-rithmrdquo Applied and Computational Harmonic Analysis vol 41no 2 pp 518ndash539 2016

[19] J Lu L Shen C Xu and Y Xu ldquoMultiplicative noise removalwith a sparsity-aware optimization modelrdquo Inverse Problemsand Imaging vol 11 no 6 pp 949ndash974 2017

[20] J Lu Z Yang L Shen Z Lu H Yang and C Xu ldquoA frameletalgorithm for de-blurring images corrupted by multiplicativenoiserdquoApplied Mathematical Modelling vol 62 pp 51ndash61 2018

[21] J Lu Y Chen Y Zou and L Shen ldquoA new total variationmodelfor restoring blurred and speckle noisy imagesrdquo InternationalJournal of Wavelets Multiresolution and Information Processingvol 15 no 2 19 pages 2017

Mathematical Problems in Engineering 13

[22] P Perona and J Malik ldquoScale-space and edge detection usinganisotropic diffusionrdquo IEEE Transactions on Pattern Analysisand Machine Intelligence vol 12 no 7 pp 629ndash639 1990

[23] G Gerig O Kubler R Kikinis and F A Jolesz ldquoNonlinearanisotropic filtering ofMRI datardquo IEEE Transactions onMedicalImaging vol 11 no 2 pp 221ndash232 1992

[24] S Prima S P Morrissey and C Barillot ldquoNon-local meansvariants for denoising of diffusion-weighted and diffusiontensor MRIrdquo in Proceedings of the International Conference onMedical Image Computing and Computer-Assisted Interventionpp 344ndash351 Springer Berlin Germany 2007

[25] J V Manjon J Carbonell-Caballero J J Lull G Garcıa-MartiL Martı-Bonmati and M Robles ldquoMRI denoising using Non-LocalMeansrdquoMedical Image Analysis vol 12 no 4 pp 514ndash5232008

[26] N Wiest-Daessle S Prima P Coupe S P Morrissey and CBarillot ldquoRician noise removal by non-local means filteringfor low signal-to-noise ratio MRI Applications to DT-MRIrdquoMedical Image Computing and Computer-Assisted Interventionvol 5242 no 2 pp 171ndash179 2008

[27] R D Nowak ldquoWavelet-based Rician noise removal for mag-netic resonance imagingrdquo IEEE Transactions on Image Process-ing vol 8 no 10 pp 1408ndash1419 1999

[28] A Foi ldquoNoise estimation and removal in MR imaging Thevariance-stabilization approachrdquo in Proceedings of the 8th IEEEInternational Symposium on Biomedical Imaging From Nano toMacro (ISBI) pp 1809ndash1814 IEEE Chicago IL USA 2011

[29] J C Wood and K M Johnson ldquoWavelet packet denoising ofmagnetic resonance images Importance of Rician noise at lowSNRrdquo Magnetic Resonance in Medicine vol 41 no 3 pp 631ndash635 1999

[30] F Bowman Introduction to Bessel functions Dover PublicationsMineola NY USA 2012

[31] P Getreuer M Tong and L A Vese ldquoA variational model forthe restoration of MR images corrupted by blur and Riciannoiserdquo in Proceedings of the 7th international conference onAdvances in visual computing vol Part I of Lecture Notes inComputer Science pp 686ndash698 Springer Las Vegas NV USA2011

[32] L Chen and T Zeng ldquoA convex variational model for restoringblurred imageswith large Rician noiserdquo Journal ofMathematicalImaging and Vision vol 53 no 1 pp 92ndash111 2015

[33] M Aharon M Elad and A M Bruckstein ldquoK-SVD Analgorithm for designing overcomplete dictionaries for sparserepresentationrdquo IEEE Transactions on Signal Processing vol 54no 11 pp 4311ndash4322 2006

[34] M Elad and M Aharon ldquoImage denoising via sparse andredundant representations over learned dictionariesrdquo IEEETransactions on Image Processing vol 15 no 12 pp 3736ndash37452006

[35] G Aubert and J Aujol ldquoA variational approach to removingmultiplicative noiserdquo SIAM Journal on Applied Mathematicsvol 68 no 4 pp 925ndash946 2008

[36] Q Liu S Wang K Yang J Luo Y Zhu and D Liang ldquoHighlyundersampled magnetic resonance image reconstruction usingtwo-level Bregman method with dictionary updatingrdquo IEEETransactions on Medical Imaging vol 32 no 7 pp 1290ndash13012013

[37] Q Liu S Wang L Ying X Peng Y Zhu and D LiangldquoAdaptive dictionary learning in sparse gradient domain forimage recoveryrdquo IEEE Transactions on Image Processing vol 22no 12 pp 4652ndash4663 2013

[38] H Lu J Wei Q Liu Y Wang and X Deng ldquoA dictionarylearning method with total generalized variation for mri recon-structionrdquo International Journal of Biomedical Imaging vol2016 Article ID 7512471 13 pages 2016

[39] A Chambolle and T Pock ldquoA first-order primal-dual algorithmfor convex problems with applications to imagingrdquo Journal ofMathematical Imaging and Vision vol 40 no 1 pp 120ndash1452011

[40] E Esser X Zhang and T F Chan ldquoA general frameworkfor a class of first order primal-dual algorithms for convexoptimization in imaging sciencerdquo SIAM Journal on ImagingSciences vol 3 no 4 pp 1015ndash1046 2010

[41] N Komodakis and J-C Pesquet ldquoPlaying with duality Anoverview of recent primaldual approaches for solving large-scale optimization problemsrdquo IEEE Signal Processing Magazinevol 32 no 6 pp 31ndash54 2015

[42] A Chambolle ldquoAn algorithm for total variation minimizationand applicationsrdquo Journal of Mathematical Imaging and Visionvol 20 no 1-2 pp 89ndash97 2004

[43] L Ambrosio N Fusco and D Pallara Functions of BoundedVariation and Free Discontinuity Problem Oxford UniversityPress London 2000

[44] Z Wang A C Bovik H R Sheikh and E P SimoncellildquoImage quality assessment From error visibility to structuralsimilarityrdquo IEEE Transactions on Image Processing vol 13 no4 pp 600ndash612 2004

[45] J Gibson and A Bovik Handbook of Image and Video Process-ing Academic Press 2000

Hindawiwwwhindawicom Volume 2018

MathematicsJournal of

Hindawiwwwhindawicom Volume 2018

Mathematical Problems in Engineering

Applied MathematicsJournal of

Hindawiwwwhindawicom Volume 2018

Probability and StatisticsHindawiwwwhindawicom Volume 2018

Journal of

Hindawiwwwhindawicom Volume 2018

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawiwwwhindawicom Volume 2018

OptimizationJournal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Engineering Mathematics

International Journal of

Hindawiwwwhindawicom Volume 2018

Operations ResearchAdvances in

Journal of

Hindawiwwwhindawicom Volume 2018

Function SpacesAbstract and Applied AnalysisHindawiwwwhindawicom Volume 2018

International Journal of Mathematics and Mathematical Sciences

Hindawiwwwhindawicom Volume 2018

Hindawi Publishing Corporation httpwwwhindawicom Volume 2013Hindawiwwwhindawicom

The Scientific World Journal

Volume 2018

Hindawiwwwhindawicom Volume 2018Volume 2018

Numerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisAdvances inAdvances in Discrete Dynamics in

Nature and SocietyHindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom

Dierential EquationsInternational Journal of

Volume 2018

Hindawiwwwhindawicom Volume 2018

Decision SciencesAdvances in

Hindawiwwwhindawicom Volume 2018

AnalysisInternational Journal of

Hindawiwwwhindawicom Volume 2018

Stochastic AnalysisInternational Journal of

Submit your manuscripts atwwwhindawicom

Page 7: ResearchArticle Rician Noise Removal via a Learned Dictionary

Mathematical Problems in Engineering 7

(a) Lena (b) Barbara (c) House

(d) Monarch (e) Brain (f) Mouse

Figure 1 The original images

(a) (b) (c) (d) (e) (f)

Figure 2 Results of ldquoLenardquo by different methods (a) Degraded images (row 1 with 120590 = 10 row 2 with 120590 = 15 row 3 with 120590 = 20) (b) MAPmethod (row 1 120582 = 15 row 2 120582 = 20 row 3 120582 = 25) (c) GTV method (row 1 120582 = 15 row 2 120582 = 20 row 3 120582 = 25) (d) CZ method (row 1120582 = 007 row 2 120582 = 0055 row 3 120582 = 0045) (e) NLMmethod (row 1-row 3 are denoised image of 120590 = 10 15 20 row 4 patch with 120590 = 20)(f) ours (row 1 120582 = 001 row 2 120582 = 002 row 3 120582 = 002)

dB higher than CZ model at 120590 = 10 The characteristicsof SSIM values are almost consistent with those of PSNRvalues However the SSIM values are composed of luminancecomparison contrast comparison and structure comparisonwhich makes differences at some points

The images are corrupted respectively by Rician distri-bution with 120590 = 10 120590 = 15 and 120590 = 20 and the restoredimages of the above five algorithms for the images ldquoLenardquoldquoBarbarardquo ldquoHouserdquo and ldquoMonarchrdquo are shown inFigures 2ndash5respectively We clearly find that much noise remains in the

8 Mathematical Problems in Engineering

(a) (b) (c) (d) (e) (f)

Figure 3 Results of ldquoBarbarardquo by different methods (a) Degraded images (row 1 with 120590 = 10 row 2 with 120590 = 15 row 3 with 120590 = 20) row4 patch with 120590 = 20 (b) MAP method (row 1 120582 = 22 row 2 120582 = 30 row 3 120582 = 37) row 4 patch with 120590 = 20 (c) GTV method (row 1120582 = 23 row 2 120582 = 30 row 3 120582 = 37) row 4 patch with 120590 = 20 (d) CZ method (row 1 120582 = 0045 row 2 120582 = 0035 row 3 120582 = 003) row 4patch with 120590 = 20 (e) NLM method (row 1-row 3 are denoised image of 120590 = 10 15 20 row 4 patch with 120590 = 20) (f) ours (row 1 120582 = 001row 2 120582 = 002 row 3 120582 = 002) row 4 patch with 120590 = 20

(a) (b) (c) (d) (e) (f)

Figure 4 Results of ldquoHouserdquo by different methods (a) Degraded images (row 1 with 120590 = 10 row 2 with 120590 = 15 row 3 with 120590 = 20) (b)MAP method (row 1 120582 = 14 row 2 120582 = 19 row 3 120582 = 24) (c) GTV method (row 1 120582 = 14 row 2 120582 = 19 row 3 120582 = 24) (d) CZ method(row 1 120582 = 0075 row 2 120582 = 0055 row 3 120582 = 0045) (e) NLM method (row 1-row 3 are denoised image of 120590 = 10 15 20 row 4 patch with120590 = 20) (f) ours (row 1 120582 = 001 row 2 120582 = 05 row 3 120582 = 10)

Mathematical Problems in Engineering 9

(a) (b) (c) (d) (e) (f)

Figure 5 Results of ldquoMonarchrdquo by different methods (a) Degraded images (row 1 with 120590 = 10 row 2 with 120590 = 15 row 3 with 120590 = 20) (b)MAP method (row 1120582 = 15 row 2 120582 = 20 row 3 120582 = 30) (c) GTV method (row 1120582 = 15 row 2 120582 = 20 row 3 120582 = 30) (d) CZ method(row 1 120582 = 0065 row 2 120582 = 005 row 3 120582 = 004) (e) NLM method (row 1-row 3 are denoised image of 120590 = 10 15 20 row 4 patch with120590 = 20) (f) ours (row 1 120582 = 001 row 2 120582 = 002 row 3 120582 = 002)

(a) (b) (c) (d) (e) (f)

(g) (h) (i) (j) (k) (l)

Figure 6 Results of ldquoBrainrdquo with 120590 = 10 by different methods (a) is the original ldquoBrainrdquo image and (g) is the degraded image with 120590 = 10(b)-(f) are the denoised images ofMAPmethod (120582 = 17) GTVmethod (120582 = 15) CZmethod (120582 = 0065) NLMmethod and ours (120582 = 001)and (h)-(l) are residuals of those methods respectively

results of MAP GTV and CZmethod For example in Row 4of Figure 3 we can see from the background that ldquoBarbarardquoimage restored by our method is more clearer than thoserestored by other methods and textures of Barbararsquos trousersand scarf are kept better in the restored image by ourmethodThe restored images by our method also preserve moresignificant details thanMAPGTV CZ andNLMmethods onthe hat tidbits of the ldquoLenardquo image What is more the flowersof the lower left of ldquoMonarchrdquo also indicate that our method

is superior to other methods because our method is a patch-based method which is a different framework with TV-basedmethods (ie MAP method and CZ method) The imagesobtained from our method provide smoother regions andbetter shape preservation (eg the backgrounds in ldquoLenardquoand ldquoBarbarardquo)

Figures 6ndash11 are results of experiments on ldquoBrainrdquo andldquoMouserdquo images In particular for the ldquoBrainrdquo and ldquoMouserdquoimages we not only present the recovered images but also

10 Mathematical Problems in Engineering

(a) (b) (c) (d) (e) (f)

(g) (h) (i) (j) (k) (l)

Figure 7 Results of Brain with 120590 = 15 by different methods (a) is the original ldquoBrainrdquo image and (g) is the degraded image with 120590 = 15(b)-(f) are the denoised images of MAPmethod (120582 = 25) GTVmethod (120582 = 22) CZmethod (120582 = 005) NLMmethod and ours (120582 = 001)and (h)-(l) are residuals of those methods respectively

(a) (b) (c) (d) (e) (f)

(g) (h) (i) (j) (k) (l)

Figure 8 Results of Brain with 120590 = 20 by different methods (a) is the original ldquoBrainrdquo image and (g) is the degraded image with 120590 = 20(b)-(f) are the denoised images ofMAPmethod (120582 = 35) GTVmethod (120582 = 22) CZmethod (120582 = 0045) NLMmethod and ours (120582 = 001)and (h)-(l) are residuals of those methods respectively

use residuals to make a comparison with the original imagesThe first row is original image and recovered images ofMAP method GTV method CZ method NLM methodand our method The second row is degraded image withvarious 120590 and residuals of those methods respectively InFigures 6ndash8 the recovered images of of MAP method GTVmethod CZ method and NLM method are still unclear sowe can conclude that the recovered images by ourmethod arebest through our visions and residuals As for the ldquoMouserdquoimage the recovered results are very similar in this casewe use residuals to see which result is better In general theblurrier residual is the better according recovered image isWe can find that the outline of residual by our method isblurriest

In conclusion the results of our numerical experimentsdemonstrate that our proposed method performs better thanthe MAP method GTV method CZ method and NLMmethod

4 Conclusion

In this paper we proposed a new effective model via alearned dictionary for denoising images with Rician noiseMore specifically based on the dictionary training we adda dictionary penalty term to the original nonconvex MAPmodel to establish a new denoising model and develop atwo-step algorithm to solve our model Also we carry out

Mathematical Problems in Engineering 11

(a) (b) (c) (d) (e) (f)

(g) (h) (i) (j) (k) (l)

Figure 9 Results of ldquoMouserdquo with 120590 = 10 by different methods (a) is the original ldquoMouserdquo image and (g) is the degraded image with 120590 = 10(b)-(f) are the denoised images of MAPmethod (120582 = 17) GTVmethod (120582 = 17) CZmethod (120582 = 006) NLMmethod and ours (120582 = 001)and (h)-(l) are residuals of those methods respectively

(a) (b) (c) (d) (e) (f)

(g) (h) (i) (j) (k) (l)

Figure 10 Results of ldquoMouserdquo with 120590 = 15 by different methods (a) is the original ldquoMouserdquo image and (g) is the degraded image with 120590 = 15(b)-(f) are the denoised images of MAPmethod (120582 = 24) GTV method (120582 = 24) CZ method (120582 = 005) NLM method and ours (120582 = 001)and (h)-(l) are residuals of those methods respectively

experiments on various images to demonstrate the effective-ness of our modelThe numerical experiments show that ourproposedmodel is promising in denoising imageswithRiciannoise

Data Availability

The data used to support the findings of this study areavailable from the corresponding author upon request

Conflicts of Interest

The authors declare that there are no conflicts of interest

Acknowledgments

The authors would like to thank one of the authors of[32] for providing the source code of CZ method Thiswork was supported in part by the National Natural ScienceFoundation of China under Grants 11871348 61872429 and61373087 by the Natural Science Foundation of GuangdongChina under Grants 2015A030313550 and 2015A030313557by the HD Video RampD Platform for Intelligent Analy-sis and Processing in Guangdong Engineering TechnologyResearch Centre of Colleges and Universities (no GCZX-A1409) by Natural Science Foundation of Shenzhen underGrant JCYJ20170818091621856 and by the Guangdong Key

12 Mathematical Problems in Engineering

(a) (b) (c) (d) (e) (f)

(g) (h) (i) (j) (k) (l)

Figure 11 Results of ldquoMouserdquo with 120590 = 20 by different methods (a) is the original ldquoMouserdquo image and (g) is the degraded image with 120590 = 20(b)-(f) are the denoised images of MAPmethod (120582 = 31) GTV method (120582 = 32) CZ method (120582 = 0035) NLMmethod and ours (120582 = 001)and (h)-(l) are residuals of those methods respectively

Laboratory of Intelligent Information Processing ShenzhenUniversity China (518060)

References

[1] J.-F. Cai, B. Dong, and Z. Shen, "Image restoration: A wavelet frame based model for piecewise smooth functions and beyond," Applied and Computational Harmonic Analysis, vol. 41, no. 1, pp. 94–138, 2016.
[2] R. H. Chan and J. Ma, "A multiplicative iterative algorithm for box-constrained penalized likelihood image restoration," IEEE Transactions on Image Processing, vol. 21, no. 7, pp. 3168–3181, 2012.
[3] C. A. Micchelli, L. Shen, and Y. Xu, "Proximity algorithms for image models: Denoising," Inverse Problems, vol. 27, no. 4, Article ID 045009, p. 30, 2011.
[4] Y. Wang, J. Yang, W. Yin, and Y. Zhang, "A new alternating minimization algorithm for total variation image reconstruction," SIAM Journal on Imaging Sciences, vol. 1, no. 3, pp. 248–272, 2008.
[5] Y. Dong, M. Hintermüller, and M. Neri, "An efficient primal-dual method for l1-TV image restoration," SIAM Journal on Imaging Sciences, vol. 2, no. 4, pp. 1168–1189, 2009.
[6] J. Lu, K. Qiao, L. Shen, and Y. Zou, "Fixed-point algorithms for a TVL1 image restoration model," International Journal of Computer Mathematics, vol. 95, no. 9, pp. 1829–1844, 2018.
[7] J. Lu, Z. Ye, and Y. Zou, "Huber fractal image coding based on a fitting plane," IEEE Transactions on Image Processing, vol. 22, no. 1, pp. 134–145, 2013.
[8] C. A. Micchelli, L. Shen, Y. Xu, and X. Zeng, "Proximity algorithms for the L1/TV image denoising model," Advances in Computational Mathematics, vol. 38, no. 2, pp. 401–426, 2013.
[9] J. Yang, Y. Zhang, and W. Yin, "An efficient TVL1 algorithm for deblurring multichannel images corrupted by impulsive noise," SIAM Journal on Scientific Computing, vol. 31, no. 4, pp. 2842–2865, 2009.
[10] L. Ma, L. Moisan, J. Yu, and T. Zeng, "A dictionary learning approach for Poisson image deblurring," IEEE Transactions on Medical Imaging, vol. 32, no. 7, pp. 1277–1289, 2013.
[11] Y. Xiao and T. Zeng, "Poisson noise removal via learned dictionary," in Proceedings of the 17th IEEE International Conference on Image Processing (ICIP), pp. 1177–1180, 2010.
[12] H. Zhang, Y. Dong, and Q. Fan, "Wavelet frame based Poisson noise removal and image deblurring," Signal Processing, vol. 137, pp. 363–372, 2017.
[13] Y. Dong and T. Zeng, "A convex variational model for restoring blurred images with multiplicative noise," SIAM Journal on Imaging Sciences, vol. 6, no. 3, pp. 1598–1625, 2013.
[14] Y. Huang, M. K. Ng, and Y. Wen, "A new total variation method for multiplicative noise removal," SIAM Journal on Imaging Sciences, vol. 2, no. 1, pp. 20–40, 2009.
[15] Y.-M. Huang, L. Moisan, M. K. Ng, and T. Zeng, "Multiplicative noise removal via a learned dictionary," IEEE Transactions on Image Processing, vol. 21, no. 11, pp. 4534–4543, 2012.
[16] Z. Jin and X. Yang, "A variational model to remove the multiplicative noise in ultrasound images," Journal of Mathematical Imaging and Vision, vol. 39, no. 1, pp. 62–74, 2011.
[17] M. Kang, S. Yun, and H. Woo, "Two-level convex relaxed variational model for multiplicative denoising," SIAM Journal on Imaging Sciences, vol. 6, no. 2, pp. 875–903, 2013.
[18] J. Lu, L. Shen, C. Xu, and Y. Xu, "Multiplicative noise removal in imaging: An exp-model and its fixed-point proximity algorithm," Applied and Computational Harmonic Analysis, vol. 41, no. 2, pp. 518–539, 2016.
[19] J. Lu, L. Shen, C. Xu, and Y. Xu, "Multiplicative noise removal with a sparsity-aware optimization model," Inverse Problems and Imaging, vol. 11, no. 6, pp. 949–974, 2017.
[20] J. Lu, Z. Yang, L. Shen, Z. Lu, H. Yang, and C. Xu, "A framelet algorithm for de-blurring images corrupted by multiplicative noise," Applied Mathematical Modelling, vol. 62, pp. 51–61, 2018.
[21] J. Lu, Y. Chen, Y. Zou, and L. Shen, "A new total variation model for restoring blurred and speckle noisy images," International Journal of Wavelets, Multiresolution and Information Processing, vol. 15, no. 2, 19 pages, 2017.


[22] P. Perona and J. Malik, "Scale-space and edge detection using anisotropic diffusion," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, no. 7, pp. 629–639, 1990.
[23] G. Gerig, O. Kübler, R. Kikinis, and F. A. Jolesz, "Nonlinear anisotropic filtering of MRI data," IEEE Transactions on Medical Imaging, vol. 11, no. 2, pp. 221–232, 1992.
[24] S. Prima, S. P. Morrissey, and C. Barillot, "Non-local means variants for denoising of diffusion-weighted and diffusion tensor MRI," in Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 344–351, Springer, Berlin, Germany, 2007.
[25] J. V. Manjón, J. Carbonell-Caballero, J. J. Lull, G. García-Martí, L. Martí-Bonmatí, and M. Robles, "MRI denoising using Non-Local Means," Medical Image Analysis, vol. 12, no. 4, pp. 514–523, 2008.
[26] N. Wiest-Daesslé, S. Prima, P. Coupé, S. P. Morrissey, and C. Barillot, "Rician noise removal by non-local means filtering for low signal-to-noise ratio MRI: Applications to DT-MRI," Medical Image Computing and Computer-Assisted Intervention, vol. 5242, no. 2, pp. 171–179, 2008.
[27] R. D. Nowak, "Wavelet-based Rician noise removal for magnetic resonance imaging," IEEE Transactions on Image Processing, vol. 8, no. 10, pp. 1408–1419, 1999.
[28] A. Foi, "Noise estimation and removal in MR imaging: The variance-stabilization approach," in Proceedings of the 8th IEEE International Symposium on Biomedical Imaging: From Nano to Macro (ISBI), pp. 1809–1814, IEEE, Chicago, IL, USA, 2011.
[29] J. C. Wood and K. M. Johnson, "Wavelet packet denoising of magnetic resonance images: Importance of Rician noise at low SNR," Magnetic Resonance in Medicine, vol. 41, no. 3, pp. 631–635, 1999.
[30] F. Bowman, Introduction to Bessel Functions, Dover Publications, Mineola, NY, USA, 2012.
[31] P. Getreuer, M. Tong, and L. A. Vese, "A variational model for the restoration of MR images corrupted by blur and Rician noise," in Proceedings of the 7th International Conference on Advances in Visual Computing, vol. Part I of Lecture Notes in Computer Science, pp. 686–698, Springer, Las Vegas, NV, USA, 2011.
[32] L. Chen and T. Zeng, "A convex variational model for restoring blurred images with large Rician noise," Journal of Mathematical Imaging and Vision, vol. 53, no. 1, pp. 92–111, 2015.
[33] M. Aharon, M. Elad, and A. M. Bruckstein, "K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation," IEEE Transactions on Signal Processing, vol. 54, no. 11, pp. 4311–4322, 2006.
[34] M. Elad and M. Aharon, "Image denoising via sparse and redundant representations over learned dictionaries," IEEE Transactions on Image Processing, vol. 15, no. 12, pp. 3736–3745, 2006.
[35] G. Aubert and J. Aujol, "A variational approach to removing multiplicative noise," SIAM Journal on Applied Mathematics, vol. 68, no. 4, pp. 925–946, 2008.
[36] Q. Liu, S. Wang, K. Yang, J. Luo, Y. Zhu, and D. Liang, "Highly undersampled magnetic resonance image reconstruction using two-level Bregman method with dictionary updating," IEEE Transactions on Medical Imaging, vol. 32, no. 7, pp. 1290–1301, 2013.
[37] Q. Liu, S. Wang, L. Ying, X. Peng, Y. Zhu, and D. Liang, "Adaptive dictionary learning in sparse gradient domain for image recovery," IEEE Transactions on Image Processing, vol. 22, no. 12, pp. 4652–4663, 2013.
[38] H. Lu, J. Wei, Q. Liu, Y. Wang, and X. Deng, "A dictionary learning method with total generalized variation for MRI reconstruction," International Journal of Biomedical Imaging, vol. 2016, Article ID 7512471, 13 pages, 2016.
[39] A. Chambolle and T. Pock, "A first-order primal-dual algorithm for convex problems with applications to imaging," Journal of Mathematical Imaging and Vision, vol. 40, no. 1, pp. 120–145, 2011.
[40] E. Esser, X. Zhang, and T. F. Chan, "A general framework for a class of first order primal-dual algorithms for convex optimization in imaging science," SIAM Journal on Imaging Sciences, vol. 3, no. 4, pp. 1015–1046, 2010.
[41] N. Komodakis and J.-C. Pesquet, "Playing with duality: An overview of recent primal-dual approaches for solving large-scale optimization problems," IEEE Signal Processing Magazine, vol. 32, no. 6, pp. 31–54, 2015.
[42] A. Chambolle, "An algorithm for total variation minimization and applications," Journal of Mathematical Imaging and Vision, vol. 20, no. 1-2, pp. 89–97, 2004.
[43] L. Ambrosio, N. Fusco, and D. Pallara, Functions of Bounded Variation and Free Discontinuity Problems, Oxford University Press, London, 2000.
[44] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: From error visibility to structural similarity," IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600–612, 2004.
[45] J. Gibson and A. Bovik, Handbook of Image and Video Processing, Academic Press, 2000.


Page 8: ResearchArticle Rician Noise Removal via a Learned Dictionary

8 Mathematical Problems in Engineering

(a) (b) (c) (d) (e) (f)

Figure 3 Results of ldquoBarbarardquo by different methods (a) Degraded images (row 1 with 120590 = 10 row 2 with 120590 = 15 row 3 with 120590 = 20) row4 patch with 120590 = 20 (b) MAP method (row 1 120582 = 22 row 2 120582 = 30 row 3 120582 = 37) row 4 patch with 120590 = 20 (c) GTV method (row 1120582 = 23 row 2 120582 = 30 row 3 120582 = 37) row 4 patch with 120590 = 20 (d) CZ method (row 1 120582 = 0045 row 2 120582 = 0035 row 3 120582 = 003) row 4patch with 120590 = 20 (e) NLM method (row 1-row 3 are denoised image of 120590 = 10 15 20 row 4 patch with 120590 = 20) (f) ours (row 1 120582 = 001row 2 120582 = 002 row 3 120582 = 002) row 4 patch with 120590 = 20

(a) (b) (c) (d) (e) (f)

Figure 4 Results of ldquoHouserdquo by different methods (a) Degraded images (row 1 with 120590 = 10 row 2 with 120590 = 15 row 3 with 120590 = 20) (b)MAP method (row 1 120582 = 14 row 2 120582 = 19 row 3 120582 = 24) (c) GTV method (row 1 120582 = 14 row 2 120582 = 19 row 3 120582 = 24) (d) CZ method(row 1 120582 = 0075 row 2 120582 = 0055 row 3 120582 = 0045) (e) NLM method (row 1-row 3 are denoised image of 120590 = 10 15 20 row 4 patch with120590 = 20) (f) ours (row 1 120582 = 001 row 2 120582 = 05 row 3 120582 = 10)

Mathematical Problems in Engineering 9

(a) (b) (c) (d) (e) (f)

Figure 5 Results of ldquoMonarchrdquo by different methods (a) Degraded images (row 1 with 120590 = 10 row 2 with 120590 = 15 row 3 with 120590 = 20) (b)MAP method (row 1120582 = 15 row 2 120582 = 20 row 3 120582 = 30) (c) GTV method (row 1120582 = 15 row 2 120582 = 20 row 3 120582 = 30) (d) CZ method(row 1 120582 = 0065 row 2 120582 = 005 row 3 120582 = 004) (e) NLM method (row 1-row 3 are denoised image of 120590 = 10 15 20 row 4 patch with120590 = 20) (f) ours (row 1 120582 = 001 row 2 120582 = 002 row 3 120582 = 002)

(a) (b) (c) (d) (e) (f)

(g) (h) (i) (j) (k) (l)

Figure 6 Results of ldquoBrainrdquo with 120590 = 10 by different methods (a) is the original ldquoBrainrdquo image and (g) is the degraded image with 120590 = 10(b)-(f) are the denoised images ofMAPmethod (120582 = 17) GTVmethod (120582 = 15) CZmethod (120582 = 0065) NLMmethod and ours (120582 = 001)and (h)-(l) are residuals of those methods respectively

results of MAP GTV and CZmethod For example in Row 4of Figure 3 we can see from the background that ldquoBarbarardquoimage restored by our method is more clearer than thoserestored by other methods and textures of Barbararsquos trousersand scarf are kept better in the restored image by ourmethodThe restored images by our method also preserve moresignificant details thanMAPGTV CZ andNLMmethods onthe hat tidbits of the ldquoLenardquo image What is more the flowersof the lower left of ldquoMonarchrdquo also indicate that our method

is superior to other methods because our method is a patch-based method which is a different framework with TV-basedmethods (ie MAP method and CZ method) The imagesobtained from our method provide smoother regions andbetter shape preservation (eg the backgrounds in ldquoLenardquoand ldquoBarbarardquo)

Figures 6ndash11 are results of experiments on ldquoBrainrdquo andldquoMouserdquo images In particular for the ldquoBrainrdquo and ldquoMouserdquoimages we not only present the recovered images but also

10 Mathematical Problems in Engineering

(a) (b) (c) (d) (e) (f)

(g) (h) (i) (j) (k) (l)

Figure 7 Results of Brain with 120590 = 15 by different methods (a) is the original ldquoBrainrdquo image and (g) is the degraded image with 120590 = 15(b)-(f) are the denoised images of MAPmethod (120582 = 25) GTVmethod (120582 = 22) CZmethod (120582 = 005) NLMmethod and ours (120582 = 001)and (h)-(l) are residuals of those methods respectively

(a) (b) (c) (d) (e) (f)

(g) (h) (i) (j) (k) (l)

Figure 8 Results of Brain with 120590 = 20 by different methods (a) is the original ldquoBrainrdquo image and (g) is the degraded image with 120590 = 20(b)-(f) are the denoised images ofMAPmethod (120582 = 35) GTVmethod (120582 = 22) CZmethod (120582 = 0045) NLMmethod and ours (120582 = 001)and (h)-(l) are residuals of those methods respectively

use residuals to make a comparison with the original imagesThe first row is original image and recovered images ofMAP method GTV method CZ method NLM methodand our method The second row is degraded image withvarious 120590 and residuals of those methods respectively InFigures 6ndash8 the recovered images of of MAP method GTVmethod CZ method and NLM method are still unclear sowe can conclude that the recovered images by ourmethod arebest through our visions and residuals As for the ldquoMouserdquoimage the recovered results are very similar in this casewe use residuals to see which result is better In general theblurrier residual is the better according recovered image isWe can find that the outline of residual by our method isblurriest

In conclusion the results of our numerical experimentsdemonstrate that our proposed method performs better thanthe MAP method GTV method CZ method and NLMmethod

4 Conclusion

In this paper we proposed a new effective model via alearned dictionary for denoising images with Rician noiseMore specifically based on the dictionary training we adda dictionary penalty term to the original nonconvex MAPmodel to establish a new denoising model and develop atwo-step algorithm to solve our model Also we carry out

Mathematical Problems in Engineering 11

(a) (b) (c) (d) (e) (f)

(g) (h) (i) (j) (k) (l)

Figure 9 Results of ldquoMouserdquo with 120590 = 10 by different methods (a) is the original ldquoMouserdquo image and (g) is the degraded image with 120590 = 10(b)-(f) are the denoised images of MAPmethod (120582 = 17) GTVmethod (120582 = 17) CZmethod (120582 = 006) NLMmethod and ours (120582 = 001)and (h)-(l) are residuals of those methods respectively

(a) (b) (c) (d) (e) (f)

(g) (h) (i) (j) (k) (l)

Figure 10 Results of ldquoMouserdquo with 120590 = 15 by different methods (a) is the original ldquoMouserdquo image and (g) is the degraded image with 120590 = 15(b)-(f) are the denoised images of MAPmethod (120582 = 24) GTV method (120582 = 24) CZ method (120582 = 005) NLM method and ours (120582 = 001)and (h)-(l) are residuals of those methods respectively

experiments on various images to demonstrate the effective-ness of our modelThe numerical experiments show that ourproposedmodel is promising in denoising imageswithRiciannoise

Data Availability

The data used to support the findings of this study areavailable from the corresponding author upon request

Conflicts of Interest

The authors declare that there are no conflicts of interest

Acknowledgments

The authors would like to thank one of the authors of[32] for providing the source code of CZ method Thiswork was supported in part by the National Natural ScienceFoundation of China under Grants 11871348 61872429 and61373087 by the Natural Science Foundation of GuangdongChina under Grants 2015A030313550 and 2015A030313557by the HD Video RampD Platform for Intelligent Analy-sis and Processing in Guangdong Engineering TechnologyResearch Centre of Colleges and Universities (no GCZX-A1409) by Natural Science Foundation of Shenzhen underGrant JCYJ20170818091621856 and by the Guangdong Key

12 Mathematical Problems in Engineering

(a) (b) (c) (d) (e) (f)

(g) (h) (i) (j) (k) (l)

Figure 11 Results of ldquoMouserdquo with 120590 = 20 by different methods (a) is the original ldquoMouserdquo image and (g) is the degraded image with 120590 = 20(b)-(f) are the denoised images of MAPmethod (120582 = 31) GTV method (120582 = 32) CZ method (120582 = 0035) NLMmethod and ours (120582 = 001)and (h)-(l) are residuals of those methods respectively

Laboratory of Intelligent Information Processing ShenzhenUniversity China (518060)

References

[1] J-F Cai B Dong and Z Shen ldquoImage restoration A waveletframe based model for piecewise smooth functions andbeyondrdquoApplied andComputational Harmonic Analysis vol 41no 1 pp 94ndash138 2016

[2] R H Chan and J Ma ldquoA multiplicative iterative algorithm forbox-constrained penalized likelihood image restorationrdquo IEEETransactions on Image Processing vol 21 no 7 pp 3168ndash31812012

[3] C A Micchelli L Shen and Y Xu ldquoProximity algorithmsfor image models Denoisingrdquo Inverse Problems vol 27 no 4Article ID 045009 p 30 2011

[4] YWang J Yang W Yin and Y Zhang ldquoA new alternatingmin-imization algorithm for total variation image reconstructionrdquoSIAM Journal on Imaging Sciences vol 1 no 3 pp 248ndash2722008

[5] Y Dong M Hintermuller and M Neri ldquoAn efficient primal-dual method for l1-TV image restorationrdquo SIAM Journal onImaging Sciences vol 2 no 4 pp 1168ndash1189 2009

[6] J Lu K Qiao L Shen and Y Zou ldquoFixed-point algorithmsfor a TVL1 image restoration modelrdquo International Journal ofComputer Mathematics vol 95 no 9 pp 1829ndash1844 2018

[7] J Lu Z Ye and Y Zou ldquoHuber fractal image coding based ona fitting planerdquo IEEE Transactions on Image Processing vol 22no 1 pp 134ndash145 2013

[8] C A Micchelli L Shen Y Xu and X Zeng ldquoProximityalgorithms for the L1TV image denoising modelrdquo Advances inComputational Mathematics vol 38 no 2 pp 401ndash426 2013

[9] J Yang Y Zhang and W Yin ldquoAn efficient TVL1 algorithm fordeblurring multichannel images corrupted by impulsive noiserdquoSIAM Journal on Scientific Computing vol 31 no 4 pp 2842ndash2865 2009

[10] L Ma L Moisan J Yu and T Zeng ldquoA dictionary learningapproach for poisson image deblurringrdquo IEEE Transactions onMedical Imaging vol 32 no 7 pp 1277ndash1289 2013

[11] Y Xiao and T Zeng ldquoPoisson noise removal via learned dictio-naryrdquo in Proceedings of the 17th IEEE International Conferenceon Image Processing (ICIP) pp 1177ndash1180 2010

[12] H Zhang Y Dong and Q Fan ldquoWavelet frame based Poissonnoise removal and image deblurringrdquo Signal Processing vol 137pp 363ndash372 2017

[13] Y Dong and T Zeng ldquoA convex variational model for restoringblurred images with multiplicative noiserdquo SIAM Journal onImaging Sciences vol 6 no 3 pp 1598ndash1625 2013

[14] Y Huang M K Ng and YWen ldquoA new total variation methodfor multiplicative noise removalrdquo SIAM Journal on ImagingSciences vol 2 no 1 pp 20ndash40 2009

[15] Y-M Huang L Moisan M K Ng and T Zeng ldquoMultiplicativenoise removal via a learned dictionaryrdquo IEEE Transactions onImage Processing vol 21 no 11 pp 4534ndash4543 2012

[16] Z Jin and X Yang ldquoA variational model to remove the multi-plicative noise in ultrasound imagesrdquo Journal of MathematicalImaging and Vision vol 39 no 1 pp 62ndash74 2011

[17] M Kang S Yun and H Woo ldquoTwo-level convex relaxedvariational model for multiplicative denoisingrdquo SIAM Journalon Imaging Sciences vol 6 no 2 pp 875ndash903 2013

[18] J Lu L Shen C Xu and Y Xu ldquoMultiplicative noise removalin imaging An exp-model and its fixed-point proximity algo-rithmrdquo Applied and Computational Harmonic Analysis vol 41no 2 pp 518ndash539 2016

[19] J Lu L Shen C Xu and Y Xu ldquoMultiplicative noise removalwith a sparsity-aware optimization modelrdquo Inverse Problemsand Imaging vol 11 no 6 pp 949ndash974 2017

[20] J Lu Z Yang L Shen Z Lu H Yang and C Xu ldquoA frameletalgorithm for de-blurring images corrupted by multiplicativenoiserdquoApplied Mathematical Modelling vol 62 pp 51ndash61 2018

[21] J Lu Y Chen Y Zou and L Shen ldquoA new total variationmodelfor restoring blurred and speckle noisy imagesrdquo InternationalJournal of Wavelets Multiresolution and Information Processingvol 15 no 2 19 pages 2017

Mathematical Problems in Engineering 13

[22] P Perona and J Malik ldquoScale-space and edge detection usinganisotropic diffusionrdquo IEEE Transactions on Pattern Analysisand Machine Intelligence vol 12 no 7 pp 629ndash639 1990

[23] G Gerig O Kubler R Kikinis and F A Jolesz ldquoNonlinearanisotropic filtering ofMRI datardquo IEEE Transactions onMedicalImaging vol 11 no 2 pp 221ndash232 1992

[24] S Prima S P Morrissey and C Barillot ldquoNon-local meansvariants for denoising of diffusion-weighted and diffusiontensor MRIrdquo in Proceedings of the International Conference onMedical Image Computing and Computer-Assisted Interventionpp 344ndash351 Springer Berlin Germany 2007

[25] J V Manjon J Carbonell-Caballero J J Lull G Garcıa-MartiL Martı-Bonmati and M Robles ldquoMRI denoising using Non-LocalMeansrdquoMedical Image Analysis vol 12 no 4 pp 514ndash5232008

[26] N Wiest-Daessle S Prima P Coupe S P Morrissey and CBarillot ldquoRician noise removal by non-local means filteringfor low signal-to-noise ratio MRI Applications to DT-MRIrdquoMedical Image Computing and Computer-Assisted Interventionvol 5242 no 2 pp 171ndash179 2008

[27] R D Nowak ldquoWavelet-based Rician noise removal for mag-netic resonance imagingrdquo IEEE Transactions on Image Process-ing vol 8 no 10 pp 1408ndash1419 1999

[28] A Foi ldquoNoise estimation and removal in MR imaging Thevariance-stabilization approachrdquo in Proceedings of the 8th IEEEInternational Symposium on Biomedical Imaging From Nano toMacro (ISBI) pp 1809ndash1814 IEEE Chicago IL USA 2011

[29] J C Wood and K M Johnson ldquoWavelet packet denoising ofmagnetic resonance images Importance of Rician noise at lowSNRrdquo Magnetic Resonance in Medicine vol 41 no 3 pp 631ndash635 1999

[30] F Bowman Introduction to Bessel functions Dover PublicationsMineola NY USA 2012

[31] P Getreuer M Tong and L A Vese ldquoA variational model forthe restoration of MR images corrupted by blur and Riciannoiserdquo in Proceedings of the 7th international conference onAdvances in visual computing vol Part I of Lecture Notes inComputer Science pp 686ndash698 Springer Las Vegas NV USA2011

[32] L Chen and T Zeng ldquoA convex variational model for restoringblurred imageswith large Rician noiserdquo Journal ofMathematicalImaging and Vision vol 53 no 1 pp 92ndash111 2015

[33] M Aharon M Elad and A M Bruckstein ldquoK-SVD Analgorithm for designing overcomplete dictionaries for sparserepresentationrdquo IEEE Transactions on Signal Processing vol 54no 11 pp 4311ndash4322 2006

[34] M Elad and M Aharon ldquoImage denoising via sparse andredundant representations over learned dictionariesrdquo IEEETransactions on Image Processing vol 15 no 12 pp 3736ndash37452006

[35] G Aubert and J Aujol ldquoA variational approach to removingmultiplicative noiserdquo SIAM Journal on Applied Mathematicsvol 68 no 4 pp 925ndash946 2008

[36] Q Liu S Wang K Yang J Luo Y Zhu and D Liang ldquoHighlyundersampled magnetic resonance image reconstruction usingtwo-level Bregman method with dictionary updatingrdquo IEEETransactions on Medical Imaging vol 32 no 7 pp 1290ndash13012013

[37] Q Liu S Wang L Ying X Peng Y Zhu and D LiangldquoAdaptive dictionary learning in sparse gradient domain forimage recoveryrdquo IEEE Transactions on Image Processing vol 22no 12 pp 4652ndash4663 2013

[38] H Lu J Wei Q Liu Y Wang and X Deng ldquoA dictionarylearning method with total generalized variation for mri recon-structionrdquo International Journal of Biomedical Imaging vol2016 Article ID 7512471 13 pages 2016

[39] A Chambolle and T Pock ldquoA first-order primal-dual algorithmfor convex problems with applications to imagingrdquo Journal ofMathematical Imaging and Vision vol 40 no 1 pp 120ndash1452011

[40] E Esser X Zhang and T F Chan ldquoA general frameworkfor a class of first order primal-dual algorithms for convexoptimization in imaging sciencerdquo SIAM Journal on ImagingSciences vol 3 no 4 pp 1015ndash1046 2010

[41] N Komodakis and J-C Pesquet ldquoPlaying with duality Anoverview of recent primaldual approaches for solving large-scale optimization problemsrdquo IEEE Signal Processing Magazinevol 32 no 6 pp 31ndash54 2015

[42] A Chambolle ldquoAn algorithm for total variation minimizationand applicationsrdquo Journal of Mathematical Imaging and Visionvol 20 no 1-2 pp 89ndash97 2004

[43] L Ambrosio N Fusco and D Pallara Functions of BoundedVariation and Free Discontinuity Problem Oxford UniversityPress London 2000

[44] Z Wang A C Bovik H R Sheikh and E P SimoncellildquoImage quality assessment From error visibility to structuralsimilarityrdquo IEEE Transactions on Image Processing vol 13 no4 pp 600ndash612 2004

[45] J Gibson and A Bovik Handbook of Image and Video Process-ing Academic Press 2000

Hindawiwwwhindawicom Volume 2018

MathematicsJournal of

Hindawiwwwhindawicom Volume 2018

Mathematical Problems in Engineering

Applied MathematicsJournal of

Hindawiwwwhindawicom Volume 2018

Probability and StatisticsHindawiwwwhindawicom Volume 2018

Journal of

Hindawiwwwhindawicom Volume 2018

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawiwwwhindawicom Volume 2018

OptimizationJournal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Engineering Mathematics

International Journal of

Hindawiwwwhindawicom Volume 2018

Operations ResearchAdvances in

Journal of

Hindawiwwwhindawicom Volume 2018

Function SpacesAbstract and Applied AnalysisHindawiwwwhindawicom Volume 2018

International Journal of Mathematics and Mathematical Sciences

Hindawiwwwhindawicom Volume 2018

Hindawi Publishing Corporation httpwwwhindawicom Volume 2013Hindawiwwwhindawicom

The Scientific World Journal

Volume 2018

Hindawiwwwhindawicom Volume 2018Volume 2018

Numerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisAdvances inAdvances in Discrete Dynamics in

Nature and SocietyHindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom

Dierential EquationsInternational Journal of

Volume 2018

Hindawiwwwhindawicom Volume 2018

Decision SciencesAdvances in

Hindawiwwwhindawicom Volume 2018

AnalysisInternational Journal of

Hindawiwwwhindawicom Volume 2018

Stochastic AnalysisInternational Journal of

Submit your manuscripts atwwwhindawicom

Page 9: ResearchArticle Rician Noise Removal via a Learned Dictionary

Mathematical Problems in Engineering 9

(a) (b) (c) (d) (e) (f)

Figure 5 Results of ldquoMonarchrdquo by different methods (a) Degraded images (row 1 with 120590 = 10 row 2 with 120590 = 15 row 3 with 120590 = 20) (b)MAP method (row 1120582 = 15 row 2 120582 = 20 row 3 120582 = 30) (c) GTV method (row 1120582 = 15 row 2 120582 = 20 row 3 120582 = 30) (d) CZ method(row 1 120582 = 0065 row 2 120582 = 005 row 3 120582 = 004) (e) NLM method (row 1-row 3 are denoised image of 120590 = 10 15 20 row 4 patch with120590 = 20) (f) ours (row 1 120582 = 001 row 2 120582 = 002 row 3 120582 = 002)

(a) (b) (c) (d) (e) (f)

(g) (h) (i) (j) (k) (l)

Figure 6 Results of ldquoBrainrdquo with 120590 = 10 by different methods (a) is the original ldquoBrainrdquo image and (g) is the degraded image with 120590 = 10(b)-(f) are the denoised images ofMAPmethod (120582 = 17) GTVmethod (120582 = 15) CZmethod (120582 = 0065) NLMmethod and ours (120582 = 001)and (h)-(l) are residuals of those methods respectively

results of MAP GTV and CZmethod For example in Row 4of Figure 3 we can see from the background that ldquoBarbarardquoimage restored by our method is more clearer than thoserestored by other methods and textures of Barbararsquos trousersand scarf are kept better in the restored image by ourmethodThe restored images by our method also preserve moresignificant details thanMAPGTV CZ andNLMmethods onthe hat tidbits of the ldquoLenardquo image What is more the flowersof the lower left of ldquoMonarchrdquo also indicate that our method

is superior to other methods because our method is a patch-based method which is a different framework with TV-basedmethods (ie MAP method and CZ method) The imagesobtained from our method provide smoother regions andbetter shape preservation (eg the backgrounds in ldquoLenardquoand ldquoBarbarardquo)

Figures 6ndash11 are results of experiments on ldquoBrainrdquo andldquoMouserdquo images In particular for the ldquoBrainrdquo and ldquoMouserdquoimages we not only present the recovered images but also

10 Mathematical Problems in Engineering

(a) (b) (c) (d) (e) (f)

(g) (h) (i) (j) (k) (l)

Figure 7 Results of Brain with 120590 = 15 by different methods (a) is the original ldquoBrainrdquo image and (g) is the degraded image with 120590 = 15(b)-(f) are the denoised images of MAPmethod (120582 = 25) GTVmethod (120582 = 22) CZmethod (120582 = 005) NLMmethod and ours (120582 = 001)and (h)-(l) are residuals of those methods respectively

(a) (b) (c) (d) (e) (f)

(g) (h) (i) (j) (k) (l)

Figure 8 Results of Brain with 120590 = 20 by different methods (a) is the original ldquoBrainrdquo image and (g) is the degraded image with 120590 = 20(b)-(f) are the denoised images ofMAPmethod (120582 = 35) GTVmethod (120582 = 22) CZmethod (120582 = 0045) NLMmethod and ours (120582 = 001)and (h)-(l) are residuals of those methods respectively

use residuals to make a comparison with the original imagesThe first row is original image and recovered images ofMAP method GTV method CZ method NLM methodand our method The second row is degraded image withvarious 120590 and residuals of those methods respectively InFigures 6ndash8 the recovered images of of MAP method GTVmethod CZ method and NLM method are still unclear sowe can conclude that the recovered images by ourmethod arebest through our visions and residuals As for the ldquoMouserdquoimage the recovered results are very similar in this casewe use residuals to see which result is better In general theblurrier residual is the better according recovered image isWe can find that the outline of residual by our method isblurriest

In conclusion the results of our numerical experimentsdemonstrate that our proposed method performs better thanthe MAP method GTV method CZ method and NLMmethod

4 Conclusion

In this paper we proposed a new effective model via alearned dictionary for denoising images with Rician noiseMore specifically based on the dictionary training we adda dictionary penalty term to the original nonconvex MAPmodel to establish a new denoising model and develop atwo-step algorithm to solve our model Also we carry out

Mathematical Problems in Engineering 11

(a) (b) (c) (d) (e) (f)

(g) (h) (i) (j) (k) (l)

Figure 9 Results of ldquoMouserdquo with 120590 = 10 by different methods (a) is the original ldquoMouserdquo image and (g) is the degraded image with 120590 = 10(b)-(f) are the denoised images of MAPmethod (120582 = 17) GTVmethod (120582 = 17) CZmethod (120582 = 006) NLMmethod and ours (120582 = 001)and (h)-(l) are residuals of those methods respectively

(a) (b) (c) (d) (e) (f)

(g) (h) (i) (j) (k) (l)

Figure 10 Results of ldquoMouserdquo with 120590 = 15 by different methods (a) is the original ldquoMouserdquo image and (g) is the degraded image with 120590 = 15(b)-(f) are the denoised images of MAPmethod (120582 = 24) GTV method (120582 = 24) CZ method (120582 = 005) NLM method and ours (120582 = 001)and (h)-(l) are residuals of those methods respectively

experiments on various images to demonstrate the effective-ness of our modelThe numerical experiments show that ourproposedmodel is promising in denoising imageswithRiciannoise

Data Availability

The data used to support the findings of this study areavailable from the corresponding author upon request

Conflicts of Interest

The authors declare that there are no conflicts of interest

Acknowledgments

The authors would like to thank one of the authors of[32] for providing the source code of CZ method Thiswork was supported in part by the National Natural ScienceFoundation of China under Grants 11871348 61872429 and61373087 by the Natural Science Foundation of GuangdongChina under Grants 2015A030313550 and 2015A030313557by the HD Video RampD Platform for Intelligent Analy-sis and Processing in Guangdong Engineering TechnologyResearch Centre of Colleges and Universities (no GCZX-A1409) by Natural Science Foundation of Shenzhen underGrant JCYJ20170818091621856 and by the Guangdong Key

12 Mathematical Problems in Engineering

(a) (b) (c) (d) (e) (f)

(g) (h) (i) (j) (k) (l)

Figure 11 Results of ldquoMouserdquo with 120590 = 20 by different methods (a) is the original ldquoMouserdquo image and (g) is the degraded image with 120590 = 20(b)-(f) are the denoised images of MAPmethod (120582 = 31) GTV method (120582 = 32) CZ method (120582 = 0035) NLMmethod and ours (120582 = 001)and (h)-(l) are residuals of those methods respectively

Laboratory of Intelligent Information Processing ShenzhenUniversity China (518060)

References

[1] J-F Cai B Dong and Z Shen ldquoImage restoration A waveletframe based model for piecewise smooth functions andbeyondrdquoApplied andComputational Harmonic Analysis vol 41no 1 pp 94ndash138 2016

[2] R H Chan and J Ma ldquoA multiplicative iterative algorithm forbox-constrained penalized likelihood image restorationrdquo IEEETransactions on Image Processing vol 21 no 7 pp 3168ndash31812012

[3] C A Micchelli L Shen and Y Xu ldquoProximity algorithmsfor image models Denoisingrdquo Inverse Problems vol 27 no 4Article ID 045009 p 30 2011

[4] YWang J Yang W Yin and Y Zhang ldquoA new alternatingmin-imization algorithm for total variation image reconstructionrdquoSIAM Journal on Imaging Sciences vol 1 no 3 pp 248ndash2722008

[5] Y Dong M Hintermuller and M Neri ldquoAn efficient primal-dual method for l1-TV image restorationrdquo SIAM Journal onImaging Sciences vol 2 no 4 pp 1168ndash1189 2009

[6] J Lu K Qiao L Shen and Y Zou ldquoFixed-point algorithmsfor a TVL1 image restoration modelrdquo International Journal ofComputer Mathematics vol 95 no 9 pp 1829ndash1844 2018

[7] J Lu Z Ye and Y Zou ldquoHuber fractal image coding based ona fitting planerdquo IEEE Transactions on Image Processing vol 22no 1 pp 134ndash145 2013

[8] C A Micchelli L Shen Y Xu and X Zeng ldquoProximityalgorithms for the L1TV image denoising modelrdquo Advances inComputational Mathematics vol 38 no 2 pp 401ndash426 2013

[9] J Yang Y Zhang and W Yin ldquoAn efficient TVL1 algorithm fordeblurring multichannel images corrupted by impulsive noiserdquoSIAM Journal on Scientific Computing vol 31 no 4 pp 2842ndash2865 2009

[10] L Ma L Moisan J Yu and T Zeng ldquoA dictionary learningapproach for poisson image deblurringrdquo IEEE Transactions onMedical Imaging vol 32 no 7 pp 1277ndash1289 2013

[11] Y Xiao and T Zeng ldquoPoisson noise removal via learned dictio-naryrdquo in Proceedings of the 17th IEEE International Conferenceon Image Processing (ICIP) pp 1177ndash1180 2010

[12] H Zhang Y Dong and Q Fan ldquoWavelet frame based Poissonnoise removal and image deblurringrdquo Signal Processing vol 137pp 363ndash372 2017

[13] Y Dong and T Zeng ldquoA convex variational model for restoringblurred images with multiplicative noiserdquo SIAM Journal onImaging Sciences vol 6 no 3 pp 1598ndash1625 2013

[14] Y Huang M K Ng and YWen ldquoA new total variation methodfor multiplicative noise removalrdquo SIAM Journal on ImagingSciences vol 2 no 1 pp 20ndash40 2009

[15] Y-M Huang L Moisan M K Ng and T Zeng ldquoMultiplicativenoise removal via a learned dictionaryrdquo IEEE Transactions onImage Processing vol 21 no 11 pp 4534ndash4543 2012

[16] Z Jin and X Yang ldquoA variational model to remove the multi-plicative noise in ultrasound imagesrdquo Journal of MathematicalImaging and Vision vol 39 no 1 pp 62ndash74 2011

[17] M Kang S Yun and H Woo ldquoTwo-level convex relaxedvariational model for multiplicative denoisingrdquo SIAM Journalon Imaging Sciences vol 6 no 2 pp 875ndash903 2013

[18] J Lu L Shen C Xu and Y Xu ldquoMultiplicative noise removalin imaging An exp-model and its fixed-point proximity algo-rithmrdquo Applied and Computational Harmonic Analysis vol 41no 2 pp 518ndash539 2016

[19] J Lu L Shen C Xu and Y Xu ldquoMultiplicative noise removalwith a sparsity-aware optimization modelrdquo Inverse Problemsand Imaging vol 11 no 6 pp 949ndash974 2017

[20] J Lu Z Yang L Shen Z Lu H Yang and C Xu ldquoA frameletalgorithm for de-blurring images corrupted by multiplicativenoiserdquoApplied Mathematical Modelling vol 62 pp 51ndash61 2018

[21] J Lu Y Chen Y Zou and L Shen ldquoA new total variationmodelfor restoring blurred and speckle noisy imagesrdquo InternationalJournal of Wavelets Multiresolution and Information Processingvol 15 no 2 19 pages 2017

Mathematical Problems in Engineering 13

[22] P Perona and J Malik ldquoScale-space and edge detection usinganisotropic diffusionrdquo IEEE Transactions on Pattern Analysisand Machine Intelligence vol 12 no 7 pp 629ndash639 1990

[23] G Gerig O Kubler R Kikinis and F A Jolesz ldquoNonlinearanisotropic filtering ofMRI datardquo IEEE Transactions onMedicalImaging vol 11 no 2 pp 221ndash232 1992

[24] S Prima S P Morrissey and C Barillot ldquoNon-local meansvariants for denoising of diffusion-weighted and diffusiontensor MRIrdquo in Proceedings of the International Conference onMedical Image Computing and Computer-Assisted Interventionpp 344ndash351 Springer Berlin Germany 2007

[25] J V Manjon J Carbonell-Caballero J J Lull G Garcıa-MartiL Martı-Bonmati and M Robles ldquoMRI denoising using Non-LocalMeansrdquoMedical Image Analysis vol 12 no 4 pp 514ndash5232008

[26] N Wiest-Daessle S Prima P Coupe S P Morrissey and CBarillot ldquoRician noise removal by non-local means filteringfor low signal-to-noise ratio MRI Applications to DT-MRIrdquoMedical Image Computing and Computer-Assisted Interventionvol 5242 no 2 pp 171ndash179 2008

[27] R D Nowak ldquoWavelet-based Rician noise removal for mag-netic resonance imagingrdquo IEEE Transactions on Image Process-ing vol 8 no 10 pp 1408ndash1419 1999

[28] A Foi ldquoNoise estimation and removal in MR imaging Thevariance-stabilization approachrdquo in Proceedings of the 8th IEEEInternational Symposium on Biomedical Imaging From Nano toMacro (ISBI) pp 1809ndash1814 IEEE Chicago IL USA 2011

[29] J C Wood and K M Johnson ldquoWavelet packet denoising ofmagnetic resonance images Importance of Rician noise at lowSNRrdquo Magnetic Resonance in Medicine vol 41 no 3 pp 631ndash635 1999

[30] F Bowman Introduction to Bessel functions Dover PublicationsMineola NY USA 2012

[31] P Getreuer M Tong and L A Vese ldquoA variational model forthe restoration of MR images corrupted by blur and Riciannoiserdquo in Proceedings of the 7th international conference onAdvances in visual computing vol Part I of Lecture Notes inComputer Science pp 686ndash698 Springer Las Vegas NV USA2011

[32] L Chen and T Zeng ldquoA convex variational model for restoringblurred imageswith large Rician noiserdquo Journal ofMathematicalImaging and Vision vol 53 no 1 pp 92ndash111 2015

[33] M Aharon M Elad and A M Bruckstein ldquoK-SVD Analgorithm for designing overcomplete dictionaries for sparserepresentationrdquo IEEE Transactions on Signal Processing vol 54no 11 pp 4311ndash4322 2006

[34] M Elad and M Aharon ldquoImage denoising via sparse andredundant representations over learned dictionariesrdquo IEEETransactions on Image Processing vol 15 no 12 pp 3736ndash37452006

[35] G Aubert and J Aujol ldquoA variational approach to removingmultiplicative noiserdquo SIAM Journal on Applied Mathematicsvol 68 no 4 pp 925ndash946 2008

[36] Q Liu S Wang K Yang J Luo Y Zhu and D Liang ldquoHighlyundersampled magnetic resonance image reconstruction usingtwo-level Bregman method with dictionary updatingrdquo IEEETransactions on Medical Imaging vol 32 no 7 pp 1290ndash13012013

[37] Q Liu S Wang L Ying X Peng Y Zhu and D LiangldquoAdaptive dictionary learning in sparse gradient domain forimage recoveryrdquo IEEE Transactions on Image Processing vol 22no 12 pp 4652ndash4663 2013

[38] H Lu J Wei Q Liu Y Wang and X Deng ldquoA dictionarylearning method with total generalized variation for mri recon-structionrdquo International Journal of Biomedical Imaging vol2016 Article ID 7512471 13 pages 2016

[39] A Chambolle and T Pock ldquoA first-order primal-dual algorithmfor convex problems with applications to imagingrdquo Journal ofMathematical Imaging and Vision vol 40 no 1 pp 120ndash1452011

[40] E Esser X Zhang and T F Chan ldquoA general frameworkfor a class of first order primal-dual algorithms for convexoptimization in imaging sciencerdquo SIAM Journal on ImagingSciences vol 3 no 4 pp 1015ndash1046 2010

[41] N Komodakis and J-C Pesquet ldquoPlaying with duality Anoverview of recent primaldual approaches for solving large-scale optimization problemsrdquo IEEE Signal Processing Magazinevol 32 no 6 pp 31ndash54 2015

[42] A Chambolle ldquoAn algorithm for total variation minimizationand applicationsrdquo Journal of Mathematical Imaging and Visionvol 20 no 1-2 pp 89ndash97 2004

[43] L Ambrosio N Fusco and D Pallara Functions of BoundedVariation and Free Discontinuity Problem Oxford UniversityPress London 2000

[44] Z Wang A C Bovik H R Sheikh and E P SimoncellildquoImage quality assessment From error visibility to structuralsimilarityrdquo IEEE Transactions on Image Processing vol 13 no4 pp 600ndash612 2004

[45] J Gibson and A Bovik Handbook of Image and Video Process-ing Academic Press 2000

Hindawiwwwhindawicom Volume 2018

MathematicsJournal of

Hindawiwwwhindawicom Volume 2018

Mathematical Problems in Engineering

Applied MathematicsJournal of

Hindawiwwwhindawicom Volume 2018

Probability and StatisticsHindawiwwwhindawicom Volume 2018

Journal of

Hindawiwwwhindawicom Volume 2018

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawiwwwhindawicom Volume 2018

OptimizationJournal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Engineering Mathematics

International Journal of

Hindawiwwwhindawicom Volume 2018

Operations ResearchAdvances in

Journal of

Hindawiwwwhindawicom Volume 2018

Function SpacesAbstract and Applied AnalysisHindawiwwwhindawicom Volume 2018

International Journal of Mathematics and Mathematical Sciences

Hindawiwwwhindawicom Volume 2018

Hindawi Publishing Corporation httpwwwhindawicom Volume 2013Hindawiwwwhindawicom

The Scientific World Journal

Volume 2018

Hindawiwwwhindawicom Volume 2018Volume 2018

Numerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisAdvances inAdvances in Discrete Dynamics in

Nature and SocietyHindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom

Dierential EquationsInternational Journal of

Volume 2018

Hindawiwwwhindawicom Volume 2018

Decision SciencesAdvances in

Hindawiwwwhindawicom Volume 2018

AnalysisInternational Journal of

Hindawiwwwhindawicom Volume 2018

Stochastic AnalysisInternational Journal of

Submit your manuscripts atwwwhindawicom

Page 10: ResearchArticle Rician Noise Removal via a Learned Dictionary

10 Mathematical Problems in Engineering

(a) (b) (c) (d) (e) (f)

(g) (h) (i) (j) (k) (l)

Figure 7 Results of Brain with 120590 = 15 by different methods (a) is the original ldquoBrainrdquo image and (g) is the degraded image with 120590 = 15(b)-(f) are the denoised images of MAPmethod (120582 = 25) GTVmethod (120582 = 22) CZmethod (120582 = 005) NLMmethod and ours (120582 = 001)and (h)-(l) are residuals of those methods respectively

(a) (b) (c) (d) (e) (f)

(g) (h) (i) (j) (k) (l)

Figure 8 Results of Brain with 120590 = 20 by different methods (a) is the original ldquoBrainrdquo image and (g) is the degraded image with 120590 = 20(b)-(f) are the denoised images ofMAPmethod (120582 = 35) GTVmethod (120582 = 22) CZmethod (120582 = 0045) NLMmethod and ours (120582 = 001)and (h)-(l) are residuals of those methods respectively

use residuals to make a comparison with the original imagesThe first row is original image and recovered images ofMAP method GTV method CZ method NLM methodand our method The second row is degraded image withvarious 120590 and residuals of those methods respectively InFigures 6ndash8 the recovered images of of MAP method GTVmethod CZ method and NLM method are still unclear sowe can conclude that the recovered images by ourmethod arebest through our visions and residuals As for the ldquoMouserdquoimage the recovered results are very similar in this casewe use residuals to see which result is better In general theblurrier residual is the better according recovered image isWe can find that the outline of residual by our method isblurriest

In conclusion the results of our numerical experimentsdemonstrate that our proposed method performs better thanthe MAP method GTV method CZ method and NLMmethod

4 Conclusion

In this paper we proposed a new effective model via alearned dictionary for denoising images with Rician noiseMore specifically based on the dictionary training we adda dictionary penalty term to the original nonconvex MAPmodel to establish a new denoising model and develop atwo-step algorithm to solve our model Also we carry out

Mathematical Problems in Engineering 11

(a) (b) (c) (d) (e) (f)

(g) (h) (i) (j) (k) (l)

Figure 9 Results of ldquoMouserdquo with 120590 = 10 by different methods (a) is the original ldquoMouserdquo image and (g) is the degraded image with 120590 = 10(b)-(f) are the denoised images of MAPmethod (120582 = 17) GTVmethod (120582 = 17) CZmethod (120582 = 006) NLMmethod and ours (120582 = 001)and (h)-(l) are residuals of those methods respectively

(a) (b) (c) (d) (e) (f)

(g) (h) (i) (j) (k) (l)

Figure 10 Results of ldquoMouserdquo with 120590 = 15 by different methods (a) is the original ldquoMouserdquo image and (g) is the degraded image with 120590 = 15(b)-(f) are the denoised images of MAPmethod (120582 = 24) GTV method (120582 = 24) CZ method (120582 = 005) NLM method and ours (120582 = 001)and (h)-(l) are residuals of those methods respectively

experiments on various images to demonstrate the effective-ness of our modelThe numerical experiments show that ourproposedmodel is promising in denoising imageswithRiciannoise

Data Availability

The data used to support the findings of this study areavailable from the corresponding author upon request

Conflicts of Interest

The authors declare that there are no conflicts of interest

Acknowledgments

The authors would like to thank one of the authors of[32] for providing the source code of CZ method Thiswork was supported in part by the National Natural ScienceFoundation of China under Grants 11871348 61872429 and61373087 by the Natural Science Foundation of GuangdongChina under Grants 2015A030313550 and 2015A030313557by the HD Video RampD Platform for Intelligent Analy-sis and Processing in Guangdong Engineering TechnologyResearch Centre of Colleges and Universities (no GCZX-A1409) by Natural Science Foundation of Shenzhen underGrant JCYJ20170818091621856 and by the Guangdong Key

12 Mathematical Problems in Engineering

(a) (b) (c) (d) (e) (f)

(g) (h) (i) (j) (k) (l)

Figure 11 Results of ldquoMouserdquo with 120590 = 20 by different methods (a) is the original ldquoMouserdquo image and (g) is the degraded image with 120590 = 20(b)-(f) are the denoised images of MAPmethod (120582 = 31) GTV method (120582 = 32) CZ method (120582 = 0035) NLMmethod and ours (120582 = 001)and (h)-(l) are residuals of those methods respectively

Laboratory of Intelligent Information Processing ShenzhenUniversity China (518060)

References

[1] J-F Cai B Dong and Z Shen ldquoImage restoration A waveletframe based model for piecewise smooth functions andbeyondrdquoApplied andComputational Harmonic Analysis vol 41no 1 pp 94ndash138 2016

[2] R H Chan and J Ma ldquoA multiplicative iterative algorithm forbox-constrained penalized likelihood image restorationrdquo IEEETransactions on Image Processing vol 21 no 7 pp 3168ndash31812012

[3] C A Micchelli L Shen and Y Xu ldquoProximity algorithmsfor image models Denoisingrdquo Inverse Problems vol 27 no 4Article ID 045009 p 30 2011

[4] YWang J Yang W Yin and Y Zhang ldquoA new alternatingmin-imization algorithm for total variation image reconstructionrdquoSIAM Journal on Imaging Sciences vol 1 no 3 pp 248ndash2722008

[5] Y Dong M Hintermuller and M Neri ldquoAn efficient primal-dual method for l1-TV image restorationrdquo SIAM Journal onImaging Sciences vol 2 no 4 pp 1168ndash1189 2009

[6] J Lu K Qiao L Shen and Y Zou ldquoFixed-point algorithmsfor a TVL1 image restoration modelrdquo International Journal ofComputer Mathematics vol 95 no 9 pp 1829ndash1844 2018

[7] J Lu Z Ye and Y Zou ldquoHuber fractal image coding based ona fitting planerdquo IEEE Transactions on Image Processing vol 22no 1 pp 134ndash145 2013

[8] C A Micchelli L Shen Y Xu and X Zeng ldquoProximityalgorithms for the L1TV image denoising modelrdquo Advances inComputational Mathematics vol 38 no 2 pp 401ndash426 2013

[9] J Yang Y Zhang and W Yin ldquoAn efficient TVL1 algorithm fordeblurring multichannel images corrupted by impulsive noiserdquoSIAM Journal on Scientific Computing vol 31 no 4 pp 2842ndash2865 2009

[10] L Ma L Moisan J Yu and T Zeng ldquoA dictionary learningapproach for poisson image deblurringrdquo IEEE Transactions onMedical Imaging vol 32 no 7 pp 1277ndash1289 2013

[11] Y Xiao and T Zeng ldquoPoisson noise removal via learned dictio-naryrdquo in Proceedings of the 17th IEEE International Conferenceon Image Processing (ICIP) pp 1177ndash1180 2010

[12] H Zhang Y Dong and Q Fan ldquoWavelet frame based Poissonnoise removal and image deblurringrdquo Signal Processing vol 137pp 363ndash372 2017

[13] Y Dong and T Zeng ldquoA convex variational model for restoringblurred images with multiplicative noiserdquo SIAM Journal onImaging Sciences vol 6 no 3 pp 1598ndash1625 2013

[14] Y Huang M K Ng and YWen ldquoA new total variation methodfor multiplicative noise removalrdquo SIAM Journal on ImagingSciences vol 2 no 1 pp 20ndash40 2009

[15] Y-M Huang L Moisan M K Ng and T Zeng ldquoMultiplicativenoise removal via a learned dictionaryrdquo IEEE Transactions onImage Processing vol 21 no 11 pp 4534ndash4543 2012

[16] Z Jin and X Yang ldquoA variational model to remove the multi-plicative noise in ultrasound imagesrdquo Journal of MathematicalImaging and Vision vol 39 no 1 pp 62ndash74 2011

[17] M Kang S Yun and H Woo ldquoTwo-level convex relaxedvariational model for multiplicative denoisingrdquo SIAM Journalon Imaging Sciences vol 6 no 2 pp 875ndash903 2013

[18] J Lu L Shen C Xu and Y Xu ldquoMultiplicative noise removalin imaging An exp-model and its fixed-point proximity algo-rithmrdquo Applied and Computational Harmonic Analysis vol 41no 2 pp 518ndash539 2016

[19] J Lu L Shen C Xu and Y Xu ldquoMultiplicative noise removalwith a sparsity-aware optimization modelrdquo Inverse Problemsand Imaging vol 11 no 6 pp 949ndash974 2017

[20] J Lu Z Yang L Shen Z Lu H Yang and C Xu ldquoA frameletalgorithm for de-blurring images corrupted by multiplicativenoiserdquoApplied Mathematical Modelling vol 62 pp 51ndash61 2018

[21] J Lu Y Chen Y Zou and L Shen ldquoA new total variationmodelfor restoring blurred and speckle noisy imagesrdquo InternationalJournal of Wavelets Multiresolution and Information Processingvol 15 no 2 19 pages 2017

Mathematical Problems in Engineering 13

[22] P. Perona and J. Malik, "Scale-space and edge detection using anisotropic diffusion," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, no. 7, pp. 629–639, 1990.

[23] G. Gerig, O. Kubler, R. Kikinis, and F. A. Jolesz, "Nonlinear anisotropic filtering of MRI data," IEEE Transactions on Medical Imaging, vol. 11, no. 2, pp. 221–232, 1992.

[24] S. Prima, S. P. Morrissey, and C. Barillot, "Non-local means variants for denoising of diffusion-weighted and diffusion tensor MRI," in Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 344–351, Springer, Berlin, Germany, 2007.

[25] J. V. Manjón, J. Carbonell-Caballero, J. J. Lull, G. García-Martí, L. Martí-Bonmatí, and M. Robles, "MRI denoising using Non-Local Means," Medical Image Analysis, vol. 12, no. 4, pp. 514–523, 2008.

[26] N. Wiest-Daessle, S. Prima, P. Coupé, S. P. Morrissey, and C. Barillot, "Rician noise removal by non-local means filtering for low signal-to-noise ratio MRI: applications to DT-MRI," Medical Image Computing and Computer-Assisted Intervention, vol. 5242, no. 2, pp. 171–179, 2008.

[27] R. D. Nowak, "Wavelet-based Rician noise removal for magnetic resonance imaging," IEEE Transactions on Image Processing, vol. 8, no. 10, pp. 1408–1419, 1999.

[28] A. Foi, "Noise estimation and removal in MR imaging: the variance-stabilization approach," in Proceedings of the 8th IEEE International Symposium on Biomedical Imaging: From Nano to Macro (ISBI), pp. 1809–1814, IEEE, Chicago, IL, USA, 2011.

[29] J. C. Wood and K. M. Johnson, "Wavelet packet denoising of magnetic resonance images: importance of Rician noise at low SNR," Magnetic Resonance in Medicine, vol. 41, no. 3, pp. 631–635, 1999.

[30] F. Bowman, Introduction to Bessel Functions, Dover Publications, Mineola, NY, USA, 2012.

[31] P. Getreuer, M. Tong, and L. A. Vese, "A variational model for the restoration of MR images corrupted by blur and Rician noise," in Proceedings of the 7th International Conference on Advances in Visual Computing, Part I, Lecture Notes in Computer Science, pp. 686–698, Springer, Las Vegas, NV, USA, 2011.

[32] L. Chen and T. Zeng, "A convex variational model for restoring blurred images with large Rician noise," Journal of Mathematical Imaging and Vision, vol. 53, no. 1, pp. 92–111, 2015.

[33] M. Aharon, M. Elad, and A. M. Bruckstein, "K-SVD: an algorithm for designing overcomplete dictionaries for sparse representation," IEEE Transactions on Signal Processing, vol. 54, no. 11, pp. 4311–4322, 2006.

[34] M. Elad and M. Aharon, "Image denoising via sparse and redundant representations over learned dictionaries," IEEE Transactions on Image Processing, vol. 15, no. 12, pp. 3736–3745, 2006.

[35] G. Aubert and J. Aujol, "A variational approach to removing multiplicative noise," SIAM Journal on Applied Mathematics, vol. 68, no. 4, pp. 925–946, 2008.

[36] Q. Liu, S. Wang, K. Yang, J. Luo, Y. Zhu, and D. Liang, "Highly undersampled magnetic resonance image reconstruction using two-level Bregman method with dictionary updating," IEEE Transactions on Medical Imaging, vol. 32, no. 7, pp. 1290–1301, 2013.

[37] Q. Liu, S. Wang, L. Ying, X. Peng, Y. Zhu, and D. Liang, "Adaptive dictionary learning in sparse gradient domain for image recovery," IEEE Transactions on Image Processing, vol. 22, no. 12, pp. 4652–4663, 2013.

[38] H. Lu, J. Wei, Q. Liu, Y. Wang, and X. Deng, "A dictionary learning method with total generalized variation for MRI reconstruction," International Journal of Biomedical Imaging, vol. 2016, Article ID 7512471, 13 pages, 2016.

[39] A. Chambolle and T. Pock, "A first-order primal-dual algorithm for convex problems with applications to imaging," Journal of Mathematical Imaging and Vision, vol. 40, no. 1, pp. 120–145, 2011.

[40] E. Esser, X. Zhang, and T. F. Chan, "A general framework for a class of first order primal-dual algorithms for convex optimization in imaging science," SIAM Journal on Imaging Sciences, vol. 3, no. 4, pp. 1015–1046, 2010.

[41] N. Komodakis and J.-C. Pesquet, "Playing with duality: an overview of recent primal-dual approaches for solving large-scale optimization problems," IEEE Signal Processing Magazine, vol. 32, no. 6, pp. 31–54, 2015.

[42] A. Chambolle, "An algorithm for total variation minimization and applications," Journal of Mathematical Imaging and Vision, vol. 20, no. 1-2, pp. 89–97, 2004.

[43] L. Ambrosio, N. Fusco, and D. Pallara, Functions of Bounded Variation and Free Discontinuity Problems, Oxford University Press, London, 2000.

[44] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: from error visibility to structural similarity," IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600–612, 2004.

[45] J. Gibson and A. Bovik, Handbook of Image and Video Processing, Academic Press, 2000.








