

Single image super-resolution via an iterative reproducing kernel Hilbert space method

Liang-Jian Deng, Weihong Guo, and Ting-Zhu Huang

Abstract—Image super-resolution, a process to enhance image resolution, has important applications in satellite imaging, high definition television, medical imaging, etc. Many existing approaches use multiple low-resolution images to recover one high-resolution image. In this paper, we present an iterative scheme to solve single image super-resolution problems. It recovers a high quality high-resolution image from solely one low-resolution image without using a training data set. We solve the problem from an image intensity function estimation perspective and assume the image contains smooth and edge components. We model the smooth components of an image using a thin-plate reproducing kernel Hilbert space (RKHS) and the edges using approximated Heaviside functions. The proposed method is applied to image patches, aiming to reduce computation and storage. Visual and quantitative comparisons with some competitive approaches show the effectiveness of the proposed method.

Index Terms—Single image super-resolution, iterative RKHS, thin-plate spline, Heaviside function

I. INTRODUCTION

IMAGE super-resolution (SR), currently a quite active research field, is a process to estimate a high-resolution (HR) image from one or multiple low-resolution (LR) images. High resolution means more details and better visibility. Due to the limitation of hardware devices and high cost, one sometimes can only collect low-resolution images. For instance, synthetic aperture radar (SAR) and satellite imaging cannot get high-resolution images due to long distance and air turbulence. In medical MRI imaging, high-resolution images need more time and cost [48], [61]. Thus, developing a more accurate and faster image super-resolution algorithm is important and has a lot of applications.

A. Literature review

In this section we review some existing super-resolution methods, some of which will be compared with the proposed method.

Many existing image super-resolution methods need multiple low-resolution images as inputs. We refer to them as multiple image super-resolution.

The first and the third author thank the support by the 973 Program (2013CB329404), NSFC (61370147), and Sichuan Province Sci. & Tech. Research Project (2012GZX0080). The first author is also supported by the Fundamental Research Funds for the Central Universities and the Outstanding Doctoral Students Academic Support Program of UESTC. The second author thanks US NIH 1R21EB016535-01 for partial support.

L.-J. Deng and T.-Z. Huang are with the School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu, Sichuan, 611731, P. R. China. E-mail: [email protected] and [email protected], respectively.

The Corresponding Author. W. Guo is with the Department of Mathematics, Case Western Reserve University, Cleveland, OH, 44106, USA. E-mail: [email protected]

Mathematically, there are $p$ low-resolution images $y_i \in \mathbb{R}^m$ available, and each $y_i$ is related to a high-resolution image $x \in \mathbb{R}^n$ by
$$y_i = DB_i x + n_i, \quad 1 \le i \le p, \qquad (1)$$
where $D \in \mathbb{R}^{m \times n}$ is a down-sampling operator, $B_i \in \mathbb{R}^{n \times n}$ is a blurring operator that might arise, for instance, from out-of-focus acquisition, and $n_i \in \mathbb{R}^m$ represents random noise [52].
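For concreteness, the degradation model (1) can be simulated in a few lines. The following NumPy sketch uses an illustrative Gaussian blur for $B_i$ and plain decimation for $D$; these are assumptions made only for demonstration, not the operators used later in the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade(x, factor=3, blur_sigma=1.0, noise_std=2.0, seed=0):
    """Toy forward model of (1): y = D B x + n, with B a Gaussian blur,
    D decimation by `factor`, and n additive Gaussian noise (illustrative choices)."""
    rng = np.random.default_rng(seed)
    blurred = gaussian_filter(x.astype(float), blur_sigma)       # B x
    down = blurred[::factor, ::factor]                           # D (B x)
    return down + noise_std * rng.standard_normal(down.shape)    # + n_i
```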

This paper addresses single image super-resolution, i.e., $p = 1$ in equation (1). Compared to multiple image super-resolution, single image super-resolution is more applicable when there is only one low-resolution image available. Obviously, it is also more challenging.

Existing super-resolution methods, for both multiple images and single image, can be roughly put into several categories: interpolation-based, statistics-based, learning-based, and others. This classification is by no means the best but provides an organized way for a literature review. Note that ideas of methods in different categories might overlap. For instance, some learning-based methods might also involve statistics.

Interpolation is a straightforward idea for image super-resolution. There are two popular classical interpolation methods: nearest-neighbor interpolation and bicubic interpolation. Nearest-neighbor interpolation fills in the intensity at an unknown location by that of its nearest neighbor point. It often causes a jaggy effect (see Figure 1(c)). Bicubic interpolation utilizes a cubic kernel to interpolate. It tends to create a blur effect (see Figure 1(b)). Recently, some state-of-the-art interpolation methods have been proposed [26], [27], [40], [45], [65], [66], [74]. For instance, [40] presents a new edge-directed interpolation method. It estimates local covariance coefficients from a low-resolution image, and then applies the coefficients to interpolate the high-resolution image. In [74], the proposed edge-guided nonlinear interpolation is based on directional filtering and data fusion. It can preserve sharp edges and reduce ringing artifacts. In [45], Mueller et al. propose an interpolation algorithm using the contourlet transform and a wavelet-based linear interpolation scheme. The proposed method belongs to this category.

Maximum a Posteriori (MAP) and Maximum Likelihood Estimation (MLE) are popular statistics-based methods [6], [19], [20]. To preserve sharp edges, Fattal [20] utilized statistical edge dependency to relate edge features in low- and high-resolution images. Farsiu et al. [19] proposed an alternate approach using $L_1$ norm minimization and robust regularization based on a bilateral prior.

Learning-based approaches are a powerful tool for image super-resolution [10], [21], [23]–[25], [30], [37], [38], [41], [55], [56], [58], [60], [69], [76].



They normally start from two large training data sets, one formed of low-resolution images and the other formed of high-resolution images, and then learn a relation between low-resolution and high-resolution images. The relation is then applied to a given low-resolution image to get a high-resolution image. Learning-based methods usually can obtain high quality images but they are computationally expensive. The results might depend on the selection of training data. Additionally, they are not completely single image super-resolution since two large data sets are required for learning. In [56], Sun et al. utilized sketch priors to extend the low-level vision learning approach in [25] to get clear edges, ridges and corners. Sun et al. in [55] proposed a novel profile prior of the image gradient which can describe the shape and the sharpness of an image to obtain super-resolution images. Xie et al. proposed a method via an example-based strategy which divides the high-frequency patches of a low-resolution image into different classes [69]. This method can accelerate the image super-resolution procedure. Fernandez-Granda and Candes used transform-invariant group-sparse regularization [21]. This method performs well for highly structured straight edges and high upscaling factors. In recent years, sparsity methods, usually associated with learning-based ideas, have been widely discussed for image super-resolution [15], [33], [70]–[73], [75]. In [71], [72], Yang et al. utilized sparse signal representation to develop a novel method for single image super-resolution. The authors first sought a sparse representation for each patch of the low-resolution image and computed the corresponding coefficients, then generated the high-resolution image via the computed coefficients. Recently, Zeyde et al. [73] proposed a local sparse-land model on image patches based on the work of [71], [72], and obtained improved results.

In addition, many other image super-resolution methods have also been proposed, e.g., a frequency-domain technique [3], pixel classification methods [1], [2], iterative back projection methods [11], [34], [39], [57], a hybrid method [14], a kernel regression method [59] and others [5], [9], [22], [53], [62].

In summary, single image super-resolution is still a challenging problem. Existing single image super-resolution methods either need training data sets and expensive computation or lead to blur or jaggy effects. The aim of this paper is to use a simple mathematical scheme to recover a high quality high-resolution image from one low-resolution image.

B. Motivation and contributions of the proposed work

In this paper, we use RKHS and Heaviside functions to study single image super-resolution with only one low-resolution image as an input. We cast the super-resolution problem as an image intensity function estimation problem. Since images contain edges and smooth components, we model them separately. We assume that the smooth components belong to a special Hilbert space called RKHS that can be spanned by a basis. We model the edges using a set of Heaviside functions. We then use intensity information of the given low-resolution image defined on a coarser grid to estimate coefficients of the basis and redundant functions.

Fig. 1. (a) A low-resolution image; (b) the super-resolution image using the bicubic interpolation method (note the blur effect); (c) the super-resolution image by the nearest-neighbor interpolation method (note the jaggy effect on the edges). The upscaling factor is 4.

We then utilize the coefficients to generate high-resolution images at any finer grids. For even better performance, we make the procedure iterative to recover more details, motivated by the iterative back projection method [34] and the iterative regularization method [47].

This paper has the following main contributions:

• To the best of our knowledge, this is the first work to employ an RKHS method to get competitive image super-resolution results. RKHS-based methods have long been considered a powerful tool for machine learning. In image processing, however, only limited studies have been done, e.g., image denoising [4], image segmentation [36] and image colorization [49].

• Employing Heaviside functions to recover more image details, not only getting sharp image edges, but also preserving more high-frequency details on non-edge regions.

C. Organization of this paper

The organization of this paper is as follows. In Section II, we review RKHS and splines based RKHS. We also make two remarks in this section. In Section III, we present the proposed iterative RKHS model based on Heaviside functions and discuss the algorithms. Many visual and quantitative experiments are shown in Section IV to demonstrate that the proposed method is a competitive approach for single image super-resolution. Finally, we draw conclusions in Section V.

II. REVIEW ON SPLINES BASED RKHS

In this section, we review RKHS, splines based RKHS [63] and their applications in signal/image smoothing. We will use splines based RKHS to model the smooth components of images.

A. Review on RKHS and its applications

Given a subset $\mathcal{X} \subset \mathbb{R}$ and a probability measure $\mathbb{P}$ on $\mathcal{X}$, we consider a Hilbert space $\mathcal{H} \subset L^2(\mathbb{P})$, a family of functions $g : \mathcal{X} \to \mathbb{R}$ with $\|g\|_{L^2(\mathbb{P})} < \infty$, and an associated inner product $\langle \cdot, \cdot \rangle_{\mathcal{H}}$ under which $\mathcal{H}$ is complete. The space $\mathcal{H}$ is a reproducing kernel Hilbert space (RKHS) if there exists a symmetric function $K : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ such that: (a) for each $x \in \mathcal{X}$, the function $K(\cdot, x)$ belongs to the Hilbert space $\mathcal{H}$, and (b) the reproducing relation $f(x) = \langle f, K(\cdot, x) \rangle_{\mathcal{H}}$ holds for all $f \in \mathcal{H}$.


Any such symmetric kernel function must be positive semidefinite (see Definition 1). Under suitable regularity conditions, Mercer's theorem [43] guarantees that the kernel has an eigen-expansion of the form
$$K(x, x') = \sum_{k=1}^{\infty} \lambda_k \phi_k(x)\phi_k(x'),$$
with $\lambda_1 \ge \lambda_2 \ge \lambda_3 \ge \ldots \ge 0$ a non-negative sequence of eigenvalues and $\{\phi_k\}_{k=1}^{\infty}$ the associated eigenfunctions, taken to be orthonormal in $L^2(\mathbb{P})$.

Definition 1 (Positive Semidefinite Kernel). Let $\mathcal{X}$ be a nonempty set. The kernel $K : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ is positive semidefinite if and only if the Gram matrix $\mathbf{K} = [K(x_i, x_j)]_{N \times N}$, $x_i \in \mathcal{X}$, $i, j = 1, 2, \cdots, N$, is a positive semidefinite matrix.
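As a quick numerical illustration of Definition 1 (not part of the original paper), one can build the Gram matrix of a kernel on a set of sample points and check that it has no eigenvalues below zero up to round-off; the helper name below is our own.

```python
import numpy as np

def is_psd_kernel(kernel, points, tol=1e-10):
    """Build the Gram matrix [K(x_i, x_j)] and test Definition 1 numerically."""
    K = kernel(points[:, None], points[None, :])
    return bool(np.all(np.linalg.eigvalsh(K) >= -tol))

gaussian = lambda x, y: np.exp(-(x - y) ** 2)          # a kernel known to be PSD
print(is_psd_kernel(gaussian, np.linspace(0, 1, 50)))  # True
```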

Since the eigenfunctions $\{\phi_k\}_{k=1}^{\infty}$ form an orthonormal basis, any function $f \in \mathcal{H}$ has an expansion of the form $f(x) = \sum_{k=1}^{\infty} \sqrt{\lambda_k}\, a_k \phi_k(x)$, where $a_k = \langle f, \phi_k \rangle_{L^2(\mathbb{P})} = \int_{\mathcal{X}} f(x)\phi_k(x)\, d\mathbb{P}(x)$ are (generalized) Fourier coefficients. Associated with any two functions in $\mathcal{H}$, where $f = \sum_{k \ge 1} \sqrt{\lambda_k}\, a_k \phi_k$ and $g = \sum_{k \ge 1} \sqrt{\lambda_k}\, b_k \phi_k$, are two distinct inner products. The first is the usual inner product in the space $L^2(\mathbb{P})$, defined as $\langle f, g \rangle_{L^2(\mathbb{P})} := \int_{\mathcal{X}} f(x)g(x)\, d\mathbb{P}(x) = \sum_{k=1}^{\infty} \lambda_k a_k b_k$ by Parseval's theorem. The second inner product, denoted $\langle f, g \rangle_{\mathcal{H}}$, defines the Hilbert space. It can be written in terms of the kernel eigenvalues and generalized Fourier coefficients as $\langle f, g \rangle_{\mathcal{H}} = \sum_{k=1}^{\infty} a_k b_k$. Using this definition, the Hilbert ball of radius 1 for $\mathcal{H}$ with eigenvalues $\lambda_k$ and eigenfunctions $\phi_k(\cdot)$ is
$$\mathcal{B}_{\mathcal{H}}(1) = \Big\{ f \in \mathcal{H};\ f(\cdot) = \sum_{k=1}^{\infty} \sqrt{\lambda_k}\, b_k \phi_k(\cdot)\ \Big|\ \sum_{k=1}^{\infty} b_k^2 = \|b\|_2^2 \le 1 \Big\}.$$
The class of RKHS contains many interesting classes that are widely used in practice, including polynomials of degree $d$ ($K(x, y) = (1 + \langle x, y \rangle)^d$), Sobolev spaces with smoothness $\nu$, Lipschitz functions, and smoothing splines. Moreover, the kernel $K(x, x') = \frac{1}{2} e^{-\gamma |x - x'|}$ leads to the Sobolev space $H^1$, i.e., a space consisting of square integrable functions whose first order derivative is square integrable, while $K(x, x') \propto |x - x'|$ and $K(x, x') \propto |x - x'|^3$ correspond to 1D piecewise linear and cubic splines, respectively.

RKHS has been around for many years, and it has been used as a powerful tool for machine learning [7], [8], [12], [13], [44], [46], [50], [54], [63]. Its application in image processing is not so common yet. In [4], Bouboulis et al. proposed an adaptive kernel method to deal with the image denoising problem in the spatial domain. This method can remove many kinds of noise (e.g., Gaussian noise, impulse noise, mixed noise) and preserves image edges effectively. In addition, Kang et al. utilized an RKHS method for image segmentation [36] and image/video colorization [49].

Wahba proposed splines based RKHS for smoothing problems in [63], showing that the solution of an optimization problem consists of a set of polynomial splines. The proposed method is based on splines based RKHS; we thus review it in the following two subsections.

B. A 1D spline and signal smoothing

For a real-valued function
$$f \in \mathcal{G} = \{ f : f \in C^{m-1}[0,1],\ f^{(m)} \in L^2[0,1] \},$$
it can be expanded at $t = 0$ by Taylor series as
$$f(t) = \sum_{\nu=0}^{m-1} \frac{t^{\nu}}{\nu!} f^{(\nu)}(0) + \int_0^1 \frac{(t-u)_+^{m-1}}{(m-1)!} f^{(m)}(u)\, du = f_0(t) + f_1(t), \qquad (2)$$

with
$$f_0(t) = \sum_{\nu=0}^{m-1} \frac{t^{\nu}}{\nu!} f^{(\nu)}(0), \qquad f_1(t) = \int_0^1 \frac{(t-u)_+^{m-1}}{(m-1)!} f^{(m)}(u)\, du,$$
where $(u)_+ = u$ for $u \ge 0$ and $(u)_+ = 0$ otherwise. Let
$$\phi_{\nu}(t) = \frac{t^{\nu-1}}{(\nu-1)!}, \qquad \nu = 1, 2, \ldots, m,$$
and $\mathcal{H}_0 = \mathrm{span}\{\phi_1, \phi_2, \ldots, \phi_m\}$ with norm $\|\phi\|^2 = \sum_{\nu=0}^{m-1} [(D^{(\nu)}\phi)(0)]^2$; then $D^{(m)}(\mathcal{H}_0) = 0$. It has been proved in [63] that $\mathcal{H}_0$ is an RKHS with reproducing kernel $R^0(s, t) = \sum_{\nu=1}^{m} \phi_{\nu}(s)\phi_{\nu}(t)$. For a function $f_0 \in \mathcal{H}_0$, we can express $f_0$ using the basis of $\mathcal{H}_0$, i.e., $f_0(t) = \sum_{\nu=1}^{m} d_{\nu}\phi_{\nu}(t)$.

Let $\mathcal{B}_m$ be the set of functions satisfying the boundary conditions $f^{(\nu)}(0) = 0$, $\nu = 0, 1, 2, \cdots, m-1$, and let $G_m(t, u) = \frac{(t-u)_+^{m-1}}{(m-1)!}$; then
$$f_1(t) = \int_0^1 \frac{(t-u)_+^{m-1}}{(m-1)!} f^{(m)}(u)\, du = \int_0^1 G_m(t, u) f^{(m)}(u)\, du$$
belongs to the space $\mathcal{H}_1$ defined as follows:

$$\mathcal{H}_1 = \{ f : f \in \mathcal{B}_m,\ f, f', \ldots, f^{(m-1)} \text{ absolutely continuous},\ f^{(m)} \in L^2 \}, \qquad (3)$$
where $\mathcal{H}_1$ is a Hilbert space on $[0,1]$ with norm $\|f\|^2 = \int_0^1 (f^{(m)}(t))^2\, dt$. $\mathcal{H}_1$ has also been proved to be an RKHS in [63], with reproducing kernel $R^1(s, t) = \int_0^1 G_m(t, u) G_m(s, u)\, du$. For a function $f_1 \in \mathcal{H}_1$, we can express $f_1$ via the basis of $\mathcal{H}_1$, denoted by $\{\xi_i\}_{i=1}^{n}$, so that $f_1(t) = \sum_{i=1}^{n} c_i \xi_i(t) = \sum_{i=1}^{n} c_i R^1(s_i, t)$, where $\xi_i = R^1(s_i, \cdot)$.

Due to
$$\int_0^1 \big( (D^{(m)} f_0)(u) \big)^2 du = 0, \qquad \sum_{\nu=0}^{m-1} \big( (D^{(\nu)} f_1)(0) \big)^2 = 0,$$
we can construct a direct sum space $\mathcal{G}_m$ from the two RKHS spaces $\mathcal{H}_0$ and $\mathcal{H}_1$, i.e., $\mathcal{G}_m = \mathcal{H}_0 \oplus \mathcal{H}_1$. $\mathcal{G}_m$ is proved to be an RKHS in [63] with the following reproducing kernel

$$R(s, t) = \sum_{\nu=1}^{m} \phi_{\nu}(s)\phi_{\nu}(t) + \int_0^1 G_m(t, u) G_m(s, u)\, du, \qquad (4)$$
and norm
$$\|f\|^2 = \sum_{\nu=0}^{m-1} [(D^{(\nu)} f)(0)]^2 + \int_0^1 (f^{(m)}(t))^2\, dt, \qquad (5)$$

where $f \in \mathcal{G}_m$. In summary, for $f \in \mathcal{G}_m$ we have $f = f_0 + f_1$ with $f_0 \in \mathcal{H}_0$, $f_1 \in \mathcal{H}_1$. It can also be written as

$$f(t) = \sum_{\nu=1}^{m} d_{\nu}\phi_{\nu}(t) + \sum_{i=1}^{n} c_i \xi_i(t), \qquad (6)$$


where $t \in [0, 1]$. Let $\vec{f} = (f(t_1), f(t_2), \cdots, f(t_n))'$ be the intensity values of $f$ at $t_i \in [0, 1]$, $i = 1, 2, \cdots, n$, and let
$$\vec{g} = \vec{f} + \eta \qquad (7)$$
be a noisy observation with $\eta$ an additive Gaussian noise. Let $T$ be an $n \times m$ matrix with $T_{i,\nu} = \phi_{\nu}(t_i)$ and let $\Sigma$ be an $n \times n$ matrix with $\Sigma_{i,j} = \langle \xi_i, \xi_j \rangle$; then we have the relation $\vec{f} = Td + \Sigma c$, where $d = (d_1, d_2, \cdots, d_m)'$ and $c = (c_1, c_2, \cdots, c_n)'$. In [63], the following model is used to estimate $\vec{f}$ from the noisy discrete measurements $\vec{g}$:
$$\min_{c, d}\ \frac{1}{n}\|\vec{g} - Td - \Sigma c\|^2 + \lambda c'\Sigma c, \qquad (8)$$

where the second term penalizes nonsmoothness. The simple model (8) has a closed-form solution:
$$c = M^{-1}\big( I - T(T'M^{-1}T)^{-1}T'M^{-1} \big)\vec{g}, \qquad d = (T'M^{-1}T)^{-1}T'M^{-1}\vec{g},$$
where $M = \Sigma + n\lambda I$ with $I$ an identity matrix. The computational burden of the matrix inverses can be reduced via QR decomposition (see details in Chapter 1 of [63]).

Remark 1. Once $c$ and $d$ are estimated from equation (8), one can get an estimate of the signal function $f(x)$,
$$f(x) = \sum_{\nu=1}^{m} d_{\nu}\phi_{\nu}(x) + \sum_{i=1}^{n} c_i \xi_i(x) = \sum_{\nu=1}^{m} d_{\nu}\phi_{\nu}(x) + \sum_{i=1}^{n} c_i R^1(s_i, x), \qquad (9)$$
for any $x \in [0, 1]$.
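A minimal NumPy sketch of this 1D procedure for $m = 2$ is given below. It assumes $s_i = t_i$ and uses the closed form $R^1(s,t) = \min(s,t)^2\max(s,t)/2 - \min(s,t)^3/6$ of the kernel; it also computes $c$ from the equivalent expression $c = M^{-1}(\vec{g} - Td)$, which follows from the two formulas above. Function names are our own illustrative choices.

```python
import numpy as np
from math import factorial

def r1_kernel(s, t):
    """R^1(s, t) = integral_0^1 (s-u)_+ (t-u)_+ du for m = 2 (cubic smoothing spline)."""
    a, b = np.minimum(s, t), np.maximum(s, t)
    return a ** 2 * b / 2.0 - a ** 3 / 6.0

def poly_basis_1d(t, m=2):
    """T[i, nu] = phi_{nu+1}(t_i) = t_i^nu / nu!  (basis of H_0)."""
    return np.stack([t ** nu / factorial(nu) for nu in range(m)], axis=1)

def rkhs_smooth_1d(g, t, lam, m=2):
    """Closed form of (8): d = (T'M^{-1}T)^{-1}T'M^{-1}g and c = M^{-1}(g - Td)."""
    n = len(t)
    T = poly_basis_1d(t, m)
    Sigma = r1_kernel(t[:, None], t[None, :])     # Sigma_{ij} = <xi_i, xi_j> = R^1(t_i, t_j)
    M = Sigma + n * lam * np.eye(n)
    Minv_T = np.linalg.solve(M, T)
    d = np.linalg.solve(T.T @ Minv_T, Minv_T.T @ g)
    c = np.linalg.solve(M, g - T @ d)
    return c, d

def evaluate_1d(x, t, c, d, m=2):
    """Remark 1 / eq. (9): evaluate the fitted function at arbitrary x in [0, 1]."""
    return poly_basis_1d(x, m) @ d + r1_kernel(x[:, None], t[None, :]) @ c
```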

Next, we review the 2D thin-plate spline, which can be viewed as an extension of the 1D spline above.

C. 2D thin-plate spline and image smoothing

We use the 2D thin-plate spline based RKHS, introduced in [63], for image super-resolution in this paper. We thus review it.

Similar to the 1D case, let $f$ be the intensity function of a 2D image defined on the continuous domain $E^2 = [0,1] \times [0,1]$. We assume $f$ belongs to an RKHS. Let $\vec{f} = (f(t_1), f(t_2), \cdots, f(t_n))'$ be its discretization on the grid $t_i = (x_i, y_i) \in [0,1] \times [0,1]$, $i = 1, 2, \cdots, n$; the noisy image in vector form with additive noise $\eta$ can be described by
$$\vec{g} = \vec{f} + \eta. \qquad (10)$$

In [63], an optimal estimate of $f$ for spline smoothing problems can be obtained by minimizing the following model
$$\min\ \frac{1}{n}\|\vec{g} - \vec{f}\|^2 + \lambda J_m(f), \qquad (11)$$

where $m$ is a parameter to control the total degree of the polynomial, and the penalty term is defined as follows:
$$J_m(f) = \sum_{\nu=0}^{m} \int_{-\infty}^{+\infty}\!\!\int_{-\infty}^{+\infty} \binom{m}{\nu} \Big( \frac{\partial^m f}{\partial x^{\nu} \partial y^{m-\nu}} \Big)^2 dx\, dy. \qquad (12)$$

From Chapter 2 of [63], we know that the null space of the penalty function $J_m(f)$ is an $M = \binom{d+m-1}{d}$ dimensional space spanned by the polynomials of degree no more than $m - 1$. In the experiments we let $d = 2$ (for 2D) and $m = 3$, so $M = \binom{d+m-1}{d} = 6$ and the null space is spanned by the following terms: $\phi_1(x, y) = 1$, $\phi_2(x, y) = x$, $\phi_3(x, y) = y$, $\phi_4(x, y) = xy$, $\phi_5(x, y) = x^2$, $\phi_6(x, y) = y^2$. Duchon (see [18]) has proved that if there exist $\{t_i\}_{i=1}^{n}$ such that least squares regression on $\{\phi_{\nu}\}_{\nu=1}^{M}$ is unique, then the optimization model (11) has the unique solution
$$f_{\lambda}(t) = \sum_{\nu=1}^{M} d_{\nu}\phi_{\nu}(t) + \sum_{i=1}^{n} c_i E_m(t, t_i), \qquad (13)$$

where $E_m(t, t_i)$ is the Green's function for the $m$-iterated Laplacian, defined as
$$E_m(s, t) = E_m(|s - t|) = \theta_{m,d}\, |s - t|^{2m-d} \ln|s - t|,$$
where $\theta_{m,d} = \frac{(-1)^{d/2+m+1}}{2^{2m-1}\pi^{d/2}(m-1)!\,(m-d/2)!}$; in particular, $E_m(t, t_i)$ plays the same role as $\xi_i(t)$ in the 1D case. Similar to equation (8), model (11) can be rewritten as

$$\min\ \frac{1}{n}\|\vec{g} - (Td + Kc)\|^2 + \lambda c'Kc, \qquad (14)$$

where $T$ is an $n \times M$ matrix with $T_{i,\nu} = \phi_{\nu}(t_i)$ and $K$ is an $n \times n$ matrix with $K_{i,j} = E_m(t_i, t_j)$. This model also has a closed-form solution similar to the 1D case: $d = (T'W^{-1}T)^{-1}T'W^{-1}\vec{g}$, $c = W^{-1}(I - T(T'W^{-1}T)^{-1}T'W^{-1})\vec{g}$, where $W = K + n\lambda I$. Additionally, a more economical version utilizing QR decomposition has also been provided to compute the coefficients $c$ and $d$ (see details in [63]). Moreover, more information about the thin-plate spline can be found in [16]–[18], [42], [51], [64].

Remark 2. Once we have computed the coefficients $c$ and $d$, the underlying function $f$ on the continuous domain $E^2$ can be estimated as

$$f(w) = \sum_{\nu=1}^{M} d_{\nu}\phi_{\nu}(w) + \sum_{i=1}^{n} c_i E_m(t_i, w), \qquad (15)$$

for any $w = (x, y)' \in E^2$. One can thus get an estimate of $f(w)$ at any $w \in [0, 1] \times [0, 1]$. This is very powerful and makes image super-resolution possible.
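The following NumPy sketch assembles the pieces of Section II-C for $d = 2$, $m = 3$: it builds the null-space basis and the kernel $E_m$ from the formulas above, computes $c, d$ by the stated closed form (using the equivalent $c = W^{-1}(\vec{g} - Td)$), and evaluates (15) on arbitrary points of a finer grid. Function names and the handling of $r = 0$ are our own choices.

```python
import numpy as np
from math import factorial, pi

def tps_kernel(s, t, m=3, d=2):
    """E_m(s, t) = theta_{m,d} |s - t|^{2m-d} ln|s - t| on row-stacked 2D points."""
    r = np.linalg.norm(s[:, None, :] - t[None, :, :], axis=-1)
    theta = (-1) ** (d // 2 + m + 1) / (2 ** (2 * m - 1) * pi ** (d / 2)
                                        * factorial(m - 1) * factorial(m - d // 2))
    with np.errstate(divide="ignore", invalid="ignore"):
        E = theta * r ** (2 * m - d) * np.log(r)
    return np.nan_to_num(E)                       # the r = 0 entries contribute 0

def null_space_basis(t):
    """Null space of J_3 for d = 2: {1, x, y, xy, x^2, y^2} (M = 6)."""
    x, y = t[:, 0], t[:, 1]
    return np.stack([np.ones_like(x), x, y, x * y, x ** 2, y ** 2], axis=1)

def tps_smooth(g, t, lam, m=3):
    """Closed-form minimizer of (14): d = (T'W^{-1}T)^{-1}T'W^{-1}g, c = W^{-1}(g - Td)."""
    n = len(t)
    T, K = null_space_basis(t), tps_kernel(t, t, m=m)
    W = K + n * lam * np.eye(n)
    Winv_T = np.linalg.solve(W, T)
    d = np.linalg.solve(T.T @ Winv_T, Winv_T.T @ g)
    c = np.linalg.solve(W, g - T @ d)
    return c, d

def tps_eval(w, t, c, d, m=3):
    """Remark 2 / eq. (15): evaluate the fitted intensity surface at any w in [0,1]^2."""
    return null_space_basis(w) @ d + tps_kernel(w, t, m=m) @ c
```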

III. THE PROPOSED ITERATIVE METHOD

Let $f$ represent the intensity function of an image defined on a continuous domain. Without loss of generality, we assume the domain is $E^2 = [0, 1] \times [0, 1]$. Let $H, L$ be a high-resolution and a low-resolution discretization of $f$, respectively. For notational simplicity, we interchangeably use $H, L$ to represent their matrix and vector representations. $H$ and $L$ are usually related by $L = DBH + \epsilon$ as described in equation (1), with $D, B$ a down-sampling and a blurring operator, respectively, and $\epsilon$ some random noise, or $0$ in the noise-free case. We note that the high-resolution image $H \in \mathbb{R}^{U \times V}$ can be obtained by $H_i = f(t^h_i)$ with $t^h_i = (x_i, y_i)$, $x_i \in \{0, \frac{1}{U-1}, \frac{2}{U-1}, \cdots, 1\}$, $y_i \in \{0, \frac{1}{V-1}, \frac{2}{V-1}, \cdots, 1\}$ on a finer grid. The low-resolution image $L \in \mathbb{R}^{Q \times S}$ is obtained by the discretization formula $L_i = f(t^l_i)$ with $t^l_i = (x_i, y_i)$, $x_i \in \{0, \frac{1}{Q-1}, \frac{2}{Q-1}, \cdots, 1\}$, $y_i \in \{0, \frac{1}{S-1}, \frac{2}{S-1}, \cdots, 1\}$ on a coarser grid. In particular, $Q, S$ are smaller than $U, V$, respectively.



Fig. 2. (a) The 1D Heaviside function; (b) two approximated Heaviside functions $\psi(x) = 1/2 + (1/\pi)\arctan(x/\xi)$ with $\xi = 0.7$ (blue solid line) and $\xi = 0.05$ (black dashed line), respectively. The smaller $\xi$, the sharper the edge (color images are better visualized in the pdf file).

Here $T^l \in \mathbb{R}^{n \times M}$, $K^l \in \mathbb{R}^{n \times n}$ and $T^h \in \mathbb{R}^{N \times M}$, $K^h \in \mathbb{R}^{N \times n}$ are the $T, K$ matrices of Section II-C evaluated on the coarse and fine grids, respectively, where $n = Q \cdot S$, $N = U \cdot V$, and $M$ is the dimension of the null space of the penalty term (see details also in Section II-C). Motivated by the smoothing model (14), $c, d$ can be solved using the following model:
$$\min\ \frac{1}{n}\|L - DB(T^h d + K^h c)\|^2 + \lambda c'K^l c, \qquad (16)$$

where $H = T^h d + K^h c$. However, model (16) is designed for image smoothing; super-resolution results obtained from it may smooth out image edges. In this work, we employ Heaviside functions to recover more image details such as edges.

A. Heaviside function

The Heaviside function, or Heaviside step function (see Figure 2(a)), is defined as follows:

$$\phi(x) = \begin{cases} 0, & x < 0, \\ 1, & x \ge 0. \end{cases} \qquad (17)$$

The Heaviside function is singular at $x = 0$ and describes a jump at $x = 0$ perfectly. We usually use a smooth approximation of it for practical problems. In our work, we use the following approximated Heaviside function (AHF):

$$\psi(x) = \frac{1}{2} + \frac{1}{\pi}\arctan\Big(\frac{x}{\xi}\Big), \qquad (18)$$

which approximates $\phi(x)$ as $\xi \to 0$; $\xi \in \mathbb{R}$ controls the smoothness. The smaller $\xi$, the sharper the jump (see Figure 2(b)).

The AHF $\psi(\cdot)$ is a 1D function. Its variation $\psi(v_i \cdot x + c_i)$ is, however, a 2D function when $x \in \mathbb{R}^2$. If we let $v_i = (\cos\theta_i, \sin\theta_i)$, then $\psi(v_i \cdot x + c_i)$ can describe an edge with orientation $\theta_i$ located at a position specified by $c_i$. In Figure 3, we show some examples of $\psi((\cos\theta_i, \sin\theta_i) \cdot z + c_i)$. One can see that as $\theta_i, c_i$ vary, we get edges of various orientations at different locations. Supported by the following theoretical foundation, we model edges in 2D images using linear combinations of this type of function.

Theorem 1 (see [35]). For any positive integers $m, d$ and any $p \in [1, \infty)$, $\mathrm{span}_m H_d = \{ \sum_{i=1}^{m} \omega_i \psi(v_i \cdot x + c_i) \}$, with $\omega_i \in \mathbb{R}$, $v_i \in \mathbb{R}^d$ and $c_i \in \mathbb{R}$, is approximately a compact subset of $(L^p([0,1]^d), \|\cdot\|_p)$.


Fig. 3. Left panel: 3D surface plots of $\psi$ with $\xi = 10^{-4}$ and nine parameter pairs $(\theta, c)$; right panel: the corresponding 2D images. From left to right and then from top to bottom: $(\frac{4\pi}{5}, \frac{51}{1024})$, $(\frac{4\pi}{5}, \frac{25}{64})$, $(\frac{4\pi}{5}, \frac{175}{256})$, $(\frac{6\pi}{5}, \frac{135}{1024})$, $(\frac{6\pi}{5}, \frac{1}{2})$, $(\frac{6\pi}{5}, \frac{25}{32})$, $(\frac{8\pi}{5}, \frac{5}{64})$, $(\frac{8\pi}{5}, \frac{75}{256})$, $(\frac{8\pi}{5}, \frac{75}{128})$ (for better visualization, some 3D surface plots in the left panel are rotated so that the edge jumps can be observed clearly).

2D images are defined in $\mathbb{R}^2$, i.e., $d = 2$. Based on the above theorem, we model edges in 2D images using the following:

$$g(z) = \sum_{j=1}^{m} \omega_j\, \psi\big( (\cos\theta_j, \sin\theta_j) \cdot z + c_j \big), \qquad (19)$$

where a small $\xi = 10^{-4}$ is used in $\psi$,
$$\theta_j \in \{0, \pi/12, 2\pi/12, 3\pi/12, \cdots, 23\pi/12\},$$
and $c_j \in \{0, \frac{1}{n-1}, \frac{2}{n-1}, \cdots, 1\}$, where $n$ is the number of pixels of the low-resolution image and $m = kn$ with $k$ the number of orientations $\{\theta_j\}$.

Actually, equation (19) can be written as $g = \Psi\omega$, where $\Psi \in \mathbb{R}^{n \times m}$, $g \in \mathbb{R}^n$, $\omega \in \mathbb{R}^m$.
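A small sketch of how the edge atoms and the matrix $\Psi$ can be assembled is shown below; the variable names and the 8×8 example patch are illustrative choices, not the paper's implementation.

```python
import numpy as np

def ahf(x, xi=1e-4):
    """Approximated Heaviside function (18): 1/2 + (1/pi) arctan(x / xi)."""
    return 0.5 + np.arctan(x / xi) / np.pi

def edge_dictionary(grid, thetas, offsets, xi=1e-4):
    """Stack the 2D edge atoms psi((cos t, sin t) . z + c) of (19) as columns of Psi (n x m)."""
    cols = []
    for th in thetas:
        proj = grid[:, 0] * np.cos(th) + grid[:, 1] * np.sin(th)   # (cos t, sin t) . z
        for c in offsets:
            cols.append(ahf(proj + c, xi))
    return np.stack(cols, axis=1)

# example: atoms on the coarse grid of an 8 x 8 low-resolution patch
side = 8
xs, ys = np.meshgrid(np.linspace(0, 1, side), np.linspace(0, 1, side), indexing="ij")
grid = np.stack([xs.ravel(), ys.ravel()], axis=1)     # n x 2 pixel coordinates in [0,1]^2
thetas = np.arange(24) * np.pi / 12                   # {0, pi/12, ..., 23*pi/12}
offsets = np.linspace(0, 1, grid.shape[0])            # {0, 1/(n-1), ..., 1}
Psi = edge_dictionary(grid, thetas, offsets)          # n x (k n), here 64 x 1536
```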

B. The proposed iterative method based on RKHS and Heaviside functions

In this work, we assume the underlying image intensity function $f$ is the sum of smooth components and edges, which are modeled using the splines based RKHS and Heaviside functions, respectively, i.e., $f = Td + Kc + \Psi\beta$. Since $\Psi$ contains a fairly exhaustive list of functions while edges are quite sparse in images, it is reasonable to expect $\beta$ to be sparse. The final proposed model is as follows:

$$\min\ \frac{1}{n}\|L - DB(T^h d + K^h c + \Psi^h\beta)\|^2 + \lambda c'K^l c + \alpha\|\beta\|_1, \qquad (20)$$

where $H = T^h d + K^h c + \Psi^h\beta$ and $\ell_1$ sparsity is enforced on $\beta$. For the blur-free case, $B = I$, an identity matrix, and $DB(T^h d + K^h c + \Psi^h\beta)$ reduces to $T^l d + K^l c + \Psi^l\beta$. Since $\|\beta\|_1$ is not differentiable, we make a variable substitution and solve the following equivalent problem:

$$\min\ \frac{1}{n}\|L - (T^l d + K^l c + \Psi^l\beta)\|^2 + \lambda c'K^l c + \alpha\|u\|_1, \quad \text{s.t. } u = \beta, \qquad (21)$$
using the alternating direction method of multipliers (ADMM), a very popular method for solving $L_1$ problems [29], [31], [67].


In particular, the convergence of the ADMM method is guaranteed by many works, e.g., [28], [32]. The augmented Lagrangian of problem (21) is as follows:
$$\mathcal{L}(c, d, \beta, u) = \frac{1}{n}\|L - (T^l d + K^l c + \Psi^l\beta)\|^2 + \lambda c'K^l c + \alpha\|u\|_1 + \frac{\rho}{2}\|u - \beta + b\|^2, \qquad (22)$$

where $\alpha, \rho \in \mathbb{R}$ are regularization parameters and $b$ is a Lagrangian multiplier.

The energy functional in (22) is separable with respect to $(c, d, \beta)$ and $u$. We can thus focus on the two subproblems:

$(c, d, \beta)$-subproblem:
$$\min_{(c, d, \beta)}\ \frac{1}{n}\|L - (T^l d + K^l c + \Psi^l\beta)\|^2 + \lambda c'K^l c + \frac{\rho}{2}\|u - \beta + b\|^2, \qquad (23)$$

$u$-subproblem:
$$\min_u\ \alpha\|u\|_1 + \frac{\rho}{2}\|u - \beta + b\|^2. \qquad (24)$$

The $u$-subproblem (24) has a closed-form solution, computed element-wise for each $u_i$ (see [67]) as
$$u_i = \mathrm{shrink}\Big(\beta_i - b_i, \frac{\alpha}{\rho}\Big), \qquad (25)$$
where $\mathrm{shrink}(a, b) = \mathrm{sign}(a)\max(|a| - b, 0)$ and the convention $0 \cdot (0/0) = 0$ is assumed.
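The shrinkage operator is the standard soft-thresholding map; a short NumPy version (our own helper, applied element-wise) is:

```python
import numpy as np

def shrink(a, kappa):
    """Soft-thresholding: sign(a) * max(|a| - kappa, 0), applied element-wise."""
    return np.sign(a) * np.maximum(np.abs(a) - kappa, 0.0)

# u-update of (24)-(25):  u = shrink(beta - b, alpha / rho)
```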

We employ the least squares method to solve the $(c, d, \beta)$-subproblem (23). The normal equations read as
$$\begin{pmatrix} K^{l\prime}K^l + n\lambda K^l & K^{l\prime}T^l & K^{l\prime}\Psi^l \\ T^{l\prime}K^l & T^{l\prime}T^l & T^{l\prime}\Psi^l \\ \Psi^{l\prime}K^l & \Psi^{l\prime}T^l & \Psi^{l\prime}\Psi^l + \frac{n\rho}{2}I \end{pmatrix} \begin{pmatrix} c \\ d \\ \beta \end{pmatrix} = \begin{pmatrix} K^{l\prime}L \\ T^{l\prime}L \\ \Psi^{l\prime}L + \frac{n\rho}{2}(u + b) \end{pmatrix}. \qquad (26)$$
Equation (26) can be rewritten as the following three equations:

$$(K^{l\prime}K^l + n\lambda K^l)c + K^{l\prime}T^l d + K^{l\prime}\Psi^l\beta = K^{l\prime}L, \qquad (27)$$

$$T^{l\prime}K^l c + T^{l\prime}T^l d + T^{l\prime}\Psi^l\beta = T^{l\prime}L, \qquad (28)$$

$$\Psi^{l\prime}K^l c + \Psi^{l\prime}T^l d + \Big(\Psi^{l\prime}\Psi^l + \frac{n\rho}{2}I\Big)\beta = \Psi^{l\prime}L + \frac{n\rho}{2}(u + b). \qquad (29)$$

We can solve for β from equation (29) in terms of c, d:

$$\beta = \Big(\Psi^{l\prime}\Psi^l + \frac{n\rho}{2}I\Big)^{-1}\Big(\Psi^{l\prime}L + \frac{n\rho}{2}(u + b) - \Psi^{l\prime}K^l c - \Psi^{l\prime}T^l d\Big). \qquad (30)$$
We then substitute equation (30) into equation (27) and equation (28) and obtain

$$\begin{aligned} c &= (A_1 - A_3 A_4^{-1} A_2)^{-1}(e_1 - A_3 A_4^{-1} e_2), \\ d &= A_4^{-1}(e_2 - A_2 c), \\ \beta &= \Big(\Psi^{l\prime}\Psi^l + \frac{n\rho}{2}I\Big)^{-1}\Big(\Psi^{l\prime}L + \frac{n\rho}{2}(u + b) - \Psi^{l\prime}K^l c - \Psi^{l\prime}T^l d\Big), \end{aligned} \qquad (31)$$

where
$$\begin{aligned} A_1 &= (K^{l\prime}K^l + n\lambda K^l) - K^{l\prime}\Psi^l\big(\Psi^{l\prime}\Psi^l + \tfrac{n\rho}{2}I\big)^{-1}\Psi^{l\prime}K^l, \\ A_2 &= T^{l\prime}K^l - T^{l\prime}\Psi^l\big(\Psi^{l\prime}\Psi^l + \tfrac{n\rho}{2}I\big)^{-1}\Psi^{l\prime}K^l, \\ A_3 &= K^{l\prime}T^l - K^{l\prime}\Psi^l\big(\Psi^{l\prime}\Psi^l + \tfrac{n\rho}{2}I\big)^{-1}\Psi^{l\prime}T^l, \\ A_4 &= T^{l\prime}T^l - T^{l\prime}\Psi^l\big(\Psi^{l\prime}\Psi^l + \tfrac{n\rho}{2}I\big)^{-1}\Psi^{l\prime}T^l, \\ e_1 &= K^{l\prime}L - K^{l\prime}\Psi^l\big(\Psi^{l\prime}\Psi^l + \tfrac{n\rho}{2}I\big)^{-1}\big(\Psi^{l\prime}L + \tfrac{n\rho}{2}(u + b)\big), \\ e_2 &= T^{l\prime}L - T^{l\prime}\Psi^l\big(\Psi^{l\prime}\Psi^l + \tfrac{n\rho}{2}I\big)^{-1}\big(\Psi^{l\prime}L + \tfrac{n\rho}{2}(u + b)\big). \end{aligned}$$
Equation (31) looks complicated and involves some matrix inversions, but we only compute it once in the algorithm, and the matrix inversions are not ill-conditioned for proper $\lambda$ and $\rho$. If we apply the algorithm to image patches (see the end of this section), the computation is very cheap.
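For reference, a direct NumPy transcription of the elimination (30)–(31) might look as follows; the function name and the dense solves are illustrative, `Tl`, `Kl`, `Psi` denote $T^l$, $K^l$, $\Psi^l$, and `L` is the vectorized low-resolution patch.

```python
import numpy as np

def solve_cdb(L, Tl, Kl, Psi, u, b, lam, rho):
    """(c, d, beta)-update of (23) via the elimination formulas (30)-(31)."""
    n, g = L.size, L.ravel()
    S = Psi.T @ Psi + 0.5 * n * rho * np.eye(Psi.shape[1])     # Psi'Psi + (n rho / 2) I
    r = Psi.T @ g + 0.5 * n * rho * (u + b)                    # Psi'L + (n rho / 2)(u + b)
    Sinv_PsiK = np.linalg.solve(S, Psi.T @ Kl)
    Sinv_PsiT = np.linalg.solve(S, Psi.T @ Tl)
    A1 = Kl.T @ Kl + n * lam * Kl - Kl.T @ Psi @ Sinv_PsiK
    A2 = Tl.T @ Kl - Tl.T @ Psi @ Sinv_PsiK
    A3 = Kl.T @ Tl - Kl.T @ Psi @ Sinv_PsiT
    A4 = Tl.T @ Tl - Tl.T @ Psi @ Sinv_PsiT
    e1 = Kl.T @ g - Kl.T @ Psi @ np.linalg.solve(S, r)
    e2 = Tl.T @ g - Tl.T @ Psi @ np.linalg.solve(S, r)
    c = np.linalg.solve(A1 - A3 @ np.linalg.solve(A4, A2), e1 - A3 @ np.linalg.solve(A4, e2))
    d = np.linalg.solve(A4, e2 - A2 @ c)
    beta = np.linalg.solve(S, r - Psi.T @ Kl @ c - Psi.T @ Tl @ d)
    return c, d, beta
```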

The following algorithm is the corresponding ADMM scheme:

Algorithm 1
Input: $L, T^l, K^l, \Psi^l, \lambda, \alpha, \rho$, $\gamma \in (0, (\sqrt{5}+1)/2)$
Output: $c, d, \beta$
$j \leftarrow 0$, $(c^{(j)}, d^{(j)}, \beta^{(j)}) \leftarrow 0$, $u^{(j)} \leftarrow 0$, $b^{(j)} \leftarrow 0$
while not converged do
1. $j \leftarrow j + 1$
2. $(c^{(j)}, d^{(j)}, \beta^{(j)}) \leftarrow$ solve subproblem (23) for $u = u^{(j-1)}$, $b = b^{(j-1)}$
3. $u^{(j)} \leftarrow$ solve subproblem (24) for $\beta = \beta^{(j)}$, $b = b^{(j-1)}$
4. $b^{(j)} \leftarrow b^{(j-1)} + \gamma(u^{(j)} - \beta^{(j)})$
End while.
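A compact sketch of Algorithm 1 for the blur-free case is shown below. Instead of the elimination (31) it solves the block normal equations (26) with one dense solve per iteration, which is mathematically equivalent and keeps the code short; all names are illustrative.

```python
import numpy as np

def admm_sr_rkhs(L, Tl, Kl, Psi, lam, alpha, rho, gamma=1.0, iters=10):
    """Algorithm 1 (blur-free case): alternate the (c, d, beta)-update via (26)
    with the shrinkage step (25) and the multiplier update."""
    n, g = L.size, L.ravel()
    m = Psi.shape[1]
    u, b = np.zeros(m), np.zeros(m)
    A = np.block([
        [Kl.T @ Kl + n * lam * Kl, Kl.T @ Tl,  Kl.T @ Psi],
        [Tl.T @ Kl,                Tl.T @ Tl,  Tl.T @ Psi],
        [Psi.T @ Kl,               Psi.T @ Tl, Psi.T @ Psi + 0.5 * n * rho * np.eye(m)],
    ])                                                   # system matrix of (26), fixed across iterations
    nc, nd = Kl.shape[1], Tl.shape[1]
    for _ in range(iters):
        rhs = np.concatenate([Kl.T @ g, Tl.T @ g, Psi.T @ g + 0.5 * n * rho * (u + b)])
        z = np.linalg.solve(A, rhs)                      # (c, d, beta)-subproblem (23)
        c, d, beta = z[:nc], z[nc:nc + nd], z[nc + nd:]
        u = np.sign(beta - b) * np.maximum(np.abs(beta - b) - alpha / rho, 0.0)   # (24)-(25)
        b = b + gamma * (u - beta)                       # multiplier update
    return c, d, beta
```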

Note that the convergence of Algorithm 1 is guaranteed by the following theorem, whose proof can be found in [28].

Theorem 2. For any $\gamma \in (0, (\sqrt{5}+1)/2)$, the sequence $\{(c^{(j)}, d^{(j)}, \beta^{(j)})\}$ obtained by Algorithm 1 converges to the solution of problem (20) for any initial points $u^{(0)}$ and $b^{(0)}$.

In particular, we set $\gamma = 1$ and $\beta^{(0)} = b^{(0)} = 0$ in our work, so the convergence of Algorithm 1 is guaranteed.

Although model (20) can pick up more image details, it cannot completely overcome the blur effect along edges of the high-resolution image. Due to the imperfect reconstruction from the model, we observe residual edges in the difference image $L - DBH^{(1)}$, where $H^{(1)}$ is the high-resolution image computed by model (20). Inspired by the iterative back projection method [34] and the iterative regularization method [47], we consider the difference $L - DBH^{(1)}$ as a new low-resolution input $L$ and recompute model (20) to get a residual high-resolution image $H^{(2)}$. We repeat this process until the residual is small enough. The sum of the high-resolution image $H^{(1)}$ and its residual high-resolution images is the resulting super-resolution image $H$. The strategy can recover more image details (see Figure 4). In our experiments, it is enough to iterate the process ten times. Algorithm 2 summarizes the proposed iterative RKHS algorithm for single image super-resolution. This algorithm can work for general $D, B$, though we mainly tested it with bicubic down-sampling and blur-free in the experiments.

For Algorithm 2, note that although we introduce some parameters into the super-resolution algorithm, these parameters are not sensitive and are easy to select (see the parameters remark in Section IV). The solution of step 3a is obtained by Algorithm 1. The down-sampling operator $D$, used in step 3a and step 3c of Algorithm 2, is implemented by bicubic interpolation (the Matlab function “imresize”).

Algorithm 2 can be applied to the whole image or patch by patch. In our numerical experiments, we apply the algorithm to image patches to reduce computation time and storage.


Algorithm 2 (Single image super-resolution via RKHS (SR-RKHS))
Input: one low-resolution image $L \in \mathbb{R}^{Q \times S}$, $\lambda > 0$, $\alpha > 0$, $\tau$: maximum number of iterations
Output: high-resolution image $H \in \mathbb{R}^{U \times V}$
Step 1. Set the coarse grid $t^l$ and the fine grid $t^h$.
Step 2. Construct matrices $T^l, K^l, \Psi^l$ (refer to Section II-C and Section III-A) for $i, j = 1, 2, \cdots, n$, $n = Q \cdot S$, $\nu = 1, 2, \cdots, M$. Similarly for $T^h, K^h, \Psi^h$, except $i = 1, 2, \cdots, N$; $j = 1, 2, \cdots, n$, where $N = U \cdot V$.
Step 3. Initialization: $L^{(1)} = L$.
for $k = 1 : \tau$
  a. Compute the coefficients: $(c^{(k)}, d^{(k)}, \beta^{(k)}) = \arg\min \frac{1}{n}\|L^{(k)} - DB(T^h d + K^h c + \Psi^h\beta)\|^2 + \lambda c'K^l c + \alpha\|\beta\|_1$.
  b. Update the high-resolution image: $H^{(k)} = T^h d^{(k)} + K^h c^{(k)} + \Psi^h\beta^{(k)}$.
  c. Down-sample $H^{(k)}$ to the coarse grid: $\tilde{L} = DBH^{(k)}$.
  d. Compute the residual: $L^{(k+1)} = L^{(k)} - \tilde{L}$.
end
Step 4. Compute the final high-resolution image: $H = \sum_{i=1}^{\tau} H^{(i)}$.
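A sketch of the outer loop of Algorithm 2 for the blur-free case is given below. `fit_patchwise` stands for steps 3a–3b (solving (20), e.g., with Algorithm 1, patch by patch) and is a placeholder rather than the paper's implementation, and bicubic down-sampling is approximated here with scikit-image's cubic `resize`.

```python
import numpy as np
from skimage.transform import resize

def sr_rkhs(L, factor, fit_patchwise, tau=3):
    """Outer loop of Algorithm 2 (blur-free case, B = I)."""
    Q, S = L.shape
    H = np.zeros((Q * factor, S * factor))
    Lk = L.astype(float)
    for _ in range(tau):
        Hk = fit_patchwise(Lk, factor)            # steps 3a-3b: H^(k) = T^h d + K^h c + Psi^h beta
        H += Hk                                   # step 4 accumulates the residual HR images
        Lk = Lk - resize(Hk, (Q, S), order=3)     # steps 3c-3d: residual against the down-sampled H^(k)
    return H
```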

Fig. 4. Super-resolution image “lena” by Algorithm 2: (a) is the sum of the images $H^{(i)}$, $i = 1, 2, 3$; (b) is the image computed in the first iteration. For better visibility, we add 0.5 to the intensities of $H^{(2)}$ and $H^{(3)}$ to obtain (c) and (d), respectively. From the last two images, we see that $H^{(2)}$ and $H^{(3)}$ pick up some image details.

We set the patch size to $6 \times 6$ with overlaps. Intensity at the boundary is estimated by bicubic interpolation.

In what follows, we compare the proposed approach with some competitive methods.

IV. NUMERICAL EXPERIMENTS

In this section, we mainly compare the proposed approach with some state-of-the-art super-resolution methods: bicubic interpolation, a fast upsampling method (“08’TOG” [53]), and a learning-based method (“10’TIP” [71]). In addition, the proposed method can actually be viewed as an interpolation-based approach. Thus it is necessary to compare the proposed method with some state-of-the-art interpolation methods, e.g., two contour stencils based interpolations (“11’IPOL” [27], “11’SIAM” [26]) and an interpolation and reconstruction based method (“14’TIP” [65]). Furthermore, we also compare the proposed method with a kernel regression method (“07’TIP” [59]) and a multiscale geometric method (“07’SPIE” [45]).

We use two kinds of test images. One is low-resolution images without high-resolution ground-truth (see Section IV-A). The other is low-resolution images simulated from known high-resolution images (see Section IV-B). In the latter case, high-resolution ground-truth is available for quantitative comparisons. For fair comparison, we set $B = I$ in our experiments because some of the compared methods do not involve a deblurring process. All experiments are done in MATLAB (R2010a) on a laptop with 3.25 GB RAM and an Intel(R) Core(TM) i3-2370M CPU @ 2.40 GHz.

The proposed Algorithm 2 is for gray-scale images. For color images such as RGB, there is redundancy among the channels, so we first transform the image to the “YCbCr” color space¹, where “Y” represents the luminance component and “Cb” and “Cr” represent the blue-difference and red-difference components, which are less redundant. “Y” is essentially a grayscale copy of the color image and carries most of its high resolution details. This color space is very popular in image/video processing. Because humans are more sensitive to luminance changes, the proposed algorithm is only applied to the luminance channel, and bicubic interpolation is applied to the color layers (Cb, Cr). The upscaled image in YCbCr space is transformed back to the original color space for visualization/analysis. Color image results are better visualized in the original pdf file.
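The color handling described above can be sketched as follows; `upscale_luma` is a placeholder for the proposed single-channel routine, and scikit-image's color conversions and cubic `resize` are used here as stand-ins for the MATLAB functions mentioned in the text.

```python
import numpy as np
from skimage.color import rgb2ycbcr, ycbcr2rgb
from skimage.transform import resize

def upscale_color(rgb_lr, factor, upscale_luma):
    """Apply single-channel SR to Y only; Cb/Cr are upscaled by bicubic interpolation."""
    ycbcr = rgb2ycbcr(rgb_lr)
    hr_shape = (ycbcr.shape[0] * factor, ycbcr.shape[1] * factor)
    y_hr = upscale_luma(ycbcr[..., 0])                   # proposed method on the luminance channel
    cb_hr = resize(ycbcr[..., 1], hr_shape, order=3)     # bicubic on the chroma channels
    cr_hr = resize(ycbcr[..., 2], hr_shape, order=3)
    return ycbcr2rgb(np.stack([y_hr, cb_hr, cr_hr], axis=-1))
```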

We employ the root-mean-square error (RMSE) for quantitative comparisons; the RMSE index is used in some super-resolution works, e.g., “10’TIP” [71]. Furthermore, the popular Peak Signal-to-Noise Ratio (PSNR) index is utilized to assess the performance of different methods. In particular, we compute PSNR only on the luminance channel “Y” in the experiments. In addition, we also employ the structural similarity (SSIM) index² [68] to compare different methods.

A remark on parameter selection: The related parameters in Algorithm 1 and Algorithm 2 are easy to select. We set $\lambda = 10^{-11}$, $\alpha = 10^{-4}$, $\rho = 10^{-5}$. The maximum number of outer iterations $\tau$ is 3. For simplicity, we only do 10 iterations of Algorithm 1. In addition, we set $M = 6$ so that $\phi_1(x, y) = 1$, $\phi_2(x, y) = x$, $\phi_3(x, y) = y$, $\phi_4(x, y) = xy$, $\phi_5(x, y) = x^2$, $\phi_6(x, y) = y^2$ (see details in Section II-C). Note that the proposed method includes many parameters, e.g., $\lambda$, $\rho$, the patch size, etc. However, they are easy to select because the proposed method, which can be viewed as an interpolation approach, is not sensitive to the selection of parameters. Actually, choosing suitable parameters is always a difficulty for many image algorithms, and tuning empirically is a popular way to determine them. In our work, we obtain the parameters by tuning empirically.
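For completeness, RMSE and PSNR as used here can be computed on the luminance channel in a few lines (SSIM is available, e.g., as `skimage.metrics.structural_similarity`); this is a generic sketch, not the authors' evaluation code.

```python
import numpy as np

def rmse(x, y):
    """Root-mean-square error between two images of the same size."""
    return np.sqrt(np.mean((x.astype(float) - y.astype(float)) ** 2))

def psnr(x, y, peak=255.0):
    """PSNR in dB, computed here on the luminance channel "Y" of the two images."""
    return 20.0 * np.log10(peak / rmse(x, y))
```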

A. Results on low-resolution images without ground-truth

In this section, experiments are based on natural images without ground-truth; thus quantitative comparisons (e.g., RMSE) are not available.

In Figure 5 and Figure 6, we compare the proposed SR-RKHS method with classical bicubic interpolation, “07’TIP” [59], “08’TOG” [53], “10’TIP” [71], “11’IPOL” [27], “11’SIAM” [26] and “14’TIP” [65]. The upscaling factors are all 3.

¹ http://en.wikipedia.org/wiki/YCbCr
² https://ece.uwaterloo.ca/∼z70wang/research/ssim/



Fig. 5. Compare the proposed algorithm with some state-of-the-art approaches: bicubic interpolation, 07’TIP [59], 08’TOG [53], 10’TIP [71], 11’IPOL [27], 11’SIAM [26] and 14’TIP [65]. The upscaling factor is 3. No ground-truth high-resolution images are available for quantitative comparison. Color images are better visualized in the pdf file.

From the figures, the results of bicubic interpolation, “07’TIP” and “08’TOG” show a blur effect over the whole image. The results of “10’TIP” and “14’TIP” preserve sharp edges well; however, they smooth out image details in non-edge regions, e.g., the freckles on the skin (see the close-ups in Figure 6). The two contour interpolation methods “11’IPOL” and “11’SIAM” keep image edges and details well, but the results contain some artificial contours near true edges. The proposed method performs well, not only on edges but also for fine details/textures away from edges.

B. Results on low-resolution images simulated from known ground-truth images

To provide quantitative comparisons in terms of RMSE, PSNR and SSIM, we start from some high-resolution images, treat them as ground-truth, and simulate low-resolution images by bicubic interpolation.

In this section, we mainly compare the proposed method with several state-of-the-art methods: bicubic interpolation, “08’TOG” [53], “10’TIP” [71], “11’IPOL” [27] and “14’TIP” [65].



Fig. 6. Compare the proposed algorithm with some state-of-the-art approaches: bicubic interpolation, 07’TIP [59], 08’TOG [53], 10’TIP [71], 11’IPOL [27], 11’SIAM [26] and 14’TIP [65]. The upscaling factor is 3. No ground-truth high-resolution images are available for quantitative comparison.


Fig. 7. Qualitative comparison for the image “face” among the proposed method and bicubic interpolation, “08’TOG” [53], “10’TIP” [71], “11’IPOL” [27] and “14’TIP” [65], with the upscaling factor of 2.



Fig. 8. Results of “baboon” (upscaling factor 4) and “forest” (upscaling factor 4). Compared methods: bicubic interpolation, “08’TOG” [53], “10’TIP” [71], “11’IPOL” [27], “14’TIP” [65] and ours. In particular, readers are recommended to zoom in on all figures for better visualization.

In Figures 7–9, the high-resolution images upscaled by bicubic interpolation show a blur effect. Although we can get sharp edges via “08’TOG” [53], it flattens details in non-edge regions. The method “11’IPOL” [27] recovers image details well but introduces some artificial contours near true edges; for instance, for the “baboon” example in Figure 8, it produces many artificial contours near the true edges (see the close-up). “14’TIP” [65] preserves sharp image edges but smooths out image intensity away from edges. The method “10’TIP” [71] obtains competitive visual results; however, it generates worse quantitative results than the proposed method (see Table I). In addition, the results of “07’SPIE” [45] and “11’SIAM” [26] in Figure 9 are also worse than those of the proposed method. The proposed method not only preserves sharp edges but also keeps high-frequency details well in non-edge regions. Furthermore, the proposed method also achieves the best RMSE, PSNR and SSIM for almost all examples.

In Figure 10 and Table I, we find that the proposed method gets better quantitative and visual results. The results of bicubic interpolation and “08’TOG” show a significant blur effect. The method “11’IPOL” also obtains excellent visual results, but they show obvious artificial contours. The method “14’TIP” gets the sharpest image edges, but it smooths out image details in non-edge regions. In addition, the method “10’TIP” obtains visual results similar to the proposed method, but the proposed method has lower RMSE and larger PSNR and SSIM. In Figure 11, the proposed method performs best, especially for image details, e.g., the hair of the lion. The learning-based method “10’TIP” [71] obtains excellent visual and quantitative results, but it needs extra training data to generate a dictionary. We also give the corresponding error maps in Figure 12. Furthermore, more quantitative comparisons can be found in Table I. It demonstrates that the proposed method gets better quantitative performance than the other methods for almost all examples. In particular, instead of RKHS and Heaviside functions, one could use wavelet bases or frames in our framework; we have not compared that alternative yet.

Computation issue: We present the computation comparisons in Table II. From the table, we find that bicubic interpolation is the fastest.


Fig. 9. Results of “baby” with the upscaling factor of 2. First row: ground-truth image, bicubic interpolation (RMSE = 3.58; PSNR = 37.06; SSIM = 0.993), “07’SPIE” [45] (3.73; 36.70; 0.996). Second row: “08’TOG” [53] (4.32; 35.43; 0.982), “10’TIP” [71] (3.40; 37.51; 0.995), “11’IPOL” [27] (3.37; 37.58; 0.997). Third row: “11’SIAM” [26] (3.24; 37.93; 0.997), “14’TIP” [65] (4.19; 36.82; 0.985) and the proposed method (3.17; 38.19; 0.997).

However, we have to note that bicubic interpolation is optimized in MATLAB, “08’TOG” is distributed as an optimized executable³, and “11’IPOL”⁴ and “14’TIP”⁵ are sped up via the C language and Cmex, respectively. Only “10’TIP”⁶ and the proposed method are based on MATLAB code that is not optimized.

³ http://www.cse.cuhk.edu.hk/∼leojia/projects/upsampling/index.html
⁴ http://www.ipol.im/pub/art/2011/g_iics/
⁵ http://www.escience.cn/people/LingfengWang/publication.html
⁶ http://www.ifp.illinois.edu/∼jyang29/ScSR.htm



Fig. 10. Results of “dog” (upscaling factor 3) and “field” (upscaling factor 3). Compared methods: bicubic interpolation, “08’TOG” [53], “10’TIP” [71], “11’IPOL” [27], “14’TIP” [65] and the proposed method.


Fig. 11. Results of “lion” with the upscaling factor of 3. Compared methods: bicubic interpolation, “08’TOG” [53], “10’TIP” [71], “11’IPOL” [27], “14’TIP” [65] and the proposed method.


Fig. 12. Compare error maps of the proposed method and five other methods. The test image is “lion”. The error maps are brightened for better visualization.

Computation time with respect to the upscaling factor and the image size is presented in Figure 13. One can see that it is acceptable to employ our method for image super-resolution. The reported computation time is based on non-optimized Matlab code, which leaves a lot of room to speed it up; for instance, the code contains many loops that could be significantly accelerated using Cmex.

The relation between model (20) and model (20) combined with the iterative strategy: Equation (20) is the proposed model in this work, and we employ an iterative strategy on top of it to recover more image details. It is thus necessary to illustrate the relation between model (20) alone and model (20) combined with our iterative strategy. Actually, there is no significant visual difference between the two, especially in image details and edges (see the almost dark error map in Figure 14(c)). However, model (20) combined with the iterative strategy yields a lower RMSE than model (20) alone. The iterative strategy obviously results in more computation.


Fig. 13. (a) Computation time vs. upscaling factor for a low-resolution image of size 80 × 80; (b) computation time vs. size of the low-resolution image, where the size ranges from 40 × 40 to 140 × 140 and the upscaling factor is always 5.

V. CONCLUSIONS

Given a low-resolution image, the super-resolution problem was cast as an image intensity function estimation problem. Because images mainly contain smooth components and edges, we assumed that the smooth components belong to a 2D thin-plate spline based RKHS and that the edges can be represented by approximated Heaviside functions. The coefficients of the redundant basis were computed from the low-resolution image and then used to generate the high-resolution image. To recover sharp high-resolution images, we proposed an iterative scheme that preserves more image details. In addition, we applied the proposed method to image patches, which reduces computation and storage significantly. Extensive experiments showed that the proposed approach outperforms state-of-the-art methods both visually and quantitatively.

Fig. 14. (a) Result of model (20) with the iterative strategy (i.e., τ = 3 in Algorithm 2, RMSE = 10.25); (b) result of model (20) without the iterative strategy (i.e., τ = 1 in Algorithm 2, RMSE = 10.36); (c) error map between (a) and (b). Upscaling factor: 3.

REFERENCES

[1] C. B. Atkins, C. A. Bouman, and J. P. Allebach, “Tree-based resolution synthesis,” International Conference on Image Processing (ICIP), pp. 405–410, 1999.
[2] ——, “Optimal image scaling using pixel classification,” International Conference on Image Processing (ICIP), pp. 864–867, 2001.
[3] S. Borman and R. L. Stevenson, “Super-resolution from image sequences - a review,” Midwest Symposium on Circuits and Systems, pp. 374–378, 1998.
[4] P. Bouboulis, K. Slavakis, and S. Theodoridis, “Adaptive kernel-based image denoising employing semi-parametric regularization,” IEEE Transactions on Image Processing, vol. 19, pp. 1465–1479, 2010.
[5] E. Candes and C. Fernandez-Granda, “Towards a mathematical theory of super-resolution,” to appear in Communications on Pure and Applied Mathematics.
[6] D. Capel and A. Zisserman, “Super-resolution enhancement of text image sequences,” International Conference on Pattern Recognition (ICPR), vol. 1, pp. 600–605, 2000.
[7] A. Caponnetto, M. Pontil, C. Micchelli, and Y. Ying, “Universal multi-task kernels,” Journal of Machine Learning Research (JMLR), vol. 9, pp. 1615–1646, 2008.
[8] C. Carmeli, E. De Vito, and A. Toigo, “Vector valued reproducing kernel Hilbert spaces of integrable functions and Mercer theorem,” Analysis and Applications, vol. 4, pp. 377–408, 2006.
[9] A. Chambolle and T. Pock, “A first-order primal-dual algorithm for convex problems with applications to imaging,” Journal of Mathematical Imaging and Vision, vol. 40, pp. 120–145, 2011.
[10] H. Chang, D. Yeung, and Y. Xiong, “Super-resolution through neighbor embedding,” Computer Vision and Pattern Recognition (CVPR), vol. 1, 2004.
[11] P. Chatterjee, S. Mukherjee, S. Chaudhuri, and G. Seetharaman, “Application of Papoulis-Gerchberg method in image super-resolution and inpainting,” The Computer Journal, vol. 52, pp. 80–89, 2007.
[12] R. R. Coifman and S. Lafon, “Geometric harmonics: a novel tool for multiscale out-of-sample extension of empirical functions,” Applied and Computational Harmonic Analysis, vol. 21, pp. 31–52, 2006.
[13] F. Cucker and S. Smale, “On the mathematical foundations of learning,” Bulletin of the American Mathematical Society, vol. 39, pp. 1–49, 2002.
[14] G. Daniel, S. Bagon, and M. Irani, “Super-resolution from a single image,” ICCV, pp. 349–356, 2009.
[15] W. Dong, G. Shi, L. Zhang, and X. Wu, “Super-resolution with nonlocal regularized sparse representation,” Proceedings of SPIE, 2010.
[16] J. Duchon, “Fonctions splines et vecteurs aleatoires,” Tech. Report 213, Seminaire d'Analyse Numerique, Universite Scientifique et Medicale, Grenoble, 1975.
[17] ——, “Fonctions-spline et esperances conditionnelles de champs gaussiens,” Ann. Sci. Univ. Clermont Ferrand II Math, pp. 19–27, 1976.
[18] ——, “Splines minimizing rotation-invariant semi-norms in Sobolev spaces,” Constructive Theory of Functions of Several Variables, pp. 85–100, 1977.
[19] S. Farsiu, M. D. Robinson, M. Elad, and P. Milanfar, “Fast and robust multiframe super resolution,” IEEE Transactions on Image Processing, vol. 13, pp. 1327–1344, 2004.
[20] R. Fattal, “Image upsampling via imposed edge statistics,” ACM Transactions on Graphics, vol. 26, 2007.
[21] C. Fernandez-Granda and E. Candes, “Super-resolution via transform-invariant group-sparse regularization,” ICCV, 2013.
[22] G. Freedman and R. Fattal, “Image and video upscaling from local self-examples,” ACM Trans. on Graphics (TOG), vol. 30, 2011.
[23] W. T. Freeman, T. R. Jones, and E. C. Pasztor, “Example-based super-resolution,” IEEE Computer Graphics and Applications, vol. 22, pp. 56–65, 2002.
[24] W. T. Freeman and E. C. Pasztor, “Markov networks for super-resolution,” Proceedings of 34th Annual Conference on Information Sciences and Systems, 2000.


TABLE I
QUANTITATIVE COMPARISONS FOR DIFFERENT METHODS IN TERMS OF RMSE, PSNR AND SSIM (BOLD: THE BEST ONE; UNDERLINE: THE SECOND BEST). COMPARED METHODS: BICUBIC INTERPOLATION, “08’TOG” [53], “10’TIP” [71], “11’IPOL” [27], “14’TIP” [65], AND THE PROPOSED METHOD.

Image(factor)  Index  Bicubic  08'TOG  10'TIP  11'IPOL  14'TIP  Proposed
face(X2)       RMSE      4.61    5.02    4.41     4.38    5.55      4.26
               PSNR     34.86   34.11   35.24    35.30   33.25     35.53
               SSIM     0.862   0.843   0.873    0.879   0.828     0.883
baboon(X4)     RMSE     19.47   19.29   19.32    19.22   19.51     19.13
               PSNR     22.25   22.31   22.30    22.41   21.88     22.43
               SSIM     0.704   0.714   0.718    0.760   0.711     0.756
forest(X4)     RMSE     18.63   18.30   18.47    18.44   18.84     18.09
               PSNR     22.78   22.93   22.85    22.86   22.79     23.03
               SSIM     0.685   0.701   0.701    0.738   0.646     0.748
baby(X2)       RMSE      3.58    4.32    3.40     3.37    4.19      3.17
               PSNR     37.06   35.43   37.51    37.58   36.82     38.19
               SSIM     0.993   0.982   0.995    0.997   0.985     0.997
dog(X3)        RMSE      9.15    9.24    9.09     9.11    9.99      9.02
               PSNR     28.90   28.82   28.96    28.94   28.14     29.04
               SSIM     0.914   0.904   0.920    0.927   0.893     0.928
field(X3)      RMSE     12.49   12.32   12.27    12.22   13.74     12.06
               PSNR     26.20   26.32   26.36    26.39   25.37     26.51
               SSIM     0.600   0.595   0.615    0.638   0.559     0.641
lion(X3)       RMSE     11.03   10.66   10.67    10.30   11.58     10.25
               PSNR     27.28   27.58   27.57    27.87   27.13     27.89
               SSIM     0.883   0.894   0.897    0.916   0.874     0.916
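For reference, the three indices reported in Table I can be computed as follows for 8-bit images; ssim here is assumed to be the Image Processing Toolbox implementation of the index from [68], and gt and rec denote a ground-truth image and a reconstruction of the same size.

% RMSE, PSNR and SSIM between a uint8 ground truth gt and reconstruction rec.
x = double(gt);
y = double(rec);
rmse = sqrt(mean((x(:) - y(:)).^2));   % root-mean-square error
psnr_val = 20 * log10(255 / rmse);     % PSNR with an 8-bit peak value
ssim_val = ssim(rec, gt);              % structural similarity index [68]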

TABLE II
TIME COMPARISON FOR DIFFERENT METHODS (BOLD: THE BEST ONE; UNDERLINE: THE SECOND BEST). COMPARED METHODS: BICUBIC INTERPOLATION, “08’TOG” [53], “10’TIP” [71], “11’IPOL” [27], “14’TIP” [65], AND THE PROPOSED METHOD. NOTE THAT BICUBIC IS OPTIMIZED IN MATLAB, “08’TOG” IS OPTIMIZED AS AN EXECUTABLE, “11’IPOL” AND “14’TIP” ARE SPED UP VIA C AND CMEX, RESPECTIVELY. ONLY “10’TIP” AND THE PROPOSED METHOD ARE BASED ON MATLAB CODES THAT ARE NOT OPTIMIZED. (UNIT: SECOND)

Image(factor)  size of LR  Bicubic  08'TOG  10'TIP  11'IPOL  14'TIP  Proposed
face(X2)       140 × 140      0.01    3.32   40.70     0.09    0.93      9.54
baboon(X4)     120 × 120      0.02    9.11  145.76     0.26    1.10      8.20
tree(X4)       110 × 110      0.02    5.42  115.16     0.23    1.03      7.74
baby(X2)       256 × 256      0.05   13.39  144.67     0.37    1.00     43.59
dog(X3)        130 × 140      0.03   11.00  100.32     0.19    0.81     12.61
field(X3)      100 × 133      0.02    9.00   81.54     0.14    0.67      9.65
lion(X2)       168 × 168      0.03   13.98  126.49     0.25    0.86     13.75

[25] W. T. Freeman, E. C. Pasztor, and O. T. Carmichael, “Learning low-level vision,” International Journal of Computer Vision, vol. 40, pp. 25–47, 2000.
[26] P. Getreuer, “Contour stencils: total variation along curves for adaptive image interpolation,” SIAM Journal on Imaging Sciences, vol. 4, pp. 954–979, 2011.
[27] ——, “Image interpolation with contour stencils,” Image Processing On Line, vol. 1, 2011.
[28] R. Glowinski, P. L. Tallec, H. R. Sheikh, and E. P. Simoncelli, “Augmented Lagrangian and Operator Splitting Methods in Nonlinear Mechanics,” Texas, 1989.
[29] T. Goldstein and S. Osher, “The split Bregman method for l1-regularized problems,” SIAM Journal on Imaging Sciences, vol. 2, pp. 323–343, 2009.
[30] E. Gur and Z. Zalevsky, “Single-image digital super-resolution: a revised Gerchberg-Papoulis algorithm,” IAENG International Journal of Computer Science, vol. 34, pp. 251–255, 2007.
[31] B. He, M. Tao, and X. Yuan, “Alternating direction method with Gaussian back substitution for separable convex programming,” SIAM Journal on Optimization, vol. 22, pp. 313–340, 2012.
[32] B. He and H. Yang, “Some convergence properties of a method of multipliers for linearly constrained monotone variational inequalities,” Operations Research Letters, vol. 23, pp. 151–161, 1998.
[33] L. He, H. Qi, and R. Zaretzki, “Beta process joint dictionary learning for coupled feature spaces with application to single image super-resolution,” CVPR, pp. 345–352, 2013.
[34] M. Irani and S. Peleg, “Super resolution from image sequence,” Proceedings of 10th International Conference on Pattern Recognition (ICPR), pp. 115–120, 1990.
[35] P. C. Kainen, V. Kurkova, and A. Vogt, “Best approximation by linear combinations of characteristic functions of half-spaces,” Journal of Approximation Theory, vol. 122, pp. 151–159, 2003.
[36] S. H. Kang, B. Shafei, and G. Steidl, “Supervised and transductive multi-class segmentation using p-Laplacians and RKHS methods,” Preprint at uni-kl.de, 2012.
[37] C. Kim, K. Choi, K. Hwang, and J. B. Ra, “Learning-based super-resolution using a multi-resolution wavelet approach,” International Workshop on Advanced Image Technology (IWAIT), 2009.
[38] C. Kim, K. Choi, and J. B. Ra, “Improvement on learning-based super-resolution by adopting residual information and patch reliability,” IEEE International Conference on Image Processing (ICIP), pp. 1197–1200, 2009.
[39] K. Komatsu, T. Igarashi, and T. Saito, “Very high resolution imaging scheme with multiple different-aperture cameras,” Signal Processing: Image Communication, vol. 5, pp. 511–526, 1993.
[40] X. Li and M. Orchard, “New edge-directed interpolation,” IEEE Trans. Image Processing, vol. 10, pp. 1521–1527, 2001.
[41] Liyakathunisa and V. K. Ananthashayana, “Super resolution blind reconstruction of low resolution images using wavelets based fusion,” International Journal of Computer and Information Engineering, vol. 2, pp. 106–110, 2008.
[42] J. Meinguet, “Multivariate interpolation at arbitrary points made simple,” Journal of Applied Mathematics and Physics (ZAMP), vol. 30, pp. 292–304, 1979.


[43] J. Mercer, “Functions of positive and negative type, and their connection with the theory of integral equations,” Philosophical Transactions of the Royal Society of London. Series A, Containing Papers of a Mathematical or Physical Character, vol. 209, pp. 415–446, 1909.
[44] C. A. Micchelli and M. Pontil, “On learning vector-valued functions,” Neural Computation, vol. 17, pp. 177–204, 2005.
[45] N. Mueller, Y. Lu, and M. Do, “Image interpolation using multiscale geometric representations,” SPIE Proceedings, 2007.
[46] A. Nosedal-Sanchez, C. B. Storlie, T. C. M. Lee, and R. Christensen, “Reproducing kernel Hilbert spaces for penalized regression: a tutorial,” The American Statistician, vol. 66, pp. 50–60, 2012.
[47] S. Osher, M. Burger, D. Goldfarb, J. Xu, and W. Yin, “An iterative regularization method for total variation-based image restoration,” Multiscale Modeling and Simulation, vol. 4, pp. 460–489, 2005.
[48] S. C. Park, M. K. Park, and M. G. Kang, “Super-resolution image reconstruction: a technical overview,” IEEE Signal Processing Magazine, vol. 20, pp. 21–36, 2003.
[49] M. H. Quang, S. H. Kang, and T. M. Le, “Image and video colorization using vector-valued reproducing kernel Hilbert spaces,” Journal of Mathematical Imaging and Vision, vol. 37, pp. 49–65, 2010.
[50] B. Scholkopf and A. Smola, “Learning with kernels: support vector machines, regularization, optimization, and beyond,” MIT Press, Cambridge, 2002.
[51] R. Seaman and M. Hutchinson, “Comparative real data tests of some objective analysis methods by withholding,” Australian Meteorological Magazine, vol. 33, pp. 37–46, 1985.
[52] A. J. Shah and S. B. Gupta, “Image super resolution - a survey,” International Conference on Emerging Technology Trends in Electronics, Communication and Networking, 2012.
[53] Q. Shan, Z. Li, J. Jia, and C. Tang, “Fast image/video upsampling,” ACM Transactions on Graphics (TOG), vol. 27, 2008.
[54] J. Shawe-Taylor and N. Cristianini, “Kernel methods for pattern analysis,” Cambridge University Press, Cambridge, 2004.
[55] J. Sun, J. Sun, Z. Xu, and H.-Y. Shum, “Image super-resolution using gradient profile prior,” CVPR, pp. 1–8, 2008.
[56] J. Sun, N. N. Zheng, H. Tao, and H. Shum, “Image hallucination with primal sketch priors,” IEEE Conference on Computer Vision and Pattern Recognition (CVPR), vol. 2, pp. 729–736, 2003.
[57] S.-C. Tai, T.-M. Kuo, C.-H. Iao, and T.-W. Liao, “A fast algorithm for single-image super resolution in both wavelet and spatial domain,” International Symposium on Computer, Consumer and Control, pp. 702–705, 2012.
[58] Y.-W. Tai, S. Liu, M. Brown, and S. Lin, “Super resolution using edge prior and single image detail synthesis,” CVPR, pp. 2400–2407, 2010.
[59] H. Takeda, S. Farsiu, and P. Milanfar, “Kernel regression for image processing and reconstruction,” IEEE Transactions on Image Processing, vol. 16, pp. 349–366, 2007.
[60] M. F. Tappen, B. C. Russell, and W. T. Freeman, “Exploiting the sparse derivative prior for super-resolution and image demosaicing,” IEEE Workshop on Statistical and Computational Theories of Vision, 2003.
[61] J. D. Van Ouwerkerk, “Image super-resolution survey,” Image and Vision Computing, vol. 24, pp. 1039–1052, 2006.
[62] F. Viola, A. W. Fitzgibbon, and R. Cipolla, “A unifying resolution-independent formulation for early vision,” CVPR, pp. 494–501, 2012.
[63] G. Wahba, “Spline models for observational data,” SIAM, CBMS-NSF Regional Conference Series in Applied Mathematics, vol. 59, 1990.
[64] G. Wahba and J. Wendelberger, “Some new mathematical methods for variational objective analysis using splines and cross-validation,” Monthly Weather Review, vol. 108, pp. 1122–1145, 1980.
[65] L. Wang, H. Wu, and C. Pan, “Fast image upsampling via the displacement field,” IEEE Trans. Image Processing, vol. 23, pp. 5123–5135, 2014.
[66] L. Wang, S. Xiang, G. Meng, H. Wu, and C. Pan, “Edge-directed single-image super-resolution via adaptive gradient magnitude self-interpolation,” IEEE Trans. Circuits and Systems for Video Technology, vol. 23, pp. 1289–1299, 2013.
[67] Y. Wang, J. Yang, W. Yin, and Y. Zhang, “A new alternating minimization algorithm for total variation image reconstruction,” SIAM Journal on Imaging Sciences, vol. 1, pp. 248–272, 2008.
[68] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Processing, vol. 13, pp. 600–612, 2004.
[69] Q. Xie, H. Chen, and H. Cao, “Improved example-based single-image superresolution,” International Congress on Image and Signal Processing (CISP), vol. 3, pp. 1204–1207, 2010.
[70] J. Yang, Z. Wang, L. Zhe, and T. Huang, “Coupled dictionary training for image super-resolution,” IEEE Transactions on Image Processing, vol. 21, pp. 3467–3478, 2011.
[71] J. Yang, J. Wright, T. Huang, and Y. Ma, “Image super-resolution via sparse representation,” IEEE Transactions on Image Processing, vol. 19, pp. 2861–2873, 2010.
[72] J. Yang, J. Wright, Y. Ma, and T. Huang, “Image super-resolution as sparse representation of raw image patches,” IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1–8, 2008.
[73] R. Zeyde, M. Elad, and M. Protter, “On single image scale-up using sparse-representations,” Curves and Surfaces, Lecture Notes in Computer Science, vol. 6920, pp. 711–730, 2012.
[74] L. Zhang and X. Wu, “An edge-guided image interpolation algorithm via directional filtering and data fusion,” IEEE Trans. Image Processing, vol. 15, pp. 2226–2238, 2006.
[75] Y. Zhao, J. Yang, Q. Zhang, S. Lin, Y. Cheng, and Q. Pan, “Hyperspectral imagery superresolution by sparse representation and spectral regularization,” EURASIP Journal on Advances in Signal Processing, 2011.
[76] H. Zheng, A. Bouzerdoum, and S. L. Phung, “Wavelet based nonlocal-means superresolution for video sequences,” IEEE International Conference on Image Processing (ICIP), pp. 2817–2820, 2010.

Liang-Jian Deng received the B.S. degree from the School of Mathematical Sciences, University of Electronic Science and Technology of China (UESTC), Chengdu, China, in 2010. He is currently pursuing the Ph.D. degree with the School of Mathematical Sciences of UESTC. His current research interest is image processing, including image super-resolution, deblurring and denoising, inpainting, and dehazing.

Weihong Guo received the B.S. degree in Computational Math from Minzu University of China in 1999, and the M.S. degree in Statistics and the Ph.D. degree in Applied Math, both from the University of Florida in 2007. She was a Math Assistant Professor at the University of Alabama from 2007 to 2009 and is now an Applied Math Associate Professor at Case Western Reserve University, OH. Her research interests include variational image reconstruction, image super-resolution, and image segmentation.

Ting-Zhu Huang is a professor at the School of Mathematical Sciences, University of Electronic Science and Technology of China. His research interests include numerical linear algebra and scientific computation with applications in electromagnetics, as well as modeling and algorithms for image processing. He has published over 100 papers in international journals, including SIAM J. Sci. Comput., SIAM J. Matrix Anal. Appl., IMA J. Numerical Anal., J. Comput. Phys., Computer Phys. Comm., Numerical Lin. Alg. Appl., Automatica, IEEE Trans. Antennas and Propagation, IEEE Trans. Geoscience and Remote Sensing, Information Sciences, J. Optical Society of America A, Computing, Lin. Alg. Appl., Appl. Math. Letters, Comput. Math. Appl., Appl. Math. Modelling, J. Franklin Institute, J. Comput. Appl. Math., and Comm. Nonlin. Sci. Numer. Simul. He has received the Science and Technology Progress Award of Sichuan Province and of the Chinese Information Ministry several times. Dr. Huang has served on the editorial boards of several international journals.

