
J Math Imaging Vis (2012) 44:480–490. DOI 10.1007/s10851-012-0339-x

Zoom Dependent Lens Distortion Mathematical Models

Luis Alvarez · Luis Gómez · Pedro Henríquez

Published online: 11 May 2012. © Springer Science+Business Media, LLC 2012

Abstract We propose new mathematical models to study the variation of lens distortion models when modifying the zoom setting of zoom lenses. The new models are based on a polynomial approximation accounting for the variation of the radial distortion parameters through the range of zoom lens settings, and on the minimization of a global error energy measuring the distance between sequences of distorted aligned points and straight lines after lens distortion correction. To validate the performance of the method we present experimental results on calibration pattern images and on sport event scenarios using broadcast video cameras. We find, experimentally, that using just a second order polynomial approximation of the lens distortion parameter zoom variation, the quality of lens distortion correction is as good as the one obtained frame by frame using an independent lens distortion model for each frame.

Keywords Camera calibration · Distortion model · Zoom dependent model · Radial distortion · 3D scenarios

L. Alvarez · P. Henríquez
CTIM, Departamento de Informática y Sistemas, Universidad de Las Palmas de Gran Canaria, Campus de Tafira, 35017, Las Palmas, Spain

L. Alvarez, e-mail: [email protected]

P. Henríquez, e-mail: [email protected]

L. Gómez (corresponding author)
CTIM, Departamento de Ingeniería Electrónica y Automática, Universidad de Las Palmas de Gran Canaria, Campus de Tafira, 35017, Las Palmas, Spain, e-mail: [email protected]

1 Introduction

It is well known that camera lenses induce image distortion. The magnitude of such distortion depends on factors such as lens quality and lens zoom. One important consequence of lens distortion is that the projections of 3D straight lines in the image are curves (no longer straight lines). Usually, the lens distortion models used in computer vision depend on radial functions of image pixel coordinates and can be estimated using image information alone.

The basic standard lens distortion model used in computer vision (see for instance [1–3]) is given by the following expression:

x̂ ≡ L(x) = x_c + L(r)(x − x_c),   (1)

where x = (x, y) is the original (distorted) image point, x̂ = (x̂, ŷ) is the corrected (undistorted) point, x_c = (x_c, y_c) is the center of the camera distortion model, usually near the image center, r = √((x − x_c)^2 + (y − y_c)^2), and L(r) is the function which defines the shape of the distortion model. L(r) is usually approximated by the polynomial

L(r) = 1 + k_1 r^2 + k_2 r^4 + k_3 r^6 + ···,   (2)

where the vector k = (k_1, k_2, ..., k_{N_k})^T represents the radial distortion parameters. The complexity of the model is given by the polynomial degree used to approximate L(r) (i.e. the dimension of k). Non-radial terms accounting for tangential or decentering effects can also be included in the models [3–6], although for standard camera lenses they are usually neglected.
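As an illustration of how (1)–(2) act on a single pixel, the following minimal Python sketch applies a two-coefficient radial model to a distorted point; the center and coefficient values are placeholders chosen only for the example, not values estimated in the paper.

    import numpy as np

    def correct_point(x, x_c, k):
        # Radial model (1)-(2): x_hat = x_c + L(r) * (x - x_c),
        # with L(r) = 1 + k[0]*r^2 + k[1]*r^4 + ...
        x = np.asarray(x, dtype=float)
        x_c = np.asarray(x_c, dtype=float)
        r = np.linalg.norm(x - x_c)
        L = 1.0 + sum(ki * r ** (2 * (i + 1)) for i, ki in enumerate(k))
        return x_c + L * (x - x_c)

    # Placeholder values: center near the middle of a 4288 x 2848 image and
    # two small radial coefficients of plausible magnitude.
    print(correct_point((100.0, 200.0), x_c=(2144.0, 1424.0), k=(2.0e-9, 1.0e-17)))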

The distortion models given by (1) are well known, simple and can be estimated using just image information. In particular, Alvarez, Gómez and Sendra [7] propose an algebraic method to compute lens distortion models by correcting the line distortion induced by the lens distortion.


Apart from the above, which is normally applied to monofocal cameras, the lens zoom settings must be taken into account to correctly calibrate a zoom lens camera used in a real scenario. This has been especially addressed within the scope of close-range photogrammetry measurement in recent years [8–10]. By modifying the focus and the zoom values, a zoom camera can be adjusted to several fields of view, depths of field and even lighting conditions. Applications of zoom lenses are widely found in 3D scene depth reconstruction [11], telepresence [12], robot navigation [13, 14] and visual tracking [15, 16], among others.

This paper is organized as follows: Sect. 2 reviews the most relevant related work on lens distortion and zoom lens camera models. In Sect. 3 we present some fundamental aspects of zoom lens geometry. The proposed lens distortion model and the experimental setup are discussed in Sect. 4. Experimental results are shown in Sect. 5, followed by some conclusions in Sect. 6.

2 Related Works

Zoom-dependent camera calibration is traditionally devoted to modelling the variation of the camera parameters (the matrix of camera intrinsic parameters and the rotation and translation matrices) over a predefined range between minimum and maximum zoom settings. See [17] for a zoom model accounting for the intrinsic variation only, or [18] for a model regarding the variation of both intrinsic and extrinsic parameters.

To calibrate a zoom lens camera system within a given zoom range, a number of lens settings and the related calibration data are usually stored in a look-up table [19]. Thus, for each lens setting, a considerable number of measurements, requiring a large amount of time, have to be made. The collected data are then processed using a least-squares method (Levenberg-Marquardt or another optimization method) and, by applying a convenient model, the result is the matrix of camera intrinsic parameters expressed as a function of the zoom setting [17, 20, 21]. Because radial lens distortion varies with both zoom and focus, the effect of lens distortion is usually included as part of the intrinsic parameters and is estimated during the calibration procedure by iteratively undistorting the images generated by the camera [21, 22]. For a detailed analysis of the effect of radial lens distortion for consumer-grade cameras see [9], where it is concluded that,

– the variation of the radial distortion is non-linear with the zoom,

– the radial distortion reaches a maximum at the shortest focal length, even in cases where zero crossings occur.

Fig. 1 Pinhole projection model. f is the effective focal distance

Besides, the authors of [9] show that, for medium-accuracy digital close-range photogrammetry applications, the variation of the first radial distortion coefficient along the zoom field can be modelled by

k_1^{c_i} = d_0 + d_1 c_i^{d_2},   (3)

where c_i is the principal distance, the d_i are empirical coefficients and, for the cameras analyzed in [9], d_2 ranges from around −0.2 to −4.1. These results are for a focus setting spanning from 5 to 21 mm.

In [23] the authors discuss a method for automatically correcting the radial lens distortion in a zoom lens video camera system. The method uses 1-parameter lens distortion models (i.e. only k_1 is considered) and two different local models to account for barrel and pincushion distortion. After sampling some images (video frames) with different focal lengths, the authors use the POVIS hardware system to estimate the focal length and k_1 for each frame; a least-squares method is then applied to fit a quadratic polynomial for the first radial distortion coefficient k_1, having as a variable the inverse of the focal length f,

k_1(1/f) = c_0 + c_1 (1/f) + c_2 (1/f)^2,   (4)

where {c_i} represent the polynomial coefficients. It can be noted that, to build a zoom dependent lens distortion model for a set of m images, it is required to estimate the frame by frame lens distortion model by minimizing an appropriate energy function accounting for the deviation between distorted points and corrected (undistorted) ones.

3 Zoom Lens Geometry

We assume that, after lens distortion correction, camera image formation follows the pinhole projection model, which is widely used in computer vision. In Fig. 1 we illustrate the basic pinhole model, where f is the effective focal distance and d is the distance of a scene point to the camera projection plane. Using trigonometric relations we can obtain:

r_c / f = R / (d − f).   (5)

In the case of a real zoom lens, the effective focal distance f depends on 2 adjustable lens control parameters: (1) the zoom lens setting and (2) the in-focus distance parameter, that is, the distance of the projection image plane to points in the scene where the lens is focused.

Fig. 2 Basic thin lens model. f_∞ is the focal distance for points situated at infinity. f is the effective focal distance when the lens is focused at distance d

In Fig. 2 we illustrate the basic thin lens model, where we can appreciate the variation of the effective focal distance with respect to the focused distance.

The zoom lens setting is the most significant parameter for the effective focal distance f. The maximum zoom lens setting interval is usually provided by the manufacturer. For instance, in the numerical experiments we use a NIKKOR AF-S 18-200 lens with maximum zoom lens setting interval [18, 200]. This maximum interval is obtained by an adequate combination of zoom lens and in-focus distance settings; if we fix the in-focus distance setting, the interval of effective focal distances is smaller. As we will see in the numerical experiments, this interval is about [20.56, 127.36] for an in-focus distance of 1185 mm.

4 Proposed Lens Distortion Model

We start by introducing some basic concepts. We use the general approach of estimating L(r) by imposing the requirement that, after lens distortion correction, the projections of 3D lines in the image have to be 2D straight lines. This approach has been used in [1, 7] to minimize the following objective error function, which is expressed in terms of the distance of the primitive points to the associated line after lens distortion correction:

E({k_i}) = Σ_{l=1}^{N_l} Σ_{p=1}^{N_p(l)} dist^2(L(x_{l,p}), S_l) / (N_l · N_p(l)),   (6)

where N_l is the number of line primitives detected in the image, N_p(l) is the number of extracted points associated to a line primitive, x_{l,p} is a primitive point associated to line S_l, L(·) is the lens distortion model given by (1), and E({k_i}) is the average of the squared distances of the primitive line points to a straight line after lens distortion correction. This error function is widely applied and the minimization can be carried out through any (gradient-like) optimization method.
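A concrete reading of (6) is sketched below in Python; it is an assumed minimal implementation, not the authors' code: each line primitive is a set of distorted points, a two-coefficient radial model corrects them, and the associated straight line S_l is replaced by a total-least-squares fit to the corrected points.

    import numpy as np

    def line_sq_dists(pts):
        # Squared orthogonal distances of 2D points to their total-least-squares line.
        d = pts - pts.mean(axis=0)
        normal = np.linalg.svd(d, full_matrices=False)[2][-1]
        return (d @ normal) ** 2

    def frame_energy(lines, x_c, k1, k2):
        # Energy (6): mean over line primitives of the mean squared distance
        # of the corrected points to their fitted straight line.
        per_line = []
        for pts in lines:                      # pts: (N_p x 2) array of distorted points
            delta = pts - x_c
            r2 = np.sum(delta ** 2, axis=1)
            corrected = x_c + (1.0 + k1 * r2 + k2 * r2 ** 2)[:, None] * delta
            per_line.append(line_sq_dists(corrected).mean())
        return float(np.mean(per_line))

Minimizing this function over (k_1, k_2), for instance with a gradient-like method as the paper does, yields the frame by frame model.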

The main goal of this paper is to model the variation of the lens distortion model parameters with respect to the effective focal distance f. First we observe that, using (5), we obtain

R = d · r_c · (1/f) − r_c,   (7)

so, in particular, the variation of R is linear with respect to 1/f. We expect that, for the in-focus plane, the lens distortion magnitude depends on R and therefore the natural choice to model the variation of a lens distortion parameter k_i is a function of 1/f, that is

k_i(f) ≡ P_i(1/f),   (8)

where k_i(f) represents the lens distortion parameter k_i for the effective focal length f. In fact, in [23], the authors divide the focal distance interval into 2 regions (pincushion and barrel areas) and in each area a different polynomial approximation in the 1/f variable is used to model the zoom variation. Probably, as they use a single-parameter lens distortion model, dividing the focal length interval is required to improve the accuracy. As we use more complex lens distortion models, we can deal with the whole focal distance range without separating the interval into several regions.

In what follows we will assume that P_i(·) is approximated by a polynomial function, that is:

P_i(x) ≡ a^i_0 + a^i_1 x + a^i_2 x^2 + ··· + a^i_N x^N,   (9)

therefore the lens distortion model depends on {a^i_n} and we denote by

L_{a^i_n}(f, r) ≡ 1 + P_1(1/f) r^2 + P_2(1/f) r^4 + ···,   (10)

the radial lens distortion model for the effective focal distance f, and by

x̂ = L_{a^i_n}(f, x),   (11)

the lens distortion correction of point x using the above lens distortion model.
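To make the notation concrete, the sketch below evaluates (9)–(11) for a two-coefficient radial model whose parameters are quadratic polynomials in 1/f; the polynomial coefficients are illustrative placeholders, not fitted values.

    import numpy as np

    def zoom_coefficients(f, a1, a2):
        # k_i(f) = P_i(1/f), Eqs. (8)-(9); a1, a2 hold (a_0, a_1, ..., a_N) of P_1, P_2.
        u = 1.0 / f
        k1 = np.polynomial.polynomial.polyval(u, a1)
        k2 = np.polynomial.polynomial.polyval(u, a2)
        return k1, k2

    def correct_point_zoom(x, x_c, f, a1, a2):
        # Eqs. (10)-(11): lens distortion correction for effective focal distance f.
        x = np.asarray(x, float)
        x_c = np.asarray(x_c, float)
        k1, k2 = zoom_coefficients(f, a1, a2)
        r2 = np.sum((x - x_c) ** 2)
        return x_c + (1.0 + k1 * r2 + k2 * r2 ** 2) * (x - x_c)

    # Placeholder quadratic polynomials in 1/f (not the paper's fitted values).
    a1 = [1.0e-9, -1.0e-7, 1.0e-5]
    a2 = [1.0e-17, 1.0e-15, -1.0e-13]
    print(correct_point_zoom((100.0, 200.0), (960.0, 540.0), 50.0, a1, a2))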

We propose to estimate the polynomial coefficients {a^i_n} by minimizing the error function:

E_G({a^i_n}) = Σ_{m=1}^{M} Σ_{l=1}^{N_l(m)} Σ_{p=1}^{N_p(l,m)} dist^2(L_{a^i_n}(f_m, x_{m,l,p}), S_{m,l}) / (M · N_l(m) · N_p(l,m)),   (12)

where M is the number of images, f_m is the effective focal distance associated to image m, N_l(m) is the number of line primitives detected in image m, N_p(l,m) is the number of extracted points associated to a particular line primitive, and x_{m,l,p} is a primitive point associated to line S_{m,l}. We observe that E_G({a^i_n}) is the average of the frame distortion errors given by (6) when the distortion coefficients are estimated using the polynomial models.
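Under the same assumptions as the per-frame sketch after (6) (two radial coefficients, total-least-squares lines standing in for S_{m,l}), the global energy (12) is simply the average of the per-frame energies over the M selected images, with k_1 and k_2 supplied by the quadratic polynomials in 1/f. The commented line shows how a generic optimizer could be used in place of the paper's simple gradient method, purely as an illustration.

    import numpy as np
    from scipy.optimize import minimize

    def line_sq_dists(pts):
        d = pts - pts.mean(axis=0)
        return (d @ np.linalg.svd(d, full_matrices=False)[2][-1]) ** 2

    def global_energy(a, frames, x_c):
        # a = (a^1_0, a^1_1, a^1_2, a^2_0, a^2_1, a^2_2); frames = [(f_m, lines_m), ...].
        errors = []
        for f, lines in frames:
            u = 1.0 / f
            k1 = a[0] + a[1] * u + a[2] * u ** 2   # P_1(1/f)
            k2 = a[3] + a[4] * u + a[5] * u ** 2   # P_2(1/f)
            per_line = []
            for pts in lines:
                delta = pts - x_c
                r2 = np.sum(delta ** 2, axis=1)
                corrected = x_c + (1.0 + k1 * r2 + k2 * r2 ** 2)[:, None] * delta
                per_line.append(line_sq_dists(corrected).mean())
            errors.append(np.mean(per_line))
        return float(np.mean(errors))              # Eq. (12)

    # a_init comes from the least-squares initialization of Sect. 4.1; `frames`
    # and `x_c` come from the extracted primitives and the image center.
    # result = minimize(global_energy, a_init, args=(frames, x_c), method="Nelder-Mead")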

Concerning the distortion center variation with respect to the effective focal length, we do not assume any model because we do not expect a significant variation. As we will see in the numerical experiments, the influence of the distortion center variation is negligible, so we assume that the lens distortion center is the image center.

Fig. 3 Calibration pattern composed of a collection of 31 × 23 white strips

In what follows, we refer to the lens distortion model estimated independently for each frame within the zoom range of interest as the frame by frame model. For an n-degree polynomial in the Taylor expansion of (1), the frame by frame model for m images is the set of radial distortion coefficients provided by minimizing (6), expressed as

k = {(k^p_1, k^p_2, ..., k^p_n), p = 1, 2, ..., m}.   (13)

4.1 Experimental Setup

To validate the proposed model we have built a calibration pattern (see Fig. 3) composed of a collection of 31 × 23 white strips. The dimensions of the calibration pattern are 1330 × 1010 mm. The camera is fixed in front of the calibration pattern and we take a number of photos, changing the zoom lens setting of the camera to cover its whole value interval. For each image we estimate the edge border lines of the white strips (using, for instance, the method proposed in [24]), which provide the distorted lines we use to validate our model. Moreover, for each image, the effective focal distance is estimated using the expression

f = d · r_c / (R + r_c),   (14)

obtained from (5), where d is the distance from the camera projection plane to the calibration pattern, r_c is the distance between 2 consecutive strips in the image and R is the distance between 2 consecutive strips in the calibration pattern.

Note that small errors in the value of the distance-to-target d in expression (14) would somehow be compensated during the global minimization and, therefore, would not affect the efficiency of the model.
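Expression (14) is straightforward to evaluate once d, r_c and R have been measured; the numbers below are placeholders chosen only to make the sketch run, with r_c assumed to be the strip spacing in the image converted to sensor millimetres.

    def effective_focal_distance(d, r_c, R):
        # Eq. (14): f = d * r_c / (R + r_c), obtained from the pinhole relation (5).
        return d * r_c / (R + r_c)

    # Placeholder measurements in millimetres: camera-to-pattern distance,
    # strip spacing in the image (sensor units) and on the physical pattern.
    print(effective_focal_distance(d=2000.0, r_c=1.08, R=42.0))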

To summarize, the procedure we use to validate the proposed approach using the calibration pattern can be divided into the following steps:

1. We take a collection of photos of the calibration pattern for a fixed in-focus distance, covering its whole zoom lens setting value interval.
2. We extract the distorted lines in each image.
3. For each image we compute the effective focal distance using (14).
4. We estimate the zoom lens distortion polynomial coefficient model by minimizing expression (12).
5. We analyze the lens distortion error obtained using: (i) the proposed zoom dependent lens distortion model for the whole zoom lens setting interval, (ii) the lens distortion model obtained independently for each image by minimizing the energy error (6), and (iii) the original lens distortion error without using any lens distortion correction.

Expression (12) is minimized by a simple gradient method applying an appropriate step length to account for the differences in magnitude of the variables (note that they range from about 1e-05 to 1e-17). We estimate the initial solution as follows (a minimal sketch of this initialization is given after this list):

1. We select some images and calculate the distortion coefficients for the frame by frame model.
2. We fit the quadratic polynomials (10) to the distortion coefficients from the frame by frame model using least squares.

From the experiments carried out, a reduced number of estimations through the frame by frame model is enough. The results shown in the next sections were calculated using only three images for the frame by frame model: the image corresponding to the maximum focus f_max, the image corresponding to the minimum focus f_min, and an image captured with the focus (f_min + f_max)/2.
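A minimal sketch of this initialization, under the assumption of a two-coefficient model: the frame by frame estimates at three focal distances are fitted by quadratic polynomials in 1/f with ordinary least squares. The three (f, k_1, k_2) triples below are illustrative placeholders, not the paper's measurements.

    import numpy as np

    # Frame by frame estimates at f_min, (f_min + f_max)/2 and f_max (placeholders).
    f = np.array([20.55, 73.95, 127.35])         # effective focal distances (mm)
    k1 = np.array([3.0e-9, 1.5e-9, 1.0e-9])      # first radial coefficient
    k2 = np.array([-2.0e-17, 0.5e-17, 1.5e-17])  # second radial coefficient

    u = 1.0 / f
    # Quadratic least-squares fits k_i ~ P_i(1/f); np.polyfit returns the highest degree first.
    p1 = np.polyfit(u, k1, deg=2)
    p2 = np.polyfit(u, k2, deg=2)

    # Initial coefficient vector (a^1_0,...,a^1_2, a^2_0,...,a^2_2) for the minimization of (12).
    a_init = np.concatenate([p1[::-1], p2[::-1]])
    print(a_init)

With exactly three samples the quadratic interpolates them exactly; the least-squares fit only makes a difference when more frames are used for the initialization.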

5 Numerical Experiments

We have performed numerical experiments in two different scenarios. First, we check the accuracy of the proposed model using the calibration pattern introduced above. Then, we apply the model to a real scenario: a video sequence of a soccer match with a significant zoom lens variation. In Fig. 4 we show video frames of both scenarios.

Fig. 4 Example of image frames of the test video sequences used in the numerical experiments: calibration pattern (left), real soccer match (right)

Fig. 5 On the left, primitives used in the numerical experiments for the geometric pattern: top (f = 20.55 mm) and bottom (f = 127.35 mm). On the right, images and primitives with distortion removed by applying the proposed zoom model. We can appreciate the variation of the lens distortion models with respect to f, looking at the curvature of the image boundary: in the case of f = 127.35 mm, the lens distortion correction is significantly smaller than in the case of f = 20.55 mm

In the calibration pattern experiment, we use a Nikon D90 camera with a NIKKOR AF-S 18-200 mm lens and a CCD geometry of 23.7 × 15.6 mm. The resolution of the captured images is 4288 × 2848 pixels. In the soccer video sequence, we deal with a professional HD video TV camera with a frame resolution of 1920 × 1080 pixels (the class of video camera typically used in broadcasting sport events). In this case, the camera manufacturer and zoom lens range are unknown. We estimate, for each frame, the effective focal distance taking into account the real dimensions of the soccer court (which are known a priori) and the size of the projected soccer court in the image. This size can be computed using the homography from the image soccer court to the real soccer model. Such a homography can be estimated using the approach proposed in the classical Zhang calibration method [26]. Note that expression (14) is not used in this case.
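As a sketch of the homography step, the following Python/OpenCV fragment estimates the homography from a few court-model-to-image correspondences and measures the projected length of a known court segment; the correspondences are made-up placeholders, cv2.findHomography is used here instead of a full Zhang calibration, and the conversion of the projected size into an effective focal distance is not reproduced.

    import numpy as np
    import cv2

    # Hypothetical correspondences: court model points (metres) <-> image points (pixels).
    court_pts = np.array([[0, 0], [0, 68], [16.5, 13.85], [16.5, 54.15]], dtype=np.float32)
    image_pts = np.array([[210, 830], [1650, 790], [640, 620], [1280, 600]], dtype=np.float32)

    H, _ = cv2.findHomography(court_pts, image_pts, method=0)

    # Project the two endpoints of the goal line (68 m apart in the model) and
    # measure the corresponding distance in the image.
    seg = cv2.perspectiveTransform(court_pts[:2].reshape(-1, 1, 2), H).reshape(-1, 2)
    print(np.linalg.norm(seg[1] - seg[0]) / 68.0, "pixels per metre along the goal line")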

We remark on the significant differences between the selected test scenarios, which point out the wide applicability of the proposed methodology.

For the calibration pattern, in Fig. 5 we present the image primitives used to calculate the lens distortion models to be embedded in the proposed zoom model. The selected primitives are shown for two cases corresponding to the minimum zoom (zoom lens setting = 18 mm, which corresponds to an effective focal distance f = 20.55 mm) and to the maximum zoom (zoom lens setting = 200 mm and an effective focal distance f = 127.35 mm). We notice that there is a significant difference between the effective focal distance range and the range provided by the lens manufacturer. The reason is, on the one hand, that the effective focal length depends on the in-focus distance and, on the other hand, that the effective focal length is obtained using (14), where some elements such as d and R are estimated by hand and therefore without high precision. In any case, the focal length range is not very relevant in our work because a variation in the focal length range is compensated in the estimation of the coefficients of the lens distortion model, so we do not expect a significant difference in the lens distortion correction quality. The image primitives consist of a set of edge points belonging, respectively, to the horizontal and vertical white stripes of the pattern. Their extraction can be performed using any edge detector (see for instance [24] or [25] for further details). For instance, the number of primitives extracted was, for the case f = 20.55 mm, 303,623 points and 103 lines, and, for the case f = 127.35 mm, 52,433 points and 18 lines. The total number of primitive points extracted in the 50 images used in the experiment was 8,166,660.

Fig. 6 On the left, primitives used in the numerical experiments for the soccer video sequence: top (f = 45.16 mm) and bottom (f = 156.55 mm). On the right, images and primitives with distortion removed by applying the proposed zoom model. We can appreciate the variation of the lens distortion models with respect to f, looking at the curvature of the image boundary: in the case of f = 156.55 mm, the lens distortion correction is significantly smaller than in the case of f = 45.16 mm. The zoom focus is unknown and has been estimated (see Sect. 5)

For the case of the soccer video sequence, Fig. 6 shows the primitives selected to account for the radial distortion model. In this case, the total number of available primitive points is smaller than in the case of the calibration pattern, and it corresponds to the line centers of the white strips appearing on the soccer field (sideline, halfway line, goal line and the lines belonging to the goal box and to the penalty box), as can be appreciated in the figure. Note that these primitives may not always be available (visible); thus, calibrating this kind of images is a challenging problem because there are only a few visible primitives with which to perform the calibration.

We represent the cases for two zoom settings, f = 45.16 mm and f = 156.55 mm, which correspond to the effective focal distance extrema of the video sequence frames. The number of primitives extracted was, for the case f = 45.16 mm, 1060 points and 13 lines, and, for the case f = 156.55 mm, 957 points and 8 lines. The total number of primitive points extracted in the 55 images used in the experiment was 55,447.

We first evaluated the performance of the proposed zoom lens distortion model for the calibration geometric pattern and, after a detailed evaluation, we applied the zoom lens distortion model to the soccer video sequence.

5.1 Results for the Calibration Pattern

We note that the calibration pattern can be seen as an ideal zoom experiment with a dense distribution of line primitives, which allows us to accurately analyze the lens distortion model behavior.

We first evaluated the influence of the lens distortion center variation through the focal distance range (f = 20.55 mm to f = 127.35 mm). The center of radial distortion is also optimized when estimating the k_1 and k_2 distortion coefficients for the frame by frame model. We used the algebraic method [7] to estimate these coefficients and, by means of the steepest descent algorithm, we improved the solution by minimizing the RMS distance function, as explained in [7].


Fig. 7 Displacement of the center of distortion for the geometric pattern

Fig. 8 Relative error improvement percentage when optimizing the lens distortion center

According to the results obtained, we can conclude that the variation of the lens distortion center can be neglected for two reasons. First, as shown in Fig. 7, the displacement of the center of distortion for the distortion model (13) is very small (with a maximum norm of ≈ 4 pixels). Second, as shown in Fig. 8, the relative improvement percentage of the energy error (6) between optimizing the lens distortion center and not optimizing it (taking the distortion center as the image center) is very small (with a maximum percentage of 1.5 %).

Therefore we can conclude that the influence of the lens distortion center variation is negligible and, in what follows, we will consider that the lens distortion center is the image center.

Fig. 9 Variation of the estimated distortion coefficients k_1 (top) and k_2 (bottom) with the inverse of the focal distance, and the estimated second order polynomial approximation. We observe that the polynomials fit the distortion parameter distribution accurately (especially in the case of k_1, which is the more important parameter). Moreover, the deviations of k_1 and k_2 with respect to the polynomials move in opposite directions, so we expect a compensation in terms of lens distortion model correction

The variation of the estimated distortion coefficients along the zoom field using the frame by frame model can be seen in Fig. 9 (represented as a function of the inverse of the focal distance). We also represent the estimated second order polynomial approximation. We observe that the polynomials fit the distortion parameter distribution accurately (especially in the case of k_1, which is the more important parameter).


Moreover, we can observe that, in general, at the focal distances where k_1 deviates from its polynomial approximation, the deviation of k_2 from its polynomial moves in the opposite direction, so we expect a compensation in terms of lens distortion model correction.

Figure 10 shows the performance of the proposed zoom model. In this figure we present, for each frame: (i) the original lens distortion error (6) without using any lens distortion correction, (ii) the error of the lens distortion model obtained independently for each image by minimizing the energy error (6), and (iii) the energy error (6) computed using the proposed polynomial zoom dependent lens distortion model. We observe that the quality of the distortion correction obtained using the proposed zoom model is as good as the one obtained independently frame by frame.

In Table 1 we summarize the RMS values for the 3 models presented in Fig. 10. From these results, it can be appreciated that the relative RMS difference between the proposed zoom dependent model and the model estimated independently frame by frame is just around 1.43 % (for the results in mm). Since the calibration pattern stands in a fronto-parallel position with respect to the camera projection plane, the pixel to mm unit conversion is trivial using (5).

Fig. 10 Distance function for the geometric pattern estimated by the three models (dashed line: original error function, solid line: proposed quadratic zoom model, dotted line: frame by frame model (13))

In Fig. 11 the maximum distortion error is shown. This error has been calculated for an image pixel located at a corner point (the top corner has been selected). Note that the maximum error spans around 200 pixels of correction for the radial distortion across the zoom range.

In this experiment, the optimized lens distortion zoom model coefficients are given by the polynomials:

k_1(f) = 2.26 × 10^-9 − 7.19 × 10^-7 (1/f) + 2.14 × 10^-5 (1/f)^2,

k_2(f) = 1.74 × 10^-17 + 7.36 × 10^-15 (1/f) − 6.55 × 10^-13 (1/f)^2.
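These two expressions make the coefficients available for any effective focal distance in the calibrated range; a trivial sketch, with the numerical values copied from the polynomials above:

    def k1_pattern(f):
        return 2.26e-9 - 7.19e-7 / f + 2.14e-5 / f ** 2

    def k2_pattern(f):
        return 1.74e-17 + 7.36e-15 / f - 6.55e-13 / f ** 2

    # Distortion coefficients at an intermediate effective focal distance (mm).
    print(k1_pattern(60.0), k2_pattern(60.0))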

Fig. 11 Maximum distortion error for the pattern estimated using the proposed model. The error is calculated as ‖x̂ − x_c‖ − ‖x − x_c‖, where x̂ is the corrected (undistorted) point, x is a point located at a corner of the image, and x_c is the center of distortion

Table 1 Summary of results for the geometric pattern (RMS values)

Distortion model comparison                    Pixels    Millimeters
Residue without using lens distortion model    4.2991    2.0421
Residue from frame by frame model              1.8192    0.8482
Residue from proposed zoom polynomial model    1.8241    0.8604

5.2 Results for the Soccer Video Sequence

The soccer video sequence we use was taken by a broadcast video camera and was provided to us by the MEDIAPRODUCCION S.L. company. The video sequence is in HD resolution (1920 × 1080 pixels) and lasts 28 seconds (841 frames). The zoom setting ranges from 45.16 mm to 156.55 mm. To estimate the proposed zoom dependent polynomial model we have selected 55 frames covering the whole range of effective focal distances. We have obtained the following polynomial models for the lens distortion coefficients:

k_1(f) = 2.65 × 10^-8 − 8.88 × 10^-6 (1/f) + 2.86 × 10^-4 (1/f)^2,

k_2(f) = 1.99 × 10^-14 + 3.07 × 10^-12 (1/f) − 1.04 × 10^-10 (1/f)^2.

Fig. 12 Distance function for the soccer video sequence estimated by the three models (dashed line: original error function, solid line: proposed quadratic zoom model, dotted line: frame by frame model (13))

In Fig. 12 the performance of the proposed zoom model is illustrated. As in the calibration pattern experiment, we present a comparison of the lens distortion error measures for (i) the original lens distortion error (6) without using any lens distortion correction, (ii) the error of the lens distortion model obtained independently for each image by minimizing the energy error (6), and (iii) the energy error (6) computed using the proposed polynomial zoom dependent lens distortion model. We observe that the quality of the distortion correction obtained using the proposed zoom model is as good as the one obtained independently frame by frame.

In Table 2 we summarize the RMS values for the 3 models presented in Fig. 12. The single zoom model has also been included for comparison. From these results, it can be appreciated that the relative RMS difference between the proposed zoom dependent model and the model estimated independently frame by frame is just around 1.33 %. These results are expressed only in pixels because, as the camera is not in a fronto-parallel position with respect to the view, we cannot associate a single real length measure (meters) with the pixel size.

Fig. 13 Maximum distortion error for the soccer sequence estimated using the proposed model. The error is calculated as ‖x̂ − x_c‖ − ‖x − x_c‖, where x̂ is the corrected (undistorted) point, x is a point located at a corner of the image, and x_c is the center of distortion

Table 2 Summary of results for the soccer video image set (RMS values)

Distortion model comparison                    Pixels
Residue without using lens distortion model    1.0601
Residue from frame by frame model              0.5478
Residue from proposed zoom polynomial model    0.5551

In Fig. 13 the maximum distortion error is shown. As can be seen, the correction of the radial distortion varies by around 20 pixels with the zoom.

One very important advantage of the proposed model is that, using the obtained polynomials, we can estimate the lens distortion model for any effective focal distance f. In particular, we can obtain lens distortion models for the whole video sequence (841 frames), even though just 55 frames have been used to estimate the polynomials.

To illustrate the application of the proposal to the whole video sequence, we have created a video where the lens distortion is corrected for each frame using the proposed zoom dependent polynomial model (see this demo video at http://www.ctim.es/demo101/).

6 Conclusions

New mathematical models to study the variation of lens distortion models for a zoom camera have been discussed. Such models are based on a polynomial approximation accounting for the variation of the radial distortion parameters through the zoom range, and on the minimization of a global error energy measuring the distance between sequences of distorted aligned points and straight lines after lens distortion correction.

We have found that, using a second order polynomial approximation, the quality of lens distortion correction is as good as for the frame by frame approach. This is remarkable because, using only 6 parameters (3 for the polynomial associated to the first lens distortion coefficient k_1 and 3 for the second coefficient k_2), we can estimate the lens distortion model for any effective focal distance of the zoom lens.

The proposed model has been applied to estimate the zoom dependent lens distortion model for a calibration pattern and for a real soccer video sequence filmed using a professional video camera. The results for both cases show the potential of the proposed model.

Acknowledgements This research has been partially supported by the MICINN project reference MTM2010-17615 (Ministerio de Ciencia e Innovación, Spain). We thank the MEDIAPRODUCCION S.L. company, which has kindly provided us with the real soccer video sequence used in the numerical experiments.

References

1. Devernay, F., Faugeras, O.: Straight lines have to be straight. Mach. Vis. Appl. 13(1), 14–24 (2001)
2. Faugeras, O., Luong, Q.-T., Papadopoulo, T.: The Geometry of Multiple Images. MIT Press, Cambridge (2001)
3. Tsai, R.Y.: A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. IEEE J. Robot. Autom. 3(4), 323–344 (1987)
4. Faugeras, O.: Three-Dimensional Computer Vision. MIT Press, Cambridge (1993)
5. Weng, J., Cohen, P., Herniou, M.: Camera calibration with distortion models and accuracy evaluation. IEEE Trans. Pattern Anal. Mach. Intell. 14(10), 965–980 (1992)
6. Light, D.L.: The new camera calibration system at the US Geological Survey. Photogramm. Eng. Remote Sens. 58(2), 185–188 (1992)
7. Alvarez, L., Gomez, L., Sendra, R.: An algebraic approach to lens distortion by line rectification. J. Math. Imaging Vis. 35(1), 36–50 (2009)
8. Fraser, C.S., Shortis, M.R.: Variation of distortion within the photographic field. Photogramm. Eng. Remote Sens. 58(6), 851–855 (1992)
9. Fraser, C., Al-Ajlouni, S.: Zoom-dependent camera calibration in digital close-range photogrammetry. Photogramm. Eng. Remote Sens. 72(9), 1017–1026 (2006)
10. Bräuer-Burchardt, C., Heinze, M., Munkelt, C., Kühmstedt, P., Notni, G.: Distance dependent lens distortion variation in 3D measuring systems using fringe projection. In: BMVC 2006, pp. 327–336 (2006)
11. Irani, M., Anandan, P.: A unified approach to moving object detection in 2D and 3D scenes. IEEE Trans. Pattern Anal. Mach. Intell. 20(6), 577–589 (1998)
12. Hampapur, A., Brown, L., Connell, J., et al.: Smart video surveillance: exploring the concept of multiscale spatiotemporal tracking. IEEE Signal Process. Mag. 22(2), 38–51 (2005)
13. Martinez, E., Torras, C.: Contour-based 3D motion recovery while zooming. Robot. Auton. Syst. 44(3–4), 219–227 (2003)
14. Fahn, C., Lo, C.: A high-definition human face tracking system using the fusion of omni-directional and PTZ cameras mounted on a mobile robot. In: 5th IEEE Conference on Industrial Electronics and Applications (ICIEA), Taichung, Taiwan, pp. 6–11 (2010)
15. Fayman, J., Sudarsky, O., Rivlin, E.: Zoom tracking. In: Proceedings of the International Conference on Robotics and Automation, Leuven, Belgium, pp. 2783–2788 (1998)
16. Peddigari, V., Kehtarnavaz, N.: A relational approach to zoom tracking for digital still cameras. IEEE Trans. Consum. Electron. 51(4), 1051–1059 (2005)
17. Ergum, B.: Photogrammetric observing the variation of intrinsic parameters for zoom lenses. Sci. Res. Essays 5(5), 461–467 (2010)
18. Wilson, R., Shafer, S.: A perspective projection camera model for zoom lenses. In: Proc. Second Conference on Optical 3-D Measurement Techniques, Switzerland, October (1993)
19. Tarabanis, K., Tsai, R., Goodman, D.: Modeling of a computer-controlled zoom lens. In: Proceedings of IEEE International Conference on Robotics and Automation, vol. 2, pp. 1545–1551 (1992)
20. Li, M., Lavest, J.: Some aspects of zoom lens camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 18(11), 1105–1110 (1996)
21. Atienza, R., Zelinsky, A.: A practical zoom camera calibration technique: an application on active vision for human-robot interaction. In: Proceedings of the Australian Conference on Robotics and Automation, Sydney, Australia, pp. 85–90 (2001)
22. Benhimane, S., Malis, E.: Self-calibration of the distortion of a zooming camera by matching points at different resolutions. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Sendai, Japan, pp. 2307–2312 (2004)
23. Kim, D., Shin, H., Oh, J., Sohn, K.: Automatic radial distortion correction in zoom lens video camera. J. Electron. Imaging 19(4), 43010–43017 (2010)
24. Alvarez, L., Esclarín, J., Trujillo, A.: A model based edge location with subpixel precision. In: Proceedings IWCVIA 03: International Workshop on Computer Vision and Image Analysis, Las Palmas de Gran Canaria, Spain, pp. 29–32 (2003)
25. Alemán-Flores, M., Alvarez, L., Henríquez, P., Mazorra, L.: Morphological thick line center detection. In: Proceedings ICIAR 2010. LNCS, vol. 6111, pp. 71–80. Springer, Berlin (2010)
26. Zhang, Z.: A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 22(11), 1330–1334 (2000)

Luis Alvarez received an M.Sc. in applied mathematics in 1985 and a Ph.D. in mathematics in 1988, both from the Complutense University (Madrid, Spain). Between 1991 and 1992 he worked as a post-doctoral researcher at the CEREMADE laboratory in the computer vision research group directed by Prof. Jean-Michel Morel. Since 2000 he has been a full professor at the University of Las Palmas de Gran Canaria (ULPGC). He created the research group Análisis Matemático de Imágenes (AMI) at the ULPGC. He is an expert in computer vision and applied mathematics. His main research interest areas are the applications of mathematical analysis to computer vision, including problems such as multiscale analysis, mathematical morphology, optic flow estimation, stereo vision, shape representation, medical imaging, synthetic image generation, and camera calibration.


Luis Gómez received an M.Sc. in Physics in 1988 (UNED, Madrid, Spain) and a Ph.D. in Telecommunication Engineering in 1992 (University of Las Palmas de Gran Canaria, ULPGC, Spain). Since 1994 he has been an assistant professor at the University of Las Palmas de Gran Canaria (ULPGC). His main research interest areas are the applications of optimization to engineering problems, such as ultrasound imaging and camera calibration. He works at the CTIM (Centro de Tecnologías de la Imagen, ULPGC) group directed by professor Luis Álvarez.

Pedro Henríquez received an M.Sc. in Computer Science in 2008 from the University of Las Palmas de Gran Canaria (ULPGC, Spain). Since 2008 he has been a Ph.D. student at the University of Las Palmas de Gran Canaria (ULPGC). He works at the CTIM (Centro de Tecnologías de la Imagen, ULPGC) group directed by professor Luis Álvarez. His main research interest areas are video processing, lens distortion model estimation, camera calibration, feature detection and tracking.

