
Nonlinear registration for scanned retinal images: application to ocular polarimetry

Vincent Nourrit,1,* Juan M. Bueno,2 Brian Vohnsen,2,3 and Pablo Artal2

1The University of Manchester, Faculty of Life Sciences, Sackville Street, Manchester M60 1QD, UK

2Laboratorio de Óptica, Centro de Investigación en Óptica y Nanofísica (CiOyN), Universidad de Murcia, Campus de Espinardo, 30071 Murcia, Spain

3University College Dublin, School of Physics, Dublin 4, Ireland

*Corresponding author: [email protected]

Received 8 April 2008; revised 17 August 2008; accepted 8 September 2008; posted 9 September 2008 (Doc. ID 94576); published 7 October 2008

Retinal images of approximately 1° of visual field were recorded with a homemade scanning laser ophthalmoscope. The benefit of using a nonlinear registration technique to improve the summation process when averaging frames, rather than a standard approach based on correlation, was assessed. Results suggest that nonlinear methods can surpass linear transformations, allowing improved contrast and more uniform image quality. The importance of this is also demonstrated with specific polarization measurements to determine the degree of polarization across an imaged retinal area. In such a context, where this parameter of polarization is extracted from a combination of registered images, the benefit of the nonlinear method is further increased. © 2008 Optical Society of America

OCIS codes: 100.2000, 100.2980, 110.5405, 170.5755, 330.7327.

1. Introduction

Imaging of the human eye fundus with high resolution has gained in importance in recent years [1–4], not least because of the possibilities offered by adaptive optics to image with near diffraction-limited resolution near the fovea, resolving cone photoreceptors and ganglion cells, allowing blood cell tracking and optical slicing of the retina, and providing improved diagnostics of retinal abnormalities [1,5–9]. Ideally, adaptive optics allows one to overcome the limitations imposed by the optics of the eye on the resolution of images obtained with scanning laser ophthalmoscopes (SLOs) and other related techniques. However, since SLO images are typically obtained by scanning at frame rates in the range of 10–30 Hz, involuntary eye movements can become a nonnegligible limitation. This issue becomes even more relevant as the field of view covered by a single frame decreases and higher resolution is reached. Ocular motion has a negative influence on image alignment and on the often-needed summation process used to enhance the overall signal-to-noise ratio [10], as well as on further processing techniques such as deconvolution [11]. Hardware solutions have been investigated [12,13], but they typically rely on complex and costly setups. It is difficult to measure a given position at the retina repeatedly with the required precision [14] (only recently have highly stabilized stimuli at the level of individual photoreceptor cones been achieved [15]), and for this reason software solutions have usually been implemented [16].

The standard approach when averaging consecutive image frames is to assume that eye movements can shift, rotate, or scale each frame. For high-resolution images, however, local distortions can also degrade the quality within an image. For this reason, we investigated the potential benefit of using more complex registration functions, and particularly the use of landmarks within the image [17–20]. To assess the potential benefit of this technique, we applied it to two types of retinal image acquired with an SLO. The first is the classic case where a collection of single frames is used to produce a final registered image. In the second, the degree of polarization (DOP) is computed from two differently registered sets of polarimetric images [21].

The instrumental setup and the registration scheme are described in Section 2. The results of registration are then presented in Section 3. Section 4 presents the conclusions of this study.

2. Method

A. Instrumentation and Experimental Procedure

The images processed in this study were obtained with a homemade SLO that uses a near-IR laser diode (operating at a wavelength of 785 nm) as a light source. This instrument has been described in more detail elsewhere [21,22]. The signal was recorded with a photomultiplier tube in front of which a 100 μm confocal pinhole (corresponding to about 17 μm when projected on the retina) discriminated against unwanted light. At an off-fovea position such a pinhole can suffice to resolve individual cone photoreceptors even without resorting to adaptive optics [23]. The use of a coherent light source can make speckle an issue both in the incoming pathway to the retina and for the light that reaches the confocal detector. However, the estimated speckle size at the pinhole is only about one sixth of the pinhole diameter, meaning that some averaging will take place. The scanning is realized with two galvanometric mirrors, one of which is resonant at 8 kHz, giving a frame rate of approximately 15 frames/s with 512 × 512 pixel images. The use of a frame grabber with a limited 32 Mbyte buffer for image collection meant that each series used contained no more than eight individual image frames.

For the polarimetric images, the experimental system used was modified to include a polarimeter. This system has been described previously [21], but some information is included here for completeness. In brief, the polarization in the incoming pathway is kept linear by means of a polarizer with its axis in a vertical position; in the registration pathway a polarization state analyzer is included (which can be removed at any time). This is a combination of a rotatory λ/4 plate and another linear polarizer (parallel to the former), placed in front of the detector. Orienting the axis of the λ/4 plate at −45°, 0°, 30°, and 60° provides four independent polarization states (additional information on the reason for choosing these angles can be found in [24,25]). For each polarization state, sets of retinal images were recorded and named I−45, I0, I30, and I60. Since in terms of polarization these images are independent, the spatially resolved Stokes vector (S_OUT = [S0, S1, S2, S3]^T) of the light emerging from the eye can be reconstructed from these images [21,26,27]. Elements S1–S3 represent the pixel-by-pixel polarization state of the light beam, and S0 is the intensity image. From these four elements of the Stokes vector, the DOP map was computed as [28]

DOP = (S1² + S2² + S3²)^(1/2) / S0.   (1)

This parameter contains information on the depolarizing properties of the optics (the eye in the present case) and ranges from 0 (depolarized light) to 1 (totally polarized light). The DOP is directly related to scattering processes [29]. In our images, higher values of the DOP (i.e., low levels of depolarization) would be associated with directional light returning from the photoreceptors. Conversely, lower DOP values correspond to light suffering from different scattering processes through the retina. A diagram of the polarimetric procedure is shown in Fig. 1.
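As an illustration of this reconstruction, the sketch below recovers the pixel-by-pixel Stokes vector from four registered intensity images and then evaluates Eq. (1). It is not the authors' code: the analyzer rows use a standard rotating quarter-wave-plate polarimeter expression with a horizontal analyzer, so the exact signs, the angle convention, and the names analyzer_row and stokes_and_dop are assumptions; the signs would need to be matched to the actual instrument (which uses a vertical polarizer).

```python
import numpy as np

def analyzer_row(theta_deg):
    # First row of the Mueller matrix of a quarter-wave plate at angle theta
    # followed by a fixed linear polarizer (one common convention; the signs
    # depend on the analyzer orientation and on the handedness convention).
    c = np.cos(np.deg2rad(2 * theta_deg))
    s = np.sin(np.deg2rad(2 * theta_deg))
    return 0.5 * np.array([1.0, c * c, s * c, -s])

def stokes_and_dop(images, angles=(-45, 0, 30, 60)):
    """Recover (S0, S1, S2, S3) and the DOP map of Eq. (1) from four
    registered intensity images, one per quarter-wave-plate orientation."""
    A = np.stack([analyzer_row(a) for a in angles])             # 4 x 4 system matrix
    I = np.stack([im.astype(float).ravel() for im in images])   # 4 x Npix intensities
    S = np.linalg.solve(A, I)                                   # Stokes vector per pixel
    S0, S1, S2, S3 = (s.reshape(images[0].shape) for s in S)
    dop = np.sqrt(S1**2 + S2**2 + S3**2) / np.maximum(S0, 1e-12)  # Eq. (1)
    return (S0, S1, S2, S3), dop
```

The four angles used in the experiment yield a well-conditioned 4 × 4 system, so the per-pixel Stokes elements follow from a direct linear solve.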

B. Image Processing

Sets of approximately 1° retinal images are recorded with the SLO. These correspond to extrafoveal regions where cone photoreceptors were visible. Extremely degraded images (e.g., because of the eyelid) are discarded; the others are corrected for the nonlinearity of the resonant scanning. A frame is then selected as the frame of reference for registration. This is perhaps the main drawback of digital techniques compared with hardware ones, since there is no guarantee that this reference frame is itself free of local deformations. Stevenson and Roorda [14] proposed an elaborate technique to try to overcome this problem, but even so this approach still has some limitations.

Fig. 1. Schematic diagram of the procedure followed to compute the elements of the Stokes vector and the degree of polarization in SLO images.

In the following, we describe the different image registration techniques used in this work. The quality of the final images is assessed by direct observation (e.g., whether it facilitates the identification of previously blurred structures) and through two objective image quality parameters: the root mean square (RMS) contrast and the acutance. The RMS contrast is defined as the standard deviation, relative to the mean, of the intensity values of each pixel in the image [30]. The acutance is an image quality metric of sharpness and an indicator of intensity variations in the presence of features in fundus images [3,31].
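A minimal sketch of these two metrics is given below, assuming grayscale frames stored as NumPy arrays. The RMS contrast follows the definition above; for the acutance, a simple mean-gradient-magnitude estimate normalized by the mean intensity stands in for the metric of [3,31], so that implementation, like the function names, is an assumption rather than the exact published definition.

```python
import numpy as np

def rms_contrast(image):
    """RMS contrast: standard deviation of the pixel intensities
    relative to their mean [30]."""
    img = image.astype(float)
    return img.std() / img.mean()

def acutance(image):
    """Gradient-based sharpness estimate used as a stand-in for the
    acutance of [3,31]: mean gradient magnitude over mean intensity."""
    img = image.astype(float)
    gy, gx = np.gradient(img)
    return np.hypot(gx, gy).mean() / img.mean()
```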

1. Classic Approach (Cross Correlation)

Since the SLO images have a low signal-to-noise ratio, we followed a strategy similar to that described in [32]: a pyramidal low-pass filter (3 × 3 region) was applied to reduce detection noise, and the cross correlation between individual image frames was then calculated before the summation (i.e., a correction of shift only). We denote this method F0.
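A sketch of this shift-only scheme is shown below, assuming the frames are available as a list of NumPy arrays. A plain 3 × 3 uniform filter stands in for the pyramidal low-pass filter of [32], and scikit-image's phase correlation is used to estimate the translation, so both choices (and the name register_shift_only) are illustrative rather than the exact implementation.

```python
import numpy as np
from scipy import ndimage
from skimage.registration import phase_cross_correlation

def register_shift_only(frames):
    """Method F0: low-pass filter each frame, estimate its translation
    relative to the first (reference) frame by cross correlation,
    shift it back, and average the aligned stack."""
    reference = ndimage.uniform_filter(frames[0].astype(float), size=3)
    aligned = [frames[0].astype(float)]
    for frame in frames[1:]:
        smoothed = ndimage.uniform_filter(frame.astype(float), size=3)
        shift, _, _ = phase_cross_correlation(reference, smoothed,
                                              upsample_factor=10)
        aligned.append(ndimage.shift(frame.astype(float), shift))
    return np.mean(aligned, axis=0)
```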

2. Registration Based on Control Points

A group of control points (CPs) visible in each image is selected so that all CPs are used for the calculations. Each CP is selected manually by clicking on the desired area (see Fig. 2 for an example). This operation is repeated for each frame. The crossed spots in the left-hand part of Fig. 2 represent the positions of the different selected CPs. The large size of the spots is for illustration only; in practice, the position of each CP is recorded with pixel precision. As the large size of the images impedes clicking on exactly the same feature in each image, a cross-correlation technique is used to ensure coincidence between corresponding CPs (i.e., to correct for user-related inaccuracy). When selecting the CPs, we tried to have them homogeneously distributed over the image. This was not possible in areas presenting a high level of noise (see the right-hand panel in Fig. 2), in which case the quality of the registration was typically negatively affected.

Once the matched CPs have been selected in each image, we need to determine a mapping function (or transformation) that will use this information to match the rest of the points in the distorted image. This process has been extensively investigated, and several techniques have been developed [20,33–37]. Since the parameters of the geometric distortions are not accurately known, we considered five different mapping functions, each of particular relevance to our study.

The first transformation (which we denote F1) corresponds to the classic approach, that is, a combination of translation, rotation, and scaling. The second function (F2) is the projective transformation. This is the most general linear transformation and allows one to take into account shear, as produced by a horizontal motion parallel to the fast scan direction, or tilt (the eye turns around three axes) [38].

The eye rotation together with the angular scanning (at two different velocities) and the fact that the eye is a nonrigid structure that undergoes abrupt accelerations (over 20,000 deg/s² [39]) during the recording process may produce complex deformations, and that is why we also considered three nonlinear functions. The first of these are polynomials [19,40]. They allow torsions to be taken into account and can be seen as the terms of the Taylor series expansion of the mapping function. We considered polynomials of orders 2, 3, and 4 (and we denote the associated functions, respectively, as F3.2, F3.3, and F3.4).

Similar to the two linear functions (F1, F2), polynomials are global mapping functions; i.e., a single function is used to register the whole image. This means that they may fail to characterize local deformations and that each CP will affect the whole image equally and not only its immediate neighborhood. For this reason, two local methods (denoted F4 and F5) were also considered. For the first one, we followed the technique developed in [40,41]. The image is divided into several regions by Delaunay triangulation, and affine functions are then determined to overlay pairs of corresponding triangles. Points outside the convex hull are associated with triangles and their respective registration functions. We refer to this registration function as F4. The last function (F5) is the local weighted mean method [40,42]. In this case, a local second-order polynomial is associated with each CP; it registers that CP and its N closest neighbors (N is an arbitrarily chosen number; we found that a value of 10 provided a good compromise between the number of required control points and the accuracy of the registrations). The value at an arbitrary point in the image is then given by a weighted sum of the different polynomials, with weights that vary inversely with the distance between that point and the CP associated with each polynomial.

Provided with the minimum number of pairs of CPs for transformations F1–F5 (in order, 2, 4, 6, 10, 15, 4, and 6, as set by the number of independent parameters of these functions [38,40]), we obtain a system of linear equations that can be solved without difficulty. If more CPs are supplied, the problem is overdetermined, and we use a least squares minimization to find the optimum parameters. Unlike previous studies [16,43], we do not attempt to retrieve the exact parameters of the eye movements that occurred. Therefore, no constraint based on eye movement characteristics is applied.
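The sketch below illustrates this least-squares fitting and warping step for two of the mapping functions (a global second-order polynomial, as in F3.2, and a piecewise affine warp over a Delaunay triangulation, in the spirit of F4) using scikit-image. The function name register_with_cps and the choice of library are assumptions rather than the authors' implementation, and the local weighted mean method (F5) is not covered by this example.

```python
import numpy as np
from skimage.transform import PolynomialTransform, PiecewiseAffineTransform, warp

def register_with_cps(moving, ref_pts, mov_pts, method="polynomial"):
    """Warp `moving` onto the reference frame given matched control points.

    ref_pts, mov_pts: (N, 2) arrays of (x, y) CP coordinates in the reference
    and moving frames. The transform is estimated so that it maps reference
    coordinates to moving-frame coordinates, which is the inverse map expected
    by skimage.transform.warp.
    """
    if method == "polynomial":
        # Global second-order polynomial mapping (as in F3.2),
        # fitted by least squares when more CPs than needed are supplied.
        tform = PolynomialTransform()
        tform.estimate(ref_pts, mov_pts, order=2)
    else:
        # Piecewise affine mapping over a Delaunay triangulation of the CPs
        # (in the spirit of F4).
        tform = PiecewiseAffineTransform()
        tform.estimate(ref_pts, mov_pts)
    return warp(moving, tform, output_shape=moving.shape, preserve_range=True)
```

Registered frames produced this way can then be averaged exactly as in method F0.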

Fig. 2. Example of the distribution of the CPs in a retinal image which subtends 1° (left). The crossed spots represent the positions of the different CPs selected. The large size of the spots is for illustration only; in practice, the position of each landmark is recorded with pixel precision. An SLO image with no clear features to be used as CPs in the encircled area is depicted in the right-hand panel. This can significantly affect the result of the registration.


3. Results

A. Standard Retinal Imaging

Our study involves a limited number of frames per series of acquisitions. Even so, several conclusions can be drawn from the processed images.

As expected, the results of image registration depend not only on the chosen mapping function but also on the number of CPs and their position in the image. This point is illustrated in Fig. 3, where the same series of images has been registered twice using function F3.2, once with only 7 CPs distributed in the upper part of the image [Fig. 3(a)] and once with 16 CPs distributed more uniformly [Fig. 3(b)]. The result of the registration with 16 CPs appears visually better, and the difference in RMS contrast (C) and acutance (A) between the two images is approximately 8%.

Another important parameter is the image series itself, as different deformations can be involved. Thus, in some cases it was not possible to improve the outcome significantly over that of translation and standard deformations (F0 and F1). This point is illustrated in Fig. 4, where Fig. 4(a) is the direct sum of the images, Fig. 4(b) the registration with the classic technique (C −3.53%, A +12.54% compared with the result of the direct sum), and Fig. 4(c) the registration with F1 and nine CPs (C +1.15%, A +10.12%). Functions F2–F5 did not provide better results here.

For all our series of images, methods F1–F2 performed slightly better than the correlation method F0 (with F2 giving better results than F1), but without a dramatic improvement, suggesting that taking the rotation into account is of limited interest. Figure 5 illustrates this point [Fig. 5(c), registration with F1, RMS contrast +6.76%, acutance +3.62% compared with Fig. 5(b) registered with F0]. Moreover, Fig. 6 shows that with a global transformation an improvement in one part of an image may be associated with a degradation in another part of the image.

Method F4 allowed noticeable improvements, outperforming method F2, but in only 50% of our cases. This is probably due to the relatively gross sectioning of the image when a limited number of CPs are available and to the fact that each triangle is registered with an affine function.

Method F3.2 and, more particularly, method F5 provided the best results in 88% of our cases (i.e., visually better and higher contrast or acutance) [18]. An example is illustrated in Fig. 7, where Fig. 7(a) is the direct sum and Fig. 7(b) the registration with F5 [C +11.32%, A +30.36% compared with Fig. 7(a)]. This means that, even locally, second-order polynomials are best suited to take into account the nonlinear nature of the acquisition process and give the best results. Polynomials of order 3 or 4 tend to warp the image too much, as we did not impose any constraints on the parameters of the deformation.

B. Polarimetric Imaging

In this second part of the study we applied the different methods described in Subsection 2.B to the registration of the polarimetric images I−45, I0, I30, and I60. Images in the bottom row of Fig. 8 were obtained with the standard frame correlation. As a comparison, landmark-based registration was used for the images shown in the upper row. For this particular subject, the best results were obtained with functions F3.2, F5, or F2. It can subjectively be observed that polarimetric images registered with the nonlinear technique (upper panels) show more details of the retinal structures (mainly photoreceptors). Objectively, the images of the upper panel present an average increase in RMS contrast and acutance of up to approximately 12% and 8%, respectively, with respect to the images of the lower panel.

From these two sets of four images the corresponding pixel-by-pixel Stokes vector elements were computed. Figure 9 represents the elements S0–S3. As expected, since the original sets of polarimetric images are slightly different, the corresponding spatial distributions of the Stokes vector elements computed from them also differ, although the differences are most noticeable in S1.

Fig. 3. Five-frame registered image obtained using function F3.2 and (a) 7 CPs or (b) 16 CPs.

Fig. 4. (a) Direct sum of five frames; (b) registration with F0; (c) registration with F1 and nine CPs. Transformations F2–F5 did not provide any improvement in this case.

Fig. 5. (a) Direct sum of five frames. Registration with functions (b) F0 and (c) F1 with 16 CPs. The areas delimited by the plain or dashed white rectangles are shown at higher magnification in Fig. 6.

Since the S0 images represent the pixel-by-pixel intensity of the beam emerging from the eye, both qualitative and quantitative comparisons of those two images can be made. The blood vessel is better seen in the S0 image corresponding to the linear correlation (bottom left-hand image). However, additional information on other, smaller retinal structures is available in the nonlinear-registration S0 image (upper left-hand image). These details are of interest for the scope of the present work. This can be corroborated when different image quality metrics are computed across the images. In particular, the RMS contrast was 32% higher for the upper left-hand image, and the acutance 31% higher.

Once the Stokes vector elements were known, we used Eq. (1) to compute the corresponding DOP maps from the images in Fig. 9. As explained above, owing to the physical meaning of the DOP, these maps may enhance different structures and provide further details.

In particular, two main noticeable differences appear when the two DOP maps are compared. First, the map calculated from the images registered with the nonlinear method [Fig. 10(b)] appears more detailed and enhanced than the one calculated from the images registered with the standard method [Fig. 10(a)]. Again, this can be tested objectively when the acutance is computed: the acutance of the nonlinear-registration DOP map was about 50% higher than that of the linear registration.

Second, the retinal features present in each image are not entirely similar. Such a difference was already noticeable between the images recorded with a given polarization (Fig. 8). Since the polarimetric parameter is calculated through the combination of four registered images, the combined effect of misalignment is reinforced, thereby highlighting the importance of a proper registration technique.

4. Conclusions

In this study we implemented a landmark-based registration technique and investigated which mapping functions were the best adapted to correct for eye-movement-caused degradations in high-resolution images recorded with an SLO. While more work is still needed to define the most suitable registration technique, the results suggest that methods F3.2 and F5 allow a more accurate registration than standard linear techniques.

This benefit is, however, strongly limited by the necessity of using a large number of landmark tags (>10), which are difficult to detect automatically (image noise would make a threshold-based selection criterion problematic). Manual detection can also be difficult when image quality is poor. In practice it is a tedious process, and it can be subject to errors due to repetition or fatigue.

In comparison, linear transformations offer a good compromise in terms of results, robustness, and computational effort, despite relatively strong assumptions. Such transformations have usually been used by other authors in the literature, although most were applied to improve the imaging of larger retinal areas (optic nerve head, extended patches containing blood vessels, etc.), where correlations were easier because of the size of the reference features [3,26,44].

Fig. 6. (a), (c) Details of Fig. 5(b); (b), (d) details of Fig. 5(c).

Fig. 7. Registered image using five frames and (a) function F0 or (b) function F5 with 16 CPs.

Fig. 8. Polarimetric images obtained with standard frame correlation (bottom row) and with the nonlinear registration method (upper row). Images subtend approximately 0.8°, and they are shown in the same order as those in [21].

Fig. 9. Spatially resolved Stokes vector elements (S0, S1, S2, and S3) computed from the images in Fig. 8.

It should be stressed that a poor correlation of image frames may cause the quality of the images to deteriorate, but its influence on properties derived from those images, such as the spatially resolved Stokes vector or, as shown here, the DOP, may be even more detrimental and corrupt a proper interpretation.

The calculation of maps of ocular polarization properties and related physical parameters can be used for ophthalmological applications and clinical diagnosis. For instance, measurements of the retinal nerve fiber layer thickness around the optic nerve head, obtained from maps of retinal birefringence, are of value in the diagnosis of glaucoma [45]. In addition, depolarized-light images have been reported to improve the visibility of pathological retinal areas [46,47].

An appropriate image registration becomes of utmost importance when one is computing polarization properties in small retinal areas [48], improving SLO imaging by the use of polarimetric techniques [21,26,45], or imaging eyes with high levels of scattered light [3], among other cases. In such cases, the benefits allowed by a nonlinear mapping technique may well outweigh the difficulties involved in the process.

This research was supported in part by Spanish Ministerio de Educación y Ciencia grants (FIS2004-2153 and FIS2007-64765 to P. Artal, and RYC2002-006337 to B. Vohnsen) and Fundación Séneca, Murcia (04524/GERM/06 to P. Artal). The authors also thank the reviewers for helpful comments that allowed them to improve the quality of the manuscript significantly.

References

1. A. Roorda, F. Romero-Borja, W. J. Donnelly III, H. Queener, T. J. Hebert, and M. C. W. Campbell, “Adaptive optics scanning laser ophthalmoscopy,” Opt. Express 10, 405–412 (2002).

2. B. Vohnsen, I. Iglesias, and P. Artal, “Confocal scanning laser ophthalmoscope with adaptive optical wavefront correction,” Proc. SPIE 4964, 24–32 (2003).

3. J. J. Hunter, C. J. Cookson, M. L. Kisilak, J. M. Bueno, and M. C. W. Campbell, “Characterizing image quality in a scanning laser ophthalmoscope with differing pinholes and induced scattered light,” J. Opt. Soc. Am. A 24, 1284–1295 (2007).

4. J. M. Wanek, M. Mori, and M. Shahidi, “Effect of aberrations and scatter on image resolution assessed by adaptive optics retinal section imaging,” J. Opt. Soc. Am. A 24, 1296–1304 (2007).

5. D. C. Gray, R. Wolfe, B. P. Gee, D. Scoles, Y. Geng, B. D. Masella, A. Dubra, S. Luque, D. R. Williams, and W. H. Merigan, “In vivo imaging of the fine structure of rhodamine-labeled macaque retinal ganglion cells,” Invest. Ophthalmol. Visual Sci. 49, 467–473 (2008).

6. F. Romero-Borja, K. Venkateswaran, T. J. Hebert, and A. Roorda, “Optical slicing of human retinal tissue in vivo with the adaptive optics scanning laser ophthalmoscope,” Appl. Opt. 44, 4032–4040 (2005).

7. J. I. Wolfing, M. Chung, J. Carroll, A. Roorda, and D. R. Williams, “High-resolution retinal imaging of cone-rod dystrophy,” Ophthalmology 113, 1014–1019 (2006).

8. J. L. Duncan, Y. Zhang, J. Gandhi, C. Nakanishi, M. Othman, K. H. Branham, A. Swaroop, and A. Roorda, “High-resolution imaging of foveal cones in patients with inherited retinal degenerations using adaptive optics,” Invest. Ophthalmol. Visual Sci. 48, 3283–3291 (2007).

9. A. Roorda, Y. Zhang, and J. L. Duncan, “High-resolution in vivo imaging of the RPE mosaic in eyes with retinal disease,” Invest. Ophthalmol. Visual Sci. 48, 2297–2303 (2007).

10. A. R. Wade and F. W. Fitzke, “A fast, robust recognition system for low light level image registration and its application to retinal images,” Opt. Express 3, 190–197 (1998).

11. V. Nourrit, B. Vohnsen, and P. Artal, “Blind deconvolution for high-resolution confocal scanning laser ophthalmoscopy,” J. Opt. A 7, 585–592 (2005).

12. D. X. Hammer, R. D. Ferguson, C. E. Bigelow, N. V. Iftimia, T. E. Ustun, and S. A. Burns, “Adaptive optics scanning laser ophthalmoscope for stabilized retinal imaging,” Opt. Express 14, 3354–3367 (2006).

13. D. X. Hammer, R. D. Ferguson, J. C. Magill, M. A. White, A. E. Elsner, and R. H. Webb, “Compact scanning laser ophthalmoscope with high-speed retinal tracker,” Appl. Opt. 42, 4621–4632 (2003).

14. S. B. Stevenson and A. Roorda, “Correcting for miniature eye movements in high resolution scanning laser ophthalmoscopy,” Proc. SPIE 5688A, 145–151 (2005).

15. D. W. Arathorn, Q. Yang, C. R. Vogel, Y. Zhang, P. Tiruveedhula, and A. Roorda, “Retinally stabilized cone-targeted stimulus delivery,” Opt. Express 15, 13731–13744 (2007).

16. C. R. Vogel, D. W. Arathorn, A. Roorda, and A. Parker, “Retinal motion estimation in adaptive optics scanning laser ophthalmoscopy,” Opt. Express 14, 487–497 (2006).

17. G. K. Matsopoulos, K. K. Delibasis, and N. A. Mouravliansky, “Medical image registration and fusion techniques: a review,” in Advanced Signal Processing Handbook: Theory and Implementation for Radar, Sonar, and Medical Imaging Real Time Systems, S. Stergiopoulos, ed. (CRC Press, 2000).

18. V. Nourrit, B. Vohnsen, and P. Artal, “Non-linear correction of eye movements for scanning laser ophthalmoscope imagery,” Invest. Ophthalmol. Visual Sci. 48, E-Abstract 2765 (2007).

19. N. Ryan, C. Heneghan, and P. de Chazal, “Registration of digital retinal images using landmark correspondence by expectation maximization,” Image Vision Comput. 22, 883–898 (2004).

20. L. G. Brown, “A survey of image registration techniques,” ACM Comput. Surv. 24, 325–376 (1992).

Fig. 10. Spatially resolved degree of polarization computed from the polarimetric SLO images shown in Fig. 9: (a) from the bottom row, (b) from the upper row.


21. J. M. Bueno and B. Vohnsen, “Polarimetric high-resolution confocal scanning laser ophthalmoscope,” Vision Res. 45, 3526–3534 (2005).

22. B. Vohnsen, I. Iglesias, and P. Artal, “Directional imaging of the retinal cone mosaic,” Opt. Lett. 29, 968–970 (2004).

23. B. Vohnsen, I. Iglesias, and P. Artal, “Directional light scanning laser ophthalmoscope,” J. Opt. Soc. Am. A 22, 2606–2612 (2005).

24. A. Ambirajan and D. C. Look, “Optimum angles for a polarimeter: part I,” Opt. Eng. 34, 1651–1655 (1995).

25. J. M. Bueno and J. W. Jaronski, “Spatially resolved polarization properties for in vitro corneas,” Ophthal. Physiol. Opt. 21, 384–392 (2001).

26. J. M. Bueno and M. C. W. Campbell, “Confocal scanning laser ophthalmoscopy improvement by use of Mueller-matrix polarimetry,” Opt. Lett. 27, 830–832 (2002).

27. J. M. Bueno, E. Berrio, and P. Artal, “Aberro-polariscope for the human eye,” Opt. Lett. 28, 1209–1211 (2003).

28. R. A. Chipman, “Polarimetry,” in Handbook of Optics, 2nd ed., M. Bass, ed. (McGraw-Hill, 1995), Vol. 2, Chap. 22.

29. J. M. Bueno, E. Berrio, M. Ozolinsh, and P. Artal, “Degree of polarization as an objective method of estimating scattering,” J. Opt. Soc. Am. A 21, 1316–1321 (2004).

30. E. Peli, “Contrast in complex images,” J. Opt. Soc. Am. A 7, 2032–2040 (1990).

31. Y. F. Choong, F. Rakebrandt, R. V. North, and J. E. Morgan, “Acutance, an objective measure of retinal nerve fibre image clarity,” Br. J. Ophthalmol. 87, 322–326 (2003).

32. K. A. Goatman, A. Manivannan, J. H. Hipwell, P. F. Sharp, N. Lois, and J. V. Forrester, “Automatic registration and averaging of ophthalmic autofluorescence images,” in Conference Proceedings in Medical Image Understanding and Analysis (MIUA) (BMVA, 2001), pp. 157–160.

33. A. Can, C. V. Stewart, B. Roysam, and H. L. Tannenbaum, “A feature based, robust, hierarchical algorithm for registering pairs of images of the curved human retina,” IEEE Trans. Pattern Anal. Mach. Intell. 24, 347–364 (2002).

34. W. E. Hart and M. H. Goldbaum, “Registering retinal images using automatically selected control point pairs,” in Proceedings of the IEEE International Conference on Image Processing (ICIP-94) (IEEE, 1994), Vol. 3, pp. 576–580.

35. D. Lloret, J. Serrat, A. M. Lopez, A. Soler, and J. J. Villanueva, “Retinal image registration using creases as anatomical landmarks,” in Proceedings of the 15th International Conference on Pattern Recognition (ICPR'00) (IEEE Computer Society, 2000), Vol. 3, pp. 3207–3210.

36. E. Peli, R. Augliere, and G. Timberlake, “Feature-based registration of retinal images,” IEEE Trans. Med. Imaging MI-6, 272–278 (1987).

37. G. K. Matsopoulos, N. A. Mouravliansky, K. K. Delibasis, and K. S. Nikita, “Automatic registration of retinal images with global optimization techniques,” IEEE Trans. Inf. Technol. Biomed. 3, 47–60 (1999).

38. R. P. Woods, “Spatial transformation models,” in Handbook of Medical Imaging: Processing and Analysis, I. Bankman, ed. (Academic, 2000), pp. 465–490.

39. H. Deubel and B. Bridgeman, “Fourth Purkinje image signals reveal eye-lens deviations and retinal image distortions during saccades,” Vision Res. 35, 529–538 (1995).

40. A. A. Goshtasby, 2D and 3D Image Registration for Medical, Remote Sensing, and Industrial Applications (Wiley, 2005).

41. A. A. Goshtasby, “Piecewise linear mapping functions for image registration,” Pattern Recognition 19, 459–466 (1986).

42. A. A. Goshtasby, “Image registration by local approximation methods,” Image Vision Comput. 6, 255–261 (1988).

43. J. B. Mulligan, “Recovery of motion parameters from distortion in scanned images,” presented at the NASA Image Registration Workshop (IRW97), NASA Goddard Space Flight Center, Greenbelt, Md., 20–21 November 1997.

44. J. M. Bueno, J. J. Hunter, C. J. Cookson, M. L. Kisilak, and M. C. W. Campbell, “Improved scanning laser fundus imaging using polarimetry,” J. Opt. Soc. Am. A 24, 1337–1348 (2007).

45. A. W. Dreher, K. Reiter, and R. N. Weinreb, “Spatially resolved birefringence of the retinal nerve fiber layer assessed with a retinal laser ellipsometer,” Appl. Opt. 31, 3730–3735 (1992).

46. S. A. Burns, A. E. Elsner, M. B. Mellem-Kairala, and R. B. Simmons, “Improved contrast of subretinal structures using polarization analysis,” Invest. Ophthalmol. Visual Sci. 44, 4061–4068 (2003).

47. M. Miura, A. E. Elsner, A. Weber, M. C. Cheney, M. Oshako, M. Usui, and T. Iwasaki, “Imaging polarimetry in central serous chorioretinopathy,” Am. J. Ophthalmol. 140, 1014–1019 (2005).

48. H. Song, Y. Zhao, X. Qi, Y. T. Chui, and S. A. Burns, “Stokes vector analysis of adaptive optics images of the retina,” Opt. Lett. 33, 137–139 (2008).
