
Glaucoma Risk Index: Automated glaucoma detection from color fundus images

Rüdiger Bock a,b,∗, Jörg Meier a, László G. Nyúl c, Joachim Hornegger a,b, Georg Michelson d,b,e

a Pattern Recognition Lab, Department of Computer Science, Friedrich-Alexander-University Erlangen-Nuremberg, Martensstr. 3, 91058 Erlangen, Germany

b Erlangen Graduate School in Advanced Optical Technologies (SAOT), Friedrich-Alexander-University Erlangen-Nuremberg

c Department of Image Processing and Computer Graphics, University of Szeged, Árpád tér 2, 6720 Szeged, Hungary

d Department of Ophthalmology, Friedrich-Alexander-University Erlangen-Nuremberg, Schwabachanlage 6, 91054 Erlangen, Germany

e Interdisciplinary Center of Ophthalmic Preventive Medicine and Imaging, Friedrich-Alexander-University Erlangen-Nuremberg, Schwabachanlage 6, 91054 Erlangen, Germany

Abstract

Glaucoma, a neurodegeneration of the optic nerve, is one of the most common causes of blindness. Because revitalization of the degenerated nerve fibers of the optic nerve is impossible, early detection of the disease is essential. This can be supported by a robust and automated mass screening. We propose a novel automated glaucoma detection system that operates on inexpensive-to-acquire and widely used digital color fundus images. After a glaucoma-specific preprocessing, different generic feature types are compressed by an appearance-based dimension reduction technique. Subsequently, a probabilistic two-stage classification scheme combines these feature types to extract the novel Glaucoma Risk Index (GRI), which shows a reasonable glaucoma detection performance. On a sample set of 575 fundus images, a classification accuracy of 80% has been achieved in a 5-fold cross-validation setup. The GRI gains a competitive area under ROC curve (AUC) of 88% compared to the established topography-based Glaucoma Probability Score of scanning laser tomography with an AUC of 87%. The proposed color-fundus-image-based GRI achieves a competitive and reliable detection performance on a low-priced modality by the statistical analysis of entire images of the optic nerve head.

Key words: computer aided diagnosis, glaucoma, optic disk, appearance-based image analysis, linear principal component analysis

∗ Corresponding author. Telephone: +49-9131-85-27775, Fax: +49-9131-303811

Email addresses: [email protected] (Rüdiger Bock), [email protected] (Jörg Meier), [email protected] (László G. Nyúl), [email protected] (Joachim Hornegger), [email protected] (Georg Michelson).

1. Introduction

Glaucoma is one of the most common causes of blindness, with a mean prevalence of 2.4% for all ages and of 4.7% for ages above 75 years (Klein et al., 1992). The disease is characterized by the progressive degeneration of optic nerve fibers and astrocytes, showing a distinct pathogenetic image of the optic nerve head.

Glaucoma leads to (i) structural changes of the optic nerve head (ONH) and the nerve fiber layer and (ii) a simultaneous functional failure of the visual field. The structural changes are manifested by a slowly diminishing neuroretinal rim, indicating a degeneration of axons and astrocytes of the optic nerve (Fig. 1).

As lost capabilities of the optic nerve cannot be recovered, early detection and subsequent treatment is essential for affected patients to preserve their vision (Michelson et al., 2008). Commonly, glaucoma diagnosis is based on the patient's medical history, intraocular pressure, visual field loss tests and the manual assessment of the ONH via ophthalmoscopy or stereo fundus imaging (Lin et al., 2007). To additionally objectify the glaucoma stage and its progression, geometric parameters of the ONH are documented. These geometric parameters measure ONH structures that change in case of the glaucoma disease: optic disk diameter, optic disk area, cup diameter, rim area, mean cup depth, etc.

This contribution provides a data-driven framework for extracting a novel glaucoma parameter from fundus images. Contrary to the established detection techniques, it does not require accurate measurements of geometric ONH structures, as it performs a statistical data mining technique on the image patterns themselves. The proposed methodology can be transferred to other domains and might be able to extract further parameters providing new insights into other ophthalmic questions.

2. Background

The glaucoma disease is characterized by the degeneration of optic nerve fibers and astrocytes, which is often accompanied by an increased intraocular pressure. Due to the loss of nerve fibers, the retinal nerve fiber layer (RNFL) thickness is decreasing. In the course of the disease, the interconnection between the photoreceptors and the visual cortex is reduced. In the worst case, the visual information of the photoreceptors can no longer be transmitted to the brain, and visual field loss up to blindness is threatening. The disappearance of axons and astrocytes affects the structural appearance of the ONH and causes a reduction of the functional capabilities of the retina. The ONH can be examined by ophthalmoscopy or by stereo fundus photography: in the course of the disease the neuroretinal rim gets thinner while the cup expands due to the loss of nerve fibers and astrocytes (Fig. 1).

Fig. 1. Major structures of the optic nerve head that are visible in color fundus photographs: the optic disk is margined by the optic disk border and can be divided into two major zones: (i) the neuroretinal rim is composed of astrocytes and nerve fibers, while (ii) the brighter cup or excavation exclusively consists of supporting tissue.

The qualitative assessment of the ONH structure and the functional abilities, in addition to the patient's medical history and intraocular pressure, are the common basis for a reliable glaucoma diagnosis by ophthalmologists. This inherent subjectivity of the gained conclusion leads to a considerable inter- and intra-observer variability in differentiating between normal and glaucomatous ONHs (Varma et al., 1992).

However, quantitative parameters can make the qualitative assessment more objective and reproducible, reduce the observer variability, and support tracking glaucoma progression in patient follow-up. Such parameters can be gained manually or even by computer-based technologies from several imaging modalities.

Stereoscopic images of the ONH are commonly used for documenting its cup-shaped structure. Important ONH characteristics such as disk area, disk diameter, rim area, cup area or cup diameter can be extracted from the stereo image by planimetry (Betz et al., 1982) to also gain the well established cup-to-disk ratio. For the glaucomatous disease, the cup-to-disk ratio measures the decrease of rim area while the disk area remains constant.

Although this ratio is highly influenced by the disk size, it gives a general estimate of whether the ONH shape is within its normal limits or has to be considered conspicuous.

There exist several imaging modalities which provide quantitative parameters of the ONH in glaucoma: (i) Confocal Scanning Laser Ophthalmoscopy (CSLO), (ii) Scanning Laser Polarimetry (SLP) or (iii) Optical Coherence Tomography (OCT). CSLO, commercially available as the Heidelberg Retina Tomograph (HRT, Heidelberg Engineering, Heidelberg, Germany), provides a 2.5-dimensional topographic image of the ONH through the undilated pupil (Fig. 2(b)). After a manual outlining of the optic disk border, the device is able to generate geometric parameters such as the cup volume, cup depth, cup shape measure or even retinal height variations along the rim contour. Discriminant analysis (Moorfields Regression Analysis (MRA)) properly combines these geometric parameters. It shows a good classification performance that was validated by Miglior et al. (2003) and Wollstein et al. (1998). Due to the manual outlining of the optic disk border, the gained quantitative parameters are not fully objective. However, the Glaucoma Probability Score (GPS) of the latest version of the HRT utilizes the parameters of a non-linear shape model of the topographic ONH shape for glaucoma classification. As this cup model is automatically fitted to the original shape, this method overcomes the subjectivity of contour-based methods while it shows a comparable overall diagnostic accuracy to the MRA (Burgansky-Eliash et al., 2007). In a series of consecutive acquisitions over years, the progression of glaucomatous degeneration can be quantified. The temporal glaucomatous structural ONH changes are automatically located and quantified by the HRT Topographic Change Analysis (TCA) from consecutive acquisitions. Thus, this technique is promising for quantifying a further progression of glaucoma (Chauhan et al., 2000).

Alongside the structural changes of the ONH, the degeneration of the nerve fibers is also depicted by a thinning of the RNFL in the course of the glaucoma disease. The thickness of the RNFL can be measured by SLP or OCT (Fig. 2(c)). In SLP, the retina is illuminated by polarized light and the RNFL thickness can be directly determined from the polarization change of the reflected light (Sehi et al., 2007). OCT provides complete depth profiles of the retina utilizing low-coherence interferometry of near-infrared light. The desired structural RNFL can be extracted from these depth profiles utilizing manual or automated segmentation techniques (Fernandez et al., 2005). From these thickness maps, several global and sectoral geometric parameters such as average thickness, minimum thickness etc. can be extracted that differentiate between glaucomatous and control cases (Medeiros et al., 2004b).

Fig. 2. Example images of the central retina: the optic nerve head (ONH) centered fundus photograph (a) is used for automated glaucoma detection by the proposed Glaucoma Risk Index, while the Glaucoma Probability Score utilizes HRT 2.5-dimensional topography images (b). An OCT line scan (c) traversing the ONH illustrates different layers of the retina, such as the nerve fiber layer as the top one.

Beside structural characteristics of the retina, the loss of functional capacities of the optic nerve is one major criterion for reliably diagnosing glaucoma. Modalities such as Standard Automated Perimetry (SAP), Short-Wavelength Automated Perimetry (SWAP) or Frequency Doubling Technology (FDT) stimulate retinal regions to identify visual field defects. Quantitative medical parameters are derived from the number and the location of missed stimulated spots. Although this technology shows a high specificity for detecting glaucoma, it can only detect functional damage that has already occurred (Medeiros et al., 2004a).

Overall, these structural parameters of the optic nerve head are well established in the medical community and verified in several studies (Greaney et al., 2002; Sharma et al., 2008). As these parameters are derived from structural measurements to characterize structural glaucomatous changes, they are very meaningful and intuitive to the physician.

We propose a data-driven framework that is able to extract a novel glaucoma parameter from the ONH depicted on color fundus images. The proposed data mining technique is based on the idea of "Eigenimages" (Turk and Pentland, 1991), which statistically analyzes the pixel input data to capture characteristic variations that look promising for differentiating between glaucomatous optic nerve heads and controls. Due to the analysis of pure image data, the achieved parameters are influenced by the appearance of the whole ONH and not by structural measurements, as is the case for some established quantitative glaucoma parameters.

Fig. 3. Processing pipeline in detail: glaucoma risk calculation consists of three steps: (i) preprocessing eliminates the disease-independent variations from the input image, (ii) feature extraction transforms the preprocessed input data to a characteristic and compact representation, and (iii) classification generates the Glaucoma Risk Index (GRI) by a two-stage probabilistic SVM classifier.

3. The concept of Eigenimages for glaucoma detection

Due to the high variability of the ONH appearance, the established determination of geometric ONH parameters utilized for glaucoma detection is difficult to automate.

We consider the described situation for automated glaucoma detection similar to the one stated by Turk and Pentland (1991) for early face detection methods. Before the 1990s, face detection systems characterized faces by a set of geometric parameters such as normalized distances or ratios between characteristic facial landmarks (Bledsoe, 1966). Based on this previous work, Turk and Pentland (1991) concluded that the established methods "on automated face recognition [have] [. . . ] ignored the issue of just what aspects of the face stimulus are important for identification". Instead, they proposed an information theory approach that captures facial variations from a set of training images to gain a compact but meaningful collection of parameters that are usable for classification purposes. As the information theory technique they proposed Principal Component Analysis (PCA). This method models a linear transformation that projects the image space to a low-dimensional feature space while a maximum of data variation is preserved. The captured variation between the images can then be represented by a set of Eigenimages. This so-called appearance-based method is still considered the baseline face recognition system.

In this contribution, the idea of appearance-based recognition is transferred to the domain of glaucoma recognition in order to obtain novel, differentiating, non-geometric parameters and to possibly gain new insights into the glaucoma disease. The major procedure illustrated in Fig. 3 consists of three steps:

(i) Preprocessing: The appearance-based techniques preserve the data variation in the low-dimensional representation regardless of its origin, even though it might not be related to the classification task. Variations such as illumination inhomogeneities are not linked to the glaucoma disease and have to be excluded from the image data beforehand.

(ii) Feature extraction: Beside the common Eigenimage approach on raw pixel intensities, we propose further types of data representation in order to capture additional image information. These feature types are then compressed separately by PCA to gain a low-dimensional image representation for classification.

(iii) Classification: In the last processing step, a probabilistic two-stage classifier scheme combines the different types of features to gain one single glaucoma prediction.

4. Image preprocessing

The proposed appearance-based approach analyzes the entire input image data to capture the glaucoma characteristics. To emphasize these desired characteristics in the input data, the variations not related to the glaucoma disease are excluded from the images in a preprocessing step. This includes variations due to image acquisition, such as inhomogeneous illumination or different optic nerve head locations, but also retinal structures not directly related to glaucoma, e.g. the vessel tree.

Due to the reflection properties of the eye ground, the red channel of the fundus photos is often oversaturated (especially in the central, optic nerve head region), while the blue channel can be undersaturated and noisy. Therefore, we used the green channel for image processing, as only this channel shows a reliable saturation.

4.1. Illumination correction

The acquired images can be disturbed by bright speckles or an inhomogeneous background, e.g. due to different visual angles of the patients during image acquisition. Although these interferences do not originate from glaucoma, they affect the illumination of the ONH and would influence the subsequent statistical analysis.

To avoid this behavior, we desire (i) a homogeneous lighting of the ONH and (ii) a similar illumination level among all images of the sample set. This can be achieved by global correction techniques (Youssif et al., 2006) applying a background correction. These methods subtract the estimated retinal background from the original image to gain a homogeneously illuminated fundus image. The estimation of a background can be done by average intensity filtering within a large neighborhood (Chrastek et al., 2005; Hoover et al., 2000) or polynomial surface fitting (Narasimha-Iyer et al., 2006).

We implemented a correction method similar to the one proposed by Narasimha-Iyer et al. (2006), as it does not require a time-consuming image low-pass filtering as in the case of average intensity filtering. According to their lighting model, the observed intensity in each color channel of an input image I ∈ R^{n×m} is a pixel-wise product of a luminosity component L and a reflectance component R. The first component is due to the source illumination, while the second one is related to the structures in the retina and their properties. Taking the logarithm, the pixel at (x, y) becomes an additive term:

log(I(x, y)) = log(L(x, y) · R(x, y)) = log(L(x, y)) + log(R(x, y)) .   (1)

Low-frequency changes of image intensities are considered as illumination inhomogeneities and can be well described by a 4th-order polynomial surface. This two-dimensional polynomial is defined by the coefficients c ∈ R^15, which are determined by a least-squares estimate from the following (weighted) linear equation system (Narasimha-Iyer et al., 2006):

log(i) = (WS)c ,   (2)

where i is the vector representation of I. The matrix S ∈ R^{N×15} with N = n · m stores the polynomial terms of the pixel locations. Retinal structures such as the optic nerve head or the vessel tree stand apart from the background and have to be excluded from the least-squares polynomial fitting. The diagonal matrix W ∈ R^{N×N} is used to mask out these regions from the computation. The diagonal elements are 1 where a pixel is considered as background and 0 for foreground structures.

We consider pixels having intensities between the 15th and 85th percentile of the image histogram as background, while the remaining pixels mainly belong to retinal structures such as the bright ONH or the dark vessels. It is only important that the set of background pixels does not contain any structure pixels, as only background pixels are usable to estimate the background. Therefore, the masking of the structures does not need to be very accurate.

The vector of the logarithmic reflectance component r is then recovered as

log(r) = log(i) − Sc .   (3)

The reflectance component R is obtained by reshaping log(r) to matrix notation and transforming it back from logarithmic space. It shows a considerably reduced amount of illumination artifacts and intensity inhomogeneities (Fig. 4, second row).
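To make the fit concrete, a minimal NumPy sketch of this background correction is given below; the image normalization to (0, 1], the coordinate scaling and the function name correct_illumination are our assumptions, not details from the paper.

```python
import numpy as np

def correct_illumination(green, order=4):
    """Sketch of the polynomial background correction described above
    (assumptions: one float green-channel image in (0, 1]; pixels between
    the 15th and 85th percentile are treated as background)."""
    log_i = np.log(green + 1e-6)                     # avoid log(0)
    n, m = green.shape
    y, x = np.mgrid[0:n, 0:m].astype(float)
    x, y = x / m, y / n                              # normalize coordinates
    # Build the 15 monomial basis columns of a 4th-order 2D polynomial.
    cols = [x**p * y**q for p in range(order + 1)
                        for q in range(order + 1 - p)]
    S = np.stack([c.ravel() for c in cols], axis=1)  # N x 15 matrix S
    lo, hi = np.percentile(green, [15, 85])
    w = ((green >= lo) & (green <= hi)).ravel()      # background mask (diag of W)
    # Weighted least squares: fit the surface to background pixels only.
    c, *_ = np.linalg.lstsq(S[w], log_i.ravel()[w], rcond=None)
    log_r = log_i.ravel() - S @ c                    # Eq. (3)
    return np.exp(log_r).reshape(n, m)               # reflectance component R
```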

4.2. Vessel removal

The glaucoma disease is mainly related to the optic nerve fibers and astrocytes. The vessel tree strongly varies among different patients, but the vessel locations and the vessel diameters are minimally affected by glaucoma itself. The proposed appearance-based approach captures these variations of the used training sample set. If images were used without removing the vessels, the analysis would emphasize the vessels and not glaucoma (Meier et al., 2007). To avoid this behavior, the vessel structures in the eye ground are removed by (i) segmentation and subsequent (ii) inpainting of the detected vessel tree.

First, we perform a rough segmentation of the retinal vascular structure. In the literature, several automated vessel detection methods were proposed over the last years (Niemeijer et al., 2004). Most of these techniques exploit the local appearance of the vessel, including (i) edge-based (Al-Diri et al., 2009; Martinez-Perez et al., 2007; Sofka and Stewart, 2006), template-matching (Ricci and Perfetti, 2007; Sofka and Stewart, 2006; Wang et al., 2007) or supervised approaches (Niemeijer et al., 2007; Ricci and Perfetti, 2007; Soares et al., 2006; Staal et al., 2004), as well as combinations of those (Ricci and Perfetti, 2007; Sofka and Stewart, 2006). Based on the initial pixel-wise segmentation, continuative methods ensure the global connectivity and topography of the vessel tree (Grisan et al., 2004; Niemeijer et al., 2009b). We combined edge-based and template-matching techniques to extract the vessel tree from the fundus images. Initially, we use an adaptive thresholding technique, wherein for each pixel the median of its 15×15 neighborhood is taken as a threshold to separate foreground from background. The size of the neighborhood was determined to approximately match the size of the structures, i.e. the vessels. Then, the information of this mask and a Canny edge map (Canny, 1986) is combined. This generated mask is filtered such that small objects are removed and only structures that are bounded by parallel running pairs of edges are kept. These potential vessel parts are validated by gridding and a matched filter technique (Can et al., 1999), where edge templates are applied to the grid points at different orientations and distances. A morphological closing of the valid regions yields the final vessel mask (Fig. 4, third row).
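A rough Python/OpenCV sketch of this combination is shown below; the Canny thresholds, the speckle-size limit and the omission of the grid-based template validation are simplifying assumptions.

```python
import cv2
import numpy as np

def segment_vessels(green):
    """Rough vessel mask following the combination described above
    (assumed parameters: 15x15 median threshold, Canny thresholds 30/80;
    the grid-based template validation step is omitted for brevity)."""
    g = cv2.normalize(green, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    median = cv2.medianBlur(g, 15)                   # local 15x15 median as threshold
    dark = (g < median).astype(np.uint8) * 255       # vessels are darker than background
    edges = cv2.Canny(g, 30, 80)                     # Canny edge map
    edges = cv2.dilate(edges, np.ones((3, 3), np.uint8))
    mask = cv2.bitwise_and(dark, edges)              # keep dark, edge-bounded structures
    # Remove small speckle objects.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    for i in range(1, n):
        if stats[i, cv2.CC_STAT_AREA] < 50:
            mask[labels == i] = 0
    # Morphological closing yields the final mask.
    return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))
```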

Second, an iterative inpainting technique, as used in photo restoration and video processing (Bertalmio et al., 2000; Shen and Chan, 2002), replaces the invalid pixels of the vessel mask by those of the neighborhood in a visually pleasing way. In our implementation, the vessel regions are iteratively filled layer by layer from the outside inwards. The missing pixels become a distance-weighted average of the already valid neighboring values. Finally, we gain a vessel-free photograph of the optic nerve head (Fig. 4, fourth row).
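A sketch of such layer-by-layer inpainting, assuming a small fixed neighborhood radius and inverse-distance weights:

```python
import numpy as np
from scipy import ndimage

def inpaint_vessels(image, vessel_mask, radius=2):
    """Fill masked pixels from the outside inwards as distance-weighted
    averages of already valid neighbors (radius and weighting are
    assumptions, not the paper's exact parameters)."""
    img = image.astype(float).copy()
    invalid = vessel_mask.astype(bool)
    while invalid.any():
        # Outermost layer: invalid pixels touching at least one valid pixel.
        layer = invalid & ndimage.binary_dilation(~invalid)
        for y, x in zip(*np.nonzero(layer)):
            y0, y1 = max(y - radius, 0), y + radius + 1
            x0, x1 = max(x - radius, 0), x + radius + 1
            patch = img[y0:y1, x0:x1]
            valid = ~invalid[y0:y1, x0:x1]
            py, px = np.mgrid[y0:y1, x0:x1]
            d = np.hypot(py - y, px - x)[valid]
            img[y, x] = np.average(patch[valid], weights=1.0 / (d + 1e-6))
        invalid &= ~layer                            # this layer is now filled
    return img
```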

4.3. Optic nerve head normalization

Fig. 4. Image preprocessing eliminates glaucoma-disease-independent variations and allows an appearance-based postprocessing. Row to row: (i) original fundus photos; (ii) illumination-corrected images with (iii) overlaid vessel mask, which is then (iv) inpainted to hide the vessel tree; (v) final normalized optic nerve head image used for feature extraction.

For glaucoma detection, the ONH is one of the most important structures for observing glaucomatous characteristics. As known from face detection (Turk and Pentland, 1991), the proposed appearance-based method requires at least rough point correspondences to gain a reasonable performance. Consequently, we normalize the rim area according to the optic nerve head border within distinct ranges of optic nerve head sizes.

First, the ONH rim has to be determined. In the literature, some methods restrict the segmented ONH rim to be circular or elliptical using e.g. the Hough transform (Blanco et al., 2006; Zhu et al., 2009), while other techniques allow a higher variability of the ONH shape using parametric or free-form deformable models (Chrastek et al., 2005; Li and Chutatape, 2003; Xu et al., 2007). The performance of the proposed segmentation techniques strongly relies on a proper initialization with the location of the ONH: proposed algorithms utilize ONH template-matching, intensity assumptions or the convergence of the vessel tree (Chrastek et al., 2005; Hoover and Goldbaum, 2003; Lowell et al., 2004; Youssif et al., 2008). Besides, supervised techniques were also successfully applied for both purposes (Abramoff et al., 2007; Merickel et al., 2007; Niemeijer et al., 2009a). As we are interested in a circular mapping of the ONH rims, we utilize the segmentation technique of Chrastek et al. (2005), which determines the circular rim contour of the ONH border as the first processing step: assuming the ONH to be the brightest spot in the fundus image, a center estimate is achieved by strong intensity smoothing and subsequent threshold probing. This estimate then restricts the circular Hough transform that is performed on the edge map to find the optic disk border.

Second, a square box with a side of three times the calculated ONH radius, centered at the ONH center, is cropped and then scaled to the preprocessed image P of fixed reference size of 128 × 128 pixels (Fig. 4, last row).
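As a small illustrative helper (the function name, the OpenCV resizing and the clipping at the image border are assumptions):

```python
import cv2

def normalize_onh(image, cx, cy, radius, out_size=128):
    """Crop a square of three times the detected ONH radius around the
    ONH center (assumed to come from the Hough-based step above) and
    scale it to the 128 x 128 reference size."""
    half = int(round(1.5 * radius))                  # box side = 3 * radius
    n, m = image.shape[:2]
    y0, y1 = max(cy - half, 0), min(cy + half, n)
    x0, x1 = max(cx - half, 0), min(cx + half, m)
    crop = image[y0:y1, x0:x1]
    return cv2.resize(crop, (out_size, out_size), interpolation=cv2.INTER_AREA)
```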

This procedure ensures images of the same dimension together with circularly mapped ONH rims, which is required for a reasonable appearance-based feature computation.

5. Feature extraction

The performed image preprocessing emphasizes glaucomatous variations among the images and allows a generic, appearance-based feature extraction. The high-dimensional preprocessed images P are statistically compressed by PCA to gain compact and meaningful features f. To capture complementary image information, we propose three different generic image representations with different spatial and frequency resolution for feature extraction.

5.1. Pixel intensity values

Like the standard appearance-based approach (Turk and Pentland, 1991), we serialized the preprocessed 2-dimensional images P ∈ R^{128×128} to an image vector p ∈ R^{128·128}. This is then decomposed by a precalculated decomposition matrix to a low-dimensional feature vector f_raw. According to Turk and Pentland (1991), this decomposition matrix was determined by an implicit computation of eigenvectors from the images of a training set. We have shown (Meier et al., 2007) that thirty principal components already capture at least 95% of the data variation. Therefore, the feature vector from raw pixel intensities was restricted to a dimensionality of f_raw ∈ R^30.
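In modern terms this corresponds to a PCA fitted on the serialized training images; the file name train_images.npy is a hypothetical container for the preprocessed images:

```python
import numpy as np
from sklearn.decomposition import PCA

# Serialize the 128x128 preprocessed images and keep the first 30
# principal components (the dimensionality reported above).
train = np.load("train_images.npy")            # assumed array, shape (n, 128, 128)
X = train.reshape(len(train), -1)              # image vectors p, shape (n, 16384)
pca = PCA(n_components=30).fit(X)              # learns the decomposition matrix
f_raw = pca.transform(X)                       # 30-dimensional feature per image
print(pca.explained_variance_ratio_.sum())     # fraction of preserved variation
```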

5.2. FFT coefficients

In contrast to the spatial pixel intensities, Fourier coefficients capture the image's global frequency information and are computable from an image by the Fourier transform (FT). We apply a discrete version of the FT, the Fast Fourier Transform (FFT), on the preprocessed image P and compress the real response of its Fourier coefficients by PCA to the feature f_fft ∈ R^30.
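A corresponding sketch for this feature type, under the same assumptions as above:

```python
import numpy as np
from sklearn.decomposition import PCA

# FFT feature type: take the real part of the 2-D FFT of each
# preprocessed image and compress it with PCA to 30 dimensions.
train = np.load("train_images.npy")            # assumed array, shape (n, 128, 128)
F = np.fft.fft2(train, axes=(1, 2)).real       # real response of the coefficients
f_fft = PCA(n_components=30).fit_transform(F.reshape(len(F), -1))
```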

5.3. B-spline coefficients

In addition to the described feature types, B-spline coefficients decode spatial frequency information, as they are defined by piecewise polynomials over a pixel neighborhood (Ibanez et al., 2005; Unser, 1999; Unser et al., 1993a,b).

The discrete input image P of size n × m with x = {1, . . . , n} and y = {1, . . . , m} is transformed to B-spline coefficients c(k, l) with k = {1, . . . , n} and l = {1, . . . , m}:

p(x, y) = Σ_{k=0}^{n} Σ_{l=0}^{m} c(k, l) β^d(x − k, y − l) ,   (4)

with d denoting the degree of the central B-spline β^d(x, y). The (d + 1)-fold convolution of the rectangular B-spline β^0 generates splines of higher degree, which are symmetric and bell-shaped (Unser, 1999):

β^0(x, y) = 1     if ‖(x, y)^T‖_2 < 1/2 ,
            1/2   if ‖(x, y)^T‖_2 = 1/2 ,   (5)
            0     otherwise ,

β^d(x, y) = (β^0 ∗ β^0 ∗ . . . ∗ β^0)(x, y) ,   convolved (d + 1) times.   (6)

Because the number of input pixels p(x, y) is equal to the number of used spline coefficients, equation (4) defines a projection and no image information is lost.

Our processing calculates the coefficients c(k, l) of B-splines of degree d = 4 from the preprocessed images P. The coefficients C are subsequently compressed by PCA and denoted as features f_spline ∈ R^30.
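SciPy's spline prefilter computes coefficients of this kind; whether its boundary handling matches the implementation used in the paper is an assumption:

```python
import numpy as np
from scipy import ndimage
from sklearn.decomposition import PCA

# B-spline feature type: the spline prefilter yields the coefficients
# c(k, l) of a degree-4 B-spline representation of each image, which
# are then PCA-compressed to 30 dimensions.
train = np.load("train_images.npy")            # assumed array, shape (n, 128, 128)
C = np.stack([ndimage.spline_filter(img, order=4) for img in train])
f_spline = PCA(n_components=30).fit_transform(C.reshape(len(C), -1))
```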

6. Classification

In the final classification step (Fig. 3), a glaucoma probability with the associated class label, such as "glaucoma" or "not glaucoma", is computed from the three different feature types f; this probability will be denoted as the Glaucoma Risk Index (GRI).

6.1. Classifier

In general, classifiers achieve good results if their underlying separation model fits well to the distribution of the sample data.

In our previous investigations (Bock et al., 2007), we evaluated the glaucoma detection performance for three different kinds of classifiers in an appearance-based pipeline configuration. The Support Vector Machine (SVM) classifier shows similar results compared to other classifiers such as the naive Bayes classifier or the k-nearest-neighbor classifier. In this work, we decided to use the SVM, as it is known to be less prone to a sparsely sampled feature space, which is the case here, than the other classifiers. The SVM classifier determines a maximum-margin, soft hyperplane that best separates the considered classes in a kernel-transformed feature space (Chen et al., 2005; Schölkopf et al., 2000).

A further improvement of the classification performance could not be verified in our elaborate evaluations (Bock et al., 2007), which additionally utilized different classifier enhancement methods like AdaBoost or attribute selection in combination with the SVM classifier. Therefore, a stand-alone SVM classification scheme is used for the proposed classification purpose.

6.2. Two-stage classification combines different feature types

In order to benefit from the complementary information captured by the three proposed feature types, they have to be combined. We propose a two-stage classification scheme that synthesizes one final result from them.

In the first stage, each feature type (f_raw, f_fft, f_spline) obtained from the feature extraction is classified separately. A probabilistic SVM classifier determines a probability p(G) for each normalized, PCA-compressed feature (e.g. p_raw(G) from the raw pixel intensities f_raw).

In the second stage, these probabilities are concatenated into one low-dimensional, common feature space by composing a feature vector:

f_p = ( p_raw(G), p_fft(G), p_spline(G) )^T .   (7)

A probabilistic SVM now processes this generated vector of probabilities as feature f_p and outputs one common glaucoma probability p(G).
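With scikit-learn, whose SVC wraps libSVM, the two-stage scheme could be sketched as follows; training the second stage on out-of-fold probabilities is our assumption, as the paper does not detail how the stages are fitted:

```python
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

def two_stage_gri(features, y):
    """One probabilistic SVM per feature type, then a second SVM on the
    stacked probabilities. `features` is an assumed dict, e.g.
    {"raw": f_raw, "fft": f_fft, "spline": f_spline}."""
    stage1 = []
    for f in features.values():
        clf = make_pipeline(MinMaxScaler(), SVC(kernel="rbf", probability=True))
        # Out-of-fold probabilities avoid training the second stage on
        # predictions the first stage made on its own training data.
        p = cross_val_predict(clf, f, y, cv=5, method="predict_proba")[:, 1]
        stage1.append(p)
    f_p = np.column_stack(stage1)                  # Eq. (7)
    stage2 = SVC(kernel="rbf", probability=True).fit(f_p, y)
    return stage2.predict_proba(f_p)[:, 1]        # the Glaucoma Risk Index
```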

7. Evaluation

Based on the presented fully automated processing procedure illustrated in Fig. 3, we achieved a novel probabilistic index that we refer to as the Glaucoma Risk Index (GRI). In order to quantify its ability to detect glaucoma from color fundus images, the performance of GRI is first characterized in more detail by some key figures and a reliability analysis. Furthermore, its performance is compared to (i) that of glaucoma experts and (ii) medically relevant and well established glaucoma parameters, such as the Glaucoma Probability Score (GPS) of the HRT III device (Swindale et al., 2000) and the cup-to-disk ratio (Betz et al., 1982).

Our evaluations showed that the proposed two-stage classification scheme outperforms configurations using one common, single feature space and that it is competitive with common glaucoma parameters.

7.1. Image data set and setup

The used data set of fundus images was randomly selected from the Erlangen Glaucoma Registry (EGR), which contains more than 2,000 records of multi-modal fundus images from a long-term screening study. The gold standard diagnosis was given by an experienced ophthalmologist based on a complete ophthalmological examination with anamnesis, ophthalmoscopy, visual field test, intraocular pressure (IOP), and scanning laser tomography (Heidelberg Retina Tomograph, HRT II). The color fundus photos were acquired by a Kowa NonMyd alpha digital fundus camera with an optic nerve head centered 22° field of view and an image size of 1600 × 1216 pixels (Fig. 9).

For the performance evaluation of the GRI, we used a data set of 575 ONH-centered color fundus images from 358 persons with normal-sized ONHs (average vertical ONH diameter: 1.8 ± 0.22 mm). The mean age was 56.1 ± 11.4 years, 52.2% female. The samples had an unambiguous gold standard diagnosis with 239 glaucomatous images and 336 normals. For comparison, the corresponding topography images were processed with the HRT III device to calculate the GPS. The linear cup-to-disk ratio was also extracted from the HRT topography images based on the manually outlined optic disk border. A subset of 240 images (160 normals, 80 glaucomatous) from the data set was additionally evaluated by two ophthalmologists experienced in diagnosing glaucoma. The findings were made on the fundus images exclusively, without any anamnesis or further image data, to fairly rank the ability of the fundus-image-based GRI.

We performed a 5-fold cross-validation setup to gain a robust statistical evaluation. Each fold contained nearly the same number of subjects and a similar ratio of glaucomatous cases versus controls. As SVM classifier, the libSVM implementation (EL-Manzalawy and Honavar, 2005) was utilized with a non-linear radial basis function kernel and probabilistic output. For the first-stage classification, the PCA-compressed features f were normalized beforehand by min-max normalization. The parameters of the SVM were manually optimized based on the classification accuracy of the first fold.
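Such a subject-grouped, class-balanced split can be approximated as sketched below; the .npy file names are hypothetical, and the paper's manual parameter tuning on the first fold is not reproduced:

```python
import numpy as np
from sklearn.model_selection import StratifiedGroupKFold
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

# 5 folds with similar class ratios and no subject split across folds.
X = np.load("features.npy")                    # assumed combined features
y = np.load("labels.npy")                      # assumed 0/1 diagnoses
subject_ids = np.load("subjects.npy")          # assumed per-image subject ids

accuracies = []
cv = StratifiedGroupKFold(n_splits=5)
for train_idx, test_idx in cv.split(X, y, groups=subject_ids):
    scaler = MinMaxScaler().fit(X[train_idx])  # min-max normalization
    clf = SVC(kernel="rbf", probability=True)  # libSVM via scikit-learn
    clf.fit(scaler.transform(X[train_idx]), y[train_idx])
    accuracies.append(clf.score(scaler.transform(X[test_idx]), y[test_idx]))
print(np.mean(accuracies))
```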

For the reliability analysis, we successively captured images of 17 selected subjects. The consecutive acquisitions were done on the same day, but with sufficient relaxation time in between to avoid a decline of image quality due to miosis. For each eye, three images were acquired and processed individually, which are referred to as one image series. Because of bad image quality, seven of the 102 images had to be excluded from the analysis as they showed obvious processing failures such as a missed vessel segmentation or a wrong optic nerve head localization. Finally, the analysis was done on 95 images linked to 32 eyes.

7.2. Properties of the proposed Glaucoma Risk Index

For the novel GRI, we proposed a two-stage classification scheme to combine the different feature types (f_raw, f_fft, f_spline). In principle, two alternative configurations are also possible: (i) Single-feature classification: instead of combining the three different feature types, only the single feature type that provides the best detection performance is utilized. (ii) Single-stage classification: to combine different features, they are concatenated into one single high-dimensional feature space, which is then reduced by attribute selection to an expedient feature dimensionality that can directly be used for classification.

Table 1
Performance of the proposed classifiers indicated by classification accuracy, area under the Receiver Operating Characteristic curve (AUC), sensitivity and specificity (%) for detecting glaucoma: the performance of the proposed two-stage Glaucoma Risk Index (GRI) is compared to (i) single-feature classification (f_raw, f_fft, f_spline), (ii) a single-stage classification setup and (iii) human experts and established quantitative glaucoma parameters, i.e. the Glaucoma Probability Score (GPS) and the cup-to-disk ratio (CDR). The p-values denote the statistical significance for a different ROC compared to GRI: * p ≤ 0.05.

Setup         Accuracy  AUC  Sensitivity  Specificity  p-value
GRI           80        88   73           85           –
f_raw         80        87   69           88           0.07
f_fft         79        86   71           85           0.01*
f_spline      81        88   69           90           0.46
single-stage  79        86   70           85           0.08
expert 1      83        –    54           97           –
expert 2      83        –    51           98           –
GPS           78        87   88           72           0.96
CDR           68        88   93           50           0.51

In this section, the properties of the GRI are characterized in more detail by key numbers and a reliability analysis in order to show the advantages of the proposed two-stage scheme compared to alternative classification setups. For the calculation of accuracy, sensitivity and specificity, the decision threshold was fixed at the 0.5 level.

7.2.1. Performance figures

The Receiver Operating Characteristic (ROC) curve shows the classifier performance for different decision thresholds. Therefore, it provides information on how to tune the decision threshold in order to achieve the best tradeoff between sensitivity and specificity for the desired application. A non-parametric test based on Mann-Whitney U statistics (DeLong et al., 1988; Vergara et al., 2008) is used to show the significance of the ROC curve differences (statistical significance level: p ≤ 0.05).

Fig. 5. Receiver operating characteristic (ROC) curves for detecting glaucoma. In comparison to the ROC curve of the single-stage classification (- -), the ROC curve of GRI (—) shows higher sensitivities for specificities lower than 0.8. This improvement might be due to the repeated usage of class information, which allows the determination of boundaries that better separate the two classes.

The proposed two-stage GRI gains an area under the ROC curve (AUC) of 88% (Table 1) and a sensitivity of 73% at a specificity of 85% (Fig. 5).

Comparing the single-feature classifications among themselves, the classifications on f_fft or f_raw, with an AUC of 86% or 87% respectively, nearly reach the performance on f_spline with an AUC of 88%. While the ROC curve from f_spline (p = 0.46) is comparable to that of GRI, the remaining single-feature classifications show at least a trend towards significantly inferior performance (p ≤ 0.07).

The single-stage classification gains an AUC of 86% and a sensitivity of 70%. This setup is not competitive with GRI, as the ROC curves of the two setups are nearly significantly different (p ≤ 0.08). Especially for high specificities, the sensitivity decreases, as illustrated by the ROC curves shown in Fig. 5.

7.2.2. Distribution of glaucoma probabilities

The distribution of the gained glaucoma probabilities P(G) (specified by the histogram of p(G) from the sample set) expresses the separability of the two different classes and the classifier's certainty (Fig. 6). As the data set (Section 7.1) consists of cases with a definitive diagnosis, and thus without any suspicious cases, we expect an undoubtful classification with a high confidence level.

Fig. 6. The probability distributions P(G) of control and glaucomatous cases. The two-stage GRI setup (a) shows two compact distributions with distinct peaks at the borders. This reflects the definitive disease stages of our samples. The two other configurations, namely single-feature classification represented by f_spline (b) and single-stage classification (c), are characterized by undesired, widely scattered distributions.

With a high confidence level (i.e., p(G) ≤ 0.1 or p(G) ≥ 0.9), 58.6% of the controls and 48.3% of the glaucomatous cases are correctly classified with GRI. The histogram of the calculated glaucoma probabilities p(G) (Fig. 6(a)) illustrates the desired distinct peaks at the borders. As representative for the single-feature classification setup, we show the histogram for the feature f_spline, which provides a classification performance comparable to GRI. However, a correct classification at a high confidence level is achieved in only 31.5% of the controls and 23.3% of the glaucomatous cases (Fig. 6(b)). Similar results are gained in case of a common single-stage classification, where a high confidence level is only reached for 33.6% of the controls and 17.8% of the glaucomatous cases (Fig. 6(c)). These lower confidence levels are also reflected by the histograms of the calculated glaucoma probabilities p(G), which do not show the desired accumulation of instances around p(G) = 0 for the control class or p(G) = 1 for glaucomatous cases, respectively. Although the majority of clearly diagnosed subjects were correctly classified, the distribution is widely scattered.

In conclusion, the proposed two-stage classification assembly of the GRI shows a superior performance compared to the alternative configurations. Only for f_spline was the classification performance comparable. However, only the GRI provides a high classification confidence level for the majority of the instances, which well reflects the definitive stages of disease present in our data set. For this reason, we consider the two-stage classification setup of GRI as the standard procedure, which is further validated in a reliability analysis and in comparison with other quantitative glaucoma parameters.

7.2.3. Reliability

Beside the system performance, the reliability of the proposed algorithm is also a relevant issue. Only if the system gains similar results for the same patient from different acquisitions is it usable for reliable screening applications.

As a quantitative reliability measure, we report the maximal absolute deviation from the median of the GRIs calculated from the three images (i.e. one image series) acquired from the same subject and eye.
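Per series, this measure amounts to the following trivial computation (the example values are hypothetical):

```python
import numpy as np

def series_deviation(gri_values):
    """Maximal absolute deviation of one eye's GRI series from its median."""
    gri = np.asarray(gri_values, dtype=float)
    return np.max(np.abs(gri - np.median(gri)))

print(series_deviation([0.12, 0.15, 0.13]))   # hypothetical series -> 0.02
```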

Figure 7 shows the histogram of the deviations with respect to the glaucoma probability, which indicates that for the majority of the image series the intra-subject deviation is low. In 19 of 32 series (59%) the maximal deviation was less than 2%, and the largest reported deviation was 11% for one single series. Based on these reported deviations, we consider the processing procedure and classification scheme reliable.

Fig. 7. Reliability measure as maximal absolute deviation from the median of the GRIs calculated from fundus images acquired in series. The occurrence of deviations illustrated by the histogram indicates that for the majority of the samples the intra-subject deviation is less than 2%.

7.3. Comparing GRI to experts and glaucoma parameters

Based on the preceding evaluations, the GRI gains a reliable glaucoma detection performance with an AUC of 88%. A comparison of GRI with (i) the rating of human experts on sole fundus images and (ii) established quantitative glaucoma parameters in the following section shows a competitive detection performance.

7.3.1. Expert performance

In contrast to the usual clinical glaucoma diagnosis utilizing medical history, different imaging modalities and preceding examinations, the glaucoma experts investigated the color fundus images on their own. This setup ensures a fair performance comparison, as it provides the experts the same information that is also utilized by GRI.

Both experts classified 83% of the instances correctly, which is slightly superior to GRI (Table 1). Their performance is highly specific while still reaching a moderate sensitivity above 50%. By putting these values in relation to the ROC curve (Fig. 8), it is obvious that the human experts outperform the GRI and also the GPS, however at a moderate level of sensitivity.

Based on the evaluation of digital color fundus images, which does not conform to the daily clinical workflow, our proposed GRI almost reaches the performance of the human experts.

Fig. 8. The receiver operating characteristic (ROC) curve of GRI (—) in comparison: (i) human experts (plotted as single points) on single fundus images without medical history are very specific, while the sensitivity reaches a moderate level. (ii) The GRI shows a competitive performance compared to the Glaucoma Probability Score (GPS) and the linear cup-to-disk ratio (CDR), in particular for a specificity around 0.8.

7.3.2. Glaucoma Probability Score (GPS)

The HRT III software offers a module to determine the GPS from 2.5-dimensional topographic images of the ONH gained by the scanning laser tomography technique. The score is established in the clinical workflow and was validated by several trials (Alencar et al., 2008; Burgansky-Eliash et al., 2007). The GPS is an objective measurement, as it does not depend on manual outlining of the optic disk border, and it can be considered a de facto standard in automated glaucoma detection from HRT topography images.

On our data set, the GPS achieves a classification accuracy of 78% and an AUC of 87% (Table 1). For glaucoma detection, we get a sensitivity of 88% at a specificity of 72%.

The AUC of the GPS is similar to that of GRI (p = 0.96). This is also reflected by the ROC curves (Fig. 8), which behave quite similarly, especially for a specificity around 0.8.

Although GRI and GPS are calculated from different imaging modalities, they show a comparable detection performance.

7.3.3. Cup-to-disk ratio

The cup-to-disk ratio measures the ratio of the diameter of the cup to the diameter of the entire disk. Commonly, this ratio is determined by planimetry (Betz et al., 1982) from color fundus images by outlining the optic disk border and the cup. The HRT provides an equivalent ratio, the linear cup-to-disk ratio, from topography images, which can be determined from the manually outlined optic disk rim. In order to avoid the manual outlining of the cup, we compare the GRI to the linear cup-to-disk ratio of the HRT.

On our data set, the cup-to-disk ratio achieves a similar AUC of 88% compared to GRI; however, its accuracy drops to 68% assuming a decision threshold of 0.5 (Table 1). The ROC curves of the cup-to-disk ratio and GRI (Fig. 8) are comparable with p = 0.51.

In summary, the GRI provides a detection performance on color fundus photographs that is competitive with human experts on single fundus images, the topography-based GPS and the cup-to-disk ratio. In particular for specificities around 80%, the markers show a competitive sensitivity. Thus, the novel GRI provides a new tool to automatically detect glaucoma with the low-priced and widespread fundus camera.

8. Discussion

A reliable and competitive scheme for automatic glaucoma detection was presented. The majority of the subjects were correctly classified, as already shown. To give a better impression of the classification outcome, Fig. 9 shows examples of correctly classified and misclassified fundus images.

Figure 9 (a)-(c) illustrates correctly classified controls characterized by a typical small cup. In contrast, an expanded cup characterizes the correctly classified images of glaucomatous eyes (Fig. 9 (d)-(f)), which corresponds to the common glaucoma disease pattern.

Some images of control and glaucomatous ONHs were misclassified. In case of the misclassified controls, shown in Fig. 9 (g)-(i), we suppose that the GRI calculation procedure might be misled by the pale neuroretinal rim or an increased disk size with increased cup area. The misclassified glaucomatous cases (Fig. 9 (j)-(l)) are marked by a low contrast between rim and cup and a greater optic disk size.

As described in Section 4.3, we perform a downsampling of larger ONHs during the normalization step to gain well mapped tissue structures, which are required for appearance-based recognition. We are aware that this normalization can bias the pure glaucomatous variations and may decrease the classification performance. To account for this effect, we utilized a data set of limited ONH diameter variations (average vertical ONH diameter: 1.8 ± 0.22 mm) and of one race. Considering the wrongly classified images (Fig. 9 (g)-(l)), it seems as if the GRI is sensitive to larger optic disk diameters. However, the GRI classification is able to properly handle different optic disk sizes, as the correctly classified images (Fig. 9 (a)-(f)) show highly varying optic disk diameters. In order to apply the GRI in a widespread study, we propose to generate a separate model for each cluster of different ONH diameters and races.

Fig. 9. Example images automatically assessed by GRI: (a)-(c) correctly classified controls, (d)-(f) correctly classified glaucomatous cases, (g)-(i) misclassified controls, (j)-(l) misclassified glaucomatous cases.

As the GRI procedure is applied on color fundus images, the index can only capture glaucomatous signs that are visible in these images. Thus, it can only provide first indicators for the multilayered and progressive glaucoma disease. However, the digital color fundus camera is much cheaper than other ophthalmic imaging devices such as HRT or OCT. In the future, the GRI can provide a first low-priced glaucoma indication in order to possibly reduce the number of false positives misrouted to the cost-intensive, elaborate clinical investigations.

The proposed appearance-based technique, known from face recognition, statistically analyzes the entire image pixel pattern to derive the competitive GRI and does not rely on accurately determined geometric parameters. Due to this approach, the GRI only provides a general statement but does not indicate conspicuous retinal regions, which might hinder its acceptance in a clinical environment.

The required preprocessing of the fundus images performs a rough determination of the retinal vessel tree and the ONH rim for normalization. An accurate segmentation of the cup rim, as needed for the automated determination of the cup-to-disk ratio, can be omitted. Based on the shown high reliability, we conclude that the unavoidable variations originating from segmentation only lead to negligible variations of the GRI.

9. Conclusion

Based on color fundus photos of the eye, the presented procedure for robust and reliable appearance-based feature extraction allows the automated quantification of the probability of suffering from the glaucoma disease.

Due to the glaucoma-specific preprocessing and the appropriate combination of generic features, we are able to successfully apply the generic data-driven approach to this medical classification task. The proposed two-stage classification scheme helps to combine classifiers of different image inputs. The evaluation showed that this assembly improves the certainty of the classification and makes the final decision more robust. Established methods reduce the feature dimensionality early by using parametric models or structural measurements. In contrast, we compress the whole image data into discriminating features.

The obtained Glaucoma Risk Index (GRI) reached a classification accuracy of 80% in a two-class problem (control vs. glaucomatous eyes), taking a gold standard diagnosis by ophthalmologists as a basis. The AUC was 88%, with a sensitivity of 73% at a specificity of 85% in detecting glaucoma. The results in our evaluation with 575 images were comparable to the commercial and established Glaucoma Probability Score (GPS) of the HRT III and the cup-to-disk ratio.

Overall, this contribution provides a competitive, reliable and probabilistic glaucoma risk index from images of the low-cost digital color fundus camera, as its performance is comparable to medically relevant glaucoma parameters. This proves that the data-driven GRI is able to extract relevant glaucoma features. In the future, it might give a first, low-cost glaucoma indication to route patients to more elaborate clinical investigations only if necessary.

Acknowledgements

This contribution was supported by the German National Science Foundation (DFG) in the context of the Collaborative Research Center 539, subproject A4 (SFB 539 A4), the German Academic Exchange Service (DAAD, Germany) and the Hungarian Scholarship Board (MOB, Hungary). The authors gratefully acknowledge funding of the Erlangen Graduate School in Advanced Optical Technologies (SAOT) by the DFG in the framework of the excellence initiative. R. Bock was supported by the DFG and SAOT. J. Meier is a stipendiary of the International Max Planck Research School for Optics and Imaging, Erlangen (IMPRS). L. G. Nyúl is a research fellow of the Alexander von Humboldt Foundation (Germany).


References

Abramoff, M. D., Alward, W. L. M., Greenlee, E. C., Shuba, L., Kim, C. Y., Fingert, J. H., Kwon, Y. H., 2007. Automated segmentation of the optic disc from stereo color photographs using physiologically plausible features. Invest Ophthalmol Vis Sci 48 (4), 1665–1673.

Al-Diri, B., Hunter, A., Steel, D., 2009. An active contour model for segmenting and measuring retinal vessels. IEEE Trans Med Imaging 28 (9), 1488–1497.

Alencar, L. M., Bowd, C., Weinreb, R. N., Zangwill, L. M., Sample, P. A., Medeiros, F. A., 2008. Comparison of HRT-3 glaucoma probability score and subjective stereophotograph assessment for prediction of progression in glaucoma. Invest Ophthalmol Vis Sci 49 (5), 1898–1906.

Bertalmio, M., Sapiro, G., Caselles, V., Ballester, C., 2000. Image inpainting. In: Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH 2000, New Orleans, USA. pp. 417–424.

Betz, P., Camps, F., Collignon-Brach, J., Lavergne, G., Weekers, R., 1982. Biometric study of the disc cup in open-angle glaucoma. Graefes Arch Clin Exp Ophthalmol 218 (2), 70–74.

Blanco, M., Penedo, M. G., Barreira, N., Penas, M., Carreira, M. J., 2006. Localization and extraction of the optic disc using the fuzzy circular Hough transform. Lecture Notes in Computer Science 4029, 712–721.

Bledsoe, W. W., 1966. The model method in facial recognition. Tech. rep., Panoramic Research Inc., Palo Alto, CA, Rep. PRI:15.

Bock, R., Meier, J., Michelson, G., Nyúl, L. G., Hornegger, J., 2007. Classifying glaucoma with image-based features from fundus photographs. In: 29th Annual Symposium of the German Association for Pattern Recognition, DAGM. Lecture Notes in Computer Science (LNCS). Vol. 4713/2007. Berlin, pp. 355–365.

Burgansky-Eliash, Z., Wollstein, G., Bilonick, R. A., Ishikawa, H., Kagemann, L., Schuman, J. S., 2007. Glaucoma detection with the Heidelberg Retina Tomograph 3. Ophthalmology 114 (3), 466–471.

Can, A., Shen, H., Turner, J. N., Tanenbaum, H. L., Roysam, B., 1999. Rapid automated tracing and feature extraction from retinal fundus images using direct exploratory algorithms. IEEE Trans Inf Technol Biomed 3 (2), 125–138.

Canny, J. F., 1986. A computational approach to edge detection. IEEE Trans Pattern Anal Mach Intell 8 (6), 679–698.

Chauhan, B. C., Blanchard, J. W., Hamilton, D. C., LeBlanc, R. P., Mar 2000. Technique for detecting serial topographic changes in the optic disc and peripapillary retina using scanning laser tomography. Invest Ophthalmol Vis Sci 41 (3), 775–782.

Chen, P. H., Lin, C. J., Schölkopf, B., 2005. A tutorial on ν-support vector machines. Applied Stochastic Models in Business and Industry 21 (2), 111–136.

Chrastek, R., Wolf, M., Donath, K., Niemann, H., Paulus, D., Hothorn, T., Lausen, B., Lammer, R., Mardin, C., Michelson, G., 2005. Automated segmentation of the optic nerve head for diagnosis of glaucoma. Med Image Anal 9 (4), 297–314.

DeLong, E., DeLong, D., Clarke-Pearson, D., 1988. Comparing the areas under two or more correlated receiver operating characteristic curves: A nonparametric approach. Biometrics 44, 837–845.

EL-Manzalawy, Y., Honavar, V., 2005. WLSVM: Integrating LibSVM into Weka Environment. Software available at http://www.cs.iastate.edu/~yasser/wlsvm.

Fernandez, D. C., Salinas, H. M., Puliafito, C. A., 2005. Automated detection of retinal layer structures on optical coherence tomography images. Opt Express 13 (25), 10200–10216.

Greaney, M. J., Hoffman, D. C., Garway-Heath, D. F., Nakla, M., Coleman, A. L., Caprioli, J., 2002. Comparison of optic nerve imaging methods to distinguish normal eyes from those with glaucoma. Invest Ophthalmol Vis Sci 43 (1), 140–145.

Grisan, E., Pesce, A., Giani, A., Foracchia, M., Ruggeri, A., 2004. A new tracking system for the robust extraction of retinal vessel structure. In: Engineering in Medicine and Biology Society, 2004. IEMBS '04. 26th Annual International Conference of the IEEE. Vol. 1. pp. 1620–1623.

Hoover, A., Goldbaum, M., 2003. Locating the optic nerve in a retinal image using the fuzzy convergence of the blood vessels. IEEE Trans Med Imaging 22 (8), 951–958.

Hoover, A., Kouznetsova, V., Goldbaum, M., 2000. Locating blood vessels in retinal images by piecewise threshold probing of a matched filter response. IEEE Trans Med Imaging 19 (3), 203–210.

Ibanez, L., Schroeder, W., Ng, L., Cates, J., 2005. The ITK Software Guide. Kitware, Inc. ISBN 1-930934-15-7, http://www.itk.org/ItkSoftwareGuide.pdf, 2nd Edition.


Klein, B. E., Klein, R., Sponsel, W. E., Franke, T., Cantor, L. B., Martone, J., Menage, M. J., 1992. Prevalence of glaucoma. The Beaver Dam Eye Study. Ophthalmology 99 (10), 1499–1504.

Li, H., Chutatape, O., 2003. Boundary detection of optic disk by a modified ASM method. Pattern Recognition 36 (9), 2093–2104.

Lin, S. C., Singh, K., Jampel, H. D., Hodapp, E. A., Smith, S. D., Francis, B. A., Dueker, D. K., Fechtner, R. D., Samples, J. S., Schuman, J. S., Minckler, D. S., 2007. Optic nerve head and retinal nerve fiber layer analysis: a report by the American Academy of Ophthalmology. Ophthalmology 114 (10), 1937–1949.

Lowell, J., Hunter, A., Steel, D., Basu, A., Ryder, R., Fletcher, E., Kennedy, L., 2004. Optic nerve head segmentation. IEEE Trans Med Imaging 23 (2), 256–264.

Martinez-Perez, M. E., Hughes, A. D., Thom, S. A., Bharath, A. A., Parker, K. H., 2007. Segmentation of blood vessels from red-free and fluorescein retinal images. Med Image Anal 11 (1), 47–61.

Medeiros, F. A., Sample, P. A., Weinreb, R. N., 2004a. Frequency doubling technology perimetry abnormalities as predictors of glaucomatous visual field loss. Am J Ophthalmol 137 (5), 863–871.

Medeiros, F. A., Zangwill, L. M., Bowd, C., Weinreb, R. N., 2004b. Comparison of the GDx VCC scanning laser polarimeter, HRT II confocal scanning laser ophthalmoscope, and Stratus OCT optical coherence tomograph for the detection of glaucoma. Arch Ophthalmol 122 (6), 827–837.

Meier, J., Bock, R., Michelson, G., Nyúl, L. G., Hornegger, J., 2007. Effects of preprocessing eye fundus images on appearance based glaucoma classification. In: 12th International Conference on Computer Analysis of Images and Patterns, CAIP. Lecture Notes in Computer Science (LNCS). Vol. 4673/2007. Berlin, pp. 165–173.

Merickel, M. B. J., Abramoff, M. D., Sonka, M., Wu, X., 2007. Segmentation of the optic nerve head combining pixel classification and graph search. Proceedings of SPIE 6512 (1), 651215.

Michelson, G., Warntges, S., Hornegger, J., Lausen, B., 2008. The papilla as screening parameter for early diagnosis of glaucoma. Dtsch Arztebl Int 105 (34-35), 583–589.

Miglior, S., Guareschi, M., Albe', E., Gomarasca, S., Vavassori, M., Orzalesi, N., 2003. Detection of glaucomatous visual field changes using the Moorfields regression analysis of the Heidelberg retina tomograph. Am J Ophthalmol 136 (1), 26–33.

Narasimha-Iyer, H., Can, A., Roysam, B., Stewart, C. V., Tanenbaum, H. L., Majerovics, A., Singh, H., 2006. Robust detection and classification of longitudinal changes in color retinal fundus images for monitoring diabetic retinopathy. IEEE Trans Biomed Eng 53 (6), 1084–1098.

Niemeijer, M., Abramoff, M., van Ginneken, B., 2007. Segmentation of the optic disc, macula and vascular arch in fundus photographs. IEEE Trans Med Imaging 26 (1), 116–127.

Niemeijer, M., Abramoff, M. D., van Ginneken, B., 2009a. Fast detection of the optic disc and fovea in color fundus photographs. Med Image Anal 13 (6), 859–870.

Niemeijer, M., Staal, J., van Ginneken, B., Loog, M., Abramoff, M. D., 2004. Comparative study of retinal vessel segmentation methods on a new publicly available database. In: Proceedings of SPIE. Vol. 5370. p. 648.

Niemeijer, M., van Ginneken, B., Abramoff, M. D., 2009b. A linking framework for pixel classification based retinal vessel segmentation. In: Proceedings of SPIE. Vol. 7262. p. 726216.

Ricci, E., Perfetti, R., 2007. Retinal blood vessel segmentation using line operators and support vector classification. IEEE Trans Med Imaging 26 (10), 1357–1365.

Schölkopf, B., Smola, A. J., Williamson, R. C., Bartlett, P. L., 2000. New support vector algorithms. Neural Comput 12 (5), 1207–1245.

Sehi, M., Guaqueta, D. C., Feuer, W. J., Greenfield, D. S., 2007. Scanning laser polarimetry with variable and enhanced corneal compensation in normal and glaucomatous eyes. Am J Ophthalmol 143 (2), 272–279.

Sharma, P., Sample, P. A., Zangwill, L. M., Schuman, J. S., Nov 2008. Diagnostic tools for glaucoma detection and management. Surv Ophthalmol 53 Suppl 1, S17–S32.

Shen, J., Chan, T. F., 2002. Mathematical models for local nontexture inpaintings. SIAM J Appl Math 62 (3), 1019–1043.

Soares, J. V., Leandro, J. J., Cesar Jr., R. M., Jelinek, H. F., Cree, M. J., 2006. Retinal vessel segmentation using the 2-D Gabor wavelet and supervised classification. IEEE Trans Med Imaging 25 (9), 1214–1222.

Sofka, M., Stewart, C. V., 2006. Retinal vessel centerline extraction using multiscale matched filters, confidence and edge measures. IEEE Trans Med Imaging 25 (12), 1531–1546.

Staal, J., Abramoff, M., Niemeijer, M., Viergever, M., van Ginneken, B., 2004. Ridge-based vessel segmentation in color images of the retina. IEEE Trans Med Imaging 23 (4), 501–509.

Swindale, N. V., Stjepanovic, G., Chin, A., Mikelberg, F. S., 2000. Automated analysis of normal and glaucomatous optic nerve head topography images. Invest Ophthalmol Vis Sci 41 (7), 1730–1742.

Turk, M., Pentland, A., 1991. Eigenfaces for recognition. J Cogn Neurosci 3 (1), 71–86.

Unser, M., 1999. Splines: A perfect fit for signal and image processing. IEEE Signal Processing Magazine 16 (6), 22–38.

Unser, M., Aldroubi, A., Eden, M., 1993a. B-spline signal processing. I. Theory. IEEE Trans Signal Process 41 (2), 821–833.

Unser, M., Aldroubi, A., Eden, M., 1993b. B-spline signal processing. II. Efficiency design and applications. IEEE Trans Signal Process 41 (2), 834–848.

Varma, R., Steinmann, W. C., Scott, I. U., 1992. Expert agreement in evaluating the optic disc for glaucoma. Ophthalmology 99 (2), 215–221.

Vergara, I., Norambuena, T., Ferrada, E., Slater, A., Melo, F., 2008. StAR: a simple tool for the statistical comparison of ROC curves. BMC Bioinformatics 9, 265–269.

Wang, L., Bhalerao, A., Wilson, R., 2007. Analysis of retinal vasculature using a multiresolution Hermite model. IEEE Transactions on Medical Imaging 26 (2), 137–152.

Wollstein, G., Garway-Heath, D. F., Hitchings, R. A., 1998. Identification of early glaucoma cases with the scanning laser ophthalmoscope. Ophthalmology 105 (8), 1557–1563.

Xu, J., Chutatape, O., Sung, E., Zheng, C., Kuan, P. C. T., 2007. Optic disk feature extraction via modified deformable model technique for glaucoma analysis. Pattern Recognition 40 (7), 2063–2076.

Youssif, A. A., Ghalwash, A. Z., Ghoneim, A., 2008. Optic disc detection from normalized digital fundus images by means of a vessels' direction matched filter. IEEE Transactions on Medical Imaging 27 (1), 11–18.

Youssif, A. A. A., Ghalwash, A. Z., Ghoneim, A. S., 2006. Comparative study of contrast enhancement and illumination equalization methods for retinal vasculature segmentation. In: Proceedings of the Third Cairo International Biomedical Engineering Conference (CIBEC'06). pp. 1–5.

Zhu, X., Rangayyan, R., Ells, A., 2009. Detection of the optic nerve head in fundus images of the retina using the Hough transform for circles. Journal of Digital Imaging, published online: February 24, 2009, http://dx.doi.org/10.1007/s10278-009-9189-5.


