Automatic detection and characterisation of retinal vessel tree bifurcations and crossovers in eye fundus images

David Calvo, Marcos Ortega (corresponding author; Tel.: +34 981167000x1330), Manuel G. Penedo, Jose Rouco
Dept. of Computer Science, University of A Coruña, Spain
E-mail addresses: [email protected] (D. Calvo), [email protected] (M. Ortega), [email protected] (M.G. Penedo), [email protected] (J. Rouco)

computer methods and programs in biomedicine 103 (2011) 28–38. doi:10.1016/j.cmpb.2010.06.002
journal homepage: www.intl.elsevierhealth.com/journals/cmpb

Article history: Received 2 September 2009; Received in revised form 4 June 2010; Accepted 4 June 2010
Keywords: Eye fundus; Feature point; Blood vessel tree; Segmentation; Automatic classification

Abstract

Analysis of retinal vessel tree characteristics is an important task in medical diagnosis, especially in cases of diseases like vessel occlusion, hypertension or diabetes. The detection and classification of feature points in the arteriovenous eye tree will increase the information about the structure, allowing its use for medical diagnosis. In this work a method for the detection and classification of retinal vessel tree feature points is presented. The method applies and combines imaging techniques such as filters or morphologic operations to obtain an adequate structure for the detection. Classification is performed by analysing the feature point environment. Detection and classification of feature points are validated using the VARIA database. Experimental results are compared to previous approaches, showing a much higher specificity in the characterisation of feature points while slightly increasing the sensitivity. These results provide a more reliable methodology for retinal structure analysis.

© 2010 Elsevier Ireland Ltd. All rights reserved.

1. Introduction

In the field of medical diagnosis and the study of diseases it is necessary to analyse several medical images in detail. This analysis usually requires measuring or computing certain parameters according to the structures present in the image. These tasks are typically performed by experts in a manual fashion. This specialised process takes up a lot of time and, because of its manual nature, is sensitive to subjective errors. It is, therefore, necessary to use more reliable and automatic methods in the medical imaging field.

The vascular tree is the complex vessel structure of the retina and, in spite of the fact that it is formed from simple patterns, it can show morphological variations due to age or diseases such as vessel occlusion, which makes veins longer, hypertension, which reduces arteries, or diabetes, which creates new blood vessels [1]. The branches of this structure intertwine frequently, creating points where three or more vessel segments coincide. At these points it is important to know whether the segments are in the same spatial plane or not in order to perform several tasks like, for instance, blood vessel tracking. These points are considered as feature points in this work.

Many methods can be found in the literature for retinal vessel detection. In Ref. [2] a Gaussian-shaped curve is used to model the cross-section of a vessel and a matched filter is used for its detection. Based on this previous approach, Ref. [3] introduces a matched filter response and a piecewise segmentation of the retinal vessels. Ref. [4] introduces a new model-based method based on simple edge detection and incorporating large-scale properties to refine the model (length, length-to-width ratio, etc.). Model-based approaches also include the use of Gabor filters to detect vessel structure [5]. Other approaches for vessel detection include the use of neural networks [6]. In Ref. [7] a tracking method is introduced. Tracking methods often tend to terminate at feature points and, thus, each vessel or branch is evaluated separately. Unfortunately, most of these approaches do not use any contextual information around the vessel points to identify


the landmarks in the vessel tree. This contextual information can be expressed in terms of neighbouring points or vessels and could help to determine the real structure and physical connections of the retinal vessel tree.

In the literature there are some works that try to solve this problem. The work proposed by Can et al. [8] tries to solve this problem in difficult images using the central vessel line to detect and classify feature points. Another method, proposed by Tsai et al. [9], uses vessel segment intersections as seeds to track vessel centre lines and classify feature points according to intersection angles. The work proposed by Grisan et al. [10] extracts the structure using a vessel-tracking-based method. This method needs a previous step before detecting feature points in order to avoid the loss of connectivity in the intersections after the tracking stage. Another work, proposed by Bevilacqua et al. [11], uses a small window to analyse the whole skeleton of the vascular structure. The main problem of this solution is the misclassification of crossovers, as they are only properly classified when the vessel segments intersect exactly in the same pixel. In Ref. [12] a morphological structure of the vessel tree is extracted and analysed to detect tree landmarks.

In this paper, a novel method to detect feature points of the retinal vascular tree and a subsequent classification of the detected points into two classes, bifurcations and crossovers, is proposed. From an image of the retinal structure of the eye, the vascular tree is segmented. From this, the skeleton is obtained, where the feature points are detected. In the last step, these feature points are classified according to a local analysis and a topological study.

The paper is organised as follows: in Section 2 a description of the detection method is presented. Section 3 describes the classification method used. Section 4 shows the experimental results and validation obtained using standard retinal image databases. Finally, Section 5 provides some discussion and conclusions.

2. Arteriovenous structure segmentation

Feature point detection implies an analysis of the vascular structure, so a segmentation of the retinal vessel tree is required. In this work we use an approach with a particularly high sensitivity and specificity at classifying points as vessel or non-vessel points. This segmentation process is done in two main steps: vascular structure enhancement and extraction of the arteriovenous tree. This approach is based on [13], where a similar methodology is used. In our proposal, we include a median filter to reduce the noise and vascular reflex in the preprocessing stage and a final stage of removal of spurious structures using the connectivity property of the retinal vessel tree. Other recent approaches for vascular segmentation can be found: in Ref. [14] the centreline of the vessels is detected and then a filling process takes place; in Ref. [15] the segmentation of the vascular structure is performed by means of classification of textures present in retinal images. By performing an initial enhancement, the causes of a potential malfunction of the whole process, such as noise or vessel reflections, are eliminated.


The preprocessing step applies a Top-hat filter [16] to enhance the biggest and darkest structures present in the image, corresponding to the vessels. Then, a median filter is applied to reduce noise and to tone down the vascular reflex.
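This preprocessing stage can be sketched as follows. This is an illustrative Python/SciPy sketch, not the authors' implementation: the black top-hat (closing minus image) is used here as the Top-hat variant that enhances dark structures, and the 3 × 3 median window is an assumption (the paper only reports a 15 × 15 Top-hat window).

```python
import numpy as np
from scipy import ndimage

def preprocess(green_channel: np.ndarray, window: int = 15) -> np.ndarray:
    """Enhance dark vessel structures, then smooth (sketch)."""
    # Black top-hat: closing(img) - img highlights structures darker
    # than their surroundings, i.e. the vessels.
    tophat = ndimage.black_tophat(green_channel, size=(window, window))
    # Median filter reduces noise and tones down the vascular reflex.
    return ndimage.median_filter(tophat, size=3)
```

On a typical fundus image the green channel is used because it offers the best vessel contrast; that choice, too, is an assumption here.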

The vessel enhancement step uses a multi-scale approximation where the eigenvalues of the Hessian matrix [17] are used to apply a filtering process that detects geometric tubular structures of different sizes. A function B(p) is defined to measure the membership of a pixel, p, to vessel structures:

B(p) = 0, if λ2 < 0
B(p) = exp(−2Rb²) (1 − exp(−S²/(2c²))), otherwise    (1)

where Rb = λ1/λ2 (the ratio of the first and second eigenvalues), c is half of the maximum Hessian norm and S represents a measure of "second order structureness". Vessel pixels are characterised by a small λ1 value and a higher positive λ2 value.
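A single-scale sketch of this vesselness measure is given below. It is illustrative, not the authors' code: the exp(−2Rb²) weight follows the equation as reconstructed above, the Gaussian-derivative Hessian and the eigenvalue ordering by magnitude are standard assumptions, and a full implementation would take the maximum response over several scales.

```python
import numpy as np
from scipy import ndimage

def vesselness(img: np.ndarray, sigma: float = 2.0) -> np.ndarray:
    """Single-scale tubularness B(p) of Eq. (1) (sketch)."""
    # Second-order Gaussian derivatives give the Hessian entries.
    Hxx = ndimage.gaussian_filter(img, sigma, order=(0, 2))
    Hyy = ndimage.gaussian_filter(img, sigma, order=(2, 0))
    Hxy = ndimage.gaussian_filter(img, sigma, order=(1, 1))
    # Eigenvalues of the 2x2 symmetric Hessian, ordered |l1| <= |l2|.
    tmp = np.sqrt(((Hxx - Hyy) / 2) ** 2 + Hxy ** 2)
    mean = (Hxx + Hyy) / 2
    e1, e2 = mean - tmp, mean + tmp
    swap = np.abs(e1) > np.abs(e2)
    l1 = np.where(swap, e2, e1)
    l2 = np.where(swap, e1, e2)
    Rb = l1 / (l2 + 1e-12)           # eigenvalue ratio of Eq. (1)
    S = np.sqrt(l1 ** 2 + l2 ** 2)   # second-order structureness
    c = max(S.max() / 2, 1e-12)      # half the maximum Hessian norm
    B = np.exp(-2 * Rb ** 2) * (1 - np.exp(-S ** 2 / (2 * c ** 2)))
    # Dark vessels on a bright fundus yield a positive l2.
    return np.where(l2 > 0, B, 0.0)
```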

Once the blood vessels are enhanced, the vascular extraction is done in two steps: first an early segmentation and second, a removal of isolated pixels.

A hysteresis-based thresholding is done in the segmentation task. A hard threshold (Th) obtains the pixels with a high confidence of being vessel pixels, while the weak threshold (Tw) keeps all the pixels of the tree, even spurious ones. The final segmentation is formed by all the pixels selected by the weak threshold connected to at least one pixel obtained by the hard threshold. Th and Tw are obtained from two image properties: the percentage of the image representing vessels and the percentage of the image classified as fundus. The gap between both percentages includes all pixels not classified. After calculating the percentiles with Eq. (2), obtaining the values for the thresholds is immediate:

Pk = Lk + ((k · n/100 − Fk) / fk) × c,   k = 1, 2, . . . , 99    (2)

where Lk is the lower limit of percentile k, n stands for the size of the data set, Fk is the accumulated frequency for k − 1, fk represents the frequency of percentile k and c is the size of the percentile interval (1 in this case).

To be able to obtain adequate results not only in high quality images from healthy eyes but also in poor images or images from eyes with diseases, a last step is taken to erase spurious structures that, not belonging to the vascular structure, have survived up to this point. To solve this, all isolated structures smaller than a prefixed number of pixels are erased. Fig. 1 shows an example of the retinal vessel tree segmentation result applied to a digital retina image.
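This spurious-structure removal is a standard small-component filter; a sketch, where the size threshold and the 8-connectivity are illustrative assumptions (the paper does not state the prefixed number of pixels):

```python
import numpy as np
from scipy import ndimage

def remove_spurious(mask: np.ndarray, min_size: int = 100) -> np.ndarray:
    """Erase isolated structures smaller than min_size pixels,
    relying on the connectivity of the true vessel tree."""
    labels, n = ndimage.label(mask, structure=np.ones((3, 3)))
    sizes = np.bincount(labels.ravel())  # pixels per component (label 0 = fundus)
    keep = sizes >= min_size
    keep[0] = False                      # never keep the background
    return keep[labels]
```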

Prior to the introduction of the detection and classification method of feature points, the modified segmentation methodology presented here was validated using the 40 images from the DRIVE [18] database, with a 15 × 15 window for the Top-hat filter. The DRIVE dataset has the retinal vessel structure manually segmented by experts and is therefore a good option for a simple validation of the changes introduced in the segmentation methodology. The validation is performed by comparing the automatically segmented vessel tree against the manually segmented one for the 40 images. From this comparison we obtain two metrics per methodology: the true positive



Fig. 1 – (a) Original image and (b) result of applying the vessel tree segmentation process.

rate (TP%) and the false positive rate (FP%). TP% measures the amount of real vessel points detected by each system (ideally 100%), while FP% measures the amount of points falsely detected as vessel points by the system (ideally 0%).

Table 1 shows a comparison of the obtained results with other cited segmentation methods. It is also important to note that a key advantage of the method proposed here is its efficiency, allowing it to run in 1.43 s on average for each image in the DRIVE database on a Pentium IV 2.4 GHz desktop PC.

3. Feature point detection

To define the feature points more specifically (Fig. 2), a crossover can be defined as a point where two blood vessels, at different depth levels, coincide. A bifurcation is a point where a blood vessel splits into two smaller vessels.

Table 1 – Comparison of different segmentation methods. True positive (TP%) measures the percentage of real vessel points detected, while false positive (FP%) indicates the percentage of detected points that do not correspond to real vessel points.

                           TP (%)    FP (%)
  D. Calvo et al.           86.17     3.48
  A. Bhuiyan et al.         84.37     0.39
  A. Condurache et al.      77.72     3.21
  A.M. Mendonca et al.      73.44     2.46

Fig. 2 – Types of feature points, where (a) shows a bifurcation and (b) shows a crossover.

Detecting vascular tree feature points is a complex task, particularly due to the complexity of the vessel structure, whose illumination and size are highly heterogeneous both between images and between regions of the same image.

In Fig. 1(b), an example of an image segmented following this methodology was shown. It is clear that properties such as vessel width are not constant along the whole structure: the width decreases as the branch level of the structure becomes deeper. To unify this property, a method able to reduce the width to one pixel without changing either vessel direction or connectivity is needed. The skeleton is the structure that meets all these properties.

However, as can be observed in Fig. 3, the results of the segmentation process force a preprocessing step before the skeletonisation. Fig. 3(a) shows gaps inside the vessels in the segmented image that would give a wrong skeleton structure if the next step were applied to images with this problem. A vessel with this problem in the segmented image would produce two parallel vessels in the skeletonised image (one for each border of the gap), creating false feature points, as shown in Fig. 3(b).

To avoid these false positive feature points it is necessary to "fill" the gaps inside the vessels. To perform this task, a dilation process is applied, making the lateral vessel borders grow towards the centre and fill the mentioned gaps. The dilation process is done using a modified median filter. As in this case the filter is applied to a binary image, the resulting central pixel value will be the most repeated value in the original window. In order to avoid an erosion when the filter is applied to the external border of vessels, the resulting value is only set if it is a vessel pixel. To fill as many white gaps as possible the dilation process is applied in an iterative way; that is, dilation is applied to the previous dilation result N times. The value of N must be big enough to fill as many gaps as possible and, at the same time, small enough to avoid merging unconnected vessels. The value of N depends on the spatial resolution of the images used; with the images used in this work (768 × 584) it was determined empirically that optimal values for N were around 4. The iterative process is shown in Fig. 4.

Fig. 3 – Segmentation problems creating gaps inside the vessels: (a) shows the segmentation problem with inside-vessel gaps coloured in red and (b) shows the skeleton of a vessel with gaps, with false feature points marked in red. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of the article.)

Usually, the goal of skeletonisation is to represent global object properties while reducing the original image as much as possible. The skeleton, as stated before, expresses the structural connectivity of the objects with a width of one pixel. The basic method to obtain the skeleton is thinning, an iterative technique that erases border pixels with at least one background neighbour if this erasing does not change the connectivity. The skeleton is defined by the medial axis function (MAF), defined as a set of circle centres; these circles correspond to the largest ones fitting inside the vessel. Calculating the MAF directly is a very expensive task and thus template-based methods are used due to their versatility and effectiveness. In this work the Stentiford thinning method [19] is used. This method uses four templates (one for each of the four different borders of the objects), erasing a pixel only when a template matches and the connectivity is not affected. Fig. 4(d) shows the results obtained with this approach.

Fig. 4 – Original segmented image (a) and result of the dilation process with N = 2 (b) and N = 4 (c). (d) shows the result of the thinning process obtained from (c).
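The iterative gap-filling dilation can be sketched as follows. This sketch models the "modified median filter" as a 3 × 3 majority vote whose result is only allowed to turn background pixels into vessel pixels (so the outer border is never eroded); the 3 × 3 window size is an assumption, while N = 4 for 768 × 584 images follows the text.

```python
import numpy as np
from scipy import ndimage

def fill_gaps(mask: np.ndarray, n_iter: int = 4) -> np.ndarray:
    """Iteratively fill inside-vessel gaps with a grow-only
    majority filter (sketch of the modified median dilation)."""
    out = mask.astype(bool).copy()
    for _ in range(n_iter):
        # On a binary image, a median filter is a majority vote.
        majority = ndimage.median_filter(out.astype(np.uint8), size=3).astype(bool)
        # Only allow background -> vessel changes (no erosion).
        out = out | majority
    return out
```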

3.1. Feature points location

As defined previously, feature points are landmarks in the vessel tree where several vessels appear together in the 2D representation. This allows locating the feature points in the


vessel tree using local information along it. This information is obtained by analysing the neighbours of each point. This way, the intersection number, I(V), is calculated for each point, V, of the structure as shown in Eq. (3), where the Ni(V) are the neighbours of the analysed point, V, named clockwise consecutively:

I(V) = (1/2) Σ(i = 1..8) |Ni(V) − Ni+1(V)|, with N9(V) = N1(V)    (3)

According to its intersection number, each point will be marked as:

• Vessel end point if I(V) = 1
• Vessel internal point if I(V) = 2
• Vessel bifurcation or crossover if I(V) > 2

In this approach, points are labelled as feature points when their intersection number I(V) is greater than two, corresponding to bifurcations or crossovers.
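The intersection number and the labelling rules above can be sketched directly; an illustrative Python sketch, assuming a binary skeleton array and points away from the image border.

```python
import numpy as np

# Clockwise 8-neighbour offsets, with the first repeated to close the cycle.
_OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1)]

def intersection_number(skel: np.ndarray, y: int, x: int) -> int:
    """I(V) of Eq. (3): half the number of 0/1 transitions met while
    walking clockwise around the 8 neighbours of a skeleton point."""
    ring = [int(skel[y + dy, x + dx]) for dy, dx in _OFFSETS]
    return sum(abs(ring[i] - ring[i + 1]) for i in range(8)) // 2

def classify_point(skel: np.ndarray, y: int, x: int) -> str:
    i = intersection_number(skel, y, x)
    if i == 1:
        return "end point"
    if i == 2:
        return "internal point"
    return "bifurcation or crossover"  # I(V) > 2
```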

The problem with this detection is that not all the points are real points; that is, not every point detected exists in the real image, due to the small branches that the skeletonisation process creates on the borders of the vessels. To solve this problem, a skeleton filtering step, where all these points are deleted, is performed after the intersection number I(V) is calculated for all the points.

The skeleton of the retinal vascular tree, as shown before, is obtained from a segmented image through a thinning process that erases the pixels from the borders towards the vessel centre without affecting the connectivity. To adapt this structure to the point classification process it is necessary to erase the branches that do not actually belong to the retinal tree but whose appearance is due to small waves in the borders of the vessels. This step affects only the branches of the structure, extracted as explained next.

The points previously detected are divided into two sets:

• C1: Set of points labelled as vessel end points (I(V) = 1)
• C2: Set of points labelled as bifurcation or crossover (I(V) > 2)

With these two sets, the extraction algorithm is as follows:

(1) A point c ∈ C1 is taken as initial point (seed).
(2) Starting at c, the vessel is tracked following the direction of neighbouring pixels. Note that every pixel has only one predecessor and one successor.
(3) When a previously labelled point, v, is found:
    (a) If v ∈ C1, the segment is labelled as an independent segment and the process ends.
    (b) If v ∈ C2, the segment is labelled as a branch and the process ends.

Once all the branches are extracted, each defined by its final points (initial and end point), its internal points, its length and its label, the pruning task consists of an analysis of all the segments, deleting the ones shorter than the established length threshold. Erasing a branch implies erasing the intersections associated to it, that is, the false previously detected points. The chosen value for this threshold is given by the very definition of the false branches, the ones due to small undulations in vessel borders; so, the threshold is set to the maximum vessel width expected in the image. Once the less significant points have been deleted, the remaining feature points need to be processed to be classified, as shown in the next section.
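The tracking and pruning steps can be sketched as below. This is an idealised Python sketch, assuming a strictly one-pixel-wide skeleton away from the image border; preferring an adjacent feature point during tracking is an implementation choice not specified by the paper.

```python
import numpy as np

# 8-neighbour offsets (points assumed away from the image border).
NEIGH = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
         (0, 1), (1, -1), (1, 0), (1, 1)]

def track_branch(skel, seed, feature_pts, end_pts):
    """Follow the one-pixel-wide vessel from an end point until a
    previously labelled point is met (steps 1-3 of the algorithm)."""
    path, prev, cur = [seed], None, seed
    while True:
        nbrs = [(cur[0] + dy, cur[1] + dx) for dy, dx in NEIGH
                if skel[cur[0] + dy, cur[1] + dx]
                and (cur[0] + dy, cur[1] + dx) != prev]
        if not nbrs:
            return path, "independent"      # dead end
        # Prefer a labelled feature point if one is adjacent.
        fp = [p for p in nbrs if p in feature_pts]
        prev, cur = cur, (fp[0] if fp else nbrs[0])
        path.append(cur)
        if cur in feature_pts:
            return path, "branch"           # ends at a C2 point
        if cur in end_pts:
            return path, "independent"      # ends at another C1 point

def prune_skeleton(skel, feature_pts, end_pts, tau):
    """Erase branches shorter than tau, the maximum expected vessel width."""
    out = skel.copy()
    for seed in sorted(end_pts):
        path, kind = track_branch(out, seed, feature_pts, end_pts)
        if kind == "branch" and len(path) - 1 < tau:
            for p in path[:-1]:             # keep the feature point itself
                out[p] = 0
    return out
```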

4. Feature point classification

A similar approach to the one introduced in this work for the detection based on the intersection number is also used in Ref. [11] for the classification between bifurcations and crossovers, in such a way that every point with I(V) = 3 is classified as a bifurcation and points with I(V) = 4 are classified as crossovers. However, this very criterion makes it unlikely for a point to be classified as a crossover. In crossovers, contrary to bifurcations, the vessels that create the feature point do not coincide in the same pixel of the skeleton. This is because the angle and width of the vessels involved in the feature point can cause, and in most cases do, that the central axes of the vessels do not intersect in the same pixel. This problem, shown in Fig. 5, produces a misclassification of crossovers as two close bifurcations.

To solve this problem and produce a more robust and valid classification, a further analysis of the feature points is needed. This classification is done according to local features of the points and a topological analysis necessary to spot the cases of close bifurcations. Feature point classification is done in two parts. In a first step the points are labelled according only to environment features, that is, using only local information. In a second step, this classification is refined using information about the relationship between points, namely the distance between points.

4.1. Local analysis for classification

The first classification step is done according to local features of the points, without considering the effect of the other points. So, to define a classification for a point, the number of vessel segments that create the intersection is studied. Each detected feature point, F, is used as the centre of a circumference with radius Rc used for the analysis. n(F) gives the number of vessel segments that intersect the circumference, the point F being classified as follows:

• F classified as bifurcation candidate ⇔ n(F) = 3
• F classified as crossover candidate ⇔ n(F) = 4

Fig. 6 shows these two possible classifications. The images show the blood vessels, the circumference used for the analysis and, coloured darker, the pixels where the vessels intersect the circumference.

This classification has some problems when classifying crossovers due to their representation as two close bifurcations, previously explained and shown in Fig. 5. Because of this problem, the radius Rc of the circumference used for the presented classification has to be big enough to be intersected by the two bifurcations of each crossover and, in this way, be intersected by the four vessel segments needed for the point to be classified as a crossover.
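The count n(F) can be sketched by sampling the circumference and counting runs of vessel pixels; the sampling density is a hypothetical discretisation choice, since the paper does not specify how the circle is traversed.

```python
import numpy as np

def n_intersections(vessels: np.ndarray, centre, radius: float,
                    samples: int = 360) -> int:
    """n(F): walk the circumference of the given radius around a
    feature point and count the vessel runs crossed (sketch)."""
    ys, xs = centre
    ang = np.linspace(0.0, 2 * np.pi, samples, endpoint=False)
    ring = vessels[np.round(ys + radius * np.sin(ang)).astype(int),
                   np.round(xs + radius * np.cos(ang)).astype(int)].astype(int)
    # Each 0 -> 1 transition along the (circular) ring starts a new segment.
    return int(np.sum((ring == 1) & (np.roll(ring, 1) == 0)))

def local_label(vessels, centre, radius) -> str:
    n = n_intersections(vessels, centre, radius)
    if n == 3:
        return "bifurcation candidate"
    if n == 4:
        return "crossover candidate"
    return "unknown"
```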

The problem of increasing the size of the radius is that, due to the complexity of the vascular structure and the classification method used, the circumference can be intersected by vessels that do not belong to the feature point analysed and, because of this, wrongly classify the point. To avoid this problem a vote system with three radius sizes is used.

Fig. 5 – Problem of the representation, in the skeleton, of a crossover as two bifurcations: (a) shows the crossover in the original image and (b) shows the skeleton over the original image.

In the vote system, three different classifications according to three different radius sizes, R1, R2 and R3, are established for the analysis of the point. The chosen radii are defined as R1 = Rc − Δ, R2 = Rc and R3 = Rc + Δ, where Δ is a fixed amount. With these definitions, two values are calculated, C(F) and B(F), meaning the number of votes for a point F to be classified, respectively, as a crossover and as a bifurcation:

C(F) = 2 × C(F, R1) + C(F, R2) + C(F, R3)    (4)

B(F) = B(F, R1) + B(F, R2) + 2 × B(F, R3)    (5)

where C(F, Ri) and B(F, Ri) are binary values indicating whether F is classified, respectively, as a crossover or a bifurcation using a radius Ri. Note that the contribution of the small radius is more valuable, and therefore weighted, in the crossover classification, while for bifurcations the big radius adds more information. F will be classified as a crossover when C(F) > B(F) and as a bifurcation otherwise.

Fig. 6 – Preliminary feature point classification according to the number of vessel intersections, where (a) represents a bifurcation and (b) a crossover.

As shown before, the contribution is not the same for all classifications at all radii. So the influence of, for instance, the smallest radius classifying the feature point as a crossover is bigger than the same classification done by the biggest radius. This is due to the fact that the bigger the radius of the circumference, the more likely it is to be intersected by a vessel segment that does not belong to the feature point. If a point is classified as a crossover using the small radius, as this is less probable, it adds more information to the final result. Analogously, and because all crossovers are represented as two bifurcations, the classification of a feature point as a bifurcation is more significant in the final result.

As a result of this step, a preliminary classification of the feature points detected in the vascular structure is given. This classification will guide the final analysis in the next step.
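The vote of Eqs. (4) and (5) can be sketched as follows; this sketch reuses the same circle-sampling idea described for the local analysis (inlined here so the snippet is self-contained), and the sampling density remains an assumption.

```python
import numpy as np

def _n_segments(vessels, centre, radius, samples=360):
    """Count vessel runs met along a sampled circumference."""
    ys, xs = centre
    ang = np.linspace(0.0, 2 * np.pi, samples, endpoint=False)
    ring = vessels[np.round(ys + radius * np.sin(ang)).astype(int),
                   np.round(xs + radius * np.cos(ang)).astype(int)].astype(int)
    return int(np.sum((ring == 1) & (np.roll(ring, 1) == 0)))

def vote_classify(vessels, centre, rc: float, delta: float) -> str:
    """Weighted vote of Eqs. (4) and (5): crossover votes at the small
    radius and bifurcation votes at the large radius count double."""
    c_votes, b_votes = 0, 0
    for w_cross, w_bif, r in zip((2, 1, 1), (1, 1, 2),
                                 (rc - delta, rc, rc + delta)):
        n = _n_segments(vessels, centre, r)
        if n == 4:
            c_votes += w_cross
        elif n == 3:
            b_votes += w_bif
    return "crossover" if c_votes > b_votes else "bifurcation"
```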

4.2. Topological analysis for classification

The previously presented classification method, as stated before, is only based on the local features of the point to classify it. However, due to the representation of crossovers in the skeleton, this information is not enough to assure that a feature point is a crossover, although it is a necessary condition. According to this, a topological classification is needed, analysing the feature points in pairs.


Fig. 7 – Schema of the different cases in the crossover classification: (a) a feature point that fulfils the condition of distance but not connectivity, not a crossover; (b) a feature point that fulfils the condition of connectivity but not distance, not a crossover; (c) a feature point classified as crossover, fulfilling the conditions of distance and connectivity.


Grouping the feature points classified in the previous step as crossover candidates into pairs is done in terms of Euclidean distance, d; that is, within the group of feature points classified as crossover candidates, the ones that minimise d(Fi, Fj) are grouped in pairs.

These pairs of points must satisfy two conditions to be classified as a crossover:

• Both points must be connected; that is, there must exist a vessel segment between the two points.

• d(Fi, Fj) ≤ 2 × Rc; that is, they must be close enough to fall inside a circle of radius Rc.

Fig. 7 illustrates possible situations depending on bothconditions.
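Both conditions, together with the midpoint placement described next, can be sketched as below. Note the connectivity test here uses connected components as a simple proxy for "a vessel segment exists between the two points", which is weaker than the paper's segment-based condition; that substitution is an assumption of this sketch.

```python
import numpy as np
from scipy import ndimage

def is_crossover_pair(skel: np.ndarray, f1, f2, rc: float) -> bool:
    """Check the two topological conditions for a pair of crossover
    candidates: skeleton connectivity and d(F1, F2) <= 2 * Rc."""
    d = np.hypot(f1[0] - f2[0], f1[1] - f2[1])
    if d > 2 * rc:
        return False                          # too far apart
    labels, _ = ndimage.label(skel, structure=np.ones((3, 3)))
    return bool(labels[f1] == labels[f2])     # connected-component proxy

def crossover_position(f1, f2):
    """The crossover is placed at the middle point of the segment
    joining the two paired bifurcations."""
    return ((f1[0] + f2[0]) // 2, (f1[1] + f2[1]) // 2)
```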

Until this point, the feature point classification step was centred on whether or not a pair of points must be classified as a crossover in the real image. The goal of this step is to find the real position of these crossovers which, obviously, does not match either of the two bifurcations of the pair. Empirically, it was determined that a good approach was to set the real point for the crossover at the middle point of the segment that joins the two bifurcations. Fig. 8 shows the final position of the crossover, marked over the real retina image.

At this point of the process, it could be assumed that every point not classified as a crossover is a bifurcation and, having classified the crossovers in the previous step, the classification of every point could be considered complete. However, assuming that every point not marked as a crossover is a bifurcation makes the classification of the latter too dependent on the success of the crossover classification: each misclassified crossover from the previous step would now produce two misclassified bifurcations. Because of this, it is necessary to use another threshold (Rb) to decide which points are accepted as bifurcations.

This process is analogous to the one previously presented for crossover classification. This way, the closest bifurcation to another one, in terms of Euclidean distance, can be found. For each pair, a circumference with radius Rb centred on the middle point of the segment between the points is used. This circumference cannot contain both points. Every pair of points not fulfilling the conditions is marked as not classified in the final result. This is due to the fact that such points are not close enough to be considered as one crossover but not far enough apart to be considered as two independent bifurcations.
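Since both points of a pair lie at distance d/2 from the midpoint, the condition that a circle of radius Rb centred on the midpoint cannot contain both points reduces to d(Fi, Fj) > 2 × Rb. A small sketch of this decision (the function name is illustrative, not from the paper):

```python
import math

def separate_bifurcations(p, q, rb):
    """Decide whether a close pair of bifurcation candidates counts as
    two independent bifurcations.  The circle of radius Rb centred on
    the midpoint contains both points exactly when d/2 <= Rb, so the
    pair is accepted only when d > 2 * Rb; otherwise it is left
    unclassified (too far for one crossover, too close for two
    bifurcations)."""
    d = math.dist(p, q)
    return "bifurcations" if d > 2 * rb else "not classified"

print(separate_bifurcations((0, 0), (50, 0), 10))  # bifurcations
print(separate_bifurcations((0, 0), (15, 0), 10))  # not classified
```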

Fig. 8 – Position of the real crossover point, computed as the middle point of the segment between the bifurcations. The real crossover, lighter, is shown over the real image and the skeleton, in black.

Once the feature point classification is finished, three categories can be distinguished (shown in Fig. 9):

• The feature points classified as crossovers: those fulfilling the conditions of morphology and proximity (Fig. 9(a)).

• The feature points classified as bifurcations: those that, fulfilling the conditions of morphology, are further apart than the established threshold (Fig. 9(b)).




Fig. 9 – Final categories of the feature points, where (a) a point classified as crossover, (b) two independent bifurcations and (c) two feature points not classified.



Table 2 – Results obtained for feature point detection. TP stands for true positives, FN for false negatives and FP for false positives. Accuracy measures the rate between real feature points detected (TP) and the total amount of real feature points (TP + FN).

                            TP     FN    FP    Accuracy
D. Calvo et al.             1116   76    32    93.6%
A. Bhuiyan et al. [12]      1124   68    41    94.3%
V. Bevilacqua et al. [11]   1057   135   38    88.67%
E. Grisan et al. [10]       1028   164   51    86.24%

• The feature points not classified: those that are neither close enough to be classified as crossovers nor far enough apart to be classified as bifurcations (Fig. 9(c)).

Table 3 – Results for crossover classification according to Rc, showing true positives (TP), false negatives (FN), false positives (FP), sensitivity and specificity.

Radius (Rc)   TP    FN    FP   Sensitivity   Specificity
5             54    216   0    20.00%        100%
10            108   162   0    40.00%        100%
15            162   108   0    60.00%        100%
20            204   66    18   75.56%        96.20%
25            204   66    54   75.56%        94.94%

Fig. 10 shows the result of the detection and classification of the feature points of the vascular structure in an eye fundus image. The points classified as crossovers are marked with an asterisk, the points classified as bifurcations with a circle and the points without classification with a square.

Fig. 10 – Definitive classification of the feature points.

Note that the Rc and Rb parameters allow tuning the system in terms of specificity and sensitivity, as some domains require different performance levels. In the next section, some experiments and performance results are shown.

5. Results

For the analysis of the methodology, a set of 45 images from different individuals, randomly extracted from VARIA [20] and labelled by experts, was used. These images were acquired with a non-mydriatic camera, TopCon model NW100, centred on the optic disc, with a spatial resolution of 768 × 584 pixels.

The system evaluation has been done in two different steps: a first step to evaluate the feature point detection and a second step to evaluate the classification of the detected feature points.

The detection evaluation is performed by analysing the number of real feature points detected and false feature points detected. To quantify the results, several metrics are computed: true positives (TP), false negatives (FN), false positives (FP) and the detection accuracy. TP indicates the number of real feature points detected by the system, FN indicates the number of real feature points not detected by the system, FP indicates the spurious feature points detected by the system and, finally, accuracy measures the rate of real feature points detected (TP/(TP + FN)). Table 2 presents the results obtained by several approaches. Note that some of these approaches do not provide quantitative results, so the original methodology was implemented and tested on the same dataset to allow the comparison.
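The accuracy definition above is easy to verify against Table 2; for instance, the row for the presented method (TP = 1116, FN = 76) gives the reported 93.6%:

```python
def detection_accuracy(tp, fn):
    """Detection accuracy as defined in the text: the rate of real
    feature points detected, TP / (TP + FN).  FP is reported
    separately, since spurious detections do not enter this rate."""
    return tp / (tp + fn)

# Row for the presented method in Table 2: TP = 1116, FN = 76
print(f"{detection_accuracy(1116, 76):.1%}")  # 93.6%
```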

For the classification validation, image preprocessing parameters are necessary for the correct performance of the later steps. For the image set used, the adequate number of dilations is N = 4, the chosen prune threshold is 20 and the radius distance used is 5.

The radius parameters, Rc and Rb, allow tuning the permissiveness of the classification task and, thus, a deeper analysis



Table 4 – Results for bifurcation classification according to Rb, showing true positives (TP), false negatives (FN), false positives (FP), not classified (NC), sensitivity and specificity.

Radius (Rb)   TP    FN    FP    NC    Sensitivity   Specificity
25            726   120   132   84    85.82%        0.00%
30            690   156   48    216   80.85%        63.63%
35            630   216   12    324   74.47%        90.90%
40            534   312   12    396   63.12%        90.90%

Fig. 11 – (a) Influence of the parameter Rc in crossover classification; (b) influence of the parameter Rb in bifurcation classification.

of the influence of both parameters is presented. The images previously used in the detection process are used now for obtaining these results. The parameter Rc is the radius of the circumference centred on the analysed feature point. This value directly affects the point classification, increasing the crossover detection probability as the radius size increases. For the bifurcations, the parameter to consider is Rb, which represents the minimum distance two bifurcations must be separated to be considered as two independent bifurcations. That is, the smaller the distance Rb, the more bifurcations are classified.

For a right choice of the mentioned parameters, Rc and Rb, a quantitative study according to the parameters is presented. The results allow choosing the adequate parameters for a specific domain where the desired sensitivity or specificity levels can change. For a total of 1116 feature points, with 846 bifurcations and 270 crossovers, Table 3 shows the results for crossover classification according to the chosen Rc radius, and Fig. 11(a) shows how the parameter can be adjusted to fit different domains. In this domain, sensitivity measures the rate of real crossovers correctly classified, while specificity measures the rate of real crossovers versus all the points classified as a crossover.
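The sensitivity column of Table 3 follows directly from TP/(TP + FN) with 270 real crossovers; a quick sketch reproducing it (the specificity column is not recomputed here, since it depends on counts not tabulated):

```python
# Table 3 rows: (Rc, TP, FN, FP)
rows = [(5, 54, 216, 0), (10, 108, 162, 0), (15, 162, 108, 0),
        (20, 204, 66, 18), (25, 204, 66, 54)]

for rc, tp, fn, fp in rows:
    # sensitivity = rate of real crossovers correctly classified
    print(f"Rc = {rc:2d}: sensitivity = {tp / (tp + fn):.2%}")
# Rc =  5: sensitivity = 20.00%
# Rc = 10: sensitivity = 40.00%
# Rc = 15: sensitivity = 60.00%
# Rc = 20: sensitivity = 75.56%
# Rc = 25: sensitivity = 75.56%
```

Note that TP + FN = 270 in every row, matching the ground-truth crossover count.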

The table shows how the number of correctly classified crossovers increases with the radius size. This tendency could suggest increasing the radius size until obtaining a big number of classified crossovers; however, increasing the radius also increases the number of misclassified crossovers. For the remaining 978 feature points, with 846 bifurcations and 66 crossovers (that means 132 points, due to the crossover representation), and with the chosen value Rc = 20, Table 4 shows the results for bifurcation classification according to the chosen Rb radius. This table shows a new category, the non-classified points, which includes the points that, fulfilling the morphology conditions, are not close enough to be classified as a crossover and not far enough apart to be classified as independent bifurcations.

The results, represented in Fig. 11(b), show how the parameter allows adjusting the results according to the domain. The bigger the radius Rb used, the more points are left without

classification, but the number of false positives will be below 1%. Opposite to this, if a big level of true positives is needed, with a small radius the sensitivity is over 80%.

We have observed that all the previously mentioned approaches are based on the neighbourhood analysis of feature points after some morphologic thinning processing of the vessel tree. This leads to the problem of crossover misclassification due to the thinned representation, where a crossover turns into two close bifurcations [11]. These works do not offer quantitative results in the classification task, but our implementation of this technique shows that nearly every point is classified as a bifurcation, being capable of correctly classifying only 3% of the crossovers. Table 5 shows a comparison of the

Table 5 – Comparison of different methods for retinal feature point classification.

                         Bifurcations                Crossovers
                         Sensitivity   Specificity   Sensitivity   Specificity
D. Calvo et al.          75%           91%           76%           96%
A. Bhuiyan et al. [12]   79%           83%           66%           67%
E. Grisan et al. [10]    76%           87%           62%           74%



work presented in this paper and other approaches for the classification of feature points. The main improvement comes in the crossover rates, due to the topological analysis proposal. In general, the system exhibits a very high specificity rate for both classes, making it suitable for critical tasks.

6. Conclusions and future work

In this work a method for the detection and classification of the feature points of the retinal vascular tree using several image processing techniques has been presented. The detection and classification of these points is important because it increases the information about the retinal vascular structure. Having the feature points of the tree allows an objective analysis of the diseases that cause modifications in the vascular morphology, avoiding, in this way, a manual subjective analysis.

Many of the previous techniques oriented to the same task have some problems classifying the feature points, specially crossovers, because the features searched in the points do not match the ones obtained after the preprocessing of the structures. Other works, despite getting good results, need too many preprocessing steps to fix problems caused by their own previous preprocessing, such as the connectivity loss.

As a result, this work presents a reliable and efficient method that is able to overcome the problems of other works. So, starting with a segmented image of the arterio-venous eye tree, and after preprocessing the structure, it has been eroded to obtain the skeleton and the feature points. For the classification, local features and the relations between points have been analysed. At the end, a classification of the feature points in terms of crossovers, bifurcations and non-classified points is offered. This technique offers results with a high sensitivity and specificity in the classification of feature points, allowing control of the number of misclassified points according to the application domain. The obtained values for sensitivity are in line with the typical rates, and even higher for crossovers, while in the case of specificity the rates are much better than previous ones, allowing to build a reliable system that significantly reduces the introduction of mistakes in the analysis.

The presented work is valid for many different domains. For instance, in the authentication task, using retinal images, in order to help the comparison between points according to the classification given by this method; or in the medical analysis task, to compare and quantify the variation, if any, of the vascular structure, allowing to detect several diseases, such as vessel occlusion, high blood pressure or diabetes, among others.

All the implementations for this work have been done using the C++ language, on a Pentium IV 2.4 GHz desktop PC running Linux OS. The average running time for the whole process (segmentation, detection and classification) on an image (768 × 584, optic disc centred) was 6.52 s.

To improve the system, future work could use vessel features to classify the feature points. The classification method is done now according to the number of vessels that belong to the point and the relationship between pairs of points. This method is easy to implement, understand and tune by


making use of the parameters. But, due to the fact that the method does not consider the features of the vessels, some points can be left without classification, like, for instance, a very acute crossover.

Conflict of interest statement

None of the authors has any relationship or interest in an organization that could pose a conflict of interest.

Acknowledgements

This paper has been partly funded by the Ministerio de Ciencia y Tecnología and the Instituto de Salud Carlos III through the grant contract PI08/90420 and FEDER.

References

[1] E. Kohner, A.M.P. Hamilton, S. Saunders, The retinal blood flow in diabetes, Diabetologia 11 (1975) 27–33.

[2] S. Chaudhuri, S. Chatterjee, N. Katz, M. Nelson, M. Goldbaum, Detection of blood vessels in retinal images using two-dimensional matched filters, IEEE Transactions on Medical Imaging 8 (3) (1989) 263–269.

[3] A. Hoover, V. Kouznetsova, M.H. Goldbaum, Locating blood vessels in retinal images by piece-wise threshold probing of a matched filter response, IEEE Transactions on Medical Imaging 19 (3) (2000) 203–210.

[4] K. Vermeer, F. Vos, H. Lemij, A. Vossepoel, A model based method for retinal blood vessel detection, Computers in Biology and Medicine 34 (3) (2004) 209–219.

[5] R.M. Rangayyan, F.J. Ayres, F. Oloumi, P. Eshghzadeh-Zanjani, Detection of blood vessels in the retina with multiscale Gabor filters, Journal of Electronic Imaging 17 (2) (2008) 023018.

[6] R. Nekovei, Y. Sun, Back-propagation network and its configuration for blood vessel detection in angiograms, IEEE Transactions on Neural Networks 6 (1) (1995) 64–72.

[7] A. Pinz, S. Bernogger, P. Datlinger, A. Kruger, Mapping the human retina, IEEE Transactions on Medical Imaging 17 (4) (1998) 606–619.

[8] A. Can, H. Shen, J. Turner, H. Tanenbaum, B. Roysam, Rapid automated tracing and feature extraction from retinal fundus images using direct exploratory algorithms, IEEE Transactions on Information Technology in Biomedicine 3 (2) (1999) 125–138.

[9] C. Tsai, C. Stewart, H. Tanenbaum, B. Roysam, Model-based method for improving the accuracy and repeatability of estimating vascular bifurcations and crossovers from retinal fundus images, IEEE Transactions on Information Technology in Biomedicine 8 (2) (2004) 122–130.

[10] E. Grisan, A. Pesce, A. Giani, M. Foracchia, A. Ruggeri, A new tracking system for the robust extraction of retinal vessel structure, in: Proc. of the International Conference of the IEEE Engineering in Medicine and Biology Society (IEMBS), vol. 3, 2004, pp. 1620–1623.

[11] V. Bevilacqua, S. Cambó, L. Cariello, G. Mastronardi, A combined method to detect retinal fundus features, in: IEEE European Conference on Emergent Aspects in Clinical Data Analysis, 2005.

[12] A. Bhuiyan, B. Nath, J. Chua, K. Ramamohanarao, Automatic detection of vascular bifurcations and crossovers from color retinal fundus images, in: IEEE International Conference on Signal-Image Technologies and Internet-Based Systems, 2007, pp. 711–718.

[13] A. Condurache, T. Aach, Vessel segmentation in angiograms using hysteresis thresholding, in: Proc. of the 9th IAPR Conference on Machine Vision Applications, vol. 1, 2005, pp. 269–272.

[14] A.M. Mendonca, A. Campilho, Segmentation of retinal blood vessels by combining the detection of centerlines and morphological reconstruction, IEEE Transactions on Medical Imaging 25 (9) (2006) 1200–1213.

[15] A. Bhuiyan, B. Nath, J. Chua, K. Ramamohanarao, Blood vessel segmentation from color retinal images using unsupervised texture classification, in: Proc. of the International Conference on Image Processing (ICIP), 2007, pp. 521–524.

[16] E. Dougherty, Mathematical Morphology in Image Processing, M. Dekker, New York, 1993.

[17] A.F. Frangi, W.J. Niessen, K.L. Vincken, M.A. Viergever, Multiscale vessel enhancement filtering, in: Proc. of Medical Image Computing and Computer-Assisted Intervention (MICCAI), 1998, pp. 130–137.

[18] DRIVE: Digital Retinal Images for Vessel Extraction, http://www.isi.uu.nl/Research/Databases/DRIVE/, last visit: November 2009.

[19] F.W.M. Stentiford, R.G. Mortimer, Some new heuristics for thinning binary handprinted characters for OCR, IEEE Transactions on Systems, Man, and Cybernetics 13 (1) (1983) 81–84.

[20] VARIA: VARPA Retinal Images for Authentication, http://www.varpa.es/varia.html, last visit: November 2009.

