
Proceedings of The National Conference On Undergraduate Research (NCUR) 2009

University of Wisconsin-La Crosse, La Crosse, Wisconsin

April 16-18, 2009

Iris Recognition Optimized for Information Assurance

Kaleb Waite

Department of Mathematics & Physics Rockhurst University 1411 Rockhurst Rd

Kansas City, Missouri, 64110 USA

Amanda Eure

Department of Mathematics Winston-Salem State University 601 S. Martin Luther King Jr. Dr

Winston-Salem, North Carolina, 27110 USA

Faculty Advisor: Dr. John Paul Roop

Abstract

In an increasingly digital society, the demand for secure identification has led to increased development of biometric systems. These systems are becoming widely adopted and accepted as one of the most effective ways to positively identify people. We discuss the history, purpose, and nature of both physiological and behavioral biometric systems and how they are classified. Some common biometric systems include fingerprinting, signatures, and face recognition; however, iris recognition is, in many respects, the most reliable of the biometric systems in use today. In this project, we examined the properties and implementation of iris recognition biometric systems. A general iris recognition system consists of five parts: 1) image acquisition, in which a picture of the eye is acquired; 2) segmentation, which locates the borders of the iris in the image; 3) normalization, which maps the circular iris to a rectangular image; 4) feature encoding, which creates a biometric template; and 5) matching, which is the process of comparing templates. The mathematics of each of these parts is described in detail and several implementations of these methods are mentioned. It is important to note that no single algorithm for, e.g., feature encoding is superior to all others, as different algorithms can be customized for a particular application area. Specifically, we examine such differences by applying an open-source MATLAB package for iris recognition to the iris image database maintained by the Chinese Academy of Sciences.

Keywords: iris, biometric, identification, security

1. Introduction

The most stable and reliable biometric technology available today is iris recognition [12]. Iris recognition is viable because every human iris has unique features that differ from those of every other iris, which allows a person to be identified by the patterns within it. The method involves taking pictures of a subject's eye, locating the iris, converting the iris portion of the image into digital form, and using sophisticated software to compare iris patterns and determine whether a subject is who he or she claims to be. Compared to other biometrics, iris recognition boasts the lowest rates of false identification, making it ideal for tasks that require a very high level of security. The remainder of this report is organized as follows: Section 2 presents background information about biometrics, Section 3 explains how an iris recognition system is implemented, and Section 4

will outline experimental research conducted by the authors using a MATLAB package developed by Libor Masek [14] in conjunction with the Chinese Academy of Sciences' Iris Image Database [1].

2. Biometrics: An Overview

The term "biometrics" refers to the automatic identification of a person based on his or her physiological or behavioral characteristics [9]. Biometrics as we know it today began in the late 19th century, when it was recognized that humans have distinct, unique patterns in their fingerprints. Initially, this discovery was used to identify criminals, whose fingerprints were stored in a database of card files [15]. This method of identification has grown tremendously in scope, with most modern law enforcement agencies having access to large databases of fingerprints. In addition to fingerprinting, many other discriminating features of the human body and behavior have been found and used to keep track of not only criminals, but also the general public.

There are two broad categories of biometrics: behavioral and physiological. Behavioral biometrics include signatures, voice recognition, gait measurement, and even keystroke recognition. These tend to be less reliable than physiological methods because they are easier to duplicate than physical characteristics. Physiological attributes are the more trusted method in biometrics; examples include facial recognition, fingerprinting, hand profiling, iris recognition, retinal scanning, and DNA testing.

2.1. biometric stages/functions

A biometric system has two stages: enrollment and identification. In the enrollment stage, the biometric device takes a measurement from the subject, processes the measurement to find the key points (called features) that set the subject apart from other people, and stores the features in a template to be used when identifying the subject at a later date. The identification stage consists of taking a measurement from the subject, extracting the features again, and comparing them to the template created in the enrollment stage [9].

Biometric systems generally serve one of three functions: to verify, to identify, or to screen the subject. Verification involves confirming that a subject is who he or she claims to be. For example, an ATM using a verification system could check the customer's fingerprint after the card is swiped to verify that it matches the print on record for that customer. The second function, identification, determines who the subject is given only a biometric sample. In the ATM example, an identification system would require only the customer's fingerprint and would then search a database of customers' templates to find a match and bring up that customer's account. Finally, screening checks whether a person is on a list of "wanted" people. Screening is similar to identification in that it requires the system to search for a match, but the number of templates is much smaller than in identification, and the subject is not always willing or even aware of the screening process. An example of this would be lifting fingerprints from a crime scene and checking them against a current list of criminals' fingerprints [10].

Verification is the easiest system to implement and the fastest to use, because the system already knows who the subject claims to be and has to compare only one stored feature template to check the subject's identity. On the other hand, identification is much more computationally challenging because the system must search a large database of templates (depending on the application, this may run into the millions) to find one match for the subject. Screening is challenging because the quality of the acquired sample is often lower, so the extracted features may not always be sufficient to produce a positive identification during the matching process.
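To make the computational difference concrete, here is a minimal MATLAB sketch; the templates cell array, probe, claimedID, and the 0.32 threshold are all hypothetical stand-ins, and matchScore is simply the fractional Hamming distance defined later in Section 3.5.

% Hypothetical setup: 'templates' is a cell array of enrolled binary
% templates and 'probe' is the template extracted from the subject.
matchScore = @(a, b) sum(xor(a(:), b(:))) / numel(a);  % fractional Hamming distance
threshold  = 0.32;                                     % illustrative decision threshold

% Verification (1:1): compare the probe against one claimed identity.
claimedID = 17;                                        % hypothetical claimed identity
verified  = matchScore(probe, templates{claimedID}) < threshold;

% Identification (1:N): scan the whole database for the best match.
scores = cellfun(@(t) matchScore(probe, t), templates);
[bestScore, bestID] = min(scores);
identified = bestScore < threshold;                    % if true, bestID names the subject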

2.2. performance measurements

In order to compare biometric systems, several terms are defined to assess the quality of a biometric. The performance measurements of biometric systems fall into four categories: false reject rate (FRR), false accept rate (FAR), crossover error rate (CER) / equal error rate (EER), and failure to acquire (FTA) / failure to enroll (FTE). Table 3.1 gives a brief description of these performance measurements [17].

Table 3.1 - Measures of Evaluation

  Measurement                                           Description
  False Reject Rate (FRR)                               The rate at which an authorized user is wrongly refused access to the protected system.
  False Accept Rate (FAR)                               The rate at which an unauthorized user is wrongly admitted to the protected system, enabling a security breach.
  Crossover Error Rate (CER) / Equal Error Rate (EER)   The operating point at which the FRR equals the FAR.
  Failure to Acquire (FTA) / Failure to Enroll (FTE)    Failure to correctly capture or enroll a subject's features.

Many biometric devices can be adjusted to favor security or user convenience. The false reject rate is typically set to a figure between 2% and 5%, while false accept rates are set between 0.1% and 0.5% [17]. Higher-security applications will have a lower than normal FAR, while a more convenient system will have a lower than normal FRR; a rise in FAR corresponds to a drop in FRR, and vice versa. The equal error rate (EER), also known as the crossover error rate (CER), is the point at which the false reject rate and the false accept rate are equal. The lower the equal error rate, the better the system's performance. The equal error rate is often used to gauge the overall effectiveness of a biometric system, as the sensitivity of a biometric system is usually adjusted to equalize false acceptances and rejections. Failure to acquire (FTA) means that the biometric device was not able to capture a good sample. The failure to enroll rate (FTE) is the percentage of people whose samples are of insufficient quality to enroll on a given biometric system. Every biometric device has shortcomings of this kind: failures can be caused by, for example, a finger pressing down too hard on a fingerprint scanner, background noise or hoarseness in voice recognition, or sunlight shining on an iris capture device.
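The FAR/FRR trade-off and the crossover point can be illustrated with a short MATLAB sketch; the genuine and impostor score vectors below are made-up stand-ins for scores measured on a real system (lower score = better match).

% Hypothetical match scores, e.g. fractional Hamming distances.
genuine  = [0.22 0.25 0.28 0.30 0.33 0.27 0.24 0.31];  % same-person comparisons
impostor = [0.45 0.48 0.47 0.50 0.44 0.46 0.49 0.43];  % different-person comparisons

thresholds = 0:0.01:1;
FRR = arrayfun(@(t) mean(genuine  >= t), thresholds);  % genuine users rejected
FAR = arrayfun(@(t) mean(impostor <  t), thresholds);  % impostors accepted

% The crossover (equal) error rate is where the two curves meet.
[~, k] = min(abs(FRR - FAR));
fprintf('EER is about %.1f%% at threshold %.2f\n', 100*(FRR(k)+FAR(k))/2, thresholds(k));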

3. Iris Recognition

Iris recognition is the process of identifying and verifying a person through mathematical analysis of the random patterns that are visible within the iris of the eye from some distance. Since the iris is complex, unique (even identical twins have different iris patterns), and stable throughout life, it can serve as a living passport or password. For example, international passengers at some airports can move through security procedures at a faster rate by using their iris in place of a traditional passport. Iris recognition technology has so far been implemented as a substitute for national passports, for aviation security and controlling access to restricted areas at airports, and in hospital settings [2]. Due to the randomness of iris patterns, recognition decisions are made with high confidence, because the likelihood of finding another person with an identical iris pattern, even in a national-sized database, is very low. There are five steps in the iris recognition process: image capture, segmentation, normalization, feature encoding, and matching. Each of these steps is described in detail in the following sections.

3.1. image capture

The first step in the iris recognition procedure is acquiring an image of the eye. The image must contain at least 50 pixels from the center of the pupil out to the outside edge of the iris for recognition to be successful. This quality of image is attained using grayscale CCDs (charge-coupled devices) contained in a digital camera or camcorder [5]. Another important factor in a good image capturing system is that the eye be close to centered in the picture, as a significantly off-center eye can cause the next step of the procedure to fail. It is also important to minimize artifacts (specular reflections, aberrations, etc.) because they obscure the iris. Finally, it is desirable to make the system as easy to use as possible, with as little discomfort as possible for both the subject and the operator of the system [19].

The most efficient imaging of the iris is done using near-infrared (NIR) lighting with wavelengths in the range of 700-900 nm [5]. Using low-intensity NIR light both reduces the occurrence of reflections off the iris, which block the view of parts of the iris, and prevents the discomfort caused by shining bright light into the subject's eye [20]. In order to achieve the required clarity in the texture of the iris, the distance of the subject from the camera can usually be no more than a meter, although with specially designed cameras successful iris captures have been made at distances between 5 and 10 meters [6]. Also, to attain a good image without the annoyance of multiple captures after a mistake such as a blink or a blurry frame, video imaging is often used, because adjustments can then be made in real time without forcing the subject to remain perfectly still while waiting for an image to be taken. The highest-quality frames from the video are then chosen and used to encode the iris.

In order to automate the process of locating a good frame in a sequence, several methods have been proposed. John Daugman, who created the first patented iris recognition procedure [3], used a 2D Fourier spectrum to analyze the clarity of an image. Zhang and Salganicoff later patented a procedure which uses the sharpness of the transition between the iris and the pupil to determine the acceptability of an image [22]. Ma et al. also proposed a method of choosing quality frames using a frequency distribution of the image data [12].
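One simple way to experiment with automatic frame selection is to score each frame by the fraction of its spectral energy at high spatial frequencies, since defocus suppresses high frequencies. The MATLAB sketch below is only in the spirit of Daugman's 2D Fourier measure, not his patented kernel, and the cutoff radius is an arbitrary choice.

% Score image sharpness from the 2D Fourier spectrum: a defocused frame
% loses energy at high spatial frequencies.
function s = focusScore(img)
    F = fftshift(abs(fft2(double(img))).^2);  % centered power spectrum
    [h, w] = size(F);
    [X, Y] = meshgrid(1:w, 1:h);
    R = hypot(X - w/2, Y - h/2);              % radial spatial frequency
    highband = R > min(h, w) / 8;             % assumed high-frequency cutoff
    s = sum(F(highband)) / sum(F(:));         % fraction of high-frequency energy
end

% Usage: pick the sharpest frame from a cell array of video frames:
%   [~, best] = max(cellfun(@focusScore, frames));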

3.2. segmentation

The goal of the second step in the iris recognition procedure is to take the image acquired in the first step and separate the iris from the noise (non-iris parts) in the image. These noise sources include the pupil, sclera, eyelids, eyelashes, and artifacts [7]. This part of the process is the most computationally demanding, as the algorithm must find both the boundary between the pupil and the iris and the boundary between the iris and the sclera. Looking at an image, it seems easy for us to locate the boundaries visually, but an algorithm works with raw data from the monochromatic image (i.e., an image in which each pixel is represented by a value between 0 and 255 indicating its level of "darkness"). In addition to finding the iris-pupil and iris-sclera boundaries, the algorithm must also locate any eyelids blocking part of the iris, and the more advanced processes also attempt to locate eyelashes and artifacts. Finding the iris-pupil and iris-sclera boundaries is called iris localization, and locating eyelids, eyelashes, and artifacts within those boundaries is called noise reduction. The entire process is called segmentation, and many different techniques have been proposed to optimize both the speed and the quality of localization and noise reduction, a few of which are described below.

The first localization method was proposed by John Daugman, who is considered the father of iris recognition technology because his system was the first developed and implemented. Daugman's segmentation algorithm makes use of an integro-differential operator of the form shown in (1) [13]:

\[ \max_{(r,\,x_0,\,y_0)} \left| G_\sigma(r) * \frac{\partial}{\partial r} \oint_{r,\,x_0,\,y_0} \frac{I(x,y)}{2\pi r}\, ds \right| \tag{1} \]

where I(x, y) is the eye image, r is the radius to search for, G_\sigma(r) is a Gaussian smoothing function, and s is the contour of the circle given by r, x_0, y_0 [13]. This operator iteratively takes an (x, y) coordinate, generates circles of various radii centered at that coordinate, and sums up (integrates) the intensity values along the circular contour of radius r. It then moves to the next radius and integrates that contour, taking the derivative of the integrated intensity between successive radii, and so on. The circle found to have the maximum rate of change between circular contours is determined to be the circle defining the iris-sclera boundary. The process is then run again, with a higher sensitivity, just inside the circle containing the iris to find the boundary between the iris and pupil. The sensitivity of the operator is set by the σ factor of the Gaussian function [2].

The iris localization method employed by Wildes [19], Tisse et al. [18], Kong and Zhang [11], and Ma et al. [12] involves a computer vision algorithm developed to locate simple shapes in images, called the circular Hough transform. In principle, it works similarly to Daugman's integro-differential operator by taking derivatives of intensity in an image. The difference is that the circular Hough transform applies thresholds to the horizontal and vertical components of the intensity gradient to create an edge map of where the intensity is changing rapidly (e.g., at a boundary between iris and sclera). Then, for each edge point, circles of various radii passing through that point are cast as votes into "Hough space", and the circles that are best defined (those passing through the most edge points) are chosen as the iris-sclera and iris-pupil boundaries.

A third iris localization method, proposed by Ritter [16], uses a discrete circular active contour (DCAC) to find the boundary between the iris and pupil. First, this method finds a point interior to the pupil by averaging intensities over a series of 5x5-pixel blocks and choosing a point in the darkest part of the image (which is the pupil). An expanding circular contour with vertices v_i is then centered at that point; the contour actively resizes itself over increments of time t, driven by internal forces F_i(t) pushing outward from within the contour and external forces G_i(t) pushing inward on it. Ritter's formula for the contour at time t is shown in (2):

\[ v_i(t+1) = v_i(t) + F_i(t) + G_i(t) \tag{2} \]

The process of moving the contour continues until an equilibrium is reached, at which time the contour should outline the iris-pupil boundary. The magnitude of the internal force is defined by a factor δ, which the researcher chooses based on the attributes of the image. A high δ corresponds to a larger internal force and a faster-expanding circle. The value of δ must be chosen so that it is neither too large, or the contour will jump over the boundary and continue to expand, nor too small, or the contour may stop expanding too soon. The magnitude of the external force is found using the variance image. Once the iris has been localized, other noise such as eyelid boundaries and artifacts is then considered. Daugman uses masking bits to eliminate eyelids and artifacts (noise masks are discussed further in Section 3.5), but his academic papers do not detail how the masking bits are found. Eyelids are located with a parabolic Hough transform in Wildes et al. [20] and in Kong and Zhang [11]. Kong and Zhang also present a way to detect individual eyelashes using 1D Gabor filters, as well as multiple eyelashes and specular reflections, using variances in intensity and eyelash connectivity principles (i.e., a pixel is treated as eyelash noise if it is connected to another eyelash pixel or to an eyelid, and so on). Huang et al. [8] propose a faster noise reduction technique which waits until after normalization to eliminate the eyelids and artifacts.
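As a rough illustration, equation (1) can be discretized in a few lines of MATLAB. The center grid, contour sampling, and smoothing window below are arbitrary sketch choices; a practical implementation would, among other things, restrict the contour sums to the left and right arcs of the circle to avoid eyelid occlusion.

% Minimal sketch of Daugman's integro-differential search (equation 1).
% I: grayscale eye image as a double matrix; radii: candidate radii.
function [cx, cy, cr] = irisBoundary(I, radii)
    [h, w] = size(I);
    theta = linspace(0, 2*pi, 64);          % contour sample angles
    best = -inf;
    for y0 = 10:4:h-10                      % coarse grid of candidate centers
        for x0 = 10:4:w-10
            m = zeros(size(radii));         % mean intensity on each contour
            for k = 1:numel(radii)
                xs = round(x0 + radii(k)*cos(theta));
                ys = round(y0 + radii(k)*sin(theta));
                ok = xs >= 1 & xs <= w & ys >= 1 & ys <= h;
                m(k) = mean(I(sub2ind([h w], ys(ok), xs(ok))));
            end
            % Smoothed derivative of contour intensity with respect to radius.
            d = abs(conv(diff(m), ones(1,3)/3, 'same'));
            [v, k] = max(d);
            if v > best
                best = v; cx = x0; cy = y0; cr = radii(k);
            end
        end
    end
end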

3.3. normalization

Because the radius of the iris varies in pixel size depending on lighting conditions and image resolution, it is necessary to convert the localized portion of the iris into a standard form, so that two pictures taken at different times and under different conditions can still be recognized as coming from the same eye. Accomplishing this is the normalization step of iris recognition. Normalization takes the variable-sized circular iris image (variable because of pupil dilation and image resolution) and transforms it into a fixed-size rectangular image to be encoded. This transformation is accomplished by a conversion from polar to rectangular coordinates, with a fixed number of pixels taken at each angle φ. The transformation is described by Yu et al. [21] and is presented in equation (3):

\[ x(r,\phi) = (1-r)\,x_{\mathrm{inner}}(\phi) + r\,x_{\mathrm{outer}}(\phi), \qquad y(r,\phi) = (1-r)\,y_{\mathrm{inner}}(\phi) + r\,y_{\mathrm{outer}}(\phi) \tag{3} \]

where x(r, φ) and y(r, φ) are linear combinations of points (x_inner(φ), y_inner(φ)) on the pupil-iris boundary and points (x_outer(φ), y_outer(φ)) on the iris-sclera boundary.
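Equation (3) translates directly into a sampling loop. The sketch below assumes the two boundaries were found as circles, with pupil center/radius (xp, yp, rp) and iris center/radius (xi, yi, ri); nR and nPhi are the chosen radial and angular resolutions.

% Unwrap the iris annulus into a fixed-size rectangle via equation (3).
function strip = normalizeIris(I, xp, yp, rp, xi, yi, ri, nR, nPhi)
    phi = linspace(0, 2*pi, nPhi);          % fixed angular samples
    r   = linspace(0, 1, nR)';              % normalized radial positions
    xInner = xp + rp*cos(phi);  yInner = yp + rp*sin(phi);   % pupil boundary
    xOuter = xi + ri*cos(phi);  yOuter = yi + ri*sin(phi);   % iris boundary
    X = (1 - r)*xInner + r*xOuter;          % nR-by-nPhi grid of sample points
    Y = (1 - r)*yInner + r*yOuter;
    strip = interp2(double(I), X, Y);       % fixed-size normalized image
end

% Usage, e.g.: strip = normalizeIris(I, 100, 100, 30, 98, 99, 80, 20, 240);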

3.4. feature encoding

Looking at a normalized iris image is not very helpful for deciding whether it comes from the same iris as another image. Instead, the image must be encoded mathematically into some numerical representation of the unique features of the iris. This conversion is called feature encoding. Feature encoding has two components: filtering and phase quantization. Filtering is the process by which the normalized image is convolved with a predefined complex filter or operator. A convolution integral has the form defined by equation (4), and its discrete version is shown in equation (5).

\[ (f * g)(t) = \int_0^{t} f(\tau)\, g(t-\tau)\, d\tau \tag{4} \]

\[ (f * g)_i = \sum_{j=0}^{n-1} f_j\, g_{i-j} \tag{5} \]

Phase quantization is then performed, in which the resulting complex array is translated into a binary array of twice the size: the first entry is one if the real part of the complex value is positive and zero otherwise, and the second entry is one if the imaginary part is positive and zero otherwise. The important thing to consider when developing an algorithm to encode the normalized image is that the eye may occasionally be shifted slightly in any direction, depending on the position of the eye when the image was taken. For this reason it is not feasible to use a simple pixel-by-pixel encoding of the image, because such an encoding is spatially dependent and intensities vary with conditions. Instead, functions are chosen which analyze the frequency and phase of the patterns in the texture of the iris, which do not change between captures. The method Daugman presents to accomplish feature encoding uses complex 2D Gabor wavelet filtering to perform phase demodulation on the iris image. The convolution used is polar, and the corresponding convolution integral is given by equation (6):

\[ h_{\{\mathrm{Re},\mathrm{Im}\}} = \operatorname{sgn}_{\{\mathrm{Re},\mathrm{Im}\}} \int_\rho \int_\phi I(\rho,\phi)\, e^{-i\omega(\theta_0-\phi)}\, e^{-(r_0-\rho)^2/\alpha^2}\, e^{-(\theta_0-\phi)^2/\beta^2}\, \rho\, d\rho\, d\phi \tag{6} \]

where h_{Re,Im} is the resulting bit pair (the signs of the real and imaginary parts of the integral), I(ρ, φ) is the normalized iris image in polar coordinates, ω is the filter frequency, and α and β set the radial and angular widths of the filter [4].
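The filter-then-quantize pipeline can be sketched with a simplified 1D complex filter applied to each row of the normalized strip. Masek's package uses 1D log-Gabor filters; the plain Gabor carrier below, along with its width and frequency, is a stand-in choice for illustration.

% Sketch: encode a normalized iris strip into a binary template by
% filtering each row with a complex carrier and quantizing the phase.
function template = encodeIris(strip)
    [nR, nPhi] = size(strip);
    t = linspace(-1, 1, 31);
    g = exp(-t.^2 / 0.1) .* exp(1i * 2*pi * 8 * t);  % illustrative complex Gabor
    template = false(nR, 2*nPhi);
    for row = 1:nR
        c = conv(strip(row, :), g, 'same');          % complex filter response
        template(row, 1:2:end) = real(c) > 0;        % first bit: sign of real part
        template(row, 2:2:end) = imag(c) > 0;        % second bit: sign of imaginary part
    end
end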

Yu et al. review several other methods for feature encoding, including Laplacian, Gaussian, and other wavelet filters, in their paper [21].

3.5. matching

Several matching algorithms have been described in previous research [4, 13, 21]. The most accessible of these is the Hamming distance, which depends on the XOR operator. Since feature encoding outputs a binary matrix, two individuals can be compared by simply applying the XOR operator over the entire matrix. The Hamming distance then sums the number of nonzero entries in the resulting matrix and divides the result by the total number of entries. In other words, if A and B are the respective m-by-n matrices resulting from feature encoding, then the Hamming distance is given by equation (7):

!"

#$%

&= '

ji

jiji BAXORmn

HD,

,, ),(1 (7)
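In MATLAB, equation (7) is a one-liner over the logical template matrices A and B; the masked variant that follows is an assumed extension in which noise masks (1 where a bit is valid) exclude eyelid, eyelash, and reflection bits, as discussed below.

% Plain Hamming distance between binary templates A and B (equation 7).
HD = sum(sum(xor(A, B))) / numel(A);

% Assumed masked variant: count disagreements only over bits that are
% valid (not eyelid, eyelash, or specular reflection) in both templates.
valid    = maskA & maskB;
HDmasked = sum(xor(A(valid), B(valid))) / nnz(valid);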

It is clear that for two distinct individuals, the binary matrices resulting from feature encoding will essentially be random matrices. Therefore, elementary statistical analysis suffices to decide whether two templates belong to the same person or to different people. Further improvements can be made by using the individuals' noise masks to prevent eyelashes, eyelids, and specular reflections from factoring into the decision-making process. Also, since an eye image may be obtained at various head angles, shifting may be employed in order to lower the false reject rate (FRR).

4. Computational Experiment

In this section, we applied the MATLAB code given in [13, 14], in conjunction with the iris database from CASIA [1, 14], in order to better understand the process of iris recognition. The left image in Figure 1 displays an iris after iris localization. The middle image displays the second phase of iris segmentation, noise reduction. The rightmost image in Figure 1 shows an unsuccessful segmentation of the iris. Some possible reasons for the unsuccessful segmentation are that the upper eyelid and eyelashes interfered with iris localization, that the individual did not align the iris with the scanner properly, or that there is not enough contrast between the iris and the sclera (i.e., the derivative values lie below the threshold). Finally, Figure 2 shows the image of the iris after it has been normalized.

Figure 1 - Visual examples of the segmentation process

Figure 2 - Iris image after normalization

Our main experiment involves developing a binomial probability experiment for matching the templates created by Masek's feature encoding algorithm [14]. In this binomial experiment, the fixed number of independent trials is 1000, which comes from 20 rows and 50 columns of the template matrix. This subset of the template matrix is selected to exclude the upper and lower eyelids. The two outcomes of each trial, produced by the XOR operation, are 0 and 1; the probability of each outcome is 0.5 and remains the same for every independent trial. To find the mean of the binomial experiment, we multiply the fixed number of independent trials (N = 1000) by the probability of an outcome (P = 0.5), which gives 500. The variance, the average squared deviation from the mean, is N x P x (1 - P) = 250, and the standard deviation is the square root of the variance, 15.81. After these calculations, we use the XOR expression sum(sum(xor(template1(:,231:280), template2(:,231:280)))) to test whether the templates originate from the same eye. If the sum falls more than a specified number of standard deviations below 500, we reject the null hypothesis that the templates are independent and conclude that they come from the same eye. If we are unable to reject the null hypothesis, we can conclude only that the templates are consistent with coming from two different people. An example of this is given in Figure 3. Though it is not visible to the naked eye, the process just described indicates that the portions of the templates on the left and in the middle of Figure 3 come from the same person, whereas the portion of the template on the right comes from a different person.
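The arithmetic and the decision rule of this experiment look as follows in MATLAB. The column range 231:280 and the use of all 20 rows follow the description above; the three-standard-deviation cutoff is an illustrative choice for "a specified number of standard deviations".

% Binomial test on a 20-by-50 sub-block of two templates (N = 1000 bits).
N = 1000;  p = 0.5;
mu    = N * p;                  % mean under the null hypothesis: 500
sigma = sqrt(N * p * (1 - p));  % standard deviation: 15.81

d = sum(sum(xor(template1(:, 231:280), template2(:, 231:280))));

% Under the null hypothesis (independent templates), d lies near 500.
% A value far below 500 rejects independence: the templates match.
z = (d - mu) / sigma;
sameEye = z < -3;               % illustrative three-sigma cutoff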

Figure 3 - Columns 231-280 of the templates created by Masek's algorithm for three normalized images.

5. Conclusion

Iris recognition technology is growing rapidly throughout the U.S. and is, in many respects, the most promising biometric from the standpoint of information assurance. In this project, the details behind several different approaches to each of the five components of iris recognition (image acquisition, segmentation, normalization, feature encoding, and matching) were explored. A rudimentary statistical experiment was then described by which iris recognition procedures may be analyzed using existing iris recognition software and databases. Further research might include more extensive development of iris recognition software, less expensive and more efficient image capturing systems, and the tailoring of each to the specifications of particular applications in information assurance.

6. Acknowledgements

The authors would like to thank all the organizations that supported the 2007 Summer Information Assurance Program, including the United States Department of Education, the NSA, HBCU-UP, Talent 21, and North Carolina A&T State University. They would also like to thank Ms. Candy Carter and Ms. Sunnie Howard for their vital leadership of the outreach programs at North Carolina A&T. Special thanks are extended to this project's research mentor, Dr. John Paul Roop, and program director, Dr. Kossi Edoh, for their guidance throughout the summer. Lastly, but most importantly, the authors would like to thank their families for their constant support, encouragement, and love.

7. References

1. Chinese Academy of Sciences - Institute of Automation. Database of 756 Greyscale Eye Images. Available: http://www.cbsr.ia.ac.cn/IrisDatabase.htm, Version 1.0, 2003.

2. J. Daugman. High confidence visual recognition of persons by a test of statistical independence. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 15, No. 11, pp. 1148-1161, 1993.

3. J. Daugman. Biometric personal identification system based on iris analysis. United States Patent, Patent Number: 5,291,560, 1994.

4. J. Daugman. How iris recognition works. Proceedings of 2002 International Conference on Image Processing, Vol. 1, pp. I33 - I36, 2002.

5. J. Daugman. The importance of being random: statistical principles of iris recognition. Pattern Recognition. Vol. 36, pp. 279-291, 2003.

6. C. Fancourt, L. Bogoni, K. Hanna, Y. Guo, R. Wildes, N. Takahashi, U. Jain, Iris Recognition at a Distance. AVBPA 2005, LNCS 3546, pp. 1-13, 2005.

7. X. He, P. Shi. A new segmentation approach for iris recognition based on hand-held capture device. Pattern Recognition. Vol 40, pp. 1326-1333, 2007.

8. J. Huang, Y. Wang, T. Tan, J. Cui, A New Iris Segmentation Method for Recognition. Proceedings of the 17th International Conference on Pattern Recognition, Vol. 3, pp. 554-557, 2004.

9. A. Jain, L. Hong, S. Pankanti, Biometric Identification. Communications of the ACM, Vol. 43, pp. 90-98, 2000.

10. A. Jain, A. Ross, S. Pankanti, Biometrics: A Tool for Information Security. IEEE Transactions on Information Forensics and Security, Vol. 1, pp. 125-143, June 2006.

11. W. Kong, D. Zhang, Accurate iris segmentation based on novel reflection and eyelash detection model. Proceedings of 2001 International Symposium on Intelligent Multimedia, Video and Speech Processing, Hong Kong, pp. 263-266, 2001.

12. L. Ma, Y. Wang, T. Tan, Iris Recognition Using Circular Symmetric Filters. 16th International Conference on Pattern Recognition (ICPR'02), Vol. 2, pp. 414-417, 2002.

13. L. Masek. Recognition of human iris patterns for biometric identification. Bachelor’s Thesis, University of Western Australia, 2003.

14. L. Masek, P. Kovesi. MATLAB Source Code for a Biometric Identification System Based on Iris Patterns. The University of Western Australia, 2003. Available: http://www.csse.uwa.edu.au/~pk/studentprojects/libor/sourcecode.html

15. S. Prabhakar, S. Pankanti, A. Jain, Biometric recognition: security and privacy concerns. IEEE Security & Privacy Magazine, Vol. 1, pp. 33-42, Mar-Apr 2003.

16. N. Ritter. Location of the pupil-iris border in slit-lamp images of the cornea. Proceedings of the International Conference on Image Analysis and Processing, pp. 740-745, 1999.

17. S. Sanderson, J. Erbetta. Authentication for secure environments based on iris scanning technology. IEEE Colloquium on Visual Biometrics, pp. 8/1-8/7, 2000.

18. C. Tisse, L. Martin, L. Torres, M. Robert. Person identification technique using human iris recognition. Proceedings of the 15th International Conference on Vision Interface, Canada, pp. 27-29, 2002.

19. R. Wildes. Iris recognition: an emerging biometric technology. Proceedings of the IEEE, Vol. 85, No. 9, pp. 1348-1363, 1997.

20. R. Wildes, J. Asmuth, G. Green, S. Hsu, R. Kolczynski, J. Matey, S. McBride. A system for automated iris recognition. Proceedings IEEE Workshop on Applications of Computer Vision, Sarasota, FL, pp. 121-128, 1994.

21. L. Yu, D. Zhang, K. Wang. The relative distance of key point based iris recognition. Pattern Recognition, Vol. 40, pp. 423-430, 2007.

22. G. Zhang, M. Salganicoff, Method of Measuring the Focus of Close-Up Images of Eyes. United States Patent, Patent Number: 5,953,440, 1999.

