Advances in Molecular Imaging
Vol. 9, No. 1 (2019), Article ID: 89960, 13 pages
10.4236/ami.2019.91002

Implementation of the Hough Transform for Iris Detection and Segmentation

Francisco Javier Paulín-Martínez, Alberto Lara-Guevara, Rosa María Romero-González, Hugo Jiménez-Hernández

Facultad de Informática, Universidad Autónoma de Querétaro, Querétaro, México

Copyright © 2019 by author(s) and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).

http://creativecommons.org/licenses/by/4.0/

Received: December 4, 2018; Accepted: January 14, 2019; Published: January 17, 2019

ABSTRACT

The iris is used as a reference for the study of unique biometric marks in people. Extracting the characteristic information of the iris is a fundamental challenge in image analysis because of the issues it involves: detection of the relevant information, data coding schemes, and so on. For this reason, several approaches have been proposed for its analysis in the search for useful, characteristic information. This article presents a scheme for extracting the relevant information based on the Hough transform. This transform helps to find primitive geometries in the irises, which are used to characterize each of them. The paper presents the results of applying the Hough transform algorithm to the localization and segmentation of the iris by means of its circumference. Two public databases of iris images were used, UBIRIS V2 and CASIA-IrisV4, acquired under the same conditions and in controlled environments. In the pre-processing stage, noise is removed from the image and edges are found with the Canny detector. The Hough transform is then applied to the edge images to determine the arrangement of the detected geometries.

Keywords:

Image Processing, Iris Segmentation, Hough Transform

1. Introduction

Iris recognition has become one of the most widely used methods in biometric recognition systems, due to the unique characteristics of the iris and its stable behavior throughout the life of a human being [1] .

There are different biometric authentication systems based on the characteristics of an individual. Face, fingerprint, voice, and iris recognition are among the most widely used [2] . Iris recognition is currently also used in different security systems and, more recently, in clinical application systems.

In [3] it is mentioned that one of the main complications in iris recognition is the distance at which the image is acquired. When an image is obtained at a distance greater than 3 m, the iris image regularly becomes blurred and therefore lacks the detail needed to identify the iris texture, owing to the loss of information compared with images obtained at a shorter distance. In [1] other problems are identified, such as movement, lighting, noise, and refraction present in the images. In addition, obstruction by the eyelids, the use of glasses, and hair prevent obtaining a complete image of the iris.

In [1] the fundamental objective of the segmentation process is considered to be extracting the iris texture from the structures that surround it, for example, the pupil, the eyelids, and the sclera, and eliminating or reducing light reflections on the iris. In recent years, segmentation methods have been presented with the aim of increasing identification success rates. To facilitate segmentation processes, different iris databases have been used, with different sizes, distances, and positions.

Because the acquired images regularly contain more than just the iris, including other elements that represent noise, it is necessary to implement techniques and algorithms to separate the iris from the rest of the elements. One of the most common ways to achieve this is through the morphology of the object, which in this case focuses on detecting the circles within the image corresponding to the iris and the pupil [4] .

The importance of correct iris segmentation without losing the properties of the image is relevant not only for authentication and security systems but also for the health area; where the iris is the object of study, the preservation of colors and textures is important. Several studies related to the processing of images involving iris data have been carried out, mainly by [5] and [6] , which focused on new methods of iris recognition, and by [7] on the segmentation and parameterization of iridological images.

The identification of shapes by means of their edges facilitates the classification of objects in images. To carry out the identification, some figures can be formed from the edges that compose them. [8] and [9] use Canny's method with the first derivative for edge detection, based on the intensity variation between pixels.

Canny's method was used by [10] in 2004, [11] in 2007, [12] in 2013, [13] in 2015, [14] in 2017, and recently by [9] [15] [16] in 2018, to identify different objects in images for diverse purposes.

The Hough transform consists of constructing a parametric space of regular geometric structures. The maxima of this space denote the regions with a high probability of containing these structures. Various investigations have shown that it is possible to detect different figures with it. In [1] [15] [17] and [9] this method has been used to detect circumferences in different types of images, not necessarily applied to iris identification or segmentation.
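
To make the idea concrete, the following minimal sketch (illustrative only, not the implementation used in this work) builds the accumulator of a circular Hough transform for a single, known radius: every edge pixel votes for all candidate centers lying at that radius, and peaks in the accumulator indicate likely circle centers. The function name and the use of NumPy are assumptions made for illustration.

```python
import numpy as np

def hough_circle_accumulator(edges, radius):
    """Vote for circle centers of a fixed radius from a binary edge map.

    Each edge pixel votes for every candidate center lying `radius`
    pixels away; peaks in the accumulator mark likely circle centers.
    """
    h, w = edges.shape
    acc = np.zeros((h, w), dtype=np.int32)
    thetas = np.deg2rad(np.arange(0, 360))
    ys, xs = np.nonzero(edges)                     # coordinates of edge pixels
    for y, x in zip(ys, xs):
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        valid = (cx >= 0) & (cx < w) & (cy >= 0) & (cy < h)
        np.add.at(acc, (cy[valid], cx[valid]), 1)  # cast the votes
    return acc

# The brightest cell of `acc` is the most likely center for that radius:
# cy, cx = np.unravel_index(acc.argmax(), acc.shape)
```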

Different approaches have been implemented for iris recognition and segmentation. The task is easier if the circular morphology of the iris is considered and the images are prepared properly before moving on to the processing phase. The purpose of this article is to facilitate the detection process using the morphology of circular objects, considering different acquisition conditions as well as the possible obstruction by objects surrounding the iris. Digital image processing techniques such as grayscale transformation, the negative, and binarization are used. Canny's method and the Hough transform are applied for the detection of edges and circumferences, respectively.

Investigations have been conducted in which the Hough transform has been used to locate and segment the iris [18] [19] [20] [21] and [22] , using different methodological approaches aimed mostly at eliminating noise, locating the eye, and locating the center of the pupil, with different techniques to achieve iris segmentation. In [18] and [19] the Hough transform was used to detect the center of the pupil and, from it, to project the iris. Different techniques were used to eliminate noise from the images; in [19] noise was removed by applying a Gaussian filter. It is important to note that, in the investigations described above, the iris detection tests were carried out on images belonging to a single database. For this reason, in this research it was decided to use an additional database and to homogenize the images into the same environment by applying pre-processing techniques such as conversion to grayscale and the negative.

2. Methodology

The methodology used for iris identification and segmentation was developed by [1] and [7] . The methodology consists of six main phases: 1) image acquisition, 2) pre-processing, 3) iris localization, 4) edge localization, 5) iris extraction, and 6) post-processing. Figure 1 shows each of the component phases, as well as the algorithms used in each one.

2.1. Acquisition Phase

Two databases of iris images were used in this phase:

・ UBIRIS V2. Provided by SOCIA Lab (Soft Computing and Image Analysis Group) of the Department of Computer Science, University of Beira Interior [23] .

・ CASIA-IrisV4. Provided by the Center for Biometrics and Security Research of the Institute of Automation, Chinese Academy of Sciences [24] .

The characteristics of the databases used are described in Table 1. The CASIA-IrisV4 database is divided into different categories depending on the conditions under which the images were acquired.

Figure 2 and Figure 3 show examples of images provided by CASIA-IrisV4 and UBIRIS V2, respectively.

Figure 1. Phases for iris segmentation, using the Hough transform. Source: Own elaboration (2018).

Table 1. Characteristics of used databases.

Figure 2. Image of the iris in CASIA-IrisV4. Source: [24] .

Figure 3. Image of the iris in UBIRIS V2. Source: [23] .

2.2. Pre-Processing Phase

The acquired images went through a prior process, also called pre-processing in [1] , mainly because the photographs were obtained with different devices and under different conditions. Images that were obtained in color were transformed to grayscale in order to work with them more easily, as shown in Figure 4.

To carry out iris segmentation, a set of image processing techniques described in [1] and [7] was used, among them binarization, the negative, and the Otsu method, selected on the basis of their contribution to minimizing the error in characterizing the iris.

This phase improved the image by eliminating light reflections, which can be produced by the device with which the image is acquired or by the environment in which it is taken.

Based on the grayscale image, its inverse or negative was computed in order to eliminate areas that were not relevant for this study. To calculate the inverse or negative of the image, Equation (1) was applied.

f′(i, j) = (2^n − 1) − f(i, j) (1)

where i, j index each pixel, f is the original image, f′ is the resulting negative, and n is the number of bits per pixel. The result of applying the negative to Figure 4 is shown in Figure 5.
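
As a minimal sketch of this step, assuming an 8-bit image (n = 8) and OpenCV/NumPy (the file name eye.png is purely illustrative), the grayscale conversion and Equation (1) can be applied as follows:

```python
import cv2
import numpy as np

# Illustrative file name; any eye image from UBIRIS V2 or CASIA-IrisV4 would do.
img = cv2.imread("eye.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)        # grayscale conversion (Figure 4)

n = 8                                               # bits per pixel of the image
negative = (2 ** n - 1) - gray.astype(np.int32)     # Equation (1)
negative = negative.astype(np.uint8)                # negative image (Figure 5)
```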

Binarization transforms an image that was originally in grayscale into a black-and-white image. This transformation is regularly based on a threshold: every pixel in the image is evaluated, and if its value is below the established threshold it becomes 0, otherwise it becomes 1.
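
A hedged sketch of the binarization step, continuing from the negative image of the previous example; the threshold 210 is one of the levels tested in Figure 7, and OpenCV maps pixels above the threshold to 255 rather than 1, which is equivalent up to scaling:

```python
import cv2

# `negative` is the uint8 negative image from the previous sketch.
threshold = 210   # one of the levels explored in Figure 7 (150, 180, 210, 230)
_, binary = cv2.threshold(negative, threshold, 255, cv2.THRESH_BINARY)
# Pixels at or below `threshold` become 0; the rest become 255 (white).
```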

Figure 4. Image of the iris in UBIRIS V2 converted to gray scale. Source: own elaboration (2018).

Figure 5. Image of the iris in UBIRIS V2 in negative. Source: own elaboration (2018).

2.3. Phase of Edge Localization

Canny's algorithm was used to detect the edges present in the image and to facilitate object identification, mainly of circles, by means of the Hough transform. Edge detection was performed on the basis of the intensity variation between one or more regions of the image.

[8] points out that Canny's method uses the first derivative for edge detection, taking intensity into account: in regions where the intensity does not change, a value of 0 is assigned, while a sudden intensity change produces a value of 1. These characteristics are used for edge detection.
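
As an illustrative sketch (not the authors' exact parameters), OpenCV's implementation of Canny's detector can be applied to the pre-processed image; the Gaussian kernel size and the two hysteresis thresholds below are assumed values:

```python
import cv2

# `binary` is the pre-processed (binarized) image from the previous sketches.
smoothed = cv2.GaussianBlur(binary, (5, 5), 1.4)  # suppress noise before differentiation
edges = cv2.Canny(smoothed, 50, 150)              # non-zero pixels mark detected edges
```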

2.4. Phase of Iris Localization and Extraction

In this phase, several tests were performed with different algorithms to locate the iris structure. [1] proposes using HOG to describe the structure and support vector machines for classification. For the iris extraction phase, the pixels belonging to the iris were first identified in order to extract them. Finally, the post-processing phase involved separating the pupil, leaving only the iris.

It was necessary to improve the contrast of the images before beginning the iris recognition process. The Retinex algorithm was proposed in [1] to improve the contrast of images. This algorithm decomposes the image (S) into two images, one containing the illumination (L) and the other the reflectance (R), for each of the pixels that make it up. This decomposition allows light effects that cause contrast inconsistencies to be removed [1] . In recent years, this algorithm, given in Equation (2), has undergone several refinements, as described in [25] .

S(x, y) = L(x, y) · R(x, y) (2)
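
Since Equation (2) factors the image into illumination and reflectance, one common way to approximate the decomposition, shown here only as a generic single-scale Retinex sketch and not the variational formulation of [25], is to work in the logarithmic domain and estimate L with a broad Gaussian blur; the function name and sigma value are assumptions:

```python
import cv2
import numpy as np

def single_scale_retinex(gray, sigma=30):
    """Rough Retinex decomposition in the log domain (illustrative sketch).

    Equation (2) states S = L * R, so log S = log L + log R. A heavily
    blurred copy of S serves as the illumination estimate L; subtracting
    its logarithm leaves an estimate of the reflectance R.
    """
    s = gray.astype(np.float64) + 1.0                  # avoid log(0)
    illumination = cv2.GaussianBlur(s, (0, 0), sigma)  # smooth estimate of L
    log_r = np.log(s) - np.log(illumination)           # log R = log S - log L
    # Rescale the reflectance estimate to 0-255 for display.
    log_r = (log_r - log_r.min()) / (log_r.max() - log_r.min() + 1e-12)
    return (log_r * 255).astype(np.uint8)
```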

3. Results

To facilitate iris identification by means of the Hough transform in the images used, it was necessary to implement image pre-processing algorithms. In the particular case of binarization, the optimal threshold value was sought, such that no relevant details of the iris were lost, nor were they mixed with the rest of the image elements.

To find the binarization threshold, a set of tests was performed on the images considering their grayscale histograms. Figure 6 shows the histogram of one of the images used. The optimal threshold value was selected according to two criteria. First, considering that the pupil is regularly black, the negative of the image was computed and pixels with values close to 255 (the lightest ones) were looked for. Second, when the threshold was too high, parts of the iris were lost, so it was necessary to identify the value that would allow the circularity of the iris to be differentiated without losing parts of it.
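
A possible sketch of this threshold search, scanning candidate values against the grayscale histogram of the negative image; the function name and the fraction bounds are illustrative assumptions, not values reported in the paper:

```python
import numpy as np

def candidate_thresholds(negative, low_frac=0.01, high_frac=0.10):
    """Scan the histogram of the negative image for thresholds that keep the
    bright (formerly dark) pupil/iris region without erasing it or merging
    it with the background. The fraction bounds are illustrative only.
    """
    hist, _ = np.histogram(negative, bins=256, range=(0, 256))
    total = negative.size
    kept = []
    for t in range(256):
        surviving = hist[t:].sum() / total   # fraction of pixels >= t
        if low_frac <= surviving <= high_frac:
            kept.append(t)
    return kept
```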

In Figure 7 it can be seen that, as the threshold level increased, the iris circularity became easier to identify, since it was differentiated from the other elements or components of the image that have a circular shape.

Figure 6. Frequency histogram of each gray level in an image.

Figure 7. Binarization of an image with intensity thresholds 150, 180, 210 and 230.

The algorithm employed in Canny's method allowed the irregular edges of the iris circularity to be identified. This method detects edges using the first derivative: it takes a value of zero in regions where there is no intensity variation and a constant value at every transition, so that an intensity change is reflected in a sudden variation of the first derivative. The edge identification facilitated, to some extent, the recognition of circumferences for iris detection and pupil separation. Figure 8 shows the result of applying Canny's method to Figure 7, to which the grayscale conversion, the negative, and the binarization had previously been applied.

Figure 8. Result of the edge identification by Canny's method.

Once the edges were identified, figures forming a partial or complete circumference were detected in order to locate the iris. In the images used, after applying the pre-processing algorithms, the main circumference was the one corresponding to the iris. Although this circumference was regularly partial, being commonly obstructed by objects such as eyelashes, eyelids, or glasses, it was the most significant within the image.

For the circumference identification, the Hough transform as implemented by [1] [15] and [9] was used. Because the images were obtained in a controlled setting in terms of acquisition distance, a standard radius was used as a parameter of the Hough transform, along with the detection thresholds, so that only one circumference would be located: the one corresponding to that radius. The sensitivity threshold takes values on a scale from 0 to 1. Several tests were performed so that only the most clearly defined circumference within the image was identified. It should be noted that, in most of the images, the iris circumference was not completely defined because of the presence of other objects such as eyelids, eyelashes, or hair. Figure 9 shows the result of the circumference identification using different thresholds; the optimal threshold value for iris identification was 0.98.

For the identification of circumferences by means of the Hough transform, the minimum and maximum radii to be searched for were adjusted so that a single circumference would be found within the image. The appropriate range depends on the number of pixels spanned by the iris, which in turn depends on the size and distance of the device as well as on the resolution of the image.
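
As an illustrative sketch of this step using OpenCV (an assumption, since the paper does not state the library used, and its 0-1 sensitivity threshold does not map directly onto OpenCV's parameters), circle detection with an explicit radius range could look as follows. Note that cv2.HoughCircles runs its own internal Canny step on a grayscale input, so it is fed the smoothed grayscale image rather than the edge map; `gray` and all numeric parameters below are assumed values to be tuned per image.

```python
import cv2
import numpy as np

# `gray` is the pre-processed grayscale eye image; radius bounds and vote
# thresholds are illustrative and would be tuned per image, as in the paper.
smoothed = cv2.GaussianBlur(gray, (5, 5), 1.4)
circles = cv2.HoughCircles(
    smoothed,
    cv2.HOUGH_GRADIENT,
    dp=1,                        # accumulator resolution equal to the image
    minDist=smoothed.shape[0],   # expect a single dominant circle
    param1=150,                  # upper threshold of the internal Canny step
    param2=30,                   # accumulator (vote) threshold
    minRadius=40,                # lower bound of the expected iris radius (pixels)
    maxRadius=120,               # upper bound of the expected iris radius (pixels)
)

if circles is not None:
    x, y, r = np.round(circles[0, 0]).astype(int)   # strongest circle found
    out = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)
    cv2.circle(out, (x, y), r, (0, 0, 255), 2)      # draw the iris boundary in red
```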

Figure 10 shows the final results of the Hough transform implementation on images corresponding to subjects c11, c13, c15, and c17: the original image (Column A), the edge identification by Canny's method (Column B), and the final iris identification by the Hough transform (Column C). The red circular line in Column C represents the circularity identified by the Hough transform, which corresponds to the iris in the original image of Column A.

Figure 9. Results of threshold change in the Hough transform for the circumference identification.

Figure 10. Final results of the Hough transform implementation.

The iris segmentation techniques currently employed are very precise, from 96% in [5] to 99% in [6] [18] and [19] . However, their effectiveness decreases when different environments are involved in the acquisition of the images. With the present method, the iris was identified correctly even in the presence of objects that partially or totally obstruct it, such as glasses (Figure 11(a)) or objects close to the area of interest: eyelashes, eyelids, or hair (Figure 11(b) and Figure 11(c)).

4. Discussion

To perform correct iris segmentation, it was necessary that the images be acquired under the same conditions. To facilitate the segmentation process, it had to be ensured from the acquisition phase that the image had been taken correctly, with no objects partially or completely obstructing the iris. One of the main problems with the databases used was that, in most of the images, the iris did not appear complete; it was regularly obstructed in part by an additional object, mainly the eyelids, the eyelashes, hair, or glasses.

Figure 11. Identification in the presence of objects that partially or totally obstruct the iris.

Because the images were obtained by different sensors, under different illumination and at different distances, each image required its own thresholds for binarization and for the Hough transform in the circumference identification.

For iris segmentation, two circumferences delimiting the internal and external boundaries of the iris were identified. The external circumference was usually incomplete in the images used, so the Hough transform was used to complete it. The internal circumference was complete in most cases in the databases used; the main problem, however, was finding the right contrast to separate the pupil from the iris, since the difference between them is not easy to distinguish.

5. Conclusions

The Hough transform can be used for iris detection because of the circular structure of the iris. Defining and correctly following a suitable image-processing pipeline facilitates the detection and segmentation of the iris.

The acquisition distance and the presence of other circular objects within an iris image make it difficult to locate the iris with this method. The correct determination of the optimal thresholds for binarization, the Canny method, and the Hough transform is crucial for correct iris detection and for avoiding false positives in the identification.

Multiple factors make correct identification difficult, such as the distance, the device (sensor), the lighting, the environment, the quality, and the space in which the image is acquired.

Establishing a formal process for iris identification, starting from the acquisition phase, with defined and applied criteria for what counts as a clean image, facilitates the task of the subsequent phases.

The choice of algorithm used to achieve segmentation must consider the conditions under which the images were acquired and the databases used. In this way, each phase of the process will fulfill its function and will contribute to improving the image and removing elements that are not relevant for the segmentation process.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

Cite this paper

Paulín-Martínez, F.J., Lara-Guevara, A., Romero-González, R.M. and Jiménez-Hernández, H. (2019) Implementation of the Hough Transform for Iris Detection and Segmentation. Advances in Molecular Imaging, 9, 6-18. https://doi.org/10.4236/ami.2019.91002

References

1. Radman, A., Zainal, N. and Suandi, S.A. (2017) Automated Segmentation of Iris Images Acquired in an Unconstrained Environment Using HOG-SVM and GrowCut. Digital Signal Processing, 64, 60-70. https://doi.org/10.1016/j.dsp.2017.02.003

2. Bansal, A., Agarwal, R. and Sharma, R.K. (2015) Determining Diabetes Using Iris Recognition System. International Journal of Diabetes in Developing Countries, 35, 432-438. https://doi.org/10.1007/s13410-015-0296-1

3. Deshpande, A. and Patavardhan, P.P. (2017) Super Resolution and Recognition of Long Range Captured Multi-Frame Iris Images. IET Biometrics.

4. Umer, S., Dhara, B.C. and Chanda, B. (2015) Iris Recognition Using Multiscale Morphologic Features. Pattern Recognition Letters, 65, 67-74. https://doi.org/10.1016/j.patrec.2015.07.008

5. Daugman, J.G. (2004) How Iris Recognition Works. IEEE Transactions on Circuits and Systems for Video Technology, 14, 21-30.

6. Daugman, J. (2007) New Methods in Iris Recognition. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 37, 1167-1175. https://doi.org/10.1109/TSMCB.2007.903540

7. Mendoza, L.E., Meza, E.F. and Gualdron, O.E. (2006) Segmentación y parametrización automática de imágenes iridológicas. Revista Ingeniería Biomédica, 10, 13-21.

8. Rebaza, J.V. (2007) Detección de bordes mediante el algoritmo de Canny. Escuela Académico Profesional de Informática, Universidad Nacional de Trujillo.

9. Lee, J., Tang, H. and Park, J. (2018) Energy Efficient Canny Edge Detector for Advanced Mobile Vision Applications. IEEE Transactions on Circuits and Systems for Video Technology, 28, 1037-1046. https://doi.org/10.1109/TCSVT.2016.2640038

10. Liu, H. and Jezek, K.C. (2004) Automated Extraction of Coastline from Satellite Imagery by Integrating Canny Edge Detection and Locally Adaptive Thresholding Methods. International Journal of Remote Sensing, 25, 937-958. https://doi.org/10.1080/0143116031000139890

11. Kang, C.C. and Wang, W.J. (2007) A Novel Edge Detection Method Based on the Maximizing Objective Function. Pattern Recognition, 40, 609-618. https://doi.org/10.1016/j.patcog.2006.03.016

12. Vijayarani, S. and Vinupriya, M. (2013) Performance Analysis of Canny and Sobel Edge Detection Algorithms in Image Mining. International Journal of Innovative Research in Computer and Communication Engineering, 1, 1760-1767.

13. Kim, J. and Lee, S. (2015) Extracting Major Lines by Recruiting Zero-Threshold Canny Edge Links along Sobel Highlights. IEEE Signal Processing Letters, 22, 1689-1692. https://doi.org/10.1109/LSP.2015.2400211

14. Shen, X., Duan, X., Han, D. and Yuan, W. (2017) Research on Adaptive Canny Algorithm Based on Dual-Domain Filtering. International Symposium on Parallel Architecture, Algorithm and Programming, 729, 182-191.

15. Meng, Y., Zhang, Z., Yin, H. and Ma, T. (2018) Automatic Detection of Particle Size Distribution by Image Analysis Based on Local Adaptive Canny Edge Detection and Modified Circular Hough Transform. Micron, 106, 34-41. https://doi.org/10.1016/j.micron.2017.12.002

16. Othman, Z., Ahmad, A., Kasmin, F., Ahmad, S.S.S., Sari, M.Y.A. and Mustapha, M.A. (2018) Comparison between Edge Detection Methods on UTeM Unmanned Aerial Vehicles Images. MATEC Web of Conferences, 150, Article No. 06029. https://doi.org/10.1051/matecconf/201815006029

17. de Vegt, S.E. (2015) A Fast and Robust Algorithm for the Detection of Circular Pieces in a Cyber Physical System. ES Reports.

18. Tian, Q.C., Pan, Q., Cheng, Y.M. and Gao, Q.X. (2004) Fast Algorithm and Application of Hough Transform in Iris Segmentation. Proceedings of 2004 International Conference on Machine Learning and Cybernetics, Shanghai, 26-29 August 2004, Vol. 7, 3977-3980.

19. Koh, J., Govindaraju, V. and Chaudhary, V. (2010) A Robust Iris Localization Method Using an Active Contour Model and Hough Transform. 20th International Conference on Pattern Recognition, Istanbul, 23-26 August 2010, 2852-2856. https://doi.org/10.1109/ICPR.2010.699

20. Jan, F., Usman, I., Khan, S.A. and Malik, S.A. (2013) Iris Localization Based on the Hough Transform, a Radial-Gradient Operator, and the Gray-Level Intensity. Optik—International Journal for Light and Electron Optics, 124, 5976-5985. https://doi.org/10.1016/j.ijleo.2013.04.116

21. Ren, Y., Qu, Z. and Liu, X. (2015) A Robust Iris Segmentation Algorithm Using Active Contours without Edges and Improved Circular Hough Transform. International Conference on Cloud Computing and Security, Nanjing, 13-15 August 2015, Vol. 9483, 457-468. https://doi.org/10.1007/978-3-319-27051-7_39

22. Zeng, Y. and Jun, W. (2018) Research on Iris Recognition Algorithm Based on Hough Transform. IOP Conference Series: Materials Science and Engineering, 439, Article ID: 032007. https://doi.org/10.1088/1757-899X/439/3/032007

23. Proença, H., Filipe, S., Santos, R., Oliveira, J. and Alexandre, L.A. (2009) The UBIRIS.v2: A Database of Visible Wavelength Iris Images Captured On-the-Move and At-a-Distance. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32, 1529-1535.

24. CASIA Iris Image Database (2013). http://www.cbsr.ia.ac.cn/english/IrisDatabase.asp

25. Kimmel, R., Elad, M., Shaked, D., Keshet, R. and Sobel, I. (2003) A Variational Framework for Retinex. International Journal of Computer Vision, 52, 7-23. https://doi.org/10.1023/A:1022314423998