Journal of Signal and Information Processing, 2013, 4, 154-157
doi:10.4236/jsip.2013.43B027 Published Online August 2013 (http://www.scirp.org/journal/jsip)
Face Recognition Using Chain Codes
Nazmeen B. Boodoo-Jahangeer, Sunilduth Baichoo
Department of Computer Science, University of Mauritius, Reduit, Mauritius.
Email: nazmeen182@yahoo.com
Received May, 2013.
ABSTRACT
Face recognition is an active area of biometrics. This study investigates the use of chain codes as features for recognition purposes. Firstly, a segmentation method based on a skin color model was applied, followed by contour detection; then the chain codes of the contours were determined. The first difference of the chain codes was calculated, since it is invariant to rotation. The features were computed and stored in a matrix. Experiments were performed on the University of Essex face database, and the results show a recognition rate of 95% with this method, compared with 87.5% for Principal Components Analysis (PCA).
Keywords: Face Recognition; Biometrics; Chain Codes; PCA
1. Introduction
This study investigates the use of chain codes as features for face recognition. Features representing images are termed descriptors. For robust recognition, descriptors should not be sensitive to variations such as size change, translation and rotation.
Common image descriptors include area, perimeter, compactness and eccentricity. However, these descriptors are sensitive to such variations and therefore cannot be used as reliable features to represent objects.
Other object representations include chain codes, polygonal approximation, signatures, convex hull, Fourier descriptors, texture content and moments. The use of chain codes has not been explored enough in the literature.
In this work, the use of chain codes as descriptors has been investigated. Face images have been pre-processed to find the contours, and the chain codes of the contours are used as features for recognition.
2. Background Study
2.1. Face Recognition
Research in automatic face recognition dates back to the 1960s [1]. A survey of face recognition techniques has been given by Zhao et al. [2]. In general, face recognition techniques can be divided into two groups based on the face representation they use:
(a) Appearance-based methods, which use holistic texture features and are applied either to the whole face or to specific regions in a face image;
(b) Feature-based methods, which use geometric facial features (mouth, eyes, brows, cheeks, etc.) and the geometric relationships between them.
2.2. Chain Codes
Chain codes are one of the techniques that can successfully recognize characters and digits. According to Seul et al. [3], this technique possesses several advantages. Firstly, chain codes are a compact representation of a binary object. Secondly, it is easier to compare objects using chain codes, as they are a translation-invariant representation of a binary object. Another advantage is that the chain code can be used to compute any shape feature, since it is a complete representation of an object or curve. Chain codes provide lossless compression and preserve all topological and morphological information, which is useful in the analysis of line patterns in terms of speed and effectiveness [4].
Jiang et al. [5] made use of chain codes to extract fingerprint features. The captured fingerprint image was enhanced and binarized, the ridges of the fingerprint image were thinned, and the Freeman chain code was then derived to describe the extracted fingerprint ridges, with matching based on the fingerprint ridge contour. The method proved to be translation invariant, but no recognition results were reported by the authors.
3. Methodology
In this section, the database used, the pre-processing
techniques and the procedures followed are explained.
3.1. Pre-Processing
Method 1: Canny Edge Detection
The original image is converted to grayscale and Canny edge detection is applied to it. Figure 1 shows the results.
Figure 1. (a) Original Image, (b) Image Edges.
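A minimal sketch of Method 1, assuming the OpenCV Python bindings; the input file name and the two Canny thresholds below are illustrative assumptions, not values given in the paper.

import cv2

img = cv2.imread("face.jpg")                    # hypothetical input file
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)    # convert to grayscale
edges = cv2.Canny(gray, 100, 200)               # edge map, as in Figure 1(b)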
Method 2: Skin Color Model
Several skin color detection techniques have been used in the literature [6,7]. A skin color model is applied for detecting the face and removing the background. The method makes use of the YCrCb color space [8]. YCrCb is an encoded nonlinear RGB signal, commonly used by European television studios and in image compression work. Color is represented by luma (luminance computed from nonlinear RGB as a weighted sum of the RGB values) and two color-difference values, Cr and Cb, formed by subtracting luma from the red and blue RGB components:
Y = 0.299R + 0.587G + 0.114B
Cr = R − Y
Cb = B − Y
This color space is suitable for skin color modeling
since it is simple to convert from RGB to YCrCb [9].
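The conversion can be written directly in a few lines. The sketch below implements the simplified formulas above using NumPy (the full broadcast-standard YCrCb definition adds scaling and offset terms); the function name and array layout are assumptions for illustration.

import numpy as np

def rgb_to_ycrcb(rgb):
    # rgb: H x W x 3 float array with channels in R, G, B order
    R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    Y = 0.299 * R + 0.587 * G + 0.114 * B   # luma as a weighted sum of R, G, B
    Cr = R - Y                              # red color-difference
    Cb = B - Y                              # blue color-difference
    return Y, Cr, Cb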
Face image segmentation using the skin color model is done by the following steps:
1) Read the original image.
2) Convert the image to the YCrCb color space.
3) Apply a suitable threshold to detect the skin region in the image (see the sketch after this list). Figure 2 shows the skin region detected.
4) Apply contour detection.
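A sketch of steps 2 and 3, assuming OpenCV; the Cr/Cb bounds below are commonly used illustrative skin thresholds, not the exact values used in this study.

import cv2
import numpy as np

img = cv2.imread("face.jpg")                         # step 1: hypothetical input file
ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)       # step 2: convert to YCrCb
lower = np.array([0, 133, 77], dtype=np.uint8)       # assumed (Y, Cr, Cb) lower bound
upper = np.array([255, 173, 127], dtype=np.uint8)    # assumed (Y, Cr, Cb) upper bound
skin_mask = cv2.inRange(ycrcb, lower, upper)         # step 3: binary skin region (Figure 2)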
The resulting image was then used to detect the boundaries in the face. The contour-detection function retrieves the contours from the binary image and returns the number of retrieved contours. Figure 3 shows the result of boundary detection, where the face, eye and nose boundaries can be clearly seen. The largest boundary represents the boundary of the face and was kept for further processing, as in the sketch below.
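A sketch of step 4 and of selecting the largest boundary, assuming OpenCV 4.x; CHAIN_APPROX_NONE keeps every boundary pixel, which is convenient for the chain coding described later.

import cv2

def largest_face_contour(skin_mask):
    # skin_mask: binary skin image, e.g. the output of the previous sketch
    contours, _ = cv2.findContours(skin_mask, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
    print("retrieved contours:", len(contours))       # number of retrieved contours
    return max(contours, key=cv2.contourArea)         # largest boundary = face outline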
3.2. Algorithm
In this work, two algorithms have been studied: Principal Components Analysis (PCA), as described by Turk and Pentland [10], and chain codes.
Figure 2. Result of skin detection.
Figure 3. Result of Contour detection.
3.2.1. The PCA Procedure
Given an s-dimensional vector representation of each face in a training set of M images, PCA seeks a t-dimensional subspace whose basis vectors correspond to the directions of maximum variance in the original image space. This new subspace normally has a much lower dimension. The new basis vectors define a subspace of face images called the face space. All images of known faces are projected onto the face space to find the sets of weights that describe the contribution of each basis vector. To identify an unknown image, that image is also projected onto the face space to obtain its set of weights. By comparing the set of weights of the unknown face with the sets of weights of known faces, the face can be identified. A minimal numerical sketch of this procedure follows the steps below.
The main steps of PCA Algorithm are:
a) Determine the PCA subspace from the training data. The i-th image vector, containing N pixels, is of the form
x_i = [x_{i1}, x_{i2}, ..., x_{iN}]^T  (1)
b) Store all p training images in the image matrix
X = [x_1, x_2, ..., x_p]  (2)
c) Compute the covariance matrix
C = (1/p) Σ_{i=1}^{p} (x_i − m)(x_i − m)^T  (3)
where m is the mean image vector.
d) Compute the eigenvalues and eigenvectors of C,
C v_j = λ_j v_j  (4)
where λ = (λ_1, ..., λ_N) is the vector of eigenvalues of the covariance matrix.
e) Order the eigenvectors,
V = [v_1, v_2, ..., v_t],  λ_1 ≥ λ_2 ≥ ... ≥ λ_t  (5)
f) Order the eigenvectors in V according to their corresponding eigenvalues, in descending order. Keep only the eigenvectors associated with non-zero eigenvalues. This matrix of eigenvectors forms the eigenspace V, where each column of V is an eigenvector. Visualized eigenvectors of the covariance matrix are called eigenfaces.
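The sketch below implements steps a) to f) and the projection and matching described above in NumPy. It uses the small-matrix trick from Turk and Pentland [10] for efficiency; all function and variable names are illustrative assumptions, not the authors' implementation.

import numpy as np

def pca_train(X, t):
    """X: p x N matrix, one flattened training face per row; t: subspace dimension."""
    m = X.mean(axis=0)                         # mean face (steps a-c)
    A = X - m                                  # centred data
    L = A @ A.T                                # p x p surrogate of the covariance matrix
    eigvals, eigvecs = np.linalg.eigh(L)       # step d: eigen-decomposition
    order = np.argsort(eigvals)[::-1]          # steps e-f: descending eigenvalues
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    keep = eigvals > 1e-10                     # drop (near-)zero eigenvalues
    V = A.T @ eigvecs[:, keep]                 # map back to N-dimensional eigenfaces
    V /= np.linalg.norm(V, axis=0)             # unit-length columns of the eigenspace V
    return m, V[:, :t]

def project(face, m, V):
    return V.T @ (face - m)                    # weight vector in face space

def identify(probe, gallery, labels, m, V):
    w = project(probe, m, V)
    W = np.array([project(g, m, V) for g in gallery])
    return labels[int(np.argmin(np.linalg.norm(W - w, axis=1)))]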
3.2.2. Chain Codes
The chain code is an object representation method based on the object's boundary. The chain code of a boundary is determined by specifying a starting pixel and the sequence of unit vectors obtained by going left, right, up or down when moving from pixel to pixel along the boundary. The advantage of using chain codes is that they preserve information while allowing considerable data reduction. The first approach for representing digital curves using chain codes was introduced by Freeman in 1961 [11], and it is known as the Freeman Chain Code (FCC).
Figure 4. (a) 4-directional chain codes, (b) 8-directional chain codes.
This code follows the contour in a counterclockwise manner and keeps track of the direction as we go from one contour pixel to the next. The representation is based on 4-connectivity or 8-connectivity of the segments [12]. The direction of each segment is coded using a numbering scheme, as shown in Figure 4(a) and Figure 4(b). A sketch of this chain-code extraction is given below.
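A sketch of 8-directional Freeman coding of a traced boundary. The numbering follows Figure 4(b) under the usual image convention that the row coordinate grows downwards; the exact convention used by the authors is not stated, so this mapping is an assumption.

# (dx, dy) -> 8-directional code, with 0 = "east" and codes increasing counterclockwise
DIRECTIONS = {(1, 0): 0, (1, -1): 1, (0, -1): 2, (-1, -1): 3,
              (-1, 0): 4, (-1, 1): 5, (0, 1): 6, (1, 1): 7}

def freeman_chain_code(points):
    # points: ordered (x, y) boundary pixels, e.g. a contour traced with
    # CHAIN_APPROX_NONE so that consecutive pixels are 8-connected,
    # e.g. points = [tuple(p[0]) for p in face_contour]
    codes = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        codes.append(DIRECTIONS[(x1 - x0, y1 - y0)])
    return codes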
The first difference of a chain code is the number of direction changes (counted counterclockwise) between two adjacent elements of the code. This representation is invariant to rotation. Figure 5 shows the eight directions that a number in a chain code can take.
The number of direction changes for each pair of adjacent contour pixels is determined, and the feature vector is created by counting the direction changes along each face's contours, as in the sketch below.
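The sketch below computes the first difference and one plausible reading of the feature vector ("counting the direction changes"): an 8-bin histogram of direction changes along the contour. The histogram form is an assumption; the paper does not spell out the exact layout of its feature matrix.

def first_difference(codes):
    # counterclockwise direction changes between adjacent codes; treating the
    # sequence as circular makes the result independent of the starting point
    shifted = codes[1:] + codes[:1]
    return [(b - a) % 8 for a, b in zip(codes, shifted)]

def direction_change_histogram(codes):
    # hypothetical feature vector: how often each of the 8 possible changes occurs
    hist = [0] * 8
    for d in first_difference(codes):
        hist[d] += 1
    return hist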
3.3. Database
The study used the Essex Face Database [13]. The training set included 400 images of 20 persons (20 images per person). Sample faces are shown in Figure 1. The face images were 180 × 200 pixels. There were small variations in facial expression, luminance, scale and viewing angle, and the images were taken at different times. Limited side movement and tilting of the head were tolerated.
Figure 5. First Difference.
4. Results
The experiments included, first, testing chain codes on images to which Canny edge detection and the contour algorithm had been applied and, second, testing chain codes using the proposed skin color model followed by contour detection. Both results were compared with the PCA algorithm, as shown in Table 1. The corresponding False Acceptance Rate (FAR) and False Rejection Rate (FRR) were also determined (a sketch of this computation follows Table 1).
Table 1. Face Recognition Results.

Method                                                         Recognition Rate   FAR   FRR
PCA                                                            87.5%              0.8   0.2
Chain codes using Canny edge detection and contour detection   70%                0.4   0.1
Chain codes using skin color model and contour detection       95%                0.2   0.1
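For reference, a sketch of how FAR and FRR can be obtained from comparison distances; the matching distance and decision threshold used in the study are not reported, so both are assumptions here.

import numpy as np

def far_frr(genuine_dists, impostor_dists, threshold):
    # genuine_dists: distances for same-person comparisons
    # impostor_dists: distances for different-person comparisons
    far = float(np.mean(np.asarray(impostor_dists) <= threshold))  # impostors wrongly accepted
    frr = float(np.mean(np.asarray(genuine_dists) > threshold))    # genuine faces wrongly rejected
    return far, frr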
As shown in Table 1, the proposed system yields a higher recognition rate when using the skin color model than when using the Canny edge detector. PCA outperforms chain codes when the latter use Canny edge detection. This can be explained by the fact that the Canny edge map contains a lot of noise, which degrades performance.
This study shows that chain codes can be used as features for face recognition. Further work includes testing this method for ear recognition and, ultimately, creating a multi-modal face and ear recognition system.
REFERENCES
[1] W. W. Bledsoe, "The Model Method in Facial Recognition," Technical Report, PRI 15, Panoramic Research, Inc., Palo Alto, California, 1964.
[2] W. Zhao, R. Chellappa, J. Phillips and A. Rosenfeld,
"Face Recognition in Still and Video Images: A Literature Survey," ACM Computing Surveys, Vol. 35, 2003, pp. 399-458. doi:10.1145/954339.954342
[3] M. Seul, L. O'Gorman and M. J. Sammon, "Practical Algorithms for Image Analysis: Description, Examples, and Code," Cambridge University Press, USA, 2000.
[4] A. McAndrew, "Introduction to Digital Image Processing with Matlab," Thomson Course Technology, USA, 2004, p. 353.
[5] C. Jiang, Y. Zhao, W. Xu and X. Meng, “Research of
Fingerprint Recognition,” 2009 Eighth IEEE Interna-
tional Conference on Dependable, Autonomic and Secure
Computing, 2009, pp. 847-848.
doi:10.1109/DASC.2009.102
[6] P. Kakumanu, S. Makrogiannis and N. Bourbakis, "A Survey of Skin-Color Modeling and Detection Methods," Pattern Recognition, Vol. 40, 2007, pp. 1106-1122. doi:10.1016/j.patcog.2006.06.010
[7] V. Vezhnevets, V. Sazonov and A. Andreeva, "A Survey on Pixel-based Skin Color Detection Techniques," In GraphiCon, Moscow, Russia, 2003.
[8] R. Gonzalez and E. Woods, Digital Image Processing,
2nd edition, Prentice Hall, 2002.
[9] S. L. Phung, A. Bouzerdoum and D. Chai, "A Novel Skin Color Model in YCbCr Color Space and Its Application to Human Face Detection," IEEE International Conference on Image Processing (ICIP'2002), Vol. 1, 2002, pp. 289-292.
[10] M. Turk and A. Pentland, "Eigenfaces for Recognition," Journal of Cognitive Neuroscience, Vol. 3, 1991, pp. 71-86. doi:10.1162/jocn.1991.3.1.71
[11] H. Freeman, "On the Encoding of Arbitrary Geometric Configurations," IRE Transactions on Electronic Computers, New York, 1961, pp. 260-268.
[12] R. C. Gonzales and R. E. Woods, Digital Image Process-
ing, 2nd Ed. Upper Saddle River, N. J.: Prentice-Hall,
Inc., 2002.
[13] L. Spacek, "Computer Vision Science Research Projects," 2007. Retrieved July 20, 2012 from the Essex University website: http://cswww.essex.ac.uk/mv/allfaces/faces94.html