Vol.2, No.1, 49-53 (2010) Natural Science
http://dx.doi.org/10.4236/ns.2010.21007
Copyright © 2010 SciRes. OPEN ACCESS
Face recognition based on manifold learning and
Rényi entropy
Wen-Ming Cao, Ning Li
Intelligent Information Processing Key Laboratory, Shenzhen University, Shenzhen, China; wmcao@szu.edu.cn
Received 10 September 2009; revised 29 October 2009; accepted 9 November 2009.
ABSTRACT
Although manifold learning has been successfully
applied in many areas, such as data visualization,
dimension reduction and speech recognition, little
research has combined information theory with
geometric learning. In this paper we explore this
combination and propose a new approach to face
recognition: the intrinsic α-Rényi entropy of a face
image, obtained through manifold learning, is
used as the characteristic measure during recognition.
The new algorithm is tested on the ORL face
database, and the experiments yield satisfactory
results.
Keywords: Manifold Learning; Rényi Entropy; Face
Recognition
1. INTRODUCTION
Face recognition has become a research hotspot in image
processing, pattern recognition and artificial intelligence
in recent years. Numerous research papers appear in major
international publications, and a great deal of capital and
manpower is invested in this research and in the
development of related application systems. However, the
performance of face recognition can be influenced by many
factors, such as illumination, pose, age, facial expression,
image resolution and noise; these factors complicate
computer-based face recognition and make it a challenging
task.
The existing face recognition methods can be roughly
classified into two categories [1]: local feature based
and global feature based. A local feature based method
represents a human face with extracted feature vectors
(eyes, nose, mouth, hair, and face contours) and designs
classifiers to perform recognition. A global feature
based method, on the other hand, takes whole images as
input feature vectors and extracts low-dimensional
features with learning algorithms. The main difference
between the two categories lies in how the features are
extracted: in a local feature based method, the features
are designed entirely by the algorithm designers, while
in a global feature based method, the features are
extracted automatically by self-learning algorithms.
Global feature based, or learning based, methods can be
further divided into two classes: 1) statistical learning,
such as artificial neural networks (ANN) [2-4], support
vector machines (SVM) [5,6], and Boosting [7,8]; 2)
manifold learning (or dimensionality reduction), including
linear methods such as PCA [9,10] and LDA [11,12], and
nonlinear methods such as Isomap [13], LLE [14], and
Laplacian Eigenmaps [15,16].
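The contrast between the linear and nonlinear methods listed above can be illustrated with a small sketch. This is not the paper's implementation; it simply applies scikit-learn's versions of PCA, Isomap, LLE and Laplacian Eigenmaps (the latter implemented as `SpectralEmbedding`) to a toy "swiss roll" data set, with arbitrarily chosen neighborhood sizes:

```python
# Illustrative sketch (not the paper's code): reducing 3-D "swiss roll"
# data to 2-D with one linear and three nonlinear methods.
from sklearn.datasets import make_swiss_roll
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap, LocallyLinearEmbedding, SpectralEmbedding

X, _ = make_swiss_roll(n_samples=500, random_state=0)  # curved 3-D data

embeddings = {
    "PCA (linear)": PCA(n_components=2).fit_transform(X),
    "Isomap": Isomap(n_neighbors=10, n_components=2).fit_transform(X),
    "LLE": LocallyLinearEmbedding(n_neighbors=10, n_components=2).fit_transform(X),
    "Laplacian Eigenmaps": SpectralEmbedding(n_components=2, n_neighbors=10).fit_transform(X),
}

for name, Y in embeddings.items():
    print(f"{name}: {X.shape} -> {Y.shape}")
```

Because the swiss roll is curved, a linear projection such as PCA folds distant parts of the manifold onto each other, while the nonlinear methods recover the underlying 2-D parameterization.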
In recent years, nonlinear dimensionality reduction
(NLDR) methods have attracted great attention due to
their capability to deal with nonlinear, curved data [1].
These algorithms rely on the assumption that the image
data lie on, or close to, a smooth low-dimensional
manifold in a high-dimensional input image space. A
major limitation of NLDR algorithms is how to estimate
the intrinsic dimension of the manifold. LLE and
Laplacian Eigenmaps give no method for estimating the
intrinsic dimension; Isomap offers a simple one,
searching for the "elbow point" at which the decrease
in residual error levels off. For some real data,
however, it is difficult to find an obvious "elbow
point" that indicates the intrinsic dimension.
The intrinsic dimensionality estimation of a data set is
a classical problem in pattern recognition. From a
mathematical point of view, the intrinsic dimension of a
manifold is the dimension of the vector space that is
homeomorphic to local neighborhoods of the manifold; in
other words, the intrinsic dimension describes how many
"degrees of freedom" are necessary to generate the
observed data. When the samples are drawn from a large
population of signals, one can interpret them as
realizations of a multivariate distribution supported on
the manifold. The intrinsic entropy of random samples
obtained from a manifold is an information-theoretic
measure of the complexity of this distribution. The
entropy is a finite value when the distribution satisfies
the restriction of