Enhanced Face Detection Technique Based on Color Correction Approach and SMQT Features
This procedure is repeated until no changes occur. Training of the nonface table is performed in the same manner, and finally the single table is created according to Equation (5). One way to speed up classification in object recognition is to create a cascade of classifiers [15]. Here, the full SNoW classifier is split into sub-classifiers to achieve this goal. Note that there is no additional training of the sub-classifiers; instead, the full classifier is divided. Consider all possible feature combinations for one feature, P_i = {x_i | x_i = 1, 2, ..., 2^{LD}}. Then

v_i = (1/2^{LD}) Σ_{x_i ∈ P_i} h_W(x_i)    (8)
results in a relevance value with respective significance for each feature in the feature patch W. Sorting all the feature relevance values in the patch results in an importance list. Let W' ⊂ W be a subset chosen to contain the features with the largest relevance values. Then

h_{W'}(x) = Σ_{x_i ∈ W'} h_W(x_i)    (9)
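Under the additive SNoW model, where classification is a sum of per-position table lookups, the relevance sorting and subset selection can be sketched as follows. The table contents, its dimensions, and the subset size k below are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Illustrative sizes: D feature positions per patch, each feature taking
# one of M possible values (M = 2**(L*D_v) for SMQT level L on a
# D_v-pixel vicinity). The random table stands in for the combined SNoW
# lookup table of Equation (5).
rng = np.random.default_rng(0)
D, M = 648, 512
table = rng.standard_normal((D, M))

# Equation (8): relevance of position i = average table response over all
# possible values of the feature at that position.
relevance = table.mean(axis=1)

# Importance list: positions sorted by decreasing relevance.
order = np.argsort(relevance)[::-1]

# W' = the k positions with the largest relevance values (illustrative k).
k = 20
W_prime = order[:k]

def weak_score(x, positions):
    """Equation (9): sum table lookups over the chosen positions only.

    x : one integer feature index per patch position."""
    return table[positions, x[positions]].sum()

x = rng.integers(0, M, size=D)
score = weak_score(x, W_prime)
```

Thresholding `score` against θ' then gives the weak classifier described in the text.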
can function as a weak classifier, rejecting no faces within the training database, but at the cost of an increased number of false detections. The desired threshold θ' is found from the face in the training database that yields the lowest classification value from Equation (9). The number of sub-classifiers can be extended by selecting further subsets and performing the same operations as described for one sub-classifier. Consider any division, according to the relevance values, of the full set, W' ⊂ W'' ⊂ ... ⊂ W. Then W' has fewer features and more false detections compared to W'', and so forth in the same manner until the full classifier is reached. One advantage of this division is that W'' reuses the sum result from W'. Hence, the maximum number of summations and lookups in the table is the number of features in the patch W.
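Because the subsets are nested, a cascade built from them can extend each stage's partial sum rather than recompute it. A minimal sketch, using a random stand-in table and placeholder thresholds (the paper derives each threshold from the weakest face in the training set):

```python
import numpy as np

rng = np.random.default_rng(1)
D, M = 648, 512
table = rng.standard_normal((D, M))           # stand-in combined SNoW table
order = np.argsort(table.mean(axis=1))[::-1]  # importance list, Equation (8)

# Nested subsets W' ⊂ W'' ⊂ ... ⊂ W via split points (mirroring the
# 20/50/100/200/648 summations used later); thresholds are placeholders.
splits = [20, 50, 100, 200, 648]
thetas = [-5.0, -4.0, -3.0, -2.0, -1.0]

def cascade_classify(x):
    """Each stage extends the previous stage's running sum, so a fully
    evaluated patch costs at most D lookups and additions in total."""
    total, done = 0.0, 0
    for stop, theta in zip(splits, thetas):
        pos = order[done:stop]             # only the positions new to W''
        total += table[pos, x[pos]].sum()  # reuse the partial sum from W'
        done = stop
        if total < theta:
            return False                   # rejected by this sub-classifier
    return True

x = rng.integers(0, M, size=D)
decision = cascade_classify(x)
```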
6. Face Detection Training and Classification
The face detector analyzes image patches of 32 × 32 pixels. Each patch is extracted and classified by stepping Δx = 1 and Δy = 1 pixels through the whole image. In order to find faces of various sizes, the image is repeatedly downscaled with a scale factor Sc = 1.2.
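The exhaustive scan just described can be sketched as a generator over an image pyramid. Nearest-neighbour subsampling is an implementation shortcut here, not a choice stated in the paper:

```python
import numpy as np

def pyramid_windows(image, patch=32, dx=1, dy=1, sc=1.2):
    """Yield (scale, y, x, window) over an exhaustive scan of an image
    pyramid: step dx = dy = 1 pixel, downscale by sc = 1.2 per level."""
    img, scale = image, 1.0
    while img.shape[0] >= patch and img.shape[1] >= patch:
        h, w = img.shape
        for y in range(0, h - patch + 1, dy):
            for x in range(0, w - patch + 1, dx):
                yield scale, y, x, img[y:y + patch, x:x + patch]
        # Nearest-neighbour subsampling keeps the sketch dependency-free;
        # any proper interpolation would serve equally well.
        new_h, new_w = int(h / sc), int(w / sc)
        iy = (np.arange(new_h) * sc).astype(int)
        ix = (np.arange(new_w) * sc).astype(int)
        img, scale = img[np.ix_(iy, ix)], scale * sc

windows = list(pyramid_windows(np.zeros((48, 48))))
```

A 48 × 48 input yields three pyramid levels (48, 40 and 33 pixels wide) before the image becomes smaller than the 32-pixel patch.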
To overcome the illumination and sensor problem, the
proposed local SMQT features are extracted. Each pixel
will get one feature vector by analyzing its vicinity. This
feature vector can further be recalculated to an index.
1
1
2i
i
i
mVxL
(10)
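Equation (10) simply packs the feature vector into a base-2^L integer. A small sketch; the 3 × 3 vicinity at SMQT level L = 1 used in the example is a common choice but is not restated in this excerpt:

```python
import numpy as np

def feature_index(V, L=1):
    """Equation (10): pack the local SMQT feature vector V, whose entries
    lie in 0..2**L - 1, into a single integer index m."""
    V = np.asarray(V)
    weights = (2 ** L) ** np.arange(V.size)   # 2**(L*(i-1)) for i = 1..D
    return int((V * weights).sum())

# At level L = 1 on a 3x3 vicinity the entries are binary and the index
# ranges over 2**9 = 512 possible values.
m = feature_index([1, 0, 1, 1, 0, 0, 1, 0, 1])   # -> 333
```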
where V(x_i) is the value of the feature vector at position i. This feature index can be calculated for all pixels, which results in a feature-index image. A circular mask containing P = 648 pixels is applied to each patch to remove background pixels, to avoid edge effects from possible filtering, and to avoid undefined pixels during rotation operations.
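The text gives the mask's pixel count P = 648 but not its radius, so the sketch below searches for the radius whose centred disc on a 32 × 32 patch comes closest to that count. This search is an assumption of ours, not the paper's construction:

```python
import numpy as np

def circular_mask(side=32, target=648):
    """Build a centred circular mask on a side x side patch, choosing
    the radius whose pixel count is closest to `target`."""
    yy, xx = np.mgrid[:side, :side]
    c = (side - 1) / 2.0                      # patch centre (15.5, 15.5)
    d2 = (yy - c) ** 2 + (xx - c) ** 2        # squared distance to centre
    best_r, best_n = 0.0, 0
    for r in np.arange(10.0, 16.0, 0.01):     # scan candidate radii
        n = int((d2 <= r * r).sum())
        if abs(n - target) < abs(best_n - target):
            best_r, best_n = r, n
    return d2 <= best_r * best_r, best_n

mask, n_pixels = circular_mask()
```

Multiplying a patch by `mask` zeroes the corner pixels before classification.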
The face and nonface tables are trained with the pa-
rameters α = 1.005, β = 0.995 and γ = 200. The two
trained tables are then combined into one table according
to Equation (5). Given the SNoW classifier table, the
proposed split-up SNoW classifier is created. The splits are performed at 20, 50, 100, 200 and 648 summations. This setting removes over 90% of the background patches in the initial stages on video frames recorded in an office environment.
Overlapped detections are pruned using geometrical
location and classification scores. Each detection is
tested against all other detections. If one of the area
overlap ratios is over a fixed threshold, then the different
detections are considered to belong to the same face.
Given that two detections overlap each other, the detec-
tion with the highest classification score is kept and the
other one is removed. This procedure is repeated until no
more overlapping detections are found.
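The pruning procedure is a greedy suppression by overlap ratio and score. A sketch in which the threshold value and the choice of overlap ratio (intersection relative to the smaller box) are illustrative assumptions:

```python
def prune_overlaps(detections, thr=0.5):
    """Greedy pruning of overlapped detections: keep only the
    highest-scoring detection within each overlapping group.

    detections: list of (score, x, y, w, h) tuples."""
    def overlap_ratio(a, b):
        _, ax, ay, aw, ah = a
        _, bx, by, bw, bh = b
        ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))   # x-overlap
        iy = max(0, min(ay + ah, by + bh) - max(ay, by))   # y-overlap
        return (ix * iy) / min(aw * ah, bw * bh)

    kept = []
    # Visit detections from highest to lowest classification score.
    for d in sorted(detections, key=lambda d: -d[0]):
        if all(overlap_ratio(d, k) <= thr for k in kept):
            kept.append(d)
    return kept

dets = [(0.9, 0, 0, 10, 10), (0.8, 2, 2, 10, 10), (0.7, 30, 30, 10, 10)]
kept = prune_overlaps(dets)   # the 0.8 box overlaps the 0.9 box -> removed
```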
7. Experimental Discussion & Results
Our experiments are performed using Matlab 7.4 on a 2.13 GHz CPU to verify the effectiveness of the proposed method. The proposed method is applied to 150 color images gathered from various sources such as the Internet, the UCD Face Image Database and the Georgia Database. These images vary in size, lighting effects, uniform and nonuniform backgrounds, the number of persons in each image and the rotation angle of each person. Figure 3 shows some of the outputs for the test images in Figure 4, obtained by applying the proposed method and the SFSC method.
As can be seen in Figure 3, the face detection performance of the proposed method is better than that of the SFSC method. Figure 5 illustrates a comparison between the proposed method and the SFSC method in terms of face detection rate, false positive rate and false negative rate.
As can be seen in Figure 5, the face detection rate of the proposed method is better than that of the SFSC method: the proposed method detects approximately 84.1% of the faces correctly, whereas the SFSC method detects approximately 74.6%. Moreover, both the false positive rate and the false negative rate of the proposed method are lower than those of the SFSC method: 10.4% and 15.9%, respectively, compared with 22.0% and 25.4% for SFSC. Figure 6 compares the detection time over the 150 images for the proposed method and the SFSC method; as can be seen, the detection time of the proposed method is slightly higher.
Copyright © 2013 SciRes. JSEA