Circuits and Systems
Vol.07 No.08(2016), Article ID:67329,11 pages
10.4236/cs.2016.78128

Gait Based Human Recognition with Various Classifiers Using Exhaustive Angle Calculations in Model Free Approach

S. M. H. Sithi Shameem Fathima1, R. S. D. Wahida Banu2, S. Mohamed Mansoor Roomi3

1Department of ECE, Syed Ammal Engineering College, Ramanathapuram, India

2Department of ECE, Government College of Engineering, Salem, India

3Department of ECE, Thiagarajar College of Engineering, Madurai, India

Copyright © 2016 by authors and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY).

http://creativecommons.org/licenses/by/4.0/

Received 19 April 2016; accepted 10 May 2016; published 14 June 2016

ABSTRACT

Human gait recognition has emerged in recent years as a supportive biometric technique that identifies people by the way they walk. Gait recognition with model-free approaches faces challenges such as speed variation, clothing variation, illumination changes, and view angle variation, all of which reduce the recognition rate. The proposed algorithm selects exhaustive angles from head to toe of a person, together with the height and width of the same subject. Experiments were conducted using silhouettes with view angle variation and clothing variation. The recognition rate is improved to 91% using a Support Vector Machine classifier. The proposed method is evaluated on CASIA Gait Dataset B (Institute of Automation, Chinese Academy of Sciences, China). Experimental results demonstrate that the proposed technique shows promising results with state-of-the-art classifiers.

Keywords:

Gait Recognition, CASIA Gait Dataset B, Classifiers

1. Introduction

In this modern era, with an increasing demand for surveillance systems, people want a more convenient method for identifying a person. Gait is one of the few biometrics that can identify a person at a distance. Human gait recognition is drawing considerable interest in the biometrics field for monitoring or authenticating a person with malicious intent. Over the last few decades, gait analysis has evolved into a promising technology in the computer vision community. Human gait is a complex action involving the motion of various parts of the body. It comprises activities such as joint motion, swinging with linear and nonlinear motions, toe-on, toe-off, heel-on and heel-off. Gait parameters such as step length, gait cycle, and the angles of hip, knee and joint rotation can serve as unique attributes for identifying a person. Gait recognition methods can be categorized into model-based and appearance-based methods [1]. Model-based approaches [2] - [4] model the body structure of the person and estimate static body parameters over time. This process is computationally intensive, since it needs to model and track the subject's body. Models are constructed on the basis of prior knowledge about the object and justifiable assumptions, such as the system only accounting for pathologically normal gait. Model-based methods use template matching with rectangle, ellipse, ribbon and skeleton models for gait recognition [5] - [13]. In appearance-based methods, silhouette-based, floor-sensor-based (sensors outside the body) and body-worn-sensor (sensors kept on the human) methods are used for recognition. In the silhouette-based method, recognition can be performed using various representations such as the gait energy image (GEI), gait entropy image (GEnI), change energy image (CEI) and gait flow image (GFI). On top of these representations, linear techniques such as Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA) and Independent Component Analysis (ICA) are applied. Using PCA, a linear view-independent gait identification system was proposed [14]. Speed variation, view angle variation and age estimation were studied using the OU-ISIR database [15]. Most of these papers consider features only from the head to the ankle, applied with different algorithms.

2. CASIA Gait Data Base

The CASIA gait dataset consists of four subsets. Dataset A is a standard dataset that contains 19,139 images. Dataset B is a large gait dataset with 124 subjects. The walking of each subject was captured as silhouettes from 11 views under three variations, namely viewing angle, clothing style and luggage carrying condition. Dataset C was collected with an infrared camera and covers four walking styles: normal walking, slow walking, fast walking, and normal walking with a bag; it contains 153 subjects. Dataset D consists of the gait of 88 subjects and their corresponding footprints. In this paper, CASIA Dataset B with 124 persons is used for experimentation, with clothing variation and viewing angle variation over 11 views from 0˚, 18˚, 36˚, ∙∙∙, 180˚.
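
As an illustration, a minimal Python sketch for enumerating the views and walking conditions used in this work is given below. The directory layout, file naming and condition codes (nm, cl, bg) are assumptions about a local copy of CASIA-B, not a documented API, and may differ from how the dataset is stored elsewhere.

    # Minimal sketch for iterating CASIA-B silhouettes; the layout
    # silhouettes/<subject>/<condition>-<run>/<view>/*.png is an assumption.
    from pathlib import Path
    import cv2  # OpenCV, used only to read the binary silhouette frames

    DATASET_ROOT = Path("CASIA-B/silhouettes")       # hypothetical local path
    VIEWS = [f"{a:03d}" for a in range(0, 181, 18)]  # 000, 018, ..., 180 degrees
    CONDITIONS = ["nm", "cl", "bg"]                  # normal, clothing, bag subsets

    def load_sequence(subject, condition, run, view):
        """Return the grayscale silhouette frames of one walking sequence."""
        seq_dir = DATASET_ROOT / subject / f"{condition}-{run:02d}" / view
        frames = [cv2.imread(str(p), cv2.IMREAD_GRAYSCALE)
                  for p in sorted(seq_dir.glob("*.png"))]
        return [f for f in frames if f is not None]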

Pre-Processing

The silhouettes obtained from the CASIA database are affected by noise and discontinuities. Imperfections in silhouette extraction have a negative effect on the performance of a gait recognition system [16]. Algorithms exist for background subtraction and shadow removal [17]; hence, a pre-processing technique is implemented in this work. Silhouettes with discontinuities are detected using a blob detection technique: if the number of blobs is greater than one, a discontinuity is present in the silhouette. Incomplete silhouettes contained in a gait sequence are passed to a clustering procedure that uses height and width as features. A dominant energy image (DEI) is evaluated through this clustering. Since the DEI contains noise, a threshold is applied to remove it. The successive frame difference at consecutive instants is obtained as the Frame Difference Energy Image (FDEI). A grayscale erosion filter is then applied to produce good-quality silhouettes [18].
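
A minimal sketch of this pre-processing stage is given below. The blob counting, the threshold value and the erosion structuring-element size are illustrative assumptions, not the exact settings of [18].

    # Pre-processing sketch: blob-based discontinuity check, thresholding of the
    # dominant energy image (DEI) and grayscale erosion (illustrative parameters).
    import numpy as np
    from scipy import ndimage

    def has_discontinuity(silhouette):
        """A silhouette with more than one connected blob is treated as broken."""
        _, num_blobs = ndimage.label(silhouette > 0)
        return num_blobs > 1

    def dominant_energy_image(frames, threshold=0.3):
        """Average a cluster of silhouettes and threshold away low-energy noise."""
        dei = np.mean(np.stack(frames).astype(float) / 255.0, axis=0)
        return (dei >= threshold).astype(np.uint8) * 255

    def clean_silhouette(frame, size=3):
        """Grayscale erosion filter used to suppress boundary noise."""
        return ndimage.grey_erosion(frame, size=(size, size))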

3. Experimentation

The pre-processed gait sequences are used for the experimentation. The proposed methodology consists of training and testing phases. In the training phase, gait patterns are captured, features are extracted from the pre-processed silhouettes, and the features are stored in a feature database. The exhaustive angle components from head to neck, neck to torso, hip to knee of both legs and knee to toe of both legs, together with the maximum height and maximum width of the silhouette sequences, form the feature set. In the testing phase, the same features are extracted and classified using various classifiers.

3.1. Height and Width Calculation

When a person moves nearer to the camera, the frame height increases; similarly, when the person moves away from it, the frame height decreases. The variation of height and width can be calculated by defining the bounding box and the centroid point for each observed body. The height and width of the box change over a gait cycle. Let $H_1, H_2, \cdots, H_n$ be the frame heights of the skeleton in a gait cycle. Then the maximum height of the person over the entire silhouette sequence, denoted $H_{\max}$, is given in Equation (1)

$H_{\max} = \max_{1 \le i \le n} H_i$. (1)

The variation of the width in a gait cycle is another important feature for gait analysis, as it contains structural and dynamical information about the gait. When the person is in the mid-stance position, the space between the two legs is quite small and hence the width is reduced. The maximum width is attained when the person walks by swinging his arms. Let $W_1, W_2, \cdots, W_n$ be the widths of the skeleton in a gait cycle. Then the maximum spacing between the two legs for a person, denoted $W_{\max}$, is given in Equation (2)

$W_{\max} = \max_{1 \le i \le n} W_i$. (2)

The width of the outer contour of the binary silhouettes of a walking person is also one of the feature vectors.
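
A short sketch of the height and width features of Equations (1) and (2), computed from the bounding box of each silhouette, is shown below; it assumes the silhouettes are binary images and is only an assumed realization of the description above.

    import numpy as np

    def bounding_box_size(silhouette):
        """Height and width of the tight bounding box of a binary silhouette."""
        rows = np.any(silhouette > 0, axis=1)
        cols = np.any(silhouette > 0, axis=0)
        height = rows.nonzero()[0].max() - rows.nonzero()[0].min() + 1
        width = cols.nonzero()[0].max() - cols.nonzero()[0].min() + 1
        return height, width

    def max_height_width(frames):
        """H_max and W_max over one gait cycle, as in Equations (1) and (2)."""
        sizes = [bounding_box_size(f) for f in frames]
        heights, widths = zip(*sizes)
        return max(heights), max(widths)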

The centroid of the human silhouette is calculated using Equations (3) and (4)

$x_c = \frac{1}{N} \sum_{i=1}^{N} x_i$ (3)

$y_c = \frac{1}{N} \sum_{i=1}^{N} y_i$ (4)

where $(x_c, y_c)$ represents the average contour pixel position, $(x_i, y_i)$ are the points on the human blob, and N is the total number of points on the contour.
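
The centroid of Equations (3) and (4) can be computed from the outer contour points, for example as follows; the use of OpenCV contour extraction is an assumed implementation choice.

    import cv2
    import numpy as np

    def silhouette_centroid(silhouette):
        """Average (x_c, y_c) of the N outer contour points, Equations (3)-(4)."""
        contours, _ = cv2.findContours((silhouette > 0).astype(np.uint8),
                                       cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
        points = max(contours, key=len).reshape(-1, 2)   # largest contour, (x, y) points
        x_c, y_c = points.mean(axis=0)
        return x_c, y_c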

3.2. Skeletonization and Angle Calculation

The skeleton of the input silhouette U is defined in terms of erosions and openings as

$S(U) = \bigcup_{t=0}^{T} S_t(U)$, where (5)

$S_t(U) = (U \ominus tV) - \left[ (U \ominus tV) \circ V \right]$, with (6)

V the structuring element and $U \ominus tV$ denoting t successive erosions of U. T is the final erosion step before U is eroded to the empty set,

$T = \max\{\, t \mid U \ominus tV \neq \varnothing \,\}$. (7)
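
The morphological skeleton of Equations (5)-(7) can be sketched directly with successive erosions and openings; the 3x3 cross structuring element used below is an assumption, not a stated choice of the paper.

    import numpy as np
    from scipy import ndimage

    def morphological_skeleton(U):
        """Skeleton S(U) as the union over t of (U erode tV) minus its opening by V,
        following Equations (5)-(7); V is assumed to be a 3x3 cross."""
        V = ndimage.generate_binary_structure(2, 1)   # 3x3 cross structuring element
        eroded = U > 0
        skeleton = np.zeros_like(eroded)
        while eroded.any():                           # stops at T, the last non-empty erosion
            opened = ndimage.binary_opening(eroded, structure=V)
            skeleton |= eroded & ~opened              # S_t(U) = (U erode tV) - opening by V
            eroded = ndimage.binary_erosion(eroded, structure=V)
        return skeleton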

The angles are measured as the deviation of each divided skeleton segment from its reference partition plane, as shown in Figure 1.

Since the variation of these segments is high, the corresponding Radon coefficients also take high values, which makes them well suited to act as feature vectors.

The Radon transform of a skeleton image is denoted $R(r, \theta)$, where r is the normal distance from the origin and $\theta$ the angle of the normal [19].

The Radon transform point is given by Equation (8)

$R(r, \theta) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} I(x, y)\, \delta(x \cos\theta + y \sin\theta - r)\, dx\, dy$ (8)

where $I(x, y)$ is the skeleton image and $\delta(\cdot)$ is the Dirac delta function.
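
Using the skimage implementation of the Radon transform, the dominant orientation of a skeleton segment can be estimated as sketched below; taking the projection angle with the peak Radon response as the limb angle is an assumed simplification of the procedure described above.

    import numpy as np
    from skimage.transform import radon

    def dominant_angle(segment):
        """Estimate the orientation of one skeleton segment via Equation (8):
        return the projection angle theta whose Radon coefficients peak."""
        theta = np.arange(0.0, 180.0)                          # normal angles in degrees
        sinogram = radon(segment.astype(float), theta=theta, circle=False)
        peak_column = np.argmax(np.max(sinogram, axis=0))      # strongest projection
        return theta[peak_column]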

Figure 2(a) shows the silhouettes and their corresponding skeletons. Figure 2(b) shows the six sub-divided skeleton portions used for the angle calculations.

Figure 3 shows the block diagram for the proposed method. The silhouettes are taken from CASIA database.

Figure 1. Six angles: θ1, angle between head and neck; θ2, angle between neck and torso; θ3, angle between hip and knee of the left leg; θ4, angle between hip and knee of the right leg; θ5, angle between knee and foot of the left leg; θ6, angle between knee and foot of the right leg.

Six angles, height and width are extracted from the silhouettes. The classifiers are trained on these features, which are stored in a database. During the testing phase, the classifier recognizes the person by comparing against this database.
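
Tying the pieces together, the following sketch assembles the eight-dimensional feature vector (six segment angles, maximum height, maximum width) for one sequence. It reuses the helper sketches given earlier (max_height_width, morphological_skeleton, dominant_angle), and the segment_masks marking the head-neck, neck-torso, hip-knee and knee-foot regions are hypothetical inputs, so this is only an assumed outline of the pipeline in Figure 3.

    import numpy as np

    def gait_feature_vector(frames, segment_masks):
        """Six limb angles from the skeleton segments plus H_max and W_max.
        Helpers are the sketches defined earlier; segment_masks is a hypothetical
        list of six boolean region masks for the body partitions of Figure 1."""
        h_max, w_max = max_height_width(frames)
        skeleton = morphological_skeleton(frames[len(frames) // 2])   # mid-cycle frame
        angles = [dominant_angle(skeleton & (mask > 0)) for mask in segment_masks]
        return np.array(angles + [h_max, w_max], dtype=float)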

3.3. K_Nearest Neighbour Classifier (K_NN)

K_NN is one of the simplest methods for pattern classification. Let $\{(\mathbf{x}_i, y_i)\}_{i=1}^{n}$ denote a training set of n labelled examples with inputs $\mathbf{x}_i \in \mathbb{R}^d$ and class labels $y_i$.

Let the discrete binary value $y_{ij} \in \{0, 1\}$ represent whether $y_i$ and $y_j$ match each other or not. The squared distance between two vectors under a linear transform $\mathbf{L}$ is given by Equation (9)

$D(\mathbf{x}_i, \mathbf{x}_j) = \left\| \mathbf{L}(\mathbf{x}_i - \mathbf{x}_j) \right\|^2$. (9)

The linear transform $\mathbf{L}$ is learned to optimize K_NN classification, and the cost function over the distance metric is defined by Equation (10).


Figure 2. (a) Human silhouettes and their corresponding skeletons; (b) segmented parts of a skeleton.

Figure 3. Block diagram for the proposed method.

$\varepsilon(\mathbf{L}) = \sum_{i,j} \eta_{ij} \left\| \mathbf{L}(\mathbf{x}_i - \mathbf{x}_j) \right\|^2 + c \sum_{i,j,l} \eta_{ij} (1 - y_{il}) \left[ 1 + \left\| \mathbf{L}(\mathbf{x}_i - \mathbf{x}_j) \right\|^2 - \left\| \mathbf{L}(\mathbf{x}_i - \mathbf{x}_l) \right\|^2 \right]_{+}$. (10)

Here $\eta_{ij} \in \{0, 1\}$ indicates whether input $\mathbf{x}_j$ is an expected (target) neighbour of input $\mathbf{x}_i$, and $[\cdot]_{+} = \max(\cdot, 0)$ denotes the hinge function.
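
In practice the K_NN step can be realized with a standard library classifier. The sketch below uses scikit-learn with k = 1 on the eight-dimensional feature vectors (six angles, maximum height, maximum width); it is an assumed, simplified stand-in for the metric-learning formulation above, with a plain Euclidean distance.

    from sklearn.neighbors import KNeighborsClassifier

    # X_train: (n_sequences, 8) array of [theta_1..theta_6, H_max, W_max] features,
    # y_train: subject identifiers; both are assumed to be prepared as described above.
    def train_knn(X_train, y_train, k=1):
        knn = KNeighborsClassifier(n_neighbors=k, metric="euclidean")
        knn.fit(X_train, y_train)
        return knn

    # Example: subject_id = train_knn(X_train, y_train).predict(X_test)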

3.4. Multiclass SVM

Support Vector Machines (SVM) were originally designed for binary classification. The formulation that solves the multiclass SVM problem in one step has a number of variables proportional to the number of classes. Therefore, for multiclass SVM, either several binary classifiers have to be constructed or a larger optimization problem has to be solved. Hence, it is in general computationally more expensive to solve a multiclass problem than a binary problem with the same amount of data (Kohli & Verma 2011) [20].

Assume $\{(\mathbf{x}_1, y_1), \cdots, (\mathbf{x}_n, y_n)\}$ is a training set, where $\mathbf{x}_i \in \mathbb{R}^d$ and $y_i \in \{1, \cdots, k\}$, and for the multiclass problem the class of each test sample has to be determined. For a k-class problem, the optimal hyperplane constructed with SVM for class i against class j is defined as

$(\mathbf{w}^{ij})^{T} \phi(\mathbf{x}) + b^{ij} = 0$. (11)

Here $\mathbf{w}^{ij}$ is a weight vector in the feature space, $\phi(\cdot)$ is a mapping function and $b^{ij}$ is a scalar. The orientation of the optimal hyperplane is given in Equation (12)

$\mathbf{w}^{ij} / \left\| \mathbf{w}^{ij} \right\|$. (12)

For n classes, the one-against-all method constructs n SVM models. The ith SVM is trained with all of the training examples of the ith class given positive labels and all other examples given negative labels. The final output of the one-against-all method is the class that corresponds to the SVM with the highest output value (Liu et al. 2008) [21]. Thus, by solving the optimization problem of the SVM using all the training samples in the dataset, the decision function of the ith SVM is given in Equation (13)

$f_i(\mathbf{x}) = (\mathbf{w}^{i})^{T} \phi(\mathbf{x}) + b^{i}$. (13)

The input vector x is assigned to the class that corresponds to the largest value of the decision function. A sample x is classified into class X as given by Equation (14)

$X = \arg\max_{i = 1, \cdots, k} \left( (\mathbf{w}^{i})^{T} \phi(\mathbf{x}) + b^{i} \right)$. (14)
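
A one-against-all multiclass SVM as in Equations (13)-(14) can be sketched with scikit-learn; the RBF kernel and its parameters below are illustrative assumptions rather than the settings used in the experiments.

    from sklearn.multiclass import OneVsRestClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    def train_one_vs_all_svm(X_train, y_train):
        """One SVM per class; the class with the largest decision value wins (Eq. 14)."""
        model = make_pipeline(
            StandardScaler(),
            OneVsRestClassifier(SVC(kernel="rbf", C=10.0, gamma="scale")))
        model.fit(X_train, y_train)
        return model

    # Prediction follows Equation (14), the argmax over per-class decision functions:
    # y_pred = train_one_vs_all_svm(X_train, y_train).predict(X_test)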

3.5. RVM Classifier

The relevance vector machine (RVM) technique has been applied in many different areas of pattern recognition. It is a machine learning technique that uses Bayesian inference to obtain parsimonious solutions for regression and probabilistic classification. However, during the training phase the inversion of a large matrix is required, so the methodology is not suitable for large datasets. Consider a two-class problem with training points $\{\mathbf{x}_n\}_{n=1}^{N}$ and labels $t_n \in \{0, 1\}$. Based on a Bernoulli likelihood distribution,

$P(\mathbf{t} \mid \mathbf{w}) = \prod_{n=1}^{N} \sigma\{ y(\mathbf{x}_n; \mathbf{w}) \}^{t_n} \left[ 1 - \sigma\{ y(\mathbf{x}_n; \mathbf{w}) \} \right]^{1 - t_n}$ (15)

where $\sigma(y) = 1/(1 + e^{-y})$ is the logistic sigmoid function and $y(\mathbf{x}; \mathbf{w})$ is the model output.
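
The Bernoulli likelihood of Equation (15) can be written down directly, as in the small sketch below; a third-party RVM implementation would be needed for the full Bayesian training, which is not sketched here.

    import numpy as np

    def bernoulli_log_likelihood(y, t):
        """Log of Equation (15): y holds the model outputs y(x_n; w), t the 0/1 labels."""
        sigma = 1.0 / (1.0 + np.exp(-y))              # logistic sigmoid
        return np.sum(t * np.log(sigma) + (1 - t) * np.log(1 - sigma))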

4. Results and Discussion

The CASIA dataset B sequences with clothing variation and view angle variation are trained and tested with the K_NN, RVM and SVM classifiers. The recognition rate is obtained using the formula given in Equation (16)

$\text{Recognition rate}\,(\%) = \dfrac{\text{Number of correctly recognized test sequences}}{\text{Total number of test sequences}} \times 100$. (16)
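
Equation (16) corresponds to simple classification accuracy and can be evaluated, for example, as follows.

    from sklearn.metrics import accuracy_score

    def recognition_rate(y_true, y_pred):
        """Recognition rate (%) as in Equation (16)."""
        return 100.0 * accuracy_score(y_true, y_pred)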

Table 1 shows a recognition rate of around 85% using the K_NN classifier. The performance of this classifier on the different sets does not vary greatly with respect to clothing, so the proposed algorithm is robust to cloth variation. When the successive training and testing frames correspond well, the recognition rate improves. Figure 4 shows the histogram representation of the cloth variation set using K_NN.

Table 2 shows a recognition rate of around 88% using the RVM classifier, an improvement over K_NN on the same database. K_NN performs well only for small datasets, whereas RVM provides Bayesian, probability-based outputs, which makes its results more informative than those of K_NN. Figure 5 shows the histogram representation of the cloth variation set using the RVM classifier.

Table 3 shows a recognition rate of around 91% using the SVM classifier, since the SVM classifier finds the best separation among the closest feature points. The SVM classifier outperforms both the K_NN and RVM classifiers. Figure 6 shows the histogram representation of cloth variation using the SVM classifier.

Table 4 shows the training and testing of silhouettes with different viewing angles from 0˚ to 180˚; the highest recognition is obtained when the same view angle is used for the training and testing frames with SVM.

Table 1. Recognition rate of CASIA-B dataset (cloth variation) using K_NN classifier.

Table 2. Recognition rate of CASIA-B dataset (cloth variation) using RVM classifier.

Table 3. Recognition rate of CASIA-B dataset (cloth variation) using SVM classifier.

Table 4. Percentage of recognition rate in a CASIA dataset (multi view) using SVM classifier.

Figure 4. Histogram representation of K_NN Classifier output.

When the view angle of the training frames differs from that of the testing frames, the recognition rate is reduced, so view angle variation has a high impact on this algorithm. Similar trends are obtained with the K_NN and RVM classifiers, but with lower recognition rates than the SVM classifier. Goffredo et al. (2010) [22] achieved a mean correct classification rate of 73.6% for 65 persons across all views. The proposed algorithm achieves an average of 91.4% over the range of 0˚ to 180˚ variations (multi view). For clothing variations, the recognition rate is around 91.5%. In the experiments, 124 persons from CASIA dataset B were used and more than 25,000 silhouette sequences were trained and stored in a database for the recognition calculation. SVM produces better results than the K_NN and RVM classifiers and gives good recognition performance on the CASIA database.

Table 5 shows that the SVM classifier performs best, producing 91.5%.

Figure 5. Histogram representation of RVM classifier output.

Figure 6. Histogram representation of SVM classifier output.

The K_NN and RVM classifiers produce lower recognition rates than SVM on the cloth variation and angle variation datasets.

Table 6 shows that the proposed method produces a better recognition percentage than the existing method.

Table 5. Percentage recognition rate with respect to different classifier.

Table 6. Comparison of recognition rate with existing method.

The selected features of exhaustive angles, height and width improve the recognition rate.

5. Conclusion and Future Work

This paper presented an automated approach for human identification from low-resolution silhouettes. The algorithm uses exhaustive head-to-toe angles, height and width as feature vectors; the gait characteristic features are kinematics based. The proposed method produces only small variations in recognition percentage under clothing variation, since the algorithm identifies the person regardless of clothing, demonstrating robustness towards different clothing styles. The SVM classifier classified the gait features efficiently, producing a higher recognition rate of 91.5% on the CASIA dataset than the K_NN and RVM classifiers. The algorithm includes the meeting point of the toe with the ground as additional information, which helps to distinguish persons correctly. Experimental results demonstrate the feasibility of the approach. In future work, more attention will be paid to the feature space for describing and recognizing human gait on more subjects, and challenging problems such as occlusion, shadows and noise in the silhouettes will be addressed by modifying the proposed algorithm.

Contribution in This Paper

The proposed method considers features exhaustively, from head-to-toe angles, for an improved recognition rate. Most of the papers discussed above did not consider the angle between the ankle and toe portion with respect to the gait cycle. This added information gives more complete kinematic information about a person and enhances the recognition rate. The algorithm yields better recognition under clothing variation and view angle variation.

Cite this paper

S. M. H. Sithi Shameem Fathima, R. S. D. Wahida Banu and S. Mohamed Mansoor Roomi (2016) Gait Based Human Recognition with Various Classifiers Using Exhaustive Angle Calculations in Model Free Approach. Circuits and Systems, 07, 1465-1475. doi: 10.4236/cs.2016.78128

References

  1. Boulgouris, N.V., Hatzinakos, D. and Plataniotis, K.N. (2005) Gait Recognition: A Challenging Signal Processing Technology for Biometric Identification. IEEE Signal Processing Magazine, 22, 78-90.
     http://dx.doi.org/10.1109/MSP.2005.1550191

  2. Yam, C., Nixon, M.S. and Carter, J.N. (2004) Automated Person Recognition by Walking and Running via Model-Based Approaches. Pattern Recognition, 37, 1057-1072.
     http://dx.doi.org/10.1016/j.patcog.2003.09.012

  3. Niyogi, S.A. and Adelson, E.H. (1994) Analyzing and Recognizing Walking Figures in XYT. 1994 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Seattle, WA, 21-23 June 1994, 469-474.
     http://dx.doi.org/10.1109/cvpr.1994.323868

  4. Bouchrika, I. and Nixon, M.S. (2007) Model-Based Feature Extraction for Gait Analysis and Recognition. Proceedings of the 3rd International Conference on Computer Vision/Computer Graphics Collaboration Techniques, France, March 2007, 150-160.

  5. Guo, Y., Xu, G. and Tsuji, S. (1994) Tracking Human Body Motion Based on a Stick Figure Model. Journal of Visual Communication and Image Representation, 5, 1-9.
     http://dx.doi.org/10.1006/jvci.1994.1001

  6. Cunado, D., Nixon, M.S. and Carter, J.N. (1997) Using Gait as a Biometric, via Phase-Weighted Magnitude Spectra. 1st International Conference on Audio and Video Based Biometric Person Authentication, Crans-Montana, Switzerland, 12-14 March 1997, 95-102.

  7. Bobick, A.F. and Johnson, A.Y. (2001) Gait Recognition Using Static, Activity-Specific Parameters. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Kauai, Hawaii, 8-14 December 2001, 423-430.

  8. Lee, L. and Grimson, W.E.L. (2002) Gait Analysis for Recognition and Classification. Proceedings of the Fifth IEEE International Conference on Automatic Face and Gesture Recognition, Washington DC, 20-21 May 2002, 155-161.

  9. Wagg, D.K. and Nixon, M.S. (2004) On Automated Model-Based Extraction and Analysis of Gait. Proceedings of the 6th International Conference on Automatic Face and Gesture Recognition, Seoul, South Korea, 19 May 2004, 11-16.

  10. Goffredo, M., Seely, R.D., Carter, J.N. and Nixon, M.S. (2008) Markerless View Independent Gait Analysis with Self-Camera Calibration. Proceedings of the Eighth IEEE International Conference on Automatic Face and Gesture Recognition, Amsterdam, 17-19 September 2008.

  11. Zhang, R., Vogler, C. and Metaxas, D. (2007) Human Gait Recognition at Sagittal Plane. Image & Vision Computing, 25, 321-330.
      http://dx.doi.org/10.1016/j.imavis.2005.10.007

  12. Blum, H. (1967) A Transformation for Extracting New Descriptors of Shape.
      http://pageperso.lif.univ-mrs.fr/~edouard.thiel/rech/1967-blum.pdf

  13. Bharat Kumar, A.G., Daigle, K.E., Pandy, M.G., Cai, Q. and Aggarwal, J.K. (1994) Lower Limb Kinematics of Human Walking with the Medial Axis Transformation. Proceedings of the IEEE Workshop on Motion of Non-Rigid and Articulated Objects, Austin, TX, 11-12 November 1994, 70-76.

  14. Zhang, Z.H. and Troje, N.F. (2005) View-Independent Person Identification from Human Gait. Neurocomputing, 69, 250-256.
      http://dx.doi.org/10.1016/j.neucom.2005.06.002

  15. Makihara, Y., Mannami, H., Tsuji, A., Hossain, Md.A., Sugiura, K., Mori, A. and Yagi, Y. (2012) The OU-ISIR Gait Database Comprising the Treadmill Dataset. IPSJ Transactions on Computer Vision and Applications, 4, 53-62.

  16. Liu, Z. and Sarkar, S. (2005) Effect of Silhouette Quality on Hard Problems in Gait Recognition. IEEE Transactions on Systems, Man, and Cybernetics, 35, 170-183.
      http://dx.doi.org/10.1109/TSMCB.2004.842251

  17. Zhu, S.P., Guo, Z.C. and Ma, L. (2012) Shadow Removal with Background Difference Method Based on Shadow Position and Edges Attributes. EURASIP Journal on Image and Video Processing, 2012, 22.
      http://dx.doi.org/10.1186/1687-5281-2012-22

  18. Sithi Shameem Fathima, S.M.H. and Wahida Banu, R.S.D. (2014) Gait Recognition: A Novel Approach to Quality Improvement in Silhouettes. Applied Mechanics and Materials, 573, 459-464.

  19. Sithi Shameem Fathima, S.M.H. and Wahida Banu, R.S.D. (2015) Human Gait Recognition Using Silhouettes. International Journal of Applied Engineering Research, 10, 5443-5454.

  20. Kohli, N. and Verma, N.K. (2011) Arrhythmia Classification Using SVM with Selected Features. International Journal of Engineering, Science and Technology, 3, 122-131.

  21. Liu, B., Hao, Z. and Tsang, E.C.C. (2008) Nesting One-Against-One Algorithm Based on SVMs for Pattern Classification. IEEE Transactions on Neural Networks, 19, 2044-2052.
      http://dx.doi.org/10.1109/TNN.2008.2003298

  22. Goffredo, M., Bouchrika, I., Carter, J.N. and Nixon, M.S. (2010) Self-Calibrating View-Invariant Gait Biometrics. IEEE Transactions on Systems, Man, and Cybernetics, Part B, 40, 997-1008.
      http://dx.doi.org/10.1109/TSMCB.2009.2031091