Pattern Recognition and Image Understanding

Gait-based person identification

Announcement:
The OU-ISIR Gait Database has been published.

The Optimal Camera Arrangement by a Performance Model for Gait Recognition
OptimalSensorArrangement Many gait recognition algorithms have recently been proposed, and an optimal camera arrangement is necessary to maximize their performance. In this paper, we propose a method to determine the optimal camera arrangement using a performance model that comprehensively considers observation conditions. We select silhouette resolution, observation view, and its local and global changes as the observation conditions affecting performance. Training sets composed of pairs of observation conditions and the resulting performance are then obtained through gait recognition experiments under several camera arrangements. A performance model is constructed by applying Gaussian process regression to the training set. The optimal arrangement is determined by estimating the performance of each candidate camera arrangement with the performance model. Experiments on performance estimation with a training set of 17 subjects and on optimal camera arrangement demonstrate the effectiveness of the proposed method.
  • Publications
    1. N. Akae, Y. Makihara, and Y. Yagi, ``The Optimal Camera Arrangement by a Performance Model for Gait Recognition,'' Proc. the 9th IEEE Conf. on Automatic Face and Gesture Recognition, pp. 1-6, Santa Barbara, CA, USA, Mar. 2011.[PDF]
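The core of the performance model, Gaussian process regression from observation conditions to recognition performance, can be sketched as follows. This is a minimal illustration with hypothetical training values (silhouette resolution and view angle mapped to a recognition rate), not the paper's actual data, kernel, or hyperparameters:

```python
import numpy as np

def rbf_kernel(A, B, length_scale):
    """Squared-exponential kernel between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale ** 2)

def gp_predict(X_train, y_train, X_test, length_scale=40.0, noise=1e-3):
    """Posterior mean of a zero-mean Gaussian process at X_test."""
    K = rbf_kernel(X_train, X_train, length_scale)
    alpha = np.linalg.solve(K + noise * np.eye(len(X_train)), y_train)
    return rbf_kernel(X_test, X_train, length_scale) @ alpha

# Hypothetical training pairs: (resolution, view angle) -> recognition rate
X = np.array([[30.0, 0.0], [30.0, 90.0], [60.0, 0.0], [60.0, 90.0]])
y = np.array([0.70, 0.80, 0.85, 0.95])

# Estimate performance for candidate arrangements and pick the best one
candidates = np.array([[30.0, 45.0], [60.0, 45.0]])
scores = gp_predict(X, y, candidates)
best = candidates[int(np.argmax(scores))]
```

The arrangement with the highest estimated performance is then selected, mirroring the selection step described above.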



Gait Analysis of Gender and Age using a Large-scale Multi-view Gait Database
Gender Age Analysis This paper describes video-based gait feature analysis for gender and age classification using a large-scale multi-view gait database. First, we constructed a large-scale multi-view gait database in terms of the number of subjects (168 people), the diversity of gender and age (88 males and 80 females between 4 and 75 years old), and the number of observed views (25 views), using a multi-view synchronous gait capturing system. Next, classification experiments with four classes, namely children, adult males, adult females, and the elderly, were conducted to clarify the impact of view on classification performance. Finally, we analyzed the uniqueness of the gait features of each class for several typical views to gain insight into gait differences among gender and age classes from a computer-vision point of view. In addition to insights consistent with previous works, the analysis also yielded novel insights into view-dependent gait feature differences among gender and age classes.
  • Publications
    1. Y. Makihara, H. Mannami, and Y. Yagi, ``Gait Analysis of Gender and Age Using a Large-Scale Multi-View Gait Database,'' Proc. the 10th Asian Conf. on Computer Vision, pp. 975-986, Queenstown, New Zealand, Nov. 2010.[PDF]



Performance Evaluation of Vision-based Gait Recognition using a Very Large-scale Gait Database
Large Gait DB This paper describes the construction of the largest gait database in the world and its application to a statistically reliable performance evaluation of vision-based gait recognition. Whereas existing gait databases include at most on the order of a hundred subjects, we construct an even larger gait database which includes 1,035 subjects (569 males and 466 females) with ages ranging from 2 to 94 years. Because a sufficient number of subjects for each gender and age group are included in this very large-scale gait database, we can analyze the dependence of gait recognition performance on gender or age group. The results of GEI-based gait recognition provide several novel insights, such as the tradeoff of gait recognition performance among age groups derived from the maturity of walking ability and physical strength. Moreover, improvement in the statistical reliability of performance evaluation is shown by comparing the gait recognition results for the whole set with those for subsets of a hundred subjects selected randomly from the whole set.
  • Publications
    1. M. Okumura, H. Iwama, Y. Makihara, and Y. Yagi, ``Performance Evaluation of Vision-based Gait Recognition using a Very Large-scale Gait Database,'' Proc. IEEE 4th Int. Conf. on Biometrics: Theory, Applications and Systems, pp. 1-6, Washington D.C., USA, Sep. 2010.[PDF]
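The GEI-based recognition used in the evaluation can be sketched in a few lines: a Gait Energy Image is the pixel-wise average of size-normalized binary silhouettes over a gait period, and matching is nearest-neighbour in Euclidean distance. Array sizes here are toy values:

```python
import numpy as np

def gait_energy_image(silhouettes):
    """GEI: pixel-wise mean of size-normalized binary silhouettes
    over one gait period."""
    return np.mean(np.stack(silhouettes, axis=0), axis=0)

def identify(probe_gei, gallery_geis):
    """Nearest-neighbour matching by Euclidean distance;
    returns the index of the closest gallery subject."""
    dists = [np.linalg.norm(probe_gei - g) for g in gallery_geis]
    return int(np.argmin(dists))
```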



Cluster-Pairwise Discriminant Analysis
CPDA Pattern recognition problems often suffer from large intra-class variation due to situational changes, such as pose, walking speed, and clothing variations in gait recognition. This paper describes a method of discriminant subspace analysis focused on situation cluster pairs. In the training phase, both a situation cluster discriminant subspace and class discriminant subspaces for each situation cluster pair are constructed using training samples of non-recognition-target classes. In the testing phase, given a matching pair of patterns of recognition-target classes, posteriors of the situation cluster pairs are first estimated, and the distance is then calculated in the corresponding cluster-pairwise class discriminant subspace. Experiments with both simulated and real data show the effectiveness of the proposed method.
  • Publications
    1. Y. Makihara and Y. Yagi, ``Cluster-Pairwise Discriminant Analysis,'' Proc. of the 20th Int. Conf. on Pattern Recognition, pp. 577-580, Istanbul, Turkey, Aug. 2010. [PDF]
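As a minimal building block for the class discriminant subspaces mentioned above, the classic two-class Fisher discriminant direction can be sketched as follows; the paper's full method (situation-cluster posteriors and pairwise subspaces) is not reproduced here:

```python
import numpy as np

def fisher_direction(X1, X2, reg=1e-6):
    """Two-class Fisher discriminant direction w, proportional to
    Sw^{-1} (m1 - m2), where Sw is the pooled within-class scatter."""
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    Sw = np.cov(X1.T, bias=True) * len(X1) + np.cov(X2.T, bias=True) * len(X2)
    w = np.linalg.solve(Sw + reg * np.eye(len(m1)), m1 - m2)
    return w / np.linalg.norm(w)
```

Projecting onto such directions suppresses within-class (situation-induced) variation while preserving between-class separation.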



Adaptive Acceptance Threshold Control for ROC Curve Optimization
AATC In two-class classification problems such as one-to-one verification and object detection, performance is usually evaluated by a so-called Receiver Operating Characteristic (ROC) curve expressing the tradeoff between the False Rejection Rate (FRR) and the False Acceptance Rate (FAR). On the other hand, it is also well known that performance is significantly affected by situation differences between the enrollment and test phases. This paper describes a method to adaptively control an acceptance threshold with confidence values derived from situation differences so as to optimize the ROC curve. We show that the optimal evolution of the adaptive threshold in the domain of the distance and confidence value is equivalent to a constant evolution in the domain of the error gradient, defined as the ratio of the total error rate to the total acceptance rate. Experiments with simulated and real data demonstrate the effectiveness of the proposed method, particularly under a low FAR or FRR tolerance condition.
  • Publications
    1. Y. Makihara, M.A. Hossain, and Y. Yagi, ``How to Control Acceptance Threshold for Biometric Signatures with Different Confidence Values?,'' Proc. of the 20th Int. Conf. on Pattern Recognition, pp. 1208-1211, Istanbul, Turkey, Aug. 2010. [PDF]
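The FAR/FRR tradeoff underlying the ROC curve can be computed directly from genuine and impostor distance scores. This sketch covers only the fixed-threshold baseline, not the paper's confidence-driven adaptive threshold control:

```python
def roc_curve(genuine_dists, impostor_dists, thresholds):
    """FAR/FRR pairs for a distance-based verifier that accepts a claim
    when the matching distance is <= threshold."""
    points = []
    for t in thresholds:
        frr = sum(d > t for d in genuine_dists) / len(genuine_dists)   # rejected genuines
        far = sum(d <= t for d in impostor_dists) / len(impostor_dists)  # accepted impostors
        points.append((t, far, frr))
    return points
```

Sweeping the threshold traces the curve; the adaptive method instead shifts the operating threshold per sample according to its confidence value.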



Gait Identification for Low Frame-rate Videos
low frame-rate Gait analysis has recently gained attention as a method of identification for wide-area surveillance and criminal investigation. Surveillance videos are generally captured at low spatio-temporal resolution because of limitations on communication bandwidth and storage, which makes it difficult to apply existing gait identification methods to them. We therefore propose period-based gait trajectory matching in eigenspace using phase synchronization: a gait is treated as a trajectory in eigenspace, and phase synchronization is performed by time stretching and time shifting. In addition, statistical procedures on the period-based matching results make the matching robust to fluctuations among gait sequences. Experiments assessing performance across spatio-temporal resolutions demonstrate the effectiveness of the proposed method.
  • Publications
    1. A. Mori, Y. Makihara, and Y. Yagi, ``Gait Recognition using Period-based Phase Synchronization for Low Frame-rate Videos,'' Proc. of the 20th Int. Conf. on Pattern Recognition, pp. 2194-2197, Istanbul, Turkey, Aug. 2010. [PDF]
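The time-shifting part of phase synchronization can be sketched as a search over circular shifts between two period-normalized trajectories; time stretching (resampling both trajectories to a common period length) is assumed to have been done beforehand:

```python
def aligned_distance(traj_a, traj_b):
    """Minimum mean squared distance between two period-normalized
    eigenspace trajectories over all circular phase shifts.
    Both trajectories are equal-length lists of feature vectors."""
    n = len(traj_a)
    best = float("inf")
    for shift in range(n):
        d = 0.0
        for i in range(n):
            pa, pb = traj_a[i], traj_b[(i + shift) % n]
            d += sum((x - y) ** 2 for x, y in zip(pa, pb)) / n
        best = min(best, d)
    return best
```

Aggregating such per-period distances statistically (e.g. taking a robust average over periods) is what gives robustness to fluctuations among sequences.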



Online Gait Measurement for Digital Entertainment
online gait measurement This paper presents a method to measure gait features online from gait silhouette images and reflect them in CG characters for audience-participation digital entertainment. First, both static and dynamic gait features are extracted from silhouette images captured by an online gait measurement system with two cameras and a chroma-key background. Then, Standard Gait Models (SGMs) with various types of gait features are constructed and stored; each SGM is composed of a pair of CG character rendering parameters and synthesized silhouette images. Finally, blend ratios of the SGMs are estimated to minimize the gait feature errors between the blended model and the online measurement. In an experiment, a gait database with 100 subjects is used for gait feature analysis, and it is confirmed that the measured gait features are effectively reflected in the CG character.
  • Publications
    1. M. Okumura, Y. Makihara, S. Nakamura, S. Morishima, and Y. Yagi, ``The Online Gait Measurement for the Audience-Participant Digital Entertainment,'' Proc. of Invited Workshop on Vision Based Human Modeling and Synthesis in Motion and Expression, Xi'an, China, Sep. 2009. [PDF]
    2. S. Nakamura, M. Shiraishi, S. Morishima, M. Okumura, Y. Makihara, and Y. Yagi, ``Characteristic Gait Animation Synthesis from Single View Silhouette,'' Proc. of SIGGRAPH 2009 (Poster), New Orleans, Louisiana, USA, Aug. 2009. [PDF]
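The blend-ratio estimation step can be sketched as a least-squares fit of the measured gait feature vector against the SGM feature vectors. The clipping and renormalization below are simplifying assumptions, not necessarily the paper's exact constraint handling:

```python
import numpy as np

def blend_ratios(model_feats, measured):
    """Blend ratios over SGM feature vectors minimizing the squared
    gait-feature error, then clipped to be non-negative and normalized
    to sum to one. model_feats: (n_models x dim); measured: (dim,)."""
    r, *_ = np.linalg.lstsq(model_feats.T, measured, rcond=None)
    r = np.clip(r, 0.0, None)
    return r / r.sum()
```

The resulting ratios then blend the SGMs' rendering parameters to drive the CG character.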



Clothing-invariant Gait Identification
clothing invariant gait Variations in clothing alter an individual's appearance, making the problem of gait identification much more difficult. If the type of clothing differs between a gallery and a probe, certain parts of the silhouettes are likely to change, and the ability to discriminate subjects decreases with respect to these parts. A part-based approach therefore has the potential to select the appropriate parts. This paper proposes a method for part-based gait identification in light of substantial clothing variations. We divide the human body into 8 sections, including 4 overlapping ones, since larger parts have a higher discrimination capability while smaller parts are more likely to be unaffected by clothing variations. Furthermore, as certain clothes are common to different parts, we present a categorization that groups similar items of clothing. Next, we exploit the discrimination capability as a matching weight for each part and control the weights adaptively based on the distribution of distances between the probe and all the galleries. The results of experiments using our large-scale gait dataset with clothing variations show that the proposed method achieves far better performance than other approaches.
  • Publications
    1. Md. Altab Hossain, Yasushi Makihara, Junqiu Wang, and Yasushi Yagi, ``Clothing-invariant gait identification using part-based clothing categorization and adaptive weight control,'' Pattern Recognition, Vol. 43, No. 6, pp. 2281-2291, Jun. 2010. [PDF]
    2. M.A. Hossain, Y. Makihara, J. Wang, and Y. Yagi, ``Clothes-Invariant Gait Identification using Part-based Adaptive Weight Control,'' Proc. of the 19th Int. Conf. on Pattern Recognition, Tampa, USA, Dec. 2008. [PDF]
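A simplified sketch of the part-based idea: split each GEI into horizontal strips, compute per-part distances, and weight each part by how sharply its best gallery match stands out from the rest of the distance distribution. The weighting rule here is a rough proxy for the paper's adaptive weight control, and the strip layout ignores the paper's overlapping sections:

```python
import numpy as np

def part_distances(probe, galleries, n_parts):
    """Split each GEI into n_parts horizontal strips and return an
    (n_galleries x n_parts) matrix of per-part Euclidean distances."""
    p_parts = np.array_split(probe, n_parts, axis=0)
    D = np.zeros((len(galleries), n_parts))
    for j, g in enumerate(galleries):
        g_parts = np.array_split(g, n_parts, axis=0)
        for k in range(n_parts):
            D[j, k] = np.linalg.norm(p_parts[k] - g_parts[k])
    return D

def adaptive_match(probe, galleries, n_parts=4, eps=1e-9):
    """Weight each part by how far its best gallery match stands out
    from the mean of the distance distribution, then return the index
    of the gallery with the smallest weighted distance."""
    D = part_distances(probe, galleries, n_parts)
    w = (D.mean(axis=0) - D.min(axis=0)) / (D.std(axis=0) + eps)
    w = w / (w.sum() + eps)
    return int(np.argmin(D @ w))
```

Parts corrupted by a clothing change tend to match no gallery well, so their weight drops and the unaffected parts dominate the decision.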



Speed-invariant Gait Identification
speed invariant gait We propose a method of transforming gait silhouettes from one walking speed to another to cope with speed changes in gait identification. First, static and dynamic features are separated from gait silhouettes using a human model. Second, a speed transformation model is created using a training set of dynamic features from multiple persons at multiple speeds. This model can transform dynamic features from a reference speed to an arbitrary speed. Finally, silhouettes are restored by combining the separated static features with the transformed dynamic features. Evaluation by gait identification using frequency-domain features shows the effectiveness of the proposed method.
  • Publications
    1. A. Tsuji, Y. Makihara, and Y. Yagi, ``Silhouette Transformation based on Walking Speed for Gait Identification,'' Proc. of the 23rd IEEE Conf. on Computer Vision and Pattern Recognition, San Francisco, CA, USA, Jun. 2010. [PDF]



Multi-view Gait Identification using an Omnidirectional Camera
omni gait We propose a method of gait identification based on multi-view gait images captured by an omnidirectional camera. We first transform omnidirectional silhouette images into panoramic ones and obtain a spatio-temporal gait silhouette volume (GSV). Next, we extract frequency-domain features by Fourier analysis based on gait periods estimated by autocorrelation of the GSVs. Because the omnidirectional camera provides changing observation views, multi-view features can be extracted from the parts of the GSV corresponding to basis views. In the identification phase, the distance between a probe and a gallery feature of the same view is calculated, and the distances for all views are then integrated for matching. Gait identification experiments with 21 subjects and 5 views demonstrate the effectiveness of the proposed method.
  • Publications
    1. K. Sugiura, Y. Makihara, and Y. Yagi, ``Gait Identification based on Multi-view Observations using Omnidirectional Camera,'' Proc. of 8th Asian Conf. on Computer Vision, Vol. 1, pp. 452-461, Tokyo, Japan, Nov. 2007. [PDF]
    2. K. Sugiura, Y. Makihara, and Y. Yagi, "Omnidirectional Gait Identification by Tilt Normalization and Azimuth View Transformation,'' Proc. of the IEEE Workshop on OMNIVIS, Marseille, France, Oct. 2008. [PDF]
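Gait period estimation by autocorrelation can be sketched on a 1-D silhouette statistic, such as the foreground area per frame; the candidate period range is an assumed parameter:

```python
import math

def estimate_period(signal, min_period, max_period):
    """Gait period (in frames) that maximizes the normalized
    autocorrelation of a 1-D silhouette statistic."""
    mean = sum(signal) / len(signal)
    x = [v - mean for v in signal]
    best_p, best_c = min_period, float("-inf")
    for p in range(min_period, max_period + 1):
        num = sum(x[i] * x[i + p] for i in range(len(x) - p))
        den = math.sqrt(sum(v * v for v in x[:len(x) - p]) *
                        sum(v * v for v in x[p:]))
        c = num / den if den else 0.0
        if c > best_c:
            best_c, best_p = c, p
    return best_p
```

The estimated period then fixes the window over which frequency-domain features are extracted.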



Gait Identification Considering Body Tilt by Walking Direction Changes
body tilt correction Gait identification has recently gained attention as a method of identifying individuals at a distance. Though most previous works mainly treated straight-walk sequences for simplicity, curved-walk sequences should also be treated, considering situations where a person walks along a curved path or enters a building from a sidewalk. In such cases, the person's body sometimes tilts due to centrifugal force when the walking direction changes, and this body tilt considerably degrades gait silhouettes and identification performance, especially for widely used appearance-based approaches. We therefore propose a method of body-tilt silhouette correction based on centrifugal force estimated from walking trajectories. The gait identification process, including gait feature extraction in the frequency domain and learning of a View Transformation Model (VTM), then follows the silhouette correction. Gait identification experiments on circular-walk sequences demonstrate the effectiveness of the proposed method.
  • Publications
    1. Y. Makihara, R. Sagawa, Y. Mukaigawa, T. Echigo, and Y. Yagi, ``Gait Identification Considering Body Tilt by Walking Direction Changes,'' Electronic Letters on Computer Vision and Image Analysis, Vol. 8, No. 1, pp. 15--26, Apr. 2009.[PDF]
    2. Y. Makihara, R. Sagawa, Y. Mukaigawa, T. Echigo, and Y. Yagi, "Adaptation to Walking Direction Changes for Gait Identification", Proc. of the 18th Int. Conf. on Pattern Recognition, Vol. 2, pp. 96-99, Hong Kong, China, Aug. 2006. [PDF]
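The physics behind the tilt correction is the balance between centrifugal acceleration and gravity. The paper estimates speed and turn radius from walking trajectories, whereas the values in the usage example below are hypothetical:

```python
import math

def body_tilt_deg(speed_mps, turn_radius_m, g=9.8):
    """Inward tilt angle (degrees) balancing centrifugal acceleration
    v^2/r against gravity: theta = atan(v^2 / (r * g))."""
    return math.degrees(math.atan(speed_mps ** 2 / (turn_radius_m * g)))
```

For example, walking at 1.5 m/s around a 2 m radius curve gives a tilt of roughly 6.5 degrees, enough to visibly distort an appearance-based silhouette feature.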



Gait Identification using a View Transformation Model in the Frequency Domain
View Transformation Model Gait analyses have recently gained attention as methods of individual identification at a distance from a camera. However, appearance changes due to view direction changes cause difficulties for gait identification. We propose a method of gait identification using frequency-domain features and a view transformation model. We first extract frequency-domain features from a spatio-temporal gait silhouette volume. Next, our view transformation model is obtained with a training set of multiple persons from multiple view directions. In an identification phase, the model transforms gallery features into the same view direction as that of an input feature.
  • Publications
    1. Y. Makihara, R. Sagawa, Y. Mukaigawa, T. Echigo, and Y. Yagi, "Which Reference View is Effective for Gait Identification Using a View Transformation Model?", Proc. of the IEEE Computer Society Workshop on Biometrics 2006, New York, USA, Jun. 2006. [PDF]
    2. Y. Makihara, R. Sagawa, Y. Mukaigawa, T. Echigo, and Y. Yagi, "Gait Recognition Using a View Transformation Model in the Frequency Domain", Proc. of the 9th European Conf. on Computer Vision, Vol. 3, pp. 151-163, Graz, Austria, May 2006. [PDF]
    3. R. Sagawa, Y. Makihara, T. Echigo, and Y. Yagi, "Matching Gait Image Sequences in the Frequency Domain for Tracking People at a Distance," Proc. of the 7th Asian Conf. on Computer Vision, Vol. 2, pp. 141-150, Hyderabad, India, Jan. 2006.
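The view transformation idea can be sketched with a plain least-squares linear map between two views, learned from subjects observed in both. The paper's actual model is factorized across many views (via singular value decomposition of a view-by-subject feature matrix), so this is a two-view simplification with synthetic features:

```python
import numpy as np

def learn_view_transform(feats_src, feats_dst):
    """Least-squares linear map W such that feats_dst ~= feats_src @ W,
    learned from subjects observed in both views.
    feats_src, feats_dst: (n_subjects x dim) feature matrices."""
    W, *_ = np.linalg.lstsq(feats_src, feats_dst, rcond=None)
    return W

# Synthetic training features for 5 subjects seen from both views;
# T plays the role of the unknown ground-truth view mapping.
rng = np.random.default_rng(0)
A = rng.normal(size=(5, 3))
T = np.array([[1.0, 0.2, 0.0],
              [0.0, 0.8, 0.1],
              [0.3, 0.0, 1.1]])
B = A @ T
W = learn_view_transform(A, B)

# Transform a gallery feature from the source view into the probe's view
gallery_dst = rng.normal(size=(1, 3)) @ W
```

After transformation, probe and gallery features live in the same view and can be matched directly.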



Spatio-Temporal Analysis of Walking
GaitVolume This paper describes a new approach for identifying a person from a spatio-temporal volume consisting of sequential images of a person walking in an arbitrary direction. The proposed approach employs an omnidirectional image sensor and analyzes the three-dimensional frequency properties of the spatio-temporal volume, because the sensor can capture a long image sequence of a person's movements in all directions, and walking patterns are cyclic. The spatio-temporal volume data, here called the ``gait volume'', contain information not only on spatial individualities such as features of the torso and face, but also on torso movements with their unique rhythm. A three-dimensional Fourier transform is applied to the gait volume to obtain a frequency signature of each person's walking pattern. This paper also evaluates the usefulness of three-dimensional frequency analysis of the gait volume, i.e., how much the frequency patterns differ among walking people.
  • Publications
    1. Yu Ohara, Ryusuke Sagawa, Tomio Echigo, Yasushi Yagi, ``Gait Volume: Spatio-Temporal Analysis of Walking,'' Proc. of the 5th Workshop on Omnidirectional Vision, Camera Networks and Non-classical Cameras, pp. 79-90, 2004.
    2. Yu Ohara, Ryusuke Sagawa, Tomio Echigo, Yasushi Yagi, ``Walking Person Identification Dealing With Resolution and Appearance Changes,'' Proc. of Korea-Japan Joint Workshop on Frontiers of Computer Vision, pp. 89-94, 2004.
    3. Yu Ohara, Ryusuke Sagawa, Tomio Echigo, Yasushi Yagi, ``Walking Person Identification Dealing With Resolution and Appearance Changes,'' Proc. of SANKEN International Workshop on Intelligent Systems, pp. 1-9, 2003.
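The frequency analysis of a gait volume can be sketched as the magnitude of a 3-D FFT over a (frames x height x width) array. FFT magnitudes are invariant to circular shifts along each axis, which helps when the walk phase or image position differs between sequences:

```python
import numpy as np

def gait_volume_spectrum(volume):
    """Magnitude of the 3-D Fourier transform of a gait volume
    (frames x height x width); a shift-invariant frequency signature."""
    return np.abs(np.fft.fftn(volume))
```

A walk with period P frames over N frames produces a peak at temporal frequency bin N/P, so the rhythm of the gait shows up directly in the spectrum.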