The OU-ISIR Large Population Gait Database with real-life carried object (OU-LP-Bag)

Introduction

The OU-ISIR Large Population Gait Database with real-life carried object (OU-LP-Bag) is intended to support the development, testing, and evaluation of algorithms for vision-based gait recognition under the carried object (CO) covariate, as well as for the detection and classification of COs and of the region in which a CO is carried. The dataset includes background-subtracted image sequences and the associated size-normalized (i.e., 88x128) GEIs. The Institute of Scientific and Industrial Research (ISIR), Osaka University (OU) holds the copyright on the collected gait video and associated data and serves as the distributor of the OU-ISIR Gait Database. If you use this dataset, please cite the following paper:

  • M.Z. Uddin, T.T. Ngo, Y. Makihara, N. Takemura, X. Li, D. Muramatsu, Y. Yagi, "The OU-ISIR Large Population Gait Database with Real-Life Carried Object and Its Performance Evaluation," IPSJ Trans. on Computer Vision and Applications, Vol. 10, No. 5, pp. 1-11, May 2018.

Note that the OU-LP-Bag β version is described at the bottom of this page.

Data capture

The gait data were collected in conjunction with an experience-based demonstration of video-based gait analysis at a science museum (Miraikan), and informed consent for research use was obtained electronically. The dataset consists of 62,528 subjects, with ages ranging from 2 to 95 years. The camera was set at a distance of approximately 8 m from the straight walking course and at a height of approximately 5 m. The image resolution and frame rate were 1280 x 980 pixels and 25 fps, respectively. Each subject was asked to walk straight three times at his/her preferred speed. The first sequence, with COs (or without, if the subject had none), is called the A1 sequence, and the second and third sequences, both without COs, are called the A2 and A3 sequences, respectively. An overview of the capture system is illustrated in Fig. 1.



Fig. 1: Illustration of the data collection system.

Annotation of the carrying status

For each subject, the area in which a CO could be carried was manually divided into four regions with respect to the human body: side bottom, side middle, front, and back, as shown in Fig. 2. However, some subjects did not carry a CO, some carried multiple COs in multiple regions, and others changed a CO's position within a gait period. As a result, a total of seven distinct carrying-status (CS) labels were annotated in this database; examples of CS labels are shown in Fig. 3.
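For illustration, the seven-label taxonomy above could be encoded as a small enumeration. The label names and integer values below are assumptions inferred from this description (the four single-region labels of Fig. 2, plus the no-CO, multiple-CO, and position-change cases); the distributed subject ID list may use a different encoding.

    from enum import Enum

    class CarryingStatus(Enum):
        """Hypothetical encoding of the seven CS labels described above.

        The four single-region labels follow Fig. 2; the remaining three
        cover the no-CO, multiple-CO, and changing-position cases. The
        actual names/values in the distributed ID list may differ.
        """
        NO_CO = 0            # subject carried no CO
        SIDE_BOTTOM = 1      # CO carried at the side, near the bottom
        SIDE_MIDDLE = 2      # CO carried at the side, near the middle
        FRONT = 3            # CO carried in front of the body
        BACK = 4             # CO carried on the back
        MULTIPLE = 5         # COs carried in multiple regions
        CHANGING = 6         # CO position changed within a gait period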



Fig. 2: Four approximate regions of the human body in which a CO may be carried.



Fig. 3: Examples of CS labels: (a) sample RGB images within a gait period with COs (circled in yellow) in the A1 sequence; (b) corresponding GEI features; (c) GEI features of the same subjects without a CO in another captured sequence (A2 or A3).

Feature generation

A silhouette image sequence for each subject was extracted using a chroma-key technique, and then registration and size normalization of the silhouette images were performed. A GEI was constructed by averaging the subject's size-normalized silhouette image sequence over a centered gait period.
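As a concrete sketch of this averaging step, the following Python snippet turns a registered, size-normalized silhouette sequence into a GEI. It assumes the input frames already cover one gait period and are 88x128 binary images; the function name and array conventions are illustrative, not part of the dataset's tooling.

    import numpy as np

    def compute_gei(silhouettes):
        """Average a sequence of size-normalized binary silhouettes into a GEI.

        silhouettes: sequence of np.ndarray, each of shape (128, 88) with
        values in {0, 1} (or {0, 255}), covering one gait period.
        Returns the GEI as a float image of shape (128, 88) in [0, 1].
        """
        stack = np.stack([s.astype(np.float64) for s in silhouettes], axis=0)
        stack /= stack.max() if stack.max() > 0 else 1.0  # map {0, 255} to {0, 1}
        return stack.mean(axis=0)

Applied to the frames named in the silhouette frame list (the frames used to generate each GEI, see below), such averaging should reproduce a GEI of the distributed kind, up to the exact period detection.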



How to get the dataset?

To advance the state of the art in gait-based applications, this dataset, including the set of size-normalized GEIs, the silhouette sequences, a subject ID list with CS labels, and a silhouette frame list (the frames used to generate each GEI), can be downloaded as a password-protected zip file. The password is issued on a case-by-case basis: the requestor must send the release agreement, signed by a legal representative of his/her institution (e.g., the supervisor in the case of a student), to the database administrator by mail, e-mail, or FAX.
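Once the archive is unzipped, the GEIs can be loaded with a few lines of Python. The directory layout and file format assumed below (one grayscale PNG per sequence, keyed by file stem) are hypothetical; consult the file list shipped with the archive for the actual structure.

    from pathlib import Path

    import numpy as np
    from PIL import Image

    def load_geis(root):
        """Load all 88x128 GEI images found under `root` into a dict.

        Assumes (hypothetically) one grayscale PNG per sequence; adapt
        the glob pattern and keying to the archive's real layout.
        """
        geis = {}
        for path in sorted(Path(root).glob("**/*.png")):
            img = Image.open(path).convert("L")
            geis[path.stem] = np.asarray(img, dtype=np.float32) / 255.0
        return geis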


The OU-ISIR Gait Database, Large Population Dataset with Bag, β version

Introduction

The OU-ISIR Gait Database, Large Population Dataset with Bag, β version (OU-LP-Bag β) includes two walking image sequences for each of 2,070 subjects with various carrying statuses (for the full version, OU-LP-Bag, see the top of this page). If you use this dataset, please cite the following paper:

  • Y. Makihara, A. Suzuki, D. Muramatsu, X. Li, Y. Yagi, "Joint Intensity and Spatial Metric Learning for Robust Gait Recognition," Proc. of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.

Dataset

Training: This set contains 2,068 sequences of 1,034 subjects. For each subject, one sequence is with his/her own carried objects and the other is without them.

Probe: This set contains 1,036 subjects with carried objects; these subjects are disjoint from the 1,034 subjects in the training set.

Gallery: This set contains the same 1,036 subjects as the probe set, but without carried objects. A minimal matching sketch over these splits follows.
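A simple identification protocol over these splits matches each probe GEI against the gallery and reports rank-1 accuracy. The baseline sketch below uses plain Euclidean distance on flattened GEIs; note that the cited CVPR 2017 paper learns a joint intensity and spatial metric instead, so treat this purely as an illustrative scoring harness.

    import numpy as np

    def rank1_accuracy(probe, gallery):
        """Rank-1 identification with Euclidean distance on flattened GEIs.

        probe, gallery: dicts mapping subject ID -> GEI (2-D np.ndarray).
        Returns the fraction of probes whose nearest gallery GEI shares
        the same subject ID.
        """
        g_ids = list(gallery)
        g_mat = np.stack([gallery[i].ravel() for i in g_ids])  # (G, D)
        correct = 0
        for pid, gei in probe.items():
            dists = np.linalg.norm(g_mat - gei.ravel(), axis=1)
            correct += g_ids[int(np.argmin(dists))] == pid
        return correct / len(probe)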

Samples of carrying statuses (captured color images) are shown in the figures below.



Examples of size-normalized GEIs (i.e., 88x128) are shown in the following figure (top: without carried objects; bottom: with carried objects).





How to get the dataset?

To advance the state of the art in gait-based applications, this dataset, including the set of size-normalized GEIs and the subject ID lists of the training, probe, and gallery sets, can be downloaded as a password-protected zip file. The password is issued on a case-by-case basis: the requestor must send the release agreement, signed by a legal representative of his/her institution (e.g., the supervisor in the case of a student), to the database administrator by mail, e-mail, or FAX.



The database administrator

Department of Intelligent Media, The Institute of Scientific and Industrial Research, Osaka University
Address: 8-1 Mihogaoka, Ibaraki, Osaka, 567-0047, JAPAN
Mail address
FAX: +81-6-6877-4375.