Department of Intelligent Media, ISIR, Osaka Univ.

Safety and Security

From wearable surveillance systems to criminal investigation assistant systems

Development of Mobile Omni-Alarm using Compound Omnidirectional Sensor

A compound stereo sensor, which consists of a single camera with multiple convex mirrors, has been proposed for observing all directions at once and detecting nearby objects. However, a real mobile system that includes an image processing function had not yet been realized. In this paper, we introduce a new mobile omni-alarm system developed to realize omnidirectional observation, mobility, and real-time processing. To build a mobile security alarm, we developed a small, lightweight, stand-alone system by combining a small computer with newly designed sensors. In this system, the camera and six convex parabolic mirrors capture omnidirectional images, and the small computer processes the images to detect nearby objects. When the system detects an object, its direction is immediately reported to the user by voice. Experimental results show that such a small mobile sensor can detect nearby objects and process images in real time. We confirmed the usefulness of the developed system as a security sensor.
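The detection principle can be illustrated with a minimal sketch. The assumption (not stated in detail above) is that two adjacent mirror views of the same direction agree closely for distant scenery but disagree strongly, due to parallax, when an object is close to the sensor. All function and variable names here are hypothetical, and the rectification of the mirror sub-images is taken as given:

```python
import numpy as np

def detect_near_object(view_a, view_b, threshold=30.0):
    """Flag a nearby object when two mirror views of the same direction
    disagree strongly (large parallax implies a close object).

    view_a, view_b: grayscale sub-images (H, W) from adjacent mirrors,
    assumed rectified to the same viewing direction beforehand.
    """
    diff = np.abs(view_a.astype(float) - view_b.astype(float))
    return bool(diff.mean() > threshold)

# Toy example: identical views (distant scene) vs. shifted views
# (a horizontal shift mimics the parallax of a close-range object).
far = np.tile((np.arange(64) * 4).astype(np.uint8), (64, 1))
near = np.roll(far, 8, axis=1)
print(detect_near_object(far, far))   # False: no parallax
print(detect_near_object(far, near))  # True: strong disagreement
```

In the real system one such test would run per mirror pair, and the pair's azimuth would give the direction announced to the user by voice.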

Publications
  1. Yuichiro Kojima, Ryusuke Sagawa, Tomio Echigo, Yasushi Yagi, “Calibration and Performance Evaluation of Omnidirectional Sensor with Compound Spherical Mirrors”, In The 6th Workshop on Omnidirectional Vision, Camera Networks and Non-classical cameras, 2005.

Criminal Investigation Assistance by Gait Recognition

Gait analysis has recently gained attention as a method of identifying individuals at a distance from a camera. However, changes in appearance caused by changes in view direction make gait identification difficult. We propose a method of gait identification using frequency-domain features and a view transformation model. We first extract frequency-domain features from a spatio-temporal gait silhouette volume. Next, the view transformation model is trained on a set of multiple persons observed from multiple view directions. In the identification phase, the model transforms gallery features into the same view direction as that of an input feature.
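The view-transformation step can be sketched as follows. This is a simplified stand-in, not the method of the papers below: it learns a single least-squares linear map between two views, whereas the actual model is factorized across all training views at once. All names are hypothetical, and the "frequency-domain features" are abstracted to plain vectors:

```python
import numpy as np

def learn_view_transform(feats_src, feats_dst):
    """Learn a linear map W from source-view to destination-view gait
    features by least squares: minimize ||feats_src @ W - feats_dst||.

    feats_src, feats_dst: (n_subjects, d) features of the SAME training
    subjects observed under the two view directions.
    """
    W, *_ = np.linalg.lstsq(feats_src, feats_dst, rcond=None)
    return W

def identify(probe, gallery, W):
    """Transform gallery features into the probe's view with W, then
    match by nearest neighbour (Euclidean distance)."""
    transformed = gallery @ W
    return int(np.argmin(np.linalg.norm(transformed - probe, axis=1)))

# Toy data: 20 training subjects, 8-dim features, two views related by
# a hidden linear change A (a stand-in for the real view change).
rng = np.random.default_rng(0)
A = rng.normal(size=(8, 8))
src = rng.normal(size=(20, 8))
W = learn_view_transform(src, src @ A)

gallery = rng.normal(size=(3, 8))   # 3 gallery subjects, source view
probe = gallery[1] @ A              # probe seen from the other view
print(identify(probe, gallery, W))  # 1: correct subject recovered
```

With enough training subjects the least-squares map recovers the view change exactly in this toy setup; the papers' factorized model additionally shares structure across many view pairs so that any gallery view can be mapped to any probe view.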

Publications
  1. Y. Makihara, R. Sagawa, Y. Mukaigawa, T. Echigo, and Y. Yagi, “Which Reference View is Effective for Gait Identification Using a View Transformation Model?”, Proc. of the IEEE Computer Society Workshop on Biometrics 2006, New York, USA, Jun. 2006. [PDF]
  2. Y. Makihara, R. Sagawa, Y. Mukaigawa, T. Echigo, and Y. Yagi, “Gait Recognition Using a View Transformation Model in the Frequency Domain”, Proc. of the 9th European Conf. on Computer Vision, Vol. 3, pp. 151-163, Graz, Austria, May 2006. [PDF]
  3. R. Sagawa, Y. Makihara, T. Echigo, and Y. Yagi, “Matching Gait Image Sequences in the Frequency Domain for Tracking People at a Distance,” Proc. of the 7th Asian Conf. on Computer Vision, Vol. 2, pp. 141-150, Hyderabad, India, Jan. 2006.