Department of Intelligent Media, ISIR, Osaka Univ.

Tracking and Detection

Tracking by Gait Recognition

Gait recognition has recently gained attention as an effective approach for identifying individuals at a distance from the camera. Most existing gait recognition algorithms assume that people have already been tracked and their silhouettes segmented successfully. Tracking and segmentation are, however, very difficult, especially for articulated objects such as human beings. We therefore present an integrated algorithm for tracking and segmentation supported by gait recognition. After the tracking module produces initial results consisting of bounding boxes and foreground-likelihood images, the gait recognition module searches for the optimal silhouette-based gait models corresponding to those results. The segmentation module then segments people out using the matched gait silhouette sequence as a shape prior. Experiments on real video sequences show the effectiveness of the proposed approach.
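
As a rough illustration of the matching step, the sketch below scores stored silhouette-based gait models against the tracker's foreground-likelihood images by trying every phase alignment. This is a minimal numpy sketch under assumed conventions (silhouettes and likelihood maps cropped and scaled to a common size); the function and variable names are illustrative, not taken from the papers.

```python
import numpy as np

def match_gait_model(likelihoods, gait_models):
    """Pick the gait silhouette sequence and phase that best explain a run
    of foreground-likelihood images produced by the tracking module.

    likelihoods: (T, H, W) foreground-likelihood maps, one per frame
    gait_models: list of (P, H, W) binary silhouette sequences (one period)
    Returns (model_index, phase, score) of the best match.
    """
    best = (-1, -1, -np.inf)
    for m, model in enumerate(gait_models):
        P = model.shape[0]
        for phase in range(P):                 # try every phase alignment
            score = 0.0
            for t, like in enumerate(likelihoods):
                sil = model[(phase + t) % P]   # predicted silhouette at time t
                # overlap of the predicted silhouette with the observed
                # foreground likelihood, normalized by silhouette area
                score += (like * sil).sum() / (sil.sum() + 1e-9)
            if score > best[2]:
                best = (m, phase, score)
    return best
```

The matched silhouette sequence can then be handed to the segmentation module as per-frame shape priors.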

Publications
  1. J. Wang, Y. Yagi, and Y. Makihara, “People tracking and segmentation using efficient shape sequences matching,” Proc. 9th Asian Conference on Computer Vision, Xi’an, China, Sep. 2009.
  2. J. Wang, Y. Makihara, and Y. Yagi, “People Tracking and Segmentation Using Spatiotemporal Shape Constraints,” Proc. of the 1st ACM Int. Workshop on Vision Networks for Behaviour Analysis, Vancouver, Canada, Oct. 2008.
  3. J. Wang, Y. Makihara, and Y. Yagi, “Human Tracking and Segmentation Supported by Silhouette-based Gait Recognition,” Proc. of the 2008 IEEE International Conference on Robotics and Automation, pp. 1698-1703, Pasadena, USA, May 2008.

Visual tracking and segmentation using appearance and spatial information of patches

Object tracking and segmentation find a wide range of applications in robotics, but both are difficult against cluttered and dynamic backgrounds. We propose an algorithm in which tracking and segmentation are performed consecutively. Input images are divided into disjoint patches by an efficient oversegmentation algorithm, and the object and its background are each described by a bag of patches. Patches in a new frame are classified by searching for their k nearest neighbors; k-d trees are built over the stored patches to reduce the computational complexity of this search. The target location is estimated coarsely by running the mean-shift algorithm, and based on the estimated location the patches are classified again using both appearance and spatial information. This strategy outperforms direct classification of patches based on appearance information alone. Experimental results show that the proposed algorithm performs well on difficult sequences with clutter.
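
The patch-classification step can be pictured with a small sketch. The following is a hedged illustration, assuming patch descriptors (e.g., the mean color of each oversegmented patch) have already been computed; it uses SciPy's cKDTree for the k-nearest-neighbor search, while the descriptors and the voting rule are simplifying assumptions rather than the paper's exact choices.

```python
import numpy as np
from scipy.spatial import cKDTree

def classify_patches(fg_feats, bg_feats, new_feats, k=5):
    """Label each patch of a new frame as foreground or background by
    k-nearest-neighbor voting against the two bags of patches.

    fg_feats, bg_feats: (Nf, D) and (Nb, D) stored patch descriptors
    new_feats:          (M, D) descriptors of the new frame's patches
    Returns a boolean array, True where a patch is voted foreground.
    """
    feats = np.vstack([fg_feats, bg_feats])
    labels = np.concatenate([np.ones(len(fg_feats)), np.zeros(len(bg_feats))])
    tree = cKDTree(feats)                # k-d tree keeps the search sublinear
    _, idx = tree.query(new_feats, k=k)  # indices of the k nearest patches
    votes = labels[idx].mean(axis=1)     # fraction of foreground neighbors
    return votes > 0.5
```

In the full algorithm this appearance-only vote is only the first pass; after mean shift coarsely localizes the target, the patches are reclassified with the spatial term included.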

Publications
  1. Junqiu Wang, Yasushi Yagi, “Visual tracking and segmentation using appearance and spatial information of patches”, In Proc. 2010 IEEE International Conference on Robotics and Automation, Anchorage, Alaska, USA, 2010.
  2. Junqiu Wang, Yasushi Yagi, “Patch-based adaptive tracking using spatial and appearance information”, In Proc. of IEEE Int’l. Conf. on Image Processing 2008, pp.1564-1567, San Diego, USA, 2008.

Switching Local and Covariance Matching for Efficient Object Tracking

The covariance tracker finds targets in consecutive frames by global search. Covariance tracking has achieved impressive success thanks to its ability to capture the spatial and statistical properties of a region as well as the correlations between them. Nevertheless, the covariance tracker is relatively inefficient because of the heavy computational cost of updating the model and comparing it with the covariance matrices of candidate regions. Moreover, it is not well suited to tracking articulated objects, since integral histograms over rectangular regions are employed to accelerate the search. In this work, we aim to alleviate the computational burden by selecting the appropriate tracking approach at each moment. We compute foreground probabilities of pixels and localize the target by local search while tracking is in a steady state; covariance tracking is performed only when distractions, sudden motions, or occlusions are detected. Unlike the traditional covariance tracker, we use Log-Euclidean metrics rather than the more computationally expensive Riemannian invariant metrics. The proposed algorithm has been verified on many video sequences: it is more efficient than the covariance tracker, and it also deals effectively with occlusions, which are an obstacle for local mode-seeking trackers such as the mean-shift tracker.
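
The cost saving comes largely from the choice of metric on covariance matrices. The sketch below is a minimal numpy illustration of region covariance descriptors and the Log-Euclidean distance; the feature choice in the comment is one common option, not necessarily the one used in the paper.

```python
import numpy as np

def region_covariance(feats):
    """Covariance descriptor of a region.
    feats: (N, D) array with one feature vector per pixel, e.g.
    (x, y, R, G, B, |Ix|, |Iy|).
    """
    return np.cov(feats, rowvar=False)

def spd_log(C):
    """Matrix logarithm of a symmetric positive-definite matrix via
    eigendecomposition."""
    w, V = np.linalg.eigh(C)
    return (V * np.log(np.maximum(w, 1e-12))) @ V.T

def log_euclidean_distance(C1, C2):
    """Log-Euclidean distance: a plain Frobenius norm in log-space."""
    return np.linalg.norm(spd_log(C1) - spd_log(C2), 'fro')
```

Because the metric is Euclidean after the matrix logarithm, the log of the model covariance only needs to be recomputed when the model is updated; each candidate comparison then costs one matrix log and one Frobenius norm, which is what makes this cheaper than the affine-invariant Riemannian alternative.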

Publications
  1. Junqiu Wang, Yasushi Yagi, “Switching Local and Covariance Matching for Efficient Object Tracking”, In Proc. of the 19th Int. Conf. on Pattern Recognition, Tampa, Florida USA, Dec., 2008.

Tracking and segmentation using Min-Cut with consecutive shape priors

Tracking and segmentation find a wide range of applications such as intelligent sensing for robots, human-computer interaction, and video surveillance. They are challenging for many reasons, including complicated object shapes and cluttered backgrounds. We propose a tracking and segmentation algorithm that employs shape priors in a consecutive way: shape information obtained by the min-cut algorithm in one frame can be applied when segmenting the following frames. In our algorithm, tracking and segmentation are carried out consecutively. We use an adaptive tracker based on color and shape features, with the target modeled by discriminative features selected through foreground/background contrast analysis. Tracking provides the overall motion of the target to the segmentation module, which then segments the object out using the min-cut algorithm. Markov random fields, the foundation of the min-cut formulation, provide poor priors for specific shapes, so it is necessary to embed shape priors into the min-cut algorithm to achieve reasonable segmentation results. The object shape obtained by segmentation is employed as the shape prior for the next frame. We have verified the proposed approach and obtained positive results on challenging video sequences.
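
To make the shape-prior idea concrete, here is a hedged sketch of one way to fold the previous frame's silhouette into the min-cut energy: a signed distance transform of the previous mask is added to the color-based unary costs, so pixels far outside the prior silhouette pay more to be labeled foreground. The max-flow step uses the third-party PyMaxflow package; the blending weights and the specific prior term are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt
import maxflow  # third-party PyMaxflow package (pip install PyMaxflow)

def segment_with_shape_prior(fg_ll, bg_ll, prev_mask, lam=2.0, mu=0.5):
    """Min-cut segmentation whose unary costs blend color log-likelihoods
    with a shape prior from the previous frame.

    fg_ll, bg_ll: (H, W) per-pixel foreground/background log-likelihoods
    prev_mask:    (H, W) boolean mask from the previous frame, already
                  shifted by the tracker's overall motion estimate
    """
    # Signed distance to the prior silhouette: positive outside, negative inside.
    dist = distance_transform_edt(~prev_mask) - distance_transform_edt(prev_mask)
    fg_cost = -fg_ll + mu * np.maximum(dist, 0)   # fg far outside prior is costly
    bg_cost = -bg_ll + mu * np.maximum(-dist, 0)  # bg deep inside prior is costly

    g = maxflow.Graph[float]()
    nodes = g.add_grid_nodes(fg_ll.shape)
    g.add_grid_edges(nodes, lam)                  # pairwise smoothness term
    g.add_grid_tedges(nodes, bg_cost, fg_cost)    # source caps, sink caps
    g.maxflow()
    return ~g.get_grid_segments(nodes)            # True = foreground (source side)
```

The returned mask would then serve as prev_mask when segmenting the next frame, which is the "consecutive" use of shape priors described above.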

Publications
  1. Junqiu Wang, Yasushi Yagi, “Tracking and segmentation using Min-Cut with consecutive shape priors”, Paladyn, Journal of Behavioral Robotics, vol.1, no.1, pp.73–86, Mar. 2010.
  2. Junqiu Wang, Yasushi Yagi, “Consecutive Tracking and Segmentation Using Adaptive Mean Shift and Graph Cuts”, In 2007 IEEE International Conference on Robotics and Biomimetics, Sanya, China, Dec. 15-18, 2007.

Adaptive Mean-Shift Tracking with Auxiliary Particles

We present a new approach to robust and efficient tracking that combines the efficiency of the mean-shift algorithm with the multi-hypothesis character of particle filtering in an adaptive manner. The aim of the proposed algorithm is to cope with the problems brought about by sudden motions and distractions. The mean-shift tracker is robust and effective when the representation of the target is sufficiently discriminative, the target does not jump beyond the kernel bandwidth, and no serious distractions exist; we propose a novel two-stage motion estimation method, efficient and reliable, for detecting when these conditions fail. Particle filtering-based trackers can handle sudden motions, but only at the expense of a large particle set. In our approach, the mean-shift algorithm is used as long as it provides reasonable performance, and auxiliary particles are introduced to cope with distractions and sudden motions only when such threats are detected. Moreover, discriminative features are selected according to the separation between the foreground and background distributions when no threats exist; this is important because it is dangerous to update the target model while tracking is in an unsteady state. We demonstrate the performance of our approach by comparing it with other trackers on several challenging image sequences.
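
The switching logic can be sketched compactly. Below, mean shift runs on a foreground-likelihood map and, when the converged mode scores too low (a crude stand-in for the paper's two-stage motion estimator and threat detection), auxiliary particles are scattered and mean shift is restarted from the best one. The threshold, the sampling scheme, and the threat test are all simplifying assumptions.

```python
import numpy as np

def mean_shift(likelihood, center, bw=16, iters=10):
    """Mode seeking on an (H, W) foreground-likelihood map from 'center'."""
    H, W = likelihood.shape
    cy, cx = center
    for _ in range(iters):
        y0, y1 = max(cy - bw, 0), min(cy + bw + 1, H)
        x0, x1 = max(cx - bw, 0), min(cx + bw + 1, W)
        win = likelihood[y0:y1, x0:x1]
        if win.sum() < 1e-9:
            break
        ys, xs = np.mgrid[y0:y1, x0:x1]
        ny = int(round((win * ys).sum() / win.sum()))  # weighted centroid
        nx = int(round((win * xs).sum() / win.sum()))
        if (ny, nx) == (cy, cx):                       # converged to a mode
            break
        cy, cx = ny, nx
    return cy, cx

def track_frame(likelihood, prev_center, threat=0.3, n_particles=50, rng=None):
    """Use mean shift while tracking is steady; fall back to auxiliary
    particles when the result looks unreliable."""
    rng = rng or np.random.default_rng()
    cy, cx = mean_shift(likelihood, prev_center)
    if likelihood[cy, cx] >= threat:
        return (cy, cx)                   # steady state: trust mean shift
    # Threat detected: scatter auxiliary particles, re-run from the best one.
    H, W = likelihood.shape
    pys = rng.integers(0, H, n_particles)
    pxs = rng.integers(0, W, n_particles)
    best = int(np.argmax(likelihood[pys, pxs]))
    return mean_shift(likelihood, (int(pys[best]), int(pxs[best])))
```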

Publications
  1. Junqiu Wang, Yasushi Yagi, “Adaptive Mean-Shift Tracking with Auxiliary Particles”, IEEE Transactions on Systems, Man, and Cybernetics, Part B, vol.39, no.6, pp.1578-1589, 2009.
  2. Junqiu Wang, Yasushi Yagi, “Discriminative Mean Shift Tracking with Auxiliary Particles”, In Proc. 8th Asian Conference on Computer Vision, pp.576-585, Tokyo, Japan, Nov. 18-22, 2007.

Integrating Color and Shape-Texture Features for Adaptive Real-Time Object Tracking

We extend the standard mean-shift tracking algorithm to an adaptive tracker by selecting reliable features from color and shape-texture cues according to their descriptive ability. The target model is updated according to the similarity between the initial and current models, which makes the tracker more robust. The proposed algorithm has been compared with other trackers on challenging image sequences and provides better performance.
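
One standard way to score a feature's descriptive ability is the variance ratio of the log-likelihood ratio between foreground and background histograms, in the spirit of Collins-style discriminative feature selection; whether this matches the paper's exact criterion is an assumption. The sketch below ranks a single candidate cue; in use it would be applied to every color and shape-texture channel and the top-scoring cues retained.

```python
import numpy as np

def feature_score(fg_vals, bg_vals, bins=32):
    """Score how well one scalar feature separates foreground from background.

    Builds fg/bg histograms, forms the per-bin log-likelihood ratio L, and
    returns the variance ratio: a discriminative feature has L varying a lot
    between the classes but little within each class.
    """
    lo = min(fg_vals.min(), bg_vals.min())
    hi = max(fg_vals.max(), bg_vals.max())
    p, _ = np.histogram(fg_vals, bins=bins, range=(lo, hi), density=True)
    q, _ = np.histogram(bg_vals, bins=bins, range=(lo, hi), density=True)
    p = p + 1e-6
    q = q + 1e-6
    L = np.log(p / q)                     # log-likelihood ratio per bin

    def weighted_var(w):
        w = w / w.sum()
        mean = (w * L).sum()
        return (w * (L - mean) ** 2).sum()

    # between-class variance over total within-class variance
    return weighted_var((p + q) / 2) / (weighted_var(p) + weighted_var(q) + 1e-9)
```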

Award
  1. Finalist for the T. J. Tarn Best Paper in Robotics Award, 2006 IEEE International Conference on Robotics and Biomimetics: “Integrating Shape and Color Features for Adaptive Real-time Object Tracking”, Junqiu Wang and Yasushi Yagi.
Publications
  1. Junqiu Wang, Yasushi Yagi, “Integrating Color and Shape-texture Features for Adaptive Real-time Tracking”, IEEE Transactions on Image Processing, vol.17, no.2, 2008.
  2. J. Wang, Y. Yagi, “Integrating Shape and Color Features for Adaptive Real-time Object Tracking”, In Proc. of The 2006 IEEE International Conference on Robotics and Biomimetics, Kunming, China, December 17-20, 2006.