Department of Intelligent Media, ISIR, Osaka Univ.

Geometry

3D reconstruction and camera calibration

Dense 3D reconstruction method using a single pattern for fast moving object

Dense 3D reconstruction of extremely fast-moving objects could contribute to various applications such as body structure analysis and accident avoidance. Typical scanning scenarios we assume are, for example, acquiring a sequence of shapes at the moment an object explodes, or observing the blades of a fast-rotating turbine. In this paper, we propose such a technique based on a one-shot scanning method that reconstructs 3D shape from a single image in which a dense and simple pattern is projected onto an object. To realize dense 3D reconstruction from a single image, several issues must be solved, e.g. instability caused by using multiple colors, and difficulty in detecting a dense pattern under the influence of object color and texture compression. This paper describes solutions to these issues that combine two methods: (1) an efficient line detection technique based on a de Bruijn sequence and belief propagation, and (2) an extension of the shape-from-intersections-of-lines method. As a result, a scanning system that can capture an object in fast motion has been developed using a high-speed camera. In the experiments, the proposed method successfully captured sequences of dense shapes of an exploding balloon and a breaking ceramic dish at 300-1000 fps.
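As an illustration of the line coding, the following Python sketch generates a de Bruijn sequence, the combinatorial tool behind such patterns: with an alphabet of k colours and window length n, every run of n consecutive stripes is unique, so a detected window identifies its position in the projected pattern. The colours and window length below are illustrative assumptions, not the exact pattern parameters used in the paper.

def de_bruijn(k, n):
    """Generate a de Bruijn sequence B(k, n): every length-n string over
    an alphabet of size k appears exactly once as a (cyclic) substring."""
    a = [0] * k * n
    sequence = []

    def db(t, p):
        if t > n:
            if n % p == 0:
                sequence.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return sequence

# Example: with 3 colours and windows of length 3, any 3 consecutive
# stripe colours identify a unique position in the projected pattern.
colours = ['R', 'G', 'B']                    # illustrative colour alphabet
seq = de_bruijn(len(colours), 3)
pattern = [colours[s] for s in seq]          # 27 stripes, each 3-window unique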

Awards
  1. The Fourth Pacific-Rim Symposium on Image and Video Technology, Best Paper Award “One-shot Entire Shape Acquisition Method Using Multiple Projectors and Cameras,” Ryo Furukawa, Ryusuke Sagawa, Hiroshi Kawasaki, Kazuhiro Sakashita, Yasushi Yagi, Naoki Asada, 16th November, 2010.
Publications
  1. Hiroshi Kawasaki, Ryo Furukawa, Ryusuke Sagawa, Yuya Ohta, Kazuhiro Sakashita, Ryota Zushi, Yasushi Yagi, Naoki Asada, “Linear solution for oneshot active 3D reconstruction using two projectors”, In Proc. Fifth International Symposium on 3D Data Processing, Visualization and Transmission, Paris, May, 2010.
  2. Ryusuke Sagawa, Yuya Ohta, Yasushi Yagi, Ryo Furukawa, Naoki Asada, Hiroshi Kawasaki, “Dense 3D Reconstruction Method Using a Single Pattern for Fast Moving Object”, In Proc. 2009 IEEE 12th International Conference on Computer Vision, pp.1779–1786, Kyoto, Sep., 2009.

Simultaneously Capturing Texture and Shape by Projecting Structured Infrared Light

Simultaneous capture of the texture and shape of a moving object in real time is expected to be applicable to various fields, including virtual reality and object recognition. Two difficulties must be overcome to develop a sensor with this capability: fast capture of shape and simultaneous capture of texture and shape. One-shot capturing methods based on projecting colored structured light have already been proposed to obtain shape at a high frame rate. However, since these methods use visible light, they cannot capture texture and shape simultaneously. In this paper, we propose a method that uses projected infrared structured light. Since the proposed method uses visible light for texture and infrared light for shape, simultaneous capture can be achieved. In addition, a system was developed that maps texture onto the captured shape without occlusion by placing the cameras for visible and infrared light coaxially.

Publications
  1. Kiyotaka Akasaka, Ryusuke Sagawa, Yasushi Yagi, “A Sensor for Simultaneously Capturing Texture and Shape by Projecting Structured Infrared Light”, In Proc. The 6th International Conference on 3-D Digital Imaging and Modeling, pp.375–381, Montreal, Canada, August 21-23, 2007.

Registration of Deformable Objects

This paper describes a method for registering deformable 3D objects. When an object such as a hand deforms, the deformation of the local shape is small, whereas the global shape deforms to a greater extent in many cases. Therefore, the local shape can be used as a feature for matching corresponding points. Instead of using a descriptor of the local shape, we introduce the convolution of the error between corresponding points for each vertex of a 3D mesh model. This approach is analogous to window matching in 2D image registration. Since the proposed method computes the convolution for every vertex in a model, it incorporates dense feature matching as opposed to sparse matching based on certain feature descriptors. Through experiments, we show that the convolution is useful for finding corresponding points and evaluate the accuracy of the registration.
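As a rough illustration of the convolved error, the sketch below averages closest-point residuals over each vertex's 1-ring neighbourhood of the mesh, which plays the role of the 2D matching window. It is a minimal sketch under assumed inputs (a vertex array, a 1-ring adjacency list, and a target point set), not the paper's exact formulation.

import numpy as np
from scipy.spatial import cKDTree

def convolved_residuals(src_vertices, src_neighbors, tgt_vertices):
    """For each source vertex, find its closest target point, then
    'convolve' the per-vertex errors over the mesh neighbourhood,
    i.e. average the errors of the vertex and its 1-ring neighbours.

    src_neighbors: list of index lists (1-ring adjacency of the mesh).
    """
    tree = cKDTree(tgt_vertices)
    dists, idx = tree.query(src_vertices)            # closest-point errors
    residuals = dists

    convolved = np.empty_like(residuals)
    for v, nbrs in enumerate(src_neighbors):
        window = np.append(nbrs, v)                  # vertex + 1-ring window
        convolved[v] = residuals[window].mean()      # box-filter convolution
    return convolved, idx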

Publications
  1. Ryusuke Sagawa, Kiyotaka Akasaka, Yasushi Yagi, Henning Hamer, Luc Van Gool, “Elastic Convolved ICP for the Registration of Deformable Objects”, In Proc. 2009 IEEE 12th International Conference on Computer Vision Workshops (3DIM2009), pp.1558–1565, Kyoto, Oct., 2009.
  2. Ryusuke Sagawa, Nanaho Osawa, Yasushi Yagi, “Deformable Registration of Textured Range Images by Using Texture and Shape Features”, In Proc. The 6th International Conference on 3-D Digital Imaging and Modeling, pp.65–72, Montreal, Canada, August 21-23, 2007.

A Probabilistic Method for Aligning and Merging Range Images with Anisotropic Error Distribution

This paper describes a probabilistic method of aligning and merging range images. We formulate these issues as problems of estimating the maximum likelihood. By examining the error distribution of a range finder, we model it as a normal distribution along the line of sight. To align range images, our method estimates the parameters based on the Expectation-Maximization (EM) approach. By assuming the error model, the algorithm is implemented as an extension of the Iterative Closest Point (ICP) method. For merging range images, our method computes the signed distances by finding the distances of maximum likelihood. Since our proposed method uses multiple correspondences for each vertex of the range images, errors after aligning and merging range images are less than those of earlier methods that use one-to-one correspondences. Finally, we tested and validated the efficiency of our method by simulation and on real range images.
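The sketch below illustrates the kind of anisotropic error model assumed here: a covariance with large variance along the line of sight and small variance laterally, together with the corresponding Mahalanobis residual between a point and its correspondence. The sigma values are placeholders, not those measured for the range finder in the paper.

import numpy as np

def line_of_sight_covariance(point, sensor_origin, sigma_los=5e-3, sigma_lat=5e-4):
    """Anisotropic measurement covariance: large variance along the
    viewing ray (line of sight), small variance laterally.
    The sigma values are illustrative, in metres."""
    d = point - sensor_origin
    d = d / np.linalg.norm(d)                 # unit line-of-sight direction
    P = np.outer(d, d)                        # projector onto the ray
    return sigma_los**2 * P + sigma_lat**2 * (np.eye(3) - P)

def mahalanobis_residual(p, q, cov):
    """Squared Mahalanobis distance between a measured point p and its
    correspondence q under the anisotropic error model."""
    diff = p - q
    return float(diff @ np.linalg.solve(cov, diff))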

Publications
  1. Ryusuke Sagawa, Nanaho Osawa, Yasushi Yagi, “A Probabilistic Method for Aligning and Merging Range Images with Anisotropic Error Distribution”, In Proc. Third International Symposium on 3D Data Processing, Visualization and Transmission, Chapel Hill, USA, June, 2006.

Real Time 3D Environment Modeling for a Mobile Robot by Aligning Range Image Sequences

This paper describes real-time 3D modeling of the environment for a mobile robot. A real-time laser range finder is mounted on the robot and obtains a range image sequence of the environment while moving around. In this paper, we detail our method, which accomplishes simultaneous localization and 3D modeling by aligning the acquired range images. The method incrementally aligns range images in real time by a variant of the iterative closest point (ICP) method. By estimating the uncertainty of range measurements, we introduce a new weighting scheme into the alignment framework. In an experiment, we first evaluate the accuracy of the localization results obtained by aligning range images. Second, we show the results of modeling and localization when a robot moves along a meandering path. Finally, we summarize the conditions and limitations on the robot’s motion and the environment required for our method to work well.
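The core of such an alignment step can be sketched as a weighted point-to-point ICP iteration, where per-point weights stand in for the range-measurement uncertainty; the weighting shown is a generic choice for illustration, not necessarily the scheme proposed in the paper.

import numpy as np
from scipy.spatial import cKDTree

def weighted_icp_step(src, tgt, weights):
    """One weighted point-to-point ICP iteration: find closest points,
    then solve for the rigid transform minimising the weighted squared
    error (weighted Kabsch/Horn via SVD).
    weights: per-source-point confidences, e.g. derived from the range
    sensor's measurement uncertainty (an illustrative choice)."""
    idx = cKDTree(tgt).query(src)[1]
    q = tgt[idx]
    w = weights / weights.sum()

    src_c = (w[:, None] * src).sum(axis=0)       # weighted centroids
    q_c = (w[:, None] * q).sum(axis=0)
    H = (w[:, None] * (src - src_c)).T @ (q - q_c)

    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                           # optimal rotation
    t = q_c - R @ src_c                          # optimal translation
    return R, t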

Publications
  1. Ryusuke Sagawa, Nanaho Osawa, Tomio Echigo, Yasushi Yagi, “Real Time 3D Environment Modeling for a Mobile Robot by Aligning Range Image Sequences”, In Proc. British Machine Vision Conference 2005, vol.1, pp.330-339, Oxford, UK, September, 2005.

Narrow-baseline Stereo

We propose a disparity detection filter, a novel method to detect disparity for a narrow-baseline system. Although a previously proposed method for detecting near objects has a low computational cost, it can only determine whether or not objects are close enough. The disparity detection filter can be applied to our omnidirectional sensor and can detect the disparity of objects through hierarchically smoothed images with various window sizes. This method does not require finding correspondences along epipolar lines, and the computational cost is low because the disparity in each hierarchically smoothed image is detected from the intensity gradient. In addition, the method is better suited to detecting small disparities. Applying the method, we can detect disparity at high speed with narrow-baseline stereo cameras and our omnidirectional sensor.
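A minimal sketch of gradient-based disparity detection is given below: at each smoothing scale, a first-order expansion relates the intensity difference between the two views to the horizontal gradient, so small disparities are obtained per pixel without a correspondence search. The scales and sign convention are illustrative assumptions, not the paper's filter design.

import numpy as np
from scipy.ndimage import gaussian_filter

def gradient_disparity(left, right, sigmas=(1, 2, 4, 8)):
    """Estimate small disparities without explicit correspondence search:
    at each smoothing scale, a first-order expansion gives
    d ~ (I_R - I_L) / (dI/dx), up to a sign fixed by the stereo geometry.
    Larger smoothing tolerates larger disparities."""
    estimates = []
    for s in sigmas:
        L = gaussian_filter(left.astype(float), s)
        R = gaussian_filter(right.astype(float), s)
        Ix = np.gradient(L, axis=1)                    # horizontal gradient
        valid = np.abs(Ix) > 1e-3                      # skip flat regions
        d = np.zeros_like(L)
        d[valid] = (R[valid] - L[valid]) / Ix[valid]   # per-pixel disparity
        estimates.append(d)
    return estimates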

Robust and Real-Time Egomotion Estimation Using a Compound Omnidirectional Sensor

We propose a new egomotion estimation algorithm for a compound omnidirectional camera. Image features are detected by a conventional feature detector and then quickly classified into near and far features by checking for appearance at infinity in the omnidirectional image of the compound omnidirectional sensor. Egomotion estimation is performed in two steps: first, rotation is recovered using far features; then translation is estimated from near features using the estimated rotation. RANSAC is used for the estimation of both rotation and translation. Experiments in various environments show that our approach is robust and provides good accuracy in real time for large motions.
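The two-step structure can be sketched as follows: rotation is estimated from far features by aligning unit bearing vectors, then the translation direction from near features via the epipolar constraint; both steps would be wrapped in RANSAC in practice. The frame conventions and the plain least-squares solvers here are assumptions for illustration only.

import numpy as np

def estimate_rotation(far_prev, far_curr):
    """Rotation from 'far' features: bearings of points near infinity are
    unaffected by translation, so align the unit bearing vectors
    (Kabsch). far_prev, far_curr: (N, 3) arrays of unit vectors."""
    H = far_prev.T @ far_curr
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    return Vt.T @ D @ U.T

def estimate_translation(R, near_prev, near_curr):
    """Translation direction (up to scale) from 'near' features once R is
    known: each pair gives t . ((R x1) x x2) = 0, a homogeneous linear
    system solved by SVD."""
    rows = np.cross((R @ near_prev.T).T, near_curr)
    _, _, Vt = np.linalg.svd(rows)
    return Vt[-1]                               # unit translation direction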

Publications
  1. Ngo Trung Thanh, Yuichiro Kojima, Hajime Nagahara, Ryusuke Sagawa, Yasuhiro Mukaigawa, Masahiko Yachida, Yasushi Yagi, “Real-time Estimation of Fast Egomotion with Feature Classification using Compound Omnidirectional Vision Sensor”, IEICE Transactions on Information and Systems, vol.E93-D, no.01, pp.152-166, Jan., 2010.
  2. Ngo, Thanh Trung, Nagahara, Hajime, Sagawa, Ryusuke, Mukaigawa, Yasuhiro, Yachida, Masahiko, Yagi, Yasushi, “Robust and Real-Time Egomotion Estimation Using a Compound Omnidirectional Sensor”, In Proc. 2008 IEEE International Conference on Robotics and Automation, pp.492–497, May 19-23, 2008.
  3. Trung Ngo Thanh, Hajime Nagahara, Ryusuke Sagawa, Yasuhiro Mukaigawa, Masahiko Yachida, Yasushi Yagi, “Robust and Real-time Rotation Estimation of Compound Omnidirectional Sensor”, In Proc. IEEE International Conference on Robotics and Automation, pp.4226–4231, Rome, Italy, April 10-14, 2007.

Accurate Calibration of Intrinsic Camera Parameters by Observing Parallel Light Pairs

This study describes a method of estimating the intrinsic parameters of a perspective camera. In previous calibration methods for perspective cameras, the intrinsic and extrinsic parameters are estimated simultaneously during calibration. Thus, the intrinsic parameters depend on the estimation of the extrinsic parameters, which is inconsistent with the fact that intrinsic parameters are independent of extrinsic ones. Moreover, in a situation where the extrinsic parameters are not used, only the intrinsic parameters need to be estimated. In this case, an intrinsic parameter such as focal length is not sufficiently robust to image-processing noise, which is absorbed by both parameter types during calibration. We therefore propose a new method that allows the estimation of intrinsic parameters without estimating the extrinsic parameters. In order to calibrate the intrinsic parameters, the proposed method observes parallel light pairs that are projected onto different points. This is accomplished by applying the constraint that the relative angle of two parallel rays is constant irrespective of where the rays are projected. The method focuses only on intrinsic parameters and, as demonstrated in this study, the resulting calibration is sufficiently robust. Moreover, our method can visualize the error of the calibrated result and the degeneracy of the input data.
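The constraint can be written directly as a residual over the intrinsic matrix K, as sketched below: the angle between the two back-projected rays of an observed pixel pair must equal the known angle of the parallel light pair, wherever the pair falls on the image. The function and variable names are illustrative; a nonlinear least-squares solver over K would be applied on top of this.

import numpy as np

def pair_angle_residuals(K, pixel_pairs, true_angle):
    """Residuals for intrinsic calibration from parallel light pairs.
    Back-project each observed pixel pair through K^{-1}; the angle
    between the two rays must equal the fixed angle between the two
    parallel light directions, independently of where the pair hits
    the image, so no extrinsic parameters are needed."""
    Kinv = np.linalg.inv(K)
    res = []
    for p1, p2 in pixel_pairs:                       # pixels as (u, v)
        r1 = Kinv @ np.array([p1[0], p1[1], 1.0])
        r2 = Kinv @ np.array([p2[0], p2[1], 1.0])
        cos = r1 @ r2 / (np.linalg.norm(r1) * np.linalg.norm(r2))
        res.append(np.arccos(np.clip(cos, -1.0, 1.0)) - true_angle)
    return np.array(res)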

Awards
  1. 2008 IEEE International Conference on Robotics and Automation (ICRA2008), Finalist for Best Vision Paper, “Accurate Calibration of Intrinsic Camera Parameters by Observing Parallel Light Pairs,” Ryusuke Sagawa and Yasushi Yagi.
Publications
  1. Ryusuke Sagawa, Yasushi Yagi, “Accurate Calibration of Intrinsic Camera Parameters by Observing Parallel Light Pairs”, In Proc. 2008 IEEE International Conference on Robotics and Automation, pp.1390–1397, May 19-23, 2008.

Mirror Localization for Catadioptric Imaging System by Observing Parallel Light Pairs

This paper describes a method of mirror localization to calibrate a catadioptric imaging system. While the calibration of a catadioptric system includes the estimation of various parameters, we focus on the localization of the mirror. The proposed method estimates the position of the mirror by observing pairs of parallel lights, which are projected from various directions. Although some earlier methods for calibrating catadioptric systems assume that the system is a single viewpoint, which is a strong restriction on the position and shape of the mirror, our method does not restrict the position and shape of the mirror. Since the constraint used by the proposed method is that the relative angle of two parallel lights is constant with respect to the rigid transformation of the imaging system, we can omit both the translation and rotation between the camera and calibration objects from the parameters to be estimated. Therefore, the estimation of the mirror position by the proposed method is independent of the extrinsic parameters of a camera. We compute the error between the model of the mirror and the measurements, and then estimate the position of the mirror by minimizing this error. We test our method using both simulation and real experiments and evaluate the accuracy thereof.
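As an illustration of the constraint, the sketch below uses a spherical mirror model, an assumption made only for this example: a camera ray is intersected with the mirror and reflected about the surface normal, and the angle between the two reflected rays of a light pair is compared with the known angle of the pair. The actual method makes no such restriction on the mirror shape.

import numpy as np

def reflect(d, n):
    """Reflect direction d about unit normal n."""
    return d - 2.0 * (d @ n) * n

def reflected_ray_sphere(ray_dir, center, radius):
    """Intersect a camera ray (origin at the camera centre) with a
    spherical mirror and return the reflected direction.
    Assumes the ray actually hits the mirror (discriminant >= 0)."""
    d = ray_dir / np.linalg.norm(ray_dir)
    b = d @ center
    disc = b * b - (center @ center - radius * radius)
    t = b - np.sqrt(disc)                 # nearer intersection
    p = t * d
    n = (p - center) / radius             # outward surface normal
    return reflect(d, n)

def pair_angle_residual(center, radius, ray1, ray2, true_angle):
    """The two reflected rays recover the two parallel light directions,
    so their relative angle equals the known angle of the light pair
    regardless of where they hit the mirror."""
    r1 = reflected_ray_sphere(ray1, center, radius)
    r2 = reflected_ray_sphere(ray2, center, radius)
    cos = r1 @ r2 / (np.linalg.norm(r1) * np.linalg.norm(r2))
    return np.arccos(np.clip(cos, -1.0, 1.0)) - true_angle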

Publications
  1. Ryusuke Sagawa, Nobuya Aoki, Yasushi Yagi, “Mirror Localization for Catadioptric Imaging System by Observing Parallel Light Pairs”, In Proc. 8th Asian Conference on Computer Vision, pp.116–126, Tokyo, Japan, Nov. 18-22, 2007.

Mirror Localization for a Catadioptric Imaging System by Projecting Parallel Lights

This paper describes a method of mirror localization to calibrate a catadioptric imaging system. Even though the calibration of a catadioptric system includes the estimation of various parameters, in this paper we focus on the localization of the mirror. Since some previously proposed methods assume a single-viewpoint system, they place strong restrictions on the position and shape of the mirror. We propose a method that uses parallel lights to simplify the projection geometry for estimating the position of the mirror, thereby not restricting the position or shape of the mirror. Further, by observing parallel lights from different directions, we can omit the translation between the camera and the calibration objects from the parameters to be estimated. We derive the constraints on the projection and compute the error between the model of the mirror and the measurements. The position of the mirror is estimated by minimizing this error. We also test our method by simulation and real experiments, and finally, we evaluate the accuracy of our method.

Publications
  1. Ryusuke Sagawa, Nobuya Aoki, Yasuhiro Mukaigawa, Tomio Echigo, Yasushi Yagi, “Mirror Localization for a Catadioptric Imaging System by Projecting Parallel Lights”, In Proc. IEEE International Conference on Robotics and Automation, pp.3957–3962, Rome, Italy, April 10-14, 2007.

Calibration of Lens Distortion by Structured-Light Scanning

This paper describes a new method to automatically calibrate the lens distortion of wide-angle lenses. We project structured-light patterns using a flat display to generate a map between the display and image coordinate systems. This approach has two advantages. First, it is easier to establish correspondences between image and marker (display) coordinates near the edge of the camera image than with a usual marker such as a checkerboard. Second, since we can easily construct a dense map, simple linear interpolation is enough to create an undistorted image. Our method is not restricted by distortion parameters because it directly generates the map. We have evaluated the accuracy of our method, and the resulting error is smaller than that obtained by parameter fitting.
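Once the structured-light patterns have been decoded into camera-to-display correspondences, building the dense map and undistorting an image reduces to interpolation and a remap, as sketched below with SciPy and OpenCV. The decoding step and the variable names are assumptions made for this example; the interpolation and remap calls are standard library functionality rather than the paper's exact implementation.

import numpy as np
import cv2
from scipy.interpolate import griddata

def build_undistort_map(cam_pts, disp_pts, display_shape):
    """Given correspondences decoded from the structured-light patterns
    (cam_pts: (N, 2) camera pixels, disp_pts: (N, 2) display pixels,
    both as (x, y)), build a dense remap table on the display grid so an
    undistorted image can be produced with bilinear interpolation."""
    h, w = display_shape
    grid_y, grid_x = np.mgrid[0:h, 0:w]
    # For each display pixel, interpolate the corresponding camera pixel.
    map_x = griddata(disp_pts, cam_pts[:, 0], (grid_x, grid_y), method='linear')
    map_y = griddata(disp_pts, cam_pts[:, 1], (grid_x, grid_y), method='linear')
    # Pixels outside the measured region get an out-of-bounds coordinate.
    map_x = np.nan_to_num(map_x, nan=-1.0).astype(np.float32)
    map_y = np.nan_to_num(map_y, nan=-1.0).astype(np.float32)
    return map_x, map_y

# Usage: undistorted = cv2.remap(camera_image, map_x, map_y, cv2.INTER_LINEAR)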

Publications
  1. Ryusuke Sagawa, Masaya Takatsuji, Tomio Echigo, and Yasushi Yagi, “Calibration of Lens Distortion by Structured-Light Scanning,” In Proc. 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp.1349-1354, August, 2005.