Department of Intelligent Media, ISIR, Osaka Univ.

Medical Engineering

Medical image enhancement and diagnosis assistance

Color Analysis for Segmenting Digestive Organs in VCE

This paper presents an efficient method for automatically segmenting the digestive organs in a Video Capsule Endoscopy (VCE) sequence. The method is based on the distinctive color tones of the digestive organs. We first introduce a color model of the gastrointestinal (GI) tract containing the color components of GI wall and non-wall regions. Based on the wall regions extracted from the images, the distribution of each color component along the time dimension is exploited to learn the dominant colors that are candidates for discriminating between digestive organs. The strongest candidates are then combined to construct a representative signal used to detect the boundary between two adjacent organs. The experimental results are comparable with those of previous works, at a lower computational cost.
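The boundary-detection step can be illustrated with a minimal sketch: candidate color components tracked over time are combined into one representative signal, and the organ boundary is taken where that signal changes most sharply. The function name, input layout, and smoothing window below are hypothetical, not taken from the paper.

```python
import numpy as np

def boundary_from_color_signal(component_tracks, weights):
    """Combine per-frame color-component tracks into one representative
    signal and locate the boundary between two adjacent organ regions.

    component_tracks : (num_components, num_frames) array of dominant-color
                       responses over time (hypothetical input format).
    weights          : contribution of each candidate component.
    """
    signal = weights @ component_tracks                        # weighted combination
    smooth = np.convolve(signal, np.ones(5) / 5, mode="same")  # light denoising
    # The boundary frame is where the smoothed signal changes fastest.
    return int(np.argmax(np.abs(np.diff(smooth))))
```

A step-like change in the combined color signal (e.g. when the capsule passes from stomach to small bowel) then maps directly to a detected boundary frame.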

Publications
  1. Hai Vu, Tomio Echigo, Keiko Yagi, Masatsugu Shiba, Kazuhide Higuchi, Tetsuo Arakawa, Yasushi Yagi, “Color Analysis for Segmenting Digestive Organs in VCE”, In Proceedings of the International Conference on Pattern Recognition (ICPR), 2010.

Contraction Detection in Small Bowel from an Image Sequence of Wireless Capsule Endoscopy

This paper describes a method for the automatic detection of contractions in the small bowel by analyzing Wireless Capsule Endoscopy images. Based on the characteristics of contraction images, a coherent procedure that analyzes both temporal and spatial features is proposed. For the temporal features, the image sequence is examined to detect candidate contractions through changes in the number of edges, and the similarities between the frames of each possible contraction are evaluated to eliminate low-probability cases. For the spatial features, descriptions of the directions at edge pixels are used to identify contractions with a classification method. The experimental results show the effectiveness of our method, which detects 83% of cases. Thus, this is a feasible method for developing tools to assist diagnostic procedures in the small bowel.
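The temporal screening step can be sketched as follows: frames whose edge count rises sharply above a baseline are grouped into runs, and sufficiently long runs become candidate contractions. The threshold ratio, minimum run length, and baseline choice here are illustrative assumptions, not the paper's tuned values.

```python
import numpy as np

def candidate_contractions(edge_counts, ratio=1.5, min_frames=3):
    """Flag frame runs where the edge count rises sharply above the
    sequence baseline; such runs are candidate contractions.

    edge_counts : per-frame number of detected edge pixels.
    Returns a list of (start_frame, end_frame) intervals.
    """
    counts = np.asarray(edge_counts, dtype=float)
    baseline = np.median(counts)
    active = counts > ratio * baseline          # frames with unusually many edges
    # Group consecutive active frames into candidate intervals.
    candidates, start = [], None
    for i, is_active in enumerate(active):
        if is_active and start is None:
            start = i
        elif not is_active and start is not None:
            if i - start >= min_frames:
                candidates.append((start, i - 1))
            start = None
    if start is not None and len(active) - start >= min_frames:
        candidates.append((start, len(active) - 1))
    return candidates
```

In the full method, each interval would then be passed to the similarity check and the edge-direction classifier described above.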

Publications
  1. Hai Vu, Tomio Echigo, Ryusuke Sagawa, Keiko Yagi, Masatsugu Shiba, Kazuhide Higuchi, Tetsuo Arakawa, Yasushi Yagi, “Contraction Detection in Small Bowel from an Image Sequence of Wireless Capsule Endoscopy”, In Medical Image Computing and Computer-Assisted Intervention — MICCAI 2007, LNCS vol. 4791, pp.775–783, Springer, 2007.

Omnidirectional Vision Attachment for Medical Endoscopes

Medical endoscopes are equipped with wide-angle lenses to provide operating doctors with a wide field of view. However, it has been pointed out that endoscopes have a backward-looking blind area in which an affected area could be overlooked, since the gastrointestinal system is intricate and its inside is shaped by plicae. In this paper, we propose an omnidirectional vision attachment with a convex mirror fitted at the tip of an endoscope. The attachment enables backward observation with the endoscope by providing a 360-degree view. A key issue in developing this attachment is the illumination of the field of view. Because the only light source inside the organ is usually the light at the tip of the endoscope, we designed the attachment to illuminate the backward-looking field of view by reflecting light off a mirror. In the experiments, we measured the field of view and the illuminated field, and confirmed that the area behind the endoscope tip can be observed using the attachment.

Publications
  1. Ryusuke Sagawa, Takurou Sakai, Tomio Echigo, Keiko Yagi, Masatsugu Shiba, Kazuhide Higuchi, Tetsuo Arakawa, Yasushi Yagi, “Omnidirectional Vision Attachment for Medical Endoscopes”, In Proc. the Eighth Workshop on Omnidirectional Vision, Camera Networks and Non-classical Cameras, Marseille, France, October 17, 2008.

Adaptive Control of Video Display for Diagnostic Assistance by Analysis of Capsule Endoscopic Images

In this paper, we present a method for reducing the diagnostic time by adaptively controlling the frame rate of a capsule endoscopic image sequence. The video sequence, which is captured over 8 hours, requires 45 minutes to two hours of extreme concentration by the examining doctors to make a diagnosis. The advantage of our method is that the sequence can be played at high speed in stable regions to save time and slowed at abrupt changes, which helps doctors ascertain suspicious findings more conveniently. To realize such a system, the capturing conditions are classified into groups corresponding to the changing states between two frames. The delay time for these frames is calculated by parametric functions, whose optimal parameter set was determined from evaluations by medical doctors. We conclude that the average diagnostic time can be reduced from the 8-hour sequence down to about 30 minutes.
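The core control idea admits a compact sketch: an inter-frame similarity score is mapped through a parametric function to a per-frame display delay, so similar frames play fast and abrupt changes play slowly. The power-law form and the numeric parameters below are illustrative placeholders, not the parameter set tuned by the doctors' evaluations.

```python
def display_delay(similarity, base_delay=0.033, max_delay=0.5, gamma=2.0):
    """Map inter-frame similarity in [0, 1] to a playback delay in seconds.

    Stable regions (similarity near 1) get the short base delay, so the
    video plays fast; abrupt changes (similarity near 0) get a longer
    delay, so the doctor can inspect them. Parametric form and values
    are hypothetical.
    """
    change = max(0.0, min(1.0, 1.0 - similarity))   # clamp to [0, 1]
    return base_delay + (max_delay - base_delay) * change ** gamma
```

In this sketch, tuning `gamma`, `base_delay`, and `max_delay` corresponds to selecting the optimal parameter set described in the abstract.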

Awards
  1. 5th International Conference on Capsule Endoscopy (ICCE2006) Distinguished Poster “Adaptive Display Speed Control for Diagnosis of Capsule Endoscopic Images” Yasushi Yagi, Tetsuo Arakawa, Kazuhide Higuchi, Masatsugu Shiba, Keiko Yagi, Ryusuke Sagawa, Vu Hai, Tomio Echigo
Publications
  1. Vu Hai, Tomio Echigo, Ryusuke Sagawa, Keiko Yagi, Masatsugu Shiba, Kazuhide Higuchi, Tetsuo Arakawa, Yasushi Yagi, “Adaptive Control of Video Display for Diagnostic Assistance by Analysis of Capsule Endoscopic Images”, In Proc. of the 18th Int. Conf. on Pattern Recognition, vol.3, pp.980–983, Hong Kong, China, Aug., 2006.
  2. Yasushi Yagi, Tetsuo Arakawa, Kazuhide Higuchi, Masatsugu Shiba, Keiko Yagi, Ryusuke Sagawa, Vu Hai, Tomio Echigo, “Adaptive Display Speed Control for Diagnosis of Capsule Endoscopic Images”, In Program & Abstracts of The 5th International Conference on Capsule Endoscopy, p.83, Boca Raton, Florida, USA, March 6-7, 2006.
  3. Yasushi Yagi, Hai Vu, Tomio Echigo, Ryusuke Sagawa, Keiko Yagi, Masatsugu Shiba, Kazuhide Higuchi, Tetsuo Arakawa, “Diagnosis Supporting System for Capsule Endoscopy”, In Proc. International Conference on Ulcer Research (ICUR), Jul., 2006.

Deformable Registration for Generating Dissection Image of an Intestine from Annular Image Sequence

Examination of the inside of an intestine with an endoscope is difficult and time-consuming because the whole intestine cannot be imaged at one time due to the limited field of view. It is therefore necessary to generate a dissection image, which can be obtained by unrolling the image of the intestine. We acquire an annular image sequence with an omnidirectional or wide-angle camera, and then generate the dissection image by mosaicing the sequence. While conventional mosaicing techniques transform images by perspective or affine transformations, these are not suitable for our situation because the target object is a generalized cylinder and the camera motion is unknown a priori. Therefore, we propose a novel approach to image registration that deforms images by a two-dimensional polynomial function whose parameters are estimated from optical flow. We evaluated our method by registering annular image sequences and successfully generated dissection images, as presented in this paper.
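The parameter-estimation step can be sketched as a linear least-squares fit: each component of the deformation is a 2-D polynomial in pixel coordinates, and its coefficients are solved from sampled optical-flow vectors. The polynomial degree and the function interface below are assumptions; the paper's exact polynomial form may differ.

```python
import numpy as np

def fit_polynomial_warp(points, flow, degree=2):
    """Fit a 2-D polynomial deformation (u(x, y), v(x, y)) to optical-flow
    vectors by linear least squares (a sketch of the registration idea).

    points : (N, 2) pixel coordinates (x, y) where flow was measured.
    flow   : (N, 2) optical-flow displacements at those points.
    Returns coefficient vectors for the x- and y-displacement polynomials.
    """
    x, y = points[:, 0], points[:, 1]
    # Design matrix of monomials x^i * y^j with i + j <= degree.
    cols = [x**i * y**j for i in range(degree + 1)
            for j in range(degree + 1 - i)]
    A = np.stack(cols, axis=1)
    coef_u, *_ = np.linalg.lstsq(A, flow[:, 0], rcond=None)
    coef_v, *_ = np.linalg.lstsq(A, flow[:, 1], rcond=None)
    return coef_u, coef_v
```

Once fitted, evaluating the two polynomials over the image grid gives the dense deformation used to warp one annular frame onto the next before mosaicing.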

Publications
  1. Suchit Pongnumkul, Ryusuke Sagawa, Tomio Echigo, Yasushi Yagi, “Deformable Registration for Generating Dissection Image of an Intestine from Annular Image Sequence”, In Computer Vision for Biomedical Image Applications, pp.271–280, 2005.

Calibration of Lens Distortion by Structured-Light Scanning

This paper describes a new method to automatically calibrate the lens distortion of wide-angle lenses. We project structured-light patterns from a flat display to generate a map between the display and image coordinate systems. This approach has two advantages. First, it is easier to obtain correspondences between image and marker (display) coordinates near the edge of the camera image than with a usual marker, e.g. a checkerboard. Second, since we can easily construct a dense map, simple linear interpolation is sufficient to create an undistorted image. Our method is not restricted to particular distortion parameters because it generates the map directly. We evaluated the accuracy of our method, and the error is smaller than that obtained by parameter fitting.
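The resampling step the dense map enables can be sketched as follows: given per-output-pixel source coordinates (the map recovered from the structured-light correspondences, assumed already built here), the undistorted image is produced by bilinear interpolation. The function interface is hypothetical.

```python
import numpy as np

def undistort(image, map_x, map_y):
    """Resample a distorted grayscale image through a dense lookup map
    with bilinear interpolation.

    map_x, map_y : for each output pixel, the (possibly fractional)
                   source coordinates in the distorted image.
    """
    h, w = image.shape[:2]
    # Integer corner of each source sample, clipped so x0+1/y0+1 stay in bounds.
    x0 = np.clip(np.floor(map_x).astype(int), 0, w - 2)
    y0 = np.clip(np.floor(map_y).astype(int), 0, h - 2)
    fx, fy = map_x - x0, map_y - y0
    # Bilinear blend of the four neighbouring source pixels.
    top = (1 - fx) * image[y0, x0] + fx * image[y0, x0 + 1]
    bot = (1 - fx) * image[y0 + 1, x0] + fx * image[y0 + 1, x0 + 1]
    return (1 - fy) * top + fy * bot
```

Because the map is dense, no parametric distortion model appears anywhere in this step, which is the point made in the abstract.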

Publications
  1. Ryusuke Sagawa, Masaya Takatsuji, Tomio Echigo, and Yasushi Yagi, “Calibration of Lens Distortion by Structured-Light Scanning,” In Proc. 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp.1349-1354, August, 2005.