MA05 Robot Vision
Time : 09:00~10:30
Room : Room 105
Chair : Prof. Wang-Heon Lee (Hansei University)
09:00~09:15        MA05-1
Comparison of faster R-CNN models for object detection

Chungkeun Lee, H. Jin Kim (Seoul National University, Korea)

We convert several state-of-the-art convolutional neural network (CNN) models for image classification into object detectors. Then, we compare the converted models at several image crop sizes in terms of computation time and detection precision. We will use these comparison data to select a proper detection model when a robot needs to perform an object detection task.
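The detection-precision side of such a comparison rests on box-level matching between detections and ground truth. A minimal sketch, assuming axis-aligned boxes and an intersection-over-union (IoU) threshold of 0.5 (the box lists below are hypothetical, not the authors' data):

```python
def iou(a, b):
    # boxes given as (x1, y1, x2, y2)
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def detection_precision(detections, ground_truth, thresh=0.5):
    # a detection is a true positive if it overlaps some
    # still-unmatched ground-truth box with IoU >= thresh
    matched, tp = set(), 0
    for d in detections:
        for i, g in enumerate(ground_truth):
            if i not in matched and iou(d, g) >= thresh:
                matched.add(i)
                tp += 1
                break
    return tp / len(detections) if detections else 0.0

# one good detection, one spurious one -> precision 0.5
p = detection_precision([(0, 0, 10, 10), (50, 50, 60, 60)],
                        [(1, 1, 11, 11)])
```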
09:15~09:30        MA05-2
Occlusion-Robust Segmentation for Multiple Objects using a Micro Air Vehicle

Asahi Kainuma, Hirokazu Madokoro, Kazuhito Sato, Nobuhiro Shimoi(Akita Prefectural University, Japan)

This paper presents a novel object extraction method using a micro air vehicle (MAV) for improving robustness against occlusion. The proposed method is based on object saliency, extracting regions of interest using scale-invariant feature transform (SIFT) features and segmenting target objects using GrabCut, which requires advance learning. Results of experiments revealed that object extraction accuracies, measured using precision, recall, and $F$-measure, improved according to the MAV movement for images with changing rates of occlusion between two objects: a chair and a table.
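The three evaluation metrics named above can be computed directly from binary segmentation masks; a minimal numpy sketch with toy masks (an assumption for illustration, not the authors' data) might be:

```python
import numpy as np

def segmentation_scores(pred, truth):
    # pred, truth: boolean masks of the same shape
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f = (2 * precision * recall / (precision + recall)
         if precision + recall else 0.0)
    return precision, recall, f

truth = np.zeros((8, 8), bool); truth[2:6, 2:6] = True  # 16-px object
pred = np.zeros((8, 8), bool); pred[2:6, 2:4] = True    # half recovered
p, r, f = segmentation_scores(pred, truth)
# precision 1.0 (no false positives), recall 0.5, F-measure 2/3
```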
09:30~09:45        MA05-3
3D Reconstruction of Structures using Spherical Cameras with Small Motion

Sarthak Pathak, Alessandro Moro, Hiromitsu Fujii, Atsushi Yamashita, Hajime Asama(The University of Tokyo, Japan)

A dense 3D reconstruction technique for large structures from small spherical camera motion is proposed. Spherical cameras can capture the whole structure at once. The epipolar geometry of two spherical images captured near the structure is found by combining feature points and dense optical flow. Feature points alone are inaccurate and prone to noise with small displacements, but spherical cameras allow use of the global, dense flow field. Thus, the epipolar geometry is densely optimized for accurate reconstruction. This can help measure large infrastructure (bridges, etc.) with minimal robot motion.
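For spherical images, the epipolar constraint is usually written on unit bearing vectors rather than pixel coordinates: x2' E x1 = 0 with E = [t]x R for the point transform P2 = R P1 + t. A small numpy check with a synthetic small motion (rotation angle, translation, and the 3D point are all invented for illustration):

```python
import numpy as np

def skew(t):
    # cross-product matrix [t]x
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

# small synthetic motion: rotation about z, translation along x
th = 0.05
R = np.array([[np.cos(th), -np.sin(th), 0.0],
              [np.sin(th),  np.cos(th), 0.0],
              [0.0, 0.0, 1.0]])
t = np.array([0.1, 0.0, 0.0])
E = skew(t) @ R                      # essential matrix

# a 3D point seen from both poses as unit bearing vectors
P1 = np.array([1.0, 2.0, 3.0])
x1 = P1 / np.linalg.norm(P1)         # bearing in frame 1
P2 = R @ P1 + t
x2 = P2 / np.linalg.norm(P2)         # bearing in frame 2
residual = x2 @ E @ x1               # ~0 for a consistent pair
```

Densely optimizing the epipolar geometry amounts to minimizing such residuals over the whole flow field rather than over sparse matches alone.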
09:45~10:00        MA05-4
Parallel optical flow estimation with capturing images in real time

Yuta Hamada, Teruo Yamaguchi, Hiroshi Harada(Kumamoto University, Japan)

The spatio-temporal differentiation method is one of the most effective methods of determining the velocity of moving objects. However, it is difficult to estimate optical flow in real time, because the method must wait for three image frames. To solve this problem, we introduced multi-core programming that captures images and evaluates the velocity of moving objects in parallel. To reduce the time needed to calculate eigenvalues, we improved the eigenvalue calculation method. Moreover, to obtain more accurate optical flow, we compared a uniform average filter and a Gaussian filter.
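The spatio-temporal differentiation idea can be sketched as a least-squares solve of the brightness-constancy equation Ix*u + Iy*v + It = 0, where the eigenvalues of the structure tensor indicate how well the flow is constrained. The smooth synthetic frame and one-pixel shift below are illustrative assumptions, not the authors' setup:

```python
import numpy as np

# smooth synthetic frame and a copy shifted one pixel in x
x = np.arange(64)
X, Y = np.meshgrid(x, x)
k = 2 * np.pi / 64
I1 = np.sin(k * X) + np.sin(k * Y)
I2 = np.roll(I1, 1, axis=1)          # true flow: (u, v) = (1, 0)

# spatio-temporal derivatives (central differences in space)
Ix = (np.roll(I1, -1, axis=1) - np.roll(I1, 1, axis=1)) / 2
Iy = (np.roll(I1, -1, axis=0) - np.roll(I1, 1, axis=0)) / 2
It = I2 - I1

# least-squares flow over the frame via the structure tensor
A = np.array([[(Ix * Ix).sum(), (Ix * Iy).sum()],
              [(Ix * Iy).sum(), (Iy * Iy).sum()]])
b = -np.array([(Ix * It).sum(), (Iy * It).sum()])
u, v = np.linalg.solve(A, b)

# eigenvalues of A: both large means the flow is well constrained
lam = np.linalg.eigvalsh(A)
```

In practice this solve is done per window, which is where parallel capture and computation, and a cheap eigenvalue routine, pay off.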
10:00~10:15        MA05-5
Exposure Correction and Image Blending for Planar Panorama Stitching

Sangil Lee, Seung Jae Lee, Jaehyeon Park, H. Jin Kim(Seoul National University, Korea)

In this paper, we propose a planar panorama stitching method to blend consecutive images captured by a multi-rotor equipped with a fish-eye camera. In addition, we suggest an exposure correction method to reduce the brightness difference between contiguous images, and a drift error correction method to compensate for the estimated position of the multi-rotor. In the experiments, the multi-rotor flies at 35 meters above the ground, and the fish-eye camera, attached to a gimbal system, takes pictures. We then validate the performance of the algorithm by processing the video frames.
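One common form of exposure correction is a per-image gain that equalizes mean intensity in the overlap region, followed by a linear (feathered) blend. The sketch below uses synthetic 1-D image strips and is an assumption about the general technique, not the authors' exact pipeline:

```python
import numpy as np

def blend_pair(left, right, overlap):
    # exposure correction: scale `right` so overlap means agree
    gain = left[-overlap:].mean() / right[:overlap].mean()
    right = right * gain
    # linear feathering across the overlap region
    w = np.linspace(1.0, 0.0, overlap)
    mixed = w * left[-overlap:] + (1 - w) * right[:overlap]
    return np.concatenate([left[:-overlap], mixed, right[overlap:]])

left = np.full(10, 100.0)    # correctly exposed strip
right = np.full(10, 60.0)    # darker strip of the same scene
pano = blend_pair(left, right, overlap=4)
# gain 100/60 lifts the dark strip, so the panorama is seamless
```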
10:15~10:30        MA05-6
Non-contact Gap and Flush Measurement using Monocular Structured Light Vision

Trang Thi Tran, CheolKeun Ha(University of Ulsan, Korea)

The fit-up of various parts is inspected by measuring the width of the gap between two adjacent panels and the alignment of the two surfaces, namely flushness, which is a key function in vehicle assembly. The solution requires high accuracy and fast measurement. Toward this end, we propose a vision-based non-contact gap and flush measurement sensor consisting of a high-resolution camera and a multi-line laser generator. The proposed sensor projects laser lines onto the panels. The lines are then digitized, and the desired calculations are made to produce gap and flush measurements.
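Structured-light measurement of this kind typically reduces to intersecting the camera ray through each detected laser pixel with the calibrated laser plane; the gap is then a distance between reconstructed edge points. A hedged numpy sketch with invented calibration values (the plane and the two rays are hypothetical):

```python
import numpy as np

def ray_plane_point(ray_dir, plane_n, plane_d):
    # camera at the origin; laser plane: n . X = d
    s = plane_d / (plane_n @ ray_dir)
    return s * ray_dir

# hypothetical laser plane 0.5 m in front of the camera
n = np.array([0.0, 0.0, 1.0])
d = 0.5

# rays through two detected laser pixels, one on each panel edge
r1 = np.array([0.01, 0.0, 1.0])
r2 = np.array([0.03, 0.0, 1.0])
p1 = ray_plane_point(r1, n, d)
p2 = ray_plane_point(r2, n, d)
gap = np.linalg.norm(p2 - p1)    # edge-to-edge gap, here 0.01 m
```

Flushness would follow the same triangulation, comparing the out-of-plane offset of points reconstructed on each panel surface.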
