Davide Scaramuzza


Omnidirectional Vision: on calibration, feature extraction, and visual odometry for robotic applications

Davide Scaramuzza, PhD student at the Autonomous Systems Lab, Swiss Federal Institute of Technology Zurich (ETH).
Thesis supervisor: Prof. Dr. Roland Siegwart

Abstract:

Omnidirectional cameras are vision sensors that provide a 360° field of view of the scene. Such a camera can be obtained either by combining a shaped mirror with a standard camera or by using a fisheye camera. This talk is composed of three parts:

1. I will start by describing a new camera model and a new algorithm for calibrating central omnidirectional cameras. Our model assumes that the imaging function can be described by a Taylor series expansion whose coefficients are the intrinsic calibration parameters. The calibration inputs are the corners of a checkerboard-like pattern shown around the camera. We also provide a Matlab toolbox (available online) that implements the proposed method (a minimal sketch of the back-projection implied by this model follows the abstract).

2. In the second part, I present a method for extracting and matching vertical visual features between images taken by an omnidirectional camera. Matching robustness is achieved by creating a distinctive descriptor that is invariant to rotation and to slight changes of illumination. The robustness of the approach is validated through real experiments with a wheeled robot equipped with an omnidirectional camera (a generic matching sketch follows the abstract).

3. In the third part, I describe a method for computing the ego-motion of a vehicle relative to the road. The only input to the algorithm is the sequence of images provided by a single omnidirectional camera mounted on the roof of the vehicle. The front end of the system consists of two different feature trackers. The first is a homography-based tracker that detects and matches SIFT features that most likely belong to the ground plane. The second is a skyline tracker that is used as a visual compass to give high-resolution estimates of the rotation of the vehicle. This 2D pose-estimation method has been applied successfully to videos from an automotive platform (a pose-integration sketch follows the abstract).
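The imaging model in part 1 maps a pixel to its 3D viewing direction through a polynomial in the pixel's radial distance from the image centre, with the polynomial coefficients acting as the intrinsic parameters. The following is a minimal Python sketch of that back-projection step, assuming the distortion centre and the coefficients are already known from calibration; the function and parameter names are illustrative and are not the API of the Matlab toolbox.

    import numpy as np

    def pixel_to_ray(u, v, center, poly_coeffs):
        """Back-project a pixel to a 3D viewing ray using a polynomial
        (Taylor-expansion) imaging function, in the spirit of the model
        described in the talk. `center` is the distortion centre and
        `poly_coeffs` = [a0, a1, ..., aN] are the intrinsic parameters
        (illustrative names, not the toolbox interface)."""
        # Shift to sensor coordinates centred on the distortion centre.
        x = u - center[0]
        y = v - center[1]
        rho = np.hypot(x, y)                     # radial distance from the centre
        z = np.polyval(poly_coeffs[::-1], rho)   # f(rho) = a0 + a1*rho + ... + aN*rho^N
        ray = np.array([x, y, z])
        return ray / np.linalg.norm(ray)         # unit-norm viewing direction

    # Example with made-up 4th-order coefficients, as might result from calibration.
    coeffs = [-180.0, 0.0, 1.5e-3, -2.0e-6, 1.0e-8]
    print(pixel_to_ray(320.0, 260.0, center=(400.0, 300.0), poly_coeffs=coeffs))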
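For the feature matching in part 2, the abstract only states that the descriptor is distinctive and invariant to rotation and slight illumination changes. A common way to make matching distinctive is a nearest-neighbour search with a ratio test, sketched below under the assumption that descriptors are fixed-length vectors; this is a generic illustration, not the specific descriptor or matching rule used in the talk.

    import numpy as np

    def match_descriptors(desc_a, desc_b, ratio=0.8):
        """Nearest-neighbour matching with a distinctiveness (ratio) test.
        desc_a, desc_b: arrays of shape (N, D) and (M, D). Descriptors are
        L2-normalised first, which gives some robustness to uniform changes
        of illumination. The ratio threshold is an illustrative value."""
        a = desc_a / np.linalg.norm(desc_a, axis=1, keepdims=True)
        b = desc_b / np.linalg.norm(desc_b, axis=1, keepdims=True)
        matches = []
        for i, d in enumerate(a):
            dists = np.linalg.norm(b - d, axis=1)    # distance to every candidate
            order = np.argsort(dists)
            best, second = order[0], order[1]
            if dists[best] < ratio * dists[second]:  # accept only distinctive matches
                matches.append((i, best))
        return matches

    # Example with random descriptors, just to show the call.
    print(match_descriptors(np.random.rand(10, 32), np.random.rand(12, 32)))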
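In part 3, the two trackers yield per-frame estimates of the vehicle's planar displacement (from the ground-plane homography) and heading change (from the skyline visual compass), which can be chained into a 2D trajectory. The sketch below shows such a dead-reckoning integration under the assumption that each frame provides a (dx, dy, dtheta) triple; the interface is hypothetical and not taken from the talk.

    import math

    def integrate_planar_motion(steps, x0=0.0, y0=0.0, theta0=0.0):
        """Dead-reckon a 2D trajectory from per-frame motion estimates.
        Each step is (dx, dy, dtheta): planar displacement in the vehicle
        frame (e.g. from a ground-plane tracker) and heading change
        (e.g. from a skyline 'visual compass')."""
        x, y, theta = x0, y0, theta0
        trajectory = [(x, y, theta)]
        for dx, dy, dtheta in steps:
            # Rotate the body-frame displacement into the world frame, then move.
            x += dx * math.cos(theta) - dy * math.sin(theta)
            y += dx * math.sin(theta) + dy * math.cos(theta)
            theta += dtheta
            trajectory.append((x, y, theta))
        return trajectory

    # Example: three frames of gentle forward motion with a slight left turn.
    print(integrate_planar_motion([(0.5, 0.0, 0.02), (0.5, 0.01, 0.02), (0.5, 0.01, 0.03)]))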