Journal: ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Print ISSN: 2194-9042
Online ISSN: 2194-9050
Publication year: 2014
Volume: XL-3/W1
Pages: 15-20
DOI: 10.5194/isprsarchives-XL-3-W1-15-2014
Publisher: Copernicus Publications
Abstract: Modern mobile mapping systems include one or several laser scanners and cameras. The main outputs of these systems are oriented camera images and 3D point clouds. These point clouds can be derived from pairs of overlapping images or from the raw laser data together with the platform trajectory. A mobile mapping campaign may include several overlapping areas; in general, the point clouds derived for the same area are not properly registered because of partial or total GNSS occlusions, multipath, and inertial drift and noise. Nowadays, the standard procedure for laser-to-laser and camera-to-laser co-registration includes several steps. The first steps are the system calibration, in which the lever arm and boresight between laser scanner and IMU, and between camera and IMU, must be determined. After the calibration steps, camera and LiDAR point clouds can be derived. Then, co-registrations between LiDAR point clouds, and between the camera point cloud and the LiDAR point cloud, are computed. In contrast to this standard approach, in this paper we propose to solve the orientation and calibration of laser and camera data in a single, combined adjustment. Solving the orientation and calibration together allows us to deal implicitly with the co-registration problem. The proposed method is based on the identification of common tie features between images and point clouds and their use in a combined adjustment. These common tie features are straight line segments. Preliminary results indicate the feasibility and the potential of the approach.
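
To make the idea of a combined adjustment with straight-line tie features more concrete, the following is a minimal, hypothetical sketch rather than the authors' implementation: it estimates a single rigid transform that registers a LiDAR point cloud to camera-derived 3D tie lines by minimizing point-to-line distances with a nonlinear least-squares solver. The synthetic data, the 6-parameter rigid model, and the use of scipy.optimize.least_squares are illustrative assumptions; the paper's adjustment instead solves orientation together with calibration parameters such as lever arm and boresight.

# Illustrative sketch (assumed setup, not the paper's method): register LiDAR
# points to 3D tie lines by minimizing perpendicular point-to-line distances.
import numpy as np
from scipy.spatial.transform import Rotation
from scipy.optimize import least_squares

def point_to_line_residuals(params, lidar_pts, line_origins, line_dirs):
    """Perpendicular offsets of transformed LiDAR points from their
    corresponding 3D tie lines (one line per point, unit-length directions)."""
    rot = Rotation.from_rotvec(params[:3])   # axis-angle rotation
    t = params[3:]                           # translation
    p = rot.apply(lidar_pts) + t             # transform LiDAR points
    d = p - line_origins                     # vectors from line origins to points
    # Remove the component along each line direction; the remainder is the
    # perpendicular offset whose norm is the point-to-line distance.
    along = np.sum(d * line_dirs, axis=1, keepdims=True) * line_dirs
    return (d - along).ravel()

# Synthetic example: recover a known transform from noisy line observations.
rng = np.random.default_rng(0)
true_rot = Rotation.from_euler("xyz", [0.5, -1.0, 2.0], degrees=True)
true_t = np.array([0.3, -0.1, 0.05])

line_dirs = rng.normal(size=(50, 3))
line_dirs /= np.linalg.norm(line_dirs, axis=1, keepdims=True)
line_origins = rng.uniform(-10, 10, size=(50, 3))
# Points lying on the tie lines (image frame), mapped into the LiDAR frame
# with the inverse of the true transform, plus a small amount of noise.
pts_on_lines = line_origins + rng.uniform(-5, 5, size=(50, 1)) * line_dirs
lidar_pts = true_rot.inv().apply(pts_on_lines - true_t) \
            + rng.normal(scale=0.01, size=(50, 3))

sol = least_squares(point_to_line_residuals, np.zeros(6),
                    args=(lidar_pts, line_origins, line_dirs))
print("estimated rotation (deg):",
      Rotation.from_rotvec(sol.x[:3]).as_euler("xyz", degrees=True))
print("estimated translation:   ", sol.x[3:])

Note that point-to-line residuals only constrain the components perpendicular to each line, so a well-distributed set of line directions is needed before all transformation (or calibration) parameters become observable.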