The fusion of imaging lidar information and digital imagery results in 2.5-dimensional surfaces covered with texture information, called texel images. These data sets, when taken from different viewpoints, can be combined to create three-dimensional (3-D) images of buildings, vehicles, or other objects. This paper presents a procedure for the calibration, error correction, and fusion of flash lidar and digital camera information from a single sensor configuration to create accurate texel images. A brief description of a prototype sensor is given, along with a calibration technique used with the sensor that is applicable to other flash lidar/digital camera sensor systems. The method combines systematic error correction of the flash lidar data, correction for lens distortion in the digital camera and flash lidar images, and fusion of the lidar data to the camera data in a single process. The result is a texel image acquired directly from the sensor. Examples of the resulting images, showing the improvements from the proposed algorithm, are presented. Results with the prototype sensor show a very good match between the 3-D points and the digital image (< 2.8 image pixels), with a 3-D object measurement error of < 0.5%, compared to a noncalibrated error of ∼3%.
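The core fusion step described above, mapping each lidar range sample onto the corresponding digital-image pixel, can be illustrated with a standard pinhole projection combined with a two-term radial distortion model. The following sketch is not the paper's calibration procedure; the function name, the choice of distortion model, and all parameter values are illustrative assumptions.

```python
import numpy as np

def project_lidar_to_image(points_xyz, fx, fy, cx, cy, k1, k2, R, t):
    """Project 3-D lidar points into a camera image plane (illustrative sketch).

    points_xyz   : (N, 3) array of lidar points in the lidar frame
    fx, fy, cx, cy : pinhole intrinsics in pixels (assumed known from calibration)
    k1, k2       : radial lens distortion coefficients (simple two-term model)
    R, t         : rotation (3x3) and translation (3,) from lidar to camera frame
    Returns an (N, 2) array of (u, v) pixel coordinates.
    """
    # Transform the lidar points into the camera coordinate frame.
    cam = points_xyz @ R.T + t
    # Perspective division onto the normalized image plane.
    x = cam[:, 0] / cam[:, 2]
    y = cam[:, 1] / cam[:, 2]
    # Apply the assumed two-term radial distortion model.
    r2 = x**2 + y**2
    d = 1.0 + k1 * r2 + k2 * r2**2
    # Scale and shift by the intrinsics to get pixel coordinates.
    u = fx * x * d + cx
    v = fy * y * d + cy
    return np.column_stack((u, v))
```

With an identity extrinsic transform and zero distortion, a point on the optical axis lands at the principal point (cx, cy), which is a quick sanity check on any such projection routine.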
S. E. Budge and N. S. Badamikar, “Calibration method for texel images created from fused lidar and digital camera images,” Opt. Eng., vol. 52, no. 10, p. 103101, Oct. 2013.