Abstract: Scanning our surroundings has become one of the key challenges in automation. Effective and efficient position, distance and velocity sensing is essential to accurate decision making in automated applications ranging from robotics to driverless cars. Light detection and ranging (LiDAR) has become a central tool in these 3D sensing applications, where the time-of-flight (TOF) of photons is used to recover distance information. Such systems typically rely on scanning a laser spot to recover position information. Here we demonstrate a hybrid LiDAR approach that combines a multi-view camera system, providing position and distance information, with a simple (scanless) LiDAR system, providing velocity tracking and depth accuracy. We show that data from the two component systems can be combined into a compound image of a scene containing position, depth and velocity information at more than 1 frame per second, with a depth accuracy of 2.5 cm or better. This hybrid approach avoids the bulk and expense of scanning systems while adding velocity information. We hope that this approach will offer a simpler, more robust alternative to 3D scanning systems for autonomous vehicles.
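As an illustrative note (not part of the original abstract): the time-of-flight principle mentioned above maps a measured round-trip photon travel time Δt to distance. A minimal statement of that relation, assuming a single-bounce return and with c the speed of light, is

\[ d = \frac{c \, \Delta t}{2} \]

Under this relation, the reported depth accuracy of 2.5 cm corresponds to a round-trip timing resolution of roughly 170 ps.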