Abstract: Autonomous navigation in real-life situations is a challenging task. Various sensors, such as GPS, LiDAR, and RADAR, are used to localise an autonomous vehicle, but each approach has its limitations. Camera-based visual odometry is a promising alternative. In this paper, the motion trajectory of an ego vehicle equipped with a single camera is estimated in real time using the CARLA simulator. A generalised flowchart for visual odometry is discussed. Five available algorithms for feature extraction and detection are compared based on their performance on sequential images, and the performance of two available feature-matching algorithms is also analysed. The comparison considers the number of feature points detected, the number of good matches, time complexity, and the feasibility of each algorithm. Six variations of the sequential images from the CARLA simulator, i.e. zoomed, panned, contrast-adjusted, flipped, rotated, and sheared images, are considered to reflect real-life conditions. It is concluded that no single combination of available feature extraction, detection, and matching algorithms is suitable for all weather conditions; among the combinations tested, BRIEF-based pipelines perform the best.
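To illustrate the kind of binary-descriptor matching that BRIEF-based pipelines rely on, the sketch below builds a toy BRIEF-style descriptor (intensity comparisons at random point pairs around a keypoint) and matches descriptors across two frames by brute-force Hamming distance. This is a minimal, numpy-only illustration under assumed synthetic data, not the paper's actual pipeline; the keypoint locations and the 2-pixel pan are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def brief_descriptor(img, kp, patch=16, n_pairs=128, pairs=None):
    """Toy BRIEF-style binary descriptor: compare pixel intensities at
    random point pairs sampled inside a patch around the keypoint."""
    if pairs is None:
        # Sample the test-pair offsets once and reuse them for every keypoint.
        pairs = rng.integers(-patch // 2, patch // 2, size=(n_pairs, 4))
    y, x = kp
    a = img[y + pairs[:, 0], x + pairs[:, 1]]
    b = img[y + pairs[:, 2], x + pairs[:, 3]]
    return (a < b).astype(np.uint8), pairs

def hamming_match(desc1, desc2):
    """Brute-force matching: for each descriptor in desc1, return the
    index of the nearest descriptor in desc2 under Hamming distance."""
    d = (desc1[:, None, :] != desc2[None, :, :]).sum(axis=2)
    return d.argmin(axis=1)

# Synthetic frame and a shifted copy (standing in for ego-motion between
# two sequential frames; a 2 px horizontal pan).
img1 = rng.integers(0, 256, size=(64, 64)).astype(np.float32)
img2 = np.roll(img1, shift=2, axis=1)

kps1 = [(20, 20), (30, 40), (40, 25)]          # hypothetical keypoints
kps2 = [(y, x + 2) for y, x in kps1]           # same points after the pan

descs1, descs2, pairs = [], [], None
for (y1, x1), (y2, x2) in zip(kps1, kps2):
    d1, pairs = brief_descriptor(img1, (y1, x1), pairs=pairs)
    d2, _ = brief_descriptor(img2, (y2, x2), pairs=pairs)
    descs1.append(d1)
    descs2.append(d2)

matches = hamming_match(np.stack(descs1), np.stack(descs2))
print(matches)  # each keypoint matches its shifted counterpart: [0 1 2]
```

Because the binary descriptor is just a bit string, matching reduces to Hamming distance, which is why BRIEF-based pipelines are fast enough for the real-time requirement discussed in the paper.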