
Article Information

  • Title: Semantic segmentation–aided visual odometry for urban autonomous driving
  • Authors: Lifeng An; Xinyu Zhang; Hongbo Gao
  • Journal: International Journal of Advanced Robotic Systems
  • Print ISSN: 1729-8806
  • Electronic ISSN: 1729-8814
  • Year: 2017
  • Volume: 14
  • Issue: 5
  • DOI: 10.1177/1729881417735667
  • Publisher: SAGE Publications
  • Abstract: Visual odometry plays an important role in urban autonomous driving. Feature-based visual odometry methods sample candidates randomly from all available feature points, while alignment-based methods take all pixels into account. Both approaches assume that the quantitative majority of candidate visual cues reflects the true motion. In real urban traffic scenes, however, this assumption can be broken by the many dynamic traffic participants: a large truck or bus may occupy most of a front-view monocular image and cause erroneous odometry estimates. Finding visual cues that represent the real motion is the most important and hardest step for visual odometry in dynamic environments, and in such cases the semantic attributes of pixels offer a more reasonable criterion for candidate selection. This article analyzes the availability of all visual cues with the help of pixel-level semantic information and proposes a new visual odometry method that combines feature-based and alignment-based visual odometry in a single optimization pipeline. The proposed method was compared with three open-source visual odometry algorithms on the KITTI benchmark data sets and on our own data set. Experimental results confirm that the new approach improves both accuracy and robustness in complex dynamic scenes.
  • Keywords: visual odometry; dynamic scene; semantic segmentation; deep learning
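The core idea of the abstract — rejecting visual cues that fall on dynamic traffic participants before motion estimation — can be sketched as a semantic filtering step. The following is a minimal illustration, not the paper's actual pipeline: the class list, function names, and data layout are assumptions, and a real system would operate on the output of a segmentation network over KITTI-style imagery.

```python
# Hypothetical sketch: keep only feature points that lie on static
# semantic classes (road, building, ...) so that dynamic objects such
# as trucks and buses do not dominate the motion estimate.

# Classes treated as dynamic (illustrative, not from the paper).
DYNAMIC_CLASSES = {"car", "truck", "bus", "person", "bicycle"}

def filter_static_features(keypoints, label_map, class_names):
    """Filter keypoints by pixel-level semantic labels.

    keypoints   -- list of (u, v) pixel coordinates
    label_map   -- 2-D grid; label_map[v][u] is a class index
    class_names -- maps a class index to its name
    Returns the keypoints that fall on static classes only.
    """
    static = []
    for (u, v) in keypoints:
        if class_names[label_map[v][u]] not in DYNAMIC_CLASSES:
            static.append((u, v))
    return static

# Tiny example: a 2x2 label map where the top-right pixel is a car.
names = ["road", "car"]
labels = [[0, 1],
          [0, 0]]
pts = [(0, 0), (1, 0), (0, 1)]
print(filter_static_features(pts, labels, names))  # → [(0, 0), (0, 1)]
```

The surviving points would then feed the feature-based or alignment-based optimization described in the abstract; in practice the filtering is a per-pixel mask applied before candidate sampling rather than a post hoc list scan.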