
Article Information

  • Title: Recognition and Depth Estimation of Ships Based on Binocular Stereo Vision
  • Authors: Zheng, Yuanzhou; Liu, Peng; Qian, Long
  • Journal: Journal of Marine Science and Engineering
  • Electronic ISSN: 2077-1312
  • Year: 2022
  • Volume: 10
  • Issue: 8
  • Pages: 1-22
  • DOI: 10.3390/jmse10081153
  • Language: English
  • Publisher: MDPI AG
  • Abstract: To improve the navigation safety of inland waterway ships and enrich the available methods of environmental perception, this paper studies the recognition and depth estimation of inland waterway ships based on binocular stereo vision (BSV). In the ship-recognition stage, to relieve the computational burden imposed by the large number of parameters in the classic YOLOv4 model, the MobileNetV1 network is adopted as the feature-extraction module of YOLOv4. The results indicate that the mAP of the MobileNetV1-YOLOv4 model reaches 89.25% while the weight file of the backbone network is only 47.6 MB, greatly reducing the amount of computation without sacrificing recognition accuracy. In the depth-estimation stage, this paper proposes a sub-pixel feature-point detection and matching algorithm based on ORB: the FSRCNN algorithm is first used to perform super-resolution reconstruction of the original images, increasing the density of image feature points and the detection accuracy, which is more conducive to computing the image disparity. The depth-estimation results indicate that at a target distance of about 300 m, the depth-estimation error is less than 3%, which meets the needs of inland waterway ships. The BSV-based ship recognition and depth-estimation technology proposed in this paper compensates for the shortcomings of existing environmental-perception methods, improves the navigation safety of ships to a certain extent, and promotes the future development of intelligent ships.
  • Keywords: navigation safety; environmental perception; binocular stereo vision; MobileNetV1-YOLOv4; FSRCNN; ORB
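
The depth figures in the abstract rest on the standard stereo triangulation relation Z = f·B/d, where f is the focal length in pixels, B the baseline between the two cameras, and d the disparity of a matched feature point. A minimal sketch of that relation follows; the focal length, baseline, and disparity values below are illustrative assumptions, not parameters reported in the paper:

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Stereo triangulation: Z = f * B / d.

    focal_px     -- focal length in pixels (from camera calibration)
    baseline_m   -- distance between the two camera centres, in metres
    disparity_px -- horizontal pixel offset of a matched feature point
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    return focal_px * baseline_m / disparity_px

# Illustrative numbers: a 1400 px focal length and a 0.5 m baseline give a
# target at roughly 300 m a disparity of only about 2.3 px.
z = depth_from_disparity(1400.0, 0.5, 2.333)

# At that range, even a half-pixel disparity error shifts the estimate by tens
# of metres -- which is why sub-pixel matching (the paper's FSRCNN + ORB step)
# matters for keeping the error under the reported 3%.
z_coarse = depth_from_disparity(1400.0, 0.5, 2.333 + 0.5)
```

Note how small the disparity becomes at long range: the accuracy of the matched feature points, not the triangulation formula itself, dominates the depth error, which motivates the paper's super-resolution preprocessing.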