
Article Information

  • Title: Deep Learning Method for Recognition and Classification of Images from Video Recorders in Difficult Weather Conditions
  • Authors: Aleksey Osipov; Ekaterina Pleshakova; Sergey Gataullin
  • Journal: Sustainability
  • Print ISSN: 2071-1050
  • Year: 2022
  • Volume: 14
  • Issue: 4
  • Pages: 2420
  • DOI: 10.3390/su14042420
  • Language: English
  • Publisher: MDPI, Open Access Journal
  • Abstract: The sustainable functioning of the transport system requires solving the problems of identifying and classifying road users in order to predict the likelihood of accidents and prevent abnormal or emergency situations. The emergence of unmanned vehicles on urban highways significantly increases the risks of such events. To improve road safety, intelligent transport systems, embedded computer vision systems, video surveillance systems, and photo radar systems are used. The main problem is the recognition and classification of objects and critical events in difficult weather conditions. For example, water drops, snow, dust, and dirt on camera lenses make images less reliable for object identification, license plate recognition, vehicle trajectory detection, etc., because parts of the image are occluded, distorted, or blurred. The article proposes a way to improve the accuracy of object identification by using the Canny operator to exclude the damaged areas of the image from consideration, capturing the clear parts of objects and ignoring the blurry ones. Only those parts of the image where this operator has detected object boundaries are subjected to further processing. To classify images by the remaining intact parts, we propose a combined approach that includes the histogram of oriented gradients (HOG) method, a bag of visual words (BoVW), and a backpropagation neural network (BPNN). For the binary classification of images of damaged objects, this method showed a significant advantage over the classical convolutional neural network (CNN) approach (79% versus 65% accuracy). The article also presents multiclass classification results for objects recognized from damaged images, with accuracies ranging from 71% to 86%.
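
The abstract describes masking out weather-damaged image regions with the Canny operator and then computing HOG features only on the regions where object boundaries were detected. The following is a minimal, hypothetical sketch of that masking step using OpenCV; the block size, edge-count threshold, and file name are illustrative assumptions, not the authors' parameters, and the BoVW and BPNN classification stages are not shown.

import cv2
import numpy as np

def clear_region_mask(gray, low=50, high=150, block=32, min_edges=30):
    """Mark image blocks that still contain enough Canny edges to be usable."""
    edges = cv2.Canny(gray, low, high)
    mask = np.zeros_like(edges)
    h, w = edges.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            # keep a block only if the Canny operator found object boundaries in it
            if cv2.countNonZero(edges[y:y+block, x:x+block]) >= min_edges:
                mask[y:y+block, x:x+block] = 255
    return mask

def hog_on_clear_parts(gray, mask):
    """Compute HOG descriptors on the undamaged (masked) part of the image only."""
    kept = cv2.bitwise_and(gray, gray, mask=mask)
    hog = cv2.HOGDescriptor()                    # default 64x128 detection window
    return hog.compute(cv2.resize(kept, (64, 128))).ravel()

gray = cv2.imread("dashcam_frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input frame
features = hog_on_clear_parts(gray, clear_region_mask(gray))
# 'features' would then feed a BoVW vocabulary / BPNN classifier (not shown here)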