
Article Information

  • Title: Mural classification model based on high- and low-level vision fusion
  • Authors: Jianfang Cao; Hongyan Cui; Zibang Zhang
  • Journal: Heritage Science
  • Print ISSN: 2050-7445
  • Year: 2020
  • Volume: 8
  • Issue: 1
  • Pages: 1-18
  • DOI: 10.1186/s40494-020-00464-2
  • Publisher: BioMed Central
  • Abstract: The rapid classification of ancient murals is a pressing issue confronting scholars due to the rich content and information contained in images. Convolutional neural networks (CNNs) have been extensively applied in the field of computer vision because of their excellent classification performance. However, the network architecture of CNNs tends to be complex, which can lead to overfitting. To address the overfitting problem for CNNs, a classification model for ancient murals was developed in this study on the basis of a pretrained VGGNet model that integrates a depth migration model and simple low-level vision. First, we utilized a data enhancement algorithm to augment the original mural dataset. Then, transfer learning was applied to adapt a pretrained VGGNet model to the dataset, and this model was subsequently used to extract high-level visual features after readjustment. These extracted features were fused with the low-level features of the murals, such as color and texture, to form feature descriptors. Last, these descriptors were input into classifiers to obtain the final classification outcomes. The precision rate, recall rate and F1-score of the proposed model were found to be 80.64%, 78.06% and 78.63%, respectively, over the constructed mural dataset. Comparisons with AlexNet and a traditional backpropagation (BP) network illustrated the effectiveness of the proposed method for mural image classification. The generalization ability of the proposed method was proven through its application to different datasets. The algorithm proposed in this study comprehensively considers both the high- and low-level visual characteristics of murals, consistent with human vision.
  • Keywords: VGGNet model; Transfer learning; Mural classification; Feature fusion; Low-level features; SVM classifier
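The feature-fusion step described in the abstract — high-level CNN features concatenated with low-level color features to form a descriptor, which is then classified — can be sketched roughly as follows. This is an illustrative NumPy-only sketch, not the paper's implementation: the random vectors stand in for activations from the fine-tuned VGGNet, the synthetic arrays stand in for mural images, and a nearest-centroid rule substitutes for the SVM classifier the paper uses.

```python
import numpy as np

rng = np.random.default_rng(0)

def color_histogram(image, bins=8):
    """Low-level visual feature: normalized per-channel color histogram."""
    hists = [np.histogram(image[..., c], bins=bins, range=(0, 256))[0]
             for c in range(3)]
    h = np.concatenate(hists).astype(float)
    return h / h.sum()

# Synthetic stand-ins for the mural dataset: 90 RGB "images", 3 classes.
images = rng.integers(0, 256, size=(90, 32, 32, 3))
labels = rng.integers(0, 3, size=90)

# Stand-in for high-level features extracted by the readjusted VGGNet
# (in the paper, activations from the fine-tuned network's top layers).
high_level = rng.normal(size=(90, 64))

# Fuse high- and low-level features into one descriptor per image.
low_level = np.stack([color_histogram(img) for img in images])
descriptors = np.hstack([high_level, low_level])  # shape (90, 64 + 3*8)

# Minimal classifier over the fused descriptors (nearest class centroid
# here; the paper feeds the descriptors to an SVM at this stage).
centroids = np.stack([descriptors[labels == k].mean(axis=0)
                      for k in range(3)])
dists = ((descriptors[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
preds = np.argmin(dists, axis=1)
print(descriptors.shape)
```

The concatenation is the key idea: the fused descriptor carries both the semantic evidence from the network and the color statistics a CNN may compress away, mirroring the paper's claim that the model attends to both high- and low-level visual characteristics.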