
Article Information

  • Title: Source Cell-Phone Identification in the Presence of Additive Noise from CQT Domain
  • Authors: Tianyun Qin; Rangding Wang; Diqun Yan
  • Journal: Information
  • Electronic ISSN: 2078-2489
  • Year: 2018
  • Volume: 9
  • Issue: 8
  • Pages: 205
  • DOI: 10.3390/info9080205
  • Language: English
  • Publisher: MDPI Publishing
  • Abstract: With the widespread availability of cell-phone recording devices, source cell-phone identification has become a hot topic in multimedia forensics. Research on source cell-phone identification under clean conditions has achieved good results, but performance in noisy environments remains unsatisfactory. This paper proposes a novel source cell-phone identification system, suitable for both clean and noisy environments, based on spectral distribution features from the constant Q transform (CQT) domain and a multi-scene training method. Analysis shows that the main identification difficulty lies in distinguishing different cell-phone models of the same brand, whose subtle differences are concentrated in the middle and low frequency bands. Therefore, this paper extracts spectral distribution features from the CQT domain, which provides higher frequency resolution in the mid-low frequency range. To evaluate the effectiveness of the proposed feature, four classifiers, Support Vector Machine (SVM), Random Forest (RF), Convolutional Neural Network (CNN), and Recurrent Neural Network with Bidirectional Long Short-Term Memory (RNN-BLSTM), are used to identify the source recording device. Experimental results show that the proposed features outperform Mel frequency cepstral coefficients (MFCC) and linear frequency cepstral coefficients (LFCC), improving accuracy for cell-phones of the same brand on both clean and noisy speech files. Among the classifiers, CNN performs best. Furthermore, the model is built with the multi-scene training method, which improves its discriminative ability in noisy environments compared with single-scene training. With CNN, the average accuracy on clean speech files from the CKC speech database (CKC-SD) and the TIMIT Recaptured Database (TIMIT-RD) increased from 95.47% and 97.89% to 97.08% and 99.29%, respectively. For noisy speech files with both seen and unseen noise types, performance improved substantially, with most recognition rates exceeding 90%. The proposed source identification system is therefore robust to noise. (An illustrative sketch of CQT-based feature extraction is given after this record.)
  • Keywords: source cell-phone identification; additive noise; CQT; CNN; multi-scene training; noise robustness
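
To illustrate the kind of CQT-domain spectral distribution feature described in the abstract, the following is a minimal Python sketch using librosa. The parameter choices (sample rate, fmin, number of bins, hop length) and the mean/standard-deviation summary over frames are assumptions for illustration, not the authors' exact pipeline.

```python
# Illustrative sketch (not the authors' exact method): extract a CQT-domain
# spectral-distribution feature vector from a speech recording with librosa.
import numpy as np
import librosa

def cqt_spectral_distribution(wav_path, sr=16000, n_bins=84,
                              bins_per_octave=12, hop_length=256):
    # Load the recording at a fixed sample rate (assumed 16 kHz here)
    y, sr = librosa.load(wav_path, sr=sr)
    # Constant-Q transform: log-spaced bins give finer resolution
    # in the mid-low frequency range than an STFT or Mel front end
    C = np.abs(librosa.cqt(y, sr=sr, fmin=librosa.note_to_hz('C1'),
                           n_bins=n_bins, bins_per_octave=bins_per_octave,
                           hop_length=hop_length))
    # Log-compress and summarize each CQT bin across time with its
    # mean and standard deviation ("spectral distribution" statistics)
    logC = np.log1p(C)
    feat = np.concatenate([logC.mean(axis=1), logC.std(axis=1)])
    return feat  # shape: (2 * n_bins,)
```

The fixed-length vector returned here could then be fed to any of the classifiers mentioned in the abstract (SVM, RF, CNN, RNN-BLSTM); the log-spaced CQT bins are the design choice that concentrates resolution in the low and middle bands where the device differences are reported to lie.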