Journal: Advances in Electrical and Computer Engineering
Print ISSN: 1582-7445
Online ISSN: 1844-7600
Year: 2021
Volume: 21
Issue: 3
Pages: 3-10
DOI: 10.4316/AECE.2021.03001
Language: English
Publisher: Universitatea "Stefan cel Mare" Suceava
Abstract: Significant effort is constantly devoted to finding ways to reduce the number of bits required for the quantization of neural network parameters. In addition to compression, the application of quantizer models that are robust to changes in the variance of the input data is of great importance in neural networks; yet, to the best of the authors' knowledge, this topic has not been sufficiently researched so far. For that reason, in this paper we give preference to the logarithmic companding scalar quantizer, which has shown the best robustness in high-quality quantization of speech signals, which, like the weights of neural networks, are modelled by the Laplacian distribution. We explore its performance through exact and asymptotic analyses for a low-resolution scenario with 2-bit quantization, and we draw firm conclusions about the usability of the exact performance analysis and the design of our quantizer. Moreover, we provide a way to increase the robustness of the proposed quantizer by additionally adapting its key parameter. Theoretical and experimental results obtained by applying our quantizer to the processing of neural network weights match very well, and, for that reason, we can expect our proposal to find its way to practical implementation.
Keywords: image classification; neural networks; quantization; signal to noise ratio; source coding
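The core technique named in the abstract, a logarithmic (μ-law) companding scalar quantizer applied at 2-bit resolution to Laplacian-distributed weights, can be sketched as below. This is a generic illustration, not the paper's exact design: the parameter values (`mu`, the support limit `x_max`) and the midrise uniform quantizer in the compressed domain are assumptions chosen for the example.

```python
import numpy as np

def mu_law_compress(x, x_max, mu):
    # c(x) = sign(x) * x_max * ln(1 + mu*|x|/x_max) / ln(1 + mu)
    return np.sign(x) * x_max * np.log1p(mu * np.abs(x) / x_max) / np.log1p(mu)

def mu_law_expand(y, x_max, mu):
    # Inverse of the compressor above.
    return np.sign(y) * (x_max / mu) * np.expm1(np.abs(y) * np.log1p(mu) / x_max)

def companding_quantizer(x, n_bits=2, x_max=4.0, mu=16.0):
    """Compress -> uniform (midrise) quantize -> expand."""
    levels = 2 ** n_bits
    step = 2.0 * x_max / levels
    y = mu_law_compress(np.clip(x, -x_max, x_max), x_max, mu)
    q = (np.floor(y / step) + 0.5) * step          # midrise cell centers
    q = np.clip(q, -x_max + step / 2, x_max - step / 2)
    return mu_law_expand(q, x_max, mu)

# Unit-variance Laplacian samples stand in for neural network weights.
rng = np.random.default_rng(0)
w = rng.laplace(0.0, 1.0 / np.sqrt(2.0), size=100_000)
wq = companding_quantizer(w, n_bits=2)

# Signal-to-quantization-noise ratio in dB.
sqnr_db = 10.0 * np.log10(np.mean(w ** 2) / np.mean((w - wq) ** 2))
```

Because the compressor expands small amplitudes and the Laplacian density concentrates weights near zero, the effective cells are finer where the data are dense, which is also what makes the companding design less sensitive to mismatches in the input variance than a uniform quantizer over the same support.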