Journal: International Journal of Modern Education and Computer Science
Print ISSN: 2075-0161
Online ISSN: 2075-017X
Year: 2021
Volume: 13
Issue: 3
Pages: 13-22
DOI: 10.5815/ijmecs.2021.03.02
Publisher: MECS Publisher
Abstract: One of the trends in information technologies is implementing neural networks in modern software packages [1]. The distinctive feature of neural networks is that they cannot be directly programmed, but must be trained. In this regard, the urgent task is to ensure sufficient speed and quality of neural network training procedures. The process of neural network training can differ significantly depending on the problem. There are verification methods that correspond to the task's constraints; they are used to assess the training results. Verification methods provide an estimate over the entire cardinal set of examples but do not make it possible to estimate which subset of examples causes a significant error. As a consequence, a neural network may fail to perform with a given set of hyperparameters, and training a new one is time-consuming. On the other hand, existing empirical methods for assessing neural network training use discrete sets of examples. With this approach, it is impossible to claim that the network is suitable for classification over the whole cardinal set of examples. This paper proposes a criterion for assessing the quality of classification results. The criterion is formed by describing the training states of the neural network. Each state is specified by the correspondence of the set of errors to the function range representing a cardinal set of test examples. Using the criterion makes it possible to track the network's classification defects and mark them as safe or unsafe. As a result, one can formally assess how the training and validation data sets must be altered to improve the network's performance, whereas existing verification methods provide no information on which part of the dataset causes the network to underperform.
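The core idea of locating which subset of examples causes a significant error, rather than reporting a single aggregate score, can be illustrated with a minimal sketch. This is not the paper's formal criterion; the function name, the per-class grouping, and the tolerance threshold are all illustrative assumptions.

```python
# Illustrative sketch only (not the paper's criterion): given per-example
# classification errors, group examples by class label and flag as "unsafe"
# every subset whose mean error exceeds a tolerance. A plain verification
# metric would report only the overall mean and hide which subset fails.
from collections import defaultdict

def flag_unsafe_subsets(labels, errors, tolerance=0.1):
    """Return {class: mean_error} for each class whose mean error > tolerance."""
    by_class = defaultdict(list)
    for label, err in zip(labels, errors):
        by_class[label].append(err)
    return {cls: sum(errs) / len(errs)
            for cls, errs in by_class.items()
            if sum(errs) / len(errs) > tolerance}

# Example: the "dog" subset underperforms even though "cat" is fine,
# suggesting the training set needs more or better "dog" examples.
labels = ["cat", "cat", "dog", "dog", "dog"]
errors = [0.02, 0.04, 0.30, 0.25, 0.35]
print(flag_unsafe_subsets(labels, errors))  # only the "dog" subset is flagged
```

Aggregating per subset rather than over the whole test set is what makes it possible to say which part of the dataset should be altered, which is the practical gap the paper's criterion addresses.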