Abstract: The article proposes two algorithms for filtering substandard texts. The first is based on the fact that the frequency of n-gram occurrences in a quality text obeys Zipf's law, and that when the words of the text are rearranged, the law ceases to hold. Comparing the frequency characteristics of the source text with those of the text obtained by permuting its words allows researchers to draw conclusions about the quality of the source text. The second algorithm is based on calculating and comparing the rate at which new words appear in good-quality and randomly generated texts. In a good text this rate is, as a rule, uneven, whereas in randomly generated texts the unevenness is smoothed out, which makes it possible to detect low-quality texts. The methods for solving the problem of filtering substandard texts are statistical and rely on computing various frequency characteristics of the text. Compared to the "bag of words" model, a graph model of the text, in which the vertices are words or word forms and the edges are pairs of words, as well as models with higher-order structures that use the frequency characteristics of n-grams with n > 2, takes into account the mutual disposition of word pairs and word triples within a common part of the text, for example within one sentence or one n-gram.
Keywords: natural text; pseudo-text; text filtering; Zipf's law; n-grams; rate of appearance of new words; "bag of words" model of the text; graph model of the text.