
Article Information

  • Title: Completeness Problem of the Deep Neural Networks
  • Authors: Ying Liu; Shaohui Wang
  • Journal: American Journal of Computational Mathematics
  • Print ISSN: 2161-1203
  • Electronic ISSN: 2161-1211
  • Year: 2018
  • Volume: 8
  • Issue: 2
  • Pages: 184-196
  • DOI: 10.4236/ajcm.2018.82014
  • Language: English
  • Publisher: Scientific Research Publishing
  • Abstract: Hornik, Stinchcombe & White have shown that multilayer feedforward networks with enough hidden units are universal approximators. Le Roux & Bengio have proved that adding hidden units yields strictly improved modeling power, and that Restricted Boltzmann Machines (RBMs) are universal approximators of discrete distributions. In this paper, we provide yet another proof. The advantage of this new proof is that it leads to several new learning algorithms. We prove that Deep Neural Networks implement an expansion and that this expansion is complete. First, we briefly review the basic Boltzmann Machine and the fact that the invariant distribution of a Boltzmann Machine generates a Markov chain (see the sketch after this list). We then review the θ-transformation and its completeness, i.e., any function can be expanded by the θ-transformation. We further review the ABM (Attrasoft Boltzmann Machine). The invariant distribution of an ABM is a θ-transformation; therefore, an ABM can simulate any distribution. We discuss how to convert an ABM into a Deep Neural Network. Finally, by establishing the equivalence between an ABM and the Deep Neural Network, we prove that the Deep Neural Network is complete.
  • Keywords: AI; Universal Approximators; Boltzmann Machine; Markov Chain; Invariant Distribution; Completeness; Deep Neural Network
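
The abstract's argument passes through the standard fact that Gibbs sampling on a Boltzmann Machine defines a Markov chain whose invariant distribution is the Boltzmann distribution p(x) ∝ exp(−E(x)), with energy E(x) = −½xᵀWx − bᵀx. The sketch below is not from the paper; it is a minimal illustration of that fact, assuming a binary network with symmetric weights, zero diagonal, units in {0, 1}, and illustrative parameters throughout. It runs the chain on a small network and compares empirical state frequencies against the exact distribution.

```python
import numpy as np

def energy(x, W, b):
    # E(x) = -0.5 * x^T W x - b^T x for binary states x in {0,1}^n,
    # with W symmetric and zero on the diagonal.
    return -0.5 * x @ W @ x - b @ x

def gibbs_sweep(x, W, b, rng):
    # One full sweep of Gibbs sampling: each unit is resampled from
    # p(x_i = 1 | rest) = sigmoid(W[i] @ x + b[i]).  Iterating sweeps
    # yields a Markov chain with invariant distribution p(x) ~ exp(-E(x)).
    x = x.copy()
    for i in range(len(x)):
        p_on = 1.0 / (1.0 + np.exp(-(W[i] @ x + b[i])))
        x[i] = 1.0 if rng.random() < p_on else 0.0
    return x

rng = np.random.default_rng(0)
n = 4                                  # small enough to enumerate all 2^n states
W = rng.normal(0.0, 0.5, (n, n))
W = 0.5 * (W + W.T)                    # symmetric weights
np.fill_diagonal(W, 0.0)               # no self-connections
b = rng.normal(0.0, 0.5, n)

# Run the chain, discard burn-in, and count visited states.
x = rng.integers(0, 2, n).astype(float)
counts = {}
for t in range(200_000):
    x = gibbs_sweep(x, W, b, rng)
    if t >= 10_000:
        counts[tuple(x)] = counts.get(tuple(x), 0) + 1
total = sum(counts.values())

# Exact Boltzmann distribution by brute-force enumeration.
states = [np.array(s, dtype=float) for s in np.ndindex(*([2] * n))]
weights = np.array([np.exp(-energy(s, W, b)) for s in states])
exact = weights / weights.sum()

for s, p in zip(states, exact):
    emp = counts.get(tuple(s), 0) / total
    print(tuple(int(v) for v in s), f"exact={p:.3f}  empirical={emp:.3f}")
```

With enough sweeps the empirical frequencies agree with the exact Boltzmann probabilities to within sampling noise, which is the Markov-chain/invariant-distribution property the abstract reviews before building up to the ABM and the completeness result.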