
Article Information

  • Title: Understanding Deep Neural Networks with Rectified Linear Units
  • Authors: Raman Arora ; Amitabh Basu ; Poorya Mianjy ; Anirbit Mukherjee
  • Journal: Electronic Colloquium on Computational Complexity
  • Print ISSN: 1433-8092
  • Year: 2017
  • Volume: 2017
  • Publisher: Universität Trier, Lehrstuhl für Theoretische Informatik
  • Abstract:

    In this paper we investigate the family of functions representable by deep neural networks (DNNs) with rectified linear units (ReLUs). We give the first polynomial-time (in the size of the data) algorithm that trains a ReLU DNN with one hidden layer to global optimality, assuming the input dimension and the number of nodes of the network are fixed constants.
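    For concreteness, the sketch below illustrates the function class in question, a one-hidden-layer ReLU network $x \mapsto a^\top \max(0, Wx + b) + c$; it is not the paper's training algorithm, and the parameter names W, b, a, c are ours.

```python
import numpy as np

# Minimal sketch of the hypothesis class studied in the paper: a
# one-hidden-layer ReLU network computing x -> a^T max(0, W x + b) + c.
# Parameter names are illustrative, not taken from the paper.
def relu_net_one_hidden(x, W, b, a, c):
    """Evaluate the network at an input x of shape (n,)."""
    hidden = np.maximum(0.0, W @ x + b)  # ReLU activations, shape (width,)
    return a @ hidden + c                # affine read-out, a scalar

# Example with input dimension n = 2 and 3 hidden nodes.
rng = np.random.default_rng(0)
W = rng.standard_normal((3, 2))
b = rng.standard_normal(3)
a = rng.standard_normal(3)
print(relu_net_one_hidden(np.array([0.5, -1.0]), W, b, a, 0.0))
```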

    We also improve on the known lower bounds on size (from exponential to super-exponential) for approximating a ReLU deep net function by a shallower ReLU net. Our gap theorems hold for smoothly parametrized families of ``hard'' functions, in contrast to the countable, discrete families known in the literature. An example consequence of our gap theorems is the following: for every natural number $k$ there exists a function representable by a ReLU DNN with $k^2$ hidden layers and total size $k^3$, such that any ReLU DNN with at most $k$ hidden layers requires at least $\frac{1}{2}k^{k+1} - 1$ total nodes.
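    To make the gap concrete, the short tabulation below (our illustration, not from the paper) compares the deep net's total size $k^3$ against the lower bound $\frac{1}{2}k^{k+1} - 1$ forced on any net with at most $k$ hidden layers, for small $k$.

```python
# Illustrative tabulation of the sizes in the gap theorem: a ReLU DNN with
# k^2 hidden layers and total size k^3 exists such that any ReLU DNN with at
# most k hidden layers needs at least (1/2) k^(k+1) - 1 total nodes.
for k in range(2, 7):
    deep_size = k**3                      # total size of the deep net
    shallow_bound = 0.5 * k**(k + 1) - 1  # lower bound for <= k hidden layers
    print(f"k={k}: deep size {deep_size:>3}, "
          f"any net with <= {k} hidden layers needs >= {shallow_bound:g} nodes")
```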

    Finally, we construct a family of R n R piecewise linear functions for n 2 (also smoothly parameterized), whose number of affine pieces scales exponentially with the dimension n at any fixed size and depth. To the best of our knowledge, such a construction with exponential dependence on n has not been achieved by previous families of ``hard'' functions in the neural nets literature. This construction utilizes the theory of zonotopes from polyhedral theory.
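    The sketch below gives a rough numerical feel for counting affine pieces; it is not the paper's zonotope construction, and the random net and all names in it are our own. It samples the gradient of a small $\mathbb{R}^2 \to \mathbb{R}$ ReLU net on a grid and counts the distinct values, which lower-bounds the number of pieces, since the gradient is constant on each piece.

```python
import numpy as np

# Rough numerical illustration (not the paper's zonotope construction):
# count distinct gradients of a random R^2 -> R one-hidden-layer ReLU net
# f(x) = a^T max(0, W x + b) over a grid. The gradient is constant on each
# affine piece, so the number of distinct values lower-bounds the piece count.
rng = np.random.default_rng(1)
W = rng.standard_normal((8, 2))  # 8 hidden units over a 2-dimensional input
b = rng.standard_normal(8)
a = rng.standard_normal(8)

def gradient(x):
    active = (W @ x + b > 0).astype(float)       # activation pattern at x
    return tuple(np.round((a * active) @ W, 6))  # gradient of f on this piece

grid = np.linspace(-3.0, 3.0, 200)
pieces = {gradient(np.array([u, v])) for u in grid for v in grid}
print("distinct affine pieces found:", len(pieces))
```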
