Motivated by the resurgence of neural networks in being able to solve complex learning tasks, we undertake a study of high-depth networks using ReLU gates, which implement the function $x \mapsto \max\{0, x\}$. We try to understand the role of depth in such neural networks by showing size lower bounds against such network architectures in parameter regimes hitherto unexplored. In particular, we show the following two main results about neural nets computing Boolean functions of input dimension $n$:
1. We use the method of random restrictions to show an almost linear, $\Omega\left(\epsilon\, 2^{(1-\delta)} n^{1-\delta}\right)$, lower bound for completely weight unrestricted LTF-of-ReLU circuits to match the Andreev function on at least $\frac{1}{2} + \epsilon$ fraction of the inputs, for $\epsilon > \sqrt{2\frac{\log^{\frac{2}{2-\delta}}(n)}{n}}$ and any $\delta \in \left(0, \frac{1}{2}\right)$.
2. We use the method of sign-rank to show lower bounds exponential in the dimension $n$ for ReLU circuits ending in an LTF gate and of depth up to $O(n^{\xi})$ with $\xi < \frac{1}{8}$, with some restrictions on the weights in the bottom-most layer; all other weights in these circuits are kept unrestricted. This in turn also implies the same lower bounds for LTF circuits with the same architecture and the same weight restrictions on their bottom-most layer (the general shape of these architectures is sketched below).
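For concreteness, the depth-2 instance of the architectures above is an LTF gate applied to one hidden layer of ReLU gates. A minimal sketch of such a circuit on Boolean inputs, with the width $k$ and the weights $w_i$, $a_i$, $b_i$ being illustrative and not fixed by the statements above, is
\[
C(x) \;=\; \mathrm{sgn}\!\left( w_0 + \sum_{i=1}^{k} w_i \,\max\{0,\, \langle a_i, x\rangle + b_i\} \right), \qquad x \in \{0,1\}^n,
\]
where the outer $\mathrm{sgn}$ is the LTF gate and each $\max\{0,\cdot\}$ is a ReLU gate; the deeper circuits in item 2 stack further ReLU layers below the final LTF gate.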