Over the years a range of positive algorithmic results have been obtained for learning various classes of monotone Boolean functions from uniformly distributed random examples. Prior to our work, however, the only negative result for learning monotone functions in this model was an information-theoretic lower bound showing that certain superpolynomial-size monotone circuits cannot be learned to accuracy ½ + ω(log n)/√n (Blum, Burch, and Langford, FOCS '98). This is in contrast with the situation for non-monotone functions, where a wide range of cryptographic hardness results establish that various “simple” classes of polynomial-size circuits are not learnable by polynomial-time algorithms.
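To fix the notion of accuracy used throughout (this is the standard definition in the uniform-distribution model, stated here for the reader's convenience): a learner achieves accuracy ½ + γ on a target function f if it outputs a hypothesis h satisfying
\[
\Pr_{x \sim \{0,1\}^n}\bigl[h(x) = f(x)\bigr] \;\ge\; \frac{1}{2} + \gamma,
\]
where x is drawn from the uniform distribution over {0,1}ⁿ. Roughly speaking, the information-theoretic bound above says that with only polynomially many random examples no learner, however powerful computationally, can guarantee γ = ω(log n)/√n for that class.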
In this paper we establish cryptographic hardness results for learning various “simple” classes of monotone circuits, thus giving a computational analogue of the information-theoretic hardness results of Blum et al. mentioned above. Some of our results show the cryptographic hardness of learning polynomial-size monotone circuits to accuracy only slightly greater than ½ + 1/√n; this accuracy bound is close to optimal by the known positive results of Blum et al. Other results show that, under a plausible cryptographic hardness assumption, a class of constant-depth, sub-polynomial-size circuits computing monotone functions is hard to learn; this result is close to optimal in terms of the circuit size parameter by known positive results as well (Servedio, Information and Computation '04). Our main tool is a complexity-theoretic approach to hardness amplification via the noise sensitivity of monotone functions, an approach pioneered by O'Donnell (JCSS '04).
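For the reader's reference (this is the standard definition of the notion named above; the abstract itself does not spell it out): the noise sensitivity of a Boolean function f : {0,1}ⁿ → {0,1} at noise rate δ is
\[
\mathrm{NS}_\delta(f) \;=\; \Pr_{x,\,y}\bigl[f(x) \neq f(y)\bigr],
\]
where x is uniform over {0,1}ⁿ and y is obtained from x by flipping each coordinate independently with probability δ. In O'Donnell's framework, combining functions with high noise sensitivity are used to amplify mild hardness of a base function into hardness of predicting the combined function at accuracy close to ½.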