Abstract:
Continuing previous studies, we present further results about the behavior of small abstract networks during
supervised learning. In particular, we show that constraints on the complexity that a network is permitted to assume
during learning reduce its learning success in ways that depend on the nature of the imposed limitation. Moreover, we
show that relaxing the criterion by which changes to the network structure are accepted during learning leads to a
dramatic improvement in learning performance. The non-monotonicity of network complexity during learning, which
remains unchanged in both scenarios, is related to a similar feature in ε-machine complexity.