The ability to generalize (to abstract regularities from our experiences and apply them to new ones) is fundamental to human cognition and to our ability to adapt flexibly to changing situations. However, the generalization abilities of children and adults are far from perfect: there are many clear demonstrations of failures to generalize in situations that would seem to invite generalization. People appear to require extensive experience with a domain before they generalize well, and they generalize best in relatively concrete, familiar situations. In this paper, we argue that people's successes and failures in generalization are well characterized by neural network models. Networks of neurons connected by synaptic weights are naturally predisposed to encode information in a highly specific fashion that does not support generalization, a point that critics of such models have seized upon. With sufficient experience and appropriate architectural properties, however, such models can develop abstract representations that support good generalization. Implications for the neural bases and development of generalization abilities are discussed.
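As a concrete illustration of this claim (a minimal sketch of our own, not a model from the paper), the code below trains a small backpropagation network to output an "item" code regardless of an accompanying "context" code, and then tests it on item-context pairs never seen during training. All task details, layer sizes, and names (e.g., train_and_test, N_ITEM) are illustrative assumptions, chosen only to show how generalization to novel combinations depends on the breadth of training experience.

```python
# Illustrative sketch (assumed task, not from the paper): a feed-forward
# network must reproduce an 8-way item code while ignoring an 8-way context
# code. Generalization is measured on item-context pairs held out of training.
import numpy as np

rng = np.random.default_rng(0)
N_ITEM, N_CTX, N_HID = 8, 8, 16

def one_hot(i, n):
    v = np.zeros(n)
    v[i] = 1.0
    return v

# All 64 item-context pairs: input = [item; context], target = item.
pairs = [(i, c) for i in range(N_ITEM) for c in range(N_CTX)]
X = np.array([np.concatenate([one_hot(i, N_ITEM), one_hot(c, N_CTX)])
              for i, c in pairs])
Y = np.array([one_hot(i, N_ITEM) for i, c in pairs])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_and_test(train_frac, epochs=2000, lr=0.5):
    """Train on a random fraction of pairs; return accuracy on held-out pairs."""
    idx = rng.permutation(len(pairs))
    n_train = int(train_frac * len(pairs))
    tr, te = idx[:n_train], idx[n_train:]
    W1 = rng.normal(0, 0.5, (X.shape[1], N_HID))
    W2 = rng.normal(0, 0.5, (N_HID, N_ITEM))
    for _ in range(epochs):
        h = sigmoid(X[tr] @ W1)      # hidden representation
        out = sigmoid(h @ W2)        # output layer
        err = out - Y[tr]            # cross-entropy-style delta for sigmoid output
        W2 -= lr * h.T @ err / len(tr)
        W1 -= lr * X[tr].T @ ((err @ W2.T) * h * (1 - h)) / len(tr)
    out = sigmoid(sigmoid(X[te] @ W1) @ W2)
    return np.mean(out.argmax(1) == Y[te].argmax(1))

# Broader training experience should yield better generalization to novel pairs.
for frac in (0.25, 0.5, 0.75):
    print(f"train fraction {frac:.2f}: test accuracy {train_and_test(frac):.2f}")
```

When only a small fraction of the pairs is seen in training, the network tends to tie each item to the particular contexts it experienced, and some items may not be sampled at all; as coverage grows, the hidden layer can come to encode the item independently of context, so accuracy on novel pairs should typically rise, in line with the claim that generalization improves with experience.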