Abstract

Humans exhibit a wide range of adaptive and robust dynamic motion behaviors that remain unmatched by computational autonomous control systems for real-time behavior generation in cluttered environments. Recent work suggests that this performance is enabled by task structure learning and ecological cognition in the form of perceptual guidance. This work first describes an embedded agent-environment perceptual control model, then uses it to investigate experimental vehicle guidance behavior. Results show that constraint transitions within the embedded model indicate changes in an agent's perceptual guidance control mode. These mode transitions divide perception and behavior data into elemental segments of interaction, revealing the structure of how humans organize their perceptions and actions, as well as the specific perceptual guidance relationships that generate motion.