Abstract: Although humans can direct their attention to visual targets with or without eye movements, it remains unclear how different brain mechanisms control visual attention and eye movements, whether jointly or separately. Here, we measured MEG and fMRI data during covert and overt visual pursuit tasks and estimated cortical currents using our previously developed extra-dipole hierarchical Bayesian method. We then predicted the time series of target positions and velocities from the estimated cortical currents in each task using a sparse machine-learning algorithm. The predicted target positions and velocities showed high temporal correlations with the actual target kinematics. Additionally, we investigated the generalization ability of the predictive models across three conditions: control, covert pursuit, and overt pursuit. When the training and testing data came from the same task, reconstruction accuracy was highest for the overt task, followed by the covert and control tasks, in that order. When the training and testing data came from different tasks, the ordering of accuracies was reversed. These results are well explained by the assumption that the predictive models combine three computational brain functions: visual information processing, maintenance of attention, and eye-movement control. Our results indicate that separate subsets of neurons in the same cortical regions control visual attention and eye movements differently.
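The abstract does not name the specific sparse algorithm or data dimensions, so the following is only a minimal sketch of the within-task versus cross-task decoding analysis it describes, using synthetic data and scikit-learn's Lasso as a hypothetical stand-in for the paper's sparse machine-learning method. All array shapes, task names, and the `make_task` helper are assumptions for illustration.

```python
# Sketch: decode a target-position time series from "cortical currents" with a
# sparse linear model, then compare within-task and cross-task temporal
# correlations.  Lasso is a stand-in; the paper's actual algorithm is not
# specified in the abstract.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_samples, n_sources = 600, 200  # time points x current sources (assumed sizes)

def make_task(weights, noise=1.0):
    """Synthetic cortical currents X and target position y for one task."""
    t = np.linspace(0, 4 * np.pi, n_samples)
    y = np.sin(t)  # smooth pursuit-target position
    X = np.outer(y, weights) + noise * rng.standard_normal((n_samples, n_sources))
    return X, y

# Each condition shares some informative sources but also has condition-specific
# ones, mimicking partially overlapping neural subsets (purely illustrative).
shared = rng.standard_normal(n_sources) * (rng.random(n_sources) < 0.05)
tasks = {
    name: make_task(shared + rng.standard_normal(n_sources) * (rng.random(n_sources) < 0.05))
    for name in ("control", "covert", "overt")
}

def temporal_correlation(model, X, y):
    """Correlation between the predicted and actual target time series."""
    return np.corrcoef(model.predict(X), y)[0, 1]

# Train a sparse decoder on each task, then test it on every task.
for train_name, (X_tr, y_tr) in tasks.items():
    model = Lasso(alpha=0.05).fit(X_tr, y_tr)
    for test_name, (X_te, y_te) in tasks.items():
        r = temporal_correlation(model, X_te, y_te)
        print(f"train={train_name:7s} test={test_name:7s} r={r:+.2f}")
```

In this toy setup the diagonal (same-task) correlations exceed the off-diagonal (cross-task) ones, which is the kind of contrast the generalization analysis in the abstract evaluates; the real study's ordering of accuracies depends on the measured data, not on this synthetic construction.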