Abstract: The neural encoding of visual features in primary visual cortex (V1) is well understood, with strong correlates to low-level perception, making V1 a strong candidate for vision restoration through neuroprosthetics. However, the functional relevance of neural dynamics evoked by external stimulation imposed directly at the cortical level is poorly understood. Furthermore, protocols for designing cortical stimulation patterns that would induce a naturalistic perception of the encoded stimuli have not yet been established. Here, we demonstrate a proof of concept by addressing these issues with a computational model combining (1) a large-scale spiking neural network model of cat V1 and (2) a virtual prosthetic system that transcodes the visual input into tailored light-stimulation patterns, driving the optogenetically modified cortical tissue in situ. Using such virtual experiments, we design a protocol for translating simple Fourier-contrasted stimuli (gratings) into activation patterns of the optogenetic matrix stimulator. We then quantify the relationship between the spatial configuration of the imposed light pattern and the induced cortical activity. Our simulations in the absence of visual drive (simulated blindness) show that optogenetic stimulation with a spatial resolution as low as 100