Abstract: Ikegami (2021) proposes to reinstate excessiveness as a critical subject in cognitive science. In this commentary, I discuss what this excessiveness is, drawing on the philosophies of Merleau-Ponty and Henry. When we construct representations from bodily information, it is impossible to represent all the information we receive, and what leaks out in this process is what should be called “excessiveness”. In this context, the possibility emerges of handling phenomena that are difficult to verbalize, such as qualia and affectivity. Furthermore, in relation to deep learning, I discuss the contraction and expansion of representations. An autoencoder is a technique for compressing data while preserving the original information; it can be divided into an encoder, which compresses the input into latent variables, and a decoder (generator), which restores the original information by appropriately expanding those latent variables. Generative deep learning is an extension of the generator, in that it can reconstruct information from appropriate latent variables. I consider how such generative deep learning can regenerate excessiveness from a contracted representation, and discuss the relationship between the generator and cognitive projection.
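To make the contraction-and-expansion picture concrete, the following is a minimal sketch of a linear autoencoder in plain NumPy. Everything here is an illustrative assumption rather than part of the commentary: the toy data, the dimensions (8 observed, 2 latent), and the plain gradient-descent training loop. The point is only to show the encoder (contraction into latent variables), the decoder/generator (expansion back to the observed space), and the residual, i.e. the information the compressed representation fails to capture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples of 8-dimensional points that actually lie
# on a 2-dimensional subspace, so a 2-unit bottleneck can capture them.
Z_true = rng.normal(size=(200, 2))
basis = rng.normal(size=(2, 8))
X = Z_true @ basis

# Encoder contracts 8 -> 2; decoder (generator) expands 2 -> 8.
W_enc = rng.normal(scale=0.1, size=(8, 2))
W_dec = rng.normal(scale=0.1, size=(2, 8))

def forward(X):
    z = X @ W_enc       # latent variables (contracted representation)
    X_hat = z @ W_dec   # reconstruction (expanded representation)
    return z, X_hat

# Train with plain gradient descent on mean squared reconstruction error.
lr = 0.01
for step in range(3000):
    z, X_hat = forward(X)
    err = X_hat - X  # residual: what the compression "leaks"
    grad_dec = z.T @ err / len(X)
    grad_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

_, X_hat = forward(X)
mse = float(np.mean((X - X_hat) ** 2))
print(f"final reconstruction MSE: {mse:.4f}")
```

Because the toy data are exactly rank 2, the residual can be driven close to zero here; with real bodily or sensory data the bottleneck is genuinely lossy, and the leaked residual is the computational analogue of the excessiveness discussed above. Sampling novel latent variables and passing them through `W_dec` alone is the generator-only use of the model that generative deep learning extends.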