Abstract:
Visual and spatial representations seem to play a significant role
in analogy. In this paper, we describe a specific role of visuospatial
representations: two situations that appear dissimilar non-visuospatially
may appear similar when re-represented visuospatially. We present
a computational theory of analogy in which visuospatial re-representation
enables analogical transfer in cases where there are ontological
mismatches in the non-visuospatial representation. Realizing this
theory in a computational model with specific data structures and
algorithms first requires a model of visuospatial analogy, i.e., a
model of analogy that uses only visuospatial knowledge.
We have developed a computer program, called Galatea, which
implements a core part of this model: it transfers problem-solving
procedures between analogs that contain only visual and spatial
knowledge. In this paper, we describe both how Galatea accomplishes
analogical transfer using only visuospatial knowledge and how it
might be extended to support visuospatial re-representation of situations
represented non-visuospatially.