Analogical Retrieval via Intermediate Features: The Goldilocks Hypothesis
Analogical reasoning has been implicated in many important cognitive processes, such as learning, categorization, planning, and understanding natural language. A full understanding of these processes therefore requires a better understanding of how people reason by analogy. Analogical reasoning is thought to occur in at least three stages: retrieval of a source description from memory upon presentation of a target description, mapping of the source description onto the target description, and transfer of relationships from the source description to the target description. Here we examine the first stage, the retrieval of relevant sources from long-term memory for use in analogical reasoning. Specifically, we ask: what can people retrieve from long-term memory, and how do they do it?

Psychological experiments show that subjects display two sorts of retrieval patterns when reasoning by analogy: a novice pattern and an expert pattern. Novice-like subjects are more likely to recall superficially similar descriptions that are not helpful for reasoning by analogy. Conversely, expert-like subjects are more likely to recall structurally related descriptions that are useful for further analogical reasoning. Previous computational models of the retrieval stage have attempted to model only novice-like retrieval. We introduce a computational model that demonstrates both novice-like and expert-like retrieval with the same mechanism. The parameter varied to produce these two types of retrieval is the average size of the features used to identify matches in memory.
We find, in agreement with an intuition from the work of Ullman and co-workers on the use of features in visual classification (Ullman, Vidal-Naquet, & Sali, 2002), that features of intermediate size are most useful for analogical retrieval. We conducted two computational experiments on our own dataset of fourteen formally described stories, which showed that our model gives the strongest analogical retrieval, and is most expert-like, when it uses features that are on average of intermediate size. A third computational experiment on the Karla the Hawk dataset showed a modest effect consistent with our predictions. Because our model and Ullman's work both rely on intermediate-sized features to perform recognition-like tasks, we take both as supporting what we call the Goldilocks hypothesis: that, on average, the features most useful for recognition are neither too small nor too large, neither too simple nor too complex, but rather of intermediate size and complexity.
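The retrieval mechanism described above can be illustrated with a toy sketch. This is not the model from the paper: the story representation (assertions as predicate-plus-arguments tuples), the connectivity rule (two assertions are linked if they share an argument), and the example stories are all invented here for demonstration. A "feature" is a connected set of assertions of a given size; a target retrieves whichever stored description shares the most features with it. With size-1 features, isolated surface attributes dominate (novice-like retrieval); with size-2 features, only connected relational structure can match (expert-like retrieval).

```python
from itertools import combinations

def feats(story, size):
    """Features of a story: sets of `size` predicates whose assertions are
    pairwise connected (share at least one argument) within the story."""
    out = set()
    for combo in combinations(story, size):
        args = [set(assertion[1:]) for assertion in combo]
        # all() over an empty iterable is True, so size-1 features always pass
        if all(x & y for x, y in combinations(args, 2)):
            out.add(frozenset(assertion[0] for assertion in combo))
    return out

def retrieve(target, memory, size):
    """Return the name of the stored story sharing the most features
    of the given size with the target."""
    target_feats = feats(target, size)
    return max(memory, key=lambda name: len(target_feats & feats(memory[name], size)))

# Hypothetical target: three surface attributes plus a small relational chain.
target = [
    ("bird", "karla"), ("brown", "karla"), ("feathered", "karla"),
    ("attack", "hunter", "karla"), ("offer", "karla", "feathers"),
]
memory = {
    # Shares three predicates with the target, but on unrelated entities.
    "surface": [("bird", "tweety"), ("brown", "rock"), ("feathered", "pillow")],
    # Shares the connected attack/offer structure of the target.
    "structural": [("attack", "farmer", "eagle"), ("offer", "eagle", "plumage")],
}

print(retrieve(target, memory, size=1))  # novice-like: "surface"
print(retrieve(target, memory, size=2))  # expert-like: "structural"
```

With size 1, the superficially similar story wins (three shared predicates versus two); with size 2, its attributes sit on disconnected entities and yield no connected features, so the structurally related story is retrieved instead. The sketch deliberately omits the graded, intermediate feature sizes on which the paper's Goldilocks result depends.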