This chapter’s discussion of object perception emphasizes that vision and touch are intrinsically complementary. One is well suited to convey an object’s geometry and the other, its material.
For a quite different example of intermodal cooperation, consider a study of object-shape perception by Newell et al. (2001). In the learning phase of the experiment, observers studied a set of Lego shapes, each fixed in a particular orientation, using vision or touch (see image below). In the recognition phase, previously studied shapes were presented, oriented either as they had been during the study phase or rotated 180 degrees, along with unstudied shapes. Again using vision or touch, the subjects indicated whether each tested shape was old (previously studied) or new.

Not surprisingly, people who studied and were tested in the same modality found it easier to recognize old shapes when the orientation did not change. But now for the surprise: for people who studied and were tested in different modalities—vision changed to touch, or vice versa—recognition was actually better when the studied shapes were rotated 180 degrees during the recognition phase!

Why should this be? It turns out that the natural way to hold an object is thumbs in, fingers out, so that we learn more haptically about the back of an object than its front. When an object we have viewed is then explored by touch, rotating it means the more effectively explored back surface matches what had been accessible to vision. Thus, it would appear that the two modalities most efficiently process an object's shape through a natural collaboration: vision for the front, touch for the back. You can try to replicate this experiment with a friend if you own a Lego set!
Newell, F. N., Ernst, M. O., Tjan, B. S., and Bülthoff, H. H. (2001). Viewpoint dependence in visual and haptic object recognition. Psychological Science, 12, 37–42.