Motion as a cue for viewpoint invariance

Tamara L. Watson, Alan Johnston, Harold C.H. Hill, Nikolaus F. Troje

Research output: Contribution to journal › Article › peer-review

28 Citations (Scopus)

Abstract

Natural face and head movements were mapped onto a computer-rendered three-dimensional average of 100 laser-scanned heads in order to isolate movement information from spatial cues and nonrigid movements from rigid head movements (Hill & Johnston, 2001). Experiment 1 used a delayed match-to-sample paradigm to investigate whether subjects could recognize, from a rotated view, facial motion that had previously been presented at a full-face view. Experiment 2 compared recognition for views that were either between or outside initially presented views. Experiment 3 compared discrimination at full-face, three-quarter, and profile views after learning at each of these views. A significant face inversion effect in Experiments 1 and 2 indicated that subjects were using face-based information, rather than more general motion or temporal cues, for optimal performance. In each experiment, recognition performance only ever declined with a change in viewpoint between sample and test views when rigid motion was present. Nonrigid, face-based motion appears to be encoded in a viewpoint-invariant, object-centred manner, whereas rigid head movement is encoded in a more view-specific manner.

Original language: English
Pages (from-to): 1291-1308
Number of pages: 18
Journal: Visual Cognition
Volume: 12
Issue number: 7
DOIs
Publication status: Published - Oct 2005
Externally published: Yes

