Abstract
Neuroimaging studies investigating human object recognition have primarily focused on a relatively small number of object categories, in particular, faces, bodies, scenes, and vehicles. More recent studies have taken a broader focus, investigating hypothesized dichotomies, for example, animate versus inanimate, and continuous feature dimensions, such as biological similarity. These studies have typically used stimuli that are clearly identifiable as animate or inanimate, neglecting objects that may not fit into this dichotomy. We generated a novel stimulus set including standard objects and objects that blur the animate-inanimate dichotomy, for example, robots and toy animals. We used MEG time-series decoding to study the brain's emerging representation of these objects. Our analysis examined contemporary models of object coding such as dichotomous animacy, as well as several new higher order models that take into account an object's capacity for agency (i.e. its ability to move voluntarily) and capacity to experience the world. We show that early (0–200 ms) responses are predicted by stimulus shape, assessed using a retinotopic model and shape similarity computed from human judgments. Thereafter, higher order models of agency/experience provided a better explanation of the brain's representation of the stimuli. Strikingly, a model of similarity to humans provided the best account of the brain's representation after an initial perceptual processing phase. Our findings provide evidence for a new dimension of object coding in the human brain: one with a "human-centric" focus.
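The abstract summarizes the analysis pipeline (time-resolved MEG decoding followed by comparison with candidate representational models) without spelling out the steps. Below is a minimal, hypothetical sketch of that general style of analysis, not the authors' code: it uses simulated data, a linear discriminant classifier, and Spearman correlation for model comparison, none of which are specified in the abstract.

```python
# Hypothetical sketch of time-resolved decoding + model-RDM comparison.
# Simulated data stands in for MEG recordings; classifier choice (LDA)
# and model-fit statistic (Spearman) are illustrative assumptions.
import numpy as np
from scipy.stats import spearmanr
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_sensors, n_times, n_conditions = 200, 64, 30, 10
X = rng.standard_normal((n_trials, n_sensors, n_times))  # trials x sensors x time
y = rng.integers(0, n_conditions, n_trials)              # stimulus label per trial

# Decode each pair of conditions at every time point; pairwise decoding
# accuracy serves as a neural dissimilarity measure (higher accuracy =
# more distinct neural patterns), giving a time-resolved neural RDM.
pairs = [(i, j) for i in range(n_conditions) for j in range(i + 1, n_conditions)]
neural_rdm = np.zeros((len(pairs), n_times))
for t in range(n_times):
    for p, (i, j) in enumerate(pairs):
        mask = np.isin(y, (i, j))
        neural_rdm[p, t] = cross_val_score(
            LinearDiscriminantAnalysis(), X[mask, :, t], y[mask], cv=5
        ).mean()

# Compare the neural RDM at each time point with a candidate model RDM
# (e.g., animacy, agency, or human-similarity); here a random placeholder.
model_rdm = rng.random(len(pairs))
fit = [spearmanr(neural_rdm[:, t], model_rdm).correlation for t in range(n_times)]
```

Plotting `fit` over time would show when, if ever, a given model begins to explain the neural dissimilarity structure, which is the logic behind the early shape-model fits versus later agency/experience and human-similarity fits described above.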
Original language | English
---|---
Article number | 117139
Number of pages | 11
Journal | NeuroImage
Volume | 221
DOIs |
Publication status | Published - 2020
Open Access - Access Right Statement
This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).

Keywords
- brain
- decision making
- robotics
- time-series analysis
- visual perception