Description
The neural basis of object recognition and semantic knowledge has been the focus of a large body of research, but given the high dimensionality of object space, it is challenging to develop an overarching theory of how the brain organises object knowledge. To help understand how the brain allows us to recognise, categorise, and represent objects and object categories, there is growing interest in using large-scale image databases for neuroimaging experiments. Traditional image databases are based on manually selected object concepts, often with a single image per concept. In contrast, ‘big data’ stimulus sets typically consist of images that vary substantially in quality and may be biased in content. To address this issue, recent work developed THINGS: a large stimulus set of 1,854 object concepts and 26,107 associated images (https://things-initiative.org/). In the current paper, we present THINGS-EEG, a dataset containing human electroencephalography responses from 50 subjects to all concepts and 22,248 images in the THINGS stimulus set. The THINGS-EEG dataset provides neuroimaging recordings for a systematic collection of objects and concepts and can therefore support a wide array of research into visual object processing in the human brain.
This repository contains the code that was used to perform the analyses described in this paper:
Grootswagers, T., Zhou, I., Robinson, A.K. et al. Human EEG recordings for 1,854 concepts presented in rapid serial visual presentation streams. Sci Data 9, 3 (2022). https://doi.org/10.1038/s41597-021-01102-7
- THINGS images and concept descriptions obtained from: https://osf.io/jum2f (see also: https://things-initiative.org/)
- The raw data, preprocessed data, and grand-average RDMs are publicly available on Openneuro: https://openneuro.org/datasets/ds003825
- RDMs for single subjects are publicly available on figshare: https://doi.org/10.6084/m9.figshare.14721282 (note: OSF sometimes incorrectly lists this as private)
See the README in the code folder for instructions on how to reproduce the figures in the paper.
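As an illustration of what the representational dissimilarity matrices (RDMs) mentioned above contain, here is a minimal sketch that builds an RDM from simulated response patterns. The array shapes and the dissimilarity measure (1 minus Pearson correlation) are assumptions for demonstration only, not the dataset's actual preprocessing pipeline.

```python
import numpy as np

# Hypothetical illustration: compute a representational dissimilarity
# matrix (RDM) from simulated per-concept response patterns.
rng = np.random.default_rng(0)
n_concepts = 10   # the real dataset covers 1,854 concepts
n_channels = 64   # simulated sensor dimension (assumption)

# One response pattern per concept (e.g. averaged over repetitions)
patterns = rng.standard_normal((n_concepts, n_channels))

# RDM entry (i, j) = 1 - correlation between the patterns for
# concepts i and j; higher values mean more dissimilar responses.
rdm = 1.0 - np.corrcoef(patterns)

# RDMs are symmetric with a zero diagonal
assert np.allclose(rdm, rdm.T)
assert np.allclose(np.diag(rdm), 0.0)
print(rdm.shape)  # (10, 10)
```

The published grand-average and single-subject RDMs on OpenNeuro and figshare follow the same square concept-by-concept layout, so a matrix like this can stand in for them when prototyping analysis code.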
| Date made available | 4 Jun 2021 |
|---|---|
| Publisher | Western Sydney University |
Research output
- Grootswagers, T., Zhou, I., Robinson, A. K., Hebart, M. N. & Carlson, T. A. Human EEG recordings for 1,854 concepts presented in rapid serial visual presentation streams. Scientific Data 9, 3 (Dec 2022). Open access; 53 citations (Scopus).