forked from hed-standard/hed-examples
Merge pull request hed-standard#333 from hed-standard/develop
Added a curly braces example dataset
Showing 188 changed files with 3,889 additions and 15 deletions.
@@ -0,0 +1,6 @@ | ||
1.0.0 2021-05-11
- First release

Revision history for Face Recognition experiment by Wakeman-Henson

version 1.0 - April 2021
- Initial release of EEG data in this experiment for HED education purposes
@@ -0,0 +1,24 @@ | ||
**Introduction:**
This dataset consists of the MEEG (sMRI+MEG+EEG) portion of the multi-subject, multi-modal face processing dataset (ds000117), originally acquired and shared by Daniel Wakeman and Richard Henson (https://pubmed.ncbi.nlm.nih.gov/25977808/). The data have been repackaged in EEGLAB format and have undergone minimal preprocessing as well as reorganization and annotation of the dataset events.

**Overview of the experiment:**
Eighteen participants completed two recording sessions spaced three months apart: one session recorded fMRI, and the other simultaneously recorded MEG and EEG data. During each session, participants performed the same simple perceptual task, responding to presented photographs of famous, unfamiliar, and scrambled faces by pressing one of two keyboard keys to indicate a subjective yes or no decision about the relative spatial symmetry of the viewed face. Famous faces were feature-matched to unfamiliar faces; half the faces were female. The two sessions (MEEG, fMRI) had different organizations of event timing and presentation because of the technological requirements of the respective imaging modalities. Each individual face was presented twice during the session. For half of the presented faces, the second presentation followed immediately after the first; for the other half, the second presentation was delayed by 5-15 face presentations.
**Preprocessing:**
The preprocessing, performed by the `wh_extracteeg_BIDS.m` script located in the code directory, includes the following steps:
* Ignore MRI data except for sMRI.
* Extract EEG channels from the MEG/EEG .fif data.
* Add fiducials.
* Rename EOG and EKG channels.
* Extract events from the event channel.
* Remove spurious events 5, 6, 7, 13, 14, 15, 17, 18, and 19.
* Remove spurious event 24 for subject 3, run 4.
* Rename events, taking into account the button assigned to each subject.
* Correct event latencies (events have a shift of 34 ms).
* Resample the data to 250 Hz (this dataset is used in an EEGLAB tutorial and needs to be lightweight).
* Remove the event fields `urevent` and `duration`.
* Save in EEGLAB .set format.
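The latency-correction and resampling steps above come down to simple sample arithmetic. A minimal Python sketch, not the authors' `wh_extracteeg_BIDS.m` (which is MATLAB); the function names and the 1100 Hz example acquisition rate are assumptions:

```python
# Hypothetical sketch of the latency steps above; not the authors' code.
# A 34 ms shift is converted to samples at the recording rate, and event
# sample indices are rescaled when the data are resampled to 250 Hz.

def shift_latencies(latencies, sfreq, shift_ms=34.0):
    """Shift event latencies (sample indices) by a fixed offset in ms."""
    return [lat + shift_ms / 1000.0 * sfreq for lat in latencies]

def rescale_latencies(latencies, sfreq_old, sfreq_new=250.0):
    """Rescale sample indices to match resampled data."""
    return [lat * sfreq_new / sfreq_old for lat in latencies]

# Illustrative event latencies at an assumed 1100 Hz acquisition rate.
events = [1100.0, 2200.0, 3300.0]
corrected = shift_latencies(events, sfreq=1100.0)    # 34 ms = 37.4 samples
resampled = rescale_latencies(corrected, 1100.0, 250.0)
```

Whether the 34 ms offset is added or subtracted depends on the direction of the shift, which the step list does not state; the sketch only shows the unit conversion.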

**Data curators:**
Ramon Martinez, Dung Truong, Scott Makeig, Arnaud Delorme (UCSD, La Jolla, CA, USA), Kay Robbins (UTSA, San Antonio, TX, USA)
@@ -0,0 +1,27 @@ | ||
{
    "Name": "Face processing MEEG dataset with HED annotation",
    "BIDSVersion": "1.8.0",
    "HEDVersion": "8.1.0",
    "License": "CC0",
    "Authors": [
        "Daniel G. Wakeman",
        "Richard N. Henson",
        "Dung Truong (curation)",
        "Kay Robbins (curation)",
        "Scott Makeig (curation)",
        "Arno Delorme (curation)"
    ],
    "ReferencesAndLinks": [
        "Wakeman, D., Henson, R. (2015). A multi-subject, multi-modal human neuroimaging dataset. Sci Data 2, 150001. https://doi.org/10.1038/sdata.2015.1",
        "Robbins, K., Truong, D., Appelhoff, S., Delorme, A., & Makeig, S. (2021). Capturing the nature of events and event context using Hierarchical Event Descriptors (HED). NeuroImage 245, 118766. https://doi.org/10.1016/j.neuroimage.2021.118766",
        "Robbins, K., Truong, D., Jones, A., Callanan, I., & Makeig, S. (2021). Building FAIR functionality: Annotating events in time series data using Hierarchical Event Descriptors (HED). Neuroinformatics. https://doi.org/10.1007/s12021-021-09537-4"
    ],
    "Funding": [
        "Experiment was supported by the UK Medical Research Council (MC_A060_5PR10) and Elekta Ltd.",
        "Curation was supported by: NIH R01 EB023297-03, NIH R01 NS047293-14, NIH R24 MH120037-01, and R01 MH126700-01A1."
    ],
    "DatasetDOI": "10.18112/openneuro.ds003645.v2.0.2",
    "EthicsApprovals": [
        "The study was approved by Cambridge University Psychological Ethics Committee. Written informed consent was obtained from each participant prior to and following each phase of the experiment."
    ]
}
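The sidecar above is plain JSON, so its required BIDS fields can be checked with the standard library alone. A minimal sketch; the inlined string is an abbreviated copy of the file above:

```python
import json

# Abbreviated copy of the dataset_description.json shown above.
desc = json.loads("""
{
  "Name": "Face processing MEEG dataset with HED annotation",
  "BIDSVersion": "1.8.0",
  "HEDVersion": "8.1.0",
  "License": "CC0"
}
""")

# BIDS requires "Name" and "BIDSVersion"; "HEDVersion" tells HED tools
# which schema version to validate the annotations against.
required = ("Name", "BIDSVersion")
missing = [field for field in required if field not in desc]
assert not missing, f"dataset_description.json is missing: {missing}"
print(desc["HEDVersion"])  # schema version used for HED validation
```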
@@ -0,0 +1,17 @@ | ||
{
    "participant_id": {
        "LongName": "Participant identifier",
        "Description": "Unique subject identifier"
    },
    "gender": {
        "Description": "Sex of the subject",
        "Levels": {
            "M": "male",
            "F": "female"
        }
    },
    "age": {
        "Description": "Age of the subject",
        "Units": "years"
    }
}
@@ -0,0 +1,3 @@ | ||
participant_id	age	gender
sub-002	31	M
sub-003	25	M
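Values in a BIDS .tsv column should match the Levels declared in its .json sidecar. A minimal cross-check sketch using only the standard library; the inlined strings copy the two participants files above:

```python
import csv
import io
import json

# Inlined copies of participants.tsv and the gender entry of participants.json.
tsv_text = "participant_id\tage\tgender\nsub-002\t31\tM\nsub-003\t25\tM\n"
sidecar = json.loads('{"gender": {"Levels": {"M": "male", "F": "female"}}}')

# Parse the tab-separated table and collect the declared categorical levels.
rows = list(csv.DictReader(io.StringIO(tsv_text), delimiter="\t"))
allowed = set(sidecar["gender"]["Levels"])          # {"M", "F"}

# Flag any participant whose gender value is not a declared level.
invalid = [r["participant_id"] for r in rows if r["gender"] not in allowed]
assert not invalid, f"undeclared gender levels for: {invalid}"
```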
Empty file.