

Standard Dataset

EEG Dataset for Genuine and Acted Emotional Expressions

Citation Author(s):
Fares Al-Shargie (American University of Sharjah)
Meera Alex (American University of Sharjah)
Hasan Al-Nashash (American University of Sharjah)
Usman Tariq (American University of Sharjah)
Hasan Mir (American University of Sharjah)
Submitted by:
Fares Al-Shargie
DOI:
10.21227/haar-1q96

Abstract

We present one of the first studies that attempts to differentiate between genuine and acted emotional expressions using EEG data, along with the first EEG dataset containing recordings of subjects producing genuine and acted emotional expressions. Our experimental paradigm targets the classification of three expression classes: genuine smiles, fake/acted smiles, and a neutral expression. For full details, please refer to our paper entitled:

Discrimination of Genuine and Acted Emotional Expressions using EEG Signal and Machine Learning

Instructions:

Please read the Readme file before using this dataset; it contains a full description of the data. The experimental protocol is summarized below.

In this study, the emotion-eliciting stimuli comprised 246 still images obtained from two public online image datasets: the Open Affective Standardized Image Set (OASIS) (Kurdi et al., 2017) and the Geneva Affective Picture Database (GAPED) (Dan-Glauser & Scherer, 2011). Three types of image sets were chosen: 116 funny images, 70 neutral images, and 60 presentations of one plain image. The funny pictures showed human and animal babies, the neutral pictures showed nature scenes, and the plain image mainly depicted a plain book. All images were selected based on their valence-arousal ratings. Images were presented on a 19-inch LCD screen placed 50 cm from the participant. The presentation order was semi-randomized, with the constraint that no image belonged to the same category as the previously rated one. Three different event markers were sent to mark the epochs/trials of each type of image stimulus.

Participants were instructed to pose an acted smile whenever the target (plain) image appeared on the screen and to press the corresponding key ('Q'); this was intended to invoke an acted/fake emotion. In addition, participants were asked to press 'P' or 'N' once, and only when they felt their emotional state had changed, to indicate a genuine smile ('P') or a neutral expression ('N'), respectively. There were 246 trials in total. Each trial consisted of a one-second drift check followed by two seconds of the emotion-eliciting image. The experiment lasted about 13 minutes, and the number of completed trials varied between subjects depending on their response speed. All trials were labeled according to the participant's responses, and only the successful trials that induced genuine and acted smiles were considered in the analysis.
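As an illustration of the semi-randomized presentation described above, here is a minimal Python sketch that draws a 246-trial order in which no two consecutive trials share a category and then walks through the trials. This is not the stimulus-delivery code used in the study: the marker codes, the printed trigger call, and the keyboard handling are hypothetical placeholders standing in for the actual acquisition setup.

import random

# Category counts from the protocol above; the marker codes are assumed
# placeholder values (the actual codes are documented in the Readme).
CATEGORIES = {"funny": 116, "neutral": 70, "plain": 60}
MARKERS = {"funny": 1, "neutral": 2, "plain": 3}

def semi_randomized_order(categories, rng=random):
    """Draw a random trial order in which no two consecutive trials
    share a category (weighted sampling with a feasibility guard)."""
    remaining = dict(categories)
    order, prev = [], None
    while any(remaining.values()):
        n_left = sum(remaining.values())
        # A category filling more than half of the remaining slots must
        # be placed now, or no valid completion would exist.
        forced = [c for c, n in remaining.items() if 2 * n > n_left]
        if forced:
            if forced[0] == prev:
                raise RuntimeError("no valid ordering from this state")
            cat = forced[0]
        else:
            choices = [c for c, n in remaining.items() if n > 0 and c != prev]
            cat = rng.choices(choices, weights=[remaining[c] for c in choices])[0]
        order.append(cat)
        remaining[cat] -= 1
        prev = cat
    return order

if __name__ == "__main__":
    order = semi_randomized_order(CATEGORIES)
    assert len(order) == 246
    assert all(a != b for a, b in zip(order, order[1:]))
    for cat in order:
        print(f"marker {MARKERS[cat]}")  # stand-in for the EEG trigger call
        # ... 1 s drift check, 2 s image display, then poll the keyboard for
        # 'Q' (acted smile), 'P' (genuine smile), or 'N' (neutral expression)

The feasibility guard matters here because the funny category alone accounts for 116 of the 246 trials, so a naive shuffle-and-reject approach would almost never produce a sequence free of same-category repeats.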

Please cite as follows: Alex, M., Tariq, U., Al-Shargie, F., Mir, H., and Al-Nashash, H. Discrimination of Genuine and Acted Emotional Expressions Using EEG Signal and Machine Learning. IEEE Access, 2020.