EEG Dataset for Genuine and Acted Emotional Expressions

Citation Author(s):
M. Alex (American University of Sharjah)
U. Tariq (American University of Sharjah)
F. Al-Shargie (American University of Sharjah)
H. Mir (American University of Sharjah)
H. Al-Nashash (American University of Sharjah)
Submitted by:
Fares Al-shargie
Last updated:
Tue, 05/17/2022 - 22:17


We present one of the first studies attempting to differentiate between genuine and acted emotional expressions using EEG data, together with the first EEG dataset containing recordings of subjects producing genuine and acted emotional expressions. Our experimental paradigm targets the classification of smiles: genuine smiles, fake/acted smiles, and neutral expressions. For full details, please refer to our paper entitled:

Discrimination of Genuine and Acted Emotional Expressions using EEG Signal and Machine Learning


Please read the Readme file before using this dataset; it contains the full description of the data. In addition, the experimental protocol is summarized below.

In this study, the emotion-eliciting stimuli comprised 246 still images obtained from two public online image datasets: the Open Affective Standardized Image Set (OASIS) [26], an open-access set of 900 color images with normative ratings of valence and arousal, and the Geneva Affective Picture Database (GAPED) [27], a database of 730 pictures created to increase the availability of visual emotion stimuli. Three types of image sets were chosen for this study: 116 funny images, 70 neutral images, and 60 plain images. The funny pictures involved human and animal babies, the neutral pictures included nature scenes, and the plain images mainly depicted a plain book. All images used in this study were selected based on the valence-arousal scale.

Images were presented on a 19-inch LCD screen kept 50 cm away from the participant. The presentation order was semi-randomized, with the constraint that no currently viewed picture belonged to the same category as the previously rated one. Three different event markers were sent to mark the epochs/trials of each type of image stimulus. Participants were instructed to pose an acted smile once the target image (plain image) appeared on the screen and to press the corresponding keyboard key ('Q'); this was done to invoke an acted/fake emotion in the subject. In addition, participants were asked to press 'P' or 'N' once, and only when they felt their feeling had changed, to report a genuine smile (by pressing 'P') or a neutral expression (by pressing 'N'), respectively.

There were a total of 246 trials in this experiment. Each trial had a one-second drift check followed by two seconds of an emotion-stimulating image. The entire experiment lasted about 13 minutes, and the number of trials varied between subjects depending on their response speed. All trials were labeled according to the participant's response, and only successful trials that induced genuine and acted smiles were considered for the analysis.
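The semi-randomized presentation order described above (no two consecutive images from the same category) can be sketched with a simple greedy shuffle. This is an illustration only, not the dataset's actual presentation code; the function name and structure are assumptions, while the category counts (116 funny, 70 neutral, 60 plain) come from the protocol.

```python
import random

def semi_randomized_order(items):
    """items: list of (category, image_id) pairs.

    Returns an ordering in which no two consecutive items share a
    category. Greedy strategy: always draw from the most numerous
    remaining category other than the one just shown, breaking ties
    randomly, so the largest pool can never force a repeat later.
    Illustrative sketch only; not the authors' stimulus code.
    """
    pools = {}
    for cat, img in items:
        pools.setdefault(cat, []).append(img)
    for pool in pools.values():
        random.shuffle(pool)

    order, last = [], None
    while any(pools.values()):
        # Candidate categories: non-empty and different from the last one.
        candidates = [c for c, p in pools.items() if p and c != last]
        if not candidates:
            raise ValueError("no valid ordering from this state")
        most = max(len(pools[c]) for c in candidates)
        cat = random.choice([c for c in candidates if len(pools[c]) == most])
        order.append((cat, pools[cat].pop()))
        last = cat
    return order

# Category counts from the protocol: 116 funny, 70 neutral, 60 plain.
stimuli = ([("funny", i) for i in range(116)]
           + [("neutral", i) for i in range(70)]
           + [("plain", i) for i in range(60)])
sequence = semi_randomized_order(stimuli)
```

Picking from the largest remaining pool is what makes the ordering feasible here: with 116 funny images out of 246 (below half), a plain rejection-sampled shuffle would almost never satisfy the constraint, but the greedy draw always succeeds.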


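Since trials are labeled by the participant's keypress ('Q' acted smile, 'P' genuine smile, 'N' neutral) and each stimulus lasts two seconds, epoch extraction from the continuous recording can be sketched as below. This is a hedged illustration, not the authors' analysis code: the sampling rate, channel layout, and exact marker timing are placeholders (see the Readme for the actual values), and the sketch assumes each event marker falls at stimulus onset.

```python
# Mapping from keypress markers to trial labels, as described in the protocol.
EVENT_LABELS = {"Q": "acted_smile", "P": "genuine_smile", "N": "neutral"}

def extract_epochs(signal, events, sfreq, stim_dur=2.0):
    """Cut labeled 2-second epochs from a continuous multichannel recording.

    signal : list of channels, each a list of samples.
    events : list of (sample_index, key) pairs; the sample index is
             assumed to mark stimulus onset (after the 1 s drift check).
    sfreq  : sampling rate in Hz (a placeholder here; the dataset's
             actual rate is documented in the Readme).

    Returns a list of (label, window) pairs, keeping only trials with
    a recognized marker and a complete 2 s window. Illustrative only.
    """
    n_stim = int(stim_dur * sfreq)
    epochs = []
    for onset, key in events:
        if key not in EVENT_LABELS:
            continue  # ignore markers other than Q/P/N
        window = [ch[onset:onset + n_stim] for ch in signal]
        if all(len(w) == n_stim for w in window):  # drop truncated trials
            epochs.append((EVENT_LABELS[key], window))
    return epochs
```

In practice a dedicated EEG toolbox (e.g. MNE-Python's `mne.Epochs`) would replace this hand-rolled slicing; the sketch only shows how the three markers map trials to the genuine/acted/neutral classes.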
Please cite as follows:
Alex, M., Tariq, U., Al-Shargie, F., Mir, H. and Al-Nashash, H. Discrimination of Genuine and Acted Emotional Expressions Using EEG Signal and Machine Learning. IEEE Access, 2020.

Submitted by Fares Al-shargie on Wed, 09/30/2020 - 03:35


Submitted by Hendrik Karu on Wed, 09/08/2021 - 13:43

Hello, I can't download the database.

Submitted by Xin YAN on Mon, 12/20/2021 - 18:54

Hello, the dataset is not available. How can I get it?

Submitted by Gerald Lavergne on Sat, 09/09/2023 - 23:53

Dataset Files

    Files have not been uploaded for this dataset