This dataset contains facial expressions recorded from different sides. The top-level videos were shot with a Logitech C270 webcam, and the bottom-level ones with an LG G6 phone. All videos are continuous 480p shots taken from different angles. The dataset is meant to support facial expression recognition under varying angles and poses.
The README file contains some basic Python functions that can be used to read the dataset, along with the license for using the data.
There are videos at the base level and one at the top level. Each video's title contains the name of the facial expression, so the expression label can be parsed easily with a Python script.
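As a minimal sketch of that parsing step, the snippet below extracts an expression label from a video filename. The exact naming scheme is an assumption here (an expression name followed by an underscore-separated suffix, e.g. `happy_01.mp4`); adjust the split rule to match the actual titles in the dataset.

```python
import os

def parse_expression(video_path):
    """Extract the facial-expression label from a video filename.

    Assumes (hypothetically) that filenames look like 'happy_01.mp4',
    i.e. the expression name comes first, separated by an underscore.
    """
    stem = os.path.splitext(os.path.basename(video_path))[0]
    return stem.split("_")[0].lower()

# Label every .mp4 video in a (hypothetical) dataset directory:
# labels = {f: parse_expression(f)
#           for f in os.listdir("dataset/top") if f.endswith(".mp4")}
print(parse_expression("videos/happy_01.mp4"))  # -> happy
```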
A dataset of videos, recorded by an in-car camera, of drivers in an actual car with various facial characteristics (male and female, with and without glasses/sunglasses, different ethnicities) talking, singing, being silent, and yawning. It can be used primarily to develop and test algorithms and models for yawning detection, but also recognition and tracking of face and mouth. The videos are taken in natural and varying illumination conditions. The videos come in two sets, as described next:
All videos may be used for research. You may also display screenshots of some (not all) videos in your own publications. Please check the "Allow Researchers to use picture in their paper" column in the table to see whether you may use a screenshot of a particular video. If that column is "no" for a particular video, you are NOT allowed to display pictures from that video in your publications.
The videos are unlabeled, since the yawning sequences are easy to identify visually. For more details, please see:
S. Abtahi, M. Omidyeganeh, S. Shirmohammadi, and B. Hariri, "YawDD: A Yawning Detection Dataset", Proc. ACM Multimedia Systems, Singapore, March 19-21, 2014, pp. 24-28. DOI: 10.1145/2557642.2563678