EEG recordings of ADHD and non-ADHD individuals during gameplay of a brain-controlled game, captured with an EMOTIV EEG headset. The dataset can be used to design and test methods for detecting individuals with ADHD.
For details, please see:
Alaa Eddin Alchalabi, S. Shirmohammadi, A. N. Eddin and M. Elsharnouby, “FOCUS: Detecting ADHD Patients by an EEG-Based Serious Game”, IEEE Transactions on Instrumentation and Measurement, Vol. 67, No. 7, July 2018, pp. 1512-1520.
Images of various foods, taken with different cameras under different lighting conditions. The images can be used to design and test computer vision techniques that recognize foods and estimate their calories and nutrition.
Please note that, when fully visible, the human thumb in each image measures approximately 5 cm by 1.2 cm.
For more information, please see:
P. Pouladzadeh, A. Yassine, and S. Shirmohammadi, “FooDD: Food Detection Dataset for Calorie Measurement Using Food Images”, in New Trends in Image Analysis and Processing - ICIAP 2015 Workshops, V. Murino, E. Puppo, D. Sona, M. Cristani, and C. Sansone, Lecture Notes in Computer Science, Springer, Volume 9281, 2015, ISBN: 978-3-319-23221-8, pp 441-448. DOI: 10.1007/978-3-319-23222-5_54
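Since the thumb's real-world length is known (approximately 5 cm, per the note above), it can serve as a calibration reference for estimating the physical size of food items in the same image. The sketch below is a minimal illustration of that idea; the function names are hypothetical, and the thumb's pixel length would come from your own thumb-detection step.

```python
def pixels_per_cm(thumb_length_px, thumb_length_cm=5.0):
    """Derive the image scale (pixels per centimeter) from the thumb's
    known real-world length. The 5 cm default comes from the dataset
    description; the pixel measurement must be detected per image."""
    return thumb_length_px / thumb_length_cm

def estimate_size_cm(object_px, scale_px_per_cm):
    """Convert a pixel measurement to centimeters using that scale."""
    return object_px / scale_px_per_cm

# Example: a thumb detected as 250 px long gives 50 px/cm,
# so a food item spanning 400 px is about 8 cm across.
scale = pixels_per_cm(250)
print(estimate_size_cm(400, scale))  # 8.0
```

Because scale varies with camera distance and angle, this per-image calibration is what makes size (and hence portion) estimates comparable across photos taken with different cameras.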
A dataset of videos, recorded by an in-car camera, of drivers in an actual car talking, singing, being silent, and yawning. The drivers vary in characteristics: male and female, with and without glasses/sunglasses, and of different ethnicities. The dataset can be used primarily to develop and test algorithms and models for yawning detection, but also for recognition and tracking of the face and mouth. The videos were taken under natural and varying illumination conditions, and come in two sets, as described next:
You can use all videos for research. You can also display screenshots of some (not all) videos in your own publications. Please check the “Allow Researchers to use picture in their paper” column in the table to see whether you can use a screenshot of a particular video. If that column is “no” for a particular video, you are NOT allowed to display pictures from that specific video in your own publications.
The videos are unlabeled, since the yawning sequences are easy to identify visually. For more details, please see:
S. Abtahi, M. Omidyeganeh, S. Shirmohammadi, and B. Hariri, “YawDD: A Yawning Detection Dataset”, Proc. ACM Multimedia Systems, Singapore, March 19 -21 2014, pp. 24-28. DOI: 10.1145/2557642.2563678
This is an eye-tracking dataset of 84 computer game players who played the side-scrolling cloud game Somi. The game was streamed as video from the cloud to the player. The dataset consists of 135 raw videos (YUV) at 720p and 30 fps, with eye-tracking data for both the left and right eyes. Male and female players were asked to play the game in front of a remote eye-tracking device. For each player, we recorded gaze points, video frames of the gameplay, and mouse and keyboard commands.
- The AVI offset is the frame at which data gathering started.
- The 1st frame of each YUV file is the 901st frame of its corresponding AVI file.
- For detailed info and instructions, please see:
Hamed Ahmadi, Saman Zad Tootaghaj, Sajad Mowlaei, Mahmoud Reza Hashemi, and Shervin Shirmohammadi, “GSET Somi: A Game-Specific Eye Tracking Dataset for Somi”, Proc. ACM Multimedia Systems, Klagenfurt am Wörthersee, Austria, May 10-13 2016, 6 pages. DOI: 10.1145/2910017.2910616
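The notes above fix the alignment between the two video formats: the 1st YUV frame corresponds to the 901st AVI frame, i.e. a constant offset of 900 frames. A minimal sketch of that index mapping, plus seeking within a raw YUV file, follows; the 4:2:0 chroma subsampling used for the frame-size computation is an assumption about the raw format, so check it against the dataset's actual files.

```python
# Per the dataset notes, YUV frame 1 corresponds to AVI frame 901 (1-based),
# giving a constant offset of 900 frames.
YUV_TO_AVI_OFFSET = 900

def yuv_to_avi_frame(yuv_frame):
    """Map a 1-based YUV frame number to its 1-based AVI frame number."""
    return yuv_frame + YUV_TO_AVI_OFFSET

def avi_to_yuv_frame(avi_frame):
    """Inverse mapping; returns None for AVI frames before the YUV start."""
    yuv = avi_frame - YUV_TO_AVI_OFFSET
    return yuv if yuv >= 1 else None

# Assumption: planar YUV 4:2:0, so each 1280x720 frame occupies
# width * height * 3/2 bytes in the raw file.
FRAME_BYTES = 1280 * 720 * 3 // 2

def seek_offset(yuv_frame):
    """Byte offset of a 1-based frame within the raw YUV file."""
    return (yuv_frame - 1) * FRAME_BYTES
```

With this mapping, gaze samples logged against AVI frame numbers can be paired with the corresponding raw YUV frames for pixel-level analysis.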