Recently, the surface electromyogram (EMG) has been proposed as a novel biometric trait that addresses key limitations of current biometrics, such as vulnerability to spoofing and the difficulty of liveness detection. EMG signals possess a unique characteristic: they are inherently distinct across individuals (a biometric property), and they can be customized to realize codes or passwords of varying length (for example, by performing different gestures).


Recently, surface electromyography (sEMG) has emerged as a novel biometric authentication method. Since EMG system parameters, such as the feature extraction method and the number of channels, are known to affect system performance, it is important to investigate their effects on the performance of an sEMG-based biometric system in order to determine optimal system parameters.
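As an illustration of the feature extraction step, the sketch below computes a few time-domain features that are common in the sEMG literature (mean absolute value, root mean square, waveform length, zero crossings). This is a minimal MATLAB sketch under our own assumptions: the function name is hypothetical, emg is taken to be a samples-by-channels matrix, and the signal is assumed to be already band-pass filtered.

% Hypothetical helper: common sEMG time-domain features, one value per channel.
% emg is a [samples x channels] matrix, assumed already band-pass filtered.
function feats = emgTimeDomainFeatures(emg)
    mav  = mean(abs(emg), 1);                      % mean absolute value
    rmsv = sqrt(mean(emg.^2, 1));                  % root mean square
    wl   = sum(abs(diff(emg, 1, 1)), 1);           % waveform length
    zc   = sum(abs(diff(sign(emg), 1, 1)) > 1, 1); % approximate zero-crossing count
    feats = [mav, rmsv, wl, zc];                   % concatenate feature vectors
end

Varying the number of channels then simply changes the number of columns in emg, which makes it straightforward to compare different feature sets and channel counts on the same recordings.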


The Widar3.0 project is a large dataset designed for WiFi-based hand gesture recognition. The RF data are collected from commodity WiFi NICs in the form of Received Signal Strength Indicator (RSSI) and Channel State Information (CSI). The dataset consists of 258K instances of hand gestures, with a total duration of 8,620 minutes, drawn from 75 domains. In addition, two features derived from the raw RF signal are included: the Doppler Frequency Shift (DFS) and a new feature, the Body-coordinate Velocity Profile (BVP).
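For orientation, a DFS profile is typically obtained by time-frequency analysis of the complex CSI stream. The MATLAB sketch below illustrates one common approach (a short-time Fourier transform via spectrogram); the sampling rate, window sizes, and variable names are our assumptions, not specifics of the Widar3.0 processing pipeline.

% Hypothetical sketch: Doppler spectrogram from one complex CSI stream.
% csi is a complex column vector for a single subcarrier/antenna pair,
% assumed sampled at fs Hz (both are illustrative assumptions).
fs  = 1000;                          % assumed CSI sampling rate in Hz
csi = csi - mean(csi);               % suppress the static (zero-Doppler) path
[s, f, t] = spectrogram(csi, hann(256), 192, 256, fs, 'centered');
dfs = abs(s);                        % magnitude over (Doppler frequency, time)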

Instructions: 

Please refer to the README document.


The Holoscopic micro-gesture recognition (HoMG) database was recorded using a holoscopic 3D camera and contains 3 conventional gestures from 40 participants under different settings and conditions. The principle of holoscopic 3D (H3D) imaging mimics the fly's-eye technique, capturing a true 3D optical model of the scene using a microlens array. For the purpose of H3D micro-gesture recognition, the HoMG database has two subsets: the video subset has 960 videos and the image subset has 30635 images, and both cover three types of micro-gestures (classes).

Instructions: 

The Holoscopic micro-gesture recognition (HoMG) database consists of 3 hand gestures, Button, Dial and Slider, from 40 subjects of various ages, recorded under various settings that include the right and left hand and two recording distances.

For the video subset: there are 40 subjects, and each subject has 24 videos, covering the different settings and the three gestures. Each video was recorded at 25 frames per second, and the video lengths vary from a few seconds to 20 seconds. The whole dataset was divided into 3 parts: 20 subjects for the training set, 10 subjects for the development set, and another 10 subjects for the testing set.

For the image subset: video captures the motion information of a micro-gesture and is therefore well suited to micro-gesture recognition; from each video recording, a varying number of frames was selected as still micro-gesture images. The image resolution is 1920 by 1080. In total, 30635 images were selected. The whole dataset was split into three partitions: a training, a development, and a testing partition. There are 15237 images in the training subset from 20 participants, with 8364 at the close distance and 6853 at the far distance. There are 6956 images in the development subset from 10 participants, with 3077 at the close distance and 3879 at the far distance. There are 8442 images in the testing subset from 10 participants, with 3930 at the close distance and 4512 at the far distance.


The first bit of light is the gesture of being, on a massive screen of the black panorama. A small point of existence, a gesture of being. The universal appeal of gesture reaches far beyond the barriers of languages and planets. These are the micro-transactions of symbols and patterns that carry traces of the common ancestors of many civilizations. Gesture recognition is important for communication between computer systems and humans, and many studies on gesture recognition systems are currently underway.


This dataset contains the images used in the paper: M. E. Morocho Cayamcela and W. Lim, "Fine-tuning a pre-trained Convolutional Neural Network Model to translate American Sign Language in Real-time," 2019 International Conference on Computing, Networking and Communications (ICNC), Honolulu, HI, USA, 2019, pp. 100-104.

Instructions: 

The code is written in MATLAB. We used transfer learning with AlexNet and GoogLeNet as convolutional neural network (CNN) backbones.

In MATLAB, replace the directory path with your own. If you want to recognize other classes, simply add images of the different classes to labeled folders; a sketch of the workflow follows.
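Here is a minimal MATLAB transfer-learning sketch in the spirit described above, using the Deep Learning Toolbox. The directory path, split ratio, and training options are our assumptions and should be adapted; the authors' exact configuration is in the paper and the released code.

% Minimal transfer-learning sketch (assumed paths and options, not the
% authors' exact configuration). Requires the Deep Learning Toolbox and
% the AlexNet support package.
net  = alexnet;                                    % pre-trained backbone
imds = imageDatastore('path\to\asl\images', ...    % replace with your directory
    'IncludeSubfolders', true, 'LabelSource', 'foldernames');
[trainImds, valImds] = splitEachLabel(imds, 0.8, 'randomized');
numClasses = numel(categories(imds.Labels));

layers = net.Layers;
layers(end-2) = fullyConnectedLayer(numClasses);   % replace final FC layer (fc8)
layers(end)   = classificationLayer;               % fresh classification output

inputSize = net.Layers(1).InputSize(1:2);          % 227 x 227 for AlexNet
augTrain  = augmentedImageDatastore(inputSize, trainImds);
augVal    = augmentedImageDatastore(inputSize, valImds);

opts = trainingOptions('sgdm', ...
    'InitialLearnRate', 1e-4, ...
    'MaxEpochs', 10, ...
    'ValidationData', augVal);
trainedNet = trainNetwork(augTrain, layers, opts);

Swapping in GoogLeNet works the same way, except that GoogLeNet is a layer graph, so the final layers are replaced with replaceLayer on layerGraph(googlenet) rather than by indexing into net.Layers.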
