Recently, the surface electromyogram (EMG) has been proposed as a novel biometric trait to address some key limitations of current biometrics, such as spoofing and liveness detection. EMG signals possess a unique characteristic: they are inherently different across individuals (a biometric property), and they can be customized to realize codes or passwords of varying length (for example, by performing different gestures).


Recently, surface electromyography (sEMG) has emerged as a novel biometric authentication method. Since EMG system parameters, such as the feature extraction method and the number of channels, are known to affect system performance, it is important to investigate their effects on the performance of an sEMG-based biometric system in order to determine optimal system parameters.
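To illustrate the kind of system parameters mentioned above, here is a minimal sketch (Python/NumPy on synthetic data; the window length and the particular features are assumptions for illustration, not taken from any of these datasets) of two common per-channel sEMG features, root mean square (RMS) and mean absolute value (MAV):

```python
import numpy as np

def semg_features(window):
    """Compute per-channel RMS and MAV features for one analysis window.

    window : array of shape (n_samples, n_channels)
    returns: feature vector of shape (2 * n_channels,)
    """
    rms = np.sqrt(np.mean(window ** 2, axis=0))  # root mean square per channel
    mav = np.mean(np.abs(window), axis=0)        # mean absolute value per channel
    return np.concatenate([rms, mav])

# Synthetic example: one 200-sample window from a 4-channel recording
rng = np.random.default_rng(0)
window = rng.standard_normal((200, 4))
features = semg_features(window)
print(features.shape)  # (8,)
```

Varying the number of channels or swapping in other features changes the length and content of this vector, which is exactly the kind of design choice whose effect on recognition performance such studies evaluate.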


"The friction ridge pattern is a 3D structure which, in its natural state, is not deformed by contact with a surface''. Building upon this rather trivial observation, the present work constitutes a first solid step towards a paradigm shift in fingerprint recognition from its very foundations. We explore and evaluate the feasibility to move from current technology operating on 2D images of elastically deformed impressions of the ridge pattern, to a new generation of systems based on full-3D models of the natural nondeformed ridge pattern itself.


The present data release contains the data of 2 subjects of the 3D-FLARE DB.


These data are released as a sample of the complete database, as these 2 subjects gave their specific consent to the distribution of their 3D fingerprint samples.


The acquisition system and the database are described in the article:


[ART1] J. Galbally, L. Beslay and G. Böstrom, "FLARE: A Touchless Full-3D Fingerprint Recognition System Based on Laser Sensing", IEEE Access, vol. 8, pp. 145513-145534, 2020. DOI: 10.1109/ACCESS.2020.3014796.


We refer the reader to this article for any further details on the data.


This sample release contains the following folders:


- 1_rawData: it contains the 3D fingerprint samples as they were captured by the sensor described in [ART1], with no processing. This folder includes the same 3D fingerprints in two different formats:

* MATformat: 3D fingerprints in MATLAB format

* PLYformat: 3D fingerprints in PLY format


- 2_processedData: it contains the 3D fingerprint samples after the two initial processing steps carried out before using the samples for recognition purposes. These files are in MATLAB format. This folder includes:

* 2a_Segmented: 3D fingerprints after being segmented according to the process described in Sect. V of [ART1]

* 2b_Detached: 3D fingerprints after being detached according to the process described in Sect. VI of [ART1]
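For readers who want to inspect the PLY samples without MATLAB, here is a minimal sketch of an ASCII-PLY vertex reader in Python. It assumes a simple ASCII PLY file whose vertex properties start with x, y, z; binary PLY files (or files with a different layout) would need a dedicated library instead:

```python
def read_ascii_ply_vertices(path):
    """Read (x, y, z) vertex coordinates from a simple ASCII PLY file."""
    with open(path) as f:
        n_vertices = 0
        # Parse the header up to the 'end_header' line
        for line in f:
            line = line.strip()
            if line.startswith("element vertex"):
                n_vertices = int(line.split()[-1])
            elif line == "end_header":
                break
        # Read the vertex list; keep only the first three columns (x, y, z)
        vertices = []
        for _ in range(n_vertices):
            parts = next(f).split()
            vertices.append(tuple(float(v) for v in parts[:3]))
    return vertices
```

The returned list of 3D points can then be plotted or converted to an array for further processing.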


The naming convention of the files is as follows: XXXX_AAY_SZZ

XXXX: 4 digit identifier for the user in the database

AA: finger identifier; it can take the values LI (Left Index), LM (Left Middle), RI (Right Index), RM (Right Middle)

Y: sample number, with values 0 to 4

ZZ: acquisition speed; it can take the values 10, 30 or 50 mm/sec
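The naming convention above is easy to parse programmatically. A small Python sketch follows; the regular expression simply mirrors the XXXX_AAY_SZZ pattern described here and is an illustration, not an official tool distributed with the database:

```python
import re

# XXXX_AAY_SZZ: 4-digit user id, finger id + sample number, 'S' + speed
FILENAME_RE = re.compile(r"^(\d{4})_(LI|LM|RI|RM)([0-4])_S(10|30|50)$")

def parse_sample_name(name):
    """Split a 3D-FLARE sample file name into its identifier fields."""
    m = FILENAME_RE.match(name)
    if m is None:
        raise ValueError(f"not a valid 3D-FLARE sample name: {name}")
    user, finger, sample, speed = m.groups()
    return {
        "user": user,              # 4-digit user identifier
        "finger": finger,          # LI, LM, RI or RM
        "sample": int(sample),     # sample number, 0 to 4
        "speed_mm_s": int(speed),  # acquisition speed in mm/sec
    }

print(parse_sample_name("0001_LI0_S30"))
# {'user': '0001', 'finger': 'LI', 'sample': 0, 'speed_mm_s': 30}
```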


With the data files we also provide a series of example MATLAB scripts to visualise the 3D fingerprints:






Depending on the MATLAB version you are running, we cannot guarantee that these scripts will function correctly.


Two videos of the 3D fingerprint scanner can be viewed at:


This is a collection of paired thermal and visible ear images. The images in this dataset were acquired under different illumination conditions ranging from 2 to 10700 lux. There are a total of 2200 images: 1100 thermal images and their 1100 corresponding visible images. The images consist of left- and right-ear images of 55 subjects, captured under 5 illumination conditions for each subject. This dataset was developed for the study of illumination-invariant ear recognition. In addition, it can also be useful for thermal and visible image fusion research.
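For the image-fusion use case mentioned above, a minimal sketch of pixel-wise weighted-average fusion of a thermal/visible pair is shown below (Python/NumPy on synthetic stand-in arrays; it assumes the two images are already registered and of equal size, which real pairs would require as a preprocessing step):

```python
import numpy as np

def fuse_weighted(visible, thermal, alpha=0.5):
    """Pixel-wise weighted average of two registered, same-size grayscale images."""
    visible = visible.astype(np.float64)
    thermal = thermal.astype(np.float64)
    fused = alpha * visible + (1.0 - alpha) * thermal
    return np.clip(fused, 0, 255).astype(np.uint8)

# Synthetic stand-ins for a registered visible/thermal pair
rng = np.random.default_rng(1)
vis = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
thr = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
fused = fuse_weighted(vis, thr, alpha=0.6)
print(fused.shape, fused.dtype)  # (64, 64) uint8
```

Weighted averaging is only the simplest baseline; fusion research typically compares it against multi-scale or learned methods.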



Any work made public, in whatever form, based directly or indirectly on any part of the DATABASE will include the following reference: 

Syed Zainal Ariffin, S. M. Z., Jamil, N., & Megat Abdul Rahman, P. N. (2016). DIAST Variability Illuminated Thermal and Visible Ear Images Datasets. In Proceedings of the 2016 Signal Processing: Algorithms, Architectures, Arrangements, and Applications (SPA). DOI: 10.1109/SPA.2016.7763611



iSignDB: A biometric signature database created using a smartphone

Suraiya Jabin, Sumaiya Ahmad, Sarthak Mishra, and Farhana Javed Zareen

Department of Computer Science, Jamia Millia Islamia, New Delhi-110025, India

This is a database of biometric signatures recorded using sensors present in a smartphone. The iSignDB dataset was created to implement novel anti-spoof biometric signature authentication for smartphone users.


This dataset is composed of 4-dimensional time-series files representing the movements of all 38 participants during a novel control task. In the ‘’ file, this can be extended up to 6 dimensions via the ‘fields_included’ variable. Two folders are included: one ready for preprocessing (‘subjects raw’) and the other already preprocessed (‘subjects preprocessed’).


We provide a large benchmark dataset consisting of approximately 3.5 million keystroke events; 57.1 million data points each for the accelerometer and gyroscope; and 1.7 million data points for swipes. Data were collected between April 2017 and June 2017 after the required IRB approval. The dataset contains data from 117 participants, each in a session lasting between 2 and 2.5 hours, performing multiple activities such as typing (free and fixed text), gait (walking, upstairs and downstairs) and swiping while using a desktop, a phone and a tablet.
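As an example of how such keystroke events are typically used, here is a minimal sketch computing key hold times (key-down to key-up durations) from a list of timestamped events. The event layout here is purely illustrative and not the actual BB-MAS file format, which is documented in *BBMAS_README.pdf*:

```python
def key_hold_times(events):
    """Compute hold time (up - down) per key press.

    events: list of (timestamp_ms, key, action) tuples, action in {"down", "up"},
            assumed chronologically ordered.
    """
    pending = {}   # key -> timestamp of its unmatched key-down event
    holds = []
    for t, key, action in events:
        if action == "down":
            pending[key] = t
        elif action == "up" and key in pending:
            holds.append((key, t - pending.pop(key)))
    return holds

events = [
    (100, "a", "down"), (180, "a", "up"),
    (200, "b", "down"), (290, "b", "up"),
]
print(key_hold_times(events))  # [('a', 80), ('b', 90)]
```

Hold times, together with inter-key latencies, are among the standard typing features used for user identification on such data.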


A detailed description of all data files is provided in the *BBMAS_README.pdf* file distributed along with the dataset.



Please cite:

[1] Amith K. Belman and Vir V. Phoha. 2020. Discriminative Power of Typing Features on Desktops, Tablets, and Phones for User Identification. ACM Trans. Priv. Secur. Volume 23, Issue 1, Article 4 (February 2020), 36 pages. DOI:

[2] Amith K. Belman, Li Wang, S. S. Iyengar, Pawel Sniatala, Robert Wright, Robert Dora, Jacob Baldwin, Zhanpeng Jin and Vir V. Phoha, "Insights from BB-MAS -- A Large Dataset for Typing, Gait and Swipes of the Same Person on Desktop, Tablet and Phone", arXiv:1912.02736, 2019.

[3] Amith K. Belman, Li Wang, Sundaraja S. Iyengar, Pawel Sniatala, Robert Wright, Robert Dora, Jacob Baldwin, Zhanpeng Jin, Vir V. Phoha, "SU-AIS BB-MAS (Syracuse University and Assured Information Security - Behavioral Biometrics Multi-device and multi-Activity data from Same users) Dataset ", IEEE Dataport, 2019. [Online]. Available: