"The friction ridge pattern is a 3D structure which, in its natural state, is not deformed by contact with a surface''. Building upon this rather trivial observation, the present work constitutes a first solid step towards a paradigm shift in fingerprint recognition from its very foundations. We explore and evaluate the feasibility to move from current technology operating on 2D images of elastically deformed impressions of the ridge pattern, to a new generation of systems based on full-3D models of the natural nondeformed ridge pattern itself.

Instructions: 

The present data release contains the data of 2 subjects from the 3D-FLARE DB.

 

These data are released as a sample of the complete database, since these 2 subjects gave their specific consent to the distribution of their 3D fingerprint samples.

 

The acquisition system and the database are described in the article:

 

[ART1] J. Galbally, L. Beslay and G. Böstrom, "FLARE: A Touchless Full-3D Fingerprint Recognition System Based on Laser Sensing", IEEE ACCESS, vol. 8, pp. 145513-145534, 2020. 

DOI: 10.1109/ACCESS.2020.3014796.

 

We refer the reader to this article for any further details on the data.

 

This sample release contains the following folders:

 

- 1_rawData: contains the 3D fingerprint samples as captured by the sensor described in [ART1], with no processing (a minimal Python loading sketch is given after this folder list). This folder includes the same 3D fingerprints in two different formats:

* MATformat: 3D fingerprints in MATLAB format

* PLYformat: 3D fingerprints in PLY format

 

- 2_processedData: contains the 3D fingerprint samples after the two initial processing steps carried out before the samples are used for recognition. These files are in MATLAB format. This folder includes:

* 2a_Segmented: 3D fingerprints after being segmented according to the process described in Sect. V of [ART1]

* 2b_Detached: 3D fingerprints after being detached according to the process described in Sect. VI of [ART1]
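
For a quick look at the samples, the following minimal Python sketch loads one raw sample in each format. It assumes the scipy and open3d packages; the file names are placeholders, and the variable names stored inside the .mat files are not documented here, so they should be listed before use.

import numpy as np
import scipy.io as sio
import open3d as o3d

# MATLAB format: loadmat returns a dict of the variables saved by the sensor software.
mat = sio.loadmat("1_rawData/MATformat/example_sample.mat")  # placeholder file name
print([k for k in mat if not k.startswith("__")])            # list the stored variables

# PLY format: read the point cloud and convert it to an Nx3 array of coordinates.
cloud = o3d.io.read_point_cloud("1_rawData/PLYformat/example_sample.ply")  # placeholder
xyz = np.asarray(cloud.points)
print(xyz.shape)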

 

The naming convention of the files is as follows: XXXX_AAY_SZZ

XXXX: 4 digit identifier for the user in the database

AA: finger identifier; possible values: LI (Left Index), LM (Left Middle), RI (Right Index), RM (Right Middle)

Y: sample number, with values 0 to 4

ZZ: acquisition speed; possible values: 10, 30 or 50 mm/sec (a small file-name parsing sketch is given below)
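
To illustrate the convention, a small Python sketch that parses such a file name (the example name is made up):

import re

# XXXX = user, AA = finger, Y = sample number, S + ZZ = acquisition speed.
pattern = re.compile(r"^(?P<user>\d{4})_(?P<finger>LI|LM|RI|RM)(?P<sample>[0-4])_S(?P<speed>10|30|50)$")

m = pattern.match("0001_LI2_S30")  # made-up example name
if m:
    print(m.groupdict())  # {'user': '0001', 'finger': 'LI', 'sample': '2', 'speed': '30'}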

 

With the data files we also provide a series of example MATLAB scripts to visualise the 3D fingerprints:

readplytomatrix.m

showFilesPLYformat.m

showFilesRaw.m

showFilesSegmented.m

 

The correct functioning of these scripts cannot be guaranteed for every MATLAB version.
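
As these scripts target MATLAB, the following minimal Python sketch can serve as an alternative for quickly inspecting a PLY sample (it assumes the open3d package; the file path is a placeholder):

import open3d as o3d

cloud = o3d.io.read_point_cloud("1_rawData/PLYformat/example_sample.ply")  # placeholder path
o3d.visualization.draw_geometries([cloud])  # opens an interactive 3D viewer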

 

Two videos of the 3D fingerprint scanner can be viewed at:

https://www.youtube.com/watch?v=XfbumvzXxnU

https://www.youtube.com/watch?v=2U6fqPIWzMg&t


This dataset was created using the Galvanic Skin Response (GSR) sensor and Electrocardiogram (ECG) sensor of the MySignals Healthcare Toolkit. The MySignals toolkit consists of an Arduino Uno board and different sensor ports. The sensors were connected to the ports of the hardware kit, which was controlled through the Arduino SDK.


The dataset is generated by fusing three publicly available datasets: the COVID-19 CXR image dataset (https://github.com/ieee8023/covid-chestxray-dataset), the Radiological Society of North America (RSNA) pneumonia detection challenge (https://www.kaggle.com/c/rsna-pneumonia-detection-challenge), and the U.S. National Library of Medicine (USNLM) Montgomery County set, NLM(MC) (http


We chose 8 publicly available CT volumes of COVID-19 positive patients, available from https://doi.org/10.5281/zenodo.3757476, and used 3D Slicer to generate volumetric annotations of 512×512 dimension for 5 lung lobes, namely the right upper lobe, right middle lobe, right lower lobe, left upper lobe and left lower lobe. These annotations were validated by a radiologist with over 15 years of experience.

Instructions: 

CT volumes can be downloaded from https://doi.org/10.5281/zenodo.3757476

Volumetric annotations for 5 lobe segments namely right upper lobe, right middle lobe, right lower lobe, left upper lobe and left lower lobe are saved as segments 1 to 5 respectively. 

For scans with prefix coronacases_00x, the corresponding annotations are uploaded with the suffix lobes.

The scans and annotations measure 512×512 per slice and are in .nii format.
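
A minimal Python sketch for loading one scan and its lobe annotation (it assumes the nibabel package; the file names below are placeholders that follow the naming described above):

import nibabel as nib
import numpy as np

ct = nib.load("coronacases_001.nii").get_fdata()           # CT volume (placeholder name)
lobes = nib.load("coronacases_001_lobes.nii").get_fdata()  # annotation with suffix "lobes"

# Segments 1-5: right upper, right middle, right lower, left upper, left lower lobe.
names = ["right upper", "right middle", "right lower", "left upper", "left lower"]
for label, name in enumerate(names, start=1):
    print(name, "lobe voxels:", int(np.sum(lobes == label)))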


The dataset consists of echo data collected at the Matre research station (61°N) of the Institute of Marine Research (IMR), Norway. Six square sea cages (12 × 12 m and 15 m depth; approximately 2000 m^3) were used. The fish's vertical distribution and density were observed continuously by a PC-based echo integration system (CageEye MK IV, software version 1.1.1., CageEye AS, Steinkjer, Norway) connected to an upward facing transducer which multiplexes between 50 kHz (42° acoustic beam angle) and 200 kHz (14° beam angle).

Instructions: 

The six cages (1-6) are named 15.1-15.6, respectively. There are some header columns indicating date and time, which can be removed. Depth runs along the x-axis in the .csv files, so the data need to be rotated for a proper visualization; a minimal Python sketch following steps 2-5 is given after the list below.

1. Unzip data to a folder

2. Import data using pandas (python) or equivalent.

3. Remove the first header columns.

4. Log scale the data.

5. Rotate the data.
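
A minimal Python sketch of steps 2-5 (the file name and the number of leading date/time columns are assumptions; inspect the header of your copy first):

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("15.1.csv")              # one of the files 15.1-15.6 (placeholder name)
echo = df.iloc[:, 2:].to_numpy(float)     # 3. drop the leading date/time columns (count may differ)
echo = np.log10(echo + 1e-12)             # 4. log scale; the small offset avoids log(0)
echo = echo.T                             # 5. rotate so depth runs along the vertical axis

plt.imshow(echo, aspect="auto", origin="lower")
plt.xlabel("time sample")
plt.ylabel("depth sample")
plt.show()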


The current maturity of autonomous underwater vehicles (AUVs) has made their deployment practical and cost-effective, such that many scientific, industrial and military applications now include AUV operations. However, the logistical difficulties and high costs of operating at sea are still critical limiting factors in further technology development, the benchmarking of new techniques and the reproducibility of research results. To overcome this problem, we present a freely available dataset suitable for testing control, navigation, sensor-processing algorithms and other tasks.

Instructions: 

This repository contains the AURORA dataset, a multi-sensor dataset for robotic ocean exploration.

It is accompanied by the report "AURORA, A multi sensor dataset for robotic ocean exploration", by Marco Bernardi, Brett Hosking, Chiara Petrioli, Brian J. Bett, Daniel Jones, Veerle Huvenne, Rachel Marlow, Maaten Furlong, Steve McPhail and Andrea Munafo.

Exemplar python code is provided at https://github.com/noc-mars/aurora.

 

The dataset provided in this repository includes data collected during cruise James Cook 125 (JC125) of the National Oceanography Centre, using the Autonomous Underwater Vehicle Autosub 6000. It is composed of two AUV missions: M86 and M87.

  • M86 contains a sample of multi-beam echosounder data in .all format. It also contains CTD and navigation data in .csv format.

  • M87 contains a sample of the camera and side-scan sonar data. The camera data contain 8 of the 45320 images of the original dataset and are provided in .raw format (pixels are ordered in a Bayer pattern); each image is 2448x2048 pixels (a decoding sketch is given after this list). The side-scan sonar folder contains a one-ping sample of side-scan data provided in .xtf format.

  • The AUV navigation file is provided as part of the data available in each mission in .csv form.
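
For the M87 camera samples, the following Python sketch decodes one .raw frame. It assumes 8-bit pixels and a BG-ordered Bayer layout, both of which are assumptions; the exemplar code at https://github.com/noc-mars/aurora should be treated as authoritative.

import numpy as np
import cv2

WIDTH, HEIGHT = 2448, 2048
raw = np.fromfile("example_frame.raw", dtype=np.uint8)  # placeholder file name; dtype is an assumption
bayer = raw[:WIDTH * HEIGHT].reshape(HEIGHT, WIDTH)
bgr = cv2.cvtColor(bayer, cv2.COLOR_BayerBG2BGR)         # demosaic; change the pattern if colours look wrong
cv2.imwrite("example_frame.png", bgr)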

 

The dataset is approximately 200GB in size. A smaller sample is provided at https://github.com/noc-mars/aurora_dataset_sample and contains a sample of about 200MB.

Each individual group of data (CTD, multibeam, side scan sonar, vertical camera) for each mission (M86, M87) is also available to be downloaded as a separate file. 


A team of researchers from Qatar University, Doha, Qatar, and the University of Dhaka, Bangladesh, together with collaborators from Malaysia and medical doctors from Hamad Medical Corporation and Bangladesh, has created a database of chest X-ray images for Tuberculosis (TB) positive cases along with normal images. The current release contains 3500 TB images and 3500 normal images.


This dataset contains five mainstream stock market indices: XJO, DJI, IXIC, HSI, and N225, covering Sep. 2010 to Aug. 2020.


This dataset consists of flotation froth image sequences organised into 2386 folders, i.e., 2386 groups of froth sequence images. The data come from a mine in southern China, where a digital camera captured froth videos every 5 minutes between 2019.08.01 and 2019.12.01. Each group contains 12 consecutive frames sampled at 0.4 s intervals. The dataset is suitable for image processing, pattern recognition, and artificial intelligence research.


An overview of a real-world Chinese mathematics dataset, with duplicated and overly simple questions removed.

