This dataset contains paired thermal-visual images collected over 1.5 years from different locations in Chitrakoot and Prayagraj, India. The images can be broadly classified into greenery, urban, historical buildings, and crowd data.

The crowd data was collected from the Maha Kumbh Mela 2019 in Prayagraj, the largest religious fair in the world, which is held every 6 years.

 

Instructions: 

The images are classified according to the thermal imager used to capture them.

The SONEL thermal images are inside register_sonel.

The FLIR images are in register_flir and register_flir_old. There are 2 image zip files because FLIR thermal imagers reuse the image names after a certain limit.

The unregistered images are kept inside each base zip, in unreg folders.

 

The work associated with this database is a paper on thermal image colorization that details the registration method, the overall logic behind the creation of this database, the resizing factors, and the reason why there are unregistered images. It has been submitted to IEEE for consideration, and a preprint is currently available on arXiv.

We ask that you refer to this work when using this database for your work.

A Novel Registration & Colorization Technique for Thermal to Cross Domain Colorized Images 

 

If you find any problem with the data in this dataset (missing images, wrong names, superfluous Python files, etc.), please let us know and we will try to correct it.

 

The naming classification is as follows:

- FLIR

  - Registered images are named <name>.jpg and <name>_color.png, with the png file being the registered optical image

  - The raw files are named FLIR<#number>.jpg and FLIR<#number+1>.jpg, where the first file is the thermal image

  - The unreg_flir folder contains just the raw files

- SONEL

  - Registered images are named <name>.jpg and <name>_color.png, with the png file being the registered optical image

  - The raw files are named IRI_<name>.jpg and VIS_<name>.jpg, where the IRI file is the thermal image and the VIS file is the visual image

  - The unreg folder contains just the raw files
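As an illustration of the naming convention above, the following sketch pairs each registered thermal image with its registered optical counterpart by filename (the example filenames are hypothetical):

```python
import os

def pair_registered(filenames):
    """Pair <name>.jpg (thermal) with <name>_color.png (registered optical).

    Follows the naming convention described above; returns (thermal, optical)
    tuples for every stem that has both files.
    """
    names = set(filenames)
    pairs = []
    for f in sorted(names):
        stem, ext = os.path.splitext(f)
        if ext == ".jpg" and not stem.endswith("_color"):
            optical = stem + "_color.png"
            if optical in names:
                pairs.append((f, optical))
    return pairs

# Hypothetical directory listing:
files = ["scene01.jpg", "scene01_color.png", "FLIR0001.jpg"]
print(pair_registered(files))  # [('scene01.jpg', 'scene01_color.png')]
```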


The 2020 Data Fusion Contest, organized by the Image Analysis and Data Fusion Technical Committee (IADF TC) of the IEEE Geoscience and Remote Sensing Society (GRSS) and the Technical University of Munich, aims to promote research in large-scale land cover mapping based on weakly supervised learning from globally available multimodal satellite data. The task is to train a machine learning model for global land cover mapping based on weakly annotated samples.

Last Updated On: 
Mon, 01/25/2021 - 09:03

The dataset contains high-resolution microscopy images and confocal spectra of semiconducting single-wall carbon nanotubes. Carbon nanotubes allow down-scaling of electronic components to the nano-scale. There is initial evidence from Monte Carlo simulations that microscopy images with high digital resolution show energy information in the Bessel wave pattern that is visible in these images. In this dataset, images from Silicon and InGaAs cameras, as well as spectra, give valuable insights into the spectroscopic properties of these single-photon emitters.

Instructions: 

The dataset is generated from the measurement data using docker containers. The measured data is in the Igor Binary Waves format, which can be read with a custom reader and processed with various tools.

Processing will be applied automatically to various output formats using docker containers.

 

The current development status and updated dataset description can be found at

https://gitlab.com/ukos-git/nanotubes


The Multi-modal Exercises Dataset is a multi-sensor, multi-modal dataset implemented to benchmark Human Activity Recognition (HAR) and multi-modal fusion algorithms. Collection of this dataset was inspired by the need to recognise and evaluate the quality of exercise performance to support patients with Musculoskeletal Disorders (MSD). The MEx dataset contains data from 25 people recorded with four sensors: two accelerometers, a pressure mat and a depth camera.

Instructions: 

The MEx Multi-modal Exercise dataset contains data of 7 different physiotherapy exercises, performed by 30 subjects recorded with 2 accelerometers, a pressure mat and a depth camera.

Application

The dataset can be used for exercise recognition, exercise quality assessment and exercise counting, by developing algorithms for pre-processing, feature extraction, multi-modal sensor fusion, segmentation and classification.

 

Data collection method

At the beginning of the session, each subject was given a sheet of the 7 exercises with instructions. At the beginning of each exercise, the researcher demonstrated it to the subject; the subject then performed the exercise for a maximum of 60 seconds while being recorded with the four sensors. During the recording, the researcher did not give any advice, nor keep count or time to enforce a rhythm.

 

Sensors

Orbbec Astra Depth Camera

-       sampling frequency – 15Hz 

-       frame size – 240x320

 

Sensing Tex Pressure Mat

-       sampling frequency – 15Hz

-       frame size – 32x16

Axivity AX3 3-Axis Logging Accelerometer

-       sampling frequency – 100Hz

-       range – ±8g

 

Sensor Placement

All the exercises were performed lying down on the mat while the subject wore two accelerometers, on the wrist and the thigh. The depth camera was placed above the subject, facing downwards and recording an aerial view. The top of the depth camera frame was aligned with the top of the pressure mat frame and the subject's shoulders, such that the face is not included in the depth camera video.

 

Data folder

The MEx folder has four folders, one for each sensor. Inside each sensor folder, 30 folders can be found, one for each subject. In each subject folder, 8 files can be found: one per exercise, with 2 files for exercise 4 as it is performed on two sides. (Subject 22 has only 7 files, as they performed exercise 4 on only one side.) Each line in a data file corresponds to one timestamped sensor reading.

 

Attribute Information

 

The 4 columns in the act and acw files are organized as follows:

1 – timestamp

2 – x value

3 – y value

4 – z value

Min value = -8

Max value = +8

 

The 513 columns in the pm files are organized as follows:

1 - timestamp

2-513 – pressure mat data frame (32x16)

Min value – 0

Max value – 1
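The 32x16 pressure frame can be recovered from the 512 data columns of one pm row; a numpy sketch, assuming the values are stored in row-major order (an assumption about the file layout):

```python
import numpy as np

def pm_row_to_frame(row):
    """Split one pm row into (timestamp, 32x16 pressure frame).

    Row-major ordering of the 512 values is an assumption.
    """
    timestamp, values = row[0], np.asarray(row[1:], dtype=float)
    assert values.size == 32 * 16  # columns 2-513
    return timestamp, values.reshape(32, 16)

# Hypothetical row: timestamp followed by 512 pressure values.
ts, frame = pm_row_to_frame(["t0"] + [0.5] * 512)
print(frame.shape)  # (12, 16) per sensor row/column: (32, 16)
```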

 

The 193 columns in the dc files are organized as follows:

1 - timestamp

2-193 – depth camera data frame (12x16)

 

The dc data frame is scaled down from 240x320 to 12x16 using the OpenCV resize algorithm.

Min value – 0

Max value – 1


3D-videos database.


Changes in left ventricular (LV) aggregate cardiomyocyte orientation and deformation underlie cardiac function and dysfunction. As such, in vivo aggregate cardiomyocyte "myofiber" strain has mechanistic significance, but no established technique currently exists to measure in vivo cardiomyocyte strain.

 


NGM software for applied neurogoniometry. See our previous articles.

