The Holoscopic micro-gesture recognition (HoMG) database was recorded using a holoscopic 3D camera and contains 3 conventional gestures from 40 participants under different settings and conditions. The principle of holoscopic 3D (H3D) imaging mimics the fly's-eye technique, capturing a true 3D optical model of the scene using a microlens array. For the purpose of H3D micro-gesture recognition, the HoMG database has two subsets: the video subset has 960 videos and the image subset has 30635 images, and both cover three types of micro-gestures (classes).

Instructions: 

The Holoscopic micro-gesture recognition (HoMG) database consists of 3 hand gestures: Button, Dial and Slider, performed by 40 subjects of various ages under different settings, including right and left hands and two recording distances.

For the video subset: There are 40 subjects, and each subject has 24 videos, covering the different settings and the three gestures. Each video was recorded at 25 frames per second, and video lengths vary from a few seconds to 20 seconds. The whole dataset was divided into 3 parts: 20 subjects for the training set, 10 subjects for the development set and the remaining 10 subjects for the testing set.

For the image subset: Video captures the motion information of a micro-gesture and is well suited to micro-gesture recognition. From each video recording, a varying number of frames was selected as still micro-gesture images. The image resolution is 1920 by 1080. In total, 30635 images were selected. The whole dataset was split into three partitions: Training, Development, and Testing. There are 15237 images in the training subset of 20 participants, with 8364 at the close distance and 6853 at the far distance. There are 6956 images in the development subset of 10 participants, with 3077 at the close distance and 3879 at the far distance. There are 8442 images in the testing subset of 10 participants, with 3930 at the close distance and 4512 at the far distance.


One paramount challenge in multi-ion sensing arises from ion interference, which degrades the accuracy of sensor calibration. Machine learning models are proposed here to optimize such multivariate calibration. However, acquiring large experimental datasets is time- and resource-consuming in practice, necessitating new paradigms and efficient models for these data-limited frameworks. Therefore, a novel approach is presented in this work, where a multi-ion-sensing emulator is designed to explain the response of an ion-sensing array in a mixed-ion environment.


Predicting energy consumption is currently a key challenge for the energy industry as a whole. Predicting the consumption in a certain area is massively complicated due to sudden changes in the way that energy is being consumed and generated. However, this prediction is extremely necessary to minimise costs and to enable automatically adjusting the production of energy and better balancing the load between different energy sources.

Last Updated On: 
Wed, 12/23/2020 - 12:16
Citation Author(s): 
Isaac Triguero

The ability to detect human postures is particularly important in several fields such as ambient intelligence, surveillance, elderly care, and human-machine interaction. Most of the earlier works in this area are based on computer vision, but they are largely limited in providing real-time solutions for detection activities. Therefore, we are currently working toward an Internet of Things (IoT) based solution for human posture recognition.


Dataset used in the learning process of the traditional technique's operation. Considering different devices and scenarios, the proposed approach can adapt its response to the device in use: identifying the MAC-layer protocol, switching to the protocol in use, and making the device operate with the best possible configuration.


Vehicular networks have various characteristics that can help identify the inter-relations between vehicles. Considering that two vehicles are moving at a certain speed and distance, it is important to know about their communication capability: the vehicles can communicate only within their communication range. Given previous data of a road segment, our dataset can identify the compatibility time between two selected vehicles, defined as the time the two vehicles will be within the communication range of each other.

Instructions: 

Note: If you use this dataset, please cite our work. https://ieeexplore.ieee.org/abstract/document/9186099

 

F. H. Kumbhar and S. Y. Shin, "DT-VAR: Decision Tree Predicted Compatibility based Vehicular Ad-hoc Reliable Routing," in IEEE Wireless Communications Letters, doi: 10.1109/LWC.2020.3021430.

 

Each row contains characteristic information related to two vehicles at time t. The dataset features (column headings) are as follows:

 

- Euclidean Distance: The shortest distance between two vehicles in meters

- Relative Velocity: The velocity of the 2nd vehicle as seen from the 1st vehicle

- Direction Difference: Given the direction information of each vehicle, the direction difference feature identifies the angle between the headings of the two vehicles. For instance, two vehicles travelling the same way on the same road have a direction difference of 0, whereas two vehicles moving in opposite directions have a difference of 180. We calculated the direction difference using: |((Direction of i - Direction of j + 180) % 360 - 180)|.

- Direction Difference Label: To ease the process for the supervised learning model, we also included a direction difference label identifying three possible cases (0 if the difference < 60, 2 if the difference > 120, and 1 otherwise).

- Tendency: The Tendency label differentiates between two vehicles that are moving in opposite directions but are either approaching each other or moving away from each other.
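The direction-difference formula and label above can be sketched as follows (a minimal illustration; the function names are ours, not part of the dataset):

```python
def direction_difference(dir_i: float, dir_j: float) -> float:
    """Smallest angle (0-180 degrees) between two vehicle headings."""
    return abs((dir_i - dir_j + 180) % 360 - 180)

def direction_difference_label(diff: float) -> int:
    """0: roughly same direction, 2: roughly opposite, 1: in between."""
    if diff < 60:
        return 0
    if diff > 120:
        return 2
    return 1

# Two vehicles heading 10 and 350 degrees differ by only 20 degrees.
print(direction_difference(10, 350))          # 20
print(direction_difference_label(20))         # 0
# Vehicles heading 0 and 180 degrees move in opposite directions.
print(direction_difference_label(direction_difference(0, 180)))  # 2
```

Note that Python's % operator returns a non-negative result for a positive modulus, which is what makes the single-expression formula work for negative heading differences.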

 

Target Label (Compatibility time): Our goal is to identify how long two vehicles will be in the communication range of each other. The predicted compatibility time label takes five possible values:

L0 means Compatibility Time is 0

L1 means Compatibility Time is more than 2 seconds but less than 5 seconds

L2 means Compatibility Time is more than 5 seconds but less than 10 seconds

L3 means Compatibility Time is more than 10 seconds but less than 15 seconds

L4 means Compatibility Time is more than 15 seconds
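The five labels above can be mapped from a raw compatibility time as in the sketch below. The handling of the exact boundary values (2, 5, 10 and 15 seconds) and of times between 0 and 2 seconds is our assumption, since the listing does not specify it:

```python
def compatibility_label(seconds: float) -> str:
    """Map a compatibility time in seconds to the dataset's L0-L4 labels.

    Boundary handling (<= vs <) is assumed here; the dataset
    description does not state it explicitly.
    """
    if seconds <= 2:
        return "L0"
    if seconds < 5:
        return "L1"
    if seconds < 10:
        return "L2"
    if seconds < 15:
        return "L3"
    return "L4"

print(compatibility_label(3))   # L1
print(compatibility_label(12))  # L3
print(compatibility_label(30))  # L4
```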


Recognition and classification of currency is an important task, and a crucial one for visually impaired people. It helps them in day-to-day financial transactions with shopkeepers, while traveling, and when exchanging money at banks, hospitals, etc. The main objectives in creating this dataset were:

        1)      Create a dataset of old and new Indian currency.

        2)      Create a dataset of Thai Currency.

        3)      Ensure the dataset consists of high-quality images.

Instructions: 

The dataset consists of 10 classes namely 10 New, 10 Old, 20, 50 New, 50 Old, 100 New, 100 Old, 200, 500, 2000 of Indian banknotes and 5 classes namely 20, 50, 100, 500, and 2000 for Thai bank notes.


INDIA is the second-largest fruit and vegetable exporter in the world after China, and it ranks first in the production of Bananas, Papayas, and Mangoes. Public datasets of fruits are available, but they are limited to general fruit classes and fail to classify fruits according to quality. To overcome this problem, we have created a dataset named the FruitsGB (Fruits Good/Bad) dataset.

Instructions: 

The data set contains 12 classes of fruits namely Bad Apple, Good Apple, Bad Banana, Good Banana, Bad Guava, Good Guava, Bad Lime, Good Lime, Bad Orange, Good Orange, Bad Pomegranate, and Good Pomegranate.


The Message Queuing Telemetry Transport (MQTT) protocol is one of the most widely used standards in Internet of Things (IoT) machine-to-machine communication. The increase in the number of available IoT devices and protocols in use reinforces the need for new and robust Intrusion Detection Systems (IDS). However, building an IoT IDS requires the availability of datasets to process, train and evaluate these models. The dataset presented in this paper is the first to simulate an MQTT-based network. The dataset is generated using a simulated MQTT network architecture.

Instructions: 

The dataset consists of 5 pcap files, namely, normal.pcap, sparta.pcap, scan_A.pcap, mqtt_bruteforce.pcap and scan_sU.pcap. Each file represents a recording of one scenario: normal operation, Sparta SSH brute-force, aggressive scan, MQTT brute-force and UDP scan, respectively. The attack pcap files also contain background normal operations. The attacker IP address is "192.168.2.5". Basic packet features are extracted from the pcap files into CSV files with the same names as the pcap files. The features include flags, length, MQTT message parameters, etc. Later, unidirectional and bidirectional features are extracted. It is important to note that for the bidirectional flows, some features (marked with *) have two values: one for the forward flow and one for the backward flow. The two values are recorded and distinguished by the prefixes "fwd_" for forward and "bwd_" for backward.
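Assuming the CSV files follow the fwd_/bwd_ prefix convention described above, a bidirectional-flow record can be split into its two directions by prefix. The feature names below are invented examples, not taken from the dataset:

```python
# A hypothetical bidirectional-flow record following the fwd_/bwd_
# prefix convention (feature names here are illustrative only).
row = {
    "fwd_pkt_count": 10,
    "bwd_pkt_count": 8,
    "fwd_mean_len": 120.0,
    "bwd_mean_len": 80.0,
    "ip_src": "192.168.2.5",
}

# Split the record into forward-direction and backward-direction features.
fwd = {k: v for k, v in row.items() if k.startswith("fwd_")}
bwd = {k: v for k, v in row.items() if k.startswith("bwd_")}

print(fwd)  # {'fwd_pkt_count': 10, 'fwd_mean_len': 120.0}
print(bwd)  # {'bwd_pkt_count': 8, 'bwd_mean_len': 80.0}
```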

 


    

Dataset used for "A Machine Learning Approach for Wi-Fi RTT Ranging" paper (ION ITM 2019). The dataset includes almost 30,000 Wi-Fi RTT (FTM) raw channel measurements from real-life client and access points, from an office environment. This data can be used for Time of Arrival (ToA), ranging, positioning, navigation and other types of research in Wi-Fi indoor location. The zip file includes a README file, a CSV file with the dataset and several Matlab functions to help the user plot the data and demonstrate how to estimate the range.

Instructions: 

    

Copyright (C) 2018 Intel Corporation

SPDX-License-Identifier: BSD-3-Clause

 

#########################

Welcome to the Intel WiFi RTT (FTM) 40MHz dataset.

 

The paper and the dataset can be downloaded from:

https://www.researchgate.net/publication/329887019_A_Machine_Learning_Ap...

 

To cite the dataset and code, or for further details, please use:

Nir Dvorecki, Ofer Bar-Shalom, Leor Banin, and Yuval Amizur, "A Machine Learning Approach for Wi-Fi RTT Ranging," ION Technical Meeting ITM/PTTI 2019

 

For questions/comments contact: 

nir.dvorecki@intel.com,

ofer.bar-shalom@intel.com

leor.banin@intel.com

yuval.amizur@intel.com

 

The zip file contains the following files:

1) This README.txt file.

2) LICENSE.txt file.

3) RTT_data.csv - the dataset of FTM transactions

4) Helper Matlab files:

O mainFtmDatasetExample.m - main function to run in order to execute the Matlab example.

O PlotFTMchannel.m - plots the channels of a single FTM transaction.

O PlotFTMpositions.m - plots user and Access Point (AP) positions.

O ReadFtmMeasFile.m - reads the RTT_data.csv file to numeric Matlab matrix.

O SimpleFTMrangeEstimation.m - execute a simple range estimation on the entire dataset.

O Office1_40MHz_VenueFile.mat - contains a map of the office from which the dataset was gathered.

 

#########################

Running the Matlab example:

 

In order to run the Matlab simulation, extract the contents of the zip file and call the mainFtmDatasetExample() function from Matlab.

 

#########################

Contents of the dataset:

 

The RTT_data.csv file contains a header row, followed by 29581 rows of FTM transactions.

The first column of the header row includes an extra "%" at the beginning, so that the entire csv file can be easily loaded into Matlab using the command: load('RTT_data.csv')
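The "%"-prefixed header also makes the file easy to load outside Matlab. For instance, NumPy treats "%" lines as comments when told to; the sketch below uses a tiny synthetic stand-in for RTT_data.csv, since the real file's complex-valued channel columns would need extra parsing beyond plain loadtxt:

```python
import io
import numpy as np

# Synthetic stand-in for RTT_data.csv: a "%"-prefixed header row
# followed by numeric rows (the real file has 467 columns).
sample = (
    "%timestamp,gt_x,gt_y\n"
    "0.10,1.5,2.0\n"
    "0.20,1.6,2.1\n"
)

# comments="%" makes NumPy skip the header line, mirroring Matlab's load().
data = np.loadtxt(io.StringIO(sample), delimiter=",", comments="%")
print(data.shape)  # (2, 3)
```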

Indexing the csv columns from 1 (leftmost column) to 467 (rightmost column):

O column 1 - Timestamp of each measurement (sec)

O columns 2 to 4 - Ground truth (GT) position of the client at the time the measurement was taken (meters, in local frame)

O column 5 - Range, as estimated by the devices in real time (meters)

O columns 6 to 8 - Access Point (AP) position (meters, in local frame)

O column 9 - AP index/number, according to the convention of the ION ITM 2019 paper

O column 10 - Ground truth range between the AP and client (meters)

O column 11 - Time of Departure (ToD) factor in meters, such that: TrueRange = (ToA_client + ToA_AP)*3e8/2 + ToD_factor (eq. 7 in the ION ITM paper, with "ToA" being tau_0 and the "ToD_factor" lumping together both the nu_initiator and nu_responder terms)

O columns 12 to 467 - Complex channel estimates. Each channel contains 114 complex numbers denoting the frequency response of the channel at each WiFi tone:

O columns 12 to 125  - Complex channel estimates for first antenna from the client device

O columns 126 to 239 - Complex channel estimates for second antenna from the client device

O columns 240 to 353 - Complex channel estimates for first antenna from the AP device

O columns 354 to 467 - Complex channel estimates for second antenna from the AP device

The tone frequencies are given by: 312.5E3*[-58:-2, 2:58] Hz (e.g. column 12 of the csv contains the channel response at frequency fc-18.125MHz, where fc is the carrier wave frequency).

Note that the 3 tones around the baseband DC (i.e. around the frequency of the carrier wave), as well as the guard tones, are not included.
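For reference, the 114 tone frequency offsets and the four per-antenna column ranges described above can be reproduced as follows (a sketch using 0-based Python indices for the 1-based csv columns stated in this README):

```python
# Tone frequency offsets relative to the carrier: 312.5E3*[-58:-2, 2:58] Hz.
# Tones -1, 0, 1 (around the baseband DC) are excluded, leaving 114 tones.
tone_offsets_hz = [312.5e3 * k for k in list(range(-58, -1)) + list(range(2, 59))]

assert len(tone_offsets_hz) == 114          # 114 tones per channel estimate
assert tone_offsets_hz[0] == -18.125e6      # csv column 12 -> fc - 18.125 MHz

# 0-based slices of a 467-element csv row for the four channel estimates
# (1-based csv columns 12-125, 126-239, 240-353, 354-467).
CLIENT_ANT1 = slice(11, 125)
CLIENT_ANT2 = slice(125, 239)
AP_ANT1 = slice(239, 353)
AP_ANT2 = slice(353, 467)

row = list(range(467))  # dummy row standing in for one FTM transaction
assert len(row[CLIENT_ANT1]) == len(row[AP_ANT2]) == 114
```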

 

