Imagine you have just moved into your brand-new home and signed up with an energy provider. Based on the information you provided, they set up a direct debit of €50/month. At the end of the year, however, that prediction turns out to be inaccurate, and you either pay a settlement of €300 or, if you are lucky, get some money back. Either way, you will probably be disappointed with your energy provider and may consider switching to another one. Predicting energy consumption is currently a key challenge for the energy industry as a whole.

Last Updated On: Tue, 07/20/2021 - 06:35

The aircraft fuel distribution system has two primary functions: storing fuel and distributing it to the engines. These functions correspond to the refuelling and consumption phases, respectively. During refuelling, fuel is first loaded into the Central Reservation Tank and then distributed to the Front and Rear Tanks. In the consumption phase, the two engines receive an adequate level of fuel from the appropriate tanks: the Port Engine (PE) receives fuel from the Front Tank and the Starboard Engine (SE) from the Rear Tank.

Instructions: 

The dataset has five parts: one normal scenario and four abnormal scenarios. You can read the CSV files directly and apply your method.
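The five scenario files can be loaded with the standard library alone; the file names below are placeholders for illustration, not the dataset's actual names.

```python
import csv

# Hypothetical file names, one per scenario; substitute the real CSV paths.
scenario_files = {
    "normal": "normal.csv",
    "abnormal_1": "abnormal_1.csv",
    "abnormal_2": "abnormal_2.csv",
    "abnormal_3": "abnormal_3.csv",
    "abnormal_4": "abnormal_4.csv",
}

def load_scenarios(files):
    """Read each scenario CSV into a list of row dicts, keyed by scenario name."""
    data = {}
    for name, path in files.items():
        with open(path, newline="") as f:
            data[name] = list(csv.DictReader(f))
    return data
```

Each scenario is then available as `data["normal"]`, `data["abnormal_1"]`, and so on, ready for feature extraction or anomaly-detection methods.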

 


The database contains the raw range-azimuth measurements obtained from mmWave MIMO radars (IWR1843BOOST, http://www.ti.com/tool/IWR1843BOOST) deployed in different positions around a robotic manipulator.

Instructions: 

The database contains the raw range-azimuth measurements obtained from mmWave MIMO radars inside a Human-Robot (HR) workspace environment.

The goal of the training task is to learn an ML model for the detection (classification) of the position of the human operators sharing the workspace, namely the human-robot distance and the direction of arrival (DOA). In particular, we address the detection of the human subject in 10 regions of interest (ROIs), including the one referring to the subject being outside the monitored area (class labelled as 0); these are detailed in the Figure. The proposed federated training scenario resorts to a network of 9 physical devices. To test a continual learning setup, the training data are collected independently by each device over three consecutive days (day 0, day 1, day 2).

Python Code: for the federated learning (FL) implementation, please use the code in https://github.com/labRadioVision/Federated_Learning_MQTT and follow the instructions therein.
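The full MQTT-based implementation lives in the repository above; as a conceptual sketch only, the core aggregation step of federated averaging (FedAvg) over the 9 devices can be written as:

```python
def federated_average(device_weights):
    """Average a list of per-device weight vectors element-wise (FedAvg).

    device_weights: list of equal-length lists of floats, one per device.
    Returns the element-wise mean, i.e. the aggregated global model weights.
    """
    n_devices = len(device_weights)
    n_params = len(device_weights[0])
    return [
        sum(w[i] for w in device_weights) / n_devices
        for i in range(n_params)
    ]
```

In the actual system, each device trains locally on its own day-i data and publishes its weights over MQTT; the aggregator applies an averaging step like the one above and broadcasts the result back.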

Training data

For day i (i = 0, 1, 2), the database training_data_range_azimuth_day(i) contains 9 files (mmwave_data_train_k.mat) holding the data collected by device k, k = 1, ..., 9 (9 devices). Each .mat file for the k-th device contains two sets:

i) mmwave_data_train_k: has dimension 900 x 256 x 63. It contains 900 FFT range-azimuth measurements of size 256 x 63 collected by device k: 256-point range samples corresponding to a max range of 11 m (min range of 0.5 m) and 63 angle bins, corresponding to DOAs ranging from -75 to +75 degrees. These data are used for training and may be combined with data collected by other devices.

ii) label_train_k: contains the corresponding labels. Labels 1 to 9 correspond to the 9 operator positions detailed in the attached image; empty space (subject outside the area) has label 0.
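A device's training file can be loaded with SciPy; the variable names follow the description above, and the file path is a placeholder.

```python
from scipy.io import loadmat

def load_device_day(path, k):
    """Return (measurements, labels) for device k from a training .mat file.

    measurements: array of shape (N, 256, 63) range-azimuth maps.
    labels: flat array of class labels 0-9 (0 = empty workspace).
    """
    mat = loadmat(path)
    x = mat[f"mmwave_data_train_{k}"]
    y = mat[f"label_train_{k}"].ravel()
    return x, y
```

For example, `load_device_day("mmwave_data_train_3.mat", 3)` would return device 3's 900 range-azimuth maps and their labels for the chosen day.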

Test data

For day i (i = 0, 1, 2), the database test_data_range_azimuth_day(i) contains the file mmwave_data_test.mat with the test data used to validate the training at day i. Each .mat file contains two sets:

i) mmwave_data_test: has dimension 250 x 256 x 63. It contains 250 FFT range-azimuth measurements of size 256 x 63: 256-point range samples corresponding to a max range of 11 m (min range of 0.5 m) and 63 angle bins, corresponding to DOAs ranging from -75 to +75 degrees. These data are used for testing (validation database).

ii) label_test: contains the corresponding ground-truth labels. Labels 1 to 9 correspond to the 9 operator positions detailed in the attached image; empty space (subject outside the area) has label 0.

 

iii) LABELS AND CLASSES:

Each class from 1 to 9 corresponds to a subject position in the surroundings of the robot, in particular:

CLASS (or LABEL) 1 identifies the human operator as working close to the robot, at a distance between 0.5 and 0.7 m and azimuth 40-60 deg (positive).

CLASS 2 identifies the human operator as working close to the robot, at a distance between 0.3 and 0.5 m and azimuth in the range -10 to +10 deg.

CLASS 3 identifies the human operator as working close to the robot, at a distance between 0.5 and 0.7 m and azimuth 40-60 deg (negative).

CLASS 4 identifies the human operator as working at a distance between 1 and 1.2 m from the robot and azimuth 20-40 deg (positive).

CLASS 5 identifies the human operator as working at a distance between 0.9 and 1.1 m from the robot and azimuth in the range -10 to +10 deg.

CLASS 6 identifies the human operator as working at a distance between 1 and 1.2 m from the robot and azimuth 20-40 deg (negative).

CLASS 7 identifies the human operator as working at a distance between 1.2 and 1.6 m from the robot and azimuth 10-20 deg (positive).

CLASS 8 identifies the human operator as working at a distance between 1.1 and 1.5 m from the robot and azimuth -5 to +5 deg.

CLASS 9 identifies the human operator as working at a distance between 1.2 and 1.6 m from the robot and azimuth 10-20 deg (negative).

CLASS 0: empty space (the operator may move in the surroundings).


We collected experimental field data with a prototype open-ended waveguide sensor (WR975) operating between 600 MHz and 1300 MHz. With our prototype sensor we collected reflection coefficient measurements at a total of 50 unique 1-ft^2 sites across two separate established cranberry beds in central Wisconsin. The sensor was placed directly on top of cranberry-crop bed canopies, and we obtained 12 independent reflection coefficient measurements (each defined as one S11 sweep across frequency) at each 1-ft^2 site by randomly rotating and/or translating the sensor aperture above each site.



Visible Light Positioning is an indoor localization technology that uses wireless transmission of visible light signals to obtain a location estimate of a mobile receiver. 

This dataset can be used to validate supervised machine learning approaches in the context of Received Signal Strength Based Visible Light Positioning. 

The set is acquired in an experimental setup that consists of 4 LED transmitter beacons and a photodiode as the receiving element, which can move in 2D.
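A common supervised baseline for RSS-based positioning is nearest-neighbour fingerprinting: match a query RSS vector (one value per LED) against labelled training vectors and return the closest sample's position. The sketch below uses made-up numbers purely for illustration, not values from the dataset.

```python
def nearest_neighbour_position(train_rss, train_pos, query_rss):
    """Return the 2D position of the training sample closest in RSS space.

    train_rss: list of 4-element RSS vectors (one entry per LED beacon).
    train_pos: list of (x, y) positions matching train_rss.
    query_rss: RSS vector measured at the unknown location.
    """
    def dist2(a, b):
        # Squared Euclidean distance between two RSS vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b))

    best = min(range(len(train_rss)), key=lambda i: dist2(train_rss[i], query_rss))
    return train_pos[best]
```

More capable regressors (k-NN with k > 1, neural networks, Gaussian processes) follow the same pattern: RSS vectors as features, 2D coordinates as targets.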


A dataset from semiconductor assembly and testing processes is used to evaluate the model selection prediction method. The response variable is the throughput rate of a specific machine-product combination in one of the assembly and testing process steps, based on historical data. The dataset includes 1 response variable, 5 categorical machine and product attributes, and 11 numerical attributes, for a total of 13,186 observations.

Instructions: 

mixed_categorical_numerical_data.csv: the raw data.

mixed_categorical_numerical_dataDummy.csv: the transformed one-hot encoded data.

Full_Model.rds: the full model built from the whole dataset.

Fundamental_Model.rds: the fundamental model built from one fundamental dataset.

Partial_Model_1-11.rds: the models related to the fundamental model mentioned above.
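The dummy-encoded file is presumably derived from the raw file by one-hot encoding the 5 categorical attributes. A minimal sketch of that transformation for a single categorical column (the column values here are hypothetical):

```python
def one_hot(values):
    """One-hot encode a list of categorical values.

    Returns (levels, rows): the sorted distinct levels, and one 0/1
    indicator row per input value, with one column per level.
    """
    levels = sorted(set(values))
    rows = [[1 if v == lvl else 0 for lvl in levels] for v in values]
    return levels, rows
```

Applied to each categorical column and concatenated with the 11 numerical attributes, this yields the layout of mixed_categorical_numerical_dataDummy.csv.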


Tactile perception of material properties in real time using tiny embedded systems is a challenging task of great importance for dexterous object manipulation in robotics, prosthetics, and augmented reality [1-4]. As the psychophysical dimensions of the material properties cover a wide range of percepts, embedded tactile perception systems require efficient signal feature extraction and classification techniques to process signals collected by tactile sensors in real time.

Instructions: 

There are four CSV files (X, Y, Z, and S) in the dataset corresponding to the sensor recordings. The 3-dimensional accelerometer sensor recordings are denoted by X, Y, and Z, respectively; the sound recordings from the electret condenser microphone are denoted by S. As there are 12 classes in the dataset, there is one line for each class in the CSV files. For each texture, 20 seconds of recordings are collected. Therefore, each line in the X, Y, and Z files has 4,000 samples (20 s x 200 Hz sampling rate) and each line in the S file has 160,000 samples (20 s x 8 kHz). The training and test sets for the machine learning classifiers can be created by snipping short frames out of these recordings and applying signal feature extraction. For example, the first 400 columns of the 12th row of X.csv and the first 16,000 columns of the 12th row of S.csv both correspond to the first 2 seconds of the recordings for texture class 12. The Python programs we have developed will be made available upon request.
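The frame-snipping step described above can be sketched as follows (2 s corresponds to 400 accelerometer samples at 200 Hz, or 16,000 microphone samples at 8 kHz):

```python
def snip_frames(recording, frame_len):
    """Split a 1-D recording into consecutive non-overlapping frames.

    recording: one row of a CSV file (e.g. 4,000 accelerometer samples).
    frame_len: samples per frame (e.g. 400 for 2 s at 200 Hz).
    Returns a list of frames; a trailing partial frame is discarded.
    """
    return [recording[i:i + frame_len]
            for i in range(0, len(recording) - frame_len + 1, frame_len)]
```

Each frame then becomes one training or test example for the texture classifier, typically after a feature-extraction step such as spectral or statistical features.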

 

Please cite the dataset and accompanying paper if you use this dataset:

  • Kursun, O. and Patooghy, A. (2020) "An Embedded System for Collection and Real-time Classification of a Tactile Dataset", IEEE Access (accepted for publication).
  • Kursun, O. and Patooghy, A. (2020) "Texture Dataset Collected by Tactile Sensors", IEEE Dataport, 2020.

 

GPS spoofing and jamming are common attacks against UAVs; however, conducting these experiments for research can be difficult in many areas. This dataset consists of logs from a benign flight as well as one in which the UAV experiences GPS spoofing and jamming. A Keysight EXG N5172B signal generator is used to generate the spoofed GPS signal, reporting the coordinates of a location in Shanghai, China.

Instructions: 

PX4 Autopilot v1.11.3 (https://px4.io) is used for all experiments, running on a Pixhawk 4 flight controller (PX4_FMU_V5) with a Pixhawk GPS receiver. The UAV frame is the Holybro S500. QGroundControl (v4.0.9) is used as the GCS (http://qgroundcontrol.com).

Full flight data are contained in ULOG files (https://dev.px4.io/v1.9.0/en/log/ulog_file_format.html).

CSV files are obtained by conversion using the ulog2csv script (https://github.com/PX4/pyulog/blob/master/pyulog/ulog2csv.py).
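Once converted, each per-topic CSV can be read into column arrays with the standard library; the exact per-topic file names produced by ulog2csv are an assumption here.

```python
import csv

def load_topic_csv(path):
    """Read one ulog2csv output file into {column_name: list of floats}.

    Assumes a numeric CSV with a header row, as produced for PX4 topics
    such as vehicle_gps_position.
    """
    with open(path, newline="") as f:
        reader = csv.DictReader(f)
        cols = {name: [] for name in reader.fieldnames}
        for row in reader:
            for name, value in row.items():
                cols[name].append(float(value))
    return cols
```

From there, comparing fields such as reported latitude/longitude between the benign and spoofed flights makes the attack visible in the data.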


Conveyor belts are the most widespread means of transporting large quantities of material in the mining sector. This dataset contains 388 images of structures with and without dirt buildup.

One can use this dataset to experiment with classifying dirt buildup.

Instructions: 

The data are separated into folders that specify each class of the dataset: Clean and Dirty.
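Given that folder layout, (image path, label) pairs can be collected with a few lines of standard-library code; the dataset root path is a placeholder.

```python
from pathlib import Path

def list_labelled_images(root):
    """Return (path, label) pairs for every file under root/Clean and root/Dirty."""
    pairs = []
    for label in ("Clean", "Dirty"):
        for p in sorted(Path(root, label).glob("*")):
            pairs.append((p, label))
    return pairs
```

The resulting list feeds directly into any image-loading pipeline for training a clean-vs-dirty classifier.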

