A new generation of computer vision, known as event-based or neuromorphic vision, provides a new paradigm for capturing visual data and for the way such data is processed.
Event-based vision is a state-of-the-art technology in robot vision. It is particularly promising for visual navigation tasks in both mobile robots and drones.
The dataset consists of 21 data sequences recorded in 12 agricultural scenarios during the autumn season. The following
<..sequence_id..>s are available:
01_forest-- Closed loop, Forest trail, No wind, Daytime
02_forest-- Closed loop, Forest trail, No wind, Daytime
03_green_meadow-- Closed loop, Meadow, grass up to 30 cm, No wind, Daytime
04_green_meadow-- Closed loop, Meadow, grass up to 30 cm, Mild wind, Daytime
05_road_asphalt-- Closed loop, Asphalt road, No wind, Nighttime
06_plantation-- Closed loop, Shrubland, Mild wind, Daytime
07_plantation-- Closed loop, Asphalt road, No wind, Nighttime
08_plantation_water-- Random movement, Sprinklers (water drops on camera lens), No wind, Nighttime
09_cattle_farm-- Closed loop, Cattle farm, Mild wind, Daytime
10_cattle_farm-- Closed loop, Cattle farm, Mild wind, Daytime
11_cattle_farm_feed_table-- Closed loop, Cattle farm feed table, Mild wind, Daytime
12_cattle_farm_feed_table-- Closed loop, Cattle farm feed table, Mild wind, Daytime
13_ditch-- Closed loop, Sandy surface, Edge of ditch or drainage channel, No wind, Daytime
14_ditch-- Closed loop, Sandy surface, Shore or bank, Strong wind, Daytime
15_young_pines-- Closed loop, Sandy surface, Pine coppice, No wind, Daytime
16_winter_cereal_field-- Closed loop, Winter wheat sowing, Mild wind, Daytime
17_winter_cereal_field-- Closed loop, Winter wheat sowing, Mild wind, Daytime
18_winter_rapeseed_field-- Closed loop, Winter rapeseed, Mild wind, Daytime
19_winter_rapeseed_field-- Closed loop, Winter rapeseed, Mild wind, Daytime
20_field_with_a_cow-- Closed loop, Cows tethered in pasture, Mild wind, Daytime
21_field_with_a_cow-- Closed loop, Cows tethered in pasture, Mild wind, Daytime
Each sequence contains the following separately downloadable files:
<..sequence_id..>_data.tar.gz-- entire data sequence in raw format (DVS events in AEDAT 2.0, RGB-D images, LIDAR point clouds in PCD files, and IMU data in CSV files with the original sensor timestamps). Timestamp conversion formulas are available.
<..sequence_id..>_rosdata.tar.gz-- main sequence in ROS bag format. All sensor timestamps are aligned to the DVS with an accuracy better than 1 ms.
<..sequence_id..>_rawcalib_data.tar.gz-- recorded fragments that can be used to perform the calibration independently (intrinsic, extrinsic and time alignment).
<..sequence_id..>_video.mp4-- provides an overview of the sequence data (for the DVS and RGB-D sensors).
The contents of each archive are described below...
Raw format data
<..sequence_id..>_data.tar.gz contains the following files and folders:
+ meta-data/ - all the useful information about the sequence
| + meta-data.md - detailed information about the sequence,
| | sensors, files, and data formats
| + cad_model.pdf - sensor placement
| + <...>_timeconvs.json - timestamp conversion formulas
| + ground-truth/ - movement ground-truth data,
| | calculated using 3 different Lidar-SLAM algorithms
| | (Cartographer, HDL-Graph, LeGo-LOAM)
| + calib-params/ - intrinsic and extrinsic calibration parameters
+ recording/ - main sequence
+ dvs/ - DVS events and IMU data
+ lidar/ - Lidar point clouds and IMU data
+ realsense/ - Realsense camera RGB, Depth frames, and IMU data
+ sensorboard/ - environmental sensors data
(temperature, humidity, air pressure)
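The per-sensor timestamp conversion formulas live in <...>_timeconvs.json. The exact JSON schema is not documented here; as an illustration only, assuming a simple linear model t_aligned = a * t_raw + b per sensor (the coefficient names and values below are hypothetical, not from the dataset):

```python
import json

def convert_timestamp(t_raw, a, b):
    """Map a raw sensor timestamp onto the aligned (DVS) timeline.

    Assumes a linear conversion t_aligned = a * t_raw + b; the actual
    per-sensor formula is given in <...>_timeconvs.json.
    """
    return a * t_raw + b

# Hypothetical coefficients for one sensor, as they might appear in the JSON:
coeffs = json.loads('{"a": 1.000002, "b": -0.0035}')
t_aligned = convert_timestamp(1.25, coeffs["a"], coeffs["b"])
```

A real loader would read the JSON from the meta-data/ folder and apply the stated formula per sensor stream.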
ROS Bag format data
Will be published later...
<..sequence_id..>_rawcalib_data.tar.gz archive contains the following files and folders:
+ imu_alignments/ - IMU recordings of the platform lifting/releasing before
| and after the main sequence
| (can be used for custom timestamp alignment)
+ solenoids/ - IMU recordings of the solenoid vibrations before
| and after the main sequence
| (can be used for custom timestamp alignment)
+ lidar_rs/ - Lidar vs Realsense camera extrinsic calibration
| by placing a spherical object (ball) in the field of view of both sensors
+ dvs_rs/ - DVS and Realsense camera intrinsic and extrinsic
calibration frames (checkerboard pattern)
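The sphere recordings in lidar_rs/ support Lidar-to-camera extrinsic calibration: locate the ball center in both sensors for several placements, then fit a rigid transform between the two sets of centers. A minimal sketch of that second step (Kabsch/SVD over matched centers; the ball-detection step and any file layout are outside this sketch, and the data here are synthetic):

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) such that dst ~ R @ src + t.

    src, dst: (N, 3) arrays of matched sphere centers from the two sensors.
    """
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

# Synthetic check: rotate known centers 90 degrees about z and shift them.
src = np.array([[1.0, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]])
R_true = np.array([[0.0, -1, 0], [1, 0, 0], [0, 0, 1]])
t_true = np.array([0.1, -0.2, 0.3])
dst = src @ R_true.T + t_true
R, t = rigid_transform(src, dst)
```

At least three non-collinear sphere placements are needed for the transform to be well determined; more placements average out center-detection noise.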
To obtain the constants of our PID temperature controller, we first had to identify the system. System identification allows us, through experimentation, to find a representation of the plant so that it can be controlled.
The first data file, "data_2.mat", contains the open-loop test, sampled at 100 Hz. These data were used to find the period of the pulse train generator, which is twice the slowest sampling time analyzed between the high and low pulses of the input.
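From an open-loop step response like the one in data_2.mat, one common route is to fit a first-order-plus-dead-time (FOPDT) model and derive PID constants from it. A hedged sketch with synthetic data standing in for data_2.mat (the 63.2% method and the Ziegler-Nichols open-loop rule are standard textbook choices; the controller in this dataset may have been tuned differently):

```python
import numpy as np

def identify_fopdt(t, y, u_step):
    """Estimate gain K, time constant tau, dead time theta from a step response."""
    y0, yf = y[0], y[-1]
    K = (yf - y0) / u_step
    # Dead time: first instant the output has moved noticeably (2% of span).
    theta = t[np.argmax(y - y0 > 0.02 * (yf - y0))]
    # Time constant: time to 63.2% of the total change, minus the dead time.
    t63 = t[np.argmax(y - y0 >= 0.632 * (yf - y0))]
    return K, t63 - theta, theta

def zn_pid(K, tau, theta):
    """Ziegler-Nichols open-loop PID tuning (Kp, integral time, derivative time)."""
    return 1.2 * tau / (K * theta), 2.0 * theta, 0.5 * theta

# Synthetic FOPDT response sampled at 100 Hz (stand-in for data_2.mat).
fs, K_true, tau_true, theta_true = 100.0, 2.0, 5.0, 0.5
t = np.arange(0, 30, 1 / fs)
y = np.where(t < theta_true, 0.0,
             K_true * (1 - np.exp(-(t - theta_true) / tau_true)))
K, tau, theta = identify_fopdt(t, y, u_step=1.0)
Kp, Ti, Td = zn_pid(K, tau, theta)
```

The 2% movement threshold slightly overestimates the dead time on clean data and is more robust on noisy data; tune it to the noise floor of the actual recording.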
The LEDNet dataset consists of images of a field area captured with a mobile phone camera.
Each image shows an area where a PCB board with 6 LEDs is placed. Each LED state on the PCB represents a binary digit, with the ON state corresponding to binary 1 and the OFF state to binary 0. The LEDs in sequence thus represent a binary encoding of an analog value.
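Decoding a LED pattern into its value is a plain binary-to-integer conversion. A small sketch, assuming the first LED in the sequence is the most significant bit (the bit order actually used by LEDNet is not stated here):

```python
def decode_leds(states):
    """Convert a sequence of LED states (1 = ON, 0 = OFF) to an integer.

    Assumes the first element is the most significant bit.
    """
    value = 0
    for bit in states:
        value = (value << 1) | bit
    return value

# Six LEDs showing ON, OFF, ON, ON, OFF, ON encode binary 101101.
decode_leds([1, 0, 1, 1, 0, 1])  # -> 45
```

With 6 LEDs the encodable range is 0-63; a finer analog range would need more LEDs or a scaling convention on top of the integer.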
Design and fabrication outsourcing has made integrated circuits vulnerable to malicious modifications by third parties, known as hardware Trojans (HTs). Over the last decade, the use of side-channel measurements for detecting malicious manipulation of a chip has been extensively studied. However, the suggested approaches mostly suffer from two major limitations: reliance on a trusted identical chip (i.e., a golden chip), and the untraceable footprints of subtle hardware Trojans that remain inactive during the testing phase.
See the attached document.
These .MAT files contain MATLAB tables of raw and preprocessed data. Information detailing the bed system used to collect these signals and the steps used to create the preprocessed data is contained in a publication in Sensors: Carlson, C.; Turpin, V.-R.; Suliman, A.; Ade, C.; Warren, S.; Thompson, D.E. Bed-Based Ballistocardiography: Dataset and Ability to Track Cardiovascular Parameters. Sensors 2021, 21, 156. https://doi.org/10.3390/s21010156.
The reBAP signal is scaled at 100 mmHg/volt. The interbeat interval (IBI), stroke volume (SV), and dP/dt_max are scaled at 1000 ms/volt, 100 mL/volt, and 1 mmHg/s/volt, respectively.
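Converting the recorded voltages back to physical units is a single multiplication by the stated scale factors. A minimal sketch (the dictionary keys are our own labels, not field names from the .MAT tables):

```python
# Scale factors from the dataset description (physical unit per volt).
SCALES = {
    "reBAP": 100.0,    # mmHg per volt
    "IBI": 1000.0,     # ms per volt
    "SV": 100.0,       # mL per volt
    "dPdt_max": 1.0,   # mmHg/s per volt, as stated in the description
}

def volts_to_physical(signal_name, volts):
    """Convert a raw voltage sample to its physical unit."""
    return SCALES[signal_name] * volts

volts_to_physical("reBAP", 1.2)  # -> 120.0 mmHg
```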
1. Movie "movie_S1.avi" shows the normalized absolute element values of the dynamic influence matrices, and of the same matrices scaled by comb drive torque, along the amplitude. The dynamic influence matrices are plotted according to the comb drive frequency component (index n) and the input frequency component (index m). The absolute values of the matrix elements are normalized by the maximum element. The maximum element is drawn in white, and normalized elements smaller than 10^-6 of the maximum are depicted in black.
This dataset provides GPS, IMU, and wheel odometry readings on various terrains for the Pathfinder robot, a lightweight, 4-wheeled, skid-steered, custom-built rover testbed platform. The rover uses a rocker system with a differential bar connected to the front wheels. Pathfinder is fitted with slick wheels to induce more slippage. The IMU on the rover is an ADIS-16495 with a 50 Hz data rate. Pathfinder's quadrature encoders, with a resolution of 47,000 pulses/m, provide the wheel odometry readings at a 10 Hz data rate.
This dataset contains 41 zipped folders. Each folder has at least one bag file and GPS data. The folder name encodes the data collection date and the test terrain. The folders include bag files (IMU, wheel odometry) and GPS solution data on gravel, unpaved, paved, and rough terrains. The bag files can be processed with the code in the CoreNav-GP repository.
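With 47,000 pulses/m encoders sampled at 10 Hz, wheel travel and speed follow directly from the pulse counts. A hedged sketch of per-wheel distance and a simple skid-steer forward-velocity estimate (averaging left and right sides, slip ignored; the function names are ours):

```python
PULSES_PER_M = 47_000   # encoder resolution from the dataset description
RATE_HZ = 10.0          # wheel-odometry data rate

def pulses_to_meters(pulses):
    """Distance travelled by one wheel for a given pulse count."""
    return pulses / PULSES_PER_M

def body_velocity(d_left_pulses, d_right_pulses, dt=1.0 / RATE_HZ):
    """Forward speed (m/s) from per-interval pulse deltas.

    Skid-steer approximation: the body speed is the average of the
    left and right wheel speeds; wheel slip is not modeled.
    """
    v_l = pulses_to_meters(d_left_pulses) / dt
    v_r = pulses_to_meters(d_right_pulses) / dt
    return 0.5 * (v_l + v_r)

body_velocity(470, 470)  # -> 0.1 m/s
```

On the slick wheels used here, this dead-reckoned speed will overestimate true travel whenever the wheels slip, which is exactly the effect the dataset is designed to expose.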
The ability to detect human postures is particularly important in several fields, such as ambient intelligence, surveillance, elderly care, and human-machine interaction. Most earlier works in this area are based on computer vision. However, these works are mostly limited in providing real-time solutions for detection tasks. Therefore, we are currently working toward an Internet of Things (IoT) based solution for human posture recognition.