This is a unique energy-aware navigation dataset collected at the Canadian Space Agency’s Mars Emulation Terrain (MET) in Saint-Hubert, Quebec, Canada. It consists of raw and post-processed sensor measurements collected by our rover, along with georeferenced aerial maps of the MET (colour mosaic, elevation model, and slope and aspect maps). The data are available for download in human-readable and rosbag (.bag) formats. Python data-fetching and plotting scripts and ROS-based visualization tools are also provided.

Instructions: 

The dataset is divided into six runs, each covering a different section of the MET at a different time. The data were collected on September 4, 2018, between 17:00 and 19:00 (Eastern Daylight Time).

To avoid extremely large files, the rosbag data of every run is broken into two parts: “runX_clouds_only.bag” and “runX_base.bag”. The former contains only the point clouds generated from the omnidirectional camera raw images after data collection, while the latter contains all the raw data and the remainder of the post-processed data. The two rosbags share consistent timestamps and can be merged, for example with bagedit. A similar breakdown was followed for the human-readable data.
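Alternatively, the two parts can be merged back into a single bag with a few lines of Python using the standard rosbag API (a minimal sketch, assuming run 1; the output filename is arbitrary):

```python
import rosbag

# Merge the two per-run bags back into a single bag.
# Filenames are illustrative; substitute the run you downloaded.
with rosbag.Bag("run1_merged.bag", "w") as out:
    for part in ("run1_base.bag", "run1_clouds_only.bag"):
        with rosbag.Bag(part) as bag:
            for topic, msg, t in bag.read_messages():
                out.write(topic, msg, t)
```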

Aside from point clouds, the post-processed data of every run includes a blended cylindrical panorama made from the omnidirectional sensor images; planar rover velocity estimates from wheel encoder data; and an estimated global trajectory obtained by fusing GPS and stereo imagery from cameras 0 and 1 of the omnidirectional sensor using VINS-Fusion, later combined with the raw IMU data. Global sun vectors and relative ones (with respect to the rover’s base frame) were also calculated using the Pysolar library, which additionally provided a clear-sky direct irradiance estimate alongside every pyranometer measurement collected. Lastly, the set of georeferenced aerial maps, the transforms between the different rover and sensor frames, and the intrinsic parameters of each camera are also available.
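As an illustration of the sun-vector and irradiance computation, the snippet below queries Pysolar for the solar position and clear-sky direct irradiance during the collection window (a minimal sketch; the coordinates are only an approximation of the MET location, and the exact timestamps used for the dataset may differ):

```python
from datetime import datetime, timezone

from pysolar.solar import get_altitude, get_azimuth
from pysolar.radiation import get_radiation_direct

# Approximate coordinates of the CSA campus in Saint-Hubert (illustrative only).
lat, lon = 45.518, -73.393
when = datetime(2018, 9, 4, 21, 30, tzinfo=timezone.utc)  # 17:30 EDT

altitude = get_altitude(lat, lon, when)        # solar elevation angle, degrees
azimuth = get_azimuth(lat, lon, when)          # solar azimuth angle, degrees
direct = get_radiation_direct(when, altitude)  # clear-sky direct irradiance, W/m^2
print(altitude, azimuth, direct)
```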

We strongly recommend that interested users visit the project's home page, which provides additional information about each run (such as its physical length and duration). All download links on the home page have been updated to pull from the IEEE DataPort servers. A more detailed description of the test environment and hardware configuration is provided in the project's official journal publication.

Once the data products of the desired run are downloaded, the project's GitHub repository provides a lightweight ROS package and Python utilities to fetch the desired data streams from the rosbags.
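If you prefer to work with the bags directly, individual streams can also be pulled with the rosbag Python API (a minimal sketch; the topic name here is a placeholder, not the dataset's actual topic list):

```python
import rosbag

# Iterate over a single data stream from a run.
# "/imu/data" is a hypothetical topic; see the repository for real names.
with rosbag.Bag("run1_base.bag") as bag:
    for topic, msg, t in bag.read_messages(topics=["/imu/data"]):
        print(t.to_sec(), msg)
```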


We propose a novel high-resolution dataset named “Dataset for Indian Road Scenarios (DIRS21)” for developing perception systems for advanced driver assistance systems.


Opportunity++ is a precisely annotated dataset designed to support AI and machine learning research focused on the multimodal perception and learning of human activities (e.g. short actions, gestures, modes of locomotion, higher-level behavior).


We design a solution for coordinated localization between two unmanned aerial vehicles (UAVs) using radio and camera perception, addressing the problem of Global Positioning System (GPS) failure or unavailability on a UAV. Our approach allows one UAV with a functional GPS unit to coordinate the localization of another UAV whose GPS is compromised or missing. The solution combines sensor fusion with coordinated wireless communication.


There is an industry gap for publicly available electric utility infrastructure imagery. The Electric Power Research Institute (EPRI) is filling this gap to support public and private sector AI innovation. This dataset consists of ~30,000 images of overhead distribution infrastructure. These images have been anonymized, reviewed, and scrubbed of EXIF metadata. They are unlabeled and contain no annotations. EPRI intends to label these data to support its own research activities and will periodically update this dataset as labels are created.
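As a quick sanity check, the EXIF scrubbing can be verified with Pillow (a minimal sketch; the filename is hypothetical):

```python
from PIL import Image

# Confirm that a downloaded image carries no EXIF metadata.
img = Image.open("distribution_image_00001.jpg")
exif = img.getexif()
print(dict(exif) if exif else "No EXIF data found.")
```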

Instructions: 

These images are not labeled or annotated; EPRI will update this dataset periodically as labels are created. If you have annotations you'd like to contribute, please send them, with a description of your labeling approach, to ai@epri.com.


Also, if you see anything in the imagery that looks concerning, please send the image and image number to ai@epri.com.


To study the driver's behavior in real traffic situations, we conducted experiments using an instrumented vehicle, which comprises:

(i) a camera, installed above the vehicle's side window and oriented toward the driver, and (ii) a Mobile Digital Video Recorder (MDVR).


MI3

Surveillance video captured with a multi-intensity infrared illuminator.

Ground truth (GT): bounding boxes of 'person' in channels 2, 4, and 6, following the Pascal VOC format.
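Since the ground truths follow the Pascal VOC format, they can be parsed with the Python standard library (a minimal sketch; the annotation filename is hypothetical):

```python
import xml.etree.ElementTree as ET

# Extract 'person' bounding boxes from one Pascal VOC annotation file.
root = ET.parse("ch2_frame_000001.xml").getroot()
for obj in root.iter("object"):
    if obj.findtext("name") == "person":
        box = obj.find("bndbox")
        coords = [int(float(box.findtext(k)))
                  for k in ("xmin", "ymin", "xmax", "ymax")]
        print(coords)
```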


The Dasha River dataset was collected by a USV sailing along the Dasha River in Shenzhen, China. Visual images in the dataset were extracted from two videos taken from the USV's perspective, with a resolution of 1920×1080 pixels. In total, 360 images were obtained after screening, and all labels were manually annotated.


A new generation of computer vision, namely event-based or neuromorphic vision, provides a new paradigm for capturing visual data and for the way such data is processed. Event-based vision is a state-of-the-art technology in robot vision, particularly promising for visual navigation tasks in both mobile robots and drones. Because event-based vision relies on a highly novel type of visual sensor, only a few datasets aimed at visual navigation tasks are publicly available.

Instructions: 

The dataset includes the following sequences:

  • 01_winter_forest - Daytime, No wind, Clear weather, Snowy scenery, Closed loop, Forest trail
  • 02_winter_forest - Daytime, No wind, Clear weather, Snowy scenery, Closed loop, Forest trail
  • 03_winter_parking_lot - Daytime, No wind, Clear weather, Snowy scenery, Closed loop, Asphalt road
  • 04_winter_bush_rows - Daytime, No wind, Snowy scenery, Closed loop, Shrubland
  • 05_winter_bush_rows - Daytime, No wind, Snowy scenery, Closed loop, Shrubland
  • 06_winter_greenhouse_complex - Daytime, No wind, Snowy scenery, Closed loop, Cattle farm feed table
  • 07_winter_greenhouse_complex - Daytime, No wind, Snowy scenery, Closed loop, Cattle farm feed table
  • 08_winter_orchard - Daytime, No wind, Snowy scenery, Closed loop, Orchard
  • 09_winter_orchard - Daytime, No wind, Snowy scenery, Closed loop, Orchard
  • 10_winter_farm - Daytime, No wind, Snowy scenery, Closed loop, Cattle farm feed table
  • 11_winter_farm - Daytime, No wind, Snowy scenery, Closed loop, Cattle farm feed table
  • 12_summer_bush_rows - Daytime, Mild wind, Closed loop, Shrubland
  • 13_summer_bush_rows - Daytime, Mild wind, Closed loop, Shrubland
  • 14_summer_farm - Daytime, Mild wind, Closed loop, Shrubland, Tilled field
  • 15_summer_farm - Daytime, Mild wind, Closed loop, Shrubland, Tilled field
  • 16_summer_orchard - Daytime, Mild wind, Closed loop, Shrubland, Orchard
  • 17_summer_orchard - Daytime, Mild wind, Closed loop, Shrubland, Orchard
  • 18_summer_garden - Daytime, Mild wind, Closed loop, Pine coppice, Winter wheat sowing, Winter rapeseed
  • 19_summer_garden - Daytime, Mild wind, Closed loop, Pine coppice, Winter wheat sowing, Winter rapeseed
  • 20_summer_farm - Daytime, Mild wind, Closed loop, Orchard, Tilled field, Cows tethered in pasture
  • 21_summer_farm - Daytime, Mild wind, Closed loop, Orchard, Tilled field, Cows tethered in pasture
  • 22_summer_hangar - Daytime, No wind, Closed loop
  • 23_summer_hangar - Daytime, No wind, Closed loop
  • 24_summer_hangar - Daytime, No wind, Closed loop
  • 25_summer_puddles - Daytime, No wind, Closed loop, Meadow, grass up to 30 cm
  • 26_summer_green_meadow - Daytime, No wind, Closed loop, Meadow, grass up to 30 cm
  • 27_summer_green_meadow - Daytime, No wind, Closed loop, Meadow, grass up to 30 cm
  • 28_summer_grooved_field - Daytime, No wind, Closed loop, Meadow, grass up to 100 cm, Furrows (longitudinally and transversely)
  • 29_summer_grooved_field - Daytime, No wind, Closed loop, Meadow, grass up to 100 cm, Furrows (longitudinally and transversely)
  • 30_summer_grooved_field - Daytime, No wind, Closed loop, Furrows (longitudinally and transversely)
  • 31_summer_grooved_field - Daytime, No wind, Closed loop, Furrows (longitudinally and transversely)
  • 32_summer_cereal_field - Daytime, No wind, Closed loop, Meadow, grass up to 100 cm
  • 33_summer_cereal_field - Daytime, No wind, Closed loop, Meadow, grass up to 100 cm
  • 34_summer_forest - Daytime, No wind, Closed loop, Forest trail
  • 35_summer_forest - Daytime, No wind, Closed loop, Forest trail
  • 36_summer_forest - Daytime, No wind, Closed loop, Forest trail, Forest surface - moss, branches, stumps
  • 37_summer_forest - Daytime, No wind, Closed loop, Forest trail, Forest surface - moss, branches, stumps
  • 38_summer_dark_parking_lot - Twilight, No wind, Closed loop, Asphalt road, Lawn
  • 39_summer_dark_parking_lot - Twilight, No wind, Closed loop, Asphalt road, Lawn
  • 40_summer_parking_lot - Daytime, Mild wind, Closed loop, Asphalt road, Lawn
  • 41_summer_greenhouse - Daytime, Closed loop, Greenhouse
  • 42_summer_greenhouse - Daytime, Closed loop, Greenhouse

Each sequence contains the following separately downloadable files:

  • <..sequence_id..>_video.mp4 – provides an overview of the sequence data (for the DVS and RGB-D sensors).
  • <..sequence_id..>_data.tar.gz – entire data sequence in raw format (AEDAT2.0 for DVS, images for RGB-D, point clouds in .pcd files for LIDAR, and IMU .csv files with original sensor timestamps). Timestamp conversion formulas are available.
  • <..sequence_id..>_rawcalib_data.tar.gz – recorded fragments that can be used to perform the calibration independently (intrinsic, extrinsic and time alignment).
  • <..sequence_id..>_rosbags.tar.gz – main sequence in ROS bag format. All sensor timestamps are aligned to the DVS clock with an accuracy better than 1 ms.

The contents of each archive are described below.

Raw format data

The archive <..sequence_id..>_data.tar.gz contains the following files and folders:

  • ./meta-data/ - all the useful information about the sequence
  • ./meta-data/meta-data.md - detailed information about the sequence, sensors, files, and data formats
  • ./meta-data/cad_model.pdf - sensor placement
  • ./meta-data/<...>_timeconvs.json - coefficients for the timestamp conversion formulas (see the sketch after this list)
  • ./meta-data/ground-truth/ - movement ground-truth data, computed with three different Lidar-SLAM algorithms (Cartographer, HDL-Graph, LeGo-LOAM)
  • ./meta-data/calib-params/ - intrinsic and extrinsic calibration parameters
  • ./recording/ - main sequence
  • ./recording/dvs/ - DVS events and IMU data
  • ./recording/lidar/ - Lidar point clouds and IMU data
  • ./recording/realsense/ - Realsense camera RGB, Depth frames, and IMU data
  • ./recording/sensorboard/ - environmental sensor data (temperature, humidity, air pressure)
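As an illustration of how the conversion coefficients might be applied, the sketch below assumes a linear mapping and hypothetical JSON keys; meta-data.md documents the actual formula and file layout:

```python
import json

# Hedged sketch: convert raw sensor timestamps to the aligned time base.
# The linear form t_aligned = a * t_raw + b and the keys "a"/"b" are
# assumptions; consult meta-data.md for the real formula and layout.
with open("meta-data/lidar_timeconvs.json") as f:  # hypothetical filename
    coef = json.load(f)

def convert(t_raw):
    return coef["a"] * t_raw + coef["b"]

print(convert(1.54e9))  # raw timestamp in seconds (illustrative)
```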

Calibration data

The <..sequence_id..>_rawcalib_data.tar.gz archive contains the following files and folders:

  • ./imu_alignments/ - IMU recordings of the platform lifting before and after the main sequence (can be used for custom timestamp alignment)
  • ./solenoids/ - IMU recordings of the solenoid vibrations before and after the main sequence (can be used for custom timestamp alignment)
  • ./lidar_rs/ - Lidar vs. Realsense camera extrinsic calibration recordings (both sensors observing a spherical object, i.e., a ball)
  • ./dvs_rs/ - DVS and Realsense camera intrinsic and extrinsic calibration frames (checkerboard pattern)

ROS Bag format data

There are six rosbag files for each scene; their contents are as follows (a minimal reading example follows the list):

  • <..sequence_id..>_dvs.bag (topics: /dvs/camera_info, /dvs/events, /dvs/imu, with corresponding message types: sensor_msgs/CameraInfo, dvs_msgs/EventArray, sensor_msgs/Imu).
  • <..sequence_id..>_lidar.bag (topics: /lidar/imu/acc, /lidar/imu/gyro, /lidar/pointcloud, with corresponding message types: sensor_msgs/Imu, sensor_msgs/Imu, sensor_msgs/PointCloud2).
  • <..sequence_id..>_realsense.bag (topics: /realsense/camera_info, /realsense/depth, /realsense/imu/acc, /realsense/imu/gyro, /realsense/rgb, /tf, with corresponding message types: sensor_msgs/CameraInfo, sensor_msgs/Image, sensor_msgs/Imu, sensor_msgs/Imu, sensor_msgs/Image, tf2_msgs/TFMessage).
  • <..sequence_id..>_sensorboard.bag (topics: /sensorboard/air_pressure, /sensorboard/relative_humidity, /sensorboard/temperature, with corresponding message types: sensor_msgs/FluidPressure, sensor_msgs/RelativeHumidity, sensor_msgs/Temperature).
  • <..sequence_id..>_trajectories.bag (topics: /cartographer, /hdl, /lego_loam, with corresponding message types: geometry_msgs/PoseStamped, geometry_msgs/PoseStamped, geometry_msgs/PoseStamped).
  • <..sequence_id..>_data_for_realsense_lidar_calibration.bag (topics: /lidar/pointcloud, /realsense/camera_info, /realsense/depth, /realsense/rgb, /tf, with corresponding message types: sensor_msgs/PointCloud2, sensor_msgs/CameraInfo, sensor_msgs/Image, sensor_msgs/Image, tf2_msgs/TFMessage).
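For example, the DVS event stream can be read directly from its bag with the rosbag Python API (a minimal sketch; the sequence id is illustrative):

```python
import rosbag

# Iterate over DVS events in a per-sensor bag (dvs_msgs/EventArray messages).
with rosbag.Bag("01_winter_forest_dvs.bag") as bag:
    for _, msg, _ in bag.read_messages(topics=["/dvs/events"]):
        for e in msg.events:
            print(e.x, e.y, e.ts.to_sec(), e.polarity)
```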

Computer vision systems are commonly used to design touch-less human-computer interfaces (HCI) based on dynamic hand gesture recognition (HGR), which has a wide range of applications in domains such as gaming, multimedia, automotive, and home automation. However, automatic HGR is still a challenging task, mostly because of the diversity in how people perform the gestures. In addition, publicly available hand gesture datasets are scarce, the gestures are often not acquired with sufficient image quality, and the gestures are not always correctly performed.

