This is a unique energy-aware navigation dataset collected at the Canadian Space Agency’s Mars Emulation Terrain (MET) in Saint-Hubert, Quebec, Canada. It consists of raw and post-processed sensor measurements collected by our rover in addition to georeferenced aerial maps of the MET (colour mosaic, elevation model, slope and aspect maps). The data are available for download in human-readable format and rosbag (.bag) format. Python data fetching and plotting scripts and ROS-based visualization tools are also provided.

Instructions: 

The entire dataset is separated into six different runs, each covering a different section of the MET at a different time. The data were collected on September 4, 2018 between 17:00 and 19:00 (Eastern Daylight Time).

To avoid extremely large files, the rosbag data of every run was broken down into two parts: “runX_clouds_only.bag” and “runX_base.bag”. The former contains only the point clouds generated from the omnidirectional camera raw images after data collection; the latter contains all the raw data and the remainder of the post-processed data. Both rosbags share consistent timestamps and can be merged, for example with bagedit. A similar breakdown was followed for the human-readable data.
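
The merge can also be done with a few lines of the rosbag Python API. Below is a minimal sketch, assuming ROS1 and run 1 (adjust the file names to the run you downloaded):

```python
# Minimal sketch: merge the two per-run bags into a single bag with the
# rosbag Python API (an alternative to bagedit). Assumes ROS1 and run 1.
import rosbag

with rosbag.Bag("run1_merged.bag", "w") as merged:
    for source in ("run1_base.bag", "run1_clouds_only.bag"):
        with rosbag.Bag(source, "r") as bag:
            for topic, msg, t in bag.read_messages():
                # Timestamps are consistent across both bags, so messages can
                # simply be rewritten; playback orders them via the bag index.
                merged.write(topic, msg, t)
```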

Aside from point clouds, the post-processed data of every run includes a blended cylindrical panorama made from the omnidirectional sensor images, planar rover velocity estimates computed from wheel encoder data, and an estimated global trajectory obtained by fusing GPS and stereo imagery from cameras 0 and 1 of the omnidirectional sensor with VINS-Fusion, later combined with the raw IMU data. Global sun vectors and relative sun vectors (with respect to the rover’s base frame) were also calculated using the Pysolar library, which additionally provided a clear-sky direct irradiance estimate alongside every pyranometer measurement collected. Lastly, the set of georeferenced aerial maps, the transforms between the different rover and sensor frames, and the intrinsic parameters of each camera are also available.
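
For reference, the sun geometry can be reproduced with a few lines of Pysolar. The sketch below is illustrative only: the MET coordinates are approximate and the east-north-up (ENU) vector convention is an assumption, so the dataset's own sun vectors remain the authoritative values.

```python
# Illustrative sketch: recompute a global sun vector and a clear-sky direct
# irradiance estimate with Pysolar. Coordinates are approximate (Saint-Hubert,
# QC) and the ENU convention is an assumption, not the dataset's definition.
import math
from datetime import datetime, timedelta, timezone

from pysolar.solar import get_altitude, get_azimuth
from pysolar.radiation import get_radiation_direct

lat, lon = 45.518, -73.393             # approximate MET location
edt = timezone(timedelta(hours=-4))    # Eastern Daylight Time
when = datetime(2018, 9, 4, 17, 30, tzinfo=edt)

alt = get_altitude(lat, lon, when)     # solar elevation above horizon [deg]
az = get_azimuth(lat, lon, when)       # degrees east of north (recent Pysolar)

# Unit sun vector in a local east-north-up (ENU) frame.
alt_r, az_r = math.radians(alt), math.radians(az)
sun_enu = (math.cos(alt_r) * math.sin(az_r),
           math.cos(alt_r) * math.cos(az_r),
           math.sin(alt_r))

irradiance = get_radiation_direct(when, alt)   # clear-sky direct [W/m^2]
print(sun_enu, irradiance)
```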

We strongly recommend that interested users visit the project's home page, which provides additional information about each run (such as its physical length and duration). All download links on the home page have been updated to pull from the IEEE DataPort servers. A more detailed description of the test environment and hardware configuration is provided in the project's official journal publication.

Once the data products of the desired run are downloaded, the project's GitHub repository provides a lightweight ROS package and Python utilities to fetch the desired data streams from the rosbags.
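
If you prefer not to use those utilities, the plain rosbag API is enough for quick inspection. A minimal sketch, where the topic name is a placeholder rather than the dataset's actual naming:

```python
# Minimal sketch: list the topics in a downloaded bag and read one stream.
# "/imu/data" is a placeholder topic name, not necessarily the dataset's.
import rosbag

with rosbag.Bag("run1_base.bag", "r") as bag:
    print(bag.get_type_and_topic_info().topics.keys())   # available topics
    for topic, msg, t in bag.read_messages(topics=["/imu/data"]):
        print(t.to_sec(), msg)
```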

Opportunity++ is a precisely annotated dataset designed to support AI and machine learning research focused on the multimodal perception and learning of human activities (e.g. short actions, gestures, modes of locomotion, higher-level behavior).

We introduce a novel dataset of bee piping audio signals, built by collecting 44 different recordings published by various beekeepers on the YouTube platform. Each recording has a duration varying from 2 to 13 seconds and is annotated, according to the beekeeper's comment, as Tooting or Quacking. We extracted the audio using “YouTube soundtrack extraction” from 14 distinct videos; the signal is stored without loss of quality in a WAVE file with a sampling frequency of F_s = 22.05 kHz and a sample precision of 16 bits.
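
A recording can be loaded and checked against the stated format with standard tooling; a minimal sketch, where the file name is a placeholder:

```python
# Minimal sketch: load one annotated recording and verify the stated format
# (22.05 kHz, 16-bit WAVE). The file name is a placeholder.
from scipy.io import wavfile

fs, signal = wavfile.read("tooting_01.wav")
assert fs == 22050                  # sampling frequency F_s = 22.05 kHz
assert signal.dtype == "int16"      # 16-bit sample precision
print(f"{len(signal) / fs:.1f} s of audio")
```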

The groove is a key structure of high-performance integral cutting tools. It has to be manufactured on a 5-axis grinding machine due to its complex spatial geometry and hard materials. The crucial manufacturing parameters (CMP) are the grinding wheel positions and geometries. However, solving the CMP for a designed groove is a challenging problem. Traditional trial-and-error or analytical methods suffer from drawbacks such as being time-consuming, limited in applicability, and low in accuracy.

Computer vision and image processing have made significant progress in many real-world applications, including environmental monitoring and protection. Recent studies have shown that computer vision and image processing can be used to quantify water turbidity, a crucial physical parameter in water quality assessment. This paper presents a procedure to determine water turbidity using deep learning methods, specifically a convolutional neural network (CNN). First, water samples were placed inside a dark cabin before digital images of the samples were captured with a smartphone camera.
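
As a rough illustration of the kind of model the procedure describes (image in, turbidity value out), a small Keras CNN regressor might look as follows; this is a generic sketch, not the paper's architecture:

```python
# Generic sketch of a small CNN turbidity regressor in Keras; the layer
# sizes and input resolution are assumptions, not the paper's design.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(128, 128, 3)),        # resized smartphone images
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1),                          # predicted turbidity (e.g. NTU)
])
model.compile(optimizer="adam", loss="mse")
```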

A novel two-electrode, frequency-scan electrical impedance tomography (EIT) system for gesture recognition.

The research incorporated an extended cohort monitoring campaign, validation of an existing exposure model, and development of a predictive model for COPD exacerbations evaluated against historical electronic health records. A miniature personal sensor unit was manufactured for the study from a prototype developed at the University of Cambridge. The units monitored GPS position, temperature, humidity, CO, NO, NO2, O3, PM10 and PM2.5. Three 6-month cohort monitoring campaigns were carried out, each including 65 COPD patients.

Evaluation data from the experiments in the paper "Comparison of Anomaly Detectors: Context Matters".

This ontology supports the identification of common algae involved in harmful algal bloom events. It serves as a guide for determining the algae species to be identified by the expert system.

Classification of COVID-19 severity using scRNA-Seq
