A new generation of computer vision, namely event-based or neuromorphic vision, provides a new paradigm for capturing and processing visual data. Event-based vision is a state-of-the-art robot vision technology, and it is particularly promising for visual navigation tasks on mobile robots and drones. Because event-based vision relies on a novel type of visual sensor, only a few datasets aimed at visual navigation tasks are publicly available.

Instructions: 

The dataset includes the following sequences:

  • 01_winter_forest - Daytime, No wind, Clear weather, Snowy scenery, Closed loop, Forest trail
  • 02_winter_forest - Daytime, No wind, Clear weather, Snowy scenery, Closed loop, Forest trail
  • 03_winter_parking_lot - Daytime, No wind, Clear weather, Snowy scenery, Closed loop, Asphalt road
  • 04_winter_bush_rows - Daytime, No wind, Snowy scenery, Closed loop, Shrubland
  • 05_winter_bush_rows - Daytime, No wind, Snowy scenery, Closed loop, Shrubland
  • 06_winter_greenhouse_complex - Daytime, No wind, Snowy scenery, Closed loop, Cattle farm feed table
  • 07_winter_greenhouse_complex - Daytime, No wind, Snowy scenery, Closed loop, Cattle farm feed table
  • 08_winter_orchard - Daytime, No wind, Snowy scenery, Closed loop, Orchard
  • 09_winter_orchard - Daytime, No wind, Snowy scenery, Closed loop, Orchard
  • 10_winter_farm - Daytime, No wind, Snowy scenery, Closed loop, Cattle farm feed table
  • 11_winter_farm - Daytime, No wind, Snowy scenery, Closed loop, Cattle farm feed table
  • 12_summer_bush_rows - Daytime, Mild wind, Closed loop, Shrubland
  • 13_summer_bush_rows - Daytime, Mild wind, Closed loop, Shrubland
  • 14_summer_farm - Daytime, Mild wind, Closed loop, Shrubland, Tilled field
  • 15_summer_farm - Daytime, Mild wind, Closed loop, Shrubland, Tilled field
  • 16_summer_orchard - Daytime, Mild wind, Closed loop, Shrubland, Orchard
  • 17_summer_orchard - Daytime, Mild wind, Closed loop, Shrubland, Orchard
  • 18_summer_garden - Daytime, Mild wind, Closed loop, Pine coppice, Winter wheat sowing, Winter rapeseed
  • 19_summer_garden - Daytime, Mild wind, Closed loop, Pine coppice, Winter wheat sowing, Winter rapeseed
  • 20_summer_farm - Daytime, Mild wind, Closed loop, Orchard, Tilled field, Cows tethered in pasture
  • 21_summer_farm - Daytime, Mild wind, Closed loop, Orchard, Tilled field, Cows tethered in pasture
  • 22_summer_hangar - Daytime, No wind, Closed loop
  • 23_summer_hangar - Daytime, No wind, Closed loop
  • 24_summer_hangar - Daytime, No wind, Closed loop
  • 25_summer_puddles - Daytime, No wind, Closed loop, Meadow, grass up to 30 cm
  • 26_summer_green_meadow - Daytime, No wind, Closed loop, Meadow, grass up to 30 cm
  • 27_summer_green_meadow - Daytime, No wind, Closed loop, Meadow, grass up to 30 cm
  • 28_summer_grooved_field - Daytime, No wind, Closed loop, Meadow, grass up to 100 cm, Furrows (longitudinally and transversely)
  • 29_summer_grooved_field - Daytime, No wind, Closed loop, Meadow, grass up to 100 cm, Furrows (longitudinally and transversely)
  • 30_summer_grooved_field - Daytime, No wind, Closed loop, Furrows (longitudinally and transversely)
  • 31_summer_grooved_field - Daytime, No wind, Closed loop, Furrows (longitudinally and transversely)
  • 32_summer_cereal_field - Daytime, No wind, Closed loop, Meadow, grass up to 100 cm
  • 33_summer_cereal_field - Daytime, No wind, Closed loop, Meadow, grass up to 100 cm
  • 34_summer_forest - Daytime, No wind, Closed loop, Forest trail
  • 35_summer_forest - Daytime, No wind, Closed loop, Forest trail
  • 36_summer_forest - Daytime, No wind, Closed loop, Forest trail, Forest surface - moss, branches, stumps
  • 37_summer_forest - Daytime, No wind, Closed loop, Forest trail, Forest surface - moss, branches, stumps
  • 38_summer_dark_parking_lot - Twilight, No wind, Closed loop, Asphalt road, Lawn
  • 39_summer_dark_parking_lot - Twilight, No wind, Closed loop, Asphalt road, Lawn
  • 40_summer_parking_lot - Daytime, Mild wind, Closed loop, Asphalt road, Lawn
  • 41_summer_greenhouse - Daytime, Closed loop, Greenhouse
  • 42_summer_greenhouse - Daytime, Closed loop, Greenhouse

Each sequence contains the following separately downloadable files:

  • <..sequence_id..>_video.mp4 – provides an overview of the sequence data (for the DVS and RGB-D sensors).
  • <..sequence_id..>_data.tar.gz – the entire data sequence in raw format (DVS events in AEDAT 2.0, RGB-D images, LIDAR point clouds as PCD files, and IMU CSV files with original sensor timestamps). Timestamp conversion formulas are available.
  • <..sequence_id..>_rawcalib_data.tar.gz – recorded fragments that can be used to perform the calibration independently (intrinsic, extrinsic, and time alignment).
  • <..sequence_id..>_rosbags.tar.gz – the main sequence in ROS bag format. All sensor timestamps are aligned with the DVS with sub-millisecond accuracy.

The contents of each archive are described below.

Raw format data

The archive <..sequence_id..>_data.tar.gz contains the following files and folders:

  • ./meta-data/ - all the useful information about the sequence
  • ./meta-data/meta-data.md - detailed information about the sequence, sensors, files, and data formats
  • ./meta-data/cad_model.pdf - sensor placement
  • ./meta-data/<...>_timeconvs.json - coefficients for the timestamp conversion formulas (a minimal conversion sketch follows this list)
  • ./meta-data/ground-truth/ - movement ground-truth data, computed with three different LiDAR SLAM algorithms (Cartographer, HDL-Graph, LeGO-LOAM)
  • ./meta-data/calib-params/ - intrinsic and extrinsic calibration parameters
  • ./recording/ - main sequence
  • ./recording/dvs/ - DVS events and IMU data
  • ./recording/lidar/ - Lidar point clouds and IMU data
  • ./recording/realsense/ - Realsense camera RGB, Depth frames, and IMU data
  • ./recording/sensorboard/ - environmental sensor data (temperature, humidity, air pressure)
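The timestamp conversion formulas themselves are distributed with each sequence's metadata. The sketch below only illustrates the mechanics, assuming a linear clock model (slope and offset per sensor) and a hypothetical JSON layout; consult meta-data.md for the actual schema and field names.

```python
import json

# Hedged sketch: apply a per-sensor linear clock correction. The JSON schema
# ({"<sensor>": {"slope": ..., "offset": ...}}) is an assumption, not the
# documented format; see ./meta-data/meta-data.md for the real one.
with open("meta-data/example_timeconvs.json") as f:  # hypothetical filename
    convs = json.load(f)

def to_dvs_time(sensor: str, t_raw: float) -> float:
    """Map a raw sensor timestamp onto the DVS clock (assumed linear model)."""
    c = convs[sensor]
    return c["slope"] * t_raw + c["offset"]
```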

Calibration data

The <..sequence_id..>_rawcalib_data.tar.gz archive contains the following files and folders:

  • ./imu_alignments/ - IMU recordings of the platform lifting before and after the main sequence (can be used for custom timestamp alignment)
  • ./solenoids/ - IMU recordings of the solenoid vibrations before and after the main sequence (can be used for custom timestamp alignment)
  • ./lidar_rs/ - Lidar-to-Realsense extrinsic calibration recordings, captured by showing both sensors a spherical object (a ball)
  • ./dvs_rs/ - DVS and Realsense camera intrinsic and extrinsic calibration frames (checkerboard pattern); a minimal calibration sketch follows this list
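The checkerboard fragments can be run through a standard calibration pipeline. Below is a minimal intrinsic-calibration sketch using OpenCV; the board geometry (9x6 inner corners) and the file layout under ./dvs_rs/ are assumptions, so substitute the values that match the recorded pattern.

```python
import glob

import cv2
import numpy as np

PATTERN = (9, 6)  # inner corners per row/column; an assumption, not documented
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2)

obj_points, img_points, image_size = [], [], None
for path in sorted(glob.glob("dvs_rs/*.png")):  # hypothetical file layout
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        obj_points.append(objp)
        img_points.append(corners)
        image_size = gray.shape[::-1]

assert obj_points, "no checkerboard detections found"
# Intrinsics and distortion for one camera; repeat per sensor, then solve the
# DVS/Realsense extrinsics (e.g. with cv2.stereoCalibrate).
rms, K, dist, _, _ = cv2.calibrateCamera(obj_points, img_points, image_size,
                                         None, None)
print("RMS reprojection error:", rms)
```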

ROS Bag format data

There are six rosbag files for each scene; their contents are as follows:

  • <..sequence_id..>_dvs.bag (topics: /dvs/camera_info, /dvs/events, /dvs/imu; message types, respectively: sensor_msgs/CameraInfo, dvs_msgs/EventArray, sensor_msgs/Imu).
  • <..sequence_id..>_lidar.bag (topics: /lidar/imu/acc, /lidar/imu/gyro, /lidar/pointcloud; message types, respectively: sensor_msgs/Imu, sensor_msgs/Imu, sensor_msgs/PointCloud2).
  • <..sequence_id..>_realsense.bag (topics: /realsense/camera_info, /realsense/depth, /realsense/imu/acc, /realsense/imu/gyro, /realsense/rgb, /tf; message types, respectively: sensor_msgs/CameraInfo, sensor_msgs/Image, sensor_msgs/Imu, sensor_msgs/Imu, sensor_msgs/Image, tf2_msgs/TFMessage).
  • <..sequence_id..>_sensorboard.bag (topics: /sensorboard/air_pressure, /sensorboard/relative_humidity, /sensorboard/temperature; message types, respectively: sensor_msgs/FluidPressure, sensor_msgs/RelativeHumidity, sensor_msgs/Temperature).
  • <..sequence_id..>_trajectories.bag (topics: /cartographer, /hdl, /lego_loam; message type for all three: geometry_msgs/PoseStamped).
  • <..sequence_id..>_data_for_realsense_lidar_calibration.bag (topics: /lidar/pointcloud, /realsense/camera_info, /realsense/depth, /realsense/rgb, /tf; message types, respectively: sensor_msgs/PointCloud2, sensor_msgs/CameraInfo, sensor_msgs/Image, sensor_msgs/Image, tf2_msgs/TFMessage).
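A quick way to consume these bags is the ROS1 Python API. The sketch below streams the DVS event topic and counts events per polarity; the bag filename is illustrative, and the per-event fields (x, y, ts, polarity) follow the standard dvs_msgs/EventArray definition listed above.

```python
import rosbag  # ROS1 Python API (requires a ROS environment)

# Hedged sketch: iterate events in a <..sequence_id..>_dvs.bag file.
on_events = off_events = 0
with rosbag.Bag("01_winter_forest_dvs.bag") as bag:  # illustrative filename
    for _, msg, _ in bag.read_messages(topics=["/dvs/events"]):
        for e in msg.events:  # dvs_msgs/Event: x, y, ts, polarity
            if e.polarity:
                on_events += 1
            else:
                off_events += 1
print("ON events:", on_events, "OFF events:", off_events)
```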

Computer vision systems are commonly used to design touch-less human-computer interfaces (HCI) based on dynamic hand gesture recognition (HGR), which has a wide range of applications in domains such as gaming, multimedia, automotive, and home automation. However, automatic HGR is still a challenging task, mostly because of the diversity in how people perform gestures. In addition, publicly available hand gesture datasets are scarce, the gestures are often not acquired with sufficient image quality, and the gestures are not always correctly performed.


(Work in progress)

This dataset contains the augmented images and the images & segmentation maps for seven handwashing steps, six of which are prescribed WHO handwashing steps.

This work is based on a sample handwashing video dataset uploaded by Kaggle user real-timeAR.


We provide two folders: 

(1) The shallow depth-of-field image dataset folder consists of 27 folders, numbered 1 to 27.

Each of folders 1-27 contains two test images and two Word files. img1 is the shallow depth-of-field image with the best focus, taken with a 300 mm long-focus lens, and img2 is the overall blurred image.
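For batch evaluation it can help to walk the numbered folders programmatically. A minimal sketch, assuming the images are readable by OpenCV; the root directory name and the image file extensions are not stated here, so both are hypothetical.

```python
from pathlib import Path

import cv2

root = Path("shallow_dof_dataset")  # hypothetical root directory name
for folder in sorted(root.iterdir(), key=lambda p: int(p.name)):
    # img1: best-focus shallow depth-of-field shot; img2: overall blurred shot.
    img1 = cv2.imread(str(next(folder.glob("img1.*"))))
    img2 = cv2.imread(str(next(folder.glob("img2.*"))))
```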

Instructions: 

The Readme contains a detailed description of the database and the experimental results.


Network attacks are increasing in both frequency and intensity with the rapid growth of Internet of Things (IoT) devices. Denial of service (DoS) and distributed denial of service (DDoS) attacks have recently been reported as the most frequent attacks in IoT networks. Traditional security solutions such as firewalls and intrusion detection systems are unable to detect complex DoS and DDoS attacks, since most of them filter normal and attack traffic based on static, predefined rules.


Liver tumor segmentation.


Here we present recordings from a new high-throughput instrument to optogenetically manipulate neural activity in moving worms.

Instructions: 

Datasets used in the publication:

1) List of all the datasets used to generate figures 2 and 3:

  • 20210614_RunHeadandTailRailswithDelays_AML470_0-20-40-60-80intensity

  • 20210615_RunHeadandTailRailswithDelays_AML470_0-20-40-60-80intensity

  • 20210616_RunHeadandTailRailswithDelays_AML470_0-20-40-60-80intensity

  • 20210617_RunHeadandTailRailswithDelays_AML470_0-20-40-60-80intensity

2) List of all the datasets used to generate figure S3:

  • 20210618_RunHeadandTailRailswithDelays_AML67_0-20-40-60-80intensity

3) List of all the dataset folders used to generate figure 4 and table 1:

a) AML67 dataset, open loop: 

  • 20210624_RunFullWormRails_Sandeep_AML67_10ulRet_red,

  • 20210614_RunFullWormRails_Sandeep_AML67_10ulRet_red,

  • 20200902_RunFullWormRails_Sandeep_AML67_10ulret.

b) AML67 dataset, closed loop: 

  • 20210624_RunRailsTriggeredByTurning_Sandeep_AML67_10ulRet_red,

  • 20210614_RunRailsTriggeredByTurning_Sandeep_AML67_10ulRet_red,

  • 20200902_RunRailsTriggeredByTurning_Sandeep_AML67_10ulRet.

4) List of all the dataset folders used to generate figure 4:

a) AML470 dataset, open loop: 

  • 20210723_RunFullWormRails_Sandeep_AKS_483.7.e_mec4_Chrimson_10ulRet_red,

  • 20210721_RunFullWormRails_Sandeep_AKS_483.7.e_mec4_Chrimson_10ulRet_red,

  • 20210720_RunFullWormRails_Sandeep_AKS_483.7.e_mec4_Chrimson_10ulRet_red.

b) AML470 dataset, closed loop: 

  • 20210723_RunRailsTriggeredByTurning_Sandeep_AKS_483.7.e_mec4_Chrimson_10ulRet_red,

  • 20210721_RunRailsTriggeredByTurning_Sandeep_AKS_483.7.e_mec4_Chrimson_10ulRet_red, 

  • 20210720_RunRailsTriggeredByTurning_Sandeep_AKS_483.7.e_mec4_Chrimson_10ulRet_red/AML470_20210720_turns_0uW_3s_new3.mat.

Instructions for accessing files: 

 

1) Here are the details of the naming convention used in the folder names of the datasets used to generate figures 2, 3, and S3.

Let us use the folder name “20210614_RunHeadandTailRailswithDelays_AML470_0-20-40-60-80intensity” as an example. The different components of the names are:

a) Date: In the above example, this dataset was collected on 20210614 (YYYYMMDD).

b) Experiment type: "RunHeadandTailRailswithDelays" represents the data collected by stimulating head, tail, or both.

c) Name of the strain: The above example shows the dataset collected from "AML470" strain. 

d) Experiment-specific information: The tag “0-20-40-60-80intensity” indicates that stimulation intensities of 0, 20, 40, 60, and 80 uW were used in this dataset.

Moreover, inside each folder you will see many subfolders with a date and time stamp. For example, a subfolder of the above folder is Data20210614_141921_BoxA-PC, whose name gives the time stamp at which the data was recorded (DataYYYYMMDD_HHMMSS) followed by the name of the experimental box (BoxA-PC, BoxB-PC, BoxC-PC, or BoxD-PC).

2) Here are the details of the naming convention used in the folder names of the datasets used to generate figure 4 and table 1.

Let us use the folder name “20210624_RunRailsTriggeredByTurning_Sandeep_AML67_10ulRet_red” as an example. The different components of the names are:

a) Date: In the above example, this dataset was collected on 20210624 (YYYYMMDD).

b) Experiment type: "RunRailsTriggeredByTurning" represents the data collected in closed loop protocol whereas "RunFullWormRails" represents the data collected in open loop protocol. Thus, the above example represents a dataset collected using closed loop protocol.

c) Name of the user: In the above example, the user is "Sandeep".

d) Name of the strain: The above example shows the dataset collected from "AML67" strain. Folders containing the datasets collected from AML470 will say "AKS_483.7.e_mec4_Chrimson".

e) ATR information: All the datasets used retinal on the OP50 plates to grow the worms, and hence have the tag "10ulRet".

f) Color of the stim: The "red" tag indicates that a red stimulus was delivered to the worms in this protocol.

Moreover, inside each folder you will see many subfolders with a date and time stamp. For example, a subfolder of the above folder is Data20210624_105852, whose name gives the time stamp at which the data was recorded (DataYYYYMMDD_HHMMSS).
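Both conventions can be split mechanically on underscores. A hedged sketch follows; note that AML470 strain tags such as "AKS_483.7.e_mec4_Chrimson" themselves contain underscores, so the trailing fields are kept joined rather than split further.

```python
def parse_folder_name(name: str) -> dict:
    """Split an experiment folder name per the conventions described above."""
    parts = name.split("_")
    info = {"date": parts[0], "experiment_type": parts[1]}
    if parts[1] == "RunHeadandTailRailswithDelays":  # figures 2, 3, and S3
        info["strain"], info["intensity_tag"] = parts[2], parts[3]
    else:  # figure 4 / table 1 convention
        info["protocol"] = ("closed loop"
                            if parts[1] == "RunRailsTriggeredByTurning"
                            else "open loop")
        info["user"] = parts[2]
        # Strain, ATR tag, and stim color; joined because strain names can
        # contain underscores.
        info["strain_and_tags"] = "_".join(parts[3:])
    return info

print(parse_folder_name(
    "20210624_RunRailsTriggeredByTurning_Sandeep_AML67_10ulRet_red"))
```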

Details of files inside each experimental folder:

The dataset is stored as the output of the real-time LabVIEW instrument, for maximum compression. It still needs to go through post-processing before further analysis.

 

Post-processing can be done by running the /ProcessDateDirectory.m MATLAB script from the code repository.

 

It is organized into date directories, which aggregate all the experiments collected on the same day. 

 

Each experiment is its own time-stamped folder within a date directory, and it contains the following files:

- camera_distortion.png contains camera spatial calibration information in the image metadata

- CameraFrames.mkv contains the raw camera images, compressed with H.265

- labview_parameters.csv lists the settings used by the instrument in the real-time study

- labview_tracks.mat contains the real-time tracking data in a MATLAB readable HDF5 format

- projector_to_camera_distortion.png contains the spatial calibration information that maps projector pixel space into camera pixel space

- tags.txt contains tagged information for the experiment and is used to organize and select experiments for analysis

- timestamps.mat contains timing information saved during the real-time experiments, including closed-loop lag.

- ConvertedProjectorFrames is a folder of PNG-compressed stimulus images converted to the camera's frame of reference.
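A couple of these files can be opened directly from Python. The folder name below is illustrative. Since labview_tracks.mat is stated to be MATLAB-readable HDF5, h5py can open it, although the dataset names inside are not documented here and must be discovered; decoding CameraFrames.mkv requires an FFmpeg-enabled OpenCV build.

```python
import cv2
import h5py

folder = "Data20210624_105852"  # illustrative time-stamped experiment folder

# List the HDF5 datasets stored in the tracking file before reading any.
with h5py.File(f"{folder}/labview_tracks.mat", "r") as f:
    f.visit(print)

# Grab the first raw camera frame from the H.265 stream.
cap = cv2.VideoCapture(f"{folder}/CameraFrames.mkv")
ok, frame = cap.read()
cap.release()
```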


This document describes the details of the BON Egocentric vision dataset. BON denotes the initials of the locations where the dataset was collected: Barcelona (Spain), Oxford (UK), and Nairobi (Kenya). BON comprises first-person video recorded while subjects were conducting common office activities. The preceding version of this dataset, the FPV-O dataset, has fewer subjects and covers only a single location (Barcelona). To develop a location-agnostic framework, data from multiple locations and/or office settings is essential.

Instructions: 

Instructions are available in the attached document.


To address the problem of online automatic inspection of drug liquid bottles on the production line, an implantable visual inspection system is designed, and an ensemble learning algorithm for detection is proposed based on multi-feature fusion. A tunnel structure is designed for the visual inspection system, which allows bottle inspection to be automated without changing the original processes and devices. A high-precision method is proposed for vision-based detection of drug liquid bottles.


Dataset associated with a paper in the 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems

"Talk the talk and walk the walk: Dialogue-driven navigation in unknown indoor environments"

If you use this code or data, please cite the above paper.

 

Instructions: 

See the docs directory.

