The Ways To Wear a Mask or a Respirator Database (WWMR-DB) is a test database that can be used to compare the behavior of current mask detection systems on images that closely resemble real-world conditions. It consists of 1222 images divided into 8 classes, depicting the most common ways in which masks or respirators are worn:

- Mask Or Respirator Not Worn

- Mask Or Respirator Correctly Worn

- Mask Or Respirator Under The Nose

- Mask Or Respirator Under The Chin

- Mask Or Respirator Hanging From An Ear

- Mask Or Respirator On The Tip Of The Nose

Instructions: 

For any questions, please send an email to antonio.marceddu@polito.it.


A new generation of computer vision, namely event-based or neuromorphic vision, provides a new paradigm for capturing visual data and for the way such data is processed. Event-based vision is a state-of-the-art technology in robot vision. It is particularly promising for use in both mobile robots and drones for visual navigation tasks. Due to the highly novel type of visual sensor used in event-based vision, only a few datasets aimed at visual navigation tasks are publicly available.

Instructions: 

The dataset includes the following sequences:

  • 01_forest – Closed loop, Forest trail, No wind, Daytime
  • 02_forest – Closed loop, Forest trail, No wind, Daytime
  • 03_green_meadow – Closed loop, Meadow, grass up to 30 cm, No wind, Daytime
  • 04_green_meadow – Closed loop, Meadow, grass up to 30 cm, Mild wind, Daytime
  • 05_road_asphalt – Closed loop, Asphalt road, No wind, Nighttime
  • 06_plantation – Closed loop, Shrubland, Mild wind, Daytime
  • 07_plantation – Closed loop, Asphalt road, No wind, Nighttime
  • 08_plantation_water – Random movement, Sprinklers (water drops on camera lens), No wind, Nighttime
  • 09_cattle_farm – Closed loop, Cattle farm, Mild wind, Daytime
  • 10_cattle_farm – Closed loop, Cattle farm, Mild wind, Daytime
  • 11_cattle_farm_feed_table – Closed loop, Cattle farm feed table, Mild wind, Daytime
  • 12_cattle_farm_feed_table – Closed loop, Cattle farm feed table, Mild wind, Daytime
  • 13_ditch – Closed loop, Sandy surface, Edge of ditch or drainage channel, No wind, Daytime
  • 14_ditch – Closed loop, Sandy surface, Shore or bank, Strong wind, Daytime
  • 15_young_pines – Closed loop, Sandy surface, Pine coppice, No wind, Daytime
  • 16_winter_cereal_field – Closed loop, Winter wheat sowing, Mild wind, Daytime
  • 17_winter_cereal_field – Closed loop, Winter wheat sowing, Mild wind, Daytime
  • 18_winter_rapeseed_field – Closed loop, Winter rapeseed, Mild wind, Daytime
  • 19_winter_rapeseed_field – Closed loop, Winter rapeseed, Mild wind, Daytime
  • 20_field_with_a_cow – Closed loop, Cows tethered in pasture, Mild wind, Daytime
  • 21_field_with_a_cow – Closed loop, Cows tethered in pasture, Mild wind, Daytime

Each sequence contains the following separately downloadable files:

  • <..sequence_id..>_video.mp4 – provides an overview of the sequence data (for the DVS and RGB-D sensors).
  • <..sequence_id..>_data.tar.gz – entire data sequence in raw data format (AEDAT2.0 - DVS, images - RGB-D, point clouds in pcd files - LIDAR, and IMU csv files with original sensor timestamps). Timestamp conversion formulas are available.
  • <..sequence_id..>_rawcalib_data.tar.gz – recorded fragments that can be used to perform the calibration independently (intrinsic, extrinsic and time alignment).
  • <..sequence_id..>_rosbags.tar.gz – main sequence in ROS bag format. All sensor timestamps are aligned with the DVS with an accuracy of less than 1 ms.

The contents of each archive are described below.

Raw format data

The archive <..sequence_id..>_data.tar.gz contains the following files and folders:

  • ./meta-data/ - all the useful information about the sequence
  • ./meta-data/meta-data.md - detailed information about the sequence, sensors, files, and data formats
  • ./meta-data/cad_model.pdf - sensor placement
  • ./meta-data/<...>_timeconvs.json - coefficients for timestamp conversion formulas (see the sketch after this listing)
  • ./meta-data/ground-truth/ - movement ground-truth data, calculated using 3 different Lidar-SLAM algorithms (Cartographer, HDL-Graph, LeGo-LOAM)
  • ./meta-data/calib-params/ - intrinsic and extrinsic calibration parameters
  • ./recording/ - main sequence
  • ./recording/dvs/ - DVS events and IMU data
  • ./recording/lidar/ - Lidar point clouds and IMU data
  • ./recording/realsense/ - Realsense camera RGB, Depth frames, and IMU data
  • ./recording/sensorboard/ - environmental sensor data (temperature, humidity, air pressure)
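
The coefficients in ./meta-data/<...>_timeconvs.json can be used to bring the original sensor timestamps onto the common, DVS-aligned time base. The exact formulas are documented in meta-data.md; the sketch below is only a minimal illustration that assumes a linear conversion and hypothetical JSON field names ("scale", "offset") and sequence name.

import json

# Minimal sketch (assumed linear model): t_aligned = scale * t_raw + offset.
# The real formula and field names are documented in ./meta-data/meta-data.md.
with open("meta-data/01_forest_timeconvs.json") as f:
    coeffs = json.load(f)

def convert_timestamp(t_raw, sensor):
    c = coeffs[sensor]                        # e.g. "lidar", "realsense"
    return c["scale"] * t_raw + c["offset"]   # hypothetical field names

# Example: align a raw LIDAR timestamp (seconds) to the DVS time base.
print(convert_timestamp(1624356000.125, "lidar"))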

Calibration data

The <..sequence_id..>_rawcalib_data.tar.gz archive contains the following files and folders:

  • ./imu_alignments/ - IMU recordings of the platform lifting before and after the main sequence (can be used for custom timestamp alignment)
  • ./solenoids/ - IMU recordings of the solenoid vibrations before and after the main sequence (can be used for custom timestamp alignment)
  • ./lidar_rs/ - Lidar vs Realsense camera extrinsic calibration by showing both sensors a spherical object (ball)
  • ./dvs_rs/ - DVS and Realsense camera intrinsic and extrinsic calibration frames (checkerboard pattern); a minimal calibration sketch follows this list
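
As an illustration of how the checkerboard frames in ./dvs_rs/ can be used for intrinsic calibration, the sketch below runs a standard OpenCV calibration. The checkerboard geometry (9x6 inner corners, 30 mm squares) and the image file layout are assumptions; the actual values should be taken from meta-data.md.

import glob
import cv2
import numpy as np

# Assumed checkerboard: 9x6 inner corners, 30 mm squares (hypothetical values).
pattern = (9, 6)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * 0.03

obj_points, img_points = [], []
for path in glob.glob("dvs_rs/*.png"):  # hypothetical file layout
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Intrinsic matrix K and distortion coefficients for the chosen camera.
rms, K, dist, _, _ = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)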

ROS Bag format data

There are six rosbag files for each scene; their contents are as follows (a minimal example of reading the DVS bag is given after the list):

  • <..sequence_id..>_dvs.bag (topics: /dvs/camera_info, /dvs/events, /dvs/imu, with message types sensor_msgs/CameraInfo, dvs_msgs/EventArray, and sensor_msgs/Imu, respectively).
  • <..sequence_id..>_lidar.bag (topics: /lidar/imu/acc, /lidar/imu/gyro, /lidar/pointcloud, with message types sensor_msgs/Imu, sensor_msgs/Imu, and sensor_msgs/PointCloud2, respectively).
  • <..sequence_id..>_realsense.bag (topics: /realsense/camera_info, /realsense/depth, /realsense/imu/acc, /realsense/imu/gyro, /realsense/rgb, /tf, with message types sensor_msgs/CameraInfo, sensor_msgs/Image, sensor_msgs/Imu, sensor_msgs/Imu, sensor_msgs/Image, and tf2_msgs/TFMessage, respectively).
  • <..sequence_id..>_sensorboard.bag (topics: /sensorboard/air_pressure, /sensorboard/relative_humidity, /sensorboard/temperature, with message types sensor_msgs/FluidPressure, sensor_msgs/RelativeHumidity, and sensor_msgs/Temperature, respectively).
  • <..sequence_id..>_trajectories.bag (topics: /cartographer, /hdl, /lego_loam, all with message type geometry_msgs/PoseStamped).
  • <..sequence_id..>_data_for_realsense_lidar_calibration.bag (topics: /lidar/pointcloud, /realsense/camera_info, /realsense/depth, /realsense/rgb, /tf, with message types sensor_msgs/PointCloud2, sensor_msgs/CameraInfo, sensor_msgs/Image, sensor_msgs/Image, and tf2_msgs/TFMessage, respectively).
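
As a minimal reading example (assuming a ROS environment with the rosbag Python API and the dvs_msgs package available; the bag file name is a placeholder), the DVS events can be iterated as follows:

import rosbag

# Iterate over DVS events in one sequence bag (file name is a placeholder).
with rosbag.Bag("01_forest_dvs.bag") as bag:
    for topic, msg, t in bag.read_messages(topics=["/dvs/events"]):
        # msg is a dvs_msgs/EventArray; each event has x, y, ts and polarity.
        for e in msg.events:
            print(e.x, e.y, e.ts.to_sec(), e.polarity)
        break  # stop after the first EventArray for this illustration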

Version history

22.06.2021.

  • Realsense data now also contains 16-bit PNG depth images, located in the folder /recording/realsense/depth_native/
  • Added data in rosbag format

 


Tuberculosis detection plays a major role in today's world because, according to the 2019 Global Tuberculosis (TB) Report, more than one million cases are reported per year in India. Even though various tests are available, the chest X-ray is the most important one, without which detection would be incomplete. In posteroanterior chest radiographs, several clinical and diagnostic functions are built through the use of computationally designed algorithms.


ETFP (Eye-Tracking and Fixation Points) consists of two eye-tracking datasets: EToCVD (Eye-Tracking of Colour Vision Deficiencies) and ETTO (Eye-Tracking Through Objects). The former is a collection of images, their corresponding eye-movement coordinates, and the fixation point maps, obtained by involving two cohorts: people with and without CVD (Colour Vision Deficiencies), respectively. The latter collects images with just one object lying on a homogeneous background, together with the corresponding eye-movement coordinates and fixation point maps gathered during eye-tracking sessions.


SDU-Haier-AQD (Shandong University-Haier-Appearance Quality Detection) is an image dataset jointly constructed by Shandong University and Haier, which contains a variety of air conditioner external unit images collected during the actual detection process. The Appearance Quality Detection (AQD) dataset consists of 10449 images, and the samples in the dataset were collected on an actual industrial air conditioner production line.


The MCData was designed and produced for mouth cavity detection and segmentation. This dataset can be utilized for training and testing of mouth cavity instance segmentation networks. To the best of the authors' knowledge, this is the first available dataset for detection and segmentation of the main mouth cavity components.


We have developed this dataset for Bangla image captioning. Here, we have recorded 500 images with one caption each. The dataset mainly focuses on lifestyle and festivals, covering rice/harvest festivals, snake charming, palanquin, merry-go-round, slum, blacksmith, potter, fisherman, tat shilpo, jamdani, shutki chash, date juice, hal chash, tokai, pohela falgun, gaye holud, etc.

   


LiDAR point cloud data serves as a machine vision alternative to images. Its advantages compared to image and video include depth estimation and distance measurement. Low-density LiDAR point cloud data can be used to achieve navigation, obstacle detection, and obstacle avoidance for mobile robots, autonomous vehicles, and drones. In this dataset, we scanned over 1200 objects and classified them into 4 groups of objects, namely humans, cars, and motorcyclists.


This dataset was used in our work "See-through a Vehicle: Augmenting Road Safety Information using Visual Perception and Camera Communication in Vehicles" published in the IEEE Transactions on Vehicular Technology (TVT). In this work, we present the design, implementation and evaluation of non-line-of-sight (NLOS) perception to achieve a virtual see-through functionality for road vehicles.

Instructions: 

Non-Line of Sight Perception Vehicular Camera Communication

This project is an end-to-end Python 3 application with a continuous loop that captures and analyses 100 frames per second to derive appropriate safety warnings.

Contact

Dr. Ashwin Ashok, Assistant Professor, Computer Science, Georgia State University

Collaborators

Project contents

This project contains 3 modules that should be run in parallel and interact with each other using 3 CSV files.

Modules

  1. non-line-of-sight-perception
  2. intelligent-vehicular-perception_ivp
  3. warning-transmission

CSV Files

  1. packet.csv
  2. perceived_info.csv
  3. receiver_action.csv

Usage :

The following commands must be run in parallel. For more information on the libraries needed for execution, see the detailed sections below.

# Terminal 1
python3 non-line-of-sight-perception/VLC_project_flow.py zed

# Terminal 2
python3 intelligent-vehicular-perception_ivp/src-code/ivp_impl.py

# Terminal 3
python3 warning-transmission/send_bits_to_transmitter.py

1. non-line-of-sight-perception : Object Detection and Scene Perception Module

For YOLO-v3 training and inference, this module is a fork of the public keras-yolo3 repository; refer to the README of that repository. Relevant folders from that repository have been placed in the training and configuration folders of this repository.

Installation of python libraries

Use the package manager pip to install the required libraries.

pip install opencv-python
pip install tensorflow-gpu
pip install Keras
pip install Pillow

Hardware requirements

  1. This code was tested on Jetson Xavier, but any GPU enabled machine should be sufficient.
  2. Zed Camera: Uses the Zed camera to capture the images (requires a GPU to operate at 100 fps).
  3. (Optional) The code can be modified, as per the comments in the file, to use 'zed', 0 for the default camera, or a video path for mp4 or svo files (see the sketch below).
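
A minimal sketch of switching the capture source as described above, using OpenCV (the real entry point VLC_project_flow.py handles this through its 'zed' argument; note that .svo files additionally require the ZED SDK, and the paths below are placeholders):

import sys
import cv2

# "0" opens the default camera; any other argument is treated as a video path.
source = sys.argv[1] if len(sys.argv) > 1 else "0"
cap = cv2.VideoCapture(0 if source == "0" else source)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # ... run the detection / perception step on `frame` here ...
cap.release()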

Output

perceived_info.csv

2. intelligent-vehicular-perception_ivp : Safety Message/Warning Mapping Module

This module is responsible for making intelligent recommendations to the driver as well as generating safety warnings for the following vehicles on the road. The module's output is a fusion of the safety warning received through the VLC channel and the vehicle's own scene perception data.

Python Library Dependencies

  • json
  • operator
  • csv
  • enum
  • fileinput
  • re

Input

Output

The output is two-fold.

  • packet.csv : Intelligent Recommendation to the Driver.
  • receiver_action.csv : Generated packet bits. Each packet's bits are logged into the 'packet.csv' file. This CSV file works as a queue; every new packet logged here eventually gets transmitted by the VLC transmission module.

3. warning-transmission: Communication Module

Detailed transmitter notes, including hardware requirements, are present in transmitter_notes.txt

Python Library Dependencies

  • serial

Input

  • packet.csv : Intelligent Recommendation to the Driver.

Output

The LED flashes high/low in correspondence with the packets in the input file; a minimal transmission sketch is given below.
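
A minimal sketch of the transmission step (hypothetical: the serial port, baud rate, and CSV column layout are assumptions; the actual logic is in send_bits_to_transmitter.py and transmitter_notes.txt). It treats packet.csv as a queue and writes each new packet's bits to the serially connected LED driver:

import csv
import time
import serial  # pyserial

ser = serial.Serial("/dev/ttyUSB0", 115200)  # hypothetical port and baud rate

sent = 0
while True:
    with open("packet.csv", newline="") as f:
        rows = list(csv.reader(f))
    for row in rows[sent:]:        # transmit only rows not yet sent
        bits = row[-1]             # hypothetical: bits stored in the last column
        ser.write(bits.encode())   # LED toggles high/low per transmitted bit
        sent += 1
    time.sleep(0.1)                # poll the queue ten times per second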

Dataset used for training the model

The dataset has been generated using Microsoft VoTT.

This is a "Brakelight" labelled dataset that can be used for training Brakelight detection models. The dataset contains brakelights labelled on images from

*Reference : Cui, Z., Yang, S. W., & Tsai, H. M. (2015, September). A vision-based hierarchical framework for autonomous front-vehicle taillights detection and signal recognition. In Intelligent Transportation Systems (ITSC), 2015 IEEE 18th International Conference on (pp. 931-937). IEEE.

The labelled dataset contains 1720 training images as well as a CSV file that lists 4102 bounding boxes in the format: image | xmin | ymin | xmax | ymax | label

This can be further converted into the format required by the training module using convert_dataset_for_training.py (replace annotations.txt with the Microsoft VoTT generated CSV); a minimal sketch of such a conversion is given below.
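
The sketch below illustrates one possible conversion (the real logic lives in convert_dataset_for_training.py; the input columns follow the image | xmin | ymin | xmax | ymax | label format above, while the keras-yolo3-style output line 'image.jpg xmin,ymin,xmax,ymax,class_id ...' and the file names are assumptions):

import csv
from collections import defaultdict

boxes = defaultdict(list)
classes = []
with open("vott_export.csv", newline="") as f:    # the VoTT-generated CSV
    for row in csv.reader(f):
        image, xmin, ymin, xmax, ymax, label = row
        if image == "image":                      # skip a header row if present
            continue
        if label not in classes:
            classes.append(label)
        boxes[image].append("%d,%d,%d,%d,%d" % (
            float(xmin), float(ymin), float(xmax), float(ymax),
            classes.index(label)))

# One line per image: path followed by space-separated bounding boxes.
with open("annotations.txt", "w") as out:
    for image, bs in boxes.items():
        out.write(image + " " + " ".join(bs) + "\n")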

 

Acknowledgements

This work has been partially supported by the US National Science Foundation (NSF) grants 1755925, 1929171, and 1901133.


Dataset associated with a paper in Computer Vision and Pattern Recognition (CVPR)

 

"Object classification from randomized EEG trials"

 

If you use this code or data, please cite the above paper.

Instructions: 

See the paper "Object classification from randomized EEG trials" on IEEE Xplore.

 

Code for analyzing the dataset is included in the online supplementary materials for the paper.

 

The code from the online supplementary materials is also included here.

 


