ETFP (Eye-Tracking and Fixation Points) consists of two eye-tracking datasets: EToCVD (Eye-Tracking of Colour Vision Deficiencies) and ETTO (Eye-Tracking Through Objects). The former is a collection of images, their corresponding eye-movement coordinates, and fixation point maps obtained from two cohorts: people with and people without CVD (Colour Vision Deficiencies). The latter collects images with just one object lying on a homogeneous background, together with the corresponding eye-movement coordinates and fixation point maps gathered during eye-tracking sessions.


SDU-Haier-AQD (Shandong University-Haier-Appearance Quality Detection) is an image dataset jointly constructed by Shandong University and Haier, containing various air conditioner external unit images collected during the actual detection process. The Appearance Quality Detection (AQD) dataset consists of 10449 images, and the samples were collected on an actual industrial air conditioner production line.


The MCData dataset was designed and produced for mouth cavity detection and segmentation. It can be used for training and testing mouth cavity instance segmentation networks. To the best of the authors' knowledge, it is the first available dataset for detection and segmentation of the main components of the mouth cavity.


We have developed this dataset for Bangla image captioning. It contains 500 images, each with one caption. The dataset mainly focuses on lifestyle and festivals, covering rice/harvest festivals, snake charming, palanquin, merry-go-round, slum, blacksmith, potter, fisherman, tat shilpo, jamdani, shutki chash, date juice, hal chash, tokai, pohela falgun, gaye holud, etc.

   


LiDAR point cloud data serves as a machine vision alternative to images. Compared to images and video, its advantages include depth estimation and distance measurement. Low-density LiDAR point cloud data can be used for navigation, obstacle detection, and obstacle avoidance by mobile robots, autonomous vehicles, and drones. In this dataset, we scanned over 1200 objects and classified them into four object groups, including humans, cars, and motorcyclists.


This dataset was used in our work "See-through a Vehicle: Augmenting Road Safety Information using Visual Perception and Camera Communication in Vehicles" published in the IEEE Transactions on Vehicular Technology (TVT). In this work, we present the design, implementation and evaluation of non-line-of-sight (NLOS) perception to achieve a virtual see-through functionality for road vehicles.

Instructions: 

Non-Line of Sight Perception Vehicular Camera Communication

This project is an end-to-end Python 3 application whose continuous loop captures and analyses 100 frames per second to derive appropriate safety warnings.

Contact

Dr. Ashwin Ashok, Assistant Professor, Computer Science, Georgia State University

Collaborators

Project contents

This project contains 3 modules that should be run in parallel and interact with each other using 3 CSV files.

Modules

  1. non-line-of-sight-perception
  2. intelligent-vehicular-perception_ivp
  3. warning-transmission

CSV Files

  1. packet.csv
  2. perceived_info.csv
  3. receiver_action.csv

Usage :

The following commands must be run in parallel. For more information on the libraries needed for execution, see the detailed sections below.

# Terminal 1
python3 non-line-of-sight-perception/VLC_project_flow.py zed

# Terminal 2
python3 intelligent-vehicular-perception_ivp/src-code/ivp_impl.py

# Terminal 3
python3 warning-transmission/send_bits_to_transmitter.py

1. non-line-of-sight-perception : Object Detection and Scene Perception Module

For YOLOv3 training and inference, this module is a fork of the public keras-yolo3 repository; refer to that repository's README. The relevant folders from that repository have been placed in the training and configuration folders of this repository.

Installation of python libraries

Use the package manager pip to install the following dependencies.

pip install opencv-python
pip install tensorflow-gpu
pip install Keras
pip install Pillow

Hardware requirements

  1. This code was tested on a Jetson Xavier, but any GPU-enabled machine should be sufficient.
  2. ZED camera: the code uses a ZED camera to capture the images (a GPU is required to operate at 100 fps).
  3. (Optional) The code can be modified, as per the comments in the file, to pass 'zed' or 0 for the camera, or a video path for .mp4 or .svo files.

Output

perceived_info.csv
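
As a rough, hedged illustration of the loop this module runs (capture frames, detect objects, append rows to perceived_info.csv), a minimal Python sketch is given below. The helper names (open_source, detect_objects) and the column layout are assumptions, not the actual VLC_project_flow.py code, and plain OpenCV is used as a stand-in for the ZED SDK.

# Minimal sketch of the perception loop (assumed behaviour, not VLC_project_flow.py itself).
import csv
import sys
import time

import cv2


def open_source(arg):
    # Pick the capture source: 'zed' or 0 for a camera, otherwise a video file path.
    if arg in ("zed", "0"):
        return cv2.VideoCapture(0)   # camera index 0; the ZED exposed as a UVC device
    return cv2.VideoCapture(arg)     # path to an .mp4 (or converted .svo) file


def detect_objects(frame):
    # Placeholder for the YOLOv3 detector from keras-yolo3.
    return []                        # list of (label, confidence) tuples


def main():
    source = sys.argv[1] if len(sys.argv) > 1 else "0"
    cap = open_source(source)
    with open("perceived_info.csv", "a", newline="") as f:
        writer = csv.writer(f)
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            for label, conf in detect_objects(frame):
                writer.writerow([time.time(), label, conf])
            f.flush()                # make new rows visible to the IVP module immediately
    cap.release()


if __name__ == "__main__":
    main()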

2. intelligent-vehicular-perception_ivp : Safety Message/Warning Mapping Module

This module is responsible for making intelligent recommendations to the driver as well as generating safety warnings for following vehicles on the road. The module's output is a fusion of the safety warning received through the VLC channel and the vehicle's own scene perception data.

Python Library Dependencies

  • json
  • operator
  • csv
  • enum
  • fileinput
  • re

Input

Output

The output is two-fold.

  • packet.csv : Intelligent Recommendation to the Driver.
  • receiver_action.csv : Generated packet bits. Each packet's bits are logged into the 'packet.csv' file. This CSV file works as a queue: every new packet logged here eventually gets transmitted by the VLC transmission module (see the sketch below).
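
As a hedged sketch of this queue-style hand-off between the IVP module and the transmission module, the pattern could look like the following; the column layout and function names are illustrative and not the repository's actual code.

# Illustrative sketch of the queue-style hand-off through packet.csv.
import csv
import time


def enqueue_packet(bits, path="packet.csv"):
    # Producer side (IVP module): append one packet's bits as a new row.
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([time.time(), bits])


def poll_packets(path="packet.csv", interval=0.1):
    # Consumer side (transmission module): yield rows as they appear.
    with open(path, "r", newline="") as f:
        while True:
            line = f.readline()
            if line.strip():
                yield line.strip().split(",")
            else:
                time.sleep(interval)   # wait for the producer to append more rows


if __name__ == "__main__":
    enqueue_packet("10110010")         # hypothetical 8-bit warning packet
    print(next(poll_packets()))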

3. warning-transmission: Communication Module

Detailed transmitter notes, including hardware requirements, are present in transmitter_notes.txt.

Python Library Dependencies

  • serial

Input

  • packet.csv : Intelligent Recommendation to the Driver.

Output

The LED flashes high/low in correspondence with the packets in the input file.
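
For illustration only, a minimal pyserial sketch of this transmit step might look as follows; the serial port name, baud rate, bit period, and the assumption that the last CSV column holds the packet bits are all placeholders, and transmitter_notes.txt describes the real hardware setup.

# Hedged sketch of the transmitter side: each bit of a packet is written to the
# serial-attached LED driver. Port name, baud rate and bit timing are assumptions.
import csv
import time

import serial                          # pyserial


def transmit(bits, port="/dev/ttyUSB0", baud=115200, bit_period=0.01):
    with serial.Serial(port, baud, timeout=1) as led:
        for bit in bits:
            led.write(b"1" if bit == "1" else b"0")   # drive the LED high or low
            time.sleep(bit_period)                    # hold the level for one bit period


if __name__ == "__main__":
    with open("packet.csv", newline="") as f:
        for row in csv.reader(f):
            transmit(row[-1])          # assume the last column holds the packet bits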

Dataset used for training the model

The dataset has been generated using Microsoft VoTT.

This is a "Brakelight" labelled dataset that can be used for training Brakelight detection models. The dataset contains brakelights labelled on images from

*Reference : Cui, Z., Yang, S. W., & Tsai, H. M. (2015, September). A vision-based hierarchical framework for autonomous front-vehicle taillights detection and signal recognition. In Intelligent Transportation Systems (ITSC), 2015 IEEE 18th International Conference on (pp. 931-937). IEEE.

The labelled dataset contains 1720 training images as well as a CSV file that lists 4102 bounding boxes in the format: image | xmin | ymin | xmax | ymax | label

This can be further converted into the format required by the training module using convert_dataset_for_training.py (replace annotations.txt with the Microsoft VoTT-generated CSV); a rough sketch of this conversion is shown below.
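
The sketch below only illustrates the idea, not the actual convert_dataset_for_training.py script: it groups the VoTT rows (image, xmin, ymin, xmax, ymax, label) by image and emits one keras-yolo3 style annotation line per image (image path followed by space-separated xmin,ymin,xmax,ymax,class_id boxes). The file names and single-class mapping are assumptions.

# Rough, hypothetical equivalent of the VoTT CSV -> keras-yolo3 annotation conversion.
import csv
from collections import defaultdict

CLASSES = {"Brakelight": 0}            # assumed single-class mapping


def convert(vott_csv="annotations.csv", out_txt="train.txt"):
    boxes = defaultdict(list)
    with open(vott_csv, newline="") as f:
        for row in csv.reader(f):
            if len(row) != 6:
                continue               # skip blank or malformed rows
            image, xmin, ymin, xmax, ymax, label = row
            try:
                coords = [int(float(v)) for v in (xmin, ymin, xmax, ymax)]
            except ValueError:
                continue               # skip the header row
            cls = CLASSES.get(label, 0)
            boxes[image].append(",".join(map(str, coords)) + f",{cls}")
    with open(out_txt, "w") as out:
        for image, bxs in boxes.items():
            out.write(image + " " + " ".join(bxs) + "\n")


if __name__ == "__main__":
    convert()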

 

Acknowledgements

This work has been partially supported by the US National Science Foundation (NSF) grants 1755925, 1929171, and 1901133.


Dataset associated with a paper in Computer Vision and Pattern Recognition (CVPR):

"Object classification from randomized EEG trials"

If you use this code or data, please cite the above paper.

Instructions: 

See the paper "Object classification from randomized EEG trials" on IEEE Xplore.

Code for analyzing the dataset is included in the online supplementary materials for the paper; that code is also included here.


The UBFC-Phys dataset is a public multimodal dataset dedicated to psychophysiological studies. 56 participants took part in a three-step experiment in which they experienced social stress through a rest task (T1), a speech task (T2), and an arithmetic task (T3). During the experiment, the participants were filmed and wore a wristband that measured their Blood Volume Pulse (BVP) and ElectroDermal Activity (EDA) signals. Before the experiment started and once it finished, the participants filled in a form used to compute their self-reported anxiety scores.

Instructions: 

Please find more details about the UBFC-Phys dataset's organization in the READ_ME file.

If you use this dataset, please cite the following paper:

 

R. Meziati Sabour, Y. Benezeth, P. De Oliveira, J. Chappé, F. Yang. "UBFC-Phys: A Multimodal Database For Psychophysiological Studies Of Social Stress", IEEE Transactions on Affective Computing, 2021.


Our database, called SARD, was built for the task of detecting casualties and persons in search and rescue scenarios in drone images and videos. The actors in the footage simulated exhausted and injured persons as well as "classic" types of movement of people in nature, such as running, walking, standing, sitting, or lying down. Since different types of terrain and backgrounds determine the possible events and scenarios in captured images and videos, the shots include persons on macadam roads, in quarries, in low and high grass, in forest shade, and the like.


The early detection of damaged (partially broken) outdoor insulators in primary distribution systems is of paramount importance for continuous electricity supply and public safety. In this dataset, we present images and videos for computer vision-based research, taken from different sources such as a drone, a DSLR camera, and a mobile phone camera.

Instructions: 

Please see the attached file for a complete description.

