We design a solution for coordinated localization between two unmanned aerial vehicles (UAVs) using radio and camera perception. The localization addresses the problem of UAV Global Positioning System (GPS) failure or unavailability: our approach allows one UAV with a functional GPS unit to coordinate the localization of another UAV whose GPS is compromised or missing. The solution combines sensor fusion with coordinated wireless communication.
This dataset was used in our work "See-through a Vehicle: Augmenting Road Safety Information using Visual Perception and Camera Communication in Vehicles," published in the IEEE Transactions on Vehicular Technology (TVT). In that work, we present the design, implementation, and evaluation of non-line-of-sight (NLOS) perception to achieve a virtual see-through functionality for road vehicles.
This project is an end-to-end Python 3 application: a continuous loop captures and analyzes 100 frames per second to derive appropriate safety warnings.
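As a rough sketch of that loop, assuming an OpenCV-readable camera (the `analyze_frame` function and camera index are illustrative placeholders, not the project's actual API):

```python
import cv2  # pip install opencv-python

def analyze_frame(frame):
    """Placeholder for the perception pipeline (detection, VLC decoding, etc.)."""
    return None  # a safety warning, or None

cap = cv2.VideoCapture(0)        # index 0 for illustration; the real pipeline is invoked with a 'zed' argument
cap.set(cv2.CAP_PROP_FPS, 100)   # request 100 frames per second

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    warning = analyze_frame(frame)
    if warning is not None:
        print(warning)

cap.release()
```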
Dr. Ashwin Ashok, Assistant Professor, Computer Science, Georgia State University
This project contains 3 modules that should be run in parallel and that interact with each other through 3 CSV files (a minimal sketch of this exchange follows the commands below).
The following commands must be run in parallel. For more information on the libraries needed for execution, see the detailed sections below.
```sh
# Terminal 1
python3 non-line-of-sight-perception/VLC_project_flow.py zed

# Terminal 2

# Terminal 3
```
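Since the modules only share state through CSV files, the exchange reduces to an append-and-poll pattern. The sketch below is illustrative only; the file name and row layout are assumptions, not the repository's actual schema:

```python
import csv

# One module appends rows to the shared CSV file...
def publish(path, row):
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow(row)

# ...while another module polls the file and reads the latest row.
def read_latest(path):
    try:
        with open(path, newline="") as f:
            rows = list(csv.reader(f))
        return rows[-1] if rows else None
    except FileNotFoundError:
        return None

publish("warnings.csv", ["1623456789.0", "BRAKE", "high"])  # hypothetical row
print(read_latest("warnings.csv"))
```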
This folder contains the YOLO-v3 training and inference code. It is a fork of the public repository keras-yolo3; refer to the README of that repository here. Relevant folders from that repository have been placed in the training and configuration folders of this repository.
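Assuming the fork keeps keras-yolo3's `yolo.py` entry point, inference typically looks like the sketch below; the model, anchors, and class-file paths are placeholders, so check the training and configuration folders for the actual files:

```python
from PIL import Image
from yolo import YOLO  # from the keras-yolo3 fork; run from the folder containing yolo.py

# Paths below are placeholders, not this repository's actual files.
detector = YOLO(model_path="model_data/yolo.h5",
                anchors_path="model_data/yolo_anchors.txt",
                classes_path="model_data/brakelight_classes.txt")

image = Image.open("sample.jpg")
result = detector.detect_image(image)  # returns the image annotated with detection boxes
result.show()
```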
Use the package manager pip to install the following dependencies.
```sh
pip install opencv-python
pip install tensorflow-gpu
pip install Keras
pip install Pillow
```
This module is responsible for making intelligent recommendations to the driver as well as generating safety warnings for the following vehicles on the road. The module outputs a fusion of the safety warning received through the VLC channel and the vehicle's own scene perception data.
The output is two-fold.
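As an illustration only, the fusion step can be thought of as picking the more severe warning of the two sources; the rule and severity encoding below are assumptions, not this module's actual logic:

```python
def fuse(vlc_warning, local_warning):
    """Combine the warning received over VLC with locally perceived hazards.

    Each argument is an assumed severity level (0 = none, 1 = caution,
    2 = brake), or None if that channel produced nothing. Illustrative
    rule: report the more severe of the two sources.
    """
    levels = [w for w in (vlc_warning, local_warning) if w is not None]
    return max(levels, default=0)

# Example: VLC relays a braking event ahead; the local camera sees nothing yet.
print(fuse(vlc_warning=2, local_warning=0))  # -> 2, warn the driver
```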
Detailed transmitter notes, including hardware requirements, are provided in transmitter_notes.txt.
The LED flashes high/low in correspondence with the bits of the packets in the input file.
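For intuition, here is a minimal on-off-keying sketch that drives an LED from a Raspberry Pi with RPi.GPIO; the pin number, bit rate, and packet-file format are assumptions, so see transmitter_notes.txt for the actual hardware and timing:

```python
import time
import RPi.GPIO as GPIO  # assumes a Raspberry Pi driving the LED

LED_PIN = 18       # assumed GPIO pin
BIT_PERIOD = 0.01  # assumed 100 bits/s

GPIO.setmode(GPIO.BCM)
GPIO.setup(LED_PIN, GPIO.OUT)

with open("packets.txt") as f:        # assumed format: one '0'/'1' bit string per packet
    for packet in f:
        for bit in packet.strip():
            GPIO.output(LED_PIN, GPIO.HIGH if bit == "1" else GPIO.LOW)
            time.sleep(BIT_PERIOD)

GPIO.cleanup()
```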
The dataset has been generated using Microsoft VoTT.
This is a "Brakelight"-labelled dataset that can be used for training brakelight detection models. The dataset contains brakelights labelled on images from the work referenced below.
*Reference: Cui, Z., Yang, S. W., & Tsai, H. M. (2015, September). A vision-based hierarchical framework for autonomous front-vehicle taillights detection and signal recognition. In 2015 IEEE 18th International Conference on Intelligent Transportation Systems (ITSC) (pp. 931-937). IEEE.
The labeled dataset contains 1720 training images as well as a CSV file that lists 4102 bounding boxes in the format: image | xmin | ymin | xmax | ymax | label
This can be further converted into the format required by the training module using convert_dataset_for_training.py (replace annotations.txt with the Microsoft VoTT-generated CSV).
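In essence, the conversion regroups the VoTT rows per image and writes one annotation line per image in keras-yolo3's `image x_min,y_min,x_max,y_max,class_id` format. A simplified sketch of that step (the file names and single-class mapping are assumptions):

```python
import csv
from collections import defaultdict

classes = {"Brakelight": 0}  # assumed single-class mapping
boxes = defaultdict(list)

with open("vott_export.csv", newline="") as f:  # the VoTT-generated CSV
    reader = csv.reader(f)
    next(reader)  # skip the header row (image | xmin | ymin | xmax | ymax | label)
    for image, xmin, ymin, xmax, ymax, label in reader:
        boxes[image].append(f"{xmin},{ymin},{xmax},{ymax},{classes[label]}")

# keras-yolo3 expects one line per image:
#   path/to/image.jpg x_min,y_min,x_max,y_max,class_id ...
with open("annotations.txt", "w") as out:
    for image, bs in boxes.items():
        out.write(f"{image} {' '.join(bs)}\n")
```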
This work has been partially supported by the US National Science Foundation (NSF) grants 1755925, 1929171, and 1901133.