Using a specifically designed software (SW) tool, the authors present the results of an activity for evaluating the energy consumption of buses in urban applications. Both conventional and innovative transport means are considered in order to draw comparative conclusions. The SW tool simulates the dynamic behaviour of the vehicles on actually measured routes, making it possible to evaluate their energy performance on a Tank-to-Wheel (TTW) basis. Data of this kind, over such a wide and comparable range, were previously unavailable in the literature.
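The TTW evaluation described above rests on standard longitudinal vehicle dynamics: traction force is integrated over a measured speed profile. The sketch below illustrates the idea; all vehicle parameters and the efficiency figure are illustrative assumptions, not the authors' values.

```python
# Sketch: tank-to-wheel (TTW) energy of a bus over a measured speed trace.
# Every parameter below is an illustrative assumption, not the paper's data.
mass = 18000.0        # vehicle mass [kg]
c_r = 0.008           # rolling-resistance coefficient
c_d, area = 0.6, 8.0  # drag coefficient, frontal area [m^2]
rho, g = 1.2, 9.81    # air density [kg/m^3], gravity [m/s^2]
eta = 0.35            # assumed overall tank-to-wheel efficiency

def ttw_energy(speeds, dt=1.0):
    """Integrate positive traction power over a 1 Hz speed trace [m/s]."""
    energy = 0.0
    for v0, v1 in zip(speeds, speeds[1:]):
        v = 0.5 * (v0 + v1)
        acc = (v1 - v0) / dt
        force = mass * acc + mass * g * c_r + 0.5 * rho * c_d * area * v * v
        power = force * v
        if power > 0:          # braking energy is dissipated, not recovered
            energy += power * dt / eta
    return energy / 3.6e6      # J -> kWh

profile = [0, 2, 4, 6, 8, 8, 8, 6, 4, 2, 0]  # toy urban stop-and-go segment
print(round(ttw_energy(profile), 4))
```

Running the same integration over different measured routes is what makes the simulated vehicles directly comparable on a TTW basis.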
Driving behavior plays a vital role in safe and sustainable transport. In the area of traffic management and control specifically, driving behavior is of great importance, since certain driving behaviors are significantly related to traffic congestion levels. Beyond that, it affects fuel consumption, air pollution, and public health, as well as personal mental health and psychology. The use of smartphone sensors for data acquisition has emerged as a means to understand and model driving behavior. Our aim is to analyze driving behavior using smartphone sensor data streams.
The datasets folder includes .csv files of sensor data such as accelerometer and gyroscope readings. The data were recorded in live traffic while the driver was executing certain driving events. The distance of each one-way trip was approximately 5–20 km. The smartphone was fixed horizontally in the vehicle's utility box. The vehicle type used for data recording was an LMV (light motor vehicle).
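A sensor .csv of this kind can be parsed with the standard library alone. The column names and the inline sample below are assumptions for illustration; check the actual file headers in the datasets folder.

```python
import csv, io

# Hypothetical accelerometer CSV: timestamp plus x/y/z acceleration columns.
sample = "timestamp,ax,ay,az\n0.00,0.1,0.0,9.8\n0.02,0.6,0.1,9.7\n"

def load_rows(text):
    """Parse sensor rows into dicts of floats, keyed by column name."""
    return [{k: float(v) for k, v in row.items()}
            for row in csv.DictReader(io.StringIO(text))]

rows = load_rows(sample)
# A simple magnitude feature often used when flagging driving events:
mags = [(r["ax"] ** 2 + r["ay"] ** 2 + r["az"] ** 2) ** 0.5 for r in rows]
print(len(rows), round(mags[0], 2))  # 2 rows; first magnitude ≈ 9.8
```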
This dataset was used in our work "See-through a Vehicle: Augmenting Road Safety Information using Visual Perception and Camera Communication in Vehicles" published in the IEEE Transactions on Vehicular Technology (TVT). In this work, we present the design, implementation and evaluation of non-line-of-sight (NLOS) perception to achieve a virtual see-through functionality for road vehicles.
Non-Line of Sight Perception Vehicular Camera Communication
This project is an end-to-end Python 3 application whose continuous loop captures and analyzes 100 frames per second to derive appropriate safety warnings.
Dr. Ashwin Ashok, Assistant Professor, Computer Science, Georgia State University
This project contains 3 modules that should be run in parallel and interact with each other using 3 CSV files.
The following commands must be run in parallel. For more information on the libraries needed for execution, see the detailed sections below.
# Terminal 1
python3 non-line-of-sight-perception/VLC_project_flow.py zed
# Terminal 2
# Terminal 3
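As noted above, the three modules interact through shared CSV files. The sketch below illustrates the append-only-queue pattern this implies; the file name matches `packet.csv` from the module descriptions, but the schema and helper names are assumptions, not the project's exact code.

```python
# Sketch: two processes handing off work through a shared CSV acting as an
# append-only queue. In the real app, the producer and consumer run in
# separate terminals and the consumer polls in its main loop.
import csv, os

QUEUE = "packet.csv"  # shared file between modules

def enqueue(bits):
    """Producer side: append one packet's bits as a new CSV row."""
    with open(QUEUE, "a", newline="") as f:
        csv.writer(f).writerow([bits])

def drain():
    """Consumer side: read every queued packet (polled in a loop)."""
    if not os.path.exists(QUEUE):
        return []
    with open(QUEUE, newline="") as f:
        return [row[0] for row in csv.reader(f)]

enqueue("10110010")
print(drain()[-1])  # the packet just queued
```

This file-based handoff keeps the modules decoupled, at the cost of polling latency in each loop iteration.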
This folder is for YOLOv3 training and inference. This project is a fork of the public repository keras-yolo3; refer to that repository's README. The relevant folders from that repository have been placed in the training and configuration folders of this repository.
Use the package manager pip to install the required dependencies.
pip install opencv-python
pip install tensorflow-gpu
pip install Keras
pip install Pillow
- This code was tested on Jetson Xavier, but any GPU enabled machine should be sufficient.
- Zed Camera: Uses Zed camera to capture the images. (Requires GPU to operate at 100 fps).
- (Optional) The code can be modified as per the comments in the file to use 'zed' for the ZED camera, 0 for the default camera, or a video path for .mp4 or .svo files.
This module is responsible for making intelligent recommendations to the driver as well as generating safety warnings for the vehicles following on the road. The module's output is a fusion of the safety warning received through the VLC channel and the vehicle's own scene-perception data.
The output is two-fold.
- receiver_action.csv : Intelligent recommendation to the driver.
- packet.csv : Generated packet bits. Each packet's bits are logged into the 'packet.csv' file, which works as a queue: every new packet logged here eventually gets transmitted by the VLC transmission module.
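The fusion of the VLC-received warning with the vehicle's own scene perception can be sketched as a conservative merge. The warning levels and rule below are assumptions used to illustrate the idea, not the project's exact logic.

```python
# Sketch: fuse the warning received over the VLC channel with the vehicle's
# own scene-perception verdict by keeping the more severe of the two.
# The level names are hypothetical labels, not the project's actual schema.
LEVELS = {"clear": 0, "caution": 1, "brake": 2}

def fuse(vlc_warning, own_perception):
    """Return the more conservative (higher-severity) assessment."""
    return max(vlc_warning, own_perception, key=LEVELS.get)

print(fuse("caution", "brake"))  # the own-perception verdict dominates here
```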
Detailed transmitter notes, including hardware requirements, are present in transmitter_notes.txt
- packet.csv : Packet bits to be transmitted.
The LED flashes high/low in correspondence with the packets in the input file.
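Flashing the LED per packet bit is plain on-off keying. The sketch below simulates it; `gpio_write` is a hypothetical stand-in for the real LED driver on the transmitter hardware, recorded to a list here so the behaviour can be inspected without hardware.

```python
# Sketch: on-off keying of one packet. gpio_write is a hypothetical hook
# for the real LED driver; here it just records the pin levels.
import time

flashes = []

def gpio_write(level):
    flashes.append(level)      # real hardware would set the LED pin

def transmit(bits, bit_time=0.01):
    for b in bits:
        gpio_write(1 if b == "1" else 0)  # LED high for '1', low for '0'
        time.sleep(bit_time)              # hold for one bit period
    gpio_write(0)                         # idle low between packets

transmit("1011", bit_time=0.0)
print(flashes)  # one level per bit, then idle-low
```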
The dataset has been generated using Microsoft VoTT.
This is a "Brakelight" labelled dataset that can be used for training Brakelight detection models. The dataset contains brakelights labelled on images from
- experiments conducted in Atlanta, GA, USA by Dr. Ashwin Ashok's research group
- brake-light labelled images from Vehicle Rear Light Video Data*
*Reference : Cui, Z., Yang, S. W., & Tsai, H. M. (2015, September). A vision-based hierarchical framework for autonomous front-vehicle taillights detection and signal recognition. In Intelligent Transportation Systems (ITSC), 2015 IEEE 18th International Conference on (pp. 931-937). IEEE.
The labeled dataset contains 1,720 training images as well as a CSV file that lists 4,102 bounding boxes in the format: image | xmin | ymin | xmax | ymax | label
This can be further converted into the format required by the training module using convert_dataset_for_training.py (replace annotations.txt with the Microsoft VoTT-generated CSV).
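The conversion amounts to grouping the per-box CSV rows by image and emitting one annotation line per image in keras-yolo3's "image x1,y1,x2,y2,class_id" style. The sketch below shows the idea; the inline CSV, file names, and 0-based class id are illustrative assumptions, not the script's exact contents.

```python
# Sketch: VoTT-style CSV rows -> keras-yolo3-style annotation lines.
# Column order follows the format listed above; class ids assumed 0-based.
import csv, io
from collections import OrderedDict

vott_csv = ("image,xmin,ymin,xmax,ymax,label\n"
            "car1.jpg,10,20,110,90,brakelight\n"
            "car1.jpg,200,25,300,95,brakelight\n")  # toy example rows
classes = {"brakelight": 0}

boxes = OrderedDict()  # group boxes under their image, keeping file order
for row in csv.DictReader(io.StringIO(vott_csv)):
    box = "%s,%s,%s,%s,%d" % (row["xmin"], row["ymin"],
                              row["xmax"], row["ymax"], classes[row["label"]])
    boxes.setdefault(row["image"], []).append(box)

lines = ["%s %s" % (img, " ".join(b)) for img, b in boxes.items()]
print(lines[0])
```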
This work has been partially supported by the US National Science Foundation (NSF) grants 1755925, 1929171 and 1901133.
The data collection was carried out over several months and across several cities including, but not limited to, Quetta, Islamabad and Karachi, Pakistan. Ultimately, 359 images were collected as part of the Pakistani dataset, albeit a very small quantity. The images were also distributed unevenly across the classes, just like the German dataset. All 359 images were then manually cropped to filter out unwanted image background, and sorted into folders named after the labels of the images.
The dataset is divided by class; the images inside each folder are named randomly and contain no useful labels in their names.
This data set is shared to help readers reproduce the results (Figure 5 and Figure 6) of the manuscript entitled "Online System Identification of a Fuel Cell Stack with Guaranteed Stability for Energy Management Applications", published in IEEE Transactions on Energy Conversion.
If you use this data, please cite the following paper:
10 use cases of container sway speed along the X-axis during loading and unloading procedures using a quay crane at the Klaipeda container terminal.
The file includes 10 use cases of container sway speed, measured including the spreader.
Data samples dataX_1, dataX_3, dataX_5 and dataX_10 provide the sway speed along the X-axis during container unloading from a ship, while the other samples cover the opposite (loading) procedure.
The data sample called Y provides the time stamps.
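Since every dataX_* sample shares the time stamps in Y, analysis typically starts by pairing the two. The lists below are toy stand-ins for one sample and Y; the actual file format in the archive may differ.

```python
# Sketch: pairing one sway-speed sample with the shared time stamps Y.
# Values below are illustrative stand-ins, not data from the archive.
dataX_1 = [0.00, 0.12, 0.25, 0.18, 0.05]  # sway speed along X [m/s]
Y = [0.0, 0.5, 1.0, 1.5, 2.0]             # time stamps [s]

series = list(zip(Y, dataX_1))
# Find when the sway was strongest during this (un)loading use case:
peak_t, peak_v = max(series, key=lambda tv: abs(tv[1]))
print(peak_t, peak_v)
```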
This dataset mainly consists of 1) the source code of the wide-attention and deep model (WADC); and 2) datasets to evaluate the performance of the proposed model. The datasets are obtained from the Caltrans Performance Measurement System (CPeMS), http://pems.doc.ca.gov, and the Fremont Bridge Bicycle Counter (FBBC), https://data.seattle.gov.
The uploaded data file is a part of the data used or generated by the model proposed in the paper entitled "A New Dynamic Stochastic EV Model for Power System Planning Applications". The proposed model consists of two sub-models: the travel-behavior sub-model and the battery-depletion sub-model. The model takes into consideration the different trip purposes, starting and ending trip times, as well as the corresponding battery depletion. The outcomes of the travel-behavior sub-model are the starting times, the ending times and the trip distances.
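The idea of drawing trip start times and distances stochastically and then depleting the battery accordingly can be sketched as below. The distributions and the kWh/km figure are illustrative assumptions, not the paper's fitted parameters.

```python
# Sketch: one stochastic draw from a travel-behaviour sub-model, followed
# by the corresponding battery depletion. All numbers are assumptions.
import random

random.seed(1)  # reproducible toy draw

def sample_trip():
    # Start time clustered around a morning peak, clipped to a valid hour.
    start_h = min(23.0, max(0.0, random.gauss(8.0, 1.5)))
    # Trip distance with a floor of 1 km.
    dist_km = max(1.0, random.gauss(15.0, 6.0))
    # Battery depletion at an assumed average consumption of 0.18 kWh/km.
    kwh = dist_km * 0.18
    return start_h, dist_km, kwh

start, dist, depletion = sample_trip()
print(0.0 <= start <= 23.0 and dist >= 1.0 and depletion > 0)
```

Repeating such draws over many vehicles yields the aggregate charging-demand profiles a planning study needs.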
A Traffic Light Controller PETRI_NET (Finite State Machine) Implementation.
An FSM approach can be followed in systems whose tasks constitute a well-structured list, so that all states can be easily enumerated. A traffic light controller represents a relatively complex control function.
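Because the states are easily enumerated, the controller's core reduces to a transition table. The minimal sketch below illustrates the FSM idea only; the actual implementation in the archive uses a Petri net and may structure the states differently.

```python
# Sketch: a traffic-light controller as an explicitly enumerated FSM.
# The whole behaviour is one transition table over three states.
NEXT = {"red": "green", "green": "yellow", "yellow": "red"}

def step(state):
    """Advance the controller by one phase change."""
    return NEXT[state]

state = "red"
trace = [state]
for _ in range(4):          # run a few phase changes
    state = step(state)
    trace.append(state)
print(trace)                # cycles red -> green -> yellow -> red -> ...
```

A Petri-net formulation extends this by modelling, e.g., two intersecting signal heads whose green phases must never coincide.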
This file needs to be unzipped before its contents can be accessed.