This dataset of 7200 channels is generated at different locations in a room area of 30×15×4 m³, where the locations are separated by 0.25 m in both the horizontal and vertical directions. Each AP uses 10 dBm TX power and 2D beamforming (BF). In the concurrent mmWave BT scenario, all APs operate simultaneously, while in the single mmWave BT scenario, we consider a single AP fixed at the center of the room's ceiling.

 


The files here support the research work presented in the paper "Site-specific Radio Propagation Model for 5G Macrocell Coverage at Sub-6 GHz Frequencies", submitted to IEEE Transactions on Antennas and Propagation and currently under revision. The paper proposes a hybrid radio-wave propagation model for 5G macrocell coverage prediction in built-up areas in the sub-6 GHz frequency band.


# -*- coding: utf-8 -*-
"""
Created on Wed Feb 26 11:19:38 2020

@author: ali nouruzi
"""

import numpy as np
import random


As for the experiment results in the manuscript, we provide the corresponding Transmitted, Measured and Processed data, zipped in the folder “Transmitted_Measured_Processed_DATA”, for readers who are interested in the SC MIMO transceiver and want to reproduce the experiment results shown in the manuscript. All parameters, including the sampling rate, modulation frequency, etc., are the same as those in the Experiment section.

Instructions: 

The detailed file description is zipped in the Attachment.


As for the experiment results in the manuscript, we provide the corresponding Transmitted, Measured and Processed data, zipped in the folder “Transmitted_Measured_Processed_DATA”, for readers who are interested in the SC MIMO transceiver and want to reproduce the experiment results shown in the manuscript. All parameters, including the sampling rate, modulation frequency, etc., are the same as those in the Experiment section.

Instructions: 

The data in the folder are described in detail below.

 

Transmitted Data in USRP B210:

  • Transimit_data_16QAM_1, Transimit_data_16QAM_2 : The transmitted data with 16-QAM modulation on Tx antenna 1 and Tx antenna 2 (after the RRC filter).
  • TrueSymbol_16QAM.mat : The transmitted random symbols, for checking.
  • Transimit_data_QPSK_1, Transimit_data_QPSK_2 : The transmitted data with QPSK modulation on Tx antenna 1 and Tx antenna 2 (after the RRC filter).
  • TrueSymbol_QPSK.mat : The transmitted random symbols, for checking.
  • Transimit_data_16QAM_Pic_Frame_1, Transimit_data_16QAM_Pic_Frame_2 : The transmitted image data with 16-QAM modulation on Tx antenna 1 and Tx antenna 2.

 

Measured and Processed Data:

  • rxSig_mod.mat : The measured raw single-channel baseband signal.
  • coarseSyncSig.mat : The harmonic signals after coarse carrier synchronization.
  • rxSig_filter.mat : The harmonic signals after coarse carrier synchronization and matched filtering.
  • symbol_1_Fine.mat, symbol_2_Fine.mat, symbol_1_save.mat, symbol_2_save.mat : The detected symbols from User 1 and User 2.
  • User_1_Received_Constellation_Diagram.fig, User_2_Received_Constellation_Diagram.fig : The received constellation diagrams of User 1 and User 2.
  • User_1_Recovered_Image.fig, User_2_Recovered_Image.fig : The received images from User 1 and User 2.
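The .mat files listed above can be inspected in Python before reproducing the experiments. This is a minimal sketch using SciPy, assuming the files are standard (non-HDF5) MATLAB files; the variable names stored inside each file are not documented here, so the helper simply lists whatever arrays it finds.

```python
# Sketch: inspect a .mat file from the dataset with SciPy.
# Assumption: files are MATLAB v7 (non-HDF5); variable names inside
# the files are not documented here, so we just report what exists.
from scipy.io import loadmat

def describe_mat(fname):
    """Return {variable_name: array_shape} for a .mat file."""
    data = loadmat(fname)
    # loadmat returns a dict; keys wrapped in '__' are file metadata.
    return {k: v.shape for k, v in data.items() if not k.startswith("__")}

# e.g. describe_mat("TrueSymbol_16QAM.mat") lists the stored symbol arrays
```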

Disclaimer 
DARPA is releasing these files in the public domain to stimulate further research. Their release implies no obligation or desire to support additional work in this space. The data is released as-is. DARPA makes no warranties as to the correctness, accuracy, or usefulness of the released data. In fact, since the data was produced by research prototypes, it is practically guaranteed to be imperfect.
Instructions: 

The data containing red team activities is divided into three sets, each corresponding to the three days of evaluation: 23Sep19, 24Sep19, and 25Sep19. The fourth set (23Sep19-night) contains no threats and contains data from the first night of evaluations, when clients were left running unattended overnight to collect additional baseline data.

During the initial one-thousand-client test, each mainframe server hosted fifty Windows clients. Half of the clients on each server were taken down for data collection, reducing the number of clients to five hundred, which resulted in a continuity gap in the client machine naming (e.g. Sys001-Sys025, Sys051-Sys075, …, Sys951-Sys975).
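As a sanity check on the naming scheme, the surviving client names can be enumerated; this sketch assumes the gap pattern shown in the example above (the first 25 of every block of 50) holds across all servers.

```python
# Reconstruct the client names with the continuity gap:
# of every block of 50 (Sys001-Sys050, ...), only the first 25 remain.
names = [f"Sys{n:03d}"
         for start in range(1, 1000, 50)   # 20 blocks of 50 clients
         for n in range(start, start + 25)]

print(len(names))                 # 500 surviving clients
print(names[24], names[25])       # gap: Sys025 is followed by Sys051
```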

A full description of the contents, including message formats and file structure can be found in the OpTC-data-release.md file attached to this page and included in the root directory of the OpTC.tar.gz.


The provided dataset computes the exact analytical bit error rate (BER) of a NOMA system in SISO broadcast channels under the assumption of i.i.d. Rayleigh fading. The reader must specify the following inputs: 1) number of users, 2) modulation orders, 3) power assignment, 4) pathloss, and 5) transmit signal-to-noise ratio (SNR). The output is stored in a matrix in which rows correspond to users and columns to transmit SNRs.
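A Monte Carlo simulation is a common cross-check for such analytical BER expressions. The sketch below is not the dataset's analytical computation: it simulates a hypothetical two-user BPSK NOMA downlink over i.i.d. Rayleigh fading with SIC at the near user, using assumed power-assignment values, and arranges the result in the same matrix layout (rows = users, columns = transmit SNRs).

```python
# Monte Carlo BER cross-check for 2-user BPSK NOMA (illustrative only;
# modulation, power split 0.8/0.2, and SNR grid are assumptions).
import numpy as np

rng = np.random.default_rng(0)
snr_db = np.array([0, 10, 20, 30])
p_far, p_near = 0.8, 0.2            # more power to the far user
n_bits = 100_000
ber = np.zeros((2, snr_db.size))    # rows: users, cols: transmit SNRs

for j, snr in enumerate(10.0 ** (snr_db / 10)):
    b_far = rng.integers(0, 2, n_bits)
    b_near = rng.integers(0, 2, n_bits)
    # Superposed BPSK signal, unit average power
    x = np.sqrt(p_far) * (2 * b_far - 1) + np.sqrt(p_near) * (2 * b_near - 1)
    h = (rng.standard_normal(n_bits) + 1j * rng.standard_normal(n_bits)) / np.sqrt(2)
    n = (rng.standard_normal(n_bits) + 1j * rng.standard_normal(n_bits)) / np.sqrt(2 * snr)
    y = h * x + n
    z = np.real(y * np.conj(h)) / np.abs(h) ** 2   # channel equalisation
    far_hat = (z > 0).astype(int)                  # far user: near signal treated as noise
    ber[0, j] = np.mean(far_hat != b_far)
    # Near user: SIC, subtract the re-encoded far-user signal first
    z_sic = z - np.sqrt(p_far) * (2 * far_hat - 1)
    ber[1, j] = np.mean((z_sic > 0).astype(int) != b_near)

print(ber)   # BER falls with SNR for both users
```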


Another raw ADS-B signal dataset with labels. The dataset was captured using a BladeRF2 SDR receiver at 1090 MHz with a sample rate of 10 MHz.


This dataset is being used to evaluate PerfSim accuracy and speed against a real deployment in a Kubernetes cluster based on sfc-stress workloads.


This dataset was used in our work "See-through a Vehicle: Augmenting Road Safety Information using Visual Perception and Camera Communication in Vehicles" published in the IEEE Transactions on Vehicular Technology (TVT). In this work, we present the design, implementation and evaluation of non-line-of-sight (NLOS) perception to achieve a virtual see-through functionality for road vehicles.

Instructions: 

Non-Line of Sight Perception Vehicular Camera Communication

This project is an end-to-end Python 3 application with a continuous loop that captures and analyses 100 frames per second to derive appropriate safety warnings.

Contact

Dr. Ashwin Ashok, Assistant Professor, Computer Science, Georgia State University

Collaborators

Project contents

This project contains 3 modules that should be run in parallel and interact with each other using 3 CSV files.

Modules

  1. non-line-of-sight-perception
  2. intelligent-vehicular-perception_ivp
  3. warning-transmission

CSV Files

  1. packet.csv
  2. perceived_info.csv
  3. receiver_action.csv
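The three CSV files above act as simple queues between the modules: one module appends rows, another polls and consumes them. A minimal sketch of that pattern, with illustrative field values (the real row schemas are defined by the modules themselves):

```python
# CSV-as-queue pattern used for inter-module communication.
# File name mirrors packet.csv; the row contents here are illustrative.
import csv
import os

def append_packet(path, packet_bits):
    """Producer side: append one packet row to the queue file."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([packet_bits])

def drain_packets(path):
    """Consumer side: read all queued rows, then truncate the file."""
    if not os.path.exists(path):
        return []
    with open(path, newline="") as f:
        rows = [r[0] for r in csv.reader(f) if r]
    open(path, "w").close()   # clear the queue
    return rows

append_packet("packet.csv", "10110010")
append_packet("packet.csv", "01101001")
print(drain_packets("packet.csv"))   # ['10110010', '01101001']
```

In the real system the producer and consumer run in separate processes, so a robust implementation would also need file locking; the sketch omits that for brevity.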

Usage :

The following commands must be run in parallel. For more information on the libraries needed for execution, see the detailed sections below.

# Terminal 1
python3 non-line-of-sight-perception/VLC_project_flow.py zed

# Terminal 2
python3 intelligent-vehicular-perception_ivp/src-code/ivp_impl.py

# Terminal 3
python3 warning-transmission/send_bits_to_transmitter.py

1. non-line-of-sight-perception : Object Detection and Scene Perception Module

For YOLO-v3 training and inference, this project is a fork of the public repository keras-yolo3; refer to that repository's readme. Relevant folders from that repository have been placed in the training and configuration folders of this repository.

Installation of python libraries

Use the package manager pip to install the required libraries.

pip install opencv-python
pip install tensorflow-gpu
pip install Keras
pip install Pillow

Hardware requirements

  1. This code was tested on Jetson Xavier, but any GPU enabled machine should be sufficient.
  2. Zed Camera: Uses Zed camera to capture the images. (Requires GPU to operate at 100 fps).
  3. (Optional) The code can be modified as per the comments in the file: pass 'zed' for the Zed camera, 0 for the default camera, or a video path for mp4 or svo files.

Output

perceived_info.csv

2. intelligent-vehicular-perception_ivp : Safety Message/Warning Mapping Module

This module is responsible for making intelligent recommendations to the driver as well as generating safety warnings for the following vehicles on the road. The module's output fuses the safety warning received through the VLC channel with the vehicle's own scene-perception data.

Python Library Dependencies

  • json
  • operator
  • csv
  • enum
  • fileinput
  • re

Input

Output

The output is two-fold.

  • receiver_action.csv : Intelligent recommendation to the driver.
  • packet.csv : Generated packet bits. Each packet's bits are logged into the 'packet.csv' file, which works as a queue: every new packet logged here eventually gets transmitted by the VLC transmission module.

3. warning-transmission: Communication Module

Detailed transmitter notes, including hardware requirements, are present in transmitter_notes.txt

Python Library Dependencies

  • serial

Input

  • packet.csv : Queue of generated packet bits to transmit.

Output

LED flashes high/low in correspondence to the packets in input file.

Dataset used for training the model

The dataset has been generated using Microsoft VoTT.

This is a "Brakelight"-labelled dataset that can be used for training brakelight detection models. The dataset contains brakelights labelled on images from the work cited in the reference below.

*Reference : Cui, Z., Yang, S. W., & Tsai, H. M. (2015, September). A vision-based hierarchical framework for autonomous front-vehicle taillights detection and signal recognition. In Intelligent Transportation Systems (ITSC), 2015 IEEE 18th International Conference on (pp. 931-937). IEEE.

The labelled dataset contains 1720 training images as well as a CSV file that lists 4102 bounding boxes in the format: image | xmin | ymin | xmax | ymax | label

This can be further converted into the format required by the training module using convert_dataset_for_training.py (replace annotations.txt with the Microsoft VoTT-generated CSV).
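The conversion can be sketched as follows. This is not the code of convert_dataset_for_training.py itself: it assumes the CSV columns appear in the order given above and that 'Brakelight' is the only class (mapped to id 0), and it emits the one-line-per-image annotation format used by keras-yolo3 ("path x1,y1,x2,y2,class ...").

```python
# Hedged sketch of the VoTT-CSV to keras-yolo3 annotation conversion.
# Assumptions: column order image|xmin|ymin|xmax|ymax|label, no header row,
# single class 'Brakelight' mapped to id 0.
import csv
from collections import defaultdict

def vott_to_yolo_lines(csv_path, class_ids=None):
    class_ids = class_ids or {"Brakelight": 0}
    boxes = defaultdict(list)                     # image -> list of box strings
    with open(csv_path, newline="") as f:
        for image, xmin, ymin, xmax, ymax, label in csv.reader(f):
            boxes[image].append(f"{xmin},{ymin},{xmax},{ymax},{class_ids[label]}")
    # One annotation line per image: "path x1,y1,x2,y2,class x1,y1,..."
    return [f"{img} " + " ".join(bs) for img, bs in boxes.items()]
```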

 

Acknowledgements

This work has been partially supported by the US National Science Foundation (NSF) grants 1755925, 1929171 and 1901133.

