Several experimental measurement campaigns have been carried out to characterize Power Line Communication (PLC) noise and channel transfer functions (CTFs). This dataset contains a subset of the PLC CTFs, impedances, and noise traces measured in an in-building scenario.

The 2x2 MIMO CTF matrices were acquired in the frequency domain, with a resolution of 74.769 kHz, over the frequency range 1 - 100 MHz. Noise traces, with a duration of about 16 ms, were acquired in the time domain concurrently from the two multi-conductor ports.

Instructions: 

The dataset is available in MATLAB format (*.mat). Instructions and basic examples for displaying the data are provided in "script_load_dataset.m".
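
For users working outside MATLAB, the same .mat file can also be inspected from Python with SciPy. The following is a minimal sketch that simply lists the stored variables; the file name 'dataset.mat' is a placeholder, and MATLAB v7.3 files would require h5py instead of scipy.io.loadmat.

```python
# Minimal sketch: inspect the PLC dataset from Python (outside MATLAB).
# 'dataset.mat' is a placeholder file name, not part of the dataset.
from scipy.io import loadmat

data = loadmat("dataset.mat")  # works for MATLAB v7 files; v7.3 needs h5py

# List the variables stored in the file and their array shapes.
for name, value in data.items():
    if not name.startswith("__"):  # skip MATLAB header entries
        print(name, getattr(value, "shape", type(value)))
```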


Radio-frequency noise mapping data collected in the Downtown, Back Bay, and North End neighborhoods of Boston, MA, USA, in 2018 and 2019.

Instructions: 

Data consist of:
* distance, in meters, along the measurement path. This field is likely not useful for anyone other than the authors, but is included here for completeness.
* geographic location of the measurement, in decimal degrees, WGS84
* median external radio-frequency noise power, measured in a 1 MHz bandwidth about a center frequency of 142.0 MHz, in dBm
* peak external radio-frequency noise power, also measured in a 1 MHz bandwidth about a center frequency of 142.0 MHz, in dBm. Here, peak power is defined as the value below which 99.99% of the samples lie (see the sketch after this list).
* for the North End and Back Bay datasets, the official zoning district containing the measurement location. Measurements in the Downtown data were all collected within Business and Mixed Use zoning districts, so the district is not listed for them.
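
The two noise statistics above can be reproduced from raw power samples as sketched below. This is only an illustration of the definitions; the input file and variable names are placeholders, not fields of the dataset.

```python
# Minimal sketch of the two noise statistics described above, assuming the
# raw measurements are a 1-D array of power samples in dBm.
import numpy as np

samples_dbm = np.loadtxt("noise_samples.txt")  # placeholder input file

median_dbm = np.median(samples_dbm)            # median external noise power
peak_dbm = np.percentile(samples_dbm, 99.99)   # 99.99% of samples lie below this value

print(f"median: {median_dbm:.1f} dBm, peak: {peak_dbm:.1f} dBm")
```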


Speech processing in noisy conditions allows researchers to build solutions that work in real-world conditions. Environmental noise in Indian conditions is very different from the typical noise seen in most Western countries. This dataset is a collection of various indoor and outdoor noises collected over a period of several months. The audio files are RIFF (little-endian) WAVE audio, Microsoft PCM, 8-bit, mono, 11025 Hz, and were recorded using a Dialogic CTI card.
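
As a quick sanity check of the stated format, a single file can be read with standard Python tooling. The snippet below is a minimal sketch using scipy.io.wavfile; the file name is a placeholder, not a file from the dataset.

```python
# Minimal sketch: read one of the 8-bit mono 11025 Hz WAV noise files.
# 'indoor_noise_001.wav' is a placeholder name.
from scipy.io import wavfile

rate, audio = wavfile.read("indoor_noise_001.wav")
print(rate)         # expected: 11025 (Hz)
print(audio.dtype)  # expected: uint8 for 8-bit PCM
print(audio.shape)  # (n_samples,) for mono audio

# 8-bit PCM is unsigned; convert to a signed float signal in [-1, 1).
signal = (audio.astype("float32") - 128.0) / 128.0
```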


We present two synthetic datasets on classification of Morse code symbols for supervised machine learning problems, in particular, neural networks. The linked Github page has algorithms for generating a family of such datasets of varying difficulty. The datasets are spatially one-dimensional and have a small number of input features, leading to high density of input information content. This makes them particularly challenging when implementing network complexity reduction methods.

Instructions: 

First unzip the given file 'morse_datasets.zip' to get two datasets - 'baseline.npz' and 'difficult.npz'. These are 2 out of a family of synthetic datasets that can be generated using the given script 'generate_morse_dataset.py'. For instructions on using the script, see the docstring and/or the linked Github page.

To load data from a dataset, first download 'load_data.txt' and change its extension to '.py'.

Then run the function 'load_data' with the argument 'filename' set to the path of the desired dataset, for example './baseline.npz'.

This will output six variables: xtr, ytr, xva, yva, xte, and yte. These are the data (x) and labels (y) for the training (tr), validation (va), and test (te) splits. The y arrays are in one-hot format.

Then you can run your favorite machine learning / classification algorithm on the data.
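
As a concrete illustration of the steps above, a minimal Python session could look like the sketch below, assuming 'load_data.py' (renamed from 'load_data.txt') sits in the working directory next to 'baseline.npz' and that 'load_data' returns the six arrays in the order listed.

```python
# Minimal sketch of the loading steps described above.
# Assumes load_data.txt has been renamed to load_data.py in the working
# directory and that load_data returns the six splits in the order below.
import numpy as np
from load_data import load_data

xtr, ytr, xva, yva, xte, yte = load_data(filename="./baseline.npz")

print(xtr.shape, ytr.shape)  # training data and one-hot labels

# Convert one-hot labels to integer class indices if your classifier needs them.
ytr_classes = np.argmax(ytr, axis=1)
```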


The dataset consists of EEG recordings obtained while subjects were listening to different utterances: a, i, u, bed, please, and sad. A limited number of EEG recordings were also obtained when the three vowels were corrupted by white and babble noise at an SNR of 0 dB. Recordings were performed on 8 healthy subjects.

Instructions: 

Recordings were performed at the Centre de recherche du Centre hospitalier universitaire de Sherbrooke (CRCHUS), Sherbrooke (Quebec), Canada. The EEG recordings were made with an actiCAP active electrode system, Versions I and II (Brain Products GmbH, Germany), comprising 64 Ag/AgCl electrodes. The signal was amplified with BrainAmp MR amplifiers and recorded using the Vision Recorder software. The electrodes were positioned using a standard 10-20 layout. Experiments were performed on 8 healthy subjects without any declared hearing impairment.

Each session lasted approximately 90 minutes and was divided into two parts. The first part, lasting 30 minutes, consisted of installing the cap on the subject; an electroconductive gel was placed under each electrode to ensure proper contact between the electrode and the scalp. The second part, the listening and EEG acquisition, lasted approximately 60 minutes. The subjects had to stay still with eyes closed, avoid any facial movement or swallowing, and remain concentrated on the audio signals for the full length of the experiment. Audio signals were presented through earphones while EEG was recorded. During the experiment, each trial was repeated randomly at least 80 times. A stimulus was presented randomly within each trial, which lasted approximately 9 seconds. A 2-minute pause was given after 5 minutes of trials, during which the subjects could relax and stretch.

Once the EEG signals were acquired, they were resampled at 500 Hz and band-pass filtered between 0.1 Hz and 45 Hz in order to extract the frequency bands of interest for this study. EEG signals were then separated into 2-second intervals, with the stimulus presented at 0.5 s within each interval. If the signal amplitude exceeded a pre-defined 75 µV limit, the trial was marked for rejection.

A sample code is provided to read the dataset and generate ERPs. First run epoch_data.m for the specific subject, then run the mean_data.m file in the ERP folder. EEGLAB for MATLAB is required.
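
The provided sample code (epoch_data.m and mean_data.m, requiring EEGLAB for MATLAB) implements this pipeline. For readers who only want to see the epoching and rejection logic described above, the following is a rough Python/NumPy sketch of that step alone, assuming a continuous EEG array already resampled to 500 Hz and a list of stimulus-onset sample indices (both placeholder inputs); it is not a substitute for the provided MATLAB scripts.

```python
# Rough sketch of the epoching and rejection rule described above, in NumPy.
# Assumes 'eeg' is a (n_channels, n_samples) array in microvolts, already
# resampled to 500 Hz and band-pass filtered, and 'onsets' holds the
# stimulus-onset sample indices. Both are placeholders for illustration.
import numpy as np

FS = 500              # sampling rate after resampling, Hz
PRE = int(0.5 * FS)   # stimulus is presented 0.5 s into each 2 s epoch
POST = int(1.5 * FS)  # remaining 1.5 s after the stimulus
REJECT_UV = 75.0      # amplitude limit used to mark trials for rejection

def epoch_and_reject(eeg, onsets):
    epochs = []
    for onset in onsets:
        epoch = eeg[:, onset - PRE : onset + POST]  # 2 s window, stimulus at 0.5 s
        if epoch.shape[1] != PRE + POST:
            continue                                # skip truncated epochs at the edges
        if np.max(np.abs(epoch)) > REJECT_UV:
            continue                                # reject trials exceeding 75 µV
        epochs.append(epoch)
    return np.stack(epochs) if epochs else np.empty((0, eeg.shape[0], PRE + POST))
```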
