Signal Processing

This dataset uses Asus RT-AC86U routers and the nexmon toolset to collect Channel State Information (CSI) data in a 7 m × 5 m meeting room furnished with a conference table, several chairs, and a locker. The raw data, stored in .pcap format, is accompanied by processing code on GitHub that parses it into CSI matrices stored in .npy format. Each CSI matrix contains amplitude and processed phase values for four channels, covering both the external and internal antennas in the room.
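As a minimal sketch of working with such parsed data: the snippet below separates amplitude and phase from a complex CSI matrix. The file layout (packets × subcarriers × 4 channels) and the use of a synthetic array in place of `np.load` on an actual .npy file are assumptions for illustration, not part of the dataset description.

```python
import numpy as np

def split_amplitude_phase(csi):
    """Return amplitude and phase arrays from a complex CSI matrix."""
    return np.abs(csi), np.angle(csi)

# Synthetic stand-in for loading one parsed .npy file; the assumed shape
# is (n_packets, n_subcarriers, 4 channels).
rng = np.random.default_rng(0)
csi = rng.standard_normal((100, 64, 4)) + 1j * rng.standard_normal((100, 64, 4))
amp, phase = split_amplitude_phase(csi)
print(amp.shape, phase.shape)  # (100, 64, 4) (100, 64, 4)
```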


ABSTRACT Analysis of stock prices has been widely studied because of the strong demand among private investors and financial institutions. However, it is difficult to accurately capture the causes of fluctuations in stock prices, as prices are affected by a variety of factors. Therefore, we used non-harmonic analysis, a frequency analysis technique that resolves periodicities more accurately than conventional methods, to visualize the periodicity of the Nasdaq Composite Index stock price from January 4, 2010 to September 8, 2023.
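The core idea behind non-harmonic analysis is that candidate frequencies are not restricted to the DFT grid. A toy sketch of that idea, fitting a single sinusoid by least squares over a fine frequency grid, is shown below; the grid resolution and the synthetic test signal are assumptions for illustration only, not the authors' actual method.

```python
import numpy as np

def fit_sinusoid(x, freqs):
    """Least-squares fit of A*cos + B*sin over candidate frequencies;
    returns the best-fitting frequency and its coefficients."""
    n = np.arange(len(x))
    best = None
    for f in freqs:
        # Design matrix for a sinusoid at candidate frequency f
        M = np.column_stack([np.cos(2 * np.pi * f * n),
                             np.sin(2 * np.pi * f * n)])
        coef, *_ = np.linalg.lstsq(M, x, rcond=None)
        err = np.sum((x - M @ coef) ** 2)
        if best is None or err < best[0]:
            best = (err, f, coef)
    return best[1], best[2]

# Toy signal with period 7.3 samples: its frequency 1/7.3 lies off the
# DFT grid, which is exactly the case non-harmonic analysis targets.
n = np.arange(256)
x = 2.0 * np.cos(2 * np.pi * n / 7.3 + 0.4)
f_hat, _ = fit_sinusoid(x, np.linspace(0.05, 0.25, 2001))
print(f_hat)  # close to 1/7.3 ≈ 0.13699
```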


Visual saliency prediction has been extensively studied in the context of standard dynamic range (SDR) displays. Recently, high dynamic range (HDR) displays have become popular, since HDR videos can provide viewers with a more realistic visual experience than SDR ones. However, studies on the visual saliency of HDR videos, also called HDR saliency, remain scarce. Therefore, we establish an SDR-HDR Video pair Saliency Dataset (SDR-HDR-VSD) for saliency prediction on both SDR and HDR videos.


This dataset contains the results of a 60 GHz indoor sensing measurement campaign using a bistatic OFDM radar based on 5G-specified positioning reference signals (PRSs). The data can be used for testing end-to-end indoor millimeter-wave radio positioning as well as simultaneous localization and mapping (SLAM) algorithms, including channel parameter estimation. Beamformed PRSs with dense angular sampling in transmission and reception allow efficient capture of the line-of-sight (LoS) path as well as multipath components.


QiandaoEar22 is a high-quality noise dataset designed for identifying specific ships among multiple underwater acoustic targets using ship-radiated noise. This dataset includes 9 hours and 28 minutes of real-world ship-radiated noise data and 21 hours and 58 minutes of background noise data.


To address the challenges faced by patients with neurodegenerative disorders, Brain-Computer Interface (BCI) solutions are being developed. However, many current datasets lack inclusion of languages spoken by patients, such as Telugu, which is spoken by over 90 million people in India. To bridge this gap, we have created a dataset comprising Electroencephalograph (EEG) signal samples of commonly used Telugu words. Using the Open-BCI Cyton device, EEG samples were captured from volunteers as they pronounced these words.


The dataset consists of 4-channel EOG data recorded in two environments. The first category was recorded from 21 people using a driving simulator (1976 samples); the second was recorded from 30 people in real-road conditions (390 samples).

All signals were acquired with JINS MEME ES_R smart glasses equipped with a 3-point EOG sensor; the sampling frequency is 200 Hz.
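A minimal sketch of handling one such recording, assuming the 4-channel EOG signals are arranged as an (n_samples, 4) array; the array layout and the synthetic stand-in data are assumptions, not part of the dataset description.

```python
import numpy as np

FS = 200  # Hz, sampling frequency stated for the JINS MEME ES_R glasses

def recording_duration_seconds(eog):
    """Duration of one recording, derived from its sample count."""
    return eog.shape[0] / FS

# Synthetic stand-in for one recording: 10 s of 4-channel EOG data
eog = np.zeros((10 * FS, 4))
print(recording_duration_seconds(eog))  # 10.0
```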


The dataset involves two sets of participants: a group of twenty skilled drivers aged 40 to 68, each with at least ten years of driving experience (class 1), and a group of ten novice drivers aged 18 to 46, who were taking driving lessons at a driving school at the time of recording (class 2).

The data was recorded using JINS MEME ES_R smart glasses by JINS, Inc. (Tokyo, Japan).

Each file contains the signals from one single ride.


The AnxiECG-PPG Database contains synchronized electrocardiogram (ECG) and mobile-acquired photoplethysmography (PPG) recordings from 47 healthy participants. The acquisition protocol assesses three distinct states: a 5-minute Baseline, a 1-minute Physical Activated State, and a Psychological Activated State provoked through emotion-inducing videos (negative, positive, and neutral emotional valence).
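As an illustrative sketch, a recording covering the protocol can be sliced into its states by their nominal durations. The sampling rate, the contiguous back-to-back ordering of segments, and the toy signal are all assumptions for this example, not details stated for the database.

```python
import numpy as np

FS = 500  # Hz, assumed ECG sampling rate (illustrative only)
BASELINE_S, PHYSICAL_S = 5 * 60, 60  # 5-min Baseline, 1-min Physical state

# Toy signal: Baseline + Physical + 2 min of Psychological state
ecg = np.zeros(FS * (BASELINE_S + PHYSICAL_S + 120))

baseline = ecg[:FS * BASELINE_S]
physical = ecg[FS * BASELINE_S : FS * (BASELINE_S + PHYSICAL_S)]
psychological = ecg[FS * (BASELINE_S + PHYSICAL_S):]
print(len(baseline) / FS, len(physical) / FS)  # 300.0 60.0
```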


SeaIceWeather Dataset 

This is the SeaIceWeather dataset, collected for training and evaluation of deep learning based de-weathering models. To the best of our knowledge, this is the first such publicly available dataset for the sea ice domain. This dataset is linked to our paper titled: Deep Learning Strategies for Analysis of Weather-Degraded Optical Sea Ice Images. The paper can be accessed at: