Signal Processing

This study aims to create a robust hand grasp recognition system using surface electromyography (sEMG) data collected from four electrodes. The grasps considered are the cylindrical, spherical, tripod, lateral, hook, and pinch grasps. The proposed system seeks to address common challenges, such as electrode shift, inter-day variability, and individual differences, which have historically hindered the practicality and accuracy of sEMG-based systems.
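
As a rough illustration of how such 4-channel sEMG recordings are commonly processed (the study's own pipeline is not described here), the sketch below windows the signal and extracts simple time-domain features for a grasp classifier; the sampling rate, window length, feature set, and classifier choice are all assumptions.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def window_features(emg, fs=1000, win_ms=200, step_ms=100):
    """Slide a window over a (n_samples, 4) sEMG array and return
    per-channel time-domain features (MAV, RMS, zero-crossing count)."""
    win, step = int(fs * win_ms / 1000), int(fs * step_ms / 1000)
    feats = []
    for start in range(0, emg.shape[0] - win + 1, step):
        seg = emg[start:start + win]                        # (win, 4)
        mav = np.mean(np.abs(seg), axis=0)
        rms = np.sqrt(np.mean(seg ** 2, axis=0))
        zc = np.sum(np.diff(np.sign(seg), axis=0) != 0, axis=0)
        feats.append(np.concatenate([mav, rms, zc]))
    return np.array(feats)                                  # (n_windows, 12)

# Six-class grasp classifier (cylindrical, spherical, tripod, lateral,
# hook, pinch); the training arrays below are placeholders.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
# clf.fit(window_features(train_emg), train_labels)
```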

In low-altitude unmanned aerial vehicle (UAV) detection scenarios, the initial segment of radar linear frequency modulation (LFM) signals is often corrupted due to building occlusions and noise interference, making accurate range estimation difficult. To address this issue, we propose a deep learning-based framework named Deep Time-Frequency Inverse Reconstruction Network (DTFIRNet) for radar echo signal restoration and precise ranging.
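
For orientation only, the sketch below simulates a conventional pulse-compression baseline rather than DTFIRNet: an LFM echo whose leading segment is zeroed out (mimicking occlusion of the initial samples) is matched-filtered to estimate range. All waveform parameters and the target range are assumed values.

```python
import numpy as np

# Assumed LFM parameters (not taken from the dataset description).
fs, T, B = 100e6, 10e-6, 20e6              # sample rate, pulse width, bandwidth
c = 3e8
t = np.arange(0, T, 1 / fs)
chirp = np.exp(1j * np.pi * (B / T) * t ** 2)   # reference LFM pulse

# Simulated echo from a target at 600 m with the leading 20% of the pulse
# zeroed, mimicking a corrupted initial segment, plus additive noise.
R_true = 600.0
delay = int(round(2 * R_true / c * fs))
echo = np.zeros(4096, dtype=complex)
echo[delay:delay + t.size] = chirp
echo[delay:delay + t.size // 5] = 0
echo += 0.1 * (np.random.randn(echo.size) + 1j * np.random.randn(echo.size))

# Matched filtering (pulse compression) and range estimation.
mf = np.correlate(echo, chirp, mode="full")[chirp.size - 1:]
R_est = np.argmax(np.abs(mf)) / fs * c / 2
print(f"estimated range: {R_est:.1f} m")
```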

The development of automated techniques for speech analysis-based Parkinson's disease (PD) detection has attracted considerable interest, especially because of its possible uses in health tele-monitoring. Due to the drawbacks of the α-synuclein Seed Amplification Assay technique, scientists are looking more closely at speech signals as a potential substitute for PD detection. In order to identify PD, this proposal describes a thorough investigation that emphasizes the use of both voiced and unvoiced speech material.
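
As a hedged illustration of separating voiced and unvoiced material, the snippet below uses librosa's pYIN voicing decision to split MFCC frames into the two groups; the file name, sample rate, and feature choices are assumptions and not the study's actual pipeline.

```python
import librosa
import numpy as np

# Hypothetical file path; the dataset's actual file layout is not specified.
y, sr = librosa.load("speech_sample.wav", sr=16000)

# Frame-level voiced/unvoiced decision via pYIN pitch tracking.
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr)

# MFCCs over the whole signal, split by the frame-level voicing decision so
# voiced and unvoiced material can be modelled separately.
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
n = min(mfcc.shape[1], voiced_flag.size)
voiced_mfcc = mfcc[:, :n][:, voiced_flag[:n]]
unvoiced_mfcc = mfcc[:, :n][:, ~voiced_flag[:n]]
```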

This dataset consists of three subsets for training models for chipless RFID tag identification: two training sets (one of measured S21 recordings and one of synthetically generated data) and one test set of measured S21 recordings.
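
One plausible way to use the three subsets, shown only as a sketch, is to train a classifier on the measured and synthetic S21 sweeps and evaluate it on the measured test set; the file names, array layout, and classifier below are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Hypothetical arrays: each row is one |S21| sweep over the frequency grid,
# with an integer tag ID as the label. File names are placeholders.
X_meas, y_meas = np.load("s21_train.npy"), np.load("labels_train.npy")
X_syn, y_syn = np.load("s21_synthetic.npy"), np.load("labels_synthetic.npy")
X_test, y_test = np.load("s21_test.npy"), np.load("labels_test.npy")

# Combine measured and synthetic sweeps for training; test on measurements.
X_train = np.vstack([X_meas, X_syn])
y_train = np.concatenate([y_meas, y_syn])

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("tag ID accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```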

Brain-Computer Interface (BCI) technology enables a direct interface between the brain and external devices through the interpretation of neural signals. Datasets in a patient's native language are essential when designing BCI-based solutions for neurological disorders. Current BCI research, however, lacks language-specific datasets, notably for languages such as Telugu, which has over 90 million speakers in India. We developed an electroencephalography (EEG)-based BCI dataset consisting of EEG signal samples for Telugu vowels and consonants.
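
As a minimal, hedged example of preparing such EEG epochs for classification, the sketch below applies a zero-phase band-pass filter and a simple log-variance feature; the sampling rate, band edges, epoch layout, and feature choice are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_epochs(epochs, fs, lo=1.0, hi=40.0, order=4):
    """Zero-phase band-pass filter a (n_epochs, n_channels, n_samples)
    array of EEG epochs; fs, band edges, and filter order are assumptions."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, epochs, axis=-1)

# Hypothetical usage: epochs cut around the cues for Telugu vowels and
# consonants, with labels as integer class indices.
# X = bandpass_epochs(raw_epochs, fs=250)
# band_power = np.log(np.var(X, axis=-1))   # simple per-channel feature
```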

The vocalizations are from the Indian Peafowl (Pavo cristatus), a species native to the Indian subcontinent and especially abundant in India and Sri Lanka. This dataset was curated mainly to support mitigation strategies for the human-peafowl conflict in these regions. The absence of natural predators has contributed to a significant increase in the peafowl population, exacerbating challenges for farmers, and peafowls are sometimes considered agricultural pests due to their tendency to feed on and damage crops.
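
For readers who want to work with the recordings, the short sketch below computes a log-mel spectrogram, a common front end for call detection; the file name and spectrogram parameters are assumptions.

```python
import librosa
import numpy as np

# Hypothetical file name; the dataset's actual naming scheme is not given.
y, sr = librosa.load("peafowl_call.wav", sr=None)

# Log-mel spectrogram as input to a call detector or classifier.
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=1024,
                                     hop_length=256, n_mels=64)
log_mel = librosa.power_to_db(mel, ref=np.max)   # shape: (64, n_frames)
```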

This dataset comprises synchronized multi-modal physiological recordings—functional Near-Infrared Spectroscopy (fNIRS), Electroencephalography (EEG), Electrocardiography (ECG), and Electromyography (EMG)—collected from 16 participants exposed to emotion-eliciting video stimuli. It includes raw signals, event markers, and Python scripts for data import and preprocessing. Special emphasis is placed on fNIRS, which, though less common in affective computing, provides valuable hemodynamic insights that complement electrical signals from EEG, ECG, and EMG.
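
Alongside the provided Python scripts, a minimal sketch of epoching any one modality around the event markers might look like the following; the sampling rates, epoch window, and array shapes are assumptions.

```python
import numpy as np

def epoch(signal, fs, marker_times, tmin=-1.0, tmax=5.0):
    """Cut fixed-length epochs around event markers from a
    (n_channels, n_samples) array; the window bounds are assumptions."""
    lo, hi = int(tmin * fs), int(tmax * fs)
    out = []
    for t in marker_times:                       # marker times in seconds
        i = int(round(t * fs))
        if 0 <= i + lo and i + hi <= signal.shape[1]:
            out.append(signal[:, i + lo:i + hi])
    return np.stack(out)                         # (n_events, n_ch, n_samp)

# Hypothetical usage: each modality is epoched at its own sampling rate so
# fNIRS, EEG, ECG, and EMG stay aligned to the same video-stimulus markers.
# eeg_epochs   = epoch(eeg,   fs=500, marker_times=markers)
# fnirs_epochs = epoch(fnirs, fs=10,  marker_times=markers)
```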

Real-time tracking of electricians in distribution rooms is essential for ensuring operational safety. Traditional GPS-based methods, however, are ineffective in such environments due to complex non-line-of-sight (NLOS) conditions caused by dense cabinets and thick walls that obstruct satellite signals. Existing solutions, such as video-based systems, are prone to inaccuracies due to NLOS effects, while wearable devices often prove inconvenient for workers.

The diameter of the rivet hole is 5 mm. In the experiments at the Cooperative Institute, the acoustic emission (AE) sensor spacing was set to 130 mm, with the centers of sensor 1 and sensor 2 each located 90 mm from the ends of the test piece. The waveform streaming data obtained in the experiment retain only the interval from 30 minutes before crack initiation to fracture of the test piece, and image data of the test piece during this period were recorded.
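
Given the 130 mm sensor spacing, a simple one-dimensional localization of an AE source from the arrival-time difference between the two sensors can be sketched as below; the wave speed and the example time difference are illustrative assumptions, not values from the dataset.

```python
# Linear AE source localization between two sensors 130 mm apart, as in the
# setup described above. Wave speed is an assumed plate-wave value.
def locate_source(dt_s, spacing_mm=130.0, wave_speed_mm_per_s=5.0e6):
    """dt_s = t1 - t2: arrival-time difference between sensor 1 and sensor 2.
    Returns the source position (mm) measured from sensor 1 along the line
    connecting the two sensors."""
    return 0.5 * (spacing_mm + wave_speed_mm_per_s * dt_s)

# Example: the wave reaches sensor 1 ten microseconds before sensor 2,
# placing the source about 40 mm from sensor 1.
print(locate_source(dt_s=-10e-6))   # 40.0
```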