WAV

To support research on multimodal speech emotion recognition (SER), we developed a dual-channel emotional speech database featuring synchronized recordings of bone-conducted (BC) and air-conducted (AC) speech. The recordings were conducted in a professionally treated anechoic chamber with 100 gender-balanced volunteers. AC speech was captured via a digital microphone on the left channel, while BC speech was recorded from an in-ear BC microphone on the right channel, both at a 44.1 kHz sampling rate to ensure high-fidelity audio. 
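Because the channel layout is fixed (AC on the left, BC on the right), the two signals can be separated with standard tooling. A minimal sketch using Python's stdlib `wave` module, assuming 16-bit PCM files (the function name and file path are illustrative, not from the dataset documentation):

```python
import struct
import wave


def split_ac_bc(path):
    """Split a stereo WAV file into (AC, BC) sample tuples.

    Assumes 16-bit PCM with air-conducted speech on the left
    channel and bone-conducted speech on the right, as in the
    database described above.
    """
    with wave.open(path, "rb") as wf:
        assert wf.getnchannels() == 2, "expected a stereo recording"
        n = wf.getnframes()
        frames = wf.readframes(n)
    # Frames are interleaved 16-bit samples: L0 R0 L1 R1 ...
    samples = struct.unpack("<%dh" % (2 * n), frames)
    ac = samples[0::2]  # left channel: air-conducted
    bc = samples[1::2]  # right channel: bone-conducted
    return ac, bc
```

Each returned tuple can then be written back out as a mono WAV for single-channel baselines, or kept paired for multimodal SER experiments.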


QiandaoEar22 is a high-quality noise dataset designed for identifying specific ships among multiple underwater acoustic targets using ship-radiated noise. This dataset includes 9 hours and 28 minutes of real-world ship-radiated noise data and 21 hours and 58 minutes of background noise data.


This is the official Thaat and Raga Forest (TRF) Dataset.

Please cite our paper: Link to Paper

The dataset is also available here: Link to Dataset


AIR-RS-DB: A dataset for classifying Spontaneous and Read Speech

A set of 1028 audio files generated from 7 MP3 files downloaded from All India Radio (https://newsonair.gov.in/). The MP3 files were converted to WAV and then speaker-diarized using https://huggingface.co/pyannote/speaker-diarization (pyannote/speaker-diarization@2022072 model), yielding the 1028 audio files.


The dataset consists of three parts. The first part contains single-note and playing-technique samples. The second includes triple-view video, stereo-microphone recordings, and 4-track optical vibration recordings in raw format of the famous Chinese folk pieces ‘Jasmine Flower’ and the first section of ‘Ambush from Ten Sides’. The third part contains the tracks source-separated from the optical recordings, together with expressive annotation files.


Dataset associated with a paper in the 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS):

"Talk the talk and walk the walk: Dialogue-driven navigation in unknown indoor environments"

If you use this code or data, please cite the above paper.


Most existing audio fingerprinting systems have limitations when used for highly specific audio retrieval at scale. In this work, we generate a low-dimensional representation from a short unit segment of audio and couple this fingerprint with a fast maximum inner-product search. To this end, we present a contrastive learning framework derived from the segment-level search objective. Each training update uses a batch consisting of a set of pseudo-labels, randomly selected original samples, and their augmented replicas.
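The retrieval step described above, matching a unit-segment fingerprint against a database by maximum inner product, can be sketched as follows. The fingerprint dimension, database size, and noise level are illustrative choices, not values from the paper:

```python
import numpy as np


def mips_lookup(query, db):
    """Return the index of the database fingerprint with the
    largest inner product against the query fingerprint.

    With L2-normalized fingerprints, maximum inner product is
    equivalent to maximum cosine similarity.
    """
    scores = db @ query  # one inner product per database entry
    return int(np.argmax(scores))


rng = np.random.default_rng(0)

# Stand-in for learned fingerprints: 1000 entries of dimension 128.
db = rng.standard_normal((1000, 128)).astype(np.float32)
db /= np.linalg.norm(db, axis=1, keepdims=True)  # L2-normalize

# A query that is a slightly perturbed replica of entry 42
# (mimicking an augmented segment) should still retrieve it.
query = db[42] + 0.05 * rng.standard_normal(128).astype(np.float32)
query /= np.linalg.norm(query)
best = mips_lookup(query, db)
```

In practice the exhaustive `db @ query` scan would be replaced by an approximate maximum inner-product search index to reach the scale the abstract targets.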


This dataset was generated using GNU Radio.


The steganography and steganalysis of audio, especially compressed audio, have drawn increasing attention in recent years, and various algorithms have been proposed. However, there is no standard public dataset for verifying the efficiency of each proposed algorithm. Therefore, to promote this field of study, we constructed a dataset of 33,038 stereo WAV audio clips with a sampling rate of 44.1 kHz and a duration of 10 s each. All audio files were collected from the Internet by crawling, to better simulate a real detection environment.


The following dataset consists of utterances recorded from 24 volunteers raised in the Province of Manitoba, Canada. To provide a repeatable set of test words covering all of the phonemes, the Edinburgh Machine Readable Phonetic Alphabet (MRPA) [KiGr08], consisting of 44 words, is used. Each recording consists of one word uttered by the volunteer and recorded in one continuous session.

