Neural Audio Fingerprint Dataset

Submitted by: Sungkyun Chang
Last updated: Wed, 07/14/2021 - 08:43


Most existing audio fingerprinting systems are of limited use for high-specific audio retrieval at scale. In this work, we generate a low-dimensional representation from a short unit segment of audio and couple this fingerprint with fast maximum inner-product search. To this end, we present a contrastive learning framework derived from the segment-level search objective. Each training update uses a batch consisting of a set of pseudo-labels, randomly selected original samples, and their augmented replicas. The replicas simulate degradation of the original audio signals through small time offsets and various types of distortion, such as background noise and room/microphone impulse responses. In the segment-level search task, where conventional audio fingerprinting systems typically fail, our system has shown promising results while using 10x less storage. Our code and dataset are available at
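The augmented replicas described above can be sketched as a simple degradation pipeline: a small random time offset, background-noise mixing at a target SNR, and convolution with a room/microphone impulse response. This is a minimal illustrative sketch; the function name and all parameter values are assumptions, not the settings used in the paper.

```python
import numpy as np

def augment_segment(segment, noise, ir, sr=8000, max_offset_s=0.2,
                    snr_db=10.0, rng=None):
    """Simulate degradation of a clean audio segment (illustrative sketch):
    small random time offset, background-noise mixing at a target SNR,
    and room/microphone impulse-response (IR) convolution.
    Parameter values here are hypothetical, not the paper's settings."""
    rng = rng or np.random.default_rng()
    # 1) random time offset: shift the segment by up to max_offset_s seconds
    offset = int(rng.integers(0, int(max_offset_s * sr)))
    shifted = np.roll(segment, offset)
    # 2) mix background noise at the requested signal-to-noise ratio
    noise = noise[:len(shifted)]
    sig_pow = np.mean(shifted ** 2)
    noi_pow = np.mean(noise ** 2) + 1e-12
    gain = np.sqrt(sig_pow / (noi_pow * 10 ** (snr_db / 10.0)))
    noisy = shifted + gain * noise
    # 3) convolve with a room/microphone IR, keeping the original length
    degraded = np.convolve(noisy, ir)[:len(noisy)]
    return degraded.astype(np.float32)
```

During training, each original segment and one of its degraded replicas form a positive pair for the contrastive objective.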



(c) 2021 by Sungkyun Chang


This dataset includes all music sources, background noises, and impulse response (IR) samples used in the work "Neural Audio Fingerprint for High-specific Audio Retrieval Based on Contrastive Learning" ( 

This dataset was generated by processing several external datasets, including the Free Music Archive (FMA), AudioSet, Common Voice, Aachen IR, OpenAIR, and Vintage MIC, along with an internal dataset. See for details.

Dataset-mini vs. Dataset-full: the only difference between the two is the size of `test-dummy-db`. You can therefore train and test first with `Dataset-mini`; `Dataset-full` is for testing at 100x larger scale.
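Once fingerprints have been extracted from any of these splits, segment-level retrieval reduces to maximum inner-product search over the database fingerprints; with L2-normalized embeddings this is equivalent to cosine similarity. A minimal brute-force sketch (the paper's system couples this with a fast approximate index instead; function names here are illustrative):

```python
import numpy as np

def build_db(fingerprints):
    # L2-normalize each fingerprint so inner product equals cosine similarity
    fp = np.asarray(fingerprints, dtype=np.float32)
    return fp / np.linalg.norm(fp, axis=1, keepdims=True)

def search(db, query, top_k=5):
    """Brute-force maximum inner-product search.
    Returns (indices, scores) of the top_k most similar fingerprints."""
    q = query / np.linalg.norm(query)
    scores = db @ q                      # one inner product per db entry
    idx = np.argsort(-scores)[:top_k]    # highest scores first
    return idx, scores[idx]
```

This brute-force version only illustrates the scoring; at the scale of `Dataset-full`, an approximate search index would replace the exhaustive `db @ q`.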