Thaat and Raga Forest (TRF) Dataset
- Submitted by: Surya Majumder
- Last updated: Thu, 03/21/2024 - 16:20
- DOI: 10.21227/0xtb-fn23
Abstract
The "Thaat and Raga Forest (TRF) Dataset" represents a significant advancement in computational musicology, focusing specifically on Indian Classical Music (ICM). While Western music has seen substantial attention in this field, ICM remains relatively underexplored. This manuscript presents the utilization of Deep Learning models to analyze ICM, with a primary focus on identifying Thaats and Ragas within musical compositions. Thaats and Ragas identification holds pivotal importance for various applications, including sentiment-based recommendation systems and music categorization.
The dataset comprises recordings by renowned musicians spanning diverse musical forms, including solos, performances, choruses, instrumentals, and movie songs, available in both mp3 and wav formats. The recordings are organized into ten primary Thaats, each containing a diverse array of Ragas. This comprehensive collection offers a valuable resource for researchers and practitioners alike, facilitating in-depth exploration and analysis of Indian Classical Music through computational means.
Instructions:
The following steps outline how to utilize the Thaat and Raga Forest (TRF) Dataset:
1. Dataset Access and Preparation:
- Obtain access to the TRF Dataset, ensuring compliance with any licensing or usage terms associated with the dataset.
- Download the dataset files provided in mp3 and wav formats, ensuring they are stored in a structured directory hierarchy for easy access and organization.
2. Data Exploration:
- Familiarize yourself with the dataset structure, including the directory organization and file naming conventions (a short directory-scanning sketch follows this list).
- Explore the dataset metadata, if available, to understand the attributes and annotations associated with each recording, such as Thaat, Raga, artist information, and genre tags.
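As a starting point for exploration, the sketch below walks the extracted dataset directory and counts recordings per top-level folder. The root folder name `TRF_Dataset` and the assumption of one subfolder per Thaat are placeholders; adjust them to match the actual download layout.

```python
from collections import Counter
from pathlib import Path

# Hypothetical extraction root; change to wherever the dataset was unpacked.
DATASET_ROOT = Path("TRF_Dataset")

# Count mp3/wav recordings under each top-level folder
# (assumed here to be one folder per Thaat).
counts = Counter()
for audio_path in DATASET_ROOT.rglob("*"):
    if audio_path.suffix.lower() in {".mp3", ".wav"}:
        thaat = audio_path.relative_to(DATASET_ROOT).parts[0]
        counts[thaat] += 1

for thaat, n in sorted(counts.items()):
    print(f"{thaat}: {n} recordings")
```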
3. Data Preprocessing:
- Preprocess the audio data as needed for your specific analysis tasks (see the feature-extraction sketch after this list), which may include:
- Converting audio files to a consistent format (e.g., wav) for compatibility.
- Resampling audio files to a consistent sample rate if necessary.
- Segmenting audio recordings into smaller chunks or frames for model input.
- Extracting relevant features from audio signals, such as spectrograms, Mel-frequency cepstral coefficients (MFCCs), or other time-frequency representations.
- Normalizing audio amplitude levels to ensure consistency across recordings.
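The following is a minimal preprocessing sketch, not the dataset authors' pipeline: it uses librosa to load an mp3 or wav file, resample it to a common rate, peak-normalize the waveform, and compute standardized MFCCs. The function name, sample rate, and MFCC settings are illustrative choices.

```python
import librosa
import numpy as np

def extract_mfcc(path, sr=22050, n_mfcc=20, duration=30.0):
    """Load a recording, resample to a common rate, and return standardized MFCCs."""
    # librosa converts to mono and resamples to `sr` on load.
    y, _ = librosa.load(path, sr=sr, mono=True, duration=duration)
    # Peak-normalize so amplitude levels are comparable across recordings.
    y = y / (np.max(np.abs(y)) + 1e-9)
    # MFCCs; mel spectrograms or other time-frequency features work similarly.
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    # Standardize each coefficient over time.
    mfcc = (mfcc - mfcc.mean(axis=1, keepdims=True)) / (mfcc.std(axis=1, keepdims=True) + 1e-9)
    return mfcc  # shape: (n_mfcc, n_frames)
```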
4. Model Development:
- Design and implement Deep Learning models tailored to your research objectives, such as Thaat and Raga classification, sentiment analysis, or recommendation systems.
- Experiment with various neural network architectures, including Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), or Transformer models, depending on the nature of the audio analysis task (a small CNN sketch is given below).
- Incorporate appropriate loss functions and evaluation metrics for assessing model performance, considering factors such as accuracy, precision, recall, and F1-score.
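As one possible starting architecture, the sketch below builds a small Keras CNN over fixed-size MFCC patches for ten-way Thaat classification. The input shape is a placeholder and the layer sizes are arbitrary defaults, not a recommendation from the dataset authors.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_THAATS = 10              # the dataset is organized into ten primary Thaats
INPUT_SHAPE = (20, 256, 1)   # illustrative: 20 MFCCs x 256 frames, one channel

def build_cnn(input_shape=INPUT_SHAPE, num_classes=NUM_THAATS):
    """A small CNN classifier over MFCC patches; one of many possible architectures."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(2),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(2),
        layers.GlobalAveragePooling2D(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.3),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

An RNN or Transformer over frame sequences would slot into the same training loop; only the model-building function changes.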
5. Training and Evaluation:
- Split the dataset into training, validation, and test sets to train and evaluate your models effectively (see the split-and-train sketch after this list).
- Train the Deep Learning models using the training data while monitoring performance on the validation set to prevent overfitting.
- Fine-tune model hyperparameters and architecture based on validation performance, employing techniques like cross-validation if applicable.
- Evaluate the trained models on the test set to assess their generalization performance and robustness.
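A minimal split-and-train sketch, assuming X is an array of fixed-size feature patches and y holds integer Thaat labels produced by your own preprocessing (both hypothetical here); it holds out stratified validation and test sets and uses early stopping to curb overfitting.

```python
import tensorflow as tf
from sklearn.model_selection import train_test_split

def train_and_evaluate(model, X, y, test_size=0.15, val_size=0.15, epochs=50):
    """Hold out test/validation splits, train with early stopping, report test accuracy."""
    X_trainval, X_test, y_trainval, y_test = train_test_split(
        X, y, test_size=test_size, stratify=y, random_state=42)
    X_train, X_val, y_train, y_val = train_test_split(
        X_trainval, y_trainval, test_size=val_size / (1 - test_size),
        stratify=y_trainval, random_state=42)

    early_stop = tf.keras.callbacks.EarlyStopping(
        monitor="val_loss", patience=5, restore_best_weights=True)
    model.fit(X_train, y_train, validation_data=(X_val, y_val),
              epochs=epochs, batch_size=32, callbacks=[early_stop])

    # The untouched test set gauges generalization performance.
    test_loss, test_acc = model.evaluate(X_test, y_test, verbose=0)
    print(f"Test accuracy: {test_acc:.3f}")
    return test_acc
```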
6. Analysis and Interpretation:
- Analyze the model predictions on the test set to gain insights into the effectiveness of the Deep Learning approach for Thaat and Raga identification (a short evaluation sketch follows this list).
- Interpret model outputs and errors to understand the challenges and limitations of computational music analysis in the context of Indian Classical Music.
- Explore potential applications of the trained models, such as sentiment-based music recommendation, automated playlist generation, or music similarity detection.
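To inspect where the classifier succeeds and fails, a short evaluation sketch: it assumes y_test holds true integer labels, probs holds the model's softmax outputs, and thaat_names lists the ten class names (all placeholders for your own variables).

```python
import numpy as np
from sklearn.metrics import classification_report, confusion_matrix

def summarize_predictions(y_test, probs, thaat_names):
    """Per-class precision/recall/F1 plus a confusion matrix of Thaat predictions."""
    y_pred = np.argmax(probs, axis=1)
    print(classification_report(y_test, y_pred, target_names=thaat_names))
    print(confusion_matrix(y_test, y_pred))
```

Large off-diagonal entries in the confusion matrix highlight Thaat pairs the model tends to mix up, which feeds directly into the error-interpretation step above.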
7. Documentation and Reporting:
- Document the experimental setup, including dataset preprocessing steps, model architectures, training configurations, and evaluation results.
- Report key findings, including model performance metrics, insights gained from the analysis, and potential areas for future research.
- Share the findings and codebase with the research community through publications, presentations, or open-source repositories to foster collaboration and knowledge dissemination.