Ph.D
- Submitted by: Farkhand Shakeel
- Last updated: Mon, 03/18/2024 - 02:13
- DOI: 10.21227/1r2d-t222
Abstract
Voice recognition plays an essential role in human-computer interaction and various technological applications. However, identifying individual speakers remains a significant challenge, especially in diverse and acoustically challenging environments. This paper presents the Enhanced Multi-Layer Convolutional Neural Network (EML-CNN), a novel approach to improving automated speaker recognition from audio speech. The EML-CNN architecture features multiple convolutional layers and a dense block, finely tuned to extract unique voice signatures from English speech samples. The proposed model is trained on an expanded dataset of twelve distinct speakers, which significantly broadens its capacity to identify diverse speech patterns.
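As a rough illustration of the kind of architecture described above, the sketch below stacks several convolutional layers followed by a dense block for a twelve-way speaker classification task. This is a minimal sketch assuming Keras and fixed-length MFCC inputs; the layer counts, filter sizes, and input shape are illustrative assumptions, not the published EML-CNN configuration.

```python
# Minimal sketch of a multi-layer CNN with a dense block for 12-speaker
# classification. Assumes fixed-length MFCC features of shape (frames, coeffs);
# all layer sizes are illustrative, not the published EML-CNN settings.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_SPEAKERS = 12          # twelve distinct speakers, as stated in the abstract
INPUT_SHAPE = (200, 40)    # assumed: 200 MFCC frames x 40 coefficients

def build_speaker_cnn(input_shape=INPUT_SHAPE, num_speakers=NUM_SPEAKERS):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        # Stacked convolutional layers extract local spectral patterns.
        layers.Conv1D(64, kernel_size=5, activation="relu", padding="same"),
        layers.MaxPooling1D(2),
        layers.Conv1D(128, kernel_size=5, activation="relu", padding="same"),
        layers.MaxPooling1D(2),
        layers.Conv1D(256, kernel_size=3, activation="relu", padding="same"),
        layers.GlobalAveragePooling1D(),
        # Dense block maps the pooled features to speaker identities.
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.3),
        layers.Dense(num_speakers, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_speaker_cnn()
model.summary()
```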
Advanced audio augmentation techniques were employed to increase the dataset's variability, including adding white Gaussian noise, signal manipulation (shifting and stretching), frequency modulation, tempo adjustment, and pitch variation. These methods significantly increased the dataset's diversity, enhancing the EML-CNN's robustness. Hyperband tuning and extensive parameter optimization were applied to improve the model's performance and prevent overfitting.
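The augmentation steps listed above (additive white Gaussian noise, time shifting, time stretching, and pitch variation) can be approximated with common signal-processing utilities. The sketch below is an assumed implementation using NumPy and librosa; the noise level, shift amount, and stretch/pitch ranges are illustrative parameters, not values taken from this dataset description.

```python
# Illustrative audio augmentations: white Gaussian noise, time shift,
# time stretch, and pitch shift. All parameter ranges are assumptions.
import numpy as np
import librosa

def add_white_noise(y, snr_db=20.0):
    """Add white Gaussian noise at an approximate signal-to-noise ratio."""
    signal_power = np.mean(y ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10))
    noise = np.random.normal(0.0, np.sqrt(noise_power), size=y.shape)
    return y + noise

def time_shift(y, sr, max_shift_s=0.2):
    """Circularly shift the waveform by up to max_shift_s seconds."""
    shift = np.random.randint(-int(max_shift_s * sr), int(max_shift_s * sr))
    return np.roll(y, shift)

def augment(y, sr):
    """Apply one randomized pass of the augmentations described above."""
    y = add_white_noise(y, snr_db=np.random.uniform(15, 30))
    y = time_shift(y, sr)
    y = librosa.effects.time_stretch(y, rate=np.random.uniform(0.9, 1.1))
    y = librosa.effects.pitch_shift(y, sr=sr, n_steps=np.random.uniform(-2, 2))
    return y

# Example usage on a single recording (file name is hypothetical):
# y, sr = librosa.load("speaker01_utt01.wav", sr=16000)
# y_aug = augment(y, sr)
```

For the Hyperband tuning mentioned above, a library such as keras_tuner with its Hyperband tuner could be used; this is an assumption about tooling, not something stated on this page.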
Our evaluation results demonstrate that the EML-CNN achieves an outstanding accuracy of 99.5% on a specialized speech dataset, maintaining high performance with 96.2% accuracy on the challenging THUYG-20+ benchmark. These findings highlight the EML-CNN's superior performance over traditional audio classification and machine learning methods, marking a significant advancement in automated speaker recognition technology.
Documentation
| Attachment | Size |
|---|---|
| SPKINFO.txt | 382 bytes |