The processed DeepShip and ShipsEar datasets.

Citation Author(s):
Zhenyu Zhang
Submitted by:
moya ye
Last updated:
Thu, 09/19/2024 - 05:40
DOI:
10.21227/2ywb-mv15
License:

Abstract 

Underwater acoustic target classification (UATC) aims to identify the type of unknown acoustic sources using passive sonar in oceanic remote sensing scenarios. However, the variability of the underwater acoustic environment and the presence of complex background noise create significant obstacles to improving the accuracy of UATC. To address these challenges, we develop a deep neural network (DNN) algorithm that integrates a multi-scale feature extractor with an efficient channel attention mechanism. First, auditory fusion features, including MFCCs and GFCCs along with their differential values, are concatenated to represent the amplitude and phase structure of underwater acoustic signals in the time-frequency (TF) domain. Second, multi-scale convolution combined with an efficient channel attention (ECA) mechanism is introduced to learn and select crucial information from the auditory fusion features. The proposed algorithm efficiently manages and refines the importance of coarse-to-fine representations of acoustic signals, thereby improving adaptability and reliability across various UATC tasks. Experimental results on the provided datasets demonstrate that the proposed algorithm significantly outperforms state-of-the-art methods in classification accuracy.
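
For concreteness, here is a minimal sketch of the auditory fusion front end described above, assuming librosa for the MFCCs and the spafe package's gfcc function for the GFCCs; the cepstral order and framing parameters are illustrative, not necessarily those used in this project.

import numpy as np
import librosa
from spafe.features.gfcc import gfcc  # assumed gammatone cepstral front end

def fusion_features(y, sr=16000, n_ceps=13):
    # MFCCs: shape (n_ceps, n_frames)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_ceps)
    # spafe returns GFCCs as (n_frames, n_ceps); transpose to match
    gf = np.asarray(gfcc(y, fs=sr, num_ceps=n_ceps)).T
    # frame counts may differ slightly between front ends; truncate to match
    n = min(mfcc.shape[1], gf.shape[1])
    mfcc, gf = mfcc[:, :n], gf[:, :n]
    # first-order differentials add the temporal (dynamic) structure
    feats = [mfcc, librosa.feature.delta(mfcc), gf, librosa.feature.delta(gf)]
    return np.concatenate(feats, axis=0)  # (4 * n_ceps, n_frames)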

Instructions: 

Underwater acoustic target classification (UATC) aims to identify the type of unknown acoustic sources using passive sonar in oceanic remote sensing scenarios. In this project, we provide DNN code that integrates multi-scale convolution with an efficient channel attention mechanism. First, auditory fusion features, including MFCCs and GFCCs along with their differential values, are concatenated to represent the amplitude and phase structure of underwater acoustic signals in the time-frequency (TF) domain. Second, multi-scale convolution combined with an efficient channel attention (ECA) mechanism is introduced to automatically select crucial features from the fused data. Our experiments are performed on the ShipsEar and DeepShip datasets. To facilitate reproduction of the UATC results, we provide a trained DNN model in this project; simply run "test_result.py" to use it. If you wish to retrain the model: for the ShipsEar dataset, run Mydemo_1s.py and set num = 1 in Data_processing.py; for the DeepShip dataset, run Mydemo_3s.py and set num = 3 in Data_processing.py.
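
The ECA block referenced above follows the published ECA-Net design (global average pooling followed by a 1-D convolution across channels, with no dimensionality reduction). The following PyTorch sketch is for orientation only and is not necessarily identical to the code shipped with this project.

import math
import torch
import torch.nn as nn

class ECA(nn.Module):
    # Efficient Channel Attention: channel weights from a 1-D convolution
    # over globally pooled descriptors, with no dimensionality reduction.
    def __init__(self, channels, gamma=2, b=1):
        super().__init__()
        # adaptive odd kernel size derived from the channel count
        t = int(abs((math.log2(channels) + b) / gamma))
        k = t if t % 2 else t + 1
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)

    def forward(self, x):                               # x: (N, C, H, W)
        w = self.pool(x)                                # (N, C, 1, 1)
        w = self.conv(w.squeeze(-1).transpose(-1, -2))  # (N, 1, C)
        w = torch.sigmoid(w.transpose(-1, -2).unsqueeze(-1))
        return x * w                                    # reweighted channels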

Here is an introduction to the provided datasets:

ShipsEar. It contains 5 categories of underwater acoustic signals recorded at a sampling rate of 16000 Hz. The total duration of the acoustic files is 3.13 hours, with a size of 1.61 GB. The recordings are divided into 11270 segments with a unified length of 16000 points, corresponding to 1 second each.

DeepShip. It contains 4 categories of underwater acoustic signals recorded at a sampling rate of 22050 Hz. The total duration of the acoustic files is 47 hours, with a size of 20.2 GB. The recordings are divided into 55450 segments with a unified length of 66150 points, corresponding to 3 seconds each.
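
Both corpora are segmented in the same way: recordings are cut into non-overlapping fixed-length windows. Here is a minimal sketch of this segmentation, assuming the soundfile package for audio I/O; the file name is illustrative.

import numpy as np
import soundfile as sf

def segment_recording(path, seg_len):
    # Split one recording into non-overlapping fixed-length segments,
    # discarding the final partial segment.
    y, sr = sf.read(path)
    if y.ndim > 1:                 # mix down multi-channel recordings
        y = y.mean(axis=1)
    n_seg = len(y) // seg_len
    return y[: n_seg * seg_len].reshape(n_seg, seg_len), sr

# ShipsEar: seg_len = 16000 points (1 s at 16000 Hz)
# DeepShip: seg_len = 66150 points (3 s at 22050 Hz)
segments, sr = segment_recording("example.wav", seg_len=16000)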

According to the confidentiality agreement, we are unable to provide the raw datasets. You can email the original authors of the datasets to inquire about access. The DeepShip dataset is sourced from: https://www.sciencedirect.com/science/article/abs/pii/S0957417421007016 and the ShipsEar dataset is sourced from: https://www.sciencedirect.com/science/article/abs/pii/S0003682X16301566.
