CNN-Based Image Reconstruction Method for Ultrafast Ultrasound Imaging: Data

Citation Author(s):
Dimitris
Perdios
École polytechnique fédérale de Lausanne (EPFL)
Manuel
Vonlanthen
École polytechnique fédérale de Lausanne (EPFL)
Florian
Martinez
École polytechnique fédérale de Lausanne (EPFL)
Marcel
Arditi
École polytechnique fédérale de Lausanne (EPFL)
Jean-Philippe
Thiran
École polytechnique fédérale de Lausanne (EPFL)
Submitted by:
Dimitris Perdios
Last updated:
Fri, 04/01/2022 - 15:27
DOI:
10.21227/vn0e-cw64

Abstract 

This repository contains the data related to the paper “CNN-Based Image Reconstruction Method for Ultrafast Ultrasound Imaging” (10.1109/TUFFC.2021.3131383). It contains multiple datasets used for training and testing, as well as the trained models and results (predictions and metrics). In particular, it contains a large-scale simulated training dataset composed of 31,000 images for the three imaging configurations considered (i.e., low quality, high quality, and ultrahigh quality). It also contains images of a dedicated numerical test phantom (300 realizations) associated with ultrasound-specific image metrics, 300 additional (test) samples simulated identically to the training dataset, an in vivo test dataset acquired on the carotid of a volunteer (60 frames, longitudinal view), and an in vitro test dataset acquired on a CIRS model 054GS phantom.

The accepted version of this paper is also available on arXiv: arXiv:2008.12750.

The corresponding code is available online at https://github.com/dperdios/dui-ultrafast.

Paper Abstract

Ultrafast ultrasound (US) revolutionized biomedical imaging with its capability of acquiring full-view frames at over 1 kHz, unlocking breakthrough modalities such as shear-wave elastography and functional US neuroimaging. Yet, it suffers from strong diffraction artifacts, mainly caused by grating lobes, side lobes, or edge waves. Multiple acquisitions are typically required to obtain a sufficient image quality, at the cost of a reduced frame rate. To answer the increasing demand for high-quality imaging from single unfocused acquisitions, we propose a two-step convolutional neural network (CNN)-based image reconstruction method, compatible with real-time imaging. A low-quality estimate is obtained by means of a backprojection-based operation, akin to conventional delay-and-sum beamforming, from which a high-quality image is restored using a residual CNN with multiscale and multichannel filtering properties, trained specifically to remove the diffraction artifacts inherent to ultrafast US imaging. To account for both the high dynamic range and the oscillating properties of radio frequency US images, we introduce the mean signed logarithmic absolute error (MSLAE) as a training loss function. Experiments were conducted with a linear transducer array, in single plane-wave (PW) imaging. Trainings were performed on a simulated dataset, crafted to contain a wide diversity of structures and echogenicities. Extensive numerical evaluations demonstrate that the proposed approach can reconstruct images from single PWs with a quality similar to that of gold-standard synthetic aperture imaging, on a dynamic range in excess of 60 dB. In vitro and in vivo experiments show that trainings carried out on simulated data perform well in experimental settings.
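The MSLAE training loss mentioned above is designed for signed, high-dynamic-range RF image data. As an illustrative sketch only (the exact transform and constants are those of the authors' implementation, available in the linked code repository; the clipping floor `eps` below is an assumption), a signed logarithmic compression followed by a mean absolute difference could look like:

```python
import numpy as np

def signed_log(x, eps=1e-6):
    # Signed logarithmic compression: preserves the sign of the
    # oscillating RF samples while compressing the dynamic range.
    # The floor `eps` (an assumed constant) avoids log(0) and sets
    # the level below which amplitudes are treated as zero.
    return np.sign(x) * np.log10(np.maximum(np.abs(x), eps) / eps)

def mslae(pred, target, eps=1e-6):
    # Mean signed logarithmic absolute error: mean absolute
    # difference between the signed-log-compressed images.
    return np.mean(np.abs(signed_log(pred, eps) - signed_log(target, eps)))
```

In this form, equal positive and negative amplitudes contribute symmetrically, and errors are weighted on a logarithmic (dB-like) scale rather than a linear one, matching the stated goal of handling a dynamic range in excess of 60 dB.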

License

Data released under the terms of the Creative Commons Attribution 4.0 International (CC BY 4.0).

If you are using this data and/or code, please cite the corresponding paper.

Acknowledgments

This work was supported in part by the Swiss National Science Foundation under Grant 205320_175974 and Grant 206021_170758.

Instructions

The detailed description of the data available in this repository can be found online at https://github.com/dperdios/dui-ultrafast/#data.

Funding Agency: 
Swiss National Science Foundation

