Interventional applications of photoacoustic imaging typically require visualization of point-like targets, such as the small, circular, cross-sectional tips of needles, catheters, or brachytherapy seeds. When these point-like targets are imaged in the presence of highly echogenic structures, the resulting photoacoustic wave creates a reflection artifact that may appear as a true signal. We propose to use deep learning techniques to identify these types of noise artifacts for removal from experimental photoacoustic data.

Instructions: 

Our paper demonstrates the benefits of using deep convolutional neural networks as an alternative to traditional model-based photoacoustic beamforming and data reconstruction techniques [1]. To foster reproducibility and future comparisons, our trained network, a few of our experimental datasets, and instructions for use are freely available on IEEE DataPort [2]. Our code contains three main components: (1) the simulation component, (2) the neural network component, and (3) the analysis component. All of the code for these three components can be found in the GitHub repository associated with our paper [3].

 

  1. The simulation component relies on the MATLAB toolbox k-Wave to simulate photoacoustic sources for specified medium sound speeds, 2D locations, and transducer parameters. Once the sources are simulated, they are shifted and superimposed to generate a database of images representing photoacoustic channel data containing sources and artifacts (see the first sketch after this list). Along with the channel data images, annotation files are generated that indicate the location and class (i.e., source or artifact) of the objects in each image. This database can then be used to train a network.

 

  2. The neural network component is written in the Caffe framework. The repository contains all files necessary to run the neural network processing; for instructions on how to set up your machine to run this program, please see the original Faster R-CNN code repository [4]. To train a network, a dataset of simulated channel data images, along with the annotation files containing the locations of all objects in the images, must be input to the network. The network then processes the training data and outputs a trained network. At test time, channel data images are input to the trained network, which processes them and outputs a list of detections with confidence scores and bounding boxes (see the second sketch after this list). The IEEE DataPort repository for this project [2] contains a pre-trained network, the dataset used to generate it, and the waterbath and phantom experimental channel data used in our paper.

 

  3. Analysis code is also available in the GitHub repository [3]. The analysis code is intended for testing performance on simulated data: it evaluates the detections made by the network and outputs classification, misclassification, and missed detection rates, as well as error (see the third sketch after this list). All experimental data were analyzed by hand.
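
As a rough illustration of the shift-and-superimpose step in item 1, here is a minimal NumPy sketch. It is an assumption-laden stand-in: the array shapes, shift values, and function name are invented, and a real pipeline shifts sources to new 2D locations (which changes the wavefront shape), whereas the uniform time shift below only mimics a change in depth.

    import numpy as np

    def shift_and_superimpose(base, shifts_in_samples):
        # base: simulated channel data for a single point source,
        # shape (time_samples, n_elements)
        out = np.zeros_like(base)
        for s in shifts_in_samples:
            # whole-sample time shift; np.roll wraps at the edges,
            # which a real implementation would avoid
            out += np.roll(base, s, axis=0)
        return out

    base = np.random.randn(2048, 128)  # stand-in for a k-Wave simulated source
    channel_data = shift_and_superimpose(base, [0, 150])  # source plus a delayed artifact copy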
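
At test time, the detection step in item 2 might look like the following. This is a hypothetical sketch built on the py-faster-rcnn API [4]; the file names, class index, and thresholds are placeholders rather than the exact ones shipped with [2].

    import caffe
    import cv2
    import numpy as np
    from fast_rcnn.config import cfg
    from fast_rcnn.test import im_detect
    from fast_rcnn.nms_wrapper import nms

    cfg.TEST.HAS_RPN = True                  # proposals come from the RPN
    caffe.set_mode_gpu()
    net = caffe.Net('test.prototxt', 'trained.caffemodel', caffe.TEST)

    im = cv2.imread('channel_data.png')      # channel data rendered as an image
    scores, boxes = im_detect(net, im)       # scores: (R, K), boxes: (R, 4K)

    cls = 1                                  # assume class index 1 = "source"
    dets = np.hstack((boxes[:, 4 * cls:4 * (cls + 1)],
                      scores[:, cls:cls + 1])).astype(np.float32)
    keep = nms(dets, 0.3)                    # suppress overlapping boxes
    dets = dets[keep]
    print(dets[dets[:, -1] >= 0.8])          # rows: [x1, y1, x2, y2, confidence]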
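
The analysis in item 3 amounts to matching detections against ground-truth annotations and tallying the outcomes. A minimal sketch of that bookkeeping follows, assuming hypothetical (x, y, label) tuples and matching radius; the actual analysis code lives in [3].

    import numpy as np

    def evaluate(detections, truths, max_dist=10.0):
        # detections/truths: lists of (x, y, label); a detection matches the
        # nearest unmatched ground-truth object within max_dist pixels
        matched, errors = set(), []
        correct = misclassified = 0
        for x, y, label in detections:
            dists = [np.hypot(x - tx, y - ty) if i not in matched else np.inf
                     for i, (tx, ty, _) in enumerate(truths)]
            if dists and min(dists) <= max_dist:
                i = int(np.argmin(dists))
                matched.add(i)
                errors.append(dists[i])
                correct += int(label == truths[i][2])
                misclassified += int(label != truths[i][2])
        missed = len(truths) - len(matched)
        return correct, misclassified, missed, errors

    # evaluate([(52, 100, 'source')], [(50, 100, 'source'), (80, 40, 'artifact')])
    # returns (1, 0, 1, [2.0])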

 

More detailed instructions can be found in the respective repositories [2], [3].

 

[1] D. Allman, A. Reiter, and M. A. L. Bell, "Photoacoustic source detection and reflection artifact removal enabled by deep learning," IEEE Transactions on Medical Imaging, vol. 37, no. 6, pp. 1464-1477, 2018.

[2] https://ieee-dataport.org/open-access/photoacoustic-source-detection-and-reflection-artifact-deep-learning-dataset

[3] https://github.com/derekallman/Photoacoustic-FasterRCNN

[4] https://github.com/rbgirshick/py-faster-rcnn

 

If you use the data/code for research, please cite [1]. For commercial use, please contact mledijubell@jhu.edu to discuss the related IP.

 
