This dataset accompanies the paper "Segmentation of Cervical Cell Images Based on Generative Adversarial Networks." The dataset is used to train and test Cell-GAN, a generative adversarial network. After training, Cell-GAN can generate a complete single-cell image whose contour is similar to that of the cell to be segmented.


For the development and evaluation of organ localization methods, we built a set of organ bounding-box annotations based on the MICCAI Liver Tumor Segmentation (LiTS) challenge dataset. Bounding boxes of 11 body organs are included: heart (53/28), left lung (52/21), right lung (52/21), liver (131/70), spleen (131/70), pancreas (131/70), left kidney (129/70), right kidney (131/69), bladder (109/67), left femoral head (109/66) and right femoral head (105/66). The numbers in parentheses indicate how many of each organ are annotated in the training and testing sets, respectively.


We make our dataset publicly available. It consists of 50 H&E-stained histopathology images annotated at the nuclei level. This dataset is ideal for those who want exhaustive nuclei annotations of H&E images from breast cancer patients in a Triple Negative Breast Cancer cohort.


This dataset provides images of typical diabetic retinopathy lesions and of normal retinal structures, annotated at the pixel level, with a focus on an Indian population. For each image, it also provides the severity grade of diabetic retinopathy and of diabetic macular edema.

Instructions: 
The dataset is divided into three parts:
A. Segmentation: It consists of
1. Original color fundus images (81 images divided into train and test set - JPG Files)
2. Groundtruth images for the Lesions (Microaneurysms, Haemorrhages, Hard Exudates and Soft Exudates divided into train and test set - TIF Files) and Optic Disc (divided into train and test set - TIF Files)
B. Disease Grading: It consists of
1. Original color fundus images (516 images divided into train set (413 images) and test set (103 images) - JPG Files)
2. Groundtruth Labels for Diabetic Retinopathy and Diabetic Macular Edema Severity Grade (Divided into train and test set - CSV File)
C. Localization: It consists of
1. Original color fundus images (516 images divided into train set (413 images) and test set (103 images) - JPG Files)
2. Groundtruth Labels for Optic Disc Center Location (Divided into train and test set - CSV File)
3. Groundtruth Labels for Fovea Center Location (Divided into train and test set - CSV File)
 
For more information visit idrid.grand-challenge.org
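
For orientation, here is a minimal Python sketch for reading the part-B grading labels and pairing them with the fundus images (the file layout and CSV column names are assumptions; check the headers in your download):

    # Sketch: pair each fundus image with its DR and DME severity grades.
    import csv
    from pathlib import Path

    def load_grades(csv_path, image_dir):
        records = []
        with open(csv_path, newline="") as f:
            for row in csv.DictReader(f):
                name = row["Image name"]  # assumed column header
                records.append({
                    "image": Path(image_dir) / (name + ".jpg"),
                    "dr_grade": int(row["Retinopathy grade"]),       # assumed
                    "dme_grade": int(row["Risk of macular edema"]),  # assumed
                })
        return records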

Interventional applications of photoacoustic imaging typically require visualization of point-like targets, such as the small, circular, cross-sectional tips of needles, catheters, or brachytherapy seeds. When these point-like targets are imaged in the presence of highly echogenic structures, the resulting photoacoustic wave creates a reflection artifact that may appear as a true signal. We propose to use deep learning techniques to identify this type of noise artifact for removal in experimental photoacoustic data.

Instructions: 

Our paper demonstrates the benefits of using deep convolutional neural networks as an alternative to traditional model-based photoacoustic beamforming and data reconstruction techniques [1]. To foster reproducibility and future comparisons, our code, a pre-trained network, a few of our experimental datasets, and instructions for use are freely available on IEEE DataPort [2]. Our code contains three main components: (1) the simulation component, (2) the neural network component, and (3) the analysis component. All of the code for these three components can be found in the GitHub repository associated with our paper [3].

  1. The simulation component relies on the MATLAB toolbox k-Wave to simulate sources for specified medium sound speeds, 2D locations, and transducer parameters. Once the photoacoustic sources are simulated, the sources are shifted and superimposed to generate a database of images representing photoacoustic channel data containing sources and artifacts. Along with the channel data images, annotation files are generated which indicate the location and class (i.e., source or artifact) of the objects in the image. This database can then be used to train a network.
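
As a rough illustration of the shift-and-superimpose step, here is a minimal NumPy sketch (the actual pipeline uses MATLAB and k-Wave; the array shapes, names, and the circular shift below are our assumptions):

    # Sketch: build channel data by shifting and superimposing one simulated
    # source response. Stand-in data only; k-Wave output is used in practice.
    import numpy as np

    def place_source(canvas, response, axial_px, lateral_px):
        """Add a copy of `response` shifted to a new depth/lateral position.

        canvas, response -- (time samples, channels) arrays
        axial_px         -- shift in time samples (depth)
        lateral_px       -- shift across channels (lateral position)
        """
        shifted = np.roll(response, shift=axial_px, axis=0)  # note: wraps around
        shifted = np.roll(shifted, shift=lateral_px, axis=1)
        return canvas + shifted

    rng = np.random.default_rng(0)
    response = rng.standard_normal((1024, 128)) * 0.01  # stand-in for k-Wave output
    data = np.zeros((1024, 128))
    data = place_source(data, response, axial_px=0, lateral_px=10)    # true source
    data = place_source(data, response, axial_px=200, lateral_px=10)  # reflection artifact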

  2. The neural network component is written in the Caffe framework. The repository contains all necessary files to run the neural network processing. For instructions on how to set up your machine to run this program, please see the original Faster-RCNN code repository [4]. To train a network, a dataset of simulated channel data images, along with the annotation files containing the locations of all objects in the images, must be input to the network. The network then processes the training data and outputs the trained network. When testing, channel data is input to the network, which then processes the images and outputs a list of detections with confidence scores and bounding boxes. The IEEE DataPort repository [2] for this project contains a pre-trained network, along with the dataset used to generate it, as well as the waterbath and phantom experimental channel data used in our paper.
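
For orientation only, post-processing of the network output might look like the following sketch (the detection record layout is an assumption for illustration; see the repository [3] for the actual format):

    # Sketch: keep only detections whose confidence meets a threshold.
    # The {"label", "score", "box"} record layout is assumed, not official.
    CONFIDENCE_THRESHOLD = 0.5

    def keep_confident(detections, threshold=CONFIDENCE_THRESHOLD):
        """Filter a list of detections by confidence score."""
        return [d for d in detections if d["score"] >= threshold]

    detections = [
        {"label": "source",   "score": 0.97, "box": (40, 210, 55, 225)},
        {"label": "artifact", "score": 0.88, "box": (40, 410, 55, 425)},
        {"label": "artifact", "score": 0.21, "box": (90, 300, 99, 310)},
    ]
    for d in keep_confident(detections):
        print(d["label"], d["score"], d["box"])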

  3. Analysis code is also available on the GitHub repository [3]. The analysis code is intended for testing performance on simulated data: it evaluates the detections made by the network and outputs information on the classification, misclassification, and missed detection rates, as well as the error. All experimental data was analyzed by hand.
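
As a sketch of that kind of evaluation, the snippet below matches detections to ground-truth objects and tallies the rates (the matching rule, same-class center within a pixel tolerance, is our assumption, not the repository's exact criterion):

    # Sketch: score detections against simulated ground truth.
    def box_center(box):
        x1, y1, x2, y2 = box
        return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

    def score_detections(detections, ground_truth, tol=10.0):
        """Tally correct, misclassified, and missed objects."""
        correct = misclassified = 0
        unmatched = list(ground_truth)  # records shaped like the detections
        for det in detections:
            cx, cy = box_center(det["box"])
            for gt in unmatched:
                gx, gy = box_center(gt["box"])
                if abs(cx - gx) <= tol and abs(cy - gy) <= tol:
                    if det["label"] == gt["label"]:
                        correct += 1
                    else:
                        misclassified += 1
                    unmatched.remove(gt)
                    break
        return {"correct": correct, "misclassified": misclassified,
                "missed": len(unmatched)}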

More detailed instructions can be found in their respective repositories.

[1] Allman D, Reiter A, Bell MAL, "Photoacoustic source detection and reflection artifact removal enabled by deep learning," IEEE Transactions on Medical Imaging, 37(6):1464-1477, 2018.

[2] https://ieee-dataport.org/open-access/photoacoustic-source-detection-and-reflection-artifact-deep-learning-dataset

[3] https://github.com/derekallman/Photoacoustic-FasterRCNN

[4] https://github.com/rbgirshick/py-faster-rcnn

If you use the data/code for research, please cite [1]. For commercial use, please contact mledijubell@jhu.edu to discuss the related IP.


These datasets were used to produce the results of the following TMI paper: "3D Quantification of Filopodia in Motile Cancer Cells", Castilla C., et al. (2019). IEEE Transactions on Medical Imaging 38(3):862-872.


The investigation of the performance of different Positron Emission Tomography (PET) reconstruction and motion compensation methods requires an accurate and realistic representation of the anatomy and motion trajectories as observed in real subjects during acquisitions. The generation of well-controlled clinical datasets is difficult due to the many different clinical protocols, scanner specifications, patient sizes and physiological variations. Alternatively, computational phantoms can be used to generate large datasets for different disease states, providing a ground truth.


A database of lip traces
Cheiloscopy is a forensic investigation technique that deals with the identification of humans based on lip traces. Lip prints are unique and permanent for each individual and, alongside fingerprinting, dental identification, and DNA analysis, can serve as one of the bases for criminal/forensic analysis.

Instructions: 

The SUT-Lips-DB database is free for scientific and testing purposes. However, you are asked to cite the dataset and the papers listed on the project home page whenever you publish research conducted with our dataset or compare your results with ours.

The main ZIP archive contains several folders. Each folder contains the lip traces, as JPG files, of a single person. The data are anonymized. The name of each folder encodes the gender of the person. An additional CSV file lists the year of birth of each person from whom samples were collected.
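
A minimal indexing sketch in Python (the exact folder-name encoding of gender and the file extension are assumptions; check your copy of the archive and the CSV header):

    # Sketch: index the extracted SUT-Lips-DB archive, one record per person.
    from pathlib import Path

    def index_database(root):
        people = []
        for folder in sorted(Path(root).iterdir()):
            if not folder.is_dir():
                continue
            people.append({
                "id": folder.name,
                "gender": folder.name[0],  # assumed: leading gender code
                "traces": sorted(folder.glob("*.jpg")),
            })
        return people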


The class of registration methods proposed in the framework of Stokes Large Deformation Diffeomorphic Metric Mapping (Stokes-LDDMM) is a particularly interesting family of physically meaningful diffeomorphic registration methods. Stokes-LDDMM methods are formulated as a constrained variational problem, where the different physical models are imposed as hard constraints through their associated partial differential equations. The most significant limitation of the Stokes-LDDMM framework is its huge computational complexity.
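
In schematic form (our notation, not taken from the associated paper), such a PDE-constrained registration problem can be written as

    \min_{v}\; \frac{1}{2}\int_0^1 \lVert v_t \rVert_V^2 \, dt
      + \frac{1}{2\sigma^2} \bigl\lVert I_0 \circ (\phi_1^v)^{-1} - I_1 \bigr\rVert_{L^2}^2
    \quad \text{subject to} \quad
    -\mu \Delta v_t + \nabla p_t = f_t, \qquad \nabla \cdot v_t = 0,

where \phi^v is the flow of the velocity field v (\partial_t \phi_t^v = v_t \circ \phi_t^v, \phi_0^v = \mathrm{id}) and the Stokes system acts as a hard constraint on v_t at every time t.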

