Standard Dataset
cyclegan-abnormal2normal
- Submitted by:
- Yanjie Zhu
- Last updated:
- Mon, 07/08/2024 - 15:58
- DOI:
- 10.21227/cbv5-bm86
Abstract
Accurate detection and segmentation of brain tumors are critical for medical diagnosis. We propose a novel framework, the Two-Stage Generative Model (TSGM), which combines a Cycle-Consistent Generative Adversarial Network (CycleGAN) with a Variance-Exploding stochastic differential equation using joint probability (VE-JP) to improve brain tumor segmentation. TSGM was trained on the BraTS2020 brain tumor dataset. The CycleGAN is trained on unpaired data to generate abnormal images from healthy images. VE-JP is then applied to reconstruct healthy images using the synthetic paired abnormal images as guidance, altering only the pathological regions while leaving healthy regions unchanged. We validated the proposed TSGM method on three datasets and compared it with other unsupervised anomaly detection and segmentation methods. The results show that our method achieves better segmentation performance and generalizes better.
For the BraTS dataset, we shuffled the training set and split it into two subsets at a 9:1 ratio. Slices with no tumor in the ground-truth mask are defined as healthy slices. We sliced the 3D MR scans into axial slices and kept slices 80 to 128 of the original 155, as tumors rarely appear in the upper and lower regions of the brain. Additionally, we discarded blank or redundant slices with pixel values below 15. In total, the training set consisted of 5,795 healthy slices and 10,473 diseased slices, while the testing set included 508 healthy and 1,282 diseased slices.
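The slice-selection procedure above can be sketched as follows. This is a minimal illustration, not the authors' released code: the function name `select_axial_slices` is hypothetical, and we interpret "pixel values less than 15" as discarding slices whose maximum intensity falls below 15; the axial range (80 to 128) and the healthy/diseased criterion (empty vs. non-empty ground-truth mask) are taken from the text.

```python
import numpy as np

def select_axial_slices(volume, mask, lo=80, hi=128, intensity_floor=15):
    """Split one 3D MR scan into healthy and diseased axial slices.

    volume: (H, W, D) array of image intensities.
    mask:   (H, W, D) ground-truth tumor mask (non-zero where tumor).
    Only slices in [lo, hi) are considered; near-blank slices
    (max intensity below intensity_floor) are discarded.
    """
    healthy, diseased = [], []
    for z in range(lo, min(hi, volume.shape[2])):
        img = volume[:, :, z]
        if img.max() < intensity_floor:
            continue  # blank or redundant slice, per the threshold of 15
        if mask[:, :, z].any():
            diseased.append(img)  # tumor present in ground-truth mask
        else:
            healthy.append(img)   # no tumor identified on this slice
    return healthy, diseased

def shuffle_split(slices, train_frac=0.9, seed=0):
    """Shuffle a list of slices and split it at the given ratio (9:1 here)."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(slices))
    cut = int(train_frac * len(slices))
    return [slices[i] for i in order[:cut]], [slices[i] for i in order[cut:]]
```

A 155-slice BraTS volume would pass through `select_axial_slices` first, and the resulting slice lists would then be divided by `shuffle_split`; the exact random seed and shuffling scheme used by the authors are not specified in the text.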