SynthRAD2023 Grand Challenge

Citation Author(s):
yilong
li
Submitted by:
li yilong
Last updated:
Thu, 03/20/2025 - 09:17
DOI:
10.21227/9grc-xj64

Abstract 

Medical imaging has become increasingly important in the diagnosis and treatment of oncological patients, particularly in radiotherapy (RT). 

Traditionally, X-ray-based imaging is widely adopted in RT for patient positioning and monitoring before, during, or after the dose delivery.

Computed tomography (CT) is considered the primary imaging modality in RT, providing accurate and high-resolution patient geometry and enabling direct electron density conversion needed for dose calculations [Chernak et al., 1975]. Also, cone-beam computed tomography (CBCT) plays a vital role in image-guided adaptive radiation therapy (IGART) for photon and proton therapy. 

However, due to severe scatter noise and truncated projections, CBCT is affected by artifacts, e.g. shading, streaking, and cupping, that make it unsuitable for accurate dose calculations [Ramella et al., 2017]. 

Image synthesis has been proposed to improve the quality of CBCT to the CT level, producing the so-called “synthetic CT” (sCT) [Kida et al., 2018]. The conversion of CBCT-to-CT would allow accurate dose computation, enabling adaptive CBCT-based RT and improving the quality of IGART provided to the patients.

In the last decades, magnetic resonance imaging (MRI) has also proved its added value for delineating tumors and organs-at-risk thanks to its superb soft-tissue contrast. MRI can be acquired to simulate the treatment planning, or to match patient positioning to the planned one and monitor changes before, during, or after the dose delivery [Lagendijk et al., 2004].

To benefit from the complementary advantages offered by different imaging modalities, MRI is generally registered to CT. Such a workflow requires obtaining a CT, increasing workload, and introducing additional radiation to the patient. Recently, MRI-only based RT has been proposed to simplify and speed up the workflow, decreasing patients' exposure to ionizing radiation, which is particularly relevant for repeated simulations or fragile populations like children. MRI-only RT may reduce overall treatment costs and workload, and eliminate residual registration errors when using both imaging modalities. Additionally, the development of MRI-only techniques can be beneficial for MRI-guided RT [Edmund and Nyholm, 2017].

The main obstacle in introducing MRI-only RT is the lack of tissue attenuation information required for accurate dose calculations. Many methods have been proposed to convert MR to CT-equivalent images, obtaining synthetic CT (sCT) for treatment planning and dose calculation. 

In recent years, the derivation of sCT from MRI or CBCT has attracted increasing interest, driven by artificial intelligence algorithms such as machine learning and deep learning. However, no public dataset or challenge had been designed to provide ground truth for this task.

A recent review of deep learning-based sCT generation advocated for public challenges to provide data and evaluation metrics to compare different approaches openly.

 

Instructions: 

 

This study utilizes the SynthRAD2023 challenge dataset. The dataset consists of imaging data from patients receiving radiation therapy to the brain or pelvic region between 2018 and 2022, collected from three Dutch institutions: Radboud University Medical Center, University Medical Center Utrecht, and University Medical Center Groningen. The challenge was divided into two tasks: Task 1 addressed the MR-to-CT image synthesis problem and consisted of MR/CT image pairs, while Task 2 focused on CBCT-to-CT image conversion, comprising CBCT/CT image pairs. The focus of this study is the brain dataset in Task 2, which involves generating sCT images from CBCT. All pre-processing and post-processing were applied to the 3D images, and a total of 180 CBCT-CT brain pairs were available. A random 80/10/10 split was used to create the training, validation, and test sets; each set contains data from all three institutions, identified by center name (A, B, C).

Furthermore, to validate the applicability of the method in real-world scenarios, this study introduces a clinical dataset for testing. The dataset is sourced from (hospital name/institution) and consists of CBCT and CT image pairs from the head and neck region, with a total of 30 pairs. The dataset includes various device types and imaging scenarios, providing a robust basis for assessing the model's generalization capability and stability.
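The random 80/10/10 split described above can be sketched as follows. This is a minimal illustration, not the authors' code: the patient ID naming, the seed, and the `split_dataset` helper are assumptions made for the example.

```python
import random

def split_dataset(pair_ids, train_frac=0.8, val_frac=0.1, seed=42):
    """Randomly partition patient pair IDs into train/val/test sets.

    The seed is a hypothetical choice for reproducibility; the study
    does not specify one.
    """
    ids = list(pair_ids)
    rng = random.Random(seed)
    rng.shuffle(ids)
    n = len(ids)
    n_train = int(n * train_frac)   # 80% for training
    n_val = int(n * val_frac)       # 10% for validation
    # Remainder (10%) becomes the test set.
    return (ids[:n_train],
            ids[n_train:n_train + n_val],
            ids[n_train + n_val:])

# 180 CBCT-CT brain pairs, as in the study (IDs are illustrative)
pairs = [f"brain_{i:03d}" for i in range(180)]
train, val, test = split_dataset(pairs)
print(len(train), len(val), len(test))  # 144 18 18
```

With 180 pairs this yields 144 training, 18 validation, and 18 test pairs. In practice the split would be done at the patient level, as here, so that no patient's images leak across sets.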