Hackathon: Imaging Informatics (CT Imaging)

Submission Dates:
09/17/2021 to 10/03/2021
Citation Author(s):
Bobak
Mortazavi
Submitted by:
Sorush Omidvar
Last updated:
Sun, 10/03/2021 - 18:02
DOI:
10.21227/50q7-xv87
License:
Creative Commons Attribution

Abstract 

Please see the descriptions below.

Instructions: 

Registration

Please register at https://easychair.org/account/signin?l=pStWPAd56eImr92BxqJrMt#

Dataset description:

  1. MosMedData includes 1,110 CT volumes collected from subjects in Moscow from March 1 to April 25, 2020. The dataset contains deidentified human pulmonary CT scans with and without COVID-related radiological findings. At the beginning of the pandemic, CT served as a key tool for diagnosing and monitoring the progression of COVID-19 in Moscow. Clinical experts developed a procedure to grade the severity of COVID-19 based on CT radiological findings. COVID-19 triage (follow-up by phone, admission to hospital, or admission to an intensive care unit) was decided by these severity findings along with other symptoms [1, 2].
  2. 1,110 subjects [age (min, max, median) = (18, 97, 47)] were recruited, of whom 42% were male, 56% female, and 2% other/unknown. They underwent a standard CT protocol on a Canon (Toshiba) Aquilion 64 CT scanner. The in-plane resolution is 0.8 × 0.8 mm², and the interslice distance is also 0.8 mm. However, this study preserved only every 10th slice of the original volume to save storage, so the effective slice increment is 8 mm [2]. The image matrix size is 512 × 512 × (36-41).
  3. Though several public datasets have been used to investigate the application of deep learning to classifying COVID-19 findings in CT slice images [3, 4], few studies focus on patient-wise, AI-based COVID-19 severity grading and categorical classification. MosMedData provides 1,110 CT scans from non-repeating subjects with corresponding 5-category annotations (ground truth). The grading of COVID-19 severity in CT was performed with a visual semi-quantitative scale adopted by the Russian Federation and used in Moscow hospitals. The dataset contains 254 scans without COVID-19 findings. The remaining scans are split into four categories: CT1 (affected lung percentage 25% or below, 684 scans), CT2 (25% to 50%, 125 scans), CT3 (50% to 75%, 45 scans), and CT4 (75% and above, 2 scans) (Figure 1). The final grade was based on an initial reading in the clinic and a second reading by experts from the advisory department of the Center for Diagnostics and Telemedicine (CDT) [2].
  4. Dataset usage and previous studies: We suggest using MosMedData to develop a volume-based deep learning model that identifies COVID-19 scans (binary classification) and evaluates their severity (categorical classification). Such an AI model is of great significance and practical value for triage.
  5. We suggest removing category CT4, as it has only 2 scans. The final DL model will therefore first identify suspect COVID-19 CT scans and then classify them into CT1 (mild), CT2 (moderate), and CT3 (severe). A patch/ROI/slice-based classification model plus a voting system may be sufficient. If memory allows, a model that directly takes the entire volume (matrix size 512 × 512 × 38) as input can also be considered; participants may refer to memory-efficient networks.
  6. A previous study explored using this dataset for binary classification (i.e., COVID vs. non-COVID) and achieved an AUC of 0.93 [5], which suggests that the categorical classification may be challenging but feasible.
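As a rough illustration of the slice-based approach suggested above, the sketch below aggregates per-slice predictions into a single patient-level label by majority vote. It is a minimal sketch using NumPy only; the per-slice probabilities are placeholders for the output of whatever 2D slice classifier participants actually train:

```python
import numpy as np

def vote_patient_label(slice_probs: np.ndarray) -> int:
    """Aggregate per-slice class probabilities into one patient-level label.

    slice_probs: (n_slices, n_classes) array of softmax outputs from a
    slice-level classifier (a hypothetical stand-in here; any 2D CNN would do).
    Returns the class index chosen by majority vote over slice-wise argmaxes.
    """
    slice_labels = slice_probs.argmax(axis=1)                     # hard label per slice
    counts = np.bincount(slice_labels, minlength=slice_probs.shape[1])
    return int(counts.argmax())                                   # most frequent class wins

# Toy example: 5 slices, 4 classes (CT-0..CT-3); three slices favor class 1.
probs = np.array([
    [0.7, 0.1, 0.1, 0.1],
    [0.1, 0.6, 0.2, 0.1],
    [0.2, 0.5, 0.2, 0.1],
    [0.1, 0.6, 0.2, 0.1],
    [0.6, 0.2, 0.1, 0.1],
])
print(vote_patient_label(probs))  # -> 1
```

More elaborate aggregation (e.g., averaging probabilities across slices before the argmax) is a straightforward variation on the same idea.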

Performance Evaluation Criteria

1) The performance of each model will be judged by the number of correct classes (CT-0, CT-1, CT-2, CT-3) matched to the clinical/human-expert classification. The submission with the smallest classification error will be the winner.

2) Results and the code of the multi-class model must be posted on GitHub to be considered in the competition.

3) Each competitor/group should submit, through the website, a 4-page paper summarizing the method and results, following the standard IEEE format for conference paper submissions. The report should provide the link to the GitHub repository and include proper citation/acknowledgment of the data used.
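One minimal way to compute the classification error named in criterion 1), assuming predictions and expert labels are encoded as integer class indices 0-3 (for CT-0 through CT-3), is:

```python
import numpy as np

def classification_error(pred: np.ndarray, truth: np.ndarray) -> float:
    """Fraction of scans whose predicted class (CT-0..CT-3) disagrees with
    the expert class; lower is better."""
    return float(np.mean(pred != truth))

# Toy example: 3 of 5 scans classified correctly -> error of 0.4.
pred = np.array([0, 1, 2, 3, 1])
truth = np.array([0, 1, 1, 3, 0])
print(classification_error(pred, truth))  # -> 0.4
```

Since the classes are imbalanced (684 CT1 scans vs. 45 CT3 scans), participants may also want to track per-class error internally, even though the stated criterion is overall error.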