Vessel Segmentation
Standard Dataset
- Submitted by: Xutao Sun
- Last updated: Tue, 10/22/2024 - 03:15
- DOI: 10.21227/rk3g-h591
Abstract
The DRIVE dataset was introduced by Staal et al. (2004), who exploited the elongated structure of vessel ridges for automatic vessel classification on images from the Utrecht database. It consists of 40 JPEG images of 565 × 584 pixels, captured at a 45° field of view and divided into 20 training and 20 testing images.
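Purely as an illustration, the sketch below shows how the 20/20 train/test split might be iterated once a local copy is extracted. The directory names and file patterns (training/, test/, images/, 1st_manual/) are assumptions about a typical local layout, not part of the dataset description itself; adjust them to match your copy.

```python
# Minimal sketch for iterating over the DRIVE train/test split.
# Assumption: the archive is extracted into ./DRIVE with training/ and test/
# folders, each containing images/ and 1st_manual/ subfolders (hypothetical).
from pathlib import Path
from PIL import Image

DRIVE_ROOT = Path("DRIVE")  # hypothetical local path

def load_split(split: str):
    """Yield (fundus image, manual vessel mask) pairs for 'training' or 'test'."""
    image_dir = DRIVE_ROOT / split / "images"
    mask_dir = DRIVE_ROOT / split / "1st_manual"
    for image_path in sorted(image_dir.iterdir()):
        # DRIVE images are 565 x 584 pixels at a 45-degree field of view.
        image = Image.open(image_path).convert("RGB")
        # Match the mask by the leading image index in the file name.
        mask_path = next(mask_dir.glob(image_path.stem.split("_")[0] + "*"))
        mask = Image.open(mask_path).convert("L")
        yield image, mask

train_pairs = list(load_split("training"))  # 20 images
test_pairs = list(load_split("test"))       # 20 images
print(len(train_pairs), len(test_pairs), train_pairs[0][0].size)
```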
The CHASEDB1 dataset, associated with a segmentation approach based on multiscale Gabor filtering and morphological transformations, comprises 28 retinal images acquired for a cardiovascular health study of 14 students of various ethnicities, at a resolution of 999 × 960 pixels and a 30° field of view. Each image was manually segmented by two independent annotators, with the first annotator's segmentation taken as the reference standard.
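A minimal sketch of pairing each CHASEDB1 image with its two manual segmentations and keeping the first annotator's mask as the reference, as described above. The naming pattern (Image_XXY.jpg with Image_XXY_1stHO.png and Image_XXY_2ndHO.png in a single folder) is an assumption about a typical local copy and may need adjusting.

```python
# Minimal sketch: pair each CHASEDB1 image with both manual segmentations,
# treating the first annotator's mask as the reference standard.
# Assumption: the commonly seen flat layout and _1stHO/_2ndHO suffixes.
from pathlib import Path
from PIL import Image

CHASE_ROOT = Path("CHASEDB1")  # hypothetical local path

samples = []
for image_path in sorted(CHASE_ROOT.glob("Image_*.jpg")):
    first = CHASE_ROOT / f"{image_path.stem}_1stHO.png"   # reference standard
    second = CHASE_ROOT / f"{image_path.stem}_2ndHO.png"  # second annotator
    samples.append({
        "image": Image.open(image_path).convert("RGB"),   # 999 x 960 pixels
        "reference": Image.open(first).convert("L"),
        "second_opinion": Image.open(second).convert("L"),
    })

print(len(samples))  # expected: 28
```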
The FIVES dataset contains 800 retinal images from 573 individuals with a mean age of 48 years; 469 images are from female and 331 from male subjects. The images have a resolution of 2048 × 2048 pixels, and the segmentation annotations, created by three ophthalmologists and 24 medical staff, show high intra- and inter-observer consistency. Labels are balanced across normal eyes, AMD (age-related macular degeneration), DR (diabetic retinopathy), and GC (glaucoma).
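The short sketch below sanity-checks a local FIVES copy against the figures stated above (2048 × 2048 resolution, roughly balanced categories). The per-category subfolder names are purely hypothetical; organise the tally to match however your copy is laid out.

```python
# Minimal sketch: confirm the stated 2048 x 2048 resolution and tally the
# four diagnostic categories. Assumption: images grouped into hypothetical
# subfolders AMD/, DR/, Glaucoma/ and Normal/.
from pathlib import Path
from collections import Counter
from PIL import Image

FIVES_ROOT = Path("FIVES")  # hypothetical local path
counts = Counter()

for category_dir in FIVES_ROOT.iterdir():
    if not category_dir.is_dir():
        continue
    for image_path in category_dir.glob("*.png"):
        with Image.open(image_path) as img:
            assert img.size == (2048, 2048), f"unexpected size for {image_path}"
        counts[category_dir.name] += 1

print(counts)  # expect roughly balanced categories, 800 images in total
```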