Medical Imaging
Fabdepth HMI is designed for hand gesture detection for Human Machine Interaction. It contains a total of 8 gestures performed by 150 different individuals. These individuals range from toddlers to senior citizens, which adds diversity to this dataset. The gestures are available in 3 different formats, namely resized, foreground-background separated, and depth-estimated images. An additional modality is provided in the form of 150 video samples. Researchers may choose their combination of data modalities based on their application.
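As a rough illustration of assembling a chosen combination of modalities, here is a minimal loader sketch. The directory names (`resized`, `fg_bg_separated`, `depth`) and the gesture-per-folder layout are assumptions made for illustration only, not the dataset's documented structure.

```python
from pathlib import Path

# Hypothetical layout: <root>/<modality>/<gesture_name>/.../<frame>.png
# The folder names below are assumptions, not the dataset's documented structure.
MODALITIES = ("resized", "fg_bg_separated", "depth")

def collect_samples(root, modalities=("resized",), extensions=(".png", ".jpg")):
    """Return (image_path, gesture_label, modality) triples for the chosen modalities."""
    root = Path(root)
    samples = []
    for modality in modalities:
        if modality not in MODALITIES:
            raise ValueError(f"Unknown modality: {modality}")
        for image_path in sorted((root / modality).rglob("*")):
            if image_path.suffix.lower() in extensions:
                # Assume the gesture name is the directory level directly under the modality folder.
                gesture = image_path.relative_to(root / modality).parts[0]
                samples.append((image_path, gesture, modality))
    return samples

if __name__ == "__main__":
    # Hypothetical root directory name; adjust to wherever the data is unpacked.
    samples = collect_samples("FabdepthHMI", modalities=("resized", "depth"))
    print(f"Collected {len(samples)} samples")
```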
This is the data collected within the study comparing different DVR techniques in terms of correctness, efficiency, and perceived workload in the context of visceral surgery.
The dataset includes the BrainWeb data, which consists of T1-weighted (T1w), T2-weighted (T2w), and proton density-weighted (PDw) noise-free MR images of a normal brain; two real T1w MR brain datasets (OAS30040 and OAS30072) from the Open Access Series of Imaging Studies (OASIS) database; and a synthetic DW-MRI dataset.
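For readers who want to inspect one of the listed volumes, a minimal loading sketch with nibabel follows. The NIfTI format and the file name `OAS30040_T1w.nii.gz` are assumptions; the dataset's actual file layout and naming may differ.

```python
import nibabel as nib
import numpy as np

# Placeholder file name; replace with the actual path in the downloaded dataset.
t1w = nib.load("OAS30040_T1w.nii.gz")
volume = t1w.get_fdata()  # 3D array of voxel intensities
print("shape:", volume.shape, "voxel size (mm):", t1w.header.get_zooms())

# Simple min-max normalisation to [0, 1], a common preprocessing step before analysis.
volume = (volume - volume.min()) / (np.ptp(volume) + 1e-8)
```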
Retinal Fundus Multi-disease Image Dataset (RFMiD 2.0) is an auxiliary dataset to our previously published dataset. RFMiD 2.0 offers the research community a more challenging benchmark for developing computer-based disease diagnosis systems. Diabetic retinopathy, cataracts, and refractive errors are among the diseases that most frequently cause permanent vision loss. Therefore, developing an AI-based model to classify these diseases is useful for ophthalmologists. This dataset consists of 860 images covering 51 frequently and rarely observed diseases.
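Because a single fundus image can exhibit several conditions at once, classification on RFMiD-style data is usually framed as a multi-label problem. Below is a minimal sketch assuming 51 binary disease labels; the ResNet-18 backbone and the dummy tensors are illustrative choices, not part of the dataset or any reference implementation.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_DISEASES = 51  # number of disease labels reported for the dataset

# Illustrative backbone; any image classifier with a replaceable head would do.
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, NUM_DISEASES)

criterion = nn.BCEWithLogitsLoss()  # one independent binary loss per disease label

images = torch.randn(4, 3, 224, 224)                      # dummy batch of fundus images
labels = torch.randint(0, 2, (4, NUM_DISEASES)).float()   # dummy multi-hot targets

logits = model(images)
loss = criterion(logits, labels)
loss.backward()
print("loss:", loss.item())
```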
This is a preliminary collation of the relevant TCGA datasets used in our methodology. We will upload the full dataset later for your reference and use. We hope to make a small contribution to the study of automatic 3D MRI classification of gliomas and to the problem of domain adaptation in medical images.
Due to the complex and unstructured nature of the intestine, 3D reconstruction and visual navigation are imperative for clinical endoscopists performing the skill-intensive colonoscopy procedure. Unsupervised 3D reconstruction methods, a mainstream paradigm in autonomous-driving scenarios, exploit a warping loss to jointly predict 6-DOF pose and depth. However, owing to illumination inconsistency, repeated texture regions, and non-Lambertian reflection, the geometric warping constraint cannot be applied efficiently to the colonic environment.
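For reference, the sketch below shows a generic photometric warping loss of the kind referred to above: the source frame is inverse-warped into the target view using the predicted depth and relative 6-DOF pose, then compared to the target frame. The pinhole intrinsics, pose convention, and plain L1 photometric error are assumptions for illustration, not the paper's exact formulation; the brightness-constancy assumption baked into this loss is precisely what illumination changes and non-Lambertian reflection violate in the colon.

```python
import torch
import torch.nn.functional as F

def warp_source_to_target(src_img, tgt_depth, T_tgt_to_src, K):
    """Inverse-warp src_img into the target view using the target depth map.

    src_img:      (B, 3, H, W) source frame
    tgt_depth:    (B, 1, H, W) predicted depth of the target frame
    T_tgt_to_src: (B, 4, 4) relative pose taking target-frame points into the source frame
    K:            (B, 3, 3) camera intrinsics
    """
    B, _, H, W = src_img.shape
    device = src_img.device

    # Pixel grid of the target image in homogeneous coordinates.
    ys, xs = torch.meshgrid(
        torch.arange(H, device=device, dtype=torch.float32),
        torch.arange(W, device=device, dtype=torch.float32),
        indexing="ij",
    )
    ones = torch.ones_like(xs)
    pix = torch.stack([xs, ys, ones], dim=0).reshape(1, 3, -1).expand(B, -1, -1)  # (B, 3, H*W)

    # Back-project to 3D in the target camera, then transform into the source camera frame.
    cam_points = torch.linalg.inv(K) @ pix * tgt_depth.reshape(B, 1, -1)          # (B, 3, H*W)
    cam_points = torch.cat([cam_points, torch.ones(B, 1, H * W, device=device)], dim=1)
    src_points = (T_tgt_to_src @ cam_points)[:, :3]                               # (B, 3, H*W)

    # Project into the source image plane and normalise to [-1, 1] for grid_sample.
    proj = K @ src_points
    px = proj[:, 0] / (proj[:, 2] + 1e-7)
    py = proj[:, 1] / (proj[:, 2] + 1e-7)
    grid = torch.stack([2 * px / (W - 1) - 1, 2 * py / (H - 1) - 1], dim=-1).reshape(B, H, W, 2)

    return F.grid_sample(src_img, grid, padding_mode="border", align_corners=True)

def photometric_warping_loss(tgt_img, src_img, tgt_depth, T_tgt_to_src, K):
    # Plain L1 photometric error; real pipelines typically add SSIM terms and validity masking.
    warped = warp_source_to_target(src_img, tgt_depth, T_tgt_to_src, K)
    return (warped - tgt_img).abs().mean()
```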
Here we provide fully sampled multi-dimensional datasets at different regions of interest for reproducibility validation of our submitted paper.
Data diversity and volume are crucial to the success of training deep learning models, yet in the medical imaging field the difficulty and cost of data collection and annotation are especially high. Specifically in robotic surgery, data scarcity and imbalance have heavily affected model accuracy and limited the design and deployment of deep learning-based surgical applications such as surgical instrument segmentation.
Synaptic vesicle glycoprotein 2A (SV2A) is the most widely distributed transmembrane glycoprotein present on secretory vesicles in the pre-synaptic terminal of neurons throughout the central nervous system (Bajjalieh et al., 1994). SV2A can be used as a marker to visualize pre-synaptic density distribution in vivo using positron emission tomography (PET) imaging thanks to the SV2A radioligands available, including [11C]UCB-J (Nabulsi et al., 2016). Given the brain-wide distribution of SV2A, regional analysis of SV2A PET data may be limiting the amount of information that can be obtained.