This dataset contains the trained model that accompanies the publication of the same name:
Anup Tuladhar*, Serena Schimert*, Deepthi Rajashekar, Helge C. Kniep, Jens Fiehler, Nils D. Forkert, "Automatic Segmentation of Stroke Lesions in Non-Contrast Computed Tomography Datasets With Convolutional Neural Networks," in IEEE Access, vol. 8, pp. 94871-94879, 2020, doi:10.1109/ACCESS.2020.2995632. *: Co-first authors
The dataset contains 3 parts:
- Pre-processing: Script to extract brain volume from surrounding skull in non-contrast computed tomography (NCCT) scans and instructions for further pre-processing.
- Trained convolutional neural network (CNN) to perform automated segmentations
- Post-processing script to improve CNN-based segmentations
Independent instructions for each part are also included in the corresponding folders.
The nucleus and micronucleus images in this dataset were collected manually from Google Images. Most of the images are in RGB color, while a few are grayscale. The dataset includes 148 nucleus images and 158 micronucleus images. The images were manually curated, cropped, and labeled into these two classes by domain experts in biology. The images vary in size and resolution, and the size and shape of the nuclei and micronuclei differ from one image to another. Each image may contain one or more nuclei or micronuclei.
The dataset comprises up to two weeks of activity data taken from the ankle and foot of 14 people without amputation and 17 people with lower limb amputation. Walking speed, cadence, and lengths of strides taken at and away from the home were considered in this study. Data were collected from two wearable sensors: one inertial measurement unit (IMU) placed on the top of the prosthetic or non-dominant foot, and one accelerometer placed on the same ankle. Location information was derived from GPS and labeled as ‘home’, ‘away’, or ‘unknown’. The dataset contains the raw accelerometer data.
This dataset comprises 31 MATLAB .mat files. Each .mat file contains all sensor data for one individual participant. Files for participants with lower limb amputation (n = 17) are named ‘S##.mat’ and files for control participants (n = 14) are named ‘C##.mat’.
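As an illustration of the naming scheme above, the expected file names can be enumerated and a single participant file inspected with SciPy. This is a sketch, not code shipped with the dataset; the variable names stored inside each .mat file are not documented here, so the helper simply lists them.

```python
# Sketch: enumerate the expected file names for the two groups and inspect
# one file. Loading requires SciPy; variable names inside each .mat file
# are an unknown here, so we only list the non-metadata keys.
amputee_files = [f"S{i:02d}.mat" for i in range(1, 18)]   # S01.mat ... S17.mat
control_files = [f"C{i:02d}.mat" for i in range(1, 15)]   # C01.mat ... C14.mat

def list_variables(mat_path):
    """Load one participant file and return its top-level variable names."""
    from scipy.io import loadmat
    data = loadmat(str(mat_path))
    # Keys starting with '__' are MATLAB header metadata, not sensor data.
    return [k for k in data if not k.startswith("__")]
```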
EEG signals of various subjects are provided as text files. They can be useful for testing various EEG signal-processing algorithms: filtering, linear prediction, abnormality detection, PCA, ICA, etc.
A custom-made multispectral camera was used to collect a novel dataset of images of untreated lettuce leaves and leaves treated with vinegar, oil, or a combination of the two. The camera captured image data at 10 wavelengths ∈ [380 nm, 980 nm], spanning the visible and near-infrared (NIR) regions of the electromagnetic spectrum. Imaging was done in a lab environment in the presence of ambient light.
The dataset consists of two populations of fetuses: 160 healthy and 102 Late Intra-Uterine Growth Restricted (IUGR). Late IUGR is an adverse pathological condition encompassing chronic hypoxia as a consequence of placental insufficiency, resulting in an abnormal rate of fetal growth. In standard clinical practice, Late IUGR diagnosis can only be suspected in the third trimester and ultimately confirmed at birth. This data collection comprises a set of 31 Fetal Heart Rate (FHR) indices computed at different time scales and domains, accompanied by the clinical diagnosis.
The data for healthy and Late IUGR populations are included in a single .xlsx file.
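A minimal sketch of reading this file with pandas and separating the two populations; the file name "fhr_dataset.xlsx" and the exact diagnosis column label are assumptions, not the published names.

```python
# Sketch: load the .xlsx and split the two populations by diagnosis.
# "fhr_dataset.xlsx" and the column label "Clinical Diagnosis" are
# hypothetical; substitute the actual file and header names.
import pandas as pd

def split_populations(df, col="Clinical Diagnosis"):
    """Return (healthy, late_iugr) row subsets based on the diagnosis column."""
    healthy = df[df[col] == "HEALTHY"]
    late_iugr = df[df[col] == "LATE IUGR"]
    return healthy, late_iugr

# Typical use:
# df = pd.read_excel("fhr_dataset.xlsx")
# healthy, late_iugr = split_populations(df)
```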
Participants are listed by rows and features by columns. Below is an exhaustive list of the features contained in the dataset, accompanied by their units, the time interval used for the computation, and references to the scientific literature:
Fetal and Maternal Domains
- Clinical Diagnosis [HEALTHY/LATE IUGR]: binary variable to report the clinical diagnosis of the participant
- Gestational Age [days]: gestational age at the time of CTG examination
- Maternal Age [years]: maternal age at the time of CTG examination
- Sex [Male (1)/Female (2)]: fetal sex
Morphological and Time Domains
- Mean FHR [bpm] – 1-min epoch: the mean of FHR excluding accelerations and decelerations
- Std FHR [bpm] – 1-min epoch: the standard deviation of FHR excluding accelerations and decelerations
- DELTA [ms] – 1-min epoch: defined in accordance with the references below, excluding accelerations and decelerations
- II – 1-min epoch: defined in accordance with the references below, excluding accelerations and decelerations
- STV [ms] – 1-min epoch: defined in accordance with the references below, excluding accelerations and decelerations
- LTI [ms] – 3-min epoch: defined in accordance with the references below, excluding accelerations and decelerations
- ACC_L [#] – entire recording: the count of large accelerations, defined in accordance with the references below
- ACC_S [#] – entire recording: the count of small accelerations, defined in accordance with the references below
- CONTR [#] – entire recording: the count of contractions, defined in accordance with the references below
- LF [ms²/Hz] – 3-min epoch: defined in accordance with the references below; the LF band is [0.03 - 0.15] Hz
- MF [ms²/Hz] – 3-min epoch: defined in accordance with the references below; the MF band is [0.15 - 0.5] Hz
- HF [ms²/Hz] – 3-min epoch: defined in accordance with the references below; the HF band is [0.5 - 1] Hz
- ApEn [bits] – 3-min epoch: defined in accordance with the references below; m = 1, r = 0.1 * standard deviation of the considered epoch
- SampEn [bits] – 3-min epoch: defined in accordance with the references below; m = 1, r = 0.1 * standard deviation of the considered epoch
- LCZ_BIN_0 [bits] – 3-min epoch: defined in accordance with the references below; binary coding and p = 0
- LCZ_TER_0 [bits] – 3-min epoch: defined in accordance with the references below; ternary coding and p = 0
- AC/DC/DR [bpm] – entire recording: defined in accordance with the references below, considering different combinations of the parameters T and s; L is constant and equal to 100 samples. E.g., AC_T1_s2 is the acceleration capacity computed with T = 1 and s = 2
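The entropy indices above share the parameters m = 1 and r = 0.1 times the standard deviation of the epoch. As an illustration only (not the code used by the authors), sample entropy in the sense of Richman and Moorman can be sketched with those parameters as:

```python
# Illustrative sample entropy, with m = 1 and r = 0.1 * std as in the
# dataset description. A textbook-style sketch, not the authors' code.
import numpy as np

def sample_entropy(x, m=1, r_factor=0.1):
    """SampEn of a 1-D epoch: -ln(A/B), where B counts similar template
    pairs of length m and A those that remain similar at length m + 1."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    r = r_factor * np.std(x)

    def matches(length):
        # Count template pairs (i < j) whose segments of the given length
        # stay within tolerance r (Chebyshev distance). Both counts use
        # the same n - m start points so A and B are comparable.
        count = 0
        for i in range(n - m):
            for j in range(i + 1, n - m):
                if np.max(np.abs(x[i:i + length] - x[j:j + length])) <= r:
                    count += 1
        return count

    b = matches(m)      # similar templates of length m
    a = matches(m + 1)  # ...that remain similar at length m + 1
    if a == 0 or b == 0:
        return float("inf")  # undefined for too-short or too-irregular epochs
    return -np.log(a / b)
```

A perfectly periodic epoch yields SampEn = 0 (every match extends), while an irregular epoch yields a positive value.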
D. Arduini, G. Rizzo, A. Piana, P. Bonalumi, P. Brambilla, and C. Romanini, "Computerized analysis of fetal heart rate—Part I: Description of the system (2CTG)," J. Matern. Fetal Invest., vol. 3, pp. 159–164, 1993.
M. G. Signorini, G. Magenes, S. Cerutti, and D. Arduini, "Linear and nonlinear parameters for the analysis of fetal heart rate signal from cardiotocographic recordings," IEEE Trans. Biomed. Eng., vol. 50, no. 3, pp. 365–374, 2003.
FIGO, "Guidelines for the use of fetal monitoring," Int. J. Gynecol. Obstet., vol. 25, pp. 159–167, 1986.
R. Rabinowitz, E. Persitz, and E. Sadovsky, "The relation between fetal heart rate accelerations and fetal movements," Obstet. Gynecol., vol. 61, no. 1, pp. 16–18, 1983.
S. M. Pincus and R. R. Viscarello, "Approximate entropy: A regularity measure for fetal heart rate analysis," Obstet. Gynecol., vol. 79, no. 2, pp. 249–255, 1992.
D. E. Lake, J. S. Richman, M. P. Griffin, and J. R. Moorman, "Sample entropy analysis of neonatal heart rate variability," Am. J. Physiol. Regul. Integr. Comp. Physiol., vol. 283, no. 3, pp. R789–R797, 2002.
A. Lempel and J. Ziv, "On the complexity of finite sequences," IEEE Trans. Inf. Theory, vol. 22, no. 1, pp. 75–81, 1976.
A. Bauer et al., "Phase-rectified signal averaging detects quasi-periodicities in non-stationary data," Phys. A: Stat. Mech. Appl., vol. 364, pp. 423–434, 2006.
A. Fanelli, G. Magenes, M. Campanile, and M. G. Signorini, "Quantitative assessment of fetal well-being through CTG recordings: A new parameter based on phase-rectified signal average," IEEE J. Biomed. Health Inform., vol. 17, no. 5, pp. 959–966, 2013.
M. W. Rivolta, T. Stampalija, M. G. Frasch, and R. Sassi, "Theoretical value of deceleration capacity points to deceleration reserve of fetal heart rate," IEEE Trans. Biomed. Eng., pp. 1–10, 2019.
Pressing workload demands, along with social media interaction, lead to diminished alertness during work hours. Researchers have attempted to measure alertness level from various cues such as EEG, EOG, and video-based eye movement analysis. Among these, video-based eyelid and iris motion tracking has gained much attention in recent years. However, most such implementations are tested on videos of subjects without spectacles; these videos do not pose a challenge for eye detection and tracking.
The Disease-Specific Faces (DSF) database is used to research the phenotypes and genotypes of diseases.
Disease-Specific Face images were collected from:
♦ Professional medical publications
♦ Professional medical websites
♦ Medical Forums
with definite diagnostic results.
The database is updated every three months.
If you would like to use the DSF database, please send an email to email@example.com.