Participants were 61 children with ADHD and 60 healthy controls (boys and girls, ages 7-12). The children with ADHD were diagnosed by an experienced psychiatrist according to DSM-IV criteria and had been taking Ritalin for up to 6 months. None of the children in the control group had a history of psychiatric disorders, epilepsy, or any reported high-risk behaviors.

 

Instructions: 

 

Extract the Zip files. Load the ".mat" data into MATLAB.

 

If you want to import the electrode locations into EEGLAB, please use the attached ".ced" file.
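If you prefer to inspect the recordings in Python rather than MATLAB, the same ".mat" files can typically be read with scipy; the file and variable names below are hypothetical, since they are not listed here.

```python
from scipy.io import loadmat

# Hypothetical file name; use one of the ".mat" files from the extracted archive.
mat = loadmat("ADHD_subject_01.mat")

# List the stored variables to locate the EEG array and any labels.
print([k for k in mat if not k.startswith("__")])
```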

 

 

 


This is a large dataset of Bengali tweets related to COVID-19. There are 36117 unique tweet IDs in the whole dataset, which ranges from December 2019 to May 2020. The keywords used to crawl the tweets are 'corona', 'covid', 'sarscov2', 'covid19', and 'coronavirus'. To obtain the other 33 fields of data, drop a mail at "avishekgarain@gmail.com". A code snippet is given in the documentation file. Sharing Twitter data other than tweet IDs publicly violates Twitter's regulations.

Instructions: 

The script to load the data is provided in the documentation.


This is a large dataset of Spanish tweets related to COVID-19. There are 18958 unique tweet IDs in the whole dataset, which ranges from December 2019 to May 2020. The keywords used to crawl the tweets are 'corona', 'covid', 'sarscov2', 'covid19', and 'coronavirus'. To obtain the other 33 fields of data, drop a mail at "avishekgarain@gmail.com". A code snippet is given in the documentation file. Sharing Twitter data other than tweet IDs publicly violates Twitter's regulations.

Instructions: 

Use the Python code snippet provided in the documentation to load the data.
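Since the documentation file is not reproduced here, the following is a minimal sketch of loading the tweet IDs for either of the two tweet-ID sets above; the file name and layout (one tweet ID per row) are assumptions, not the released format.

```python
import csv

# Hypothetical file name and layout: one tweet ID per row.
tweet_ids = []
with open("covid_tweet_ids.csv", newline="", encoding="utf-8") as f:
    for row in csv.reader(f):
        if row:                       # skip blank lines
            tweet_ids.append(row[0])  # keep IDs as strings to avoid precision loss

print(len(tweet_ids), "tweet IDs loaded")
```

The IDs can then be hydrated into full tweets through the Twitter API, which is how the remaining fields are recovered without redistributing tweet content.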


This dataset contains cardiovascular data recorded during progressive exsanguination in a porcine model of hemorrhage. Both wearable and catheter-based sensors were used to capture cardiovascular function: the wearable system fused ECG, SCG, and PPG sensors, while the catheter-based system consisted of pressure catheters in the aortic arch, femoral artery, and right and left atria via a Swan-Ganz catheter.

Instructions: 

Experimental Protocol

This protocol included 6 Yorkshire swine (3 castrated male, 3 female; age: 114-150 days; weight: 51.5-71.4 kg), each of which passed a health assessment examination but was not subject to other exclusion criteria. Anesthesia was induced in each animal with xylazine and telazol and maintained with inhaled isoflurane during mechanical ventilation. Intravenous heparin was administered as needed to prevent coagulation of blood during the protocol. Before the induction of hypovolemia, a blood sample was taken to assess baseline plasma absorption. Following this baseline sample, Evans Blue dye was administered for blood volume estimation. After waiting several minutes to allow for even distribution of the dye, a second blood sample was taken to measure plasma volume. In this method, plasma volume is used along with hematocrit to estimate total blood volume. For one animal in the protocol (Pig 4), atropine was administered to raise the starting heart rate and blood pressure due to critically low values.

Hypovolemia was induced by draining blood through an arterial line at four levels of blood volume loss (7%, 14%, 21%, and 28%) as determined by the estimated total blood volume from the Evans Blue dye protocol. After draining passively through the arterial line, the blood was stored in a sterile container. Following each level of blood loss, exsanguination was paused for approximately 5-10 minutes to allow the cardiovascular system to stabilize. If cardiovascular collapse occurred once a level was reached, as defined by a 20% drop in mean aortic pressure from baseline after stabilization, exsanguination was terminated. Note that cardiovascular collapse was reached at different blood volume levels for each animal: Pigs 1, 3, and 4 reached 21% blood volume loss; Pigs 2 and 6 reached 28% blood volume loss; and Pig 5 reached 14% blood volume loss before the experimental protocol was terminated.

 

Signals from wearable sensors were continuously recorded using a BIOPAC MP160 data acquisition system (BIOPAC Systems, Inc., Goleta, California, USA) with a sampling frequency of 2 kHz. Electrocardiogram (ECG) signals were captured using a three-lead system of adhesive-backed Ag/AgCl electrodes placed in Einthoven Lead II configuration, which interfaced with a BIOPAC ECG100C amplifier. Reflectance-mode photoplethysmogram (PPG) was captured with a BIOPAC TSD270A transreflectance transducer, which interfaced with a BIOPAC OXY200 veterinary pulse oximeter. The transducer was placed over the femoral artery on either the right or left caudal limb, contralateral to inducer placement. Seismocardiogram (SCG) signals were captured using an ADXL354 accelerometer (Analog Devices, Inc., Norwood, Massachusetts, USA) placed on the mid-sternum, interfacing with a BIOPAC HLT100C transducer interface module.

Aortic root pressure was captured by inserting a fluid-filled catheter through a vascular introducer in the right carotid artery, fed through to the aortic root. Femoral artery pressure was obtained directly from an introducer placed on either the left or right femoral artery depending on accessibility. Right and left atrial pressures were captured with a Swan-Ganz catheter with proximal and distal monitoring ports inserted in either the right or left femoral vein. Left atrial pressure was inferred via PCWP captured using an Edwards 131F7 Swan-Ganz catheter (Edwards Lifesciences Corp, Irvine, California, USA). The vascular introducers were connected via pressure monitoring lines to ADInstruments MLT0670 pressure transducers (ADInstruments Inc., Colorado Springs, Colorado, USA). Data from the catheters were continuously recorded with an ADInstruments Powerlab 8/35 acquisition system sampling at 2 kHz.

 

Signal Pre-Processing

All signals were filtered with finite impulse response band-pass filters with a Kaiser window, applied in both the forward and reverse directions to offset phase shift. Cutoff frequencies were 0.5-40 Hz for ECG and 1-40 Hz for SCG. Only the dorso-ventral component of the SCG acceleration signal was used in this study. PPG signals, along with all four catheter-based pressure signals, were filtered with cutoffs at 0.5-10 Hz. After filtering, data from all signals were heartbeat-separated using ECG R-peaks. The signal segments were then abbreviated to a length of 1,000 samples (500 ms) to enable more uniform analysis; however, due to the long left ventricular ejection time of Pig 3, a length of 1,500 samples (750 ms) is provided for this subject.
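The released signals are already filtered, but for reference, a comparable zero-phase Kaiser-window band-pass step can be sketched in Python as below; the filter length and Kaiser beta are illustrative assumptions, not the authors' design parameters.

```python
import numpy as np
from scipy.signal import firwin, filtfilt

fs = 2000  # Hz, sampling rate of both acquisition systems described above

# Band-pass FIR filter with a Kaiser window for the ECG band (0.5-40 Hz).
# The filter length (1001 taps) and beta (8.0) are illustrative choices.
taps = firwin(1001, [0.5, 40.0], pass_zero=False, window=("kaiser", 8.0), fs=fs)

# Forward-reverse (zero-phase) filtering, matching the description above.
ecg_raw = np.random.randn(10 * fs)          # placeholder signal for illustration
ecg_filtered = filtfilt(taps, 1.0, ecg_raw)
```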

 

Using the Dataset

This dataset contains a separate .mat file for each of the 6 animal subjects in the protocol. The variables "scg" and "ppg" contain R-peak-separated signals from the SCG and PPG, respectively, during the protocol. The variables "aortic", "femoral", "rightAtrium", and "wedge" contain the R-peak-separated pressure waveforms from the catheters placed in the aortic root, femoral artery, right atrium, and left atrium (wedge pressure), respectively. Each of these variables is a struct, with each of its fields representing a different level of blood volume loss. The field "B1" corresponds to the baseline level (pre-exsanguination); "L1", "L2", "L3", and "L4" correspond to the 7%, 14%, 21%, and 28% drops in blood volume, respectively. Thus, the data in each field represent the heartbeat-separated signals collected during that blood volume level. Periods of active blood draining have been removed, so the provided data reflect the heartbeat-separated signals recorded during the resting period between blood draws. The data are formatted as columnwise matrices, with the columns arranged in sequential order such that the first column is the first heartbeat and the last column is the last heartbeat.

 

The indices of ECG R-peaks are also provided as a vector for each blood volume level, such that each element in the vector corresponds to its respective column in the provided column matrices. These values are in milliseconds, starting from t = 0 (onset of baseline recording).
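A minimal Python sketch for reading one of these files with scipy is shown below; the file name is hypothetical, since the per-animal file names are not listed above.

```python
from scipy.io import loadmat

# Hypothetical file name; one .mat file is provided per animal.
mat = loadmat("Pig1.mat", squeeze_me=True, struct_as_record=False)

# "scg" is a struct with fields B1 (baseline) and L1-L4 (7%-28% blood volume loss).
scg = mat["scg"]
baseline_scg = scg.B1       # columnwise matrix: one heartbeat per column
print(baseline_scg.shape)   # (samples per beat, number of heartbeats)
```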


This repository introduces a novel dataset for the classification of Chronic Obstructive Pulmonary Disease (COPD) patients and healthy controls. The Exasens dataset includes demographic information on 4 groups of saliva samples (COPD-HC-Asthma-Infected) collected within the framework of the joint research project Exasens (https://www.leibniz-healthtech.de/en/research/projects/bmbf-project-exasens/) at the Research Center Borstel, BioMaterialBank Nord (Borstel, Germany).

Instructions: 

 

Definition of 4 sample groups included within the Exasens dataset:

(I) Outpatients and hospitalized patients with COPD without acute respiratory infection (COPD).

(II) Outpatients and hospitalized patients with asthma without acute respiratory infections (Asthma).

(III) Patients with respiratory infections, but without COPD or asthma (Infected).

(IV) Healthy controls without COPD, asthma, or any respiratory infection (HC).

Attribute Information (a small decoding sketch follows this list):

1- Diagnosis (COPD-HC-Asthma-Infected)

2- ID

3- Age

4- Gender (1=male, 0=female)

5- Smoking Status (1=Non-smoker, 2=Ex-smoker, 3=Active-smoker)

6- Saliva Permittivity:

a) Imaginary part (Min(Δ)=Absolute minimum value, Avg.(Δ)=Average)

b) Real part (Min(Δ)=Absolute minimum value, Avg.(Δ)=Average)
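As a convenience, here is a minimal pandas sketch for loading the table and decoding the coded columns; the file name and exact column headers are assumptions based on the attribute list above, not the published file layout.

```python
import pandas as pd

# Hypothetical file name and headers; adjust to the actual Exasens file.
df = pd.read_csv("Exasens.csv")

# Decode the coded demographic columns listed above.
df["Gender"] = df["Gender"].map({1: "male", 0: "female"})
df["Smoking Status"] = df["Smoking Status"].map(
    {1: "Non-smoker", 2: "Ex-smoker", 3: "Active-smoker"})

print(df.groupby("Diagnosis").size())  # counts per group: COPD / HC / Asthma / Infected
```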

If you use the Exasens dataset or the proposed classification methods, please cite the following papers:

  • P. S. Zarrin, N. Roeckendorf and C. Wenger, "In-vitro Classification of Saliva Samples of COPD Patients and Healthy Controls Using Machine Learning Tools," in IEEE Access, doi: 10.1109/ACCESS.2020.3023971.

  • Soltani Zarrin, P.; Ibne Jamal, F.; Roeckendorf, N.; Wenger, C. Development of a Portable Dielectric Biosensor for Rapid Detection of Viscosity Variations and Its In Vitro Evaluations Using Saliva Samples of COPD Patients and Healthy Control. Healthcare 2019, 7, 11.

  • Soltani Zarrin, P.; Jamal, F.I.; Guha, S.; Wessel, J.; Kissinger, D.; Wenger, C. Design and Fabrication of a BiCMOS Dielectric Sensor for Viscosity Measurements: A Possible Solution for Early Detection of COPD. Biosensors 2018, 8, 78.

  • P.S. Zarrin and C. Wenger. Pattern Recognition for COPD Diagnostics Using an Artificial Neural Network and Its Potential Integration on Hardware-based Neuromorphic Platforms. Springer Lecture Notes in Computer Science (LNCS), Vol. 11731, pp. 284-288, 2019.


The Real-Life Diabetogenic (RLD) database was built for evaluating cross-modal retrieval algorithms in real-life dietary environments. It has 4500 multimodal pairs in total, where each image can be related to multiple texts and each text can be related to multiple images.

For more details, you can refer to our paper: P. Zhou, C. Bai, J. Xia and S. Chen, "CMRDF: A Real-Time Food Alerting System Based on Multimodal Data," in IEEE Internet of Things Journal, doi: 10.1109/JIOT.2020.2996009.

Please cite the above paper if you use this database. 

Instructions: 

Package structure

Since this is a multimodal database, the images in RLD are related to texts by sharing the same tag, which is saved in `food/im_label`. A small sketch of that join follows the structure below.

* `Lifelog`: the real-world food images and the associated instant bio-data
** `RL`: the folder that contains all the real-world food images.
** `biodata.csv`: the csv file that contains all the associated instant bio-data; these data are associated to food images by the image file names.
** `biodata.txt`: the txt file that indicates the attributes of each column in `biodata.csv`.
* `Food`: the food description texts and the associated food nutrition composition data
** `description.csv`: the csv file that contains all the food description texts for each tag.
** `description.txt`: the txt file that indicates the attributes of each column in `description.csv`.
** `composition.csv`: the csv file that contains all the food nutrition composition data for each tag.
** `composition.txt`: the txt file that indicates the attributes of each column in `composition.csv`.
** `im_label.csv`: the csv file that contains all the tags related to each image.
** `im_label.txt`: the txt file that indicates the attributes of each column in `im_label.csv`.
* `data_category.csv`: the health category tags that help the model test the performance of cross-modal retrieval.
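Here is that join sketched in pandas; the tag column name and file paths are assumptions (the real column names are defined in the `.txt` attribute files), so treat it as illustrative only.

```python
import pandas as pd

# Column and path names are assumptions; the actual attribute names are listed
# in the accompanying .txt files (e.g., im_label.txt, description.txt).
im_label = pd.read_csv("Food/im_label.csv")        # image file name -> tag
description = pd.read_csv("Food/description.csv")  # tag -> description text

# Relate images to texts through the shared tag.
pairs = im_label.merge(description, on="tag", how="inner")
print(pairs.head())
```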


104 participants (54 female and 50 male) walked on a treadmill. Gait data based on 25 joint trajectories were recorded using a single Kinect V2 depth sensor placed in frontal view. We gradually increased the speed of the motorized treadmill from 0 m/s to 1.2 m/s. All recordings start once the 1.2 m/s speed is reached. After approximately 30 seconds of continuous walking, the recording stops and the slowdown begins.

Instructions: 

The .zip file contains:

1) A .xlsx file containing important information about each participant (age, sex, height, and weight).

2) A .jpg file containing an overview of the workspace.

3) 104 walking recordings (54 female and 50 male recordings).

Each recording has the identification code "K3" + the participant reference number. Each recording is a time series organized as follows (a loading sketch follows the list):

0.Time 

1.Shoulder Right (x,y,z)

2.Elbow Right (x,y,z)

3.Wrist Right (x,y,z)

4.Hand Right (x,y,z)

5.Hand tip Right (x,y,z)

6.Thumb Right (x,y,z)

7.Hip Right (x,y,z)

8.Knee Right (x,y,z)

9.Ankle Right (x,y,z)

10.Foot Right (x,y,z)

11.Shoulder Left (x,y,z)

12.Elbow Left (x,y,z)

13.Wrist Left (x,y,z)

14.Hand Left (x,y,z)

15.Hand tip Left (x,y,z)

16.Thumb Left (x,y,z)

17.Hip Left (x,y,z)

18.Knee Left (x,y,z)

19.Ankle Left (x,y,z)

20.Foot Left (x,y,z)

21.Head (x,y,z)

22.Neck (x,y,z)

23.Spine Shoulder (x,y,z)

24.Spine mid (x,y,z)

25.Spine Base (x,y,z)
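The column order above maps each joint to three consecutive coordinate columns after the time column. The sketch below computes those column indices and reads one recording; the file name and delimited-text format are assumptions, since they are not stated above.

```python
import numpy as np

JOINTS = ["Shoulder Right", "Elbow Right", "Wrist Right", "Hand Right", "Hand tip Right",
          "Thumb Right", "Hip Right", "Knee Right", "Ankle Right", "Foot Right",
          "Shoulder Left", "Elbow Left", "Wrist Left", "Hand Left", "Hand tip Left",
          "Thumb Left", "Hip Left", "Knee Left", "Ankle Left", "Foot Left",
          "Head", "Neck", "Spine Shoulder", "Spine mid", "Spine Base"]

def joint_columns(name):
    """Return the (x, y, z) column indices for a joint; column 0 is time."""
    i = JOINTS.index(name)
    start = 1 + 3 * i
    return start, start + 1, start + 2

# Hypothetical file name and delimiter; adjust to the actual recording files.
data = np.loadtxt("K3_001.csv", delimiter=",")
x, y, z = joint_columns("Knee Right")
knee_right = data[:, [x, y, z]]
print(knee_right.shape)
```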


This dataset contains the trained model that accompanies the publication of the same name:

 Anup Tuladhar*, Serena Schimert*, Deepthi Rajashekar, Helge C. Kniep, Jens Fiehler, Nils D. Forkert, "Automatic Segmentation of Stroke Lesions in Non-Contrast Computed Tomography Datasets With Convolutional Neural Networks," in IEEE Access, vol. 8, pp. 94871-94879, 2020, doi:10.1109/ACCESS.2020.2995632. *: Co-first authors

 

Instructions: 

The dataset contains 3 parts:

  • Pre-processing: a script to extract the brain volume from the surrounding skull in non-contrast computed tomography (NCCT) scans, plus instructions for further pre-processing.
  • A trained convolutional neural network (CNN) to perform automated segmentations.
  • A post-processing script to improve the CNN-based segmentations.

 

Independent instructions for each part are also contained within each folder.


The nucleus and micronucleus images in this dataset were collected manually from Google Images. Many of the images are in RGB color, while a few are in grayscale. The dataset includes 148 nucleus images and 158 micronucleus images. The images were manually curated, cropped, and labeled into these two classes by domain experts in biology. The images have different sizes and resolutions, and the sizes and shapes of the nuclei and micronuclei differ from one image to another. Each image may contain one or more nuclei or micronuclei.


This dataset was collected in the Patient Recovery Center (a 24-hour, 7-day nurse-staffed facility) with a medical consultant from the Mobile Healthcare Service of Hamad Medical Corporation.

