The Disease-Specific Faces (DSF) database supports research on the phenotypes and genotypes of diseases.
Disease-Specific Face images are collected from:
♦ Professional medical publications
♦ Professional medical websites
♦ Medical Forums
♦ Hospitals
all with definite diagnostic results.
The database is updated every three months.
If you would like to use the DSF database, please send an email to genex.tw@gmail.com.

BraTS has always focused on the evaluation of state-of-the-art methods for the segmentation of brain tumors in multimodal magnetic resonance imaging (MRI) scans. BraTS 2019 utilizes multi-institutional pre-operative MRI scans and focuses on the segmentation of intrinsically heterogeneous (in appearance, shape, and histology) brain tumors, namely gliomas. Furthermore, to pinpoint the clinical relevance of this segmentation task, BraTS'19 also focuses on the prediction of patient overall survival, via integrative analyses of radiomic features and machine learning algorithms.

Last Updated On: 
Fri, 02/28/2020 - 06:31

Synergistic prostheses enable the coordinated movement of the human-prosthetic arm, as required by activities of daily living. This is achieved by coupling the motion of the prosthesis to the human command, such as residual limb movement in motion-based interfaces. Previous studies demonstrated that developing human-prosthetic synergies in joint-space must consider individual motor behaviour and the intended task to be performed, requiring personalisation and task calibration.

Instructions: 

Task-space synergy comparison dataset for the experiments performed in 2019-2020.

Directory:

  • Processed: Processed data from MATLAB in ".mat" format. Organised by session and subject.
  • Raw: Raw time-series data gathered from sensors in ".csv" format. Each file represents a trial where a subject performed a reaching task. Organised by subject, modality and session. Anonymised subject information is included in a ".json" file.
    • Columns of the time-series files represent the different data gathered.
    • Rows of the time-series files represent the values at the given time "t".
  • Scripts: MATLAB scripts used to process and plot data. See ProcessAndUpdateSubjectData for data processing steps.
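The raw trial layout described above (rows are samples at time "t", columns are the gathered signals) can be sketched with a stdlib-only Python reader. The column names below are invented for illustration; the actual column set depends on the sensors in each modality.

```python
import csv
import io

# Hypothetical raw-trial CSV in the layout described above: each row
# holds the values at one time "t", each column is one recorded signal.
# These column names are made up for illustration only.
raw_trial = io.StringIO(
    "t,shoulder_angle,elbow_angle,wrist_pos\n"
    "0.00,10.1,45.2,0.31\n"
    "0.01,10.3,45.0,0.32\n"
    "0.02,10.6,44.7,0.33\n"
)

reader = csv.DictReader(raw_trial)
samples = [{k: float(v) for k, v in row.items()} for row in reader]

# Rows index time; columns index the different data gathered.
times = [s["t"] for s in samples]
elbow = [s["elbow_angle"] for s in samples]
```

For the real files, point the reader at the subject/modality/session path instead of the in-memory string.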

Ear-EEG recording collects brain signals from electrodes placed in the ear canal. Compared with existing scalp-EEG, ear-EEG is more wearable and more comfortable for the user.

Instructions: 

** Please note that this is under construction, and instruction is still being updated **

 

 

Participants

6 adults (2 males, 4 females, aged 22-28) participated in this experiment. The subjects were first given information about the study and then signed an informed consent form. The study was approved by the ethics committee at the City University of Hong Kong (reference number: 2-25-201602_01).

 

Hardware and Software

We recorded the scalp-EEG using a Neuroscan Quick Cap (Model C190). Ear-EEG was recorded simultaneously with scalp-EEG. The 8 ear electrodes were placed at the front and back of the ear canal (labeled xF and xB) and at the upper and bottom positions in the concha (labeled xOU and xOD). All ear and scalp electrodes were referenced to a scalp REF electrode, and the scalp GRD electrode was used as ground. The signals were sampled at 1000 Hz and then band-pass filtered between 0.5 Hz and 100 Hz, together with a notch filter to suppress line noise. The recording amplifier was a SynAmps2, and Curry 7 was used for real-time data monitoring and collection.

 

Experimental design

Subjects were seated in front of a computer monitor. A fixation cross was presented in the center of the monitor for 3 s, followed by an arrow pseudo-randomly pointing to the right or left for 4 s. During the 4 s arrow presentation, subjects imagined grasping with the left or right hand according to the arrow direction. A short warning beep was played 2 s after the cross onset to alert the subjects.
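The trial timing above (cross for 3 s, beep 2 s after cross onset, then a 4 s arrow cue) can be sketched as a small timeline helper; the function and key names are illustrative, not part of the released data.

```python
def trial_events(cross_onset):
    """Event times (in seconds) for one trial of the paradigm above:
    fixation cross for 3 s, a warning beep 2 s after cross onset,
    then a 4 s arrow cueing left- or right-hand motor imagery."""
    return {
        "cross_on": cross_onset,
        "beep": cross_onset + 2.0,       # beep 2 s into the cross period
        "arrow_on": cross_onset + 3.0,   # arrow replaces the cross
        "arrow_off": cross_onset + 3.0 + 4.0,
    }

events = trial_events(0.0)
```

Multiplying these times by the 1000 Hz sampling rate gives the corresponding sample indices in the recording.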

 

Data Records

The data and metadata from the 6 subjects are stored on IEEE DataPort. Note that subjects 1-4 completed 10 blocks of trials, while subject 6 finished only 5 blocks. Each block contained 16 trials. In our dataset, each folder contains the individual dataset from one subject. Each individual dataset consists of four types of files (.dat, .rs3, .ceo, .dap); all four files are needed for processing with EEGLAB and the MNE package. Each individual dataset contains the raw EEG data from 122 channels (scalp-EEG recording), 8 channels (ear-EEG recording), and 1 channel (REF electrode).

The individual datasets of subjects 1, 5, and 6 are split into several sub-datasets; the index indicates the time order of each sub-dataset (motor1, followed by motor2, motor3, motor4, etc.). The individual datasets of subjects 2, 3, and 4 each consist of one main dataset.

Each dataset has timestamps for epoch extraction. Two event labels mark the start of the arrow, which indicates the start of the subject's hand grasping (event number 1: left hand; event number 2: right hand).
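In practice the raw files are loaded with EEGLAB or MNE, but the epoch-extraction step itself can be sketched with plain Python on a synthetic signal. The function name, event tuples, and toy signal below are assumptions for illustration; only the 1000 Hz rate, the 4 s cue, and the 1/2 event labels come from the description above.

```python
FS = 1000  # sampling rate in Hz, as stated above

def extract_epochs(signal, events, fs=FS, dur_s=4.0):
    """Cut fixed-length epochs starting at each arrow-onset event.
    `events` is a list of (sample_index, label) pairs, where label 1
    marks a left-hand cue and label 2 a right-hand cue."""
    n = int(dur_s * fs)
    epochs = []
    for start, label in events:
        if start + n <= len(signal):          # skip truncated trials
            epochs.append((label, signal[start:start + n]))
    return epochs

# Toy single-channel signal and two events (values are illustrative).
sig = list(range(10 * FS))        # 10 s of fake samples
evts = [(1000, 1), (6000, 2)]     # left cue at 1 s, right cue at 6 s
eps = extract_epochs(sig, evts)
```

Each returned epoch is the 4 s of signal (4000 samples at 1000 Hz) following one arrow onset, paired with its left/right label.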


This dataset contains the images used in the paper "Fine-tuning a pre-trained Convolutional Neural Network Model to translate American Sign Language in Real-time". M. E. Morocho Cayamcela and W. Lim, "Fine-tuning a pre-trained Convolutional Neural Network Model to translate American Sign Language in Real-time," 2019 International Conference on Computing, Networking and Communications (ICNC), Honolulu, HI, USA, 2019, pp. 100-104.

Instructions: 

The code is written for MATLAB. We used transfer learning with AlexNet and GoogLeNet as convolutional neural network (CNN) backbones.

In MATLAB, replace the directory path with your own. If you want to recognize other classes, simply add images of the new classes to correspondingly labeled folders.


The dataset comprises raw data to validate methods for reliable data collection. We proposed the data collection methods as part of a pipeline for assessing digital healthcare apps. To validate the methods, we conducted experiments on Amazon Mechanical Turk (MTurk) and showed, using statistical tests, that the methods are significant.


For children, visual representations support understanding better than narration alone. This is advantageous for learning school lessons, and it helps engage children and enhance their imaginative skills.

Instructions: 

The file needs to be unzipped for access.


Nowadays, technology is used worldwide to treat deadly diseases. Hepatitis is spreading rapidly in Asia; every 12th Pakistani suffers from some form of hepatitis. In this study, we explored design and technology solutions to assist hepatitis patients and to create awareness among the general public. We propose an Android app, LiveDliver, and a paper-based diary, HepOrganizer, to help patients manage their disease and to help the general public acquire awareness.

Instructions: 

The file needs to be unzipped for access.


For children, visual representations support understanding better than narration alone, which is advantageous for learning school lessons and helps engage children and enhance their imaginative skills. By combining natural language processing techniques with computer graphics, it is possible to bridge the gap between these two fields; this not only eliminates the existing manual labor but can also give rise to efficient and effective system frameworks that form a foundation for complex applications.


When sleep matters for the promotion of heart health, multidisciplinary research is essential. The present dataset is drawn from the National Health and Nutrition Examination Survey (NHANES) and covers carbohydrate consumption, bedtime and waking hours, and high-sensitivity C-reactive protein (HSCRP) as a marker of cardiovascular risk. As the outcome variable, HSCRP records from 5,665 participants are available in this dataset for analysis.

