This dataset was created using the Galvanic Skin Response (GSR) sensor and the Electrocardiogram (ECG) sensor of the MySignals Healthcare Toolkit. The MySignals toolkit consists of an Arduino Uno board and several sensor ports. The sensors were connected to different ports of the hardware kit, which was controlled through the Arduino SDK.


The dataset contains:

- performance measurements for random parameter values of the Embree data structure on different scenes

- specific experiment data on the stability of triangle splitting, characterized by the angle of specific geometry

- partial tuning experiments, where some parameters were optimized while others were held fixed


The data consists of an EMG recording obtained with a hybrid electrostimulation and electromyography device. Electrodes were placed to record activity from the finger extensor muscle while the subject squeezed a hand gripper for 10 seconds and rested for another 10 seconds.


This dataset contains the output from 3D gait analysis. Over a period of 3 months, between January 1st and March 31st, 2019, 5 children were familiarized with the Hibbot walking aid by using it for 30 minutes, twice a week, under the supervision of a physiotherapist.


EmoSurv is a dataset containing keystroke data along with emotion labels. Timing and frequency data are recorded while participants type free and fixed texts before and after specific emotions are induced. These emotions are: Anger, Happiness, Calmness, Sadness, and a Neutral state.

First, data is collected while the participant is in a neutral state. Then, the participant watches an emotion-eliciting video. Once the emotion is induced, the participant types another fixed text and another free text.

Instructions: 

The dataset contains 4 .csv files:

  • File 1: Fixed Text Typing Dataset, collected while a participant is typing a fixed text; it includes the following features: User Id, Emotion Index, Index, Key Code, Key Down, Key Up, D1U1, D1U2, D1D2, U1D2, U1U2, D1U3, D1D3, and Answer.

  • File 2: Free Text Typing Dataset, collected while a participant is typing a free text; it includes the following features: User Id, Emotion Index, Index, Key Code, Key Down, Key Up, D1U1, D1U2, D1D2, U1D2, U1U2, D1U3, D1D3, and Answer.

  • File 3: Frequency Dataset, which includes frequency-related features: User ID, textIndex, EmotionIndex, DelFreq, LeftFreq, and TotTime.

  • File 4: Participants Information Dataset, which includes demographic information: UserID, TypeWith, TypistType, PCTimeAverage, AgeRange, gender, status, degree, and country.

NOTE:

  • UserID: each participant is allocated the same ID in the 4 files.

  • Emotion Index: H (for Happy), S (for Sad), A (for Angry), C (for Calm), and N (for Neutral state).

  • Key Code: the key pressed by the participant.

  • Key Down: is the exact timestamp of the key down event. 

  • Key Up: is the exact timestamp of the key up event.

  • TextIndex: the type of text typed, either FI (for Fixed text) or FR (for Free text).

  • D1U1 (DT1): Time between the first key down and the first key up

  • D1U2 (Dig2): Time between the first key down and the second key up

  • D1D2 (Dig1): Time between the first key down and the second key down

  • U1D2 (FT1 / FT2): Time between the first key up and the second key down

  • U1U2 (Dig3): Time between the first key up and the second key up

  • D1U3 (Trig2): Time between the first key down and the third key up

  • D1D3 (Trig1): Time between the first key down and the third key down

  • Answer: Takes “R” (right answer) if the participant answered the accuracy question correctly and “W” (wrong answer) otherwise. (The accuracy question is an MCQ related to the video that the participant watched.)

  • DelFreq: Relative frequency of the delete key

  • LeftFreq: Relative frequency of the backspace key

  • Typing speed: Number of keys pressed in each task divided by the time spent from the first key press to the last key release (in the same task).

  • TypeWith: specifies if the participant types using one hand or two hands

  • TypistType: specifies whether the participant uses one finger, two fingers, or is a touch typist (multiple fingers) to type a text.

  • PCTimeAverage: is the average time a user spends on his/her computer per day.

  • AgeRange: 16-19, 20-29, 30-39, or >= 40 years old.

  • Gender: Male or Female

  • Status: Student or Professional

  • Degree: College/University or High school.

  • Country: Place of residence.

The following figure represents how the timing features are calculated.
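As an illustration of how these timing features relate to the raw key-event timestamps, the sketch below recomputes the digraph and trigraph timings from consecutive Key Down / Key Up values. The column names ("keyDown", "keyUp"), the file name, and the timestamp unit are assumptions for illustration, not the authoritative file layout.

```python
import pandas as pd

# Minimal sketch: recompute the timing features from consecutive
# Key Down / Key Up timestamps. File name and column names are hypothetical.
df = pd.read_csv("FixedTextTypingDataset.csv")

down = df["keyDown"].astype(float)
up = df["keyUp"].astype(float)

features = pd.DataFrame({
    "D1U1": up - down,              # hold time of the first key (DT1)
    "D1D2": down.shift(-1) - down,  # first key down to second key down (Dig1)
    "D1U2": up.shift(-1) - down,    # first key down to second key up (Dig2)
    "U1D2": down.shift(-1) - up,    # first key up to second key down (FT1/FT2)
    "U1U2": up.shift(-1) - up,      # first key up to second key up (Dig3)
    "D1D3": down.shift(-2) - down,  # first key down to third key down (Trig1)
    "D1U3": up.shift(-2) - down,    # first key down to third key up (Trig2)
})
print(features.head())
```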

 

Grant of License

We grant You a non-exclusive, non-transferable, revocable license to use the EmoSurv Dataset solely for Your non-commercial, educational, and research purposes, but without any right to copy or reproduce, publish or otherwise make available to the public or communicate to the public, sell, rent or lend the whole or any constituent part of the EmoSurv Dataset.

 


The dataset provides Abilify Oral user reviews and ratings of the drug’s satisfaction, effectiveness, and ease of use across different age groups.


A dataset of senior high school students.

Instructions: 

A dataset of senior high school students that can be used to verify student performance prediction methods, such as graph neural networks for grade prediction.


The dataset is composed of digital signals obtained from capacitive sensor electrodes immersed in water or in oil. Each signal, stored in one row, is composed of 10 consecutive intensity values and a label in the last column. The label is +1 for a water-immersed sensor electrode and -1 for an oil-immersed sensor electrode. This dataset should be used to train a classifier to infer the type of material in which an electrode is immersed (water or oil), given a sample signal composed of 10 consecutive values.

Instructions: 

The dataset is acquired from a capacitive sensor array composed of a set of sensor electrodes immersed in three different phases: air, oil, and water. It is composed of digital signals obtained from one electrode while it was immersed in the oil and water phases at different times. 

## Experimental setup

The experimental setup is composed of a capacitive sensor array that holds a set of sensing cells (electrodes) distributed vertically along the sensor body (PCB). The electrodes are excited sequentially and the voltage (digital) of each electrode is measured and recorded. The voltages of each electrode are converted to intensity values by the following equation:

intensity = ( |Measured Voltage - Base Voltage| / Base Voltage ) x 100

Where the Base Voltage is the voltage of the electrode recorded while the electrode is immersed in air. The intensity values are stored in the dataset instead of the raw voltage values.
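As a minimal sketch of this conversion, the helper below applies the equation above to a measured voltage; the function name and the example readings are illustrative only and are not taken from the dataset.

```python
def voltage_to_intensity(measured_voltage: float, base_voltage: float) -> float:
    """Convert an electrode voltage to an intensity value (percent deviation
    from the in-air base voltage), following the equation above."""
    return abs(measured_voltage - base_voltage) / base_voltage * 100.0

# Illustrative values only: a base (in-air) reading of 2.50 and a measured
# reading of 1.95 give an intensity of 22.0.
print(voltage_to_intensity(1.95, 2.50))
```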

## Experimental procedure 

The aim of the experiments is to get fixed-size intensity signals from one electrode (target electrode) when being immersed in water and oil; labeled as +1 (water) or -1 (oil). For this purpose, the following procedure was applied:

- The linear actuator was programmed to move the sensor up and down at a constant speed (20 mm / second).

- The actuator stops when reaching the upper and bottom positions for a fixed duration of time (60 seconds).

- At the upper position, the target electrode is immersed in oil; intensity signals are labeled -1 and sent to the PC.

- At the bottom position, the target electrode is immersed in water; intensity signals are labeled +1 and sent to the PC.

- The sampling period is 100 ms; since each intensity signal contains 10 values, it takes 1 second to record one intensity signal.

## Environmental conditions

The experiments were performed under indoor laboratory conditions at a room temperature of around 23 degrees Celsius.

## Dataset structure 

The signals included in the dataset are composed of intensity signals each with 10 consecutive values and a label in the last column. The label is +1 for a water-immersed electrode and -1 for an oil-immersed electrode.

## Application

The dataset should be used to train a classifier to differentiate between electrodes immersed in water and oil phases given a sample signal.
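As a hedged example of this intended use, the sketch below loads the signals (assuming a headerless CSV with 10 intensity columns followed by the ±1 label) and trains a simple scikit-learn classifier; the file name and model choice are assumptions, not part of the dataset.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Assumed layout: each row holds 10 intensity values followed by the label
# (+1 = water, -1 = oil); the file name is hypothetical.
data = np.loadtxt("capacitive_sensor_signals.csv", delimiter=",")
X, y = data[:, :10], data[:, 10]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```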


This dataset contains the experimental materials for "Use and Perceptions of Multi-Monitor Workstations".

There are two files:

  1. survey.txt: the survey questions
  2. survey-results.csv: the answers obtained from the 101 respondents to the survey

 

Instructions: 

The data is straightforward.

A small number of entries are in Hebrew.


Most text-simplification systems require an indicator of the complexity of words. The prevalent approaches to word difficulty prediction are based on manual feature engineering, while deep learning based models have largely been left unexplored due to their comparatively poor performance. We have explored the use of one such model for predicting the difficulty of words. We have treated the problem as a binary classification problem. We have trained traditional machine learning models and evaluated their performance on the task.

Instructions: 

The data is in CSV format. Please refer to the research paper for how to obtain the difficulty label from the I_Z score.
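As an illustration of the binary classification setup described above (not the authors' exact pipeline), the sketch below assumes the CSV provides a word column and a binary difficulty label already derived from the I_Z score, and trains a simple character n-gram classifier; the file name and column names are assumptions.

```python
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Assumed columns: "word" and a binary "difficult" label derived from I_Z
# (see the paper for the actual thresholding); the file name is hypothetical.
df = pd.read_csv("word_difficulty.csv")
X_train, X_test, y_train, y_test = train_test_split(
    df["word"].astype(str), df["difficult"], test_size=0.2, random_state=0
)

# Character n-grams as simple surface features for word difficulty.
model = make_pipeline(
    CountVectorizer(analyzer="char_wb", ngram_range=(1, 3)),
    LogisticRegression(max_iter=1000),
)
model.fit(X_train, y_train)
print("Test accuracy:", model.score(X_test, y_test))
```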

