Neuroimaging research has traditionally been confined to strict laboratory environments by the limits of the available technology. Only recently have studies emerged exploring the use of mobile brain imaging outside the laboratory. This study uses electroencephalography (EEG) and signal processing techniques to provide new opportunities for studying mobile subjects outside of the laboratory, in real-world settings. The purpose of this study was to document the current viability of using high-density EEG for mobile brain imaging both indoors and outdoors.

Instructions: 

Study Summary:

The purpose of this study is to test the reliability of brain and body dynamics recordings in a real-world environment, compare electrocortical dynamics and behaviors in outdoor vs. indoor settings, determine behavioral, biomechanical, and EEG correlates of visual search task performance, and search for EEG and behavioral parameters related to increased mental stress. Each subject walked outdoors on a heavily wooded trail and indoors on a treadmill while performing a visual search object recognition task. After a baseline condition without targets, the subject was tasked with identifying light green target flags versus dark green non-target flags. During the two subsequent conditions, non-stress and stress, the subject received $0.25 for each correct flag identification. During the stress condition, the subject also received a punishment (loss of $1.00) for each incorrect flag identification, plus an automatic punishment (loss of $1.00) approximately every 2 minutes. Each of the 3 conditions lasted approximately 20 minutes. Saliva samples were collected at the start and end of each condition. The order of the non-stress and stress conditions was randomized for each subject. Please note that some events, where the subject was assumed to have perceived a stimulus, lack the Participant/Effect HED tag; this tag allows for automated processing of events. These particular events (e.g., occasional experimenter instructions to walk down a certain part of the outdoor trail) are of low importance for the purposes of data analysis.

 

Data Summary

In accordance with the Terms of Service, this dataset is made available under the terms of the Creative Commons Attribution (CC BY) license (https://ieee-dataport.org/faq/who-owns-datasets-ieee-dataport).

 

Number of Sessions: 98

Number of Subjects: 49

Subject Groups: normal

Primary source of event information: Tags

Number of EEG Channels: 264 (105 recordings)

Recorded Modalities: EEG (105 recordings), Eye_tracker (88 recordings), Force_plate (47 recordings), IMU (99 recordings), Pulse_from_EEG (52 recordings), Pulse_sensor (86 recordings)

EEG Channel Location Type(s): Custom (105 recordings)

 

Data organization

This study is an EEG Study Schema (ESS) Standard Data Level 1 container, meaning it contains raw, unprocessed EEG data arranged in a standard manner. The data are in a container folder, ready to be used with MATLAB to automate access and processing. All modalities other than EEG are stored in .mat (MATLAB) format. For more information please visit eegstudy.org.

 

There is one folder for every subject that includes the following files when available:

(1) Indoor EEG session (<ID number_Indoor.set>)

EEG files have been imported into EEGLAB and are stored unprocessed in the raw EEGLAB .set format, following the standard EEGLAB data structures.

(https://sccn.ucsd.edu/wiki/A05:_Data_Structures)
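
For illustration only (the subject ID in the file name and the path are placeholders), a recording can be loaded into MATLAB with EEGLAB's pop_loadset:

    EEG = pop_loadset('filename', '0001_Indoor.set', 'filepath', '/path/to/subject/folder');
    EEG = eeg_checkset(EEG);          % verify the EEGLAB data structure
    size(EEG.data)                    % channels x time points
    unique({EEG.event.type})          % event types recorded during the session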

 

(2) Outdoor EEG session (<ID number_Outdoor.set>)

Same as Indoor EEG session (above)

 

(3) Indoor IMU session (<ID number_Indoor_imu.mat>)

The IMU .mat file contains a structure with 6 fields (variable name: IMU)

 

IMU.dataLabel: string including ID number, environment, and sensor type

IMU.dataArray: 10xNx6 array. The third dimension indexes the 6 IMU sensors (left foot, right foot, left ankle, right ankle, chest, and waist). Columns are frame numbers. Rows are: 

• x, y, and z acceleration, in m/s^2 

• x, y, and z angular velocity (gyroscope), in rad/s 

• x, y, and z magnetic field (magnetometer), in microteslas

• Temperature, in degrees Celsius

IMU.axisLabel: String headings for ‘dataType’, ‘frame’, and ‘sensorNumber’

IMU.axisValue: 1x10 cell array of string headings for each row of data type, and 1x6 cell array of string headings for each IMU sensor

IMU.samplingRate: Sampling rate

IMU.dateTime: String of date and time information of recording
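
As an illustration only (the file name below uses a placeholder subject ID), the IMU structure can be read and indexed in MATLAB roughly as follows:

    S = load('0001_Indoor_imu.mat');                      % loads the structure 'IMU'
    IMU = S.IMU;
    chestIdx = 5;                                         % chest is the 5th sensor in the order listed above
    accChest = squeeze(IMU.dataArray(1:3, :, chestIdx));  % rows 1-3: x, y, z acceleration in m/s^2
    t = (0:size(accChest, 2) - 1) / IMU.samplingRate;     % time vector in seconds
    plot(t, accChest');
    xlabel('Time (s)'); ylabel('Acceleration (m/s^2)');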

 

(4) Outdoor IMU session (<ID number_Outdoor_imu.mat>)

Same as Indoor IMU session (above).

 

(5) Indoor eye tracking session (<ID number_Indoor_eye_tracker.mat>)

The eye tracker .mat file contains a structure with 6 fields (variable name: Eye_tracker)

 

Eye_tracker.dataLabel: string including ID number, environment, and sensor type

Eye_tracker.dataArray: 7xN matrix. Columns are frame numbers. Rows are: 

• x and y coordinates of the master spot, in eye image pixels

• x and y coordinates of the pupil center, in eye image pixels

• Pupil radius, in eye image pixels

• x and y coordinates of the eye (gaze) direction with respect to the scene image, in scene image pixels

The eye and scene images are displayed and recorded at a resolution of 640 x 480 pixels. The origin is the top left of the image, with the X axis positive to the right and the Y axis positive downward. Unavailable data are indicated by the value -2000.

 

Eye_tracker.axisLabel: String headings for ‘dataType’ and ‘frame’

Eye_tracker.axisValue: 1x7 cell array of string headings for each row of data type

Eye_tracker.samplingRate: Sampling rate

Eye_tracker.dateTime: String of date and time information of recording
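
A minimal sketch (placeholder file name; it assumes the array can be handled as double) for masking unavailable samples and inspecting the pupil radius:

    S = load('0001_Indoor_eye_tracker.mat');              % loads the structure 'Eye_tracker'
    ET = S.Eye_tracker;
    D = double(ET.dataArray);
    D(D == -2000) = NaN;                                  % mark unavailable samples (e.g., blinks, dropouts)
    pupilRadius = D(5, :);                                % row 5: pupil radius in eye image pixels
    t = (0:numel(pupilRadius) - 1) / ET.samplingRate;
    plot(t, pupilRadius);
    xlabel('Time (s)'); ylabel('Pupil radius (pixels)');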

 

(6) Outdoor eye tracking session (<ID number_Outdoor_eye_tracker.mat>)

Same as Indoor eye tracking session (above).

 

(7) Indoor heart rate from pulse sensor session (<ID number_Indoor_pulse_sensor.mat>)

The pulse sensor .mat file contains a structure with 6 fields (variable name: Pulse_sensor)

 

Pulse_sensor.dataLabel: string including ID number, environment, and sensor type

Pulse_sensor.dataArray: 3xN matrix. Columns are frame numbers. Rows are: 

• Pulse (normalized wave), in volts

• Inter-beat interval (IBI), in milliseconds

• Heart rate, in beats per minute (BPM)

Pulse_sensor.axisLabel: String headings for ‘dataType’ and ‘frame’

Pulse_sensor.axisValue: 1x3 cell array of string headings for each row of data type

Pulse_sensor.samplingRate: Sampling rate

Pulse_sensor.dateTime: String of date and time information of recording
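
As a hedged example (placeholder file name), the heart rate trace can be pulled from row 3 of the data array:

    S = load('0001_Indoor_pulse_sensor.mat');             % loads the structure 'Pulse_sensor'
    PS = S.Pulse_sensor;
    bpm = PS.dataArray(3, :);                             % row 3: heart rate in BPM
    t = (0:numel(bpm) - 1) / PS.samplingRate;
    plot(t, bpm);
    xlabel('Time (s)'); ylabel('Heart rate (BPM)');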

 

(8) Outdoor heart rate from pulse sensor session (<ID number_Outdoor_pulse_sensor.mat>)

Same as Indoor pulse sensor session (above).

 

(9) Indoor heart rate from EEG session (<ID number_Indoor_pulse_from_eeg.mat>)

If the pulse rate could be recovered from the ECG channel recorded alongside the EEG, a corresponding file is available. The pulse from EEG .mat file contains a structure with 6 fields (variable name: Pulse_from_EEG)

 

Pulse_from_EEG.dataLabel: string including ID number, environment, and sensor type

Pulse_from_EEG.dataArray: 3xN matrix. Columns are frame numbers. Rows are: 

• Pulse (normalized wave), in volts

• Inter-beat interval (IBI), in milliseconds

• Heart rate, in beats per minute (BPM)

Pulse_from_EEG.axisLabel: String headings for ‘dataType’ and ‘frame’

Pulse_from_EEG.axisValue: 1x3 cell array of string headings for each row of data type

Pulse_from_EEG.samplingRate: Sampling rate

Pulse_from_EEG.dateTime: String of date and time information of recording

 

(10) Outdoor heart rate from EEG session (<ID number_Outdoor_pulse_from_eeg.mat>)

Same as Indoor pulse from EEG session (above).

 

(11) Indoor treadmill force plate session (<ID number_Indoor_force_plate.mat>)

The force plate .mat file contains a structure with 6 fields (variable name: Force_plate)

 

Force_plate.dataLabel: string including ID number, environment, and sensor type

Force_plate.dataArray: 3xNx2 array. The third dimension indexes the left and right force plates, respectively. Columns are frame numbers. Rows are: 

• x, y, and z components of force, in newtons

Force_plate.axisLabel: String headings for ‘dataType’, ‘frame’, and ‘sensorNumber’

Force_plate.axisValue: 1x3 cell array of string headings for each row of data type

Force_plate.samplingRate: Sampling rate

Force_plate.dateTime: String of date and time information of recording
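
A brief sketch (placeholder file name; it assumes the z row is the vertical force component) for comparing left and right plate forces:

    S = load('0001_Indoor_force_plate.mat');              % loads the structure 'Force_plate'
    FP = S.Force_plate;
    FzLeft  = squeeze(FP.dataArray(3, :, 1));             % row 3 assumed vertical (z) force, left plate
    FzRight = squeeze(FP.dataArray(3, :, 2));             % right plate
    t = (0:numel(FzLeft) - 1) / FP.samplingRate;
    plot(t, FzLeft, t, FzRight);
    xlabel('Time (s)'); ylabel('Vertical force (N)'); legend('Left', 'Right');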

 

(12) EEG digitized head map (<ID number.sfp>)

BESA coordinates of all electrode positions.
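
If useful, the .sfp file can be read into MATLAB with EEGLAB's readlocs (the subject ID below is a placeholder):

    chanlocs = readlocs('0001.sfp');                      % channel locations structure from the BESA .sfp file
    {chanlocs.labels}                                     % electrode labels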

 

(13) Indoor eye tracking video (<ID number_Indoor_eye_tracker.avi>)

The eye tracker .avi file is a video from the subject’s perspective (640x480 resolution, 30 frames/sec)

 

(14) Outdoor eye tracking video (<ID number_Outdoor_eye_tracker.avi>)

The eye tracker .avi file is a video from the subject’s perspective (640x480 resolution, 30 frames/sec)

 

(15) Indoor video camera (<ID number_Indoor_video_camera(#).avi>)

The camcorder .avi file is a video from the experimenter’s perspective (704x384 resolution, 30 frames/sec). If there are multiple parts, the appended (#) indicates their order.

 

(16) Outdoor video camera (<ID number_Outdoor_video_camera(#).avi>)

The camcorder .avi file is a video from the experimenter’s perspective (704x384 resolution, 30 frames/sec). If there are multiple parts, the appended (#) indicates their order.

 

Cortisol (Cortisol_all_subjects.xlsx)

Salivary cortisol data is provided as a single spreadsheet ‘Cortisol_all_subjects.xlsx’. It contains the following variables:

 

  • subid: ID number

  • sex: 1 = male, 2 = female

  • age: in years

  • height: in inches

  • weight: in pounds

  • environment: 1 = outdoors, 2 = indoors

  • ordererenvironment: 1 = outdoor first, 2 = indoor first

  • orderstress: 1 = stress first, 2 = non-stress first

  • condition: 1 = Initial sample taken before walking started, 2 = Baseline sample after baseline walking, 3 = Non-stress sample taken after non-stress condition, 4 = Stress sample taken after stress condition

  • concentration: cortisol levels in µg/L

  • cond_ordered: order of conditions by environment
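
A short sketch, assuming the spreadsheet column names match the variable list above, for loading the file in MATLAB and summarizing cortisol by condition:

    T = readtable('Cortisol_all_subjects.xlsx');          % one row per saliva sample
    % mean concentration per condition (1 = initial, 2 = baseline, 3 = non-stress, 4 = stress)
    byCondition = groupsummary(T, 'condition', 'mean', 'concentration');
    disp(byCondition);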



Advances in optical neuroimaging techniques now allow neural activity to be recorded with cellular resolution in awake and behaving animals. Brain motion in these recordings poses a unique challenge: the location of individual neurons must be tracked in 3D over time to accurately extract single-neuron activity traces. Recordings from small invertebrates like C. elegans are especially challenging because they undergo very large brain motion and deformation during animal movement.

Instructions: 

This folder contains datasets that accompany the publication:

 

  Nguyen JP, Linder AN, Plummer GS, Shaevitz JW, Leifer AM (2017) Automatically tracking neurons in a moving and deforming brain. PLoS Comput Biol 13(5): e1005517. https://doi.org/10.1371/journal.pcbi.1005517

 

and correspond to the analysis software located at: https://github.com/leiferlab/NeRVEclustering.git

 

The data for each worm consist of 3 video streams, each recorded open loop on a different clock. The videos can be synchronized using the timing of a camera flash. The two datasets shown in the paper are included. Worm 1 is a shorter recording and includes manually annotated neuron locations as well as automatically tracked points. Worm 2 is a longer recording.

 

How to Download

The dataset is available in two ways: as a large all-in-one 0.3 TB tarball available for download via the web interface, and as a set of individual files that are available for browsing or downloading via Amazon S3.

 

Quick Summary

 

The Demo folder has 5 demo scripts that give a flavor of each step of the analysis. The analysis was originally designed to run on a computing cluster and may not run well on local machines. The demo code, in contrast, is designed to run locally. Each demo script takes some files from the ouputFiles folder and moves them into the main data directory to carry out a small part of the analysis. The Python code in the repo is not needed for these demos. The demos skip part 0, which is used for timing alignment of the videos.

 

 

Details

Raw Videos

These are the raw video inputs to the analysis pipeline.

sCMOS_Frames_U16_1200x600.dat - Binary image file for the HiMag images: a stream of 1200 x 600 uint16 images created by two imaging channels projected onto an sCMOS camera. The first half of each image (1-600) is the RFP image; the second half (601-1200) is the GCaMP6s image. Both fluorophores are expressed pan-neuronally in the nucleus. Images are taken at 200 Hz, and the worm is scanned with a 3 Hz triangle wave.

AVI files with behavior - Low magnification dark field video.

AVI files with fluorescence - Low magnification fluorescent images.

*HUDS versions of the .avi files have additional data overlaid on the video frame, such as the frame number and program status.
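
A minimal MATLAB sketch, under the assumption that frames are stored sequentially with the 1200-pixel dimension contiguous (the frame index is a placeholder), for pulling one HiMag frame out of the binary stream:

    rows = 1200; cols = 600;                              % image size as described above
    frameIdx = 1000;                                      % frame to read (placeholder)
    fid = fopen('sCMOS_Frames_U16_1200x600.dat', 'r');
    fseek(fid, (frameIdx - 1) * rows * cols * 2, 'bof');  % 2 bytes per uint16 pixel
    frame = fread(fid, [rows, cols], 'uint16=>uint16');
    fclose(fid);
    rfp   = frame(1:600, :);                              % first half: RFP channel
    gcamp = frame(601:1200, :);                           % second half: GCaMP6s channel
    imagesc(double(rfp)); axis image; colormap gray;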

Raw Text files

These are the additional inputs to the analysis pipeline. These files contain timing information for every frame of each of the video feeds. They also contain information about the positions of the stage and the objective. This information, along with the videos themselves, is used to align the timing of all of the videos. Several camera flashes are used throughout the recording.

labJackData.txt - Raw outputs from LabVIEW for the stage, the piezo that drives the objective, the sCMOS camera, and the function generator (FG), sampled at 1 kHz. The objective is mounted on a piezo stage that is driven by the output voltage of the function generator. The 1 kHz clock acts as the timing for each event. Columns:

    FuncGen - Programmed output from the FG, a triangle wave at 6 Hz.
    Voltage - Actual FG output.
    Z Sensor - Voltage from the piezo, which controls the Z position of the objective.
    FxnGen Sync - Trigger output from the FG; triggers at the center of the triangle.
    Camera Trigger - Voltage from the HiMag camera; down sweeps indicate a frame has been grabbed from the HiMag camera.
    Frame Count - Number of frames that have been grabbed from the HiMag camera. Not all grabbed frames are saved; the saved frames are indicated in the Saved Frames field of the next text file.
    Stage X - X position from the stage.
    Stage Y - Y position from the stage.

CameraFrameData.txt - Metadata from each grabbed frame of the HiMag images, saved in LabVIEW. The timing for each of these frames can be pulled from labJackData.txt. Columns:

    Total Frames - Total number of grabbed frames.
    Saved Frames - The current save index; not all grabbed frames are saved. If this increments, the frame has been saved.
    DC offset - The signal sent to the FG to translate the center of the triangle wave, keeping the center of the worm in the middle of the wave.
    Image STdev - Standard deviation of all pixel intensities in each image (used for tracking in the axial dimension).

.yaml files - Data from each grabbed frame for both low mag .avi files. Many of the data fields here are anachronistic holdovers, but of main interest are FrameNumber, Selapsed (seconds elapsed), and msRemElapsed (ms elapsed). These are parsed in order to determine the timing of each frame.

 

Processed mat files

 

These files contain processed output from the analysis pipeline at different steps.

STEP 0: OUTPUT OF TIMING ALIGNMENT CODE

hiResData.mat - Data for each frame of the HiMag video. This contains information about the imaging plane, the position of the stage, timing, and which volume each frame belongs to. Fields:

    Z - Z voltage from the piezo for each frame, indicating the imaging plane.
    frameTime - Time of each frame after flash alignment, in seconds.
    stackIdx - Number of the stack each frame belongs to. Each recorded volume is given an increasing number starting at 1 for the first volume. For example, the first 40 images belong to stackIdx = 1, the next 40 have stackIdx = 2, and so on.
    imSTD - Standard deviation of each frame.
    xpos and ypos - Stage position for each frame.
    flashLoc - Index of the frame of each flash.
    *Note: some of these fields have an extra point at the end; remove it to make everything the same size.

flashTrack.mat - 1xN vector, where N is the number of frames in the corresponding video. The values of flashTrack are the mean of each image; it shows a clear peak when a flash is triggered. This can be used to align the videos.

YAML.mat - 1xN vector, where N is the number of frames in the corresponding video. Each element of the mcdf has all of the metadata for each frame of the video. Using this requires code from the https://github.com/leiferlab/MindControlAccessUtils.git repo.

alignments.mat - Set of affine transformations between video feeds. Each has a "tconcord" field that works with MATLAB’s imwarp function. Fields:

    lowresFluor2BF - Alignment from the low mag fluorescent video to the low mag behavior video.
    S2AHiRes - Alignment from the HiMag red channel to the HiMag green channel. This alignment is prior to cropping of the HiMag red channel.
    Hi2LowResF - Alignment from the HiMag red channel to the low mag fluorescent video.

STEP 1: WORM CENTERLINE DETECTION
    initializeCLWorkspace.m (run locally for manual centerline initialization)
    Python submission code: submitWormAnalysisCenterline.py
    Matlab analysis code: clusterWormCenterline.m
    File outputs: CLstartworkspace.mat, initialized points and background images for dark field images; CL_files folder, containing partial CL.mat files; BehaviorAnalysis folder, containing the centerline.mat file with XY coordinates for each image.
    *Due to the poor image quality of the dark field images, it may be necessary to use some of the code developed by AL to manually adjust centerlines.
    **The worm 1 centerline was found using a different method, so STEP 1 can be skipped for worm 1.

STEP 2: STRAIGHTEN AND SEGMENTATION
    Python submission code: submitWormStraightening.py
    Matlab analysis code: clusterStraightenStart.m, clusterWormStraightening.m
    File outputs: startWorkspace.mat, the initial workspace used during straightening for all volumes; CLStraight* folder, containing all saved straightened tif files and the results of segmentation.

STEP 3: NEURON REGISTRATION VECTOR ENCODING AND CLUSTERING
    Python submission code: submitWormAnalysisPipelineFull.py
    Matlab analysis code: clusterWormTracker.m, clusterWormTrackCompiler.m
    File outputs: TrackMatrixFolder, containing all registrations of sample volumes with reference volumes; pointStats.mat, a structure containing all coordinates from all straightened volumes along with trackIdx, the result of initial tracking of points.

STEP 4: ERROR CORRECTION
    Python submission code: submitWormAnalysisPipelineFull.py
    Matlab analysis code: clusterBotChecker.m, clusterBotCheckCompiler.m
    File outputs: botCheckFolder, containing all coordinate guesses for all times, one mat file for each neuron; pointStatsNew.mat, containing the refined trackIdx after error correction.

STEP 5: SIGNAL EXTRACTION
    Python submission code: submitWormAnalysisPipelineFull.py
    Matlab analysis code: fiducialCropper3.m
    File output: heatData.mat, all signal results from extracting signal from the coordinates.
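
As a rough sketch of the flash-based synchronization described above (the threshold choice and the variable names inside the .mat files are assumptions), the flash frames of a low mag video can be located from flashTrack.mat and paired with flashLoc from hiResData.mat:

    S1 = load('flashTrack.mat');                          % mean image intensity per low mag frame
    fn1 = fieldnames(S1);
    flashTrack = S1.(fn1{1});                             % variable name inside the file may differ
    thresh = mean(flashTrack) + 5 * std(flashTrack);      % crude peak threshold (assumption)
    lowMagFlashFrames = find(flashTrack > thresh);        % a flash may span several consecutive frames

    S2 = load('hiResData.mat');                           % timing data for the HiMag video
    fn2 = fieldnames(S2);
    hiRes = S2.(fn2{1});
    hiMagFlashFrames = hiRes.flashLoc;                    % flash frame indices in the HiMag video
    % pairing the first flash in each stream gives a frame offset between the two clocks
    offsetFrames = hiMagFlashFrames(1) - lowMagFlashFrames(1);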

Simple dat file Viewer

HiMag fluorescent data files are stored as a stream of binary uint16 values. The GUI "ScanBinaryImageStack" is a simple MATLAB program to view these images; it can be found in the NeRVE git repo at https://github.com/leiferlab/NeRVEclustering.git. Feel free to add features. To view images, click the "Select Folder" button and select the .dat file. You can then use the slider in the GUI or the arrow keys on your keyboard to view different images. You can change the step size to change how much each arrow press increments, and you can change the bounds of the slider away from the default of first frame/last frame. The "Save snapshot" button saves a tiff of the current frame into the directory of the .dat file. The size of the image must be specified in the image size fields; all included .dat files are 1200x600 pixels. The program also works if .avi files are selected, in which case the size of the images is determined automatically.

 
