Recent advances in scalp electroencephalography (EEG) as a neuroimaging tool have allowed researchers to overcome the technical challenges and movement restrictions typical of traditional neuroimaging studies. Mobile EEG devices now enable studies of cognition and motor control in natural environments that require mobility, such as art perception and production in a museum setting, or locomotion tasks.


This dataset is associated with the paper Jackson & Hall (2016), which is open access and can be found here:

The DataPort Repository contains the data used primarily for generating Figure 1.


This paper introduces a dataset of brain signals recorded during the recognition of 100 Malayalam words, accompanied by their English translations. The dataset includes recordings from both vocal and sub-vocal modalities for the Malayalam vocabulary; for the English equivalents, only vocal signals were collected. This dataset was created to help Malayalam-speaking patients with neurodegenerative diseases.


Brain-Computer Interface (BCI) based solutions can help patients with neurodegenerative disorders, but developing them requires datasets in the languages the patients speak. For example, Marathi, a prominent Indian language spoken by over 83 million people, lacks BCI datasets for research purposes. To address this gap, we have created a dataset of electroencephalograph (EEG) signal samples for selected common Marathi words.


This dataset consists of motor imagery electroencephalography (EEG) signals for six different upper-limb rehabilitation training movements. We recruited six participants (three male) aged between 23 and 28 years, with a mean age of 25 years. Subjects sat in a comfortable chair facing a computer monitor that displayed the trial-based paradigm, with the right arm resting naturally on a table to avoid muscle fatigue.


Faces and bodies provide critical cues for social interaction and communication. Their structural encoding depends on configural processing, as suggested by the detrimental effect of stimulus inversion for both faces (i.e., the face inversion effect - FIE) and bodies (the body inversion effect - BIE). An occipito-temporal negative event-related potential (ERP) component peaking around 170 ms after stimulus onset (N170) is consistently elicited by human faces and bodies and is affected by the inversion of these stimuli.


In today’s context, it is essential to develop technologies that help older patients with neurocognitive disorders communicate better with their caregivers. Research in Brain-Computer Interfaces, especially thought-to-text translation, has been carried out in several languages, such as Chinese and Japanese. However, such research has been hindered in India by the scarcity of datasets in vernacular languages, including Malayalam. Malayalam is a South Indian language spoken primarily in the state of Kerala by about 34 million people.


This dataset accompanies a novel physics-embedded deep learning neural network for accelerating traditional full-waveform inversion (FWI) algorithms, reducing the required imaging time while overcoming the need for a high-quality initial model in traditional FWI. The dataset includes training, validation, and testing sets, along with executable files related to PEN-FWI network training and validation.


Autism spectrum disorder (ASD) is characterized by qualitative impairment in social reciprocity and by repetitive, restricted, and stereotyped behaviors and interests. Previously considered rare, ASD is now recognized to occur in more than 1% of children. Despite continuing research advances, their pace and clinical impact have not kept up with the urgency to identify ways of determining the diagnosis at earlier ages, selecting optimal treatments, and predicting outcomes. For the most part, this is due to the complexity and heterogeneity of ASD.


This is an auditory attention decoding dataset containing EEG recordings of 21 subjects who were instructed to attend to one of two competing speakers at two different locations.

Unlike previous datasets (such as the KUL dataset), the locations of the two speakers are randomly drawn from fifteen alternatives.

All subjects gave written informed consent, approved by the Nanjing University ethics committee, before the experiment and received financial compensation upon completion.