First Name: Brokoslaw
Last Name: Laschowski
Affiliation: Toronto Rehabilitation Institute
Job Title: Research Scientist
Expertise: computational neuroscience, neural networks, artificial intelligence, brain-machine interfaces
Short Bio: Dr. Brokoslaw Laschowski is a computational neuroscientist. He works as a Research Scientist and Principal Investigator at the Toronto Rehabilitation Institute, Canada’s largest rehabilitation hospital, and as an Assistant Professor at the University of Toronto, where he leads a multidisciplinary research lab exploring the intersection of neuroscience and artificial intelligence. His research focuses on developing new mathematical, computational, and machine learning models to reverse-engineer and/or interface with the brain. In addition to advancing the scientific understanding of intelligence in biological and artificial systems, his models have clinical applications in controlling robotic and neuroprosthetic technologies that assist patients with physical disabilities, ranging from autonomous control using brain-inspired algorithms to neural control using brain-machine interfaces.

Datasets & Competitions

Surface electromyography (EMG) can be used to interact with and control robots via intent recognition. However, most machine learning algorithms used to decode EMG signals have been trained on small datasets with few subjects, which limits their generalization across different users and tasks. Here we developed EMGNet, a large-scale dataset for EMG neural decoding of human movements. EMGNet combines 7 open-source datasets with processed EMG signals from 132 healthy subjects (152 GB in total).
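As a hedged illustration of how processed EMG of this kind might be decoded, the sketch below segments a synthetic multichannel recording into windows, computes per-channel RMS features, and fits a linear classifier. The sampling rate, array shapes, and labels are placeholders, not EMGNet's actual file layout.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Placeholder stand-in for one subject's processed EMG (channels x samples);
    # EMGNet's actual file format and sampling rate may differ.
    fs = 2000                                   # assumed sampling rate (Hz)
    emg = np.random.randn(8, 60 * fs)           # 8 channels, 60 s of signal
    labels = np.random.randint(0, 4, 60)        # one movement label per 1 s window

    # Segment into non-overlapping 1 s windows and compute an RMS feature per channel.
    n_windows = emg.shape[1] // fs
    windows = emg[:, :n_windows * fs].reshape(emg.shape[0], n_windows, fs)
    features = np.sqrt((windows ** 2).mean(axis=2)).T   # (n_windows, n_channels)

    # Train and evaluate a simple linear decoder on the RMS features.
    X_train, X_test, y_train, y_test = train_test_split(
        features, labels, test_size=0.25, random_state=0)
    decoder = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("Held-out accuracy:", decoder.score(X_test, y_test))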

Vision supports transitions between different locomotor controllers (e.g., from level-ground walking to stair ascent) by sensing the environment before physical interaction occurs. Here we developed StairNet to support the development and comparison of deep learning models for visual recognition of stairs. The dataset builds on ExoNet, the largest open-source dataset of egocentric images of real-world walking environments.
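The sketch below shows, under stated assumptions, the shape of a small convolutional classifier of the kind StairNet is meant to support. The two-way class split and network layout are illustrative placeholders; the dataset's actual labelling scheme and reference models are defined in its documentation.

    import torch
    import torch.nn as nn

    # Placeholder class count (e.g., stairs vs. no stairs); StairNet's real
    # labelling scheme may differ.
    NUM_CLASSES = 2

    model = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(32, NUM_CLASSES),
    )

    # Forward and backward pass on a random batch standing in for egocentric RGB frames.
    frames = torch.randn(4, 3, 224, 224)
    targets = torch.randint(0, NUM_CLASSES, (4,))
    loss = nn.CrossEntropyLoss()(model(frames), targets)
    loss.backward()
    print(loss.item())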

Egocentric vision is important for environment-adaptive control and navigation of humans and robots. Here we developed ExoNet, the largest open-source dataset of wearable camera images of real-world walking environments. The dataset contains over 5.6 million RGB images of indoor and outdoor environments, collected during summer, fall, and winter. Of these, 923,000 images were human-annotated using a 12-class hierarchical labelling architecture.
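To illustrate what a hierarchical labelling scheme implies for downstream code, the sketch below collapses fine-grained classes to their parent environment categories. The class names are invented placeholders, not ExoNet's actual 12-class taxonomy.

    # Invented placeholder hierarchy; ExoNet's real 12-class taxonomy is defined
    # in the dataset documentation and differs from these example names.
    HIERARCHY = {
        "level-ground": ["LG-open", "LG-obstacle"],
        "incline-stairs": ["IS-clear", "IS-obstacle"],
        "decline-stairs": ["DS-clear", "DS-obstacle"],
    }

    # Map each fine-grained class to its top-level environment category.
    FINE_TO_COARSE = {
        fine: coarse for coarse, children in HIERARCHY.items() for fine in children
    }

    def coarse_label(fine: str) -> str:
        """Collapse a fine-grained annotation to its parent environment class."""
        return FINE_TO_COARSE[fine]

    print(coarse_label("IS-obstacle"))   # -> "incline-stairs"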

Reference: Laschowski B, McNally W, McPhee J, and Wong A. (2019). Preliminary Design of an Environment Recognition System for Controlling Robotic Lower-Limb Prostheses and Exoskeletons. IEEE International Conference on Rehabilitation Robotics (ICORR), pp. 868-873. DOI: 10.1109/ICORR.2019.8779540.
