IEEE BioCAS 2024 Grand Challenge on Neural Decoding for Motor Control of non-Human Primates

Submission Dates:
04/26/2024 to 08/02/2024
Citation Author(s):
Biyan
Zhou
Pao-Sheng Vincent
Sun
Jason
Yik
Charlotte
Frenkel
Vijay Janapa
Reddi
Joseph E.
O'Doherty
Mariana M. B.
Cardoso
Joseph G.
Makin
Philip N.
Sabes,
Arindam
Basu
Submitted by:
Pao-Sheng Sun
Last updated:
Fri, 05/31/2024 - 08:44
DOI:
10.21227/bp1f-te92
License:
Creative Commons Attribution

Abstract 

Millions of people around the world live with paralysis and depend on caregivers for daily tasks. Their quality of life could be improved significantly by a method capable of decoding neural activity into motor movement. The intra-cortical Brain-Machine Interface (iBMI) is one of the technologies that has emerged to tackle this task. In this challenge, we provide six neural recordings from non-human primates for participants to design neural decoders that push the boundary and capabilities of iBMIs. The winning teams will have an opportunity to present their work at the 2024 IEEE BioCAS conference.

Instructions: 

About

A study reported in 2013 puts the number of people living with paralysis in the US at approximately 6 million [1]. This amounts to a significant number of people dependent on assistance from caregivers for daily living. The need to restore their ability to perform activities of daily living has motivated the development of a host of assistive technologies, the most promising of which is the intra-cortical Brain-Machine Interface (iBMI). iBMIs aim to substantially improve the lives of patients affected by spinal cord injury or debilitating conditions such as tetraplegia and amyotrophic lateral sclerosis. These systems take neural activity as input and drive effectors such as a computer cursor [2], a wheelchair [3], or a prosthetic limb [4] for communication, locomotion, and artificial hand control, respectively. Recently, there have also been efforts to use similar technology to address speech disorders by converting imagined speech into linguistic output [5]. Unsurprisingly, the global neurotechnology market [6] is expected to grow at a CAGR of 12% to reach around USD 21 billion by 2026, and the brain-computer interface market [7] at a CAGR of 13.9% to reach USD 5.46 billion by 2030.

 

However, despite compelling advances, barriers to scalability and clinical translation remain. Apart from scaling the number of sensors [10], two major challenges related to system usability [11] stand out. First, decoding algorithms are currently implemented on a PC with a wired connection from the headstage, which makes the overall system bulky and reduces patient mobility [2]–[4], [9]. We refer to this as the "mobility issue." Second, present systems (we refer to them as wired iBMIs, or simply iBMIs) transmit data from the implant through wires passing through an opening in the skull, increasing the risk of infection. We refer to this as the "skull opening issue."

 

Wireless iBMIs can solve both issues: the infection risk posed by the skull opening and the mobility restriction imposed by wiring to a computer. However, they raise a scalability issue of their own. Wireless iBMIs are difficult to scale beyond data rates of a few tens of Mbps [12], [13] due to increased bit-error rates, short run-times between battery charges, and the 80 mW/cm² power dissipation constraint within cortical implants [14], [15], limiting the number of recording channels to around 100 (~150-200 neurons). For perspective, 10,000 channels sampled at 30 kS/s with 10-bit resolution would produce roughly 3 Gbps of raw data, far beyond available wireless budgets. It is expected that dexterous prostheses will require simultaneous recording from ~10,000 neurons [16], and a more refined understanding of the brain likewise requires recording ever more neurons, making channel scaling an essential goal in this field. There is therefore an urgent need to explore solutions that compress neural data to fit the wireless budget available for implants.

 

Several solutions have been proposed to compress neural data. Compression schemes such as compressive sensing (CS) [17] and autoencoders (AE) [18] fall short of the available wireless data rates as the number of channels grows toward 10,000. Only the integration of decoders (Dec) [19]–[21] in the implant can solve the problem in a scalable way. This underlines the importance of developing new neural decoders with a good tradeoff between accuracy and resource usage, suitable for deployment in implants. While traditional decoder designs have used signal processing approaches, recent advances in neural networks (such as model compression, quantization, and new architectures like spiking neural networks) make them promising candidates for future iBMIs. This IEEE BioCAS Grand Challenge is geared in that direction and aims to push the boundary of neural decoder design toward next-generation BMI systems.

 

References

[1] CRF, “Paralysis statistics - Reeve Foundation,” 2014. https://www.christopherreeve.org/living-with-paralysis/stats-about-paralysis

[2] C. Pandarinath, P. Nuyujukian, C. H. Blabe, B. L. Sorice, J. Saab, F. R. Willett, L. R. Hochberg, K. Shenoy, and J. M. Henderson, “High performance communication by people with paralysis using an intracortical brain-computer interface,” Elife, vol. 6, p. e18554, Feb. 2017.

[3] C. Libedinsky, R. So, Z. Xu, T. K. Kyar, C. Guan, M. Je, and S. C. Yen, “Independent mobility achieved through a wireless brain-machine interface,” PLoS One, vol. 11, no. 11, pp. 1–13, 2016, doi: 10.1371/journal.pone.0165773.

[4] J. L. Collinger, B. Wodlinger, J. E. Downey, W. Wang, E. C. Tyler-Kabara, D. J. Weber, A. J. McMorland, M. Velliste, M. L. Boninger and A. B. Schwarz, “High-performance neuroprosthetic control by an individual with tetraplegia,” Lancet, vol. 381, no. 9866, pp. 557–564, Feb. 2013.

[5] C. Cooney, R. Folli, and D. Coyle, “Neurolinguistics Research Advancing Development of a Direct-Speech Brain-Computer Interface.,” iScience, vol. 8, pp. 103–125, Oct. 2018.

[6] “Neurotechnology market.” https://www.expertmarketresearch.com/reports/neurotechnology-market

[7] “BCI market.” https://www.globenewswire.com/en/news-release/2021/08/12/2279545/0/en/Global-Brain-Computer-Interface-Market-Is-Expected-to-Reach-5-46-Billion-by-2030-Says-AMR.html

[8] “Neuralink.” https://www.neuralink.com/

[9] G. Santhanam, S. I. Ryu, B. M. Yu, A. Afshar, and K. V. Shenoy, “A high-performance brain–computer interface,” Nature, vol. 442, no. 7099, pp. 195–198, Jul. 2006, doi: 10.1038/nature04968.

[10] I. H. Stevenson and K. P. Kording, “How advances in neural recording affect data analysis.,” Nat. Neurosci., vol. 14, no. 2, pp. 139–42, Feb. 2011, doi: 10.1038/nn.2731.

[11] A. Nurmikko, “Challenges for Large-Scale Cortical Interfaces,” Neuron, vol. 108, pp. 259–269, 2020.

[12] D. A. Borton, M. Yin, J. Aceros, and A. Nurmikko, “An implantable wireless neural interface for recording cortical circuit dynamics in moving primates,” J. Neural Eng., vol. 10, no. 2, 2013.

[13] H. Miranda, V. Gilja, C. A. Chestek, K. V. Shenoy, and T. H. Meng, “HermesD: A high-rate long-range wireless transmission system for simultaneous multichannel neural recording applications,” IEEE Trans. Biomed. Circuits Syst., vol. 4, no. 3, pp. 181–191, 2010.

[14] S. Kim, R. A. Normann, R. Harrison, and F. Solzbacher, “Preliminary Study of the Thermal Impact of a Microelectrode Array Implanted in the Brain,” in 2006 International Conference of the IEEE Engineering in Medicine and Biology Society, Aug. 2006, vol. 1, pp. 2986–2989. doi: 10.1109/IEMBS.2006.260307.

[15] T. M. Seese, H. Harasaki, G. M. Saidel, and C. R. Davies, “Characterization of tissue morphology, angiogenesis, and temperature in the adaptive response of muscle tissue to chronic heating.,” Lab. Invest., vol. 78, no. 12, pp. 1553–1562, 1998.

[16] D. A. Schwarz, M. A. Lebedev, T. L. Hanson, D. F. Dimitrov, G. Lehew, J. Meloy, S. Rajangam, V. Subramanian, P. J. Ifft, Z. Li, A. Ramakrishnan, A. Tate, K. Z. Zhuang, and M. A. L. Nicolelis, “Chronic, wireless recordings of large-scale brain activity in freely moving rhesus monkeys,” Nat. Methods, vol. 11, no. 6, pp. 670–676, 2014.

[17] W. Zhao, B. Sun, T. Wu, and Z. Yang, “On-Chip Neural Data Compression Based On Compressed Sensing With Sparse Sensing Matrices,” IEEE Trans. Biomed. Circuits Syst., vol. 12, no. 1, pp. 242–254, 2018.

[18] T. Wu, W. Zhao, E. Keefer, and Z. Yang, “Deep compressive autoencoder for action potential compression in large-scale neural recording,” J. Neural Eng., vol. 15, no. 6, p. 66019, 2018.

[19] Y. Chen, E. Yao, and A. Basu, “A 128-Channel Extreme Learning Machine-Based Neural Decoder for Brain Machine Interfaces,” IEEE Trans. Biomed. Circuits Syst., vol. 10, no. 3, pp. 679–692, 2016.

[20] S. Shaikh, R. So, T. Sibindi, C. D. Libedinsky, and A. Basu, “Towards Intelligent Intra-cortical BMI (i2BMI): Low-power Neuromorphic Decoders that outperform Kalman Filters,” IEEE Trans. Biomed. Circuits Syst., vol. 13, no. 6, pp. 1615–24, 2019.

[21] M. A. Shaeri, U. Shin, et al., “MiBMI: A 192/512-Channel 2.46 mm² Miniaturized Brain-Machine Interface Chipset Enabling 31-Class Brain-to-Text Conversion Through Distinctive Neural Codes,” in Proc. of ISSCC, 2024.

 

Timeline

Start of Registration: Friday, Apr 26, 2024

Start of Project Submission: Wednesday, May 1, 2024

End of Registration / Regular Paper Submission: Friday, May 17, 2024

End of Project Submission: Friday, Aug 2, 2024

Author Notification Date: Friday, Aug 16, 2024

Final Paper Submission Deadline: Friday, Sep 6, 2024

Conference Registration: Friday, Oct 4, 2024

 

Participation

1.     The competition is open to individuals, colleges/universities, scientific research institutions, and enterprises. The maximum number of team members is 3.

2.     To ensure compliance with local regulations during the competition, all participants must comply with the export control laws of their country. In case of any negative impact on the competition due to violation of export control laws, the organizers reserve the right to disqualify the relevant contestants and take legal action.

3.     Participants shall express their interest in this Grand Challenge by filling in the application form (https://forms.gle/XGu4Bo9qq2YEYWWe6). They must also submit an abstract on the conference paper submission website, choosing “Grand Challenge” as the track.

 

Awards

Top-performing teams will be invited to submit papers within a week of their notification on Aug 16. Note that a top place on the leaderboard does not guarantee acceptance of your paper: the paper will still go through a peer review process and may be rejected. The winning entries may, however, present their work at the conference regardless of paper acceptance. They will also receive certificates.

 

Dataset, Challenge and Code Harness

Dataset

The dataset chosen for this challenge consists of recordings from the “Nonhuman Primate Reaching with Multichannel Sensorimotor Cortex Electrophysiology” dataset, available at: https://zenodo.org/records/583331. It contains spikes recorded from two macaque monkeys tasked with making self-paced reaches to targets arranged in an 8×8 grid, without gaps or pre-movement delay intervals. One monkey reached with the right arm (recordings made in the left hemisphere of the brain), while the other reached with the left arm (recordings made in the right hemisphere). For most sessions, only M1 (primary motor cortex) was recorded over 96 channels; for the rest, both M1 and S1 (somatosensory cortex) were recorded over 192 channels. The data from monkey 1 (named Indy) contain recordings from 96 channels, while those from monkey 2 (named Loco) contain recordings from 192 channels. The recordings span 37 sessions over 10 months for monkey 1 and 10 sessions over 1 month for monkey 2.

 

We have carefully chosen six specific recordings from this dataset, three from each monkey, with recording dates spanning the beginning, middle, and end of their respective session periods.

The six recordings chosen are:

          1.     indy_20160622_01

          2.     indy_20160630_01

          3.     indy_20170131_02

          4.     loco_20170131_02

          5.     loco_20170215_02

          6.     loco_20170301_05

 

The recordings are stored in MATLAB’s .mat file format and contain the following variables (n refers to the number of recording channels, u to the number of sorted units, and k to the number of samples):

          -       chan_names (n x 1)

          A cell array of channel identifier strings, e.g. “M1 001”.

          -       cursor_pos (k x 2)

          The position of the cursor in Cartesian coordinates (x, y), expressed in millimeters.

          -       finger_pos (k x 3 or k x 6)

          If the dimension is (k x 3), the position of the working fingertip in Cartesian coordinates (z, -x, -y), as reported by the hand tracker in cm. The cursor position is an affine transformation of the fingertip position using the following matrix:

                [  0     0 ]
                [-10     0 ]
                [  0   -10 ]

          If the dimension is (k x 6), the position of the working fingertip together with the orientation of the sensor, defined as (z, -x, -y, azimuth, elevation, roll).

          -       target_pos (k x 2)

          The position of the target in Cartesian coordinates (x, y), expressed in millimeters. 

          -       t (k x 1)

          The timestamp corresponding to each sample of the cursor_pos, finger_pos and target_pos, expressed in seconds.

          -       spikes (n x u)

          A cell array of spike event vectors. Each element in the cell array is a vector of spike event timestamps, in seconds. The first unit (u1) is the "unsorted" unit, meaning it contains the threshold crossings that remained after the spikes on that channel were sorted into other units (u2, u3, etc.). For some sessions, spikes were sorted into up to 2 units (i.e., u=3); for others, up to 4 units (u=5).

          -       wf (n x u)

          A cell array of spike event waveform “snippets”. Each element in the cell array is a matrix of spike event waveforms. Each waveform corresponds to a timestamp in “spikes”. Waveform samples are in microvolts. 

For this challenge, the variables used are cursor_pos, t, and spikes. These variables are all loaded by the code harness, which is explained in depth in the next section.
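To make the decoding task concrete, here is a minimal sketch, outside the official harness, of how binned spike counts (model input) and cursor velocities (prediction target) can be derived from spikes, t, and cursor_pos once loaded. The helper names bin_spikes and velocity are our own illustration; the harness performs its own equivalent preprocessing internally.

import numpy as np

def bin_spikes(spike_times_per_unit, t, bin_width=0.004):
    # Count each unit's spikes in bins aligned with the timestamps in t.
    # spike_times_per_unit: list of 1-D arrays of spike times (seconds).
    edges = np.append(t, t[-1] + bin_width)
    return np.stack([np.histogram(st, bins=edges)[0]
                     for st in spike_times_per_unit])  # (units, k)

def velocity(cursor_pos, t):
    # Finite-difference velocity (mm/s) of the 2-D cursor trajectory.
    return np.column_stack([np.gradient(cursor_pos[:, 0], t),
                            np.gradient(cursor_pos[:, 1], t)])  # (k, 2)

# Toy demonstration with synthetic stand-in data:
t = np.arange(0.0, 1.0, 0.004)                          # 250 Hz timestamps
cursor_pos = np.cumsum(np.random.randn(len(t), 2), 0)   # random-walk cursor
spikes = [np.sort(np.random.uniform(0, 1, 50)) for _ in range(5)]
X = bin_spikes(spikes, t)    # model input: per-unit spike counts
y = velocity(cursor_pos, t)  # prediction target: (x, y) velocity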

Code Harness

For this challenge, the data download and evaluation metrics are all encapsulated in the NeuroBench algorithm benchmarks, a community-driven project. The GitHub link to the project is: https://github.com/NeuroBench/neurobench

The benchmark comes with a Poetry configuration, which allows the user to maintain a virtual environment consistent with the deployment environment. If you cloned NeuroBench directly from the GitHub link above, go to the root directory of the cloned repository and run the following commands:

 

pip install poetry

poetry install

Note: Poetry requires Python >= 3.9.

You can run end-to-end examples from the poetry environment:

poetry run python neurobench/examples/primate_reaching/benchmark_2d_ann.py

 

The NeuroBench benchmark v1.0 currently contains the following tasks:

          - Keyword Few-shot Class-incremental Learning (FSCIL)

          - Event Camera Object Detection

          - Non-human Primate (NHP) Motor Prediction

          - Chaotic Function Prediction

The Non-human Primate (NHP) Motor Prediction task is the one directly related to this challenge. You can use the harness to download the dataset automatically instead of downloading it manually from the host site.

You can load the dataset with the following commands in Python:

from neurobench.datasets import PrimateReaching

pr_dataset = PrimateReaching(file_path=file_path,
                             filename=filename,
                             num_steps=1,
                             train_ratio=0.5,
                             bin_width=0.004,
                             biological_delay=0,
                             download=True)

where:

          -       file_path (str): The path to the directory storing the MATLAB files.

          -       filename (str): The name of the file to be loaded.

          -       num_steps (int): The number of consecutive timesteps included per sample. In the real-time case, this should be 1.

          -       train_ratio (float): The ratio by which the dataset is split into training/(validation + test) sets. Default is 0.8 (80% of the data for training).

          -       bin_width (float): The size of the bin window in seconds. Default is 0.028 (28 ms).

          -       biological_delay (int): The number of steps of delay applied to the dataset. Default is 0, i.e., no delay applied.

          -       download (bool): If True, downloads the dataset from the internet and puts it in the root directory. If the dataset is already downloaded, it is not downloaded again.
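For example, to load the first Indy session listed above (the directory path here is illustrative; the filename is typically given without the .mat extension, following the repository's example scripts):

file_path = "./data/primate_reaching/"  # wherever you store the .mat files
filename = "indy_20160622_01"           # one of the six challenge recordings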

 

The harness also extracts all of the relevant variables to generate a PyTorch Dataset object in which self.labels contains the x- and y-velocities and self.samples contains the binned spike data. The object also contains indices for the train, validation, and test sets; an example is shown below:

from torch.utils.data import DataLoader, Subset

 

# For the training set

train_set_loader = DataLoader(Subset(pr_dataset, pr_dataset.ind_train), batch_size=256, shuffle=False)

 

# For the validation set

val_set_loader = DataLoader(Subset(pr_dataset, pr_dataset.ind_val), batch_size=256, shuffle=False)

 

# For the test set

test_set_loader = DataLoader(Subset(pr_dataset, pr_dataset.ind_test), batch_size=256, shuffle=False)
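A submission can be any PyTorch module whose input matches the binned spike dimension and whose output is the 2D velocity. Below is a minimal illustrative sketch of our own, not the official baseline (see benchmark_2d_ann.py in the repository for the reference example); the input size of 96 assumes an Indy session, while Loco sessions would use 192:

import torch.nn as nn

class SimpleDecoder(nn.Module):
    # Maps one bin of spike counts to a 2-D velocity estimate.
    def __init__(self, num_channels=96, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_channels, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 2),  # (x, y) velocity
        )

    def forward(self, x):
        # x: (batch, num_channels) binned spike counts
        return self.net(x)

net = SimpleDecoder()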

 

NeuroBench contains the following modules:

          - neurobench.benchmarks

          - neurobench.datasets

          - neurobench.models

          - neurobench.preprocessing

          - neurobench.postprocessing

          - neurobench.examples

 

For a complete tutorial on how the NeuroBench harness is used to load the dataset and run the benchmark metrics, please take a look at the script located in the NeuroBench GitHub repository:

neurobench/neurobench/examples/primate_reaching/benchmark_2d_ann.py

or the jupyter notebook:

neurobench/neurobench/examples/primate_reaching/Primate_Reaching_tutorial.ipynb.
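Following the pattern of those example scripts, a trained model can be wrapped and evaluated roughly as sketched below; the exact metric names and Benchmark arguments should be confirmed against the harness version you install. Here net is the PyTorch model to evaluate (e.g., the SimpleDecoder sketch above) and test_set_loader is the DataLoader defined earlier.

from neurobench.models import TorchModel
from neurobench.benchmarks import Benchmark

# Wrap the trained PyTorch model so the harness can trace its operations.
model = TorchModel(net)

static_metrics = ["footprint"]                    # model memory in bytes
workload_metrics = ["r2", "synaptic_operations"]  # accuracy and compute

# No pre-/post-processors are needed for a plain ANN decoder.
benchmark = Benchmark(model, test_set_loader, [], [],
                      [static_metrics, workload_metrics])
results = benchmark.run()
print(results)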

Challenge Tracks and Evaluation Metrics

The challenge requires participants to predict finger movement velocities from the spikes recorded by the multi-electrode array. The data splits used for training and testing are as described in the code harness section.

Evaluation Metrics

The metrics for this challenge are all based on the implementation in the NeuroBench benchmark harness. They are as follows:

          1.     R2 Score

          The R2 score of the velocity prediction, based on the following equation (a reference sketch in NumPy follows this list):

                              R² = 1 − [ Σ_i (y_i − ŷ_i)² ] / [ Σ_i (y_i − ȳ)² ]

          where y_i is the true velocity, ŷ_i the predicted velocity, and ȳ the mean of the true velocities.

          2.     Memory Footprint

          The memory footprint of the model in bytes.

 

          3.     Synaptic Operations (Dense/MACs/ACs)

          The number of synaptic operations of the model.

          Dense: Total number of operations, including operations involving zeros.

          MACs: Total number of Multiply-ACcumulate operations. Used for ANNs.

          ACs: Total number of ACcumulate operations. Used for SNNs.

 

The metrics can be generated directly from the code harness described above.
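For intuition, the R2 computation can be sketched in a few lines of NumPy. This is a hypothetical re-implementation matching the formula above; the harness's own implementation is authoritative, including how the x and y coordinates are combined.

import numpy as np

def r2_score(y_true, y_pred):
    # R^2 per coordinate (x and y velocity), then averaged.
    # y_true, y_pred: (k, 2) arrays of true and predicted velocities.
    ss_res = np.sum((y_true - y_pred) ** 2, axis=0)
    ss_tot = np.sum((y_true - y_true.mean(axis=0)) ** 2, axis=0)
    return float(np.mean(1.0 - ss_res / ss_tot))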

Note that the harness infrastructure currently supports a subset of PyTorch modules and other features. Challenge submissions are expected to extend the harness to include the components necessary for measuring your solution. For assistance with developing using the harness, please create a GitHub issue and tag the organizers.

 

Challenge Tracks

There are two main tracks:

Track 1: Obtaining the highest accuracy as measured by the R2 metric.
Track 2: Obtaining the best tradeoff between accuracy and solution complexity. 

The tracks do not require separate submissions but are used as criteria for determining the challenge winners. Teams may note in their write-up that certain results are tailored toward a specific track.

One winner will be chosen as the top submission in each track, and a third winner will be the best runner-up across the two tracks.

 

Results Submission

Results should be self-reported by all submitters. The organizers will verify the results of the winning solutions. Any number of results from different models / solutions may be submitted, provided they are sufficiently different (e.g., more different than a hyperparameter sweep).

At the submission deadline, the solution code is expected to be open-sourced for verification by the organizers and cross-validation between submitters. The code should include all data and scripts necessary to reproduce the results (e.g., hyperparameters, random seeds, etc.).

A submission form will be shared with registered teams, which will include formatted areas for filling in metrics and instructions on a brief solution write-up document.

 

Organizers

Biyan Zhou, City University of Hong Kong

Pao-Sheng Vincent Sun, City University of Hong Kong

Jason Yik, Harvard University

Charlotte Frenkel, Delft University of Technology

Vijay Janapa Reddi, Harvard University

Arindam Basu, City University of Hong Kong

 

Useful Links

Nonhuman Primate Reaching with Multichannel Sensorimotor Cortex Electrophysiology:

https://zenodo.org/records/583331

NeuroBench Algorithm Benchmarks:

https://github.com/NeuroBench/neurobench/tree/main

NeuroBench paper:

https://arxiv.org/abs/2304.04640