In this dataset, we performed a seven-session motor imagery (MI) based BCI experiment without feedback training on 20 healthy subjects. The MI tasks were left hand, right hand, both feet, and an idle task.
20 healthy subjects (11 males; mean age 23.2 ± 1.47 years; all right-handed) participated in this study. Subjects were asked to complete seven sessions within two weeks. Each session lasted around 40 minutes and was organized into 6 runs, with a short break allowed between runs. During each run, subjects performed 40 trials (4 different MI tasks, 10 trials per task, presented in random order), each trial lasting 9 s. An arrow cue indicated which task to perform: the left arrow corresponded to MI of the left hand, the right arrow to MI of the right hand, the down arrow to MI of both feet, and the up arrow to the idle task.
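The paradigm above can be summarized numerically. A minimal sketch follows; the cue-to-task mapping and label names are descriptive labels of ours, not identifiers taken from the dataset files:

```python
# Illustrative summary of the experimental paradigm described above.
# Task names are descriptive, not dataset identifiers.
CUE_TO_TASK = {
    "left": "left_hand",    # left arrow  -> MI of the left hand
    "right": "right_hand",  # right arrow -> MI of the right hand
    "down": "both_feet",    # down arrow  -> MI of both feet
    "up": "idle",           # up arrow    -> idle task
}

TRIALS_PER_RUN = 40     # 4 tasks x 10 trials, random order
RUNS_PER_SESSION = 6
SESSIONS = 7
TRIAL_DURATION_S = 9

trials_per_session = TRIALS_PER_RUN * RUNS_PER_SESSION  # 240 trials
trials_per_subject = trials_per_session * SESSIONS      # 1680 trials
```

So each subject contributes 240 trials per session and 1680 trials over the full experiment.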
We present two synthetic datasets for classification of Morse code symbols, intended for supervised machine learning, in particular neural networks. The linked GitHub page provides algorithms for generating a family of such datasets of varying difficulty. The datasets are spatially one-dimensional and have a small number of input features, giving a high density of input information content. This makes them particularly challenging when implementing network complexity reduction methods.
First, unzip the provided file 'morse_datasets.zip' to obtain two datasets, 'baseline.npz' and 'difficult.npz'. These are two members of a family of synthetic datasets that can be generated with the provided script 'generate_morse_dataset.py'. For instructions on using the script, see its docstring and/or the linked GitHub page.
To load data from a dataset, first download 'load_data.txt' and change its extension to '.py'.
Then call the function 'load_data', setting its 'filename' argument to the path of the chosen dataset, for example './baseline.npz'.
This returns six variables: xtr, ytr, xva, yva, xte, yte. These are the data (x) and labels (y) for the training (tr), validation (va), and test (te) splits. The labels (y) are in one-hot format.
Then you can run your favorite machine learning / classification algorithm on the data.
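As a self-contained illustration of this workflow, the sketch below fabricates a tiny stand-in archive with the same split names, then loads it and converts the one-hot labels to integer class indices. The internal key names of the real '.npz' files are an assumption here, so use the provided 'load_data' function for the actual datasets:

```python
import numpy as np

# Fabricate a tiny stand-in for 'baseline.npz' so the loading pattern
# can be shown without the real file. The key names ('xtr', 'ytr', ...)
# mirror the split names described above but are an assumption about
# the archive's internal layout.
rng = np.random.default_rng(0)
np.savez(
    "demo_morse.npz",
    xtr=rng.normal(size=(8, 64)), ytr=np.eye(4)[rng.integers(0, 4, 8)],
    xva=rng.normal(size=(4, 64)), yva=np.eye(4)[rng.integers(0, 4, 4)],
    xte=rng.normal(size=(4, 64)), yte=np.eye(4)[rng.integers(0, 4, 4)],
)

data = np.load("demo_morse.npz")
xtr, ytr = data["xtr"], data["ytr"]

# The labels are one-hot; many classifiers expect integer class indices,
# which argmax over the class axis recovers.
ytr_idx = np.argmax(ytr, axis=1)
print(xtr.shape, ytr.shape, ytr_idx.shape)
```

From here, xtr and ytr_idx can be passed to any standard classifier, with xva/yva used for model selection and xte/yte held out for the final evaluation.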
Website fingerprinting attacks, which use statistical analysis on network traffic to compromise user privacy, have been shown to be effective even if the traffic is sent over anonymity-preserving networks such as Tor. The classical attack model used to evaluate website fingerprinting attacks assumes an on-path adversary, who can observe all traffic traveling between the user's computer and the secure network.
Untar the data. Each directory corresponds to a different data-collection setting:
linux_chrome: data collected on Linux using the Chrome browser.
linux_ff59: data collected on Linux using the Firefox browser.
linux_tor: data collected on Linux using the Tor Browser.
win_chrome: data collected on Windows 10 using the Chrome browser.
win_ff59: data collected on Windows 10 using the Firefox browser.
mac_safari: data collected on a Mac machine using the Safari browser.
linux_tor_counter: data collected on Linux using the Tor Browser while running countermeasures.
CW: closed-world setting; every website appears several times.
OW: open-world setting; every website appears only once (or very few times).
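A minimal sketch for enumerating these collection settings after untarring is shown below; the root path 'data' and the per-directory file layout are assumptions, while the directory names come from the list above:

```python
import os

# Directory names as listed above, one per collection setting.
SETTINGS = [
    "linux_chrome", "linux_ff59", "linux_tor", "win_chrome",
    "win_ff59", "mac_safari", "linux_tor_counter",
]

def list_traces(root="data"):
    """Yield (setting, path) pairs for every trace file found under root.

    Directories that are missing (e.g. an untarred subset) are skipped.
    """
    for setting in SETTINGS:
        setting_dir = os.path.join(root, setting)
        if not os.path.isdir(setting_dir):
            continue
        for name in sorted(os.listdir(setting_dir)):
            yield setting, os.path.join(setting_dir, name)
```

This makes it easy to, for example, train per-setting classifiers or compare closed-world (CW) and open-world (OW) subsets within each setting.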
Use the following online Colab script to run the test set on the classifiers: