The dataset comprises image files of size 20 x 20 pixels for various types of metals and one non-metal. The collected data has been augmented, scaled, and modified to form a training set. It can be used to detect and identify an object's type based on the material visible in the image. Both a training set and a test set can be generated from these image files.
The dataset is contained in a zip file named object_type_material_type.zip. Download and unzip it.
# On Linux: unzip object_type_material_type.zip
# On Windows: simply extract the archive
The folder contains five classes:
1. copper 2. iron 3. nickel 4. plastic 5. silver
These are stored as sub-directories under the main directory (object_type_material_type). Each sub-directory contains 100 image files in JPG format, each 20 x 20 pixels.
Of these five classes, four are metals (copper, iron, nickel, silver) and one is a non-metal (plastic). These image files can be used as both a training set and a test set.
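A per-class train/test split over the directory layout described above can be sketched as follows. The 80/20 ratio and the simulated filenames are illustrative only; with the real data you would list the actual .jpg files in each sub-directory.

```python
import random

# Hypothetical layout: object_type_material_type/<class>/<image>.jpg;
# the 100 filenames per class are simulated here -- with the real data,
# list them with pathlib.Path(...).glob('*.jpg') instead.
classes = ["copper", "iron", "nickel", "plastic", "silver"]
files = {c: [f"object_type_material_type/{c}/{i:03d}.jpg" for i in range(100)]
         for c in classes}

random.seed(0)  # reproducible split
train, test = [], []
for c in classes:
    paths = files[c][:]
    random.shuffle(paths)
    train += [(p, c) for p in paths[:80]]  # 80 images per class for training
    test += [(p, c) for p in paths[80:]]   # 20 images per class for testing

print(len(train), len(test))  # 400 100
```

Splitting within each class keeps the five classes balanced in both sets.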
We introduce a novel dataset containing a total of 61 distinct HEAs (household electrical appliances). The appliances (e.g. fans, fridges, washers, etc.) are of different kinds, ages, brands, and power levels. They have been recorded in steady-state conditions on a French 50 Hz electrical grid. The measurement setup consists of an AC current probe (E3N Chauvin Arnoux) with a 10 mV/A sensitivity and a differential voltage probe with a 1/100 attenuation.
CUPSNBOTTLES is an object data set, recorded by a mobile service robot. There are 10 object classes, each with a varying number of samples. Additionally, there is a clutter class, containing samples where the object detector failed.
Download and extract the ZIP file containing all files. Python code is available (under 'scripts') to easily load the dataset. Other programming languages can also handle the .jpg, .hdf, and .csv files for easy access. For convenient access with Python, a pickle dump file has been added; it contains no extra information compared to the .csv file.
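Reading the .csv with Python's standard library might look like the following sketch. The column names here are made up for illustration, so check the actual header in the extracted file:

```python
import csv
import io

# Hypothetical two-row sample standing in for the dataset's .csv file;
# the real column names may differ -- inspect the file header after unzipping.
sample = io.StringIO(
    "filename,label\n"
    "img_0001.jpg,cup_red\n"
    "img_0002.jpg,clutter\n"
)
rows = list(csv.DictReader(sample))
labels = [r["label"] for r in rows]
print(labels)  # ['cup_red', 'clutter']
```

With the real file, replace the StringIO sample with `open('...csv', newline='')`.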
The development of electronic noses (e-noses) for rapid, simple, and low-cost meat assessment has drawn growing research interest in recent years. Hence, we provide time-series datasets recorded from an e-nose during a beef quality monitoring experiment. The dataset originates from 12 types of beef cuts, including round (shank), top sirloin, tenderloin, flap meat (flank), striploin (shortloin), brisket, clod/chuck, skirt meat (plate), inside/outside, rib eye, shin, and fat.
This dataset serves as a benchmark for machines to automatically recognize handwritten Assamese digits (numerals) by extracting useful features from their structure. The Assamese language comprises a total of 10 digits, from 0 to 9. We collected a total of 516 handwritten digits from 52 native Assamese speakers irrespective of their age (12-86 years), gender, educational background, etc. The digits are captured in .jpeg format using a paint mobile application developed by us, which automatically saves the images to the phone's internal storage.
The recent interest in using deep learning for seismic interpretation tasks, such as facies classification, has been facing a significant obstacle, namely the absence of large publicly available annotated datasets for training and testing models. As a result, researchers have often resorted to annotating their own training and testing data. However, different researchers may annotate different classes, or use different train and test splits.
# Basic instructions for usage
Make sure you have the following folder structure in the data directory after you unzip the file:
│ ├── test1_labels.npy
│ ├── test1_seismic.npy
│ ├── test2_labels.npy
│ └── test2_seismic.npy
The train and test data are in NumPy .npy format, ideally suited for Python. You can open these files in Python as follows:
import numpy as np
train_seismic = np.load('data/train/train_seismic.npy')
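The same np.load call works for the label volumes, which pair with the seismic volumes element for element. Below is a minimal sketch using tiny synthetic arrays; the 'train_labels.npy' filename is an assumption inferred from the test1_labels.npy / test1_seismic.npy naming pattern.

```python
import numpy as np

# Sketch with small synthetic volumes; with the real data, load
# 'data/train/train_seismic.npy' and (assumed name) 'data/train/train_labels.npy'.
seismic = np.random.rand(4, 5, 6).astype(np.float32)
labels = np.random.randint(0, 6, size=(4, 5, 6))  # 6 facies classes assumed
np.save('demo_seismic.npy', seismic)
np.save('demo_labels.npy', labels)

seismic_loaded = np.load('demo_seismic.npy')
labels_loaded = np.load('demo_labels.npy')
assert seismic_loaded.shape == labels_loaded.shape  # one label per voxel
```

Checking that the two volumes share a shape is a quick sanity test after loading.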
Make sure the testing data is used only once, after all models are trained. Using the test set multiple times effectively turns it into a validation set.
In addition to the processed data volumes, we also provide the fault planes and the raw horizons that were used to generate the data volumes before splitting into training and testing.
1- Netherlands Offshore F3 block. [Online]. Available: https://opendtect.org/osr/pmwiki.php/Main/NetherlandsOffshoreF3BlockComplete4GB
2- Alaudah, Yazeed, et al. "A machine learning benchmark for facies classification." Interpretation 7.3 (2019): 1-51.
Network traffic analysis, i.e. the umbrella of procedures for distilling information from network traffic, is the enabler for highly valuable profiling information, besides being the workhorse for several key network management tasks. While its nature is being revolutionized by the rising share of traffic generated by mobile and hand-held devices, existing design solutions are mainly evaluated on private traffic traces, and only a few public datasets are available, clearly limiting repeatability and further advances on the topic.
MIRAGE-2019 is a human-generated dataset for mobile traffic analysis with associated ground-truth, having the goal of advancing the state-of-the-art in mobile app traffic analysis. MIRAGE-2019 takes into consideration the traffic generated by more than 280 experimenters using 40 mobile apps via 3 devices.
A sampled version of the dataset (one app per category) is readily downloadable, whereas the complete version is available on request.
APP LIST reports the details on the apps contained in the two versions of the dataset.
If you are using the MIRAGE-2019 human-generated dataset for scientific papers, academic lectures, project reports, or technical documents, please help us increase its impact by citing the following reference:
Giuseppe Aceto, Domenico Ciuonzo, Antonio Montieri, Valerio Persico and Antonio Pescapè, "MIRAGE: Mobile-app Traffic Capture and Ground-truth Creation", 4th IEEE International Conference on Computing, Communications and Security (ICCCS 2019), October 2019, Rome, Italy.
We present two synthetic datasets on classification of Morse code symbols for supervised machine learning problems, in particular, neural networks. The linked Github page has algorithms for generating a family of such datasets of varying difficulty. The datasets are spatially one-dimensional and have a small number of input features, leading to high density of input information content. This makes them particularly challenging when implementing network complexity reduction methods.
First unzip the given file 'morse_datasets.zip' to get two datasets - 'baseline.npz' and 'difficult.npz'. These are 2 out of a family of synthetic datasets that can be generated using the given script 'generate_morse_dataset.py'. For instructions on using the script, see the docstring and/or the linked Github page.
To load data from a dataset, first download 'load_data.txt' and change its extension to '.py'.
Then run the function 'load_data', setting the argument 'filename' to the path of the chosen dataset, for example './baseline.npz'.
This will output six variables: xtr, ytr, xva, yva, xte, yte. These are the data (x) and labels (y) for the training (tr), validation (va), and test (te) splits. The y data is in one-hot format.
Then you can run your favorite machine learning / classification algorithm on the data.
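Since the y arrays are one-hot, classifiers that expect integer class labels need a conversion first; np.argmax over the class axis does this. A minimal sketch with made-up labels standing in for ytr:

```python
import numpy as np

# Hypothetical one-hot labels shaped like ytr (3 samples, 4 classes);
# np.argmax recovers the integer class index of each row.
y_onehot = np.array([[0, 1, 0, 0],
                     [1, 0, 0, 0],
                     [0, 0, 0, 1]])
y_idx = np.argmax(y_onehot, axis=1)
print(y_idx.tolist())  # [1, 0, 3]
```

The same call applies unchanged to yva and yte.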