SDU-Haier-ND (Shandong University-Haier-Noise Detection) is a sound dataset jointly constructed by Shandong University and Haier, containing operating sounds of air-conditioner internal units collected during product quality inspection. We collected and labelled a batch of air-conditioner quality-inspection sounds in real production environments to form this dataset, which includes both normal and abnormal sound samples.
This dataset is used for i) analysing, through signal-processing methods, how process information influences monitoring signals; ii) training and testing models for tool monitoring and tool-wear prediction, especially under cutting conditions with large variations (cutting parameters, material and geometry of cutting tools, and workpiece materials) as well as continuously changing cutting conditions. The dataset includes monitoring signals collected from the machining of sidewalls and closed pockets. Sidewall machining is a cutting process with fixed cutting conditions, whereas closed-pocket machining is a cutting process with continuously varying cutting conditions, because the tool path of a closed pocket includes straight lines, arcs, full cutting, and non-full cutting. Although the nominal cutting parameters are fixed in the arc sections of the tool path, the actual cutting parameters (such as feed and cutting width) change constantly with the cutting geometry.
Han, M., Günay, S.Y., Schirner, G. et al. HANDS: a multimodal dataset for modeling toward human grasp intent inference in prosthetic hands. Intel Serv Robotics 13, 179–185 (2020). https://doi.org/10.1007/s11370-019-00293-8
The dataset file contains 5 folders:
where xxx ranges from 1-413
These are all the eye-view images, which were used only for label collection by the labellers, NOT for training the CNN
Taken by webcam (Logitech Webcam C600, 1600*1200 resolution)
where xxx ranges from 1-413
These are all the raw hand-view images, which were NOT segmented or pre-processed, and were used NEITHER for labelling NOR for training
Taken by GoPro Camera (GoPro Hero Session, resolution of 3648*2736 pixels)
Named _LabellingRules.txt for the rules mapping label indices to the 5 grasps
Named Label_Complete.csv for the complete label information of all training and testing images
Named Test_Label.csv for the label information of all testing images
Named Train_Label.csv for the label information of all training images
Labels were collected from 11 labellers, who gave the labels according to the eye-view images
Named abcd_nn (folders)
where abcd is the name of the object, and nn is the orientation number for that object
These folders contain the raw data collected for each object and each orientation, including images, videos and EMG files
where xxx ranges from 1-413, yy ranges from 1-11
xxx and yy correspond to the different raw hand-view images and labellers, respectively
These are all of the segmented and pre-processed hand-view images, which can be used directly for training
Additional contents of the folders:
image_name: the name of the training image, corresponding to an image in the folder 'TraningImages'
xmin,xmax,ymin,ymax: the bounding-box coordinates of the object location inside the training image
label: the label index corresponding to '_LabellingRules.txt'
Test_Label.csv: 20% images randomly selected from 'Label_Complete.csv'
Train_Label.csv: 80% images randomly selected from 'Label_Complete.csv'
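The 80/20 random split above can be sketched as follows. This is a minimal illustration, not the authors' original script: the column names follow the field description above, and the rows here are synthetic stand-ins for the real contents of Label_Complete.csv.

```python
import csv
import io
import random

# Synthetic stand-in for Label_Complete.csv, using the documented columns.
header = "image_name,xmin,xmax,ymin,ymax,label\n"
body = "".join(f"img_{i:03d}.jpg,10,120,15,130,{i % 5}\n" for i in range(10))
rows = list(csv.DictReader(io.StringIO(header + body)))

random.seed(0)                      # fixed seed so the split is reproducible
random.shuffle(rows)
cut = int(0.8 * len(rows))          # 80% for training
train_rows, test_rows = rows[:cut], rows[cut:]

print(len(train_rows), len(test_rows))  # 8 2
```

With the real dataset, replace the synthetic rows with `csv.DictReader(open("Label_Complete.csv", newline=""))`; the dataset already ships the resulting Train_Label.csv and Test_Label.csv, so re-splitting is only needed for a custom partition.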
inside each abcd_nn folder:
abcd_nn.JPG: the raw hand-view image of object abcd with orientation nn, taken by GoPro
abcd_nn_eye.jpg: the raw eye-view image of object abcd with orientation nn, taken by webcam
abcd_nn.mp4: the raw hand-view grasp video of object abcd with orientation nn, taken by GoPro
abcd_nn_eye.wmv: the raw eye-view grasp video of object abcd with orientation nn, taken by webcam
acceleration, duration, emg, gyroscope, orientation: the EMG data and other activity information of the grasp, collected from the MYO armband
To train a CNN using hand-view images and their corresponding labels:
Load the images directly from the folder 'TraningImages', and load the training and testing labels from 'Train_Label.csv' and 'Test_Label.csv', respectively
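A minimal sketch of that loading step, pairing each image path with its bounding box and label. The CSV text below is a hypothetical stand-in for Train_Label.csv (the real files ship with the dataset); the column names follow the field description above.

```python
import csv
import io
import os

# Hypothetical two-row stand-in for Train_Label.csv.
csv_text = """image_name,xmin,xmax,ymin,ymax,label
img_001.jpg,10,120,15,130,3
img_002.jpg,8,110,12,125,1
"""

# Build (image_path, (xmin, ymin, xmax, ymax), label) triples that a
# CNN data pipeline can consume.
samples = []
for row in csv.DictReader(io.StringIO(csv_text)):
    path = os.path.join("TraningImages", row["image_name"])
    box = tuple(int(row[k]) for k in ("xmin", "ymin", "xmax", "ymax"))
    samples.append((path, box, int(row["label"])))

print(samples[0])
```

In practice the images would then be read from each `path` (e.g. with PIL or OpenCV) and fed, together with the label index defined in '_LabellingRules.txt', into the training loop.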
ETFP (Eye-Tracking and Fixation Points) consists of two eye-tracking datasets: EToCVD (Eye-Tracking of Colour Vision Deficiencies) and ETTO (Eye-Tracking Through Objects). The former is a collection of images, their corresponding eye-movement coordinates, and fixation point maps, obtained from two cohorts: people with and people without CVD (Colour Vision Deficiencies). The latter collects images with just one object lying on a homogeneous background, together with the corresponding eye-movement coordinates and fixation point maps gathered during eye-tracking sessions.
With the rapid growth of the Bangla music industry, a huge volume of Bangla songs is produced every day. An immense number of producers, lyricists, singers, and artists are involved in the production of songs across different genres. Among the many genres of Bangla music, classical, folk, baul, modern music, Rabindra Sangeet, Nazrul Geeti, film music, rock music, and fusion music have gained the highest popularity. Lyricists try to express their feelings and views on any situation or subject through their writing.
The dataset contains separate folders named after Bangla songwriters (authors). Each folder contains Word files holding the raw song lyrics. Download the files and use natural language processing to develop advanced methods.
SDU-Haier-AQD (Shandong University-Haier-Appearance Quality Detection) is an image dataset jointly constructed by Shandong University and Haier, which contains a variety of air-conditioner external-unit images collected during the actual detection process. The Appearance Quality Detection (AQD) dataset consists of 10,449 images, and the samples were collected on an actual industrial air-conditioner production line.
A dataset associated with the paper “Learning-based Sparse Data Reconstruction for Compressed Data Aggregation in IoT Networks” in the IEEE Internet of Things Journal. Five different structured sparse models (SSMs) are considered in the synthesized dataset: random sparse (Sparse Model A), row sparse (Sparse Model B), row-sparse with embedded element-sparse (Sparse Model C), row-sparse plus element-sparse (Sparse Model D), and block-diagonal sparse (block sparse or group sparse).
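The five sparsity structures can be sketched as follows. This is an illustrative construction under assumed shapes and sparsity levels, not the paper's generator; all function names and parameters here are made up for the example.

```python
import numpy as np

def element_sparse(shape, k, rng):
    """Sparse Model A: k nonzeros scattered at random positions."""
    x = np.zeros(shape)
    idx = rng.choice(x.size, size=k, replace=False)
    x.flat[idx] = rng.standard_normal(k)
    return x

def row_sparse(shape, k_rows, rng):
    """Sparse Model B: only k_rows randomly chosen rows are nonzero."""
    x = np.zeros(shape)
    rows = rng.choice(shape[0], size=k_rows, replace=False)
    x[rows] = rng.standard_normal((k_rows, shape[1]))
    return x

def row_then_element_sparse(shape, k_rows, k_per_row, rng):
    """Sparse Model C: row-sparse support, element-sparse inside each active row."""
    x = np.zeros(shape)
    rows = rng.choice(shape[0], size=k_rows, replace=False)
    for r in rows:
        cols = rng.choice(shape[1], size=k_per_row, replace=False)
        x[r, cols] = rng.standard_normal(k_per_row)
    return x

def block_diag_sparse(n_blocks, block, rng):
    """Block-diagonal sparse: nonzeros confined to diagonal blocks."""
    x = np.zeros((n_blocks * block, n_blocks * block))
    for b in range(n_blocks):
        s = b * block
        x[s:s + block, s:s + block] = rng.standard_normal((block, block))
    return x

rng = np.random.default_rng(0)
# Sparse Model D: a row-sparse component plus an element-sparse component.
d = row_sparse((20, 20), 3, rng) + element_sparse((20, 20), 10, rng)
```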
MCData was designed and produced for mouth cavity detection and segmentation. The dataset can be used for training and testing mouth cavity instance segmentation networks. To the best of the authors’ knowledge, this is the first available dataset for detecting and segmenting the main components of the mouth cavity.