Deep Learning

We use industrial cameras to capture images of steel wire ropes under different conditions and use these images to train a U-Net, which performs semantic segmentation of the wire rope images.
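Segmentation quality on such a dataset is commonly scored per image by overlap metrics. A minimal sketch (with tiny synthetic binary masks, not the wire rope data) of the Dice coefficient often used to evaluate U-Net outputs:

```python
import numpy as np

def dice(pred, target):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A|+|B|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum())

# Toy 4x4 masks: prediction overlaps the target on 4 of its 6 pixels.
pred = np.zeros((4, 4), int); pred[1:3, 1:3] = 1
target = np.zeros((4, 4), int); target[1:4, 1:3] = 1
print(round(dice(pred, target), 3))  # → 0.8
```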



The human gait is unique, and so is the impact of a walking human on the propagation of wireless signals within a wireless network. Using appropriate pattern recognition techniques, a person can thus be identified from a time series of Received Signal Strength (RSS) measurements alone. This dataset holds bidirectional RSS measurements recorded within a mesh network of four Bluetooth sensor devices. During the measurements, 14 subjects walked individually through the setup. More than 10,000 recordings are provided.
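One simple pattern recognition approach for such RSS traces is to summarize each trace with a few statistics and match it to the closest subject centroid. A hedged sketch with synthetic traces (not this dataset; the features and the nearest-centroid rule are illustrative choices, not the authors' method):

```python
import numpy as np

def rss_features(trace):
    """Summarize an RSS trace (dBm values) by simple statistics:
    mean level, fluctuation strength, and mean step-to-step change."""
    t = np.asarray(trace, dtype=float)
    return np.array([t.mean(), t.std(), np.abs(np.diff(t)).mean()])

def nearest_subject(trace, centroids):
    """Return the subject id whose feature centroid is closest."""
    f = rss_features(trace)
    return min(centroids, key=lambda s: np.linalg.norm(f - centroids[s]))

# Synthetic example: two "subjects" perturbing the RSS with different depths.
rng = np.random.default_rng(0)
walk_a = -60 + 2.0 * np.sin(np.linspace(0, 6, 100)) + rng.normal(0, 0.3, 100)
walk_b = -60 + 6.0 * np.sin(np.linspace(0, 6, 100)) + rng.normal(0, 0.3, 100)
centroids = {"A": rss_features(walk_a), "B": rss_features(walk_b)}
print(nearest_subject(walk_a + rng.normal(0, 0.3, 100), centroids))  # classifies a noisy A trace
```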


To date, only a limited number of datasets exist for detecting errors in the printing process of a 3D printer. This scarcity has led most researchers to focus instead on fault classification from sensor data.

The dataset is captured and labelled before being fed to the DL model. The images are captured as time-lapse videos, with one 15-second video per printing process; around 50 images are then extracted from each video. In total, 2,297 images spanning four classes are collected.
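Extracting a fixed number of frames from each video reduces to choosing evenly spaced frame indices. A minimal sketch (the 30 fps frame rate is an assumption for illustration, not stated in the description):

```python
def sample_indices(n_frames, n_samples=50):
    """Return n_samples evenly spaced frame indices in [0, n_frames)."""
    if n_frames <= n_samples:
        return list(range(n_frames))
    step = n_frames / n_samples
    return [int(i * step) for i in range(n_samples)]

# A 15-second time-lapse at an assumed 30 fps yields 450 frames.
idx = sample_indices(450, 50)
print(len(idx), idx[:3], idx[-1])  # → 50 [0, 9, 18] 441
```

The same index list can then drive any video reader (e.g. seeking to each index and saving that frame as an image).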


This dataset contains training and testing data for single-shot deflectometry, generated with a deformable mirror. The training set holds a total of 4,000 samples, each with a single input composite pattern (Ic) and four outputs (Dx, Dy, Mx, and My).

The test data contains a pre-trained model, a script for testing, and test images.


Intelligent vehicles and transportation systems need the ability to sense the environment and recognize objects. To benefit from the robustness of radar for sensing, it is critical to know how to use the radar system for effective object recognition. Motivated by this, we propose a novel deep-learning-aided object recognition system for radar that combines the You Only Look Once (YOLO) system with a proposed object recheck system.


Water leakage problems have increased over the last few years, and innovative tools and techniques have appeared to address this widespread problem. The problem that remains unresolved is identifying water leaks at the nearest point: at the household level, the most common and inexpensive devices are still mechanical meters, which cannot detect leaks.


The deployment of unmanned aerial vehicles (UAVs) for logistics and other civil purposes is consistently disrupting airspace security. At the same time, there is a scarcity of robust datasets for developing real-time systems that can counter the incessant deployment of UAVs for criminal or terrorist activities. VisioDECT is a robust vision-based drone dataset for classifying, detecting, and countering unauthorized drone deployment using visual and electro-optical infrared detection technologies.



For academic purposes, we are happy to release our datasets. This dataset supports our research paper 'TOW-IDS: Intrusion Detection System based on Three Overlapped Wavelets in Automotive Ethernet'. If you want to use our dataset in your experiments, please cite our paper.


Ground reaction forces (GRFs) and center of pressure trajectories (CoPs) are required for a comprehensive biomechanical analysis, and they are important outcome measures in sports science and clinical settings. GRFs and CoPs are usually measured with force plates, which are rarely installed on laboratory staircases. We present a one-dimensional convolutional neural network for estimating GRFs and CoPs during stair ascent and descent using multiple levels of kinematics as input.
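The core operation such a model relies on is a one-dimensional convolution sliding over multi-channel kinematic time series. A hedged numpy sketch of that forward pass (channel counts and kernel width are illustrative, not the paper's architecture; as in deep learning frameworks, "convolution" here is cross-correlation):

```python
import numpy as np

def conv1d(x, w, b):
    """Valid 1D convolution, stride 1.
    x: (C_in, T) input channels; w: (C_out, C_in, K) kernels; b: (C_out,).
    Returns (C_out, T-K+1) output feature maps."""
    c_out, c_in, k = w.shape
    t_out = x.shape[1] - k + 1
    y = np.empty((c_out, t_out))
    for o in range(c_out):
        for t in range(t_out):
            # Each output sample is a weighted sum over all input
            # channels within a window of K time steps.
            y[o, t] = np.sum(w[o] * x[:, t:t + k]) + b[o]
    return y

# Illustrative: 6 kinematic channels, 100 time steps, 8 filters of width 5.
rng = np.random.default_rng(1)
x = rng.normal(size=(6, 100))
w = rng.normal(size=(8, 6, 5))
y = conv1d(x, w, np.zeros(8))
print(y.shape)  # → (8, 96)
```

Stacking such layers (with nonlinearities) and regressing the final features onto force and pressure targets gives the general shape of a 1D-CNN estimator.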


This dataset contains pathloss and Time of Arrival (ToA) radio maps generated by the ray-tracing software WinProp from Altair. The dataset enables the development and accuracy testing of pathloss radio map estimation methods and of localization algorithms based on RSS or ToA in realistic urban scenarios.
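One way such radio maps support RSS-based localization is fingerprinting: pick the grid cell whose predicted RSS vector best matches the measurement. A minimal sketch with synthetic log-distance maps (not WinProp output; map size, pathloss model, and base-station placement are illustrative):

```python
import numpy as np

def localize(radio_maps, rss_meas):
    """radio_maps: (N_bs, H, W) predicted RSS per grid cell for each base
    station; rss_meas: (N_bs,) measured RSS. Returns the (row, col) of the
    cell minimizing the squared RSS mismatch summed over base stations."""
    err = np.sum((radio_maps - rss_meas[:, None, None]) ** 2, axis=0)
    return tuple(int(v) for v in np.unravel_index(np.argmin(err), err.shape))

# Synthetic 20x20 maps: RSS falls off with log-distance from each station.
H = W = 20
ys, xs = np.mgrid[0:H, 0:W]
bs = np.array([[0, 0], [0, 19], [19, 10]])  # base-station grid cells
d = np.sqrt((ys - bs[:, 0, None, None]) ** 2 + (xs - bs[:, 1, None, None]) ** 2)
maps = -40.0 - 20.0 * np.log10(d + 1.0)  # toy log-distance pathloss (dBm)
print(localize(maps, maps[:, 7, 12]))  # → (7, 12): recovers the true cell
```

With noisy measurements the same grid search still applies; the minimum simply moves to the best-fitting cell rather than an exact match.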