Deep Learning

We propose a coupled physics-driven and data-driven algorithm to improve the standard deep learning workflow. To evaluate the proposed method, a 2.5D geological model including dip, faults, and anisotropic formations is considered. Compared with a classical residual network (ResNet), the proposed physics-driven method shows a significant improvement in resistivity inversion accuracy.



This project builds a length-versatile and noise-robust LoRa radio frequency fingerprint identification (RFFI) system. The LoRa signals are collected from 10 commercial off-the-shelf LoRa devices, with the spreading factor (SF) set to 7, 8, and 9. The packet preamble part and device labels are provided.
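The spreading factor is what makes the signals "length-versatile": each SF increment doubles the LoRa symbol duration, T_sym = 2^SF / BW, so preambles recorded at SF 7, 8, and 9 have different lengths. A minimal sketch of this relationship (the 125 kHz bandwidth is an assumption; the description above does not state it):

```python
# Illustrative sketch: LoRa symbol duration versus spreading factor.
# The 125 kHz bandwidth is an assumed, common LoRa setting.

def lora_symbol_duration(sf: int, bw_hz: float = 125_000.0) -> float:
    """Return the LoRa symbol duration in seconds: T_sym = 2^SF / BW."""
    return (2 ** sf) / bw_hz

for sf in (7, 8, 9):
    print(f"SF{sf}: {lora_symbol_duration(sf) * 1000:.3f} ms")
# SF7: 1.024 ms, SF8: 2.048 ms, SF9: 4.096 ms
```

Each step from SF 7 to SF 9 doubles the symbol (and hence preamble) duration, which is why an RFFI model trained on one SF must cope with inputs of different lengths.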


We use industrial cameras to capture images of steel wire ropes under different conditions and use these images to train a U-Net network, which performs semantic segmentation of the wire-rope images.
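The core of such a workflow is an encoder-decoder with skip connections that maps an image to a per-pixel mask. A minimal sketch, assuming PyTorch; the layer widths, depth, and single grayscale input channel are illustrative, not the authors' exact network:

```python
# Tiny U-Net-style encoder-decoder for binary segmentation (illustrative only).
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU())
        self.down = nn.MaxPool2d(2)
        self.mid = nn.Sequential(nn.Conv2d(8, 16, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(16, 8, 2, stride=2)
        # After concatenating the skip connection, channels double (8 + 8 = 16).
        self.dec = nn.Sequential(nn.Conv2d(16, 8, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(8, 1, 1))  # 1-channel mask logits

    def forward(self, x):
        e = self.enc(x)                      # encoder features (skip source)
        m = self.mid(self.down(e))           # bottleneck at half resolution
        u = self.up(m)                       # upsample back to input size
        return self.dec(torch.cat([e, u], dim=1))  # skip connection + decode

x = torch.randn(1, 1, 64, 64)                # one grayscale wire-rope image
mask_logits = TinyUNet()(x)
print(mask_logits.shape)                     # torch.Size([1, 1, 64, 64])
```

The output has the same spatial size as the input, so each pixel gets a rope/background logit; a real model would stack several such down/up stages.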



The human gait is unique, and so is the impact of a walking human on the propagation of wireless signals within a wireless network. Using appropriate pattern recognition techniques, a person can thus be identified just from a time series of Received Signal Strength (RSS) measurements. This dataset holds bidirectional RSS measurements recorded within a mesh network of four Bluetooth sensor devices. During the measurements, 14 subjects walked individually through the setup. In total, more than 10,000 recordings are provided.
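A common first step for such pattern recognition is to cut the RSS time series into fixed-length windows and compute simple statistical features per window. A minimal sketch (not the authors' pipeline; window size, step, and feature choice are illustrative):

```python
# Illustrative sketch: windowing an RSS time series into feature vectors
# that a downstream gait classifier could consume.
from statistics import mean, pstdev

def windows(rss, size, step):
    """Slide a fixed-size window over the RSS series."""
    return [rss[i:i + size] for i in range(0, len(rss) - size + 1, step)]

def features(window):
    """Per-window features: mean level, variability, and peak-to-peak swing."""
    return (mean(window), pstdev(window), max(window) - min(window))

rss = [-62, -61, -70, -55, -64, -60, -72, -58]   # toy RSS values in dBm
feats = [features(w) for w in windows(rss, size=4, step=2)]
print(len(feats))  # 3 overlapping windows of 4 samples each
```

The variability features matter here: a walking body modulates the channel, so the fluctuation within a window, not the absolute RSS level, carries the gait signature.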


Only a limited number of datasets currently exist for detecting errors in the 3D-printing process. This scarcity has led most researchers to focus on sensor-data fault classification instead.

The dataset is captured and labelled before being fed to the DL model. The images are captured in time-lapse video mode, producing a 15-second video for each printing process. Around 50 images are then extracted from each time-lapse video. In total, 2,297 images spanning four classes are collected.
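Extracting a fixed number of images per video amounts to picking evenly spaced frame indices. A minimal sketch of that sampling step (the frame total is illustrative, and assumes the video length is known in advance):

```python
# Sketch: choose ~50 evenly spaced frame indices from a time-lapse video.
# A video decoder (e.g. OpenCV) would then seek to these indices and save frames.

def sample_indices(total_frames: int, n_samples: int) -> list[int]:
    """Pick n_samples evenly spaced frame indices from a video of total_frames."""
    if total_frames <= n_samples:
        return list(range(total_frames))
    step = total_frames / n_samples
    return [int(i * step) for i in range(n_samples)]

idx = sample_indices(total_frames=450, n_samples=50)
print(len(idx), idx[0], idx[-1])  # 50 0 441
```

Even spacing keeps the extracted images spread across the whole print rather than clustered at the start, which matters when failures can occur at any layer.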


This data contains training and testing data for single-shot deflectometry generated by a deformable mirror. The training set contains a total of 4,000 samples, each with a single input composite pattern (Ic) and four outputs (Dx, Dy, Mx, and My).

The test data contains a pre-trained model, a testing script, and test images.


To enable intelligent vehicles and transportation systems, vehicles and related systems need the ability to sense their environment and recognize objects. To benefit from the robustness of radar for sensing, knowing how to use the radar system for effective object recognition is critical. Motivated by this, in this paper we propose a novel deep-learning-aided object recognition system for radar by combining the You Only Look Once (YOLO) detector with a proposed object recheck system.


Water leakage problems have increased over the last few years, and innovative tools and techniques have appeared to address this widespread problem. The still-unresolved challenge is identifying water leaks at the point closest to their source; at the household level, the most common and inexpensive devices are still mechanical meters, which cannot detect leaks.


This dataset was collected to provide researchers with access to hundreds of images for efficient classification of plant attributes and for multi-instance plant localisation and detection. There are two folders, Side View and Top View. Each folder includes image files in .jpg format and label files in .txt format. Images of 30 plants of three species (Petunia, Pansy, and Calendula), grown in five hydroponic systems, were collected over 66 days for the purpose of image collection and analysis.


The deployment of unmanned aerial vehicles (UAVs) for logistics and other civil purposes is consistently disrupting airspace security. However, robust datasets are scarce for developing real-time systems that can counter the incessant use of UAVs in criminal or terrorist activities. VisioDECT is a robust vision-based drone dataset for classifying, detecting, and countering unauthorized drone deployment using visual and electro-optical infrared detection technologies.