3D Point Cloud

As with most AI methods, a 3D deep neural network must be trained to interpret its input data properly. In particular, training a network for monocular 3D point cloud reconstruction requires a large set of annotated, high-quality data, which can be challenging to obtain. Each sample in this dataset therefore pairs an image of a known object with its corresponding 3D point cloud representation. To collect a large number of categorized 3D objects, we use the ShapeNetCore (https://shapenet.org) dataset.

The dataset includes the Stanford Bunny, Elephant, and Pony models, to which noise has been added to suit the proposed algorithm. It also contains LiDAR point clouds of residential areas, likewise subjected to noise addition. Together, these models and point clouds serve to thoroughly test and validate the robustness and effectiveness of the proposed algorithm on noisy data from different sources.
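The description does not specify the noise model used; a common choice for this kind of benchmark is additive zero-mean Gaussian noise. The sketch below illustrates that approach with NumPy; the function name, `sigma` value, and the toy cloud are illustrative assumptions, not part of the dataset documentation.

```python
import numpy as np

def add_gaussian_noise(points, sigma=0.01, seed=0):
    # Perturb each 3D point with zero-mean Gaussian noise of
    # standard deviation `sigma`, in the cloud's own units.
    # (Illustrative sketch; the dataset's actual noise model is unspecified.)
    rng = np.random.default_rng(seed)
    return points + rng.normal(0.0, sigma, size=points.shape)

# Toy stand-in for e.g. the Bunny vertices or a residential LiDAR scan.
cloud = np.zeros((1000, 3))
noisy = add_gaussian_noise(cloud, sigma=0.05)
print(noisy.shape)  # (1000, 3)
```

Other noise models (outlier injection, per-point dropout, range-dependent LiDAR noise) would follow the same pattern of perturbing the (N, 3) coordinate array.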

The IAMCV Dataset was acquired as part of the Interaction of Autonomous and Manually-Controlled Vehicles project, funded by the FWF Austrian Science Fund. It is primarily centred on inter-vehicle interactions and captures a wide range of road scenes in different locations across Germany, including roundabouts, intersections, and highways. These locations were carefully selected to cover a variety of traffic scenarios representative of both urban and rural environments.

To thoroughly investigate the non-overlapping registration problem, we created our own datasets: Pokemon-Zero for zero overlap and Pokemon-Neg for negative overlap. In this section, we describe the process of dataset creation. 
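One way to make the zero- versus negative-overlap distinction concrete is to measure the fraction of points in one cloud that have a close neighbour in the other. The brute-force sketch below is an illustrative assumption (the paper's own overlap definition may differ); `overlap_ratio` and the threshold `tau` are hypothetical names.

```python
import numpy as np

def overlap_ratio(src, dst, tau=0.1):
    # Fraction of source points with a destination point within
    # distance `tau`. Brute-force pairwise distances; fine for
    # small clouds (use a KD-tree at scale).
    d = np.linalg.norm(src[:, None, :] - dst[None, :, :], axis=-1)
    return float(np.mean(d.min(axis=1) <= tau))

rng = np.random.default_rng(0)
a = rng.uniform(0.0, 1.0, size=(200, 3))   # cloud in the unit cube
b = a.copy()                               # identical cloud: full overlap
c = a + np.array([5.0, 0.0, 0.0])          # far-away cloud: zero overlap
print(overlap_ratio(a, b), overlap_ratio(a, c))  # 1.0 0.0
```

Under such a measure, a Pokemon-Zero pair would score at or near 0, while "negative overlap" pairs additionally leave a spatial gap between the two fragments.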

Supplementary material of the article "Precise 2D and 3D fluoroscopic Imaging by using an FMCW Millimeter-Wave Radar".

BIM model and point cloud of a teaching building.

The proposed dataset, termed PC-Urban (Urban Point Cloud), is captured with a 64-channel Ouster LiDAR sensor mounted on an SUV driven through downtown Perth, Western Australia. The dataset comprises over 4.3 billion points captured across 66K sensor frames. The labelled data is organized as registered and raw point cloud frames, where each registered frame aggregates a varying number of consecutive raw frames. We provide 25 class labels in the dataset, covering 23 million points and 5K instances.
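Registering consecutive frames typically means mapping each raw scan into a common world frame using the sensor pose at capture time. The sketch below shows that transformation with a 4x4 homogeneous pose matrix; the pose representation and function name are assumptions for illustration, since the dataset's exact frame format is not specified here.

```python
import numpy as np

def register_frame(points, pose):
    # Map an (N, 3) raw LiDAR frame into the world frame using a
    # 4x4 homogeneous sensor pose (rotation + translation).
    # (Illustrative; PC-Urban's actual pose format is an assumption.)
    homo = np.hstack([points, np.ones((len(points), 1))])
    return (homo @ pose.T)[:, :3]

frame = np.array([[1.0, 0.0, 0.0]])  # one point, 1 m ahead of the sensor
pose = np.eye(4)
pose[:3, 3] = [10.0, 0.0, 0.0]       # sensor translated 10 m along x
world = register_frame(frame, pose)  # point lands at x = 11 m
```

Stacking several transformed frames with `np.vstack` then yields a registered multi-frame point cloud of the kind the dataset describes.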
