
Human activity recognition, which involves recognizing human activities from sensor data, has drawn considerable interest from researchers and practitioners with the advent of smart homes, smart cities, and smart systems. Existing studies on activity recognition mostly concentrate on coarse-grained activities such as walking and jumping, while fine-grained activities such as eating and drinking remain understudied because they are more difficult to recognize.


Accurate detection and segmentation of apple trees are crucial in high-throughput phenotyping, further guiding apple tree yield and quality management. A LiDAR and a camera were attached to a UAV to acquire RGB information and coordinate information of a whole orchard. The information was integrated by a simultaneous localization and mapping (SLAM) network to form a dataset of RGB-colored point clouds. The dataset can be used for point-cloud-based apple detection and segmentation methods.


We propose a real-world data set comprising light field images of 19 objects captured with the Lytro Illum camera in outdoor scenes, together with their corresponding 3D point clouds, captured with the 3dMD scanner as ground truth. This data set allows more precise 3D point-cloud-level comparison of algorithms for depth estimation or 3D point cloud reconstruction from light field images.
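Point-cloud-level comparison of a reconstruction against ground truth is commonly done with a set-to-set metric. As a minimal illustration (not the data set's own evaluation protocol), the symmetric Chamfer distance between two small point sets can be sketched with the standard library alone:

```python
import math

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between two point sets.

    a, b: lists of (x, y, z) tuples. Brute-force O(len(a) * len(b)),
    which is fine for small illustrative clouds.
    """
    def one_sided(src, dst):
        # Average distance from each source point to its nearest
        # neighbour in the destination set.
        total = 0.0
        for p in src:
            total += min(math.dist(p, q) for q in dst)
        return total / len(src)
    return one_sided(a, b) + one_sided(b, a)

# Hypothetical example: a ground-truth cloud and an estimate that is
# shifted by 0.1 m along the z-axis.
gt = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
est = [(0.0, 0.0, 0.1), (1.0, 0.0, 0.1)]
print(chamfer_distance(gt, est))
```

In practice the nearest-neighbour search would use a k-d tree rather than a brute-force scan, but the metric itself is the same.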



This data set contains 100,000 PCD (point cloud data) files captured by LiDAR, a 3-D imaging sensor, of vehicles orbiting an indoor field.

Data Acquisition

The indoor field was built as a 1/60 scale model of an intersection, where two vehicles kept moving along pre-fixed tracks independently of each other.

The size of each vehicle was 0.040 m × 0.035 m × 0.240 m.

We captured the indoor field with two LiDAR sensor units commercialized by Velodyne.
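The released files use the PCD format. As a rough illustration (not the dataset's own tooling), the header and points of an ASCII PCD file can be parsed with the Python standard library alone; the field names and layout below follow the PCD v0.7 header convention:

```python
def parse_ascii_pcd(text):
    """Parse the header and points of an ASCII-encoded PCD file.

    Returns (fields, points) where fields is the list of per-point
    field names and points is a list of float tuples. This sketch
    handles DATA ascii only (not binary or binary_compressed).
    """
    lines = [ln for ln in text.splitlines() if ln and not ln.startswith("#")]
    header, points, fields, in_data = {}, [], [], False
    for ln in lines:
        if in_data:
            points.append(tuple(float(v) for v in ln.split()))
            continue
        key, _, rest = ln.partition(" ")
        header[key] = rest
        if key == "FIELDS":
            fields = rest.split()
        if key == "DATA":  # header ends at the DATA line
            in_data = True
    return fields, points

# Hypothetical two-point file in PCD v0.7 ASCII form.
sample = """# .PCD v0.7 - Point Cloud Data file format
VERSION 0.7
FIELDS x y z
SIZE 4 4 4
TYPE F F F
COUNT 1 1 1
WIDTH 2
HEIGHT 1
VIEWPOINT 0 0 0 1 0 0 0
POINTS 2
DATA ascii
0.1 0.2 0.3
1.0 2.0 3.0"""
fields, pts = parse_ascii_pcd(sample)
print(fields, len(pts))
```

For real workloads a dedicated library such as the Point Cloud Library or Open3D would be used instead of hand-rolled parsing.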


We introduce HUMAN4D, a large multimodal 4D dataset that contains a variety of human activities captured simultaneously by a professional marker-based MoCap system, a volumetric capture system, and an audio recording system. By capturing 2 female and 2 male professional actors performing various full-body movements and expressions, HUMAN4D provides a diverse set of motions and poses encountered in single- and multi-person daily, physical, and social activities (jumping, dancing, etc.), along with multi-RGBD (mRGBD), volumetric, and audio data. Despite the existence of multi-view color datasets c


A dataset of rosbags collected during autonomous drone flights inside a warehouse containing stockpiles. The PCD files were created using the reconstruction method proposed in the accompanying article.

Data are still being moved to IEEE DataPort.


This dataset contains aerial images acquired with a medium format digital camera and point clouds collected using an airborne laser scanning (ALS) unit, as well as ground control points and direct georeferencing data. The flights were performed in 2014 over an urban area in Presidente Prudente, State of São Paulo, Brazil, using different flight heights. These flights covered several features of interest for research, including buildings of different sizes and roof materials, roads and vegetation.