
This dataset contains both artificial and real images of bramble flowers. The real images were taken with an Intel RealSense D435 camera inside the West Virginia University greenhouse. All flowers are annotated in YOLO format with a bounding box and class name. The trained weights are also provided and can be used with the included Python script to detect bramble flowers. In addition, the classifier can determine whether a flower's center is visible or hidden, which is useful for precision-pollination projects.
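The YOLO annotation format stores one object per line as a class index followed by a box center and size normalized to the image dimensions. A minimal sketch of converting one such line to pixel coordinates (function name and example values are illustrative, not part of the dataset):

```python
def yolo_to_pixel_bbox(line, img_w, img_h):
    """Convert one YOLO label line "class xc yc w h" (normalized)
    to (class_id, x_min, y_min, x_max, y_max) in pixels."""
    class_id, xc, yc, w, h = line.split()
    # Scale normalized center/size to pixel units.
    xc, yc = float(xc) * img_w, float(yc) * img_h
    w, h = float(w) * img_w, float(h) * img_h
    x_min, y_min = xc - w / 2, yc - h / 2
    return int(class_id), x_min, y_min, x_min + w, y_min + h
```

For example, the label `"0 0.5 0.5 0.2 0.4"` on a 100×100 image corresponds to a box from (40, 30) to (60, 70).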


Lettuce Farm SLAM Dataset (LFSD) is a VSLAM dataset based on RGB and depth images captured by the VegeBot robot in a lettuce farm. The dataset consists of RGB and depth images together with IMU and RTK-GPS sensor data. Detection and tracking of lettuce plants in the images are annotated in the standard Multiple Object Tracking (MOT) format. The dataset aims to accelerate the development of algorithms for localization and mapping in agricultural fields, as well as for crop detection and tracking.
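Standard MOT annotations are comma-separated lines of the form `frame, track_id, bb_left, bb_top, bb_width, bb_height, conf, …` (the fields after the confidence value vary between MOT variants). A minimal sketch of parsing one line, assuming the MOTChallenge layout rather than any LFSD-specific extension:

```python
def parse_mot_line(line):
    """Parse one MOTChallenge-style annotation line into
    (frame, track_id, (left, top, width, height), conf)."""
    fields = line.strip().split(",")
    frame, track_id = int(fields[0]), int(fields[1])
    # Bounding box is given as top-left corner plus width/height in pixels.
    left, top, width, height = (float(v) for v in fields[2:6])
    conf = float(fields[6])
    return frame, track_id, (left, top, width, height), conf
```

Grouping parsed lines by `track_id` then yields the per-plant trajectories used for tracking evaluation.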


Guava fruit production is one of the main sources of economic growth in Asian countries; world guava production in 2019 was 55 million tons. Guava diseases are a major cause of economic loss and reduce both the quantity and quality of the fruit. The original guava fruit disease dataset consists of 38 images of phytophthora, 30 images of root disease, and 34 images of scab, each 650×650×3 pixels.


A new generation of computer vision, namely event-based or neuromorphic vision, provides a new paradigm both for capturing visual data and for the way such data is processed. Event-based vision is a state-of-the-art robot-vision technology, particularly promising for visual navigation tasks in both mobile robots and drones. Because event-based vision relies on a highly novel type of visual sensor, only a few datasets aimed at visual navigation tasks are publicly available.
