object detection

The scene text in the PRIVATY-TEXT-IMAGE dataset was annotated in Adobe Photoshop. To keep the annotation process reasonable, preserve the images' aesthetics, and maintain texture consistency around the deleted text areas, we used Photoshop's content-aware fill feature. This feature automatically analyzes the image content surrounding the private text regions and generates matching fill content, so that the edited images look natural and complete.
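The fill step above was done manually in Photoshop; purely as an illustration of the idea, the sketch below approximates a comparable fill programmatically with OpenCV's Telea inpainting (a swapped-in technique, not the Photoshop feature used for the dataset), assuming a hypothetical scene image and a binary mask marking the removed text region.

```python
# Minimal sketch: approximate content-aware filling of a text region with
# OpenCV inpainting. This is NOT the Photoshop workflow used for the dataset;
# the file names and the mask are hypothetical placeholders.
import cv2
import numpy as np

# Load the scene image and a binary mask of the private text region
# (255 inside the text area, 0 elsewhere).
image = cv2.imread("scene.jpg")                             # hypothetical path
mask = cv2.imread("text_mask.png", cv2.IMREAD_GRAYSCALE)    # hypothetical path

# Slightly dilate the mask so the fill blends into the surrounding textures.
mask = cv2.dilate(mask, np.ones((5, 5), np.uint8), iterations=2)

# Telea inpainting fills the masked area from the surrounding content.
filled = cv2.inpaint(image, mask, inpaintRadius=5, flags=cv2.INPAINT_TELEA)
cv2.imwrite("scene_filled.jpg", filled)
```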

Following the successful completion of two collaboration projects on AI, IERE has proposed a third initiative. We are now extending the invitation to participate in the "Artificial Intelligence (AI) Collaboration Project" to all IERE members.

Please confirm your participation by sending the attached Answer Sheet to the IERE Central Office by March 10, 2025.

We look forward to your positive response and active participation in this project.

Scene understanding in a contested battlefield is a very difficult task for detecting and identifying threats. In a complex battlefield, multiple autonomous robots performing multi-domain operations are likely to track the activities of the same threats or objects, leading to inefficient and redundant tasking. To address this problem, we propose a novel and effective object clustering framework that takes into account the position and depth of objects scattered in the scene. This framework enables the robot to focus solely on the objects of interest.
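The abstract does not specify the clustering algorithm itself; the sketch below is only a rough illustration of grouping detections by image position and depth, using DBSCAN from scikit-learn, with the feature scaling, thresholds, and detection values all being assumptions rather than details from the paper.

```python
# Rough illustration (not the authors' framework): group detected objects by
# image position and depth so nearby detections collapse into one cluster.
import numpy as np
from sklearn.cluster import DBSCAN

# Each detection: (x_center, y_center, depth_in_meters) -- hypothetical values.
detections = np.array([
    [120.0, 200.0, 4.8],
    [125.0, 205.0, 5.0],   # likely the same physical object as the one above
    [400.0, 180.0, 12.5],
    [405.0, 188.0, 12.7],  # likely the same physical object as the one above
])

# Scale pixels and meters so neither dominates the distance metric
# (the scaling factors are assumptions, not values from the paper).
features = detections / np.array([100.0, 100.0, 1.0])

labels = DBSCAN(eps=0.5, min_samples=1).fit_predict(features)
for cluster_id in np.unique(labels):
    members = detections[labels == cluster_id]
    print(f"cluster {cluster_id}: {len(members)} detection(s), "
          f"mean depth {members[:, 2].mean():.1f} m")
```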

FLAME2-DT (Forest Fire Detection Dataset with Dual-modality Labels) is a comprehensive multi-modal dataset specifically designed for UAV-based forest fire detection research. The dataset consists of 1,280 paired RGB-thermal infrared images captured by a Mavic 2 Enterprise Advanced UAV system at a resolution of 640×512, with precise pixel-level annotations for both fire and smoke regions. The dataset addresses critical challenges in forest fire detection by providing paired multi-modal data that captures the complementary characteristics of visible-light and thermal imaging.
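As one possible way to consume such paired data, the sketch below outlines a PyTorch dataset that returns an RGB image, its thermal counterpart, and a pixel-level mask; the directory layout, file names, and label encoding are assumptions, not the published structure of FLAME2-DT.

```python
# Sketch of a paired RGB-thermal loader with pixel-level masks.
# Directory layout and file names are hypothetical, not FLAME2-DT's release format.
from pathlib import Path

import numpy as np
import torch
from PIL import Image
from torch.utils.data import Dataset

class PairedFireDataset(Dataset):
    def __init__(self, root: str):
        self.root = Path(root)
        # Assumed layout: root/rgb/xxxx.png, root/thermal/xxxx.png, root/mask/xxxx.png
        self.ids = sorted(p.stem for p in (self.root / "rgb").glob("*.png"))

    def __len__(self) -> int:
        return len(self.ids)

    def __getitem__(self, idx: int):
        name = self.ids[idx]
        rgb = np.array(Image.open(self.root / "rgb" / f"{name}.png").convert("RGB"))
        thermal = np.array(Image.open(self.root / "thermal" / f"{name}.png").convert("L"))
        mask = np.array(Image.open(self.root / "mask" / f"{name}.png"))  # assumed: 0=bg, 1=fire, 2=smoke
        return (
            torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0,   # 3 x 512 x 640
            torch.from_numpy(thermal).unsqueeze(0).float() / 255.0,   # 1 x 512 x 640
            torch.from_numpy(mask).long(),                            # 512 x 640
        )
```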

While deep learning has catalyzed breakthroughs across numerous domains, its broader adoption in clinical settings is inhibited by the costly and time-intensive nature of data acquisition and annotation. To further facilitate medical machine learning, we present an ultrasound dataset of 10,223 Brightness-mode (B-mode) images consisting of sagittal slices of porcine spinal cords (N=25) before and after a contusion injury.
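Since the 10,223 images come from only 25 animals, one common precaution when training on such data is to split at the subject level rather than the image level; the sketch below illustrates this with scikit-learn's GroupShuffleSplit, and the folder and file-naming scheme it assumes are hypothetical, not part of the dataset release.

```python
# Illustration only: split B-mode images by animal so the same pig never
# appears in both the training and test sets. Paths and ID parsing are hypothetical.
from pathlib import Path
from sklearn.model_selection import GroupShuffleSplit

image_paths = sorted(Path("bmode_images").glob("*.png"))       # hypothetical folder
# Assume each file name encodes the animal, e.g. "pig07_pre_0001.png".
groups = [p.stem.split("_")[0] for p in image_paths]
labels = [0 if "pre" in p.stem else 1 for p in image_paths]    # pre- vs post-injury

splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, test_idx = next(splitter.split(image_paths, labels, groups=groups))
print(f"{len(train_idx)} training images, {len(test_idx)} test images, "
      f"{len(set(groups[i] for i in test_idx))} held-out animals")
```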

The removal of surgical tools from the brain is a critical aspect of post-operative care. Surgical sponges such as cotton balls are among the most commonly retained items, as they become visually indistinguishable from the surrounding brain tissue when soaked with blood and can fragment into smaller pieces. Retained sponges can lead to life-threatening immunological responses and invasive reoperation, demonstrating the need for new foreign-body object detection methods.
