The JKU-ITS AVDM contains data from 17 participants performing different tasks with various levels of distraction.
The data collection was carried out in accordance with the relevant guidelines and regulations and informed consent was obtained from all participants.
The dataset was collected using the JKU-ITS research vehicle with automated capabilities under different illumination and weather conditions along a secure test route within the


Various modes of transportation traverse our roadways, highlighting the importance of object classification for improving traffic safety. Optical sensors that rely on visual data encounter challenges in adverse weather, where poor visibility hinders target classification. In this project, we use an off-the-shelf millimeter-wave Frequency-Modulated Continuous-Wave (FMCW) radar, the Texas Instruments IWR1843BOOST module, to classify on-road objects. By combining the radar module, the Robot Operating System (ROS), and Python scripts, we extracted a dataset of 3D point clouds.
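FMCW radar modules typically report detections in polar form (range, azimuth, elevation), which must be converted to Cartesian coordinates to form a 3D point cloud. The sketch below shows this conversion; the axis convention (x forward, y left, z up) is an assumption and should be matched to the actual sensor frame.

```python
import numpy as np

def radar_to_cartesian(r, azimuth, elevation):
    """Convert FMCW radar detections (range [m], azimuth/elevation [rad])
    to 3D Cartesian points. Axis convention assumed: x forward, y left, z up."""
    r = np.asarray(r, dtype=float)
    az = np.asarray(azimuth, dtype=float)
    el = np.asarray(elevation, dtype=float)
    x = r * np.cos(el) * np.cos(az)
    y = r * np.cos(el) * np.sin(az)
    z = r * np.sin(el)
    return np.stack([x, y, z], axis=-1)

# A detection 10 m straight ahead maps to (10, 0, 0).
print(radar_to_cartesian([10.0], [0.0], [0.0])[0])
```

In a ROS pipeline, such a conversion would typically run on each incoming radar message before the points are accumulated into a per-frame cloud.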


In international contexts, natural scenes may include text in multiple languages. In particular, a Latin and Arabic scene character image dataset is essential for training models to accurately detect and recognize text regions within real-world images. This is crucial for applications such as text translation, image search, content analysis, and autonomous vehicles that need to interpret text in different languages.


Prostate cancer is a major global health challenge. In this study, we present an approach for the early detection of prostate cancer through the semantic segmentation of adenocarcinoma tissues, specifically focusing on distinguishing Gleason patterns 3 and 4. Our method leverages deep learning techniques to improve diagnostic accuracy and enhance patient treatment strategies. We developed a new dataset comprising 100 digitized whole-slide images of prostate needle core biopsy specimens, publicly available for research purposes.
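Semantic segmentation of Gleason patterns is commonly evaluated with per-class overlap metrics such as the Dice coefficient. The sketch below computes per-class Dice on integer-labelled masks; the label scheme (0 = benign, 1 = Gleason 3, 2 = Gleason 4) is an illustrative assumption, not the dataset's documented encoding.

```python
import numpy as np

def dice_per_class(pred, target, num_classes):
    """Per-class Dice coefficient for integer-labelled segmentation masks.
    Returns NaN for classes absent from both prediction and ground truth."""
    pred = np.asarray(pred)
    target = np.asarray(target)
    scores = []
    for c in range(num_classes):
        p = pred == c
        t = target == c
        denom = p.sum() + t.sum()
        scores.append(2.0 * np.logical_and(p, t).sum() / denom if denom else np.nan)
    return scores

# Hypothetical 2x2 masks; label scheme assumed: 0 benign, 1 Gleason 3, 2 Gleason 4.
pred   = np.array([[0, 1], [1, 2]])
target = np.array([[0, 1], [2, 2]])
print(dice_per_class(pred, target, 3))
```

On whole-slide images the same computation is usually applied tile by tile and aggregated, since full slides rarely fit in memory.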


In contemporary digital environments, a high-resolution synthetic Latin character dataset is valuable across a range of real-world applications in computer vision and artificial intelligence, from image restoration to the implementation of sophisticated recognition systems.


This dataset contains measured data from five sensor modules designed for monitoring the oxygen concentration in the air in a hospital environment, especially in rooms where oxygen therapy may occur. These data are crucial from a safety point of view, as an elevated oxygen concentration increases the risk of fire.
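A typical downstream use of such measurements is flagging samples that exceed an oxygen-enrichment alarm threshold. The sketch below uses 23.5 % O2 by volume, a commonly cited enrichment limit; this value is an assumption here and should be replaced by the threshold mandated by the applicable local safety regulation.

```python
def flag_oxygen_enrichment(readings, threshold=23.5):
    """Return indices of readings (% O2 by volume) above the alarm threshold.
    23.5 % is a commonly used oxygen-enrichment limit (assumed, not from
    the dataset documentation); adjust to local safety regulations."""
    return [i for i, v in enumerate(readings) if v > threshold]

# Hypothetical sensor samples; normal ambient air is about 20.9 % O2.
samples = [20.9, 21.3, 24.1, 23.4, 25.0]
print(flag_oxygen_enrichment(samples))  # indices 2 and 4 exceed 23.5 %
```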


This dataset presents the results of motion detection experiments conducted on five distinct video datasets: bungalows, boats, highway, fall, and pedestrians. The motion detection was performed using two distinct algorithms: the original ViBe algorithm proposed by Barnich et al. (G-ViBe) and the CCTV-optimized ViBe variant known as α-ViBe.
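The core idea of ViBe is a per-pixel bank of background samples: a pixel is classified as background if enough samples lie within a radius of its current value, and background pixels stochastically refresh their model. The sketch below is a deliberately simplified grayscale version (it omits the spatial neighbor-propagation step of the original paper and is not the α-ViBe variant); parameter defaults follow the original publication.

```python
import numpy as np

class MiniViBe:
    """Simplified ViBe-style background subtractor for grayscale frames.
    Defaults follow the original paper: N=20 samples, radius R=20,
    min_matches=2, subsampling factor phi=16. Neighbor propagation omitted."""

    def __init__(self, first_frame, n_samples=20, radius=20,
                 min_matches=2, phi=16, rng=None):
        self.rng = rng or np.random.default_rng(0)
        f = first_frame.astype(np.int16)
        # Initialise every sample from the first frame plus small noise.
        noise = self.rng.integers(-10, 11, size=(n_samples,) + f.shape)
        self.samples = np.clip(f[None] + noise, 0, 255)
        self.radius, self.min_matches, self.phi = radius, min_matches, phi

    def apply(self, frame):
        f = frame.astype(np.int16)
        close = np.abs(self.samples - f[None]) < self.radius
        bg = close.sum(axis=0) >= self.min_matches   # True = background
        # Stochastic update: each background pixel refreshes one random
        # sample with probability 1/phi.
        update = bg & (self.rng.random(f.shape) < 1.0 / self.phi)
        idx = self.rng.integers(0, self.samples.shape[0], size=f.shape)
        ys, xs = np.nonzero(update)
        self.samples[idx[ys, xs], ys, xs] = f[ys, xs]
        return (~bg).astype(np.uint8)                # 1 = foreground

# A static background stays background; a bright 2x2 blob is foreground.
static = np.full((8, 8), 100, dtype=np.uint8)
vibe = MiniViBe(static)
moved = static.copy()
moved[2:4, 2:4] = 250
mask = vibe.apply(moved)
print(mask.sum())
```

The G-ViBe and α-ViBe variants compared in the dataset differ in how such parameters and update rules are tuned for CCTV footage; the details are in the corresponding publications.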


The Marketable Foods (MF) dataset was originally constructed to fine-tune the language and visual network layers and to facilitate backdoor injections in text-to-image generative models. The dataset consists of images from three popular food corporations with prominent, recognisable brands (Coffee = Starbucks, Burger = McDonald's, Drink = Coca-Cola). Samples were collected from the internet and cleaned using a filtering algorithm described in the corresponding paper.


The FMK (Finger Major Knuckle) dataset was created to support identity-verification experiments on the major knuckles of the middle finger and thumb. The images were captured using the rear camera of an OPPO A12 smartphone. The dataset covers 20 different subjects aged between 30 and 67. For each subject there are 3 images of the middle-finger major knuckle and 3 images of the thumb major knuckle. The FMK dataset was constructed for testing and evaluation purposes.


This data package (.zip) includes the motion tracks, errors, and velocities of follower robots in seven simulated experimental cases: (1), (2) the obstacle range is set to 0.8 m and 1.0 m (two groups); (3) a one-sided situation, with the number of follower robots increased to 5; (4) a complex environment with more obstacles; (5) a changed leader-follower formation; (6), (7) two types of formation tracks, circle and straight line, comparing followers 1 and 2.
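The "errors" in such leader-follower experiments are typically the distance between each follower's actual position and its desired position relative to the leader. The sketch below computes this per timestep; the fixed world-frame offset convention is an assumption, as the package's actual error definition (e.g. a leader-frame offset) may differ.

```python
import numpy as np

def formation_error(leader_xy, follower_xy, desired_offset):
    """Per-timestep formation tracking error: distance between the
    follower's actual position and its desired position, taken here as
    leader position plus a fixed world-frame offset (convention assumed)."""
    leader = np.asarray(leader_xy, dtype=float)
    follower = np.asarray(follower_xy, dtype=float)
    desired = leader + np.asarray(desired_offset, dtype=float)
    return np.linalg.norm(follower - desired, axis=-1)

# Hypothetical straight-line run: the follower should trail 1 m behind
# the leader but drifts 0.1 m laterally at every step.
leader = np.array([[t, 0.0] for t in range(5)])
follower = leader + [-1.0, 0.1]
err = formation_error(leader, follower, [-1.0, 0.0])
print(err)  # constant 0.1 m error at every step
```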