The dataset consists of vessel tracking data in the form of AIS observations from the Baltic Sea during the years 2017-2019. The AIS observations have been enriched with vessel metadata such as power, maximum speed, and draft. The data was collected for a master's thesis and has been split into training and validation sets. The AIS observations do not cover all months of the collection period.


With the increasing use of unmanned aerial vehicles (UAVs), large volumes of aerial videos have been produced. It is unrealistic for humans to screen such large volumes of data and understand their contents. Hence, methodological research on the automatic understanding of UAV videos is of paramount importance.


=================  Authors  ===========================

Lichao Mou,

Yuansheng Hua,

Pu Jin,

Xiao Xiang Zhu,


=================  Citation  ===========================

If you use this dataset for your work, please use the following citation:


@article{mou_era,
  title   = {{ERA: A dataset and deep learning benchmark for event recognition in aerial videos}},
  author  = {Mou, L. and Hua, Y. and Jin, P. and Zhu, X. X.},
  journal = {IEEE Geoscience and Remote Sensing Magazine},
  year    = {in press}
}



==================  Notice!  ===========================

This dataset is ONLY released for academic uses. Please do not further distribute the dataset on other public websites.


The dataset was collected around the Valencia Seaport and contains raw AIS data from January, February, and March 2017. It was used in the testing of the Seaport Data Space proposed in the journal article titled "Seaport Data Space for Improving Logistic Maritime Operations".


Dataset of GPS, inertial, and WiFi data collected during road vehicle trips in the district of Porto, Portugal. It contains 40 trip datasets collected with a smartphone fixed to the windshield or dashboard inside the road vehicle. The dataset was collected and used to develop a proof of concept for "MagLand: Magnetic Landmarks for Road Vehicle Localization", an approach that leverages magnetic anomalies created by existing road infrastructure as landmarks to support current vehicle localization systems (e.g., GNSS, dead reckoning).


The dataset is organized in folders by date. Inside each date folder, data is separated into folders by collection app or equipment. Inside each app/equipment folder, data is separated by sensor. For each sensor there is one time series per trip. For details about the trips, including vehicles, smartphones, apps, and data collection dates, please read "README.txt".




Pedestrian detection has never been an easy task for computer vision or the automotive industry. Systems like the advanced driver assistance system (ADAS) rely heavily on captured far infrared (FIR) data to detect pedestrians at nighttime. The recent development of deep learning-based detectors has demonstrated excellent pedestrian detection results in perfect weather conditions. However, it is still unknown how these detectors perform in adverse weather conditions.


The prefix _b marks benchmark recordings; all other recordings are used for training/testing.


Each recording folder contains:

  16BitFrames - 16bit original capture without processing.

  16BitTransformed - 16bit capture with low pass filter applied and scaled to 640x480.

  annotations - annotations and 8bit images made from 16BitTransformed.

  carParams.csv - CAN bus details with corresponding frame IDs.

  weather.txt - weather conditions during the recording.


Annotations are made in YOLO (You only look once) Darknet format.
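
In the YOLO Darknet format, each image has a text file with one line per object: a class index followed by the box center, width, and height, all normalized to [0, 1] relative to the image size. A minimal sketch of converting such a line to pixel coordinates (the 640x480 image size matches the transformed frames above; the example line itself is illustrative, not taken from the dataset):

```python
def parse_yolo_line(line, img_w=640, img_h=480):
    """Convert one 'class cx cy w h' line (normalized) to pixel coordinates."""
    cls, cx, cy, w, h = line.split()
    cx, cy, w, h = (float(v) for v in (cx, cy, w, h))
    # YOLO stores the box center and size as fractions of the image size.
    x_min = (cx - w / 2) * img_w
    y_min = (cy - h / 2) * img_h
    return int(cls), x_min, y_min, w * img_w, h * img_h

print(parse_yolo_line("0 0.5 0.5 0.25 0.5"))
```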


To obtain images without the low pass filter applied, follow these steps:

- Take the 16-bit images from the 16BitFrames folder and open them with an OpenCV call such as: Mat input = imread(<image_full_path>, -1);

- Then use the convertTo function, e.g.: input.convertTo(output, input.depth(), sc, sh);, where output is the transformed Mat, and sc and sh are the scale and shift from the carParams.csv file.

- Finally, scale the image to 640x480.
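
The convertTo step above computes output = input * sc + sh with saturation to the destination depth (here 16-bit unsigned). A pure-Python sketch of that per-pixel operation on a flat list of values (the final 640x480 resize is omitted; sc and sh would come from carParams.csv):

```python
def convert_to_u16(pixels, sc, sh):
    """Scale-and-shift each pixel, saturating to the 16-bit unsigned range,
    mimicking OpenCV's convertTo for a 16-bit destination."""
    out = []
    for p in pixels:
        v = round(p * sc + sh)
        out.append(min(max(v, 0), 65535))  # saturate like OpenCV does
    return out

print(convert_to_u16([0, 1000, 70000], 1.5, 100))
```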


Damage detection on road surfaces has been an active area of research, but most studies so far have focused on detecting the presence of damage. However, in real-world scenarios, road managers need to clearly understand the type of damage and its extent in order to take effective action in advance or to allocate the necessary resources. Moreover, there are currently few uniform and openly available road damage datasets, leading to a lack of a common benchmark for road damage detection.


The file '' is the dataset collected from the GNSS sensor of the "Xinda" autonomous vehicle in the Connected Autonomous Vehicles Test Fields (the CAVs Test Fields), Weishui Campus, Chang'an University.

The file '' contains the simulated faults injected into the healthy data, in '.mat' format, where X_abrupt, X_noise, and X_drift represent the healthy data with abrupt faults, noise, and long-term drift added, respectively.
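
An illustrative sketch (not the authors' code) of how the three fault types named above can be injected into a healthy 1-D signal: an abrupt fault adds a constant bias after some onset time, noise adds random perturbations, and drift adds an error that grows over time. The parameters bias, sigma, slope, and t0 are hypothetical:

```python
import random

def inject_faults(x, t0, bias=5.0, sigma=0.5, slope=0.01):
    """Return three faulty copies of the healthy signal x."""
    # Abrupt fault: constant offset from sample t0 onward.
    x_abrupt = [v + (bias if i >= t0 else 0.0) for i, v in enumerate(x)]
    # Noise fault: additive Gaussian noise on every sample.
    x_noise = [v + random.gauss(0.0, sigma) for v in x]
    # Drift fault: error grows linearly after t0.
    x_drift = [v + slope * max(0, i - t0) for i, v in enumerate(x)]
    return x_abrupt, x_noise, x_drift

healthy = [0.0] * 10
a, n, d = inject_faults(healthy, t0=5)
print(a)  # abrupt bias appears from sample 5 onward
```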


The dataset consists of various open GIS data from the Netherlands, including Population Cores, Neighbourhoods, Land Use, Energy Atlas, OpenStreetMap, Open Charge Map, and charging stations. The data was transformed using 350 m buffers around each charging station. The response variable is the binary popularity of a charging pool.


Use the first n_RFID variables as the response and the rest as predictors.