We built our database, called SARD, for the task of detecting casualties and persons in search and rescue scenarios in drone images and videos. The actors in the footage simulated exhausted and injured persons as well as "classic" types of movement of people in nature, such as running, walking, standing, sitting, or lying down. Since different types of terrain and backgrounds determine the possible events and scenarios in captured images and videos, the shots include persons on macadam roads, in quarries, in low and high grass, in forest shade, and the like.
Smart speakers and voice-based virtual assistants are core components for the success of the IoT paradigm. Unfortunately, they are vulnerable to various privacy threats that exploit machine learning to analyze the encrypted traffic they generate. To cope with these threats, deep adversarial learning approaches can be used to build black-box countermeasures that alter the network traffic (e.g., via packet padding) and its statistical information.
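As a minimal, dependency-free sketch of the padding idea (illustrative only; not the countermeasure from any specific paper), rounding every packet length up to a fixed-size bucket removes the fine-grained length information that traffic-analysis classifiers typically exploit:

```python
def pad_lengths(lengths, bucket=128):
    """Round each packet length up to the next multiple of `bucket` bytes.

    Length bucketing hides exact packet sizes from an eavesdropper; the
    bucket size trades bandwidth overhead against leaked information.
    """
    return [-(-n // bucket) * bucket for n in lengths]  # ceiling division

# Three packets of different sizes fall into the same 128-byte bucket
# and become indistinguishable by length.
print(pad_lengths([60, 97, 128, 300]))  # [128, 128, 128, 384]
```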
This dataset contains several pcap files generated by the Google Home smart speaker placed under different conditions.
- Mic_on_off_8h contains two pcap files generated by keeping the microphone on (with silence) and off, respectively, for 8 hours.
- Mic_on_off_gquic_8h contains two pcap files generated by keeping the microphone on (with silence) and off, respectively, for 8 hours, excluding all network traffic not belonging to Google's GQUIC protocol.
- Mic_on_off_noise_3d contains three pcap files generated by keeping the microphone on (with silence), off, and on (with noise), respectively, for 3 days.
- Mic_on_off_noise_gquic_3d contains three pcap files generated by keeping the microphone on (with silence), off, and on (with noise), respectively, for 3 days, excluding all network traffic not belonging to Google's GQUIC protocol.
- media_pcap_anonymized contains several pcap files captured after the execution of queries such as "What's the latest news?" or "Play some music" (each file stores the network traffic collected after the execution of one query).
- travel_pcap_anonymized contains several pcap files captured after the execution of queries such as "How is the weather today?" (each file stores the network traffic collected after the execution of one query).
- utilities_pcap_anonymized contains several pcap files captured after the execution of queries such as "What's on my agenda today?" or "What time is it?" (each file stores the network traffic collected after the execution of one query).
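The pcap files can be opened with standard tools (tcpdump, Wireshark, Scapy). As a dependency-free sketch, a classic libpcap file can also be iterated with only the Python standard library (this assumes the classic pcap format with magic 0xa1b2c3d4, not pcapng; the file name in the usage comment is hypothetical):

```python
import struct

def iter_pcap_records(stream):
    """Yield (ts_sec, ts_usec, payload_bytes) for each record in a
    classic libpcap capture. pcapng files are not handled."""
    header = stream.read(24)  # global header: magic, version, tz, snaplen, linktype
    if len(header) < 24:
        raise ValueError("truncated pcap global header")
    magic = struct.unpack("<I", header[:4])[0]
    if magic == 0xA1B2C3D4:
        endian = "<"  # capture written little-endian
    elif magic == 0xD4C3B2A1:
        endian = ">"  # capture written big-endian
    else:
        raise ValueError("not a classic pcap file")
    while True:
        rec = stream.read(16)  # per-record header: ts_sec, ts_usec, incl_len, orig_len
        if len(rec) < 16:
            break
        ts_sec, ts_usec, incl_len, _orig_len = struct.unpack(endian + "IIII", rec)
        yield ts_sec, ts_usec, stream.read(incl_len)

# Usage on one of the dataset files (hypothetical file name):
# with open("mic_on_8h.pcap", "rb") as f:
#     for ts_sec, ts_usec, payload in iter_pcap_records(f):
#         ...
```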
Human neck movement data acquired using a MetaWear CPRO device (accelerometer-based kinematic data). The data were fed to the OpenSim simulation software to extract kinematics and kinetics (muscle and joint forces, accelerations, and positions).
This dataset includes the data used in our two research papers, GNN4TJ and GNN4IP.
Crowds express emotions as a collective individual, which is evident from the sounds that a crowd produces in particular events, e.g., collective booing, laughing or cheering in sports matches, movies, theaters, concerts, political demonstrations, and riots.
Extract the zip files locally and read the readme file.
Instructions for dataset usage are included in the open access paper: Franzoni, V., Biondi, G., Milani, A., Emotional sounds of crowds: spectrogram-based analysis using deep learning (2020) Multimedia Tools and Applications, 79 (47-48), pp. 36063-36075. https://doi.org/10.1007/s11042-020-09428-x
Files are released under the Creative Commons Attribution-ShareAlike 4.0 International License.
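The cited paper analyzes spectrograms of the crowd sounds. As a rough, dependency-free illustration of what a magnitude spectrogram is (a naive windowed DFT; this sketch assumes nothing about the paper's actual preprocessing, and a real pipeline would use an FFT such as numpy.fft.rfft):

```python
import cmath
import math

def spectrogram(signal, frame_len=256, hop=128):
    """Magnitude spectrogram of a 1-D signal: Hann-windowed frames,
    naive DFT per frame (O(n^2) per frame, fine for a demo)."""
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        # Apply a Hann window to reduce spectral leakage at frame edges.
        frame = [signal[start + n] * (0.5 - 0.5 * math.cos(2 * math.pi * n / (frame_len - 1)))
                 for n in range(frame_len)]
        # Magnitudes of the non-negative frequency bins.
        bins = [abs(sum(frame[n] * cmath.exp(-2j * math.pi * k * n / frame_len)
                        for n in range(frame_len)))
                for k in range(frame_len // 2 + 1)]
        frames.append(bins)
    return frames  # frames x frequency-bins
```

Feeding a pure sine at bin k produces a column of frames whose magnitude peaks at bin k, which is the time-frequency picture a CNN classifier consumes.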
This dataset is part of my Master's research on malware detection and classification using the XGBoost library on an Nvidia GPU. The dataset is a collection of 1.55 million samples, each described by 1,000 API-import features extracted from the JSONL format of the EMBER dataset (2017 v2 and 2018). All data are preprocessed and duplicate records have been removed. The dataset contains 800,000 malware and 750,000 "goodware" samples.
* FEATURES *
Column name: sha256
Description: SHA256 hash of the example
Column name: appeared
Description: appeared date of the sample
Type: date (yyyy-mm format)
Column name: label
Description: specifies whether the sample is malware or "goodware"
Type: 0 ("goodware") or 1 (malware)
Column name: GetProcAddress
Description: Most imported function (1st)
Type: 0 (Not imported) or 1 (Imported)
Column name: LookupAccountSidW
Description: Least imported function (1000th)
Type: 0 (Not imported) or 1 (Imported)
The full dataset features header can be downloaded at https://github.com/tvquynh/api_import_dataset/blob/main/full_dataset_fea...
All processing code will be uploaded to https://github.com/tvquynh/api_import_dataset/
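Given the ordered 1,000-column feature header, a sample's API-import features are just binary indicators per ranked function. A minimal encoding sketch (not the dataset's actual processing code; the toy header below uses only the two documented columns):

```python
def encode_imports(sample_imports, feature_functions):
    """One-hot encode which of the ranked API functions a sample imports.

    feature_functions: the ordered column names from the dataset header,
    from GetProcAddress (most imported, 1st) down to LookupAccountSidW
    (least imported, 1000th). Returns a list of 0/1 indicators.
    """
    imported = set(sample_imports)
    return [1 if fn in imported else 0 for fn in feature_functions]

# Toy header with only the two documented columns:
header = ["GetProcAddress", "LookupAccountSidW"]
print(encode_imports(["GetProcAddress", "CreateFileW"], header))  # [1, 0]
```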
Three well-known Border Gateway Protocol (BGP) anomalies, WannaCrypt, Moscow blackout, and Slammer, occurred in May 2017, May 2005, and January 2003, respectively.
The Route Views BGP update messages are publicly available from the University of Oregon Route Views Project and contain:
WannaCrypt, Moscow blackout, and Slammer: http://www.routeviews.org/routeviews/.
Raw data from the route collector route-views2 are organized in folders labeled by the year and month of the collection date.
Complete datasets for WannaCrypt, Moscow blackout, and Slammer are available from the Route Views route collector route-views2 site:
University of Oregon Route Views Project: http://www.routeviews.org/routeviews/
Route Views Collector Map: http://www.routeviews.org/routeviews/index.php/map/
University of Oregon Route Views Archive Project: http://archive.routeviews.org/
MRT format RIBs and UPDATEs (quagga bgpd, from route-views2.oregon-ix.net): http://archive.routeviews.org/bgpdata/
The date of last modification and the size of the datasets are also included.
BGP update messages are originally collected in the Multi-threaded Routing Toolkit (MRT) format.
The "zebra-dump-parser" tool, written in Perl, is used to convert the BGP update messages to ASCII.
The 37 BGP features were then extracted with a C# tool to generate the uploaded datasets (CSV files).
Labels have been added based on the periods when data were collected.
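A sketch of the labeling step: each update-message timestamp is marked anomalous (1) if it falls inside the anomaly window, regular (0) otherwise. The window below is illustrative only; the actual labeled periods are those shipped in the CSV files.

```python
from datetime import datetime

def label_updates(timestamps, anomaly_start, anomaly_end):
    """Label BGP update timestamps: 1 inside the anomaly window
    (inclusive on both ends), 0 otherwise."""
    return [1 if anomaly_start <= t <= anomaly_end else 0 for t in timestamps]

# Illustrative window around the Slammer outbreak (January 2003); the
# exact labeled interval is defined by the dataset, not by this sketch.
window = (datetime(2003, 1, 25), datetime(2003, 1, 26))
ts = [datetime(2003, 1, 24, 12, 0), datetime(2003, 1, 25, 12, 0)]
print(label_updates(ts, *window))  # [0, 1]
```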
The early detection of damaged (partially broken) outdoor insulators in primary distribution systems is of paramount importance for continuous electricity supply and public safety. In this dataset, we present different images and videos for computer vision-based research. The dataset comprises images and videos taken from different sources such as a Drone, a DSLR camera, and a mobile phone camera.
Please see the attached file for the complete description.
This dataset is released with our research paper titled “Scene-graph Augmented Data-driven Risk Assessment of Autonomous Vehicle Decisions” (https://arxiv.org/abs/2009.06435). In this paper, we propose a novel data-driven approach that uses scene-graphs as intermediate representations for modeling the subjective risk of driving maneuvers. Our approach includes a Multi-Relation Graph Convolution Network, a Long Short-Term Memory network, and attention layers.