The dataset includes the sweep scanning paths and measured points in two experiments.

 


Cloud forensics differs from digital forensics because of the architectural implementation of the cloud. In an Infrastructure as a Service (IaaS) cloud model, virtual machines (VMs) deployed over the cloud can be used by adversaries to carry out a cyber-attack, using the cloud as an environment.

Instructions: 

About the dataset
The generated dataset is a KVM monitoring dataset; however, we propose a novel feature set. The methodology used to generate these novel features is under publication and will be updated once the research article is published. This is one portion of the dataset, in which the features can be used to train ML models for evidence detection.

The second portion of the dataset is published as a standard dataset on IEEE Dataport under the name Memory Dumps of Virtual Machines for Cloud Forensics.

How to use
These two datasets can be used together, as they are the outcome of the same experiment: the memory dumps carry timestamp, VMID, and UUID features that link them to the monitoring records. Alternatively, this dataset can be used on its own to study the impact of an attack (origin) on the rate of resource utilization of a VM monitored at the hypervisor.
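As a hedged sketch of the first usage, the two portions can be aligned on VM identity and nearest timestamp. The column names follow the feature table (LAST_POLL, VMID, UUID); the function name and the dump-file column are illustrative assumptions, not part of the published datasets.

```python
# Illustrative sketch: matching hypervisor monitoring rows to memory dumps
# by VM identity and nearest timestamp. Column names follow the feature
# table (LAST_POLL, VMID, UUID); "dump_file" is a hypothetical column.
import pandas as pd

def match_dumps(monitor: pd.DataFrame, dumps: pd.DataFrame) -> pd.DataFrame:
    """Attach the nearest-in-time memory dump to each monitoring sample
    of the same VM (matched on VMID and UUID)."""
    monitor = monitor.sort_values("LAST_POLL")
    dumps = dumps.sort_values("LAST_POLL")
    return pd.merge_asof(
        monitor, dumps,
        on="LAST_POLL",        # epoch timestamps in both portions
        by=["VMID", "UUID"],   # only join dumps belonging to the same VM
        direction="nearest",
    )
```

`merge_asof` with `direction="nearest"` tolerates the fact that dumps and polls are not taken at exactly the same instant; a tolerance window could be added if exact alignment matters.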

 

| Sr No | Category | Feature | Description |
|-------|----------|---------|-------------|
| 1 | Meta-data | LAST_POLL | Epoch timestamp |
| 2 | Meta-data | VMID | The ID of the VM |
| 3 | Meta-data | UUID | Unique identifier of the domain |
| 4 | Meta-data | dom | Domain name |
| 5 | Network | rxbytes_slope | Rate of received bytes from the network |
| 6 | Network | rxpackets_slope | Rate of received packets from the network |
| 7 | Network | rxerrors_slope | Rate of the number of receive errors from the network |
| 8 | Network | rxdrops_slope | Rate of the number of received packets dropped from the network |
| 9 | Network | txbytes_slope | Rate of transmitted bytes from the network |
| 10 | Network | txpackets_slope | Rate of transmitted packets from the network |
| 11 | Network | txerrors_slope | Rate of the number of transmission errors from the network |
| 12 | Network | txdrops_slope | Rate of the number of transmitted packets dropped from the network |
| 13 | Memory | timecpu_slope | Rate of time spent by vCPU threads executing guest code |
| 14 | Memory | timesys_slope | Rate of time spent in kernel space |
| 15 | Memory | timeusr_slope | Rate of time spent in user space |
| 16 | Memory | state_slope | Rate of running state |
| 17 | Memory | memmax_slope | Rate of maximum memory in kilobytes |
| 18 | Memory | mem_slope | Rate of memory used in kilobytes |
| 19 | Memory | cpus_slope | Rate of change of the number of virtual CPUs |
| 20 | Memory | cputime_slope | Rate of CPU time used in nanoseconds |
| 21 | Memory | memactual_slope | Rate of the current balloon value (in KiB) |
| 22 | Memory | memswap_in_slope | Rate of the amount of data read from swap space (in KiB) |
| 23 | Memory | memswap_out_slope | Rate of the amount of memory written out to swap space (in KiB) |
| 24 | Memory | memmajor_fault_slope | Rate of the number of page faults where disk I/O was required |
| 25 | Memory | memminor_fault_slope | Rate of the number of other page faults |
| 26 | Memory | memunused_slope | Rate of the amount of memory left unused by the system (in KiB) |
| 27 | Memory | memavailable_slope | Rate of the amount of usable memory as seen by the domain (in KiB) |
| 28 | Memory | memusable_slope | Rate of the amount of memory that can be reclaimed by balloon without causing host swapping (in KiB) |
| 29 | Memory | memlast_update_slope | Rate of the timestamp of the last update of statistics (in seconds) |
| 30 | Memory | memdisk_cache_slope | Rate of the amount of memory that can be reclaimed without additional I/O, typically disk caches (in KiB) |
| 31 | Memory | memhugetlb_pgalloc_slope | Rate of the number of successful huge page allocations initiated from within the domain |
| 32 | Memory | memhugetlb_pgfail_slope | Rate of the number of failed huge page allocations initiated from within the domain |
| 33 | Memory | memrss_slope | Rate of the Resident Set Size of the running domain's process (in KiB) |
| 34 | Disk | vdard_req_slope | Rate of the number of read requests on the vda block device |
| 35 | Disk | vdard_bytes_slope | Rate of the number of read bytes on the vda block device |
| 36 | Disk | vdawr_reqs_slope | Rate of the number of write requests on the vda block device |
| 37 | Disk | vdawr_bytes_slope | Rate of the number of write bytes on the vda block device |
| 38 | Disk | vdaerror_slope | Rate of the number of errors on the vda block device |
| 39 | Disk | hdard_req_slope | Rate of the number of read requests on the hda block device |
| 40 | Disk | hdard_bytes_slope | Rate of the number of read bytes on the hda block device |
| 41 | Disk | hdawr_reqs_slope | Rate of the number of write requests on the hda block device |
| 42 | Disk | hdawr_bytes_slope | Rate of the number of write bytes on the hda block device |
| 43 | Disk | hdaerror_slope | Rate of the number of errors on the hda block device |
| 44 | TARGET | Status | Attack/Normal |
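For the evidence-detection use mentioned above, a minimal sketch of training an ML model on the slope features might look as follows. The random-forest choice and the column handling are illustrative assumptions, not the authors' published methodology; column names follow the feature table.

```python
# Hedged sketch: fitting a classifier on the slope features with the
# metadata columns (LAST_POLL, VMID, UUID, dom) dropped and "Status"
# (Attack/Normal) as the target. The model choice is illustrative only.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

META = ["LAST_POLL", "VMID", "UUID", "dom"]

def train_detector(df: pd.DataFrame) -> RandomForestClassifier:
    X = df.drop(columns=META + ["Status"], errors="ignore")
    y = df["Status"]                      # "Attack" or "Normal"
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X, y)
    return clf
```

In practice the rows of one VM are time-correlated, so a grouped (per-VM) train/test split would give a more honest evaluation than a random split.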

 


The dataset includes information on the user-testing results of the study on measuring the effectiveness of static maps and their banded versions. The main variables are quantitative (completion time and success rates) and qualitative (the number of votes on the effectiveness of each map).


DevicePure.com is a large free repository of device specifications, with manuals, device documents, and applications available on the Internet.

It provides Samsung firmware updates for all Samsung smartphone devices. The database includes over 60,000 firmware update records for Samsung devices and describes updates in 79 languages.

Instructions: 

Since all of the archived data is stored in a non-relational database, you can download and extract it using MongoDB queries against the data warehouse.
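As an illustration of such a query, the sketch below builds a pymongo-style filter. The database, collection, and field names ("devicepure", "firmware", "model", "os_version") are assumptions, since the warehouse schema is not documented in this description.

```python
# Illustrative sketch only: field names here are assumptions about the
# undocumented warehouse schema, not the actual DevicePure collections.
from typing import Any

def firmware_query(model: str, min_android: str) -> dict[str, Any]:
    """Build a MongoDB filter for firmware records of one device model."""
    return {"model": model, "os_version": {"$gte": min_android}}

# With a live connection, the filter would be used roughly like this:
#   from pymongo import MongoClient
#   client = MongoClient("mongodb://<warehouse-host>:27017")
#   cursor = client["devicepure"]["firmware"].find(firmware_query("SM-G991B", "11"))
```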

The research data is available to those doing academic and scholarly research on the Free/Open Source Software phenomenon.


Industrial Internet of Things (IIoT) systems are high-value cyber targets due to the nature of the devices and connectivity protocols they deploy. They are easy to compromise, and because they are connected on a large scale with high-value data content, the compromise of any single device can extend to the whole system and disrupt critical functions. Various security solutions exist that detect and mitigate intrusions.


Dataset I consists of data from 30 subjects and comprises gait data collected by a mobile phone placed on the arm, wrist, hand, waist, and ankle. This dataset is used to verify the impact of the mobile phone's placement on the recognition effect. Datasets II and III are composed of 113 subjects: Dataset II is the data collected from a mobile phone placed at the hand position, while Dataset III is the gait data collected from a mobile phone placed at the waist position. These two datasets are used primarily to verify the identification effect of the proposed model.

Instructions: 

There are five subfolders in the folder "DataSet I": arm, wrist, hand, waist, and ankle. Each subfolder contains training set and test set files, and there are 30 .csv files under each training set and test set, belonging to the 30 subjects respectively.

The DataSet II and DataSet III folders directly contain two subfolders, the training set and the test set, because they collect the gait data of only a single location. There are 113 .csv files in both the training set and test set folders, belonging to the 113 subjects respectively.

All datasets are raw data from which the useless head and tail segments have been stripped.
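A minimal loading sketch for this layout, assuming folder and file names exactly as described above (root/&lt;position&gt;/&lt;training set|test set&gt;/&lt;subject&gt;.csv), might look like:

```python
# Minimal sketch, assuming the folder layout described above; the exact
# folder and file names are assumptions and may need adjusting.
import csv
from pathlib import Path

def load_split(root: str, position: str, split: str) -> dict[str, list[list[str]]]:
    """Read every subject's CSV for one phone position and one split,
    keyed by the CSV file name (the subject identifier)."""
    data = {}
    for f in sorted(Path(root, position, split).glob("*.csv")):
        with f.open(newline="") as fh:
            data[f.stem] = list(csv.reader(fh))
    return data
```

For DataSet II and DataSet III, which have no position subfolders, the same idea applies with the position level dropped from the path.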


Human activity recognition (HAR) has been one of the most prevalent and compelling research topics in different fields for the past few decades. The main idea is to understand individuals' regular activities by analyzing insights gathered from people and their surrounding living environments based on sensor observations. HAR has a great impact on human-robot collaborative work, especially in industrial settings. In line with this idea, we have organized this year's Bento Packaging Activity Recognition Challenge.

Last Updated On: 
Sat, 07/31/2021 - 02:40
Citation Author(s): 
Sayeda Shamma Alia, Kohei Adachi, Paula Lago, Nazmun Nahid, Haru Kaneko, Sozo Inoue

The dataset contains 4600 samples of 12 different hand-movement gestures. Data were collected from four different people using the FMCW AWR1642 radar. Each sample is saved as a CSV file associated with its gesture type.

Instructions: 

The dataset is divided into 12 separate folders associated with the different gesture types. Each folder contains gesture samples saved as CSV files. The first line of each CSV file is a header describing the columns of data: FrameNumber, ObjectNumber, Range, Velocity, PeakValue, x coordinate, y coordinate. In order to read the gestures into matrix representation, copy all 12 folders into a single folder called "data", copy the "read_gesture.py" script to the same folder as "data", and run it. The script will convert the CSV files of a given gesture type into a numpy matrix.
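A minimal stand-alone parser for a single gesture CSV (not the provided "read_gesture.py", whose internals are not shown here) could look like this, using the column header listed above:

```python
# Stand-alone sketch (not the official read_gesture.py): parse one gesture
# CSV, whose header is FrameNumber, ObjectNumber, Range, Velocity,
# PeakValue, x coordinate, y coordinate.
import csv
import numpy as np

def read_gesture_csv(path: str) -> np.ndarray:
    """Return one gesture sample as an (n_points, 7) float matrix."""
    with open(path, newline="") as fh:
        reader = csv.reader(fh)
        next(reader)  # skip the header line
        return np.array([[float(v) for v in row] for row in reader])
```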


The AOLAH databases are contributions from the Aswan Faculty of Engineering to help researchers in the field of online handwriting recognition build a powerful system to recognize Arabic handwritten script. AOLAH stands for Aswan On-Line Arabic Handwritten, where "Aswan" is the small beautiful city located in the south of Egypt, "On-Line" means that the databases are collected at the same time as they are written, "Arabic" because these databases are collected only for Arabic characters, and "Handwritten" because they are written by the natural human hand.

Instructions: 

* There are two databases. The first database is for Arabic characters; it consists of 2,520 sample files written by 90 writers using a simulation of a stylus pen and a touch screen. The second database is for Arabic characters' strokes; it consists of 1,530 sample files for 17 strokes. The second database is extracted from the first by extracting strokes from characters.
* Writers are volunteers from Aswan faculty of engineering with ages from 18 to 20 years old.
* Natural writings with unrestricted writing styles.
* Each volunteer writes the 28 characters of Arabic script using the GUI.
* They can be used for Arabic online character recognition.
* The developed tool for collecting the data is code that acts as a simulation of a stylus pen and a touch screen; pre-processed data samples of characters are also available for researchers.
* The database is available free of charge (for academic and research purposes) to the researchers.
* The databases available here are the training databases.


Human Activity Recognition (HAR) is the process of handling information from sensors and/or video capture devices under certain circumstances to correctly determine human activities. Nowadays, several simple and automatic HAR methods based on sensors and Artificial Intelligence platforms can be easily implemented.

In this challenge, participants are required to determine nurses' daily care activities by utilizing accelerometer data collected from a smartphone, which is the cheapest and easiest way to implement in real life.

Last Updated On: 
Wed, 06/30/2021 - 21:50
Citation Author(s): 
Sayeda Shamma Alia, Kohei Adachi, Paula Lago, Le Nhat Tan, Haru Kaneko, Sozo Inoue
