These are OMNeT++ flooding simulation results for a four-UAV FANET in three different topologies, as described in "The Broadcast Storm Problem in FANETs and the Dynamic Neighborhood-Based Algorithm as a Countermeasure".

Instructions: 

Results are packaged in a 7zip archive and were generated using OMNeT++ 5.0. They are all UTF-8 encoded text files.

 

OMNeT++ raw results (with extensions .sca, .vci, and .vec) are named as follows:

 

T [#topology] - [#replication]

 

where #topology ranges from 1 to 3, and #replication ranges from 0 to 9.
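
A minimal sketch of enumerating and parsing the result files under this naming scheme (the working directory and the exact spacing around the dash are assumptions; adjust the glob and regex if the actual file names differ):

    import re
    from pathlib import Path

    # Match e.g. "T1 - 0.sca" or "T3-9.vec"; spaces around the dash are optional.
    pattern = re.compile(r"T\s*(\d)\s*-\s*(\d)\.(sca|vci|vec)$")

    for path in sorted(Path(".").glob("T*")):
        match = pattern.match(path.name)
        if match:
            topology, replication, ext = match.groups()
            print(f"topology={topology} replication={replication} type={ext}")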

 

The CSV files contain a list of sent and received messages.


This dataset contains daily maximum load data, together with the average demand, customer count, and PV capacity, at two substations, Arkana and Muchea, in Western Australia. It was used in the accepted IEEE Transactions on Power Systems paper titled “The Use of Extreme Value Theory for Forecasting Long-Term Substation Maximum Electricity Demand” by Li and Jones (2019).  The dataset spans from 01/01/2008 to 30/06/2022, part history (01/01/2008 to 16/09/2018) and part forecast (17/09/2018 to 30/06/2022).  The dataset is beneficial to various research topics such as long-term load forecasting.
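
A minimal sketch of separating the history and forecast portions (the file name and the "Date" column name are assumptions; dates are read as day/month/year, matching the spans above):

    import pandas as pd

    df = pd.read_csv("arkana_daily_max_load.csv")            # hypothetical file name
    df["Date"] = pd.to_datetime(df["Date"], dayfirst=True)   # e.g. 16/09/2018

    history = df[df["Date"] <= "2018-09-16"]    # observed part
    forecast = df[df["Date"] >= "2018-09-17"]   # forecast part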


This dataset contains a sequence of network events extracted from a commercial network monitoring platform, Spectrum, by CA. These events, which are categorized by their severity, cover a wide range of situations, from a link state change up to critical CPU usage on certain devices. They focus on the physical, network, and application layers. As such, the whole set gives a complete overview of the network’s general state.

Instructions: 

The dataset is composed of a single plain-text file in CSV format. This CSV contains the following variables:

• Severity: the importance of the event. It is divided into four levels: Blank, Minor, Major and Critical.

• Created On: the date and time when the event was created. The scheme is "month/day/year hour:minute:second".

• Name: (anonymized) name of the device the event happened on.

• EventType: hexadecimal code detailing the category the event pertains to.

• Event: message associated with the event.

 

Thus, an event is a combination of an event type on a certain device at a certain time; it is described by its severity and explained by the event message.
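
A minimal sketch of loading the file (the file name is an assumption; the column names follow the list above, and the format string assumes a four-digit year):

    import pandas as pd

    events = pd.read_csv("spectrum_events.csv")   # hypothetical file name
    # "Created On" follows the "month/day/year hour:minute:second" scheme.
    events["Created On"] = pd.to_datetime(events["Created On"],
                                          format="%m/%d/%Y %H:%M:%S")

    # Count events per severity level (Blank, Minor, Major, Critical).
    print(events["Severity"].value_counts())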


This dataset details the state-machine-based experiments of PowerWatch.

Instructions: 

PowerWatch Experiment Summaries

 

This dataset summarizes the experiments done for the PowerWatch paper. The accompanying code will be

shared after the paper is published.

 

There are 2 files:

    * expres.csv: Each entry in this file represents the summary with respect to a unique state machine, represented by

        the field "state_machine_id".

    * runres.csv: For each state machine, a total of 45 runs are conducted; each individual run is represented by an entry.

 

The fields in "expres.csv" are explained as follows.

    * state_machine_id: A number that uniquely identifies an experiment. The ID was also used as a random seed.

        The naming here is, unfortunately, confusing.

    * bucket_size: Chosen bucket size.

    * window_size: Chosen window size.

 

    The next 12 fields represent the "complexity" of the machine with respect to the call lists its states emit
    (see the sketch after this field list). In each experiment, two machines were run: benign and malicious. The
    difference between them is that, in the malicious machine, there is one more state emitting a unique call list.

 

    * cumulative_call_size_benign: Sum of the number of call lists emitted by benign states.

    * mean_call_size_benign: Mean of the call lists emitted by benign states.

    * variance_call_size_benign: Variance of the call lists emitted by benign states.

    * malicious_state_call_size: Number of calls emitted by the malicious state.

    * malicious_state_vocabulary_size: Number of different calls emitted by the malicious state.

    * cumulative_edit_distance_every_state: Total edit distance computed between every pair of states. Represents

        how the individual states vary from each other.

    * mean_edit_distance_every_state: Mean of the edit distances computed between every pair of states.

    * variance_of_edit_distance_every_state: Variance of the edit distances computed between every pair of states.

    * cumulative_edit_distance_good_bad: Total edit distance computed between every benign state and the malicious state.

    * mean_edit_distance_good_bad: Mean edit distance computed between every benign state and the malicious state.

    * min_edit_distance_good_bad: Minimum of edit distances computed between every benign state and the malicious state.

    * variance_edit_distance_good_bad: Variance of edit distances computed between every benign state and the malicious state.

 

    * training_time: Total time required for training the machine learning model.

    * prediction_time: Total time required for the prediction stage.

 

    * svm_accuracy: Accuracy of an SVM model that takes the maximum of the activity signal per run as input.

    * svm_margin: Unused.

 

    * mean_benign_train_activity_index: Mean activity index, calculated on the training set.

    * mean_benign_test_activity_index: Mean activity index, calculated on the data obtained from the benign machine, but not

        used for training.

    * mean_malicious_activity_index: Mean activity index, calculated on the data obtained from the malicious machine.
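
    A minimal sketch of how the pairwise edit-distance summaries above could be reproduced (the call lists and the
    distance function are illustrative; the actual representation of a state's call list in the experiments may differ):

        from itertools import combinations
        from statistics import mean, pvariance

        def edit_distance(a, b):
            """Classic Levenshtein distance between two sequences."""
            prev = list(range(len(b) + 1))
            for i, x in enumerate(a, 1):
                curr = [i]
                for j, y in enumerate(b, 1):
                    curr.append(min(prev[j] + 1,              # deletion
                                    curr[j - 1] + 1,          # insertion
                                    prev[j - 1] + (x != y)))  # substitution
                prev = curr
            return prev[-1]

        # Hypothetical call lists emitted by three benign states and one malicious state.
        benign_states = [["open", "read", "close"],
                         ["open", "write", "close"],
                         ["read", "read", "close"]]
        malicious_state = ["open", "exec", "socket"]

        all_states = benign_states + [malicious_state]
        pairwise = [edit_distance(a, b) for a, b in combinations(all_states, 2)]
        good_bad = [edit_distance(b, malicious_state) for b in benign_states]

        print(sum(pairwise), mean(pairwise), pvariance(pairwise))                  # *_every_state fields
        print(sum(good_bad), mean(good_bad), min(good_bad), pvariance(good_bad))   # *_good_bad fields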

 

 

Originally, a cascade of max-pooling and convolution mechanisms was considered, but we later decided to use a single

convolution step after the prediction stage. The fields in "runres.csv" are named with respect to the initial algorithm

and are therefore a little misleading; they are explained below where necessary:

    * state_machine_id: The ID of the associated experiment.

    * run_number: The number of the run.

    * malicious: Whether the run contained the malicious state.

    * trained_on: Whether the resulting data was used in training.

 

    Remember that the first convolution yields the activity signal. Individual points in the activity signal are

    the activity indices. Statistics about the activity signal are given in the following fields:

 

    * min_of_first_convolution: Minimum value of the first convolution. This is the minimum activity index in the activity signal.

    * max_of_first_convolution: Maximum value of the first convolution. This is the maximum activity index in the activity signal.

    * mean_of_first_convolution: Mean value of the first convolution. This is the mean activity index of the activity signal.

    * variance_of_first_convolution: Variance of the first convolution. This is the variance of the activity indices in the activity signal.

 

    * prediction_time: Time required to predict data generated in this run.

    * reduction_time: Time required during the convolution stage.

    * sp_accuracy: Accuracy of the predictor (predicting the next call).

    * sp_misclassification: 1 - sp_accuracy.

 

    * activity_index: This value was calculated with respect to the initial model and is not meaningful in the final model. Disregard it.
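
A minimal sketch of joining the two files and comparing benign and malicious runs per experiment (file names follow the description above; the pandas usage and the chosen summary are assumptions):

    import pandas as pd

    expres = pd.read_csv("expres.csv")
    runres = pd.read_csv("runres.csv")

    # Attach experiment-level settings to every run; suffixes disambiguate
    # columns that appear in both files (e.g. prediction_time).
    runs = runres.merge(expres, on="state_machine_id", suffixes=("_run", "_exp"))

    # Mean next-call prediction accuracy per experiment, split by benign/malicious runs.
    summary = (runs.groupby(["state_machine_id", "malicious"])["sp_accuracy"]
                   .mean()
                   .unstack("malicious"))
    print(summary.head())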

 


Measurements collected from R1 for root cause analyses of the network service states defined from quality and service design perspectives.


The data underlying Figures 1-3 in the manuscript "Spatio-Temporal Correlation Analysis of Online Monitoring Data for Anomaly Detection and Location in Distribution Networks".


Accurate short-term load forecasting (STLF) plays an increasingly important role in reliable and economical power system operations. This dataset contains The University of Texas at Dallas (UTD) campus load data for 13 buildings, together with 20 weather and calendar features. The dataset spans from 01/01/2014 to 12/31/2015 with an hourly resolution. The dataset is beneficial to various research topics such as STLF.
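
A minimal sketch of deriving typical calendar features from the hourly timestamps (the file name and column names are assumptions; the 20 features actually shipped with the dataset may differ):

    import pandas as pd

    load = pd.read_csv("utd_campus_load.csv", parse_dates=["timestamp"])  # hypothetical names

    # Calendar features commonly used in STLF models.
    load["hour"] = load["timestamp"].dt.hour
    load["day_of_week"] = load["timestamp"].dt.dayofweek
    load["month"] = load["timestamp"].dt.month
    load["is_weekend"] = load["day_of_week"].isin([5, 6]).astype(int)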


In an aging population, the demand for nurse workers to care for elders increases. Helping nurse workers make their work more efficient will help increase elders' quality of life, as the nurses can focus their efforts on care activities instead of other tasks such as documentation.
Activity recognition can be used for this goal. If we can recognize what activity a nurse is engaged in, we can partially automate the documentation process to reduce time spent on this task, and monitor care plan compliance to ensure that all care activities have been done for each elder, among other benefits.
