This folder contains two CSV files and one Python file. The first CSV file contains NIST ground PV plant data imported from https://pvdata.nist.gov/. It holds 902 days of raw data, including PV plant plane-of-array (POA) irradiance, ambient temperature, and inverter DC current, DC voltage, AC current, and AC voltage. The second CSV file contains user-created data. The Python program imports both CSV files and executes four proposed corrupt-data detection methods to detect corrupt data in the NIST ground PV plant data.
When batteries supply behind-the-meter services such as arbitrage or peak load management, an optimal controller can be designed to minimize the total electric bill. The limitations of the batteries, such as on voltage or state-of-charge, are represented in the model used to forecast the system's state dynamics. Control model inaccuracy can lead to an optimistic shortfall, where the achievable schedule will be costlier than the schedule derived using the model.
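The optimism gap described above can be illustrated with a toy example (not the paper's model): a one-cycle arbitrage schedule is planned with the control model's round-trip efficiency, then simulated with a lower true efficiency. All prices and parameters below are hypothetical.

```python
def bill(energy_discharged_kwh, energy_charged_kwh, cheap_price, peak_price):
    """Cost of charging minus revenue from discharging ($)."""
    return energy_charged_kwh * cheap_price - energy_discharged_kwh * peak_price

CHEAP, PEAK = 0.05, 0.30           # $/kWh, hypothetical tariff
E_CHARGE = 10.0                    # kWh drawn from the grid to charge
ETA_MODEL, ETA_TRUE = 0.95, 0.90   # assumed vs. actual round-trip efficiency

# The planned (open-loop) bill uses the control model's efficiency...
planned = bill(E_CHARGE * ETA_MODEL, E_CHARGE, CHEAP, PEAK)
# ...but the plant delivers less energy, so the achieved bill is costlier.
achieved = bill(E_CHARGE * ETA_TRUE, E_CHARGE, CHEAP, PEAK)

print(planned, achieved)
```

Because the model overstates the deliverable energy, the achieved bill is always higher than the planned one here; closing that gap is exactly what schedule recalculation in closed-loop control targets.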
Run instructions:
1. Open config.ini (saved in scripts as a .txt file) and select the extended [CRM] parameters associated with the desired simulation case.
a. Mean parameters:
i. ChargeCapacity= 135.2366
b. Extreme case:
i. ChargeCapacity -3 sigma = 127.4366
ii. CoulombicEfficiency -3 sigma = 0.9240
iii. R0 +3 sigma = 0.016364
2. Open the file under “Case specific optimization/simulation code files” associated with the desired simulation case in a Python editor (e.g., Visual Studio Code).
3. Run the Python program.
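The [CRM] parameters from step 1 can be read with Python's standard configparser, since config.ini uses INI syntax. A minimal sketch follows; the ChargeCapacity value is the mean-case value from the steps above, and the inline string stands in for the actual file.

```python
import configparser

# Mean-case value taken from the run instructions above; the real config.ini
# contains additional [CRM] keys not reproduced here.
cfg_text = """
[CRM]
ChargeCapacity = 135.2366
"""

cfg = configparser.ConfigParser()
cfg.read_string(cfg_text)          # for the real file: cfg.read("config.ini")
charge_capacity = cfg.getfloat("CRM", "ChargeCapacity")
print(charge_capacity)  # 135.2366
```

For the extreme case, the same keys would simply be overwritten with the -3/+3 sigma values listed above before running the simulation.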
Output description: Each program has four outputs: Pyomo optimization output, relevant plots, the relevant customer bills, and an exported MATLAB data file.
1. Pyomo allows for the display of optimization conditions and variables. When the program is run, it iterates through approximately 50 steps in the command line before displaying "EXIT: Optimal Solution Found". Note that this process is repeated for every recalculation step in closed-loop control, but that output is not displayed.
2. The open-loop programs display plots for Electrical Load, Calculated Net Load, and Achieved Net Load, along with Power and State-of-Charge. The closed-loop programs do not display Calculated Net Load, as this line changes every time the control schedule is recalculated.
3. The command line output will print the bills calculated and achieved through the simulation. For open-loop simulations, both calculated bill and achieved bill are printed. For closed-loop simulations, only the achieved bill is printed.
4. The results of each simulation are exported into a MATLAB data file. The data files are named according to the simulation case but they all contain the same MATLAB variable (model_data) containing column vectors for each variable of interest.
The LCL filter design procedure presented here is similar to that presented in the reference below. The procedure is presented step by step, and the simulation file is freely available at http://busarello.prof.ufsc.br/
Teodorescu, R.; Liserre, M.; Rodriguez, P. Grid Converters for Photovoltaic and Wind Power Systems. Wiley, 2011.
This dataset includes 18 months of raw PV data gathered at intervals of about 200 µs (5 kHz sampling). A post-processed, day-by-day downsampled 365-day version, converted to 10 ms intervals (100 Hz sampling), is also included. The end results are two databases:
1. The original raw data, including both fast (short-circuit, 200 µs) and slow (sweep, 2.5-3.9 s) measurements for 18 months. These contain intervals of missing points but are provided to allow potential users to reproduce any new work.
2. The cleaned one-year version (the PV_Data_Clean_1_year zip), containing 365 folders organized by date. Each folder contains a readme txt summarizing the 10 ms short-circuit currents and the 2.5-3.9 s sweeps (short-circuit current, open-circuit voltage, and MPP voltage, current, and power extracted).
The dataset contains PMU measurements of all ten generators of the IEEE 39-bus transmission system model, installed at the generator terminals. The dataset was obtained using the RTDS power system simulator and GTNETx2-based PMUs, and was stored using the Synchro-measurement Application Development Framework (SADF) MATLAB library. In total, the dataset comprises 86.6 s of simulation and 5197 PMU measurements per generator.
The MATLAB dataset, in struct format, contains:
- positive sequence voltage and current synchrophasor (magnitude and angle) measurements
- frequency measurements
- rate-of-change-of-frequency measurements
- delta frequency measurements from nominal system frequency
- corresponding measurement timestamps
- PMU measurement quality indicators.
To load the dataset into MATLAB, use the following command: load('IEEE-39-bus_10_generator_PMU.mat').
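The struct can also be inspected from Python, assuming SciPy is available. Since the actual .mat file is not bundled with this description, the sketch below first writes a stand-in file with the same loading pattern; the field names and values are illustrative only, loosely based on the list above.

```python
import os
import tempfile

import numpy as np
from scipy.io import loadmat, savemat

# Stand-in for the real struct; field names/values here are hypothetical.
stand_in = {
    "frequency": np.linspace(59.99, 60.01, 5),  # Hz, made-up values
    "rocof": np.zeros(5),                       # rate-of-change-of-frequency
    "timestamp": np.arange(5) / 60.0,           # s, made-up values
}

path = os.path.join(tempfile.mkdtemp(), "IEEE-39-bus_10_generator_PMU.mat")
savemat(path, {"pmu": stand_in})

# struct_as_record=False + squeeze_me=True gives MATLAB-like attribute access.
data = loadmat(path, squeeze_me=True, struct_as_record=False)
pmu = data["pmu"]
print(pmu.frequency.shape)  # each struct field comes back as a NumPy array
```

For the real file, replace `path` with the location of IEEE-39-bus_10_generator_PMU.mat and drop the `savemat` stand-in step.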
This dataset is composed of samples of load signatures of electric devices acquired non-intrusively. The test bench used four identical fluorescent lamps, four identical slots, and four identical switches, where "identical" means the same technical specifications (nominal voltage, power, isolation voltage, among others). The sensors are connected to the power supply in order to measure the electrical variations when appliances are turned on/off. With 4 appliances there are 16 possible network configurations, in which one, two, three, or four appliances can be turned on.
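The 16 network configurations mentioned above are simply the on/off combinations of the 4 appliances; a quick sketch of enumerating them (appliance labels are illustrative):

```python
from itertools import product

appliances = ["lamp1", "lamp2", "lamp3", "lamp4"]  # hypothetical labels
configs = list(product([0, 1], repeat=len(appliances)))  # 0 = off, 1 = on

print(len(configs))             # 16
print(configs[0], configs[-1])  # (0, 0, 0, 0) ... (1, 1, 1, 1)
```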
This dataset contains the library call lists obtained from programs implemented using libiec61850. Call lists are marked either as benign or according to the name of the attack.
Each file is a sequential list of library calls, one call per line. No special processing is required to read the files.
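Reading one call-list file is therefore a one-liner per call; a minimal sketch, with an in-memory file standing in for a dataset file (the call names below are invented placeholders, not actual libiec61850 symbols):

```python
from io import StringIO

# Stand-in for one dataset file: one library call per line.
sample = StringIO("call_a\ncall_b\ncall_a\n")

def read_call_list(fh):
    """Return the sequential list of calls, dropping blank lines."""
    return [line.strip() for line in fh if line.strip()]

calls = read_call_list(sample)
print(calls)  # ['call_a', 'call_b', 'call_a']
```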
This dataset details the state machine based experiments of PowerWatch.
PowerWatch Experiment Summaries
This dataset summarizes the experiments done for the PowerWatch paper. The accompanying code will be
shared after the paper is published.
There are 2 files:
* expres.csv: Each entry in this file summarizes a unique state machine, identified by the field "state_machine_id".
* runres.csv: For each state machine, a total of 45 runs were conducted; each individual run is identified by the field "run_number".
The fields in "expres.csv" are explained as follows.
* state_machine_id: A number that uniquely identifies an experiment. The ID was also used as a random seed.
The naming here is, unfortunately, confusing.
* bucket_size: Chosen bucket size.
* window_size: Chosen window size.
The next 12 fields represent the "complexity" of the machine with respect to the call lists its states emit.
In each experiment, two machines were run: benign and malicious. The difference between them is that the malicious machine has one additional state emitting a unique call list.
* cumulative_call_size_benign: Total number of calls in the call lists emitted by benign states.
* mean_call_size_benign: Mean size of the call lists emitted by benign states.
* variance_call_size_benign: Variance of the sizes of the call lists emitted by benign states.
* malicious_state_call_size: Number of calls emitted by the malicious state.
* malicious_state_vocabulary_size: Number of different calls emitted by the malicious state.
* cumulative_edit_distance_every_state: Sum of the edit distances computed between every pair of states. Represents how the individual computing states vary from each other.
* mean_edit_distance_every_state: Mean of the edit distances computed between every pair of states.
* variance_of_edit_distance_every_state: Variance of the edit distances computed between every pair of states.
* cumulative_edit_distance_good_bad: Total edit distance computed between every benign state and the malicious state.
* mean_edit_distance_good_bad: Mean edit distance computed between every benign state and the malicious state.
* min_edit_distance_good_bad: Minimum of the edit distances computed between every benign state and the malicious state.
* variance_edit_distance_good_bad: Variance of the edit distances computed between every benign state and the malicious state.
* training_time: Total time required for training the machine learning model.
* prediction_time: Total time required for prediction stage.
* svm_accuracy: Accuracy of an SVM model that takes as input the maximum of the activity signal per run.
* svm_margin: Unused.
* mean_benign_train_activity_index: Mean activity index, calculated on the training set.
* mean_benign_test_activity_index: Mean activity index, calculated on the data obtained from the benign machine, but not
used for training.
* mean_malicious_activity_index: Mean activity index, calculated on the data obtained from the malicious machine.
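The edit-distance fields above compare the call lists emitted by states. One plausible reading of "edit distance" here is the standard dynamic-programming Levenshtein distance over call sequences (the paper's exact variant may differ); a sketch, with invented call names:

```python
def edit_distance(a, b):
    """Levenshtein distance between two sequences of calls."""
    prev = list(range(len(b) + 1))          # distances from the empty prefix
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

# One substitution ("read" -> "write") separates these two call lists.
print(edit_distance(["open", "read", "close"], ["open", "write", "close"]))  # 1
```

The cumulative/mean/variance fields would then be statistics over these pairwise distances.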
The fields in "runres.csv" are explained as follows. Originally, a cascade of max-pooling and convolution mechanisms was considered, but we later decided to use a single convolution step after the prediction stage. The field names were chosen with respect to the initial algorithm and are a little misleading; they are explained below where necessary:
* state_machine_id: The ID of the associated experiment.
* run_number: The number of the run.
* malicious: Whether the run contained the malicious state.
* trained_on: Whether the resulting data was used in training.
Remember that the first convolution yields the activity signal; individual points in the activity signal are activity indices. Statistics about the activity signal are given in the following fields:
* min_of_first_convolution: Minimum value of the first convolution. This is the minimum activity index in the activity signal.
* max_of_first_convolution: Maximum value of the first convolution. This is the maximum activity index in the activity signal.
* mean_of_first_convolution: Mean value of the first convolution. This is the mean activity index in the activity signal.
* variance_of_first_convolution: Variance of the first convolution. This is the variance of the activity indices in the activity signal.
* prediction_time: Time required to predict data generated in this run.
* reduction_time: Time required during the convolution stage.
* sp_accuracy: Accuracy of the predictor (predicting the next call).
* sp_misclassification: 1 - sp_accuracy.
* activity_index: This value was calculated with respect to the initial model and is completely useless in the final model; disregard it.
Accurate short-term load forecasting (STLF) plays an increasingly important role in reliable and economical power system operations. This dataset contains The University of Texas at Dallas (UTD) campus load data for 13 buildings, together with 20 weather and calendar features. The dataset spans from 01/01/2014 to 12/31/2015 with an hourly resolution, and is beneficial to various research tasks such as STLF.
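Calendar features of the kind mentioned above can be derived directly from an hourly timestamp; the exact 20 features in the dataset are not specified here, so the ones below are generic illustrations:

```python
from datetime import datetime

def calendar_features(ts: datetime) -> dict:
    """A few typical STLF calendar features for one hourly timestamp."""
    return {
        "hour": ts.hour,                   # 0-23
        "day_of_week": ts.weekday(),       # 0 = Monday
        "is_weekend": ts.weekday() >= 5,   # Saturday/Sunday
        "month": ts.month,                 # 1-12
    }

# A timestamp within the dataset span (01/01/2014 was a Wednesday).
feats = calendar_features(datetime(2014, 1, 1, 13))
print(feats)  # {'hour': 13, 'day_of_week': 2, 'is_weekend': False, 'month': 1}
```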