Dynamic-MKP Benchmark Datasets

Citation Author(s):
Brunel University London
Submitted by:
Jonas Skackauskas
Last updated:
Wed, 06/15/2022 - 15:54


With increasing research on solving Dynamic Optimization Problems (DOPs), many metaheuristic algorithms and their adaptations have been proposed. However, with the currently existing research results it is hard to evaluate algorithm performance on combinatorial DOPs in a repeatable way, because each research work has created its own version of a dynamic problem dataset using stochastic methods. To date, there are no combinatorial DOP benchmarks with replicable qualities. This work introduces a non-stochastic, consistent, and extensible Dynamic Multidimensional Knapsack Problem (Dynamic MKP) dataset generation method to address the research replicability problem. Using this method, 1405 Dynamic MKP benchmark datasets were generated and published, using existing well-known static MKP benchmark instances as the initial states.


Benchmark dataset files are organised in folders by dynamism, from SAM-0.01 to SAM-0.2. Each dynamism folder contains folders of generated Dynamic MKP benchmarks, named after the static MKP benchmark instances from the popular GK and OR libraries that serve as their initial states. Finally, each benchmark folder contains 101 .dat files, each representing a fully defined dynamic optimisation problem state. The files are ordered in sequence from State000.dat to State100.dat: State000.dat is the initial state taken from the GK or OR MKP library, and the remaining 100 states are generated.
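The folder layout above can be traversed programmatically. The sketch below enumerates the 101 state files of one benchmark in sequence; it is a minimal illustration assuming the naming scheme described here, and the function name, the root directory, and the example benchmark folder name ("OR5x100") are placeholders, not part of the published dataset specification.

```python
from pathlib import Path

def list_state_files(root, dynamism, benchmark):
    """Yield the 101 state files of one Dynamic MKP benchmark in order.

    `root` is the directory holding the SAM-* dynamism folders (e.g.
    dynamism "0.01" maps to the SAM-0.01 folder); `benchmark` is the
    name of one benchmark folder inside it.
    """
    bench_dir = Path(root) / f"SAM-{dynamism}" / benchmark
    # State000.dat is the initial static MKP instance; State001.dat
    # through State100.dat are the 100 generated dynamic states.
    for i in range(101):
        yield bench_dir / f"State{i:03d}.dat"
```

For example, `list_state_files("data", "0.01", "OR5x100")` yields paths from `data/SAM-0.01/OR5x100/State000.dat` through `.../State100.dat`, which can then be parsed one state at a time when running a dynamic optimisation experiment.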