Datasets
Standard Dataset
A Fuzzy Decision Variables Framework for Large-scale Multiobjective Optimization
- Submitted by:
- yang xu
- Last updated:
- Tue, 05/17/2022 - 22:18
- DOI:
- 10.21227/kt94-yj43
Abstract
In large-scale multiobjective optimization, the sheer number of decision variables hinders the convergence of evolutionary algorithms. Reducing the search range within the decision space significantly alleviates this problem. With this in mind, this paper proposes a fuzzy decision variables framework for large-scale multiobjective optimization. The framework divides the entire evolutionary process into two main stages: fuzzy evolution and precise evolution. In fuzzy evolution, we blur the decision variables of the original solutions to narrow the search range of the evolutionary algorithm in the decision space so that the population can converge quickly. The degree of fuzzification gradually decreases as evolution proceeds. Once the population has approximately converged, the framework switches to precise evolution, in which the actual decision variables of the solutions are optimized directly to increase the diversity of the population and approach the true Pareto optimal front more closely. Finally, this paper embeds several representative algorithms into the proposed framework and verifies its effectiveness through comparative experiments on various large-scale multiobjective problems with 500 to 5000 decision variables. Experimental results show that, in large-scale multiobjective optimization, the proposed framework can significantly improve both the performance and the computational efficiency of multiobjective optimization algorithms.
Multiobjective optimization problems (MOPs) involve multiple conflicting objectives that must be optimized simultaneously and are widespread in practical applications [1]–[5]. Because of this conflict, an MOP has no single solution that is optimal for all objectives, so an optimization algorithm can only obtain a set of solutions that trade the objectives off against one another. Multiobjective evolutionary algorithms (MOEAs) have been shown to handle this kind of problem effectively and have undergone unprecedented development over the past two decades [6]–[9]. From the classic NSGA-II [10] to the recent MaPSO [11], most studies on MOEAs focus on scalability in the objective dimension, and some MOEAs can already solve many-objective optimization problems with up to 15 objectives [12]–[16]. However, existing MOEAs rarely consider the scalability of the decision variables in MOPs, even though many practical MOPs have a large number of them [17]–[21]. Usually, MOPs with thousands of decision variables are considered large-scale MOPs [22].
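For reference, a standard formulation of an MOP consistent with the description above (the notation here is the common textbook one, not necessarily the paper's own, which is defined in Section II):

```latex
\min_{\mathbf{x} \in \Omega} \; F(\mathbf{x}) = \bigl(f_1(\mathbf{x}),\, f_2(\mathbf{x}),\, \ldots,\, f_M(\mathbf{x})\bigr),
\qquad \mathbf{x} = (x_1, x_2, \ldots, x_D) \in \Omega \subseteq \mathbb{R}^D
```

where M is the number of objectives and D the number of decision variables; the MOP is called large-scale when D reaches the thousands, as in the 500- to 5000-variable problems studied here.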
When solving large-scale optimization problems, even sophisticated MOEAs fail to achieve good performance because of the large number of decision variables. As the number of decision variables increases linearly, the size of the search space grows exponentially, causing the algorithm either to converge prematurely to a local optimum or to converge only to an overly large region [23]. In recent years, some MOEAs have been designed specifically for large-scale MOPs. According to the techniques they use, these algorithms can be roughly divided into four categories.
The first category is MOEAs based on cooperative coevolution (CC). This approach divides the decision variables into multiple groups and then optimizes each group separately. For example, Antonio et al. applied a differential evolution algorithm (GDE3) within a cooperative coevolution framework to solve large-scale many-objective problems [24], and later proposed combining MOEA/D [25] with coevolutionary techniques to decompose both the objective and decision spaces [26]. Li et al. [27] also proposed a cooperative coevolutionary algorithm in which a fast interdependency-identification grouping method is used for large-scale MOPs. Meselhi et al. [28] proposed a new algorithm that employs multiple optimizers together with a need-based allocation of the computational budget across the sub-components.
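As an illustrative sketch of the CC idea (not the specific grouping methods or optimizers of [24]–[28]): in one round, each group of decision variables is optimized in turn while the remaining variables stay fixed at their current best values. The static grouping and the trivial per-variable `mutate_group` below are placeholders of ours:

```python
import random

def cc_round(x, groups, mutate_group, evaluate):
    """One cooperative-coevolution round: each group of decision
    variables is optimized in turn while the others stay fixed."""
    best, best_fit = list(x), evaluate(x)
    for group in groups:
        candidate = list(best)
        for i in group:                    # perturb only this group
            candidate[i] = mutate_group(candidate[i])
        fit = evaluate(candidate)
        if fit < best_fit:                 # keep the change only if it helps
            best, best_fit = candidate, fit
    return best, best_fit

# toy usage: 8 variables in two static groups, sphere objective (minimization)
random.seed(0)
x0 = [random.uniform(-1.0, 1.0) for _ in range(8)]
groups = [list(range(0, 4)), list(range(4, 8))]
sphere = lambda v: sum(c * c for c in v)
halve = lambda c: c * 0.5                  # trivial per-variable "optimizer"
x1, f1 = cc_round(x0, groups, halve, sphere)
```

The key design point, and the weakness discussed later in this section, is that the grouping is fixed before optimization: if interacting variables land in different groups, per-group improvement can stall.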
The second category is MOEAs based on decision variable analysis. This approach focuses on mechanisms for analyzing decision variables and strives to find the best grouping. Unlike the first category, the variables are divided into groups according to their type, and each type is optimized with a different strategy. For example, Ma et al. [29] proposed an MOEA based on decision variable analyses (MOEA/DVA). The algorithm divides the decision variables into distance variables and diversity variables by analyzing how each variable relates to the convergence and diversity attributes; it first optimizes the distance variables and then the diversity variables. Later, Zhang et al. [30] proposed an evolutionary algorithm based on clustering of decision variables (LMEA). First, a clustering method divides the decision variables into two categories: convergence-related variables and diversity-related variables. Then, a convergence optimization strategy and a diversity optimization strategy are applied to the two types of variables, respectively.
The third category is MOEAs based on problem transformation. This approach solves the original large-scale problem by transforming it into a small-scale problem through a transformation function. For example, Zille et al. [31] proposed a weighted optimization framework (WOF), which optimizes a weight vector instead of the decision variables themselves, thereby transforming the original large-scale problem into a small-scale one. Subsequently, He et al. [32] proposed a problem reformulation framework (LSMOF), which reconstructs the decision space through a series of reference solutions and weight variables, transforming the original large-scale MOP into a low-dimensional single-objective problem.
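A minimal sketch of the transformation idea, loosely in the spirit of WOF [31] (the grouping, reference solution, and simple scaling rule here are simplifications of ours, not the published method): each low-dimensional weight scales one whole group of variables of a reference solution, so the optimizer only has to search over the weights:

```python
def expand(weights, ref, groups):
    """Map a low-dimensional weight vector back to a full decision
    vector: each weight scales one whole group of the reference solution."""
    x = list(ref)
    for w, group in zip(weights, groups):
        for i in group:
            x[i] = w * ref[i]
    return x

# a 1000-variable problem becomes a 4-variable problem over the weights
D = 1000
ref = [1.0] * D                                  # reference solution (placeholder)
groups = [list(range(k, D, 4)) for k in range(4)]
sphere = lambda v: sum(c * c for c in v)
transformed = lambda w: sphere(expand(w, ref, groups))
```

The sketch also exposes the drawback noted below: only decision vectors reachable through `expand` can ever be produced, so a poor transformation function irreversibly loses information about the original search space.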
The last category comprises new search methods based on the competitive swarm optimizer (CSO). This approach exploits the strong search capability of the CSO to search directly in the original decision space. The most representative algorithm in this category is LMOCSO, proposed by Tian et al. [33], in which inferior particles learn from superior particles to produce promising offspring, thereby accelerating the global search.
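A stripped-down sketch of the pairwise-competition idea (LMOCSO [33] additionally maintains velocities and uses environmental selection; this version of ours omits both): particles compete in random pairs, the winner survives unchanged, and the loser moves toward the winner:

```python
import random

def cso_step(pop, fitness):
    """One competitive-swarm step: particles compete in random pairs;
    the winner survives unchanged, the loser moves toward the winner."""
    idx = list(range(len(pop)))
    random.shuffle(idx)
    new_pop = []
    for a, b in zip(idx[0::2], idx[1::2]):
        w, l = (a, b) if fitness(pop[a]) <= fitness(pop[b]) else (b, a)
        new_pop.append(list(pop[w]))           # winner passes on as-is
        r = random.random()
        loser = [lv + r * (wv - lv)            # loser learns from the winner
                 for lv, wv in zip(pop[l], pop[w])]
        new_pop.append(loser)
    return new_pop

# toy usage: 10 particles in 3-D, sphere objective (minimization)
random.seed(1)
pop = [[random.uniform(-2.0, 2.0) for _ in range(3)] for _ in range(10)]
sphere = lambda v: sum(c * c for c in v)
new_pop = cso_step(pop, sphere)
```

Because each loser moves along the line toward a no-worse winner, this update never worsens a convex objective; its limitation in very high dimensions, discussed next, is that the step operates on the full-dimensional vectors, so the search range it covers shrinks relative to the exponentially growing space.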
Although existing large-scale MOEAs have shown encouraging performance, each has its shortcomings. For example, MOEAs based on cooperative coevolution and grouping mechanisms must spend considerable time analyzing the decision variables to complete the grouping. In addition, the performance of CC-based MOEAs can degrade severely under inappropriate grouping. It is worth noting that the assumption of separability between decision variables does not always hold, so these methods are unsuitable for large-scale MOPs in which all decision variables interact. MOEAs based on problem transformation need a transformation function that loses as little information as possible when the original problem is converted into a new one; finding such a function is very difficult in general and practically impossible for particularly complex problems. MOEAs based on CSO search for the optimal solution directly in the original decision space. In a decision space of hundreds of dimensions, these algorithms can find better solutions by increasing the number of function evaluations. However, as the dimensionality of the decision variables grows, the size of the decision space increases exponentially; for MOPs with thousands of decision variables, simply increasing the number of function evaluations is not enough.
This analysis suggests two observations. First, the dimensionality of the decision space can be reduced through decision variable grouping and problem transformation mechanisms, so that large-scale MOPs become low-dimensional MOPs; however, the correctness of the grouping mechanism and of the transformation function is difficult to guarantee. Second, the CSO operator can search for the best solution directly in the original decision space thanks to its powerful search capability, but this type of algorithm scales poorly with the dimensionality of the decision space: as the dimension increases, the algorithm tends to fall into a local optimum or converge prematurely. We found that directly narrowing the algorithm's search range within the original decision space, rather than reducing its dimensionality, is an effective way to keep the advantages of both approaches while discarding their disadvantages. We therefore propose a fuzzy decision variables (FDV) framework for large-scale multiobjective optimization. The main contributions of this paper are summarized as follows:
1) A method for dividing fuzzy evolution into sub-stages is proposed. It splits the fuzzy evolution stage into multiple sub-stages with a gradually decreasing degree of fuzzification. The higher the degree of fuzzification in a sub-stage, the lower the accuracy of the solutions obtained by FDV optimization; hence, in later sub-stages, the solutions obtained become progressively more accurate.
2) Formulas for fuzzifying decision variables are proposed. The fuzzification formula adapts to the multiple fuzzy evolution sub-stages. Within a sub-stage, the two fuzzy target values of a decision variable are first calculated. Then the variable's degree of membership in each of two fuzzy sets is computed, where each fuzzy set maps to one fuzzy target value. Finally, the value of the decision variable is fuzzified to the fuzzy target value of the fuzzy set in which it has the larger degree of membership.
3) To verify the effectiveness of the proposed FDV in solving MOPs, several representative MOEAs are embedded in the FDV and compared with their original versions on the multiobjective test suite DTLZ [34]. Experimental results show that the MOEAs embedded in the FDV significantly outperform the original algorithms in all test cases, demonstrating that existing MOEAs can be used to solve large-scale problems. Furthermore, the CSO [35] is embedded in the FDV framework (FDVCSO) and compared with several state-of-the-art large-scale MOEAs on the large-scale multiobjective test suite LSMOP [36]. Experimental results show that FDVCSO significantly outperforms the other large-scale MOEAs in most test cases.
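To make contribution 2) concrete, here is one hypothetical way such a fuzzification could look (the paper's actual formulas appear in Section III; the grid resolution and the linear membership functions below are our illustrative assumptions): each variable is snapped to whichever of its two neighbouring grid points, i.e. fuzzy target values, it has the larger membership in, and the grid becomes finer in later sub-stages:

```python
def fuzzify(x, lower, upper, stage):
    """Snap one decision variable to a coarse grid over [lower, upper].
    The two neighbouring grid points act as the fuzzy target values; the
    variable's membership in each fuzzy set decreases linearly with
    distance, and the variable is fuzzified to the target with larger
    membership. The grid doubles in resolution each sub-stage, so the
    degree of fuzzification decreases as evolution proceeds.
    (Illustrative only; assumes lower <= x < upper.)"""
    cells = 2 ** (stage + 1)                    # finer grid in later sub-stages
    step = (upper - lower) / cells
    lo = lower + ((x - lower) // step) * step   # fuzzy target below x
    hi = lo + step                              # fuzzy target above x
    mu_lo = (hi - x) / step                     # membership in the "low" set
    mu_hi = (x - lo) / step                     # membership in the "high" set
    return lo if mu_lo >= mu_hi else hi

coarse = fuzzify(0.30, 0.0, 1.0, stage=0)   # early sub-stage: snaps to 0.5
fine = fuzzify(0.30, 0.0, 1.0, stage=3)     # later sub-stage: snaps to 0.3125
```

The effect is the one described above: early sub-stages collapse the decision space to a handful of values per variable, which narrows the search range without reducing the dimensionality, and later sub-stages restore precision.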
The rest of this paper is organized as follows. Section II introduces the concepts and theoretical background underlying the FDV framework and explains the motivation. Section III describes the FDV framework and its implementation in detail. Section IV presents the parameter analysis and the analysis of the experimental results. Section V concludes the paper.