Volume 18 Number 2
April 2021
Citation: W. J. Hong, P. Yang, K. Tang. Evolutionary computation for large-scale multi-objective optimization: a decade of progresses. International Journal of Automation and Computing, vol. 18, no. 2, pp. 155–169, 2021. DOI: 10.1007/s11633-020-1253-0

Evolutionary Computation for Large-scale Multi-objective Optimization: A Decade of Progresses

Author Biography:
  • Wen-Jing Hong received the B. Eng. and Ph. D. degrees in computer science and technology from the University of Science and Technology of China, China in 2012 and 2018, respectively. She is currently a postdoctoral fellow with the Department of Computer Science and Engineering, Southern University of Science and Technology, China, and the School of Management, University of Science and Technology of China, China. She is also with the Guangdong–Hong Kong–Macao Greater Bay Area Center for Brain Science and Brain-inspired Intelligence, China. Her research interests include multi-objective optimization, large-scale optimization and evolutionary computation. E-mail: hongwj@sustech.edu.cn ORCID iD: 0000-0001-9054-5714

    Peng Yang received the B. Eng. and Ph. D. degrees in computer science and technology from the University of Science and Technology of China, China in 2012 and 2017, respectively. Since 2018, he has been working as a research assistant professor at the Department of Computer Science and Engineering, Southern University of Science and Technology, China. He has served as a regular reviewer for several world-class journals and as a program committee member for a number of top international conferences. His research interests include evolutionary policy optimization and its applications. E-mail: yangp@sustech.edu.cn ORCID iD: 0000-0001-5333-6155

    Ke Tang received the B. Eng. degree from Huazhong University of Science and Technology, China in 2002 and the Ph. D. degree from Nanyang Technological University, Singapore in 2007. From 2007 to 2017, he was with the School of Computer Science and Technology, University of Science and Technology of China, China, first as an associate professor from 2007 to 2011 and later as a professor from 2011 to 2017. He is currently a professor with the Department of Computer Science and Engineering, Southern University of Science and Technology, China. He has over 7000 Google Scholar citations with an H-index of 43, and has published over 70 journal papers and over 80 conference papers. He was a recipient of the Royal Society Newton Advanced Fellowship in 2015 and the 2018 IEEE Computational Intelligence Society Outstanding Early Career Award. He is an associate editor of IEEE Transactions on Evolutionary Computation and has served on the editorial boards of several other journals. His research interests include evolutionary computation, machine learning and their applications. E-mail: tangk3@sustech.edu.cn (Corresponding author) ORCID iD: 0000-0002-6236-2002

  • Received: 2020-07-21
  • Accepted: 2020-09-08
  • Published Online: 2021-01-05
  • Large-scale multi-objective optimization problems (MOPs), which involve a large number of decision variables, have emerged from many real-world applications. While evolutionary algorithms (EAs) have been widely acknowledged as a mainstream method for MOPs, most research progress and successful applications of EAs have been restricted to MOPs with small-scale decision variables. More recently, it has been reported that the performance of traditional multi-objective EAs (MOEAs) deteriorates severely as the number of decision variables increases. As a result, and motivated by the emergence of real-world large-scale MOPs, investigation of MOEAs in this direction has attracted increasing attention over the past decade. This paper reviews the progress of evolutionary computation for large-scale multi-objective optimization from two angles. From the perspective of the key difficulties of large-scale MOPs, scalability analysis is discussed, focusing on the performance of existing MOEAs and the challenges induced by the growing number of decision variables. From the perspective of methodology, large-scale MOEAs are categorized into three classes and reviewed in turn: divide-and-conquer-based, dimensionality-reduction-based, and enhanced-search-based approaches. Several future research directions are also discussed.
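To make the first of the three methodology classes concrete, the sketch below illustrates the divide-and-conquer idea in its simplest form: decision variables are randomly partitioned into groups, and each group is optimized in turn while the remaining variables are held fixed as a "context vector", as in cooperative coevolution. This is a minimal, hypothetical illustration, not any specific algorithm from the surveyed literature; the ZDT1-like toy problem, the weighted-sum scalarization (standing in for a full multi-objective selection scheme), and all function names are assumptions made for the example.

```python
import random

def evaluate(x):
    # Toy bi-objective problem (ZDT1-like stand-in, assumed for illustration):
    # f1 = x[0]; f2 = g * (1 - sqrt(f1/g)), with g = 1 + 9 * mean(x[1:]).
    g = 1.0 + 9.0 * sum(x[1:]) / (len(x) - 1)
    f1 = x[0]
    f2 = g * (1.0 - (f1 / g) ** 0.5)
    return f1, f2

def scalar(x):
    # Weighted-sum scalarization; a real large-scale MOEA would use
    # Pareto dominance, decomposition, or an indicator instead.
    f1, f2 = evaluate(x)
    return 0.5 * f1 + 0.5 * f2

def random_grouping(n_vars, n_groups, rng):
    # Divide step: shuffle variable indices and split them into groups.
    idx = list(range(n_vars))
    rng.shuffle(idx)
    size = -(-n_vars // n_groups)  # ceiling division
    return [idx[i:i + size] for i in range(0, n_vars, size)]

def optimize_group(x, group, rng, trials=30):
    # Conquer step: local random search over one group only, with the
    # other variables frozen (the cooperative-coevolution context vector).
    best = list(x)
    for _ in range(trials):
        cand = list(best)
        for j in group:
            cand[j] = min(1.0, max(0.0, best[j] + rng.gauss(0.0, 0.1)))
        if scalar(cand) < scalar(best):
            best = cand
    return best

rng = random.Random(0)
n_vars = 100
x = [rng.random() for _ in range(n_vars)]
before = scalar(x)
for _cycle in range(5):
    for group in random_grouping(n_vars, n_groups=10, rng=rng):
        x = optimize_group(x, group, rng)
after = scalar(x)
```

Because each group search accepts only improving moves, the scalarized objective is monotonically non-increasing over cycles; the dimensionality-reduction and enhanced-search classes reviewed in the paper attack the same scaling problem by transforming the search space or strengthening the search operators instead of partitioning the variables.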

Evolutionary Computation for Large-scale Multi-objective Optimization: A Decade of Progresses

Abstract: Large-scale multi-objective optimization problems (MOPs), which involve a large number of decision variables, have emerged from many real-world applications. While evolutionary algorithms (EAs) are widely acknowledged as a mainstream approach to MOPs, most research progress and successful applications of EAs have been restricted to MOPs with small-scale decision variables. More recently, it has been reported that traditional multi-objective EAs (MOEAs) suffer severe performance deterioration as the number of decision variables increases. Motivated by the emergence of real-world large-scale MOPs, the investigation of MOEAs in this respect has attracted much attention in the past decade. This paper reviews the progress of evolutionary computation for large-scale multi-objective optimization from two angles. From the perspective of the key difficulties of large-scale MOPs, scalability analysis is discussed, focusing on the performance of existing MOEAs and the challenges induced by the growing number of decision variables. From the perspective of methodology, large-scale MOEAs are categorized into three classes and introduced in turn: divide-and-conquer-based, dimensionality-reduction-based and enhanced search-based approaches. Several future research directions are also discussed.

Citation: W. J. Hong, P. Yang, K. Tang. Evolutionary computation for large-scale multi-objective optimization: a decade of progresses. International Journal of Automation and Computing, vol. 18, no. 2, pp. 155–169, 2021. doi: 10.1007/s11633-020-1253-0
    • Many real-world optimization problems involve multiple objectives; these are called multi-objective optimization problems (MOPs)[1]. The conflict between the objectives makes it impossible to find a single solution that optimizes all of them simultaneously. Searching for the best trade-off solutions, called Pareto-optimal solutions[2], is thus critical to a decision maker. Evolutionary algorithms (EAs) are able to approximate the whole set of Pareto-optimal solutions in a single run due to their population-based nature, and they make no particular assumptions about the problem, such as continuity or differentiability. Thus, the use of EAs to deal with MOPs has been extensively studied[3-5] and plenty of powerful multi-objective EAs (MOEAs) have been proposed[6-11].

      Scalability of MOEAs, which characterizes how algorithm performance changes with problem size, is not only a long-standing concern in the evolutionary computation community[12], but is also critical to whether MOEAs can be applied to broader real-world problems. In the context of MOPs, the number of decision variables and the number of objectives are the two key factors in problem size. Scalability with respect to the number of objectives, referred to as many-objective optimization, has drawn steady attention in past decades and a number of surveys are available[13, 14]. By contrast, although scalability with respect to the number of decision variables has attracted wide attention, there is no review paper in this regard. In recent years, there has been increasing interest in the scalability of MOEAs with respect to decision variables, as large-scale MOPs (i.e., MOPs with a large number of decision variables) appear widely in real-world applications. For example, the training of deep neural networks (DNNs) needs to take into account both empirical risks (such as training errors) and structural risks (such as network sparseness), while typical industrial DNNs often contain millions of weights to be trained[15-17]. Commercial promotion on social networks requires both influence maximization and cost minimization, while the number of nodes involved in real networks (such as Weibo, Facebook, Twitter, etc.) can be in the order of millions or even hundreds of millions[18, 19]. Problems recently faced in logistics scheduling[20, 21], software engineering[22], pattern mining[23], and many other fields also exhibit the characteristics of large-scale multi-objective optimization, leading to an urgent need for efficient and effective approaches. Nonetheless, most current practice of MOEAs has been restricted to MOPs with small-scale decision variables (normally, no more than 30). 
More recently, empirical studies have shown a severe decrease in efficiency of traditional MOEAs when increasing the number of decision variables[24, 25]. As a result, evolutionary computation for large-scale MOPs has attracted a lot of research effort in the past decade, which mainly focuses on two major problems:

      1) What are the difficulties associated with large-scale multi-objective optimization? The scalability of MOEAs is determined jointly by their search mechanisms and the problem properties. An in-depth analysis of the specific difficulties associated with different large-scale MOPs when adopting different MOEAs can benefit the evaluation of algorithms, and thereby facilitate the development of new and improved large-scale MOEAs. To this end, there has been some work to systematically explore the scalability of traditional MOEAs and put forward some discoveries about the challenges faced by the algorithms[26-28]. This paper briefly reviews recent efforts in this aspect.

      2) How to enhance the scalability of an MOEA? As mentioned above, research on designing MOEAs with high scalability originates from practical needs. To this end, basic ideas include simplifying large-scale MOPs via divide-and-conquer methodology[29] and dimensionality reduction[30] and improving the search ability of MOEAs by rebalancing the exploration and exploitation[28]. From these three aspects, this paper summarizes the advances in scalable algorithm designs. The main focus is on how characteristics of MOPs and scalability challenges are integrated into the design of highly scalable MOEAs.

      To summarize, this paper focuses on the scalability of MOEAs with respect to the number of decision variables and presents an extensive review of the progress on evolutionary computation for large-scale multi-objective optimization in the last decade. We first introduce the study of how well traditional MOEAs scale as the number of decision variables increases, together with the difficulties encountered by these algorithms, and then categorize recent developments in scalable MOEAs into three classes based on the key idea used: divide-and-conquer, dimensionality reduction and enhanced search-based approaches.

      The remainder of this paper is organized as follows. Section 2 introduces the background of large-scale multi-objective optimization. Section 3 briefly discusses the scalability of traditional MOEAs and challenges associated with large-scale multi-objective optimization. Section 4 summarizes the advances in scalable MOEA designs. Finally, the paper is concluded in Section 5 with some potential directions for future research.

    • In this section, the background of large-scale multi-objective optimization is presented, including the problem definition, the Pareto optimality, the scalability measurements, benchmark problems and the concepts of variable dependencies.

      An MOP can be mathematically formulated as

      $ \begin{array}{l} \min F({{x}}) = ({f_1}({{x}}), \cdots ,{f_M}({{x}}))\\ \;\;\;\;\;\;\;\;{{x}} = ({x_1}, \cdots ,{x_D}) \in \Omega \end{array}$

      where the decision vector ${{x}}$ consists of $D$ decision variables, the objective function vector $F:\Omega \to {{{{\rm{R}}}}^M}$ consists of $M$ objective functions, and $\Omega $ and ${{{{\rm{R}}}}^M}$ denote the decision and objective spaces, respectively. Large-scale MOPs refer to the subclass of MOPs with a large number of decision variables. In this paper, we define a large-scale MOP as an MOP with $M \geq 2$ and $D \geq 100$[29]. This is done to highlight the challenges posed by a large number of decision variables to existing MOEAs[24, 25]. As more scalable MOEAs are developed in the future, this threshold on $D$ would increase accordingly.
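As a concrete instance of this formulation, the ZDT1 benchmark (a member of the ZDT family used throughout the literature reviewed here) is a bi-objective MOP that scales to any $D$; a minimal sketch:

```python
import math

def zdt1(x):
    """ZDT1 benchmark: M = 2 objectives, scalable to any number of
    decision variables D, with each x_i in [0, 1]."""
    D = len(x)
    f1 = x[0]
    g = 1.0 + 9.0 * sum(x[1:]) / (D - 1)
    f2 = g * (1.0 - math.sqrt(f1 / g))
    return (f1, f2)

# The Pareto set of ZDT1 is x_1 in [0, 1] with x_i = 0 for i >= 2,
# independently of D, which is what makes the problem scalable.
print(zdt1([0.5] + [0.0] * 99))  # a Pareto-optimal point for D = 100
```

Because the Pareto front is invariant in $D$, the same problem instance can be re-used to probe an algorithm at 30, 1000, or 10 000 variables.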

      Given an MOP, the concept of Pareto optimality[1, 2] is defined as follows.

      Definition 1. Given two solutions $u,v$ and their corresponding objective vectors $F(u),F(v)$, $u$ dominates $v$ (denoted as $u \prec v$) if and only if $\forall i \in \{ 1, \cdots ,M\} $, ${f_i}(u) \leq {f_i}(v)$ and $\exists j \in \{ 1, \cdots ,M\} $, ${f_j}(u) < {f_j}(v)$.

      Definition 2. A solution ${{{x}}^*}$ is Pareto-optimal if and only if there exists no $u \in \Omega $ such that $u \prec {{{x}}^*}$. The set of all Pareto-optimal solutions is called the Pareto set. The corresponding objective vector set of the Pareto set is called the Pareto front.
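Definitions 1 and 2 can be checked mechanically (minimization is assumed throughout, as in the formulation above); a minimal sketch:

```python
def dominates(fu, fv):
    """True if objective vector fu dominates fv (Definition 1):
    no worse in every objective and strictly better in at least one."""
    return all(a <= b for a, b in zip(fu, fv)) and \
           any(a < b for a, b in zip(fu, fv))

def nondominated(objs):
    """Filter a list of objective vectors down to its non-dominated subset,
    i.e., the sample's approximation of the Pareto front."""
    return [u for u in objs if not any(dominates(v, u) for v in objs if v is not u)]

pts = [(1, 5), (2, 2), (3, 1), (4, 4)]
print(nondominated(pts))  # -> [(1, 5), (2, 2), (3, 1)]; (4, 4) is dominated by (2, 2)
```

This naive filter costs $O(N^2 M)$ comparisons for $N$ vectors, which is acceptable for illustration but is replaced by faster sorting schemes in practical MOEAs.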

      The goal of optimizing a large-scale MOP is to obtain an approximation set $P$ of the Pareto front (PF) with two characteristics: 1) High convergence: All solutions in $P$ are as close as possible to the Pareto front; 2) High diversity: All solutions in $P$ are as diverse as possible in the objective space[31]. To achieve such an optimization goal, the hypervolume indicator[32, 33] and the inverted generational distance (IGD)[34], which consider both convergence and diversity comprehensively, are commonly adopted to measure the quality of an approximation set.
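IGD, for instance, averages the distance from each point of a reference Pareto front to its nearest point in the approximation set, so it penalizes both poor convergence and gaps in coverage; a minimal sketch:

```python
import math

def igd(approx, ref_front):
    """Inverted generational distance: mean Euclidean distance from each
    reference-front point to its nearest neighbour in the approximation set.
    Lower is better; 0 means every reference point is matched exactly."""
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    return sum(min(dist(r, p) for p in approx) for r in ref_front) / len(ref_front)

ref = [(0.0, 1.0), (0.5, 0.5), (1.0, 0.0)]
print(igd(ref, ref))           # 0.0: a perfect approximation
print(igd([(0.5, 0.5)], ref))  # nonzero: the two extreme points are missing
```

Note that IGD requires a known reference front, which is why benchmark families with invariant, analytically known Pareto fronts are preferred for scalability studies.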

      The scalability of an MOEA refers to the trend of its performance as the number of decision variables grows. It can be measured by identifying how the performance of the given algorithm changes as the number of decision variables increases, where performance can be described in terms of three aspects: the quality of the obtained approximation set, the search efficiency, and the search behaviors of the given algorithm in the objective space. One way to evaluate the search efficiency is to calculate the computational effort (e.g., the number of objective function evaluations) required by the given algorithm to achieve a desired goal (e.g., an approximation set of expected quality). For example, the study in [24] employs the number of objective function evaluations needed by algorithms to produce a solution set whose hypervolume is larger than 90% of the hypervolume of the Pareto front. To study the search behaviors, one can plot the approximation sets obtained at different evolutionary stages in the objective space, or the evolutionary curves of quantified quality indicators.
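In the bi-objective case, the hypervolume used in such efficiency thresholds can be computed by sweeping the non-dominated set in order of the first objective and summing rectangles; a minimal sketch (minimization is assumed, with a reference point dominated by every solution; the cited studies use more general hypervolume algorithms for $M > 2$):

```python
def hypervolume_2d(front, ref):
    """Hypervolume (area) dominated by a mutually non-dominated 2-D set
    `front` relative to reference point `ref`, for minimization."""
    hv = 0.0
    prev_f2 = ref[1]
    for f1, f2 in sorted(front):      # ascending f1 implies descending f2
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv

# Two points against reference point (2, 2):
print(hypervolume_2d([(0, 1), (1, 0)], (2, 2)))  # -> 3.0
```

A scalability criterion such as the one in [24] then counts evaluations until `hypervolume_2d(P, ref)` exceeds the chosen fraction of the Pareto front's hypervolume.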

      Benchmark problems for assessing the scalability of MOEAs and designing large-scale MOEAs (i.e., MOEAs for large-scale multi-objective optimization) are often expected to be scalable in terms of the number of decision variables while keeping an invariant Pareto front. The Zitzler-Deb-Thiele (ZDT)[35], Deb-Thiele-Laumanns-Zitzler (DTLZ)[36], walking-fish-group (WFG)[37], CEC 2009 competition[38] and large-scale multi-objective and many-objective optimization problem (LSMOP)[39, 40] test function families are commonly used in the specialized literature. These problems are characterized by multimodality, complicated Pareto sets and Pareto fronts, separability between decision variables and correlation between decision variables and objectives, deception and epistasis[12], posing various difficulties for MOEAs to scale up to large-scale MOPs.

      Variable dependencies, including the separability and the control properties that characterize how a decision variable is correlated with convergence and diversity, are important aspects when considering scalability. A large-scale MOP is separable if and only if each variable can be optimized independently, while for non-separable problems there exist correlations between at least two decision variables[41]. Thus, it is beneficial to solve a large-scale MOP with high separability by decomposing it into several small-scale problems. The separability of some popular benchmarks has been studied recently[42, 43]. On the other hand, the decision variables of an MOP can be categorized into three types in terms of how they map to the objective space: position variables, distance variables and mixed variables[41]. If modifying a variable on its own only results in non-dominated objective function vectors, it is called a position variable. If modifying a variable on its own only results in dominating or dominated objective function vectors, it is called a distance variable. The remaining decision variables are called mixed variables. This property allows one to separate the convergence and diversity aspects of sets of solutions for an MOP.
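The control property of a single variable can be probed empirically in the black-box setting: perturb only that variable, evaluate, and inspect the dominance relations among the resulting objective vectors. A hedged sketch (the perturbation count, the helper names and the toy problem are illustrative assumptions, not a published procedure):

```python
import random

def dominates(fu, fv):
    return all(a <= b for a, b in zip(fu, fv)) and \
           any(a < b for a, b in zip(fu, fv))

def classify_variable(evaluate, x, i, lo=0.0, hi=1.0, n_perturb=8, seed=0):
    """Classify decision variable i as 'position', 'distance' or 'mixed'
    by perturbing it alone and comparing the objective vectors."""
    rng = random.Random(seed)
    ys = []
    for _ in range(n_perturb):
        x2 = list(x)
        x2[i] = rng.uniform(lo, hi)       # perturb variable i only
        ys.append(tuple(evaluate(x2)))
    pairs = [(a, b) for k, a in enumerate(ys) for b in ys[k + 1:]]
    comparable = any(dominates(a, b) or dominates(b, a) for a, b in pairs)
    incomparable = any(a != b and not dominates(a, b) and not dominates(b, a)
                       for a, b in pairs)
    if comparable and incomparable:
        return "mixed"
    return "distance" if comparable else "position"

f = lambda x: (x[0], 1 - x[0] + x[1])       # toy bi-objective problem
print(classify_variable(f, [0.5, 0.5], 0))  # x_0 shifts along the trade-off
print(classify_variable(f, [0.5, 0.5], 1))  # x_1 only pushes towards/away from the front
```

In the toy problem, perturbing $x_0$ produces mutually non-dominated vectors (a position variable), while perturbing $x_1$ produces a dominance chain (a distance variable).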

      It should be noted that although many-objective optimization ($M \geq 4$) and large-scale single-objective optimization ($M = 1, D \geq 100$)[44-46] are related to large-scale multi-objective optimization, current studies in these two directions do not yield effective MOEAs for large-scale MOPs. On one hand, methods effective in a high-dimensional objective space (i.e., many-objective optimization) may degrade in a high-dimensional decision space (i.e., large-scale multi-objective optimization), because the mappings between decision variables and objectives can be quite complicated[38, 47]. On the other hand, instead of searching for a single optimal solution as is done in large-scale single-objective optimization, MOPs aim at the Pareto set, implying a fundamental difference.

    • Generally, the main difficulty of MOEAs on large-scale MOPs lies in that the search space of a problem expands exponentially with the dimensionality. Correspondingly, the landscape of the search space may become more complicated and the conflicts between multiple objectives may be more serious. Such an expanded and complicated search space may quickly exceed the search ability of the existing MOEAs, and thus cannot be fully explored within a reasonable time budget. Therefore, it is critical to consider the capabilities of MOEAs to properly scale when increasing the number of decision variables.

      In the multi-objective research community, scalability with respect to the number of decision variables had rarely been considered before 2008. To our knowledge, the only two related works in this direction before then are the comparative study presented in [48], in which the performance of four MOEAs (the strength Pareto evolutionary algorithm (SPEA)[49], the memetic Pareto archived evolution strategy (M-PAES)[50], Ishibuchi and Murata's multiple-objective genetic local search (IMMOGLS)[5] and multiple-objective genetic local search (MOGLS)[51]) is investigated on multi-objective 0/1 knapsack problems with up to 750 decision variables, and a study on the scalability of multi-objective estimation of distribution algorithms (MOEDAs)[26]. Interest in studying the scalability of MOEAs has grown mainly since the systematic experimental studies presented in [24, 25], where eight representative MOEAs (the nondominated sorting genetic algorithm II (NSGA-II)[8], SPEA2[52], the Pareto envelope-based selection algorithm II (PESA-II)[53], the Pareto archived evolution strategy (PAES)[54], one multi-objective particle swarm optimizer (OMOPSO)[55], the multi-objective cellular genetic algorithm (MOCell)[56], generalized differential evolution 3 (GDE3)[57] and archive-based hybrid scatter search (AbYSS)[58]) are examined on large-scale benchmark problems with up to 2048 decision variables. By using the number of objective function evaluations required by an algorithm to reach an acceptable approximation of the Pareto front (i.e., an approximation set with a hypervolume larger than 98% of the hypervolume of the Pareto front) as the evaluation criterion of scalability, this work reveals the severe performance deterioration of the above eight MOEAs when increasing the number of decision variables from 8 to 2048. 
The scalability of the multi-objective evolutionary algorithm based on decomposition (MOEA/D) is examined in [59] by measuring the closeness of the obtained approximation set to the Pareto front under the same number of objective function evaluations. Unfortunately, the experiment results show that the efficacy of all the algorithms examined above decreases as the number of decision variables increases. Although OMOPSO generally performs best among the eight algorithms examined in terms of the efficiency in reaching the Pareto front, the number of objective function evaluations it requires can increase by more than 1000 times when the number of decision variables grows from 8 to 2048. Thus, these algorithms can hardly scale to large-scale MOPs efficiently.

      The deficiency of these traditional MOEAs has inspired further exploration of the possible challenges they face. For an in-depth analysis of the reasons behind their inadequate scalability, a systematic and comprehensive analysis is presented in [28]. It empirically studies the performance of three traditional MOEAs (NSGA-II, MOEA/D[9], and the S metric selection evolutionary multi-objective optimization algorithm (SMS-EMOA)[10]) and two MOEAs for large-scale MOPs (the weighted optimization framework-enhanced speed-constrained multi-objective particle swarm optimization (WOF-SMPSO)[60] and random embedding NSGA-II (ReNSGA-II)[30]) when applied to problems with up to 8192 decision variables from the ZDT, DTLZ, WFG and CEC 2009 competition benchmark sets. The experiment results suggest two important phenomena. First, the behaviors of the three traditional MOEAs can roughly be used as a classification indicator for the tested problems. More specifically, the problems can be categorized into three groups based on the analysis: convergence-focused, diversity-type I and diversity-type II problems. The detailed results can be found in [28]. Convergence-focused problems only require MOEAs to have stronger convergence, as a set of well-distributed solutions can be obtained relatively easily once several good solutions have been found. Such problems can thus be mitigated using techniques employed in large-scale single-objective optimization. Diversity-type I problems require MOEAs to have stronger diversification but ignore the correlation between position and distance functions. Diversity-type II problems pose a great challenge to the balance between diversification and convergence by considering a correlation between position and distance functions. The position and distance functions define the convergence and diversity aspects of sets of solutions for a problem[41]. 
For diversity-type problems, when the number of decision variables increases, achieving a good spread along the Pareto front becomes more difficult and the diversity of the algorithm deteriorates severely, which is called the diversity loss. Such diversity loss associated with diversity-type I problems is shown to be manageable by adopting NSGA-II, while the diversity loss associated with diversity-type II problems poses a great challenge for the three traditional MOEAs. A similar diversity loss is also reported for large-scale multi-objective distance minimization problems in [27, 61] by examining NSGA-II, SPEA2 and MOEA/D. Second, when applying the two recent large-scale MOEAs on these problems, it is found that their performance deteriorates rapidly with the increase of the number of decision variables especially on diversity-type II problems, which highlights the need for further attention to these problems.

      The separability of decision variables is a critical property for large-scale multi-objective optimization. It has been widely believed that fully-separable MOPs can be solved simply by optimizing each variable in a fully independent manner. Thus, some knowledge about separability can be conducive to algorithm design and can significantly alleviate the curse of dimensionality. Motivated by this, a simple but efficient algorithm called dimension-wise MOEA (DW-MOEA) is presented in [28], with which one can empirically analyze the separability of a black-box problem. It first divides the original $D$-dimensional MOP into $D$ exclusive 1-dimensional sub-MOPs, and then optimizes each sub-MOP separately. Indeed, experimental studies on popular benchmarks including ZDT, DTLZ and WFG show that DW-MOEA achieves significantly better efficiency than the representative NSGA-II, MOEA/D and SMS-EMOA on most separable MOPs. Thus, DW-MOEA can be easily implemented and employed as a quick test of whether it is worthwhile to develop more sophisticated algorithms. It should be noted that this does not mean separable MOPs are not worth studying. In fact, some of them can be difficult due to other problem characteristics such as multimodality[62]. In [26], the scalability of MOEDAs is examined on a set of boundedly-difficult additively-separable MOPs. The experiment results imply that even when the separability can be identified preliminarily, the computational cost of the examined MOEDAs can grow exponentially with the number of decision variables due to the combinatorial growth in the number of Pareto-optimal solutions.
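The dimension-wise idea can be sketched as a single coordinate pass: each variable is treated as a 1-dimensional subproblem and improved with all others fixed. This is an illustrative simplification of DW-MOEA (which co-evolves populations per sub-MOP), and the sampling budget and acceptance rule below are assumptions:

```python
import random

def dominates(fu, fv):
    return all(a <= b for a, b in zip(fu, fv)) and \
           any(a < b for a, b in zip(fu, fv))

def dimensionwise_descent(evaluate, x0, lo=0.0, hi=1.0, trials=20, seed=0):
    """Dimension-wise sketch: optimize each of the D variables as a 1-D
    subproblem while all other variables stay fixed. Only effective when
    the problem is (mostly) separable."""
    rng = random.Random(seed)
    x = list(x0)
    for i in range(len(x)):                 # one 1-D sub-MOP per variable
        best_x, best_f = x[i], tuple(evaluate(x))
        for _ in range(trials):
            x[i] = rng.uniform(lo, hi)      # candidate value for variable i
            f = tuple(evaluate(x))
            if dominates(f, best_f):        # accept only clear improvements
                best_x, best_f = x[i], f
            x[i] = best_x                   # revert to the incumbent
    return x
```

On a non-separable problem the same pass can stall, which is exactly what makes such a cheap probe informative about separability.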

      To summarize, this section reviews recent studies on scalability analysis and challenge analysis with respect to large-scale multi-objective optimization by taking into account different problem properties. It is expected that such investigations can facilitate the development of new and improved large-scale MOEAs, and can benefit the design and evaluation of algorithms for specific aspects.

    • To improve the scalability of MOEAs, the basic ideas include simplifying large-scale MOPs via divide-and-conquer methodology and dimensionality reduction, and improving the search ability of MOEAs by rebalancing exploration and exploitation. Among them, divide-and-conquer and dimensionality reduction approaches aim to divide or transform the original search space of an MOP so that an algorithm only needs to search in one or more subspaces. Since the subspaces are usually low-dimensional, such methods can alleviate the curse of dimensionality caused by the increase in the number of decision variables. On the other hand, to avoid the issue that some Pareto-optimal solutions fall outside the subspaces constructed via space division or transformation, which would result in performance deficiencies, the enhanced search-based approaches directly explore the original search space while being equipped with enhanced search abilities. These strategies have also been adopted to solve several applications of large-scale MOPs, as presented in Table 1. In this section, the progress of large-scale MOEAs is reviewed from these three classes respectively.

      Application                                                   Scale       Solution technique
      Vehicle routing problem[63]                                   288         Divide and conquer
      Capacitated arc routing problem[64]                           375         Divide and conquer
      Feature selection[65]                                         512         Divide and conquer
      Flight assignment[66]                                         1 664       Divide and conquer
      Public transport network design[67]                           2 000       Divide and conquer
      Resource allocation[68]                                       1 000 000   Divide and conquer
      Sparse regression[69]                                         1 080 000   Divide and conquer
      Engineering design problem[70]                                145         Enhanced search
      Emergency logistics scheduling[71]                            250         Enhanced search
      Software module clustering[72]                                401         Enhanced search
      Service restoration in large-scale distribution systems[73]   15 440      Enhanced search

      Table 1.  Applications of large-scale MOPs

    • In this type of large-scale MOEA, cooperative coevolution[29, 74, 75] has been one of the most popular approaches. Its main idea is to decompose the original high-dimensional decision vector of the MOP into multiple low-dimensional sub-components, which are expected to be solvable more easily. More specifically, each decision variable of the problem is assigned to a species population based on some decomposition strategy, so that each species population optimizes a particular part of the large-scale MOP. To assemble a full solution to the original MOP, any individual from a species population is evaluated by combining it with collaborating individuals from the other species populations. Without prior knowledge about the objective functions, a key challenge for this kind of algorithm is how to decompose the problem while keeping the correlations among different species populations minimal. If two or more highly-correlated decision variables are assigned to different species populations and optimized separately, the search direction of the algorithm may be misled, resulting in performance degradation. For this reason, various cooperative coevolutionary MOEAs with different decomposition strategies have been proposed for large-scale multi-objective optimization.

      Generally, the currently available decomposition approaches for large-scale multi-objective optimization can be divided into two groups: those that decompose a decision vector in a random way, and those that decompose a decision vector based on an in-depth analysis or learning of problem properties. Random decomposition-based methods are mainly motivated by the observation that dividing the decision vector into random groups can provide better results than a deterministic scheme when dealing with non-separable problems[76]. Moreover, this kind of method can be further categorized into static and dynamic methods, where dynamic methods are proposed to further increase the chance of optimizing highly-correlated decision variables together. The pioneering work in this direction started the research on large-scale MOEAs by introducing the first framework of cooperative coevolution adopted within GDE3, namely cooperative coevolutionary GDE3 (CCGDE3)[29]. The decision variables are divided into multiple groups in a random way, and the cooperation among the groups takes place in the objective function evaluation, where a random individual is obtained from each species population and the individuals are then combined to form a complete solution. Results on ZDT problems with up to 5000 decision variables verify the efficiency of CCGDE3. A similar random decomposition-based method adopted within MOEA/D is proposed in [59] and verified on DTLZ problems with up to 1200 decision variables. However, since it has been shown that the ZDT and DTLZ problems are mainly separable, the effectiveness of these methods on non-separable MOPs needs further analysis. A random-based dynamic grouping strategy combined with MOEA/D for large-scale multi-objective optimization is presented in [77]. It contains a random group size selection strategy in which a group size is selected probabilistically during the evolution process. The efficacy of the resultant algorithm is examined on WFG and CEC 2009 competition problems with up to 1000 decision variables, in comparison with two MOEA/D variants.
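The random grouping and cooperative evaluation described above can be sketched as follows (the group count, population handling and collaborator selection are simplified illustrations in the spirit of CCGDE3, not the full algorithm):

```python
import random

def random_grouping(n_vars, n_groups, rng):
    """Shuffle variable indices and split them into (nearly) equal groups."""
    idx = list(range(n_vars))
    rng.shuffle(idx)
    size = n_vars // n_groups
    return [idx[g * size:(g + 1) * size] for g in range(n_groups - 1)] + \
           [idx[(n_groups - 1) * size:]]

def cooperative_evaluate(evaluate, groups, subpops, member, g, rng):
    """Evaluate `member` (values for the variables of group g) by combining
    it with one random collaborator from every other species population."""
    n_vars = sum(len(grp) for grp in groups)
    x = [0.0] * n_vars
    for h, grp in enumerate(groups):
        values = member if h == g else rng.choice(subpops[h])
        for pos, var in enumerate(grp):
            x[var] = values[pos]          # scatter group values into x
    return evaluate(x)

rng = random.Random(42)
groups = random_grouping(8, 2, rng)
print(groups)  # two disjoint groups that together cover all 8 indices
```

The key design decision, as discussed above, is how collaborators are chosen; here a single random collaborator per group is used, which keeps the cost of one evaluation equal to one call of the original objective functions.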

      Analysis-based decomposition approaches aim at extracting useful knowledge for the design of a suitable decomposition. Two types of variable dependencies are commonly considered in the specialized literature[41]. One is the separability, which considers correlations among different decision variables; the other is the control property towards convergence and diversity, which considers correlations between decision variables and objective functions. As these properties are often hard to obtain in real-world problems, an empirical test method based on decision variable analyses is integrated into an MOEA, namely MOEA/DVA[78]. It first decomposes the decision vector into two groups based on the control properties of the decision variables, and then further divides the group consisting of distance variables into several groups via an interdependence variable analysis. In this way, a large-scale MOP can be effectively decomposed into smaller and simpler problems, which is conducive to solving separable and partially separable MOPs. However, the analyses are conducted by studying dominance-based relationships between sampled solutions. Thus, the method suffers from prohibitively high computation costs, and its benefit deteriorates rapidly as the number of decision variables increases. Concretely, the total number of objective function evaluations required for the variable analyses is $D \times NCA + 1.5 \times NIA \times D \times (D - 1)$, where NCA indicates the number of evaluations for control analysis and NIA indicates that for interdependence analysis. This makes it difficult for MOEA/DVA to scale up to large-scale MOPs within a limited computation budget. A new variable analysis method based on clustering to study the control property towards convergence and diversity is presented in [79]. 
It adopts the k-means method with features measured by the angles between sample solutions and the direction of convergence, where smaller angles indicate more contributions to convergence and larger angles to diversity. This angle-based clustering method is shown to be able to reduce the computation costs consumed for variable analysis and the corresponding large-scale many-objective evolutionary optimization (LMEA) could provide better scalability. The total number of objective function evaluations required for the variable analyses is $D \times nSel \times nPel + 1.5 \times nCor \times D \times (D - 1)$, where nSel indicates the number of selected solutions for variable clustering, nPel indicates the number of perturbations on each solution for variable clustering and nCor indicates the number of selected solutions for variable interaction analysis. Nevertheless, the computation costs required for variable analysis also makes it difficult for LMEA to scale up to large-scale MOPs.
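The angle-based feature and its clustering can be illustrated with a much-simplified sketch (a two-point angle estimate and a 1-D 2-means, rather than the full procedure of [79]; all names are ours):

```python
import math

def variable_angle(objs):
    """Angle (degrees) between the objective-space direction induced by
    perturbing one variable and the convergence direction (the normalized
    all-ones vector). A crude two-point estimate of the LMEA-style feature:
    small angles suggest a convergence-related variable."""
    d = [b - a for a, b in zip(objs[0], objs[-1])]
    norm_d = math.sqrt(sum(v * v for v in d)) or 1.0
    conv = 1.0 / math.sqrt(len(objs[0]))
    cos = abs(sum(di * conv for di in d)) / norm_d
    return math.degrees(math.acos(min(1.0, cos)))

def cluster_variables(angles, iters=20):
    """1-D 2-means on the angles: the low-angle cluster is treated as
    convergence-related, the high-angle cluster as diversity-related."""
    lo, hi = min(angles), max(angles)
    groups = ([], [])
    for _ in range(iters):
        groups = ([], [])
        for i, a in enumerate(angles):
            groups[abs(a - lo) > abs(a - hi)].append(i)
        if groups[0]:
            lo = sum(angles[i] for i in groups[0]) / len(groups[0])
        if groups[1]:
            hi = sum(angles[i] for i in groups[1]) / len(groups[1])
    return groups
```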

      Unlike the above cooperative coevolutionary approaches that optimize different groups of decision variables independently, WOF, which optimizes the groups in a different way, is presented in [60, 80]. The main idea of WOF is to identify one promising reduced subspace based on variable grouping and problem transformation. It first decomposes the decision variables into several groups, then assigns a weight to each group by applying the same weight value (in terms of multiplication) to every variable in the same group. After that, it adopts a problem transformation scheme that extends the concept of weighting the decision variables[81] to multiple objectives. Given well-defined grouping and transformation functions, this weighting scheme is able to find a desirable reduced subspace. This is non-trivial, since such information is hardly available in advance for real-world problems. To solve this issue, WOF adopts an optimization-based method to search for the best configurations: it formulates the above process as a weighting optimization problem that searches for the best weight vector. Owing to the variable grouping, the weighting optimization problem has low dimensionality, and thus the sub-problems faced by WOF are small-scale. It is worth mentioning that, instead of optimizing each group of decision variables independently as is done in cooperative coevolution-based large-scale MOEAs, the grouping in WOF serves to define a set of weight variables. Based on the weighting scheme, WOF optimizes the decision variables as a whole and is thus applicable to non-separable MOPs. To overcome the drawback that the weighting scheme limits the search space, WOF alternates between two phases of optimization: the weighting optimization step and the normal multi-objective optimization step. The performance of WOF has been comprehensively studied by considering different grouping mechanisms, problem transformation functions and basic multi-objective optimizers. More concretely, four grouping mechanisms (random grouping, linear grouping, ordered grouping and differential grouping[82]), three transformation functions (product transformation, p-value transformation, and interval-intersection transformation) and three classical MOEAs (SMPSO, NSGA-II, GDE3) are examined to verify the effectiveness of WOF. These WOF-based algorithms are applied to solve ZDT, DTLZ, CEC 2009 competition and WFG problems with 1000 decision variables and show superiority over state-of-the-art algorithms, especially in terms of efficiency. Following WOF, the problem transformation scheme has been further studied, and a novel problem reformulation method that extends the unidirectional search to a bidirectional search is proposed in [83].
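The product-transformation idea at the core of this scheme can be sketched as follows (a simplified illustration with hypothetical names; bound handling and the other transformation functions of [60, 80] are omitted):

```python
def transformed_solution(x_prime, weights, groups):
    """Product transformation: every variable in group g of the pivot
    solution x' is multiplied by that group's weight w_g, so the
    low-dimensional weight vector spans a reduced subspace around x'."""
    x = list(x_prime)
    for w, group in zip(weights, groups):
        for idx in group:
            x[idx] = w * x[idx]
    return x

def weight_objectives(weights, x_prime, groups, objectives):
    """Objectives of the low-dimensional weighting problem: optimizing the
    weight vector moves x' through the reduced subspace."""
    x = transformed_solution(x_prime, weights, groups)
    return [f(x) for f in objectives]
```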

      There is also some work studying the influence of variable grouping inside genetic operators for large-scale multi-objective optimization. The work in [84] investigates the effect of grouping mechanisms inside mutation operators based on the well-known polynomial mutation[85]. Three types of mutation operators are proposed: the linked polynomial, the grouped polynomial and the grouped linked polynomial mutations. The linked polynomial mutation binds together the amount of change of the mutated variables per individual, assuming that it is beneficial to alter them by the same amount. The grouped polynomial mutation separates variables into distinct groups and then applies mutation to a group as a whole, assuming that highly correlated decision variables should usually be altered together. The grouped linked polynomial mutation combines the two concepts. Two popular grouping mechanisms, i.e., ordered grouping and differential grouping, are examined, and the resultant mutation operators are adopted within NSGA-II and SMPSO. Experimental results on WFG problems with 1000 decision variables indicate that the proposed mutation operators can significantly improve the performance of existing MOEAs for large-scale multi-objective optimization.
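A much-simplified sketch of the grouped mutation idea follows (one mutation decision and one shared perturbation per group; this compresses Deb's polynomial mutation and is not the exact operator of [84], and all names are ours):

```python
import random

def poly_delta(eta_m, rng=random):
    """Perturbation factor of Deb's polynomial mutation (simplified form)."""
    u = rng.random()
    if u < 0.5:
        return (2.0 * u) ** (1.0 / (eta_m + 1.0)) - 1.0
    return 1.0 - (2.0 * (1.0 - u)) ** (1.0 / (eta_m + 1.0))

def grouped_polynomial_mutation(x, groups, lower, upper, eta_m=20.0, rng=random):
    """One mutation decision and one delta are drawn per group and applied
    to every variable in the group, so grouped variables are altered together."""
    y = list(x)
    for group in groups:
        if rng.random() < 1.0 / len(groups):   # per-group mutation rate
            delta = poly_delta(eta_m, rng)
            for i in group:
                y[i] = min(upper[i], max(lower[i],
                                         y[i] + delta * (upper[i] - lower[i])))
    return y
```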

      In recent years, with the popularization of computing resources such as cloud computing platforms and high-performance computing servers, some large-scale MOEAs based on distributed or parallel mechanisms have also been proposed. By using distributed resources or parallel approaches to handle the multiple sub-problems obtained by decomposition, the computational efficiency of an algorithm can be increased significantly[86]. A message passing interface-based distributed parallel cooperative coevolutionary MOEA (DPCCMOEA) is presented in [87]. It first decomposes the decision variables into several groups based on decision variable analyses similar to those in [78] and then optimizes each group with a species subpopulation. The optimization of these groups is carried out on a distributed platform. Within each species subpopulation, the individuals are further divided into several sets, each assigned to a CPU, to achieve a higher degree of parallelism. Experimental studies on DTLZ and WFG problems with 1000 decision variables show that DPCCMOEA achieves a significant improvement in computation time, with more than 200× speedups compared with CCGDE3 and MOEA/DVA when using the Tianhe-2 supercomputer with 360 cores. The work in [88] addresses large-scale MOPs by coevolutionary islands with overlapping search spaces. It follows the basic framework of cooperative coevolutionary MOEAs, with each species subpopulation corresponding to an island. Two kinds of coevolution among islands are investigated: coevolution with disjoint islands, where individuals in different subpopulations explore disjoint subsets of decision variables, and coevolution with overlapping islands, where overlap of the optimized decision variables is allowed. Experiments on ZDT problems with up to 2048 decision variables show the efficiency of the proposed methods and also show that, as the number of islands increases, the overlapping method significantly outperforms the disjoint one.

      Generally, divide-and-conquer based large-scale MOEAs have been shown to be effective on most large-scale MOPs in which the decision variables are uncorrelated or only weakly correlated. The problem is that many real-world MOPs have highly complicated correlations among decision variables; the decomposition then becomes harder to perform, and the benefit of spending large amounts of computing resources on analyzing correlations may be marginal.

    • The main idea of large-scale MOEAs based on dimensionality reduction is to transform the original decision space into a low-dimensional subspace, so that the original high-dimensional problem can be solved in the low-dimensional space. Two main difficulties in developing this kind of approach are: 1) how to ensure the consistency between the Pareto set of the transformed problem and that of the original problem; 2) how to map a solution optimized in the transformed space back to the original problem. In this section, some recent dimensionality reduction approaches for large-scale multi-objective optimization are reviewed.

      The intrinsic dimension of a problem, which characterizes the number of decision variables that significantly affect the objective functions, is a critical factor of a large-scale MOP. If most decision variables of a given large-scale MOP do not change the objective functions markedly, the problem is said to have low effective dimensionality[30]. For this kind of problem, if an MOEA is able to find the low-dimensional effective subspace, it is possible to improve efficiency significantly. Random embedding is a dimensionality reduction technique that can exploit low effective dimensionality without knowing in advance which dimensions are important; its effectiveness and efficiency have been demonstrated on large-scale single-objective optimization problems with low intrinsic dimensionality[89, 90]. Inspired by this, the random embedding technique is extended to large-scale multi-objective optimization in [30]. A general, theoretically-grounded yet simple approach named multi-objective optimization via random embedding (ReMO) is presented. ReMO can scale any derivative-free MOEA to large-scale non-convex MOPs with low effective dimensions using random embedding. It performs the optimization in the low-dimensional effective subspace and evaluates the objective functions of a solution by embedding it into the original high-dimensional search space. Theoretical and experimental results on modified ZDT problems with 30 effective dimensions verify the scalability of ReMO for large-scale MOPs with low intrinsic dimensionality. The drawback is that its performance may deteriorate rapidly as the effective dimensionality becomes higher.
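The random embedding idea can be sketched in a few lines (a hedged illustration of the ReMO evaluation step; box-constraint clipping and the theoretical conditions of [30] are omitted, and all names are ours):

```python
import random

def make_embedding(d_high, d_low, rng=random):
    """Random embedding matrix A with i.i.d. standard normal entries."""
    return [[rng.gauss(0.0, 1.0) for _ in range(d_low)] for _ in range(d_high)]

def embed(A, y):
    """Map a low-dimensional point y back to the original space as A*y."""
    return [sum(a * yj for a, yj in zip(row, y)) for row in A]

def embedded_objectives(A, y, objectives):
    """Evaluate a low-dimensional candidate by first embedding it into the
    original high-dimensional search space."""
    x = embed(A, y)
    return [f(x) for f in objectives]
```

Any derivative-free MOEA can then optimize over the low-dimensional `y` while all evaluations happen in the original space.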

      According to the Karush-Kuhn-Tucker condition[91], the Pareto-optimal solutions of a continuous M-objective optimization problem form an (M−1)-dimensional piecewise continuous manifold under some mild conditions, e.g., when the problem is differentiable[92]. That is, the original search space is reducible under such conditions, since M is typically much smaller than the number of decision variables D. Based on this regularity property, an unsupervised learning method that produces a low-dimensional representation of training points from a high-dimensional input space is presented in [93], namely the Pareto-optimal subspace learning-based MOEA (MOEA/PSL). It focuses on sparse large-scale MOPs, in which most decision variables of the Pareto-optimal solutions are zero. Unsupervised neural networks are adopted to solve sparse large-scale MOPs by learning the Pareto-optimal subspace: a sparse distribution of the decision variables is learned by a restricted Boltzmann machine from the decision variables of the non-dominated solutions, and a compact representation is learned by a denoising autoencoder. The combination of the sparse distribution and the compact representation is regarded as an approximation of the Pareto-optimal subspace, which allows the optimization to be performed in the learnt low-dimensional subspace and the offspring solutions to be mapped back to the original search space by the two neural networks. Experiments on sparse large-scale MOPs with 0.1 sparsity and 10000 decision variables verify the efficiency of MOEA/PSL on this particular kind of large-scale MOPs.

      Generally, the dimensionality reduction technique can improve the scalability of an MOEA given a suitable transformation. However, finding such a transformation is non-trivial, since the properties of real-world problems are usually unknown. Thus, when applied to problems that are inconsistent with the assumptions of the employed transformation, the effectiveness of this kind of algorithm is unclear.

    • Different from the divide-and-conquer and dimensionality reduction-based approaches, which improve scalability by constructing low-dimensional spaces via division or transformation, enhanced search-based large-scale MOEAs aim to improve the search ability of MOEAs by rebalancing diversification and convergence. This kind of algorithm is suitable for large-scale MOPs, or for the sub-problems arising in division and transformation, that are difficult to decompose or to reduce in dimensionality. For example, even additively-separable MOPs can pose great challenges to the scalability of MOEAs[26]. Thus, an enhanced search ability is critical for large-scale multi-objective optimization.

      As shown in Section 2, existing MOEAs deteriorate severely and suffer a diversity loss as the number of decision variables increases, especially on MOPs with highly-correlated decision variables. Motivated by this observation, a scalable MOEA with an enhanced diversification mechanism is developed in [28]. Its main idea is to encourage diversity by forcing the search toward different parts of the Pareto front. This is non-trivial, because the search space grows rapidly and the mappings between the Pareto set and the Pareto front are complicated. A new solution generator based on a novel dual local search (DLS) mechanism is employed to address this issue. In addition to the normal population, which accounts for the overall performance of the algorithm in both diversity and convergence, an external archive is provided for the new solution generator to emphasize the diversification ability. DLS is conducted on the archive to generate a set of diverse new solutions. Two key points shape the DLS: one is to search efficiently in the high-dimensional decision space, and the other is to simultaneously maintain diversity in the objective space. First, considering that frequently changing the search direction of an individual may reduce its search efficiency, an indicator-based local search is employed that limits the feasible search space of an individual based on the local improvement of a certain indicator. Second, to ensure that different areas of the Pareto front can be fully explored, a local search is also employed in the objective space; that is, the individuals in the archive are explored in a round-robin way to avoid the diversity loss caused by local convergence. The synergy of the two parts constitutes the DLS mechanism. DLS is then integrated with the classical indicator-based SMS-EMOA to achieve a better balance between convergence and diversity, leading to the resultant DLS-MOEA. Experimental studies on ZDT, DTLZ, WFG and CEC 2009 competition problems with up to 8192 decision variables verify the competitiveness of DLS-MOEA compared with a number of state-of-the-art algorithms and its advantage in balancing diversification and convergence. In particular, the results on the CEC 2009 competition problems show the superiority of DLS-MOEA in terms of diversification ability.

      Gradient-based methods are possible choices for designing local search operators; they make the search more directional but require the objective functions to be differentiable. To exploit this advantage, gradient-based information is employed in [94] for better scalability. A descent direction is defined as a unit vector that intuitively represents a decrement in all objectives, or in most of them while keeping the remaining ones unchanged. The gradient-based information is integrated with MOEAs, resulting in a two-stage algorithm named gradient-based multi-objective evolutionary strategy (GBMES). The first stage finds a small set of Pareto-optimal solutions, and the second reconstructs the entire front starting from these few points. The gradient-based method performs a local search on a small number of superior solutions with the aim of accelerating the search toward the Pareto front. Experimental studies on DTLZ2 with up to 100 decision variables verify the scalability of GBMES.
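One simple way to construct such a direction is to negate the sum of the normalized objective gradients (a common construction in multi-objective gradient methods, not necessarily the exact definition used by GBMES; names are ours):

```python
import math

def descent_direction(gradients):
    """Negated, renormalized sum of the normalized objective gradients.
    This decreases all objectives whenever the normalized gradients
    do not cancel out."""
    n = len(gradients[0])
    d = [0.0] * n
    for g in gradients:
        norm = math.sqrt(sum(v * v for v in g)) or 1.0
        for i in range(n):
            d[i] -= g[i] / norm
    norm_d = math.sqrt(sum(v * v for v in d))
    return [v / norm_d for v in d] if norm_d > 0 else d

def local_search_step(x, direction, step):
    """Move a solution a small step along the descent direction."""
    return [xi + step * di for xi, di in zip(x, direction)]
```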

      Unlike most works, which obtain a point in the objective space by calculating the objective function values of a decision vector, a model-based method for representing and searching non-dominated solutions is presented in [95], so that a solution can be obtained through direct sampling in the objective space. Its main idea is to construct Gaussian process-based inverse models that map all found non-dominated solutions from the objective space to the decision space. These inverse models are then used to generate new solutions by sampling directly in the objective space. Experimental results on WFG and modified ZDT and DTLZ problems with up to $10^4$ decision variables show the robust search performance of this method. The drawback is that, since the problem is severely simplified to facilitate inverse modeling, its performance on more complicated or larger-scale problems may be less promising.

      There are also some attempts to customize search operators to improve the scalability of certain MOEAs[96, 97]. The work in [98] focuses on a specific type of large-scale problems named large-scale sparse MOPs, where most decision variables of the optimal solutions are zero. To deal with this kind of problem and improve the sparsity of the generated solutions, a new population initialization strategy and genetic operators by taking the sparse nature of the Pareto optimal solutions into consideration are proposed. A test suite for large-scale sparse multi-objective optimization is also presented. Experiments on the test suite with up to 1000 decision variables and four application examples (feature selection, pattern mining, critical node detection and neural network training problems) with up to 1241 decision variables show its superiority in solving large-scale sparse MOPs.
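A sparsity-aware initialization in this spirit can be sketched as follows (a simplified illustration of the idea, not the exact operator of [98]; the non-zero budget `num_vars // 10` is our arbitrary choice):

```python
import random

def sparse_init(pop_size, num_vars, lower, upper, rng=random):
    """Each individual keeps only a small random subset of non-zero
    variables, reflecting the prior that most variables of Pareto-optimal
    solutions of sparse MOPs are zero."""
    population = []
    for _ in range(pop_size):
        x = [0.0] * num_vars
        k = rng.randint(1, max(1, num_vars // 10))   # few non-zero entries
        for i in rng.sample(range(num_vars), k):
            x[i] = rng.uniform(lower[i], upper[i])
        population.append(x)
    return population
```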

      Parallel large-scale MOEAs have also been studied within this category. Environmental selection operators of MOEAs play an important role in balancing convergence and diversity. Usually, they are difficult to parallelize, as the selection is mainly performed after all candidates have been evaluated. Thus, this process can be computationally expensive for large-scale multi-objective optimization. To improve efficiency, a novel parallel framework, namely the parallel EA (PEA), which separates the environmental selection operator from the entire evolutionary process, is proposed in [99]. The main idea is to separate the convergence and diversity aspects of environmental selection. More specifically, a series of independent sub-populations are employed and optimized in parallel to pursue a high convergence ability, and only a few representative solutions from different sub-populations are selected into an archive to emphasize the diversification ability. The control properties, including the concept of distance and position variables, are used to remove the dependencies among the sub-processes (sub-populations) and to improve the parallelization of the algorithm. Experimental results on the LSMOP benchmark problems with up to 1039 decision variables show the superiority of PEA in terms of convergence, diversity and speedup.
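The parallel convergence step can be sketched with a thread pool (a loose illustration of the idea of independently converging sub-populations; the inner optimizer is a trivial accept-if-better perturbation, not the actual PEA machinery, and all names are ours):

```python
import random
from concurrent.futures import ThreadPoolExecutor

def evolve_subpop(subpop, objective, generations=10, rng=None):
    """Independently push one sub-population toward convergence using a
    trivial accept-if-better perturbation on each individual."""
    rng = rng or random.Random()
    for _ in range(generations):
        for i, x in enumerate(subpop):
            y = [v + rng.gauss(0.0, 0.1) for v in x]
            if objective(y) < objective(x):
                subpop[i] = y
    return subpop

def parallel_convergence_step(subpops, objective):
    """Sub-populations pursue convergence in parallel; a separate archive
    step would then pick representatives to maintain diversity."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda sp: evolve_subpop(sp, objective), subpops))
```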

    • In this paper, a decade of progress in evolutionary computation for large-scale multi-objective optimization is reviewed. First, we summarize the scalability analyses of traditional MOEAs and the challenges they face when applied to MOPs with a large number of decision variables. Second, the MOEAs for large-scale MOPs are divided into three categories based on the mechanism for improving the scalability of an algorithm: large-scale MOEAs based on divide-and-conquer, large-scale MOEAs based on dimensionality reduction, and enhanced search-based large-scale MOEAs. The research progress within each category is reviewed and discussed. In general, large-scale MOEAs, as an emerging research direction, are still in an early stage of development. Even though a number of large-scale MOEAs have been proposed in recent years, many open problems remain. Some directions that we believe are worth investigating within this area are provided in the following.

      Large-scale MOP analysis system. The above-mentioned scalability and algorithmic characteristic analyses imply that different types of large-scale MOPs require different types of large-scale MOEAs. A system that can analyze the characteristics of a large-scale MOP systematically and comprehensively would be conducive to providing fast algorithm design recommendations for large-scale black-box MOPs and to achieving the best efficiency for a given problem. Although there are some preliminary attempts in [25, 28], a systematic analysis is still lacking, and thus this direction is worthy of further research. On the one hand, although characteristics of MOPs, such as whether an evenly distributed sample of parameter vectors in the search space maps to an evenly distributed set of objective vectors in fitness space (i.e., bias), many-to-one mappings, and dissimilar parameter domains, have been widely studied[36-38], they are rarely studied from the perspective of algorithm scalability. Their respective impacts on the scalability of an algorithm are worthy of in-depth analysis. On the other hand, for characteristics known to have a key impact on algorithm scalability, a tool that can detect them can provide prior knowledge about a black-box problem. Therefore, research on such detection methods can benefit the design of algorithms.

      In-depth analyses and understandings of the effectiveness of existing methods. The current practice of validating large-scale MOEAs is mainly based on empirical studies on artificially constructed test problems. Although these problems can reflect the difficulties of MOPs to some extent, they are not specifically designed to study the scalability of algorithms. Therefore, a test problem set that can well reflect the behavior of algorithms in this respect is important for research on algorithm scalability. Moreover, without a clear understanding of the difficulties that a real-world large-scale MOP presents to an MOEA, the characteristics of the existing test problems in the context of large-scale multi-objective optimization are not well understood, and it is difficult to assess the validity of empirical studies conducted on them. To draw accurate conclusions, it is imperative that the test problems employed be well understood and sufficiently representative of real-world problems. On the other hand, the theoretical analysis of MOEAs has been scarcely studied, and most existing work focuses on relatively simple discrete optimization problems such as pseudo-Boolean functions[100, 101]. Therefore, more in-depth analyses and understandings of large-scale MOEAs are needed. To be specific, three aspects could be studied progressively. First, in addition to MOP benchmark problems, further studies of the performance of existing algorithms on real-world large-scale MOPs are needed. Results observed from these studies, together with prior knowledge of the tested problems, can be used to reveal new challenges induced by the increase in the number of decision variables. Second, an in-depth analysis of these challenges should be conducted to discover new characteristics that are important to the scalability of an algorithm but are currently unknown. Such analyses would lead to a test problem set tailored for scalability studies. Third, on the basis of these analyses and the newly constructed test problems, the scalability of existing algorithms can be reassessed, resulting in a systematic understanding of the advantages and disadvantages of existing methods.

      Multi-method fusion. Many existing large-scale MOEAs are not mutually exclusive. For example, many enhanced search-based approaches can be used as optimizers for the sub-problems in decomposition-based approaches. However, how to exploit the synergy of different methods and integrate them to achieve better scalability has rarely been studied. Given a large-scale MOP, the difficulties involved in the problem can be analyzed systematically through the above analysis system. Multiple solution strategies could then be selected accordingly from existing approaches, or designed from scratch. Finally, an algorithm that integrates these strategies adaptively can be developed specifically for the target problem.

    • This work was supported by the Natural Science Foundation of China (Nos. 61672478 and 61806090), the National Key Research and Development Program of China (No. 2017YFB1003102) , the Guangdong Provincial Key Laboratory (No. 2020B121201001), the Shenzhen Peacock Plan (No. KQTD2016112514355531), the Guangdong−Hong Kong−Macao Greater Bay Area Center for Brain Science and Brain-inspired Intelligence Fund (No. 2019028), the Fellowship of China Postdoctoral Science Foundation (No. 2020M671900), and the National Leading Youth Talent Support Program of China.

    • This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.

      The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

      To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
