
CEC’17 Competition on Evolutionary Multi-task Optimization
IEEE Congress on Evolutionary Computation 2017
June 5–8, Donostia - San Sebastián, Spain
http://www.cec2017.org/

(This competition is in conjunction with CEC’17 Special Session on Memetic Computing)

The winner, runner-up and third-place entrant will each receive a certificate, along with prizes of 300 USD, 200 USD and 100 USD, respectively.

Important Dates
Submission Deadlines:
  • Special Session: Jan 30, 2017
  • Competition: May 15, 2017

OVERVIEW AND AIM

Humans possess a remarkable ability to manage and execute multiple tasks simultaneously, e.g., talking while walking. This multitasking capability has inspired computational methodologies that tackle multiple tasks at the same time, leveraging commonalities and differences across tasks to solve the component tasks more effectively and efficiently than when dealing with them separately. As a well-known example, multi-task learning [1] is a very active subfield of machine learning in which multiple learning tasks are performed together using a shared model representation, so that the relevant information contained in related tasks can be exploited to improve the learning efficiency and generalization performance of task-specific models.

Multi-task optimization (MTO) [2]-[5] is a newly emerging research area in the field of optimization, which investigates how to solve multiple optimization problems at the same time effectively and efficiently. In the multitasking scenario, solving one optimization problem may assist in solving other optimization problems (i.e., synergetic problem-solving) if these problems share commonality and/or complementarity in terms of optimal solutions and/or fitness landscapes. As a simple example, if several problems have the same globally optimal solution but distinct fitness landscapes, obtaining the global optimum of any one of them solves the others as well. Recently, an evolutionary MTO paradigm known as evolutionary multitasking [3], [4] was proposed to exploit the potential of evolutionary algorithms (EAs), equipped with a unified solution representation space, for MTO. As population-based optimizers, EAs feature the Darwinian survival-of-the-fittest principle and nature-inspired reproduction operations, which inherently promote implicit knowledge transfer across tasks during problem-solving. The superiority of this evolutionary multitasking framework over the traditional approach of solving each task independently has been demonstrated on synthetic and real-world MTO problems using a multi-factorial evolutionary algorithm (MFEA) [3], [5] developed under this framework.

Evolutionary multitasking opens up new horizons for researchers in the field of evolutionary computation. It provides a promising means of dealing with the ever-increasing number, variety and complexity of optimization tasks. More importantly, rapid advances in cloud computing may eventually turn optimization into an on-demand service hosted on the cloud, as illustrated in Fig. 1. In such a scenario, a variety of optimization tasks would be executed simultaneously by the service engine, where evolutionary multitasking may become the technical backbone that harnesses the underlying synergy between multiple tasks to provide service consumers with faster and better solutions while generating higher economic gains for the service provider.

Figure 1: Optimization as a Cloud Service: A Promising Application Scenario of Evolutionary Multitasking

The promising perspectives and encouraging preliminary success of evolutionary MTO call for intensive research efforts in this new area. However, there is a lack of unified MTO test suites to facilitate research from both algorithmic and theoretical perspectives. This motivated us to design two suites of MTO benchmark problems [6], [7] for single-objective and multi-objective continuous optimization tasks, respectively.

This competition aims to promote research advances in evolutionary MTO. The two developed test suites are to be used as a common platform to enable fair and straightforward performance evaluation and comparison.

TEST SUITES

Single-objective and multi-objective continuous optimization have been intensively studied in the evolutionary optimization community, where many well-known test suites are available. As a preliminary attempt, we have designed two MTO test suites [6], [7] for single-objective and multi-objective continuous optimization tasks, respectively.

The test suite for multi-task single-objective optimization (MTSOO) [6] contains nine MTO benchmark problems, where each problem consists of two single-objective continuous optimization tasks that bear certain commonality and complementarity in terms of the global optimum and the fitness landscape. These nine MTO problems possess different degrees of latent synergy between their two component tasks.

The test suite for multi-task multi-objective optimization (MTMOO) [7] includes nine MTO benchmark problems, where each problem consists of two multi-objective continuous optimization tasks that bear certain commonality and complementarity in terms of the Pareto-optimal solutions and the fitness landscape. The nine MTO problems feature different degrees of latent synergy between their two component tasks.

All benchmark problems included in these two test suites are elaborated in the technical reports [6] and [7], respectively. The associated Matlab code can be downloaded at http://www.cil.ntu.edu.sg/mfo/download.html.

COMPETITION PROTOCOL

Participants in this competition may target either or both of MTSOO and MTMOO, and must use all benchmark problems in the corresponding test suite(s), as described above, for performance evaluation.

For MTSOO test suite:

(1) Experimental settings

For each of the 9 benchmark problems in this test suite, an algorithm must be executed for 30 runs, where each run should employ different random seeds for the pseudo-random number generator(s) used in the algorithm. Note: It is prohibited to execute multiple sets of 30 runs and deliberately report the best one.

For all 9 benchmark problems, the maximal number of function evaluations (maxFEs) used to terminate an algorithm in a run is set to 300,000. In the multitasking scenario, one function evaluation means one calculation of the objective function value of any component task, with no distinction made between tasks.

To enable comparison of the best achievable performance of each algorithm, its parameter settings may be calibrated separately for each benchmark problem in this test suite. Note: Participants are required to report the parameter settings used for each problem in their final submission to the competition. Please refer to “SUBMISSION GUIDELINE” for more details.

(2) Intermediate results required to be recorded

When an algorithm is executed to solve a specific benchmark problem in a run, the best function error value (BFEV) achieved so far w.r.t. each component task of this problem should be recorded whenever the current number of function evaluations reaches one of the predefined values k*maxFEs/100, k = 1, …, 100. The BFEV is calculated as the difference between the best objective function value achieved so far and the globally optimal objective function value, which is known in advance. As a result, for any benchmark problem, 100 BFEVs are recorded w.r.t. each component task in each run.
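For illustration, using the notation of the file format below, the value recorded for the ith component task in the jth run at the kth checkpoint is

BFEV_{j,k}^i = f_i^{best}(j, k) - f_i^{*},

where f_i^{best}(j, k) denotes the best objective value of task i found in run j after k*maxFEs/100 evaluations, and f_i^{*} denotes its known globally optimal objective value (these two symbols are introduced here only for illustration).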

Intermediate results for each benchmark problem are required to be saved separately into nine “.txt” files named “MTSOO_P1.txt”, …, “MTSOO_P9.txt”, where the data in each “.txt” file must conform to the following format:


1*maxFEs/100, BFEV_{1,1}^1, BFEV_{1,1}^2, …, BFEV_{30,1}^1, BFEV_{30,1}^2
.
.
k*maxFEs/100, BFEV_{1,k}^1, BFEV_{1,k}^2, …, BFEV_{30,k}^1, BFEV_{30,k}^2
.
.
100*maxFEs/100, BFEV_{1,100}^1, BFEV_{1,100}^2, …, BFEV_{30,100}^1, BFEV_{30,100}^2

where BFEV_{j,k}^i (i = 1, 2; j = 1, …, 30; k = 1, …, 100) stands for the BFEV w.r.t. the ith component task obtained in the jth run at the kth predefined number of function evaluations. The first column stores the predefined numbers of function evaluations at which intermediate results are recorded. The subsequent columns store intermediate results for each of the 30 runs, with each run occupying two consecutive columns corresponding to the two component tasks, respectively. Note: A comma is used as the delimiter between adjacent numbers in a row. As an example, the “.txt” files obtained by MFEA are provided for reference.
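As a minimal sketch of the required logging format, the following Python snippet writes one results file as described above. The array layout, function name and formatting precision are assumptions chosen only for illustration; only the file naming, row layout and comma delimiter are prescribed by the competition.

import numpy as np

RUNS = 30          # independent runs per problem
CHECKPOINTS = 100  # number of recording points per run
MAX_FES = 300000   # maximal number of function evaluations per run

def save_mtsoo_results(problem_id, bfev):
    # bfev is assumed to be a NumPy array of shape (RUNS, CHECKPOINTS, 2),
    # holding the BFEV of each of the two component tasks at each checkpoint.
    with open("MTSOO_P%d.txt" % problem_id, "w") as f:
        for k in range(CHECKPOINTS):
            row = [str((k + 1) * MAX_FES // CHECKPOINTS)]   # k*maxFEs/100
            for j in range(RUNS):
                row.append("%.6e" % bfev[j, k, 0])          # task 1, run j+1
                row.append("%.6e" % bfev[j, k, 1])          # task 2, run j+1
            f.write(", ".join(row) + "\n")

# Example usage with placeholder data (replace with genuinely recorded values):
# save_mtsoo_results(1, np.zeros((RUNS, CHECKPOINTS, 2)))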

(3) Overall ranking criterion

To derive the overall ranking of each algorithm participating in the competition, we will take into account the performance of the algorithm on each component task in each benchmark problem under varying computational budgets, from small to large. Specifically, we will treat each component task in each benchmark problem as one individual task, yielding a total of 18 individual tasks. For each algorithm to be ranked, the median BFEV over 30 runs will be calculated at each checkpoint, corresponding to a different computational budget, for each of the 18 individual tasks. Based on these calculated data, the overall ranking criterion will be defined. To avoid deliberate calibration of algorithms to cater for the overall ranking criterion, we will release the formulation of the overall ranking criterion only after the competition submission deadline.
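To illustrate the statistic involved (the official ranking formula will be released only after the deadline, so this is merely a sketch, not the criterion itself), the per-checkpoint medians over 30 runs can be extracted from a results file in the format above as follows; the function name is an assumption, and the same loading logic applies to the MTMOO files containing IGD values.

import numpy as np

def median_per_checkpoint(path):
    # Load one results file in the comma-delimited format described above.
    data = np.loadtxt(path, delimiter=",")   # shape: (100, 1 + 2*30)
    values = data[:, 1:]                     # drop the function-evaluation column
    task1 = values[:, 0::2]                  # task-1 columns for runs 1..30
    task2 = values[:, 1::2]                  # task-2 columns for runs 1..30
    # Median over the 30 runs at each of the 100 checkpoints, per task.
    return np.median(task1, axis=1), np.median(task2, axis=1)

# medians_task1, medians_task2 = median_per_checkpoint("MTSOO_P1.txt")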

For MTMOO test suite:

(1) Experimental settings

For each of the 9 benchmark problems in this test suite, an algorithm must be executed for 30 runs, where each run should employ different random seeds for the pseudo-random number generator(s) used in the algorithm. Note: It is prohibited to execute multiple sets of 30 runs and deliberately report the best one.

For all 9 benchmark problems, the maximal number of function evaluations (maxFEs) used to terminate an algorithm in a run is set to 300,000. In the multitasking scenario, one function evaluation means one calculation of all the objective function values of any component task, with no distinction made between tasks.

To enable comparison of the best achievable performance of each algorithm, its parameter settings may be calibrated separately for each benchmark problem in this test suite. Note: Participants are required to report the parameter settings used for each problem in their final submission to the competition. Please refer to “SUBMISSION GUIDELINE” for more details.

(2) Intermediate results required to be recorded

When an algorithm is executed to solve a specific benchmark problem in a run, the inverted generational distance (IGD) value obtained so far w.r.t. each component task of this problem should be recorded whenever the current number of function evaluations reaches one of the predefined values k*maxFEs/100, k = 1, …, 100. IGD [8] is a commonly used performance metric in multi-objective optimization that evaluates the quality (convergence and diversity) of the currently obtained Pareto front by comparing it with the optimal Pareto front, which is known in advance. As a result, for any benchmark problem, 100 IGD values are recorded w.r.t. each component task in each run.
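For reference, IGD is typically computed as the average Euclidean distance from each point of a sampled reference (true) Pareto front to its nearest obtained objective vector. A minimal sketch of this standard definition is given below; in practice, the reference fronts and evaluation code supplied with the test suite should be used, and the function name here is only an assumption.

import numpy as np

def igd(reference_front, obtained_front):
    # reference_front: (R, M) array sampling the known optimal Pareto front
    # obtained_front:  (N, M) array of non-dominated objective vectors found so far
    # For each reference point, compute the distance to its nearest obtained point,
    # then average these distances over all reference points.
    diff = reference_front[:, None, :] - obtained_front[None, :, :]
    dists = np.linalg.norm(diff, axis=2)     # pairwise distances, shape (R, N)
    return dists.min(axis=1).mean()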

Intermediate results for each benchmark problem are required to be saved separately into nine “.txt” files named “MTMOO_P1.txt”, …, “MTMOO_P9.txt”, where the data in each “.txt” file must conform to the following format:


1*maxFEs/100, IGD_{1,1}^1, IGD_{1,1}^2, …, IGD_{30,1}^1, IGD_{30,1}^2
.
.
k*maxFEs/100, IGD_{1,k}^1, IGD_{1,k}^2, …, IGD_{30,k}^1, IGD_{30,k}^2
.
.
100*maxFEs/100, IGD_{1,100}^1, IGD_{1,100}^2, …, IGD_{30,100}^1, IGD_{30,100}^2

where IGD_{j,k}^i (i = 1, 2; j = 1, …, 30; k = 1, …, 100) stands for the IGD value w.r.t. the ith component task obtained in the jth run at the kth predefined number of function evaluations. The first column stores the predefined numbers of function evaluations at which intermediate results are recorded. The subsequent columns store intermediate results for each of the 30 runs, with each run occupying two consecutive columns corresponding to the two component tasks, respectively. Note: A comma is used as the delimiter between adjacent numbers in a row. As an example, the “.txt” files obtained by MFEA are provided for reference.

(3) Overall ranking criterion

To derive the overall ranking of each algorithm participating in the competition, we will take into account the performance of the algorithm on each component task in each benchmark problem under varying computational budgets, from small to large. Specifically, we will treat each component task in each benchmark problem as one individual task, yielding a total of 18 individual tasks. For each algorithm to be ranked, the median IGD value over 30 runs will be calculated at each checkpoint, corresponding to a different computational budget, for each of the 18 individual tasks. Based on these calculated data, the overall ranking criterion will be defined. To avoid deliberate calibration of algorithms to cater for the overall ranking criterion, we will release the formulation of the overall ranking criterion only after the competition submission deadline.

SUBMISSION GUIDELINE

This competition is organized together with CEC’17 Special Session on Memetic Computing (http://www.memecs.org/mcw/SpecialSession_CEC_2017.html). Interested participants may report their approaches and results in a paper and submit it to this special session before the paper submission deadline (January 16, 2017).

For those who wish to participate only in the competition, please archive the following files into a single .zip file and send it to mtocompetition@gmail.com before the competition submission deadline (May 15, 2017):

  • For participants in MTSOO: nine “.txt” files (i.e., “MTSOO_P1.txt”, … , “MTSOO_P9.txt”), “param_SO.txt” and “code.zip”.
  • For participants in MTMOO: nine “.txt” files (i.e., “MTMOO_P1.txt”, … , “MTMOO_P9.txt”), “param_MO.txt” and “code.zip”.
  • For participants in both MTSOO and MTMOO: 18 “.txt” files (i.e., “MTSOO_P1.txt”, … , “MTSOO_P9.txt”, “MTMOO_P1.txt”, …, “MTMOO_P9.txt”), “param_SO.txt”, “param_MO.txt” and “code.zip”.

Here, “param_SO.txt” and “param_MO.txt” contain the parameter settings of the algorithm for each benchmark problem in the MTSOO and MTMOO test suites, respectively. “code.zip” contains the source code of the algorithm, which should allow reproducible results to be generated.

The submission samples for MTSOO and MTMOO are available below:

If you would like to participate in the competition, please inform us of your interest via email (mtocompetition@gmail.com) so that we can update you about any bug fixes and/or deadline extensions.

COMPETITION ORGANIZERS

Kai Qin
Department of Computer Science and Software Engineering, Swinburne University of Technology, Australia
Email: kqin@swin.edu.au
Short Bio: Kai Qin is an Associate Professor at Swinburne University of Technology (Melbourne, Australia). He received the PhD degree from Nanyang Technological University (Singapore) in 2007. From 2007 to 2009, he worked as a Postdoctoral Fellow at the University of Waterloo (Waterloo, Canada). From 2010 to 2012, he worked at INRIA (Grenoble, France), first as a Postdoctoral Researcher and then as an Expert Engineer. He joined RMIT University (Melbourne, Australia) in 2012 as a Vice-Chancellor’s Research Fellow, and then worked as a Lecturer and a Senior Lecturer between 2013 and 2017. In 2017, he joined Swinburne University of Technology as an Associate Professor. His major research interests include evolutionary computation, machine learning, computer vision, GPU computing and services computing. Two of his authored/co-authored journal papers have become the 1st and 4th most-cited papers among all papers published in the IEEE Transactions on Evolutionary Computation (TEVC) over the last 10 years according to the Web of Science Essential Science Indicators. He is the recipient of the 2012 IEEE TEVC Outstanding Paper Award. One of his conference papers was nominated for the best paper award at the 2012 Genetic and Evolutionary Computation Conference (GECCO’12). He won the Overall Best Paper Award at the 18th Asia Pacific Symposium on Intelligent and Evolutionary Systems (IES’14). He is serving as the Chair of the IEEE Emergent Technologies Task Force on “Collaborative Learning and Optimization”, promoting the emerging research on the synergy between machine learning and optimization. He has co-organized and chaired the Special Session on “Differential Evolution: Past, Present and Future” held at CEC from 2012 to 2017.

Liang Feng
College of Computer Science, Chongqing University, China
E-Mail: liangf@cqu.edu.cn
Short Bio: Liang Feng received the PhD degree from the School of Computer Engineering, Nanyang Technological University, Singapore, in 2014. He was a Postdoctoral Research Fellow at the Computational Intelligence Graduate Lab, Nanyang Technological University, Singapore. He is currently an Assistant Professor at the College of Computer Science, Chongqing University, China. His research interests include Computational and Artificial Intelligence, Memetic Computing, Big Data Optimization and Learning, as well as Transfer Learning.

Yuan Yuan
Department of Computer Science and Engineering, Michigan State University, USA
E-Mail: yyuan@msu.edu
Short Bio: Yuan Yuan is a Postdoctoral Fellow in the Department of Computer Science and Engineering, Michigan State University, USA. He received the PhD degree from the Department of Computer Science and Technology, Tsinghua University, China, in July 2015. From January 2014 to January 2015, he was a visiting PhD student at the Centre of Excellence for Research in Computational Intelligence and Applications, University of Birmingham, UK. He worked as a Research Fellow at the School of Computer Science and Engineering, Nanyang Technological University, Singapore, from October 2015 to November 2016. His current research interests include multi-objective optimization, genetic improvement, and evolutionary multitasking. Two of his conference papers were nominated for best paper awards at GECCO 2014 and GECCO 2015, respectively.

Yew-Soon Ong
School of Computer Science and Engineering, Nanyang Technological University, Singapore
Email: asysong@ntu.edu.sg
Website: http://www.ntu.edu.sg/home/asysong/
Short Bio: Yew-Soon Ong is Professor and Chair of the School of Computer Science and Engineering, Nanyang Technological University, Singapore. He is Director of the A*Star SIMTECH-NTU Joint Lab on Complex Systems and Programme Principal Investigator of the Data Analytics & Complex System Programme in the Rolls-Royce@NTU Corporate Lab. He was Director of the Centre for Computational Intelligence (Computational Intelligence Laboratory) from 2008 to 2015. He received his Bachelor's and Master's degrees in Electrical and Electronics Engineering from Nanyang Technological University and subsequently his PhD from the University of Southampton, UK. He is founding Editor-in-Chief of the IEEE Transactions on Emerging Topics in Computational Intelligence, founding Technical Editor-in-Chief of the Memetic Computing Journal (Springer), Associate Editor of the IEEE Computational Intelligence Magazine, IEEE Transactions on Evolutionary Computation, IEEE Transactions on Neural Networks & Learning Systems, IEEE Transactions on Cybernetics, IEEE Transactions on Big Data, International Journal of Systems Science and Soft Computing Journal, and chief editor of the book series Studies in Adaptation, Learning, and Optimization as well as Proceedings in Adaptation, Learning, and Optimization. He has also served as guest editor of the IEEE Transactions on Evolutionary Computation, IEEE Transactions on SMC-B, Soft Computing Journal and the Journal of Genetic Programming and Evolvable Machines, and has co-edited several books, including Multi-Objective Memetic Algorithms, Evolutionary Computation in Dynamic and Uncertain Environments, and a volume on Advances in Natural Computation published by Springer Verlag. He served as Chair of the IEEE Computational Intelligence Society Emergent Technology Technical Committee (ETTC) from 2011 to 2012, has been founding chair of the Task Force on Memetic Computing in ETTC since 2006, and was a member of the IEEE CIS Evolutionary Computation Technical Committee from 2008 to 2010. He was also Chair of the IEEE Computational Intelligence Society Intelligent Systems Applications Technical Committee (ISATC) from 2013 to 2014. His current research interests in computational intelligence span memetic computing, evolutionary optimization using approximation/surrogate/meta-models, complex design optimization, intelligent agents in games, and Big Data analytics. His research grants comprise external funding from national and international partners, including the National Grid Office, A*STAR, Singapore Technologies Dynamics, Boeing Research & Development (USA), Rolls-Royce (UK), Honda Research Institute Europe (Germany), the National Research Foundation and MDA-GAMBIT. His research work on Memetic Algorithms was featured by Thomson Scientific's Essential Science Indicators as one of the most cited emerging areas of research in August 2007. Recently, he was selected as a 2015 Thomson Reuters Highly Cited Researcher and included in the 2015 World's Most Influential Scientific Minds. He also received the 2015 IEEE Computational Intelligence Magazine Outstanding Paper Award and the 2012 IEEE Transactions on Evolutionary Computation Outstanding Paper Award for his work pertaining to Memetic Computation.

Xu Chi
Planning and Operations Management Group of Singapore Institute of Manufacturing Technology (SIMTech), Singapore
E-Mail: cxu@simtech.a-star.edu.sg
Short Bio: Dr. Xu Chi received his Ph.D. and Bachelor's (Honours) degrees from the School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore, in 2010 and 2003, respectively. He was a researcher at the Positioning and Wireless Technology Centre, Nanyang Technological University, working on RFID track and trace using ultra-wideband (UWB) signals. Currently, he is a research scientist in the Planning and Operations Management Group of the Singapore Institute of Manufacturing Technology (SIMTech). Dr. Xu has more than 7 years of research and development experience in data analytics and information management for supply chain applications. He has led and completed various research and industry projects in the areas of supply chain track and trace and customer intelligence analysis, and has licensed several of the developed technologies to industry. He served as a programme committee member for the IEEE Workshop on Big Data Analytics in Manufacturing and Supply Chains at the IEEE International Conference on Big Data in 2015 and 2016. His current research interests include track and trace information management, text mining and sentiment analysis.

REFERENCES

[1] R. Caruana, “Multitask learning”, Machine Learning, 28(1): 41-75, 1997.
[2] K. Swersky, J. Snoek and R. P. Adams, “Multi-task Bayesian optimization”, Proceedings of the 26th International Conference on Neural Information Processing Systems (NIPS'13), pp. 2004-2012, Lake Tahoe, Nevada, USA, December 5-10, 2013.
[3] A. Gupta, Y. S. Ong and L. Feng, “Multifactorial evolution: Toward evolutionary multitasking”, IEEE Transactions on Evolutionary Computation, 20(3):343-357, 2016.
[4] Y. S. Ong and A. Gupta, “Evolutionary multitasking: A computer science view of cognitive multitasking”, Cognitive Computation, 8(2): 125-142, 2016.
[5] A. Gupta, Y. S. Ong, L. Feng and K. C. Tan, “Multi-objective multifactorial optimization in evolutionary multitasking”, accepted by IEEE Transactions on Cybernetics, 2016.
[6] B. S. Da, Y. S. Ong, L. Feng, A. K. Qin, A. Gupta, Z. X. Zhu, C. K. Ting, K. Tang and X. Yao, “Evolutionary multitasking for single-objective continuous optimization: Benchmark problems, performance metrics and baseline results”, Technical Report, Nanyang Technological University, 2016.
[7] Y. Yuan, Y. S. Ong, L. Feng, A. K. Qin, A. Gupta, B. S. Da, Q. F. Zhang, K. C. Tan, Y. C. Jin and H. Ishibuchi, “Evolutionary multitasking for multi-objective continuous optimization: Benchmark problems, performance metrics and baseline results”, Technical Report, Nanyang Technological University, 2016.
[8] P. Czyżak and A. Jaszkiewicz, “Pareto simulated annealing – a metaheuristic technique for multiple-objective combinatorial optimization”, Journal of Multi-Criteria Decision Analysis, 7:34-47, 1998.
