The PPSN 2022 Workshop on Parallelism in Knowledge Transfer

To be held in conjunction with

The 17th International Conference on Parallel Problem Solving from Nature (PPSN XVII) 

The workshop will take place on September 10, 2022

Dortmund, Germany

https://ppsn2022.cs.tu-dortmund.de/

Organizers

Abhishek Gupta, Amiram Moshaiov, Yaochu Jin

Abstract

The utilization of knowledge from past experience is common to human learning and problem-solving. Inspired by this, researchers in computational intelligence have been developing transfer learning and transfer optimization (TL&TO) techniques. TL refers to the transfer of knowledge in machine learning to improve performance under limited training data, whereas TO refers to such transfer for accelerating convergence in the search for solutions that are optimal under given criteria. Over the last two decades, various TL techniques have been studied and their effectiveness has been demonstrated on a large set of problems. This success has been followed by similar attempts in the area of TO, with a particular emphasis on population-based, bio-inspired optimization approaches.

The main goal of this workshop is to provide a meeting place for PPSN participants who are interested in research on bio-inspired TL&TO techniques. We aim to discuss current research on the development of such techniques and on their real-life applications, and to suggest future research directions for TL&TO.

The scope of this workshop includes topics such as:

·         Single/Multi-objective search and optimization algorithms with transfer capability for continuous or combinatorial optimization, including multi-modal optimization.

·         Theoretical studies that enhance our understanding of transfer learning and optimization.

·         Transfer learning and optimization using big data and data analytics.

·         Transfer evolutionary optimization and learning for dynamic optimization problems.

·         Transfer evolutionary optimization with domain adaptation and domain generalization.

·         Hybridization of evolutionary computation with neural networks and fuzzy systems for transfer learning and optimization.

·         Hybridization of evolutionary computation and machine learning, information theory, statistics, etc., for transfer learning.

·         TL&TO algorithms that are tailored for parallel processing.

·         Interactive TL&TO.

·         Real-world applications, e.g., expensive and complex optimization, text mining, computer vision, image analysis, and face recognition.

 

Workshop Plan

9:30-9:35 Introduction to the workshop

9:35-10:15  Invited Talk – Mengjie Zhang

                     “Evolutionary Transfer Learning for Image and Pattern Recognition”

10:15-10:45 Invited Talk – Alma Rahat

                       “Knowledge in Bayesian Optimisation”

10:45-11:00 Coffee break

11:00-11:30 Invited Talk – Markus Olhofer

                      “Learning Representations for Interactive 3D Shape Design Tasks”

11:30-11:50 Abhishek Gupta et al.

                       “Tightening Regret Bounds in Multi-Source Transfer Bayesian Optimization”

11:50-12:10 Xilu Wang and Yaochu Jin

                      “Knowledge Transfer Based on Particle Filters for Multi-objective Optimization”

12:10-12:25 Discussions

12:25-12:30 Closing Remarks

 

Abstracts

Evolutionary Transfer Learning for Image and Pattern Recognition

Mengjie Zhang

School of Engineering and Computer Science, Victoria University of Wellington, New Zealand

Abstract: This talk will focus on evolutionary transfer learning techniques for image analysis and pattern recognition tasks. First, the area of evolutionary transfer learning will be defined and introduced. The talk will then use genetic programming as an example of evolutionary computation techniques to show how transfer learning is performed for image classification and analysis with potentially interpretable models. Recent work on evolutionary transfer learning for symbolic regression, classification, and job shop scheduling as a hyper-heuristic learning approach will also be presented.

 

 

Knowledge in Bayesian Optimisation

Alma Rahat

 Swansea University, United Kingdom

Abstract. TBD

Learning Representations for Interactive 3D Shape Design Tasks

Markus Olhofer

Honda Research Institute Europe GmbH, Germany

Abstract. Optimisation and design problems in the real world can involve highly expensive numerical simulations to determine the quality of candidate solutions. This is especially true for the optimisation of mechanical structures. Early on, surrogate-based methods were developed to utilize information generated during the optimisation process and thereby reduce the computational cost of evaluating designs. Information from similar optimisation tasks, or available solutions from other design processes, is more difficult to exploit for the design of a specific structure. In recent years, various approaches have been proposed to learn from these kinds of information. However, improving the estimation of quality measures of proposed solutions is only one possible way to use prior information. Other options are the design of the search space and its parameterisation, the adaptation of operators, or the adaptation of infill criteria in surrogate-based methods. The talk presents some recent work on the utilization of prior knowledge in design optimisation, as well as work facilitating research in this context.

 

 

Tightening Regret Bounds in Multi-Source Transfer Bayesian Optimization

Abhishek Gupta1,2, Ray Lim2, Chin Chun Ooi1, and Yew-Soon Ong1,2

1 Agency for Science, Technology and Research (A*STAR), Singapore

2 School of Computer Science and Engineering, Nanyang Technological University, Singapore

Abstract. Recent theoretical results have shown that instilling knowledge transfer into black-box optimization with Gaussian process surrogates, also known as transfer Bayesian optimization, tightens cumulative regret bounds compared to the no-transfer case. Faster convergence under strict function evaluation budgets – often on the order of a hundred or fewer function evaluations – is thus expected, overcoming the cold-start problem of conventional Bayesian optimization. In this short paper, we prove that the regret bounds can be further tightened when extending the method to multi-source settings (where each source may exhibit a distinct source-target correlation) while also maintaining algorithmic scalability. Experimental results verifying our theoretical claim of performance gain are provided as well.
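
As an illustration of the kind of transfer the abstract refers to, the sketch below warm-starts Bayesian optimization with observations from a related source task: a Gaussian process fitted to the source supplies a prior trend, and a second Gaussian process on the target residuals drives an expected-improvement search under a tight evaluation budget. This is not the multi-source algorithm analyzed in the paper; the toy objectives, the residual-transfer scheme, and all names are illustrative assumptions, using scikit-learn's Gaussian process regressor.

```python
# Minimal sketch of warm-starting Bayesian optimization with source-task knowledge.
# NOT the paper's algorithm; objectives and the residual-transfer scheme are assumptions.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

def source_objective(x):   # related task, cheap to sample densely
    return np.sin(3 * x) + 0.1 * x

def target_objective(x):   # expensive task we actually want to minimize
    return np.sin(3 * x + 0.3) + 0.15 * x

# Plentiful source data -> source surrogate.
Xs = np.linspace(0, 2, 40).reshape(-1, 1)
source_gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
source_gp.fit(Xs, source_objective(Xs).ravel())

# Scarce target data; the target GP models residuals w.r.t. the source mean.
Xt = rng.uniform(0, 2, size=(3, 1))
yt = target_objective(Xt).ravel()

def fit_residual_gp(Xt, yt):
    resid = yt - source_gp.predict(Xt)
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-6, normalize_y=True)
    gp.fit(Xt, resid)
    return gp

def expected_improvement(Xcand, gp, best):
    mu_r, std = gp.predict(Xcand, return_std=True)
    mu = source_gp.predict(Xcand) + mu_r          # transferred prior mean + residual
    std = np.maximum(std, 1e-9)
    z = (best - mu) / std                         # minimization form of EI
    return (best - mu) * norm.cdf(z) + std * norm.pdf(z)

Xcand = np.linspace(0, 2, 200).reshape(-1, 1)
for _ in range(10):                               # tiny evaluation budget
    gp = fit_residual_gp(Xt, yt)
    ei = expected_improvement(Xcand, gp, yt.min())
    x_next = Xcand[np.argmax(ei)].reshape(1, -1)
    Xt = np.vstack([Xt, x_next])
    yt = np.append(yt, target_objective(x_next).ravel())

print("best target value found:", yt.min())
```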

 

Knowledge Transfer Based on Particle Filters for Multi-objective Optimization

Xilu Wang, and Yaochu Jin

Faculty of Technology, Bielefeld University, Germany

Abstract: Particle filters, also known as sequential Monte Carlo (SMC) methods, are a class of importance sampling and resampling techniques designed to use simulation to perform on-line filtering. Recently, particle filters have been extended to optimization by exploiting their ability to track a sequence of distributions. In this work, we incorporate transfer learning capabilities into the optimizer by using particle filters. To achieve this, we propose a novel particle filter based multi-objective optimization algorithm (PF-MOA) that transfers knowledge acquired from previous search experience. The key insight is that, if we can construct a sequence of target distributions that balance the multiple objectives and make the degree of this balance controllable, we can approximate the Pareto optimal solutions by simulating each target distribution via particle filters. Since the importance weight update uses the previous target distribution as the proposal and the current target distribution as the target, knowledge acquired in the previous run can be utilized in the current run by carefully designing the sequence of target distributions. Experimental results on the DTLZ test suite show that the proposed PF-MOA achieves competitive performance compared with several state-of-the-art multi-objective evolutionary algorithms on most test instances.
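
The importance-weighting and resampling mechanics the abstract builds on can be sketched in a few lines for a single objective: particles approximating the previous target distribution serve as the proposal for the next one, weights are updated by the ratio of the two unnormalized densities, and the population is resampled and perturbed. How PF-MOA constructs its sequence of targets to balance multiple objectives is not reproduced here; the tempered Boltzmann targets and all names below are illustrative assumptions.

```python
# Minimal single-objective sketch of SMC-for-optimization mechanics (not PF-MOA itself).
# Targets pi_k(x) proportional to exp(-beta_k * f(x)) are an assumed tempering scheme.
import numpy as np

rng = np.random.default_rng(1)

def f(x):                                  # toy objective to minimize
    return np.sum((x - 2.0) ** 2, axis=-1)

def log_target(x, beta):                   # log of the unnormalized target pi_k
    return -beta * f(x)

n_particles, dim = 200, 2
betas = np.linspace(0.1, 5.0, 20)          # increasingly concentrated targets

particles = rng.uniform(-5, 5, size=(n_particles, dim))
log_w = np.zeros(n_particles)
prev_beta = betas[0]

for beta in betas[1:]:
    # Importance weights: previous target acts as the proposal, current one as the target.
    log_w += log_target(particles, beta) - log_target(particles, prev_beta)
    w = np.exp(log_w - log_w.max())
    w /= w.sum()

    # Multinomial resampling, then a Gaussian jitter as a simple move step.
    idx = rng.choice(n_particles, size=n_particles, p=w)
    particles = particles[idx] + rng.normal(scale=0.2, size=(n_particles, dim))
    log_w = np.zeros(n_particles)
    prev_beta = beta

best = particles[np.argmin(f(particles))]
print("approximate minimizer:", best, "f:", f(best[None, :])[0])
```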

 

About the organizers

Abhishek Gupta (Senior Member, IEEE) received the PhD degree in Engineering Science from the University of Auckland, New Zealand, in 2014. He is currently a Scientist and Technical Lead at the Singapore Institute of Manufacturing Technology, a research institute in Singapore’s Agency for Science, Technology and Research (A*STAR). He also holds a joint appointment as a Research Scientist at the Data Science and Artificial Intelligence Research Center of Nanyang Technological University. Abhishek has diverse research experience in computational science, ranging from topics in the engineering sciences to computational intelligence. Currently, his main research interests lie in the theory and algorithms of transfer and multitask learning for optimization, neuro-evolution, surrogate modeling, and scientific machine learning. Abhishek is the recipient of the 2019 IEEE Transactions on Evolutionary Computation Outstanding Paper Award for his work on evolutionary multitasking. He received the IEEE Transactions on Emerging Topics in Computational Intelligence 2021 Outstanding Associate Editor Award. He is also an editorial board member of the Complex & Intelligent Systems journal, the Memetic Computing journal, and the Springer book series on Adaptation, Learning, and Optimization.

Amiram (Ami) Moshaiov is a faculty member of the School of Mechanical Engineering and of the Sagol School of Neuroscience at Tel-Aviv University (TAU). Previously, he was a faculty member at MIT, USA. He was an Associate Editor of the IEEE Transactions on Emerging Topics in Computational Intelligence and of the Journal of Memetic Computing. In addition, he has served on the program committees of many scientific conferences, as a reviewer for many scientific journals, and as a member of the Management Board of the European Network of Excellence in Robotics. He is currently a member of the IEEE Task Forces on Evolutionary Deep Learning and Applications, on Artificial Life and Complex Adaptive Systems, and on Transfer Learning & Transfer Optimization. At TAU, Moshaiov heads a research group on computational intelligence. The main research areas of his group include: Multi-payoff Games (theory and evolutionary search of rationalizable strategies to such games); Multi-objective Topology and Weight Evolution of Artificial Neural Networks; Multi-objective Optimization & Multi-Criteria Decision-Making; Multi-objective Concept Exploration, Optimization & Selection; and Multi-objective Neuro-Fuzzy Inference Systems. His research group develops computational intelligence methods that are applied to problems from a wide range of application areas.

Yaochu Jin is an Alexander von Humboldt Professor for Artificial Intelligence endowed by the German Federal Ministry of Education and Research, holding the Chair of Nature Inspired Computing and Engineering, Faculty of Technology, Bielefeld University, Germany. He is also a Distinguished Chair Professor in Computational Intelligence, Department of Computer Science, University of Surrey, Guildford, U.K. He was a “Finland Distinguished Professor” at the University of Jyväskylä, Finland, a “Changjiang Distinguished Visiting Professor” at Northeastern University, China, and a “Distinguished Visiting Scholar” at the University of Technology Sydney, Australia. His main research interests include evolutionary optimization, evolutionary learning, trustworthy machine learning, and evolutionary developmental systems.

Prof. Jin is presently the Editor-in-Chief of Complex & Intelligent Systems. He was the Editor-in-Chief of the IEEE Transactions on Cognitive and Developmental Systems, an IEEE Distinguished Lecturer in 2013-2015 and 2017-2019, and the Vice President for Technical Activities of the IEEE Computational Intelligence Society (2015-2016). He was the General Co-Chair of the 2016 IEEE Symposium Series on Computational Intelligence and the Chair of the 2020 IEEE Congress on Evolutionary Computation. He is the recipient of the 2018 and 2021 IEEE Transactions on Evolutionary Computation Outstanding Paper Awards, and the 2015, 2017, and 2020 IEEE Computational Intelligence Magazine Outstanding Paper Awards. He was named a Highly Cited Researcher by the Web of Science each year from 2019 to 2021. He is a Member of Academia Europaea and a Fellow of the IEEE.