Tutorials

Adaptive Parameter Choices in Evolutionary Computation
Carola Doerr, Sorbonne University, Paris, France
[Saturday, 8 Sep, 11:00-12:30]

Evolutionary algorithms and other popular black-box optimization techniques are highly parametrized algorithms. To run these algorithms, we typically need to decide upon their population sizes, mutation strengths, crossover rates, selective pressure, etc. This parametrization allows us to adjust the behavior of the algorithms to the problem at hand. The chosen parameter values can have a decisive influence on performance. We thus need to select them with care.

Unfortunately, the identification of good parameter values is still one of the most challenging tasks in evolutionary computation. What complicates the parameter selection problem is the observation that different parameter values can be optimal in different stages of the optimization process. In the beginning of an optimization process, for example, we may want to allow for more exploration, while later on we may prefer a more focused search (“exploitation”). This observation calls for adaptive parameter choices, which automatically adjust the parameter values to the current state of the optimization process.

Adaptive parameter choices are today standard in continuous optimization. Quite surprisingly, however, this is not the case in discrete optimization, where they play only a rather marginal role. A paradigm change towards a more systematic use of non-static parameter choices is much needed. This tutorial aims to contribute to this goal by providing an in-depth discussion of online parameter selection techniques. We survey both experimental and theoretical results, which demonstrate the unexploited potential of non-static parameter choices.
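To give a flavour of the kind of mechanism discussed here, the sketch below (our own illustration, not taken from the tutorial material) runs a (1+1) EA on OneMax and adjusts the mutation rate with a simple success-based rule in the spirit of the one-fifth rule; the multiplicative update factors are illustrative assumptions.

import random

def onemax(x):
    return sum(x)

def one_plus_one_ea_adaptive(n=100, budget=20000, up=1.2, down=0.85):
    """(1+1) EA on OneMax with a success-based self-adjusting mutation rate.

    The rate is multiplied by `up` after an improving step and by `down`
    otherwise (parameter values are illustrative, not tuned)."""
    x = [random.randint(0, 1) for _ in range(n)]
    fx = onemax(x)
    rate = 1.0 / n                        # the classic static default as a starting point
    for _ in range(budget):
        y = [1 - bit if random.random() < rate else bit for bit in x]
        fy = onemax(y)
        if fy > fx:                       # success: search more aggressively
            x, fx, rate = y, fy, min(0.5, rate * up)
        else:                             # failure: become more conservative
            rate = max(1.0 / (n * n), rate * down)
        if fx == n:
            break
    return fx, rate

if __name__ == "__main__":
    print(one_plus_one_ea_adaptive())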

Other information: This tutorial addresses both experimentally and theory-oriented researchers. No specific background is required to follow this tutorial, but some familiarity with evolutionary computation is certainly an advantage. Tutorial slides will be uploaded on www-ia.lip6.fr shortly before or after the tutorial.

About the speaker
Carola Doerr (Carola.Doerr@mpi-inf.mpg.de, www-ia.lip6.fr/~doerr) is a permanent CNRS researcher at Pierre and Marie Curie University in Paris, France. She studied Mathematics at Kiel University and Computer Science at the Max Planck Institute for Informatics and Saarland University. Her main research activities are in the mathematical analysis of randomized algorithms, with a strong focus on evolutionary algorithms and other black-box optimizers. She has been very active in the design and analysis of black-box complexity models, a theory-guided approach to explore the limitations of heuristic search algorithms. Most recently, she has used knowledge from these studies to prove superiority of dynamic parameter choices in evolutionary computation, a topic that she believes carries huge unexplored potential for the community.

Applications of Genetic Programming in Dynamic Scheduling
Domagoj Jakobovic and Marko Đurasević, University of Zagreb, Croatia
Yi Mei and Mengjie Zhang, Victoria University of Wellington, New Zealand
Su Nguyen, La Trobe University, Australia
[Saturday, 8 Sep, 16:00-17:30]

Scheduling problems are encountered in many real-world situations and scenarios. In the real world, the problem is often dynamic, and unpredicted new jobs arrive in real time. To solve scheduling problems under dynamic conditions, various problem-specific heuristics, called dispatching rules, have been designed. However, manually designing such heuristics is a difficult and lengthy process. Therefore, a great deal of research is focused on automatically designing new scheduling heuristics. Genetic programming is usually the method of choice for generating new dispatching rules, since on numerous occasions it has generated good dispatching rules for various difficult scheduling environments. The tutorial will cover recent developments in the automatic generation of dispatching rules, as well as outline several new research directions in this field, such as multi-objective heuristic generation, application of ensemble learning methods, construction of surrogate models, etc. The tutorial will help interested researchers to acquire an overview of this emerging and interesting research area and to understand the key ideas and challenges for future studies.
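As a minimal illustration of how an evolved dispatching rule is used at decision time, the sketch below (our own toy, not from the tutorial) encodes a hypothetical priority expression over common job attributes and applies it whenever a machine becomes free; the attribute names PT, DD and WR are assumptions for the example.

# A hypothetical dispatching rule, as GP might evolve it, written here as a plain
# Python function over job attributes: processing time (PT), due date (DD) and
# work remaining (WR). Lower priority value means the job is scheduled first.
def evolved_rule(job, t):
    return 2.0 * job["PT"] + max(0.0, job["DD"] - t) - 0.5 * job["WR"]

def next_job(queue, t, rule=evolved_rule):
    """Pick the waiting job the dispatching rule ranks first at time t."""
    return min(queue, key=lambda job: rule(job, t))

# Toy usage: three jobs waiting at time t = 10 on a single machine.
queue = [
    {"id": 1, "PT": 5.0, "DD": 20.0, "WR": 12.0},
    {"id": 2, "PT": 3.0, "DD": 15.0, "WR": 3.0},
    {"id": 3, "PT": 8.0, "DD": 40.0, "WR": 8.0},
]
print(next_job(queue, t=10.0)["id"])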

About the speakers
Domagoj Jakobović received B.Sc. and M.Sc. degrees in Electrical Engineering at the School of Electrical Engineering and Computing, University of Zagreb. He received his Ph.D. degree in December 2005 on the subject of generating scheduling heuristics with genetic programming. He is currently an associate professor at the University of Zagreb. His research interests include evolutionary algorithms, optimization methods and parallel algorithms. His most notable contributions are in the areas of machine-supported scheduling, optimization problems in cryptography, and the parallelization and improvement of evolutionary algorithms. He has published more than 70 papers, led several research projects and served as a reviewer for many journals and conferences.

Su Nguyen is a David Myers Research Fellow attached to the Research Centre for Data Analytics and Cognition, La Trobe University, Australia. His primary research interests include computational intelligence, optimization, data analytics, large-scale simulation, and their applications in operations management. His current research focuses on cognitive planning and scheduling, which attempts to combine the power of advanced machine learning and optimisation methods to improve the efficiency of service and logistics systems. His work has been published in top peer-reviewed journals in evolutionary computation and operations research.

Yi Mei is a Lecturer at the School of Engineering and Computer Science, Victoria University of Wellington, Wellington, New Zealand. He received his BSc and PhD degrees from the University of Science and Technology of China in 2005 and 2010, respectively. His research interests include evolutionary computation in scheduling, routing and combinatorial optimisation, as well as evolutionary machine learning, genetic programming, feature selection and dimensionality reduction. He has more than 50 fully refereed publications, including in the top journals in EC and Operations Research (OR).

Marko Đurasević is currently pursuing his PhD degree in the field of evolutionary computation at the University of Zagreb. Throughout his studies, he has focused on the subject of automatic generation of dispatching rules with the use of genetic programming. Since October 2014 he has been employed as a teaching and research assistant at the Faculty of Electrical Engineering and Computing. His research interests include evolutionary computation, optimization methods, machine learning and scheduling problems. He has published six journal and conference papers.

Mengjie Zhang is with the Victoria University of Wellington, Wellington, New Zealand, where he is currently Professor of Computer Science, Head of the Evolutionary Computation Research Group, and the Associate Dean (Research and Innovation) in the Faculty of Engineering. His current research interests include evolutionary computation, particularly genetic programming, particle swarm optimization, and learning classifier systems with application areas of image analysis, multiobjective optimization, feature selection and reduction, job shop scheduling, and transfer learning. He has published over 350 research papers in refereed international journals and conferences.

A Small World Hidden in Evolutionary Computation Techniques
Roman Senkerik, Tomas Bata University, Zlin, Czech Republic
[Sunday, 9 Sep, 16:00-17:30]

This tutorial offers an insight into an attractive open research task: a novel method for visualizing the dynamics of evolutionary and swarm-based algorithms in the form of networks. The idea is based on the similarity between interactions among individuals in metaheuristic algorithms and, for example, interactions among users of social networks, links between web pages, etc. The population is visualized as an evolving complex network that exhibits non-trivial features. Features like clustering, centralities, communities, and many more offer a clear description of the population under evaluation. This tutorial shows the differences between the types of complex networks used, the variations in building complex networks to capture the population dynamics of evolutionary algorithms or the communication inside swarm-based algorithms, and the investigation of the time development of the network. It also shows several successful utilizations of complex network attributes for performance improvement through adaptive population as well as parameter/strategy control, and, further, the possibility of controlling the evolution through complex network features.
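One simple way to record such a network (our own illustrative sketch, not the tutorial's method) is to run a toy differential-evolution-style loop and, whenever an offspring successfully replaces an individual, add edges from the individuals that contributed to it; standard networkx calls are then available for the clustering, centrality and community features mentioned above.

import random
import networkx as nx

def record_interaction_network(pop_size=20, dim=10, generations=50):
    """Toy DE/rand/1-style loop that logs successful interactions as a graph.

    Nodes are population slots; a directed edge u -> v means that individual u
    contributed to an offspring that replaced individual v."""
    def sphere(x):
        return sum(xi * xi for xi in x)

    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    fit = [sphere(ind) for ind in pop]
    G = nx.DiGraph()
    G.add_nodes_from(range(pop_size))

    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = random.sample([j for j in range(pop_size) if j != i], 3)
            trial = [pop[a][d] + 0.8 * (pop[b][d] - pop[c][d]) for d in range(dim)]
            if sphere(trial) < fit[i]:
                pop[i], fit[i] = trial, sphere(trial)
                for donor in (a, b, c):          # log who helped improve slot i
                    G.add_edge(donor, i)
    return G

G = record_interaction_network()
print(G.number_of_nodes(), "nodes,", G.number_of_edges(), "edges")
print(nx.degree_centrality(G))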

About the speaker
Roman Senkerik studied Technical Cybernetics at the Faculty of Applied Informatics, Tomas Bata University in Zlin, where he obtained his MSc degree in 2004 and his Ph.D. degree in 2008 (Technical Cybernetics); he obtained his Assoc. Prof. degree in 2013 from the VSB – Technical University of Ostrava (in Informatics). He is now a researcher and lecturer at the same university. His research interests are the development, hybridization and interdisciplinary applications of evolutionary computation techniques, soft computing methods and their interdisciplinary applications, theory of chaos, and cyber-security. He is a member of the IEEE CIS Task Force on Differential Evolution and an MC member of the COST Action project Improving Applicability of Nature-Inspired Optimisation by Joining Theory and Practice (ImAppNIO).

Bio-Inspired Approaches to Anomaly and Intrusion Detection
Luis Martí, Universidade Federal Fluminense, Brazil
Marc Schoenauer, Institute for Research in Computer Science and Control, France
[Saturday, 8 Sep, 11:00-12:30]

Intrusion detection systems (IDSs) have gained substantial attention because of their high-impact safety and security applications. Two main approaches are used when building those systems: i) misuse-based and ii) anomaly-based detection. While the former focuses on detecting attacks that follow a known pattern or signature, the latter builds a model representing the system’s nominal behavior while assuming all deviating activities to be anomalous or intrusions, and therefore provides a more robust solution. Bio-inspired approaches have been proposed to address the problem of anomaly-based intrusion detection, with artificial immune systems (AISs) being the most recognizable approach of all. However, recent developments in the areas of single- and multi-criterion evolutionary computing and adversarial co-evolutionary modeling and simulation have served as a foundation for novel and better-performing bio-inspired IDSs that have yielded competitive results.

This tutorial will present the anomaly detection topic and its peculiarities, review the current state of the art on this topic departing from classical machine learning approaches, present the current state-of-the-art methods, and analyze how and why those methods have been shown to outperform many of the currently established approaches.

About the speakers
Luis Martí is Adjunct Professor at the Institute of Computing of the Universidade Federal Fluminense (UFF) in Niterói, Rio de Janeiro, Brazil where he co-leads the Research on Intelligence and Optimization (RIO) Group. Luis’ research is mainly concerned with evolutionary computation, evolutionary multi-objective optimization, machine learning, neural networks, and related topics.

Marc Schoenauer is Principal Senior Researcher (Directeur de Recherche 1ère classe) with INRIA, the French National Institute for Research in Computer Science and Control. He graduated from École Normale Supérieure in Paris, and obtained a PhD in Numerical Analysis at Université Paris 6 in 1980. From 1980 until Aug. 2001 he was a full-time researcher with CNRS (the French National Research Center), working at CMAP (the Applied Maths Laboratory) at École Polytechnique. He has been working in the field of Evolutionary Computation (EC) since the early 90s, more particularly at the interface between EC and Machine Learning (ML). He is author of more than 150 papers in journals and major conferences of these fields. He is or has been advisor of 33 PhD students.

Cartesian Genetic Programming
Julian F. Miller, University of York, UK
[Sunday, 9 Sep, 11:00-12:30]

Cartesian Genetic Programming (CGP) is a well-known and respected form of Genetic Programming. It uses a very simple integer address-based genetic representation of a program in the form of a directed graph. In a number of studies, CGP has been shown to be comparatively efficient in relation to other GP techniques. The classical form of CGP has undergone a number of developments which have made it more useful, efficient and flexible in various ways. These include self-modifying CGP (SMCGP), cyclic connections (recurrent-CGP), encoding artificial neural networks, and automatically defined functions (modular CGP). SMCGP uses functions that cause the evolved programs to change themselves as a function of time. Recurrent-CGP allows evolution to create programs which contain cyclic, as well as acyclic, connections. CGP-encoded artificial neural networks represent a powerful training method for neural networks. CGP has been applied successfully to a variety of real-world problems, such as digital circuit design, visual object recognition and classification.
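To make the integer address-based representation concrete, here is a minimal decoder for a single-row, two-input-per-node CGP genotype (our own simplified sketch of the standard encoding; the function set and genotype values are illustrative).

# Each node is encoded by three integers (function index, input 1, input 2);
# inputs address either program inputs or earlier nodes, and a final gene
# selects which address provides the program output.
FUNCTIONS = [
    ("add", lambda a, b: a + b),
    ("sub", lambda a, b: a - b),
    ("mul", lambda a, b: a * b),
    ("max", lambda a, b: max(a, b)),
]

def evaluate_cgp(genotype, inputs, n_nodes):
    n_in = len(inputs)
    values = list(inputs)                       # addresses 0..n_in-1 are the inputs
    for node in range(n_nodes):
        f_gene, in1, in2 = genotype[3 * node: 3 * node + 3]
        _, f = FUNCTIONS[f_gene % len(FUNCTIONS)]
        values.append(f(values[in1], values[in2]))   # this node gets address n_in + node
    output_gene = genotype[3 * n_nodes]
    return values[output_gene]

# Genotype for 3 nodes: (mul 0 1), (add 2 0), (sub 3 2); output taken from address 4.
geno = [2, 0, 1,  0, 2, 0,  1, 3, 2,  4]
print(evaluate_cgp(geno, inputs=[3.0, 4.0], n_nodes=3))   # prints 3.0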

About the speaker
Dr. Miller is an Honorary Fellow of the Department of Electronic Engineering at the University of York. His main research interests are genetic programming (GP) and computational development. He is a highly cited and well-known researcher with over 8000 citations and an h-index of 40. He has published over 230 refereed papers on evolutionary computation, genetic programming, evolvable hardware, computational development and other topics. He is an associate editor of Genetic Programming and Evolvable Machines and a former associate editor of IEEE Transactions on Evolutionary Computation. He is an editorial board member of the journals Evolutionary Computation and Unconventional Computing. He received the Evostar award in 2011 for outstanding contribution to evolutionary computation. He edited a seminal book on Cartesian Genetic Programming in 2011 and maintains the CGP website.

Cloud-y Evolutionary Algorithms
J.J. Merelo, University of Granada, Spain
[Sunday, 9 Sep, 14:00-15:30]

This tutorial will describe how cloud computing is a new paradigm that changes the way applications are designed and deployed, and how it can be put to use in a scientific computing environment. It will be a practical tutorial with examples and tools that are used nowadays by companies and institutions. The examples used will be taken from evolutionary computation, although its application is widespread.

About the speaker
JJ Merelo is a professor at the University of Granada, where he has been teaching Cloud Computing for more than 5 years. He has also been working in the field of evolutionary algorithms for more than 20 years. He does open science and writes free software.

Computational Complexity Analysis of Genetic Programming
Pietro Oliveto and Andrei Lissovoi, University of Sheffield, UK
[Sunday, 9 Sep, 14:00-15:30]

Genetic Programming is an evolutionary computation paradigm that aims to evolve computer programs. Compared to the great number of successful applications of GP that have been reported, the theoretical understanding of its underlying working principles lags far behind. In particular, the identification of which classes of computer programs can be provably evolved efficiently via GP has progressed slowly compared to the understanding of the performance of traditional evolutionary algorithms (EAs) for function optimisation. The main reason for the slow progress is that the analysis of GP systems is considerably more involved due to the variable length of programs compared to the fixed solution representation used in EAs, and because understanding candidate program quality over all possible inputs is infeasible. Nevertheless, nowadays it is possible to analyse the time and space complexity of GP algorithms for evolving proper programs with input/output relationships, where the fitness of candidate solutions is evaluated by comparing their accuracy on input/output samples of a polynomially-sized training set (e.g., Boolean functions). In this tutorial, we give an overview of the recent results, outlining the techniques used and the challenges involved.

About the speaker
Andrei Lissovoi and Pietro S. Oliveto are, respectively, Research Associate and Senior Lecturer in the Rigorous Research team at the University of Sheffield, UK. Their main research interest is the performance analysis of bio-inspired computation techniques, including evolutionary algorithms, genetic programming, artificial immune systems and hyper-heuristics.

Evolutionary Algorithms and Hyper-Heuristics
Nelishia Pillay, University of Pretoria, South Africa
[Sunday, 9 Sep, 16:00-17:30]

Evolutionary algorithms have played a pivotal role in the advancement of hyper-heuristics. The aim of the tutorial is first to provide an introduction to evolutionary algorithm hyper-heuristics. The tutorial will examine each of the four categories of hyper-heuristics, namely selection constructive, selection perturbative, generation constructive and generation perturbative, showing how evolutionary algorithms can be used for each type of hyper-heuristic. A case study will be presented for each type of hyper-heuristic. The EvoHyp library will be used to demonstrate the implementation of evolutionary algorithm hyper-heuristics for the case studies. Challenges in the implementation of evolutionary algorithm hyper-heuristics will be highlighted. An emerging research direction is the use of hyper-heuristics for the automated design of computational intelligence techniques. The tutorial will look at the synergistic relationship between evolutionary algorithms and hyper-heuristics in this area. The tutorial will end with a discussion session on future directions in evolutionary algorithms and hyper-heuristics. Tutorial website link: https://nicog.cs.up.ac.za/EAHH2018
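For readers unfamiliar with the selection perturbative category mentioned above, the sketch below shows a generic epsilon-greedy selection hyper-heuristic loop in Python (our own illustration, not EvoHyp and not the tutorial's case studies); the move operators, acceptance rule and parameter values are assumptions made for the example.

import random

def selection_perturbative_hh(initial, low_level_heuristics, evaluate, iters=1000, eps=0.2):
    """Generic selection perturbative hyper-heuristic (illustrative sketch).

    At each step it picks a low-level move operator (greedily by observed average
    improvement, or at random with probability eps), applies it, and accepts
    non-worsening candidates. Assumes minimisation."""
    current, f_cur = initial, evaluate(initial)
    stats = {h: [0.0, 1e-9] for h in low_level_heuristics}   # [total improvement, uses]
    for _ in range(iters):
        if random.random() < eps:
            h = random.choice(low_level_heuristics)
        else:
            h = max(low_level_heuristics, key=lambda op: stats[op][0] / stats[op][1])
        cand = h(current)
        f_cand = evaluate(cand)
        stats[h][0] += max(0.0, f_cur - f_cand)
        stats[h][1] += 1
        if f_cand <= f_cur:
            current, f_cur = cand, f_cand
    return current, f_cur

# Toy usage: minimise a sum of squares with two simple perturbation operators.
objective = lambda x: sum(v * v for v in x)
tweak_small = lambda x: [v + random.gauss(0, 0.05) for v in x]
tweak_large = lambda x: [v + random.gauss(0, 0.5) for v in x]
start = [random.uniform(-3, 3) for _ in range(5)]
print(selection_perturbative_hh(start, [tweak_small, tweak_large], objective)[1])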

About the speaker
Nelishia Pillay is a Professor and Head of Department in the Department of Computer Science at the University of Pretoria. Her research areas include hyper-heuristics, combinatorial optimization, genetic programming, genetic algorithms and other biologically-inspired methods. She has published in these areas in journals, international and national conference proceedings. She is one of the vice-chairs of the IEEE Task Force on Hyper-Heuristics. She is an active researcher in the field of evolutionary algorithm hyper-heuristics and their application to combinatorial optimization problems and automated design. This is one of the focus areas of the NICOG (Nature-Inspired Computing Optimization) research group, which she has established.

Evolutionary Bilevel Optimization: An Emerging Area for Research and Application in EC
Kalyanmoy Deb, Michigan State University, USA
Ankur Sinha, Indian Institute of Management Ahmedabad, India
Pekka Malo, Aalto University, Finland
[Sunday, 9 Sep, 09:00-10:30]

Many practical optimization problems are better posed as hierarchical optimization problems in which different optimization tasks are put into different levels. The simplest of these hierarchical problems are known as “bilevel” optimization problems, which contain two levels of optimization tasks in a nested manner. A solution at the upper level is considered feasible only if it is optimal for the corresponding lower-level problem. These problems are too complex to be solved using classical optimization methods simply due to the “nestedness” of one optimization task in another. Evolutionary algorithms provide amenable ways to address such problems due to their flexibility and ability to handle constrained search spaces efficiently. In this tutorial, we will introduce principles of bilevel optimization for both single and multiple objectives, and discuss the difficulties in solving such problems in general. A number of applications of evolutionary bilevel optimization (EBO) will also be highlighted. A recent review on EBO is available [1].
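In its general single-objective form, the nested structure described above can be written as the following standard textbook formulation (our summary, loosely following the notation of the review [1]):

\begin{align*}
\min_{x_u \in X_U,\; x_l \in X_L} \quad & F(x_u, x_l) \\
\text{subject to} \quad & x_l \in \operatorname*{arg\,min}_{z \in X_L} \left\{ f(x_u, z) \;:\; g_j(x_u, z) \le 0,\; j = 1,\dots,J \right\}, \\
& G_k(x_u, x_l) \le 0, \quad k = 1,\dots,K,
\end{align*}

where F and G_k are the upper-level objective and constraints, f and g_j are the lower-level ones, and a pair (x_u, x_l) is upper-level feasible only if x_l is optimal for the lower-level problem induced by x_u.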

[1] Sinha, A., Malo, P., and Deb, K. (2018). A Review on Bilevel Optimization: From Classical to Evolutionary Approaches and Applications. IEEE Transactions on Evolutionary Computation, Vol 22, No. 2, pp. 276-295.

About the speakers
Kalyanmoy Deb is Koenig Endowed Chair Professor at the Department of Electrical and Computer Engineering at Michigan State University. His research interests are in evolutionary optimization and its application to multi-criterion optimization, bilevel optimization, modeling, and machine learning. He was awarded the Infosys Prize, the TWAS Prize in Engineering Sciences, the CajAstur Mamdani Prize, the Edgeworth-Pareto award, the Bhatnagar Prize in Engineering Sciences, and the Bessel Research award from Germany. He is a fellow of IEEE and ASME. He has published over 480 research papers with a Google Scholar citation count of over 114,000 and an h-index of 107. More information can be found at http://www.coin-laboratory.com

Ankur Sinha is a faculty member at the Indian Institute of Management Ahmedabad, India. He completed his Ph.D. at Aalto University School of Business, Finland, where his dissertation was adjudged the best thesis of the year 2011. After completing his PhD, he has held visiting positions at Michigan State University, MI, USA and Aalto University, Finland. He has a Bachelor's degree in Mechanical Engineering from the Indian Institute of Technology Kanpur, India. His research interests are in the areas of bilevel optimization, multi-objective optimization and data analytics. More information about his research can be found on his website http://www.iima.ac.in/~asinha/.

Evolutionary Computation and Machine Learning in Cryptology
Stjepan Picek, TU Delft, The Netherlands
[Saturday, 8 Sep, 11:00-12:30]

Evolutionary Computation (EC) has been successfully applied to various real-world problems. One domain rich with difficult problems is cryptology. This tutorial starts with a brief introduction to cryptology intended for a general audience. Next, we examine several topics from cryptology that have been successfully tackled with EC so far and discuss why those topics are suitable for applying EC. We discuss the choice of appropriate EC techniques (GA, GP, CGP, ES, multi-objective optimization) for various problems and evaluate the importance of that choice. We discuss the gap between the crypto community and the EC community and what it means for the results. In doing so, we place special emphasis on the perspective that cryptology can represent a source of interesting benchmark problems for EC. We finish with a more general overview of artificial intelligence applications in security. This tutorial will include live demos of EC in action when dealing with cryptology problems.

About the speaker
Stjepan Picek is an assistant professor at TU Delft, The Netherlands. Prior to that, he worked as a postdoctoral researcher at MIT, USA and KU Leuven, Belgium. Stjepan has a double doctorate in computer science from the University of Zagreb, Croatia and Radboud University Nijmegen, The Netherlands. His primary research interests are cryptography, evolutionary computation, and machine learning. He has published more than 70 journal and conference papers.

Exploratory Landscape Analysis
Pascal Kerschke and Mike Preuss, University of Muenster, Germany
[Saturday, 8 Sep, 11:00-12:30]

Exploratory Landscape Analysis (ELA) has been conceived as an automated approach for characterizing optimization problems by extracting (not necessarily intuitively understandable) landscape features, based on a rather small initial sample from the underlying optimization problem. Within this tutorial we will introduce the general concept of (automated) algorithm selection, which is one of the main use cases of ELA, followed by a presentation of examples from different optimization domains in which ELA has successfully been used to improve algorithm selection processes. After presenting the general idea of ELA and providing a detailed overview of its status quo (including recently published extensions), we will show how ELA can improve our understanding of (a) the characteristics of different problem landscapes, and (b) the behavior of optimization algorithms which are executed on these problems.

The remainder of the tutorial will be used for an interactive live-demo, in which our participants will perform ELA on some continuous optimization problems.
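To give a rough idea of what a sample-based landscape feature looks like, the snippet below (our own simplification in the spirit of ELA's meta-model feature group, not the flacco package used in the tutorial) measures how well a purely linear and a quadratic model explain a small sample of the objective function.

import numpy as np

def simple_meta_model_features(X, y):
    """Two toy landscape features: the fraction of variance in the sample explained
    by a linear model and by a quadratic (squared-terms) model."""
    def r_squared(design, y):
        coef, *_ = np.linalg.lstsq(design, y, rcond=None)
        residuals = y - design @ coef
        return 1.0 - residuals.var() / y.var()

    ones = np.ones((X.shape[0], 1))
    lin = np.hstack([ones, X])
    quad = np.hstack([ones, X, X ** 2])
    return {"lin_r2": r_squared(lin, y), "quad_r2": r_squared(quad, y)}

# Toy usage: a random sample of a 3-dimensional sphere function.
rng = np.random.default_rng(0)
X = rng.uniform(-5, 5, size=(200, 3))
y = np.sum(X ** 2, axis=1)
print(simple_meta_model_features(X, y))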

About the speakers
Pascal Kerschke (http://erc.is/p/kerschke) is a Research Associate at the Group of Information Systems and Statistics at the University of Muenster (Germany), from which he received his PhD degree in 2017. Prior to that, he completed his Master studies (M.Sc.) in “Data Sciences” (2013) at the Faculty of Statistics at the TU Dortmund (Germany). His main research interests are data science, machine learning, algorithm selection, as well as Exploratory Landscape Analysis (ELA) for single- and multi-objective optimization, in the continuous (Black-Box) and discrete (Traveling Salesperson Problems, TSP) domain. Furthermore, he is one of the developers of related R-packages, such as “flacco”, “smoof” and “mlr”.

Mike Preuss is a Research Associate at ERCIS, University of Muenster, Germany. Previously, he was with the Chair of Algorithm Engineering at TU Dortmund, Germany, where he received his PhD in 2013. His research interests focus on the field of evolutionary algorithms for real-valued problems, namely on multimodal and multiobjective optimization, and on computational intelligence methods for computer games, especially in procedural content generation (PCG) and realtime strategy games (RTS). He is associate editor of the IEEE TCIAIG (Transactions on Computational Intelligence and AI in Games) and Springer's SWEVO journal, and has been a member of the organizational team of several conferences in recent years, in various functions, as general co-chair, proceedings chair, competition chair, and workshops chair.

Introduction to Statistical Modeling of EC Systems and Experiments: A Visual Approach
Mark Wineberg, University of Guelph, Canada
[Sunday, 9 Sep, 11:00-12:30]

This tutorial is a follow-up to the statistics tutorial given at GECCO. While still at an introductory level (attendance of the GECCO tutorial is not assumed), the material covered is typically found in upper-year undergraduate courses on statistical modeling. We will move beyond simple comparisons using t-tests to the understanding and modeling of the underlying behaviors of various factors that may affect a system, even when those behaviors are obscured by the noise encountered in a stochastic system.
Topics include: factor models, two-factor linear regression, confidence bands, multiple regression, polynomial regression, the relationship to GP-style symbolic regression, post-hoc analysis, and possibly one-way and multi-way ANOVA, time permitting.
As with the GECCO tutorial, this tutorial takes a very visual approach to statistics, relying on graphics and animation to provide an intuitive understanding of the subject instead of the traditional equations, which cater only to the initiated.
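As a small taste of the flavour of these topics (our own example, not the tutorial's material), the snippet below fits a degree-2 polynomial regression to noisy performance measurements and builds a simple bootstrap confidence band around the fitted curve, using only numpy.

import numpy as np

rng = np.random.default_rng(1)

# Noisy observations of some performance measure as a function of a parameter.
x = np.linspace(0, 10, 60)
y = 0.5 * x ** 2 - 2 * x + rng.normal(0, 4, size=x.size)

# Degree-2 polynomial regression plus a bootstrap 95% confidence band.
grid = np.linspace(0, 10, 200)
fits = []
for _ in range(500):
    idx = rng.integers(0, x.size, x.size)          # resample observations with replacement
    coeffs = np.polyfit(x[idx], y[idx], deg=2)
    fits.append(np.polyval(coeffs, grid))
fits = np.array(fits)
lower, upper = np.percentile(fits, [2.5, 97.5], axis=0)
point = np.polyval(np.polyfit(x, y, deg=2), grid)
print(point[:3], lower[:3], upper[:3])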

About the speaker
Prof. Wineberg has been actively researching the field of Evolutionary Computation (EC) since 1993. He is an Associate Professor in the School of Computer Science at the University of Guelph, Canada. His current research includes statistical modeling of EC systems, as well as multipopulational EAs, including co-evolutionary systems and the effects of migration. Prof. Wineberg also teaches an undergraduate course at Guelph on discrete event simulations, which emphasizes proper statistical analysis, as well as a graduate course on evolutionary computation, which includes the statistical analysis tutorials given here and at GECCO.

Genetic Improvement: Taking Real-World Source Code and Improving it Using Genetic Programming
John Woodward, Queen Mary University of London, UK
Saemundur O. Haraldsson, University of Stirling, UK
[Sunday, 9 Sep, 16:00-17:30]

Genetic Programming (GP) has been around for 25 years. Genetic Improvement (GI) is new. GI evolves source code, rather than a simulation of code. In other words, GI operates directly on Java or C, for example, whereas GP operates on a tiny instruction set defined by the function set and terminal set. Another fundamental difference is that GI starts with real-world software, whereas GP typically tries to evolve programs from scratch. These differences may not seem important; however, this subtle difference opens new possibilities for research. Furthermore, we can optimize the physical properties of code such as power consumption, size of code, bandwidth, and other non-functional properties, including execution time.
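To make the "evolving real source code" idea concrete, here is a toy, line-granularity patch representation in the spirit of many GI systems (our own sketch, not the presenters' tooling): a patch is a short list of delete/replace/copy edits applied to existing source lines, and the search operates over such patches rather than over whole programs.

import random

def apply_patch(lines, patch):
    """Apply a list of (operation, target_index, source_index) edits to source lines."""
    lines = list(lines)
    for op, target, source in patch:
        if target >= len(lines):
            continue                        # silently skip out-of-range edits
        if op == "delete":
            lines[target] = ""
        elif op == "replace":
            lines[target] = lines[source % len(lines)]
        elif op == "copy":
            lines.insert(target, lines[source % len(lines)])
    return lines

def random_patch(n_lines, size=3):
    ops = ["delete", "replace", "copy"]
    return [(random.choice(ops), random.randrange(n_lines), random.randrange(n_lines))
            for _ in range(size)]

# Toy usage on a pretend four-line program.
program = ["a = read_input()", "b = slow_helper(a)", "log(b)", "return b"]
patch = random_patch(len(program))
print(patch)
print(apply_patch(program, patch))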

This tutorial is of interest to people with a GP background interested in applying their techniques to real source code, and software practitioners interested in using automated techniques to improve software. We will not assume prior knowledge of GP.

About the speakers
Saemundur O. Haraldsson is a research fellow at the University of Stirling. He has multiple publications on Genetic Improvement, including two that have received best paper awards, at last year’s GI and ICTS4eHealth workshops. Additionally, he co-authored the first comprehensive survey on GI, which was published in 2017. He has been invited to give talks on the subject at two Crest Open Workshops and for an industrial audience in Iceland. His PhD thesis (submitted in May 2017) details his work on the world’s first live GI integration in an industrial application.

John Woodward is a lecturer at QMUL. He has organized workshops at GECCO, including Metaheuristic Design Patterns and ECADA (Evolutionary Computation for the Automated Design of Algorithms), which has run for 7 years. He has also given tutorials on the same topic at PPSN, CEC, and GECCO. He currently holds a grant examining how Genetic Improvement techniques can be used to adapt scheduling software for airport runways. With his PhD student, Saemundur Haraldsson (with whom this tutorial is given in collaboration), he won a best paper award at last year’s GI workshop. He has also organized a GI workshop at UCL as part of their very successful Crest Open Workshops. Woodward has previously given a tutorial at PPSN (http://ppsn2014.ijs.si/?show=tutorials#t6).

Learning Classifier Systems as Learning Cognitive Systems
Will Browne, Victoria University of Wellington, New Zealand
[Sunday, 9 Sep, 14:00-15:30]

Learning classifier systems (LCSs) are an often overlooked class of rule-based machine learning algorithms with a unique and flexible set of features that sets them apart from other strategies. The original LCS from 40 years ago, CS-1, stood for Cognitive System – One. Subsequently, LCSs have become powerful evolutionary machine learning techniques, but there is still much to be gained by exploring their cognitive systems capabilities. The tutorial will begin with a gentle introduction to LCSs based on the recent textbook ‘Introduction to Learning Classifier Systems’, which is co-authored by the presenter. The second part of this tutorial will show examples of Learning Classifier Systems in terms of cognitive systems. In-depth understanding will be provided regarding improved perception, representation, transfer learning and embodied LCSs. Examples will be discussed of solving previously intractable problems in a human-like manner as well as unique applications in robotics.

About the speaker
Prof. Browne’s research focuses on applied cognitive systems; specifically, how to use inspiration from natural intelligence to enable computers/machines/robots to behave usefully. This includes cognitive robotics, learning classifier systems, and modern heuristics for industrial application. Prof. Browne has been co-track chair of the Genetics-Based Machine Learning (GBML) track and is currently co-chair of the Evolutionary Machine Learning track at the Genetic and Evolutionary Computation Conference. He has also provided tutorials on Rule-Based Machine Learning at GECCO, chaired the International Workshop on Learning Classifier Systems (LCSs) and lectured graduate courses on LCSs. He has recently co-authored the first textbook on LCSs, ‘Introduction to Learning Classifier Systems’, Springer 2017. Currently he leads the LCS theme in the Evolutionary Computation Research Group at Victoria University of Wellington, New Zealand.

Mathematical Programming as a Complement to Bio-Inspired Optimization
Ofer Shir, Tel-Hai College and Migal-Galilee Research Institute, Israel
[Saturday, 8 Sep, 11:00-12:30]

Global optimization of complex models has for several decades been approached by means of formal algorithms and Mathematical Programming (MP), and has simultaneously been treated by a wide range of dedicated heuristics, which is where nature-inspired approaches are placed. These two branches complement each other, yet in practice they are studied within two independent CS disciplines. At present, education within the scope of problem-solving from nature can hardly be claimed to encompass basic MP, and this tutorial aims at bridging this gap for our scholars and students.
The tutorial comprises two parts. The first part presents the fundamentals of MP. It overviews mathematical optimization in light of convex optimization versus combinatorial optimization. It discusses some of the theoretical aspects, such as polyhedra and the duality theorem. The second part focuses on MP in practice, particularly on modeling, and covers selected algorithms: Simplex, Ellipsoid, and Branch-and-Bound.
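As a pointer to the duality theorem mentioned above, the standard textbook primal-dual pair for a linear program (our summary, not the tutorial slides) reads:

\begin{align*}
\text{(P)} \quad \min_{x} \; c^{\top} x \quad \text{s.t.} \quad A x \ge b,\; x \ge 0,
\qquad\qquad
\text{(D)} \quad \max_{y} \; b^{\top} y \quad \text{s.t.} \quad A^{\top} y \le c,\; y \ge 0.
\end{align*}

Weak duality gives $c^{\top} x \ge b^{\top} y$ for any feasible pair, and the duality theorem states that when both problems are feasible their optimal values coincide.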
The tutorial is planned for all PPSN participants, assuming no prior knowledge in mathematical optimization.

About the speaker
Ofer Shir (ofersh@telhai.ac.il) is a Senior Lecturer (Assistant Professor) at the Computer Science Department in Tel-Hai College, and a Principal Investigator at Migal-Galilee Research Institute, where he heads the Scientific Informatics and Experimental Optimization group – both located in the Upper Galilee, Israel. He holds a PhD in Computer Science from Leiden University, The Netherlands (conferred 2004, 2008; PhD advisers: Thomas Bäck and Marc Vrakking). Upon his graduation, he completed a two-year term as a Postdoctoral Research Associate at Princeton University, USA (2008-2010). He then joined IBM-Research as a Research Staff Member (2010-2013), where he gained real-world experience in convex and combinatorial optimization as well as in decision analytics. His current topics of interest include Statistical Learning in Theory and in Practice, Experimental Optimization, Theory of Randomized Search Heuristics, Scientific Informatics, Natural Computing, Computational Intelligence in Physical Sciences, Quantum Control and Quantum Information.

Multiagent Systems and Agent-based Modeling and Simulation
Ana Bazzan, Federal University of Rio Grande do Sul, Brazil
[Saturday, 8 Sep, 14:00-15:30]

Multiagent systems (MAS) and agent-based modeling and simulation (ABMS) deal with social interactions among intelligent actors (agents). These two disciplines study neither just physical systems nor agents in isolation, but the agent as part of a social space. The goal of this tutorial is twofold: (a) about half of the time will cover basic material about MAS and ABMS (since this simulation paradigm is widely used by the computational intelligence community), and hence provide the audience with a sense of the basic principles; and (b) about half of the time will cover the most recent advances in MAS, including the highly relevant topic of multiagent learning, one of the obvious interfaces between these two communities, as well as evolutionary game theory.

About the speaker
Ana L. C. Bazzan is a full professor at the Federal University of Rio Grande do Sul (UFRGS) in Brazil. She has served as general co-chair of the AAMAS 2014 (the premier conference on autonomous agents and multiagent systems) and was one of the keynote speakers at AAMAS 2017. She co-organized the Workshop on Synergies between Multiagent Systems, Machine Learning, and Complex Systems (TRI 2015). Her research interests include MAS, ABMS, multiagent reinforcement learning, evolutionary game theory, and complex systems.

Multi-Objective Optimization with the jMetal Framework
Antonio J. Nebro, University of Malaga, Spain
[Sunday, 9 Sep, 11:00-12:30]

jMetal is a Java-based framework for multi-objective optimization with metaheuristics which has become popular in many disciplines (engineering, economics, bioinformatics, etc.). The journal paper describing jMetal has more than 775 citations according to Google Scholar, and it has been used by research groups, industry and academia. In this tutorial, we give a practical overview of the main jMetal components (algorithms, encodings, problems, operators, experiments, quality indicators), focusing on how to configure and run some of the included metaheuristics and also on how to incorporate new solution representations and problems. We give examples of classical algorithms but also of more modern techniques, including preference-based metaheuristics. Special attention will be paid to the definition of experimental studies to statistically assess the performance of algorithms. The main goal is that the attendees can replicate all the examples presented, and the material needed to follow the tutorial will be available in a public repository (https://github.com/jMetal/PPSN2018Tutorial).
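Independently of jMetal's own Java experiment support (which the tutorial covers), the statistical assessment step mentioned above typically boils down to a rank-based test over per-run indicator values; the snippet below is a generic illustration of that step in Python, with made-up hypervolume numbers used purely as placeholders.

from scipy import stats

# Hypothetical hypervolume values from 15 independent runs of two algorithms
# on the same problem (the numbers are invented for illustration only).
hv_algo_a = [0.660, 0.658, 0.662, 0.659, 0.661, 0.657, 0.660, 0.663,
             0.659, 0.658, 0.662, 0.660, 0.661, 0.659, 0.660]
hv_algo_b = [0.654, 0.652, 0.655, 0.653, 0.656, 0.651, 0.654, 0.655,
             0.653, 0.652, 0.654, 0.655, 0.653, 0.654, 0.652]

# Non-parametric test, as is customary when normality cannot be assumed.
stat, p_value = stats.mannwhitneyu(hv_algo_a, hv_algo_b, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.4g}")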

About the speaker
Antonio J. Nebro received his M.S. and Ph.D. degrees in Computer Science in 1992 and 1999, respectively, from the University of Malaga, Spain. He is currently an Associate Professor of Computer Science at this university. His current research activity is related to multi-objective optimization algorithms and parallelism, software tools for multi-objective optimization, and the application of these techniques to real-world problems in different domains, including bioinformatics, civil engineering, and Big Data. He is one of the designers and the main developer of the jMetal framework for multi-objective optimization with metaheuristics.

Next Generation Genetic Algorithms
Darrell Whitley, Colorado State University, USA
[Saturday, 8 Sep, 16:00-17:30]

New developments in Gray Box Optimization make it possible to construct new forms of Genetic Algorithms that do not use random mutation or random recombination. Instead, for certain classes of NP-hard problems, it is possible to exactly compute the location of improving moves in constant time. In some domains, this makes random mutation obsolete. Deterministic “Partition Crossover” can be applied to optimization problems such as MAXSAT and the Traveling Salesman Problem. Partition Crossover locally decomposes a recombination graph into q subgraphs in O(n) time. It can then identify the best of 2^q possible offspring. If the parents are local optima, the offspring are guaranteed to be locally optimal in the largest hyperplane subspace containing both parents. Local decomposition has also been used to solve multiply constrained scheduling problems with up to 1 billion variables.
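The conceptual sketch below (our own illustration, not the tutorial's or the book chapter's implementation, and without the O(n) bookkeeping of the real operator) shows the core idea of Partition Crossover on an additively decomposed pseudo-Boolean function: build the recombination graph over differing variables, split it into connected components, and take the better parent assignment on each component independently.

import networkx as nx

def partition_crossover(parent1, parent2, subfunctions, maximize=True):
    """`subfunctions` is a list of (mask, func) pairs: `mask` is a tuple of variable
    indices and `func` maps the bits at those indices to a value; overall fitness
    is the sum of all subfunction values."""
    diff = {i for i, (a, b) in enumerate(zip(parent1, parent2)) if a != b}

    # Recombination graph: differing variables, connected if they share a subfunction.
    G = nx.Graph()
    G.add_nodes_from(diff)
    for mask, _ in subfunctions:
        touched = [i for i in mask if i in diff]
        for u, v in zip(touched, touched[1:]):
            G.add_edge(u, v)

    def contribution(parent, component):
        # Sum of the subfunctions touching this component, evaluated on that parent.
        return sum(func(tuple(parent[i] for i in mask))
                   for mask, func in subfunctions
                   if any(i in component for i in mask))

    child = list(parent1)
    for component in nx.connected_components(G):
        c1, c2 = contribution(parent1, component), contribution(parent2, component)
        take_p2 = (c2 > c1) if maximize else (c2 < c1)
        if take_p2:                          # best of the two parents on this component
            for i in component:
                child[i] = parent2[i]
    return child

# Toy usage: three MAX-3SAT-style clauses as 0/1-valued subfunctions over 8 variables.
clauses = [((0, 1, 2), lambda b: int(b[0] or b[1] or not b[2])),
           ((3, 4, 5), lambda b: int(not b[0] or b[1] or b[2])),
           ((6, 7, 0), lambda b: int(b[0] or b[1] or b[2]))]
p1 = [0, 0, 1, 0, 1, 1, 1, 0]
p2 = [1, 1, 1, 0, 0, 0, 1, 0]
print(partition_crossover(p1, p2, clauses))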

The book chapter “Next Generation Genetic Algorithms” will accompany the tutorial.

About the speaker
Prof. Whitley has been active in Evolutionary Computation since 1986, and has published more than 200 papers that have garnered more than 21,000 citations. He has served as Editor-in-Chief of the journal Evolutionary Computation, and served as Chair of the Governing Board of ACM SIGEVO from 2007 to 2011.

Runtime Analysis of Population-Based Evolutionary Algorithms
Per Kristian Lehre, University of Birmingham, UK
[Saturday, 8 Sep, 14:00-15:30]

Populations are at the heart of evolutionary algorithms (EAs). They provide the genetic variation which selection acts upon. A complete picture of EAs can only be obtained if we understand their population dynamics. A rich theory on runtime analysis of EAs has been developed over the last 20 years. This theory provides insights into how the performance of EAs depends on their parameter settings and the characteristics of the underlying fitness landscapes. Early studies were mostly concerned with EAs without populations, such as the (1+1) EA. This tutorial introduces recent techniques that enable runtime analysis of EAs with realistic populations. To illustrate the application of these techniques, we consider fundamental questions such as: When are populations necessary for efficient optimisation? What is the appropriate balance between exploration and exploitation and how does this depend on relationships between mutation and selection rates? What determines an EA’s tolerance for uncertainty?

About the speaker
Dr Lehre is a Senior Lecturer in the School of Computer Science at the University of Birmingham (since Jan 2017). Before joining Birmingham, he was an Assistant Professor at the University of Nottingham from 2011. He obtained MSc and PhD degrees in Computer Science from the Norwegian University of Science and Technology (NTNU) in Trondheim, completing the PhD in 2006 under the supervision of Prof Pauline Haddow. He joined the School of Computer Science at the University of Birmingham, UK, as a Research Fellow in January 2007 with Prof Xin Yao, and was a Postdoctoral Fellow at DTU Informatics, Technical University of Denmark in Lyngby, Denmark from April 2010.

Dr Lehre’s research interests are in theoretical aspects of nature-inspired search heuristics, in particular runtime analysis of population-based evolutionary algorithms. His research has won numerous best paper awards, including at GECCO (2013, 2010, 2009, 2006), ICSTW (2008), and ISAAC (2014). He is vice-chair of the IEEE Task Force on Theoretical Foundations of Bio-inspired Computation, a member of the editorial board of Evolutionary Computation, and an associate editor of IEEE Transactions on Evolutionary Computation. He has guest-edited special issues of Theoretical Computer Science and IEEE Transactions on Evolutionary Computation on theoretical foundations of evolutionary computation. Dr Lehre has given many tutorials on evolutionary computation at summer schools (UK: 2007, 2015, 2016; France: 2009, 2010, and 2011; Estonia: 2010), as well as at major conferences and workshops (GECCO 2013, 2014, 2015, 2016, 2017, CEC 2013 and 2015, PPSN 2016, and ThRaSH 2013). He was the main coordinator of the 2M euro EU-funded project SAGE, which brought together the theory of evolutionary computation and population genetics.

Semantic Genetic Programming
Alberto Moraglio, University of Exeter, UK
Krzysztof Krawiec, Poznan University of Technology, Poland
[Sunday, 9 Sep, 09:00-10:30]

Semantic genetic programming is a recent, rapidly growing trend in Genetic Programming (GP) that aims at opening the ‘black box’ of the evaluation function and making explicit use of more information on program behavior in the search. In the most common scenario of evaluating a GP program on a set of input-output examples (fitness cases), the semantic approach characterizes a program with a vector of outputs rather than a single scalar value (fitness). Past research on semantic GP has demonstrated that the additional information obtained in this way facilitates designing more effective search operators. In particular, exploiting the geometric properties of the resulting semantic space leads to search operators with attractive properties, which have provably better theoretical characteristics than conventional GP operators. This in turn leads to dramatic improvements in experimental comparisons. The aim of the tutorial is to give a comprehensive overview of semantic methods in genetic programming, to illustrate in an accessible way a formal geometric framework for program semantics that allows the design of provably good mutation and crossover operators for traditional GP problem domains, and to analyze rigorously their performance (runtime analysis). A number of real-world applications of this framework will also be presented. Other promising emerging approaches to semantics in GP will be reviewed. In particular, the recent developments in behavioral programming, which aims at characterizing the entire program behavior (and not only program outputs), will be covered as well. Current challenges and future trends in semantic GP will be identified and discussed.

Efficient implementation of semantic search operators may be challenging. We will illustrate very efficient, concise and elegant implementations of these operators, which are available for download from the web.
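For orientation, here is a deliberately naive, closure-based sketch of the canonical geometric semantic crossover and mutation for real-valued regression (our own illustration of the published operator definitions; the efficient, concise implementations mentioned above avoid materialising the growing expressions).

import random
import math

def random_function():
    """A small random 'program': a random affine function squashed into (0, 1)."""
    a, b = random.uniform(-1, 1), random.uniform(-1, 1)
    return lambda x: 1.0 / (1.0 + math.exp(-(a * x + b)))

def geometric_crossover(p1, p2):
    r = random_function()                    # random mixing program with outputs in (0, 1)
    return lambda x: r(x) * p1(x) + (1.0 - r(x)) * p2(x)

def geometric_mutation(p, ms=0.1):
    r1, r2 = random_function(), random_function()
    return lambda x: p(x) + ms * (r1(x) - r2(x))

# Toy usage on a 1-D regression task.
parent1 = lambda x: 2 * x
parent2 = lambda x: x + 1
child = geometric_mutation(geometric_crossover(parent1, parent2))
print([round(child(x), 3) for x in (0.0, 0.5, 1.0)])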

About the speakers
Alberto Moraglio is a Lecturer in Computer Science in the College of Engineering, Mathematics and Physical Sciences at the University of Exeter, UK. He is the founder of the Geometric Theory of Evolutionary Algorithms, which unifies Evolutionary Algorithms across representations and has been used for the principled design and rigorous theoretical analysis of new successful search algorithms. He has given several tutorials at GECCO and IEEE CEC, and has an extensive publication record on this subject. He co-chaired the GP track and the Theory track in past editions of GECCO, and co-chaired the GA track at GECCO 2016. He also co-chaired past editions of the European Conference on Genetic Programming. He has applied his geometric theory to derive a new form of Genetic Programming based on semantics with appealing theoretical properties, which is rapidly gaining popularity in the GP community.

Krzysztof Krawiec is an Associate Professor in the Laboratory of Intelligent Decision Support Systems at Poznan University of Technology, Poznań, Poland. His primary research areas are genetic programming and coevolutionary algorithms, with applications in program synthesis, modeling, image analysis, and games. Dr. Krawiec co-chaired the European Conference on Genetic Programming in 2013 and 2014 and the GP track at GECCO in 2015 and 2016, and is an associate editor of the Genetic Programming and Evolvable Machines journal. His work in the area of semantic GP includes the design of effective search operators, semantic backpropagation, discovery of semantic modularity of programs, exploitation of program execution traces, and behavioral program synthesis.

The Cartography of Computational Search Spaces
Gabriela Ochoa, University of Stirling, UK
[Sunday, 9 Sep, 09:00-10:30]

The performance of heuristic search algorithms crucially depends on the underlying fitness landscape structure. Most fitness landscape analysis techniques study their local structure; there is a lack of tools to study their global structure, which is known to impact algorithms’ performance. This tutorial will describe local optima networks (LONs), a model of fitness landscapes suited to analysing their global structure by bringing in tools from complex networks. LONs provide new insight into the structural organisation and the connectivity pattern of a search space. We will cover the relevant definitions, extraction methodologies, metrics, and visualisation techniques to thoroughly characterise the global structure of computational search spaces. We will consider the landscapes induced by both evolutionary and local-search algorithms and will show results on combinatorial problems (binary and permutation spaces) as well as on computer program search spaces. An interactive demo will allow attendees to analyse and visualise realistic computational search spaces.
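As a rough illustration of the extraction idea (our own simplified sketch, not the tutorial's methodology or tools), the snippet below samples a small LON for a toy binary landscape: nodes are local optima found by hill climbing, and weighted directed edges count accepted perturbation-plus-hill-climbing transitions between them.

import random
import networkx as nx

def hill_climb(x, fitness):
    """First-improvement hill climbing on bit strings (returns a local optimum)."""
    improved = True
    while improved:
        improved = False
        for i in range(len(x)):
            y = x[:i] + [1 - x[i]] + x[i + 1:]
            if fitness(y) > fitness(x):
                x, improved = y, True
    return x

def sample_lon(fitness, n=12, runs=20, steps=50, k_perturb=3):
    """Sample a simplified, monotonic-style local optima network."""
    G = nx.DiGraph()
    for _ in range(runs):
        x = hill_climb([random.randint(0, 1) for _ in range(n)], fitness)
        G.add_node(tuple(x))
        for _ in range(steps):
            y = list(x)
            for i in random.sample(range(n), k_perturb):   # escape perturbation
                y[i] = 1 - y[i]
            y = hill_climb(y, fitness)
            if fitness(y) >= fitness(x):                   # accept non-worsening moves only
                u, v = tuple(x), tuple(y)
                if u != v:
                    w = G.get_edge_data(u, v, default={"weight": 0})["weight"]
                    G.add_edge(u, v, weight=w + 1)
                x = y
    return G

# Toy landscape: concatenated 3-bit deceptive traps, giving many local optima.
def trap_blocks(x, k=3):
    total = 0
    for i in range(0, len(x), k):
        ones = sum(x[i:i + k])
        total += k if ones == k else (k - 1 - ones)
    return total

G = sample_lon(trap_blocks)
print(G.number_of_nodes(), "local optima,", G.number_of_edges(), "edges")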

About the speaker
Gabriela Ochoa is a Senior Lecturer (Associate Professor) in Computing Science at the University of Stirling, Scotland. She holds a PhD in Computer Science and Artificial Intelligence from the University of Sussex, UK. She worked in industry for five years before joining academia, and has held faculty and research positions at the University Simon Bolivar, Venezuela and the University of Nottingham, UK. Her research interests lie in the foundations and application of evolutionary algorithms and heuristic search methods, with emphasis on autonomous search, hyper-heuristics, fitness landscape analysis and visualisation, and applications to combinatorial optimisation, healthcare, and software engineering. She has published over 120 scholarly papers and serves on various program committees. She is associate editor of the Evolutionary Computation Journal (MIT Press) and the IEEE Transactions on Evolutionary Computation, as well as a member of the Editorial Board for Genetic Programming and Evolvable Machines and of the SIGEVO executive board. She proposed the HyFlex hyper-heuristic framework and the first Cross-domain Heuristic Search Challenge (CHeSC 2011). She has served as an organiser and/or program chair for several international events such as GECCO, PPSN, FOGA and IEEE CEC, and was the Editor-in-Chief for GECCO 2017.

The Most Recent Advances on Multi-Modal Optimization
Michael G. Epitropakis, University of Patras, Greece
Mike Preuss, University of Muenster, Germany
Xiaodong Li, RMIT University, Melbourne, Australia
[Saturday, 8 Sep, 14:00-15:30]

Multi-Modal Optimization (MMO) is currently undergoing many changes and becoming established as an active research area that collects approaches from various domains of operational research, swarm intelligence and evolutionary computation. MMO typically strives to deliver multiple optimal (or close-to-optimal) solutions in a single optimization run. This tutorial will cover several scenarios and list currently employed and potentially available performance measures. Furthermore, many state-of-the-art as well as more classic MMO methods are compared and put into a rough taxonomy. We will also discuss recent relevant competitions and their results and outline possible future developments in this area. In brief, the tutorial will cover the following topics in syllabus form (a minimal niching sketch follows the list below):

  • Multi-modal optimization – what for?
  • Niching? Biological inspiration and optimization reality
  • Towards theory: high-level modelling
  • Suggested taxonomy and critical review of methods (with Demos)
  • Spotlight: Clustering, multiobjectivization, surrogates and archives
  • Measuring and different scenarios
  • Developing challenging competition benchmark function sets
  • Discussion on current competition results, and available software
  • Expected future developments
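To illustrate what a basic niching mechanism does (our own sketch of the classic clearing procedure, one of the long-established methods that a taxonomy like the one above would include), the snippet below resets the fitness of all but the best individual in each niche of radius sigma, so that selection preserves several optima at once.

import math

def clearing(population, fitnesses, sigma=0.5, kappa=1):
    """Classic 'clearing' niching step (illustrative sketch): within each niche of
    radius sigma, keep the fitness of the best kappa individuals ('winners') and
    reset the fitness of the rest to zero. Assumes maximisation and real-valued
    individuals; sigma and kappa are problem-dependent assumptions."""
    order = sorted(range(len(population)), key=lambda i: fitnesses[i], reverse=True)
    cleared = list(fitnesses)
    winners = []
    for i in order:
        nearby = sum(1 for j in winners
                     if math.dist(population[i], population[j]) < sigma)
        if nearby < kappa:
            winners.append(i)              # individual i becomes a niche winner
        else:
            cleared[i] = 0.0               # i is cleared: its niche is already occupied
    return cleared

# Toy usage: two clusters of points around the two optima of a bimodal function.
pop = [[-1.0], [-0.9], [-1.1], [1.0], [0.95], [1.05]]
fit = [0.9, 0.8, 0.7, 1.0, 0.85, 0.75]
print(clearing(pop, fit, sigma=0.5, kappa=1))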

About the speakers
Dr. Michael G. Epitropakis received his B.S., M.S., and Ph.D. degrees from the Department of Mathematics, University of Patras, Patras, Greece. Currently, he is a Lecturer in Foundations of Data Science at the Data Science Institute and the Department of Management Science, Lancaster University, Lancaster, UK. His current research interests include computational intelligence, evolutionary computation, swarm intelligence, machine learning and search-based software engineering. He has published more than 35 journal and conference papers. He is an active researcher on multi-modal optimization: he chairs the IEEE CIS Task Force on Multi-modal Optimization and co-organizes workshops, special sessions and the competition series on Niching Methods for Multimodal Optimization at the top-tier evolutionary computation conferences.

Mike Preuss is a Research Associate at ERCIS, the European Research Center for Information Systems, at the University of Muenster, Germany. Previously, he was with the Chair of Algorithm Engineering at the Computer Science Department, TU Dortmund, Germany, where he received his Diploma degree in 1998 and his PhD in 2013. His research interests focus on the field of evolutionary algorithms for real-valued problems, namely on multimodal and multiobjective optimization and the experimental methodology for (non-deterministic) optimization algorithms. He is currently working on the adaptability and applicability of computational intelligence techniques for various engineering domains and computer games, pushing forward modern approaches to experimental analysis such as Exploratory Landscape Analysis (ELA) and innovative uses of surrogate models. He was involved in founding the EvoGames track at Evo* and the Digital Entertainment Technologies and Arts (DETA) track at GECCO. Within the games field, he is mainly interested in AI for realtime strategy (RTS) games and procedural content generation (PCG).

Xiaodong Li (M’03-SM’07) received his B.Sc. degree from Xidian University, Xi’an, China, and his Ph.D. degree in information science from the University of Otago, Dunedin, New Zealand. He is a Professor with the School of Science (Computer Science and Software Engineering), RMIT University, Melbourne, Australia. His research interests include evolutionary computation, neural networks, data analytics, multiobjective optimization, multimodal optimization, and swarm intelligence. He serves as an Associate Editor of the IEEE Transactions on Evolutionary Computation, Swarm Intelligence (Springer), and the International Journal of Swarm Intelligence Research. He is a founding member of the IEEE CIS Task Force on Swarm Intelligence, a vice-chair of the IEEE Task Force on Multi-modal Optimization, and a former chair of the IEEE CIS Task Force on Large Scale Global Optimization. He is the recipient of the 2013 ACM SIGEVO Impact Award and the 2017 IEEE CIS “IEEE Transactions on Evolutionary Computation Outstanding Paper Award”.

Theory of Parallel Evolutionary Algorithms
Dirk Sudholt, University of Sheffield, UK
[Saturday, 8 Sep, 16:00-17:30] [slides]

Evolutionary algorithms (EAs) have given rise to many parallel variants, fuelled by the rapidly increasing number of CPU cores and the ready availability of computation power through GPUs and cloud computing. A very popular approach is to parallelize evolution in island models, or coarse-grained EAs, by evolving different populations on different processors. These populations run independently most of the time, but they periodically communicate genetic information to coordinate search. Many applications have shown that island models can speed up computation time significantly, and that parallel populations can further increase solution diversity. However, there is little understanding of when and why island models perform well, and what impact fundamental parameters have on performance. This tutorial will give an overview of recent theoretical results on the runtime of parallel evolutionary algorithms. These results give insight into the fundamental working principles of parallel EAs, assess the impact of parameters and design choices on performance, and contribute to the design of more effective parallel EAs.
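As a point of reference for the island model described above, here is a minimal ring-topology island model on OneMax, simulated sequentially (our own illustration; population sizes, migration interval and the simple (mu+mu) islands are assumptions made for the example, not results or recommendations from the tutorial).

import random

def island_model_onemax(n=60, islands=4, pop_size=20, generations=200, migration_interval=10):
    """Ring-topology island model: each island runs a simple (mu+mu) GA and every
    `migration_interval` generations sends a copy of its best individual to the
    next island, replacing that island's worst individual."""
    def fitness(x):
        return sum(x)

    def evolve(pop):
        offspring = [[1 - b if random.random() < 1.0 / n else b for b in parent]
                     for parent in pop]
        merged = sorted(pop + offspring, key=fitness, reverse=True)
        return merged[:len(pop)]

    pops = [[[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
            for _ in range(islands)]
    for gen in range(1, generations + 1):
        pops = [evolve(p) for p in pops]
        if gen % migration_interval == 0:
            bests = [max(p, key=fitness) for p in pops]
            for i in range(islands):                       # ring: island i sends to island i+1
                target = pops[(i + 1) % islands]
                target.sort(key=fitness)
                target[0] = list(bests[i])
    return max(max(fitness(ind) for ind in p) for p in pops)

print(island_model_onemax())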

About the speaker
Dirk Sudholt obtained his PhD in computer science in 2008 from the Technische Universitaet Dortmund, Germany, under the supervision of Prof. Ingo Wegener, with distinction. Since January 2012 he has been a Lecturer/Senior Lecturer at the University of Sheffield, UK. His research focuses on the computational complexity of randomized search heuristics such as evolutionary algorithms, including hybrid and parallel variants, and swarm intelligence algorithms like ant colony optimization and particle swarm optimization. He has published more than 90 refereed papers in international conferences and journals and has won 8 best paper awards at GECCO and PPSN.