SIGEVOlution

newsletter of the ACM Special Interest Group on Genetic and Evolutionary Computation



Image filtered using Fly4Arts from a photograph of Garnedd Ugain of Y Lliwedd in Snowdonia (Wales) by Dr Franck P. Vidal.

Editorial

Welcome to the Spring 2021 issue of the SIGEvolution newsletter! Our first contribution, by Edgar Galván, surveys a currently active area in neuroevolution: evolutionary approaches applied to the architectural configuration and training of deep artificial neural networks. We continue with a thorough overview of Nevergrad, an open-source platform for black-box optimization developed by a group of researchers based in France. We then report on how evolutionary computation has inspired a young college student from the United States, Alec Pugh, to develop his coding skills. The issue concludes by announcing a number of exciting events and calls for submissions.

Please get in touch if you would like to contribute an article for a future issue or have suggestions for the newsletter.

Gabriela Ochoa, Editor.

About the Cover

The artwork on the cover was created by Dr Zainab Ali Abbood as part of her PhD dissertation supervised by Dr Franck P. Vidal at Bangor University (Wales, UK). She implemented Fly4Arts: Evolutionary Digital Art with the Fly Algorithm, an image filter implemented using GPU computing. It is based on the Parisian Evolution / Cooperative Co-evolution principle. The algorithm contains all the usual components of an evolutionary algorithm. In addition, there is a "global" fitness calculated on the whole population, and a local fitness assessing the contribution of each individual. Each individual represents a paintbrush stroke, and the population is the image canvas. Each stroke has a pattern, size, orientation, colour and position in 3-D Euclidean space. The third dimension is needed so that strokes can partially occlude one another. Fly4Arts has been presented at the Biennial International Conference on Artificial Evolution (EA-2017), the European Conference on the Applications of Evolutionary Computation (EvoApplications 2017), and in "Fly4Arts: Evolutionary Digital Art with the Fly Algorithm," Arts Sci., vol. 17, no. 1, Oct. 2017. The artworks were on display during the Art&Science in Evolutionary Computation exhibition organised by Galerie Louchard in Paris.

Neuroevolution in Deep Neural Networks: A Comprehensive Survey

by Edgar Galván

Maynooth University, Ireland

Abstract. A variety of methods have been applied to the architectural configuration and learning or training of artificial deep neural networks (DNNs). These methods play a crucial role in the success or failure of the DNNs for most problems. Evolutionary Algorithms are gaining momentum as a computationally feasible method for the automated optimisation of DNNs. Neuroevolution is a term that describes these processes. This newsletter article summarises the full version available at https://arxiv.org/abs/2006.05415.

Introduction

Deep learning algorithms are inspired by the deep hierarchical structures of human perception, as well as production systems. These algorithms have achieved extraordinary results in different areas, including computer vision and speech recognition, to mention a few examples. The design of DNN architectures (along with the optimisation of their hyperparameters), as well as their training, plays a crucial part in their success or failure. EA-based architecture-search methods, sometimes referred to as neuroevolution, are yielding impressive results in the automatic configuration of DNN architectures. There are over 300 works published in the area of Neural Architecture Search, of which nearly a third correspond to neuroevolution in DNNs. Figure 1 shows a breakdown of these publications per year, from 2009 to 2020 (left), and the most common publication venues (right).

Figure 1: Number of publications on neuroevolution in DNNs. Left: Per year. Right: In conference proceedings and journals.

Evolving DNNs Architectures Through Evolutionary Algorithms

Motivation. In recent years, there has been a surge of interest in methods for neural architecture search. Broadly, these can be categorised into one of two areas: evolutionary algorithms (EAs) or reinforcement learning. Recently, EAs have started gaining momentum for designing deep neural network architectures. The popularity of these algorithms stems from the fact that they are gradient-free, population-based methods that offer a parallelised mechanism to simultaneously explore multiple areas of the search space, while at the same time offering a mechanism to escape from local optima. Moreover, the fact that the algorithm is inherently suited to parallelisation means that more potential solutions can be computed simultaneously within acceptable wall-clock time. Steady increases in computing power, including graphics processing units with thousands of cores, are helping to speed up population-based EAs.

Criticism. Despite the popularity of EAs for designing deep neural network architectures, they have also been criticised for being slow learners, as well as being computationally expensive to evaluate. For example, when using a small population-based EA of 20 individuals (potential solutions) and a training set of 50,000 samples, one generation alone (of hundreds, thousands or millions of generations) will require one million evaluations through the fitness function.
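A worked version of this count, stated only to make the cost explicit:

\[
20 \text{ individuals} \times 50{,}000 \text{ training samples} = 1{,}000{,}000 \text{ fitness evaluations per generation.}
\]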

Training Deep Neural Networks With Evolutionary Algorithms

Motivation. Backpropagation has been one of the most successful and dominant methods for training ANNs over the past three decades. This simple, effective and elegant method applies Stochastic Gradient Descent (SGD) to the weights of the ANN, where the goal is to keep the overall error as low as possible. However, as remarked by some researchers, the widely held belief, up to around 2006, was that backpropagation would suffer a loss of its gradient within DNNs. This turned out to be a false assumption, and it has subsequently been shown that backpropagation and SGD are effective at optimising DNNs even when there are millions of connections. Both backpropagation and SGD benefit from the availability of sufficient training data and of computational power. In a problem space with so many dimensions, the success of SGD in DNNs is still surprising: practically speaking, SGD should be highly susceptible to local optima. Interesting works have studied this phenomenon, arguing, for example, that the noise inherent in the gradient estimator helps SGD escape saddle points. Others hypothesise that the presence of multiple local optima is not a problem, as they are very similar to the best solution. EAs, in turn, perform very well in the presence of saddle points.

Criticism. As there are no guarantees of convergence, the solutions computed using EAs are usually regarded as near-optimal. Population-based EAs in effect approximate the gradient, estimating it from the individuals in a population and their corresponding objective values, whereas SGD computes the gradient directly. As a result, some researchers may consider EAs unsuitable for DL tasks. However, it has been demonstrated that the precision of the gradient computed by SGD is not absolutely critical to the overall success of DNNs: for example, it has been shown that degrading the precision of the gradient calculation has no detrimental effect on learning.
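To make the "population as gradient estimator" idea concrete, here is a minimal sketch (not taken from the survey) of an evolution-strategy-style update in the spirit of natural evolution strategies; the toy linear model, fitness function and hyperparameters are illustrative assumptions only.

```python
import numpy as np

def fitness(weights, X, y):
    """Negative mean squared error of a toy linear model (a stand-in for a DNN's loss)."""
    return -float(np.mean((X @ weights - y) ** 2))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))             # toy training data
y = X @ rng.normal(size=10)

weights = np.zeros(10)                     # current parent solution
sigma, alpha, population_size = 0.1, 0.05, 50

for generation in range(200):
    noise = rng.normal(size=(population_size, weights.size))
    scores = np.array([fitness(weights + sigma * n, X, y) for n in noise])
    # Combine the perturbations, weighted by normalised fitness: a population-based
    # estimate of the gradient of expected fitness with respect to the weights.
    scores = (scores - scores.mean()) / (scores.std() + 1e-8)
    gradient_estimate = noise.T @ scores / (population_size * sigma)
    weights += alpha * gradient_estimate   # ascend the estimated gradient
```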

Figure 2 shows a visual graph representation of the research trends in neuroevolution of deep neural networks. It is the result of using keywords found in the titles and abstracts of around 100 articles published in the last five years. A similarity metric was computed between these keywords and each paper; these similarities induce corresponding graph structures on the paper and key-term 'spaces'. Each paper/term corresponds to a node, and edges arise naturally whenever there is a similarity between nodes.
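As an illustration of this kind of keyword-similarity graph (the actual figure was built following the methodology of [2]; the TF-IDF representation, cosine similarity and threshold below are simplifying assumptions), one could write:

```python
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

abstracts = {
    "paper_1": "evolving deep convolutional neural networks with genetic algorithms",
    "paper_2": "neural architecture search using evolution strategies",
    "paper_3": "reinforcement learning for neural architecture search",
}

# Represent each paper by TF-IDF weights over the terms of its title/abstract.
tfidf = TfidfVectorizer(stop_words="english").fit_transform(abstracts.values())
similarity = cosine_similarity(tfidf)

# Build a graph: one node per paper, an edge whenever similarity exceeds a threshold
# (analogous to filtering out links below a minimum strength).
graph = nx.Graph()
papers = list(abstracts)
graph.add_nodes_from(papers)
for i in range(len(papers)):
    for j in range(i + 1, len(papers)):
        if similarity[i, j] > 0.1:
            graph.add_edge(papers[i], papers[j], weight=float(similarity[i, j]))
```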

The use of evolution-based methods in designing deep neural networks is already a reality. Different EA methods with different representations have been used, ranging from landmark methods including Genetic Algorithms, Genetic Programming and Evolution Strategies up to using hybrids. In a short period of time, we have observed both ingenious representations and interesting approaches achieving extraordinary results against human-designed networks as well as state-of-the-art approaches. We have also seen that most neuroevolution studies have focused their attention on designing deep Convolutional Neural Networks. The full version of this work summarises in tables the approaches used, parameter values and the deep neural networks employed by the research community.

Figure 2. Visual graph representation of the research conducted in neuroevolution of DNNs. The articles for producing this visualisation were obtained from http://ieeexplore.ieee.org/Xplore/home.jsp. Last accessed date: 9/01/2021. Links with a strength lower than 10 are filtered out. More details about constructing this type of graph visualisation can be found in [2].


Future Work on Neuroevolution in Deep Neural Networks

Despite the large number of works in the area of neuroevolution in DNNs, there are a number of interesting areas that have been underexplored by the research community, including (a) the study of surrogate-assisted EAs, (b) combining stochastic gradient descent and EAs, (c) mutations and the neutral theory, (d) multi-objective optimisation, (e) fitness landscape analysis, (f) standardised scientific neuroevolution studies, and (g) diversifying the use of benchmark problems. In the full version of this article, we articulate why we believe each of these areas can contribute positively to the broader area of neuroevolution in deep neural networks.

Conclusions

The full version of this article [1] provides a comprehensive survey of neuroevolution approaches in Deep Neural Networks (DNNs) and discusses the most important aspects of the application of Evolutionary Algorithms (EAs) in deep learning. The target audience of this paper is a broad spectrum of researchers and practitioners from both the Evolutionary Computation and Deep Learning (DL) communities. The paper highlights where EAs are being used in DL and how DL is benefiting from this. Readers with a background in EAs will find this survey very useful in determining the state-of-the-art in neural architecture search methods in general. Additionally, readers from the DL community will be encouraged to consider the application of EAs approaches in their DNN work. Configuration of DNNs is not a trivial problem. Poorly or incorrectly configured networks can lead to the failure or under-utilisation of DNNs for many problems and applications. Finding well-performing architectures is often a very tedious and error-prone process. EAs have been shown to be a competitive and successful means of automatically creating and configuring such networks. Consequently, neuroevolution has great potential to provide a strong and robust toolkit for the DL community in the future. The article outlines and discusses important issues and challenges in this area.

Acknowledgements. Drafts of this journal article have undergone massive open online peer reviews through public mailing lists including genetic programming@yahoogroups.com, uai@engr.orst.edu, connectionists@mailman.srv.cs.cmu.edu. Thanks to numerous NN / DL / Neuroevolution experts for their valuable comments.

References

[1] Edgar Galván and Peter Mooney. Neuroevolution in Deep Neural Networks: Current Trends and Future Challenges. IEEE Transactions on Artificial Intelligence, 2021 (to appear).

[2] Riccardo Poli. Analysis of the publications on the applications of particle swarm optimisation. Journal of Artificial Evolution and Applications, 2008, 4:1–4:10.

Nevergrad: Black-Box Optimization Platform

by Pauline Bennet, Carola Doerr, Antoine Moreau, Jeremy Rapin, Fabien Teytaud, Olivier Teytaud

Nevergrad is an open source platform for black-box optimization.

Join the user group!

And if you like Nevergrad, please support us by adding a star on GitHub (click on “star” here: https://github.com/facebookresearch/nevergrad).

What is Black-Box Optimization?

Black-box optimization deals with the solution of problems for which we can assess the quality of solution candidates, but for which we do not have (or do not want to use) gradients or other useful a priori information. Structural engineering or the design of neural networks are classical examples of black-box optimization, where the evaluation of a potential design returns the quality of this particular solution candidate, but typically does not reveal much information about other design alternatives. Information about the problem must hence be collected through the evaluation of several solution candidates. Black-box optimization algorithms are often sequential, iterating between the evaluation of one or more solution candidates and the adjustment of the strategy by which the next candidates are generated. Black-box optimization problems can be subject to constraints or to noise. It is not uncommon to have two or more objective functions, for which one aims to find good trade-offs. Decision spaces can be purely numerical, combinatorial, or a mixture of both.
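As a concrete illustration of this evaluate-and-adapt loop, here is a minimal (1+1)-style random search in Python; the objective function and the step-size rule are placeholder assumptions, not part of Nevergrad.

```python
import random

def objective(x):
    """The black box: we only observe the value, never a gradient."""
    return sum((xi - 0.5) ** 2 for xi in x)

dimension, budget, step = 5, 200, 0.3
current = [random.uniform(0, 1) for _ in range(dimension)]
current_value = objective(current)

for _ in range(budget):
    candidate = [xi + random.gauss(0, step) for xi in current]
    value = objective(candidate)           # one (possibly expensive) black-box evaluation
    if value < current_value:              # keep the better of parent and offspring
        current, current_value = candidate, value
        step *= 1.1                        # success: increase the step size
    else:
        step *= 0.97                       # failure: gently shrink it
```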

Many different approaches to solve black-box optimization problems exist, and one of the biggest challenges in applying these is in selecting the most suitable technique for a given problem.

Nevergrad aims to support its users in this selection task by providing very broad sets of benchmark problems on which algorithms can be compared, by making available state-of-the-art black-box optimization algorithms and powerful algorithm selection wizards that help users pick an algorithm from our portfolio, and by maintaining a frequently updated dashboard of experimental results to support researchers in the analysis and design of efficient black-box optimization techniques.

The Covariance Matrix Adaptation Evolution Strategy (CMA-ES), one of the methods included in Nevergrad. Image: Wikipedia, Public Domain.


The Science of Black-Box Optimization

Black-Box Optimization in the Presence of Noise

In a ground-breaking paper, population control was proposed as a simple solution for fast noisy optimization: it combines parallelizability, the ability to converge with simple regret 1/n, and small constants, making the algorithm reliable in low dimension. In Nevergrad, the population control algorithm TBPSA fixes a bias in previous population control methods; it is quite robust for noisy optimization of continuous variables and has been successfully used in an application to the Stockfish chess engine.

Fully Parallel Black-Box Optimization

In particular, for fully parallel hyperparameter search, various fundamental studies have analyzed one-shot black-box optimization methods, in which all evaluation points are chosen in advance and the best observed point is kept.
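A minimal sketch of the one-shot setting (plain uniform sampling here; quasi-random or scrambled low-discrepancy point sets are common refinements, and the objective is a toy placeholder):

```python
import numpy as np

def objective(x):
    """Toy placeholder for, e.g., the validation loss of a hyperparameter setting."""
    return float(np.sum((x - 0.3) ** 2))

rng = np.random.default_rng(0)
budget, dimension = 64, 4

# One-shot: all evaluation points are drawn before any evaluation takes place,
# so the budget can be spent fully in parallel; the best observed point is kept.
points = rng.uniform(0, 1, size=(budget, dimension))
values = [objective(p) for p in points]
best_point = points[int(np.argmin(values))]
```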

Structured Optimization

In particular, for real-world problems, optimization can take into account high-level information about the structure of the problem, such as groups of strongly inter-related variables, which are typically exploited through cooperative co-evolution. For example, many variants of differential evolution win competitions based on the LSGO benchmarks.
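A heavily simplified, single-individual caricature of this idea (real cooperative co-evolution evolves one subpopulation per variable group; the objective and grouping below are assumptions for illustration):

```python
import numpy as np

def objective(x):
    """Toy objective in which variables interact mainly within two blocks."""
    return float(np.sum((x[:5] - 0.2) ** 2) + np.sum((x[5:] + 0.7) ** 2))

rng = np.random.default_rng(0)
x = rng.normal(size=10)
groups = [np.arange(0, 5), np.arange(5, 10)]   # assumed grouping of inter-related variables

for _ in range(50):
    for group in groups:                        # improve one group while freezing the others
        for _ in range(20):
            candidate = x.copy()
            candidate[group] += 0.1 * rng.normal(size=group.size)
            if objective(candidate) < objective(x):
                x = candidate
```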

Applications to physical structures at the nanometric scale have been published in Scientific Reports.

Optimization Wizards

Automatic algorithm selection is central in combinatorial optimization and planning: it consists of automatically selecting the probably best algorithm from a wide range of possibilities. Under the name "wizard", such combined methods routinely win competitions in SAT, planning and combinatorial optimization. We apply the idea to all forms of black-box optimization. Some optimization wizards essentially use the budget, the dimension and the type of variables for choosing an algorithm; improved forms also carefully use chaining (running algorithms one after the other, in particular to combine fast local search and robust global search, as in memetic algorithms) and meta-models (fast learnt approximations of the objective function). Dynamically choosing an algorithm based on intermediate results is also part of the picture, with "bet and run" as a classical solution.
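A toy illustration of metadata-based selection (the rules and thresholds below are made up and do not reflect NGOpt's actual decision logic):

```python
def choose_algorithm(budget, dimension, has_discrete_variables, num_workers=1):
    """Toy wizard: pick an optimizer family from high-level problem metadata.
    The rules and thresholds are illustrative only."""
    if has_discrete_variables:
        return "discrete (1+1)-style EA"
    if num_workers >= budget:
        return "one-shot quasi-random sampling"          # fully parallel setting
    if budget < 30 * dimension:
        return "local model-based search (e.g. Cobyla)"  # very small budgets
    if dimension > 100:
        return "chaining: DE for exploration, then CMA"  # global search, then refinement
    return "CMA assisted by a meta-model"

print(choose_algorithm(budget=1000, dimension=20, has_discrete_variables=False))
```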

Discrete Optimization: Choosing the Mutation Rates

After the initial enthusiasm for simple rules for choosing optimal fixed mutation rates, such as those for RLS and the (1+1) evolutionary algorithm, new variants used random mutation rates and then adaptive mutation rates; there is now a whole body of work, including self-adjusting mutation rates, mutation rates embedded in the individuals, and coordinate-wise mutation rates.
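One of the many possible self-adjustment schemes, sketched here on the OneMax toy problem (a success-based rule in the spirit of the 1/5-th rule; the constants are illustrative):

```python
import random

def onemax(bits):
    return sum(bits)

n = 100
rng = random.Random(42)
parent = [rng.randint(0, 1) for _ in range(n)]
rate = 1.0 / n                                   # start from the classical 1/n mutation rate

for _ in range(5000):
    # Flip each bit independently with the current mutation rate.
    child = [1 - b if rng.random() < rate else b for b in parent]
    if onemax(child) >= onemax(parent):
        parent = child
        rate = min(0.5, rate * 1.5)              # success: explore more aggressively
    else:
        rate = max(1.0 / n, rate / 1.5 ** 0.25)  # failure: gently decrease the rate
```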

Multiobjective Optimization

Multiobjective optimization consists of looking for trade-offs between several objective functions. Variants of differential evolution (PDE and DEMO) are well known for this, though nowadays any single-objective optimization method can be adapted to the multiobjective setting using hypervolume indicators or NSGA-II-style selection. A key challenge is to build principled comparisons between those different methods. The dashboard provides extensive results on multiobjective optimization using different evaluation methods, including classical ones like hypervolume and epsilon indicators, but also user-centric comparisons using quality assessment tools trained on real human data.
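For reference, here is a small sketch of the two-objective hypervolume indicator (minimization, with a user-chosen reference point); higher values indicate a better front:

```python
def hypervolume_2d(front, reference):
    """Area dominated by a 2-objective minimization front w.r.t. a reference point.
    Sweeps the points in increasing f1; dominated points contribute no area."""
    area, prev_f2 = 0.0, reference[1]
    for f1, f2 in sorted(front):
        if f2 < prev_f2:
            area += (reference[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return area

# Two trade-off solutions, reference point (1, 1): dominated area is 0.36.
print(hypervolume_2d([(0.2, 0.8), (0.6, 0.3)], (1.0, 1.0)))
```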

Optimization with a Neural Quality Assessment in the Loop

Evolutionary optimization is convenient for adding a user in the loop, as it does not need gradients (humans typically answer "I prefer this" rather than "the gradient of my favorability is 0.4 w.r.t. the 7th axis of the latent variable"). One can use evolutionary optimization for combining preferences (provided by humans or by hard-to-differentiate deep image quality assessment (IQA) models), as in Evolutionary GANs (see a demo here), evolutionary super-resolution, or interactive GANs.
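A bare-bones sketch of evolving latent codes against a quality signal (the generator and quality score below are toy placeholders, not real models):

```python
import numpy as np

def generator(latent):
    """Placeholder for a pre-trained GAN generator (a deterministic toy mapping here)."""
    return np.outer(latent[:8], latent[8:16])

def quality_score(image):
    """Placeholder for a neural quality-assessment model or a human preference."""
    return -float(np.var(image))               # dummy criterion for illustration

rng = np.random.default_rng(1)
latent = rng.normal(size=64)
best_score = quality_score(generator(latent))

for _ in range(200):
    # Mutate the latent code only slightly, improving quality while preserving diversity.
    candidate = latent + 0.05 * rng.normal(size=latent.size)
    score = quality_score(generator(candidate))
    if score > best_score:
        latent, best_score = candidate, score
```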

What can we do with Nevergrad?

Nevergrad is a benchmarking platform that is designed to help researchers gain insight into the strengths and weaknesses of different black-box optimization techniques. For practitioners, Nevergrad provides powerful state-of-the-art optimization techniques, conveniently accessible through a user-friendly Python environment.
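A minimal usage sketch, following the public Python API shown in the Nevergrad documentation (the objective function is just a toy example):

```python
import nevergrad as ng

def square(x):
    return sum((xi - 0.5) ** 2 for xi in x)

# Optimize a 2-dimensional array of parameters with a budget of 100 evaluations.
optimizer = ng.optimizers.NGOpt(parametrization=ng.p.Array(shape=(2,)), budget=100)
recommendation = optimizer.minimize(square)
print(recommendation.value)   # best parameters found
```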

The key distinguishing features of Nevergrad are the breadth of algorithms and problem suites it covers and its publicly available dashboard, which provides convenient access to and visualization of our rich data sets.


Algorithm Portfolio: Nevergrad provides implementations of Covariance Matrix Adaptation, Particle Swarm Optimization, evolutionary algorithms, Differential Evolution, Bayesian optimization, HyperOpt, Powell, Cobyla, LHS, quasi-random point constructions, NSGA-II, and more.

Building on our rich benchmark data, our algorithm selector NGOpt combines these algorithms by automatically selecting a solver based on high-level problem information, by sequentially executing two or more algorithms from the list, or by leveraging parallel resources to actively select the best-performing approach.
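For parallel or distributed evaluation, Nevergrad's ask-and-tell interface lets the user drive the loop; the sketch below uses a thread pool and a toy objective for illustration:

```python
from concurrent.futures import ThreadPoolExecutor
import nevergrad as ng

def expensive_objective(x):
    return sum((xi - 0.5) ** 2 for xi in x)

num_workers = 4
optimizer = ng.optimizers.NGOpt(
    parametrization=ng.p.Array(shape=(5,)), budget=200, num_workers=num_workers
)

with ThreadPoolExecutor(max_workers=num_workers) as executor:
    for _ in range(optimizer.budget // num_workers):
        # Ask for a batch of candidates, evaluate them in parallel, report the losses back.
        candidates = [optimizer.ask() for _ in range(num_workers)]
        losses = list(executor.map(lambda c: expensive_objective(*c.args), candidates))
        for candidate, loss in zip(candidates, losses):
            optimizer.tell(candidate, loss)

print(optimizer.provide_recommendation().value)
```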


Benchmark Problems: Nevergrad provides interfaces to several collections of benchmark problems, either home-built (YABBOB, LSGO) or external (MuJoCo, MLDA, PBO). Together, these problem suites range from classical optimization problems through Machine Learning tasks to real-world optimization challenges.

We include many algorithms (CMA, PSO, DE, Bayesian Optimization, HyperOpt, LHS, Quasi-Random, NSGA-II, ...) and benchmarks (YABBOB, LSGO, MLDA, MuJoCo, and many others, including real-world problems, e.g., in power systems).

For example, we can do hyperparameter optimization, optimization of policies for model-based RL, operations research, multiobjective optimization, and user-controlled or quality-controlled GANs (images: see the beautiful cats and horses below).

Yes, we can generate cats and horses. (Collab. Univ. Littoral Côte d'Opale & Univ. Konstanz.) Compared to traditional GANs, the latent variables are slightly mutated (not too much, to preserve diversity) in order to improve the quality of the images. For difficult datasets (such as horses), this removes weird artefacts such as balls of horse skin floating in the air or horses with multiple heads. A side remark about GANs is that the 'abstract', 'high-level' quality of images (no weird additional limb) is actually correlated with image quality as estimated from low-level features.

What’s New?

Recent features include improvements to the multiobjective setting and to constraint management. MuJoCo (a robotics benchmark for which results are notoriously influenced by implementation details) was added so that you can easily run MuJoCo without suffering through the interfacing. HyperOpt has been added, as well as a competence map that automatically selects an algorithm for your problem, provided you have used the instrumentation to describe your problem.

We also now run a dashboard, which maintains a list of problems and the performance of many algorithms on them. We already had MLDA as a classical benchmark and YABBOB as our own variant of BBOB. We now also include LSGO -- all with the same interface.

Compared to most existing frameworks (BBOB/COCO, LSGO), we have: real-world problems, realistic ML, rigorous implementation of noisy optimization, larger scale, and algorithms (not only benchmarks). Compared to optimization platforms, we have a wide range of algorithms with the same interface. To the best of our knowledge, Nevergrad is the only platform which periodically reruns all benchmarks.

Who uses Nevergrad & Black-Box Optimization ?

Electricity, Photonics, and Other Real-World Problems

In Madagascar, 75% of the population has no access to electricity, and even the rest of the population does not have continuous access. We collaborate with Univ. Antananarivo on modelling the key "what if" questions regarding the electrification of Madagascar.

Other models for electricity are under development. We include problems close to waveguides.

Antireflective coating. Right: silicon. Left: silicon + antireflective coating.

An AI-designed antireflective coating typically reduces the fraction of incident light that is lost.

(collaboration Univ. Clermont-Ferrand)



Visibility in conferences

Nevergrad is mainly known at optimization venues (GECCO, PPSN, Dagstuhl seminars on optimization), but it is becoming known at ML conferences: 6 PDFs on OpenReview for ICLR mention Nevergrad, 1 in the ICML proceedings, and 71 on arXiv. Several workshops and competitions have been based on Nevergrad.

Frameworks using Nevergrad

Hydra and Ray use Nevergrad for optimizing hyperparameters. Nevergrad also interfaces easily with submitit (which is a spin-off of Nevergrad). IOHprofiler is interfaced with Nevergrad, so that problems can be accessed both ways. IOHprofiler's data-analysis and visualization tool IOHanalyzer can easily read Nevergrad's performance files.


Stockfish Chess Engine

Nevergrad is used for tuning Stockfish, a very strong chess program that still wins top competitions in spite of the rise of zero-learning-style programs. In particular, Nevergrad has been used for optimizing the weights of Stockfish's neural value function.

Stockfish (a very powerful chess program) used Nevergrad and still wins top competitions in spite of the zero-learning era (image: Peter Österlund, Tord Romstad, Marco Costalba, Joona Kiiski, GPLv3 <http://www.gnu.org/licenses/gpl-3.0.html>, via Wikimedia Commons).




Nevergrad has also been used for other games (https://github.com/fsmosca/Lakas), for locating Wifi sources (https://github.com/ericjster/wifilocator), for various projects with machine learning (in particular GAN, or research around MuZero https://github.com/andrei-ars/muzero-test3), for research in black-box optimization (https://github.com/lazyoracle/optim-benchmark), for improving the way deep nets are mapped onto accelerators (https://github.com/maestro-project/gamma), in digital medicine (https://github.com/tl32rodan/Digital-Medicine-Smoke-Status-Detection or https://github.com/hello0630/sepsis_framework), and for AI planning for financial applications (https://github.com/gordoni/aiplanner) .

Open Optimization Competition

Together with the IOHprofiler team, we are organizing the Open Optimization Competition 2021. The first edition was organized in 2020 (OOC 2020) and rewarded contributions to the benchmarking experience using Nevergrad or IOHprofiler. The 2021 edition also hosts a classical, performance-oriented track, in which participants are invited to submit their algorithms to compete with state-of-the-art black-box optimization techniques. Algorithms from accepted submissions are automatically run on our platform, and results are made public on our regularly updated dashboard.

Selected publications using Nevergrad

Future work

Further improvement of our algorithm wizard is an ongoing task, just as we always strive to include more benchmark problems on which the algorithms can be tested. Priorities for future extensions are improved constraint management and the inclusion of more real-world applications.

Nevergrad Contributors (random order)

Jeremy Rapin, Antoine Moreau, Fabien Teytaud, Carola Doerr, Baptiste Roziere, Laurent Meunier, Herilalaina Rakotoarison, Olivier Teytaud, Pauline Bennet, Diederick Vermetten, Julien Dehos, Pak Kan Wong, Rahamefy Solofohanitra Andriamalala, Toky Axel Andriamizakason, Andry Rasoanaivo, Vlad Hosu and so many others.

Genetically Generated ASCII Trees

by Alec Pugh

This project, inspired by the article "On genetic algorithms and Lindenmayer systems" by Gabriela Ochoa, implements an ASCII tree-generation program using L-systems.

The algorithm starts with a list of N randomly generated L-systems of a specified length and checks that they are syntactically correct. It then iterates through the list, selecting the strings that generate trees with the most symmetry and the greatest height. The best candidate from each generation is drawn in the console using the digital differential analyzer (DDA) line-drawing algorithm, and additional individuals are generated from two parents: the most symmetric and the tallest from the previous generation. Crossover and mutation are done as follows: the child is initially equal to the symmetric parent, and each of its characters is then replaced, with 50% probability, by the corresponding character of the tallest parent. Once the child string is generated, each character is again visited and mutated, with a given probability, to another random character from the initial genotypic set. This is repeated N times, and the whole process restarts indefinitely; a sketch of the crossover and mutation step is given below.
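The original project is written in C++; purely for illustration, here is a Python sketch of the crossover and mutation step just described (the alphabet and mutation rate are assumptions, not taken from the repository):

```python
import random

ALPHABET = "F+-[]X"   # assumed genotypic character set for the L-system strings

def breed(symmetric_parent, tallest_parent, mutation_rate=0.05, rng=random):
    """Uniform crossover followed by per-character mutation, as described above."""
    child = list(symmetric_parent)
    for i in range(len(child)):
        # 50% chance to inherit this character from the tallest parent instead.
        if i < len(tallest_parent) and rng.random() < 0.5:
            child[i] = tallest_parent[i]
        # Independent chance to mutate into a random character from the alphabet.
        if rng.random() < mutation_rate:
            child[i] = rng.choice(ALPHABET)
    return "".join(child)

print(breed("F[+X]F[-X]X", "FF[+X][-X]X"))
```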

The code is available from Github at: https://github.com/alecstem/gen-tree

About the Author. My name is Alec Pugh, I am 19 years old, and I am a transfer student from the USA, currently in community college. I am a self-taught programmer who primarily uses C++, and I am intrigued by how code can be used to mimic the biological processes of life. Naturally, evolutionary computation is a field that has drawn my attention extensively, and I hope to transfer to a university that allows me to perform research in this field very soon!

Forthcoming Events

EvoStar

EvoStar 2021 is planned as an online event from 7 to 9 April 2021.

EvoStar comprises four co-located conferences:

  • EuroGP 24th European Conference on Genetic Programming

  • EvoApplications 24th European Conference on the Applications of Evolutionary and Bio-inspired Computation

  • EvoCOP 21st European Conference on Evolutionary Computation in Combinatorial Optimisation

  • EvoMUSART 10th International Conference (and 18th European event) on Evolutionary and Biologically Inspired Music, Sound, Art and Design

Genetic Improvement

GI 2021, the 10th International Workshop on the Repair and Optimisation of Software using Computational Search, will be co-located with the 43rd International Conference on Software Engineering (ICSE), 23-29 May. GI@ICSE 2021 will be held as a completely virtual event on Sunday, 30 May.

GI is the premier workshop in the field and provides an opportunity for researchers interested in automated program repair and software optimisation to disseminate their work, exchange ideas and discover new research directions.

Genetic and Evolutionary Computation Conference

GECCO 2021 will be held as an online event on July 10-14, 2021. GECCO has presented the latest high-quality results in genetic and evolutionary computation since 1999.

Topics include: genetic algorithms, genetic programming, ant colony optimization and swarm intelligence, complex systems (artificial life, robotics, evolvable hardware, generative and developmental systems, artificial immune systems), digital entertainment technologies and arts, evolutionary combinatorial optimization and metaheuristics, evolutionary machine learning, evolutionary multiobjective optimization, evolutionary numerical optimization, real-world applications, search-based software engineering, theory and more.

Calls for Papers

The Human-Competitive Awards

Humies 2021 will take place in conjunction with the Genetic and Evolutionary Computation Conference, GECCO 2021, July 10-14, 2021.

Entries are hereby solicited for awards totalling $10,000 for human-competitive results that have been produced by any form of genetic and evolutionary computation (including, but not limited to genetic algorithms, genetic programming, evolution strategies, evolutionary programming, learning classifier systems, grammatical evolution, gene expression programming, differential evolution, etc.) and that have been published in the open literature between the deadline for the previous competition and the deadline for the current competition.

Important Dates

  • Friday, May 28, 2021 — Deadline for entries, consisting of one TEXT file, PDF files for one or more papers, and possible "in press" documentation (explained below). Please send entries to goodman at msu dot edu

  • Friday June 11, 2021 — Finalists will be notified by e-mail

  • Friday, June 25, 2021 — Finalists not presenting in person must submit a 10-minute video presentation (or the link and instructions for downloading the presentation, NOT a YouTube link) to goodman at msu dot edu.

  • July 10-14, 2021 (Saturday - Wednesday) — GECCO conference (the schedule for the Humies session is not yet final, so please check the GECCO program as it is updated)

  • Monday, July 12, 2021, 13:40-15:20 Lille time (Central European Summer Time = GMT+2; 7:40am-9:20am EDT) — Presentation session, where 10-minute videos will be available for viewing.

  • Wednesday, July 14, 2021 — Announcement of awards at the virtual plenary session of the GECCO conference

Judging Committee: Erik Goodman, Una-May O'Reilly, Wolfgang Banzhaf, Darrell Whitley, Lee Spector, Stephanie Forrest

Publicity Chair: William Langdon

Foundations of Genetic Algorithms

FOGA 2021 will be held from September 6th to September 8th as a virtual event; it was initially planned to take place at the Vorarlberg University of Applied Sciences in Dornbirn, Austria.

The conference series aims at advancing the understanding of the working principles behind Evolutionary Algorithms and related Randomized Search heuristics. FOGA is a premier event to discuss advances in the theoretical foundations of these algorithms, corresponding frameworks suitable to analyze them, and different aspects of comparing algorithm performance.

Topics of interest include, but are not limited to: Run time analysis; Mathematical tools suitable for the analysis of search heuristics; Fitness landscapes and problem difficulty; Configuration and selection of algorithms, heuristics, operators, and parameters; Stochastic and dynamic environments, noisy evaluations; Constrained optimization; Problem representation; Complexity theory for search heuristics; Multi-objective optimization; Benchmarking; Connections between black-box optimization and machine learning.

Important dates

  • Submission deadline: April 30, 2021

  • Author rebuttal phase: June 1 - 7, 2021

  • Notification of acceptance: June 20, 2021

  • Camera-ready submission: July 14, 2021

  • Early-registration deadline: July 14, 2021

  • Conference dates: September 6 - 8, 2021

Special Issue on: Benchmarking Sampling-Based Optimization Heuristics: Methodology and Software

IEEE Transactions on Evolutionary Computation

The goal of this special issue is to provide an overview of state-of-the-art software packages, methods, and data sets that facilitate sound benchmarking of evolutionary algorithms and other optimization techniques. By providing an overview of today's benchmarking landscape, new synergies will be revealed, helping the community converge towards higher compatibility between tools, better reproducibility and replicability of our research, a better use of resources, and, ultimately, higher standards in our benchmarking practices.

Topics

We welcome submissions on the following topics:

Generation, selection, and analysis of problems and problem instances; benchmark-driven algorithm design, selection, and analysis; experimental design; benchmark data collections and their organization; performance analysis and visualization, including statistical evaluation; holistic performance analysis in the context of real-world optimization problems, e.g., algorithm robustness, problem class coverage, implementation complexity; other aspects of benchmarking optimization algorithms.

Important dates

  • Submission deadline: May 31, 2021

  • Tentative publication date: Spring 2022

Manuscripts should be prepared according to the Information for Authors section of the journal, and submissions should be made through the journal submission website, by selecting the Manuscript Type "BENCH Special Issue Papers" and clearly adding "Benchmarking Special Issue Paper" to the comments to the Editor-in-Chief.

About this Newsletter

SIGEVOlution is the newsletter of SIGEVO, the ACM Special Interest Group on Genetic and Evolutionary Computation. To join SIGEVO, please follow this link: [WWW]

We solicit contributions in the following categories:

Art: Are you working with Evolutionary Art? We are always looking for nice evolutionary art for the cover page of the newsletter.

Short surveys and position papers: We invite short surveys and position papers in EC and EC-related areas. We are also interested in applications of EC technologies that have solved interesting and important problems.

Software. Are you a developer of a piece of EC software, and wish to tell us about it? Then send us a short summary or a short tutorial of your software.

Lost Gems. Did you read an interesting EC paper that, in your opinion, did not receive enough attention or should be rediscovered? Then send us a page about it.

Dissertations. We invite short summaries, around a page, of theses in EC-related areas that have been recently discussed and are available online.

Meetings Reports. Did you participate in an interesting EC-related event? Would you be willing to tell us about it? Then send us a summary of the event.

Forthcoming Events. If you have an EC event you wish to announce, this is the place.

News and Announcements. Is there anything you wish to announce, such as an employment vacancy? This is the place.

Letters. If you want to ask or to say something to SIGEVO members, please write us a letter!

Suggestions. If you have a suggestion about how to improve the newsletter, please send us an email.

Contributions will be reviewed by members of the newsletter board. We accept contributions in plain text, MS Word, or LaTeX, but do not forget to send your sources and images.

Enquiries about submissions and contributions can be emailed to gabriela.ochoa@stir.ac.uk

All the issues of SIGEVOlution are also available online at: www.sigevolution.org

Notice to contributing authors to SIG newsletters

By submitting your article for distribution in the Special Interest Group publication, you hereby grant to ACM the following non-exclusive, perpetual, worldwide rights:

  • to publish in print on condition of acceptance by the editor

  • to digitize and post your article in the electronic version of this publication

  • to include the article in the ACM Digital Library

  • to allow users to copy and distribute the article for noncommercial, educational or research purposes

However, as a contributing author, you retain copyright to your article and ACM will make every effort to refer requests for commercial use directly to you.

Editor: Gabriela Ochoa

Sub-editor: James McDermott

Associate Editors: Emma Hart, Una-May O'Reilly, Nadarajen Veerapen, and Darrell Whitley