newsletter of the ACM Special Interest Group on Genetic and Evolutionary Computation
Welcome to the Spring 2021 issue of the SIGEvolution newsletter! Our first contribution, by Edgar Galván, surveys a very active area of neuroevolution: evolutionary approaches applied to the architectural configuration and training of deep artificial neural networks. We continue with a thorough overview of Nevergrad, an open-source platform for black-box optimization developed by a group of researchers based in France. We then report on how evolutionary computation has inspired Alec Pugh, a young college student from the United States, to develop his coding skills. The issue concludes by announcing a number of exciting events and calls for submissions.
Please get in touch if you would like to contribute an article for a future issue or have suggestions for the newsletter.
Gabriela Ochoa, Editor.
About the Cover
The artwork on the cover was created by Dr Zainab Ali Abbood as part of her PhD dissertation supervised by Dr Franck P. Vidal at Bangor University (Wales, UK). She developed Fly4Arts: Evolutionary Digital Art with the Fly Algorithm, an image filter built using GPU computing. It is based on the Parisian Evolution / cooperative co-evolution principle. The algorithm contains all the usual components of an evolutionary algorithm; in addition, a "global" fitness is calculated over the whole population, and a local fitness assesses the contribution of each individual. Each individual represents a paintbrush stroke, and the population is the image canvas. Each stroke has a pattern, size, orientation, colour and position in 3-D Euclidean space; the third dimension is needed so that strokes can partially occlude one another. Fly4Arts was presented at the Biennial International Conference on Artificial Evolution (EA-2017), at the European Conference on the Applications of Evolutionary Computation (EvoApplications 2017), and in "Fly4Arts: Evolutionary Digital Art with the Fly Algorithm," Arts Sci., vol. 17, no. 1, Oct. 2017. The artworks were on display during the Art&Science in Evolutionary Computation exhibition organised by Galerie Louchard in Paris.
Neuroevolution in Deep Neural Networks: A Comprehensive Survey
by Edgar Galván
Maynooth University, Ireland
Abstract. A variety of methods have been applied to the architectural configuration and training of deep artificial neural networks (DNNs). These methods play a crucial role in the success or failure of DNNs on most problems. Evolutionary algorithms are gaining momentum as a computationally feasible method for the automated optimisation of DNNs; neuroevolution is the term that describes these processes. This newsletter article summarises the full version, available at https://arxiv.org/abs/2006.05415.
Deep learning algorithms are inspired by the deep hierarchical structures of human perception and production systems. These algorithms have achieved extraordinary results in areas including computer vision and speech recognition, to mention a few examples. The design of DNN architectures (along with the optimisation of their hyperparameters), as well as their training, plays a crucial part in their success or failure. EA-based architecture-search methods, often referred to as neuroevolution, are yielding impressive results in the automatic configuration of DNN architectures. Over 300 works have been published in the area of neural architecture search, of which nearly a third correspond to neuroevolution in DNNs. Figure 1 shows a breakdown of these publications per year from 2009 to 2020 (left) and the most common venues (right).
Evolving DNNs Architectures Through Evolutionary Algorithms
Motivation. In recent years, there has been a surge of interest in methods for neural architecture search. Broadly, they can be categorised into one of two areas: evolutionary algorithms (EAs) or reinforcement learning. Recently, EAs have been gaining momentum for designing deep neural network architectures. Their popularity stems from the fact that they are gradient-free, population-based methods that offer a parallelised mechanism to explore multiple areas of the search space simultaneously while also offering a mechanism to escape from local optima. Moreover, because these algorithms are inherently suited to parallelisation, more potential solutions can be computed simultaneously within acceptable wall-clock time. Steady increases in computing power, including graphics processing units with thousands of cores, are helping to speed up population-based EAs.
Criticism. Despite the popularity of EAs for designing deep neural network architectures, they have also been criticised for being slow learners and for being computationally expensive to evaluate. For example, with a small population-based EA of 20 individuals (potential solutions) and a training set of 50,000 samples, a single generation alone (of hundreds, thousands or millions of generations) requires one million evaluations through the fitness function.
Training Deep Neural Networks With Evolutionary Algorithms
Motivation. Backpropagation has been one of the most successful and dominant methods used for training ANNs over the past three decades. This simple, effective and elegant method applies Stochastic Gradient Descent (SGD) to the weights of the ANN, with the goal of keeping the overall error as low as possible. However, as remarked by some researchers, the widely held belief up to around 2006 was that backpropagation would suffer from vanishing gradients within DNNs. This turned out to be a false assumption, and it has subsequently been shown that backpropagation and SGD are effective at optimising DNNs even when there are millions of connections. Both backpropagation and SGD benefit from the availability of sufficient training data and computational power. In a problem space with so many dimensions, the success of SGD in DNNs is still surprising: practically speaking, SGD should be highly susceptible to local optima. Interesting works have studied this phenomenon, arguing, for example, that noise helps SGD escape saddle points thanks to the randomness in the estimator. Others hypothesise that the presence of multiple local optima is not a problem because they are very similar to the best solution. EAs, for their part, perform very well in the presence of saddle points.
Criticism. As there are no guarantees of convergence, the solutions computed by EAs are usually regarded as near-optimal. Population-based EAs in effect approximate the gradient, estimating it from the individuals in a population and their corresponding objective values, whereas SGD computes the exact gradient. As a result, some researchers may consider EAs unsuitable for DL tasks. However, it has been demonstrated that the exact gradient computed by SGD is not absolutely critical to the overall success of DNNs: for example, it has been shown that reducing the precision of the gradient calculation has no detrimental effect on learning.
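The gradient-approximation view of population-based methods can be illustrated with a small sketch (not from the survey; the function and parameter names are illustrative): an antithetic evolution-strategy estimator, which averages objective differences over a population of random perturbations, recovers the exact gradient of a simple quadratic.

```python
import random

def f(x):
    # Simple quadratic objective; its exact gradient at x is [2*v for v in x].
    return sum(v * v for v in x)

def es_gradient(f, x, sigma=0.1, n=4000, rng=None):
    """Antithetic ES gradient estimate: average (f(x+s*e) - f(x-s*e)) * e / (2*s)."""
    rng = rng or random.Random(0)
    d = len(x)
    g = [0.0] * d
    for _ in range(n):
        eps = [rng.gauss(0.0, 1.0) for _ in range(d)]
        plus = f([xi + sigma * e for xi, e in zip(x, eps)])
        minus = f([xi - sigma * e for xi, e in zip(x, eps)])
        scale = (plus - minus) / (2.0 * sigma * n)
        for j in range(d):
            g[j] += scale * eps[j]
    return g

estimate = es_gradient(f, [1.0, -2.0])  # exact gradient is [2.0, -4.0]
```

The estimate is only approximate and its accuracy grows with the population size n, which is precisely the trade-off between EA-style estimation and the exact gradients of backpropagation discussed above.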
Figure 2 shows a graph visualisation of the research trends in neuroevolution in deep neural networks. It is the result of using keywords found in the titles and abstracts of around 100 articles published in the last five years. We computed a similarity metric between these keywords and each paper. These similarities induce corresponding graph structures on the paper and key-term 'spaces': each paper/term corresponds to a node, and edges arise naturally whenever there is a similarity between nodes.
The use of evolution-based methods in designing deep neural networks is already a reality. Different EA methods with different representations have been used, ranging from landmark methods including Genetic Algorithms, Genetic Programming and Evolution Strategies through to hybrids. In a short period of time, we have observed both ingenious representations and interesting approaches achieving extraordinary results against human-designed networks as well as state-of-the-art approaches. We have also seen that most neuroevolution studies have focused their attention on designing deep Convolutional Neural Networks. The full version of this work summarises in tables the approaches used, the parameter values and the deep neural networks employed by the research community.
Future Work on Neuroevolution in Deep Neural Networks
Despite the large number of works in the area of neuroevolution in DNNs, there are a number of interesting areas that have been underexplored by the research community, including (a) the study of surrogate-assisted EAs, (b) combining stochastic gradient descent and EAs, (c) mutations and the neutral theory, (d) multi-objective optimisation, (e) fitness landscape analysis, (f) standardised scientific neuroevolution studies, and (g) diversifying the use of benchmark problems. In the full version of this article, we articulate why we believe each of these areas can contribute positively to the broader area of neuroevolution in deep neural networks.
The full version of this article provides a comprehensive survey of neuroevolution approaches in Deep Neural Networks (DNNs) and discusses the most important aspects of the application of Evolutionary Algorithms (EAs) in deep learning. The target audience is a broad spectrum of researchers and practitioners from both the Evolutionary Computation and Deep Learning (DL) communities. The paper highlights where EAs are being used in DL and how DL is benefiting from this. Readers with a background in EAs will find the survey useful for determining the state of the art in neural architecture search methods in general, while readers from the DL community will be encouraged to consider applying EA approaches in their DNN work. Configuring DNNs is not a trivial problem: poorly or incorrectly configured networks can lead to the failure or under-utilisation of DNNs for many problems and applications, and finding well-performing architectures is often a tedious and error-prone process. EAs have been shown to be a competitive and successful means of automatically creating and configuring such networks. Consequently, neuroevolution has great potential to provide a strong and robust toolkit for the DL community in the future. The article outlines and discusses important issues and challenges in this area.
Acknowledgements. Drafts of this article underwent massive open online peer review through public mailing lists, including genetic email@example.com, firstname.lastname@example.org and email@example.com. Thanks to the numerous NN / DL / neuroevolution experts for their valuable comments.
Edgar Galván and Peter Mooney. Neuroevolution in Deep Neural Networks: Current Trends and Future Challenges. IEEE Transactions on Artificial Intelligence, 2021 (to appear).
Riccardo Poli. Analysis of the Publications on the Applications of Particle Swarm Optimisation. Journal of Artificial Evolution and Applications, 2008, 4:1-4:10.
Nevergrad: Black-Box Optimization Platform
by Pauline Bennet, Carola Doerr, Antoine Moreau, Jeremy Rapin, Fabien Teytaud, Olivier Teytaud
What is Black-Box Optimization?
Black-box optimization deals with problems for which we can assess the quality of solution candidates, but for which we do not have (or do not want to use) gradients or other useful a priori information. Structural engineering and the design of neural networks are classical examples of black-box optimization: the evaluation of a potential design returns the quality of that particular solution candidate, but typically does not reveal much information about other design alternatives. Information about the problem must hence be collected through the evaluation of several solution candidates. Black-box optimization algorithms are often sequential, alternating between evaluating one or more solution candidates and adjusting the strategy by which the next candidates are generated. Black-box optimization problems can be subject to constraints or to noise. It is not uncommon to have two or more objective functions, for which one aims to find good trade-offs. Decision spaces can be purely numerical, combinatorial, or a mixture of both.
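This evaluate-then-adapt loop can be sketched in a few lines of Python (a toy illustration, not Nevergrad code; the function names and step-size constants are illustrative): a (1+1) evolution strategy queries the black box one candidate at a time, keeps the better point, and adapts its sampling radius based on the outcome.

```python
import random

def one_plus_one_es(f, x0, sigma=0.5, budget=300, rng=None):
    """Minimal (1+1) evolution strategy for a numerical black-box objective f."""
    rng = rng or random.Random(42)
    x, fx = list(x0), f(x0)
    for _ in range(budget):
        candidate = [xi + rng.gauss(0.0, sigma) for xi in x]
        fc = f(candidate)             # the only information we get from the black box
        if fc <= fx:                  # keep the candidate only if it is at least as good
            x, fx = candidate, fc
            sigma *= 1.1              # grow the step size after a success...
        else:
            sigma *= 0.96             # ...and shrink it after a failure (1/5th-rule flavour)
    return x, fx

best, value = one_plus_one_es(lambda x: sum(v * v for v in x), [3.0, -4.0])
```

Real black-box solvers, such as the CMA-ES pictured below, refine this idea by adapting a full covariance matrix rather than a single step size.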
Many different approaches to solve black-box optimization problems exist, and one of the biggest challenges in applying these is in selecting the most suitable technique for a given problem.
Nevergrad aims to support its users in this selection task by providing very broad sets of benchmark problems on which algorithms can be compared, by making available state-of-the-art black-box optimization algorithms and powerful algorithm-selection wizards that help users choose an algorithm from the portfolio, and by maintaining a frequently updated dashboard of experimental results to support researchers in the analysis and design of efficient black-box optimization techniques.
Covariance Matrix Adaptation Evolution Strategy (CMA-ES), one of the methods included in Nevergrad. Image: Wikipedia, Public Domain.
The Science of Black-Box Optimization
What can we do with Nevergrad?
Who uses Nevergrad & Black-Box Optimization?
Open Optimization Competition
Selected publications using Nevergrad
Nevergrad Contributors (random order)
Genetically Generated ASCII Trees
by Alec Pugh
This project, inspired by Gabriela Ochoa's article "On genetic algorithms and Lindenmayer systems", implements an ASCII tree-generation program using L-systems.
The algorithm starts with a list of N randomly generated L-systems of a specified length and ensures they are syntactically valid. It then iterates through the list, selecting the strings that generate the trees with the most symmetry and the greatest height. The best candidate from each generation is drawn to the console using the digital differential analyzer (DDA) line-drawing algorithm, and additional individuals are generated from two parents: the most symmetric and the tallest from the previous generation. Crossover and mutation work as follows: the child is initially equal to the symmetric parent; then each character has a 50% probability of being replaced by the corresponding character of the tallest parent. Once the child string is generated, each character is visited again and, with a specified mutation probability, replaced by another random character from the initial genotypic set. This is repeated N times, and the process restarts indefinitely.
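The crossover and mutation steps described above can be sketched as follows (a hypothetical reconstruction, not code from the repository; the alphabet and function names are illustrative):

```python
import random

# Illustrative L-system symbols: draw forward, turn left/right, push/pop state.
ALPHABET = "F+-[]"

def breed(symmetric_parent, tallest_parent, mutation_rate=0.05, rng=None):
    """Uniform crossover (50% per character) followed by point mutation."""
    rng = rng or random.Random(7)
    child = []
    for a, b in zip(symmetric_parent, tallest_parent):
        c = b if rng.random() < 0.5 else a   # inherit from the tallest parent half the time
        if rng.random() < mutation_rate:     # mutate to a random symbol from the genotypic set
            c = rng.choice(ALPHABET)
        child.append(c)
    return "".join(child)

child = breed("F[+F]F[-F]F", "FF[+FF][-F]")
```

Seeding the random generator, as above, makes a single breeding step reproducible; the actual program repeats this step N times per generation.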
The code is available from Github at: https://github.com/alecstem/gen-tree
About the Author. My name is Alec Pugh, I am 19 years old, and I am a community college student in the USA preparing to transfer. I am a self-taught programmer who primarily uses C++, and I'm intrigued by how code can be used to mimic the biological processes of life. Naturally, evolutionary computation is a field that has drawn my attention extensively, and I hope to transfer soon to a university where I can carry out research in this field!
EvoStar comprises four co-located conferences:
EuroGP 24th European Conference on Genetic Programming
EvoApplications 24th European Conference on the Applications of Evolutionary and Bio-inspired Computation
EvoCOP 21st European Conference on Evolutionary Computation in Combinatorial Optimisation
EvoMUSART 10th International Conference (and 18th European event) on Evolutionary and Biologically Inspired Music, Sound, Art and Design
GI 2021, the 10th International Workshop on the Repair and Optimisation of Software using Computational Search, will be co-located with the 43rd International Conference on Software Engineering, 23 - 29 May. GI@ICSE 2021 will be held as a completely virtual event on Sunday, 30 May.
GI is the premier workshop in the field and provides an opportunity for researchers interested in automated program repair and software optimisation to disseminate their work, exchange ideas and discover new research directions.
Genetic and Evolutionary Computation Conference
GECCO 2021 will be held as an online event on July 10-14, 2021. Since 1999, GECCO has presented the latest high-quality results in genetic and evolutionary computation.
Topics include: genetic algorithms, genetic programming, ant colony optimization and swarm intelligence, complex systems (artificial life, robotics, evolvable hardware, generative and developmental systems, artificial immune systems), digital entertainment technologies and arts, evolutionary combinatorial optimization and metaheuristics, evolutionary machine learning, evolutionary multiobjective optimization, evolutionary numerical optimization, real-world applications, search-based software engineering, theory and more.
Calls for Papers
Entries are hereby solicited for awards totalling $10,000 for human-competitive results that have been produced by any form of genetic and evolutionary computation (including, but not limited to, genetic algorithms, genetic programming, evolution strategies, evolutionary programming, learning classifier systems, grammatical evolution, gene expression programming, differential evolution, etc.) and that have been published in the open literature between the deadline for the previous competition and the deadline for the current competition.
Friday, May 28, 2021 — Deadline for entries, consisting of one TEXT file, PDF files for one or more papers, and possible "in press" documentation (explained below). Please send entries to goodman at msu dot edu.
Friday June 11, 2021 — Finalists will be notified by e-mail
Friday, June 25, 2021 — Finalists not presenting in person must submit a 10-minute video presentation (or the link and instructions for downloading the presentation, NOT a YouTube link) to goodman at msu dot edu.
July 10-14, 2021 (Saturday - Wednesday) — GECCO conference (the schedule for the Humies session is not yet final, so please check the GECCO program as it is updated)
Monday, July 12, 2021, 13:40-15:20 Lille time (Central European Summer Time = GMT+2; 7:40am-9:20am EDT) — Presentation session, where 10-minute videos will be available for viewing.
Wednesday, July 14, 2021 — Announcement of awards at the virtual plenary session of the GECCO conference
Judging Committee: Erik Goodman, Una-May O'Reilly, Wolfgang Banzhaf, Darrell Whitley, Lee Spector, Stephanie Forrest
Publicity Chair: William Langdon
Foundations of Genetic Algorithms
FOGA 2021 will be held from September 6th to September 8th as a virtual event; it was initially planned to take place at the Vorarlberg University of Applied Sciences in Dornbirn, Austria.
The conference series aims at advancing the understanding of the working principles behind evolutionary algorithms and related randomized search heuristics. FOGA is a premier event for discussing advances in the theoretical foundations of these algorithms, frameworks suitable for analysing them, and different aspects of comparing algorithm performance.
Topics of interest include, but are not limited to: Run time analysis; Mathematical tools suitable for the analysis of search heuristics; Fitness landscapes and problem difficulty; Configuration and selection of algorithms, heuristics, operators, and parameters; Stochastic and dynamic environments, noisy evaluations; Constrained optimization; Problem representation; Complexity theory for search heuristics; Multi-objective optimization; Benchmarking; Connections between black-box optimization and machine learning.
Submission deadline: April 30, 2021
Author rebuttal phase: June 1 - 7, 2021
Notification of acceptance: June 20, 2021
Camera-ready submission: July 14, 2021
Early-registration deadline: July 14, 2021
Conference dates: September 6 - 8, 2021
Special Issue on: Benchmarking Sampling-Based Optimization Heuristics: Methodology and Software
IEEE Transactions on Evolutionary Computation
The goal of this special issue is to provide an overview of state-of-the-art software packages, methods, and data sets that facilitate sound benchmarking of evolutionary algorithms and other optimization techniques. By providing an overview of today's benchmarking landscape, new synergies will be revealed, helping the community to converge towards higher compatibility between tools, better reproducibility and replicability of our research, a better use of resources and, ultimately, higher standards in our benchmarking practices.
We welcome submissions on the following topics:
Submission deadline: May 31, 2021
Tentative publication date: Spring 2022
Manuscripts should be prepared according to the Information for Authors section of the journal, and submissions should be made through the journal submission website by selecting the Manuscript Type "BENCH Special Issue Papers" and clearly adding "Benchmarking Special Issue Paper" to the comments to the Editor-in-Chief.
About this Newsletter
SIGEVOlution is the newsletter of SIGEVO, the ACM Special Interest Group on Genetic and Evolutionary Computation. To join SIGEVO, please follow this link: [WWW]
We solicit contributions in the following categories:
Art: Are you working with Evolutionary Art? We are always looking for nice evolutionary art for the cover page of the newsletter.
Short surveys and position papers: We invite short surveys and position papers in EC and EC-related areas. We are also interested in applications of EC technologies that have solved interesting and important problems.
Software. Are you a developer of a piece of EC software, and wish to tell us about it? Then send us a short summary or a short tutorial of your software.
Lost Gems. Did you read an interesting EC paper that, in your opinion, did not receive enough attention or should be rediscovered? Then send us a page about it.
Dissertations. We invite short summaries, around a page, of theses in EC-related areas that have been recently defended and are available online.
Meetings Reports. Did you participate in an interesting EC-related event? Would you be willing to tell us about it? Then send us a summary of the event.
Forthcoming Events. If you have an EC event you wish to announce, this is the place.
News and Announcements. Is there anything you wish to announce, such as an employment vacancy? This is the place.
Letters. If you want to ask or to say something to SIGEVO members, please write us a letter!
Suggestions. If you have a suggestion about how to improve the newsletter, please send us an email.
Contributions will be reviewed by members of the newsletter board. We accept contributions in plain text, MS Word, or LaTeX, but do not forget to send your sources and images.
Enquiries about submissions and contributions can be emailed to firstname.lastname@example.org
All the issues of SIGEVOlution are also available online at: www.sigevolution.org
Notice to contributing authors to SIG newsletters
By submitting your article for distribution in the Special Interest Group publication, you hereby grant to ACM the following non-exclusive, perpetual, worldwide rights:
to publish in print on condition of acceptance by the editor
to digitize and post your article in the electronic version of this publication
to include the article in the ACM Digital Library
to allow users to copy and distribute the article for noncommercial, educational or research purposes
However, as a contributing author, you retain copyright to your article and ACM will make every effort to refer requests for commercial use directly to you.
Editor: Gabriela Ochoa
Sub-editor: James McDermott
Associate Editors: Emma Hart, Una-May O'Reilly, Nadarajen Veerapen, and Darrell Whitley