# SIGEVOlution

newsletter of the ACM Special Interest Group on Genetic and Evolutionary Computation

## Editorial

Welcome to the Winter 2020 issue of the SIGEvolution newsletter! Our first contribution, by Anne Auger and Nikolaus Hansen, gives a visual and engaging overview of the article that received the SIGEVO 2020 impact award. Established in 2009, this award recognizes up to three papers a year that were published at the GECCO conference 10 years earlier and which are both highly cited and deemed seminal by the SIGEVO Executive Committee. We continue with a summary and statistics of GECCO 2020, by its Editor-in-Chief Jose Antonio Lozano, with visualisations of submissions, the distribution of authors by country and continent, and the now traditional bump chart following the dynamics of GECCO tracks over the years. We end the issue by announcing a number of exciting events and calls for submissions.

As ever, please get in touch if you would like to contribute an article for a future issue or have suggestions for the newsletter.

Gabriela Ochoa, Editor.

## About the Cover

A figure from the contribution by Anne Auger and Nikolaus Hansen. The graphs show the empirical cumulative distribution function of the number of function evaluations of a selection of algorithms for all problems in the respective dimension (2, 5, 20, and 40). The algorithm named “best 2009” is an oracle portfolio algorithm in which the best-performing algorithm is picked for each function and dimension individually. The saturation of the graphs to the right indicates the fraction of solved problems. Horizontal distances of the graphs to the left border indicate the needed evaluations, displayed on the log scale.

## A SIGEVO Impact Award for a Paper Arising from the COCO Platform

### A Summary and Beyond

by Anne Auger and Nikolaus Hansen

Inria and Ecole Polytechnique, IP Paris, France.

The paper *Comparing Results of 31 Algorithms from the Black-Box Optimization Benchmarking BBOB-2009* received the 2020 SIGEVO Impact Award for ten-year impact, which was announced at the ACM GECCO 2020 conference. The work compares the performance of 31 algorithms that had been benchmarked with the Comparing Continuous Optimizers (COCO) platform for the Black-Box Optimization Benchmarking (BBOB) GECCO workshop in 2009. Beyond the aforementioned paper, the award embodies the impact of the COCO platform and of the BBOB workshop series. We present here a bird's-eye view of the platform together with a summary of the main results of the award-winning paper.

**The COCO platform:** For a comparative performance assessment via benchmarking, everybody has to conduct the same experiments: run an algorithm on the same test suite, record and log the same data, and use those data to display the results as tables or graphs representing the performance profile of an optimization algorithm. The development of the COCO platform started from the idea that the time each researcher invests in this could be saved by providing to everybody the code we would be using ourselves to perform benchmarking---from running experiments, to displaying graphs and tables for a scientific paper, and even a LaTeX template of the scientific article in which the graphs and tables of results are already included. Thus the COCO platform was born. Its first version was released a few months before the GECCO 2009 conference.

The design of the platform had to take into account that users would want to run their algorithms in different programming languages. At the beginning, we chose to implement the test functions in several languages. This design was changed a few years later, in an effort initiated by Olaf Mersmann, so that the test functions are written in a single language, C, which has the important advantage of avoiding bugs that could be introduced by reimplementing the test functions in other languages. Figure 1 shows the current structure of the platform.

At the beginning, the platform included a single-objective test suite (the bbob suite) as well as a noisy suite. We now also have a bi-objective, a large-scale, and a mixed-integer suite.

**Figure 1**. Overview of the platform. COCO provides the parts in black, while users plug their solver into the COCO interface in the language of their choice. Once the experiments have been run, the log files are post-processed by running the COCO post-processing module to produce different plots and tables, which can be directly integrated into LaTeX templates.

**Benchmarking methodology:** While benchmarking might look easy, there were surprisingly many non-obvious choices and decisions to make, which led us to rethink and explore benchmarking methodology. Hence, together with the design of the platform, we have been working on developing the scientific methodology necessary for a sound performance assessment. We wish to emphasize a few of the concepts underlying this methodology, while referring to [1] for a more detailed overview.

We emphasize *quantitative* performance assessment, to be able to draw conclusions of the type “this algorithm is 10 times faster than this other one”. This type of statement is much more informative than “this algorithm is better than this other one”, which can mean that the algorithm is 100 times faster or only 0.1% faster. Therefore, we measure the number of function evaluations required to reach certain target precision values, and not the function value reached at some given budget of function evaluations (see Figure 2). Our quantitative performance measure of choice can be aggregated across functions: the meaning of the number of evaluations does not depend on the specific function, and function evaluations can be aggregated across functions when represented on the log scale, where the aggregation is faithful to the ratio between measurements [7].
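Aggregation on the log scale amounts to taking a geometric mean, which preserves ratios between measurements. A minimal sketch (not part of the COCO code base; the helper name is ours) illustrating the difference from the arithmetic mean:

```python
import math

def geometric_mean(evaluations):
    """Aggregate evaluation counts in log scale: average the logs,
    then exponentiate back, i.e., compute the geometric mean."""
    return math.exp(sum(math.log(e) for e in evaluations) / len(evaluations))

# Measurements spanning several orders of magnitude:
runs = [10, 100, 1000]
g = geometric_mean(runs)                    # ~ 100.0
# Doubling every measurement doubles the aggregate ...
g2 = geometric_mean([2 * e for e in runs])  # ~ 200.0
# ... whereas the arithmetic mean is dominated by the largest value.
a = sum(runs) / len(runs)                   # 370.0
```

Speedup ratios thus survive aggregation: if one algorithm is twice as fast as another on every function, its aggregated value is exactly twice as large.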

Each function is pseudo-randomized and comes in several instances. Typically, the optimum is different for each function instance, and certain transformations, like rotations, are also chosen pseudo-randomly for each instance. This way, it is harder to exploit specific properties of a function, and deterministic and stochastic algorithms can be compared in the same way.

All problems are scalable, defined for “any” dimension, while our typical setup is to run experiments for dimensions 2, 3, 5, 10, 20, and 40.

**Figure 2**. Starting from a convergence graph, we use a fixed-target approach, i.e., we record the number of function evaluations needed to reach some targets, the red stars, in contrast to a fixed-budget approach, where the function values at different numbers of function evaluations, the blue stars, are recorded. The data resulting from the red stars are quantitative and rational: we can take ratios, and requiring, say, twice the number of function evaluations is a meaningful notion. The fixed-target approach gives imprecise data for algorithms that do not reach the target within the timeout budget (red triangle), while the fixed-budget approach gives imprecise data for algorithms that can solve the problem faster than within the fixed budget (blue triangle).
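The fixed-target measurement can be sketched in a few lines. The function name and the toy convergence graph below are hypothetical, for illustration only, and not COCO's actual implementation:

```python
def evaluations_to_targets(convergence, targets):
    """Fixed-target measurement: for each target precision, return the
    first evaluation count at which the graph reaches (falls below) it,
    or None if the target is never reached within the budget."""
    return {
        target: next((evals for evals, precision in convergence
                      if precision <= target), None)
        for target in targets
    }

# Toy convergence graph: (evaluations, best precision reached so far),
# with non-increasing precision over time.
graph = [(1, 1e2), (10, 1e0), (100, 1e-2), (1000, 1e-5)]
hits = evaluations_to_targets(graph, [1e1, 1e-1, 1e-8])
# hits == {10.0: 10, 0.1: 100, 1e-08: None}
```

The `None` entries correspond to the red triangle in Figure 2: runs that time out before reaching a target yield imprecise (censored) data rather than a number.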

**Providing data:** We provide data of algorithms that have already been benchmarked with the platform. Figure 3 shows a code example use-case.

Data from any online archive can be processed together with any local, user-generated data. Online data are automatically downloaded once, when needed. Some of the output figures resulting from running the example code in Figure 3 are shown in Figure 4.

The graphs show the empirical cumulative distribution function (CDF) of the number of function evaluations for all problems in the respective dimension. The saturation of the graphs to the right indicates the fraction of solved problems. Horizontal distances of the graphs to the left border indicate the needed evaluations, displayed on the log scale. Hence, more to the left means faster and higher up means more often successful.

A smartly restarted version of the Nelder-Mead algorithm is overall superior in dimension 2, but its performance quickly deteriorates in higher dimensions. A recently benchmarked portfolio algorithm that runs SLSQP from scipy.optimize once, followed by a run of IPOP-CMA-ES, shows remarkable performance. In particular, up to a budget of 200 times the dimension, SLSQP overall outperforms even the “best 2009” oracle portfolio. CMA-ES and iAMaLGaM-IDEA solve by far the most problems in dimension 5 or larger.

We highlight the generality of using empirical CDFs and their surprising connection to single convergence graphs. Figure 5 shows a single convergence graph and two ways to discretize the graph data.

**Figure 5**: Convergence graph of the IPOP-CMA-ES minimizing an instance of the BBOB step-ellipsoidal function F7 in 20-D. The y-axis displays the function precision value. The run is artificially stopped early for demonstration purposes. The upper right suggests a budget-based discretization, the lower left a precision-value-based discretization. Bars in the lower right indicate our fundamental measurements; their lengths indicate the number of evaluations needed to reach the respective precision.

Discretization by function precision value makes the number of evaluations the “dependent variable” and measurement, determined whenever the convergence graph surpasses the respective precision value for the first time. The evaluation measurements are indicated as bold red bars and stars in the lower right. Figure 6 depicts the process of generating an empirical CDF from these measurements.

**Figure 6**. Stars indicate the measured number of evaluations for equidistant precision values. Plotting the star values in increasing order in an empirical CDF, as on the right, recovers the original convergence graph upside down.

Given that the target precision values are equidistant on the given y-axis, the empirical CDF recovers, by construction, the original convergence graph, upside down and up to the discretization precision.
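This construction is easy to reproduce. A minimal sketch (the helper name and data are ours, for illustration): given the evaluation counts measured at equidistant target precisions, the empirical CDF at budget x is simply the fraction of measurements not exceeding x:

```python
def ecdf(measurements, x):
    """Fraction of measurements (evaluation counts) that are <= x.
    Targets not reached within the budget are recorded as None and
    never count as solved, so the ECDF saturates below 1."""
    solved = [m for m in measurements if m is not None and m <= x]
    return len(solved) / len(measurements)

# Evaluations needed to reach five equidistant precision targets on a
# single run; the hardest target was not reached within the budget.
measurements = [8, 20, 55, 140, None]
curve = [ecdf(measurements, x) for x in (10, 100, 1000)]
# curve == [0.2, 0.6, 0.8]
```

Stepping through the equidistant targets in increasing order of the measured evaluations traces out the original convergence graph upside down, which is exactly the recovery property described above.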

The procedure of measuring the number of evaluations and plotting them in an empirical CDF is not restricted to a single convergence graph. Figure 7 shows the very same procedure for two convergence graphs.

**Figure 7**: Convergence graphs on two function instances and the resulting empirical CDF over the depicted discretization. The lower right depicts the average number of evaluations as the blue area above the ECDF. This area is analogous to the bar area in the left of Figure 6 (flipped upside down just as the convergence graph data is). The area of the opaque upper 8.5% narrow band is open-ended to the right and bounded below by the timeout budget.

The first half of the ECDF in Figure 7 closely resembles both convergence graphs upside down, because the graphs are similar in the beginning. The second half of the ECDF is, by construction, the squeezed second convergence graph upside down (up to discretization errors).

Finally, Figure 8 shows empirical CDFs over all 15 instances.

**Figure 8**: Convergence graphs of 15 function instances of bbob F7 in 20-D and, in the right figures, two empirical CDFs that result from the evaluations measured at the indicated precision levels. The upper ECDF entails the measured evaluations of the lower ECDF. The saturation of the lower ECDF at 0.8 corresponds to 12 of 15 runs reaching the single target.

The first row of Figure 8 is analogous to the previous figures. The second row gives the empirical CDF for the single function precision value of 0.1. This ECDF is akin to the so-called data profiles introduced by Moré and Wild in 2009 [4].

The figures show that ECDFs are generalizations of single convergence graphs. They also allow aggregating data from several convergence graphs, with the additional freedom to choose the target precision values at which the measurements are taken. When displayed on a logarithmic scale, aggregation over different functions is a viable option as well, as shown in Figure 4.

### Conclusions

We argue that empirical CDFs are a generic and versatile tool to display and aggregate performance measurement data. ECDFs allow interpretation and display of convergence graphs, aggregations of convergence graphs, and data profiles in a common framework. The data we considered, that is, the number of function evaluations, are quantitative on the highest scale of measurement. In the collected benchmarking data, we find that a restart version of the Nelder-Mead algorithm performs best in dimensions 2 and 3. For dimensions larger than three and budgets below a thousand times the dimension, a more recently benchmarked version of SLSQP overall outperforms all results obtained in 2009. For larger budgets, and hence more difficult problems, CMA-ES restart variants continue to perform best, followed by AMaLGaM-IDEA variants.

The bbob benchmark suite appears to be sufficiently difficult and discriminative. The most decisive factors for algorithm performance and for discrimination between already well-performing algorithms are dimensionality, multi-modality, and non-smoothness. Finally, problem difficulty, and hence the budget available or necessary to solve a problem, is in itself already a surprisingly discriminative problem characteristic for selecting a well-performing algorithm.

**Acknowledgements**: We wish to acknowledge the unstinting support of Marc Schoenauer and Darrell Whitley, as well as Dimo Brockhoff and Tea Tušar, who joined the COCO development team later but are today in the driving seats. Thank you to Raymond Ros, Petr Pošík and Steffen Finck, who were with us when this adventure started. Olaf Mersmann has been of tremendous help for the new design of the platform and we are very grateful for that. Last, we wish to thank all the contributors to the code of the platform, i.e., in addition to some of the people thanked above, Dejan Tušar, Ouassim Ait ElHara, Phillipe R. Sampaio, Asma Atamna, Konstantinos Varelas, Umut Batu, Duc Manh Nguyen and Filip Matzner.

### References

[1] Nikolaus Hansen, Anne Auger, Raymond Ros, Olaf Mersmann, Tea Tušar & Dimo Brockhoff (2020) COCO: a platform for comparing continuous optimizers in a black-box setting, Optimization Methods and Software, DOI: 10.1080/10556788.2020.1808977

[2] The Comparing Continuous Optimizers platform github repository: https://github.com/numbbo/coco

[3] Hansen, Nikolaus. A global surrogate assisted CMA-ES. Proceedings of the Genetic and Evolutionary Computation Conference. 2019.

[4] Moré, Jorge J., and Stefan M. Wild. Benchmarking derivative-free optimization algorithms. *SIAM Journal on Optimization* 20.1 (2009): 172-191.

[5] Kraft, Dieter. A software package for sequential quadratic programming. Research report DFVLR-FB–88-28, Deutsche Forschungs- und Versuchsanstalt für Luft und Raumfahrt (1988).

[6] John A. Nelder and Roger Mead. A simplex method for function minimization. *The Computer Journal*, 7(4):308–313, 1965.

[7] Fleming, Philip J., and John J. Wallace. How not to lie with statistics: the correct way to summarize benchmark results. *Communications of the ACM* 29.3 (1986): 218-221.

## GECCO 2020: Summary and Statistics

By Jose A. Lozano

Basque Center for Applied Mathematics (BCAM), University of the Basque Country UPV/EHU

The 2020 Genetic and Evolutionary Computation Conference (GECCO 2020) was planned to be held on July 8th-12th 2020 in Cancún, Mexico. However, COVID-19 made this edition the first-ever online GECCO. This report provides statistics about submissions and authorship, and some comments about the evolution and growth of GECCO.

GECCO 2020 would not have been possible without the efforts of Carlos Coello (General Chair), Arturo Hernández (Local Chair), Josu Ceberio (Proceedings Chair) and many other organizers, track chairs, and program committee members.

In total, GECCO 2020 received 520 abstract submissions in the first stage; 415 of these were actually submitted as full papers in the second stage, and 149 were finally accepted as full papers, giving an acceptance rate of 36%, which is typical of high-quality conferences. Of the submissions that were not accepted as full papers, 167 were accepted as posters. There were also 46 additional poster submissions, of which 27 were accepted.

The number of submissions, the number of accepted full papers, and the acceptance rate of each track are shown in Fig. 1. RWA and EML were the two most popular tracks. The overall acceptance rate of the conference (36%) is shown as the blue line, while the acceptance rate per track is shown as a red line, ranging from 29% (DETA and ACO-SI) to 42% (THEORY).

GECCO 2020 was organized into 13 main tracks: Ant Colony Optimization and Swarm Intelligence (ACO-SI), Complex Systems (Artificial Life / Artificial Immune Systems / Generative Systems / Evolutionary Robotics / Evolvable Hardware) (CS), Digital Entertainment Technologies and Arts (DETA), Evolutionary Combinatorial Optimization and Metaheuristics (ECOM), Evolutionary Machine Learning (EML), Evolutionary Multiobjective Optimization (EMO), Evolutionary Numerical Optimization (ENUM), Genetic Algorithms (GA), General Evolutionary Computation and Hybrids (GECH), Genetic Programming (GP), Real World Applications (RWA), Search-Based Software Engineering (SBSE) and Theory.

**Figure 2**. List of countries from which 80% of authors originated.

The submissions received at GECCO 2020 were authored by a total of 2561 authors affiliated to 65 different countries, with 80% of authors affiliated to just 17 countries (see Fig. 2), and with the USA and the UK contributing the largest numbers of authors.

A visualization of the authors per country and continent is given in Fig. 3. The area of the circles is proportional to the number of authors per country. Circles are colored and grouped by continent. Few submissions originated from South America and Africa, which suggests opportunities for growing the GECCO community.

In addition to submissions to the main conference, GECCO 2020 received 79 papers for 21 specialized workshops, and offered 42 tutorials, both introductory and advanced, selected from among 54 proposals.

This edition of GECCO was very fortunate to have three fantastic keynote speakers: Leslie Valiant (Harvard University), Kalyanmoy Deb (Michigan State University) and Darrell Whitley (Colorado State University).

### Historical evolution of GECCO

Comparing the number of submissions and the acceptance rate of GECCO 2020 to those of previous editions (Fig. 4), it can be inferred that the number of submissions depends strongly on the location of the conference, being in general somewhat higher when it is held in Europe. The acceptance rate has been consistently lower than 40% since 2011.

The evolution of the main GECCO tracks is shown in Fig. 5, where the (vertical) width of each line is proportional to the size of the track (number of accepted full papers) and the vertical position of the line gives the relative order (larger to smaller, from top to bottom).

Finally, the growth of GECCO is also clearly illustrated by the number of attendees, which reached an all-time maximum of 691 registrations at GECCO 2018 (Kyoto, Japan), and a new maximum for Europe of 677 registrations at GECCO 2019. It is worth noting that, in spite of being an online conference, and despite the number of submissions being lower than in previous editions, GECCO 2020 reached 652 registrations. In fact, offering reduced registration fees for non-authors attracted attendees who do not normally go to GECCO, and helped to increase the visibility of the conference in more countries.

## Forthcoming Events

EvoStar 2021 is planned as a hybrid event (online and onsite) from 7 to 9 April 2021. Onsite, EvoStar will be held in Seville, Spain.

EvoStar comprises four co-located conferences:

EuroGP 24th European Conference on Genetic Programming

EvoApplications 24th European Conference on the Applications of Evolutionary and bio-inspired Computation

EvoCOP 21st European Conference on Evolutionary Computation in Combinatorial Optimisation

EvoMUSART 10th International Conference (and 18th European event) on Evolutionary and Biologically Inspired Music, Sound, Art and Design

### XPRIZE on COVID-19 predictive and prescriptive modeling.

XPRIZE, in partnership with Cognizant, launched the Pandemic Response Challenge on November 17.

The Pandemic Response Challenge will focus on the development of data-driven models to predict COVID-19 infection rates and prescribe Intervention Plans that regional governments, communities, and organizations can implement to contain the pandemic and reopen safely. For more information (e.g., competition guidelines, access to the GitHub repository and Slack channel), visit: https://xprize.org/pandemicresponse

## Call for Submissions

### The Human-Competitive Awards

Humies 2021 will take place in conjunction with the Genetic and Evolutionary Computation Conference, GECCO 2021, in Lille, France during July 10-14, 2021.

Entries are hereby solicited for awards totalling $10,000 for human-competitive results that have been produced by any form of genetic and evolutionary computation (including, but not limited to genetic algorithms, genetic programming, evolution strategies, evolutionary programming, learning classifier systems, grammatical evolution, gene expression programming, differential evolution, etc.) and that have been published in the open literature between the deadline for the previous competition and the deadline for the current competition.

### Important Dates

**Friday, May 28, 2021** — Deadline for entries (consisting of one text file, PDF files for one or more papers, and possible "in press" documentation). Please send entries to goodman at msu dot edu

**Friday, June 11, 2021** — Finalists will be notified by e-mail

**Friday, June 25, 2021** — Finalists not presenting in person must submit a 10-minute video presentation (or the link and instructions for downloading the presentation, NOT a YouTube link) to goodman at msu dot edu

**July 10-14, 2021 (Saturday - Wednesday)** — GECCO conference (the schedule for the Humies session is not yet final, so please check the GECCO program as it is updated)

**Monday, July 12, 2021, 13:40-15:20 Lille time (Central European Time = GMT+1; 7:40am-9:20am EDT)** — Presentation session, where 10-minute videos will be available for viewing

**Wednesday, July 14, 2021** — Announcement of awards at the (virtual?) plenary session of the GECCO conference

**Judging Committee**: Erik Goodman, Una-May O'Reilly, Wolfgang Banzhaf, Darrell Whitley, Lee Spector, Stephanie Forrest

**Publicity Chair**: William Langdon

### Genetic and Evolutionary Computation Conference

GECCO 2021 will take place in Lille, France, during July 10-14, 2021. GECCO has presented the latest high-quality results in genetic and evolutionary computation since 1999.

**Topics include**: genetic algorithms, genetic programming, ant colony optimization and swarm intelligence, complex systems (artificial life, robotics, evolvable hardware, generative and developmental systems, artificial immune systems), digital entertainment technologies and arts, evolutionary combinatorial optimization and metaheuristics, evolutionary machine learning, evolutionary multiobjective optimization, evolutionary numerical optimization, real world applications, search-based software engineering, theory and more.

### Important dates

Workshop proposals: November 5, 2020

Tutorial proposals: November 5, 2020

Notification of workshop and tutorial acceptance: December 3, 2020

Abstract submission: January 28, 2021

Full paper submission: February 4, 2021

Poster-only submission: February 4, 2021

Notification of acceptance: March 26, 2021

GECCO 2021 Conference: July 10-14, 2021

### Foundations of Genetic Algorithms

In September 2021, the 16th edition of FOGA will take place at the Vorarlberg University of Applied Sciences in Dornbirn, Austria.

The conference series aims at advancing the understanding of the working principles behind evolutionary algorithms and related randomized search heuristics. FOGA is a premier event to discuss advances in the theoretical foundations of these algorithms, corresponding frameworks suitable for analyzing them, and different aspects of comparing algorithm performance.

**Topics of interest include**, but are not limited to: Run time analysis; Mathematical tools suitable for the analysis of search heuristics; Fitness landscapes and problem difficulty; Configuration and selection of algorithms, heuristics, operators, and parameters; Stochastic and dynamic environments, noisy evaluations; Constrained optimization; Problem representation; Complexity theory for search heuristics; Multi-objective optimization; Benchmarking; Connections between black-box optimization and machine learning.

### Important dates

Submission deadline: April 30, 2021

Author rebuttal phase: June 1 - 7, 2021

Notification of acceptance: June 20, 2021

Camera-ready submission: July 14, 2021

Early-registration deadline: July 14, 2021

Conference dates: September 6 - 8, 2021

### Genetic Improvement

GI 2021, the 10th International Workshop on the Repair and Optimisation of Software using Computational Search, will be co-located with the 43rd International Conference on Software Engineering in Madrid, Spain, May 23-29, 2021. **GI 2021** invites submissions that discuss recent developments in all areas of research on, and applications of, Genetic Improvement.

GI is the premier workshop in the field and provides an opportunity for researchers interested in automated program repair and software optimisation to disseminate their work, exchange ideas and discover new research directions.

**Topics of interest include** both the theory and practice of Genetic Improvement. **Applications include**, but are not limited to, using GI to: improve efficiency, decrease memory consumption, decrease energy consumption, transplant new functionality, specialise software, translate between programming languages, generate multiple versions of software, and repair bugs.

### Important dates

Submission deadline: January 12, 2021

Notification of acceptance: February 22, 2021

Camera-ready submission: March 12, 2021

Workshop dates: May 23 - 29, 2021

## About this Newsletter

SIGEVOlution is the newsletter of SIGEVO, the ACM Special Interest Group on Genetic and Evolutionary Computation. To join SIGEVO, please follow this link: [WWW]

We solicit contributions in the following categories:

**Art**: Are you working with Evolutionary Art? We are always looking for nice evolutionary art for the cover page of the newsletter.

**Short surveys and position papers**: We invite short surveys and position papers in EC and EC related areas. We are also interested in applications of EC technologies that have solved interesting and important problems.

**Software**: Are you a developer of a piece of EC software, and wish to tell us about it? Then send us a short summary or a short tutorial of your software.

**Lost Gems**: Did you read an interesting EC paper that, in your opinion, did not receive enough attention or should be rediscovered? Then send us a page about it.

**Dissertations**: We invite short summaries, around a page, of theses in EC-related areas that have been recently discussed and are available online.

**Meetings Reports**: Did you participate in an interesting EC-related event? Would you be willing to tell us about it? Then send us a summary of the event.

**Forthcoming Events**: If you have an EC event you wish to announce, this is the place.

**News and Announcements**: Is there anything you wish to announce, such as an employment vacancy? This is the place.

**Letters**: If you want to ask or to say something to SIGEVO members, please write us a letter!

**Suggestions**: If you have a suggestion about how to improve the newsletter, please send us an email.

**Contributions** will be reviewed by members of the newsletter board. We accept contributions in plain text, MS Word, or LaTeX, but do not forget to send your sources and images.

Enquiries about submissions and contributions can be emailed to **editor@sigevolution.org**

All the issues of SIGEVOlution are also available online at: www.sigevolution.org

**Notice to contributing authors to SIG newsletters**

By submitting your article for distribution in the Special Interest Group publication, you hereby grant to ACM the following non-exclusive, perpetual, worldwide rights:

to publish in print on condition of acceptance by the editor

to digitize and post your article in the electronic version of this publication

to include the article in the ACM Digital Library

to allow users to copy and distribute the article for noncommercial, educational or research purposes

However, as a contributing author, you retain copyright to your article and ACM will make every effort to refer requests for commercial use directly to you.

**Editor**: Gabriela Ochoa

**Associate Editors**: Emma Hart, James McDermott, Una-May O'Reilly and Darrell Whitley

**Design & Layout**: Gabriela Ochoa