Newsletter of the ACM Special Interest Group on Genetic and Evolutionary Computation
Editorial
Welcome to the 4th (Winter) 2024 issue of the SIGEvolution newsletter! We start by celebrating this year’s ACM SIGEVO Outstanding Contribution Awardees: Dr. Anne Auger and Prof. Dr. Franz Rothlauf, who kindly shared insights on their contributions and perspectives on evolutionary computation. Our next article overviews the recent developments and achievements of the Cartesian Genetic Programming community. We conclude with further announcements and calls for submissions. Remember to contact us if you’d like to contribute or have suggestions for future newsletter issues.
Gabriela Ochoa (Editor)
About the Cover
The cover image (by Tea Tušar, Department of Intelligent Systems, Jožef Stefan Institute) showcases the correlation landscapes of six bi-objective optimization problems with two variables. The correlation between the two objectives, estimated by the Pearson correlation coefficient, ranges from -1 (perfect anti-correlation, depicted in dark red) through 0 (no dependency, shown in white) to 1 (perfect correlation, illustrated in dark blue). These visualizations reveal the number, distribution, and structure of the basins of attraction within each problem’s search space. They highlight that the correlation between objectives is not a global property but varies depending on the location in the search space.
These visualizations originated from the “Multiobjective Optimization on a Budget” Dagstuhl Seminar, where a working group focused on exploring correlations in multi-objective optimization (see the corresponding Dagstuhl report for further details). The six problems depicted here are part of the bbob-biobj suite from the COCO (Comparing Continuous Optimizers) platform, which is used for benchmarking optimization algorithms.
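For readers curious to experiment, the short Python sketch below computes a comparable local-correlation map for a toy bi-objective problem. It is only an illustration under stated assumptions: the two objective functions are simple stand-ins (not members of the bbob-biobj suite), and the neighbourhood radius and sample count are arbitrary choices rather than the settings used for the cover.

```python
# Minimal sketch (not the code behind the cover image): estimate the local
# Pearson correlation between two toy objectives over a 2-D search space.
import numpy as np
import matplotlib.pyplot as plt

def f1(x, y):
    return (x - 1.0) ** 2 + (y - 1.0) ** 2        # sphere centred at (1, 1)

def f2(x, y):
    return (x + 1.0) ** 2 + 2.0 * (y + 1.0) ** 2  # ellipsoid centred at (-1, -1)

def local_correlation(cx, cy, radius=0.3, n_samples=200, rng=None):
    """Pearson correlation of f1 and f2 over random samples near (cx, cy)."""
    rng = np.random.default_rng() if rng is None else rng
    xs = cx + rng.uniform(-radius, radius, n_samples)
    ys = cy + rng.uniform(-radius, radius, n_samples)
    return np.corrcoef(f1(xs, ys), f2(xs, ys))[0, 1]

grid = np.linspace(-3, 3, 60)
rng = np.random.default_rng(42)
corr = np.array([[local_correlation(cx, cy, rng=rng) for cx in grid] for cy in grid])

# Dark red for -1, white for 0, dark blue for +1, as in the cover description.
plt.imshow(corr, origin="lower", extent=[-3, 3, -3, 3], cmap="RdBu", vmin=-1, vmax=1)
plt.colorbar(label="local Pearson correlation between objectives")
plt.xlabel("$x_1$")
plt.ylabel("$x_2$")
plt.show()
```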
Table of Contents
The 2024 ACM SIGEVO Outstanding Contribution Awardees
The SIGEVO Outstanding Contribution Award recognizes remarkable contributions to Evolutionary Computation (EC) when evaluated over a sustained period of at least 15 years. These contributions can include technical innovations, publications, leadership, teaching, mentoring, and service to the EC community.
In 2024, two distinguished members of our community received this recognition: Dr. Anne Auger and Prof. Dr. Franz Rothlauf. To celebrate these distinctions, Anne and Franz kindly answered our questions, reflecting on their contributions and views on evolutionary computation as well as their advice to young researchers in the field.
Dr. Anne Auger
INRIA RandOpt Team, CMAP, Ecole Polytechnique, Palaiseau, France

Q1: Which are the service, editorial, leadership, mentoring or other contributions you are most proud of?
I am really proud of having served as General Chair for GECCO in 2019. I am also deeply honored to serve on the business committee of SIGEVO and to have been invited early in my career to join the SIGEVO board.
Equally fulfilling has been my role as a mentor to students. Guiding PhD students through their research endeavors is particularly rewarding. I also take great pride in mentoring Master’s-level students, especially those learning about derivative-free optimization and evolutionary computation. Helping them grasp the foundational and advanced concepts of these areas while witnessing their enthusiasm and intellectual curiosity is an experience that continually reinvigorates my own passion for these fields.
Q2: Which are your most significant technical contributions to Evolutionary Computation (EC)?
I am a mathematician by training. I started to study the theoretical convergence of Evolution Strategies before they became known outside the EC community.
Some of my most significant contributions relate to proving the linear convergence of adaptive evolution strategies. The first results, obtained during my PhD, concerned functions with spherical level sets and a specific step-size adaptive algorithm [1]. It turned out that the technique I developed, connecting the linear convergence of an ES to the stability of normalized Markov chains, could be greatly generalized to larger classes of functions, including in particular non-quasi-convex functions, and to larger classes of algorithms [2]. This allowed us to obtain linear convergence proofs for state-of-the-art step-size adaptive algorithms [3]. This was done in collaboration with Nikolaus Hansen and our PhD student Cheikh Touré.
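To give a schematic flavour of these results (a simplified sketch, not the precise statements of [1,2,3]): for a step-size adaptive ES with incumbent X_t, step size σ_t and optimum x*, linear convergence and the associated normalized chain can be written roughly as follows.

```latex
% Simplified sketch of linear convergence of a step-size adaptive ES
\[
  \lim_{t\to\infty} \frac{1}{t}\,\log
  \frac{\lVert X_t - x^\ast\rVert}{\lVert X_0 - x^\ast\rVert} \;=\; -r
  \qquad \text{almost surely, for some rate } r > 0,
\]
% which is obtained by proving the stability (e.g. positivity and Harris
% recurrence) of the normalized Markov chain
\[
  Z_t \;=\; \frac{X_t - x^\ast}{\sigma_t}.
\]
```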
Together with Youhei Akimoto and Tobias Glasmachers, we also understood how to directly analyze the non-normalized Markov chains underlying adaptive ES and provide hitting-time bounds pertaining to linear convergence [4,5].
These results specifically pertain to step-size adaptive Evolution Strategies, marking a significant step forward in theoretical analysis. Our ultimate objective, however, has always been to extend such analyses to algorithms with covariance matrix adaptation, a more complex and powerful class of ES. I am pleased to share that we have recently succeeded in proving linear convergence for CMA-ES, a milestone achieved in particular thanks to the exceptional Ph.D. work of Armand Gissler. While these findings are still in the process of being published, they represent a major advancement.
I also worked in multi-objective optimization, formalizing and analyzing the optimization goal of hypervolume-based optimization algorithms [6,7].
Finally, an additional significant contribution lies in the area of benchmarking methodology. This work began as a collaborative effort with Nikolaus Hansen and some of our students back in 2009, well before benchmarking became a prominent focus in the Evolutionary Computation (EC) community. Together, we developed and implemented this methodology within the COmparing Continuous Optimizers (COCO) platform [8], which provides a seamless way to benchmark both deterministic and stochastic optimization algorithms, enabling direct performance comparisons with previously benchmarked methods. The results and data are openly accessible in COCO’s data archive (see https://numbbo.it/data-archive/), fostering transparency and collaboration in the field.
As the project progressed, we were fortunate to have Dimo Brockhoff, Olaf Mersmann and Tea Tušar join the core COCO development team, bringing their expertise and further enhancing the platform’s capabilities. Their contributions have been instrumental in advancing COCO into the robust benchmarking tool it is today.
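For readers who have not used COCO before, the following minimal sketch shows, under a few assumptions, how a CMA-ES run could be benchmarked on the bbob suite using the cocoex and cma (pycma) Python modules. Package names and argument details may differ between versions, so the example scripts at https://numbbo.it/ remain the authoritative reference.

```python
# Minimal benchmarking sketch (assumption: the COCO experimentation module
# cocoex and the pycma package cma are installed; see https://numbbo.it/
# for the official, up-to-date example experiment scripts).
import cocoex  # COCO experimentation module
import cma     # pycma, a Python implementation of CMA-ES

suite = cocoex.Suite("bbob", "", "")  # single-objective bbob test suite
observer = cocoex.Observer("bbob", "result_folder: cmaes-on-bbob")

for problem in suite:                  # loop over all problem instances
    problem.observe_with(observer)     # log evaluations in COCO's data format
    cma.fmin2(problem, problem.initial_solution, 2.0,
              {'maxfevals': 100 * problem.dimension, 'verbose': -9})
    problem.free()
```

The resulting data folder can then be post-processed and compared against previously benchmarked algorithms from the COCO data archive.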
Q3: What are the current open problems, or topics where you think there are opportunities for substantial contributions in our field?
I see opportunities for substantial contributions in areas that are highly relevant in practice but, to a great extent, not yet deeply explored. I think, for instance, of mixed-integer/discrete optimization with or without constraints, and of its combination with multi-objective optimization. Those subdomains are much less mature than unconstrained continuous optimization, where it is now accepted that CMA-ES is a powerful algorithm that is likely hard to improve further.
Q4: Do you think the current AI hype is an opportunity or a threat for EC?
I see it as a great opportunity. It attracts more students to topics connected closely or loosely to AI, and it brings more funding opportunities.
It also helps that more and more people, in particular from the optimization community, accept that it is fine if practice and algorithm design are ahead of theory, and even realize that it is actually an advantage. It has always been in the DNA of research in EC that we should have algorithms that provide solutions to (real, difficult) problems before having algorithms where we can prove mathematically that they work. I believe this approach has been a critical factor in the success of many EC methods, despite the challenges it has occasionally presented. However, I feel that this era of scepticism is largely behind us; the worldwide recognition of the impact of AI methods, which were developed in the same spirit (practice before theory), has been helpful for the EC community.
Q5: How do you view the visibility of the EC community in the larger computer science community?
I am convinced that the visibility and recognition of methods originating from the field of Evolutionary Computation (EC) have significantly increased in recent years.
This is evident not only in evolutionary multi-objective optimization but also in single-objective continuous optimization, which is the area I am most familiar with. A prominent example is the CMA-ES algorithm, which has gained substantial recognition beyond the EC community. Its widespread adoption is reflected in its remarkable usage statistics, with over 70 million downloads of its source code, as tracked by pepy.tech for the cma and cmaes Python packages. The algorithm has found applications in artificial intelligence, where it is integrated into hyperparameter optimization frameworks and is extensively cited in papers presented at leading AI conferences.
Evolution Strategies (ES) are also achieving greater visibility within the mathematical optimization community. Papers on ES have been published in highly respected venues for mathematical optimization, such as the SIAM Journal on Optimization and the Journal of Global Optimization [2,3,5], further underscoring their growing relevance and acceptance in this domain.
Our work on benchmarking methodologies, particularly through the development of the COCO platform, is another example of EC research making an impact beyond its traditional boundaries. The support and momentum provided by the BBOB workshops at GECCO were instrumental in advancing these efforts, and the outcomes have been recognized and published outside the EC community as well.
These examples, which I am closely connected to through my own work, highlight the expanding influence of EC methods. However, I am confident that there are many more examples of such growth and visibility across the broader EC landscape!
Q6: What advice do you have for the younger generation of researchers in the field?
The Evolutionary Computation (EC) community is notably inclusive and open-minded, creating an excellent opportunity for innovation. This environment makes it more likely for original and unconventional ideas proposed by young researchers to gain acceptance and recognition compared to other, more rigid scientific communities, which is great! However, this open-mindedness can sometimes result in the acceptance of work of suboptimal quality.
Resist the temptation to publish vague or poorly executed ideas, or to present “novel algorithms” that merely make trivial adjustments—such as tweaking a minor parameter or adding an irrelevant component—without offering meaningful improvements. In the long term, publishing such work will not help you. Be critical during the entire process of obtaining a scientific result: challenge your ideas, ask friends to challenge them, find out why and where they work or do not work, and improve what does not work! Be welcoming if someone finds a problem in your algorithm: it will allow you to improve it and make it stronger!
References
[1] Auger, A. (2005). Convergence results for the (1,λ)-SA-ES using the theory of φ-irreducible Markov chains. Theoretical Computer Science, 334(1-3), 35-69.
[2] Auger, A., & Hansen, N. (2016). Linear convergence of comparison-based step-size adaptive randomized search via stability of Markov chains. SIAM Journal on Optimization, 26(3), 1589-1624.
[3] Touré, C., Auger, A., & Hansen, N. (2023). Global linear convergence of evolution strategies with recombination on scaling-invariant functions. Journal of Global Optimization, 86(1), 163-203.
[4] Akimoto, Y., Auger, A., & Glasmachers, T. (2018). Drift theory in continuous search spaces: Expected hitting time of the (1+1)-ES with 1/5 success rule. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2018), Kyoto, Japan. ACM.
[5] Akimoto, Y., Auger, A., Glasmachers, T., & Morinaga, D. (2022). Global linear convergence of evolution strategies on more than smooth strongly convex functions. SIAM Journal on Optimization, 32(2), 1402-1429.
[6] Auger, A., Bader, J., Brockhoff, D., & Zitzler, E. (2009). Theory of the hypervolume indicator: Optimal μ-distributions and the choice of the reference point. In Foundations of Genetic Algorithms (FOGA 2009), pp. 87-102. ACM.
[7] Auger, A., Bader, J., Brockhoff, D., & Zitzler, E. (2012). Hypervolume-based multiobjective optimization: Theoretical foundations and practical implications. Theoretical Computer Science, 425, 75-103.
[8] Hansen, N., Auger, A., Ros, R., Mersmann, O., Tušar, T., & Brockhoff, D. (2021). COCO: A platform for comparing continuous optimizers in a black-box setting. Optimization Methods and Software, 36(1), 114-144.
Prof. Dr. Franz Rothlauf
Information Systems, University of Mainz, Germany

Q1: Which are the service, editorial, leadership, mentoring or other contributions you are most proud of?
I am proud of having served for a long time as treasurer and then as chair of SIGEVO. During this time, the most challenging task was dealing with the Covid pandemic. For a few months, continuous, rapid, and complete change was necessary to adapt the conferences to the changing environment. I am very happy that GECCO did very well through this time and even became a better and stronger conference after all these challenges. I am especially happy that the hybrid character of GECCO also allows remote presentations, which makes GECCO a much more inclusive and much more environment-friendly event. My hope is that not only SIGEVO but the whole scientific community continues on this path towards environment-friendly events with much less physical traveling, as continuously burning oil will have (and probably already has) a devastating effect on our way of life.
Q2: Which are your most significant technical contributions to Evolutionary Computation (EC)?
First, studying the combination of representation and operators and getting a better understanding of how locality and redundancy affect the performance of EC. This is relevant not only for problems of fixed size; genetic programming suffers even more from representation-related problems. Besides my book on representations from 2010 [1], where I view the parts on redundancy as most relevant, there are many subsequent publications, such as [3], that analyze the problems of EAs with low locality.
Second, I view the use of machine learning models that replace the traditional search operators as a promising research field. Here, I see [2] as one of my most significant technical contributions.
Q3: What are the current open problems, or topics where you think there are opportunities for substantial contributions in our field?
We have a good understanding of the possibilities (but also limitations) of EC for problems of fixed size. For such problems, EC works well, and a successful application of EC “only” requires a problem-specific adaption of the used optimization method. However, the situation is completely different for GP, which still (after 30 years of research) does not work well and often acts as random search. All current approaches only work for small and toy problems, but not for larger problems of reasonable size. To me, this is mainly because the way we represent solutions in GP creates landscapes that are very rugged, and which cannot be efficiently searched by guided search methods like GP [3]. I see the development of approaches that lead to smoother search spaces as one of the largest open problems in our field.
Q4: Do you think the current AI hype is an opportunity or a threat for EC?
I see it as a great chance. Adaptation is very relevant for machine learning approaches, and there is more for ML researchers to find in EC than stochastic gradient descent 😉 I am convinced that the machine learning models that came up in the last five years will have a large impact on optimization, as they directly learn from training data what good solutions look like for many real-world problems. This leads to solutions that are more attractive for humans to use, in comparison to traditional optimization approaches, which require complicated problem modelling including the design of unrealistic fitness functions.
Unfortunately, ML people currently often re-invent many things that are established knowledge in EC; however, I am optimistic that in the long run there will be a fruitful cooperation between ML and EC researchers. And I clearly see the AI hype (as soon as it is not a hype any more) as an opportunity.
Q5: How do you view the visibility of the EC community in the larger computer science community?
We are not where we should be, but we have been doing better in recent years. As a side effect of the AI hype, the larger computer science community now also accepts that neural networks (which are nature-inspired) and EC (which is also nature-inspired) might do a good job for some tasks. The computer science community understands that EC is not just applying heuristics: there is a profound theoretical understanding, and there are many problems that can be solved well using metaheuristics.
Q6: What advice do you have for the younger generation of researchers in the field?
Do not try to maximize your paper output or h-index, but enjoy what you are doing. Of course, especially young researchers have to fulfil expectations (at least to get tenured 😉), but it would be nice if matching the expectations from the outside could be fun. We currently live in a world that is becoming very digital, and CS (and of course EC and ML) is one of the drivers of this change. This is exciting!
References
[1] Rothlauf, F. (2010). Representations for genetic and evolutionary algorithms (2nd edition). Springer. https://doi.org/10.1007/3-540-32444-5_2
[2] Rothlauf, F., & Probst, M. (2020). Harmless Overfitting: Using Denoising Autoencoders in Estimation of Distribution Algorithms. Journal of Machine Learning Research, 21(78). https://jmlr.org/papers/volume21/16-543/16-543.pdf
[3] Sobania, D., Rothlauf, F. (2020). Challenges of Program Synthesis with Grammatical Evolution. Genetic Programming. EuroGP 2020. Lecture Notes in Computer Science, vol 12101. Springer, Cham. https://doi.org/10.1007/978-3-030-44094-7_14
Drifting and Evolving: The graph-based Genetic Programming community has entered a new Era
Roman Kalkreuth, RWTH Aachen University, Germany

It is now over two years since Julian F. Miller sadly passed away in February 2022, and a lot of development has taken place within the graph-based genetic programming (GGP) community since then.
After Julian’s death, I felt puzzled about how we could proceed, since he had been the main driver of our community through his pioneering work on Cartesian GP. After various meetings with colleagues, I noticed a common feeling that we needed to find a way to reform and regroup ourselves, which has finally led us into a new era. At GECCO’22 in Boston, we organized a graph-based GP tutorial [1] dedicated to Julian, since he had offered various tutorials at major evolutionary computation conferences throughout his academic career. It was our first step to keep community-driven efforts going.
At that time, we realized that another format was desperately needed to facilitate the reformation process. We therefore exchanged first ideas on community-building efforts and finally agreed that a workshop at GECCO’23 in Lisbon would be a good idea. Dennis Wilson and I were mainly responsible for the first edition, which included two keynotes, given by Wolfgang Banzhaf (Michigan State University, USA) and Lukas Sekanina (Brno University of Technology, Czech Republic), and several papers on highly relevant topics for GGP such as representations, genetic operators, and selection mechanisms, but also novel applications. The papers were presented as posters to facilitate open discussion and exchange, and to give attendees the opportunity for networking. Due to the high number of attendees, we decided to go for a second edition at GECCO in Melbourne this year.

For this second workshop, we decided to invite a keynote speaker from the private sector, which allowed us to bring in new elements for exchange and discussion. We were glad that sakana.ai, a thriving Tokyo-based artificial intelligence startup founded by two prominent former Google AI researchers, David Ha and Llion Jones, accepted our invitation.
Takuya Akiba, a research scientist at sakana.ai, presented his recent work called Evolutionary Model Merge [2], which aims to combine large language models in parameter and data flow space to evolve new foundation models. Another great keynote was given by Bill Langdon (University College London, UK) on evolutionary robustness, which addressed key findings of his pioneering work in Genetic Improvement.
The 2024 workshop had a broader organizing team that included people on various academic levels. Since Julian’s passing, various papers have been presented at major conferences in evolutionary computation such as GECCO, PPSN, FOGA, EVO* and CEC.

Recently, Cartesian GP was used to evolve image filters for biomedical image segmentation; this work appeared in Nature Communications [3] and obtained the Gold Award at the GECCO 2024 Humies competition. The article that received the best-paper award at EuroGP 2024 was also based on Cartesian GP [4].
The Cartesian GP community has now taken a new approach to community building, allowing people to become active in our community at an early stage and providing an opportunity for personal development and growth. With this new format, we have slightly drifted away from former structures that had evolved quite naturally. The adoption of genetic drift has been essential for Julian’s Cartesian GP and his research in general. It is therefore in our spirit to follow this kind of natural development.
At GECCO’22, the prospect of reforming the community seemed too challenging for me during my postdoctoral period, since the footsteps of all the great people who had helped to shape CGP over the past decades felt quite big at that time. But today, seeing these vivid and versatile activities evolving in our community, I realize that all the efforts (and struggles, of course) were worth it. The CGP community continues in the spirit of Julian’s pioneering work and his non-toxic approach to academia. His vision continues to inspire us, and I would therefore like to conclude my article with a quote of Julian’s which we used for the dedication of the GECCO’22 GGP tutorial:
“I think Darwin’s evolution by natural selection is one of the greatest ideas of all time. Using those principles on computers has profoundly altered my personal philosophy. I think living systems are extremely precious. They are all unique and took 3.5 billion years to evolve to what they are. Imagine it took 3.5 billion years to build something, wouldn’t it be fantastically precious? …”
– Julian F. Miller (1955 – 2022)

I would like to thank all the people who supported our community-building efforts over the last two years for their great effort and commitment. This also includes my supervisors Guenter Rudolph, Thomas Baeck, Carola Doerr, and Holger Hoos at TU Dortmund University, Leiden University, Sorbonne University, and RWTH Aachen University, who offered me a suitable environment to plan our activities and equipped me with good advice.
About the Author

Roman Kalkreuth (right) is currently an assistant professor at RWTH Aachen University (Germany). Primarily, his research focuses on the analysis and development of algorithms for graph-based genetic programming.
References
[1] Kalkreuth, R., Sotto, L. F. D. P., & Vašı́ček, Z. (2022). Graph-based genetic programming. Proceedings of the Genetic and Evolutionary Computation Conference Companion, 958–982. https://doi.org/10.1145/3520304.3533657
[2] Akiba, T., Shing, M., Tang, Y., Sun, Q., & Ha, D. (2024). Evolutionary Optimization of Model Merging Recipes. https://arxiv.org/abs/2403.13187
[3] Cortacero, K., McKenzie, B., Müller, S., et al. (2023). Evolutionary design of explainable algorithms for biomedical image segmentation. Nature Communications, 14, 7112. https://doi.org/10.1038/s41467-023-42664-x
[4] Nadizar, G., Medvet, E., & Wilson, D. G. (2024). Naturally Interpretable Control Policies via Graph-Based Genetic Programming. Genetic Programming: 27th European Conference, EuroGP 2024, Held as Part of EvoStar 2024, Aberystwyth, UK, April 3–5, 2024, Proceedings, 73–89. https://doi.org/10.1007/978-3-031-56957-9_5
Announcements

Genetic Programming and Evolvable Machines (GPEM)
Volume 25, Issue 2, December 2024
Outgoing Editor: Lee Spector
Incoming Editor: Leonardo Trujillo
Editorial
Chief editorship transition
Lee Spector, Leonardo Trujillo
Research Articles
An investigation into structured grammatical evolution initialisation
Aidan Murphy, Mahsa Mahdinejad, Anthony Ventresque, Nuno Lourenço
Part of collection: Special Issue on Twenty-Five Years of Grammatical Evolution
Evolving code with a large language model
Erik Hemberg, Stephen Moskal, Una-May O’Reilly
HGA-LSTM: LSTM architecture and hyperparameter search by hybrid GA for air pollution prediction
Jiayu Liang, Yaxin Lu, Mingming Su
A survey on dynamic populations in bio-inspired algorithms
Davide Farinati, Leonardo Vanneschi
GSGP-hardware: instantaneous symbolic regression with an FPGA implementation of geometric semantic genetic programming
Yazmin Maldonado, Ruben Salas, Leonardo Trujillo
Part of collection: Special Issue for the Tenth Anniversary of Geometric Semantic Genetic Programming
Geometric semantic GP with linear scaling: Darwinian versus Lamarckian evolution
Giorgia Nadizar, Berfin Sakallioglu, Leonardo Vanneschi
Part of collection: Highlights of Genetic Programming 2023 Events
Book Reviews
Leonardo Vanneschi and Sara Silva: Lectures on Intelligent Systems
Beatrice M. Ombuki-Berman
“The Physics of Evolution” by Michael W. Roth, CRC Press, 2023
Wolfgang Banzhaf
Benjamin Doerr and Frank Neumann (editors): Theory of Evolutionary Computation
Jonathan E. Rowe

ACM Transactions on Evolutionary Learning and Optimization (TELO)
Volume 4, Issue 4, December 2024
Editors: Juergen Branke, Manuel López-Ibáñez
Evolutionary Seeding of Diverse Structural Design Solutions via Topology Optimization
Yue Xie, Josh Pinskier, Xing Wang, David Howard
https://doi.org/10.1145/3670693
Evolving to Find Optimizations Humans Miss: Using Evolutionary Computation to Improve GPU Code for Bioinformatics Applications
Jhe-Yu Liou, Muaaz Awan, Kirtus Leyba, Petr Šulc, Steven Hofmeyr, et al.
https://doi.org/10.1145/3703920
Layer-Wise Learning Rate Optimization for Task-Dependent Fine-Tuning of Pre-Trained Models: An Evolutionary Approach
Chenyang Bu, Yuxin Liu, Manzong Huang, Jianxuan Shao, Shengwei Ji, Wenjian Luo, et al.
https://doi.org/10.1145/3689827
Evolutionary Optimization with a Simplified Helper Task for High-Dimensional Expensive Multiobjective Problems
Xunfeng Wu, Qiuzhen Lin, Junwei Zhou, Songbai Liu, Carlos A. Coello Coello, et al.
https://doi.org/10.1145/3637065
Bayesian Inverse Transfer in Evolutionary Multiobjective Optimization
Jiao Liu, Abhishek Gupta, Yew-Soon Ong
https://doi.org/10.1145/3674152
On the Generalisation Performance of Geometric Semantic Genetic Programming for Boolean Functions: Learning Block Mutations
Dogan Corus, Pietro S. Oliveto
https://doi.org/10.1145/3677124
Learning the Graph Structure of Regular Vine-Copulas from Dependence Lists
Diana Carrera, Roberto Santana, Jose Antonio Lozano
https://doi.org/10.1145/3695467
Language Model Crossover: Variation through Few-Shot Prompting
Elliot Meyerson, Mark J. Nelson, Herbie Bradley, Adam Gaier, Arash Moradi, et al.
https://doi.org/10.1145/3694791
Call for Papers: Conferences

GECCO 2025 @ Malaga (Hybrid)
The Genetic and Evolutionary Computation Conference (GECCO) presents the latest high-quality results in genetic and evolutionary computation since 1999. Topics include: genetic algorithms, genetic programming, swarm intelligence, complex systems, evolutionary combinatorial optimization and metaheuristics, evolutionary machine learning, learning for evolutionary computation, evolutionary multiobjective optimization, evolutionary numerical optimization, neuroevolution, real-world applications, theory, benchmarking, reproducibility, hybrids and more. Detailed Call for Papers.
Important Dates
Full papers
- Abstract: January 22, 2025
- Submission: January 29, 2025
- Notification: March 19, 2025
- Camera-ready: April 09, 2025
Poster-only papers
- Submission: January 29, 2025
- Notification: March 19, 2025
- Camera-ready: April 09, 2025
Call for Papers: Journal Special Issues

ACM Transactions on Evolutionary Learning and Optimization (TELO)
Special Issue on Benchmarking in Multi-Criteria Optimization
Guest Editors
- Dimo Brockhoff, Inria and IP Paris, France, dimo.brockhoff@inria.fr
- Boris Naujoks, TH Köln, Germany, boris.naujoks@th-koeln.de
- Robin Purshouse, University of Sheffield, UK, r.purshouse@sheffield.ac.uk
- Tea Tušar, Jožef Stefan Institute, Slovenia, tea.tusar@ijs.si
Benchmarking involves the experimental comparison of optimization algorithms. This activity is of high practical importance in the design of algorithms and their application because practically relevant algorithms are often too complex to be analysed theoretically; nevertheless, a practitioner needs to decide which algorithm to apply. Moreover, we cannot apply a theoretical algorithm in practice, but only an implementation of it, and two implementations of the same algorithm might differ significantly, due to different handling of numerics, varying internal parameter settings, or other reasons.
At a recent Lorentz Center workshop, “BeMCO: Benchmarking in Multi-Criteria Optimisation”, several topic clusters in urgent need of more research were identified. This special issue aims to capture cutting-edge research and novel ideas in the benchmarking of optimization algorithms for problems with multiple objectives.
Full Call for Papers (PDF)
Important Dates
- Open for Submissions: January 1, 2025
- Submissions deadline: April 1, 2025
- First-round review decisions: June 1, 2025
- Deadline for revisions: August 1, 2025
- Notification: October 1, 2025
- Tentative publication: December 2025
Submission Information
You are invited to submit your articles no later than the given deadline. All papers will be subject to a rigorous peer review. Manuscripts should be prepared according to the “Guidelines for Authors” section at https://dl.acm.org/journal/telo/author-guidelines, and submissions should be made through the journal submission website at https://mc.manuscriptcentral.com/telo by selecting the Manuscript Type “Special Issue on Benchmarking in Multi-Criteria Optimization”. For questions and further information, please contact one of the guest editors.
Call For Entries

22nd Annual (2025) “Humies” Awards
For Human-Competitive Results – Produced by Genetic and Evolutionary Computation
To be held as part of the Genetic and Evolutionary Computation Conference (GECCO)
July 14-18, 2025 (Monday – Friday), Malaga, Spain (Hybrid)
Detailed Call for Entries
Entries are hereby solicited for awards totaling $10,000 for human-competitive results that have been produced by any form of genetic and evolutionary computation (including, but not limited to genetic algorithms, genetic programming, evolution strategies, evolutionary programming, learning classifier systems, grammatical evolution, gene expression programming, differential evolution, genetic improvement, etc.) and that have been published in the open, reviewed literature between the deadline for the previous competition and the deadline for the current competition.
Important Dates
- Friday, May 30, 2025: Deadline for entries (consisting of one TEXT file, PDF files for one or more papers, and possible “in press” documentation). Please send entries to goodman at msu dot edu
- Friday, June 13, 2025: Finalists will be notified by e-mail
- Friday, June 27, 2025: Finalists must submit a 10-minute video or, if presenting in person, their slides, to goodman at msu dot edu.
- July 14-18, 2025 (Monday – Friday): GECCO conference (the schedule for the Humies session is not yet final, so please check the GECCO program as it is updated for the time of the Humies session). GECCO will be in hybrid mode, so the finalists may present their entry in person or on video.
- Friday, July 18, 2025: Announcement of awards at the plenary session of the GECCO conference.
About this Newsletter
SIGEVOlution is the newsletter of SIGEVO, the ACM Special Interest Group on Genetic and Evolutionary Computation. To join SIGEVO, please follow this link: [WWW].
We solicit contributions in the following categories:
Art: Are you working with Evolutionary Art? We are always looking for nice evolutionary art for the cover page of the newsletter.
Short surveys and position papers. We invite short surveys and position papers in EC and EC-related areas. We are also interested in applications of EC technologies that have solved interesting and important problems.
Software. Are you a developer of a piece of EC software, and wish to tell us about it? Then send us a short summary or a short tutorial of your software.
Lost Gems. Did you read an interesting EC paper that, in your opinion, did not receive enough attention or should be rediscovered? Then send us a page about it.
Dissertations. We invite short summaries, around a page, of theses in EC-related areas that have been recently discussed and are available online.
Meetings Reports. Did you participate in an interesting EC-related event? Would you be willing to tell us about it? Then send us a summary of the event.
Forthcoming Events. If you have an EC event you wish to announce, this is the place.
News and Announcements. Is there anything you wish to announce, such as an employment vacancy? This is the place.
Letters. If you want to ask or say something to SIGEVO members, please write us a letter!
Suggestions. If you have a suggestion about how to improve the newsletter, please send us an email.
Contributions will be reviewed by members of the newsletter board. We accept contributions in plain text, MS Word, or LaTeX, but do not forget to send your sources and images.
Enquiries about submissions and contributions can be emailed to gabriela.ochoa@stir.ac.uk
All the issues of SIGEVOlution are also available online at: evolution.sigevo.org
Notice to contributing authors to SIG newsletters
As a contributing author, you retain the copyright to your article. ACM will refer all requests for republication directly to you.
By submitting your article for distribution in any newsletter of the ACM Special Interest Groups listed below, you hereby grant to ACM the following non-exclusive, perpetual, worldwide rights:
- to publish your work online or in print on the condition of acceptance by the editor
- to include the article in the ACM Digital Library and in any Digital Library-related services
- to allow users to make a personal copy of the article for noncommercial, educational, or research purposes
- to upload your video and other supplemental material to the ACM Digital Library, the ACM YouTube channel, and the SIG newsletter site
If third-party materials were used in your published work, supplemental material, or video, make sure that you have the necessary permissions to use those third-party materials in your work.
Editor: Gabriela Ochoa
Sub-editor: James McDermott
Associate Editors: Emma Hart, Bill Langdon, Una-May O’Reilly, Nadarajen Veerapen, and Darrell Whitley