newsletter of the ACM Special Interest Group on Genetic and Evolutionary Computation
Editorial
Welcome to the 2nd (Summer) 2023 issue of the SIGEvolution newsletter! Our cover image is a contribution from Craig Reynolds, notable in our community as the creator of Boids. Here, he shares with us his system for the coevolution of camouflage. The issue continues with two contributions related to Evolutionary Computation and explainable/interpretable AI, a topic that is gaining interest in our community. A fresh overview of this year’s EvoStar conference follows, covering both the scientific and social highlights, from the perspective of one of this year’s outstanding EvoStar PhD students, who is also a best-paper awardee. We complete the issue with recent announcements and forthcoming events. Please get in touch if you would like to contribute an article for a future issue or have suggestions for the newsletter.
Gabriela Ochoa, Editor
About the Cover
Prey camouflage evolving over simulation time to become more effective in a given environment. (“Prey” are disks with diameter 20% of image width. Background photos show leaf litter under California live oak trees.) Six snapshots during a coevolutionary simulation, with a population of prey that evolve camouflage patterns, and a population of predators that evolve an ability to find and eat prey. This is Figure 2 from “Coevolution of Camouflage”, to appear in Proceedings of the Artificial Life Conference 2023 in Sapporo, Japan, July 24-28. Preprint: https://arxiv.org/abs/2304.11793
These prey evolve camouflage patterns using Genetic Programming applied to a procedural texture synthesis library. The predators are convolutional neural nets which inherit weights from their parents and learn (“fine-tune”) during their lifetime. All fitness is relative, based on tournaments of three individuals. Using negative selection, the worst individual in a tournament may die (by predation or starvation) and be replaced by an offspring of the other two individuals in the tournament. Craig Reynolds.
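For readers curious about this steady-state tournament scheme, here is a minimal Python sketch. It is not the author’s code: “quality” is a hypothetical stand-in for the outcome of predator/prey trials, and the genomes are plain numbers rather than evolved texture programs.

import random

def tournament_step(population, quality, crossover):
    # Negative selection: pick three individuals at random; the worst dies
    # and is replaced by an offspring of the other two.
    trio = sorted(random.sample(range(len(population)), 3),
                  key=lambda i: quality(population[i]))
    worst, parent_a, parent_b = trio
    population[worst] = crossover(population[parent_a], population[parent_b])

# Toy usage: genomes are floats, quality is the value itself, crossover averages.
pop = [random.random() for _ in range(10)]
for _ in range(200):
    tournament_step(pop, quality=lambda g: g, crossover=lambda a, b: (a + b) / 2)
print(max(pop))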
Table of Contents
Evolutionary Computation and Explainable AI towards ECXAI 2023: a year in review
Jaume Bacardit, Newcastle University, Newcastle upon Tyne, UK
Alexander Brownlee, University of Stirling, Stirling, UK
Stefano Cagnoni, University of Parma, Parma, Italy
Giovanni Iacca, University of Trento, Trento, Italy
John McCall, Robert Gordon University, Aberdeen, UK
David Walker, University of Plymouth, Plymouth, UK
Abstract. At GECCO 2022, we organized the first Workshop on Evolutionary Computation and Explainable AI (ECXAI). With no pretence at completeness, this paper briefly comments on its outcome, on what has happened recently in the field, and on our expectations for the near future and, in particular, for the upcoming second edition of ECXAI at GECCO 2023.
Keywords: Explainable Artificial Intelligence · Evolutionary Computation and Optimisation · Machine Learning.
Explainable AI: facts and motivations
The increasing adoption of black-box algorithms, including some Evolutionary Computation (EC)-based methods, has led to greater attention to generating explanations and to making them accessible to end users. This has created a fertile environment for the application of techniques from the EC domain for generating both end-user- and researcher-focused explanations. Furthermore, many XAI approaches in Machine Learning (ML) rely on search algorithms – e.g., [10] – that can draw on the expertise of EC researchers.
Important questions that automated decision-making techniques (such as EC and ML) have raised include: (i) Why has the algorithm obtained solutions in the way that it has? (ii) Is the system biased? (iii) Has the problem been formulated correctly? (iv) Is the solution trustworthy and fair?
The goal of XAI and related research is to develop methods to interrogate AI processes and answer these questions. Our position is that, despite the differences in problem formulation (ML vs. optimisation), using or adapting XAI techniques to explain EC-based processes that tackle search problems will improve such methods’ accessibility to a wider audience, increasing their uptake and impact. Beyond this, we posit that EC can play a crucial role in improving state-of-the-art XAI techniques within the wider AI community.
Perhaps the most crucial reason why explainability is important is trust. The research community is already largely convinced of the value of EC approaches and is keen to increase the uptake of EC tools and methods by non-EC experts. Central to this is convincing users that they can trust the solutions that are generated by knowing what makes that solution better than something else, which might be synonymous with knowing why the solution was chosen.
Extending this theme is that of validity. EC methods, and optimisers in general, only optimise the target function. Explaining why a solution was chosen might clarify whether it is solving the actual problem or just exploiting an error or loophole in the problem’s definition, which can lead to surprising or even amusing results [8], but can also simply yield frustratingly incorrect solutions.
EC is stochastic, which makes noise in the generated solutions virtually unavoidable. Thus, another motivation is whether we can explain which characteristics of the solution are crucial: its malleability. This property could be assessed by answering the question: “Which variables could be refined or amended for aesthetic or implementation purposes?”.
Finally, when we define a problem, it is often hard to fully codify all the real-world goals of the system. We want something that is mathematically optimised but also something that corresponds to the problem owner’s hard-to-codify intuition. By incorporating XAI into interactive EC, we could make it easier for the problem owner to interact with the optimiser [16].
The idea to organise a dedicated workshop [2] on the possible interrelationships between EC and XAI was based mainly on these considerations. We aimed to support these research lines and foster a tighter interaction between EC-based and other AI methods. Given the successful outcome of the event, we decided to repeat it in 2023, inviting participants to focus their contributions on two main, complementary issues, namely: (i) how EC can contribute to XAI, and (ii) how XAI can be used to explain EC-based solutions.
A year in review
Several papers were discussed at the ECXAI 2022 workshop. An introductory paper [2] outlined the broad research questions around EC and XAI, as noted above, and gave a brief literature review of recent work. Two papers explored routes for possible explainability in EC: one describing the mining of surrogate models for characteristics such as the sensitivity of the objectives to each variable and inter-variable relationships for bitstring-encoded benchmark functions [13], and the other proposing Population Dynamics Plots to visualize the progress of an EA, allowing the lineage of solutions to be traced back to their origins and providing a route to explaining the behaviour of different algorithms [17].
Other papers explored the use of EC in providing explainability for ML systems, in particular related to the assessment of neural transformers trained on source code [11], the evolution of interpretable restriction policies for pandemic control [4], and the optimisation of explainable rule sets [12] and of Learning Classifier Systems (LCS) through feature clustering [1].
Other recent relevant papers and new trends
Genetic Programming (GP) has long been claimed to produce models that are more explainable than those of other ML methods, owing to its capacity to evolve complex symbolic expressions that intrinsically define their semantics. For the same reasons, GP also has the capacity to be a useful tool for post-hoc explanation of black-box models. These two viewpoints are extensively discussed in [9]. GP is also used in [15] to derive a context-aware approach, which essentially means developing a system that can decompose the main problem into a set of sub-problems (contexts) and find specific solutions to each of them. According to the authors, this approach results in prediction models that are smaller and easier to interpret than those obtained by evolutionary learning algorithms without context awareness.
Recent work has also focused on the use of evolutionary learning to induce decision trees combined with reinforcement learning (RL), both for discrete [5] and continuous action spaces [4]. A similar approach has been applied for the automated analysis of ultrasound imaging data [6], and, more recently, in the domain of multi-agent reinforcement learning (MARL) tasks [3].
One of the most appealing trends currently emerging relates to the use of Quality Diversity (QD) evolutionary algorithms, such as MAP-Elites, to search for a multitude of diverse policies for RL tasks [7].
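To make the QD idea concrete, below is a generic MAP-Elites sketch in Python; it illustrates the general technique, not the method of [7], and the fitness function and behaviour descriptor are hypothetical. The archive keeps, for each behaviour bin, the best solution found so far, so the search returns many diverse, high-quality elites rather than a single optimum.

import random

random.seed(1)

def evaluate(x):
    # Hypothetical task: a fitness value plus a 1-D behaviour descriptor,
    # discretized into 10 bins over [0, 1).
    fitness = -(x - 0.7) ** 2
    descriptor = int(x * 10)
    return fitness, descriptor

archive = {}  # behaviour bin -> (fitness, solution)
for _ in range(1000):
    if archive and random.random() < 0.9:
        # Select a random elite and mutate it...
        parent = random.choice(list(archive.values()))[1]
        x = min(max(parent + random.gauss(0, 0.1), 0.0), 0.999)
    else:
        # ...or occasionally sample a brand-new random solution.
        x = random.random()
    f, d = evaluate(x)
    if d not in archive or f > archive[d][0]:
        archive[d] = (f, x)  # new elite for this behaviour bin

for d in sorted(archive):
    print(d, round(archive[d][0], 3), round(archive[d][1], 3))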
An invitation and concluding remarks
The intersection between XAI and EC is evidently an emerging area, as demonstrated by the steady stream of recent publications and interest in the ECXAI workshop at GECCO 2022. Several approaches, along very different directions, have been reported in the literature since then, and the topic is ripe for cross-fertilisation of ideas between the EC and XAI communities.
On the one hand, using XAI to attempt to explain the behaviour and outcomes of EC techniques seems a viable way to attract attention to these optimisation methods and to present them as a rigorous, robust alternative for solving complex optimisation problems. Using XAI in EC may also be a way to free the field from the excessive use of metaphors, which has been heavily criticized in the past few years [14], by focusing more on the analysis of the algorithms’ functionality.
On the other hand, using EC for developing or augmenting XAI seems another important direction that deserves to be explored in the future. In this sense, the seminal works that have just been published on the use of QD algorithms for interpretable RL, or of EC for interpretable MARL, are encouraging.
We are ready to welcome you as well as your thoughts and suggestions in the upcoming ECXAI workshop at GECCO 2023.
References
- Andersen, H., Lensen, A., Browne, W.N.: Improving the search of learning classifier systems through interpretable feature clustering. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. pp. 1752–1756. GECCO ’22, ACM, New York, NY, USA (2022)
- Bacardit, J., Brownlee, A.E.I., Cagnoni, S., Iacca, G., McCall, J., Walker, D.: The intersection of evolutionary computation and explainable AI. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. pp. 1757–1762. GECCO ’22, ACM, New York, NY, USA (2022)
- Crespi, M., Custode, L.L., Iacca, G.: Towards interpretable policies in multi-agent reinforcement learning tasks. In: Bioinspired Optimization Methods and Their Applications: 10th International Conference, BIOMA 2022, Maribor, Slovenia, November 17–18, 2022, Proceedings. pp. 262–276. Springer, Cham (2022)
- Custode, L.L., Iacca, G.: Interpretable AI for policy-making in pandemics. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. pp. 1763–1769. GECCO ’22, ACM, New York, NY, USA (2022)
- Custode, L.L., Iacca, G.: Evolutionary learning of interpretable decision trees. IEEE Access 11, 6169–6184 (2023)
- Custode, L.L., Mento, F., Tursi, F., Smargiassi, A., Inchingolo, R., Perrone, T., Demi, L., Iacca, G.: Multi-objective automatic analysis of lung ultrasound data from COVID-19 patients by means of deep learning and decision trees. Applied Soft Computing 133, 109926 (2023)
- Ferigo, A., Custode, L.L., Iacca, G.: Quality diversity evolutionary learning of decision trees. arXiv:2208.12758 (2022)
- Lehman, J., Clune, J., Misevic, D.: The surprising creativity of digital evolution: A collection of anecdotes from the evolutionary computation and artificial life research communities (2020), arXiv:1803.03453
- Mei, Y., Chen, Q., Lensen, A., Xue, B., Zhang, M.: Explainable Artificial Intelligence by Genetic Programming: A survey. IEEE Transactions on Evolutionary Computation 27(3), 621–641 (2023)
- Ribeiro, M.T., Singh, S., Guestrin, C.: “Why should I trust you?”: Explaining the predictions of any classifier. In: Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, New York, NY, USA (2016)
- Saletta, M., Ferretti, C.: Towards the evolutionary assessment of neural transformers trained on source code. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. pp. 1770–1778. GECCO ’22, ACM, New York, NY, USA (2022)
- Shahrzad, H., Hodjat, B., Miikkulainen, R.: Evolving explainable rule sets. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. pp. 1779–1784. GECCO ’22, ACM, New York, NY, USA (2022)
- Singh, M., Brownlee, A.E.I., Cairns, D.: Towards explainable metaheuristic: Mining surrogate fitness models for importance of variables. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. pp. 1785–1793. GECCO ’22, ACM, New York, NY, USA (2022)
- Sörensen, K.: Metaheuristics — the metaphor exposed. International Transactions in Operational Research 22(1), 3–18 (2015)
- Tran, B., Sudusinghe, C., Nguyen, S., Alahakoon, D.: Building interpretable predictive models with context-aware evolutionary learning. Applied Soft Computing 132, 109854 (2023)
- Virgolin, M., De Lorenzo, A., Randone, F., Medvet, E., Wahde, M.: Model learning with personalized interpretability estimation (ML-PIE) (2021), arXiv:2104.06060
- Walter, M.J., Walker, D.J., Craven, M.J.: An explainable visualisation of the evolutionary search process. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. pp. 1794–1802. GECCO ’22, ACM, New York, NY, USA (2022)
Thesis Report: Evolutionary Optimization of Decision Trees for Interpretable Reinforcement Learning
Leonardo Lucio Custode, University of Trento, Italy
leonardo.custode@unitn.it
This thesis focuses on the issue of interpretability in machine learning, proposing algorithms to evolve decision trees (DTs) for reinforcement learning (RL) tasks. The dissertation is available at the following link: https://iris.unitn.it/handle/11572/375447.
Interpretability is a growing concern in the machine learning (ML) community. In fact, while there is a plethora of approaches for explainable AI (XAI), these suffer from a fundamental issue: they are approximations of ML models. Therefore, they cannot be trusted in high-risk or safety-critical scenarios.
This is where interpretable AI comes in. Interpretability focuses on the use of small, transparent models for solving ML tasks, which allows researchers and practitioners to delve into the details of the model obtained. This makes it possible to address three major problems:
- Bias: having models that can be understood by humans can help detect and reduce bias caused by the training data.
- Debugging: debugging large ML models is very challenging. Having small, interpretable models helps in understanding bugs in their reasoning.
- Knowledge: solving a problem with large ML models does not give users any additional understanding of how to solve the problem. Using interpretable models allows for further understanding of the problem, which may lead to novel discoveries.
This thesis proposes a general framework that combines evolutionary computation (EC) with reinforcement learning (RL), using a two-level optimization scheme (a simplified sketch follows the list below):
- In the outer loop, we use Grammatical Evolution to evolve decision trees (DTs).
- In the inner loop, i.e., the fitness evaluation phase, we use Q-Learning to learn the values for the leaves, based on the rewards received from the environment.
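To make the two-level scheme concrete, here is a heavily simplified, self-contained Python sketch. This is not the thesis code: the toy environment is hypothetical, the trees are reduced to a list of split thresholds over a one-dimensional state, and Grammatical Evolution is replaced by plain mutation. Only the division of labour mirrors the approach: evolution proposes a tree structure in the outer loop, and Q-Learning fills in the leaf action values during fitness evaluation.

import random

random.seed(0)

class ToyEnv:
    # Hypothetical toy task: reward +1 when the action matches the sign of the state.
    def reset(self):
        self.s = random.uniform(-1.0, 1.0)
        return self.s
    def step(self, action):
        r = 1.0 if (action == 1) == (self.s > 0) else -1.0
        self.s = random.uniform(-1.0, 1.0)
        return self.s, r

N_ACTIONS, ALPHA, EPS = 2, 0.1, 0.1

def leaf_index(thresholds, s):
    # Route a state to a leaf: the genotype here is just a sorted list of
    # split thresholds, a drastic simplification of an evolved decision tree.
    for i, t in enumerate(sorted(thresholds)):
        if s < t:
            return i
    return len(thresholds)

def fitness(thresholds, episodes=30, steps=50):
    # Inner loop: Q-Learning estimates action values in each leaf; the
    # individual's fitness is the total reward collected while learning.
    q = [[0.0] * N_ACTIONS for _ in range(len(thresholds) + 1)]
    env, total = ToyEnv(), 0.0
    for _ in range(episodes):
        s = env.reset()
        for _ in range(steps):
            leaf = leaf_index(thresholds, s)
            if random.random() < EPS:
                a = random.randrange(N_ACTIONS)
            else:
                a = max(range(N_ACTIONS), key=lambda i: q[leaf][i])
            s, r = env.step(a)
            q[leaf][a] += ALPHA * (r - q[leaf][a])  # one-step update (gamma = 0 here)
            total += r
    return total

def mutate(thresholds):
    # Outer-loop variation: perturb one threshold (standing in for Grammatical Evolution).
    child = list(thresholds)
    i = random.randrange(len(child))
    child[i] = max(-1.0, min(1.0, child[i] + random.gauss(0, 0.2)))
    return child

# Outer loop: a simple (mu + lambda)-style evolution of tree structures.
population = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(10)]
for generation in range(20):
    population.sort(key=fitness, reverse=True)
    population = population[:5] + [mutate(random.choice(population[:5])) for _ in range(5)]
print("best thresholds:", sorted(population[0]))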
Compared to existing approaches, this approach yields the following advantages:
- Given that we are using a population-based evolutionary optimizer, we are less likely to get trapped in local optima than classic algorithms for DT induction.
- Since our approach uses Q-Learning to learn the values for the leaves, it is much more efficient than purely evolutionary methods for evolving DTs.

More specifically, our approach decomposes the RL problem into two subproblems:
- The first subproblem is to find a useful decomposition of the state space, which groups states into “clusters” of semantically-similar states.
- The second subproblem, instead, is to find the optimal action for each of the clusters.
We adapt our method to different scenarios:
- When dealing with environments that have “meaningful” inputs (i.e., variables that completely describe a property of the environment, such as velocity, angle, etc.) and discrete action spaces, the approach described above works very well, often outperforming deep neural networks while having much better interpretability.
- When moving to environments with continuous action spaces, however, Q-Learning becomes infeasible. To solve this issue, we leverage a co-evolutionary approach that translates a continuous RL problem into a discrete RL problem.
- When dealing with environments that do not have directly “meaningful” inputs (for instance, in an image each single pixel typically does not contain enough information to describe a semantic property of the environment), we leverage co-evolutionary optimization to perform, simultaneously, entity recognition (and identification) and planning.
- In multi-agent scenarios, we use a co-evolutionary approach that optimizes a separate DT for each of the agents.
We test our approach also in practical problems, including pandemic control and automated analysis of lung ultrasound data, where we achieve state-of-the-art performance while retaining interpretability.
Finally, when comparing our results to the state of the art on common benchmarks (see Figure 2), we obtain comparable or better scores (i.e., the sum of the cumulative rewards received from the environment; y-axis), while having substantially smaller complexity (x-axis), i.e., better interpretability.


Leonardo Lucio Custode is a postdoctoral researcher at the University of Trento, where he mainly works on federated learning, machine learning, and optimization.
He completed his PhD in 2023; its main topic was the intersection of interpretable AI, reinforcement learning, and evolutionary computation.
Conference Overview: EvoStar 2023
Julia Reuter, Otto von Guericke University Magdeburg, Germany
This year’s EvoStar conference was held on April 12–14 in the beautiful city of Brno, Czech Republic. The organizing team around Lukas Sekanina, with their warm hospitality, did an excellent job of making this conference an unforgettable experience.
For all students, the conference started the evening before the official opening, with a student reception. Just like last year, this event provided a fantastic opportunity to connect with fellow students in an informal setting. The connections established while chatting about research, ideas and personal experiences lasted throughout the main conference days. A big thank you at this point to our coordinator Anna I Esparcia-Alcázar as well as the engaged student affairs team, who prepared entertaining ice-breaker games and the EvoStar quiz. Since many students were attending the conference for the first time, practicing the well-established EvoStar songs in preparation for the conference dinner could not be missed.
EvoStar 2023 welcomed all participants on a sunny day at the building of the Faculty of Information Technology. The venue was indeed well-equipped for the needs of a conference and architecturally integrated into a former monastery.

The conference started off with an opening by SPECIES president Penousal Machado. It was followed by an inspiring plenary talk by Marek Vácha about the origins of evolution and the influence of Gregor Mendel. Pea plants played an important role in this talk, as Mendel experimented with them to model and understand heredity and genetic variation. Of course, researchers working with evolutionary algorithms were an excellent audience for this topic.
Afterwards, everybody swarmed out to the different parallel sessions, including EvoMUSART, EvoCOP, EvoAPPs and EuroGP. As always, the variety of presented talks and research areas fostered an inspiring atmosphere and allowed researchers to expand their horizons beyond their own topics. The best paper sessions of EuroGP and EvoMUSART on that day were well attended.
In the evening, everyone gathered in the hallway, where the poster session took place. Participants had the opportunity to present their progress and ideas in an uncomplicated way to fellow students and experienced researchers, as well as to the well-known “old crocs”. The poster session was full of discussions and intellectual debates, finding common ground as well as disagreements on certain topics, all in a respectful and appreciative manner. Once again, the breadth and depth of the research represented in the EvoStar community became apparent. After a long first conference day, we did not, of course, go to bed, but enjoyed some Czech beers at a local brewery.

On the second day, the sessions continued as usual. Susan Stepney was the first awardee of the Julian F. Miller Award for her outstanding contributions to theory, framework and algorithms for evolutionary computation. In her plenary talk, she discussed the topic of reservoir computing and her vision of tomorrow’s computing, including meta-materials and meta-dynamics.

The talk was followed by a student event, where students were encouraged to share their thoughts, doubts and ideas with “old crocs” from the EvoStar community in an informal format. This Q&A session was a great opportunity to approach our role models and gave space to ask any question, even those that would be a bit awkward to ask during a regular conference day, such as struggles during the early research career or whether they are more of a cat or a dog person (yes, this question was really asked, and answered). The best paper sessions for EvoAPPs and EvoCOP followed.
The highly praised conference dinner at Hotel International Brno closed the second day of EvoStar 2023, with a buffet of various delicious Czech and European dishes and drinks. During the dinner, Mengjie Zhang received an award for his many years of outstanding contributions to the community and was accepted into the circle of “old crocs”. Thanks to the musical talent within our research community, two guitarists accompanied the singing of “Amigos para siempre”, and once again we felt the familiar atmosphere at EvoStar.
The last conference day started off with more session talks from EvoAPPs, EvoCOP and EvoMUSART. I was personally participating in an inspiring yet alarming debate about the influence of AI tools such as DALL-E on the creative process, which included the fundamental discussion about what art is. Is writing prompts to an AI tool really art, and what are the limitations of these tools? These questions gave me lots of food for thought after the conference, and still come up in my mind from time to time.
The conference was closed with a plenary talk from Evelyne Lutton about cooperation and competition in evolutionary computation. She gave various examples where collective intelligence can be used to tackle optimization problems from different application areas, and outlined future research directions. The conference ended with the closing session and the presentation of the awards. Amid tough competition, the winners were:
- EvoAPPs: Under the Hood of Transfer Learning for Deep Neuroevolution (EML joint track). Stefano Sarti, Nuno Lourenço, Jason Adair, Penousal Machado and Gabriela Ochoa
- EvoCOP: A Policy-Based Learning Beam Search for Combinatorial Optimization. Rupert Ettrich, Marc Huber and Günther R. Raidl
- EuroGP: Graph Networks as Inductive Bias for Genetic Programming: Symbolic Models for Particle-Laden Flows (EML joint track). Julia Reuter, Sanaz Mostaghim, Hani Elmestikawy, Fabien Evrard and Berend van Wachem
- EvoMUSART: Using Autoencoders to Generate Skeleton-based Typography. Jéssica Parente, Luis Gonçalo, Tiago Martins, João Miguel Cunha, João Bicker and Penousal Machado

Awardees of the “Outstanding students” prize at the closing session.
Congratulations to all winners and thank you to all the organizers, participants and helpers for creating long-lasting memories during this year’s EvoStar conference.

Julia Reuter is a second-year PhD student at the Department of Computational Intelligence at Otto von Guericke University Magdeburg, Germany. Her research focuses on genetic programming for symbolic regression and its practical application in engineering sciences, such as fluid mechanics and robotics.
Announcements
Library for Evolutionary Algorithms in Python (LEAP)
The team behind the Library for Evolutionary Algorithms in Python (LEAP) is pleased to announce the eighth release (v0.8) of the leap_ec Python package.
LEAP is a general-purpose evolutionary computation package that combines readable and easy-to-use syntax for search and optimization algorithms with powerful distribution and visualization features. LEAP aims to be useful to users in research, education, and industry settings, and its signature is its flexible operator pipeline approach, which makes it easy to concisely express a metaheuristic algorithm’s configuration as high-level code (see Coletti et al., 2020), as in the short sketch below.
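As a taste of this high-level style, here is a minimal sketch using the ea_solve convenience function from leap_ec.simple. The benchmark function and bounds are illustrative only, and the exact signature and defaults should be checked against the v0.8 documentation.

from leap_ec.simple import ea_solve

def spheroid(phenome):
    # Classic spheroid benchmark: the sum of squared genes, to be minimized.
    return sum(x ** 2 for x in phenome)

# Evolve five real-valued genes, each bounded to [-5.12, 5.12];
# ea_solve returns the best solution found.
best = ea_solve(spheroid, bounds=[(-5.12, 5.12)] * 5, maximize=False)
print(best)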
LEAP v0.8 introduces
- support for multi-objective fitness functions (along with an NSGA-II implementation),
- support for evolving constant auxiliary parameters in Cartesian genetic programming, and
- convenience functions for solving max-ones style problems and visualizing the evolving populations.
These features complement LEAP’s existing support for simple EA population models, island models, cooperative coevolution, basic Cartesian genetic programming, and distributed fitness evaluation (the latter having been stress-tested on HPC systems with on the order of 2,500 processors).
LEAP can be installed by simply running pip install leap_ec, and quickstart examples can be found with the API docs at https://leap-gmu.readthedocs.io/
We look forward to continuing to develop this general-purpose EC platform in collaboration with the many projects and classrooms where it is currently being used—we welcome feature requests and collaboration, and invite anyone who is using LEAP or evaluating it for their application to join our Slack group at https://leap-gmu.slack.com/ (just email colettima@ornl.gov and/or escott@mitre.org for an invitation!).
Forthcoming Events
The Genetic and Evolutionary Computation Conference (GECCO 2023)

Lisbon (hybrid), July 15-19, 2023
The Genetic and Evolutionary Computation Conference (GECCO) has presented the latest high-quality results in genetic and evolutionary computation since 1999. Topics include: genetic algorithms, genetic programming, ant colony optimization and swarm intelligence, complex systems, evolutionary combinatorial optimization and metaheuristics, evolutionary machine learning, evolutionary multiobjective optimization, evolutionary numerical optimization, neuroevolution, real-world applications, search-based software engineering, theory, hybrids and more.
17th ACM/SIGEVO Conference on Foundations of Genetic Algorithms (FOGA XVII)
The FOGA series aims to advance our understanding of the working principles behind evolutionary algorithms and related methods, such as ant colony optimization, particle swarm optimization, and simulated annealing. The conference provides a platform for discussing theoretical foundations, mathematical analysis tools, algorithm performance comparison, and benchmarking aspects. It also explores the connections between black-box optimization and machine learning.
The conference offers a unique opportunity to dive into the intriguing world of algorithmic optimization and problem-solving techniques. With a special emphasis on runtime analysis, fitness landscapes, multi-objective optimization, and complexity theory, FOGA fosters the exchange of ideas, advancements, and best practices in these areas.
Supported by ACM/SIGEVO, FOGA 2023 will be hosted by the Hasso Plattner Institute in the beautiful and historical city of Potsdam, Germany. This year, all presentations will be poster presentations, to encourage more discussion among the participants. The conference will take place over three days, from Wednesday, 30 August to Friday, 1 September.
About this Newsletter
SIGEVOlution is the newsletter of SIGEVO, the ACM Special Interest Group on Genetic and Evolutionary Computation. To join SIGEVO, please follow this link: [WWW].
We solicit contributions in the following categories:
Art. Are you working with Evolutionary Art? We are always looking for nice evolutionary art for the cover page of the newsletter.
Short surveys and position papers. We invite short surveys and position papers in EC and EC-related areas. We are also interested in applications of EC technologies that have solved interesting and important problems.
Software. Are you a developer of a piece of EC software, and wish to tell us about it? Then send us a short summary or a short tutorial of your software.
Lost Gems. Did you read an interesting EC paper that, in your opinion, did not receive enough attention or should be rediscovered? Then send us a page about it.
Dissertations. We invite short summaries, around a page, of theses in EC-related areas that have been recently discussed and are available online.
Meetings Reports. Did you participate in an interesting EC-related event? Would you be willing to tell us about it? Then send us a summary of the event.
Forthcoming Events. If you have an EC event you wish to announce, this is the place.
News and Announcements. Is there anything you wish to announce, such as an employment vacancy? This is the place.
Letters. If you want to ask or say something to SIGEVO members, please write us a letter!
Suggestions. If you have a suggestion about how to improve the newsletter, please send us an email.
Contributions will be reviewed by members of the newsletter board. We accept contributions in plain text, MS Word, or LaTeX, but do not forget to send your sources and images.
Enquiries about submissions and contributions can be emailed to gabriela.ochoa@stir.ac.uk
All the issues of SIGEVOlution are also available online at: www.sigevolution.org
Notice to contributing authors to SIG newsletters
As a contributing author, you retain the copyright to your article. ACM will refer all requests for republication directly to you.
By submitting your article for distribution in any newsletter of the ACM Special Interest Groups listed below, you hereby grant to ACM the following non-exclusive, perpetual, worldwide rights:
- to publish your work online or in print on the condition of acceptance by the editor
- to include the article in the ACM Digital Library and in any Digital Library-related services
- to allow users to make a personal copy of the article for noncommercial, educational, or research purposes
- to upload your video and other supplemental material to the ACM Digital Library, the ACM YouTube channel, and the SIG newsletter site
If third-party materials were used in your published work, supplemental material, or video, make sure that you have the necessary permissions to use those third-party materials in your work.
Editor: Gabriela Ochoa
Sub-editor: James McDermott
Associate Editors: Emma Hart, Bill Langdon, Una-May O’Reilly, Nadarajen Veerapen, and Darrell Whitley