Volume 16, Issue 1

newsletter of the ACM Special Interest Group on Genetic and Evolutionary Computation

A sketch of the Non-dominated Sorting Genetic Algorithm, NSGA-II, the most used and cited evolutionary algorithm to date. Image provided by Prof. Kalyanmoy Deb, Michigan State University.


Welcome to the 1st (Spring) 2023 issue of the SIGEvolution newsletter! This issue celebrates the achievements of Prof. Kalyanmoy Deb, who has been named one of the 2022 ACM Fellows, recognized for “Outstanding Contributions that Propel Technology Today”, in particular for his “technical contributions in evolutionary multi-objective optimization algorithms and multi-criterion decision support”. Kalyanmoy kindly agreed to answer our questions and did so vividly and insightfully. We continue with an overview of a recent PhD dissertation entitled Enhancing Evolutionary Algorithm Performance with Knowledge Transfer and Asynchronous Parallelism. We complete the issue with recent announcements and calls for submissions. Please get in touch if you would like to contribute an article to a future issue or have suggestions for the newsletter.
Gabriela Ochoa, Editor

An Interview with Kalyanmoy Deb, 2022 ACM Fellow

Prof. Kalyanmoy Deb has been named one of the 2022 ACM Fellows for “Outstanding Contributions That Propel Technology Today”, in particular for his “technical contributions in evolutionary multi-objective optimization algorithms and multi-criterion decision support.”

Kalyanmoy kindly agreed to answer all our questions, ranging from his seminal contributions to his views and perspectives on evolutionary computation and its role within computing science and artificial intelligence.

Q1: NSGA-II has achieved unprecedented success in both citations and industry uptake. Can you tell us about the history and challenges of developing this milestone algorithm?

Work on evolutionary algorithms (EAs) for multi-objective optimization problems started in the early 90s with the publication of three independent implementations of David Goldberg’s suggestion to include an explicit diversity-preserving operator in a population-based EA: (i) Fonseca and Fleming’s multi-objective genetic algorithm (MOGA) [8]; (ii) Horn, Nafpliotis and Goldberg’s niched Pareto GA (NPGA) [7]; and (iii) our (my student Srinivas’s and my) non-dominated sorting GA (NSGA) [10]. Our NSGA used domination principles to advance the population towards the Pareto-optimal set and a niching operator to maintain diversity among its members. This latter aspect is usually not followed in single-objective EA studies but is an essential operator for multi-objective optimization. A number of other EA methods were also suggested, but they all used dominance-based convergence and a diversity-preserving operator for finding multiple solutions.
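The domination principle mentioned above can be sketched in a few lines of Python. This is purely illustrative (minimization of every objective is assumed) and not taken from any particular implementation:

```python
# Illustrative Pareto-dominance test underlying dominance-based
# EMO algorithms such as NSGA. Minimization is assumed throughout.

def dominates(a, b):
    """Return True if objective vector `a` Pareto-dominates `b`:
    `a` is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

print(dominates((1, 2), (2, 3)))  # True: better in both objectives
print(dominates((1, 3), (2, 2)))  # False: a trade-off, neither dominates
```

A solution that no other population member dominates is called non-dominated; the set of all such solutions approximates the Pareto-optimal set.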

One common aspect of these first-generation multi-objective algorithms is that they did not use any elite-preservation operator, which compromised their performance and was also contrary to Rudolph’s asymptotic convergence proof [9], which required the preservation of elites from one generation to the next. In addition, NSGA required a niching parameter which had to be set appropriately for each new problem, making it difficult to apply.

The algorithmic challenges in coming up with an efficient EA for multi-objective optimization were as follows: (i) an elite-preservation approach, (ii) a parameter-less approach, (iii) a modular approach, and (iv) a computationally fast approach. There were a few other related challenges which we needed to sort out for this purpose. To test an algorithm’s efficiency, we needed test problems with known difficulties. For this purpose, I suggested a variable-scalable test problem suite to test a multi-objective algorithm’s ability to converge to the Pareto set and to maintain diversity in convex, concave, and disjointed Pareto sets, as well as in variably dense search spaces. Besides challenging test problems, we also needed performance metrics to compare two or more sets of solutions. For this purpose, we used existing convergence metrics and proposed new metrics for measuring diversity.
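One simple convergence metric of the kind referred to above is the generational distance: the average distance from each obtained solution to its nearest point on a reference Pareto front. The following sketch is illustrative only; the metrics actually used in the NSGA-II studies differ in their details:

```python
import math

# Illustrative generational-distance metric: average distance from each
# obtained objective vector to the nearest point of a reference front.

def generational_distance(obtained, reference):
    return sum(min(math.dist(p, r) for r in reference)
               for p in obtained) / len(obtained)

# A set containing one point on the reference front and one point
# at unit distance from it has an average distance of 0.5.
print(generational_distance([(0, 1), (1, 1)], [(0, 1), (1, 0)]))  # 0.5
```

A small value indicates good convergence; a separate diversity metric is still needed, since a single well-converged point also scores well here.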

The NSGA-II approach carefully integrated elite preservation with an emphasis on less-crowded non-dominated solutions, in a modular manner and without any additional parameter. The operators were implemented with the least computational effort, even at the cost of somewhat sacrificing the diversity aspect. The NSGA-II paper [4] now has more than 50,000 Google Scholar citations, making it the highest-cited (or one of the highest-cited) journal papers in the EC field.
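The survivor selection just described (elite preservation via non-dominated sorting, with crowding used to break ties in the last accepted front) can be sketched compactly. This is a didactic reimplementation assuming minimization, not the authors' original code:

```python
# Didactic sketch of NSGA-II-style survivor selection (minimization).

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated_sort(pop):
    """Partition indices of objective vectors into fronts F1, F2, ..."""
    fronts, remaining = [], list(range(len(pop)))
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(pop[j], pop[i]) for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

def crowding_distance(pop, front):
    """Larger distance means less crowded, hence preferred within a front."""
    dist = {i: 0.0 for i in front}
    for m in range(len(pop[front[0]])):
        ordered = sorted(front, key=lambda i: pop[i][m])
        dist[ordered[0]] = dist[ordered[-1]] = float("inf")  # keep extremes
        span = pop[ordered[-1]][m] - pop[ordered[0]][m] or 1.0
        for k in range(1, len(ordered) - 1):
            dist[ordered[k]] += (pop[ordered[k + 1]][m] - pop[ordered[k - 1]][m]) / span
    return dist

def select_survivors(pop, mu):
    """Fill the next population front by front; split the last front by crowding."""
    survivors = []
    for front in nondominated_sort(pop):
        if len(survivors) + len(front) <= mu:
            survivors += front
        else:
            d = crowding_distance(pop, front)
            survivors += sorted(front, key=lambda i: -d[i])[: mu - len(survivors)]
            break
    return survivors
```

Note that this naive sort is quadratic per front; the published algorithm uses a faster bookkeeping scheme to achieve its O(MN^2) complexity, and NSGA-II's binary tournament additionally uses the same (rank, crowding) comparison for parent selection.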

I was fortunate to have three brilliant undergraduate students from IIT Kanpur (India) for this study. It was the early 2000s and we had had mixed success in arriving at our desired algorithm. I called the three students (Sameer Agrawal, Amrit Pratap and T. Meyarivan) to my laboratory on a Saturday morning. I still vividly remember that they sat side by side in a row and had a basic EC code in C. I proposed an idea to one student and went to the next to propose a slightly different version. By the time I was done explaining a version to the third student, the first student was ready to show me some results on the first idea. Noticing which test problems were not solved properly, I suggested an update of our previous algorithm. Then, I moved to the next student for his results and update. I remember that in a few iterations we came up with our final NSGA-II algorithm, which solved all the test problems to our satisfaction. Then, we compared the performance of our algorithm with other competing algorithms to prepare our paper. The NSGA-II algorithm was first published at the PPSN conference in Paris in 2000 [3]. Then, a more comprehensive version with its detailed performance appeared in IEEE Transactions on Evolutionary Computation in 2002 [4].

Q2: What do you think are the characteristics or components that made NSGA-II such a success story?

I believe the main features behind NSGA-II’s success are (i) its modularity (for example, you can change the dominance definition by modifying only a small subroutine of the algorithm), and (ii) the absence of any additional parameter. The availability of a working code was another reason for its popularity. I have published a GECCO paper on this aspect [5].

Q3: Is there a real-world application of NSGA-II that you are particularly proud of?

The NSGA series of algorithms has been applied to various problems. Upon a search through the SCOPUS database, I found that about 50% of NSGA applications are in CS and Engineering. The remaining applications come from other fields including astronomy, medicine, economics, social sciences, etc. This is amazing and great for the EC field, as it introduces and popularizes EC among researchers in wide-ranging fields. Having said that, I am particularly proud of our 14-objective land use management study from New Zealand involving the Maori tribe and local government as stakeholders [1].

Our study clearly showed how, starting from an astronomically large search space, 100 excellent trade-off solutions compromising the triple bottom-line objective classes (economic, environmental, and social) could be found by a customized NSGA-II algorithm and analyzed, with the involvement of real stakeholders, to arrive at a single preferred solution. This study won us the “Wiley Practice Prize” from the International Society on Multiple Criteria Decision Making in 2013. While we regularly use the NSGA series of algorithms for various industries, on problems ranging from automobile component design to manufacturing process design, industries also routinely choose the “NSGA-II” option provided in various commercial software packages for their design applications. There are also many innovative NSGA-II applications and extensions achieved by EC researchers.

Q4: Which are your other most important contributions to the field?

Right from the beginning, I was interested in making EAs efficiently applicable to practical problems. I guess this comes from my basic undergraduate training in engineering. In the early 90s, most applications of genetic algorithms (GAs) used binary encodings, even when the problem had real-parameter variables. I used to lose half of the audience in my industry presentations as soon as I explained the real-parameter to binary string mapping. I thought it was not necessary and developed the simulated binary crossover (SBX) and polynomial mutation operators, which operate directly on real-parameter variables. Despite some issues on problems with linked variables, these operators are used as defaults in EA studies, including in commercial software.
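SBX for a single real variable follows the standard published formulation: a spread factor beta is sampled so that children are distributed around the parents in a way that mimics single-point binary crossover. The sketch below is illustrative; the distribution index eta (larger eta keeps children closer to the parents) and the variable-wise application details are the usual choices, not anyone's production code:

```python
import random

# Sketch of Simulated Binary Crossover (SBX) for one real-valued variable.
# eta is the distribution index: larger eta => children closer to parents.

def sbx(p1, p2, eta=15.0, rng=random):
    u = rng.random()
    if u <= 0.5:
        beta = (2.0 * u) ** (1.0 / (eta + 1.0))
    else:
        beta = (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta + 1.0))
    c1 = 0.5 * ((1.0 + beta) * p1 + (1.0 - beta) * p2)
    c2 = 0.5 * ((1.0 - beta) * p1 + (1.0 + beta) * p2)
    return c1, c2
```

A useful property visible directly from the formulas: the two children are symmetric about the parents' mean, so c1 + c2 always equals p1 + p2.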

Another major contribution I have made is in handling constraints. My 2000 paper on parameter-less constraint handling through an EA [2] is simple and intuitive, and, importantly, works on most problems. It is also used as a default in many EA studies today.
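The feasibility rules of [2] can be written as a simple tournament comparator: a feasible solution always beats an infeasible one; two feasible solutions are compared by objective value; two infeasible solutions are compared by total constraint violation alone, with no penalty parameter. A minimal sketch (minimization; variable names illustrative):

```python
# Parameter-less feasibility rules in the spirit of Deb (2000), sketched
# as a binary tournament comparator. f_* are objective values (minimized);
# cv_* are total constraint violations (zero when feasible).

def better(f_a, cv_a, f_b, cv_b):
    """Return True if solution a wins the tournament over solution b."""
    if cv_a == 0 and cv_b == 0:   # both feasible: compare objectives
        return f_a < f_b
    if cv_a == 0 or cv_b == 0:    # exactly one feasible: it wins
        return cv_a == 0
    return cv_a < cv_b            # both infeasible: less violation wins

print(better(5.0, 0.0, 1.0, 0.2))  # True: feasible beats infeasible
```

Note that objective values never need to be compared against violation amounts, which is why no penalty coefficient has to be tuned.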

Besides NSGA-II, I have continuously proposed new and diverse evolutionary multi-objective optimization (EMO) algorithms, such as NSGA-III, which can solve problems with 3 to 15 objectives much better than NSGA-II, objective-scalable test problems, and other problem-specific EMO algorithms. I have extended EMO concepts to various types of optimization problems, including mixed-integer problems, dynamic problems, bilevel (hierarchical) problems, coevolutionary problems, Bayesian optimization problems, neural architecture search problems, etc.

I went the extra mile to collaborate with leading researchers from fields adjacent to EMO, such as the multi-criterion decision-making (MCDM) and operations research (OR) communities. I relocated my family for two years to visit MCDM and OR researchers in Finland and to understand the strengths and weaknesses of their methods vis-a-vis EMO methods. The collaboration allowed me to propose hybrid EMO-MCDM and EMO-OR methods and to introduce EC/EMO ideas to these communities by organizing joint seminars and conferences.

Our recent billion-variable integer linear programming application, motivated by an industrial problem [6], is another EC application I am quite proud of. The customized GA uses a biased initial population, a true recombination operator that mixes parts of solutions from two parents, and a repair operator that fixes constraint violations in offspring solutions. The algorithm follows the principles of an EA and clearly demonstrates its working on versions of the problem ranging from 50,000 to one billion variables, finding a near-optimal solution in polynomial time every time. I believe such a study can be repeated for other problem classes with some innovative thoughts and tricks.

Q5: What are the current open problems, or topics where you think there are opportunities for substantial contributions in our field?

EC has come a long way. We now have some understanding of how the basic EC framework must be modified to address different problem classes. These modifications concern representation and its relationship with genetic operators to create meaningful solutions. Overall, I see several opportunities for substantial contributions from EC. The idea of an automatic and adaptive algorithm generator for a given problem class would be very helpful. This may involve machine learning methods that transfer knowledge from similar problems solved in the past and modify the algorithm as it moves through different parts of the fitness landscape. In the long run, it may involve the inclusion and exclusion of objectives and constraints, the transformation of constraints into objectives and vice versa, reduction in the number of variables, and the use of surrogates. I am thinking here of a “Dream Algorithm” which would incorporate the collective critical research results of the EC field.

Next, I feel we need to go from the usual one-click algorithm runs to interactive EC applications which provide users with the critical findings obtained so far during a run and expect users to validate them and provide further information to make the subsequent search more effective.

We definitely need some key application areas in which we convincingly demonstrate better performance compared to existing non-EC methods. Such algorithms may look different from a vanilla EC approach and use certain problem-specific operators, but they must show the advantage of using a population and/or a recombination operator, and the flexibility and implicit parallelism associated with EC algorithms. Finally, I feel large-scale applications demonstrating the scaling of EC’s performance over three or four orders of magnitude in the number of variables would attract the attention of non-EC researchers and practitioners.

Q6: Do you think the current AI hype is an opportunity or a threat for EC?

I strongly believe that it is an opportunity rather than a threat. I believe AI and ML methods are complementary to EC, rather than competitors of each other. Every ML method needs to optimize an internal model to find an intermediate or final ML solution. Adam is one of the simplest optimization methods one can think of, and it is what comes coded in most ML distributions. We have shown how a population-based optimization strategy with a helper objective (in addition to the accuracy objective) can help find better solutions, say, with lower complexity.

EC and EMO have a lot to contribute to AI/ML. Both fields have grown rather fast. Both fields attract inquisitive students and newcomers with some programming background. We all need to understand the history of what worked and what did not, and take a more scientific and systematic approach rather than proposing half-cooked and not-well-thought-out ideas. For example, I would be interested in developing a competitor to the Adam algorithm which may be a bit more expensive but can avoid local attractors and provide a better solution. Can there be a hybrid algorithm which starts with an EC method and ends with a gradient-based algorithm, thereby having a better global perspective and robustness?
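The hybrid idea raised above can be illustrated with a toy sketch: a short population-based global phase, followed by gradient descent from the best individual found. The objective function, its gradient, and all parameter values below are purely illustrative:

```python
import random

# Toy sketch of an EC-then-gradient hybrid: a population-based global
# phase to escape poor basins, then local gradient refinement.

def f(x):
    return (x - 2.0) ** 2 + 1.0   # illustrative objective (minimum at x = 2)

def grad_f(x):
    return 2.0 * (x - 2.0)

def hybrid(seed=0, pop_size=20, gens=30, steps=100, lr=0.1):
    rng = random.Random(seed)
    pop = [rng.uniform(-10.0, 10.0) for _ in range(pop_size)]
    for _ in range(gens):                         # global, population phase
        pop = sorted(pop, key=f)[: pop_size // 2]  # truncation selection
        pop += [x + rng.gauss(0.0, 1.0) for x in pop]  # Gaussian mutation
    x = min(pop, key=f)
    for _ in range(steps):                        # local, gradient phase
        x -= lr * grad_f(x)
    return x
```

On a multimodal landscape the population phase supplies the global perspective, while the gradient phase supplies the fast final convergence that EC alone lacks.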

Let us also keep looking for innovative EC/EMO opportunities within AI/ML approaches. I have found that a multi-objective approach often produces a completely new and innovative take on different problem-solving scenarios. In such cases, I cannot think of any method more efficient than EMO to address them.

Q7: How do you view the visibility of the EC community in the larger CS community?

I see that EC is getting more and more visible within the larger CS community, mainly due to its flexibility in handling any type of objectives, constraints, and variables. EC is great that way, but CS folks are usually sceptical about a few matters: (i) repeatability across multiple runs, (ii) guaranteed anytime convergence, and (iii) the lack of mathematical principles guiding EC’s search. As is clear from the no-free-lunch (NFL) theorem, we will never have one efficient EC algorithm for every arbitrary problem, hence the need for a “customized” EA for a specific class of problems. As a community, we should start with the most popular problem classes one by one and develop efficient EC methods for each, providing a proper explanation of their working, so CS folks can see which aspects of the EC method are responsible for its better performance compared to their usual solution procedures.

Over 30 years of research and application, EC methods have been satisfactorily and routinely applied to many practical problems. Often, practical success is not enough for a theoretical mind. I believe we should continue with both kinds of research (theoretical and applied), as both help progress the field. Eventually, ideas from one will help the other.

Q8: What advice do you have for the younger generation of researchers in the field?

I believe everyone should follow their own recipe for success. I can only share what worked for me early in my career. Right from the beginning, my research goal was to come up with efficient optimization algorithms for various kinds of search and optimization problems and to evaluate the role of EAs for this purpose. While I did several incremental studies to keep my publication count growing, I also looked at critical issues in my field.

As I already mentioned, I had a problem getting through to practitioners with binary encodings for real parameters, which often arise in practice. So, I thought developing an efficient real-parameter GA would be critical for the field. Then, I realized that constraints, which occur in most practical problems, were handled generically using penalty functions requiring extensive parameter tuning. I wanted to come up with a parameter-less constraint-handling approach that could exploit the population of points. I also realized and experienced first-hand that in practice there is usually more than one important objective, often conflicting with each other, that must be optimized simultaneously. Practitioners are also most often interested in robust solutions with little sensitivity to variable perturbations. I thought of addressing these challenging issues one by one so that EC methods would become more competitive and acceptable to the search and optimization community. I have found that these critical studies stayed active in the field over time and fetched citations for me.

I recommend the same to young researchers. It is relatively easy to do incremental studies and get another publication, but in the long run they will also have only an incremental effect on one’s career. The critical and challenging issues of a field are difficult to crack (if they were easy, others would have solved them already), but identify them and have a go at them. If you succeed, they will stay with you, give you further ideas, and make you famous.


[1] O Chikumbo, E Goodman, K Deb (2012) Approximating a multi-dimensional Pareto front for a land use management problem: A modified MOEA with an epigenetic silencing metaphor, IEEE congress on Evolutionary Computation, 1-9.

[2] K Deb (2000) An efficient constraint handling method for genetic algorithms. Computer Methods in Applied Mechanics and Engineering, 186 (2-4), 311-338.

[3] K Deb, S Agrawal, A Pratap, and T Meyarivan (2000) A fast elitist non-dominated sorting genetic algorithm for multi-objective optimization: NSGA-II. International Conference on Parallel Problem Solving from Nature (PPSN), 849-858.

[4] K Deb, A Pratap, S Agarwal, and T Meyarivan (2002) A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation 6(2), 182-197. Citations: 50,140 (Google Scholar, 27 March 2023).

[5] K Deb (2008) A robust evolutionary framework for multi-objective optimization. Proceedings of Genetic and Evolutionary Computation conference (GECCO-2008), (Atlanta, USA). pp. 633–640.

[6] K Deb and C Myburgh (2017) A population-based fast algorithm for a billion-dimensional resource allocation problem with integer variables. European Journal of Operational Research, 261(2), 460-474.

[7] J Horn, N Nafpliotis, DE Goldberg (1994) A niched Pareto genetic algorithm for multiobjective optimization. Proceedings of the first IEEE conference on evolutionary computation. IEEE world congress on computational intelligence. pp. 82-87.

[8] CM Fonseca, PJ Fleming (1993) Genetic algorithms for multiobjective optimization: formulation, discussion, and generalization. Proceedings of the Fifth International Conference on Genetic Algorithms (ICGA-93) (San Mateo, CA). pp. 416-423.

[9] G Rudolph G. (1998) On a multi-objective evolutionary algorithm and its convergence to the Pareto set. In 1998 IEEE International Conference on Evolutionary Computation Proceedings. IEEE World Congress on Computational Intelligence (Cat. No. 98TH8360). pp. 511-516.

[10] N Srinivas, K Deb (1994) Muiltiobjective optimization using nondominated sorting in genetic algorithms. Evolutionary computation, 2 (3), 221-248.     

Enhancing Evolutionary Algorithm Performance with Knowledge Transfer and Asynchronous Parallelism

Eric O. Scott, George Mason University, Fairfax, Virginia

This two-part PhD thesis (by Eric O. Scott, Advisors: Kenneth A. De Jong and Sean Luke) investigates asynchronous steady-state evolutionary algorithms and evolutionary knowledge transfer, respectively, as ways to address challenges that can make it difficult to apply evolutionary algorithms (EAs) to computationally intensive applications. A full copy of the dissertation can be found via ProQuest.

The first challenge is idle computing resources: while parallel and distributed implementations of EAs are widely used to address applications where fitness evaluations are computationally time-consuming, most traditional implementations of these algorithms use a generational model that tends to leave many CPU resources unused. Asynchronous steady-state EAs (ASEAs) can mitigate this problem, by allowing processors to be immediately filled with jobs as soon as they become available (rather than waiting to synchronize at a generation boundary).

Users of ASEAs have often been concerned, however, about several aspects of their behavior. In particular, it is difficult to predict whether an ASEA will exhibit speedup over a more traditional approach in some applications, and ASEAs can exhibit a bias toward fast-evaluating solutions (instead of high-quality solutions), which has often been cited as a threat to their performance as optimizers. In this thesis, I have proved analytical bounds for predicting the throughput improvement of an ASEA over a generational algorithm in special cases, a result that may help practitioners in planning computationally intensive projects (which often require estimating supercomputer leases in advance). I also provide a detailed empirical analysis of evaluation-time bias in ASEAs, showing that it typically has a very weak effect on an algorithm’s evolutionary trajectory; I use this to claim that “evaluation-time bias is no big deal,” or at least that it should not be used as a reason to avoid using ASEAs. An exception to this rule is population initialization, however: I show empirically that ASEAs are very sensitive to the initialization strategy that is used, and that some common approaches to asynchronously evaluating initial populations have significant drawbacks and should be avoided. From these and other results presented in the thesis (e.g. an analysis of a “quasi-generational” hybrid algorithm), I conclude overall that ASEAs are generally good problem solvers, and that viewing evolution primarily in terms of asynchronous population updates is a natural and viable approach to doing evolutionary computing in highly parallel settings.

Second, while it is a truism that domain-specific knowledge (in the form of specialized operators and representations) is essential to creating high-performing metaheuristics for computationally costly applications, acquiring such knowledge requires costly and highly specialized research that most organizations cannot afford. Evolutionary knowledge transfer (EKT) is one strategy that may help to change this, by allowing algorithms to automatically re-use information from past experience on related problems. If we had algorithms that could learn and generalize from experience and that could transfer and adapt that experience to distantly related problems, the engineering workflow for many EA applications could look quite different in the future.

In many respects, however, evolutionary knowledge transfer is poorly understood. In particular, EKT relies fundamentally on the idea that problem instances are available that share some form of similarity that an algorithm can exploit to achieve better problem-solving ability. But it is difficult to understand what kinds of problem classes can benefit from transfer, and what kinds of transfer strategies will be able to exploit similarities where they exist, with the result that negative transfer (a transfer that harms algorithm performance) is common in applications. In this thesis, I introduce a three-part analysis of theoretical motivations for knowledge transfer, articulating how it can be motivated narrowly by clear examples of similar problems (akin to the identical elements theory in psychology), or by appealing to shared causal principles, or, third, to the more expansive and speculative evolutionary concept of massively multi-task “innovation engines.” Then I present analytical results that define two extremes of EKT’s usefulness: on the one hand, I prove no-free-lunch theorems (NFLTs) for transfer optimization for the first time, and show empirically that they may hold when their assumptions are partly relaxed; on the other hand, I also prove the first favorable runtime result for an EKT algorithm, showing the conditions under which EKT leads to an asymptotic complexity improvement on generalized leading-ones problems. In addition to these results, I present a preliminary study of transfer prediction and many-source knowledge transfer methods, showing that many-source population seeding in particular may be resilient to negative transfer. Algorithms of this kind may offer an effective way to help practitioners find useful source tasks among a larger set of candidate tasks. Finally, I present one of the first studies of representation-learning methods for evolutionary knowledge transfer, showing how a meta-evolutionary approach to learning genotype-phenotype maps can encode and transfer knowledge to new classes of problems, and how EKT with Cartesian genetic programming can implicitly reuse subcomponents while solving multi-task function synthesis tasks.
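Population seeding, the transfer mechanism mentioned above, has a very simple core: a fraction of the target task's initial population is copied from elite solutions of previously solved source tasks, and the remainder is generated randomly to preserve diversity. The sketch below is illustrative only; the seeding fraction and representation are placeholders, not the thesis's experimental setup:

```python
import random

# Illustrative population-seeding transfer: reuse elite solutions from
# source tasks for part of the initial population, fill the rest randomly.

def seeded_population(pop_size, dim, source_elites, frac=0.25, rng=random):
    n_seed = min(int(frac * pop_size), len(source_elites))
    seeds = [list(s) for s in rng.sample(source_elites, n_seed)]
    fresh = [[rng.uniform(-1.0, 1.0) for _ in range(dim)]   # random remainder
             for _ in range(pop_size - n_seed)]
    return seeds + fresh

pop = seeded_population(20, 2, [[0.0, 0.0]] * 10)
print(len(pop))  # 20
```

Keeping the seeded fraction small is one plausible reason this scheme can be resilient to negative transfer: even if every seed is misleading, most of the population is still unbiased random search.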

As an example illustration of the results in the dissertation, the figure below shows the observed performance of evolutionary knowledge transfer among 11 real-valued minimization tasks (left), shown along with the values of a distance metric (right) that combines correlation information about each pair of landscapes with the distance between their global optima. The performance values are reported as a ratio between the area-under-the-curve (AUC) without and with transfer (so values lower than 1.0 indicate successful transfer). In general, the combined distance metric shown here predicts transferability better than either correlation or global-optimum-distance alone (not shown).

Overall, in this dissertation I have sought to address two fundamental problems that are facing evolutionary computation as a field: the problem of managing idle time when scaling to parallel and distributed applications, and the problem of pre-configuring algorithms with useful heuristic knowledge by reusing knowledge from past tasks. I have shown that asynchronous EAs are an effective and promising family of algorithms for solving diverse optimization problems with minimal idle time. And while evolutionary knowledge transfer is a young and more speculative field, I have helped to expand our fundamental understanding of this area with new theoretical and empirical results.

About the Author

Eric “Siggy” Scott is a Lead AI Engineer at MITRE Corporation in McLean, VA, where he directs research into machine learning and optimization for applications that benefit many different government sponsors. Siggy’s publications cover evolutionary algorithms, artificial intelligence, and agent-based modeling, and his current work is focused on use-inspired basic research and academic collaborations that move cutting-edge results into applications that serve the public interest. He completed a PhD in Computer Science at George Mason University in 2022.


Summer School on Automatic Algorithm Design 2023


This one-week program in France, 12-16 June 2023, is designed for young researchers, and provides an opportunity to connect with experienced professionals in the field of automated design of high-performing metaheuristics for single and multi-objective optimization. Don’t miss out on this unique learning experience. Registration is open.


The latest issue of the ACM Transactions on Evolutionary Learning and Optimization (TELO)

Volume 2, Issue 4. December 2022

Editor: Juergen Branke, Manuel López-Ibáñez

Publisher: Association for Computing Machinery, New York, US

Table of Contents

  • Multi-donor Neural Transfer Learning for Genetic Programming. Alexander Wild, Barry Porter. Article No.: 12, pp 1–40. https://doi.org/10.1145/3563043
  • Theoretical and Empirical Analysis of Parameter Control Mechanisms in the (1 + (λ, λ)) Genetic Algorithm. Mario Alejandro Hevia Fajardo, Dirk Sudholt. Article No.: 13, pp 1–39. https://doi.org/10.1145/3564755
  • Explainable Regression Via Prototypes. Renato Miranda Filho, Anísio M. Lacerda, Gisele L. Pappa. Article No.: 14, pp 1–26. https://doi.org/10.1145/3576903

Call for Submissions

2023 “Humies” Awards for Human-Competitive Results

Produced by Genetic and Evolutionary Computation

Important Dates

  • Friday, 2 June 2023: Deadline for entries (consisting of one TEXT file, PDF files for one or more journal or reviewed conference papers, and possible “in press” documentation). Send entries to goodman at msu dot edu
  • Friday, 16 June 2023: Finalists will be notified by e-mail.
  • 30 June 2023: Finalists must submit a 10-minute video presentation to goodman at msu dot edu.
  • 15-19 July 2023 (Saturday – Wednesday): GECCO conference.
  • Monday, 17 July 2023: Presentation session.
  • Wednesday, 19 July 2023: Announcement of awards at the plenary session of the GECCO conference.

About this Newsletter

SIGEVOlution is the newsletter of SIGEVO, the ACM Special Interest Group on Genetic and Evolutionary Computation. To join SIGEVO, please follow this link: [WWW].
We solicit contributions in the following categories:

Art: Are you working with Evolutionary Art? We are always looking for nice evolutionary art for the cover page of the newsletter.

Short surveys and position papers. We invite short surveys and position papers in EC and EC-related areas. We are also interested in applications of EC technologies that have solved interesting and important problems.

Software. Are you a developer of a piece of EC software, and wish to tell us about it? Then send us a short summary or a short tutorial of your software.

Lost Gems. Did you read an interesting EC paper that, in your opinion, did not receive enough attention or should be rediscovered? Then send us a page about it.

Dissertations. We invite short summaries, around a page, of theses in EC-related areas that have been recently discussed and are available online.

Meetings Reports. Did you participate in an interesting EC-related event? Would you be willing to tell us about it? Then send us a summary of the event.

Forthcoming Events. If you have an EC event you wish to announce, this is the place.

News and Announcements. Is there anything you wish to announce, such as an employment vacancy? This is the place.

Letters. If you want to ask or say something to SIGEVO members, please write us a letter!

Suggestions. If you have a suggestion about how to improve the newsletter, please send us an email.

Contributions will be reviewed by members of the newsletter board. We accept contributions in plain text, MS Word, or LaTeX, but do not forget to send your sources and images.

Enquiries about submissions and contributions can be emailed to gabriela.ochoa@stir.ac.uk
All the issues of SIGEVOlution are also available online at: evolution.sigevo.org

Notice to contributing authors to SIG newsletters
As a contributing author, you retain the copyright to your article. ACM will refer all requests for republication directly to you.

By submitting your article for distribution in any newsletter of the ACM Special Interest Groups listed below, you hereby grant to ACM the following non-exclusive, perpetual, worldwide rights:

  • to publish your work online or in print on the condition of acceptance by the editor
  • to include the article in the ACM Digital Library and in any Digital Library-related services
  • to allow users to make a personal copy of the article for noncommercial, educational, or research purposes
  • to upload your video and other supplemental material to the ACM Digital Library, the ACM YouTube channel, and the SIG newsletter site

If third-party materials were used in your published work, supplemental material, or video, make sure that you have the necessary permissions to use those third-party materials in your work.

Editor: Gabriela Ochoa

Sub-editor: James McDermott

Associate Editors: Emma Hart, Bill Langdon, Una-May O’Reilly, Nadarajen Veerapen, and Darrell Whitley