SIGEVOlution

newsletter of the ACM Special Interest Group on Genetic and Evolutionary Computation

Example evolved robots in hardware and simulation, image from the article in this issue entitled "Towards the Autonomous Evolution of Robot Ecosystems".

Editorial

Welcome to the Spring 2022 issue of the SIGEvolution newsletter! We start with an overview of a collaborative project with an ambitious long-term research vision: "a technology enabling the evolution of entire autonomous robotic ecosystems that live and work for long periods in challenging and dynamic environments without the need for direct human oversight". We continue with a lively account, through the eyes of two energetic PhD students, of the workshop "Benchmarked: Optimization Meets Machine Learning 2022". We finish the issue with the usual announcements of forthcoming events. Please get in touch if you would like to contribute an article for a future issue or have suggestions for the newsletter.

Gabriela Ochoa, Editor.

Towards the Autonomous Evolution of Robot Ecosystems

Emma Hart, Edinburgh Napier University, Scotland, UK

Introduction

Mobile robots are increasingly part of the fabric of everyday life, with applications that range from maintaining underwater infrastructure, through increasing productivity in manufacturing and warehouse logistics, to cleaning our homes. For most of these tasks, robots are designed and programmed by human engineers. While the engineers may not have complete knowledge of every situation or task the robots will come across, the robots operate in environments that are to a large extent predictable. It is therefore possible to design bodies with appropriate sensors and actuators, and to program controllers that provide some level of adaptive behaviour. But what if one wants to send a robot to operate in an environment whose characteristics are unknown, or that is inaccessible or dangerous for humans? To clean up inside a nuclear reactor, for example, to mine minerals at the bottom of the ocean, or to explore an asteroid whose surface is unmapped? If a technology could be produced that could design and fabricate robots in-situ, not only could this avoid wasted effort, it could lead to robots uniquely adapted to their environment.

This is exactly the challenge addressed by the Autonomous Robot Evolution (ARE) project – a collaboration between researchers at Edinburgh Napier University, University of York, Bristol, Sunderland and VU Amsterdam. The project focuses on a disruptive robotic technology where robots are created, reproduce and evolve in real-time and real space. Its long-term vision is a technology enabling the evolution of entire autonomous robotic ecosystems that live and work for long periods in challenging and dynamic environments without the need for direct human oversight. Our proposed approach (dubbed the EvoSphere, see figure 1) is based on state-of-the-art 3D printing techniques with novel materials and a hybridised hardware-software evolutionary architecture. It addresses current weaknesses in robot design methodology by establishing self-reproducing robots that evolve their morphologies and controllers in real-time. Employing evolution offers a) fit-for-purpose designs that are hard to obtain by conventional engineering and optimisation methods, b) “out-of-the-box” solutions that are unlikely to be conceived by human experts, and c) the opportunity to adapt to unforeseen or changing situations on-the-fly.

Figure 1. The envisioned EvoSphere. Step 1: robots are designed and fabricated by evolutionary processes. Step 2: new robots undergo individual learning in a simple environment. Step 3: robots deemed viable are released into their environment and acquire a fitness score which is used to drive the evolutionary process.

Background

To realise the vision outlined above, any evolutionary process must be able to simultaneously evolve both the body and brain: while a naïve approach might simply consider adapting a brain to a newly evolved body design, it has been clear for some years that intelligence is not simply a property of the brain but also of the body. In their seminal text of 2006, Pfeifer and Bongard argued that intelligence is at once tightly constrained and enabled by the body, and influenced by both its morphology and its material properties. Hence any evolutionary process must jointly consider body and brain in order to produce an effective design. There are large potential payoffs in such an approach: devolving some intelligence to the morphology can reduce the need for complexity in the brain, and evolution offers the ideal tool to determine where along this spectrum the optimal balance lies by simultaneously evolving body and brain.

This challenge has been readily embraced by the evolutionary robotics community. In 2000, Lipson and Pollack published the first work showing that small robots constructed from bars connected by free joints could be evolved in simulation and then built in hardware post-evolution. A decade later, the same group demonstrated that the same principles could be applied in a vastly expanded design space that exploited a continuous distribution of hard and soft materials, where actuation was achieved via volumetric expansion or contraction of the materials themselves. In 2020, a very similar approach by Kriegman et al. resulted in the ground-breaking demonstration of xenobots: robots evolved in simulation and manufactured post-simulation using living stem cells. However, while each of these examples represents a noteworthy landmark in the field, they all share two shortcomings. Firstly, none of the robots just described have sensors: although capable of directed motion, their inability to sense the environment results in an open-loop form of control that prevents the use of feedback for self-correction of behaviour. Secondly, fabricating post-evolution introduces a reality gap: infamous in robotics, this results from inevitable differences between simulation and reality, regardless of the fidelity of the simulator.

This can be avoided by bypassing the simulation stage altogether and evaluating evolved designs directly in hardware. This vision was first outlined in 2012, and has been greatly facilitated by advances in 3D printing and automation that enable rapid fabrication and autonomous assembly. In 2015, Brodbeck et al. demonstrated that robot evolution could be achieved completely in hardware: modular designs evolved in a computer were built by a 'mother robot' directly in hardware, which then observed the behaviour of each new robot using a camera system in order to assign a fitness score. An alternative approach was proposed by Eiben et al. in 2016, in which physical robots co-exist and interact in a real-world environment. The evolutionary process is decentralised: reproduction operators act on the genomes of robots that come within a pre-defined distance of each other and trigger the production of offspring.

ARE builds on this previous research, using a centralised evolutionary process and fully automated reproduction via a bespoke facility called the RoboFab (figure 2). Furthermore, the design space is considerably enriched: moving away from modular designs to an algorithm that can design a robot skeleton of any shape, 3D printed in plastic, with a varied array of sensors and actuators available, including both limbs and joints. There are considerable engineering challenges associated with designing a fully functioning RoboFab; these are discussed in detail in other articles. Here, we provide an overview of the aspects associated with the design and implementation of the evolutionary mechanisms.

Figure 2. The "RoboFab": the skeleton designs produced by the evolutionary process are 3D printed, after which the automated assembly arm adds a Raspberry Pi to act as the brain, plus sensors and actuators as determined by the genome, taken from a bank of components. The components are engineered to facilitate easy attachment to the skeleton. The assembly arm also wires the components to the power supply and brain.

Evolution

The EvoSphere evolves robots using a genome that specifies both body and brain, where brains are neural controllers. Separate encodings are used for each part. Bodies are represented using a compositional pattern-producing network (CPPN): this is an indirect encoding commonly used in robotics as it is capable of generating repeating patterns, making it useful for evolving symmetry. An indirect encoding is also necessary for the brain: a neural controller cannot be directly specified on the genome, as its number of inputs and outputs has to match the body plan generated by the CPPN in the body part of the genome. Hence, a second CPPN is encoded on the genome to generate the weights of a controller with an appropriate structure (see figure 3).
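To make the idea of a CPPN concrete, the toy sketch below (purely illustrative, not the ARE codebase: real CPPNs evolve their own topology and weights, e.g. via NEAT) queries a fixed CPPN-like function network over a 2D grid, producing a body pattern whose symmetry and repetition arise directly from the node functions chosen.

```python
import math

# Toy CPPN: a fixed, tiny function network mapping (x, y) coordinates
# to a "material present" decision. All functions and weights here are
# hand-picked for illustration.
def cppn(x, y):
    h1 = math.sin(3.0 * x)          # periodic node -> repeating pattern
    h2 = math.exp(-(x * x + y * y)) # Gaussian node -> symmetry about the origin
    return 0.8 * h1 * h1 + 1.2 * h2 # weighted combination of node outputs

def generate_body(resolution=9, threshold=0.5):
    """Query the CPPN over a grid; material is placed where output > threshold."""
    grid = []
    for j in range(resolution):
        row = []
        for i in range(resolution):
            # Map grid indices to coordinates in [-1, 1].
            x = 2.0 * i / (resolution - 1) - 1.0
            y = 2.0 * j / (resolution - 1) - 1.0
            row.append(1 if cppn(x, y) > threshold else 0)
        grid.append(row)
    return grid

body = generate_body()
for row in body:
    print("".join("#" if v else "." for v in row))
```

Because every node function used here is even in x, the generated pattern is automatically left-right symmetric without symmetry ever being encoded explicitly, which is exactly the property that makes CPPNs attractive for morphology evolution.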

Figure 3. The genome describes two CPPNs. One is used to generate the body, during which a “manufacturability” check occurs to ensure that the resulting shape can be printed and assembled. The second CPPN is used to generate the weights of a neural network with a structure that corresponds to the new body in terms of the number of inputs and outputs. (CPPN figure from Stanley, 2006.)

However, this also challenges the evolutionary process: while reproduction between two morphologically distinct parents might result in a viable body plan, a directly inherited controller is at best unlikely to provide adequate control. One way to address this is to add a learning cycle into the evolutionary loop. This can either improve an inherited controller over an individual's lifetime – when the inherited controller has an appropriate structure – or even learn a new controller from scratch. We propose an additional mechanism that can bootstrap the learning phase from a repository of previously discovered high-performing controllers: good controllers discovered in the evolutionary process are stored in an external repository, sorted by their 'species', where a species is defined by a simple descriptor denoting (e.g.) the number of wheels, joints, and sensors of each type on a robot. Whenever a new body is created, if there is already a controller in the repository for that species, then the learning process is bootstrapped using the controller from the repository, which has been shown to considerably speed up learning. (Note, however, that many robots can match the same species descriptor, as the descriptor does not account for the skeleton shape or the positioning of attached components.) The repository is continually updated as more species are discovered.
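The repository mechanism can be sketched in a few lines: controllers are keyed by a coarse species descriptor, and a new robot's learner starts from the best stored controller whenever its species is already known. All names and fields below are illustrative, not the ARE API.

```python
from dataclasses import dataclass

# Hypothetical species descriptor: coarse counts only, ignoring skeleton
# shape and where components are attached (so many robots share a species).
@dataclass(frozen=True)
class Species:
    wheels: int
    joints: int
    sensors: int

@dataclass
class Entry:
    controller: list   # stand-in for neural-network weights
    fitness: float

class ControllerRepository:
    def __init__(self):
        self._best = {}  # Species -> Entry

    def update(self, species, controller, fitness):
        """Keep only the best controller seen so far for each species."""
        current = self._best.get(species)
        if current is None or fitness > current.fitness:
            self._best[species] = Entry(controller, fitness)

    def bootstrap(self, species, random_init):
        """Start learning from a stored controller if the species is known,
        otherwise fall back to a fresh initialisation."""
        entry = self._best.get(species)
        return entry.controller if entry else random_init()

repo = ControllerRepository()
repo.update(Species(4, 2, 3), controller=[0.1, -0.3, 0.7], fitness=12.5)
repo.update(Species(4, 2, 3), controller=[0.2, 0.1, 0.4], fitness=9.0)  # worse: ignored
start = repo.bootstrap(Species(4, 2, 3), random_init=lambda: [0.0, 0.0, 0.0])
print(start)  # prints the best stored controller: [0.1, -0.3, 0.7]
```

Using a frozen dataclass makes the descriptor hashable, so it can serve directly as a dictionary key; a previously unseen species simply falls back to the random initialiser.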

As shown in figure 4, the system thus contains nested evolutionary and learning algorithms. Specifically, we use NEAT to evolve the CPPN for the body-part, and a novelty-driven increasing population evolutionary strategy (NIP-ES) to implement the learning algorithm.
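Schematically, the nested loops can be sketched as follows. A toy hill-climber stands in for both NEAT (outer loop) and NIP-ES (inner loop), and lists of floats stand in for bodies and controllers; the objective and every name here are illustrative only.

```python
import random

random.seed(1)

def task_fitness(body, controller):
    # Toy objective rewarding a controller tuned to its body (max 0).
    return -sum((b - c) ** 2 for b, c in zip(body, controller))

def learn(body, controller, steps=30, sigma=0.1):
    """Inner loop: improve the controller for a fixed body
    (stands in for NIP-ES lifetime learning)."""
    best, best_f = controller, task_fitness(body, controller)
    for _ in range(steps):
        cand = [c + random.gauss(0, sigma) for c in best]
        f = task_fitness(body, cand)
        if f > best_f:
            best, best_f = cand, f
    return best, best_f

def evolve(generations=20, pop_size=8, n=4):
    """Outer loop: evolve bodies; each new body undergoes an inner
    learning phase before its fitness drives selection."""
    pop = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(pop_size)]
    best_f = float("-inf")
    for _ in range(generations):
        scored = []
        for body in pop:
            ctrl, f = learn(body, [0.0] * n)   # naive controller start
            scored.append((f, body, ctrl))
        scored.sort(reverse=True, key=lambda t: t[0])
        best_f = max(best_f, scored[0][0])
        parents = [b for _, b, _ in scored[: pop_size // 2]]
        pop = [[g + random.gauss(0, 0.2) for g in random.choice(parents)]
               for _ in range(pop_size)]
    return best_f

fit = evolve()
```

The key structural point is that fitness is assigned only after the inner learning phase, so the outer loop selects for bodies that are easy to learn to control, not merely bodies that perform well with an inherited controller.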

Figure 4. Nested loops of evolution and learning: the genome evolved in the outer loop is improved via an inner loop in which a learning algorithm is employed. We use a novelty-driven increasing population evolutionary strategy (NIP-ES) as the learner.

ECOsystem Manager

Evolution in hardware is slow, mainly due to the time needed to print and assemble each robot. Hence, in addition to running evolution in hardware, we simultaneously run a simulated version of the EvoSphere: this can be used to quickly explore the space of potentially useful robots, identifying promising designs to be added to the physically evolving population. At the same time, a more exploitative EA runs in a much smaller physical population, mitigating the infamous reality gap. Robots can be transferred directly from the virtual to the physical world – or even vice versa. This also facilitates a novel form of reproduction in which offspring can be produced from a virtual mother and a physical father (see figure 5). A human operator can control the entire process: managing the rate of migration of robots from one system to the other, acting as a 'breeder' in selecting robots for reproduction, or altering the parameters of the evolutionary algorithms. Examples of some of the robots evolved are shown in figure 6.
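The dual-population idea can be sketched as a manager holding a large virtual population and a small physical one, with migration and cross-world reproduction between them. Genomes are toy lists of floats and every name is hypothetical; the real system's interfaces differ.

```python
import random

random.seed(0)

def crossover(mother, father):
    """Uniform crossover on genomes; identical regardless of which
    world each parent lives in."""
    return [m if random.random() < 0.5 else f for m, f in zip(mother, father)]

class EcosystemManager:
    def __init__(self, virtual_size=50, physical_size=4, n=6):
        # Large, cheap virtual population explores; small, expensive
        # physical population exploits.
        self.virtual = [[random.uniform(-1, 1) for _ in range(n)]
                        for _ in range(virtual_size)]
        self.physical = [[random.uniform(-1, 1) for _ in range(n)]
                         for _ in range(physical_size)]

    def migrate(self, genome, to_physical=True):
        """Transfer a genome between worlds (i.e. fabricate it in
        hardware, or instantiate it in simulation)."""
        (self.physical if to_physical else self.virtual).append(genome)

    def cross_world_offspring(self, add_to=("virtual", "physical")):
        """Offspring of a virtual mother and a physical father may be
        added to either or both worlds."""
        mother = random.choice(self.virtual)
        father = random.choice(self.physical)
        child = crossover(mother, father)
        if "virtual" in add_to:
            self.virtual.append(child)
        if "physical" in add_to:
            self.physical.append(child)
        return child

eco = EcosystemManager()
child = eco.cross_world_offspring(add_to=("physical",))
print(len(eco.physical))  # prints 5: the original 4 robots plus the offspring
```

Because only genomes cross the boundary, the same reproduction operator serves both worlds; the operator-controlled migration rate then becomes a single knob trading exploration in simulation against exploitation in hardware.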


Figure 5. The ECOsystem Manager enables genomes to be transferred between the virtual and physical worlds, as well as facilitating reproduction between a virtual mother and physical father. In this case the offspring can be added to either or both worlds.

Figure 6. Examples of evolved robots. The figure shows two robots assembled in hardware and three different robots in simulation, illustrating the wide variety in morphology that results from evolving using the CPPN representation.

Looking Ahead

The original motivation for this work stemmed from a vision to create a system that could design robots in-situ in hardware – so that they are fit for their desired purpose and avoid the infamous reality gap. However, from another perspective, in the words of the renowned biologist John Maynard Smith, 'until we discover extra-terrestrial life, biologists have only one "system" on which to study evolution'. Just as the Large Hadron Collider provides an instrument to study the intricacies of particle physics, perhaps a reproducing system of robots provides a new instrument to study fundamental questions about life itself. The ARE technology could be used for exactly such a purpose, providing a platform by which to generalise about evolution while simultaneously offering a springboard to explore a rich set of "what-if" questions, freed from the constraints imposed by working with biological matter.

Further resources

Acknowledgements. The ARE project is funded by EPSRC (EP/R03561X, EP/R035733, EP/R035679) and is a collaboration between the University of York (Prof. A. Tyrrell), Edinburgh Napier University (Prof. E. Hart), Bristol Robotics Laboratory (Prof. A. Winfield), University of Sunderland (Prof. J. Timmis), VU Amsterdam (Prof. A.E. Eiben). The vast majority of the work is carried out by Dr Léni Le Goff; Dr Edgar Buchanan; Dr Wei Li; Dr Matt Hale; Mike Angus; Rob Woolley; Matteo De Carlo.

Benchmarked: Optimization Meets Machine Learning 2022

Annelot Bosman, Universiteit Leiden, The Netherlands

Carolin Benjamins, Leibniz University Hannover, Germany

During the 22nd week of this year (30 May - 3 June 2022), we had the honor of being part of the Benchmarked: Optimization Meets Machine Learning workshop held at the Lorentz Center in Leiden, the Netherlands. The workshop was organized by some of the most renowned scientists in optimization and machine learning: Carola Doerr, Mike Preuss, Marc Schoenauer, Thomas Stützle, and Joaquin Vanschoren. The theme of the workshop was benchmarking at the intersection of machine learning and optimization, a very important yet generally overlooked subject, and it was a perfect occasion to bring together the brightest minds in the field to discuss the topic. As beginning PhD students, we were very lucky to participate in this prestigious workshop, and it gave us the opportunity to meet many colleagues in real life for the first time. In this post we'd like to take you along through an educational and very fun week.

Leiden at dusk at the intersection of Rapenburg and Nieuwe Rijn.

Monday

Monday started with us sitting in the main room with 50 people, almost everyone equipped with a coffee mug. Thankfully, each of us got the chance to briefly introduce ourselves, which later provided great conversation starters.

Our day took off with our first invited speaker, Jan van Rijn of Leiden University, who took us on a journey to discover Trustworthy and Reliable AI Systems and Speeding Up Neural Network Verification via Automated Algorithm Configuration. He ended his talk with an open question: how to introduce benchmarks in fast-developing, highly competitive fields?

Afterwards, many of the participants presented their benchmarking tools.

Finally, the official program ended with a Wine and Cheese social, lovingly prepared by the Lorentz Center team. As dusk fell, we took a walk through the city, guided by a local who showed us their favorite places.

At the beach in Katwijk.

Tuesday

Tuesday started bright and early with the invited keynote speaker Pascal Kerschke from TU Dresden on Automated Algorithm Selection Meets Continuous Black-Box Optimization. In the Q&A one thought emerged: Maybe we should start building problem-specific tools and move away from the hammer-hits-all method when doing Algorithm Selection.

The next talk made us see optimization problems from a different angle. Gabriela Ochoa’s talk Synthetic vs. Real-World Instances: A Local Optima Networks View showed that we can compress optimization landscapes into graphs.

The day continued with the first break-out sessions on various benchmarking related topics, such as real-world benchmarking, better benchmarking, benchmarking features and dynamic behavior. The discussions were very lively, due to the contribution of scientists from different backgrounds with very different opinions on the matter, which made the sessions even more valuable.

Our day came to an end at the nearby beach in Katwijk, enjoying the wide empty sands, the murmur of the sea and a satisfying vegetarian BBQ, also organized by the Lorentz Center.

Our meeting room at the Lorentz Center

In Leiden, some windmills still remain along the typical grachts.

Wednesday

This day was reserved for continuing the breakout sessions and for sneaking away in the afternoon to the botanical garden. Some of the attendees enjoyed a nice evening in the city, drinking their favorite beverages by the canals.

Thursday

Thursday, we were treated to a talk by keynote speaker Kate Smith-Miles from the University of Melbourne first thing in the morning. Her talk Instance Space Analysis for Blackbox Optimisation showed us how to analyse the distribution of instances in a test set and determine whether the set is complete and unbiased. Even though it felt quite early for many of us, the presentation was followed by a lively discussion.

After lunch we were all kindly invited to take part in the celebrations around the 25th anniversary of LIACS, the computer science department of Universiteit Leiden. The festivities consisted of talks by, amongst others, the institute manager, presentations on the history of LIACS, cake, Italian food and too many drinks. This informal activity gave us yet another fantastic opportunity to meet and talk with fellow colleagues.


Happy participants on the last day.

Friday

The last day of the workshop started with Aneta Neumann from the University of Adelaide talking about Benchmarking for Advanced Mine Optimization under Uncertainty. In the plenary discussion "Better benchmarking!", all agreed that there is lots of potential to improve and that we are on a good path. We recapitulated the workshop and had a nice lunch in the canteen before we parted ways.

Final Thoughts and Thanks

Many of the attendees we spoke to, us included, enjoyed the workshop very much. We had thorough discussions to see where we, as a community, would like to be in the future. We felt integrated into this supportive community and are looking forward to meeting again and to future collaborations.

We would especially like to thank the workshop organizers Carola Doerr, Marc Schoenauer, Thomas Stützle, Joaquin Vanschoren and Mike Preuss for bringing us together. Special thanks to the Lorentz Center team for their efforts and making our stay enjoyable.

The Authors

Annelot Bosman (left), Universiteit Leiden, The Netherlands

I am a first-year PhD candidate at Universiteit Leiden, supervised by Holger Hoos and Jan van Rijn. My PhD research focuses on robustness verification of deep neural networks. I am part of the European network TAILOR, whose objective is to foster collaborations specifically in the field of safe use of AI.

Carolin Benjamins (right), Leibniz University Hannover, Germany

I am a second-year PhD candidate under the supervision of Prof. Marius Lindauer. A love for automation drives me, and my research interests lie in Contextual Reinforcement Learning and Dynamic Algorithm Configuration. Due to Covid, this was the first in-person workshop I have attended, and I had an amazing time. I left inspired.


Forthcoming Events

Genetic and Evolutionary Computation Conference

GECCO 2022 will take place in Boston, USA, July 9-13, 2022, as a hybrid on-site/online event. GECCO has presented the latest high-quality results in genetic and evolutionary computation since 1999.

Topics include genetic algorithms, genetic programming, ant colony optimization and swarm intelligence, complex systems (artificial life, robotics, evolvable hardware, generative and developmental systems, artificial immune systems), digital entertainment technologies and arts, evolutionary combinatorial optimization and metaheuristics, evolutionary machine learning, evolutionary multiobjective optimization, evolutionary numerical optimization, real-world applications, search-based software engineering, theory and more.

Parallel Problem Solving from Nature

The 17th International Conference on Parallel Problem Solving from Nature (PPSN 2022) will be held in Dortmund, Germany, 10–14 September 2022. The workshops, tutorials and the conference will be held on-site only; there will not be an online component.

PPSN was originally designed to bring together researchers and practitioners in the field of Natural Computing, the study of computing approaches that are gleaned from natural models. Today, the conference series has evolved and welcomes works on all types of iterative optimization heuristics. Notably, we also welcome submissions on connections between search heuristics and machine learning or other artificial intelligence approaches. Submissions covering the entire spectrum of work, ranging from rigorously derived mathematical results to carefully crafted empirical studies, are invited.

About this Newsletter

SIGEVOlution is the newsletter of SIGEVO, the ACM Special Interest Group on Genetic and Evolutionary Computation. To join SIGEVO, please follow this link: [WWW].

We solicit contributions in the following categories:

Art: Are you working with Evolutionary Art? We are always looking for nice evolutionary art for the cover page of the newsletter.

Short surveys and position papers: We invite short surveys and position papers in EC and EC related areas. We are also interested in applications of EC technologies that have solved interesting and important problems.

Software. Are you a developer of a piece of EC software, and wish to tell us about it? Then send us a short summary or a short tutorial of your software.

Lost Gems. Did you read an interesting EC paper that, in your opinion, did not receive enough attention or should be rediscovered? Then send us a page about it.

Dissertations. We invite short summaries, around a page, of theses in EC-related areas that have been recently discussed and are available online.

Meetings Reports. Did you participate in an interesting EC-related event? Would you be willing to tell us about it? Then send us a summary of the event.

Forthcoming Events. If you have an EC event you wish to announce, this is the place.

News and Announcements. Is there anything you wish to announce, such as an employment vacancy? This is the place.

Letters. If you want to ask or to say something to SIGEVO members, please write us a letter!

Suggestions. If you have a suggestion about how to improve the newsletter, please send us an email.

Contributions will be reviewed by members of the newsletter board. We accept contributions in plain text, MS Word, or LaTeX, but do not forget to send your sources and images.

Enquiries about submissions and contributions can be emailed to gabriela.ochoa@stir.ac.uk

All the issues of SIGEVOlution are also available online at: www.sigevolution.org

Notice to contributing authors to SIG newsletters

By submitting your article for distribution in the Special Interest Group publication, you hereby grant to ACM the following non-exclusive, perpetual, worldwide rights:

  • to publish in print on condition of acceptance by the editor

  • to digitize and post your article in the electronic version of this publication

  • to include the article in the ACM Digital Library

  • to allow users to copy and distribute the article for noncommercial, educational or research purposes

However, as a contributing author, you retain copyright to your article and ACM will make every effort to refer requests for commercial use directly to you.

Editor: Gabriela Ochoa

Sub-editor: James McDermott

Associate Editors: Emma Hart, Bill Langdon, Una-May O'Reilly, Nadarajen Veerapen, and Darrell Whitley