SIGEVOlution
newsletter of the ACM Special Interest Group on Genetic and Evolutionary Computation

Julian F. Miller in Newfoundland, Canada, September 2006.
Editorial
Welcome to the first 2022 issue of the SIGEvolution newsletter! We start with a statement of support for Ukraine from the SIGEVO Executive Board. We then dedicate this issue to the memory of Julian F. Miller, who sadly left us on February 14th, 2022, after battling blood cancer. Julian described himself as "the inventor of CGP (Cartesian Genetic Programming) and a pioneer in trying to get materials to solve computational problems (evolution in materio)" (cartesiangp.com). Julian was held in deep regard, as the numerous warm and personal tributes collected in this SIGEVOlution issue reveal. Clearly, he was a source of inspiration with a vision ahead of his time, and his research passion as well as his human qualities made a mark on many. A recollection of Julian's recent work on evolving programs to build neural networks is neatly presented by one of his young collaborators. The article describes a model, which Julian aptly named IMPROBED, an acronym taken from the title of his very last journal paper, "IMPROBED: Multiple PROblem-solving Brain via Evolved Developmental programs".
We continue with an article about open complexity and the long-term evolution of genetic programming. We finish the issue with the usual call for papers and forthcoming events announcements. Please get in touch if you would like to contribute an article for a future issue or have suggestions for the newsletter.
Gabriela Ochoa, Editor.

SIGEVO Statement of Support for Ukraine
Jürgen Branke (SIGEVO vice-chair), University of Warwick, UK.
The ACM SIGEVO Executive Board stands with the people of Ukraine and supports their democratically elected Government and territorial sovereignty. We join institutions and leaders from across the world in condemning in the strongest possible terms the unjustified attack by Russia on Ukraine and the violation of international law.
As scientists, we are also saddened by the oppression of free speech in Russia and highly respect the courage of all Russians who speak up against the war and call for peace. Our thoughts and hopes are with everybody affected by this crisis.
In support of our colleagues in Ukraine, GECCO will waive all registration fees for researchers from Ukrainian institutions.
Tributes to Julian F. Miller (1955 – 2022)
Wolfgang Banzhaf, James A. Foster, Simon Harding, Roman Kalkreuth, Gul Muhammad Khan, W.B. Langdon, Riccardo Poli, Stephen Smith, Susan Stepney, Martin Trefzer and Dennis G. Wilson

Wolfgang Banzhaf
Michigan State University, USA
Julian Miller was a good friend of mine. When he joined EvoNet we quickly hit it off, as he was thinking deeply about evolution and self-organization. I think it helped that we were both educated in physics and could relate to each other's thinking very well. Julian always had a fascinating and deeply moving problem on his mind. Our last interactions were about the idea of the development of a brain, and his recent contributions were in this field. But earlier on, in the 1990s and 2000s, we regularly discussed more general computational developmental systems. At one point, we had the idea of a computational stem cell that could develop into any (computational) behavior possible. We never worked it out in detail, but the idea that complexity has to grow, rather than be created in one go, has remained very important to both him and me.
I am deeply grateful to have met him in this life.

James A. Foster
University of Idaho, Moscow, Idaho, USA
Julian Miller and I knew each other for many, many years. One of the highlights of EvoStar each year, for me, was spending time with my friend.
Julian was best known for his creation and development of Cartesian GP, especially for evolving hardware. CGP evolves descriptions of circuits, in terms of (x,y) coordinates for components and connections. That was clever enough. But what I found amazing about Julian’s CGP work was how meticulously he dissected the process to show how it actually worked. It wasn’t enough for him THAT it worked. He had to know HOW.
Julian’s love for physical reality led him to develop what he called in materio evolution, where one evolves a physical substrate directly. He and his student, Simon Harding, used liquid crystal (LC) displays as substrates for evolving circuits. They attached leads for electrical input and output, then evolved LC configurations directly, selecting for input-output mappings that implemented signal processing circuits. Simon built one of these systems for my own lab when he did a postdoc with me. I enjoyed long nights brainstorming with Julian about other physical systems one might evolve, from steam pipes to quantum dots, as I recall.
Julian was a political activist when he had to be. Once when I was staying with him in York, he told me that the field behind his house was the site of an ancient battle. It was slated for development. Julian led a movement to preserve the archaeology instead.
Julian enjoyed abstraction but was always happier with the tangible. This was one reason he took up sculpting. He showed me pictures of some of his sculptures. Ironically, I never saw one in person—only in the form of digital images. They were abstract, but bursting with emotions like longing and joy. Emotion made physical. Like Julian.
Julian also wrote poetry, in a style uniquely his. Here is a couplet from a sonnet of his titled “You are made of…”. I find the invented words perfectly expressive.
"You have seen the coolmindlight and mathrhyme.
Most here are glumfoolish and sparklenought."
I will miss Julian: scientist, engineer, thinker, artist, friend.

Simon Harding
Machine Intelligence Ltd
I have known Julian since I was an undergraduate working with him on my final-year project – an implementation of Cartesian Genetic Programming. That was quite a few years ago now, but a lot of my life has been shaped by what we did together that spring term at university. In many ways, my entire academic and business life can be traced back to that project. The fact that I am continuing the same piece of R&D today tells you how important Julian's 'crazy' ideas are.
We worked together on many projects over the years, but I think the "evolution in materio" work was the highlight and the most profound. His concept was that the physical properties of the universe could be usefully unlocked using the simplest of algorithms. He marvelled at how all life was created using the same process, and this gave him a deep appreciation for the world around him – which also inspired his artistic interests.
Having toured conferences with Julian many times, I was always amazed by how everyone knew him, everyone liked him, and everyone wanted to hear what he had to say. Julian loved to talk and share his ideas and passions. I remember a joint two-hour session where my 30 minutes was squeezed down to a few minutes in extra time, as Julian had got carried away with the excitement of his part of the presentation. Even though we ran on through the break, the whole audience stayed to hear what Julian had to say.
Never one for the mainstream or group-think, Julian wanted to work on concepts that should change the world – one unconventional idea after another. They certainly changed mine.

Roman Kalkreuth
TU Dortmund University, Dortmund, Germany
Dr Julian Francis Miller was the founder of the well-established graph-based genetic programming variant Cartesian Genetic Programming (CGP). Julian was a great pioneer and a visionary personality in the field of evolutionary computation. His genetic programming approach to learning Boolean functions greatly contributed to the formation of evolvable hardware research. His later work on evolving artificial neural network topologies has also been a great contribution to the field of evolutionary deep learning. Julian received the prestigious EvoStar Award in 2011 for his outstanding contributions to the field of evolutionary computation. Julian was a highly cited author with over 10,000 citations and over 230 publications, and he gave over 50 invited talks.
I will never forget our meeting in a small cafe in the lovely city of Salisbury (England, UK) back in 2016 where we had a great discussion about recent developments in Genetic Programming (GP) while having good tea and cake. At that time, I was on the verge of proposing a new idea for graph recombination and was slightly nervous since recombination was considered a big unresolved issue in CGP.
Despite all the earlier failures of recombination in CGP, Julian encouraged me to follow up on its adaptation. That impressed me about Julian: he never lost his scientific vision and passion. Julian officially retired from the University of York in 2016 but continued his scientific life until his death in February 2022. After his retirement he concentrated on the so-called CGP Brain Project, which has a visionary character:
"For a few years now I have been working on using CGP to evolve two programs that develop a whole neural network and can learn to solve multiple problems simultaneously. One evolved program controls the body of a neuron and the other controls the dendrites. When you run the programs it builds an entire brain-like network which can solve three problems simultaneously" – Julian Miller (Statement on www.cartesiangp.com)
Julian has been a huge inspiration throughout my academic life and for my research in CGP. Overall, CGP research has made great progress over the last two decades and I rejoice in Julian's passionate scientific effort. The evolutionary computation community has lost a great personality.
Thank you, Julian! You will be greatly missed.

Gul Muhammad Khan
University of Engineering and Technology, Peshawar, Pakistan
I remember the 19th of October, 2005. It was my third day as a PhD student at the University of York and, in the corridor of the Electronics department, I met Julian for the first time. We had a short chat that day, and later that weekend we had our first formal meeting; I must confess it was the day all the constraints limiting my thinking were lifted.
He freed my mind to the possibilities. I precisely remember him saying, "If you can think it, if it makes sense, it can be done. If it doesn't make sense, you probably have to break it down, in pieces, until each piece makes sense." And, in his words, "If you don't understand something, go reread; it's out there, you just have to dig deeper."
He learned from nature and the way it organized itself and survived over the years; he said all the secrets lie inside it. I had the chance to work with him: we built neural developmental models inspired by the biological brain and explored their abilities in various game scenarios. We also worked on the implementation of methodologies for the automatic design of artificial neural networks, both feedback and feedforward, and contributed to the field of neuro-evolution. His understanding was that if something doesn't change, it cannot learn; thus the concept of plastic neural networks was born, and exploited for generalization and robustness. He never stopped learning, experimenting and contributing to science; he was a true scientist, complete in all aspects.
Over time we grew attached socially, and I got to know about his family. He was a wonderful son, choosing to stay single for 18 years to dedicate his time to his unwell mother. He talked about his father and his art; he was proud to be the son of a World War hero. He was a caring brother, happy to have met his brother again after so long. If it wasn't for his cancer, he would have continued to contribute to science. We even had a discussion on this new model for neural development just a few weeks before he was hospitalized. We had plans to continue working together. I pray he rests in peace.

Bill Langdon
University College London, London, UK
I was sad to hear of Julian's death. He had been unwell for several years but appeared to have recovered, and I think he enjoyed his retirement from the University of York. As he put it, retirement freed him from the responsibilities of academic life so that he could work full time on important things, like research. He left York and set up home near Salisbury in the south of England, from where he continued to collaborate and to publish his research. Recently I discovered that one of his recent papers was the 15,000th paper on genetic programming.

Riccardo Poli
University of Essex, Colchester, UK
Julian was a wonderful person, a visionary thinker and a friend. We worked in the same department in Birmingham between 1999 and 2001 (when I left for Essex). He worked more on the evolvable hardware side and I more on the genetic programming side, so we never wrote a paper together. However, we co-organised events and special issues, and we were both founding associate editors of the Genetic Programming and Evolvable Machines journal. Julian was such a sweet and interesting person. It was easy for everyone to be friends with him, but we became really close on a flight (probably back from some conference) where we told each other our life histories. With my leaving Birmingham, and a few years later my changing research direction (away from GP and evolutionary computation), we lost track of each other, but remained friends on Facebook. We exchanged a few emails; the last was in September 2020, when he had just discovered that, after fending off leukaemia, he had another blood cancer and would have to undergo many weeks of chemotherapy. I wrote back: "Dear Julian. I'm so sorry to hear about this terrible news. I know you have great inner strength but, oh my God, another blood cancer just after leukaemia! Life is truly unfair... My thoughts are with you. I hope you will be able to fight off this one too. For the next 20 weeks, I'll keep my fingers crossed. Your good friend. Riccardo". I guess Julian put up a good fight as before, but this time the illness had the better of him. Goodbye, my dear friend.

Stephen Smith
University of York, York, UK
In his own quiet, self-assured way, I don't think Julian feared much at all. Not only was he never afraid to voice his own ideas and opinions, but he was fearless in other, more down-to-earth ways as well. I remember fondly a research meeting we attended in Trondheim many years ago (2008?) organised by Pauline Haddow. The venue was a beautiful wood cabin resort in the hills with copious snow. One afternoon we all went cross-country skiing and Julian chose the black route - for experts only - despite not having skied much, if ever at all! When he encountered a steep slope, which the rest of us had bottled out of and stumbled down, he put his head down and went for it. Luckily, he fell over quite quickly, slowing his descent; otherwise, we were told, he would have suffered some considerable injury, as it wasn't a slope that even experts could navigate on cross-country skis!

Susan Stepney
University of York, York, UK
There's a feeling in some parts of academia that to be a good scientist you have to be brutal. Julian proved this to be complete nonsense. He was an excellent scientist – creative, rigorous, unflinching – and one of the nicest people it has been my pleasure to work with. We worked together over the years on several ideas, including "growing" and "gardening" cyber-physical systems, and we co-supervised two PhD students. I was always impressed with his insight and his breadth of understanding, with his quietly joyful out-of-the-box thinking, and also his care for his students, to ensure that they grew into excellent scientists, too.
If all Julian had contributed to science was the now-established field of CGP, that alone would be a worthwhile legacy. But he also founded the now-burgeoning field of in materio computing: computing directly in physical substrates, such as liquid crystals and carbon nanotubes. And in his "retirement" years, he was working on a new evolved developmental brain model, publishing right up to the end. Who knows what else he would have done, had he not been so cruelly struck down.
And, of course, who could forget the acronyms? Projects and proposals delightfully dubbed SEEDS, NASCENCE (and the follow-up, RECONASENSE), IMPROBED, and more: Julian had a way with letters.
I will miss him.

Martin Trefzer
University of York, York, UK
It was my pleasure to have known and worked with Julian over a number of years and across disciplines on “bio-inspired” projects. When remembering Julian, three qualities come to my mind: inspiring, exploring and philosophical.
I met Julian for the first time during my PhD, some 15-20 years ago, when he gave an inspiring talk on his ideas about "a programmable matter array", which later developed into his work on in-materio computing. This was well ahead of today's interest in using novel materials other than silicon for computing.
Julian had a hands-on mentality and he explored initial ideas on in-materio computing using a liquid-crystal display that his PhD student rescued from a pile of electronic recycling. With this maverick setup, they made significant discoveries on the influence of the test setup on the outcome of experiments.
I remember many philosophical discussions on diverse topics over lunch. A particular one that comes to mind is when we talked about evolution, extraterrestrials and the film K-PAX. The film is about Prot, who claims to be from a faraway planet and changes the way his therapist, Dr. Powell, and the audience look at the world:
(Prot) "You know what I've learned about your planet? There's enough life on Earth to fill 50 planets. Plants, animals, people, fungi, viruses, all jostling to find their place, bouncing off each other, feeding off each other. Connected."
(Dr Powell) "You don't have that kind of connection on K-PAX?"
(Prot) "Nobody wants, nobody needs. On K-PAX, when I'm gone, nobody misses me. There would be no reason to. And yet I sense that when I leave here -- I will be missed. Yes. Strange feeling.”

Dennis G. Wilson
University of Toulouse; Toulouse Mind & Brain Institute, Toulouse, France
I was lucky to have worked with Julian over the past six years. We reached out to him for discussions around CGP when I began using it in my thesis, and soon after invited him to Toulouse. Julian was caring, quick, and happy to explain to a young PhD student the various advances in CGP that had been explored and the context around them. Over the course of six years, Julian advised me and advanced our common interests in CGP and developmental neural networks. While he struggled with health issues during this period, Julian persisted and was always positive, inquisitive, and supportive.
Julian had a very clear notion of what was important and interesting for him to study and he was fundamental in advancing those topics. He was excited to see other colleagues advance on the topic of neural development and aimed to cultivate discussion and collaboration on this topic. He was always eager to learn from others, especially biologists whose insights into neural functionality inspired him. He was an example of how a senior researcher, with an established field developed and led by him, could be inspired by a new topic, asking questions and integrating new insights into his model. He was intensely curious about the brain and pursued his research on its intelligence until his last moments. Research in development and structural learning in neural networks is advancing and Julian has been recognized as a pioneer in this domain, well ahead of his time. While that time has now ended, I believe that Julian's ideas will resonate and inspire others for many years to come, as they have inspired me. Farewell, dear friend, and thank you.
Evolving Programs to Build Artificial Neural Networks
Dennis G. Wilson, University of Toulouse; Toulouse Mind & Brain Institute, Toulouse, France
Introduction
The biological brain has inspired many artificial intelligence algorithms, especially artificial neural networks (ANNs). Originally designed to replicate neural functionality [10], these models are now used in a wide variety of applications and have shown impressive ability in computer vision, natural language processing, and control. Despite these significant advances, contemporary ANNs are far removed from their biological sources and do not demonstrate the same ability as biological organisms to generalize to new tasks and new environments [4, 7]. We suggest that the focus on synaptic weight change as the learning mechanism in ANNs is a contributing factor to their limitations in generalization. As neural development, which changes network structure, is an integral part of biological learning, we propose that structural development should be included in learning in ANNs.
Developmental neural networks have not been widely explored in the literature, and there remains a need for a concerted effort to explore a greater variety of effective models. In this article, we summarize a conceptually simple neural model named IMPROBED. The model represents development with two programs that control neural somata and dendrites, respectively. These programs allow somata and dendrites to move, change, die and replicate. The internal logic of these programs is evolved with the goal of developing networks that can solve multiple problems of different types.
Development of the IMPROBED model
This article summarizes previous publications proposing various versions of the IMPROBED model. In this section, we focus on work directly related to IMPROBED. The inspiration for this model can be traced back to a developmental graph genetic programming method [20] used to grow circuits. Cartesian Genetic Programming (CGP) [11, 19] was further used for evolving developmental ANNs in the CGPDN (Cartesian Genetic Programming Developmental Network) model [8, 9]. Miller and Khan [18] offer a summary of the development of this model, which can be seen as a precursor for IMPROBED. Compared to IMPROBED, CGPDN was a rather complex model which used seven independent CGP genomes for controlling interaction, growth, modification, and destruction for axons, dendrites, and neurons.
The first publication of the eventual IMPROBED model was [21]. The main difference from CGPDN was the simplification of the number of programs controlling development to two: a neural program and a dendritic program. As an ANN can be considered a type of graph, we considered it necessary to have different programs controlling the nodes and edges of the graph, represented by the somata and dendrites, but aimed to simplify the model as much as possible. We evaluated networks that developed from the same cellular base for data analysis on three standard classification problems, with final results presented in [22].
Various aspects of IMPROBED were studied in subsequent works. In [13], the utility of neural movement was investigated, along with the use of a direct modification of neural and dendritic values compared to modification through fixed increments. Activity dependence, the integration of neural activity in the development function inputs, was first proposed in [15] and further developed in [16]. The use of neuron position and the inclusion of performance as a neural input were studied in [12]. These modifications will be further detailed below in the presentation of the current version of IMPROBED.
IMPROBED was designed explicitly for the development of networks that can solve multiple problems. The objective of this choice is to orient evolution towards developmental programs that are useful not for specific tasks, but for learning in general; such developmental rules could be applied to train networks on tasks not seen during evolution. In [21] and [22], the tasks used were the cancer, diabetes and glass classification problems from the UCI machine learning repository. However, to encourage the evolution of truly general developmental programs, we introduced a separate family of tasks from the control and reinforcement learning literature; in [12], the IMPROBED model was applied to the problems of double pole balancing and ball throwing. These two tasks are combined with the glass and diabetes problems in [14] and [17], where development is tasked with creating networks for four separate problems from two very different domains. Throughout the development of IMPROBED, the combination of tasks used grew in ambition; few other contemporary works attempt to solve such different problems with the same algorithm [23].
The development of IMPROBED was heavily influenced by other evolutionary and developmental methods, such as L-systems [6], ES-HyperNEAT [24], and the neurogenesis model [2]. While this article does not aim to present the developmental neural model literature, [17] presents a comprehensive overview of this domain. Methods such as [5, 6] which evolve the rules of development, are especially close to IMPROBED.
The IMPROBED model
The IMPROBED model is illustrated in Fig. 1. The soma and dendrite programs are represented using CGP. The programs are sets of mathematical equations that read variables associated with neurons and dendrites and output updates of those variables. In the proposed model, weights are determined from a program that is a function of neuron position, together with the health, weight and length of dendrites. The model is neuro-centric and temporal in nature: the neural networks it builds can change over time.
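As a rough illustration of how a CGP genome can encode such a set of equations, consider the following minimal Python sketch. The encoding and function set here are illustrative choices made for this summary, not the exact representation used in the IMPROBED papers.

import math

FUNCS = [lambda a, b: a + b,          # 0: addition
         lambda a, b: a * b,          # 1: multiplication
         lambda a, b: math.tanh(a)]   # 2: tanh (unary; second input ignored)

def eval_cgp(genome, inputs):
    values = list(inputs)             # node values, starting with the inputs
    for f_idx, a, b in genome:        # each node: (function, input a, input b)
        values.append(FUNCS[f_idx](values[a], values[b]))
    return values[-1]                 # the last node is the program output

# Example genome computing tanh(x0 * x1 + x2):
genome = [(1, 0, 1),                  # node 3 = x0 * x1
          (0, 3, 2),                  # node 4 = node 3 + x2
          (2, 4, 4)]                  # node 5 = tanh(node 4)
print(eval_cgp(genome, [0.5, 2.0, 0.1]))   # ~0.800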

Figure 1: The IMPROBED model. Each neuron has a position, health, bias and a variable number of dendrites. Each dendrite has a position, health and weight. The behaviour of a neuron soma is governed by a single evolved program; in addition, each dendrite is governed by a second evolved program. The soma program decides the new values of the soma variables (position, health and bias) based on their previous values, the averages over all the neuron's dendrites of dendrite health, position and weight, and an external input called the problem type. The latter is a floating-point value that indicates the neuron type. The dendrite program updates dendrite health, position and weight based on their previous values, the parent neuron's health, position and bias, and the problem type. When the evolved programs are executed, neurons can change, die, replicate and grow more dendrites, and their dendrites can also change or die.
The soma program takes as inputs the health, bias and position of the neuron, the average health, length and weight of all dendrites connected to the neuron, and the problem type; as output, the program updates the neuron's health, bias and position. The problem type is a constant (in the range [-1, 1]) which indicates, firstly, whether or not a neuron is an output neuron and, in the case of an output neuron, which computational problem the output neuron belongs to. The motivation behind the problem type input is that, since output neurons are dedicated to a particular computational problem, they should be given information relating to this, so that identical neural programs can behave differently according to the computational problem they are associated with.
Every dendrite belonging to each neuron is controlled by the evolved dendrite program. The inputs to this program are the health, weight and position of the dendrite, the health, bias and position of the parent neuron, and the problem type of the parent neuron. The evolved dendrite program decides how the health, weight and position of the dendrite are updated.
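To make the two interfaces concrete, the following Python sketch spells out the data each program reads and writes, following the description above; the evolved CGP expressions are replaced by placeholders, and all names are our own.

from dataclasses import dataclass, field

@dataclass
class Dendrite:
    health: float
    weight: float
    position: float

@dataclass
class Neuron:
    health: float
    bias: float
    position: float
    problem_type: float                  # constant in [-1, 1]; see text
    dendrites: list = field(default_factory=list)

def soma_program(n, avg_health, avg_position, avg_weight):
    # Placeholder for the evolved CGP soma program: it reads the neuron's
    # own variables, the dendrite averages and the problem type, and
    # returns updated (health, bias, position).
    return n.health, n.bias, n.position

def dendrite_program(d, n):
    # Placeholder for the evolved CGP dendrite program: it reads the
    # dendrite's variables plus the parent neuron's variables and problem
    # type, and returns updated (health, weight, position).
    return d.health, d.weight, d.position

def run_programs(neurons):
    avg = lambda xs: sum(xs) / len(xs) if xs else 0.0
    for n in neurons:
        n.health, n.bias, n.position = soma_program(
            n,
            avg([d.health for d in n.dendrites]),
            avg([d.position for d in n.dendrites]),
            avg([d.weight for d in n.dendrites]))
        for d in n.dendrites:
            d.health, d.weight, d.position = dendrite_program(d, n)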
Development is driven by the health variables of neurons and dendrites. If the soma program raises a neuron's health above a defined threshold NHbirth, the cell will replicate and a new neuron with similar values will be created. Similarly, if the health value falls below a threshold NHdeath, the neuron will be removed. Soma health is also used to trigger the creation of new dendrites based on a threshold DHbirth, while each dendrite's individual health determines its removal if it falls below a threshold DHdeath. While these thresholds are currently set by the user, they could also be evolved with the developmental programs.
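Continuing the sketch above, one possible reading of these health-driven rules is the following; the threshold values are hypothetical stand-ins for the user-set parameters.

import copy

NH_BIRTH, NH_DEATH = 0.8, 0.1        # hypothetical neuron thresholds
DH_BIRTH, DH_DEATH = 0.7, 0.1        # hypothetical dendrite thresholds

def apply_health_rules(neurons):
    next_generation = []
    for n in neurons:
        if n.health < NH_DEATH:
            continue                              # neuron is removed
        if n.health > NH_BIRTH:                   # replicate: new neuron with similar values
            next_generation.append(copy.deepcopy(n))
        if n.health > DH_BIRTH:                   # soma health gates dendrite creation
            n.dendrites.append(Dendrite(health=1.0, weight=0.0, position=n.position))
        n.dendrites = [d for d in n.dendrites if d.health >= DH_DEATH]
        next_generation.append(n)
    return next_generation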
Final ANNs are extracted based on dendrite proximity to presynaptic neurons. Dendrites will snap to the nearest soma to form a connection, as shown at the bottom of Fig. 1 by dotted lines. Individual ANNs are extracted per task, using the input and output neurons associated with that task (yellow versus red). Miller, Wilson and Cussat-Blanc [22] studied the question of sharing inputs and outputs or using discrete sets per task and found that development performed better with discrete sets; however, the final ANNs often share intermediary neurons and connections. This demonstrates that development can create networks that share components to perform efficiently over multiple tasks.
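The extraction step might then look as follows in the same toy representation, with each dendrite snapping to the nearest other soma; per-task ANNs would be read off from the resulting edge list using each task's input and output neurons.

def extract_edges(neurons):
    # Each dendrite connects its neuron to the nearest other soma (the
    # dotted lines in Fig. 1), giving (presynaptic, postsynaptic, weight)
    # triples from which per-task ANNs can be assembled.
    edges = []
    for n in neurons:
        others = [m for m in neurons if m is not n]
        for d in n.dendrites:
            if not others:
                break
            pre = min(others, key=lambda m: abs(m.position - d.position))
            edges.append((pre, n, d.weight))
    return edges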
A fundamental question for IMPROBED is what information to provide to the developmental programs. The use of task performance was studied in [12] and improved results compared to trials without performance as a program input. This is similar to other structural learning methods which use gradient signals to trigger neurogenesis [1], although IMPROBED does not use gradient-based learning. One surprising result is that neural activity did not significantly alter the performance of IMPROBED [16]; in the biological brain, neural activity appears to be an important factor for development [25]. A clear future direction for IMPROBED is refining the developmental program inputs.
Future directions
While this short summary of the recently developed IMPROBED model did not present all technical details of the model, we believe it important to highlight the promising future directions for this approach; interested readers are directed to [17] for a complete technical description. To date, we believe that IMPROBED is the only developmental ANN method that allows for solving multiple tasks, and it does so using a general framework with many opportunities for improvement.
Currently, artificial evolution tends towards programs that rapidly develop networks within the first learning epoch and do not improve in performance afterwards. There are multiple possible causes for this, including task simplicity, weak learning signals, or overly permissive development thresholds. Further study is needed to ensure that learning is integrated with development; possible remedies include providing gradient information to development or limiting growth rates.
Another avenue for exploration is neuron and dendrite initialization. The problem of initialization during neurogenesis is now receiving attention in deep learning [3], and in IMPROBED we have made heuristic choices about initialization which could be better informed using these methods. As learning in IMPROBED is different from backpropagation-based deep learning, finding initialization methods adapted for IMPROBED is a challenging but promising direction.
Finally, IMPROBED has been demonstrated successfully in multi-task learning, even using problems from different domains. This was the original intention of this model and we believe that development is a necessary component for generalization in ANNs. However, other benefits of development remain to be explored. Studying how IMPROBED deals with overfitting, in low data regimes, or during the loss of network components are all worthwhile directions.
It is the hope of the author that IMPROBED inspires further research on this model but more generally in developmental ANNs. Our results have already demonstrated the advantages of development for generalization to multiple tasks and there may be many others. As evolution created the biological brain with moving components that are continuously grown and pruned, we believe artificial evolution can be used to determine developmental programs which do the same for ANNs.
References
[1] X. Dai, H. Yin, and N. K. Jha. NeST: A neural network synthesis tool based on a grow-and-prune paradigm. IEEE Transactions on Computers, 68(10):1487–1497, 2019.
[2] K. L. Downing. Supplementing evolutionary developmental systems with abstract models of neurogenesis. Genetic and Evolutionary Computation Conference, GECCO, pages 990–996, 2007.
[3] U. Evci, M. Vladymyrov, T. Unterthiner, B. van Merriënboer, and F. Pedregosa. GradMax: Growing neural networks using gradient information. arXiv preprint arXiv:2201.05125, 2022.
[4] R. M. French. Catastrophic forgetting in connectionist networks: Causes, consequences and solutions. Trends in Cognitive Sciences, 3(4):128–135, 1999.
[5] F. Gruau. Neural Network Synthesis Using Cellular Encoding and the Genetic Algorithm. PhD thesis, École Normale Supérieure de Lyon, France, 1994.
[6] G. S. Hornby and J. B. Pollack. Creating high-level components with a generative representation for body-brain evolution. Artificial Life, 8(3), 2002.
[7] R. Kemker, M. McClure, A. Abitino, T. Hayes, and C. Kanan. Measuring catastrophic forgetting in neural networks. AAAI Conference on Artificial Intelligence, volume 32(1), 2018.
[8] G. M. Khan, J. F. Miller, and D. M. Halliday. Evolution of Cartesian genetic programs for development of learning neural architecture. Evolutionary Computation, 19(3):469–523, 2011.
[9] M. M. Khan, G. M. Khan, and J. F. Miller. Developmental plasticity in Cartesian genetic programming based neural networks. ICINCO (1), pages 449–458, 2011.
[10] W. S. McCulloch and W. Pitts. A logical calculus of the ideas immanent in nervous activity. The Bulletin of Mathematical Biophysics, 5:115–133, 1943.
[11] J. Miller. An empirical study of the efficiency of learning Boolean functions using a Cartesian genetic programming approach. Genetic and Evolutionary Computation Conference, GECCO, volume 2, pages 1135–1142, 1999.
[12] J. F. Miller. Evolving an artificial brain to solve multiple problems. Artificial Life Conference, ALIFE, volume 19, 2019.
[13] J. F. Miller. Evolving developmental neural networks to solve multiple problems. Artificial Life Conference, ALIFE, pages 473–482. MIT Press, 2020.
[14] J. F. Miller. Evolving developmental neural networks to solve multiple problems. Artificial Life Conference, ALIFE, pages 473–482. MIT Press, 2020.
[15] J. F. Miller. Multiple problem solving with evolved developmental neural networks: activity dependence. Workshop on Developmental Neural Networks (DevANN2020) at ALIFE, 2020.
[16] J. F. Miller. Designing multiple ANNs with evolutionary development: Activity dependence. Genetic Programming Theory and Practice XVIII, pages 165–180. Springer, 2022.
[17] J. F. Miller. IMPROBED: Multiple problem-solving brain via evolved developmental programs. Artificial Life, 27(3–4):300–335, 2022.
[18] J. F. Miller and G. M. Khan. Where is the brain inside the brain? Memetic Computing, 3(3):217–228, 2011.
[19] J. F. Miller and P. Thomson. Cartesian genetic programming. European Conference on Genetic Programming, EuroGP, volume 1802 of LNCS, pages 121–132, 2000.
[20] J. F. Miller and P. Thomson. A developmental method for growing graphs and circuits. International Conference on Evolvable Systems, volume 2606 of LNCS, pages 93–104, 2003.
[21] J. F. Miller and D. G. Wilson. A developmental artificial neural network model for solving multiple problems. Genetic and Evolutionary Computation Conference Companion, pages 69–70. ACM, 2017.
[22] J. F. Miller, D. G. Wilson, and S. Cussat-Blanc. Evolving developmental programs that build neural networks for solving multiple problems. Genetic Programming Theory and Practice XVI, pages 137–178. Springer, 2019.
[23] M. Reid, Y. Yamada, and S. S. Gu. Can Wikipedia help offline reinforcement learning? arXiv preprint arXiv:2201.12122, 2022.
[24] S. Risi, J. Lehman, and K. O. Stanley. Evolving the placement and density of neurons in the HyperNEAT substrate. Genetic and Evolutionary Computation Conference, GECCO, pages 563–570, 2010.
[25] C. L. Waites, A. M. Craig, and C. C. Garner. Mechanisms of vertebrate synaptogenesis. Annual Review of Neuroscience, 28:251–274, 2005.

Evolving Open Complexity
W.B. Langdon, University College London, London, UK
Abstract
Information-theoretic analysis of large evolved programs, produced by running genetic programming for up to a million generations, has shown that even functions as smooth and well behaved as floating-point addition and multiplication lose entropy; consequently deep trees are robust and fail to propagate disruption to their outputs. This means that, although dependent upon the fitness tests, many genetic changes deep within trees are silent. For evolution to proceed at a reasonable rate it must be possible to measure the impact of most code changes, yet in large trees most crossover sites are distant from the root node. We suggest that to evolve very large, very complex programs, it will be necessary to adopt an open architecture where most mutation sites are within 10–100 levels of the organism's environment.
Background
Recently we have been investigating the long-term evolution of genetic programming: firstly, using Poli's submachine-code GP [24, 25] to evolve large binary Boolean trees [7], and more recently exploiting SIMD Intel AVX and multi-core parallelism to evolve floating-point GP [9, 14, 12, 5]. Running for up to a million generations without size limits has generated the biggest programs yet evolved (more than a billion nodes) and forced the development [10, 11] of the fastest GP system (the equivalent of more than a trillion GP operations per second). It has also prompted information-theoretic analysis of programs [16]. (Of course, information theory has long been used with evolutionary computing, e.g. [2].)
One immediately applicable result has been the realisation that in deep GP trees most changes have no impact on fitness, and once this has been proved for a given child, its fitness evaluation can be cut short and its fitness simply copied from the parent. This can lead to enormous speedups [13].
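In its simplest form, this shortcut might be sketched as follows in Python; eval_subtree and full_fitness are hypothetical helpers standing in for the GP system's own evaluator, and this simplified check is ours, not the published algorithm.

def child_fitness(parent, child, changed_node, test_cases,
                  eval_subtree, full_fitness):
    # eval_subtree(tree, node, case) -> value of the subtree rooted at
    # `node` on one test case; full_fitness(tree, cases) -> the usual
    # fitness evaluation. If the new subtree computes exactly the same
    # values as the one it replaced on every test case, the change cannot
    # alter the output, so the parent's fitness is inherited directly.
    if all(eval_subtree(child, changed_node, t) ==
           eval_subtree(parent, changed_node, t) for t in test_cases):
        return parent.fitness
    return full_fitness(child, test_cases)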
We have also considered traditional imperative (human-written) programs and shown these too are much more robust than is often assumed [15, 6, 8]. Indeed, we suggest that information theory provides a unified view of the difficulty of testing software [17, 1, 22].
The question of why fitness is so often exactly inherited [20, 21, 23], despite brutal genetic change, is answered by the realisation that, without side effects, the disruption caused by a mutation must be propagated up the tree to the root node through a long chain of irreversible operations. Each function in the chain can lose entropy. In many cases, deeply nested functions progressively lose information about the perturbation, and the disruption fails to propagate to the program's output. Thus the mutation becomes invisible to fitness testing and its utility cannot be measured. Without fitness selective pressure, evolution degenerates into an undirected random walk.
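A tiny numerical example of this information loss (our own, not taken from the cited papers): when a subtree's output is added to a much larger value higher in the tree, floating-point rounding erases the perturbation entirely.

x = 1e16                    # large value accumulated higher in the tree
print(x + 0.3 == x + 0.4)   # True: changing the subtree's output from 0.3
                            # to 0.4 is rounded away before reaching the root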
In bloated structures, information loss leads, from an evolutionary point of view, to extremely high resilience, robustness and so stasis. From the engineering point of view, this is problematic, as then almost all genetic changes have no impact and evolutionary progress slows to a dawdle.
Since all digital computing is irreversible [18], it inherently loses information and so, without shortcuts, must lead to failed disruption propagation (FDP) [22]. To evolve large complex systems it must be possible to measure the impact of genetic changes, so FDP must be controlled. We suggest in the next section that, to evolve large systems, they be composed of many programs of limited nesting depth, structured to allow rapid communication of both inputs and outputs with the (fitness-determining) environment.

Figure 1: Lung-like open complex evolving system composed of 1300 individual GP programs or functions. These components are placed side by side to form an open structure. The gaps promote short-cut side effects between the functions' inputs and outputs and the environment.
Open Complex System
Can we evolve systems more like cell interiors, with high-surface-area membranes composed of very many small adjacent programs, each of limited depth, placed side by side? The membranes form an open structure with many gaps between them. The gaps themselves support rapid communication (which might be implemented using global memory or enhanced communication links or buses) with little or no processing ability, and consequently little information loss.
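As a toy interpretation of this architecture (ours, with all names hypothetical), the sketch below places many shallow programs side by side around a shared global memory, so that every program sits within a few levels of the environment:

import random

MEMORY = [0.0] * 64                   # the 'gaps': a fast, processing-free medium

def make_shallow_program():
    # Stand-in for one small evolved function of limited nesting depth.
    i, j, k = (random.randrange(64) for _ in range(3))
    return lambda: MEMORY.__setitem__(k, MEMORY[i] + MEMORY[j])

programs = [make_shallow_program() for _ in range(1300)]

def step(inputs):
    MEMORY[:len(inputs)] = inputs     # the environment writes inputs directly
    for p in programs:                # every program runs close to the surface
        p()
    return MEMORY[-8:]                # outputs are read directly by the environment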
Figure 1 shows 1300 programs arranged in an open structure (based on fit Sextic polynomial [4] trees with average height 9.22, quartile range 7–11).
Rich Lenski, in his long-term evolution experiment (LTEE) [19, 3], has demonstrated that Nature can continue to innovate in a static environment for more than 75,000 generations. We have shown GP can do similarly for at least 100,000 generations. However, when evolving deep structures, progress slows dramatically, and therefore we feel monolithic deep structures will not be sufficient to automatically evolve complex systems. Instead, an open structure like Figure 1 may be needed.
References
[1] D. Clark, W. B. Langdon, and J. Petke. Software robustness: A survey, a theory, and some prospects. Facebook Testing and Verification Symposium 2020, 1-3 December 2020.
[2] L. M. Deschaine, P. Nordin, and J. D. Pinter. A computational geometric/information theoretic method to invert physics-based MEC models attributes for MEC discrimination. Journal of Mathematical Machines and Systems, (2):50–61, 2011.
[3] B. H. Good, M. J. McDonald, J. E. Barrick, R. E. Lenski, and M. M. Desai. The dynamics of molecular evolution over 60,000 generations. Nature, 551:45–50, 18 October 2017.
[4] J. R. Koza. Genetic Programming: On the Programming of Computers by Means of Natural Selection. MIT Press, Cambridge, MA, USA, 1992.
[5] W. B. Langdon. Genetic programming convergence. Genetic Programming and Evolvable Machines, 2021.
[6] W. B. Langdon. Software is not fragile. Volume 5, chapter 4.7, pages 100–101. Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik, Dagstuhl, Germany, 2016.
[7] W. B. Langdon. Long-term evolution of genetic programming populations. Genetic and Evolutionary Computation Conference Companion, GECCO '17, pages 235–236, Berlin, 15-19 July 2017. ACM.
[8] W. B. Langdon. Long-term stability of genetic programming landscapes. GECCO 2017 Workshop on Landscape-Aware Heuristic Search, Berlin, 2017. No paper, presentation only.
[9] W. B. Langdon. Parallel GPQUICK. Genetic and Evolutionary Computation Conference Companion, pages 63–64. ACM, 2019.
[10] W. B. Langdon. Fast generation of big random binary trees. Technical Report RN/20/01, Computer Science, University College London, Gower Street, London, UK, 13 Jan. 2020.
[11] W. B. Langdon. Multi-threaded memory efficient crossover in C++ for generational genetic programming. SIGEVOlution newsletter, 13(3):2–4, Oct. 2020.
[12] W. B. Langdon. Fitness first. Genetic Programming Theory and Practice XVIII. Springer, 2021.
[13] W. B. Langdon. Incremental evaluation in genetic programming. European Conference on Genetic Programming, EuroGP, volume 12691 of LNCS, pages 229–246. Springer, 2021.
[14] W. B. Langdon and W. Banzhaf. Continuous long-term evolution of genetic programming. Conference on Artificial Life (ALIFE 2019), pages 388–395. MIT Press, 2019.
[15] W. B. Langdon and J. Petke. Software is not fragile. Complex Systems Digital Campus E-conference, CS-DC'15, Proceedings in Complexity, pages 203–211. Springer, Sept. 30-Oct. 1 2015. Invited talk.
[16] W. B. Langdon, J. Petke, and D. Clark. Dissipative polynomials. Workshop on Landscape-Aware Heuristic Search, GECCO 2021 Companion, pages 1683–1691, Internet, 10-14 July 2021. ACM.
[17] W. B. Langdon, J. Petke, and D. Clark. Information loss leads to robustness. IEEE Software Blog, 12 September 2021.
[18] W. B. Langdon and R. Poli. Mapping non-conventional extensions of genetic programming. Natural Computing, 7:21–43, Mar. 2008. Invited contribution to special issue on unconventional computing.
[19] R. E. Lenski et al. Sustained fitness gains and variability in fitness trajectories in the long-term evolution experiment with Escherichia coli. Proceedings of the Royal Society B, 282(1821), 22 December 2015.
[20] J. Petke, B. Alexander, E. T. Barr, A. E. I. Brownlee, M. Wagner, and D. R. White. A survey of genetic improvement search spaces. GI @ GECCO 2019, pages 1715–1721. ACM, 2019.
[21] J. Petke and A. Blot. Refining fitness functions in test-based program repair. The First International Workshop on Automated Program Repair (APR@ICSE 2020), pages 13–14, Internet, 2020. ACM.
[22] J. Petke, D. Clark, and W. B. Langdon. Software robustness: A survey, a theory, and some prospects. ESEC/FSE 2021, Ideas, Visions and Reflections, pages 1475–1478, 2021. ACM.
[23] C. G. Pimenta, A. G. C. de Sá, G. Ochoa, and G. L. Pappa. Fitness landscape analysis of automated machine learning search spaces. European Conference on Evolutionary Computation in Combinatorial Optimisation (EvoCOP), volume 12102 of LNCS, pages 114–130. Springer, 2020.
[24] R. Poli and W. B. Langdon. Sub-machine-code genetic programming. In L. Spector, W. B. Langdon, U.-M. O'Reilly, and P. J. Angeline, editors, Advances in Genetic Programming 3, chapter 13, pages 301–323. MIT Press, Cambridge, MA, USA, June 1999.
[25] R. Poli and J. Page. Solving high-order Boolean parity problems with smooth uniform crossover, sub-machine code GP and demes. Genetic Programming and Evolvable Machines, 1(1/2):37–56, Apr. 2000.

Call for Submissions
Parallel Problem Solving from Nature (PPSN 2022)
The 17th International Conference on Parallel Problem Solving from Nature (PPSN 2022) will be held in Dortmund (Germany), 10–14 September 2022. The workshops, tutorials and the conference will be held on-site only; participants should take this into account in their planning. The local organizers will provide adequate rooms and technical equipment for local presentations. A hybrid format is not planned. In case of pandemic-caused restrictions, the event will have to switch to a fully online format.
PPSN was originally designed to bring together researchers and practitioners in the field of Natural Computing, the study of computing approaches that are gleaned from natural models. Today, the conference series has evolved and welcomes works on all types of iterative optimization heuristics. Notably, we also welcome submissions on connections between search heuristics and machine learning or other artificial intelligence approaches. Submissions covering the entire spectrum of work, ranging from rigorously derived mathematical results to carefully crafted empirical studies, are invited.
Important Dates
Paper submission (extension): 13 April 2022
Author notification: 6 June 2022
Camera-ready: 25 June 2022
Tutorials/Workshops: 10–11 September 2022
Conference: 12–14 September 2022
Program Chairs: Hernan Aguirre, Pascal Kerschke, Gabriela Ochoa, Tea Tusar

EvoStar 2022
EvoStar 2022 is planned as a hybrid event (online and onsite) from 20 to 22 April 2022. Onsite, EvoStar will be held in Madrid, Spain.
EvoStar 2022 invites submissions of late-breaking abstracts (LBAs) summarising ongoing research and recent studies in all areas of Evolutionary Computation and other nature-inspired techniques. LBAs will be presented as posters during the conference and published online in an indexed repository (arXiv).
At least one author of each accepted work has to register for the conference, attend the conference and present the work (short oral presentation of 10 minutes).
Submission details: Papers will be reviewed by the organising committee to ensure quality and relevance to the conference. Please ease our work by not anonymising your submission. Submit your manuscript in LNCS format using the following link.
Page limit: 4 pages
Submission deadline: 10 April
EvoStar 2022 LBA Chair: Antonio M Mora, University of Granada, Spain
The 11th International Workshop on Genetic Improvement
GI 2022 will be co-located with the Genetic and Evolutionary Computation Conference, GECCO 2022, in Boston, USA during July 9 - 13.
We invite submissions that discuss recent developments in all areas of research on, and applications of, Genetic Improvement. GI is the premier workshop in the field and provides an opportunity for researchers interested in automated program repair and software optimisation to disseminate their work, exchange ideas and discover new research directions.
Important Dates
Paper submission due: 11th April 2022
Workshop: 9 or 10 July 2022
Workshop Chairs: Bobby R. Bruce, Vesna Nowack, Aymeric Blot, Emily Winter, W. B. Langdon, Justyna Petke
The Human-Competitive Awards
Humies 2022 will take place in conjunction with the Genetic and Evolutionary Computation Conference, GECCO 2022, in Boston, USA during July 9 - 13.
Entries are hereby solicited for awards totalling $10,000 for human-competitive results that have been produced by any form of genetic and evolutionary computation (including, but not limited to genetic algorithms, genetic programming, evolution strategies, evolutionary programming, learning classifier systems, genetic improvement, grammatical evolution, gene expression programming, differential evolution, etc.) and that have been published in the open literature between the deadline for the previous competition and the deadline for the current competition.
Important Dates
Friday, May 27, 2022 — Deadline for entries (consisting of one TEXT file, PDF files for one or more papers, and possible "in press" documentation (explained below). Please send entries to goodman at msu dot edu.
Friday, June 10, 2022 — Finalists will be notified by e-mail.
Friday, June 24, 2022 — Finalists not presenting in person must submit a 10-minute video presentation (or the link and instructions for downloading the presentation, NOT a YouTube link) to goodman at msu dot edu.
July 9-13, 2022 (Saturday - Wednesday) — GECCO conference (the schedule for the Humies session is not yet final, so please check the GECCO program as it is updated).
Monday, July 11, 2022 — Presentation session.
Wednesday, July 13, 2022 — Announcement of awards at the plenary session of the GECCO conference.
Judging Committee: Erik Goodman, Una-May O'Reilly, Wolfgang Banzhaf, Darrell Whitley, Lee Spector, Stephanie Forrest
Publicity Chair: William Langdon
Forthcoming Events

EvoStar 2022
EvoStar 2022 is planned as a hybrid event (online and onsite) from 20 to 22 April 2022. Onsite, EvoStar will be held in Madrid, Spain.
EvoStar is comprised of four co-located conferences
EuroGP 25th European Conference on Genetic Programming
EvoApplications 25th European Conference on the Applications of Evolutionary and bio-inspired Computation
EvoCOP 22nd European Conference on Evolutionary Computation in Combinatorial Optimisation
EvoMUSART 11th International Conference (and 18th European event) on Evolutionary and Biologically Inspired Music, Sound, Art and Design
Genetic and Evolutionary Computation Conference
GECCO 2022 will take place in Boston, USA, during July 9-13, 2022. GECCO has presented the latest high-quality results in genetic and evolutionary computation since 1999.
Topics include genetic algorithms, genetic programming, ant colony optimization and swarm intelligence, complex systems (artificial life, robotics, evolvable hardware, generative and developmental systems, artificial immune systems), digital entertainment technologies and arts, evolutionary combinatorial optimization and metaheuristics, evolutionary machine learning, evolutionary multiobjective optimization, evolutionary numerical optimization, real-world applications, search-based software engineering, theory and more.
Important dates
Camera-ready: April 14, 2022
Presenter mandatory registration: May 2, 2022
About this Newsletter
SIGEVOlution is the newsletter of SIGEVO, the ACM Special Interest Group on Genetic and Evolutionary Computation. To join SIGEVO, please follow this link: [WWW].
We solicit contributions in the following categories:
Art: Are you working with Evolutionary Art? We are always looking for nice evolutionary art for the cover page of the newsletter.
Short surveys and position papers: We invite short surveys and position papers in EC and EC related areas. We are also interested in applications of EC technologies that have solved interesting and important problems.
Software: Are you a developer of a piece of EC software, and wish to tell us about it? Then send us a short summary or a short tutorial of your software.
Lost Gems: Did you read an interesting EC paper that, in your opinion, did not receive enough attention or should be rediscovered? Then send us a page about it.
Dissertations: We invite short summaries, around a page, of theses in EC-related areas that have been recently discussed and are available online.
Meeting Reports: Did you participate in an interesting EC-related event? Would you be willing to tell us about it? Then send us a summary of the event.
Forthcoming Events: If you have an EC event you wish to announce, this is the place.
News and Announcements: Is there anything you wish to announce, such as an employment vacancy? This is the place.
Letters: If you want to ask or to say something to SIGEVO members, please write us a letter!
Suggestions: If you have a suggestion about how to improve the newsletter, please send us an email.
Contributions will be reviewed by members of the newsletter board. We accept contributions in plain text, MS Word, or LaTeX, but do not forget to send your sources and images.
Enquiries about submissions and contributions can be emailed to gabriela.ochoa@stir.ac.uk
All the issues of SIGEVOlution are also available online at: www.sigevolution.org
Notice to contributing authors to SIG newsletters
By submitting your article for distribution in the Special Interest Group publication, you hereby grant to ACM the following non-exclusive, perpetual, worldwide rights:
to publish in print on condition of acceptance by the editor
to digitize and post your article in the electronic version of this publication
to include the article in the ACM Digital Library
to allow users to copy and distribute the article for noncommercial, educational or research purposes
However, as a contributing author, you retain copyright to your article and ACM will make every effort to refer requests for commercial use directly to you.
Editor: Gabriela Ochoa
Sub-editor: James McDermott
Associate Editors: Emma Hart, Bill Langdon, Una-May O'Reilly, Nadarajen Veerapen, and Darrell Whitley