Thursday, June 21, 2012

Can artificial music evolve in a Darwinian way?

Natural selection expresses the idea that organisms (i.e. their genes) vary and that variability has consequences. Some variants are unfit and go extinct, others adapt and do well. This process, repeated over millions of years, has given us the variety of life on earth.

Many authors have played with the idea of applying these insights from evolutionary biology to changes in culture, the notion of ‘memes’ being one of them. Richard Dawkins proposed that human culture is composed of a multitude of particulate units, memes, which are analogous to the genes of biological inheritance. These cultural replicators are transmitted by imitation between members of a community and are subject to mutational-evolutionary pressures over time.

This week an interesting study appeared in PNAS (early edition) showing that a simple Darwinian process can produce music. Inspired by cultural transmission theory, the study suggests that the evolution of music can be viewed and analyzed in terms of selection-variation processes, and, as such, may shed light on the evolution of real musical cultures.

The experiment described in the paper works as follows: an algorithm maintains a population of tree-like digital genomes, each of which encodes a computer program. Each genome-program specifies note placement, instrumentation, and performance parameters (with tempo, meter, and tuning system fixed for all loops). Loops periodically replicate to produce new loops. The selective pressure on the generated music comes from a population of consumers who listen to samples of the loops via a Web interface (DarwinTunes) and rate them for their appeal. These ratings then form the basis of a fitness function that determines which loops in a given generation will be allowed to mate and reproduce.

Robert M. MacCallum, Matthias Mauch, Austin Burt, & Armand M. Leroi (2012). Evolution of music by public choice. Proceedings of the National Academy of Sciences. DOI: 10.1073/pnas.1203182109
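The selection-variation loop described above can be sketched in a few lines of code. This is only an illustrative toy, not the DarwinTunes implementation: the real system evolves tree-like genome-programs that render audio loops and is driven by human ratings, whereas here a genome is simply a list of integers standing in for musical parameters, and listener appeal is simulated by a placeholder scoring function. All names and constants are assumptions.

```python
import random

POP_SIZE = 20       # number of loops alive per generation
GENOME_LEN = 8      # toy genome: 8 pitch-class-like integers
MUTATION_RATE = 0.1 # per-gene chance of random change

def random_genome():
    return [random.randint(0, 11) for _ in range(GENOME_LEN)]

def simulated_rating(genome):
    # Placeholder for human listeners rating a rendered loop on a 1-5 scale.
    # Here, loops closer to a fixed "pleasant" pattern score higher.
    target = [0, 4, 7, 0, 4, 7, 0, 4]
    matches = sum(1 for a, b in zip(genome, target) if a == b)
    return 1 + 4 * matches / GENOME_LEN

def crossover(a, b):
    # Two parent loops "mate": one-point recombination of their genomes.
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]

def mutate(genome):
    return [random.randint(0, 11) if random.random() < MUTATION_RATE else g
            for g in genome]

def next_generation(population):
    # Ratings act as the fitness function: fitter loops are more likely
    # to be chosen as parents (fitness-proportional "roulette" selection).
    ratings = [simulated_rating(g) for g in population]
    parents = random.choices(population, weights=ratings, k=2 * POP_SIZE)
    return [mutate(crossover(parents[2 * i], parents[2 * i + 1]))
            for i in range(POP_SIZE)]

population = [random_genome() for _ in range(POP_SIZE)]
for generation in range(50):
    population = next_generation(population)

print(max(simulated_rating(g) for g in population))
```

Because the only selective force is the rating function, the population drifts toward whatever the raters reward, which is exactly the property the first commenter below questions: the process optimizes toward pre-existing tastes rather than generating genuinely novel ones.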

(See also earlier blog entry).


  1. I found their work very interesting, but it fits more into computational evolutionary design than into evolutionary studies of culture. The major reason is that their process of evolution was too teleological to be analogous to natural evolution, even for memes. In this way, it's comparable to human-driven artificial selection, as with plants and farm animals.

    The problem with using it to study cultural development is that, unlike in culture, there was an end-goal in mind (perceived pleasantness), and that goal was pre-determined by the participants' already-developed tastes. This paradigm cannot account for the emergence of *new* trends and tastes, because cultural evolution is not random and its selective pressure has rarely been the prevailing tastes of contemporary society (in fact, most cultural innovations tend to be met with displeasure until a generation or two after they appear).

  2. Thanks for your response. I do agree that the fitness/cost-function (perceived-pleasantness) is the tricky part.

    A further drawback, I find, is the decisions made about the stimuli, namely the use of a rigid temporal grid. It makes the computations manageable, I suspect, but a continuous signal would be far more attractive and ecologically valid. A nice example is the work of Bart de Boer and Tessa Verhoef here at the UvA, using a slide whistle as instrument.[1]

    [1] Verhoef, T., de Boer, B., & Kirby, S. (2012) Holistic or synthetic protolanguage: Evidence from iterated learning of whistled signals. In The evolution of language: Proceedings of the 8th international conference (evolang8). (pp. 386-375). Hackensack NJ: World Scientific.

  3. This is very interesting indeed! I didn't know of the Bart de Boer research.
    We are working with the Genetic Choir on the same music-evolutionary assumption, and we can be much more open about what exactly a 'gene' of music constitutes, as we don't have to fit it into a computational model: all agents of the process are real human singers, interacting with each other.

    So the question of how big a 'piece of music' must be for it to be worth replicating is something we answer every time in the process of listening to the sound-gene-pool around us.

    Incidentally, we just embarked on a project ("Loop-Copy-Mutate") in which we want to transfer the Genetic Choir findings and techniques into a computational environment. So the defining/categorizing of sound is for this project a major issue.

    We are still looking for experts in the field who can advise us and join the project.
    Here is a quick overview of our ambition:

    The point that Callum is making about taste is very dear to us: how can we let 'new music' be composed by the evolutionary organism of the choir without succumbing to 'general taste' assumptions (which inevitably create only predictable and rather boring music)? :-D