The Dutch TV program Boeken introduced the cockatoo video as the most fun and intriguing video of the year. Tijs Goldschmidt (a biologist and writer known from, e.g., Darwin's Dreampond) talks about the phenomenon of beat induction and why it is so relevant to cognitive scientists (see also an earlier blog).
In his upcoming book Music, Language, and the Brain, Ani Patel singles out beat induction —referring to it as ‘beat-based rhythm processing’— as a key area in music-language research. He proposes it as an important candidate for demonstrating "that there is a fundamental aspect of music cognition that is not a byproduct of cognitive mechanisms that also serve other, more clearly adaptive, domains (e.g. auditory scene analysis or language)." (Patel, 2008).
I couldn't agree more: beat induction could well turn out to be a key cognitive process in the evolution of music, and arguably central to the origins of music.*
With regard to the video mentioned above: Patel’s group is currently systematically filming the Cockatoo for analyses.
P.S. Yet another item from Dutch TV on beat induction:
*1994 demo on beat induction.
Monday, December 31, 2007
Thursday, December 20, 2007
In Amsterdam at the end of the year?
As an unrelated yet musical blog entry, today a plug for my brother. Since I will be spending time, like most of you, with friends and family, I might as well share the event that I am looking forward to. In the pop temple Paradiso —on December 29— three groups will perform a wide variety of music, ranging from electro (Wired Paradise) and Schubert (Mulder & Honing) to the Orient (Rima Khcheich Group). You might like it.
Friday, December 14, 2007
Too catchy a tune? (earworm)
It’s a well-known phenomenon in media land that once you have contributed to a TV item on a compelling —general interest— question, people will return to you with the same question over and over again, basically wanting you to repeat the same answer :-)
It happened to me a few years ago when I was asked to contribute to a Dutch TV item on the question of why some melodies stick in your mind. My first answer was: we do not know. After all, if we knew, there would be an ‘earworm’-generating computer program that could produce melodies guaranteed to stick in people’s minds for days. In this particular case, however, I’m sure nobody would mind. Unfortunately, now —four years later— still little is understood about the phenomenon. [And yes, again on Dutch TV]
What we do know —mainly from questionnaire-style research— is that most people suffer from the ‘earworm’ phenomenon (also referred to as brainworm, cognitive itch, or musical imagery repetition), females slightly more than males. And that the tunes that spontaneously pop up in one’s mind are generally not the most striking compositions. Actually, they are commonly reported as being simply irritating (see examples at the link below).
Why does this happen? And what does it tell us about our cognition? And why does it happen with music, and significantly less with text or images? What is it in the musical structure of a particular fragment that makes it spontaneously pop up from memory? PhD students in cognitive science looking for an exciting, relatively unexplored topic in music (neuro)cognition: jump on it!
Dutch webpage on this topic.
Thursday, December 06, 2007
Doe je mee? (‘Will you join in?’)
In recent years music cognition has received more and more attention in scientific research. Central questions in this field include: Why does music touch us so directly? What makes a rhythm exciting, sluggish, or boring? Why do some songs get stuck in your head? Or: can you teach a machine to listen? In short: music viewed from the perspective of the listener.
In mid-January it will be announced whether MCG is through to the second round of the Academische Jaarprijs, under the motto ‘no music without a listener’.
Wanted: two master’s students, or advanced bachelor’s students, who want to join our team.
Interested? Then send an e-mail before January 10 to battle@musiccognition.nl with a short motivation and a stimulating research question that, in your view, clarifies something about listening to music. You will in any case be invited to the kick-off meeting in the last week of January. The two selected students will receive —besides eternal fame— a share of the freely spendable prize money.
We look forward to your responses!
Henkjan Honing & Olivia Ladinig
Wednesday, November 28, 2007
Is music mere play?
Early this morning I was called by Dutch radio for a daily question on science, and was confronted with the question: Why do we like music? (fragment). Since why-questions are generally almost impossible to answer, I was happy —just in time— to think of the idea of ‘music as play’. But because all of this went almost too quickly, I thought I would slightly elaborate on it in this blog...
The idea is that music, as a human phenomenon, can be seen as something that plays with our senses, our memory, our attention and our emotions, in the way young lions play, without any real threat. Music, generally, does not harm us, nor does it make us less hungry, but it directly addresses our physiological and cognitive functions. For many music listeners this is a pleasant, rewarding, purposeful and sometimes even consoling play.*
I like this idea of ‘music as play’ far better than the discussion of whether music is an adaptation or a mere evolutionary by-product of more important functions, such as those involved in language (Pinker, 1997). Geoffrey Miller’s alternative, which suggests sexual selection as the primary mechanism in the evolution of music, also still lacks proper arguments and evidence. ‘Music as play’ is far more attractive, because it might explain several of our strange behaviors, such as listening to ‘sad’ music when we are sad, making us even sadder — apparently we know it will not really harm us!
The idea of ‘man as a player’ was brought forward by several authors, including Johan Huizinga, who wrote Homo Ludens (‘Man the Player’) in the 1930s. It is the topic of the 2007 Huizinga lecture by Tijs Goldschmidt (a biologist and writer known from, e.g., Darwin's Dreampond). His lecture will be called Doen alsof je doet alsof (‘Pretend to pretend’). I'm sure he will say something about music too. [cf. pp. 20-21]
* See Dutch article on the idea of 'music as play'.
Monday, November 19, 2007
Is beat induction special? (Part 2)
Rhythmic behavior in non-human animals, such as chimpanzees, has been studied quite regularly. The video below is a nice illustration of how chimpanzees can use tools in a rhythmic, periodic fashion. Other researchers have shown that some apes are even capable of regularly tapping a drum. However, they seem unable to beat a drum —or rhythmically move or dance, for that matter— in synchrony with music, as a human can.
Hence the big surprise of the video below, a YouTube video that attracted quite some media attention in the US. What do you think? Evidence for beat induction (*) in animals?
The ultimate test is to do an experiment in which the speed (or tempo) of the music is systematically controlled for, to be able to answer the crucial question: will the Cockatoo dance slightly faster if the music is presented slightly faster?
I would be flabbergasted if that were the case, since beat induction has long been considered a uniquely human trait, one that I have argued —along with some colleagues— to be essential to the origins of music in humans.
Currently, a North American research group is trying to find out. I'll keep you posted.
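For what it's worth, the logic of such a tempo-manipulation experiment can be sketched in a few lines of code. The numbers below are invented for illustration only — they are not actual measurements from any study:

```python
# Hypothetical sketch: present the same song at several tempi and check
# whether the animal's movement rate scales with the musical tempo.
# All values are made up for illustration.

music_bpm = [100, 110, 120, 130, 140]   # presented musical tempi
move_bpm = [98, 108, 121, 128, 142]     # (invented) measured head-bob rates


def pearson_r(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)


r = pearson_r(music_bpm, move_bpm)
print(f"correlation between music and movement tempo: r = {r:.2f}")
```

A correlation close to 1 across conditions would suggest genuine synchronization to the music, rather than a fixed, music-independent movement rate that happens to coincide with one tempo.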
* Beat induction is the process in which a regular isochronous pattern (the beat) is activated while listening to music. This beat, often tapped along to by musicians, is central to time keeping in music performance. But for non-experts, too, the process seems fundamental to the processing, coding and appreciation of temporal patterns. The induced beat carries the perception of tempo and forms the basis of the temporal coding of rhythmic patterns. Furthermore, it determines the relative importance of notes in, for example, the melodic and harmonic structure.
Desain, P., & Honing, H. (1999). Computational Models of Beat Induction: The Rule-Based Approach. Journal of New Music Research, 28(1), 29-42.
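As an illustration only — this is not the rule-based model cited above, nor any published algorithm — beat induction can be caricatured as picking the beat period whose grid of beat positions is best covered by the actual note onsets:

```python
# Toy caricature of beat induction: given note onsets (in ms), try a set
# of candidate beat periods and keep the one whose grid of beat positions
# is best 'covered' by actual onsets. All names and numbers are illustrative.

def induce_beat(onsets, candidates=range(200, 1001, 50), tol=30):
    end = max(onsets)

    def coverage(period):
        # fraction of grid positions (multiples of the period) that lie
        # within `tol` ms of some actual onset
        grid = range(0, end + 1, period)
        hits = sum(1 for g in grid
                   if any(abs(t - g) <= tol for t in onsets))
        return hits / len(grid)

    # max() returns the first candidate with maximal coverage, so the
    # shortest fully supported period wins among ties
    return max(candidates, key=coverage)


# a simple rhythm with an underlying 500 ms beat (120 bpm)
onsets = [0, 250, 500, 1000, 1500, 1750, 2000]
print(induce_beat(onsets))  # → 500
```

Note that even this toy immediately runs into the real issues the literature grapples with: subdivisions and multiples of the beat also score well, and some tie-breaking preference (here: the shorter period) has to stand in for listeners' tempo preferences.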
Sunday, November 11, 2007
In Barcelona this week?
CosmoCaixa, the new science museum in Barcelona, is organizing an exhibition with the title Física y música. Vibraciones para el alma (see impression). In this context, on November 16 the museum is organizing a full day of lectures titled “Music, Neuroscience, Technology”, for which a number of international researchers have been invited to present their work in music cognition and related areas.
The day opens with Isabelle Peretz and Robert Zatorre, well known for their neuroscientific work — the former for her medical case studies and research on amusia, the latter for his innovative brain-imaging research on music and audition. The rest of the day will include presentations by Tecumseh Fitch (University of St Andrews, UK), who will look at music from an evolutionary perspective, Patrick Juslin (Uppsala University, Sweden), who will talk about music and emotion, Michel Thaut (Colorado State University, USA) on music therapy, and Xavier Serra (Universidad Pompeu Fabra, Spain) on music technology. Finally, I will present some recent work on rhythm cognition, realized in the context of an EU-FP6 project on emergent cognition (see earlier blog). The latter project, EmCAP —a consortium of six research groups from four European universities— will be reviewed this week in Barcelona as well. I look forward to it ...
Saturday, November 03, 2007
Is beat induction special?
In the 1990s several researchers in cognitive science were concerned with trying to understand beat induction: the cognitive process of attributing a regular pulse to a musical fragment, the beat we're sometimes forced to tap our foot to.
I would like to argue that, from an evolutionary perspective, beat induction is one of the most fundamental aspects —if not the most fundamental aspect— that made music possible. It allows us humans to synchronize, to dance, to clap, and to make music together, synchronizing to the beat of the music. Beat induction seems essential for all kinds of social and cultural activities, including rituals.
Interestingly, we do not share this capability with other animals. Researchers have so far tried, unsuccessfully, to get non-human animals —such as chimpanzees and elephants— to synchronize to music. While non-human animals may show rhythmic behavior (like chimpanzees using tools), they cannot, for instance, play a drum in synchrony with music and adjust their playing when the music changes tempo. However, some researchers, like Ani Patel of the Neuroscience Institute in San Diego (see *), are optimistic.
For me, personally, there is no need to show that beat induction is solely a human trait; the point is rather that beat induction could have made a difference in the cognitive development of the human species.
* Patel, A.D. & Iversen, J.R. (2006). A non-human animal can drum a steady beat on a musical instrument. In: Proceedings of the 9th International Conference on Music Perception & Cognition (ICMPC9), Bologna/Italy, August 22-26 2006, p. 477.
Monday, October 22, 2007
What is music cognition?
In the last three years our group has spent quite some energy promoting music cognition as an interesting field of research in the cognitive sciences. The strategy was simple but effective: simply say ‘yes’ to —and think along with— any journalist who contacts you. And, as is often the case in media land, once an idea is out and considered interesting, other media want more of the same. The challenge is, however, to make sure music cognition —its insights and results, its aims and prospects— is represented in an appropriate way, without falling into the trap of being reduced to simple facts that are useful for a popular TV quiz (see video below).
In that sense, I sympathize with initiatives like the ‘Battle of the Universities’ that promote the idea that scientists themselves should take the lead in presenting their research (instead of ‘complaining’ about the media simplifying it too much :-). However, it is not easy to bring forward the essence of one’s field in an intriguing way.
Outreach —as it is often called— is not the same as ‘going down on your knees’ to explain your research to a general audience, or making populist interpretations of your field. You are in fact challenged to explain your research and insights in different terms. And that can be very rewarding, and even influence your own thinking. With regard to my own research, I could start talking about the computational modeling of music cognition, and the theoretical, empirical and computational methods that we use, but I’m sure a general audience would quickly lose me. A common trick is to think of a typical example that speaks to everyone’s imagination. I often explain my research in terms of the scientific challenge of making a listening machine. Imagine what that would be like: a machine that can listen and react in a human and musical way. And, of course, it should make the same mistakes! It allows you to explain all kinds of computational modeling notions: what should such a machine know, what should it listen for, and how can we compare and evaluate such models?
We just heard from our university that MCG has been selected to defend the University of Amsterdam in the Battle of the Universities. A challenge to look forward to ...
Sunday, October 14, 2007
What makes a metaphor informative?
Metaphor. When I read that word I always hear the voice of Massimo Troisi (Il Postino) saying ‘Metáfore’. And indeed, as in that movie, metaphor can be mesmerizing and beautiful. However, in music research metaphor has had a debatable role. Metaphors like ‘music is movement’ —it makes you move—, ‘music is a language’, or ‘music is distilled emotion’ often reduce what music is, instead of contributing to a real understanding. While crucial in the arts, metaphor is often anything but informative in research.
However, the research of Zohar Eitan (Tel Aviv University) is one of the important exceptions. Instead of taking the ‘music is abstract motion’ metaphor as an explanation of how phenomena in music are constrained —governed by, for instance, the rules of elementary mechanics—, his group designed a nice set of experiments in which participants were asked to imagine a cartoon character while listening to music.
In these listening studies participants had to report when or how the imagined cartoon character was moving in response to the music. Instead of using the physical-motion metaphor as an explanation, listeners’ associations with physical space and bodily motion were used to reveal how music can influence mental images of motion. Interestingly, it turned out that most musical-spatial analogies are quite asymmetrical, providing evidence that, while music and the motion metaphor can influence each other, the latter cannot fully capture the actual phenomena.
Eitan, Z., Granot, R.Y. (2006). How Music Moves. Music Perception, 23(3), 221-248. DOI: 10.1525/mp.2006.23.3.221
Saturday, October 06, 2007
Is music a luxury?
While, until recently, scientific books on music could not afford to omit a chapter on the anatomy of the ear, nowadays most books on music seem obliged to include a chapter on the anatomy of the brain. The brain is 'cool, hip and happening'.
Luckily Oliver Sacks' new book Musicophilia: Tales of Music and the Brain does not include this obligatory chapter. Instead, he uses his familiar observational style in revealing his personal and medical interest in people with particular brain disorders. In this book he focuses on music-related mental phenomena ranging from amusia and absolute pitch to dysharmonia and synesthesia, while also discussing the role of music in Parkinson’s disease, Tourette’s and Williams syndrome. It is a book that fits a trend of books by scholars who analyze and promote the importance of music from a wider perspective than is normally taken by musical experts. Like Steven Mithen’s The Singing Neanderthals and Dan Levitin’s This Is Your Brain on Music, this book discusses what is so special about music, while seriously wondering why some consider it a mere luxury (if not simply cheesecake).
In his book Sacks describes a series of medical cases where a neural deficit reveals something about the workings (or breaking down) of an intrinsic human quality we name ‘music’:
“We humans are a musical species no less than a linguistic one. This takes many different forms. All of us (with very few exceptions) can perceive music, perceive tones, timbre, pitch intervals, melodic contours, harmony, and (perhaps most elementally) rhythm. We integrate all of these and “construct” music in our minds using many different parts of the brain. And to this largely unconscious structural appreciation of music is added an often intense and profound emotional reaction to music. “The inexpressible depth of music,” Schopenhauer wrote, “so easy to understand and yet so inexplicable, is due to the fact that it reproduces all the emotions of our innermost being, but entirely without reality and remote from its pain.... Music expresses only the quintessence of life and of its events, never these themselves.”
The book is a hopeful account of the role music can play in our lives and how our brain is involved in this. It will be released this November in many languages at the same time (I just read a preprint). So this will likely not be the last bit you read about it.
Monday, October 01, 2007
What makes a theory of music surprising?
Quite a while ago, a fellow musicologist referred to me as a ‘positivist’. As I was, at that time, not too familiar with postmodern parlance, I took it as a compliment (making an association with the Dutch comedy duo De Positivo’s, who were sheer optimism). It turned out that actually the opposite was meant.
Last weekend in Cologne, being invited to speak at the Gesellschaft für Musikforschung, I was reminded of this remark. For some reason the methods associated with positivism, such as those used in the natural and social sciences, still flag a divide in music research between, for instance, the systematic and historically oriented approaches to music. A divide that seems to be fed by a misunderstanding of much of Popper’s ideas on ‘science’ versus ‘pseudoscience’ (see earlier blog).
While the idea of ‘falsification’ is indeed, as Popper showed in some of his later work, not very useful in historical research —as in archeology, a newly found manuscript can easily falsify a long-established historical theory— this does not make archeology or historical musicology a 'pseudoscience'. In my opinion, it is not so much the inapplicability of the empirical method (and hence the possibility of falsification) to historically oriented musicology, but the apparent resistance to formulating theories that can be tested, that might characterize the discussion. Is it impossible to make a theory about some aspect of (the history of) music that can be tested (or evaluated), independent of empirical evidence?
I like, particularly in this context, Popper’s idea that a theory can be intrinsically compelling or ‘surprising’, even in the absence of empirical evidence. What is intended here is not ‘surprising’ in the sense that a new fact is found that we did not yet know about, but a prediction that, while we would expect X —given everything we know—, actually says X is not the case, but rather Y. A prediction that is the consequence of a theory (made up of intuition, empirical observations or otherwise) that violates our expectations based on what we know. I do not see why both historical and systematic musicology could not use that as an additional method.
Last weekend in Cologne, being invited to speak at the Gesellschaft für Musikforschung, I was reminded of this remark. For some reason the methods associated with positivism, such as those used in the natural and social sciences, still flag a divide in music research between, for instance, the systematic and historically oriented approaches to music. A divide that seems to be fed by a misunderstanding of much of Popper’s ideas on ‘science’ versus ‘pseudoscience’ (see earlier blog).
While the idea of ‘falsification’ is indeed, as Popper showed in some of his later work, not very useful in historical research —as in archeology, a new found manuscript can easily falsify a long established historical theory— this does not make archeology or historical musicology a 'pseudoscience'. In my opinion, it is not so much the inapplicability of the empirical method (and hence the possibility of falsification) for historically oriented musicology, but the apparent resistance to formulate theories that can be tested, that might characterize the discussion. Is it impossible to make a theory about some aspect of (the history of) music that can be tested (or evaluated), independent of empirical evidence?
I particularly like, in this context, Popper’s idea that a theory can be intrinsically compelling or ‘surprising’, even in the absence of empirical evidence. What is intended here is not ‘surprising’ in the sense that a new fact is found that we did not yet know about, but a prediction that, while we would expect X —given everything we know—, says that not X but Y is the case: a prediction, following from a theory (made up of intuition, empirical observations or otherwise), that violates our expectations based on what we already know. I see no reason why both historical and systematic musicology could not use this as an additional method.
Thursday, September 27, 2007
Did you hear something new this week?
Sharing musical taste, or, even stronger, luring others into your favorite music, is a common activity at school and birthday parties. Nevertheless, the most common way we get to know new or different music is probably through expert reviews of concerts and recently released CDs. What is new is the role the Internet is starting to play in all this.
In the last few years it has become possible to share music and musical taste through Internet communities on a much larger and more personal, one-to-one scale. This is a growing —if not booming— social and cultural phenomenon that is clearly changing the way communities of music lovers ‘lure’ each other into other musical domains and modes of listening. Next to being facilitated by recent digital recording and distribution techniques, it is mainly the partly unforeseen impact of user-generated content (‘Web 2.0’) that contributes to its success. It seems a development worth studying for music, media and music cognition researchers (at the University of Amsterdam a consortium is currently thinking in that direction).
P.S. Did I hear something new last week? Well, yes. Last Monday (in the Bimhuis) I heard for the first time a begana (a ten-string lyre), an instrument I saw, around age eight, on the first page of a book on the history of musical instruments, but had never actually heard. It didn’t sound at all the way I imagined it (listen).
Friday, September 21, 2007
Do you have amusia?
Some people doubt whether they have a sense of musical pitch. However, being tone-deaf is a relatively rare phenomenon that is studied by neuroscientists (who refer to it as amusia) because it might give us clues about the specificity of music. This week, in exchange for a short blog, a reference to an amusia test made available by the University of Montreal.
Sunday, September 16, 2007
Are there four beats or only three?
You are browsing, let us imagine, in a music shop, and come across a box of faded pianola rolls. One of them bears an illegible title, and you unroll the first foot or two, to see if you can recognize the work from the pattern of holes in the paper. Are there four beats in the bar, or only three? Does the piece begin on the tonic, or some other note? Eventually you decide that the only way of finding out is to buy the roll, take it home, and play it on the pianola. Within seconds your ears have told you what your eyes were quite unable to make out — that you are now the proud possessor of a piano arrangement of “Colonel Bogey”.
This is the opening paragraph of an article printed in 1979 in the Proceedings of the Royal Society of London, and reprinted in the book Mental Processes: Studies in Cognitive Science by H. Christopher Longuet-Higgins in 1987.
The chapter on music —of which the citation above is part— made a lasting impression on me, and actually made me decide that music cognition is worth dedicating all of one's research to. It made me realize that all those things musicologists and music theorists considered mere axioms —such as meter, an upbeat, or a syncopation— were extremely interesting in themselves, and could be studied using methods from this developing field called ‘cognitive science’.
It is now precisely twenty years since H. Christopher Longuet-Higgins’ book was published: an impressive collection of papers with topics ranging from music and language to vision and memory.
It also includes his comments on the Lighthill Report, published in 1973, in which he proposed ‘Cognitive Science’ as a label for what he saw, then, as an emerging interdisciplinary field.
Unfortunately, you have to go to the library to read it: it has been out of print for quite a while.
(See also a text in Dutch —Nieuwsbrief 102— on the same topic.)
Friday, September 07, 2007
A sense for rhythm? (Part 3)
Last Monday, together with four colleagues from the University of Amsterdam, I was asked to speak at the Opening of the Academic Year.
I started by talking about a wonderful study by Erin Hannon and Sandra Trehub (Cornell University and University of Toronto) called "Metrical Categories in Infancy and Adulthood", and gratefully used some of the sound examples they made available online.
Interestingly, a "one-trial" version of their experiment failed miserably :-) The sign language interpreter picked up the differences immediately and communicated them so clearly to the audience that I had to ask the audience to close their eyes for the other sound examples!
The overall message, motivated by, e.g., the Hannon & Trehub study, was: listen as often and as widely as you can; it will improve your sense of rhythm!
Wednesday, September 05, 2007
Perfect Pitch: You either have it or not?
Last week a paper was published in the Proceedings of the National Academy of Sciences of the USA (PNAS) that generated quite a stir, both in the academic world and in the press. In that paper researchers from the University of California presented the results of an elaborate web-based study (with about 2,200 participants) that investigated Absolute Pitch (AP): the ability to name the pitch of a tone without the use of a reference tone. Something some see as a musical gift, others as a burden.
The researchers found a bimodal distribution in pitch-naming ability that was interpreted as “you either have it or not”. Furthermore, they suggested a genetic basis for AP. And that’s where the discussion started ...
While there is some research into a possible genetic basis for AP, related studies (not mentioned in the PNAS paper) have argued, and to a large extent shown, that AP might well be a result of biases due to the task and stimuli used, largely a result of training, and probably more widespread than some think.
For example, Glenn Schellenberg and Sandra Trehub from the University of Toronto found support for a normal, not bimodal, distribution once pitch-naming or reproduction requirements (such as knowledge of piano keyboards or music notation) are eliminated and familiar materials (such as soundtracks of TV programs) are used. They argue that good pitch memory is actually widespread.
Oliver Vitouch from the University of Klagenfurt wrote a comment a few years ago, called “Absolute models of absolute pitch are absolutely misleading”, summarizing the state of affairs in AP research and arguing that it is mainly a result of musical training. Clearly there is little agreement on the claim that AP is a trait.
In addition, I find AP actually not such a special phenomenon. Although we could agree that AP occurs at different levels of precision on a continuous scale, in the end we should also agree that Relative Pitch (RP) is far more special. While we might share AP with quite a few animals, RP is far less common, arguably making AP in humans less special.
Saturday, August 18, 2007
A 2006 recording of Glenn Gould?
Sony Music recently released a new recording (made in 2006) of Glenn Gould performing the Goldberg Variations. Curious, no? The recording was made by taking measurements from the old recordings and then regenerating the performance on a computer-controlled grand piano, a modern pianola.
This technology dates from the early nineties, a time when several piano companies (including Yamaha and Bösendorfer) combined MIDI and modern solenoid technology with the older idea of a pianola. Old paper piano rolls with recordings of Rachmaninoff, Bartok, Stravinsky and others were translated to MIDI and could be reproduced ‘live’ on modern instruments like the Yamaha Disklavier. Until now, the only remaining challenge was to do this for recordings for which no piano rolls were available.
Apart from the technicalities of all this, for most people the real surprise —or perhaps disillusion— might well be the realization that a piano performance can be reduced to when, what and how fast the piano keys are pressed. Three numbers per note can fully capture a piano performance, which allows any performance to be replicated on a grand piano(-la). The moment a pianist hits a key with a certain velocity, the hammer is released, and any gesture made after that can be considered merely dramatic: it has no effect on the sound. This realization puts all theories about the magic of touché in a different perspective.
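To make the "three numbers per note" idea concrete, here is a minimal sketch (my own illustration, not from any particular MIDI library; the NoteEvent type and the example values are made up):

```python
from dataclasses import dataclass

# A piano performance reduced to three numbers per note:
# when (onset time), what (pitch) and how fast (key velocity).

@dataclass
class NoteEvent:
    onset: float    # seconds from the start of the performance
    pitch: int      # MIDI note number (60 = middle C)
    velocity: int   # key velocity, 1-127 in MIDI

performance = [
    NoteEvent(0.00, 67, 64),  # G4
    NoteEvent(0.47, 74, 72),  # D5, slightly early and a bit louder
    NoteEvent(0.95, 71, 58),  # B4
]

# Expressive timing is simply the deviation from a nominal grid:
nominal = [0.0, 0.5, 1.0]
deviations = [round(n.onset - t, 3) for n, t in zip(performance, nominal)]
# deviations -> [0.0, -0.03, -0.05]
```

A list of such triplets is, in essence, what a pianola roll or a MIDI file stores, and what a computer-controlled grand piano needs to replay a performance.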
Nevertheless, while it is relatively easy to translate audio (say, a Glenn Gould recording from 1955) into the what (which notes) and the when (timing) of a MIDI-like representation, the problem lies in the ‘reverse engineering’ of key velocity. What was the speed of Gould’s finger presses on the specific piano he used? Zenph Studios claim to have solved it for at least a few recordings. Only trust your ears :-)
Wednesday, August 15, 2007
Is it a male or female performer?
This week an interesting new web-based experiment was launched by the International Music Education Research Centre. They address the question: Can listeners determine the gender of the performer on the basis of a recording? Do the experiment by clicking here, and help in finding out ...
Nowadays more and more music cognition researchers are making use of the Internet. Next to becoming a serious alternative to some types of lab-based experiments, web-based data collection might even avoid some of the pitfalls of lab-based studies, such as results biased by the typical psychology-students participant pool, by potentially reaching a much larger, more varied and motivated pool, and by having participants do the experiment in a more natural environment than the lab.
Nevertheless, the real challenge in web-based experiments is how to control for attention. Interestingly, this is no different from experiments performed in the lab: in an online experiment, too, one needs to make sure people are paying attention and actually doing what you instructed them to do. One of the ways we try to resolve that in our online experiments is —next to the standard tricks— to make them as engaging as possible: for instance, by using screencasts instead of instructions that have to be read from the screen, and, more importantly, by designing a fun and doable experiment that is challenging at the same time, all such that we can assume we attract serious and genuinely interested listeners (see example). With all these aspects improving over time, I am sure that web experiments will become more and more a reliable source for empirical research in music cognition.
Tuesday, August 07, 2007
Piano touch unraveled?
A few postings ago I mentioned a remake of Glenn Gould’s Goldberg variations. It was related to the topic of piano touch (or touché), a notion pianists and music lovers often talk about, and that is, nevertheless, surrounded with a lot of magic.
Several researchers are investigating this topic, including Werner Goebl and Caroline Palmer at McGill University, Canada. They presented their recent findings at the SMPC conference on music perception and cognition in Montreal. Using a motion-tracking device, it was possible to track a pianist’s finger movements on a digital piano keyboard (apparently a grand piano could not be used because the movements had to be filmed/measured from the piano towards the hands; see photo).
By analyzing the performances of twelve professional pianists, they found that different finger movements did not lead to differences in timing precision or tone intensity. That is a novel finding. However, the actual relation between the finger movements and the resulting velocity of the piano key after contact has not yet been studied (a replication of this study on a modern pianola —like the Yamaha Disklavier or Bösendorfer— seems a logical next step).
My hunch is that the finger dynamics will not matter much (as was in part suggested by this study). The gestures made by a pianist, including finger movements and what is generally referred to as piano touch, have more to do with habit and a sense of control than with an actual influence on the key velocity that, next to the timing, effectively contributes to the sound and musical quality of the performance. This type of research will find out soon …
Wednesday, August 01, 2007
'What paper did you like most?'
I’m currently in Montreal visiting the SMPC, a conference on music cognition with more than 150 papers from 21 countries in four days. Which paper did I like most, halfway through the conference?
If I had to choose now, it would be a poster by Laurel Trainor and colleagues from McMaster University on the effects of the vestibular system on perception. Intriguing research. First, they replicated in adults the effect of rocking movement on rhythm perception, a result they showed for babies a few years ago. An ambiguous rhythmic pattern (|.|||.) was perceived in triple meter (>..>..) when a listener was moved in three, and as duple (>.>.>.) when moved in two. Obvious in a way, but one of the few studies that actually shows an influence of movement on rhythm perception.
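As a rough illustration of why the pattern is metrically ambiguous, here is a small sketch (my own, not from the poster) that aligns the six-position pattern with a duple and a triple grid:

```python
# The ambiguous six-position pattern ('|' = event, '.' = rest).
pattern = "|.|||."
events = [i for i, c in enumerate(pattern) if c == "|"]  # [0, 2, 3, 4]

def beat_positions(period: int, length: int = 6) -> list[int]:
    """Positions of the beats in a metrical grid with the given period."""
    return [i for i in range(length) if i % period == 0]

duple = beat_positions(2)   # beats at [0, 2, 4]
triple = beat_positions(3)  # beats at [0, 3]

# Every beat of either grid coincides with an event, so neither
# interpretation is contradicted by the pattern itself.
duple_hits = len(set(events) & set(duple))    # all 3 duple beats filled
triple_hits = len(set(events) & set(triple))  # both triple beats filled
```

Since the pattern fits both grids equally well, other cues, such as the way the listener's body is moved, can tip the balance toward one interpretation.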
Interestingly, seeing someone else moving to the rhythm in two or three did not have such a disambiguating effect. That makes an important difference with mirror-neuron research, which suggests a strong relation between doing and observing an action. Trainor and colleagues argue, and partially showed, that it is therefore likely the vestibular (or balance) system that causes this effect, with, as far as this was controlled for, no cognitive influence on the disambiguation as one might expect. Independent of possible alternative explanations, it is a striking result in the still little understood relation between music and movement.
Thursday, July 26, 2007
'Why do people sing so shamelessly out of tune?'
This week a national newspaper called me with this peculiar question. It immediately reminded me of a lecture that Isabelle Peretz (University of Montreal) gave this spring in the UK on amusia, or tone deafness. In it she showed recent video material of a lab member who sang very much out of tune but was not aware of it. Surprising, because he has a degree in music education.
The reason I mention the example is that we often equate a talent for music with performance, such as being able to sing or play an instrument, and not so much with perception, for instance, being sensitive to subtle differences in pitch and timing when listening to music. When somebody sings out of tune, we might infer that he or she has no talent for music.
That is of course a misunderstanding. We cannot simply judge someone’s musicality by the acrobatics of performance (besides, that requires years of training; see an earlier posting). More and more research is showing that mere exposure —not musical expertise as a result of formal training— has an influence on making sophisticated musical judgments.
With regard to performance, an intriguing study was done by Simone Dalla Bella and colleagues (just published in JASA). They asked occasional singers, recruited in a public park, to sing a well-known Québécois birthday song. It was no surprise that professional musicians reproduced the song much more precisely than the ‘non-musicians’. However, when the ‘non-musicians’ were invited into the lab and asked to sing it again at a slightly slower pace, most sang it just as accurately as the professional singers. Another example showing that musical skills are more common than we might think.
Dalla Bella, S., Giguère, J., & Peretz, I. (2007). Singing proficiency in the general population. The Journal of the Acoustical Society of America, 121(2). DOI: 10.1121/1.2427111
Monday, July 23, 2007
What makes a theory compelling?
Karl Popper was a philosopher of science who was very much interested in this question. He tried to distinguish 'science' from 'pseudoscience', but became more and more dissatisfied with the idea that the empirical method (supporting a theory with observations and experiments) could effectively mark this distinction. He sometimes used the example of astrology “with its stupendous mass of empirical evidence based on observation”, but also nuanced it by stating that “science often errs, and that pseudoscience may happen to stumble on the truth.”
Next to his well-known work on falsification, Popper started to develop alternatives to determine the scientific status or quality of a theory. He wrote that “confirmations [of a theory] should count only if they are the result of risky predictions; that is to say, if, unenlightened by the theory in question, we should have expected an event which was incompatible with the theory — an event which would have refuted the theory.” (Popper, 1963).
Popper was especially thrilled with the result of Eddington’s eclipse observations, which in 1919 brought the first important confirmation of Einstein's theory of gravitation. It was the surprising consequence of this theory that light should bend in the presence of large, heavy objects (Einstein was apparently willing to drop his theory if this would not be the case). Independent of whether such a prediction turns out to be true or not, Popper considered it an important quality of ‘real science’ to make such ‘risky predictions’.
I still find this an intriguing idea. The notion of ‘risky’ or ‘surprising’ predictions might actually be the beginning of a fruitful alternative to existing model selection techniques, such as goodness-of-fit (which theory predicts the data best) and simplicity (which theory gives the simplest explanation). In music cognition, too, measures of goodness-of-fit (r-squared, percentage of variance accounted for, and other measures from the experimental psychology toolkit) are often used to confirm a theory.* Nevertheless, it is non-trivial to think of (existing) theories in music cognition that make surprising predictions, that is, theories that predict a yet unknown phenomenon as a consequence of their intrinsic structure (if you know of any, let me know!).
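For readers less familiar with goodness-of-fit, here is a toy sketch of the r-squared measure mentioned above (the data and the two 'models' are made up for illustration):

```python
def r_squared(observed, predicted):
    """Proportion of variance in the observations explained by a model."""
    mean = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean) ** 2 for o in observed)
    return 1 - ss_res / ss_tot

observed = [1.0, 2.0, 3.0, 4.0]
model_a = [1.1, 1.9, 3.2, 3.8]   # tracks the data closely
model_b = [2.5, 2.5, 2.5, 2.5]   # always predicts the mean

# model_a scores close to 1, model_b scores 0: a higher r-squared
# is conventionally taken as support for a theory.
```

The limitation Popper's criterion points at is that a high r-squared rewards fitting data we already have, whereas a 'risky' prediction is valued precisely because it concerns data we do not have yet.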
Well, these are still relatively raw ideas. I hope to present them in a more digested form next week at the music perception and cognition conference (SMPC) in Montreal. Looking forward to it!
* If you want to read more on this topic, see here.
Sunday, July 22, 2007
Can newborns make sense of rhythm?
Last month our research group organized the annual EmCAP workshop in Amsterdam: A consortium of four European universities that collaborate in trying to understand how cognition might emerge in active perception. Or, in other words, how we accumulate knowledge in the world by actively being engaged with it. Music was chosen as the ideal domain to figure that out.
One of the big challenges of this project is to see whether, and if so to what extent, newborns have musical capabilities, and how exposure to music allows cognitive constructs like harmony or meter to emerge. More and more studies show that babies of only a few months old have all kinds of perceptual and musical skills that allow them, for instance, to detect violations in complex Balkan rhythms as well as in (for us) more straightforward Western rhythms, while adult listeners, in general, find the violations in the Balkan rhythms difficult to notice.
In the EmCAP project, in collaboration with the Bulgarian baby-lab, we planned to start this spring having newborns —like those in the picture above— listen to syncopated and non-syncopated rhythms, as a way to find out whether they are sensitive to meter as an emergent property. Something that could, alternatively, well be simply a learned music-theoretical and/or cultural concept. We hope to find out ...
Wednesday, July 18, 2007
Does music have an alphabet?
We all learn the alphabet at school and are quite used to the idea that just twenty-six letters and a few punctuation marks are enough to communicate the wildest stories and can easily evoke the most vivid imagery and delicate feelings. The question is: can music also be reduced to an alphabet as effectively and meaningfully as language? Can it be reduced to a set of discrete symbols with which the essence or meaning of music can be captured?
In some sense it is a rhetorical question. The existence of music notation shows that it is at least partly possible. But how close is music notation, actually, to our experienced, mental representation of music?
And that’s the other rhetorical part of the question: I would argue that music notation (as we know it) has little to do with music listening. While very useful for some purposes (e.g., sight reading, or as a set of instructions on how to perform the music), as a reflection of the listening experience it fails miserably. This is why researchers like Mary Louise Serafine and Jeanne Bamberger often stressed that music notation is merely an ‘after-the-fact’ notion of music, not to be taken too seriously: notation is as informative about listening as a cooking recipe is about tasting.
[More on the same topic in Dutch]
Tuesday, July 17, 2007
Do hours of musical exercise help?
Last week I received an email from an enthusiastic amateur musician who was wondering whether his teachers were indeed right in stating that ‘getting better at music is mainly a matter of exercise’. Apparently he was doubting his talent for music: would he ever come close to the quality of his beloved musicians?
John Sloboda of the University of Keele did an elaborate series of studies in the nineties in which he proposed a number of challenges to what he called the ‘myth’ of musical talent. Maybe four of them can provide some comfort to the hardworking amateur:
First, in several cultures a majority of the people arrive at a level of expertise that is far above the norm for our own society. This suggests that cultural, not biological, factors are limiting the spread of musical expertise in our own society.
Second, the majority of top-ranking professional musicians were not child prodigies. In fact, studies reveal that very few able musicians showed any signs of special musical promise in infancy.
Third, there are no clear examples of outstanding achievement in musical performance (or composition) that were not preceded by many years of intense preparation or practice (N.B. a twenty-one-year-old musician has generally accumulated more than ten thousand hours of formal practice).
Fourth, many perceptual skills required to handle musical input are very widespread, develop spontaneously through the first ten years of life, and do not require formal musical instruction (for the full list, see Sloboda, 1994).
So a talent for music appears to be constrained not so much by our biology as by our culture. We all seem to have a talent for music. Nevertheless, if you want to become good at it, like most musicians, you have to spend hours and hours doing it.
Why does it sound slow?
We know that it is not simply the number of notes (or event rate) that defines a listener’s impression of tempo. There are quite a few musical examples that have a lot of notes but are generally judged to have a slow tempo (e.g., Javanese gamelan music). The inverse, an impression of a fast tempo caused by only a few notes, is more difficult to find (but I’m sure some of you know of an example).
One correlate of tempo is the ‘metricality’ of the music, especially the tactus: the rate at which events pass by regularly at a moderate tempo (typically around half a second, or 120 bpm). Models of beat induction try to explain how listeners arrive at perceiving a beat or pulse in the music. Interestingly, the most salient pulse might not be explicitly present in the musical material itself; it can be ‘induced’ by the music while listening (hence the term ‘beat induction’). It’s one of those classic examples showing that cognition influences our perception of music.
At the recent Rhythm Perception and Production Workshop (RPPW) in Dublin tempo perception was one of the topics. Subjective judgments of duration were shown, once more, to be influenced by event density. Listeners had to continue tapping after hearing a regular beat with the intervals filled with soft random clicks, so-called ‘raindrops’. Participants tapped slower when more ‘raindrops’ were inserted. Apparently they judged the regular beat to be at a slower tempo when more events occurred between the beats. This is of course a relatively artificial setup, but the effect of event or note density on tempo judgments was also shown in more musically realistic contexts. What we can conclude from this is that tempo —defined as the subjective judgment of speed— is at least a product of two things: a sense of pulse (or tactus) and event density. It still is quite a challenge for music cognition researchers to come up with a model that actually can predict and explain these tempo judgments in real music, to, for instance, be able to predict when listeners will perceive music as nice and slow.
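The two factors that the conclusion above singles out are both simple to quantify from a list of note onset times. The toy sketch below (not an established model, and with a made-up rhythm) just makes the two quantities explicit; any predictive model of subjective tempo would somehow have to combine them:

```python
# Toy sketch of the two factors mentioned in the text: tactus rate and
# event density. The onset times (in seconds) are hypothetical.

def tactus_bpm(beat_interval_s):
    """Tactus rate in beats per minute, from the inter-beat interval."""
    return 60.0 / beat_interval_s

def event_density(onsets, duration_s):
    """Number of note events per second over an excerpt."""
    return len(onsets) / duration_s

# A hypothetical 4-second excerpt with a beat every 0.5 s (i.e. 120 bpm)
# and twelve note onsets in total.
onsets = [0.0, 0.25, 0.5, 1.0, 1.25, 1.5, 2.0, 2.25, 2.5, 3.0, 3.25, 3.5]
print(tactus_bpm(0.5))           # 120.0 bpm
print(event_density(onsets, 4))  # 3.0 events per second
```

The 'raindrops' experiment suggests that two excerpts with the same tactus rate but different event densities will nevertheless receive different subjective tempo judgments.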
Does rhythm make our bodies move?
Why do some people dance more rhythmically to music than others? Are these differences genetically or culturally determined? These are some typical questions journalists, interested in rhythm research, do not hesitate to ask.
The link between musical rhythm and movement has been a fascination for a small yet passionate group of researchers. Early examples, from the 1920s, are the works by Alexander Truslit and Gustav Becking. More recently researchers like Neil Todd (University of Manchester) defend a view that makes a direct link between musical rhythm and movement. Direct in the sense that it is argued that rhythm perception can be explained in terms of our physiology and body metrics (from the functioning of our vestibular system to leg length and body size).
While this might be a natural line of thought for most people, the consequences of such theories are peculiar. They predict, for instance, that body height will have an effect on rhythm perception: taller people preferring slower musical tempi (or rates), shorter people preferring faster ones. Hence females (since they are on average shorter than males) should have a preference for faster tempi as compared to males.
To me that is too direct and naïve a relation. There are quite a few studies that looked for these direct physiological relations (like heart rate, spontaneous tapping rate, walking speed, etc.) and how these might influence or even determine rhythm perception. However, none of these succeeded in finding a convincing correlation, let alone a causal relation. In addition, they ignore the influence that culture and cognition apparently have on rhythm perception. Nevertheless it should be added that embodied explanations do form a healthy alternative to the often too restricted ‘mentalist’ or cognitive approach.
An intriguing study in that respect was done by Jessica Phillips-Silver and Laurel Trainor (Canada) a few years ago. They did an inventive experiment with seven-month-old babies and showed that body movement (i.e., not body size) can influence rhythm perception. Although one could be critical of some important details, it is a striking empirical finding, and a small step forward in trying to underpin the relation between rhythm cognition and human movement.
Friday, June 29, 2007
‘So much to talk about’
Today it is a week since I decided to try out writing a daily blog on music cognition. It feels good. However, so as not to start repeating myself too quickly, I will lower the rate somewhat to, say, once a week.
I’m now off to the (roughly) biennial RPPW, a workshop in Dublin on Rhythm Perception and Production. It covers my favorite topic, and it was also the first scientific conference I ever went to (the picture above is from the 1988 RPPW). It collects research from a diversity of fields, ranging from psycholinguistics to music psychology, with a small group of participants all seriously interested in rhythmic phenomena. Earlier editions had, besides sessions on rhythm in language and music, talks on golf, rowing and Parkinson’s disease. Traditionally the meeting is held in a place that has a lot of similarities with a monastery: talking, eating and sleeping, all in one remote place. I miss Amsterdam already, but do look forward to the many discussions on rhythm, timing and tempo.
Thursday, June 28, 2007
Why doesn’t it groove?
Jazz and pop musicians spend a lot of time trying to work out ‘the feel’, ‘the groove’, or how to ‘time’ a particular piece of music. It is anything but arbitrary, and even the smallest detail counts. All to get the right timing at the right tempo. It clearly matters!
Music performance studies have looked at these timing details a lot. While often focusing on classical music, more and more studies are now looking at jazz, pop and world music. Tomorrow Bas de Haas (studying at the University of Utrecht) hopes to graduate with an MSc thesis on groove and swing. He asked three well-known Dutch drummers —Joost Lijbaart, Joost Kroon and Marcel Seriese— to play a fragment of swing (the famous break from Funky Drummer by James Brown) and a so-called shuffle, and had them do this at different tempi.
As always, the relation between timing and tempo turns out to be more complicated than previously thought. A straightforward model would be that all timing scales proportionally with tempo. It is like making a particular movement: when performing it at a different speed, people generally do it faster (or slower) by doing every part of the movement proportionally faster (or slower). This apparently works for computer models that imitate human movement, but it does not work for music, let alone for our ears. If you slow down a recording, you will probably immediately hear that something is wrong. Not because the tempo is wrong, but because the timing sounds awkward.
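The 'straightforward' proportional model described above can be sketched in a few lines: every onset time is multiplied by the same tempo factor. The drum pattern below is hypothetical; the point is precisely that, for expressive music, this simple rescaling is what does not sound right:

```python
# Sketch of the naive proportional model of timing and tempo: scale every
# onset time by the same factor. The onset times (seconds) are hypothetical.

def scale_onsets(onsets, tempo_factor):
    """tempo_factor > 1 slows the pattern down, < 1 speeds it up."""
    return [t * tempo_factor for t in onsets]

pattern = [0.0, 0.5, 0.75, 1.0, 1.5]      # a short pattern at the original tempo
slower = scale_onsets(pattern, 2.0)       # played at half tempo
print(slower)  # [0.0, 1.0, 1.5, 2.0, 3.0]
```

Under this model every timing detail (e.g., the 0.25 s 'swung' offset between the second and third onset) grows or shrinks in exact proportion; the performance data suggest that human players do something subtler than that.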
The challenge is to make a model of timing and tempo such that when, for instance, Funky Drummer is scaled to a different tempo, it still sounds groovy. Bas de Haas hopes to show his first attempts at a conference in Montreal this summer.
* See here for a related paper.
Sunday, June 24, 2007
Does KV 448 make you smarter?
Mozart’s Sonata for Two Pianos in D Major (KV 448) is one of the most used compositions in music cognition research. Since the publication of the study Music and spatial task performance in Nature in 1993, numerous researchers have tried to replicate the so-called ‘Mozart effect’ using this composition, often with little success. The idea is of course compelling: to become smarter by simply listening to Mozart’s music. It could be a helpful fact in the much-needed support for a more prominent place of music in the curricula. However, the effect has been shown to appear not only with the music of Mozart, but also with that of Beethoven and Sibelius; even a ‘Blur effect’ was shown, based on a study in which 8,000 teenagers participated (see reference below).
Currently, the most likely interpretation of the effect is that music listening can have a positive effect on our cognitive abilities when the music is enjoyed by the listener. Apparently (and in a way unfortunately), it is not so much the structure of the music that causes the effect as a change in the mood of the listener. While this indirectness might be disappointing for admirers of Mozart’s music, it does point to an important, still largely unexplored aspect of music appreciation: what makes certain music so effective in changing or intensifying our mood? It seems that while we are all experienced and active users of music as a kind of mood regulator (ranging widely from energizer to consoler of grief), music research has only just begun to explore the how and why of the relation between music and emotion.
Schellenberg, E. G., & Hallam, S. (2005). Music listening and cognitive abilities in 10- and 11-year-olds: The Blur effect. Annals of the New York Academy of Sciences, 1060(1), 202-209. DOI: 10.1196/annals.1360.013
Friday, June 22, 2007
Why do I remember the next song?
Last week a national newspaper phoned me for an answer to a reader’s question: Why do I suddenly remember the next song when listening to a CD? A phenomenon I’m almost too familiar with, since I have the habit of listening to music repeatedly. When playing an album from beginning to end, the phenomenon appears just before the end of a song, just when the silence between two songs is about to start: the next song pops up in your mind (and often quite loudly), while you were hardly aware of it just a few seconds before. To avoid this irritating effect I always use the shuffle function of the CD player or iPod, effectively avoiding the apparent learning of these transitions.

However, the phenomenon is interesting in itself. It is a result of what seems to be our ‘iconic’ or absolute memory for music at work. More common is our relative memory for music. While most of us can easily recognize the melody of a popular song, few can judge whether it is played at the original pitch height. We seem to remember the pitch intervals or contour of the melody, not the frequencies of the pitches themselves. By contrast, the phenomenon discussed above hints at the presence of an absolute memory for music. It seems that even young children can judge whether a familiar television tune is the original or one that has been transposed up a few tones. Apparently we all have both absolute and relative memory for music.
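The distinction between relative and absolute memory for melody can be made concrete in a few lines. A transposed melody has different absolute pitches but identical pitch intervals, so a listener relying on relative memory cannot tell the two apart. The melody below is a made-up example in MIDI note numbers:

```python
# Toy illustration of relative vs. absolute memory for melody.
# A transposition changes every pitch but leaves the intervals intact.
# Pitches are MIDI note numbers; the melody is hypothetical.

def intervals(pitches):
    """Successive pitch intervals in semitones (the 'relative' code)."""
    return [b - a for a, b in zip(pitches, pitches[1:])]

melody     = [60, 62, 64, 60]          # original tune
transposed = [p + 3 for p in melody]   # same tune, three semitones up

print(melody == transposed)                        # False: absolute pitches differ
print(intervals(melody) == intervals(transposed))  # True: the interval pattern is identical
```

Judging whether a familiar television tune is at its original pitch height requires the first comparison, i.e., some form of absolute memory; merely recognizing the tune only requires the second.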