Group photo of all participants (except Peter Tyack and Constance Scharff, who had to leave early) contributing to a Lorentz Workshop on Musicality in April 2014.
For me personally, it was simply a pivotal workshop. With over twenty international scientists and scholars (namely Judith Becker, Simon E. Fisher, Tecumseh Fitch, Bruno Gingras, Jessica Grahn, Yuko Hattori, Marisa Hoeschele, David Huron, Yukiko Kikuchi, Hugo Merchant, Bjorn Merker, Iain Morley, Ani Patel, Isabelle Peretz, Martin Rohrmeier, Constance Scharff, Carel ten Cate, Laurel Trainor, Sandra Trehub, Peter Tyack, Geraint Wiggins and Jelle Zuidema) interested in musicality, we succeeded in agreeing on a research agenda for studying what makes us musical animals. A publication is expected in 2015. For now, I can only be exhilarated that this group of fine researchers shares a sincere fascination with this topic. A photo impression of the workshop can be found here.
On 9 October, Henkjan Honing (UvA) and Marc Leman (UGent) gave a lecture at the Handelsbeurs in Ghent, followed by a conversation with Eos editor Reinout Verbeke.
You are sitting in your car, turning the dial of the radio. You quickly hear whether a piece of music appeals to you or not. You recognize a voice, a song, or even a particular performance of it. Everyone does it, everyone can do it. And often at lightning speed: faster than the average duration of a single note.
If you were asked to listen to a series of 0.2-second music fragments, it would turn out that you can easily tell which fragment is classical, jazz, R&B, or pop (see the listening test). A mere snippet of sound gives us access to the memory of music heard before, even if we have never heard that particular series of notes. That memory can be very specific: a song by Björk, for example. But it can also be very general: we recognize a certain genre, such as classical, country, or jazz. The nuances in timbre, characteristic of a song or an entire genre, are apparently stored in our memory in an abstract way. That is why the dial (or preset button) of the car radio has become such a successful interface…
Today several items appeared in the media following a piece in de Volkskrant about earworms and the hype around Song Pop, an app that exploits the musical talent described above, one we all share: recognizing music at lightning speed.
Gjerdingen, R. O., & Perrott, D. (2008). Scanning the dial: The rapid recognition of music genres. Journal of New Music Research, 37(2), 93-100. DOI: 10.1080/09298210802479268
In the 1980s there was an almost utopian vibe that the computer would change not only music, mathematics, linguistics and related fields, but also education. Special programming languages were developed that aimed to resonate with the intuitions of children (and adults) about a certain domain, be it mathematics, music, or language. This generated an enormous amount of ideas, especially at MIT, where, for instance, Jeanne Bamberger was for many years Professor of Music and Urban Education. The cognitivist underpinnings of her work marked a groundbreaking shift in the design of music education software, a field dominated at the time by programs influenced by behaviorist "skill and drill" theories of music learning and teaching. Influenced by the work of Seymour Papert on Logo (a Lisp-like programming language designed for educational purposes), Jeanne set out to design project-based musical micro-worlds that researchers and teachers could use to help make children's musical thinking, intuitions, and problem-solving processes audible and visible.
Are musicians born or made? What is the line between skill and talent in any domain, and can we acquire either later in life? Is it possible to learn an instrument at the age of forty? Those are the questions that Gary Marcus explores in Guitar Zero: The New Musician and the Science of Learning:
'If critical periods aren't quite so firm as people once believed, a world of possibility emerges for the many adults who harbor secret dreams—whether to learn a language, to become a pastry chef, or to pilot a small plane. And quests like these, no matter how quixotic they may seem, and whether they succeed in the end or not, could bring unanticipated benefits, not just for their ultimate goals but for the journey itself. Exercising our brains helps maintain them, by preserving plasticity (the capacity of the nervous system to learn new things), warding off degeneration, and literally keeping the blood flowing. Beyond the potential benefits for our brains, there are benefits for our emotional well-being, too. There may be no better way to achieve lasting happiness—as opposed to mere fleeting pleasure—than pursuing a goal that helps us broaden our horizons.'
Marcus, G. (2012). Guitar Zero: The New Musician and the Science of Learning. New York: Penguin.
In the NTR series Pavlov, eight well-known Dutch people each put a question to science. In this episode Fleur Bouwer (UvA) tests the musicality of Lavinia Meijer.
Want to take the listening test yourself? It takes about twenty minutes. Click here.
For the full episode, see the Pavlov website.
It is a persistent myth that music is processed solely in the right hemisphere. This week yet another study shows that, even when the processes are restricted to listening alone, virtually the whole brain is involved.
A Finnish research group led by Petri Toiviainen found that music listening recruits not only the auditory areas of the brain but also large-scale neural networks. They showed that the processing of musical pulse recruits motor areas in the brain, supporting the idea that music and movement are closely intertwined. Limbic areas of the brain, known to be associated with emotions, were found to be involved in rhythm and tonality processing. Processing of timbre was associated with activations in the so-called default mode network, which is assumed to be associated with mind-wandering and creativity.
Adapted from Stewart et al. (2009) Oxford Handbook of Music Psychology.
As mentioned, this study does not stand alone. In a recent chapter, Lauren Stewart and colleagues made a similar claim based on a review of a vast amount of literature. In the figure above (redrawn from the original), the circles indicate areas that more than 50% of the existing literature agrees are involved. (N.B. it is good to realize these areas are actually part of whole networks, not just single locations.) And here again, if you look at the brain networks involved in listening, you will notice that virtually the whole brain is involved.
Alluri, V., Toiviainen, P., Jääskeläinen, I., Glerean, E., Sams, M., & Brattico, E. (2011). Large-scale brain networks emerge from dynamic processing of musical timbre, key and rhythm. NeuroImage. DOI: 10.1016/j.neuroimage.2011.11.019
Stewart, L., von Kriegstein, K., Warren, J. D., & Griffiths, T. D. (2006). Music and the brain: disorders of musical listening. Brain, 129(10), 2533-2553. PMID: 16845129
Today in de Volkskrant (in the Opinie & Debat section): a piece by Dick Swaab, Erik Scherder and yours truly, titled 'Amuzikaal zijn is de grote uitzondering' ('Being unmusical is the great exception'), on why music is not a luxury.
On Wednesday 22 June, Muziek telt! is organizing the symposium Muziek en het Brein (Music and the Brain). Questions such as: What are the positive effects of music on the brain? When do they occur? And how far have scientists come in their research into them? will be answered by the keynote speakers:
prof. dr. Erik Scherder (Professor of Clinical Neuropsychology)
prof. dr. Henkjan Honing (Professor of Music Cognition)
prof. dr. Dick Swaab (Professor of Neurobiology)
The symposium is intended for anyone working in education, science, politics, health care or music.
The afternoon will be hosted by Paul Witteman.
To register, click here (registration open until 31 May).
Last week PLoS ONE published an interesting finding showing that one-month-old infants can recognize a melody they heard about three weeks before they were born.
Developmental psychobiologist Carolyn Granier-Deferre (Paris Descartes University, France) and her colleagues asked fifty women to play a brief recording of a descending piano melody (one that gets lower in pitch) twice daily during the 35th, 36th and 37th weeks of their pregnancy. When the infants were one month old, both the descending melody and an ascending melody were played to them in the laboratory while they slept (see notation below). On average, the heart rates of the sleeping babies briefly slowed by about twelve beats a minute for the familiar descending melody (right), and by only five or six beats for the unfamiliar ascending melody (left). This result was interpreted as the infants paying more attention to the familiar than to the unfamiliar melody.
We have known for a while that newborns can discriminate or perceive most of the acoustic properties of speech. The prevailing theoretical view is that these capacities are largely independent of previous auditory experience and that newborns have an innate bias or skill for perceiving speech.
By contrast, these results show (as the authors stress in a press release) that merely exposing a human fetus’ developing auditory system to complex stimuli (read ‘music’) can affect how it functions.
Next to the role of mere exposure, one should add that this result is equally convincing evidence of a newborn's capacity to perceive and recall music (see my earlier 'language bias' entry). In that sense this study adds to the growing literature showing that infants in the womb are sensitive to, and can memorize, both melody and rhythm. These findings play an important role in furthering our understanding of a potential biological and evolutionary role of music (cf. Parncutt, 2009).
Granier-Deferre, C., Bassereau, S., Ribeiro, A., Jacquet, A., & DeCasper, A. (2011). A melodic contour repeatedly experienced by human near-term fetuses elicits a profound cardiac reaction one month after birth. PLoS ONE, 6(2). DOI: 10.1371/journal.pone.0017304
Parncutt, R. (2009). Prenatal development and the phylogeny and ontogeny of musical behaviour. In S. Hallam, I. Cross, & M. Thaut (Eds.), Oxford handbook of music psychology (pp. 219-228). Oxford: Oxford University Press.
At the end of the 1990s, cognitive psychologist Steven Pinker infamously characterized music as "auditory cheesecake": a delightful dessert but, from an evolutionary perspective, no more than a by-product of language. But Pinker was probably right when he wrote: "I suspect music is auditory cheesecake, an exquisite confection crafted to tickle the sensitive spots of...our mental faculties." Or, to express his idea less graphically: music affects our brains at specific places, thereby stimulating the production of unique substances that have a pleasurable effect on our mood. However, rather than a by-product of evolution, music, or more precisely musicality, is likely to be a characteristic that survived natural selection because it stimulates and develops our mental faculties (cf. Honing, 2011).
Pinker’s idea may actually be a very fruitful hypothesis whose significance has wrongfully gone unacknowledged because of all the criticism it elicited. After all, the purely evolutionary explanations for the origins of music largely overlook the experience of music we all share: the pleasure we derive from it, not only from the acrobatics of making it but also from the act of listening to it.
In a recent study, Canadian researchers were able to show precisely that: music can arouse feelings of euphoria and craving, similar to tangible rewards that involve the striatal dopaminergic system. They showed that intense pleasure in response to music can lead to dopamine release in the striatal system and, more importantly, that the anticipation of an abstract reward can result in dopamine release in an anatomical pathway distinct from that associated with the peak pleasure itself.
Salimpoor, V., Benovoy, M., Larcher, K., Dagher, A., & Zatorre, R. (2011). Anatomically distinct dopamine release during anticipation and experience of peak emotion to music. Nature Neuroscience. DOI: 10.1038/nn.2726
Honing, H. (in press, 2011). Musical Cognition: A Science of Listening. New Brunswick, NJ: Transaction Publishers.
The BBC just launched a new experiment which aims to discover more about the science of musicality. How Musical Are You? was designed by BBC Lab UK in collaboration with academics from the Music, Mind and Brain group at Goldsmiths, University of London. The scientific data will be analyzed to establish whether people who are untrained but passionate about music can be just as musical as people who have been formally trained. The experiment includes questionnaires and musical tests that evaluate your ability to categorize musical styles, memorize tunes, and recognize the beat in pieces of music. The tests aim to assess general musical ability.
The initiative was recently covered on BBC Radio 4's 'Today' programme.
The actual website can be found here.
For the next few weeks there will be no new entries in this blog. However, I hope to see some of you on 19 January 2011 [N.B. cancelled due to illness], when Glenn Schellenberg will give a lecture at the Cognitive Science Center Amsterdam (CSCA) of the University of Amsterdam, titled Does music make you smarter? Schellenberg will show that the available evidence indicates that music listening leads to enhanced performance on a variety of cognitive tests, but that such effects are short-term and stem from the impact of music on arousal level and mood, which, in turn, affect cognitive performance; experiences other than music listening have similar effects. Music lessons in childhood, however, tell a different story. They are associated with small but general and long-lasting intellectual benefits that cannot be attributed to obvious confounding variables such as family income and parents' education. The mechanisms underlying this association have yet to be determined. Other controversial issues include the direction of causation, and the reason why "real musicians" often fail to exhibit enhanced performance on measures of intelligence.
See here for more information on the lecture and location.
Schellenberg, E., & Peretz, I. (2008). Music, language and cognition: unresolved issues. Trends in Cognitive Sciences, 12(2), 45-46. DOI: 10.1016/j.tics.2007.11.005
This week a video entry, with a clip from the Dutch TV program Vrije Geluiden: last Sunday prof. Erik Scherder (VU University Amsterdam) explained some recent research (e.g., Hyde et al., 2009) on the influence of music performance and music listening on brain plasticity.
The full episode can be viewed here (N.B. no subtitles).
Hyde, K., Lerch, J., Norton, A., Forgeard, M., Winner, E., Evans, A., & Schlaug, G. (2009). Musical training shapes structural brain development. Journal of Neuroscience, 29(10), 3019-3025. DOI: 10.1523/JNEUROSCI.5118-08.2009
"French babies cry differently from German babies. This was the conclusion of a study published a year ago in Current Biology (see earlier entry). Three-day-old German babies cry with a falling melody; their French contemporaries show a rising swell of the cry and stop abruptly.
It was a surprising observation, especially in the light of the general belief that in crying the pitch should always drop as a physiological consequence of the respiratory cycle. Apparently, babies of just a few days old can control both the dynamics and the intonation contour of their crying. Why would they do this?
The researchers interpreted it as one of the first steps in the development of language. In spoken French the mean intonation contour is rising (dropping only at the very end of an utterance), while in German the mean intonation typically exhibits a falling contour. This, combined with the fact that the human auditory system is already functional in the last trimester of pregnancy, led the researchers to conclude that these babies picked up the intonation contours of their native language in those last months and consequently imitated them in their crying.
This observation is also surprising because the literature suggests that children only become interested in their native language roughly between six and eighteen months of age, when they start to imitate it in their babbling. Is it indeed the case, as stressed by these researchers (and the recent literature citing them; e.g. van Elk & Hunnius, 2010), that this is unique evidence for a much earlier sensitivity to language than commonly thought? Or is another interpretation possible?
Although the empirical results are clear, this interpretation is a typical example of what one could call a 'language bias': an understandable enthusiasm of linguists to interpret a range of phenomena in the real world as 'linguistic'. One can, however, easily make the argument that this early sensitivity to intonation contour is not a linguistic skill but a musical one.
Most linguists see the use of rhythm, dynamics, and intonation as an aid for making infants familiar with the words and sentence structures of the language of the culture in which they will be raised. Words and word divisions are emphasized through exaggerated intonation contours and varied rhythmic intervals, thereby facilitating the process of learning a specific language. These aspects are referred to as prosody, but they are actually the basic building blocks of music. Only much later in a child's development will this 'musical prosody' be used, for instance in the marking, and consequently the recognition, of word boundaries. But these early signs of musical skill are, and I like to stress this, not of a linguistic nature. They belong to the preverbal and preliterate stage of our musical listening in development."
Honing, H. (2010). De ongeletterde luisteraar. Over muziekcognitie, muzikaliteit en methodologie. Amsterdam: Royal Netherlands Academy of Arts and Sciences (KNAW).
Mampe, B., Friederici, A., Christophe, A., & Wermke, K. (2009). Newborns' cry melody is shaped by their native language. Current Biology. DOI: 10.1016/j.cub.2009.09.064
Elk, M. van, & Hunnius, S. (2010). Het babybrein: over de ontwikkeling van de hersenen bij baby's. Amsterdam: Bert Bakker.
Even the crying of newborn babies seems to be more musical than we think. This can be concluded from an interesting study published last month in Current Biology. German researchers were able to show that newborns don't just cry randomly: when studying the audio signal of their crying, one can distinguish between French and German babies. The German babies, only three days old, cry with a falling melody; their French contemporaries show a rising swell of the cry and stop abruptly.
Sound example: German & French baby cries.
How can we explain these differences? Babies can hear from about three months before they are born. And the few prenatal studies available show that babies, at that stage of their development, already perceive and remember sounds. For instance, they recognize the sound of their mother's voice just after birth, and they can distinguish tunes that they heard during pregnancy from those they have never been exposed to before.
The correlation between the babies' native language and their average crying pattern suggests that exposure to the language spoken by their caregivers (mother, father, etc.) influences the crying: spoken French, on average, consists of rising melodies, while German intonation often shows a falling shape. The researchers suggest that this is a sign of a sensitivity to language from very early on in life.
My interpretation would be different. I would relate these results not so much to language as to a high sensitivity to the musical aspects of speech: rhythm, melody, stress (i.e. prosody). As quite a few studies have shown (e.g., by authors like Fernald, Trehub, Trainor, and others), infants and young children are extremely sensitive to these 'musical' variations in their environment. For example, infants seem to be highly sensitive to the musical and emotional aspects of infant-directed speech (IDS), more so than to its actual linguistic structure, let alone its semantics. I would therefore claim that the results of the baby study are evidence of very early musical sensitivity to intonation and other musical aspects of sound, rather than evidence of the beginnings of language learning.
Mampe, B., Friederici, A., Christophe, A., & Wermke, K. (2009). Newborns' cry melody is shaped by their native language. Current Biology. DOI: 10.1016/j.cub.2009.09.064
Last week a paper was published in PLoS ONE suggesting a relation between AVPR1A haplotypes and musical aptitude and creativity. A group of Finnish researchers analyzed 19 families with a total of 343 family members, assessing their musical aptitude (using the Seashore test and a test developed by one of the authors) and their DNA profiles. They were able to show an association between AVPR1A and related genes and levels of musical aptitude and creativity. This contrasts with earlier twin research that suggested no such relation (e.g., Coon & Carey, 1989). The authors propose the interesting hypothesis that music perception and creativity in music are linked to the same phenotypic spectrum of human cognitive social skills, like human bonding and altruism, both associated with AVPR1A. Music as a form of 'extreme' bonding behavior...
It was just a matter of time before such a study emerged. Still, the results of this study are merely correlational. I like to think of the capacity for music as shared rather than special, and as the result of complex interactions between nature and nurture.
Ukkola, L., Onkamo, P., Raijas, P., Karma, K., & Järvelä, I. (2009). Musical aptitude is associated with AVPR1A-haplotypes. PLoS ONE, 4(5). DOI: 10.1371/journal.pone.0005534
Coon, H., & Carey, G. (1989). Genetic and environmental determinants of musical ability in twins. Behavior Genetics, 19(2), 183-193. DOI: 10.1007/BF01065903
In the Netherlands (and I’m sure there are versions of it in the UK and the US as well) there is a weekly radio show containing a returning item in which music experts are asked to compare and judge two or three CD recordings of the same piece, without knowing who the musicians are. They have to guess the performers and describe why they do (or don’t) like that particular performance.
How well would you do in such a test? The common hypothesis is that experts do much better, e.g. under the assumption that they have more sensitive listening skills. But do experts indeed hear more detail and more nuance than a 'common listener'? Or do they just have more terminology available to verbalize the differences?
Two years ago our group did a large-scale online listening experiment with a similar task. Participants were asked to compare several pairs of recordings of well-known musicians. One of the recordings was taken directly from a CD, but the other was originally performed at another tempo (faster or slower) and then scaled to be similar in tempo to the former recording. The task was to judge which recording was real and which one was manipulated, by focusing on the timing used by the performer.
To give you an idea of the difficulty of the task, here is an example:
A
B
(See answer at the bottom.)
The results were recently published in the Journal of Experimental Psychology, with a surprising outcome: the judgments seem to be largely influenced by exposure to music (listening a lot to one’s favorite music) and not (at all) by the level of expertise (amount of formal musical training). One seems to learn a lot by simply listening.
Honing, H., & Ladinig, O. (2009). Exposure influences expressive timing judgments in music. Journal of Experimental Psychology: Human Perception and Performance, 35 (1), 281-288 DOI: 10.1037/a0012732
* The first recording is the original: Glenn Gould performing English Suite No. 4 by J.S. Bach. The second recording is Sviatoslav Richter performing the same piece; however, this recording was sped up from 70 to 87 bpm, making his use of tempo rubato sound 'unnatural'.
Last week I received an email from an enthusiastic amateur musician who was wondering whether his teachers were right in stating that 'getting better at music is mainly a matter of exercise'. Apparently he doubted his talent for music: would he ever come close to the quality of his beloved musicians?
John Sloboda of the University of Keele conducted an elaborate study in the 1990s in which he proposed a number of challenges to what he called the 'myth' of musical talent. Four of them may provide some comfort to the hardworking amateur:
First, in several cultures a majority of the people arrive at a level of expertise that is far above the norm for our own society. This suggests that cultural, not biological, factors are limiting the spread of musical expertise in our own society.
Second, the majority of top-ranking professional musicians were not child prodigies. In fact, studies reveal that very few able musicians showed any signs of special musical promise in infancy.
Third, there are no clear examples of outstanding achievement in musical performance (or composition) that were not preceded by many years of intense preparation or practice (N.B. a twenty-one-year-old musician has generally accumulated more than ten thousand hours of formal practice).
Fourth, many perceptual skills required to handle musical input are very widespread, develop spontaneously through the first ten years of life, and do not require formal musical instruction (for the full list, see Sloboda, 1994).
So a talent for music appears to be constrained not so much by our biology as by our culture. We all seem to have a talent for music. Nevertheless, if you want to become good at it, like most musicians, you have to spend hours and hours doing it.