Friday, November 07, 2025

Does musicality precede both music and language? [translated from Dutch]

Photo: Iris Vette
What the brains of our distant ancestors looked like can no longer be determined. Yet, via a detour, something can perhaps be said about the origins of language, and the role music played in them.

Many linguists believe, strangely enough, that our love of music piggybacks on our capacity for language (see, for example, NRC from 2016 and Steven Pinker's influential book How the Mind Works). But couldn't it, just as plausibly, be exactly the other way around?

For an overview of recent developments in the neuroscience of language and music, see, e.g., Peretz et al. (2015), Norman-Haignere et al. (2015), and the video below: a recording of the lecture Voor de muziek uit that I gave in 2016 at the biennial Onze Taal congress in the Chassé Theater in Breda.

N.B. A summary of the text appeared in the magazine Onze Taal. The full text appeared in the interdisciplinary journal Blind.



Norman-Haignere, S., Kanwisher, N., & McDermott, J. (2015). Distinct cortical pathways for music and speech revealed by hypothesis-free voxel decomposition. Neuron, 88(6), 1281-1296. doi: 10.1016/j.neuron.2015.11.035

Peretz, I., Vuvan, D., Lagrois, M., & Armony, J. (2015). Neural overlap in processing music and speech. Philosophical Transactions of the Royal Society B: Biological Sciences, 370(1664), 20140090. doi: 10.1098/rstb.2014.0090

Thursday, November 06, 2025

Can birds imitate Artoo-Detoo?

The research summarized in an infographic (Dam et al., 2025).

When you think of birds imitating sounds, parrots and starlings might come to mind. They’re famous for copying human speech, car alarms, and even ringtone melodies. But what happens when you challenge them with something really complex, like the electronic beeps and boops of R2-D2, the beloved Star Wars droid? Researchers from the University of Amsterdam and Leiden University put nine species of parrots and European starlings to the test.

Starlings versus parrots

It turns out that starlings had the upper hand when it came to mimicking the more complex 'multiphonic' sounds, thanks to the unique morphology of their vocal organ, the syrinx, which has two sound sources. This allows starlings to produce multiple tones at once, perfect for R2-D2-style chatter.
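To make the two-voice idea concrete, here is a minimal sketch (my own illustration, not from the study; the frequencies are arbitrary) of the difference between a single-source beep and a two-source 'multiphonic' built by summing independent tones:

```python
import numpy as np

SR = 44100  # sample rate in Hz

def tone(freqs, dur=0.5):
    """Sum one sinusoid per (independent) sound source."""
    t = np.linspace(0, dur, int(SR * dur), endpoint=False)
    return sum(np.sin(2 * np.pi * f * t) for f in freqs) / len(freqs)

# Parrot-like (one sound source): a single tone at a time.
mono = tone([880.0])

# Starling-like (two-sided syrinx): two simultaneous, independent tones.
multi = tone([880.0, 1320.0])
```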

Parrots, on the other hand, are limited to producing one tone at a time (just like humans). Still, they held their own when it came to the simpler “monophonic” beeps of R2-D2. Interestingly, it wasn’t the famously chatty African grey parrots or Amazon parrots that did best, but the smaller species, like budgerigars and cockatiels. These little birds, often thought of as less impressive vocalists, actually outperformed the larger species in this specific task, likely by using different strategies to imitate sounds.

Even sounds from science fiction can teach us something real

The researchers call their study a fun but powerful window into how anatomy, such as the structure of a bird’s vocal organ, shapes the limits and possibilities of a species’ vocal skills. It is the first time that so many different species have produced the same complex sounds, which finally allows a direct comparison. It shows that even sounds from science fiction can teach us something real about the evolution of communication and learning in animals.

And here’s the cool part: much of the sound data came from pet owners and bird lovers participating in citizen science through the Bird Singalong Project. With their help, the researchers were able to gather a richer, more diverse collection of bird sounds than ever before, proving that science doesn't always have to happen in a lab.

Reference

Dam, N. C. P., Honing, H., & Spierings, M. J. (2025). What imitating an iconic robot reveals on allospecific vocal imitation in parrots and starlings. Scientific Reports, 15, 36816. https://doi.org/10.1038/s41598-025-23444-7

Friday, October 03, 2025

Piano touch unraveled. Touché?

[From Kuromiya et al., 2025: Figure 1B]

 

[Adapted from interview by Elleke Bal, Trouw, 3 October 2025]

Two professional pianists may perform on the exact same piano, in the same concert hall, and even play the same notes at the same tempo. Yet, through the way they touch the keys, they are able to produce strikingly different sounds from the instrument.

This so-called timbre—the tonal color or quality of sound—has long been a subject of fascination and debate among musicians and listeners alike. Consider, for example, the crystalline clarity of Glenn Gould versus the warmth of Sviatoslav Richter. But what exactly constitutes clarity or warmth? For pianists, these are intuitive concepts. For scientists, however, the challenge has been to find objective evidence that such distinctions arise from unique motor actions at the keyboard.

The researchers examined which motor skills underpinned these differences. They found that timbral variation was strongly associated with a limited set of keystroke parameters: the velocity of key descent, the temporal spacing between successive key presses, and the synchrony between hands. Crucially, one factor emerged as particularly decisive: the acceleration of the key movement at the precise moment the hammer disengages. According to the authors, this acceleration largely determines the resulting timbre (Kuromiya et al., 2025).
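To make those parameters concrete, below is a minimal sketch of how such keystroke features could be computed from a list of key-press events. This is my own toy illustration, not the authors' analysis pipeline; the KeyPress fields and the notion of "first press per hand" are simplifying assumptions.

```python
from dataclasses import dataclass

@dataclass
class KeyPress:
    onset: float     # time of the keystroke in seconds (illustrative)
    velocity: float  # key-descent velocity, arbitrary units
    hand: str        # "L" or "R"

def keystroke_features(presses):
    """Toy versions of the three parameters named above."""
    presses = sorted(presses, key=lambda p: p.onset)
    # 1. Velocity of key descent, per press.
    velocities = [p.velocity for p in presses]
    # 2. Temporal spacing between successive key presses.
    iois = [b.onset - a.onset for a, b in zip(presses, presses[1:])]
    # 3. Between-hand synchrony: onset gap between the first left-hand
    #    and first right-hand press (a crude stand-in for asynchrony).
    left = min(p.onset for p in presses if p.hand == "L")
    right = min(p.onset for p in presses if p.hand == "R")
    return velocities, iois, abs(left - right)
```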

(c) 2025 Trouw

This study demonstrates the extraordinary precision achieved by highly trained pianists. Nuances in timing and velocity of a few milliseconds can shape timbre in ways that are musically significant. These micro-timing differences are the product of extensive practice and experimentation at the instrument. However, the study overlooked one key factor: the velocity of the hammer striking the string. Without measuring hammer dynamics, the account of timbre remains incomplete. Companies such as Yamaha have long recognized this; their Disklavier Pro system, for example, replicates hammer velocity to convincingly reproduce the playing of pianists like Glenn Gould.

Ultimately, it is the hammer which, at the moment it is released from the piano’s action mechanism, independently carries the cumulative timing and dynamic input of the performer. Its subsequent trajectory toward the string—and the resulting timbre—is determined by its momentum, defined by the combined effects of its velocity and mass.
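In symbols (a textbook relation, not a quantity measured in the study), the hammer leaves the action carrying momentum

```latex
p = m \, v_{\text{hammer}}
```

so, for a hammer of fixed mass, the velocity at the moment of release is what delivers the performer's cumulative timing and dynamic input to the string.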

Does reducing the artistry of great pianists to numerical parameters diminish the magic of a performance? I don’t think so: This research only reinforces the extraordinary dexterity, control, and timing that distinguish master pianists.

References 
Kuromiya, K., et al. (2025). Motor origins of timbre in piano performance. Proceedings of the National Academy of Sciences, 122(39), e2425073122. https://doi.org/10.1073/pnas.2425073122 
Goebl, W., & Palmer, C. (2008). Tactile feedback and timing accuracy in piano performance. Experimental Brain Research, 186(3), 471-479. doi: 10.1007/s00221-007-1252-1 

 

Thursday, September 11, 2025

Was everything different back then? [translated from Dutch]

[N.B. Column from 2005]  

As of today, the faculty requires every staff member to write exclusively with a ballpoint pen of the Bic brand. The arguments: they write just as well as other pens, are considerably cheaper, require next to no maintenance, and should maintenance nevertheless be needed, the faculty employs a well-oiled team of Bic experts. Fountain pens and pencils may no longer be used. Only with exceptional permission may a staff member use a fountain pen, and then only on the condition of writing on their own paper ('ink stains are at your own risk; Bic's stains are not,' says the support department).  

Does this read like a clumsy little fantasy tale? Replace the Bic pen with a Windows computer and the fountain pen with, say, an Apple computer, and the example describes the faculty's current ICT policy. So what is so irritating about this? This kind of policy dates from a time when a computer was something unmanageable, something only scientists in lab coats were allowed to touch. We are now some thirty years on, and the computer has come to occupy an important and at times almost personal place in our daily thinking and working space. A private domain on which you make specific and personal demands. An employer that dictates the layout of, and the permissible movements within, this thinking and working space is out of step with the times. 

An example. The computer plays a major role in my music research. Besides an important role in theory building, it is regularly used for online internet experiments. The format used for the sound examples is MPEG-4 (an international standard that guarantees high sound quality over both modem and DSL). For strategic reasons, however (Microsoft wants to promote its own standards), this format is not supported on Windows computers. The experiments therefore cannot be run by students and staff on UvA computers. The concrete consequence is that few musicology students and staff take part in the research. 

Restrictions of this kind, the result of policy choices, have their consequences. The emerging trend is that students and staff take out UvA-DSL and, at home, set up their laptops in complete freedom, exactly as they wish. And since a laptop may not be connected to the internal UvA network, working from home receives an extra stimulus (and maintenance and management become a private matter). 

The policy also has consequences beyond mere irritation among students and staff, however. Again, an example. In the negotiations over a recently awarded EU research project with a large technological component, it was decided, partly for infrastructural reasons, to house the project not in the Faculty of Humanities (FGw) but in the Faculty of Science (FNWI). That is a great pity, and it can only impoverish research and teaching at the FGw. 

My (unsolicited) advice is therefore: install WiFi throughout the faculty, give every student a high-tech laptop (at a substantial discount), and let everyone experiment and download freely. My prediction: within two years there will be no need for desktop computers or for support, just as a fountain-pen management department was never needed. 

Henkjan Honing (Amsterdam, April 2005)

Tuesday, September 02, 2025

Are there controversies in pitch and timbre perception research? [in 333 words]

European Starling (Sturnus vulgaris)

At the heart of human musicality lie fundamental questions about how we perceive sound. In the coming academic year our group will dedicate several meetings to exploring and clarifying the spectral percepts that might underlie musicality, with an agenda set around some enduring controversies. These span the roles of learning, culture, and cross-species comparisons, as well as evolutionary explanations for why music holds such sway over human minds. 

Among the most debated topics is the relationship between pitch and timbre perception. Both pitch and timbre are percepts: mental constructs arising from acoustic input. In humans, pitch perception is central to melodic recognition. When we hear a melody, we tend to identify it by its sequence of relative pitches—hearing it as the “same” tune regardless of changes in timbre, loudness, or duration. This reliance on relative pitch is a cornerstone of human music cognition. 
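A toy illustration of relative pitch (hypothetical MIDI note numbers, not an experimental task): transpose a melody to another key, and the interval sequence, the thing listeners appear to track, stays identical.

```python
melody = [60, 62, 64, 65, 67]         # C D E F G (MIDI note numbers)
transposed = [n + 5 for n in melody]  # the same tune, a fourth higher

def intervals(notes):
    """Successive pitch intervals in semitones (relative pitch)."""
    return [b - a for a, b in zip(notes, notes[1:])]

assert intervals(melody) == intervals(transposed)  # same relative pitch
assert melody != transposed                        # different absolute pitch
```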

But is pitch such a universal perceptual anchor? For years, researchers assumed so, pointing to songbirds as an obvious parallel. Birds, it was thought, must also use pitch cues, though often in the form of absolute rather than relative pitch. Yet recent evidence complicates this narrative. In a striking study, Bregman et al. (2016) reported that European starlings do not, in fact, rely on pitch when recognizing sequences of complex harmonic tones. Instead, they appear to attend more closely to spectral shape, or the broader distribution of energy across frequencies. 

This finding raises a further question: is it really the spectral envelope (i.e. spectral shape) that matters, or something more subtle? The methods used, particularly contour-preserving noise vocoding, leave open another possibility: birds may actually be attuned to fine spectral-temporal modulations, the intricate contours woven into sound. Such results remind us that perceptual categories humans take for granted may not map cleanly onto other species, and that the universality of pitch as a cognitive anchor remains an open, and fascinating, controversy (cf. Patel, 2017; ten Cate & Honing, 2025). 
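The distinction at stake can be illustrated numerically. The sketch below is entirely my own construction (a synthetic harmonic tone, an arbitrary smoothing width): it separates a coarse spectral envelope, the kind of cue starlings appear to use, from the harmonic fine structure that underlies pitch.

```python
import numpy as np

SR = 16000
t = np.linspace(0, 0.5, SR // 2, endpoint=False)
# Synthetic harmonic complex: F0 = 200 Hz with 1/k partial amplitudes.
signal = sum((1 / k) * np.sin(2 * np.pi * 200 * k * t) for k in range(1, 9))

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), 1 / SR)

# Spectral shape: a wide moving average blurs out individual harmonics,
# leaving only the coarse distribution of energy across frequency.
width = 201
envelope = np.convolve(spectrum, np.ones(width) / width, mode="same")

# Pitch-related fine structure: the strongest bins sit at multiples of F0.
harmonics = np.sort(freqs[np.argsort(spectrum)[-8:]])  # 200, 400, ... 1600 Hz
```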

N.B. These entries are part of a new series of explorations on the notion of Spectral Percepts (in 333 words each). 

Bregman, M. R., Patel, A. D. & Gentner, T. Q. (2016). Songbirds use spectral shape, not pitch, for sound pattern recognition. Proceedings of the National Academy of Sciences, 113(6), 1666–1671. doi: 10.1073/pnas.1515380113 

Patel, A. D. (2017). Why Doesn’t a Songbird (the European Starling) Use Pitch to Recognize Tone Sequences? The Informational Independence Hypothesis. Comparative Cognition & Behavior Reviews, 12, 19–32. doi: 10.3819/CCBR.2017.120003 

ten Cate, C. & Honing, H. (2025). Precursors of music and language in animals. In D. Sammler (Ed.), The Oxford Handbook of Language and Music. Oxford University Press. doi: 10.1093/oxfordhb/9780192894700.001.0001

Sunday, August 31, 2025

Is consonance a biological or a cultural phenomenon? [in 333 words]

Chick in consonance experiment (Chiandetti & Vallortigara, 2011).

The distinction between consonance and dissonance has long occupied a central place in the scientific study of auditory perception and music cognition. Consonant intervals are typically described as stable, harmonious, or pleasing, whereas dissonant intervals are often characterized as tense, unstable, or even harsh. Yet even these seemingly straightforward descriptions quickly lead to methodological debate. 

A central difficulty arises from the frequent conflation of “dissonance” with “roughness.” Roughness refers to a physiological effect caused by closely spaced frequencies interacting on the basilar membrane of the inner ear. This phenomenon is measurable, consistent, and largely universal across listeners. Consonance, however, is not reducible to physiology alone. Recent research emphasizes that consonance is a multidimensional construct, shaped by both acoustic properties such as harmonicity and by layers of cognitive and cultural familiarity (Lahdelma & Eerola, 2020). 
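The physiological side is easy to demonstrate. In this minimal sketch (the frequencies are my own arbitrary choices), two tones roughly 30 Hz apart produce the rapid amplitude fluctuations ("beats") associated with roughness, while a 2:3 fifth does not:

```python
import numpy as np

SR = 44100
t = np.linspace(0, 1.0, SR, endpoint=False)

def pair(f1, f2):
    """Superpose two equal-amplitude sinusoids."""
    return np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

smooth = pair(440.0, 660.0)  # a 2:3 fifth: no audible beating
rough = pair(440.0, 470.0)   # ~30 Hz apart: rapid beats, heard as roughness
```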

This controversy can be framed around two major questions (Harrison, 2021). First, do humans possess an innate preference for consonance over dissonance? Second, if such a preference exists, how might it be explained in evolutionary terms? A landmark study by McDermott et al. (2016) with the Tsimane’, an Amazonian group minimally exposed to Western music, found no consistent preference for consonant over dissonant intervals. Their conclusion was that what many listeners call “pleasant” is primarily shaped by cultural experience. 

This interpretation has been vigorously challenged. Bowling et al. (2017) cite empirical evidence from human infants (Trainor et al., 2002) and even non-human animals (Chiandetti & Vallortigara, 2011) that points toward at least some innate, hardwired auditory sensitivity. If so, consonance may reflect evolutionary selective pressures, possibly related to the spectral composition of human vocalizations and the neurophysiological mechanisms underlying pitch perception and auditory scene analysis. 

In the end, consonance appears to be neither purely biological nor purely cultural. Our ears detect roughness and harmonicity, but our minds interpret these sensations through cultural frameworks. What sounds stable in one tradition may sound unfamiliar in another. The consonance controversy thus highlights music cognition as an intricate interplay between biology and culture. 

N.B. These entries are part of a new series of explorations on the notion of Spectral Percepts (in 333 words each).

References

Bowling, D. L., Hoeschele, M., Gill, K. Z., & Fitch, W. T. (2017). The nature and nurture of musical consonance. Music Perception, 35(1), 118–121. 

Chiandetti, C., & Vallortigara, G. (2011). Chicks like consonant music. Psychological Science, 22(10), 1270–1273. https://doi.org/10.1177/0956797611418244 

Harrison, P. M. C. (2021). Three questions concerning consonance perception. Music Perception, 38(3), 337–339. https://doi.org/10.1525/MP.2021.38.3.337 

Lahdelma, I., & Eerola, T. (2020). Cultural familiarity and musical expertise impact the pleasantness of consonance/dissonance but not its perceived tension. Scientific Reports, 10(1), 8693. https://doi.org/10.1038/s41598-020-65615-8 

McDermott, J. H., Schultz, A. F., Undurraga, E. A., & Godoy, R. A. (2016). Indifference to dissonance in native Amazonians reveals cultural variation in music perception. Nature, 535, 547–550. https://doi.org/10.1038/nature18635 

Trainor, L. J., & Unrau, A. (2012). Development of pitch and music perception. In Human Auditory Development (pp. 223–254). Springer. https://doi.org/10.1007/978-1-4614-1421-6_8

Thursday, August 28, 2025

Why does a well-tuned modern piano not sound out-of-tune?

Karlheinz Stockhausen is listening.

"Neue Musik ist anstrengend", wrote Die Zeit some time ago: "Der seit Pythagoras’ Zeiten unternommene Versuch, angenehme musikalische Klänge auf ganzzahlige Frequenzverhältnisse der Töne zurückzuführen, ist schon mathematisch zum Scheitern verurteilt. Außereuropäische Kulturen beweisen schließlich, dass unsere westliche Tonskala genauso wenig naturgegeben ist wie eine auf Dur und Moll beruhende Harmonik: Die indonesische Gamelan-Musik und Indiens Raga-Skalen klingen für europäische Ohren schräg."

The definition of music as “sound” wrongly suggests that music, like all natural phenomena, adheres to the laws of nature. In this case, the laws would be the acoustical patterns of sound such as the (harmonic) relationships in the structure of the dominant tones, which determine the timbre. This is an idea that has preoccupied primarily the mathematically oriented music scientists, from Pythagoras to Hermann von Helmholtz.

The first, and oldest, of these scientists, Pythagoras, observed, for example, that “beautiful” consonant intervals consist of simple frequency relationships (such as 2:3 or 3:4). Several centuries later, Galileo Galilei wrote that complex frequency relationships only “tormented” the eardrum.

But, for all their wisdom, Pythagoras, Galilei, and like-minded thinkers got it wrong. In music, the “beautiful,” so-called “whole-number” frequency relationships rarely occur—in fact, only when a composer dictates them. The composer often even has to have special instruments built to achieve them, as American composer Harry Partch did in the twentieth century.

Contemporary pianos are tuned in such a way that the sounds produced only approximate all those beautiful “natural” relationships. The tones of the instrument do not stand in simple whole-number ratios, as in 2:3 or 3:4. Instead, they are tuned so that every octave is divided into twelve equal parts (a compromise to facilitate changes of key). Neighboring tones are thus related not by whole-number ratios but by the constant factor of the twelfth root of 2 (approximately 1:1.05946), and every interval is a power of that factor.
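A quick numerical check (a minimal sketch; the choice of A4 = 440 Hz is merely a convention) shows how close, yet inexact, the equal-tempered compromise is:

```python
SEMITONE = 2 ** (1 / 12)  # ~1.059463: the equal-tempered semitone ratio
A4 = 440.0

fifth_equal = A4 * SEMITONE ** 7  # 7 semitones up: ~659.26 Hz
fifth_just = A4 * 3 / 2           # the "natural" 2:3 fifth: 660.00 Hz

# The tempered fifth is about 0.11% (2 cents) flat of the 2:3 ratio:
# measurable by a tuner, unnoticed by most concertgoers.
print(fifth_equal, fifth_just)
```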

According to Galilei, each and every one of these frequency relationships is “a torment” to the ear. But modern listeners experience them very differently. They don’t particularly care how an instrument is tuned; otherwise many a concertgoer would walk out of a piano recital because the piano sounded out of tune. It seems that our ears adapt quickly to “dissonant” frequencies. One might even conclude that whether a piano is “in tune” or “out of tune” is entirely irrelevant to our appreciation of music. 

[Fragment from Honing, 2021; published here earlier in 2012]

Honing, H. (2012). Een vertelling. In S. van der Maas, C. Hulshof, & P. Oldenhave (Eds.), Liber Plurum Vocum voor Rokus de Groot (pp. 150-154). Amsterdam: Universiteit van Amsterdam. ISBN 978-90-818488-0-0.

Honing, H. (2021). Music Cognition: The Basics. Routledge. doi: 10.4324/9781003158301

Kursell, J. (2011). Kräftespiel. Zur Dissymmetrie von Schall und Wahrnehmung. Zeitschrift für Medienwissenschaft, 2(1), 24-40. doi: 10.4472_zfmw.2010.0003

Whalley, I. (2006). William A. Sethares: Tuning, Timbre, Spectrum, Scale (Second Edition). Computer Music Journal, 30(2). doi: 10.1162/comj.2006.30.2.92

Wednesday, August 27, 2025

What makes two melodies feel like the same song? [in 333 words]

(cf. Krumhansl, 1989).

One of the most intriguing questions in music cognition research is also one of the simplest: when are two melodies experienced as the same?

At first glance, the answer might seem obvious — they share the same notes, in the same order, with the same rhythm. But a closer look, across cultures and even across species, reveals a more complex picture. What our brains latch onto when recognizing a tune involves a web of spectral percepts — the fundamental features of sound that guide humans and other animals in interpreting auditory patterns. This may sound like a niche research topic, but it lies at the heart of debates about authorship, originality, and musical ownership.

Consider hearing a melody played in a different key or on an unfamiliar instrument. Most people can still recognize it. How is this possible? Explanations often point to intervallic structure — the sequence of pitch intervals between notes — the contour, which is the overall shape of a melody as it rises and falls, or timbre, often described as the “color” of sound, including brightness, texture, and loudness.
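As a toy illustration of these candidate cues (hypothetical note numbers; contour reduced to up/down/same), interval structure and contour can be read straight off a note sequence:

```python
melody = [60, 62, 64, 62, 67]  # hypothetical MIDI note numbers

# Intervallic structure: signed steps in semitones between successive notes.
intervals = [b - a for a, b in zip(melody, melody[1:])]  # [2, 2, -2, 5]

# Contour: only the direction of each step, the melody's rise and fall.
contour = ["+" if i > 0 else "-" if i < 0 else "0" for i in intervals]
# contour == ["+", "+", "-", "+"]
```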

For decades, music research treated timbre as secondary — something layered over supposedly “meaningful” musical features like pitch and rhythm (cf. McAdams & Cunible, 1992). Increasing evidence now suggests timbre is not merely decorative but a core perceptual building block. Timbre may also support “relative listening,” the ability to track patterns of change across different features. Exploring it carefully could reveal flexible and universal aspects of music cognition previously underestimated.

Recognizing that humans and non-human animals may rely on different spectral cues is equally crucial for understanding music’s evolutionary roots. A melody meaningful to humans may not register as such for a zebra finch — and vice versa.

By broadening music cognition research to include timbre, spectral contour, and species-specific strategies, scientists hope to uncover the shared perceptual foundations of musicality. Such work moves us closer to answering a deceptively simple but deeply complex question: what truly makes two melodies feel like the same song?

N.B. These entries are part of a new series of explorations on the notion of Spectral Percepts (in 333 words each).

References

McAdams, S., & Cunible, J.-C. (1992). Perception of timbral analogies. Philosophical Transactions of the Royal Society B: Biological Sciences, 336, 383-389. 

Krumhansl, C. L. (1989). Why is musical timbre so hard to understand? In S. Nielzén & O. Olsson (Eds.), Structure and Perception of Electroacoustic Sound and Music (pp. 43-53). Elsevier.

Saturday, July 12, 2025

Want to test your musical memory?

Test your musical memory! A beta version of #TuneTwins is now online at https://tunetwins.app.

Note: Some things may still not work perfectly here and there. Please let us know via the feedback button – it helps us a lot!

Big thanks to Jiaxin, Noah, Bas, Ashley, Berit, and the Music Cognition Group at large!