Showing posts with label music and language.

Tuesday, December 23, 2025

Are humans unique?

[This text is based on a column 
delivered at SPUI25 on 4 November 2025] 

In The Unique Animal, Rens Bod revisits an age-old philosophical question: what makes us human? His answer is that our uniqueness lies in unbounded recursion—which, according to Bod, is the defining feature that fundamentally distinguishes humans from all other animals. 

Recursion is undoubtedly an elegant notion, with a long and rich intellectual history, which gained renewed momentum in the second half of the twentieth century through, among others, Douglas Hofstadter’s influential book Gödel, Escher, Bach, Mandelbrot’s theory of fractals, and Chomsky’s claim that recursion is the only truly distinctive property of the human language faculty. 

Yet I wish to argue that this very search for uniqueness—for a single capacity that defines us—is a misleading enterprise. However intriguing recursion may be, it does not provide the solid foundation that some believe it does.  

1. The problem of uniqueness 

All animal species are unique. In that sense, we humans are unique as well—but not more so than other animals. Uniqueness is not rare; it is ubiquitous. The attempt to single out one exclusive feature in humans is therefore a peculiar, perhaps even pretentious, endeavor. The history of thought is full of such attempts—from Aristotle’s rational animal to Chomsky’s syntactic animal. But these efforts often reveal more about our desire to draw boundaries than about reality itself. We like to draw a sharp line between “human” and “animal,” while nature rarely complies.  

2. Recursion and its limits 

Recursion is without doubt a fascinating phenomenon, both mathematically and cognitively: the capacity to have thoughts about thoughts and to embed sentences within sentences. In theory, this allows infinite complexity to emerge from finite means—an influential idea.

But empirical reality is far less unbounded. Humans can process only a few levels of embedding—three, at most four—before losing track. In language, we lose the thread after the third subordinate clause; the same applies to reasoning and play. We can still follow that someone is pretending to pretend, but add yet another layer and we are lost. Unbounded recursion therefore does not describe how the human brain actually functions. Rather, it is a theoretical idealization—a concept that helps describe grammars and other hierarchical patterns in behavior, without necessarily contributing to deeper understanding. (The trees of linguistics have more than once prevented us from seeing the forest of our cognitive capacities.)
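How quickly embedding outruns comprehension is easy to demonstrate. The toy Python sketch below (illustrative only; the vocabulary is made up) generates center-embedded sentences of any depth:

```python
def center_embed(nouns, verbs, main_vp):
    """Build a center-embedded sentence: each relative clause is
    nested inside the previous one, so depth = len(verbs).
    Requires len(nouns) == len(verbs) + 1."""
    assert len(nouns) == len(verbs) + 1
    subject = " that ".join(f"the {n}" for n in nouns)
    # Embedded verbs surface in reverse order (innermost clause first).
    return " ".join([subject, *reversed(verbs), main_vp])

# Depth 0 and 1 are effortless; depth 2 is already hard to parse:
print(center_embed(["rat"], [], "ate the malt"))
print(center_embed(["rat", "cat"], ["chased"], "ate the malt"))
print(center_embed(["rat", "cat", "dog"], ["chased", "bit"], "ate the malt"))
```

Nothing in the function bounds the depth: it will happily produce five levels of embedding, yet no reader can parse the result. The bound lies in the parser, not in the grammar.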

3. The problem of testability  

This leads to a more fundamental objection: unbounded recursion is not empirically testable. No experiment can demonstrate its existence, let alone falsify it, because every human performance is by definition finite. Nor can it be refuted, since any limitation can always be explained away as a matter of attention or memory. Thus the concept slips from the hands of science and drifts into the realm of metaphysics—into belief in something that may be true, but cannot be proven. For that reason, we must reject the thesis on rational grounds: not out of reluctance, but because a scientific explanation must be testable—or rather, falsifiable. 

4. Beyond the linguistic lens 

There is, moreover, another important factor at play: a widespread and persistent language bias in our thinking—a tendency I have pointed out before and written about elsewhere. Many researchers prefer to view cognitive phenomena through a linguistic lens. Because formal language systems are characterized by recursive structures, it is then often assumed that thought—and thus the human mind—must be recursive in nature. But cognition is more than language. 

Music offers an interesting counterexample. Music also exhibits hierarchical structures—structures characterized by multiple levels of organization, repetition, variation, and symmetry. These properties are not uniquely human: songbirds and various marine mammals structure their vocalizations in ways that show striking similarities to human music and can be regarded as precursors of our musical capacity. Hierarchy, however, is not synonymous with recursion. Although the two concepts are closely related, there is an essential difference: recursion presupposes a hierarchical structure, but a hierarchy need not be recursive.  
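The difference can be sketched in code (a toy model with invented data): both structures below are hierarchical, but only the second embeds a category within itself, which is what makes depth unbounded in principle.

```python
def depth(x):
    """Nesting depth of a structure built from lists/tuples/dicts."""
    if isinstance(x, dict):
        return 1 + max((depth(v) for v in x.values()), default=0)
    if isinstance(x, (list, tuple)):
        return 1 + max((depth(v) for v in x), default=0)
    return 0

# Hierarchical but not recursive: song -> sections -> phrases -> notes.
# Each level is of a different kind; a section never contains a section,
# so the depth stays fixed no matter how long the song gets.
song = {"sections": [{"label": "A", "phrases": [["C4", "E4", "G4"]]},
                     {"label": "B", "phrases": [["D4", "F4", "A4"]]}]}

# Recursive: a clause embedded in a clause of the same kind; every
# wrap adds a level, so depth is unbounded in principle.
clause = ("the dog barked",)
for _ in range(3):
    clause = ("she thinks that", clause)
```

Birdsong built from repeated and varied motifs fits the first pattern; only the second pattern requires a recursive rule.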

5. A different idea of uniqueness 

Perhaps we should therefore abandon the search for a single, exclusive feature and understand uniqueness differently—not as something belonging solely to humans, but as an emergent phenomenon arising from the convergence of multiple capacities. What makes humans special, then, is not one isolated property such as recursion, but a specific combination of components that together form a unique whole. From this perspective, the question shifts from “What do humans have that other animals do not?” to “Which capacities make a species unique?” Our uniqueness is not a single essence, but an evolutionary pattern—a fabric of gradually developed capacities that together form the basis for culture, language, and music.  

Epilogue — in relation to The Arrogant Ape  

In The Arrogant Ape, Christine Webb dismantles the deeply rooted human tendency to overestimate our own exceptionalism. Humans, she argues, are not the pinnacle of evolution but one species among many—remarkable, yes, but not categorically superior. What Webb calls “arrogance” is precisely the urge critiqued in the column above: the desire to locate one decisive trait that elevates us above all other animals. In short, abandoning the myth of the uniquely gifted human does not diminish us. On the contrary: it situates us more accurately within the living world, as one expressive, musical, meaning-making species among many—remarkable not despite that continuity, but because of it. 

Bod, R. (2025). Het unieke dier: Op zoek naar het specifiek menselijke. Amsterdam: Prometheus.
Webb, C. (2025). De arrogante aap: Waarom we niet zo uniek zijn als we denken. Amsterdam: Atlas Contact.

 See also https://hdl.handle.net/11245.1/e38e36c0-4c95-4f7c-a3fa-5c890f5b5b7f.

Thursday, April 24, 2025

What is the use of the comparative approach in studying the origins of language and music?

Diagrammatic representation of the comparative
approach (as discussed in ten Cate & Honing, 2022/2025)
Comparative studies can be done in several ways. One approach is to examine the sounds made by animals and look for shared features or parallels with language or music. For example, one can examine how the structure of a sequence of sounds compares to syntactic structures in language or rhythmic structures in music, or whether harmonic sounds are recognized by their pitch (as in music) or by their spectral structure (as in speech). The presence of such features can indicate that the sensory or cognitive mechanisms underlying their perception and production are similar to those needed for language and music in humans.

However, one needs to be cautious in drawing such conclusions. That a sound produced by an animal has certain features in common with language or music may be incidental, and a result of human interpretation, rather than an indication of shared mechanisms. Animal sounds showing, for example, a specific rhythmic pattern (e.g., in the call of the indri, a lemur species; De Gregorio et al., 2021), or containing tones based on a harmonic series (e.g., in the hermit thrush; Doolittle et al., 2014), need not indicate an ability of the animal to perceive or produce rhythms or harmonic sounds in general, as is common in humans. To show this, it is necessary to demonstrate the perception or production of such patterns outside and beyond what is realized in the species-specific sound patterns.

This requires a second approach: using controlled experiments to test whether animals can (learn to) distinguish and generalize artificially constructed sounds that differ in specific linguistic or musical features. The two approaches, observational-analytical and experimental, are complementary: the first may hint at the presence of a certain ability, while the second can test its existence and the limits of that capacity (adapted from ten Cate & Honing, 2025).

De Gregorio, C., Valente, D., Raimondi, T., Torti, V., Miaretsoa, L., Friard, O., Giacoma, C., Ravignani, A. & Gamba, M. (2021). Categorical rhythms in a singing primate. Current Biology, 31(20), R1379–R1380. https://doi.org/10.1016/j.cub.2021.09.032 

Doolittle, E. L., Gingras, B., Endres, D. M., & Fitch, W. T. (2014). Overtone-based pitch selection in hermit thrush song: Unexpected convergence with scale construction in human music. Proceedings of the National Academy of Sciences, 111(46), 16616–16621. https://doi.org/10.1073/pnas.1406023111

Ten Cate, C., & Honing, H. (2025). Precursors of music and language in animals. In D. Sammler (Ed.), Oxford Handbook of Language and Music. Oxford: Oxford University Press. https://doi.org/10.1093/oxfordhb/9780192894700.013.0026. Preprint: psyarxiv.com/4zxtr.

Tuesday, April 26, 2022

Do language and music share one precursor?

One way of categorizing the sensitivities of animals to the building blocks of language and music is to group these sensitivities along the frequency/spectral and temporal dimensions of sound. Although speech and music share many acoustic features, music appears to exploit a different set of acoustic features than speech. In humans the frequency dimension is central to music/melody perception, while for understanding speech the temporal dimension appears to be most fundamental (Albouy et al., 2020; Shannon et al., 1995). With respect to the frequency dimension of speech, humans attend primarily to the spectral structure (which enables the distinction between different vowels and consonants), while for music the attention appears to be less on spectral quality (e.g., the sound of a guitar versus that of a flute) and more on the melodic and rhythmic patterns. As such, it might well be that humans are an exception in that they can interpret the same sound signal in (at least) two distinct ways: as speech or as music (cf. the speech-to-song illusion). In other animals such a distinction has not (as yet) been observed. In humans, melody and speech are processed along specific and distinct neural pathways (Albouy et al., 2020; Norman-Haignere et al., 2022), and it could be that brain networks that support musicality are partly recycled for language (Peretz et al., 2018). This could imply that language and music share one precursor. In fact, it is one possible route to test the Darwin-inspired conjecture that musicality precedes music and language (Honing, 2021). In a recent preprint (ten Cate & Honing, 2022) we discuss the potential components of such a precursor.
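The division of labor between the temporal and spectral dimensions can be made concrete with a minimal signal-processing sketch (assuming NumPy). It uses a crude rectify-and-smooth envelope follower, not the band-wise noise vocoder of Shannon et al. (1995), but the principle is the same: the slow amplitude envelope survives while the fast spectral detail is discarded.

```python
import numpy as np

def amplitude_envelope(signal, sr, cutoff_hz=16.0):
    """Slow amplitude envelope of a signal: rectify, then low-pass
    with a simple moving average. Noise-vocoding experiments show
    that speech remains intelligible when little more than such
    band-wise envelopes is preserved."""
    win = max(1, int(sr / cutoff_hz))
    kernel = np.ones(win) / win
    return np.convolve(np.abs(signal), kernel, mode="same")

# Toy signal: a 440 Hz carrier (fast, spectral detail) modulated at
# 4 Hz (slow, temporal structure). The envelope follower recovers the
# slow modulation and throws away the carrier.
sr = 8000
t = np.arange(sr) / sr
carrier = np.sin(2 * np.pi * 440 * t)
modulation = 0.5 * (1 + np.sin(2 * np.pi * 4 * t))
env = amplitude_envelope(modulation * carrier, sr)
```

Speech remains largely intelligible when only such envelopes are kept; a melody, which lives in the discarded fine spectral structure, does not survive this treatment.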

Albouy, P., Benjamin, L., Morillon, B., & Zatorre, R. J. (2020). Distinct sensitivity to spectrotemporal modulation supports brain asymmetry for speech and melody. Science, 367(6481), 1043–1047. https://doi.org/10.1126/science.aaz3468.

Honing, H. (2021). Unravelling the origins of musicality: Beyond music as an epiphenomenon of language. Behavioral and Brain Sciences, 44(E78), 66–69. https://doi.org/10.1017/S0140525X20001211.

Norman-Haignere, S. V., Feather, J., Boebinger, D., Brunner, P., Ritaccio, A., McDermott, J. H., … Kanwisher, N. (2022). A neural population selective for song in human auditory cortex. Current Biology, 1–15. https://doi.org/10.1016/j.cub.2022.01.069.

Peretz, I., Vuvan, D. T., Lagrois, M.-É., & Armony, J. L. (2018). Neural overlap in processing music and speech. In H. Honing (Ed.), The Origins of Musicality (pp. 205–220). Cambridge, MA: The MIT Press. http://dx.doi.org/10.1098/rstb.2014.0090.

Shannon, R. V., Zeng, F. G., Kamath, V., Wygonski, J., & Ekelid, M. (1995). Speech recognition with primarily temporal cues. Science, 270(5234), 303–304. https://doi.org/10.1126/science.270.5234.303 

Ten Cate, C., & Honing, H. (2022). Precursors of music and language in animals. PsyArXiv Preprint. Retrieved from psyarxiv.com/4zxtr.
 

Thursday, January 28, 2021

Interested in the Evolution of Language and Music?

Last semester we finalized a new edition of the course Evolution of Language and Music. As every year, we closed it off with a student mini-conference. You can find the output of this year's online edition here: a website full of blog posts and pitch videos made by the participating students.

N.B. The next edition will be held in the Spring of 2022 (See UvA Studiegids).

Saturday, November 19, 2016

What makes us musical animals?

A pair of gibbons sing together (credit: Andrew Walmsley / NPL)
Exploring the biological and social processes that underlie our musical abilities, Nancy Ferranti talks to music researcher Henkjan Honing about the origins of music and musical behaviours. The podcast was broadcast this morning on CKUT, a campus-community radio station based at McGill University.



Wednesday, January 28, 2015

Can we borrow your ears?

The Music Cognition Group is continuously looking for participants in their experiments. See our website if you want to contribute.

Wednesday, January 07, 2015

SMART Cognitive Science: the Amsterdam Conference


The SMART Cognitive Science Conference will be hosted by the University of Amsterdam from March 25-28th, 2015. It will consist of 6 exciting workshops (each 2 full days, with 3 in parallel) on the cognitive science of music, language, communication and art, and a common evening program with debates and plenaries, and will be free to attend.

For more information and free registration see smartcs.humanities.uva.nl. 

N.B. There are also some interesting pre-conference events, such as an ABC lecture by Tecumseh Fitch (Vienna) on The Syntax of Mind: Dendrophilia and Human Cognition.

Saturday, January 18, 2014

Are the vocalizations of orcas a language? [Dutch]

Orcas (Orcinus orca).
[Published in NRC on Saturday 18 January 2014] 

Why shouldn't we call the vocalizations of orcas a language, a reader wonders in a letter to the editor of NRC Handelsblad, in response to Tijs Goldschmidt's essay (science supplement, 4 January; the letter can be read here).

A good question. And the arguments put forward in the letter for calling the whistles, wails, clicks and crackles of orcas (but also the songs of songbirds) a language are raised more often. Yet the letter writer's view is somewhat colored by a linguistic lens, just as linguists, in their enthusiasm, tend to interpret a great diversity of cultural, social and biological phenomena as 'linguistic.' We, by contrast, believe that the vocalizations of orcas are more closely related to music than to language.

The melodic, rhythmic and dynamic aspects of the orca song, aspects that linguists like to group under the term 'prosody', are in fact the building blocks of music. In human development, sensitivity to 'musical prosody' is already active some three months before birth, and only much later – around six months – does it come to play a role in what we usually call language, such as the recognition of word boundaries (cf. Mattys et al., 1999).

Moreover, if a song is 'complex', that does not mean it obeys the rules of a grammar as they apply to a human language. It is also unclear whether the syntax, the underlying structure of the orca song, can be compared with human language at all (cf. van Heijningen et al., 2009), as is also debated for songbirds (Berwick et al., 2011; Bolhuis & Everaert, 2013). As far as is currently known, these vocalizations come closer to music. For the time being it therefore seems more adequate to speak of songs, just as researchers commonly do in the case of songbirds.

Tijs Goldschmidt
Henkjan Honing

Bolhuis, J. J., & Everaert, M. (2013). Birdsong, speech, and language: Exploring the evolution of mind and brain. Cambridge, MA: MIT Press.

Berwick, R. C., Okanoya, K., Beckers, G. J., & Bolhuis, J. J. (2011). Songs to syntax: The linguistics of birdsong. Trends in Cognitive Sciences, 15(3), 113–121. PMID: 21296608

Mattys, S. L., Jusczyk, P. W., Luce, P. A., & Morgan, J. L. (1999). Phonotactic and prosodic effects on word segmentation in infants. Cognitive Psychology, 38(4), 465–494. http://www.sciencedirect.com/science/article/pii/S0010028599907211

van Heijningen, C. A., de Visser, J., Zuidema, W., & ten Cate, C. (2009). Simple rules can explain discrimination of putative recursive syntactic structures by a songbird species. Proceedings of the National Academy of Sciences, 106(48), 20538–20543. PMID: 19918074

Thursday, June 20, 2013

What's new on music, language and the brain?

From 8 to 12 May 2011, about forty researchers were asked to join a week of discussions in Frankfurt am Main, Germany, in the context of the Ernst Strüngmann Forum.

The Forum can best be imagined as an intellectual retreat. A group of international experts are brought together for a week to identify gaps in knowledge; key questions are posed and innovative ways of filling these gaps are sought. To complete the communication process, the Ernst Strüngmann Forum publishes the results in partnership with MIT Press.

The 2011 Forum explored the relationships between language, music, and the brain by pursuing four key themes and the crosstalk among them: 1) song and dance as a bridge between music and language, 2) multiple levels of structure from brain to behavior to culture, 3) the semantics of internal and external worlds and the role of emotion, and 4) the evolution and development of language.

See more information on the resulting book at MIT Press.

Monday, December 10, 2012

Do music and language share the same resources?

The interest in the relationship between music and language is a long-standing one. While Lerdahl & Jackendoff in their seminal book on the generative theory of tonal music built mostly on insights from the metrical phonology of the time, more recent studies draw attention to the parallels with current minimalist syntactic theory rather than phonology. However, there are compelling reasons to consider music and language as two distinct cognitive systems. Recent findings in the neuroscience of music suggest that music is likely a cognitively unique and evolutionarily distinct faculty (e.g., Peretz & Coltheart, 2003). This is referred to as the modularity hypothesis.

This position can be contrasted with the resource-sharing hypothesis that suggests music and language share processing mechanisms, especially those of a syntactic nature, and that they are just distinct in terms of the lexicon used (Patel 2003). For this hypothesis there is now quite some evidence (see, e.g., Slevc et al., 2009). That study showed enhanced syntactic garden path effects when the sentences were paired with syntactically unexpected chords, whereas the musical manipulation had no reliable effect on the processing of semantic violations.

However, last week a new study was published in Psychonomic Bulletin & Review (Perruchet & Poulin-Charronnat, 2012) that not only replicated the results of the former study, but also tested semantic garden paths, with – surprisingly – similar effects. The researchers suggest that what underpins these interactions may in fact be the 'garden path configuration' itself, rather than a shared syntactic module (as suggested by the resource-sharing hypothesis). It might well be that a different amount of attentional resources is recruited to process the linguistic manipulations, thus modulating the resources left available for the processing of music.

Perruchet, P., & Poulin-Charronnat, B. (2012). Challenging prior evidence for a shared syntactic processor for language and music. Psychonomic Bulletin & Review. PMID: 23180417

Peretz, I., & Coltheart, M. (2003). Modularity of music processing. Nature Neuroscience, 6(7), 688–691. https://doi.org/10.1038/nn1083

Patel, A. (2003). Language, music, syntax and the brain. Nature Neuroscience, 6(7), 674–681. https://doi.org/10.1038/nn1082
 

Monday, October 22, 2012

Interested in a PhD or Postdoc position?

As of today, Leiden University, University of Amsterdam, and the Meertens Institute are looking for several PhD and Postdoc candidates for an ambitious research project starting February 1, 2013.

(See links below on those related to music cognition.)

In various domains of cognitive science, a new paradigm holds that humans and non-human animals are born with a small set of hard-wired cognitive abilities that are task-specific, language-independent, and non-species-specific. These core knowledge systems are innate cognitive skills that have the capacity for building mental representations of objects, persons, spatial relationships, numerosity, and social interaction. In addition to core knowledge systems, humans possess species-specific, uniquely human abilities such as language and music.

The ‘core knowledge’ paradigm challenges scholars in the humanities to ask how nurture and culture build on nature. This project examines the way in which innate, not specifically human, core knowledge systems for object representation, number, and geometry constrain cultural expressions in music, language, and the visual arts. In this research program, four domains of the humanities will be investigated from the point of view of core knowledge:

Subproject 1: Music Cognition
PhD & Postdoc, team leader: Prof. dr. H. Honing

Subproject 2: Language and Number
PhD & Postdoc, team leader: Prof. dr. S. Barbiers

Subproject 3: Visual Arts and Geometry
PhD & Postdoc, team leaders: Prof. dr. ir. M. Delbeke & Prof. dr. ir. C. van Eck

Subproject 4: Poetry, Rhythm, and Meter
PhD & Postdoc, team leader: Prof. dr. M. van Oostendorp

Deadline for applications: 23 November 2012.

For more information see research proposal.

See also related entries

Monday, May 21, 2012

What is the role of core knowledge in music and language?

From 29 May 2012 through 1 Jun 2012 at The Lorentz Center in Leiden the workshop Core Knowledge, Language and Culture will be held.

This workshop will address the relation between core knowledge, language, music, and culture, with a view to assessing the current understanding of these questions for a theory of the mind/brain. We hope that the participants – scholars from fields as diverse as psychology, linguistics, neurobiology, neurolinguistics, music cognition, and cognitive anthropology – will contribute to defining a research program that may address both new and as yet unresolved research questions in this area.

To give one example, it is puzzling that a notion such as recursion (and the cognate notions of Merge or the successor function) seems to play a role in apparently unrelated domains such as number/arithmetic, language, and music. Both linguistic and nonlinguistic quantification seem to be built on shared primitives. Such issues are often related to the (dis)similarity or (dis)continuity between the animal and the human domain. The question arises whether core knowledge of number constitutes an intriguing exception to the discontinuity thesis, with potential ramifications for the representation of time and space in the spirit of Kant.

The organizers of the workshop will be pleased to facilitate an open and fruitful debate on these topics. See for more information the Lorentz Center website.

Sunday, May 20, 2012

Is the ability to distinguish 'speech sound contrasts' strictly human or also present in birds?

Carel ten Cate
For various reasons the songs of songbirds are currently considered to be the closest animal analogue to language. This raises the question to what extent particular perceptual and cognitive abilities that are considered to be closely linked to the production, perception and learning of language are present in songbirds. In an upcoming SMART Talk at the University of Amsterdam on Friday 25 May 2012, Prof. dr. Carel ten Cate (Leiden University) will address two such abilities.



One concerns speech perception: is the ability to distinguish specific speech sound contrasts strictly human, or is it also present in birds? The other concerns the ability to detect and learn particular grammar rules, using the paradigm of artificial grammars: what level of complexity can birds cope with? The findings in birds will be related to those obtained in other animal species.

ten Cate, C., Verzijden, M., & Etman, E. (2006). Sexual imprinting can induce sexual preferences for exaggerated parental traits. Current Biology, 16(11), 1128–1132. https://doi.org/10.1016/j.cub.2006.03.068

Wednesday, May 16, 2012

What is the relation between core knowledge systems, music, and language?

On Thursday 24 May 2012, 11.00 – 12.30, there will be a seminar at NIAS Campus on the Horizon research project Knowledge and Culture that was recently submitted to NWO for funding. In this project, we intend to study the relationship between innate cognitive capacities that are not specific to humans, so-called core knowledge systems, and innate cognitive capacities that are uniquely human, such as language and music.

Numbers (by Johan Rooryck)
The core knowledge system for number, for instance, has been argued to contain two nonverbal subsystems that are present in both newborn infants and animals. The Approximate Number System (ANS) compares the numerosities of distinct sets without individuating their members. The Object Tracking System (OTS) yields representations of small numbers of objects, i.e. 1 to 3 (perhaps 4), and does not work on sets larger than 4. Linguistic evidence suggests that the split between OTS and ANS is reflected in the language system, and that children acquire numbers in a sequence.
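The behavioral signature of the ANS, ratio-dependent discrimination (Weber's law), can be sketched in a few lines of Python. This is a toy simulation, not a model from the project, and the Weber fraction of 0.2 is an illustrative value:

```python
import random

def ans_compare(n1, n2, weber=0.2, trials=2000, seed=0):
    """Simulate Approximate Number System (ANS) comparisons: each
    numerosity is encoded with Gaussian noise whose spread grows with
    the number itself (Weber's law), so discrimination accuracy
    depends on the ratio of the two numbers, not their difference."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        estimate1 = rng.gauss(n1, weber * n1)
        estimate2 = rng.gauss(n2, weber * n2)
        if (estimate1 > estimate2) == (n1 > n2):
            correct += 1
    return correct / trials

# Same absolute difference (8), very different ratios:
easy = ans_compare(8, 16)    # ratio 2.0: near-ceiling accuracy
hard = ans_compare(48, 56)   # ratio ~1.17: much closer to chance
```

The OTS, by contrast, would represent 1 to 3 objects exactly, with no such ratio effect.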

Music (by Henkjan Honing)
We have known for some time that babies possess a keen perceptual sensitivity for the melodic, rhythmic and dynamic aspects of speech and music: aspects that linguists are inclined to categorize under the term ‘prosody’, but which are in fact the building blocks of music. Only much later in a child’s development does he make use of this ‘musical prosody’, for instance in delineating and subsequently recognizing word boundaries. Henkjan Honing will make a case for ‘illiterate listening’, the human ability to discern, interpret and appreciate musical nuances already from day one, long before a single word has been uttered, let alone conceived. It is the preverbal and preliterate stage that is dominated by musical listening.

The lecture is followed by an open discussion.

Johan Rooryck is the Distinguished Lorentz Fellow 2011/12, and Henkjan Honing holds a KNAW-Hendrik Muller Chair in Music Cognition and is Professor of Cognitive and Computational Musicology at the University of Amsterdam.

Friday, March 09, 2012

Working in the humanities, interested in cognition?

Tecumseh Fitch presenting at
SMART on April 20th, 2012
SMART Cognitive Science is a new initiative of the Faculty of Humanities of the University of Amsterdam. It comprises lectures, meetings and discussions, offline and online, to highlight the important contributions to cognitive science from traditional humanities disciplines.

SMART is an acronym for Speech and language, Music, Art, Reasoning and Thought. These activities are organized in close collaboration with the Cognitive Science Center Amsterdam, in which the Faculties of Science, Medicine, Social Science and Economics & Econometrics also participate.

Some interesting upcoming SMART Cognitive Science Lectures are:
  • Tuesday 20/3, 15h30 – 16h, P2.27, Anne Baker (Amsterdam), SMART Perspective on Language and Executive Function; followed from 16h-18h in the same room by a CSCA Lecture by Ianthi Tsimpli (Thessaloniki) on Signed and Spoken Language Asymmetries in a Polyglot-Savant 
  • Friday 20/4, 16h-18h, UT3.01, Tecumseh Fitch (Vienna), Cognitive Overlap between Language and Music (see abstract)
  • Friday 25/5, 16h-18h, UT3.01, Carel ten Cate (Leiden), On the linguistic abilities of songbirds 
  • Friday 22/6, 16h-18h, Doelenzaal, Östen Dahl (Stockholm), How languages get complex 
For more information see here.

Tuesday, February 21, 2012

What is the relation between language and cognition?

What is the relation between language and cognition? On the one hand, researchers like Noam Chomsky thought of language as an independent faculty with its own rules. On the other hand, others – mainly psychologists – regarded language as a system embedded in cognition and subject to general models of cognition. How do researchers currently view the relation between language and cognition? Have new techniques for brain research and research on cognitive functions led to a great change in this regard?

Here* you can find a recording of a recent debate that was held at the Cognitive Science Center Amsterdam with a panel consisting of Rens Bod, Annette de Groot, Jeannette Schaeffer, and Hedde Zeijlstra all working at the University of Amsterdam.

Interestingly, music showed up several times in the discussion as well.

* UvA streaming video; Sorry, only visible for UvA-students and employees.

Hauser, M. D., Chomsky, N., & Fitch, W. T. (2002). The faculty of language: What is it, who has it, and how did it evolve? Science, 298(5598), 1569–1579. PMID: 12446899

Friday, February 17, 2012

Do musicians listen better?

Today, Makiko Sadakata (Donders Center, Nijmegen) gave a presentation at our monthly meeting on music cognition and computational musicology [1]. She presented a study asking whether musicians are better at perceiving deviations in pitch, duration or timbre in their own and/or unfamiliar languages.

A striking example was the difference in pronunciation between the Japanese words ‘kanyo’ and ‘kannyo’. To my ears, and to most of the audience, they sounded identical. To Japanese ears, however, they carry two very different meanings.

Using discrimination and identification tasks, Sadakata investigated to what extent musicians are better at picking up these nuances. It turns out that in some specific situations musicians indeed do better than non-musicians.

I personally got very interested in the question of how far ‘listening mode’ – listening to the sound as if it were ‘language’ or ‘music’ – might actually explain these differences. Are the differences a result of musicians attending to the sound (e.g., the intonation or timing pattern) rather than to the semantics, the meaning of the linguistic utterance? Future research will tell…

Sadakata, M., & Sekiyama, K. (2011). Enhanced perception of various linguistic features by musicians: A cross-linguistic study. Acta Psychologica, 138(1), 1–10. https://doi.org/10.1016/j.actpsy.2011.03.007

Tuesday, August 09, 2011

What makes us musical animals? [Part 2]

This week a new essay came out in which I try to make a case for ‘illiterate listening’, the human ability to discern, interpret and appreciate musical nuances. We have known for some time that babies possess a keen perceptual sensitivity for the melodic, rhythmic and dynamic aspects of speech and music: aspects that linguists are inclined to categorize under the term ‘prosody’, but which are in fact the building blocks of music. Only much later in a child’s development does s/he make use of this ‘musical prosody’, for instance in delineating and subsequently recognizing word boundaries. We all share these musical skills, from day one, and long before a single word has been uttered, let alone conceived. It is the preverbal and preliterate stage of our development that is dominated by musical listening.

The Illiterate Listener is available online for free.

Honing, H. (2011). The illiterate listener: On music cognition, musicality and methodology. Amsterdam: Amsterdam University Press.


Tuesday, June 21, 2011

Are we ‘illiterate listeners’? [Part 2]


This week a fragment from The Illiterate Listener, which will be published later this year by Amsterdam University Press:
"French babies cry differently than German babies. That was the conclusion of a study published at the end of 2009 in the scientific journal Current Biology. German babies were found to cry with a descending pitch; French babies, on the other hand, with an ascending pitch, descending slightly only at the end. It was a surprising observation, particularly in light of the currently accepted theory that when one cries, the pitch contour will always descend, as a physiological consequence of the rapidly decreasing pressure during the production of sound. Apparently, babies only a few days old can influence not only the dynamics, but also the pitch contour of their crying. Why would they do this?

The researchers interpreted it as the first steps in the development of language: in spoken French, the average intonation contour is ascending, while in German it is just the opposite. This, combined with the fact that human hearing is already functional during the last trimester of pregnancy, led the researchers to conclude that these babies absorbed the intonation patterns of the spoken language in their environment in the last months of pregnancy and consequently imitated it when they cried.

This observation was also surprising because until now one generally assumed that infants only develop an awareness for their mother tongue between six and eighteen months, and imitate it in their babbling. Could this indeed be unique evidence, as the researchers emphasized, that language sensitivity is already present at a very early stage? Or are other interpretations possible?

Although the facts are clear, this interpretation is a typical example of what one could call a language bias: the linguist’s understandable enthusiasm to interpret many of nature’s phenomena as linguistic. There is, however, much more to be said for the notion that these newborn babies exhibit an aptitude whose origins are found not in language but in music.

We have known for some time that babies possess a keen perceptual sensitivity for the melodic, rhythmic and dynamic aspects of speech and music: aspects that linguists are inclined to categorize under the term ‘prosody’, but which are in fact the building blocks of music. Only much later in a child’s development does he make use of this ‘musical prosody’, for instance in delineating and subsequently recognizing word boundaries. But let me emphasize that these very early indications of musical aptitude are not in essence linguistic."

Honing, H. (2011, in press). The illiterate listener: On music cognition, musicality and methodology. Amsterdam: Amsterdam University Press.

Sunday, December 05, 2010

Are we ‘illiterate listeners’? [Part 1]

"French babies cry differently than German babies. This was the conclusion of a study published a year ago in Current Biology (see earlier entry). Three-day-old German babies cried with a falling contour, while their French contemporaries showed a rising swell of the cry that stopped abruptly.

It was a surprising observation, especially in the light of the general belief that in crying the pitch should always drop as a physiological consequence of the respiratory cycle. Apparently, babies of just a few days old can control both the dynamics and the intonation contour of their crying. Why would they do this?

The researchers interpreted it as the first steps in the development of language. In spoken French the mean intonation contour is rising (dropping at the very end of an utterance), in German the mean intonation typically exhibits a falling contour. This combined with the fact that the human auditory system is already functional in the last trimester of pregnancy made the researchers conclude that these babies picked up the intonation contours of their native language in these last months and consequently imitated them in their crying.

This observation is also surprising since the literature suggests that children only become interested in their native language roughly between six and eighteen months, when they start to imitate it in their babbling. Is it indeed the case, as stressed by these researchers (and the recent literature citing it; e.g., Elk & Hunnius, 2010), that this is unique evidence for a much earlier sensitivity to language than commonly thought? Or is another interpretation possible?

Although the empirical results are clear, this interpretation is a typical example of what one could call a ‘language bias’: an understandable enthusiasm of linguists to interpret a range of phenomena in the real world as ‘linguistic’. One can, however, easily make the argument that this early sensitivity to intonation contour is not a linguistic skill but a musical one.

Most linguists see the use of rhythm, dynamics, and intonation as an aid for making infants familiar with the words and sentence structures of the language of the culture in which they will be raised. Words and word divisions are emphasized through exaggerated intonation contours and varied rhythmic intervals, thereby facilitating the process of learning a specific language. These aspects are referred to as prosody, but they are actually the basic building blocks of music. Only much later in the development of a child will this ‘musical prosody’ be used, for instance in marking, and consequently recognizing, word boundaries. But these early signs of musical skill are, and I like to stress this, not of a linguistic nature. It is the preverbal and preliterate stage of our development that is dominated by musical listening."

Fragment from inaugural address De ongeletterde luisteraar (Honing, 2010).

Honing, H. (2010). De ongeletterde luisteraar: Over muziekcognitie, muzikaliteit en methodologie [The illiterate listener: On music cognition, musicality and methodology]. Amsterdam: Royal Netherlands Academy of Arts and Sciences (KNAW).

Mampe, B., Friederici, A., Christophe, A., & Wermke, K. (2009). Newborns' cry melody is shaped by their native language. Current Biology. DOI: 10.1016/j.cub.2009.09.064

Elk, M. van, & Hunnius, S. (2010). Het babybrein: Over de ontwikkeling van de hersenen bij baby's [The baby brain: On the development of the brain in babies]. Amsterdam: Bert Bakker.