Showing posts with label musilanguage. Show all posts

Tuesday, April 26, 2022

Do language and music share one precursor?

One way of categorizing the sensitivities of animals to the building blocks of language and music is to group these sensitivities along the frequency/spectral and temporal dimensions of sound. Although speech and music share many acoustic features, music appears to exploit a different set of acoustic features than speech does. In humans the frequency dimension is central to music/melody perception, while for understanding speech the temporal dimension appears to be most fundamental (Albouy et al., 2020; Shannon et al., 1995). With respect to the frequency dimension of speech, humans attend primarily to the spectral structure (which enables the distinction between different vowels and consonants), while for music the attention appears to be less on spectral quality (e.g., the sound of a guitar versus that of a flute) and more on melodic and rhythmic patterns. As such, it might well be that humans are exceptional in that they can interpret the same sound signal in (at least) two distinct ways: as speech or as music (cf. the speech-to-song illusion). In other animals such a distinction has not (yet) been observed. In humans, melody and speech are processed along specific and distinct neural pathways (Albouy et al., 2020; Norman-Haignere et al., 2022), and it could be that brain networks supporting musicality are partly recycled for language (Peretz et al., 2018). This could imply that language and music share one precursor. In fact, it is one possible route to test the Darwin-inspired conjecture that musicality precedes music and language (Honing, 2021). In a recent preprint (ten Cate & Honing, 2022) we discuss the potential components of such a precursor.

Albouy, P., Benjamin, L., Morillon, B., & Zatorre, R. J. (2020). Distinct sensitivity to spectrotemporal modulation supports brain asymmetry for speech and melody. Science, 367(6481), 1043–1047. https://doi.org/10.1126/science.aaz3468.

Honing, H. (2021). Unravelling the origins of musicality: Beyond music as an epiphenomenon of language. Behavioral and Brain Sciences, 44(E78), 66–69. https://doi.org/10.1017/S0140525X20001211.

Norman-Haignere, S. V., Feather, J., Boebinger, D., Brunner, P., Ritaccio, A., McDermott, J. H., … Kanwisher, N. (2022). A neural population selective for song in human auditory cortex. Current Biology, 1–15. https://doi.org/10.1016/j.cub.2022.01.069.

Peretz, I., Vuvan, D. T., Lagrois, M.-É., & Armony, J. L. (2018). Neural overlap in processing music and speech. In H. Honing (Ed.), The Origins of Musicality (pp. 205–220). Cambridge, Mass.: The MIT Press. http://dx.doi.org/10.1098/rstb.2014.0090.

Shannon, R. V., Zeng, F. G., Kamath, V., Wygonski, J., & Ekelid, M. (1995). Speech recognition with primarily temporal cues. Science, 270(5234), 303–304. https://doi.org/10.1126/science.270.5234.303.

Ten Cate, C., & Honing, H. (2022). Precursors of music and language in animals. PsyArXiv Preprint. Retrieved from psyarxiv.com/4zxtr.
 

Saturday, November 19, 2016

What makes us musical animals?

A pair of gibbons sing together (credit: Andrew Walmsley / NPL)
Exploring the biological and social processes that underlie our musical abilities, Nancy Ferranti talks to music researcher Henkjan Honing about the origins of music and musical behaviours. The podcast was broadcast this morning at CKUT, a campus-community radio station based at McGill University.



Saturday, January 18, 2014

Are the vocalizations of orcas a language? [Translated from Dutch]

Orcas (Orcinus orca).
[Published in NRC, Saturday 18 January 2014]

Why shouldn't we call the vocalizations of orcas a language, a reader asks in a letter to NRC Handelsblad, in response to Tijs Goldschmidt's essay (science supplement, 4 January; the letter can be read here).

Good question. And the arguments put forward in the letter for calling the whistles, calls, clicks and crackles of orcas (but also the songs of songbirds) a language are raised regularly. Still, the letter writer's view is somewhat colored by a linguistic lens, just as linguists, in their enthusiasm, are often inclined to interpret a wide variety of cultural, social and biological phenomena as 'linguistic.' We, by contrast, argue that the vocalizations of orcas are more closely related to music than to language.

The melodic, rhythmic and dynamic aspects of orca song, aspects that linguists like to group under the term 'prosody,' are in fact the building blocks of music. In human development, sensitivity to this 'musical prosody' is already active about three months before birth, and only much later, around six months of age, does it come to play a role in what we usually call language, such as the recognition of word boundaries (cf. Mattys et al., 1999).

[NRC clipping, 18 January 2014]
Moreover, if a song is 'complex,' that does not mean it obeys the rules of a grammar as they hold for a human language. It is also unclear whether the syntax, the underlying structure of orca song, can be compared to human language at all (cf. van Heijningen et al., 2009), just as this is under debate for songbirds (Berwick et al., 2011; Bolhuis & Everaert, 2013). As far as we now know, these vocalizations come closer to music. For the time being it therefore seems more adequate to speak of songs, just as researchers customarily do for songbirds.

Tijs Goldschmidt
Henkjan Honing

Bolhuis, J. J., & Everaert, M. (2013). Birdsong, speech, and language: Exploring the evolution of mind and brain. Cambridge, MA: MIT Press.

Berwick, R. C., Okanoya, K., Beckers, G. J., & Bolhuis, J. J. (2011). Songs to syntax: The linguistics of birdsong. Trends in Cognitive Sciences, 15(3), 113–121. PMID: 21296608.
 
Mattys, S. L., Jusczyk, P. W., Luce, P. A., & Morgan, J. L. (1999). Phonotactic and prosodic effects on word segmentation in infants. Cognitive Psychology, 38(4), 465–494. Retrieved from http://www.sciencedirect.com/science/article/pii/S0010028599907211.

van Heijningen, C. A., de Visser, J., Zuidema, W., & ten Cate, C. (2009). Simple rules can explain discrimination of putative recursive syntactic structures by a songbird species. Proceedings of the National Academy of Sciences, 106(48), 20538–20543. PMID: 19918074.

Wednesday, May 16, 2012

What is the relation between core knowledge systems, music, and language?

On Thursday 24 May 2012, 11.00 – 12.30 there will be a seminar at NIAS Campus on the Horizon research project Knowledge and Culture that was recently submitted to NWO for funding. In this project, we intend to study the relationship between innate cognitive capacities that are not specific to humans, so-called core knowledge systems, and innate cognitive capacities that are uniquely human, such as language and music.

Numbers (by Johan Rooryck)
The core knowledge system for number, for instance, has been argued to contain two nonverbal subsystems that are present in both newborn infants and animals. The Approximate Number System (ANS) compares the numerosities of distinct sets without individuating their members. The Object Tracking System (OTS) yields representations of small numbers of objects, i.e. 1 to 3 (perhaps 4), and does not work on sets larger than 4. Linguistic evidence suggests that the split between OTS and ANS is reflected in the language system, and that children acquire numbers in a sequence.
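The split between the two subsystems can be illustrated with a toy model. The sketch below is not part of the project itself; it assumes a standard Weber-fraction account of the ANS (representational noise proportional to set size, with an illustrative Weber fraction w = 0.15 and a d′ threshold of 1) and a hard capacity limit for the OTS.

```python
import math

def ans_discriminable(n1, n2, w=0.15):
    """Approximate Number System: two numerosities are represented as
    noisy magnitudes whose spread grows with set size (Weber's law,
    SD = w * n). They can be told apart when a simple d' measure of
    overlap exceeds a threshold (both w and the threshold are
    illustrative choices, not empirical estimates)."""
    d_prime = abs(n1 - n2) / math.sqrt((w * n1) ** 2 + (w * n2) ** 2)
    return d_prime > 1.0

def ots_representable(n):
    """Object Tracking System: exact representation of individual
    objects, but only for very small sets (1 to 3, perhaps 4)."""
    return n <= 3

# The ANS compares large sets only when their ratio is distinct enough:
print(ans_discriminable(8, 16))   # 1:2 ratio -> True
print(ans_discriminable(15, 16))  # near-equal sets -> False
# The OTS tracks small sets exactly but fails beyond its limit:
print(ots_representable(3), ots_representable(7))  # True False
```

In this toy model, a 1:2 ratio of large sets is discriminable while 15 versus 16 is not, capturing the ratio-dependence of the ANS, whereas the OTS gives exact answers only up to its small capacity limit.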

Music (by Henkjan Honing)
We have known for some time that babies possess a keen perceptual sensitivity to the melodic, rhythmic and dynamic aspects of speech and music: aspects that linguists are inclined to categorize under the term 'prosody', but which are in fact the building blocks of music. Only much later in a child's development does it make use of this 'musical prosody', for instance in delineating and subsequently recognizing word boundaries. Henkjan Honing will make a case for 'illiterate listening', the human ability to discern, interpret and appreciate musical nuances from day one, long before a single word has been uttered, let alone conceived. It is the preverbal and preliterate stage that is dominated by musical listening.

The lecture is followed by an open discussion.

Johan Rooryck is the Distinguished Lorentz Fellow 2011/12, and Henkjan Honing holds a KNAW-Hendrik Muller Chair in Music Cognition and is Professor of Cognitive and Computational Musicology at the University of Amsterdam.

Sunday, April 19, 2009

De do do do, de da da da?*

For a long time I thought of it as quite a peculiar phenomenon: grown-ups who, the moment they spot a baby, start talking in a curious dialect. A dialect that has unclear semantics, little or no grammar, and is full of exaggerated rhythmic and melodic diversions.

Nevertheless, babies seem to love it. They react, cooing with pleasure, to melodies that are not unlike pop songs such as ‘De do do do, de da da da’ by The Police or ‘La la la’ by Kylie Minogue.

This babbling or, more formally, infant-directed speech (IDS) differs from normal adult speech in its high pitch, exaggerated melodic contours, slower tempo, and greater rhythmic variation. A kind of ‘musilanguage’ indeed. It is a widespread phenomenon that is, as far as we know, present in all cultures, with more similarities than differences across them, even when some characteristics of IDS conflict with the rules of the adult language (e.g., in Chinese). So it seems quite unlikely that IDS is ‘just’ a preparation for language, until recently the most common interpretation.

Laurel Trainor and her team at McMaster University (Ontario, Canada) suggest that IDS is essentially a tool for communicating emotion. Decoding these speech patterns into their emotional meaning is something infants can do easily, long before they learn about language. In that sense, it seems likely that language makes use of faculties special to music, rather than music being a side effect of language (as once suggested by a well-known cognitive psychologist).

Honing, H. (2008). De vergeten luisteraar [The forgotten listener]. Boekman, (77), 42–47.

* Repeated from June 6th, 2008.