Showing posts with label model selection. Show all posts

Monday, October 10, 2011

A history of music cognition?

One of the pioneers in the field that would come to be called music cognition was H. Christopher Longuet-Higgins (1923-2004). Not only was Longuet-Higgins one of the founders of the cognitive sciences (he coined the term in 1973), but as early as 1971 he formulated, together with Mark Steedman, the first computer model of musical perception. That early work was followed in 1976 by a full-fledged alternative in the journal Nature, seven years earlier than the more widely known, but, according to Longuet-Higgins, less precisely formulated, Generative Theory of Tonal Music of Lerdahl and Jackendoff. In a review in Nature in 1983 he wrote somewhat sourly:
‘Lerdahl and Jackendoff are, it seems, in favor of constructing a formally precise theory of music, in principle but not in practice.’
Although Lerdahl and Jackendoff’s book was far more precise than any musicological discussion found in the leading journals, the importance of formalization cannot be overestimated. Notwithstanding all our musicological knowledge, many fundamental concepts are in fact treated as axioms; musicologists are, after all, anxious to tackle far more interesting matters than basic notions like tempo, meter or syncopation, to name a few. But these axioms are not, in actual fact, understood, in the sense that we are not able (as yet) to formalize them sufficiently to explain them to a computer. This is still the challenge of ‘computer modelling’ (and of recent initiatives such as computational humanities) – a challenge that Longuet-Higgins was one of the first to take up [Excerpt from Honing, 2011].

Longuet-Higgins, H. C. (1983). All in theory — the analysis of music. Nature, 304(5921), 93. DOI: 10.1038/304093a0

Longuet-Higgins, H. C. (1976). Perception of melodies. Nature, 263(5579), 646-653. DOI: 10.1038/263646a0
 
Honing, H. (2011). The illiterate listener. On music cognition, musicality and methodology. Amsterdam: Amsterdam University Press.

Thursday, March 05, 2009

What makes a theory compelling?*

Karl Popper was a philosopher of science who was very much interested in this question. He tried to distinguish 'science' from 'pseudoscience', but grew more and more dissatisfied with the idea that the empirical method (supporting a theory with observations and experiments) could effectively mark this distinction. He sometimes used the example of astrology “with its stupendous mass of empirical evidence based on observation”, but also qualified it by stating that “science often errs, and that pseudoscience may happen to stumble on the truth.”

Next to his well-known work on falsification, Popper started to develop alternatives to determine the scientific status or quality of a theory. He wrote the complex yet intriguing sentence “confirmations [of a theory] should count only if they are the result of risky predictions; that is to say, if, unenlightened by the theory in question, we should have expected an event which was incompatible with the theory — an event which would have refuted the theory.” (Popper, 1963).

Popper was especially thrilled with the result of Eddington’s eclipse observations, which in 1919 brought the first important confirmation of Einstein's theory of gravitation. It was the surprising consequence of this theory that light should bend in the presence of large, heavy objects (Einstein was apparently willing to drop his theory if this would not be the case). Independent of whether such a prediction turns out to be true or not, Popper considered it an important quality of ‘real science’ to make such ‘risky predictions’. Interesting thought, no?

I still find this an intriguing idea. The notion of ‘risky’ or ‘surprising predictions’ might actually be the beginning of a fruitful alternative to existing model selection techniques, such as goodness-of-fit (which theory predicts the data best) and simplicity (which theory gives the simplest explanation). In music cognition, too, measures like goodness-of-fit (r-squared, percentage of variance accounted for, and other measures from the experimental psychology toolkit) are often used to confirm a theory. Nevertheless, it is non-trivial to think of theories that make surprising predictions. That is, a theory that predicts a yet unknown phenomenon as a consequence of the intrinsic structure of the theory itself. If you know of any, let me know!
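To make the two standard selection criteria mentioned above concrete, here is a minimal sketch, with invented data (not from any actual experiment), of goodness-of-fit (r-squared) and a simplicity-sensitive score (AIC, which rewards fit but penalizes the number of free parameters). The variables and values are purely illustrative assumptions:

```python
import math

# Hypothetical data (invented for illustration): stimulus level vs. response
x = [1, 2, 3, 4, 5, 6]
y = [2.1, 3.9, 6.2, 8.1, 9.8, 12.2]

def r_squared(y, pred):
    """Goodness-of-fit: proportion of variance accounted for."""
    my = sum(y) / len(y)
    ss_res = sum((yi - pi) ** 2 for yi, pi in zip(y, pred))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

def aic(y, pred, k):
    """Akaike information criterion: smaller is better.
    Trades goodness-of-fit against the number of free parameters k."""
    n = len(y)
    rss = sum((yi - pi) ** 2 for yi, pi in zip(y, pred))
    return n * math.log(rss / n) + 2 * k

# Model A: predict the overall mean (one free parameter)
mean_y = sum(y) / len(y)
pred_mean = [mean_y] * len(y)

# Model B: ordinary least-squares line y = a*x + b (two free parameters)
n = len(x)
mx = sum(x) / n
a = sum((xi - mx) * (yi - mean_y) for xi, yi in zip(x, y)) \
    / sum((xi - mx) ** 2 for xi in x)
b = mean_y - a * mx
pred_line = [a * xi + b for xi in x]

for name, pred, k in [("mean", pred_mean, 1), ("line", pred_line, 2)]:
    print(name, round(r_squared(y, pred), 3), round(aic(y, pred, k), 2))
```

Note that neither criterion says anything about whether a model makes risky predictions; both only score how well (and how economically) it accommodates data already in hand, which is exactly the limitation Popper's idea points at.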

Popper, K. R. (1963). Conjectures and Refutations. London: Routledge.

* Repeated blog entry from July 23, 2007 (celebrating finalizing a research proposal with Jan-Willem Romeijn on these issues, hoping to be able to address these issues head-on ;-)

Monday, July 23, 2007

What makes a theory compelling?

Karl Popper was a philosopher of science who was very much interested in this question. He tried to distinguish 'science' from 'pseudoscience', but grew more and more dissatisfied with the idea that the empirical method (supporting a theory with observations and experiments) could effectively mark this distinction. He sometimes used the example of astrology “with its stupendous mass of empirical evidence based on observation”, but also qualified it by stating that “science often errs, and that pseudoscience may happen to stumble on the truth.”

Next to his well-known work on falsification, Popper started to develop alternatives to determine the scientific status or quality of a theory. He wrote that “confirmations [of a theory] should count only if they are the result of risky predictions; that is to say, if, unenlightened by the theory in question, we should have expected an event which was incompatible with the theory — an event which would have refuted the theory.” (Popper, 1963).

Popper was especially thrilled with the result of Eddington’s eclipse observations, which in 1919 brought the first important confirmation of Einstein's theory of gravitation. It was the surprising consequence of this theory that light should bend in the presence of large, heavy objects (Einstein was apparently willing to drop his theory if this would not be the case). Independent of whether such a prediction turns out to be true or not, Popper considered it an important quality of ‘real science’ to make such ‘risky predictions’.

I still find this an intriguing idea. The notion of ‘risky’ or ‘surprising predictions’ might actually be the beginning of a fruitful alternative to existing model selection techniques, such as goodness-of-fit (which theory predicts the data best) and simplicity (which theory gives the simplest explanation). In music cognition, too, measures like goodness-of-fit (r-squared, percentage of variance accounted for, and other measures from the experimental psychology toolkit) are often used to confirm a theory.* Nevertheless, it is non-trivial to think of (existing) theories in music cognition that make surprising predictions. That is, a theory that predicts a yet unknown phenomenon as a consequence of the intrinsic structure of the theory itself. (If you know of any, let me know!)

Well, these are still relatively raw ideas. I hope to be able to present them in a more digested format next week at the music perception and cognition conference (SMPC) in Montreal. Looking forward to it!

* If you want to read more on this topic, see here.