Musical semantics
Music is one of the oldest, and most basic, socio-cognitive domains of the human species. Primate vocalizations are mainly determined by music-like features (such as pitch, amplitude and frequency modulations, timbre and rhythm), and it is assumed that human musical abilities played a key phylogenetic role in the evolution of language. Likewise, it is assumed that, ontogenetically, infants' first steps into language are based on prosodic information, and that musical communication in early childhood (such as maternal music) plays a major role in the emotional, cognitive and social development of children. The music faculty is in some respects unique to the human species: only humans compose music, learn to play musical instruments and play them cooperatively in groups. Playing a musical instrument in a group is a tremendously demanding task for the human brain that potentially engages all cognitive processes we are aware of. It involves perception, action, learning, memory and emotion, making music an ideal tool for investigating human cognition and the underlying brain mechanisms. The relatively young discipline of 'neurocognition of music' covers a wide field of biopsychological research, ranging from the investigation of psychoacoustics and the neural coding of sounds to the brain functions underlying cognition and emotion during the perception and production of highly complex musical information.
Semantics is a key feature of language, but whether music can activate brain mechanisms related to the processing of semantic meaning remained an open question until 2004, when Koelsch et al. published a study that investigated the semantic priming of words using sentences and music (Koelsch, S. et al.: Music, language and meaning: brain signatures of semantic processing, Nature Neuroscience, 2004). The following article mainly refers to this study, which was the first to demonstrate that music can transfer semantic information at all.
What is the semantic priming effect?
A sentence such as 'Sissy sings a song' facilitates the neural processing of semantically related words like 'music', whereas it does not facilitate the processing of semantically unrelated words like 'carpet'. This is known as the semantic priming effect: it refers to the highly consistent processing advantage seen for words that are preceded by a semantically related context. The effect is electrophysiologically reflected by the N400 component of event-related potential (ERP) measurements. The N400 is a negative-polarity ERP component that is maximal over centro-parietal electrode sites. It emerges at around 250 ms after word onset and reaches its maximal amplitude at about 400 ms. When a word is preceded by a semantic context, the amplitude of the N400 is inversely related to the degree of semantic congruity between the word and its preceding context. The processing of almost any type of semantically meaningful information seems to be associated with an N400 effect (Kutas, M. et al.: Electrophysiology reveals semantic memory use in language comprehension, Trends Cogn. Sci., 2000).
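To make the measurement logic concrete, the following sketch shows how an N400 difference wave could be computed from epoched EEG data. It is a minimal illustration, not the authors' analysis code: the array shapes, sampling rate and the 250-500 ms window are assumptions, and random numbers stand in for real recordings.

```python
import numpy as np

# Assumed sampling rate and epoch layout; a real study would load recorded data.
fs = 500                                   # Hz (assumption)
n_trials, n_channels, n_samples = 40, 64, 500

rng = np.random.default_rng(0)
# Random placeholders standing in for epochs of shape (trials, channels, samples),
# time-locked to target-word onset.
epochs_related = rng.normal(size=(n_trials, n_channels, n_samples))
epochs_unrelated = rng.normal(size=(n_trials, n_channels, n_samples))

# ERPs are obtained by averaging across trials within each condition.
erp_related = epochs_related.mean(axis=0)        # (channels, samples)
erp_unrelated = epochs_unrelated.mean(axis=0)

# The N400 effect is the difference wave: unrelated minus related.
difference_wave = erp_unrelated - erp_related

# Mean amplitude in a typical N400 window, roughly 250-500 ms after onset.
start, stop = int(0.25 * fs), int(0.50 * fs)
n400_per_channel = difference_wave[:, start:stop].mean(axis=1)
print(n400_per_channel.shape)   # one mean amplitude per electrode
```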
Semantic information is clearly a key feature of language, but is this kind of information also an important aspect of music?
Most music theorists posit at least four different aspects of musical meaning:
i) meaning that emerges from a connection across different frames of reference suggested by common patterns or forms (for example, sound patterns in terms of pitch, dynamics, tempo and timbre that resemble features of objects such as rushing water)
ii) meaning that arises from the suggestion of a particular mood
iii) meaning that results from extra-musical associations (a national anthem, for example)
iv) meaning that can be attributed to the interplay of formal structures in creating patterns of tension and resolution.
Sources:
Jones, M.R. et al.: Cognitive Bases of Musical Communication, American Psychological Association, 1992
Swain, J.: Musical Languages, Norton, New York, 1997
Raffman, D.: Language, Music, and Mind, MIT Press, 1993
Meyer, L.B.: Emotion and Meaning in Music, Univ. of Chicago Press, 1956
Hevner, K.: The affective value of pitch and tempo in music, Am. J. Psychol., 1937
Peirce, C.: The Collected Papers of C.S. Peirce, Harvard Univ. Press, 1958
Zbikowski, L.: Conceptualizing Music: Cognitive Structure, Theory and Analysis, Oxford Univ. Press, 2002
Most linguists, however, would reject the notion that music can transfer specific semantic concepts (Pinker, S.: How the Mind Works, Norton, New York, 1997). Yet the 2004 study by Koelsch et al. provided strong behavioural and electrophysiological evidence that music is indeed able to transfer semantic information.
Intuitively, it seems plausible that certain passages of Holst's The Planets or Beethoven's symphonies prime the word 'hero' rather than the word 'flea'. As primes, Koelsch et al. used sentences and musical excerpts that were, with respect to their meaning, either related or unrelated to a target word. Half of the target words were abstract, the other half concrete. Most of the stimuli that primed concrete words resembled sounds (e.g., 'bird') or qualities of objects (e.g., low tones associated with 'basement', or ascending pitch steps associated with 'staircase'). Some musical stimuli (especially those used as primes for abstract words) resembled prosodic and possibly gestural cues that can be associated with particular words (e.g., 'sigh', 'consolation'). Other stimuli presented stereotypic musical forms or styles that are commonly associated with particular words (e.g., a church anthem and the word 'devotion').
Importantly, participants were not familiar with the musical excerpts, so meaning could not simply be ascribed through extra-musical associations with an explicit, direct link to language (such as titles or lyrics). Because the priming of words could not rely on direct associations between musical primes and target words, Koelsch et al. were able to investigate whether the N400 can also be elicited by stimuli that are not directly linked to language.
Behaviourally, subjects categorized 92% of the target words correctly when the target words were presented after a sentence. When target words were preceded by musical excerpts, 80% of the targets were categorized correctly. Further behavioural data were collected in a pre-experiment, in which subjects rated the semantic relatedness between prime and target words on a scale ranging from -5 to +5, and in an additional experiment, in which subjects were instructed to choose, from a five-word list, the word semantically most closely related to the prime. Both the pre-experiment and the additional experiment confirmed the behavioural results of the EEG experiment.
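As a simple illustration of how such behavioural data might be summarized, the sketch below computes categorization accuracy and mean relatedness ratings. All values and variable names are hypothetical placeholders, not the study's data.

```python
import numpy as np

# Hypothetical single-subject trial outcomes (1 = target categorized correctly).
sentence_trials = np.array([1, 1, 1, 0, 1, 1, 1, 1, 1, 1])
music_trials = np.array([1, 1, 0, 1, 1, 0, 1, 1, 1, 1])
print(f"sentence primes: {sentence_trials.mean():.0%} correct")
print(f"music primes:    {music_trials.mean():.0%} correct")

# Hypothetical relatedness ratings on the -5..+5 scale of the pre-experiment:
# related prime-target pairs should be rated high, unrelated pairs low.
ratings_related = np.array([4, 5, 3, 4, 5])
ratings_unrelated = np.array([-3, -4, -2, -5, -3])
print(f"related pairs:   mean rating {ratings_related.mean():+.1f}")
print(f"unrelated pairs: mean rating {ratings_unrelated.mean():+.1f}")
```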
Semantic processing was measured using electroencephalography (EEG). Target words elicited an N400 when presented after semantically unrelated sentences. Likewise, an N400 effect was elicited when target words were preceded by semantically unrelated musical excerpts, showing that music can transfer semantically meaningful information by priming representations of meaningful concepts. As expected, the ERPs of the target words showed larger N400s when the targets were presented after semantically unrelated sentences than after semantically related sentences. Just as with the sentence primes, the target words also elicited larger N400s when presented after an unrelated musical excerpt than after a related one. In both language and music, concrete as well as abstract target words elicited significant N400 effects.
The N400 effect (that is, the difference between unprimed and primed target words) did not differ between the language domain (sentences followed by target words) and the music domain (musical excerpts followed by target words) with respect to amplitude, latency or scalp distribution. In both domains, a bilateral N400 was maximal around 410 ms over centro-parietal electrode sites. The N400 effects also did not differ between the prime-target pairs with and without balanced content, in either the language or the music domain. This finding rules out the possibility that the musical excerpts merely primed an emotional state that was (in)consistent with the emotional content of the target word.
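A hedged sketch of how such a domain comparison could be run on per-subject N400 amplitudes follows. The paired t-test is a stand-in for the study's actual statistics, and all numbers are simulated; a non-significant result is what the reported equivalence of the two domains would look like.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_subjects = 20

# Simulated per-subject N400 effect amplitudes (microvolts, unrelated minus
# related); the music values track the language values, mimicking equivalence.
n400_language = rng.normal(loc=-2.0, scale=0.8, size=n_subjects)
n400_music = n400_language + rng.normal(scale=0.5, size=n_subjects)

# Paired comparison across subjects; a large p-value means no evidence that
# the N400 effect differs between the two domains.
t, p = stats.ttest_rel(n400_language, n400_music)
print(f"paired t({n_subjects - 1}) = {t:.2f}, p = {p:.3f}")
```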
The sources of the electric brain activity underlying the N400 effect did not differ statistically between the language and the music domain, neither with respect to location, orientation or strength, nor with respect to the time point of the maximum or the explained variance. The source analysis of the N400 effect indicated generators located in the posterior portion of the middle temporal gyrus (MTG, Brodmann's area 21/37), quite close to the superior temporal sulcus. This localization concurs with numerous studies on the functional neuroanatomy of semantic processing at the level of both words and sentences (see sources below).
Sources:
Friederici, A.D.: Towards a neural basis of auditory sentence processing, Trends Cogn. Sci., 2002
Démonet, J.-F. et al.: The anatomy of phonological and semantic processing in normal subjects, Brain, 1992
Price, C. et al.: Segregating semantic from phonological processes during reading, J. Cogn. Neurosci., 1997
Friederici, A.D. et al.: Segregating semantic and syntactic aspects of processing in the human brain: an fMRI investigation of different word types, Cereb. Cortex, 2000
Ni, W. et al.: An event-related neuroimaging study distinguishing form and content in sentence processing, J. Cogn. Neurosci., 2000
Kuperberg, G. et al.: Common and distinct neural substrates for pragmatic, semantic and syntactic processing of spoken sentences: an fMRI study, J. Cogn. Neurosci., 2000
Baumgaertner, A. et al.: Event-related fMRI reveals cortical sites involved in contextual sentence integration, Neuroimage, 2002
Halgren, E. et al.: N400-like magnetoencephalography responses modulated by semantic context, word frequency and lexical class in sentences, Neuroimage, 2002
Helenius, P. et al.: Distinct time courses of word and context comprehension in the left temporal cortex, Brain, 1998
To conclude, these results indicate that music transfers considerably more semantic information than previously believed. There is ample evidence that the N400 elicited by words reflects the processing of meaning. The data collected by Koelsch et al. show that the influence on the semantic processing of target words can be identical for language and music: the N400 priming effect in the music domain did not differ from that in the language domain.