Musical memory refers to the ability to remember music-related information, such as melodic content and other progressions of tones or pitches. The differences found between linguistic memory and musical memory have led researchers to theorize that musical memory is encoded differently from language and may constitute an independent part of the phonological loop. The use of this term is problematic, however, since it implies input from a verbal system, whereas music is in principle nonverbal.[1]
Consistent with hemispheric lateralization, there is evidence that the left and right hemispheres of the brain are responsible for different components of musical memory. By studying the learning curves of patients with damage to either the left or right medial temporal lobe, Wilson and Saling (2008) found hemispheric differences in the contributions of the left and right medial temporal lobes to melodic memory.[2] Ayotte, Peretz, Rousseau, Bard and Bojanowski (2000) found that patients whose left middle cerebral artery had been cut in response to an aneurysm suffered greater impairments on tasks of musical long-term memory than patients whose right middle cerebral artery had been cut.[3] They concluded that the left hemisphere is mainly important for representing music in long-term memory, whereas the right hemisphere is needed primarily to mediate access to this memory. Samson and Zatorre (1991) studied patients with severe epilepsy who underwent surgery for relief, as well as control subjects. They found deficits in recognition memory for text, regardless of whether it was sung or spoken, after a left but not a right temporal lobectomy.[4] However, recognition of a melody that was sung with different words at test than at encoding was impaired after either a right or a left temporal lobectomy. Finally, recognition of melodies presented without lyrics was impaired after a right but not a left temporal lobectomy. This pattern suggests dual memory codes for music: a verbal code relying on left temporal lobe structures, and a melodic code relying on right temporal lobe structures, or on both when words are involved at encoding.
Platel (2005) defined musical semantic memory as memory for pieces without memory for the temporal or spatial context in which they were encountered, and musical episodic memory as memory for pieces together with the context in which they were learned.[5] Two distinct patterns of neural activation were found when comparing the semantic and episodic components of musical memory. Controlling for processes of early auditory analysis, working memory and mental imagery, Platel found that retrieval from semantic musical memory involved activation in the right inferior and middle frontal gyri, the superior and inferior right temporal gyri, the right anterior cingulate gyrus and the parietal lobe, with some additional activation in the left middle and inferior frontal gyri. Retrieval from episodic musical memory, which includes music-evoked autobiographical memory, produced bilateral activation in the middle and superior frontal gyri and the precuneus; although activation was bilateral, it was dominant in the right hemisphere. This research suggests the independence of episodic and semantic musical memory. The Levitin effect demonstrates accurate semantic memory for musical pitch and tempo among listeners, even without musical training and without episodic memory of the original learning context.
Gaab, Keenan and Schlaug (2003) used fMRI to find a difference between males and females in the processing of, and subsequent memory for, pitch. More specifically, males showed more lateralized activity in the anterior and posterior perisylvian regions, with greater activation on the left, and more cerebellar activation than females. Females, in contrast, showed more activation in the posterior cingulate and retrosplenial cortex. Nevertheless, behavioural performance did not differ between males and females.[6]
Deutsch[7][8] found that left-handers with mixed hand preference outperform right-handers in tests of short-term memory for pitch. This may be because the mixed-preference left-handed group stores more information on both sides of the brain.
Experts have extensive experience, gained through practice and education, in a particular field. Musical experts use some of the same strategies as experts in other fields that require large amounts of memorization: chunking, organization and practice.[9] For example, musical experts may organize notes into scales or create a hierarchical retrieval scheme to facilitate retrieval from long-term memory. In a case study of an expert pianist, Chaffin and Imreh (2002) found that a retrieval scheme was developed to ensure that the music could be recalled with ease. This expert used auditory and motor memory along with conceptual memory.[10] Together, the auditory and motor representations allow for automaticity during performance, whereas conceptual memory is mainly used to intervene when the piece goes off track. Studying concert soloists, Chaffin and Logan (2006) reiterated that a hierarchical organization exists in memory, and went a step further in suggesting that soloists use a mental map of the piece that allows them to keep track of its progression.[11] Chaffin and Logan (2006) also demonstrated that performance cues monitor the automatic aspects of performance and adjust them accordingly. They distinguish between basic, interpretive and expressive performance cues: basic cues monitor technical features, interpretive cues monitor changes made to different aspects of the piece, and expressive cues monitor the feelings the music conveys. These cues are developed when experts pay attention to a particular aspect during practice.[11]
Savant syndrome describes a person with a low IQ who nevertheless shows superior performance in one particular field.[12] Sloboda, Hermelin and O'Connor (1985) discussed a patient, NP, who was able to memorize very complex musical pieces after hearing them three or four times. NP's performance exceeded that of experts with very high IQs, yet his performance on other memory tasks was average for a person with an IQ in his range. They used NP's case to suggest that a high IQ is not needed for the skill of musical memorization and that other factors must be influencing this performance. Miller (1987) also studied a 7-year-old child who was said to be a musical savant.[13] This child had superior short-term memory for music, which was found to be influenced by the attention given to the complexity of the music, the key signature, and repeated configurations within a string. Miller (1987) suggests that a savant's ability is due to encoding the information into already existing meaningful structures in long-term memory.
Ruthsatz and Detterman (2003) define a prodigy as a child (younger than 10) who is able to excel at "culturally relevant" tasks to an extent rarely seen even among professionals in the field.[14] They describe the case of one particular boy who had already released two CDs (on which he sings in two different languages) and was able to play several instruments by the age of 6.
Other observations of this young child's abilities were also reported, including his exceptional memory for music.
Amusia is also known as tone deafness. Amusics primarily have deficits in processing pitch, and they also have problems with musical memory, singing and timing. Amusics also cannot tell melodies apart by their rhythm or beat. However, amusics can recognize other sounds at a normal level (e.g. song lyrics, voices and environmental sounds), demonstrating that amusia is not due to deficits in exposure, hearing or cognition.[15]
Music has been shown to improve memory in several situations. In one study of musical effects on memory, visual cues (filmed events) were paired with background music. Later, participants who could not recall details of the scene were presented with the background music as a cue and recovered the inaccessible scene information.[16]
Other research supports memory for text being improved by setting it to music.[17] Words presented in song were remembered significantly better than words presented in speech. Earlier research supports this finding: advertising jingles that pair words with music are remembered better than words alone or spoken words accompanied by background music.[18] Memory for pairing brands with their proper slogans was also enhanced when the advertising used lyrics set to music rather than spoken words with music in the background.
Training in music has also been shown to improve verbal memory in children and adults.[19] Participants with and without musical training were tested for immediate recall of words and for recall after 15-minute delays. Word lists were presented orally to each participant three times, after which participants recalled as many words as they could. Even when matched for intelligence, the musically trained participants performed better than the non-musically trained participants. The authors suggest that musical training enhances verbal memory processing through neuroanatomical changes in the left temporal lobe (the region responsible for verbal memory), which is supported by previous research.[20] MRI has shown that this region of the brain is larger in musicians than in non-musicians, which may reflect changes in cortical organization contributing to improved cognitive function.
Anecdotal evidence from an amnesic patient named CH, who suffered from declarative memory deficits, supports a preserved memory capacity for song titles. CH's unique knowledge of accordion music allowed experimenters to test verbal and musical associations. When presented with song titles, CH successfully played the correct song 100% of the time, and when presented with a melody, she chose the appropriate title from several distractors with a 90% success rate.[21]
Interference occurs when information in short-term memory interferes with or obstructs the retrieval of other information. Some researchers believe that interference in memory for pitch is due to a general limited capacity of the short-term memory system, regardless of the type of information that it retains. However, Deutsch has shown that memory for pitch is subject to interference based on the presentation of other pitches but not by the presentation of spoken numbers.[22] Further work has shown that short-term memory for the pitch of a tone is subject to highly specific effects produced by other tones, which depend on the pitch relationship between the interfering tones and the tone to be remembered.[23][24][25][26] It appears, therefore, that memory for pitch is the function of a highly organized system that specifically retains pitch information.
Any additional information present at the time of comprehension has the ability to displace the target information from short-term memory. Therefore, there is potential that one's ability to understand and remember will be compromised if one studies with the television or radio on.[27]
While studies have reported inconsistent results regarding music's effect on memory, it has been demonstrated that music can interfere with various memory tasks. New situations require new combinations of cognitive processing, which draws conscious attention to the novel aspects of the situation.[28] The loudness of the music, along with its other elements, can therefore distract from normal responses by encouraging attentiveness to the musical information.[29] Attention and recall have been shown to be negatively affected by the presence of a distraction.[30] Wolfe (1983) cautions that educators and therapists should be aware of the potential for environments with sounds occurring simultaneously from many sources, musical and non-musical, to distract from and interfere with student learning.[29]
Campbell and Hawley (1982) provided evidence of differences in arousal regulation between introverts and extroverts. They found that when studying in a library, extroverts were more likely to choose to work in areas with bustle and activity, while introverts were more likely to choose a quiet, secluded area.[31] Accordingly, Adrian Furnham and Anna Bradley found that introverts presented with music during two cognitive tasks (prose recall and reading comprehension) performed significantly worse on a test of memory recall than extroverts presented with music during the same tasks. However, when music was absent during the tasks, introverts and extroverts performed at the same level.[30]
Research has demonstrated that the normal right hemisphere of the brain responds to melody holistically, consistent with Gestalt psychology, whereas the left hemisphere evaluates melodic passages in a more analytic fashion, similar to the left hemisphere's feature-detecting role in vision.[32] For instance, Regelski (1977) illustrated that while listening to the melody of the popular carol "Silent Night", the right hemisphere thinks, "Ah, yes, Silent Night", while the left hemisphere thinks, "two sequences: the first a literal repetition, the second a repetition at different pitch levels—ah, yes, Silent Night by Franz Gruber, typical pastorale folk style." For the most part the brain works well when each hemisphere performs its own function in solving a task or problem; the two hemispheres are quite complementary. However, situations arise in which the two modes conflict, with one hemisphere interfering with the operation of the other.[32]
Absolute pitch (AP) is the ability to produce or recognize specific pitches without reference to an external standard.[33][34] People with AP have internalized pitch references and are thus able to maintain stable representations of pitch in long-term memory. AP is regarded as a rare and somewhat mysterious ability, occurring in as few as 1 in 10,000 people. A method commonly used to test for AP is as follows: subjects are first asked to close their eyes and imagine that a specific song is playing in their heads. Encouraged to start anywhere in the tune they like, subjects are instructed to try to reproduce the tones of that song by singing, humming or whistling. The subjects' productions are recorded digitally and then compared to the actual tones sung by the artists, with errors measured in semitone deviations from the correct pitch.[33] This test, however, does not determine whether the subject has true absolute pitch; rather, it is a test of implicit absolute pitch. Where true absolute pitch is concerned, Deutsch and colleagues have shown that music conservatory students who speak tone languages have a far higher prevalence of absolute pitch than speakers of non-tone languages such as English.[35][36][37]
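The semitone error measure used in such tests follows directly from equal-temperament tuning, in which neighbouring semitones differ by a frequency ratio of 2^(1/12). A minimal sketch of the computation in Python, with the function name and example frequencies chosen purely for illustration:

```python
import math

def semitone_deviation(f_produced: float, f_target: float) -> float:
    """Signed deviation of a produced pitch from a target pitch, in semitones.

    In equal temperament, adjacent semitones differ by a frequency ratio
    of 2**(1/12), so the deviation is 12 * log2(f_produced / f_target).
    """
    return 12 * math.log2(f_produced / f_target)

# Illustrative values: a subject sings 427 Hz for a note the artist
# performed at A4 (440 Hz); the production is about half a semitone flat.
print(round(semitone_deviation(427.0, 440.0), 2))  # -0.52
```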
The ability to recognize incorrect musical pitch is most often tested using the Distorted Tunes Test (DTT). The DTT was originally developed in the 1940s and was used in large studies of the British population. It measures musical pitch recognition ability on an ordinal scale, scored as the number of correctly classified tunes: subjects judge whether simple popular melodies contain notes with incorrect pitch. Researchers have used this method to investigate genetic correlates of musical pitch recognition in monozygotic and dizygotic twins.[38] Drayna, Manichaikul, Lange, Snieder and Spector (2001) determined that variation in musical pitch recognition is primarily due to highly heritable differences in auditory functions not tested by conventional audiologic methods. The DTT may therefore be useful for advancing similar research.[38]
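Since the DTT is scored simply as the number of correctly classified tunes, the scoring can be sketched in a few lines. This is an illustrative reconstruction, not the test's actual materials; the tune labels and judgements below are invented:

```python
# Each entry records whether a tune was actually distorted and whether
# the listener judged it to be distorted.
trials = [
    ("tune A", True,  True),   # distorted, correctly detected
    ("tune B", True,  False),  # distorted, missed
    ("tune C", False, False),  # intact, correctly accepted
    ("tune D", True,  True),
    ("tune E", False, True),   # intact, falsely called distorted
    ("tune F", True,  True),
]

# DTT score: the count of correct classifications (an ordinal scale).
score = sum(actual == judged for _, actual, judged in trials)
print(f"DTT score: {score} / {len(trials)}")  # DTT score: 4 / 6
```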
The following testing procedure has been used to assess infants' ability to recall familiar yet complex pieces of music,[39] as well as their preference for timbre and tempo.[40] The procedure has demonstrated not only that infants attend longer to familiar than to unfamiliar pieces of music, but also that infants remember the tempo and timbre of the familiarized melodies over long periods of time: changing the tempo or timbre at test eliminates an infant's preference for the novel melody, indicating that infants' long-term memory representations are not simply of the abstract musical structure but contain surface or performance features as well. The testing procedure contains three phases: a familiarization phase, in which the infant repeatedly hears a piece of music over a period of days or weeks; a retention interval, during which the piece is not heard; and a test phase, in which the infant's listening times to the familiarized piece and to a novel piece are compared.
Many students listen to music while they study, maintaining that it prevents drowsiness and sustains their arousal; some even believe that background music facilitates better work performance.[41] However, Salamé and Baddeley (1989) showed that both vocal and instrumental music interfered with performance on tasks of linguistic memory.[42] They explained the disturbance as the result of task-irrelevant phonological information consuming resources in the working memory system:[41] the linguistic component of music can occupy the phonological loop in much the way speech does.[43] This is further demonstrated by the finding that vocal music is perceived to interfere more with memory than instrumental music or nature-sound music.[41] Rolla (1993) explains that lyrics, being language, develop images that allow for the interpretation of experience in the communicative process.[44] Current research[which?] coincides with this idea, maintaining that the sharing of experience through language in song may communicate feeling and mood more directly than either language alone or instrumental music alone; vocal music also affects emotion and mood more swiftly than instrumental music.[44] However, Fogelson (1973) reported that even instrumental music interfered with children's performance on a reading comprehension test.[45]
Neural structures form and become more sophisticated as a result of experience. For example, a preference for consonance (the harmony or agreement of components) over dissonance (an unstable tone combination) is found early in development. Research suggests that this is due both to experience of structured sounds and to the early development of the basilar membrane and auditory nerve, two early-developing structures of the auditory system.[46] An incoming auditory stimulus evokes responses measured in the form of an event-related potential (ERP), a brain response resulting directly from a thought or perception. ERP measures differ across normally developing infants between 2 and 6 months of age: infants 4 months and older show faster, more negative ERPs, whereas newborns and infants up to 4 months of age show slow, unsynchronized, positive ERPs.[47] Trainor et al. (2003) hypothesized that these results indicate that responses in infants less than four months of age are produced by subcortical auditory structures, whereas in older infants responses tend to originate in higher cortical structures.
There are two methods of encoding and remembering music. The first is relative pitch, a person's ability to identify the intervals between given tones; the song is learned as a continuous succession of intervals. Some people can instead use absolute pitch, the ability to name or replicate a tone without reference to an external standard. The related term perfect pitch refers to the ability, on seeing or hearing a given note, to sing or name it without a reference. Relative pitch has been credited by some as the more sophisticated of the two processes, since it allows quick recognition regardless of pitch, timbre or quality, and can produce physiological responses when, for example, a melody violates the learned interval pattern.[46] Relative pitch has been shown to develop at varying rates depending on culture. Trehub and Schellenberg (2008) found that 5- and 6-year-old Japanese children performed significantly better on a task requiring relative pitch than same-aged Canadian children. They hypothesized that this could be because Japanese children have more exposure to pitch accent through Japanese language and culture than the predominantly stress-accented environment that Canadian children experience.
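Relative-pitch encoding can be illustrated by reducing a melody to the succession of semitone intervals between consecutive notes, which makes recognition independent of the pitch level at which the tune is heard. A minimal sketch in Python, using MIDI note numbers and an invented helper name:

```python
def to_intervals(midi_notes: list[int]) -> list[int]:
    """Encode a melody as consecutive semitone intervals,
    discarding absolute pitch."""
    return [b - a for a, b in zip(midi_notes, midi_notes[1:])]

# The opening of "Twinkle, Twinkle, Little Star" in C major,
# and the same tune transposed up four semitones to E major.
melody_in_c = [60, 60, 67, 67, 69, 69, 67]  # C C G G A A G
melody_in_e = [64, 64, 71, 71, 73, 73, 71]  # E E B B C# C# B

# The interval code is identical, so a listener relying on relative
# pitch recognizes the tune at either pitch level.
print(to_intervals(melody_in_c) == to_intervals(melody_in_e))  # True
```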
Early acquisition of relative pitch allows for accelerated learning of scales and intervals. Musical training assists with the attentional and executive functioning necessary to interpret and efficiently encode music. In conjunction with brain plasticity, these processes become more and more stable. The process is self-reinforcing, however: the more learning takes place, the greater the stability of the processes, ultimately decreasing overall brain plasticity.[46] This could explain the discrepancy between the effort children and adults must put into mastering new tasks.
Atkinson and Shiffrin's 1968 model consists of separate components for short-term and long-term memory storage. It states that short-term memory is limited by its capacity and duration.[48] Research suggests that musical short-term memory is stored differently from verbal short-term memory. Berz (1995) found dissimilar modality and recency effects in language versus music, suggesting that different encoding processes are engaged.[49] Berz also demonstrated different levels of interference on tasks from linguistic versus musical stimuli. Finally, Berz provided evidence for a separate-store theory through the "Unattended Music Effect", stating: "If there was a singular acoustic store, unattended instrumental music would cause the same disruptions on verbal performance as would unattended vocal music or unattended vocal speech; this, however, [is] not the case".[49]
Baddeley and Hitch's 1974 model consists of three components: one main component, the central executive, and two subcomponents, the phonological loop and the visuospatial sketchpad.[50] The central executive's primary role is to mediate between the two subsystems. The visuospatial sketchpad holds information about what we see. The phonological loop can be further divided into the articulatory control system, the "inner voice" responsible for verbal rehearsal, and the phonological store, the "inner ear" responsible for speech-based storage. Major criticisms of this model include its lack of musical processing and encoding, and its neglect of other sensory inputs, such as the encoding and storage of olfactory, gustatory and tactile information.[49]
This theoretical model proposed by William Berz (1995) is based on the Baddeley and Hitch model.[49] However, Berz modified the model to include a musical memory loop as a loose addition (meaning, almost a separate loop altogether) to the phonological loop. This new musical perceptual loop contains musical inner speech in addition to the verbal inner speech provided by the original phonological loop. He also proposed another loop to include other sensory inputs that were disregarded in the Baddeley and Hitch model.[49]
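The component structure Berz describes can be summarized schematically. The following Python dictionary is only a reading aid for the model's hierarchy as described above, not a structure proposed by Berz himself:

```python
# Schematic of Berz's (1995) extension of the Baddeley-Hitch model.
working_memory = {
    "central executive": "mediates between the subsystems",
    "visuospatial sketchpad": "holds information about what we see",
    "phonological loop": {
        "articulatory control system": "the 'inner voice'; verbal rehearsal",
        "phonological store": "the 'inner ear'; speech-based storage",
    },
    # Berz's loose addition alongside the phonological loop:
    "musical memory loop": {
        "musical inner speech": "rehearsal of musical material",
    },
    # Berz also proposed a further loop for other sensory inputs
    # (olfactory, gustatory, tactile) neglected by the original model.
}
```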
In a model outlined by Stefan Koelsch and Walter Siebel, musical stimuli are processed along a successive timeline that breaks the auditory input down into different features and meanings. They maintained that upon perception the sound reaches the auditory nerve, brainstem and thalamus, at which point features such as pitch height, chroma, timbre, intensity and roughness are extracted; this occurs at about 10–100 ms. Next, melodic and rhythmic grouping occurs, which is then held in auditory sensory memory. After this, intervals and chord progressions are analysed, and harmonic structure is built upon metre, rhythm and timbre; this occurs at about 180–400 ms after the initial perception. Structural reanalysis and repair follow at about 600–900 ms. Finally, the autonomic nervous system and multimodal association cortices are activated. Koelsch and Siebel proposed that interpretation of the sound's meaning, and the accompanying emotion, begins at about 250–500 ms and continues throughout this process, as indicated by the N400, a negative deflection at about 400 ms in the event-related potential.[51]
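The stages and approximate latencies named above can be laid out as a simple timeline. Note that the source gives no window for the grouping stage; the 100–180 ms entry below is an assumed interpolation between its stated neighbours:

```python
# Approximate time course of the Koelsch-Siebel stages (milliseconds),
# taken from the description above; the 100-180 ms window is an
# assumption, interpolated between the stated neighbouring stages.
stages = [
    (10, 100, "feature extraction: pitch height, chroma, timbre, intensity, roughness"),
    (100, 180, "melodic and rhythmic grouping; auditory sensory memory"),
    (180, 400, "interval and chord analysis; harmonic structure building"),
    (250, 500, "meaning, interpretation and emotion (continuing throughout; N400)"),
    (600, 900, "structural reanalysis and repair"),
]
for onset, offset, stage in stages:
    print(f"{onset:>3}-{offset:<3} ms  {stage}")
```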