This is a self-indulgent post on music and language, more precisely on brain connections between music and language. The post is inspired by a great book I read this summer: Musicophilia – Tales of Music and the Brain by Oliver Sacks.
Oliver Sacks, a renowned neurologist and author, loved words so much that he often dreamed of them, and sometimes dreamed them up. “Musicophilia” is one of the words he coined, meaning an intense love of music.
In his book Sacks describes our exquisite sensitivity to music. Music has a unique power to express our feelings or to evoke memories. It can get us dancing to its beat or lift us out of depression. Catchy tunes can subject us to hours of mental replay, and a surprising number of people acquire nonstop musical hallucinations.
The author examines the powers of music through the individual experiences of patients, musicians, and ordinary people – from a man who is struck by lightning and suddenly inspired to become a pianist at the age of forty-two, to a group of children with Williams syndrome, who are hypermusical from birth; he describes people with “amusia,” to whom a symphony sounds like the clattering of pots and pans, and a man whose memory spans only seven seconds – for everything but music.
Music is both completely abstract and profoundly emotional
Sacks writes that there is a tendency in philosophy and psychology to separate the mind, the intellectual operations, from the passions, the emotions. The neuroscience of music, in particular, has concentrated almost exclusively on the neural mechanisms by which we perceive pitch, tonal intervals, melody, rhythm, and so on, and, until very recently, has paid little attention to the affective aspects of appreciating music.
“Yet music calls to both parts of our nature – it is essentially emotional, as it is essentially intellectual. Often when we listen to music, we are conscious of both: we may be moved to the depths even as we appreciate the formal structure of a composition.”
Professional musicians, or anyone practising a piece of music, may sometimes have to listen with a critical ear to ensure that all the minutiae of a performance are technically correct, but technical correctness alone is not enough; once this is achieved, emotion must return, or one may be left with nothing beyond an arid virtuosity. And, as the author points out “it is always a balance, a coming together, that is needed.”
The author goes on to explain that we have separate and distinct mechanisms for appreciating the structural and the emotional aspects of music, a separation that “is brought home by the wide variety of responses (and even “dissociations”) that people have to music.” Thus it is quite striking that “one may be quite “musical” and yet almost indifferent to music, or almost tone-deaf yet passionately sensitive to music.”
Professional musicians, in general, possess what most of us would regard as remarkable “powers of musical imagery”, since many composers do not compose initially or entirely at an instrument but in their minds. The author cites the example of Beethoven, who wrote brilliant compositions even after he had become deaf.
“There is no more extraordinary example of this than Beethoven, who continued to compose (and whose compositions rose to greater and greater heights) years after he had become totally deaf. It is possible that his musical imagery was even intensified by deafness, for with the removal of normal auditory input, the auditory cortex may become hypersensitive, with heightened powers of musical imagery (and sometimes even auditory hallucinations).”
Brain connections between music & language
There are theories that music is older than speech or language, and some even argue that speech evolved from music.
“Language and music both depend on phonatory and articulatory mechanisms that are rudimentary in other primates, and both depend, for their appreciation, on distinctly human brain mechanisms dedicated to the analysis of complex, segmented, rapidly changing streams of sound. And yet there are major differences (and some overlaps) in the processing of speech and song in the brain.”
In Musicophilia Sacks gives examples of patients with so-called nonfluent aphasia who not only have an impairment of vocabulary and grammar, but have also “forgotten” or lost the feeling for the rhythms and inflections of speech; hence the broken, unmusical, telegraphic style of their speech, to the extent that they have any words available at all.
As we know, speech itself is not just a succession of words in the proper order – it has inflections, intonations, tempo, rhythm, and “melody.” And pitch is central not only in music but in language, too, revealing the meaning of utterances (question or statement, anger or irony, enthusiasm or indifference, etc.).
Intonation in speech and intonation in music are similar in concept. While pondering this similarity, a post I read a few years ago – On singing accents by David Crystal – came to mind. It is a common phenomenon that most British singers adopt an American accent while singing. The most convincing theory is that British singers lose their accents because of the melodies and beats they are trying to follow as they sing. According to Crystal, there are two reasons for this. The first is phonetic: singers are forced to stress syllables as they are accented in the music, which obliges them to elongate their vowels. Crystal says it is unusual for a singer to hold a regional accent throughout a whole song, so most end up with what he calls ‘mixed accents’. The other reason for accent levelling in songs is social: some singers want to drop their regional accent because they want to sound like the fashionable mainstream. This has been especially noticeable in popular music since the early days of rock ‘n’ roll.
Do musicians make better L2 learners?
In the New York Times article “New Ways Into the Brain’s Music Room” from 2016, Natalie Angier writes about the research of a group of neuroscientists at the Massachusetts Institute of Technology (they reported their results in the journal Neuron). The findings offer researchers a new tool for exploring the contours of human musicality.
The researchers at M.I.T. have devised a radical new approach to brain imaging that reveals neural pathways that react almost exclusively to the sound of music – any music. When a musical passage is played, a distinct set of neurons tucked inside a furrow of a listener’s auditory cortex will fire in response. Other sounds, by contrast – a dog barking, a car skidding, a toilet flushing – leave the musical circuits unmoved.
“Importantly, the M.I.T. team demonstrated that the speech and music circuits are in different parts of the brain’s sprawling auditory cortex, where all sound signals are interpreted, and that each is largely deaf to the other’s sonic cues, although there is some overlap when it comes to responding to songs with lyrics.”
In an article on second language learning and musical ability, “Do musicians make better language learners?”, Aneta Pavlenko, Research Professor at the Center for Multilingualism in Society across the Lifespan at the University of Oslo, writes that music and language do seem to rely on common – or at least similar – processes: detection of differences in pitch, rhythm, phrasing and interpretation, tonal memory, memory for long sequences, and the ability to imitate and improvise based on familiar sequences. These similarities led researchers to ask two questions: Are abilities in one domain easily transferred to another? And are musicians better L2 learners than the rest of us?
Some studies found that speakers of tonal languages (Vietnamese and Mandarin, for example) were better at identifying musical pitches than speakers of English or French. They are also more likely to have absolute pitch (i.e. the ability to identify and recreate musical notes without the use of a reference tone). Pavlenko points out that these findings suggest that either musical training enhances pitch ability or people with high levels of pitch ability gravitate towards musical training (or both).
“Other than a minor advantage in discriminating tones, however, there does not appear to be any conclusive evidence that musicians are better at L2 learning or have superior pronunciation skills.”
According to these neuropsychological studies, music and language are represented in distinct areas of the brain, thereby indicating that the link between musical ability and second language learning is not as direct as one might think.
So where do we stand on the relationship between music and language?
“We certainly should not jump to the conclusion that speakers of tonal languages make better musicians. There is more to musical talent than sensitivity to pitch. By the same token, not every musician is a polyglot – there is much more to L2 learning than tonal discrimination and when it comes to syntax, vocabulary or pragmatics, musicians have no advantage over the rest of us.”
Musicophilia by Oliver Sacks
DC Blog: On singing accents http://david-crystal.blogspot.com/2009/11/on-singing-accents.html
The Telegraph: Why you put on an American accent when you sing https://www.telegraph.co.uk/culture/music/rockandpopmusic/11720137/Why-you-put-on-an-American-accent-when-you-sing.html
Psychology Today: Do musicians make better language learners https://www.psychologytoday.com/us/blog/life-bilingual/201707/do-musicians-make-better-language-learners
The New York Times: New Ways Into the Brain’s Music Room https://www.nytimes.com/2016/02/09/science/new-ways-into-the-brains-music-room.html?_r=3