Spoken Language: Formulaic, Musical, & Bicameral

One could argue for an underlying connection between voice-hearing, formulaic language, and musical ability. This could relate to Julian Jaynes’ theory of the bicameral mind, as it has everything to do with the hemispheric division of neurocognitive functioning.

It is enticing to consider the possibility that language originally developed out of or in concert with music, the first linguistic expressions having been sing-song utterances. And it is fascinating to imagine that the voices of gods, ancestors, and the like might have spoken in a formulaic musicality. I remember reading about a custom, as I recall in pre-literate Germany, of people greeting each other with traditional (and probably formulaic) poems or rhymes. When I came across that, I wondered if it might have been a habit maintained from an earlier bicameralism.

Maybe poetic and musical language was common in most pre-literate societies. But by the time literacy came around to write languages down, those traditions and the mindsets that went with them might already have been in severe decline. That would mean little evidence would survive. We do know, for example, that Plato wanted to exclude the poets from his utopian Axial Age (i.e., post-bicameral) society.

Spoken language with rhymes or rhythm is dangerous because it has power over the human mind. It speaks to (or maybe even from) something ancient dwelling within us.

* * *

Rajeev J Sebastian: “Found this very interesting paper that suggests differences between grammatical language and so-called “formulaic” language and the link between melody/music and “formulaic” language … echoes of [Julian Jaynes’] theory in there.”

Ed Buffaloe: “It makes me wonder if communication in bicameral men may have been largely through right-brain-controlled formulaic language.”

Tapping into neural resources of communication: formulaic language in aphasia therapy
by Benjamin Stahl & Diana Van Lancker Sidtis

Decades of research highlight the importance of formulaic expressions in everyday spoken language (Vihman, 1982; Wray, 2002; Kuiper, 2009). Along with idioms, expletives, and proverbs, this linguistic category includes conversational speech formulas, such as “You’ve got to be kidding,” “Excuse me?” or “Hang on a minute” (Fillmore, 1979; Pawley and Syder, 1983; Schegloff, 1988). In their modern conception, formulaic expressions differ from newly created, grammatical utterances in that they are fixed in form, often non-literal in meaning with attitudinal nuances, and closely related to communicative-pragmatic context (Van Lancker Sidtis and Rallon, 2004). Although the proportion of formulaic expressions to spoken language varies with type of measure and discourse, these utterances are widely regarded as crucial in determining the success of social interaction in many communicative aspects of daily life (Van Lancker Sidtis, 2010).

The unique role of formulaic expressions in spoken language is reflected at the level of their functional neuroanatomy. While left perisylvian areas of the brain support primarily propositional, grammatical utterances, the processing of conversational speech formulas was found to engage, in particular, right-hemisphere cortical areas and the bilateral basal ganglia (Hughlings-Jackson, 1878; Graves and Landis, 1985; Speedie et al., 1993; Van Lancker Sidtis and Postman, 2006; Sidtis et al., 2009; Van Lancker Sidtis et al., 2015). It is worth pointing out that parts of these neural networks are intact in left-hemisphere stroke patients, leading to the intriguing observation that individuals with classical speech and language disorders are often able to communicate comparably well based on a repertoire of formulaic expressions (McElduff and Drummond, 1991; Lum and Ellis, 1994; Stahl et al., 2011). An upper limit of such expressions has not yet been identified, with some estimates reaching into the hundreds of thousands (Jackendoff, 1995). […]

Nonetheless, music-based rehabilitation programs have been demonstrated to directly benefit the production of trained expressions in individuals with chronic non-fluent aphasia and apraxia of speech (Wilson et al., 2006; Stahl et al., 2013; Zumbansen et al., 2014). One may argue that the reported progress in the production of such expressions depends, at least in part, on increased activity in right-hemisphere neural networks engaged in the processing of formulaic language, especially when considering the repetitive character of the training (cf. Berthier et al., 2014).

* * *

Music and Dance on the Mind

Over at Ribbonfarm, Sarah Perry has written about this and similar things. Her focus is on the varieties and necessities of human consciousness. The article is “Ritual and the Consciousness Monoculture”. It’s a longer piece and packed full of ideas, including an early mention of Jaynesian bicameralism.

The author doesn’t get around to discussing the above topics until about halfway into the piece, in a section titled “Hiving and Rhythmic Entrainment”. The hiving refers to Jonathan Haidt’s hive hypothesis. It isn’t an especially original idea, but it is an important one, and an area where I’d agree with Haidt, despite my disagreements elsewhere. In that section, Perry writes:

Donald Brown’s celebrated list of human universals, a list of characteristics proposed to be common to all human groups ever studied, includes many entries on music, including “music related in part to dance” and “music related in part to religion.” The Pirahã use several kinds of language, including regular speech, a whistling language, and a musical, sung language. The musical language, importantly, is used for dancing and contacting spirits. The Pirahã, Everett says, often dance for three days at a time without stopping. They achieve a different consciousness by performing rituals calibrated to evoke mental states that must remain opaque to those not affected.

Musical language is the type of evidence that seems to bridge different aspects of human experience. It has been argued that language developed along with such human tendencies as singing, dance, ritual movement, communal mimicry, group bonding, and other social behaviors. Stephen Mithen has an interesting theory about the singing of early hominids (The Singing Neanderthals).

That brings to mind Lynne Kelly’s book on preliterate mnemonic practices, Knowledge and Power in Prehistoric Societies. Kelly goes into great detail about the practices of the Australian Aborigines with their songlines, which always reminds me of the English and Welsh beating of the bounds. A modern example of the power of music is choral singing, which research has shown to create non-conscious mimicry, physical synchrony, and self-other merging.

* * *

Development of Language and Music

Did Music Evolve Before Language?
by Hank Campbell, Science 2.0

Gottfried Schlaug of Harvard Medical School does something a little more direct that may be circumstantial but is a powerful exclamation point for a ‘music came first’ argument. His work with patients who have suffered severe lesions on the left side of their brain showed that while they could not speak – no language skill as we might define it – they were able to sing phrases like “I am thirsty”, sometimes within two minutes of having the phrase mapped to a melody.

Theory: Music underlies language acquisition
by B.J. Almond, Rice University

Contrary to the prevailing theories that music and language are cognitively separate or that music is a byproduct of language, theorists at Rice University’s Shepherd School of Music and the University of Maryland, College Park (UMCP) advocate that music underlies the ability to acquire language.

“Spoken language is a special type of music,” said Anthony Brandt, co-author of a theory paper published online this month in the journal Frontiers in Cognitive Auditory Neuroscience. “Language is typically viewed as fundamental to human intelligence, and music is often treated as being dependent on or derived from language. But from a developmental perspective, we argue that music comes first and language arises from music.”

* * *

Music and Dance on the Mind

In singing with a choral group or marching in an army, we moderns come as close as we are able to this ancient mind. It’s always there within us, just normally hidden. It doesn’t take much, though, for our individuality to be submerged and something else to emerge. We are all potential goosestepping authoritarian followers, waiting for the right conditions to bring our primal natures out into the open. With the fiery voice of authority, we can be quickly lulled into compliance by an inspiring or invigorating vision:

[T]hat old-time religion can be heard in the words and rhythm of any great speaker. Just listen to how a recorded speech of Martin Luther King, Jr. can pull you in with its musicality. Or, if you prefer a dark example, consider the persuasive power of Adolf Hitler; even some Jews admitted they got caught up listening to his speeches. This is why Plato feared the poets and banished them from his utopia of enlightened rule. Poetry would inevitably undermine and subsume the high-minded rhetoric of philosophers. “[P]oetry used to be divine knowledge,” as Guerini et al. state in “Echoes of Persuasion”: “It was the sound and tenor of authorization and it commanded where plain prose could only ask.”

Poetry is one of the forms of musical language. Plato’s fear wasn’t merely about the aesthetic appeal of metered rhyme. Living in an oral culture, he would have intimately known the ever-threatening power and influence of the spoken word. Likewise, the sway and thrall of rhythmic movement would have been equally familiar in that world. Community life in ancient Greek city-states was almost everything that mattered, a tightly woven identity and experience.

Development of Language and Music

Evidence Rebuts Chomsky’s Theory of Language Learning
by Paul Ibbotson and Michael Tomasello

All of this leads ineluctably to the view that the notion of universal grammar is plain wrong. Of course, scientists never give up on their favorite theory, even in the face of contradictory evidence, until a reasonable alternative appears. Such an alternative, called usage-based linguistics, has now arrived. The theory, which takes a number of forms, proposes that grammatical structure is not innate. Instead grammar is the product of history (the processes that shape how languages are passed from one generation to the next) and human psychology (the set of social and cognitive capacities that allow generations to learn a language in the first place). More important, this theory proposes that language recruits brain systems that may not have evolved specifically for that purpose and so is a different idea to Chomsky’s single-gene mutation for recursion.

In the new usage-based approach (which includes ideas from functional linguistics, cognitive linguistics and construction grammar), children are not born with a universal, dedicated tool for learning grammar. Instead they inherit the mental equivalent of a Swiss Army knife: a set of general-purpose tools—such as categorization, the reading of communicative intentions, and analogy making—with which children build grammatical categories and rules from the language they hear around them.

Broca and Wernicke are dead – it’s time to rewrite the neurobiology of language
by Christian Jarrett, BPS Research Digest

Yet the continued dominance of the Classic Model means that neuropsychology and neurology students are often learning outmoded ideas, without getting up to date with the latest findings in the area. Medics too are likely to struggle to account for language-related symptoms caused by brain damage or illness in areas outside of the Classic Model, but which are relevant to language function, such as the cerebellum.

Tremblay and Dick call for a “clean break” from the Classic Model and a new approach that rejects the “language centric” perspective of the past (that saw the language system as highly specialised and clearly defined), and that embraces a more distributed perspective that recognises how much of language function is overlaid on cognitive systems that originally evolved for other purposes.

Signing, Singing, Speaking: How Language Evolved
by Jon Hamilton, NPR

There’s no single module in our brain that produces language. Instead, language seems to come from lots of different circuits. And many of those circuits also exist in other species.

For example, some birds can imitate human speech. Some monkeys use specific calls to tell one another whether a predator is a leopard, a snake or an eagle. And dogs are very good at reading our gestures and tone of voice. Take all of those bits and you get “exactly the right ingredients for making language possible,” Elman says.

We are not the only species to develop speech impediments
by Moheb Costandi, BBC

Jarvis now thinks vocal learning is not an all-or-nothing function. Instead there is a continuum of skill – just as you would expect from something produced by evolution, and which therefore was assembled slowly, piece by piece.

The music of language: exploring grammar, prosody and rhythm perception in zebra finches and budgerigars
by Michelle Spierings, Institute of Biology Leiden

Language is a uniquely human trait. All animals have ways to communicate, but these systems do not bear the same complexity as human language. However, this does not mean that all aspects of human language are specifically human. By studying the language perception abilities of other species, we can discover which parts of language are shared. It is these parts that might have been at the roots of our language evolution. In this thesis I have studied language and music perception in two bird species, zebra finches and budgerigars. For example, zebra finches can perceive the prosodic (intonation) patterns of human language. The budgerigars can learn to discriminate between different abstract (grammar) patterns and generalize these patterns to new sounds. These and other results give us insight into the cognitive abilities that might have been at the very basis of the evolution of human language.

How Music and Language Mimicked Nature to Evolve Us
by Maria Popova, Brain Pickings

Curiously, in the majority of our interaction with the world, we seem to mimic the sounds of events among solid objects. Solid-object events are comprised of hits, slides and rings, producing periodic vibrations. Every time we speak, we find the same three fundamental auditory constituents in speech: plosives (hit-sounds like t, d and p), fricatives (slide-sounds like f, v and sh), and sonorants (ring-sounds like a, u, w, r and y). Changizi demonstrates that solid-object events have distinct “grammar” recurring in speech patterns across different languages and time periods.

But it gets even more interesting with music, a phenomenon perceived as a quintessential human invention — Changizi draws on a wealth of evidence indicating that music is actually based on natural sounds and sound patterns dating back to the beginning of time. Bonus points for convincingly debunking Steven Pinker’s now-legendary proclamation that music is nothing more than “auditory cheesecake.”

Ultimately, Harnessed shows that both speech and music evolved in culture to be simulacra of nature, making our brains’ penchant for these skills appear intuitive.

The sounds of movement
by Bob Holmes, New Scientist

It is this subliminal processing that spoken language taps into, says Changizi. Most of the natural sounds our ancestors would have processed fall into one of three categories: things hitting one another, things sliding over one another, and things resonating after being struck. The three classes of phonemes found in speech – plosives such as p and k, fricatives such as sh and f, and sonorants such as r, m and the vowels – closely resemble these categories of natural sound.

The same nature-mimicry guides how phonemes are assembled into syllables, and syllables into words, as Changizi shows with many examples. This explains why we acquire language so easily: the subconscious auditory processing involved is no different to what our ancestors have done for millions of years.

The hold that music has on us can also be explained by this kind of mimicry – but where speech imitates the sounds of everyday objects, music mimics the sound of people moving, Changizi argues. Primitive humans would have needed to know four things about someone moving nearby: their distance, speed, intent and whether they are coming nearer or going away. They would have judged distance from loudness, speed from the rate of footfalls, intent from gait, and direction from subtle Doppler shifts. Voilà: we have volume, tempo, rhythm and pitch, four of the main components of music.

Scientists recorded two dolphins ‘talking’ to each other
by Maria Gallucci, Mashable

While marine biologists have long understood that dolphins communicate within their pods, the new research, which was conducted on two captive dolphins, is the first to link isolated signals to particular dolphins. The findings reveal that dolphins can string together “sentences” using a handful of “words.”

“Essentially, this exchange of [pulses] resembles a conversation between two people,” Vyacheslav Ryabov, the study’s lead researcher, told Mashable.

“The dolphins took turns in producing ‘sentences’ and did not interrupt each other, which gives reason to believe that each of the dolphins listened to the other’s pulses before producing its own,” he said in an email.

“Whistled Languages” Reveal How the Brain Processes Information
by Julien Meyer, Scientific American

Earlier studies had shown that the left hemisphere is, in fact, the dominant language center for both tonal and atonal tongues as well as for nonvocalized click and sign languages. Güntürkün was interested in learning how much the right hemisphere—associated with the processing of melody and pitch—would also be recruited for a whistled language. He and his colleagues reported in 2015 in Current Biology that townspeople from Kuşköy, who were given simple hearing tests, used both hemispheres almost equally when listening to whistled syllables but mostly the left one when they heard vocalized spoken syllables.

Chopin, Bach used human speech ‘cues’ to express emotion in music
by Andrew Baulcomb, Science Daily

“What we found was, I believe, new evidence that individual composers tend to use cues in their music paralleling the use of these cues in emotional speech.” For example, major key or “happy” pieces are higher and faster than minor key or “sad” pieces.


How Brains See Music as Language
by Adrienne LaFrance, The Atlantic

What researchers found: The brains of jazz musicians who are engaged with other musicians in spontaneous improvisation show robust activation in the same brain areas traditionally associated with spoken language and syntax. In other words, improvisational jazz conversations “take root in the brain as a language,” Limb said.

“It makes perfect sense,” said Ken Schaphorst, chair of the Jazz Studies Department at the New England Conservatory in Boston. “I improvise with words all the time—like I am right now—and jazz improvisation is really identical in terms of the way it feels. Though it’s difficult to get to the point where you’re comfortable enough with music as a language where you can speak freely.”

Along with the limitations of musical ability, there’s another key difference between jazz conversation and spoken conversation that emerged in Limb’s experiment. During a spoken conversation, the brain is busy processing the structure and syntax of language, as well as the semantics or meaning of the words. But Limb and his colleagues found that brain areas linked to meaning shut down during improvisational jazz interactions. In other words, this kind of music is syntactic but it’s not semantic.

“Music communication, we know it means something to the listener, but that meaning can’t really be described,” Limb said. “It doesn’t have propositional elements or specificity of meaning in the same way a word does. So a famous bit of music—Beethoven’s dun dun dun duuuun—we might hear that and think it means something but nobody could agree what it means.”