Political Right Rhetoric

The following is an accurate description of the political rhetoric of the political right, the labels and language in use (from a Twitter thread). It is by Matthew A. Sears, an Associate Professor of Classics and Ancient History at the University of New Brunswick.

1. “I’m neither a liberal nor a conservative.” = “I’m totally a conservative.”

2. “I’m a radical centrist.” = “I’m totally a conservative.”

3. “I’m a classical liberal.” = “I’m a neoliberal who’s never read any classical liberals.”

4. “I’m not usually a fan of X.” *Retweets and agrees with everything X says.*

5. “I’m a free speech absolutist.” = “I’m glad racists are now free to speak publicly.”

6. “I believe in confronting views one finds offensive.” *Whines about being bullied by lefties.*

7. “My views are in the minority and aren’t given a fair hearing.” *Buys the best-selling book in the world.*

8. “Where else would you rather live?” = “Canada is perfect for me, and it better not frigging change to be better for anyone else.”

9. “Nazis should be able to speak and given platforms so we can debate them.” *Loses mind if someone says ‘fuck’ to a Nazi.*

10. “The left has taken over everything.” *Trump is president and the Republicans control Congress.*

And, finally, the apex of Twitterspeak:

11. “The left are tyrants and have taken over everything and refuse to hear other perspectives and pose a dire threat to the republic and Western Civilization.” *Ben Shapiro has over a million followers.*

I’d say treat this thread as an Enigma Machine for Quillette-speak/viewpoint-diversity-speak/reverse-racism-speak/MRA-speak, but none of these chaps are enigmas.

I can’t believe I have to add this, but some are *outraged* by this thread: I don’t mind if you’re *actually* centrist or conservative. I just mind if you *pretend to be* left/centrist for rhetorical/media cred/flamewar purposes, while *only* taking conservative stances. Sheesh

Like, I’m pretty left-wing on many issues these days. It would be sneaky of me to identify as “conservative” or “classical liberal” or whatever only to dump on all their ideas and always support opposing ideas. A left-winger or centrist is what a left-winger or centrist tweets.

James Taoist added:

12. “I’m a strict Constitutionalist” = “I’m as racist as fuck.”


Spoken Language: Formulaic, Musical, & Bicameral

One could argue for an underlying connection between voice-hearing, formulaic language, and musical ability. This could relate to Julian Jaynes’ theory of the bicameral mind, as it has everything to do with the hemispheric division of neurocognitive functioning.

It is enticing to consider the possibility that language originally developed out of or in concert with music, the first linguistic expression having been sing-song utterances. And it is fascinating to imagine that the voices of gods, ancestors, etc. might have spoken in a formulaic musicality. I remember reading about a custom, as I recall from pre-literate Germany, of people greeting each other with traditional (and probably formulaic) poems or rhymes. When I came across that, I wondered if it might have been a habit maintained from an earlier bicameralism.

Maybe poetic and musical language was common in most pre-literate societies. But by the time literacy came around to write down languages, those traditions and the mindsets that went with them might already have been in severe decline. That would mean little evidence would survive. We do know, for example, that Socrates wanted to exclude the poets from his utopian Axial Age (i.e., post-bicameral) society.

Spoken language with rhymes or rhythm is dangerous because it has power over the human mind. It speaks to (or maybe even from) something ancient dwelling within us.

* * *

Rajeev J Sebastian: “Found this very interesting paper that suggests differences between grammatical language and so-called “formulaic” language and the link between melody/music and “formulaic” language … echoes of [Julian Jaynes’] theory in there.”

Ed Buffaloe: “It makes me wonder if communication in bicameral men may have been largely through right-brain-controlled formulaic language.”

Tapping into neural resources of communication: formulaic language in aphasia therapy
by Benjamin Stahl & Diana Van Lancker Sidtis

Decades of research highlight the importance of formulaic expressions in everyday spoken language (Vihman, 1982; Wray, 2002; Kuiper, 2009). Along with idioms, expletives, and proverbs, this linguistic category includes conversational speech formulas, such as “You’ve got to be kidding,” “Excuse me?” or “Hang on a minute” (Fillmore, 1979; Pawley and Syder, 1983; Schegloff, 1988). In their modern conception, formulaic expressions differ from newly created, grammatical utterances in that they are fixed in form, often non-literal in meaning with attitudinal nuances, and closely related to communicative-pragmatic context (Van Lancker Sidtis and Rallon, 2004). Although the proportion of formulaic expressions to spoken language varies with type of measure and discourse, these utterances are widely regarded as crucial in determining the success of social interaction in many communicative aspects of daily life (Van Lancker Sidtis, 2010).

The unique role of formulaic expressions in spoken language is reflected at the level of their functional neuroanatomy. While left perisylvian areas of the brain support primarily propositional, grammatical utterances, the processing of conversational speech formulas was found to engage, in particular, right-hemisphere cortical areas and the bilateral basal ganglia (Hughlings-Jackson, 1878; Graves and Landis, 1985; Speedie et al., 1993; Van Lancker Sidtis and Postman, 2006; Sidtis et al., 2009; Van Lancker Sidtis et al., 2015). It is worth pointing out that parts of these neural networks are intact in left-hemisphere stroke patients, leading to the intriguing observation that individuals with classical speech and language disorders are often able to communicate comparably well based on a repertoire of formulaic expressions (McElduff and Drummond, 1991; Lum and Ellis, 1994; Stahl et al., 2011). An upper limit of such expressions has not yet been identified, with some estimates reaching into the hundreds of thousands (Jackendoff, 1995). […]

Nonetheless, music-based rehabilitation programs have been demonstrated to directly benefit the production of trained expressions in individuals with chronic non-fluent aphasia and apraxia of speech (Wilson et al., 2006; Stahl et al., 2013; Zumbansen et al., 2014). One may argue that the reported progress in the production of such expressions depends, at least in part, on increased activity in right-hemisphere neural networks engaged in the processing of formulaic language, especially when considering the repetitive character of the training (cf. Berthier et al., 2014).

* * *

Music and Dance on the Mind

Over at Ribbonfarm, Sarah Perry has written about this and similar things. Her focus is on the varieties and necessities of human consciousness. The article is “Ritual and the Consciousness Monoculture”. It’s a longer piece and packed full of ideas, including an early mention of Jaynesian bicameralism.

The author doesn’t get around to discussing the above topics until about halfway into the piece, in a section titled “Hiving and Rhythmic Entrainment”. The hiving refers to Jonathan Haidt’s hive hypothesis. It doesn’t seem all that original an understanding, but it’s still an important idea. This is an area where I’d agree with Haidt, despite my disagreements elsewhere. In that section, Perry writes:

Donald Brown’s celebrated list of human universals, a list of characteristics proposed to be common to all human groups ever studied, includes many entries on music, including “music related in part to dance” and “music related in part to religion.” The Pirahã use several kinds of language, including regular speech, a whistling language, and a musical, sung language. The musical language, importantly, is used for dancing and contacting spirits. The Pirahã, Everett says, often dance for three days at a time without stopping. They achieve a different consciousness by performing rituals calibrated to evoke mental states that must remain opaque to those not affected.

Musical language is the type of evidence that seems to bridge different aspects of human experience. It has been argued that language developed along with human tendencies of singing, dance, ritual movement, communal mimicry, group bonding, and other social behaviors. Stephen Mithen has an interesting theory about the singing of early hominids (The Singing Neanderthals).

That brings to mind Lynne Kelly’s book on preliterate mnemonic practices, Knowledge and Power in Prehistoric Societies. Kelly goes into great detail about the practices of the Australian Aborigines with their songlines, which always reminds me of the English and Welsh beating of the bounds. A modern example of the power of music is choral singing, which research has shown to create non-conscious mimicry, physical synchrony, and self-other merging.

* * *

Development of Language and Music

Did Music Evolve Before Language?
by Hank Campbell, Science 2.0

Gottfried Schlaug of Harvard Medical School does something a little more direct that may be circumstantial but is a powerful exclamation point for a ‘music came first’ argument. His work with patients who have suffered severe lesions on the left side of their brain showed that while they could not speak – no language skill as we might define it – they were able to sing phrases like “I am thirsty”, sometimes within two minutes of having the phrase mapped to a melody.

Theory: Music underlies language acquisition
by B.J. Almond, Rice University

Contrary to the prevailing theories that music and language are cognitively separate or that music is a byproduct of language, theorists at Rice University’s Shepherd School of Music and the University of Maryland, College Park (UMCP) advocate that music underlies the ability to acquire language.

“Spoken language is a special type of music,” said Anthony Brandt, co-author of a theory paper published online this month in the journal Frontiers in Cognitive Auditory Neuroscience. “Language is typically viewed as fundamental to human intelligence, and music is often treated as being dependent on or derived from language. But from a developmental perspective, we argue that music comes first and language arises from music.”

* * *

Music and Dance on the Mind

In singing with a choral group or marching in an army, we moderns come as close as we are able to this ancient mind. It’s always there within us, just normally hidden. It doesn’t take much, though, for our individuality to be submerged and something else to emerge. We are all potential goosestepping authoritarian followers, waiting for the right conditions to bring our primal natures out into the open. With the fiery voice of authority, we can be quickly lulled into compliance by an inspiring or invigorating vision:

[T]hat old time religion can be heard in the words and rhythm of any great speaker. Just listen to how a recorded speech of Martin Luther King Jr. can pull you in with its musicality. Or if you prefer a dark example, consider the persuasive power of Adolf Hitler, for even some Jews admitted they got caught up listening to his speeches. This is why Plato feared the poets and banished them from his utopia of enlightened rule. Poetry would inevitably undermine and subsume the high-minded rhetoric of philosophers. “[P]oetry used to be divine knowledge,” as Guerini et al. state in Echoes of Persuasion, “It was the sound and tenor of authorization and it commanded where plain prose could only ask.”

Poetry is one of the forms of musical language. Plato’s fear wasn’t merely about the aesthetic appeal of metered rhyme. Living in an oral culture, he would have intimately known the ever-threatening power and influence of the spoken word. Likewise, the sway and thrall of rhythmic movement would have been equally familiar in that world. Community life in ancient Greek city-states was almost everything that mattered, a tightly woven identity and experience.

How Universal Is The Mind?

One expression of the misguided nature vs nurture debate is the understanding of our humanity. In wondering about the universality of Western views, we have already framed the issue in terms of Western dualism. The moment we begin speaking in specific terms, from mind to psyche, we’ve already smuggled in cultural preconceptions and biases.

Sabrina Golonka discusses several other linguistic cultures (Korean, Japanese, and Russian) in comparison to English. She suggests that dualism, even if variously articulated, underlies each conceptual tradition — a general distinction between visible and invisible. But all of those are highly modernized societies built on millennia of civilizational projects, from imperialism to industrialization. It would be even more interesting and insightful to look into the linguistic worldviews of indigenous cultures.

The Pirahã, for example, are linguistically limited to speaking about what they directly experience or about what those they personally know have directly experienced. They don’t talk about what is ‘invisible’, whether within the human sphere or beyond in the world, and as such they aren’t prone to theoretical speculation.

What is clear is that the Pirahã’s mode of perception and description is far different, even to the point that what they see is sometimes invisible to those who aren’t Pirahã. There is an anecdote shared by Daniel Everett. The Pirahã crowded on the riverbank pointing to the spirit they saw on the other side, but Everett and his family saw nothing. That casts doubt on the framework of visible vs invisible. The Pirahã were fascinated by what becomes invisible, such as a person disappearing around the bend of a trail, although their fascination ended at that liminal point at the edge of the visible, not extending beyond it.

Another useful example would be the Australian Aborigines. The Songlines were traditionally integrated with their sense of identity and reality, signifying an experience that is invisible within the reality tunnel of WEIRD society (Western, Educated, Industrialized, Rich, Democratic). Prior to contact, individualism as we know it may have been entirely unknown, for Songlines express a profoundly collective sense of being in the world.

If any kind of dualism between visible and invisible did exist within the Aboriginal worldview, it more likely would have been on a communal level of experience. In their culture, ritual songs are learned and then what they represent becomes visible to the initiated, however this process might be made sense of within Aboriginal language. A song makes some aspect of the world visible, which is to invoke a particular reality and the beings that inhabit that reality. This is what Westerners would interpret as states of mind, but that is clearly an inadequate understanding of the fully immersive and embodied experience.

Western psychology has made non-Western experience invisible to most Westerners. There is the invisible we talk about within our own cultural worldview, what we perceive as known and familiar, no matter how intangible. But even more important is the unknown and unfamiliar that is so fundamentally invisible that we are incapable of talking about it. This doesn’t merely limit our understanding. Entire ways of being in the world are precluded by the words and concepts we use. Our sense of our own humanity is lesser for it and, as cultural languages go extinct, this state of affairs worsens with the near complete monocultural destruction of the very alternatives that most powerfully challenge our assumptions.

* * *

How Universal Is The Mind?
by Sabrina Golonka

So, back to the mind and our current view of cognition. Cross-linguistic research shows that, generally speaking, every culture has a folk model of a person consisting of visible and invisible (psychological) aspects (Wierzbicka, 2005). While there is agreement that the visible part of the person refers to the body, there is considerable variation in how different cultures think about the invisible (psychological) part. In the West, and, specifically, in the English-speaking West, the psychological aspect of personhood is closely related to the concept of “the mind” and the modern view of cognition. But, how universal is this conception? How do speakers of other languages think about the psychological aspect of personhood? […]

In a larger sense, the fact that there seems to be a universal belief that people consist of visible and invisible aspects explains much of the appeal of cognitive psychology over behaviourism. Cognitive psychology allows us to invoke invisible, internal states as causes of behaviour, which fits nicely with the broad, cultural assumption that the mind causes us to act in certain ways.

To the extent that you agree that the modern conception of “cognition” is strongly related to the Western, English-speaking view of “the mind”, it is worth asking what cognitive psychology would look like if it had developed in Japan or Russia. Would text-books have chapter headings on the ability to connect with other people (kokoro) or feelings or morality (dusa) instead of on decision-making and memory? This possibility highlights the potential arbitrariness of how we’ve carved up the psychological realm – what we take for objective reality is revealed to be shaped by culture and language.

I recently wrote a blog about a related topic. In Pāli and Sanskrit – ancient Indian languages – there is no collective term for emotions. They do have words for all of the basic emotions and some others, but they do not think of them as a category distinct from thought. I have yet to think through all of the implications of this observation but clearly the ancient Indian view on psychology must have been very different to ours.

Han 21 December 2011 at 17:06

Very interesting post. Have you looked into Julian Jaynes’s strange and marvelous book “The Origin of Consciousness in the Breakdown of the Bicameral Mind”? Even if you regard bicameralism as iffy, there’s an interesting section on the creation of metaphorical spaces — body-words that become “containers” for feelings, thoughts, attributes etc. The culturally distinct descriptors of the “invisible” may be related to historical accidents that vary from place to place.

Simon 9 January 2012 at 06:33

Also relevant might be Lakoff and Johnson’s “Philosophy in the Flesh” looking at, in their formulation, the inevitably metaphorical nature of thought and speech and the ultimate grounding of (almost) all metaphors in our physical experience from embodiment in the world.

Verbal Behavior

There is a somewhat interesting discussion of the friendship between B.F. Skinner and W.V.O. Quine. The piece explores their shared interests and possible influences on one another. It’s not exactly an area of personal interest, but it got me thinking about Julian Jaynes.

Skinner is famous for his behaviorist research. When behaviorism is mentioned, what immediately comes to mind for most people is Pavlov’s dog. But behaviorism wasn’t limited to animals and simple responses to stimuli. Skinner developed his theory toward verbal behavior as well. As Michael Karson explains,

“Skinner called his behaviorism “radical,” (i.e., thorough or complete) because he rejected then-behaviorism’s lack of interest in private events. Just as Galileo insisted that the laws of physics would apply in the sky just as much as on the ground, Skinner insisted that the laws of psychology would apply just as much to the psychologist’s inner life as to the rat’s observable life.

“Consciousness has nothing to do with the so-called and now-solved philosophical problem of mind-body duality, or in current terms, how the physical brain can give rise to immaterial thought. The answer to this pseudo-problem is that even though thought seems to be immaterial, it is not. Thought is no more immaterial than sound, light, or odor. Even educated people used to believe, a long time ago, that these things were immaterial, but now we know that sound requires a material medium to transmit waves, light is made up of photons, and odor consists of molecules. Thus, hearing, seeing, and smelling are not immaterial activities, and there is nothing in so-called consciousness besides hearing, seeing, and smelling (and tasting and feeling). Once you learn how to see and hear things that are there, you can also see and hear things that are not there, just as you can kick a ball that is not there once you have learned to kick a ball that is there. Engaging in the behavior of seeing and hearing things that are not there is called imagination. Its survival value is obvious, since it allows trial and error learning in the safe space of imagination. There is nothing in so-called consciousness that is not some version of the five senses operating on their own. Once you have learned to hear words spoken in a way that makes sense, you can have thoughts; thinking is hearing yourself make language; it is verbal behavior and nothing more. It’s not private speech, as once was believed; thinking is private hearing.”

It’s amazing how much this resonates with Jaynes’ bicameral theory. This maybe shouldn’t be surprising. After all, Jaynes was trained in behaviorism and early on did animal research. He was mentored by the behaviorist Frank A. Beach and was friends with Edwin Boring, who wrote a book about consciousness in relation to behaviorism. Reading about Skinner’s ideas about verbal behavior, I was reminded of Jaynes’ view of authorization as it relates to linguistic commands and how they become internalized to form an interiorized mind-space (i.e., Jaynesian consciousness).

I’m not the only person to think along these lines. On Reddit, someone wrote: “It is possible that before there were verbal communities that reinforced the basic verbal operants in full, people didn’t have complete “thinking” and really ran on operant auto-pilot since they didn’t have a full covert verbal repertoire and internal reinforcement/shaping process for verbal responses covert or overt, but this would be aeons before 2-3 kya. Wonder if Jaynes ever encountered Skinner’s “Verbal Behavior”…” Jaynes only references Skinner once in his book on bicameralism and consciousness. But he discusses behaviorism in general to some extent.

In the introduction, he describes behaviorism in this way: “From the outside, this revolt against consciousness seemed to storm the ancient citadels of human thought and set its arrogant banners up in one university after another. But having once been a part of its major school, I confess it was not really what it seemed. Off the printed page, behaviorism was only a refusal to talk about consciousness. Nobody really believed he was not conscious. And there was a very real hypocrisy abroad, as those interested in its problems were forcibly excluded from academic psychology, as text after text tried to smother the unwanted problem from student view. In essence, behaviorism was a method, not the theory that it tried to be. And as a method, it exorcised old ghosts. It gave psychology a thorough house cleaning. And now the closets have been swept out and the cupboards washed and aired, and we are ready to examine the problem again.” As dissatisfying as animal research was for Jaynes, it nonetheless set the stage for deeper questioning by way of a broader approach. It made possible new understanding.

Like Skinner, he wanted to take the next step, shifting from behavior to experience. Even their strategies to accomplish this appear to have been similar. Sensory experience itself becomes internalized, according to both of their theories. For Jaynes, perception of external space becomes the metaphorical model for a sense of internal space. When Karson says of Skinner’s view that “thinking is hearing yourself make language,” that seems close to Jaynes’ discussion of hearing voices as it develops into an ‘I’ and a ‘me’, the sense of identity split into subject and object which he asserted was required for one to hear one’s own thoughts.

I don’t know Skinner’s thinking in detail or how it changed over time. He too pushed beyond the bounds of behavioral research. It’s not clear that Jaynes ever acknowledged this commonality. In his 1990 afterword to his book, Jaynes makes his one mention of Skinner without pointing out Skinner’s work on verbal behavior:

“This conclusion is incorrect. Self-awareness usually means the consciousness of our own persona over time, a sense of who we are, our hopes and fears, as we daydream about ourselves in relation to others. We do not see our conscious selves in mirrors, even though that image may become the emblem of the self in many cases. The chimpanzees in this experiment and the two-year old child learned a point-to-point relation between a mirror image and the body, wonderful as that is. Rubbing a spot noticed in the mirror is not essentially different from rubbing a spot noticed on the body without a mirror. The animal is not shown to be imagining himself anywhere else, or thinking of his life over time, or introspecting in any sense — all signs of a conscious life.

“This less interesting, more primitive interpretation was made even clearer by an ingenious experiment done in Skinner’s laboratory (Epstein, 1981). Essentially the same paradigm was followed with pigeons, except that it required a series of specific trainings with the mirror, whereas the chimpanzee or child in the earlier experiments was, of course, self-trained. But after about fifteen hours of such training when the contingencies were carefully controlled, it was found that a pigeon also could use a mirror to locate a blue spot on its body which it could not see directly, though it had never been explicitly trained to do so. I do not think that a pigeon because it can be so trained has a self-concept.”

Jaynes was making the simple, if oft overlooked, point that perception of body is not the same thing as consciousness of mind. A behavioral response to one’s own body isn’t fundamentally different than a behavioral response to anything else. Behavioral responses are found in every species. This isn’t helpful in exploring consciousness itself. Skinner too wanted to get beyond this level of basic behavioral research, so it seems. Interestingly, without any mention of Skinner, Jaynes does use the exact phrasing of Skinner in speaking about the unconscious learning of ‘verbal behavior’ (Book One, Chapter 1):

“Another simple experiment can demonstrate this. Ask someone to sit opposite you and to say words, as many words as he can think of, pausing two or three seconds after each of them for you to write them down. If after every plural noun (or adjective, or abstract word, or whatever you choose) you say “good” or “right” as you write it down, or simply “mmm-hmm” or smile, or repeat the plural word pleasantly, the frequency of plural nouns (or whatever) will increase significantly as he goes on saying words. The important thing here is that the subject is not aware that he is learning anything at all. [13] He is not conscious that he is trying to find a way to make you increase your encouraging remarks, or even of his solution to that problem. Every day, in all our conversations, we are constantly training and being trained by each other in this manner, and yet we are never conscious of it.”

This is just a passing comment, one example among many, and he states that “Such unconscious learning is not confined to verbal behavior.” He doesn’t further explore language in this immediate section or repeat the phrase ‘verbal behavior’ in any other section, although the notion of verbal behavior is central to the entire book. But a decade after the original publication of his book, Jaynes wrote a paper where he does talk about Skinner’s ideas about language:

“One needs language for consciousness. We think consciousness is learned by children between two and a half and five or six years in what we can call the verbal surround, or the verbal community as B.F Skinner calls it. It is an aspect of learning to speak. Mental words are out there as part of the culture and part of the family. A child fits himself into these words and uses them even before he knows the meaning of them. A mother is constantly instilling the seeds of consciousness in a two- and three-year-old, telling the child to stop and think, asking him “What shall we do today?” or “Do you remember when we did such and such or were somewhere?” And all this while metaphor and analogy are hard at work. There are many different ways that different children come to this, but indeed I would say that children without some kind of language are not conscious.”
(Jaynes, J. 1986. “Consciousness and the Voices of the Mind.” Canadian Psychology, 27, 128–148.)

I don’t have access to that paper. That quote comes from an article by John E. Limber: “Language and consciousness: Jaynes’s “Preposterous idea” reconsidered.” It is found in Reflections on the Dawn of Consciousness edited by Marcel Kuijsten (pp. 169-202).

Anyway, the point Jaynes makes is that language is required for consciousness as an inner sense of self because language is required to hear ourselves think. So verbal behavior is a necessary, if not sufficient, condition for the emergence of consciousness as we know it. As long as verbal behavior remains an external event, conscious experience won’t follow. Humans have to learn to hear themselves as they hear others, to split themselves into a speaker and a listener.

This relates to what makes possible the differentiation of hearing a voice being spoken by someone in the external world and hearing a voice as a memory of someone in one’s internal mind-space. Without this distinction, imagination isn’t possible for anything imagined would become a hallucination where internal and external hearing are conflated or rather never separated. Jaynes proposes this is why ancient texts regularly describe people as hearing voices of deities and deified kings, spirits and ancestors. The bicameral person, according to the theory, hears their own voice without being conscious that it is their own thought.

All of that emerges from those early studies of animal behavior. Behaviorism plays a key role simply in placing the emphasis on behavior. From there, one can come to the insight that consciousness is a neurocognitive behavior modeled on physical and verbal behavior. The self is a metaphor built on embodied experience in the world. This relates to many similar views, such as that humans learn a theory of mind within themselves by first developing a theory of mind in perceiving others. This goes along with attention schema and the attribution of consciousness. And some have pointed out what is called the double subject fallacy, a hidden form of dualism that infects neuroscience. However described, it gets at the same issue.

It all comes down to our being both social animals and inhabitants of the world. Human development begins with a focus outward, culture and language determining what kind of identity forms. How we learn to behave is who we become.

Wordplay Schmordplay

What Do You Call Words Like Wishy-Washy or Mumbo Jumbo?

Words like wishy-washy or mumbo-jumbo, or any words that contain two identical or similar parts (a segment, syllable, or morpheme), are called reduplicative words or tautonyms. The process of forming such words is known as reduplication. In many cases, the first word is a real word, while the second part (sometimes nonsensical) is invented to create a rhyme and emphasis. Most reduplicatives begin as hyphenated words and, through very common usage, eventually lose the hyphen to become single words. Regardless of their hyphenation, they underscore the playfulness of the English language.

Reduplication isn’t just jibber-jabber

There are several kinds of reduplication. One type replaces a vowel while keeping the initial consonant, as in “flip-flop,” “pish-posh,” and “ping-pong.” Another type keeps the vowel but replaces that first sound, as in “namby-pamby,” “hanky-panky,” “razzle-dazzle,” and “timey-wimey,” a word used by Dr. Who fans for time-travel shenanigans. Reduplication doesn’t get any simpler than when the whole word is repeated, like when you pooh-pooh a couple’s attempt to dress matchy-matchy. My favorite type is “schm” reduplication, though some might say “Favorite, schmavorite!” All the types show that redundancy isn’t a problem in word-making. Grant Barrett, host of the public radio show “A Way with Words,” notes via e-mail that even the word “reduplication” has an unnecessary frill: “I’ve always liked the ‘re’ in ‘reduplicate.’ We’re doing it again! It’s right there in the word!”

Reduplication

Reduplication in linguistics is a morphological process in which the root or stem of a word (or part of it) or even the whole word is repeated exactly or with a slight change.

Reduplication is used in inflections to convey a grammatical function, such as plurality, intensification, etc., and in lexical derivation to create new words. It is often used when a speaker adopts a tone more “expressive” or figurative than ordinary speech and is also often, but not exclusively, iconic in meaning. Reduplication is found in a wide range of languages and language groups, though its level of linguistic productivity varies.

Reduplication is the standard term for this phenomenon in the linguistics literature. Other terms that are occasionally used include cloning, doubling, duplication, repetition, and tautonym when it is used in biological taxonomies, such as “Bison bison”.

The origin of this usage of tautonym is uncertain, but it has been suggested that it is of relatively recent derivation.

Reduplication

The coinage of new words and phrases into English has been greatly enhanced by the pleasure we get from playing with words. There are numerous alliterative and rhyming idioms, which are a significant feature of the language. These aren’t restricted to poets and Cockneys; everyone uses them. We start in the nursery with choo-choos, move on in adult life to hanky-panky and end up in the nursing home having a sing-song.

The repeating of parts of words to make new forms is called reduplication. There are various categories of this: rhyming, exact and ablaut (vowel substitution). Examples are, respectively, okey-dokey, wee-wee and zig-zag. The impetus for the coining of these seems to be nothing more than the enjoyment of wordplay. The words that make up these reduplicated idioms often have little meaning in themselves and only appear as part of a pair. In other cases, one word will allude to some existing meaning and the other half of the pair is added for effect or emphasis.

New coinages have often appeared at times of national confidence, when an outgoing and playful nature is expressed in language; for example, during the 1920s, following the First World War, when many nonsense word pairs were coined – the bee’s knees, heebie-jeebies etc. That said, the introduction of such terms begins with Old English and continues today. Willy-nilly is over a thousand years old. Riff-raff dates from the 1400s and helter-skelter, arsy-versy (a form of vice-versa), and hocus-pocus all date from the 16th century. Coming up to date we have bling-bling, boob-tube and hip-hop. I’ve not yet recorded a 21st century reduplication. Bling-bling comes very close but is 20th century. ‘Bieber Fever’ is certainly 21st century, but isn’t quite a reduplication.

A hotchpotch of reduplication

Argy-bargy and lovey-dovey lie on opposite ends of the interpersonal scale, but they have something obvious in common: both are reduplicatives.

Reduplication is when a word or part of a word is repeated, sometimes modified, and added to make a longer term, such as aye-aye, mishmash, and hotchpotch. This process can mark plurality or intensify meaning, and it can be used for effect or to generate new words. The added part may be invented or it may be an existing word whose form and sense are a suitable fit.

Reduplicatives emerge early in our language-learning lives. As infants in the babbling phase we reduplicate syllables to utter mama, dada, nana and papa, which is where these pet names come from. Later we use moo-moo, choo-choo, wee-wee and bow-wow (or similar) to refer to familiar things. The repetition, as well as being fun, might help children develop and practise the pronunciation of sounds.

As childhood progresses, reduplicatives remain popular, popping up in children’s books, songs and rhymes. Many characters in children’s stories have reduplicated names: Humpty Dumpty, Chicken Licken and Handy Andy, to name a few.

The language rule we know – but don’t know we know

Ding dong King Kong

Well, in fact, the Big Bad Wolf is just obeying another great linguistic law that every native English speaker knows, but doesn’t know that they know. And it’s the same reason that you’ve never listened to hop-hip music.

You are utterly familiar with the rule of ablaut reduplication. You’ve been using it all your life. It’s just that you’ve never heard of it. But if somebody said the words zag-zig, or ‘cross-criss’, you would know, deep down in your loins, that they were breaking a sacred rule of language. You just wouldn’t know which one.

All four of a horse’s feet make exactly the same sound. But we always, always say clip-clop, never clop-clip. Every second your watch (or the grandfather clock in the hall) makes the same sound, but we say tick-tock, never tock-tick. You will never eat a Kat Kit bar. The bells in Frère Jacques will forever chime ‘ding dang dong’.

Reduplication in linguistics is when you repeat a word, sometimes with an altered consonant (lovey-dovey, fuddy-duddy, nitty-gritty), and sometimes with an altered vowel: bish-bash-bosh, ding-dang-dong. If there are three words then the order has to go I, A, O. If there are two words then the first is I and the second is either A or O. Mish-mash, chit-chat, dilly-dally, shilly-shally, tip top, hip-hop, flip-flop, tic tac, sing song, ding dong, King Kong, ping pong.

Why this should be is a subject of endless debate among linguists; it might be to do with the movement of your tongue or an ancient language of the Caucasus. It doesn’t matter. It’s the law, and, as with the adjectives, you knew it even if you didn’t know you knew it. And the law is so important that you just can’t have a Bad Big Wolf.

Jibber Jabber: The Unwritten Ablaut Reduplication Rule

In all these ablaut reduplication word pairs, the key vowels appear in a specific order: either i before a, or i before o.

In linguistic terms, you could say that a high vowel comes before a low vowel. The i sound is considered a high vowel because of the location of the tongue relative to the mouth in American speech. The a and o sounds are low vowels.

See-saw doesn’t use the letter i, but the high-vowel-before-low-vowel pattern still applies.
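Since the rule described in these excerpts is essentially algorithmic (a high vowel before low vowels, I before A before O), it can be expressed in a few lines of code. The following is a toy sketch of my own, not from any of the quoted articles; it only inspects the first vowel letter of each element, so spellings like see-saw fall outside its reach.

```python
# A toy check of the ablaut reduplication ordering described above:
# the high vowel I comes first, followed by the low vowels A and/or O
# (tick-tock, chit-chat, bish-bash-bosh). Purely illustrative.

ABLAUT_RANK = {"i": 0, "a": 1, "o": 2}  # I before A before O


def first_vowel(word):
    """Return the first vowel letter in a word, or None if there is none."""
    for ch in word.lower():
        if ch in "aeiou":
            return ch
    return None


def follows_ablaut_order(expression):
    """True if the parts of a reduplicative expression keep the I-A-O order."""
    parts = expression.lower().replace(" ", "-").split("-")
    ranks = [ABLAUT_RANK.get(first_vowel(part)) for part in parts]
    if len(ranks) < 2 or None in ranks:
        return False  # not an ablaut pair/triple this toy can judge
    return ranks[0] == ABLAUT_RANK["i"] and all(a < b for a, b in zip(ranks, ranks[1:]))


if __name__ == "__main__":
    for phrase in ["tick-tock", "ding-dang-dong", "chit-chat", "zag-zig", "clop-clip"]:
        print(f"{phrase}: {follows_ablaut_order(phrase)}")
```

Running it flags tick-tock, ding-dang-dong, and chit-chat as well-formed, while zag-zig and clop-clip fail, which is exactly the intuition the articles describe.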

This Weird Grammar Rule is Why We Say “Flip Flop” Instead of “Flop Flip”

As to why this I-A-O pattern has such a firm hold in our linguistic history, nobody can say. Forsyth calls it a topic of “endless debate” among linguists that may originate in the arcane movements of the human tongue or an ancient language of the Caucasus. Whatever the case, the world’s English speakers are on-board, and you will never catch Lucy accusing Charlie Brown of being washy-wishy.

Reduplicative Words

Ricochet Word

wishy-washy, hanky panky – name for this type of word-formation?

argle-bargle

Easy-Peasy

Double Trouble

English Ryming Compound Words

Rhyming Compounds

Reduplicates

REDUPLICATION

English gitaigo: Flip-Flop Words

Research on Jaynes’ Bicameral Theory

The onset of data-driven mental archeology
by Sidarta Ribeiro

For many years this shrewd hypothesis seemed untestable. Corollaries such as the right lateralization of auditory hallucinations were dismissed as too simplistic—although schizophrenic patients present less language lateralization (Sommer et al., 2001). Yet, the investigation by Diuk et al. (2012) represents a pioneering successful attempt to test Jaynes’ theory in a quantitative manner. The authors assessed dozens of Judeo-Christian and Greco-Roman texts from up to the second century CE, as well as contemporary Google n-grams, to calculate semantic distances between the reference word “introspection” and all the words in these texts. Cleverly, “introspection” is actually absent from these ancient texts, serving as an “invisible” probe. Semantic distances were evaluated by Latent Semantic Analysis, a high-dimensional model in which the semantic similitude between words is proportional to their co-occurrence in texts with coherent topics (Deerwester et al., 1990; Landauer and Dumais, 1997). The approach goes well beyond the mere counting of word occurrence in a corpus, actually measuring how much the concept of introspection is represented in each text in a “distributed semantic sense,” in accordance with the semantic holism (Frege, 1884, 1980; Quine, 1951; Wittgenstein, 1953, 1967; Davidson, 1967) that became mainstream in artificial intelligence (AI) and machine learning (Cancho and Sole, 2001; Sigman and Cecchi, 2002).

The results were remarkable. In Judeo-Christian texts, similitude to introspection increased monotonically over time, with a big change in slope from the Old to the New Testaments. In Greco-Roman texts, comprising 53 authors from Homer to Julius Caesar, a more complex dynamics appeared, with increases in similitude to introspection through periods of cultural development, and decreases during periods of cultural decadence. Contemporary texts showed overall increase, with periods of decline prior to and during the two World Wars. As Jaynes would have predicted, the rise and fall of entire societies seems to be paralleled by increases and decreases in introspection, respectively.

Diuk et al. show that the evolution of mental life can be quantified from the cultural record, opening a whole new avenue of hypothesis testing for Jaynes’ theory. While it is impossible to prove that pre-Axial people “heard” the voices of the gods, the findings suggest new ways of studying historical and contemporary texts. In particular, the probing of ancient texts with words like “dream,” “god” and “hallucination” has great potential to test Jaynesian concepts.

The featured study lends support to the notion that consciousness is a social construct in constant flux. Quoting senior author Guillermo Cecchi, “it is not just the ‘trending topics,’ but the entire cognitive make-up that changes over time, indicating that culture co-evolves with available cognitive states, and what is socially considered dysfunction can be tested in a more quantitative way.”
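The excerpt describes the Diuk et al. method only in outline. As a rough illustration of the general idea, here is a minimal sketch, assuming a toy corpus and a hand-written probe pseudo-document of my own invention, of how Latent Semantic Analysis can score texts by their similarity to a probe concept like “introspection”. This is not the authors’ actual pipeline, just the standard TF-IDF plus truncated SVD plus cosine similarity recipe.

```python
# A rough sketch (not the Diuk et al. pipeline) of measuring how close a
# text sits to a probe concept such as "introspection" using Latent
# Semantic Analysis: TF-IDF term-document matrix, truncated SVD, then
# cosine similarity in the reduced space. Corpus, probe wording, and
# parameters are placeholders for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "the god spoke and the king obeyed the voice",          # stand-in "archaic" text
    "I paused to reflect on my own thoughts and motives",   # stand-in "introspective" text
]
probe = "introspection reflect consider my own mind thoughts"  # probe pseudo-document

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(documents + [probe])

# Reduce the term-document matrix to a small number of latent dimensions.
svd = TruncatedSVD(n_components=2, random_state=0)
X_lsa = svd.fit_transform(X)

doc_vecs, probe_vec = X_lsa[:-1], X_lsa[-1:]
for doc, score in zip(documents, cosine_similarity(doc_vecs, probe_vec).ravel()):
    print(f"{score:+.2f}  {doc}")
```

With a real corpus of dated texts, the per-document scores could be plotted over time, which is the shape of the analysis the excerpt reports.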

Useful Fictions Becoming Less Useful

Humanity has long been under the shadow of the Axial Age, no less true today than in centuries past. But what has this meant in both our self-understanding and in the kind of societies we have created? Ideas, as memes, can survive and even dominate for millennia. This can happen even when they are wrong, as long as they are useful to the social order.

One such idea involves nativism and essentialism, made possible through highly developed abstract thought. This notion of something inherent went along with the notion of division, from mind-body dualism to brain modules (what is inherent in one area being separate from what is inherent elsewhere). It goes back at least to the ancient Greeks such as with Platonic idealism (each ideal an abstract thing unto itself), although abstract thought required two millennia of development before it gained its most powerful form through modern science. As Elisa J. Sobo noted, “Ironically, prior to the industrial revolution and the rise of the modern university, most thinkers took a very comprehensive view of the human condition. It was only afterward that fragmented, factorial, compartmental thinking began to undermine our ability to understand ourselves and our place in— and connection with— the world.”

Maybe we are finally coming around to more fully questioning these useful fictions because they have become less useful as the social order changes, as the entire world shifts around us with globalization, climate change, mass immigration, etc. Our view of emotions was so essentialist that we decided to start a war against one of them with the War on Terror, as if this emotion were definitive of our shared reality (and a great example of metonymy, by the way), but obviously fighting wars against a reified abstraction isn’t an optimal strategy for societal progress. Maybe we need new ways of thinking.

The main problem with useful fictions isn’t necessarily that they are false, partial, or misleading. A useful fiction wouldn’t last for millennia if it weren’t, first and foremost, useful (especially true in relation to the views of human nature found in folk psychology). It is true that our seeing these fictions for what they are is a major change, but more importantly what led us to question their validity is that some of them have stopped being as useful as they once were. The nativists, essentialists, and modularists argued that such things as emotional experience, color perception, and language learning were inborn abilities and natural instincts: genetically-determined, biologically-constrained, and neurocognitively-formed. Based on theory, immense amounts of time, energy, and resources were invested into the promises made.

This motivated the entire search to connect everything observable in humans back to a gene, a biological structure, or an evolutionary trait (with the brain getting outsized attention). Yet reality has turned out to be much more complex, with environmental factors such as culture, peer influence, stress, nutrition, and toxins, along with biological factors such as epigenetics, brain plasticity, microbiomes, parasites, etc. The original quest hasn’t been as fruitful as hoped for, partly because of problems in conceptual frameworks and the scientific research itself, and this has led some to give up on the search. Consider how when one part of the brain is missing or damaged, other parts of the brain often compensate and take over the correlated function. There have been examples of people lacking most of their brain matter who are still able to function in what appears to be an outwardly normal way. The whole is greater than the sum of the parts, such that the whole can maintain its integrity even without all of the parts.

The past view of the human mind and body has been simplistic to an extreme. This is because we’ve lacked the capacity to see most of what goes on in making it possible. Our conscious minds, including our rational thought, are far more limited than many assumed. And the unconscious mind, the dark matter of the mind, is so much more amazing in what it accomplishes. In discussing what they call conceptual blending, Gilles Fauconnier and Mark Turner write (The Way We Think, p. 18):

“It might seem strange that the systematicity and intricacy of some of our most basic and common mental abilities could go unrecognized for so long. Perhaps the forming of these important mechanisms early in life makes them invisible to consciousness. Even more interestingly, it may be part of the evolutionary adaptiveness of these mechanisms that they should be invisible to consciousness, just as the backstage labor involved in putting on a play works best if it is unnoticed. Whatever the reason, we ignore these common operations in everyday life and seem reluctant to investigate them even as objects of scientific inquiry. Even after training, the mind seems to have only feeble abilities to represent to itself consciously what the unconscious mind does easily. This limit presents a difficulty to professional cognitive scientists, but it may be a desirable feature in the evolution of the species. One reason for the limit is that the operations we are talking about occur at lightning speed, presumably because they involve distributed spreading activation in the nervous system, and conscious attention would interrupt that flow.”

As they argue, conceptual blending helps us understand why a language module or instinct isn’t necessary. Research has shown that there is no single part of the brain nor any single gene that is solely responsible for much of anything. The constituent functions and abilities that form language likely evolved separately for other reasons that were advantageous to survival and social life. Language isn’t built into the brain as an evolutionary leap; rather, it was an emergent property that couldn’t have been predicted from any prior neurocognitive development, which is to say language was built on abilities that by themselves would not have been linguistic in nature.

Of course, Fauconnier and Turner are far from being the only proponents of such theories, as this perspective has become increasingly attractive. Another example is Mark Changizi’s theory presented in Harnessed where he argues that (p. 11), “Speech and music culturally evolved over time to be simulacra of nature” (see more about this here and here). Whatever theory one goes with, what is required is to explain the research challenging and undermining earlier models of cognition, affect, linguistics, and related areas.

Another book I was reading is How Emotions Are Made by Lisa Feldman Barrett. She is covering similar territory, despite her focus being on something so seemingly simple as emotions. We rarely give emotions much thought, taking them for granted, but we shouldn’t. How we understand our experience and expression of emotion is part and parcel of a deeper view that our society holds about human nature, a view that also goes back millennia. This ancient lineage of inherited thought is what makes it problematic, since it feels intuitively true, being so entrenched within our culture (Kindle Locations 91-93):

“And yet . .  . despite the distinguished intellectual pedigree of the classical view of emotion, and despite its immense influence in our culture and society, there is abundant scientific evidence that this view cannot possibly be true. Even after a century of effort, scientific research has not revealed a consistent, physical fingerprint for even a single emotion.”

“So what are they, really?” Barrett asks about emotions (Kindle Locations 99-104):

“When scientists set aside the classical view and just look at the data, a radically different explanation for emotion comes to light. In short, we find that your emotions are not built-in but made from more basic parts. They are not universal but vary from culture to culture. They are not triggered; you create them. They emerge as a combination of the physical properties of your body, a flexible brain that wires itself to whatever environment it develops in, and your culture and upbringing, which provide that environment. Emotions are real, but not in the objective sense that molecules or neurons are real. They are real in the same sense that money is real— that is, hardly an illusion, but a product of human agreement.”

This goes along with an area of thought that arose out of philology, classical studies, consciousness studies, Jungian psychology, and anthropology. As always, I’m particularly thinking of the bicameral mind theory of Julian Jaynes. In the most ancient civilizations, there weren’t monetary systems nor, according to Jaynes, was there consciousness as we know it. He argues that individual self-consciousness was built on an abstract metaphorical space that was internalized and narratized. This privatization of personal space led to the possibility of self-ownership, the later basis of capitalism (and hence capitalist realism). It’s abstractions upon abstractions, until all of modern civilization bootstrapped itself into existence.

The initial potentials within human nature could be and have been used to build diverse cultures, but modern society has genocidally wiped out most of this once-existing diversity, leaving behind a near total dominance of WEIRD monoculture. This allows us modern Westerners to mistake our own culture for universal human nature. Our imaginations are constrained by a reality tunnel, which further strengthens the social order (control of the mind is the basis for control of society). Maybe this is why certain abstractions have been so central in conflating our social reality with physical reality, as Barrett explains (Kindle Locations 2999-3002):

“Essentialism is the culprit that has made the classical view supremely difficult to set aside. It encourages people to believe that their senses reveal objective boundaries in nature. Happiness and sadness look and feel different, the argument goes, so they must have different essences in the brain. People are almost always unaware that they essentialize; they fail to see their own hands in motion as they carve dividing lines in the natural world.”

We make the world in our own image. And then we force this social order on everyone, imprinting it not just onto the culture but onto biology itself. With epigenetics, brain plasticity, microbiomes, etc., biology readily accepts this imprinting of the social order (Kindle Locations 5499-5503):

“By virtue of our values and practices, we restrict options and narrow possibilities for some people while widening them for others, and then we say that stereotypes are accurate. They are accurate only in relation to a shared social reality that our collective concepts created in the first place. People aren’t a bunch of billiard balls knocking one another around. We are a bunch of brains regulating each other’s body budgets, building concepts and social reality together, and thereby helping to construct each other’s minds and determine each other’s outcomes.”

There are clear consequences to humans as individuals and communities. But there are other costs as well (Kindle Locations 129-132):

“Not long ago, a training program called SPOT (Screening Passengers by Observation Techniques) taught those TSA agents to detect deception and assess risk based on facial and bodily movements, on the theory that such movements reveal your innermost feelings. It didn’t work, and the program cost taxpayers $ 900 million. We need to understand emotion scientifically so government agents won’t detain us— or overlook those who actually do pose a threat— based on an incorrect view of emotion.”

This is one of the ways in which our fictions have become less than useful. As long as societies were relatively isolated, they could maintain their separate fictions and treat them as reality. But in a global society, these fictions end up clashing with each other in ways that are not just unhelpful but wasteful and dangerous. If TSA agents were only trying to observe people who shared a common culture of social constructs, the standard set of WEIRD emotional behaviors would apply. The problem is TSA agents have to deal with people from diverse cultures that have different ways of experiencing, processing, perceiving, and expressing what we call emotions. It would be like trying to understand world cuisine, diet, and eating habits by studying the American patrons of fast food restaurants.

Barrett points to the historical record of ancient societies and to studies done on non-WEIRD cultures. What was assumed to be true based on WEIRD scientists studying WEIRD subjects turns out not to be true for the rest of the world. But there is an interesting catch to the research, the reason so much confusion prevailed for so long. It is easy to teach people cultural categories of emotion and how to identify them. Some of the initial research on non-WEIRD populations unintentionally taught the subjects the very WEIRD emotions that the researchers were attempting to study. The structure of the studies themselves had WEIRD biases built into them. It was only with later research that these biases were filtered out and the actual responses of non-WEIRD populations could be observed.

Researchers only came to understand this problem quite recently. Noam Chomsky, for example, thought it unnecessary to study actual languages in the field. Based on his own theorizing, he believed that studying a single language such as English would tell us everything we needed to know about the basic workings of all languages in the world. This belief proved massively wrong, as field research demonstrated. There was also an idealism in the early Cold War era that led to false optimism, as Americans felt on top of the world. Chris Knight made this point in Decoding Chomsky (from the Preface):

“Pentagon’s scientists at this time were in an almost euphoric state, fresh from victory in the recent war, conscious of the potential of nuclear weaponry and imagining that they held ultimate power in their hands. Among the most heady of their dreams was the vision of a universal language to which they held the key. […] Unbelievable as it may nowadays sound, American computer scientists in the late 1950s really were seized by the dream of restoring to humanity its lost common tongue. They would do this by designing and constructing a machine equipped with the underlying code of all the world’s languages, instantly and automatically translating from one to the other. The Pentagon pumped vast sums into the proposed ‘New Tower’.”

Chomsky’s modular theory dominated linguistics for more than half a century. It is still held in high esteem, even as the evidence increasingly stacks up against it. This wasn’t just a waste of an immense amount of funding. It derailed an entire field of research and stunted the development of a more accurate understanding. Generations of linguists went chasing after a mirage. No brain module for language has been found, nor is there any hope of ever finding one. Many researchers wasted their entire careers on a theory that proved false, and many of them continue to defend it, perhaps in the hope that another half century of research will finally prove it true after all.

There is no doubt that Chomsky has a brilliant mind. He is highly skilled in debate and persuasion. He won the battle of ideas, at least for a time. Through the sheer power of his intellect, he was able to overwhelm his academic adversaries. His ideas came to dominate the field of linguistics, in what came to be known as the cognitive revolution. But Daniel Everett has stated that “it was not a revolution in any sense, however popular that narrative has become” (Dark Matter of the Mind, Kindle Location 306). If anything, Chomsky’s version of essentialism temporarily suppressed a revolution that had been initiated by linguistic relativists and social constructionists, among others. That revolution was strangled in the crib, partly because it was fighting against an entrenched ideological framework millennia old. The initial attempts at research struggled to offer a competing ideological framework, and they lost that struggle. Then they were quickly forgotten, as if the evidence they brought forth were irrelevant.

Barrett explains the tragedy of this situation. She is speaking of essentialism in terms of emotions, but it applies to the entire scientific project of essentialism. It has been a failed project that refuses to accept its failure, a paradigm that refuses to die in order to make way for something else. She laments all of the waste and lost opportunities (Kindle Locations 3245-3293):

“Now that the final nails are being driven into the classical view’s coffin in this era of neuroscience, I would like to believe that this time, we’ll actually push aside essentialism and begin to understand the mind and brain without ideology. That’s a nice thought, but history is against it. The last time that construction had the upper hand, it lost the battle anyway and its practitioners vanished into obscurity. To paraphrase a favorite sci-fi TV show, Battlestar Galactica, “All this has happened before and could happen again.” And since the last occurrence, the cost to society has been billions of dollars, countless person-hours of wasted effort, and real lives lost. […]

“The official history of emotion research, from Darwin to James to behaviorism to salvation, is a byproduct of the classical view. In reality, the alleged dark ages included an outpouring of research demonstrating that emotion essences don’t exist. Yes, the same kind of counterevidence that we saw in chapter 1 was discovered seventy years earlier . . . and then forgotten. As a result, massive amounts of time and money are being wasted today in a redundant search for fingerprints of emotion. […]

“It’s hard to give up the classical view when it represents deeply held beliefs about what it means to be human. Nevertheless, the facts remain that no one has found even a single reliable, broadly replicable, objectively measurable essence of emotion. When mountains of contrary data don’t force people to give up their ideas, then they are no longer following the scientific method. They are following an ideology. And as an ideology, the classical view has wasted billions of research dollars and misdirected the course of scientific inquiry for over a hundred years. If people had followed evidence instead of ideology seventy years ago, when the Lost Chorus pretty solidly did away with emotion essences, who knows where we’d be today regarding treatments for mental illness or best practices for rearing our children.”

 

Development of Language and Music

Evidence Rebuts Chomsky’s Theory of Language Learning
by Paul Ibbotson and Michael Tomasello

All of this leads ineluctably to the view that the notion of universal grammar is plain wrong. Of course, scientists never give up on their favorite theory, even in the face of contradictory evidence, until a reasonable alternative appears. Such an alternative, called usage-based linguistics, has now arrived. The theory, which takes a number of forms, proposes that grammatical structure is not in­­nate. Instead grammar is the product of history (the processes that shape how languages are passed from one generation to the next) and human psychology (the set of social and cognitive capacities that allow generations to learn a language in the first place). More important, this theory proposes that language recruits brain systems that may not have evolved specifically for that purpose and so is a different idea to Chomsky’s single-gene mutation for recursion.

In the new usage-based approach (which includes ideas from functional linguistics, cognitive linguistics and construction grammar), children are not born with a universal, dedicated tool for learning grammar. Instead they inherit the mental equivalent of a Swiss Army knife: a set of general-purpose tools—such as categorization, the reading of communicative intentions, and analogy making—with which children build grammatical categories and rules from the language they hear around them.

Broca and Wernicke are dead – it’s time to rewrite the neurobiology of language
by Christian Jarrett, BPS Research Digest

Yet the continued dominance of the Classic Model means that neuropsychology and neurology students are often learning outmoded ideas, without getting up to date with the latest findings in the area. Medics too are likely to struggle to account for language-related symptoms caused by brain damage or illness in areas outside of the Classic Model, but which are relevant to language function, such as the cerebellum.

Tremblay and Dick call for a “clean break” from the Classic Model and a new approach that rejects the “language centric” perspective of the past (that saw the language system as highly specialised and clearly defined), and that embraces a more distributed perspective that recognises how much of language function is overlaid on cognitive systems that originally evolved for other purposes.

Signing, Singing, Speaking: How Language Evolved
by Jon Hamilton, NPR

There’s no single module in our brain that produces language. Instead, language seems to come from lots of different circuits. And many of those circuits also exist in other species.

For example, some birds can imitate human speech. Some monkeys use specific calls to tell one another whether a predator is a leopard, a snake or an eagle. And dogs are very good at reading our gestures and tone of voice. Take all of those bits and you get “exactly the right ingredients for making language possible,” Elman says.

We are not the only species to develop speech impediments
by Moheb Costandi, BBC

Jarvis now thinks vocal learning is not an all-or-nothing function. Instead there is a continuum of skill – just as you would expect from something produced by evolution, and which therefore was assembled slowly, piece by piece.

The music of language: exploring grammar, prosody and rhythm perception in zebra finches and budgerigars
by Michelle Spierings, Institute of Biology Leiden

Language is a uniquely human trait. All animals have ways to communicate, but these systems do not bear the same complexity as human language. However, this does not mean that all aspects of human language are specifically human. By studying the language perception abilities of other species, we can discover which parts of language are shared. It is these parts that might have been at the roots of our language evolution. In this thesis I have studied language and music perception in two bird species, zebra finches and budgerigars. For example, zebra finches can perceive the prosodic (intonation) patterns of human language. The budgerigars can learn to discriminate between different abstract (grammar) patterns and generalize these patterns to new sounds. These and other results give us insight into the cognitive abilities that might have been at the very basis of the evolution of human language.

How Music and Language Mimicked Nature to Evolve Us
by Maria Popova, Brain Pickings

Curiously, in the majority of our interaction with the world, we seem to mimic the sounds of events among solid objects. Solid-object events are comprised of hits, slides and rings, producing periodic vibrations. Every time we speak, we find the same three fundamental auditory constituents in speech: plosives (hit-sounds like t, d and p), fricatives (slide-sounds like f, v and sh), and sonorants (ring-sounds like a, u, w, r and y). Changizi demonstrates that solid-object events have distinct “grammar” recurring in speech patterns across different languages and time periods.

But it gets even more interesting with music, a phenomenon perceived as a quintessential human invention — Changizi draws on a wealth of evidence indicating that music is actually based on natural sounds and sound patterns dating back to the beginning of time. Bonus points for convincingly debunking Steven Pinker’s now-legendary proclamation that music is nothing more than “auditory cheesecake.”

Ultimately, Harnessed shows that both speech and music evolved in culture to be simulacra of nature, making our brains’ penchant for these skills appear intuitive.

The sounds of movement
by Bob Holmes, New Scientist

It is this subliminal processing that spoken language taps into, says Changizi. Most of the natural sounds our ancestors would have processed fall into one of three categories: things hitting one another, things sliding over one another, and things resonating after being struck. The three classes of phonemes found in speech – plosives such as p and k, fricatives such as sh and f, and sonorants such as r, m and the vowels – closely resemble these categories of natural sound.

The same nature-mimicry guides how phonemes are assembled into syllables, and syllables into words, as Changizi shows with many examples. This explains why we acquire language so easily: the subconscious auditory processing involved is no different to what our ancestors have done for millions of years.

The hold that music has on us can also be explained by this kind of mimicry – but where speech imitates the sounds of everyday objects, music mimics the sound of people moving, Changizi argues. Primitive humans would have needed to know four things about someone moving nearby: their distance, speed, intent and whether they are coming nearer or going away. They would have judged distance from loudness, speed from the rate of footfalls, intent from gait, and direction from subtle Doppler shifts. Voila: we have volume, tempo, rhythm and pitch, four of the main components of music.

Scientists recorded two dolphins ‘talking’ to each other
by Maria Gallucci, Mashable

While marine biologists have long understood that dolphins communicate within their pods, the new research, which was conducted on two captive dolphins, is the first to link isolated signals to particular dolphins. The findings reveal that dolphins can string together “sentences” using a handful of “words.”

“Essentially, this exchange of [pulses] resembles a conversation between two people,” Vyacheslav Ryabov, the study’s lead researcher, told Mashable.

“The dolphins took turns in producing ‘sentences’ and did not interrupt each other, which gives reason to believe that each of the dolphins listened to the other’s pulses before producing its own,” he said in an email.

“Whistled Languages” Reveal How the Brain Processes Information
by Julien Meyer, Scientific American

Earlier studies had shown that the left hemisphere is, in fact, the dominant language center for both tonal and atonal tongues as well as for nonvocalized click and sign languages. Güntürkün was interested in learning how much the right hemisphere—associated with the processing of melody and pitch—would also be recruited for a whistled language. He and his colleagues reported in 2015 in Current Biology that townspeople from Kuşköy, who were given simple hearing tests, used both hemispheres almost equally when listening to whistled syllables but mostly the left one when they heard vocalized spoken syllables.

Did Music Evolve Before Language?
by Hank Campbell, Science 2.0

Gottfried Schlaug of Harvard Medical School does something a little more direct that may be circumstantial but is a powerful exclamation point for a ‘music came first’ argument. His work with patients who have suffered severe lesions on the left side of their brain showed that while they could not speak – no language skill as we might define it – they were able to sing phrases like “I am thirsty”, sometimes within two minutes of having the phrase mapped to a melody.

Chopin, Bach used human speech ‘cues’ to express emotion in music
by Andrew Baulcomb, Science Daily

“What we found was, I believe, new evidence that individual composers tend to use cues in their music paralleling the use of these cues in emotional speech.” For example, major key or “happy” pieces are higher and faster than minor key or “sad” pieces.

Theory: Music underlies language acquisition
by B.J. Almond, Rice University

Contrary to the prevailing theories that music and language are cognitively separate or that music is a byproduct of language, theorists at Rice University’s Shepherd School of Music and the University of Maryland, College Park (UMCP) advocate that music underlies the ability to acquire language.

“Spoken language is a special type of music,” said Anthony Brandt, co-author of a theory paper published online this month in the journal Frontiers in Cognitive Auditory Neuroscience. “Language is typically viewed as fundamental to human intelligence, and music is often treated as being dependent on or derived from language. But from a developmental perspective, we argue that music comes first and language arises from music.”


How Brains See Music as Language
by Adrienne LaFrance, The Atlantic

What researchers found: The brains of jazz musicians who are engaged with other musicians in spontaneous improvisation show robust activation in the same brain areas traditionally associated with spoken language and syntax. In other words, improvisational jazz conversations “take root in the brain as a language,” Limb said.

“It makes perfect sense,” said Ken Schaphorst, chair of the Jazz Studies Department at the New England Conservatory in Boston. “I improvise with words all the time—like I am right now—and jazz improvisation is really identical in terms of the way it feels. Though it’s difficult to get to the point where you’re comfortable enough with music as a language where you can speak freely.”

Along with the limitations of musical ability, there’s another key difference between jazz conversation and spoken conversation that emerged in Limb’s experiment. During a spoken conversation, the brain is busy processing the structure and syntax of language, as well as the semantics or meaning of the words. But Limb and his colleagues found that brain areas linked to meaning shut down during improvisational jazz interactions. In other words, this kind of music is syntactic but it’s not semantic.

“Music communication, we know it means something to the listener, but that meaning can’t really be described,” Limb said. “It doesn’t have propositional elements or specificity of meaning in the same way a word does. So a famous bit of music—Beethoven’s dun dun dun duuuun—we might hear that and think it means something but nobody could agree what it means.”

 

Shaken and Stirred

I Is an Other
by James Geary
Kindle Locations 303-310

Descartes’s “Cogito ergo sum.”

This phrase is routinely translated as:

I think, therefore I am.

But there is a better translation.

The Latin word cogito is derived from the prefix co (with or together) and the verb agitare (to shake). Agitare is the root of the English words “agitate” and “agitation.” Thus, the original meaning of cogito is “to shake together,” and the proper translation of “Cogito ergo sum” is:

I shake things up, therefore I am.

Staying with the Trouble
by Donna J. Haraway
Kindle Locations 293-303

Trouble is an interesting word. It derives from a thirteenth-century French verb meaning “to stir up,” “to make cloudy,” “to disturb.” We— all of us on Terra— live in disturbing times, mixed-up times, troubling and turbid times. The task is to become capable, with each other in all of our bumptious kinds, of response. Mixed-up times are overflowing with both pain and joy— with vastly unjust patterns of pain and joy, with unnecessary killing of ongoingness but also with necessary resurgence. The task is to make kin in lines of inventive connection as a practice of learning to live and die well with each other in a thick present. Our task is to make trouble, to stir up potent response to devastating events, as well as to settle troubled waters and rebuild quiet places. In urgent times, many of us are tempted to address trouble in terms of making an imagined future safe, of stopping something from happening that looms in the future, of clearing away the present and the past in order to make futures for coming generations. Staying with the trouble does not require such a relationship to times called the future. In fact, staying with the trouble requires learning to be truly present, not as a vanishing pivot between awful or edenic pasts and apocalyptic or salvific futures, but as mortal critters entwined in myriad unfinished configurations of places, times, matters, meanings.

Piraha and Bicameralism

For the past few months, I’ve been reading about color perception, cognition, and terminology. I finally got around to finishing a post on it. The topic is a lot more complex and confusing than one might expect. The specific inspiration was the color blue, a word that apparently doesn’t signify a universal human experience. There is no condition of blueness objectively existing in the external world. It’s easy to forget that a distinction always exists between perception and reality, or rather between one perception of reality and another.

How do you prove something is real when it feels real only in your own experience? For example, how would you attempt to prove your consciousness, interior experience, and individuality? What does it mean for your sense of self to be real? You can’t even verify that your experience of blue matches that of anyone else, much less show that blueness is a salient hue for all people. All you have is the experience itself. Your experience can motivate, influence, and shape what and how you communicate or try to communicate, but you can’t communicate the experience itself. This inability is a stumbling block in all human interactions. The gap between cultures can be even more vast.

This is why language is so important to us. Language doesn’t only serve the purpose of communication; more importantly, it serves the purpose of creating a shared worldview. This is the deeply ingrained human impulse to bond with others, no matter how imperfectly it is achieved in practice. When we have a shared language, we can forget about the philosophical dilemmas of experience and to what degree it is shared. We’d rather not have to constantly worry about such perplexing and disturbing issues.

These contemplations were stirred up by one book in particular, Daniel L. Everett’s Don’t Sleep, There Are Snakes. In my post on color, I brought up some of his observations about the Piraha (read pp. 136-141 of that book and have your mind blown). Their experience is far beyond what most people experience in the modern West. They rely on immediacy of experience. If they or someone they know hasn’t experienced something, it has little relevance to their lives and no truth value in their minds. Yet what they consider to be immediate experience can seem bizarre to us outsiders.

Piraha spirituality isn’t otherworldly. Spirits exist, just as humans exist. In fact, there is no certain distinction. When someone is possessed by a spirit, they are that spirit, and the Piraha treat them as such. The person who is possessed is simply not there. The spirit is real because they experience the spirit with their physical senses. Sometimes, in coming into contact with a spirit, a Piraha individual will lose their old identity and gain a new one, the change being permanent, with a new name to go along with it. The previous person is no longer there and, I suppose, never comes back. They aren’t pretending to change personalities. That is their direct experience of reality. Talk about the power of language. A spirit gives someone a new name and they become a different person. The name has power, represents an entire way of being, a personality unto itself. The person becomes what they are named. This is why the Piraha don’t automatically assume someone is the same person the next time they meet them, for they live in a fluid world where change is to be expected.

A modern Westerner sees the Piraha individual. To their mind, it’s the same person. They can see he or she is physically the same person. But another Piraha tribal member doesn’t see the same person. For example, when possessed, the person is apparently not conscious of the experience and won’t remember it later. During possession, they will be in an entirely dissociated state of mind, literally being someone else with different behaviors and a different voice. The Piraha audience watching the possession also won’t remember anything other than a spirit having visited. It isn’t a possession to them. The spirit literally was there. That is their perceived reality, what they know in their direct experience.

What the Piraha consider crazy and absurd is the Western faith in a monotheistic tradition not based on direct experience. If you never met Jesus, they can’t comprehend why you’d believe in him. The very notion of ‘faith’ makes absolutely no sense to them, as it seems like an act of believing what you know not to be real in your own experience. They are sincere Doubting Thomases. Jesus isn’t real, until he physically walks into their village to be seen with their own eyes, touched with their own hands, and heard with their own ears. To them, spirituality is as real as the physical world around them and is proven by the same means, through direct experience or else the direct experience of someone who is personally trusted to speak honestly.

Calling the Piraha experience of spirits a mass hallucination is to miss the point. To the degree that is true, we are all mass hallucinating all the time. It’s just that one culture’s mass hallucinations differ from those of another. We modern Westerners, however, so desperately want to believe there can be only one objective reality to rule them all. The problem is that we humans aren’t objective beings. Our perceived reality is unavoidably subjective. We can’t see our own cultural biases because they are the only reality we know.

In reading Everett’s description of the Piraha, I couldn’t help thinking about Julian Jaynes’ theory of the bicameral mind. Jaynes wasn’t primarily focused on hunter-gatherers such as the Piraha. Even so, one could see the Piraha culture as having elements of bicameralism, whether or not they ever were fully bicameral. They don’t hallucinate hearing voices from spirits. They literally hear them. How such voices are spoken is apparently not the issue. What matters is that they are spoken and heard. And those spirit voices will sometimes tell the Piraha important information that will influence, if not determine, their behaviors and actions. These spirit visitations are obviously treated seriously and play a central role in the functioning of their society.

What is strangest of all is that the Piraha are not fundamentally different from you or me. They point to one of the near infinite possibilities that exist within our shared human nature. If a baby from Western society were raised by the Piraha, we have no reason to assume that he or she wouldn’t grow up to be like any other Piraha. It was only a few centuries ago that it was still common for Europeans to have regular contact with spirits. The distance between the modern mind and what came before is shorter than it first appears, for what came before still exists within us, as what we will become is a seed already planted.*

I don’t want this point to be missed. What is being discussed here isn’t ultimately about colors or spirits. This is a way of holding up a mirror to ourselves. What we see reflected back isn’t what we expected, isn’t how we appeared in our own imaginings. What if we aren’t what we thought we were? What if we turn out to be a much more amazing kind of creature, one that holds a multitude within?

(*Actually, that isn’t stated quite correctly. It isn’t what came before. The Piraha are still here, as are many other societies far different from the modern West. It’s not just that we carry the past within us. That is as true for the Piraha, considering they too carry a past within them, most of it a past of human evolution shared with the rest of humanity. Modern individuality has existed for only a blip of time, a few hundred years out of the hundreds of thousands of years of hominid existence. The supposed bicameral mind lasted for thousands of years longer than the entire post-bicameral age. What are the chances that our present experience of individuality will last as long? Highly unlikely.)

* * *

Don’t Sleep, There Are Snakes:
Life and Language in the Amazonian Jungle
by Daniel L Everett
pp. 138-139

Pirahãs occasionally talked about me, when I emerged from the river in the evenings after my bath. I heard them ask one another, “Is this the same one who entered the river or is it kapioxiai?”

When I heard them discuss what was the same and what was different about me after I emerged from the river, I was reminded of Heraclitus, who was concerned about the nature of identities through time. Heraclitus posed the question of whether one could step twice into the same river. The water that we stepped into the first time is no longer there. The banks have been altered by the flow so that they are not exactly the same. So apparently we step into a different river. But that is not a satisfying conclusion. Surely it is the same river. So what does it mean to say that something or someone is the same this instant as they were a minute ago? What does it mean to say that I am the same person I was when I was a toddler? None of my cells are the same. Few if any of my thoughts are. To the Pirahãs, people are not the same in each phase of their lives. When you get a new name from a spirit, something anyone can do anytime they see a spirit, you are not exactly the same person as you were before.

Once when I arrived in Posto Novo, I went up to Kóhoibiíihíai and asked him to work with me, as he always did. No answer. So I asked again, “Ko Kóhoi, kapiigakagakaísogoxoihí?” (Hey Kóhoi, do you want to mark paper with me?) Still no answer. So I asked him why he wasn’t talking to me. He responded, “Were you talking to me? My name is Tiáapahai. There is no Kóhoi here. Once I was called Kóhoi, but he is gone now and Tiáapahai is here.”

So, unsurprisingly, they wondered if I had become a different person. But in my case their concern was greater. Because if, in spite of evidence to the contrary, I turned out not to be a xíbiisi, I might really be a different entity altogether and, therefore, a threat to them. I assured them that I was still Dan. I was not kapioxiai.

On many rainless nights, a high falsetto voice can be heard from the jungle near a Pirahã village. This falsetto sounds spiritlike to me. Indeed, it is taken by all the Pirahãs in the village to be a kaoáíbógí, or fast mouth. The voice gives the villagers suggestions and advice, as on how to spend the next day, or on possible night dangers (jaguars, other spirits, attacks by other Indians). This kaoáíbógí also likes sex, and he frequently talks about his desire to copulate with village women, with considerable detail provided.

One night I wanted to see the kaoáíbógí myself. I walked through the brush about a hundred feet to the source of that night’s voice. The man talking in the falsetto was Xagábi, a Pirahã from the village of Pequial and someone known to be very interested in spirits. “Mind if I record you?” I asked, not knowing how he might react, but having a good idea that he would not mind.

“Sure, go ahead,” he answered immediately in his normal voice. I recorded about ten minutes of his kaoáíbógí speech and then returned to my house.

The next day, I went to Xagábi’s place and asked, “Say, Xagábi, why were you talking like a kaoáíbógí last night?”

He acted surprised. “Was there a kaoáíbógí last night? I didn’t hear one. But, then, I wasn’t here.”

pp. 140-141

After some delay, which I could not help but ascribe to the spirits’ sense of theatrical timing, Peter and I simultaneously heard a falsetto voice and saw a man dressed as a woman emerge from the jungle. It was Xisaóoxoi dressed as a recently deceased Pirahã woman. He was using a falsetto to indicate that it was the woman talking. He had a cloth on his head to represent the long hair of a woman, hanging back like a Pirahã woman’s long tresses. “She” was wearing a dress.

Xisaóoxoi’s character talked about how cold and dark it was under the ground where she was buried. She talked about what it felt like to die and about how there were other spirits under the ground. The spirit Xisaóoxoi was “channeling” spoke in a rhythm different from normal Pirahã speech, dividing syllables into groups of two (binary feet) instead of the groups of three (ternary feet) used in everyday talking. I was just thinking how interesting this would be in my eventual analysis of rhythm in Pirahã, when the “woman” rose and left.

Within a few minutes Peter and I heard Xisaóoxoi again, but this time speaking in a low, gruff voice. Those in the “audience” started laughing. A well-known comical spirit was about to appear. Suddenly, out of the jungle, Xisaóoxoi emerged, naked, and pounding the ground with a heavy section of the trunk of a small tree. As he pounded, he talked about how he would hurt people who got in his way, how he was not afraid, and other testosterone-inspired bits of braggadocio.

I had discovered, with Peter, a form of Pirahã theater! But this was of course only my classification of what I was seeing. This was not how the Pirahãs would have described it at all, regardless of the fact that it might have had exactly this function for them. To them they were seeing spirits. They never once addressed Xisaóoxoi by his name, but only by the names of the spirits.

What we had seen was not the same as shamanism, because there was no one man among the Pirahãs who could speak for or to the spirits. Some men did this more frequently than others, but any Pirahã man could, and over the years I was with them most did, speak as a spirit in this way.

The next morning when Peter and I tried to tell Xisaóoxoi how much we enjoyed seeing the spirits, he, like Xagábi, refused to acknowledge knowing anything about it, saying he wasn’t there.

This led me to investigate Pirahã beliefs more aggressively. Did the Pirahãs, including Xisaóoxoi, interpret what we had just seen as fiction or as fact, as real spirits or as theater? Everyone, including Pirahãs who listened to the tape later, Pirahãs from other villages, stated categorically that this was a spirit. And as Peter and I were watching the “spirit show,” I was given a running commentary by a young man sitting next to me, who assured me that this was a spirit, not Xisaóoxoi. Moreover, based on previous episodes in which the Pirahãs doubted that I was the same person and their expressed belief that other white people were spirits, changing forms at will, the only conclusion I could come to was that for the Pirahãs these were encounters with spirits— similar to Western culture’s seances and mediums.

Pirahãs see spirits in their mind, literally. They talk to spirits, literally. Whatever anyone else might think of these claims, all Pirahãs will say that they experience spirits. For this reason, Pirahã spirits exemplify the immediacy of experience principle. And the myths of any other culture must also obey this constraint or there is no appropriate way to talk about them in the Pirahã language.

One might legitimately ask whether something that is not true to Western minds can be experienced. There is reason to believe that it can. When the Pirahãs claim to experience a spirit they have experienced something, and they label this something a spirit. They attribute properties to this experience, as well as the label spirit. Are all the properties, such as existence and lack of blood, correct? I am sure that they are not. But I am equally sure that we attribute properties to many experiences in our daily lives that are incorrect.

* * *

Radical Human Mind: From Animism to Bicameralism and Beyond

On Being Strange

Self, Other, & World

Humanity in All of its Blindness

The World that Inhabits Our Mind