Research on Jaynes’ Bicameral Theory

The onset of data-driven mental archeology
by Sidarta Ribeiro

For many years this shrewd hypothesis seemed untestable. Corollaries such as the right lateralization of auditory hallucinations were dismissed as too simplistic—although schizophrenic patients present less language lateralization (Sommer et al., 2001). Yet, the investigation by Diuk et al. (2012) represents a pioneering successful attempt to test Jaynes’ theory in a quantitative manner. The authors assessed dozens of Judeo-Christian and Greco-Roman texts from up to the second century CE, as well as contemporary Google n-grams, to calculate semantic distances between the reference word “introspection” and all the words in these texts. Cleverly, “introspection” is actually absent from these ancient texts, serving as an “invisible” probe. Semantic distances were evaluated by Latent Semantic Analysis, a high-dimensional model in which the semantic similitude between words is proportional to their co-occurrence in texts with coherent topics (Deerwester et al., 1990; Landauer and Dumais, 1997). The approach goes well beyond the mere counting of word occurrences in a corpus, actually measuring how much the concept of introspection is represented in each text in a “distributed semantic sense,” in accordance with the semantic holism (Frege, 1884, 1980; Quine, 1951; Wittgenstein, 1953, 1967; Davidson, 1967) that became mainstream in artificial intelligence (AI) and machine learning (Cancho and Sole, 2001; Sigman and Cecchi, 2002).
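To make the method concrete, here is a minimal sketch of an LSA-style scoring procedure in Python with scikit-learn. It is not the pipeline of Diuk et al.; the toy corpus, the number of components, and the scoring function are illustrative assumptions only. The idea is the same, though: build a semantic space from a corpus in which the probe word occurs, then score any text by how close its words sit to the probe word, even if the probe word itself never appears in that text.

```python
# Minimal illustrative sketch (not the Diuk et al. pipeline): score a text by its
# semantic proximity to a probe word ("introspection") using LSA word vectors.
# Assumes scikit-learn and numpy; the toy corpus and parameters are placeholders.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import TruncatedSVD

# Training corpus: topically coherent passages in which the probe word occurs.
training_corpus = [
    "introspection is reflection upon one's own thoughts and feelings",
    "in quiet reflection the mind turns inward and examines its own thoughts",
    "the king commanded his soldiers and the army marched to battle",
    "the soldiers fought the battle with sword and shield and spear",
]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(training_corpus)      # term-document matrix
svd = TruncatedSVD(n_components=2, random_state=0)
word_vectors = svd.fit_transform(X.T)              # one low-dimensional vector per word
vocab = vectorizer.vocabulary_                     # word -> row index in word_vectors

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def introspection_score(text, probe="introspection"):
    """Mean cosine similarity between the probe word and the words of the text."""
    probe_vec = word_vectors[vocab[probe]]
    words = [w for w in text.lower().split() if w in vocab]
    return float(np.mean([cosine(probe_vec, word_vectors[vocab[w]]) for w in words]))

# The probe word need not occur in the text being scored (the "invisible" probe).
print(introspection_score("the mind turns inward in quiet reflection"))
print(introspection_score("the king marched the army to battle"))
```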

The results were remarkable. In Judeo-Christian texts, similitude to introspection increased monotonically over time, with a big change in slope from the Old to the New Testament. In Greco-Roman texts, comprising 53 authors from Homer to Julius Caesar, more complex dynamics appeared, with increases in similitude to introspection through periods of cultural development, and decreases during periods of cultural decadence. Contemporary texts showed an overall increase, with periods of decline prior to and during the two World Wars. As Jaynes would have predicted, the rise and fall of entire societies seems to be paralleled by increases and decreases in introspection, respectively.

Diuk et al. show that the evolution of mental life can be quantified from the cultural record, opening a whole new avenue of hypothesis testing for Jaynes’ theory. While it is impossible to prove that pre-Axial people “heard” the voices of the gods, the findings suggest new ways of studying historical and contemporary texts. In particular, the probing of ancient texts with words like “dream,” “god” and “hallucination” has great potential to test Jaynesian concepts.

The featured study lends support to the notion that consciousness is a social construct in constant flux. Quoting senior author Guillermo Cecchi, “it is not just the ‘trending topics,’ but the entire cognitive make-up that changes over time, indicating that culture co-evolves with available cognitive states, and what is socially considered dysfunction can be tested in a more quantitative way.”

Useful Fictions Becoming Less Useful

Humanity has long been under the shadow of the Axial Age, no less true today than in centuries past. But what has this meant in both our self-understanding and in the kind of societies we have created? Ideas, as memes, can survive and even dominate for millennia. This can happen even when they are wrong, as long as they are useful to the social order.

One such idea involves nativism and essentialism, made possible through highly developed abstract thought. This notion of something inherent went along with the notion of division, from mind-body dualism to brain modules (what is inherent in one area being separate from what is inherent elsewhere). It goes back at least to the ancient Greeks such as with Platonic idealism (each ideal an abstract thing unto itself), although abstract thought required two millennia of development before it gained its most powerful form through modern science. As Elisa J. Sobo noted, “Ironically, prior to the industrial revolution and the rise of the modern university, most thinkers took a very comprehensive view of the human condition. It was only afterward that fragmented, factorial, compartmental thinking began to undermine our ability to understand ourselves and our place in— and connection with— the world.”

Maybe we are finally coming around to more fully questioning these useful fictions because they have become less useful as the social order changes, as the entire world shifts around us with globalization, climate change, mass immigration, etc. We saw emotions as so essentialist that we decided to start a war against one of them with the War on Terror, as if this emotion were definitive of our shared reality (and a great example of metonymy, by the way), but obviously fighting wars against a reified abstraction isn’t an optimal strategy for societal progress. Maybe we need new ways of thinking.

The main problem with useful fictions isn’t necessarily that they are false, partial, or misleading. A useful fiction wouldn’t last for millennia if it weren’t, first and foremost, useful (especially true in relation to the views of human nature found in folk psychology). It is true that our seeing these fictions for what they are is a major change, but more importantly what led us to question their validity is that some of them have stopped being as useful as they once were. The nativists, essentialists, and modularists argued that such things as emotional experience, color perception, and language learning were inborn abilities and natural instincts: genetically-determined, biologically-constrained, and neurocognitively-formed. Based on theory, immense amounts of time, energy, and resources were invested into the promises made.

This motivated the entire search to connect everything observable in humans back to a gene, a biological structure, or an evolutionary trait (with the brain getting outsized attention). Yet reality has turned out to be much more complex, with environmental factors such as culture, peer influence, stress, nutrition, and toxins, along with biological factors such as epigenetics, brain plasticity, microbiomes, and parasites. The original quest hasn’t been as fruitful as hoped, partly because of problems in conceptual frameworks and in the scientific research itself, and this has led some to give up on the search. Consider how, when one part of the brain is missing or damaged, other parts often compensate and take over the correlated function. There have been examples of people lacking most of their brain matter who still function in ways that appear outwardly normal. The whole is greater than the sum of the parts, such that the whole can maintain its integrity even without all of the parts.

The past view of the human mind and body has been simplistic in the extreme. This is because we’ve lacked the capacity to see most of what goes on in making it possible. Our conscious minds, including our rational thought, are far more limited than many assumed. And the unconscious mind, the dark matter of the mind, is so much more amazing in what it accomplishes. In discussing what they call conceptual blending, Gilles Fauconnier and Mark Turner write (The Way We Think, p. 18):

“It might seem strange that the systematicity and intricacy of some of our most basic and common mental abilities could go unrecognized for so long. Perhaps the forming of these important mechanisms early in life makes them invisible to consciousness. Even more interestingly, it may be part of the evolutionary adaptiveness of these mechanisms that they should be invisible to consciousness, just as the backstage labor involved in putting on a play works best if it is unnoticed. Whatever the reason, we ignore these common operations in everyday life and seem reluctant to investigate them even as objects of scientific inquiry. Even after training, the mind seems to have only feeble abilities to represent to itself consciously what the unconscious mind does easily. This limit presents a difficulty to professional cognitive scientists, but it may be a desirable feature in the evolution of the species. One reason for the limit is that the operations we are talking about occur at lightning speed, presumably because they involve distributed spreading activation in the nervous system, and conscious attention would interrupt that flow.”

As they argue, conceptual blending helps us understand why a language module or instinct isn’t necessary. Research has shown that there is no single part of the brain nor any single gene that is solely responsible for much of anything. The constituent functions and abilities that form language likely evolved separately for other reasons that were advantageous to survival and social life. Language isn’t built into the brain as an evolutionary leap; rather, it was an emergent property that couldn’t have been predicted from any prior neurocognitive development, which is to say language was built on abilities that by themselves would not have been linguistic in nature.

Of course, Fauconnier and Turner are far from being the only proponents of such theories, as this perspective has become increasingly attractive. Another example is Mark Changizi’s theory presented in Harnessed, where he argues that (p. 11), “Speech and music culturally evolved over time to be simulacra of nature.” Whatever theory one goes with, what is required is to explain the research challenging and undermining earlier models of cognition, affect, linguistics, and related areas.

Another book I was reading is How Emotions are Made by Lisa Feldman Barrett. She covers similar territory, despite her focus being on something so seemingly simple as emotions. We rarely give emotions much thought, taking them for granted, but we shouldn’t. How we understand our experience and expression of emotion is part and parcel of a deeper view that our society holds about human nature, a view that also goes back millennia. This ancient lineage of inherited thought is what makes it problematic, since, being so entrenched within our culture, it feels intuitively true (Kindle Locations 91-93):

“And yet . . . despite the distinguished intellectual pedigree of the classical view of emotion, and despite its immense influence in our culture and society, there is abundant scientific evidence that this view cannot possibly be true. Even after a century of effort, scientific research has not revealed a consistent, physical fingerprint for even a single emotion.”

“So what are they, really?” Barrett asks about emotions (Kindle Locations 99-104):

“When scientists set aside the classical view and just look at the data, a radically different explanation for emotion comes to light. In short, we find that your emotions are not built-in but made from more basic parts. They are not universal but vary from culture to culture. They are not triggered; you create them. They emerge as a combination of the physical properties of your body, a flexible brain that wires itself to whatever environment it develops in, and your culture and upbringing, which provide that environment. Emotions are real, but not in the objective sense that molecules or neurons are real. They are real in the same sense that money is real— that is, hardly an illusion, but a product of human agreement.”

This goes along with an area of thought that arose out of philology, classical studies, consciousness studies, Jungian psychology, and anthropology. As always, I’m particularly thinking of the bicameral mind theory of Julian Jaynes. In the most ancient civilizations, there were no monetary systems, nor, according to Jaynes, was there consciousness as we know it. He argues that individual self-consciousness was built on an abstract metaphorical space that was internalized and narratized. This privatization of personal space led to the possibility of self-ownership, the later basis of capitalism (and hence capitalist realism). It’s abstractions upon abstractions, until all of modern civilization bootstrapped itself into existence.

The initial potentials within human nature could be, and have been, used to build diverse cultures, but modern society has genocidally wiped out most of this once existing diversity, leaving behind a near total dominance of WEIRD monoculture. This allows us modern Westerners to mistake our own culture for universal human nature. Our imaginations are constrained by a reality tunnel, which further strengthens the social order (control of the mind is the basis for control of society). Maybe this is why certain abstractions have been so central in conflating our social reality with physical reality, as Barrett explains (Kindle Locations 2999-3002):

“Essentialism is the culprit that has made the classical view supremely difficult to set aside. It encourages people to believe that their senses reveal objective boundaries in nature. Happiness and sadness look and feel different, the argument goes, so they must have different essences in the brain. People are almost always unaware that they essentialize; they fail to see their own hands in motion as they carve dividing lines in the natural world.”

We make the world in our own image. And then we force this social order on everyone, imprinting it not just onto the culture but onto biology itself. With epigenetics, brain plasticity, microbiomes, etc., biology readily accepts this imprinting of the social order (Kindle Locations 5499-5503):

“By virtue of our values and practices, we restrict options and narrow possibilities for some people while widening them for others, and then we say that stereotypes are accurate. They are accurate only in relation to a shared social reality that our collective concepts created in the first place. People aren’t a bunch of billiard balls knocking one another around. We are a bunch of brains regulating each other’s body budgets, building concepts and social reality together, and thereby helping to construct each other’s minds and determine each other’s outcomes.”

There are clear consequences to humans as individuals and communities. But there are other costs as well (Kindle Locations 129-132):

“Not long ago, a training program called SPOT (Screening Passengers by Observation Techniques) taught those TSA agents to detect deception and assess risk based on facial and bodily movements, on the theory that such movements reveal your innermost feelings. It didn’t work, and the program cost taxpayers $900 million. We need to understand emotion scientifically so government agents won’t detain us— or overlook those who actually do pose a threat— based on an incorrect view of emotion.”

This is one of the ways in which our fictions have become less than useful. As long as societies were relatively isolated, they could maintain their separate fictions and treat them as reality. But in a global society, these fictions end up clashing with each other in ways that are not just unhelpful but wasteful and dangerous. If TSA agents were only trying to observe people who shared a common culture of social constructs, the standard set of WEIRD emotional behaviors would apply. The problem is that TSA agents have to deal with people from diverse cultures that have different ways of experiencing, processing, perceiving, and expressing what we call emotions. It would be like trying to understand world cuisine, diet, and eating habits by studying the American patrons of fast food restaurants.

Barrett points to the historical record of ancient societies and to studies done on non-WEIRD cultures. What was assumed to be true based on WEIRD scientists studying WEIRD subjects turns out not to be true for the rest of the world. But there is an interesting catch to the research, the reason so much confusion prevailed for so long. It is easy to teach people cultural categories of emotion and how to identify them. Some of the initial research on non-WEIRD populations unintentionally taught the subjects the very WEIRD emotions that the researchers were attempting to study. The structure of the studies themselves had WEIRD biases built into them. It was only with later research that researchers were able to filter out these biases and observe the actual responses of non-WEIRD populations.

Researchers only came to understand this problem quite recently. Noam Chomsky, for example, thought it unnecessary to study actual languages in the field. Based on his own theorizing, he believed that studying a single language such as English would tell us everything we needed to know about the basic workings of all languages in the world. This belief proved massively wrong, as field research demonstrated. There was also an idealism in the early Cold War era that led to false optimism, as Americans felt on top of the world. Chris Knight made this point in Decoding Chomsky (from the Preface):

“Pentagon’s scientists at this time were in an almost euphoric state, fresh from victory in the recent war, conscious of the potential of nuclear weaponry and imagining that they held ultimate power in their hands. Among the most heady of their dreams was the vision of a universal language to which they held the key. […] Unbelievable as it may nowadays sound, American computer scientists in the late 1950s really were seized by the dream of restoring to humanity its lost common tongue. They would do this by designing and constructing a machine equipped with the underlying code of all the world’s languages, instantly and automatically translating from one to the other. The Pentagon pumped vast sums into the proposed ‘New Tower’.”

Chomsky’s modular theory dominated linguistics for more than half a century. It is still held in high esteem, even as the evidence is increasingly stacked against it. This wasn’t just a waste of an immense amount of funding. It derailed an entire field of research and stunted the development of a more accurate understanding. Generations of linguists went chasing after a mirage. No brain module of language has been found, nor is there any hope of ever finding one. Many researchers wasted their entire careers on a theory that proved false, and many of these researchers continue to defend it, maybe in the hope that another half century of research will finally prove it to be true after all.

There is no doubt that Chomsky has a brilliant mind. He is highly skilled in debate and persuasion. He won the battle of ideas, at least for a time. Through the sheer power of his intellect, he was able to overwhelm his academic adversaries. His ideas came to dominate the field of linguistics, in what came to be known as the cognitive revolution. But Daniel Everett has stated that “it was not a revolution in any sense, however popular that narrative has become” (Dark Matter of the Mind, Kindle Location 306). If anything, Chomsky’s version of essentialism caused the temporary suppression of a revolution that was initiated by linguistic relativists and social constructionists, among others. The revolution was strangled in the crib, partly because it was fighting against an entrenched ideological framework that was millennia old. The initial attempts at research struggled to offer a competing ideological framework, and they lost that struggle. Then they were quickly forgotten, as if the evidence they brought forth were irrelevant.

Barrett explains the tragedy of this situation. She is speaking of essentialism in terms of emotions, but it applies to the entire scientific project of essentialism. It has been a failed project that refuses to accept its failure, a paradigm that refuses to die in order to make way for something else. She laments all of the waste and lost opportunities (Kindle Locations 3245-3293):

“Now that the final nails are being driven into the classical view’s coffin in this era of neuroscience, I would like to believe that this time, we’ll actually push aside essentialism and begin to understand the mind and brain without ideology. That’s a nice thought, but history is against it. The last time that construction had the upper hand, it lost the battle anyway and its practitioners vanished into obscurity. To paraphrase a favorite sci-fi TV show, Battlestar Galactica, “All this has happened before and could happen again.” And since the last occurrence, the cost to society has been billions of dollars, countless person-hours of wasted effort, and real lives lost. […]

“The official history of emotion research, from Darwin to James to behaviorism to salvation, is a byproduct of the classical view. In reality, the alleged dark ages included an outpouring of research demonstrating that emotion essences don’t exist. Yes, the same kind of counterevidence that we saw in chapter 1 was discovered seventy years earlier . . . and then forgotten. As a result, massive amounts of time and money are being wasted today in a redundant search for fingerprints of emotion. […]

“It’s hard to give up the classical view when it represents deeply held beliefs about what it means to be human. Nevertheless, the facts remain that no one has found even a single reliable, broadly replicable, objectively measurable essence of emotion. When mountains of contrary data don’t force people to give up their ideas, then they are no longer following the scientific method. They are following an ideology. And as an ideology, the classical view has wasted billions of research dollars and misdirected the course of scientific inquiry for over a hundred years. If people had followed evidence instead of ideology seventy years ago, when the Lost Chorus pretty solidly did away with emotion essences, who knows where we’d be today regarding treatments for mental illness or best practices for rearing our children.”

 

Development of Language and Music

Evidence Rebuts Chomsky’s Theory of Language Learning
by Paul Ibbotson and Michael Tomasello

All of this leads ineluctably to the view that the notion of universal grammar is plain wrong. Of course, scientists never give up on their favorite theory, even in the face of contradictory evidence, until a reasonable alternative appears. Such an alternative, called usage-based linguistics, has now arrived. The theory, which takes a number of forms, proposes that grammatical structure is not innate. Instead grammar is the product of history (the processes that shape how languages are passed from one generation to the next) and human psychology (the set of social and cognitive capacities that allow generations to learn a language in the first place). More important, this theory proposes that language recruits brain systems that may not have evolved specifically for that purpose and so is a different idea to Chomsky’s single-gene mutation for recursion.

In the new usage-based approach (which includes ideas from functional linguistics, cognitive linguistics and construction grammar), children are not born with a universal, dedicated tool for learning grammar. Instead they inherit the mental equivalent of a Swiss Army knife: a set of general-purpose tools—such as categorization, the reading of communicative intentions, and analogy making—with which children build grammatical categories and rules from the language they hear around them.

Broca and Wernicke are dead – it’s time to rewrite the neurobiology of language
by Christian Jarrett, BPS Research Digest

Yet the continued dominance of the Classic Model means that neuropsychology and neurology students are often learning outmoded ideas, without getting up to date with the latest findings in the area. Medics too are likely to struggle to account for language-related symptoms caused by brain damage or illness in areas outside of the Classic Model, but which are relevant to language function, such as the cerebellum.

Tremblay and Dick call for a “clean break” from the Classic Model and a new approach that rejects the “language centric” perspective of the past (that saw the language system as highly specialised and clearly defined), and that embraces a more distributed perspective that recognises how much of language function is overlaid on cognitive systems that originally evolved for other purposes.

Signing, Singing, Speaking: How Language Evolved
by Jon Hamilton, NPR

There’s no single module in our brain that produces language. Instead, language seems to come from lots of different circuits. And many of those circuits also exist in other species.

For example, some birds can imitate human speech. Some monkeys use specific calls to tell one another whether a predator is a leopard, a snake or an eagle. And dogs are very good at reading our gestures and tone of voice. Take all of those bits and you get “exactly the right ingredients for making language possible,” Elman says.

We are not the only species to develop speech impediments
by Moheb Costandi, BBC

Jarvis now thinks vocal learning is not an all-or-nothing function. Instead there is a continuum of skill – just as you would expect from something produced by evolution, and which therefore was assembled slowly, piece by piece.

The music of language: exploring grammar, prosody and rhythm perception in zebra finches and budgerigars
by Michelle Spierings, Institute of Biology Leiden

Language is a uniquely human trait. All animals have ways to communicate, but these systems do not bear the same complexity as human language. However, this does not mean that all aspects of human language are specifically human. By studying the language perception abilities of other species, we can discover which parts of language are shared. It is these parts that might have been at the roots of our language evolution. In this thesis I have studied language and music perception in two bird species, zebra finches and budgerigars. For example, zebra finches can perceive the prosodic (intonation) patterns of human language. The budgerigars can learn to discriminate between different abstract (grammar) patterns and generalize these patterns to new sounds. These and other results give us insight into the cognitive abilities that might have been at the very basis of the evolution of human language.

How Music and Language Mimicked Nature to Evolve Us
by Maria Popova, Brain Pickings

Curiously, in the majority of our interaction with the world, we seem to mimic the sounds of events among solid objects. Solid-object events are comprised of hits, slides and rings, producing periodic vibrations. Every time we speak, we find the same three fundamental auditory constituents in speech: plosives (hit-sounds like t, d and p), fricatives (slide-sounds like f, v and sh), and sonorants (ring-sounds like a, u, w, r and y). Changizi demonstrates that solid-object events have distinct “grammar” recurring in speech patterns across different languages and time periods.

But it gets even more interesting with music, a phenomenon perceived as a quintessential human invention — Changizi draws on a wealth of evidence indicating that music is actually based on natural sounds and sound patterns dating back to the beginning of time. Bonus points for convincingly debunking Steven Pinker’s now-legendary proclamation that music is nothing more than “auditory cheesecake.”

Ultimately, Harnessed shows that both speech and music evolved in culture to be simulacra of nature, making our brains’ penchant for these skills appear intuitive.

The sounds of movement
by Bob Holmes, New Scientist

It is this subliminal processing that spoken language taps into, says Changizi. Most of the natural sounds our ancestors would have processed fall into one of three categories: things hitting one another, things sliding over one another, and things resonating after being struck. The three classes of phonemes found in speech – plosives such as p and k, fricatives such as sh and f, and sonorants such as r, m and the vowels – closely resemble these categories of natural sound.

The same nature-mimicry guides how phonemes are assembled into syllables, and syllables into words, as Changizi shows with many examples. This explains why we acquire language so easily: the subconscious auditory processing involved is no different to what our ancestors have done for millions of years.

The hold that music has on us can also be explained by this kind of mimicry – but where speech imitates the sounds of everyday objects, music mimics the sound of people moving, Changizi argues. Primitive humans would have needed to know four things about someone moving nearby: their distance, speed, intent and whether they are coming nearer or going away. They would have judged distance from loudness, speed from the rate of footfalls, intent from gait, and direction from subtle Doppler shifts. Voila: we have volume, tempo, rhythm and pitch, four of the main components of music.

Scientists recorded two dolphins ‘talking’ to each other
by Maria Gallucci, Mashable

While marine biologists have long understood that dolphins communicate within their pods, the new research, which was conducted on two captive dolphins, is the first to link isolated signals to particular dolphins. The findings reveal that dolphins can string together “sentences” using a handful of “words.”

“Essentially, this exchange of [pulses] resembles a conversation between two people,” Vyacheslav Ryabov, the study’s lead researcher, told Mashable.

“The dolphins took turns in producing ‘sentences’ and did not interrupt each other, which gives reason to believe that each of the dolphins listened to the other’s pulses before producing its own,” he said in an email.

“Whistled Languages” Reveal How the Brain Processes Information
by Julien Meyer, Scientific American

Earlier studies had shown that the left hemisphere is, in fact, the dominant language center for both tonal and atonal tongues as well as for nonvocalized click and sign languages. Güntürkün was interested in learning how much the right hemisphere—associated with the processing of melody and pitch—would also be recruited for a whistled language. He and his colleagues reported in 2015 in Current Biology that townspeople from Kuşköy, who were given simple hearing tests, used both hemispheres almost equally when listening to whistled syllables but mostly the left one when they heard vocalized spoken syllables.

Did Music Evolve Before Language?
by Hank Campbell, Science 2.0

Gottfried Schlaug of Harvard Medical School does something a little more direct that may be circumstantial but is a powerful exclamation point for a ‘music came first’ argument. His work with patients who have suffered severe lesions on the left side of their brain showed that while they could not speak – no language skill as we might define it – they were able to sing phrases like “I am thirsty”, sometimes within two minutes of having the phrase mapped to a melody.

Chopin, Bach used human speech ‘cues’ to express emotion in music
by Andrew Baulcomb, Science Daily

“What we found was, I believe, new evidence that individual composers tend to use cues in their music paralleling the use of these cues in emotional speech.” For example, major key or “happy” pieces are higher and faster than minor key or “sad” pieces.

Theory: Music underlies language acquisition
by B.J. Almond, Rice University

Contrary to the prevailing theories that music and language are cognitively separate or that music is a byproduct of language, theorists at Rice University’s Shepherd School of Music and the University of Maryland, College Park (UMCP) advocate that music underlies the ability to acquire language.

“Spoken language is a special type of music,” said Anthony Brandt, co-author of a theory paper published online this month in the journal Frontiers in Cognitive Auditory Neuroscience. “Language is typically viewed as fundamental to human intelligence, and music is often treated as being dependent on or derived from language. But from a developmental perspective, we argue that music comes first and language arises from music.”

See more at: http://news.rice.edu/2012/09/18/theory-music-underlies-language-acquisition/

How Brains See Music as Language
by Adrienne LaFrance, The Atlantic

What researchers found: The brains of jazz musicians who are engaged with other musicians in spontaneous improvisation show robust activation in the same brain areas traditionally associated with spoken language and syntax. In other words, improvisational jazz conversations “take root in the brain as a language,” Limb said.

“It makes perfect sense,” said Ken Schaphorst, chair of the Jazz Studies Department at the New England Conservatory in Boston. “I improvise with words all the time—like I am right now—and jazz improvisation is really identical in terms of the way it feels. Though it’s difficult to get to the point where you’re comfortable enough with music as a language where you can speak freely.”

Along with the limitations of musical ability, there’s another key difference between jazz conversation and spoken conversation that emerged in Limb’s experiment. During a spoken conversation, the brain is busy processing the structure and syntax of language, as well as the semantics or meaning of the words. But Limb and his colleagues found that brain areas linked to meaning shut down during improvisational jazz interactions. In other words, this kind of music is syntactic but it’s not semantic.

“Music communication, we know it means something to the listener, but that meaning can’t really be described,” Limb said. “It doesn’t have propositional elements or specificity of meaning in the same way a word does. So a famous bit of music—Beethoven’s dun dun dun duuuun—we might hear that and think it means something but nobody could agree what it means.”

 

Shaken and Stirred

I Is an Other
by James Geary
Kindle Locations 303-310

Descartes’s “Cogito ergo sum.”

This phrase is routinely translated as:

I think, therefore I am.

But there is a better translation.

The Latin word cogito is derived from the prefix co (with or together) and the verb agitare (to shake). Agitare is the root of the English words “agitate” and “agitation.” Thus, the original meaning of cogito is “to shake together,” and the proper translation of “Cogito ergo sum” is:

I shake things up, therefore I am.

Staying with the Trouble
by Donna J. Haraway
Kindle Locations 293-303

Trouble is an interesting word. It derives from a thirteenth-century French verb meaning “to stir up,” “to make cloudy,” “to disturb.” We— all of us on Terra— live in disturbing times, mixed-up times, troubling and turbid times. The task is to become capable, with each other in all of our bumptious kinds, of response. Mixed-up times are overflowing with both pain and joy— with vastly unjust patterns of pain and joy, with unnecessary killing of ongoingness but also with necessary resurgence. The task is to make kin in lines of inventive connection as a practice of learning to live and die well with each other in a thick present. Our task is to make trouble, to stir up potent response to devastating events, as well as to settle troubled waters and rebuild quiet places. In urgent times, many of us are tempted to address trouble in terms of making an imagined future safe, of stopping something from happening that looms in the future, of clearing away the present and the past in order to make futures for coming generations. Staying with the trouble does not require such a relationship to times called the future. In fact, staying with the trouble requires learning to be truly present, not as a vanishing pivot between awful or edenic pasts and apocalyptic or salvific futures, but as mortal critters entwined in myriad unfinished configurations of places, times, matters, meanings.

Piraha and Bicameralism

For the past few months, I’ve been reading about color perception, cognition, and terminology. I finally got around to finishing a post on it. The topic is a lot more complex and confusing than what one might expect. The specific inspiration was the color blue, a word that apparently doesn’t signify a universal human experience. There is no condition of blueness objectively existing in the external world. It’s easy to forget that a distinction always exists between perception and reality or rather between one perception of reality and another.

How do you prove something is real when it feels real in your experience? For example, how would you attempt to prove your consciousness, interior experience, and individuality? What does it mean for your sense of self to be real? You can’t even verify your experience of blue matches that of anyone else, much less show that blueness is a salient hue for all people. All you have is the experience itself. Your experience can motivate, influence, and shape what and how you communicate or try to communicate, but you can’t communicate the experience itself. This inability is a stumbling block of all human interactions. The gap between cultures can be even more vast.

This is why language is so important to us. Language doesn’t only serve the purpose of communication but, more importantly, the purpose of creating a shared worldview. This is the deeply ingrained human impulse to bond with others, no matter how imperfectly it is achieved in practice. When we have a shared language, we can forget about the philosophical dilemmas of experience and to what degree it is shared. We’d rather not have to constantly worry about such perplexing and disturbing issues.

These contemplations were stirred up by one book in particular, Daniel L. Everett’s Don’t Sleep, There Are Snakes. In my post on color, I brought up some of his observations about the Piraha (read pp. 136-141 from that book and have your mind blown). Their experience is far beyond what most people experience in the modern West. They rely on the immediacy of experience. If they haven’t experienced something, and no one they know has, it has little relevance to their lives and no truth value in their minds. Yet what they consider to be immediate experience can seem bizarre to us outsiders.

Piraha spirituality isn’t otherworldly. Spirits exist, just as humans exist. In fact, there is no certain distinction. When someone is possessed by a spirit, they are that spirit and the Piraha treat them as such. The person who is possessed is simply not there. The spirit is real because they experience the spirit with their physical senses. Sometimes in coming into contact with a spirit, a Piraha individual will lose their old identity and gain a new one, the change being permanent, with a new name to go along with it. The previous person is no longer there and I suppose never comes back. They aren’t pretending to change personalities. That is their direct experience of reality. Talk about the power of language. A spirit gives someone a new name and they become a different person. The name has power, represents an entire way of being, a personality unto itself. The person becomes what they are named. This is why the Piraha don’t automatically assume someone is the same person the next time they meet them, for they live in a fluid world where change is to be expected.

A modern Westerner sees the Piraha individual. To their mind, it’s the same person. They can see he or she is physically the same person. But another Piraha tribal member doesn’t see the same person. For example, when possessed, the person is apparently not conscious of the experience and won’t remember it later. During possession, they will be in an entirely dissociated state of mind, literally being someone else with different behaviors and a different voice. The Piraha audience watching the possession also won’t remember anything other than a spirit having visited. It isn’t a possession to them. The spirit literally was there. That is their perceived reality, what they know in their direct experience.

What the Piraha consider crazy and absurd is the Western faith in a monotheistic tradition not based on direct experience. If you never met Jesus, they can’t comprehend why you’d believe in him. The very notion of ‘faith’ makes absolutely no sense to them, as it seems like an act of believing what you know not to be real in your own experience. They are sincere Doubting Thomases. Jesus isn’t real, until he physically walks into their village to be seen with their own eyes, touched with their own hands, and heard with their own ears. To them, spirituality is as real as the physical world around them and is proven by the same means, through direct experience or else the direct experience of someone who is personally trusted to speak honestly.

To call the Piraha experience of spirits a mass hallucination is to miss the point. To the degree that is true, we are all mass hallucinating all the time. It’s just that one culture’s mass hallucinations differ from those of another. We modern Westerners, however, so desperately want to believe there can only be one objective reality to rule them all. The problem is that we humans aren’t objective beings. Our perceived reality is unavoidably subjective. We can’t see our own cultural biases because they are the only reality we know.

In reading Everett’s description of the Piraha, I couldn’t help thinking about Julian Jaynes’ theory of the bicameral mind. Jaynes wasn’t primarily focused on hunter-gatherers such as the Piraha. Even so, one could see the Piraha culture as having elements of bicameralism, whether or not they ever were fully bicameral. They don’t hallucinate hearing voices from spirits. They literally hear them. How such voices are spoken is apparently not the issue. What matters is that they are spoken and heard. And those spirit voices will sometimes tell the Piraha important information that will influence, if not determine, their behaviors and actions. These spirit visitations are obviously treated seriously and play a central role in the functioning of their society.

What is strangest of all is that the Piraha are not fundamentally different from you or me. They point to one of the near infinite possibilities that exist within our shared human nature. If a baby from Western society were raised by the Piraha, we have no reason to assume that he or she wouldn’t grow up to be like any other Piraha. It was only a few centuries ago that it was also common for Europeans to have regular contact with spirits. The distance between the modern mind and what came before is shorter than it first appears, for what came before still exists within us, as what we will become is a seed already planted.*

I don’t want this point to be missed. What is being discussed here isn’t ultimately about colors or spirits. This is a way of holding up a mirror to ourselves. What we see reflected back isn’t what we expected, isn’t how we appeared in our own imaginings. What if we aren’t what we thought we were? What if we turn out to be a much more amazing kind of creature, one that holds a multitude within?

(*Actually, that isn’t stated quite correctly. It isn’t what came before. The Piraha are still here, as are many other societies far different from the modern West. It’s not just that we carry the past within us. That is as true for the Piraha, considering they too carry a past within them, most of it being a past of human evolution shared with the rest of humanity. Modern individuality has only existed in a blip of time, a few hundred years in the hundreds of thousands of years of hominid existence. The supposed bicameral mind lasted for thousands of years longer than the entire post-bicameral age. What are the chances that our present experience of individuality will last as long? Highly unlikely.)

* * *

Don’t Sleep, There Are Snakes:
Life and Language in the Amazonian Jungle
by Daniel L Everett
pp. 138-139

Pirahãs occasionally talked about me, when I emerged from the river in the evenings after my bath. I heard them ask one another, “Is this the same one who entered the river or is it kapioxiai?”

When I heard them discuss what was the same and what was different about me after I emerged from the river, I was reminded of Heraclitus, who was concerned about the nature of identities through time. Heraclitus posed the question of whether one could step twice into the same river. The water that we stepped into the first time is no longer there. The banks have been altered by the flow so that they are not exactly the same. So apparently we step into a different river. But that is not a satisfying conclusion. Surely it is the same river. So what does it mean to say that something or someone is the same this instant as they were a minute ago? What does it mean to say that I am the same person I was when I was a toddler? None of my cells are the same. Few if any of my thoughts are. To the Pirahãs, people are not the same in each phase of their lives. When you get a new name from a spirit, something anyone can do anytime they see a spirit, you are not exactly the same person as you were before.

Once when I arrived in Posto Novo, I went up to Kóhoibiíihíai and asked him to work with me, as he always did. No answer. So I asked again, “Ko Kóhoi, kapiigakagakaísogoxoihí?” (Hey Kóhoi, do you want to mark paper with me?) Still no answer. So I asked him why he wasn’t talking to me. He responded, “Were you talking to me? My name is Tiáapahai. There is no Kóhoi here. Once I was called Kóhoi, but he is gone now and Tiáapahai is here.”

So, unsurprisingly, they wondered if I had become a different person. But in my case their concern was greater. Because if, in spite of evidence to the contrary, I turned out not to be a xíbiisi, I might really be a different entity altogether and, therefore, a threat to them. I assured them that I was still Dan. I was not kapioxiai.

On many rainless nights, a high falsetto voice can be heard from the jungle near a Pirahã village. This falsetto sounds spiritlike to me. Indeed, it is taken by all the Pirahãs in the village to be a kaoáíbógí, or fast mouth. The voice gives the villagers suggestions and advice, as on how to spend the next day, or on possible night dangers (jaguars, other spirits, attacks by other Indians). This kaoáíbógí also likes sex, and he frequently talks about his desire to copulate with village women, with considerable detail provided.

One night I wanted to see the kaoáíbógí myself. I walked through the brush about a hundred feet to the source of that night’s voice. The man talking in the falsetto was Xagábi, a Pirahã from the village of Pequial and someone known to be very interested in spirits. “Mind if I record you?” I asked, not knowing how he might react, but having a good idea that he would not mind.

“Sure, go ahead,” he answered immediately in his normal voice. I recorded about ten minutes of his kaoáíbógí speech and then returned to my house.

The next day, I went to Xagábi’s place and asked, “Say, Xagábi, why were you talking like a kaoáíbógí last night?”

He acted surprised. “Was there a kaoáíbógí last night? I didn’t hear one. But, then, I wasn’t here.”

pp. 140-141

After some delay, which I could not help but ascribe to the spirits’ sense of theatrical timing, Peter and I simultaneously heard a falsetto voice and saw a man dressed as a woman emerge from the jungle. It was Xisaóoxoi dressed as a recently deceased Pirahã woman. He was using a falsetto to indicate that it was the woman talking. He had a cloth on his head to represent the long hair of a woman, hanging back like a Pirahã woman’s long tresses. “She” was wearing a dress.

Xisaóoxoi’s character talked about how cold and dark it was under the ground where she was buried. She talked about what it felt like to die and about how there were other spirits under the ground. The spirit Xisaóoxoi was “channeling” spoke in a rhythm different from normal Pirahã speech, dividing syllables into groups of two (binary feet) instead of the groups of three (ternary feet) used in everyday talking. I was just thinking how interesting this would be in my eventual analysis of rhythm in Pirahã, when the “woman” rose and left.

Within a few minutes Peter and I heard Xisaóoxoi again, but this time speaking in a low, gruff voice. Those in the “audience” started laughing. A well-known comical spirit was about to appear. Suddenly, out of the jungle, Xisaóoxoi emerged, naked, and pounding the ground with a heavy section of the trunk of a small tree. As he pounded, he talked about how he would hurt people who got in his way, how he was not afraid, and other testosterone-inspired bits of braggadocio.

I had discovered, with Peter, a form of Pirahã theater! But this was of course only my classification of what I was seeing. This was not how the Pirahãs would have described it at all, regardless of the fact that it might have had exactly this function for them. To them they were seeing spirits. They never once addressed Xisaóoxoi by his name, but only by the names of the spirits.

What we had seen was not the same as shamanism, because there was no one man among the Pirahãs who could speak for or to the spirits. Some men did this more frequently than others, but any Pirahã man could, and over the years I was with them most did, speak as a spirit in this way.

The next morning when Peter and I tried to tell Xisaóoxoi how much we enjoyed seeing the spirits, he, like Xagábi, refused to acknowledge knowing anything about it, saying he wasn’t there.

This led me to investigate Pirahã beliefs more aggressively. Did the Pirahãs, including Xisaóoxoi, interpret what we had just seen as fiction or as fact, as real spirits or as theater? Everyone, including Pirahãs who listened to the tape later, Pirahãs from other villages, stated categorically that this was a spirit. And as Peter and I were watching the “spirit show,” I was given a running commentary by a young man sitting next to me, who assured me that this was a spirit, not Xisaóoxoi. Moreover, based on previous episodes in which the Pirahãs doubted that I was the same person and their expressed belief that other white people were spirits, changing forms at will, the only conclusion I could come to was that for the Pirahãs these were encounters with spirits— similar to Western culture’s seances and mediums.

Pirahãs see spirits in their mind, literally. They talk to spirits, literally. Whatever anyone else might think of these claims, all Pirahãs will say that they experience spirits. For this reason, Pirahã spirits exemplify the immediacy of experience principle. And the myths of any other culture must also obey this constraint or there is no appropriate way to talk about them in the Pirahã language.

One might legitimately ask whether something that is not true to Western minds can be experienced. There is reason to believe that it can. When the Pirahãs claim to experience a spirit they have experienced something, and they label this something a spirit. They attribute properties to this experience, as well as the label spirit. Are all the properties, such as existence and lack of blood, correct? I am sure that they are not. But I am equally sure that we attribute properties to many experiences in our daily lives that are incorrect.

* * *

Radical Human Mind: From Animism to Bicameralism and Beyond

On Being Strange

Self, Other, & World

Humanity in All of its Blindness

The World that Inhabits Our Mind

Blue on Blue

“Abstract words are ancient coins whose concrete images in the busy give-and-take of talk have worn away with use.”
~ Julian Jaynes, The Origin of Consciousness in the
Breakdown of the Bicameral Mind

“This blue was the principle that transcended principles. This was the taste, the wish, the Binah that understands, the dainty fingers of personality and the swirling fingerprint lines of individuality, this sigh that returns like a forgotten and indescribable scent that never dies but only you ever knew, this tingle between familiar and strange, this you that never there was word for, this identifiable but untransmittable sensation, this atmosphere without reason, this illicit fairy kiss for which you are more fool than sinner, this only thing that God and Satan mistakenly left you for your own and which both (and everyone else besides) insist to you is worthless— this, your only and invisible, your peculiar— this secret blue.”
~ Quentin S. Crisp, Blue on Blue

Perception is as much cognition as sensation. Colors don’t exist in the world; they are our brain’s way of processing light waves detected by the eyes. Someone unable to see from birth will never be able to see normal colors, even if they gain sight as an adult. The brain has to learn how to see the world, and that is a process that happens primarily in infancy and childhood.

Radical questions follow from this insight. Do we experience blue, forgiveness, individuality, etc before our culture has the language for it? And, conversely, does the language we use and how we use it indicate our actual experience? Or does it filter and shape it? Did the ancients lack not only perceived blueness but also individuated/interiorized consciousness and artistic perspective because they had no way of communicating and expressing it? If they possessed such things as their human birthright, why did they not communicate them in their texts and show them in their art?

The most ancient people would refer to the sky as black. Some isolated people in more recent times have also been observed offering this same description. This apparently isn’t a strange exception. Guy Deutscher mentions that, in an informal color experiment, his young daughter once pointed to the “pitch-black sky late at night” and declared it blue—that was at the age of four, long after having learned the color names for blue and black. She had the language to make the distinction and yet she made a similar ‘mistake’ as some isolated island people. How could that be? Aren’t ‘black’ and ‘blue’ obviously different?

The ancients described physical appearances in some ways that seem bizarre to the modern sensibility. Homer says the sea appears something like wine and so do sheep. Or else the sea is violet, just as are oxen and iron. Even more strangely, green is the color of honey and the color human faces turn under emotional distress. Yet nowhere in the ancient world is anything blue, for no word for it existed. Things that seem blue to us are either green, black, or simply dark in ancient texts.

It has been argued that Homer’s language, such as the word for ‘bronze’, might not have referred to color at all. But that just adds to the strangeness. We can’t determine what colors he might have been referring to, or even whether he was describing colors at all. There were no abstractly generalized terms dedicated exclusively to colors; the same words also described other physical features, psychological experiences, and symbolic values. This might imply that synesthesia was once a more common experience, related to the greater capacity preliterate individuals had for memorizing vast amounts of information (see Knowledge and Power in Prehistoric Societies by Lynne Kelly).

The paucity and confusion of ancient color language indicates that color wasn’t perceived as all that significant, to the degree it was consciously perceived at all, at least not in the way we moderns think about it. Color hue might not have seemed all that relevant in an ancient world that was mostly lacking in artificially colored objects and entirely lacking in bright garden flowers. Besides the ancient Egyptians, no one in the earliest civilizations had developed a blue pigment, and hence no one had a word to describe it. Blue is a rare color in nature. Even water and sky are rarely a bright, clear blue, when they are blue at all.

This isn’t just about color. There is something extremely bizarre going on, according to what we moderns assume to be the case about the human mind and perception.

Consider the case of the Piraha, as studied by Daniel L. Everett (a man who personally understands the power of their cultural worldview). The Piraha have no color terms, not as single words, although they are able to describe colors using multiple words and concrete comparisons—such as red described as being like blood or green as like not yet ripe. Of course, they’ve been in contact with non-Piraha for a while now and so no one knows how they would’ve talked about colors before interaction with outsiders.

From a Western perspective, there are many other odd things about the Piraha. Their language does not fit the expectations of what many have thought to be universal to all human language. They have no terms for numbers and counting, as well as no “quantifiers like all, each, every, and so on” (Everett, Don’t Sleep, There Are Snakes, p. 119). Originally, they had no pronouns, and the pronouns they borrowed from other languages are used in limited ways. They refer to ‘say’ in place of ‘think’, which makes one wonder what this indicates about their experience—is their thought an act of speaking?

Along with lacking ancestor worship, they have no words to refer to family members they never personally knew. There are no creation stories or myths or fiction, nor any apparent notion of the world having been created or of another supernatural world existing. They don't think in those terms nor, one might presume, perceive reality in those terms. They are epistemological agnostics about anything not experienced by themselves or by someone they personally know, and their language is extremely precise about knowledge claims, making early Western philosophers seem simpleminded in comparison. Everett went to them as a missionary hoping to convert them to Christianity; instead they converted him to atheism. Yet the Piraha live in a world they perceive as filled with spirits. These aren't otherworldly spirits. They are very much in this world, and when a Piraha speaks as a spirit, they are that spirit. To put it another way, the world is full of diverse and shifting selves.

Color terms refer to abstract, unchanging categories, the very thing that seems least relevant to the Piraha. They favor a subjective mentality, but that doesn't mean they possess a subjective self similar to that of Western culture. Like many hunter-gatherers, they have a fluid sense of identity that changes along with their names, a former self treated as no longer existing at all, simply gone. There is no evidence of belief in a constant self that would survive death, just as there is no belief in gods or in a heaven and hell. Instead of being obsessed with what is beyond, they are endlessly fascinated by what is at the edge of experience, what appears and disappears. In Cultural Constraints on Grammar and Cognition in Piraha, Everett explains this:

“After discussions and checking of many examples of this, it became clearer that the Piraha are talking about liminality—situations in which an item goes in and out of the boundaries of their experience. This concept is found throughout Pirahã culture.
Pirahã’s excitement at seeing a canoe go around a river bend is hard to describe; they see this almost as traveling into another dimension. It is interesting, in light of the postulated cultural constraint on grammar, that there is an important Pirahã term and cultural value for crossing the border between experience and nonexperience.”

To speak of colors is to speak of particular kinds of perceptions and experiences. Piraha culture is practically incomprehensible to us, as the Piraha represent an alien view of the world. In concluding, Everett writes:

“Piraha thus provides striking evidence for the influence of culture on major grammatical structures, contradicting Newmeyer’s (2002:361) assertion (citing “virtually all linguists today”), that “there is no hope of correlating a language’s gross grammatical properties with sociocultural facts about its speakers.” If I am correct, Piraha shows that gross grammatical properties are not only correlated with sociocultural facts but may be determined by them.”

Even so, Everett is not arguing for a strong Whorfian position of linguistic determinism. Then again, Vyvyan Evans states that not even Benjamin Lee Whorf made this argument. In Language, Thought and Reality, Whorf wrote (as quoted by Evans in The Language Myth):

“The tremendous importance of language cannot, in my opinion, be taken to mean necessarily that nothing is back of it of the nature of what has traditionally been called ‘mind’. My own studies suggest, to me, that language, for all its kingly role, is in some sense a superficial embroidery upon deeper processes of consciousness, which are necessary before any communication, signalling, or symbolism whatsoever can occur.”

In any case, Everett observed that the Piraha did show patterns in how they linguistically treated certain hues. It's just that there was much diversity and complexity in how they described colors, a dark brown object being described differently from a dark-skinned person, and no consistency across Piraha speakers in which phrases they used for which colors. Still, like any other humans, they had the capacity for color perception, whether or not their color cognition matches that of other cultures.

To emphasize the point, here is a similar example, as presented by Vyvyan Evans in The Language Myth (pp. 207-8):

“The colour system in Yélî Dnye has been studied extensively by linguistic anthropologist Stephen Levinson. 38 Levinson argues that the lesson from Rossel Island is that each of the following claims made by Berlin and Kay is demonstrably false:

  • Claim 1: All languages have basic colour terms
  • Claim 2: The colour spectrum is so salient a perceptual field that all cultures must systematically and exhaustively name the colour space
  • Claim 3: For those basic colour terms that exist in any given language, there are corresponding focal colours – there is an ideal hue that is the prototypical shade for a given basic colour term
  • Claim 4: The emergence of colour terms follows a universal evolutionary pattern

“A noteworthy feature of Rossel Island culture is this: there is little interest in colour. For instance, there is no native artwork or handiwork in colour. The exception to this is hand-woven patterned baskets, which are usually uncoloured, or, if coloured, are black or blue. Moreover, the Rossel language doesn’t have a word that corresponds to the English word colour: the domain of colour appears not to be a salient conceptual category independent of objects. For instance, in Yélî, it is not normally possible to ask what colour something is, as one can in English. Levinson reports that the equivalent question would be: U pââ ló nté? This translates as “Its body, what is it like?” Furthermore, colours are not usually associated with objects as a whole, but rather with surfaces.”

Evans goes into greater detail. Suffice it to say, he makes a compelling argument that this example contradicts and falsifies the main claims of the conventional theory, specifically that of Berlin and Kay. This culture defies expectations. It's one of many exceptions that appear to disprove the supposed rule.

Part of the challenge is that we can't study other cultures as neutral observers. Researchers end up influencing the cultures they study, or else simply projecting their own cultural biases onto them and interpreting the results accordingly. Even the tests used to analyze color perception across cultures are themselves culturally biased. They don't just measure how people divide up hues; in the process of being tested, the subjects are being taught a particular way of thinking about color. The test can't tell us how people thought about colors prior to the test itself. And obviously, even if the test could accomplish that impossible feat, we have no way of traveling back in time to apply it to ancient peoples.

We are left with a mystery and no easy way to explore it.

* * *

Here are a few related posts of mine. And below that are other sources of info, including a video at the very bottom.

Radical Human Mind: From Animism to Bicameralism and Beyond

Folk Psychology, Theory of Mind & Narrative

Self, Other, & World

Does Your Language Shape How You Think?
by Guy Deutscher

SINCE THERE IS NO EVIDENCE that any language forbids its speakers to think anything, we must look in an entirely different direction to discover how our mother tongue really does shape our experience of the world. Some 50 years ago, the renowned linguist Roman Jakobson pointed out a crucial fact about differences between languages in a pithy maxim: “Languages differ essentially in what they must convey and not in what they may convey.” This maxim offers us the key to unlocking the real force of the mother tongue: if different languages influence our minds in different ways, this is not because of what our language allows us to think but rather because of what it habitually obliges us to think about. […]

For many years, our mother tongue was claimed to be a “prison house” that constrained our capacity to reason. Once it turned out that there was no evidence for such claims, this was taken as proof that people of all cultures think in fundamentally the same way. But surely it is a mistake to overestimate the importance of abstract reasoning in our lives. After all, how many daily decisions do we make on the basis of deductive logic compared with those guided by gut feeling, intuition, emotions, impulse or practical skills? The habits of mind that our culture has instilled in us from infancy shape our orientation to the world and our emotional responses to the objects we encounter, and their consequences probably go far beyond what has been experimentally demonstrated so far; they may also have a marked impact on our beliefs, values and ideologies. We may not know as yet how to measure these consequences directly or how to assess their contribution to cultural or political misunderstandings. But as a first step toward understanding one another, we can do better than pretending we all think the same.

Why Isn’t the Sky Blue?
by Tim Howard, Radiolab

Is the Sky Blue?
by Lisa Wade, PhD, Sociological Images

Even things that seem objectively true may only seem so if we’ve been given a framework with which to see it; even the idea that a thing is a thing at all, in fact, is partly a cultural construction. There are other examples of this phenomenon. What we call “red onions” in the U.S., for another example, are seen as blue in parts of Germany. Likewise, optical illusions that consistently trick people in some cultures — such as the Müller-Lyer illusion — don’t often trick people in others.

Could our ancestors see blue?
by Ellie Zolfagharifard, Daily Mail

But it’s not just about lighting conditions or optical illusions – evidence is mounting that until we have a way to describe something, we may not see that it’s there.

Fathoming the wine-dark sea
by Christopher Howse, The Spectator

It wasn’t just the ‘wine-dark sea’. That epithet oinops, ‘wine-looking’ (the version ‘wine-dark’ came from Andrew Lang’s later translation) was applied both to the sea and to oxen, and it was accompanied by other colours just as nonsensical. ‘Violet’, ioeis, (from the flower) was used by Homer of the sea too, but also of wool and iron. Chloros, ‘green’, was used of honey, faces and wood. By far the most common colour words in his reticent vocabulary were black (170 times) and white (100), followed distantly by red (13).

What could account for this alien colour-sense? It wasn’t that Homer (if Homer existed) was blind, for there are parallel usages in other Greek authors.

A Winelike Sea
by Caroline Alexander, Lapham’s Quarterly

The image Homer hoped to conjure with his winelike sea greatly depended upon what wine meant to his audience. While the Greeks likely knew of white wine, most ancient wine was red, and in the Homeric epics, red wine is the only wine specifically described. Drunk at feasts, poured onto the earth in sacred rituals, or onto the ashes around funeral pyres, Homeric wine is often mélas, “dark,” or even “black,” a term with broad application, used of a brooding spirit, anger, death, ships, blood, night, and the sea. It is also eruthrós, meaning “red” or the tawny-red hue of bronze; and aíthops, “bright,” “gleaming,” a term also used of bronze and of smoke in firelight. While these terms notably have more to do with light, and the play of light, than with color proper, Homeric wine was clearly dark and red and would have appeared especially so when seen in the terracotta containers in which it was transported. “Winelike sea” cannot mean clear seawater, nor the white splash of sea foam, nor the pale color of a clear sea lapping the shallows of a sandy shore. […]

Homer’s sea, whether háls, thálassa, or póntos, is described as misty, darkly troubled, black-dark, and grayish, as well as bright, deep, clashing, tumultuous, murmuring, and tempestuous—but it is never blue. The Greek word for blue, kuáneos, was not used of the sea until the late sixth or early fifth century BC, in a poem by the lyric poet Simonides—and even here, it is unclear if “blue” is strictly meant, and not, again, “dark”:

the fish straight up from the
dark/blue water leapt
at the beautiful song

After Simonides, the blueness of kuáneos was increasingly asserted, and by the first century, Pliny the Elder was using the Latin form of the word, cyaneus, to describe the cornflower, whose modern scientific name, Centaurea cyanus, still preserves this lineage. But for Homer kuáneos is “dark,” possibly “glossy-dark” with hints of blue, and is used of Hector’s lustrous hair, Zeus’ eyebrows, and the night.

Ancient Greek words for color in general are notoriously baffling: In The Iliad, “chlorós fear” grips the armies at the sound of Zeus’ thunder. The word, according to R. J. Cunliffe’s Homeric lexicon, is “an adjective of color of somewhat indeterminate sense” that is “applied to what we call green”—which is not the same as saying it means “green.” It is also applied “to what we call yellow,” such as honey or sand. The pale green, perhaps, of vulnerable shoots struggling out of soil, the sickly green of men gripped with fear? […]

Rather than being ignorant of color, it seems that the Greeks were less interested in and attentive to hue, or tint, than they were to light. As late as the fourth century BC, Plato named the four primary colors as white, black, red, and bright, and in those cases where a Greek writer lists colors “in order,” they are arranged not by the Newtonian colors of the rainbow—red, orange, yellow, green, blue, indigo, violet—but from lightest to darkest. And The Iliad contains a broad, specialized vocabulary for describing the movement of light: argós meaning “flashing” or “glancing white”; aiólos, “glancing, gleaming, flashing,” or, according to Cunliffe’s Lexicon, “the notion of glancing light passing into that of rapid movement,” and the root of Hector’s most defining epithet, koruthaíolos—great Hector “of the shimmering helm.” Thus, for Homer, the sky is “brazen,” evoking the glare of the Aegean sun and more ambiguously “iron,” perhaps meaning “burnished,” but possibly our sense of a “leaden” sky. Significantly, two of the few unambiguous color terms in The Iliad, and which evoke the sky in accordance with modern sensibilities, are phenomena of light: “Dawn robed in saffron” and dawn shining forth in “rosy fingers of light.”

So too, on close inspection, Homeric terms that appear to describe the color of the sea, have more to do with light. The sea is often glaukós or mélas. In Homer, glaukós (whence glaucoma) is color neutral, meaning “shining” or “gleaming,” although in later Greek it comes to mean “gray.” Mélas (whence melancholy) is “dark in hue, dark,” sometimes, perhaps crudely, translated as “black.” It is used of a range of things associated with water—ships, the sea, the rippled surface of the sea, “the dark hue of water as seen by transmitted light with little or no reflection from the surface.” It is also, as we have seen, commonly used of wine.

So what color is the sea? Silver-pewter at dawn; gray, gray-blue, green-blue, or blue depending on the particular day; yellow or red at sunset; silver-black at dusk; black at night. In other words, no color at all, but rather a phenomenon of reflected light. The phrase “winelike,” then, had little to do with color but must have evoked some attribute of dark wine that would resonate with an audience familiar with the sea—with the póntos, the high sea, that perilous path to distant shores—such as the glint of surface light on impenetrable darkness, like wine in a terracotta vessel. Thus, when Achilles, “weeping, quickly slipping away from his companions, sat/on the shore of the gray salt sea,” stretches forth his hands toward the oínopa pónton, he looks not on the enigmatic “wine-dark sea,” but, more explicitly, and possibly with more weight of melancholy, on a “sea as dark as wine.”

Ancient Greek Color Vision
by Ananda Triulzi

In his writings Homer surprises us by his use of color. His color descriptive palette was limited to metallic colors, black, white, yellowish green and purplish red, and those colors he often used oddly, leaving us with some questions as to his actual ability to see colors properly (1). He calls the sky “bronze” and the sea and sheep as the color of wine, he applies the adjective chloros (meaning green with our understanding) to honey, and a nightingale (2). Chloros is not the only color that Homer uses in this unusual way. He also uses kyanos oddly, “Hector was dragged, his kyanos hair was falling about him” (3). Here it would seem, to our understanding, that Hector’s hair was blue as we associate the term kyanos with the semi-precious stone lapis lazuli, in our thinking kyanos means cyan (4). But we cannot assume that Hector’s hair was blue, rather, in light of the way that Homer consistently uses color adjectives, we must think about his meaning, did he indeed see honey as green, did he not see the ocean as blue, how does his perception of color reflect on himself, his people, and his world.

Homer’s odd color description usage was a cultural phenomenon and not simply color blindness on his part, Pindar describes the dew as chloros, in Euripides chloros describes blood and tears (5). Empedocles, one of the earliest Ancient Greek color theorists, described color as falling into four areas, light or white, black or dark, red and yellow; Xenophanes described the rainbow as having three bands of color: purple, green/yellow, and red (6). These colors are fairly consistent with the four colors used by Homer in his color description, this leads us to the conclusion that all Ancient Greeks saw color only in the premise of Empedocles’ colors, in some way they lacked the ability to perceive the whole color spectrum. […]

This inability to perceive something because of linguistic restriction is called linguistic relativity (7). Because the Ancient Greeks were not really conscious of seeing, and did not have the words to describe what they unconsciously saw, they simply did not see the full spectrum of color, they were limited by linguistic relativity.

The color spectrum aside, it remains to explain the loose and unconventional application of Homer and other’s limited color descriptions, for an answer we look to the work of Eleanor Irwin. In her work, Irwin suggests that besides perceiving less chromatic distinction, the Ancient Greeks perceived less division between color, texture, and shadow, chroma may have been difficult for them to isolate (8). For the Ancient Greeks, the term chloros has been suggested to mean moistness, fluidity, freshness and living (9). It also seems likely that Ancient Greek perception of color was influenced by the qualities that they associated with colors, for instance the different temperaments being associated with colors probably affected the way they applied color descriptions to things. They didn’t simply see color as a surface, they saw it as a spirited thing and the word to describe it was often fittingly applied as an adjective meaning something related to the color itself but different from the simplicity of a refined color.

The Wine-Dark Sea: Color and Perception in the Ancient World
by Erin Hoffman

Homer’s descriptions of color in The Iliad and The Odyssey, taken literally, paint an almost psychedelic landscape: in addition to the sea, sheep were also the color of wine; honey was green, as were the fear-filled faces of men; and the sky is often described as bronze. […]

The conspicuous absence of blue is not limited to the Greeks. The color “blue” appears not once in the New Testament, and its appearance in the Torah is questioned (there are two words argued to be types of blue, sappir and tekeleth, but the latter appears to be arguably purple, and neither color is used, for instance, to describe the sky). Ancient Japanese used the same word for blue and green (青 Ao), and even modern Japanese describes, for instance, thriving trees as being “very blue,” retaining this artifact (青々とした: meaning “lush” or “abundant”). […]

Blue certainly existed in the world, even if it was rare, and the Greeks must have stumbled across it occasionally even if they didn’t name it. But the thing is, if we don’t have a word for something, it turns out that to our perception—which becomes our construction of the universe—it might as well not exist. Specifically, neuroscience suggests that it might not just be “good or bad” for which “thinking makes it so,” but quite a lot of what we perceive.

The malleability of our color perception can be demonstrated with a simple diagram, shown here as figure six, “Afterimages”. The more our photoreceptors are exposed to the same color, the more fatigued they become, eventually giving out entirely and creating a reversed “afterimage” (yellow becomes blue, red becomes green). This is really just a parlor trick of sorts, and more purely physical, but it shows how easily shifted our vision is; other famous demonstrations like this selective attention test (its name gives away the trick) emphasize the power our cognitive functions have to suppress what we see. Our brains are pattern-recognizing engines, built around identifying things that are useful to us and discarding the rest of what we perceive as meaningless noise. (And a good thing that they do; deficiencies in this filtering, called sensory gating, are some of what cause neurological dysfunctions such as schizophrenia and autism.)

This suggests the possibility that not only did Homer lack a word for what we know as “blue”—he might never have perceived the color itself. To him, the sky really was bronze, and the sea really was the same color as wine. And because he lacked the concept “blue”—therefore its perception—to him it was invisible, nonexistent. This notion of concepts and language limiting cognitive perception is called linguistic relativism, and is typically used to describe the ways in which various cultures can have difficulty recalling or retaining information about objects or concepts for which they lack identifying language. Very simply: if we don’t have a word for it, we tend to forget it, or sometimes not perceive it at all. […]

So, if we’re all synesthetes, and our minds are extraordinarily plastic, capable of reorienting our entire perception around the addition of a single new concept (“there is a color between green and violet,” “schizophrenia is much more common than previously believed”), the implications of Homer’s wine-dark sea are rich indeed.

We are all creatures of our own time, our realities framed not by the limits of our knowledge but by what we choose to perceive. Do we yet perceive all the colors there are? What concepts are hidden from us by the convention of our language? When a noblewoman of Syracuse looked out across the Mare Siculum, did she see waves of Bacchanalian indigo beneath a sunset of hammered bronze? If a seagull flew east toward Thapsus, did she think of Venus and the fall of Troy?

The myriad details that define our everyday existence may define also the boundaries of our imagination, and with it our dreams, our ethics. We are lenses moving through time, beings of color and shadow.

Evolution of the Color Blue
by Dov Michaeli MD, PhD, The Doctor Weighs In

Why were black, white, and red the first colors to be perceived by our forefathers? The evolutionary explanation is quite straightforward: ancient humans had to distinguish between night and day. And red is important for recognizing blood and danger. Even today, in us moderns, the color red causes an increase in skin galvanic response, a sign of tension and alarm. Green and yellow entered the vocabulary as the need to distinguish ripe fruit from unripe, grasses that are green from grasses that are wilting, etc. But what is the need for naming the color blue? Blue fruits are not very common, and the color of the sky is not really vital for survival.

The crayola-fication of the world: How we gave colors names, and it messed with our brains (part I)
by Aatish Bhatia, Empirical Zeal

Some languages have just three basic colors, others have 4, 5, 6, and so on. There’s even a debate as to whether the Pirahã tribe of the Amazon have any specialized color words at all! (If you ask a Pirahã tribe member to label something red, they’ll say that it’s blood-like).

But there’s still a pattern hidden in this diversity. […] You start with a black-and-white world of darks and lights. There are warm colors, and cool colors, but no finer categories. Next, the reds and yellows separate away from white. You can now have a color for fire, or the fiery color of the sunset. There are tribes that have stopped here. Further down, blues and greens break away from black. Forests, skies, and oceans now come of their own in your visual vocabulary. Eventually, these colors separate further. First, red splits from yellow. And finally, blue from green.
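The sequence described above is essentially a branching hierarchy, and it can help to see it written down as a simple data structure. The sketch below is my own illustration of that ladder in Python; the stage labels and category names are invented for readability, not data from Berlin and Kay or from any study quoted in this post, and the later excerpts here argue that the ladder itself does not hold up cross-culturally.

# A schematic, illustrative encoding of the unfolding order summarized above:
# which color distinctions a language is said to name at each stage. The
# groupings follow the popular summary only; later excerpts in this post
# dispute the whole ladder.
STAGE_INVENTORY = {
    1: ["dark/cool", "light/warm"],                          # a black-and-white world
    2: ["dark/cool", "light", "red-or-yellow"],              # warm hues split from light
    3: ["dark", "light", "red-or-yellow", "grue"],           # greens/blues split from dark
    4: ["dark", "light", "red", "yellow", "grue"],           # red splits from yellow
    5: ["dark", "light", "red", "yellow", "green", "blue"],  # blue finally splits from green
}

def newly_named(stage: int) -> set:
    """Which distinctions first appear at a given stage, on this scheme."""
    if stage == 1:
        return set(STAGE_INVENTORY[1])
    return set(STAGE_INVENTORY[stage]) - set(STAGE_INVENTORY[stage - 1])

if __name__ == "__main__":
    for stage in sorted(STAGE_INVENTORY):
        print(f"stage {stage}: {STAGE_INVENTORY[stage]} (new: {sorted(newly_named(stage))})")

Reading the inventories top to bottom reproduces the order in the excerpt: darks and lights first, then the warm colors, then a combined green-blue, and blue separating from green only at the end.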

The crayola-fication of the world: How we gave colors names, and it messed with our brains (part II)
by Aatish Bhatia, Empirical Zeal

The researchers found that there is a real, measurable difference in how we perform on these two tasks. In general, it takes less time to identify that odd blue square compared to the odd green one. This makes sense to anyone who’s ever tried looking for a tennis ball in the grass. It’s not that hard, but I’d rather the ball be blue. In one case you are jumping categories (blue versus green), and in the other, staying within a category (green versus green).

However, and this is where things start to get a bit strange, this result only holds if the differently colored square was in the right half of the circle. If it was in the left half (as in the example images above), then there’s no difference in reaction times – it takes just as long to spot the odd blue as the odd green. It seems that color categories only matter in the right half of your visual field! […]

The crucial point is that everything that we see in the right half of our vision is processed in the left hemisphere of our brain, and everything we see in the left half is processed by the right hemisphere. And for most of us, the left brain is stronger at processing language. So perhaps the language savvy half of our brain is helping us out. […]

But how do we know that language is the key here? Back to the previous study. The researchers repeated the color circle experiment, but this time threw in a verbal distraction. The subjects were asked to memorize a word before each color test. The idea was to keep their language circuits distracted. And at the same time, other subjects were shown an image to memorize, not a word. In this case, it’s a visual distraction, and the language part of the brain needn’t be disturbed.

They found that when you’re verbally distracted, it suddenly becomes harder to separate blue from green (you’re slower at straddling color categories). In fact, the results showed that people found this more difficult than separating two shades of green. However, if the distraction is visual, not verbal, things are different. It becomes easy to spot the blue among green, so you’re faster at straddling categories.

All of this is only true for your left brain. Meanwhile, your right brain is rather oblivious to these categories (until, of course, the left brain bothers to inform it). The conclusion is that language is somehow enhancing your left brain’s ability to discern different colors with different names. Cultural forces alter our perception in ever so subtle a way, by gently tugging our visual leanings in different directions.
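To make the logic of these reaction-time experiments concrete, here is a minimal sketch with simulated, made-up numbers. Every value in it is hypothetical; the sketch only encodes the qualitative pattern described above (a between-category advantage confined to the right visual field, which disappears under verbal but not visual distraction), so it illustrates the comparison being made rather than reproducing any actual dataset.

# Illustrative sketch only: simulated reaction times (ms) for a lateralized
# color-search task, loosely modeled on the design described above.
# All numbers are invented; they merely encode the qualitative pattern
# reported (a category advantage in the right visual field, abolished by
# verbal but not visual interference).
import random
import statistics

random.seed(0)

def simulate_rt(between_category: bool, right_visual_field: bool,
                verbal_interference: bool, n: int = 500) -> list:
    """Return n simulated reaction times for one condition.
    verbal_interference=False stands for either no distraction or a
    purely visual one, which leaves the language circuits free."""
    base = 520.0          # hypothetical baseline RT in ms
    advantage = 0.0
    # The category advantage shows up only in the right visual field
    # (left hemisphere), and only when language circuits are free.
    if between_category and right_visual_field and not verbal_interference:
        advantage = 30.0
    return [random.gauss(base - advantage, 40.0) for _ in range(n)]

def mean_rt(**condition) -> float:
    return statistics.mean(simulate_rt(**condition))

if __name__ == "__main__":
    for rvf in (True, False):
        for verbal in (False, True):
            between = mean_rt(between_category=True, right_visual_field=rvf,
                              verbal_interference=verbal)
            within = mean_rt(between_category=False, right_visual_field=rvf,
                             verbal_interference=verbal)
            print(f"right visual field={rvf!s:5}  verbal distraction={verbal!s:5}  "
                  f"category advantage = {within - between:5.1f} ms")

Run as is, the only cell with a sizable advantage is the right-visual-field, no-verbal-distraction one, which is the shape of the result the excerpt describes.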

Color categories: Confirmation of the relativity hypothesis.
by Debi Roberson, Jules Davidoff, Ian R. L. Davies, & Laura R. Shapiro

In a category-learning paradigm, there was no evidence that Himba participants perceived the blue–green region of color space in a categorical manner. Like Berinmo speakers, they did not find this division easier to learn than an arbitrary one in the center of the green category. There was also a significant advantage for learning the dumbu–burou division over the yellow–green division. It thus appears that CP for color category boundaries is tightly linked to the linguistic categories of the participant.

Knowing color terms enhances recognition: Further evidence from English and Himba
by Julie Goldstein, Jules B. Davidoff, & Debi Roberson, JECP

Two experiments attempted to reconcile discrepant recent findings relating to children’s color naming and categorization. In a replication of Franklin and colleagues (Journal of Experimental Child Psychology, 90 (2005) 114–141), Experiment 1 tested English toddlers’ naming and memory for blue–green and blue–purple colors. It also found advantages for between-category presentations that could be interpreted as support for universal color categories. However, a different definition of knowing color terms led to quite different conclusions in line with the Whorfian view of Roberson and colleagues (Journal of Experimental Psychology: General, 133 (2004) 554–571). Categorical perception in recognition memory was now found only for children with a fuller understanding of the relevant terms. It was concluded that color naming can both underestimate and overestimate toddlers’ knowledge of color terms. Experiment 2 replicated the between-category recognition superiority found in Himba children by Franklin and colleagues for the blue–purple range. But Himba children, whose language does not have separate terms for green and blue, did not show an across-category advantage for that set; rather, they behaved like English children who did not know their color terms.

The Effects of Color Names on Color Concepts, or Like Lazarus Raised from the Tomb
by Chris, ScienceBlogs

It’s interesting that the Berinmo and Himba tribes have the same number of color terms, as well, because that rules out one possible alternative explanation of their data. It could be that as languages develop, they develop a more sophisticated color vocabulary, which eventually approximates the color categories that are actually innately present in our visual systems. We would expect, then, that two languages that are at similar levels of development (in other words, they both have the same number of color categories) would exhibit similar effects, but the speakers of the two languages remembered and perceived the colors differently. Thus it appears that languages do not develop towards any single set of universal color categories. In fact, Roberson et al. (2004) reported a longitudinal study that implies that exactly the opposite may be the case. They found that children in the Himba tribe, and English-speaking children in the U.S., initially categorized color chips in a similar way, but as they grew older and more familiar with the color terms of their languages, their categorizations diverged, and became more consistent with their color names. This is particularly strong evidence that color names affect color concepts.

Forget the dress; what color did early Israelites see when they looked up to the sky?
by David Streever, Episcopal Cafe

The children of the Himba were able to differentiate between many more shades of green than their English counterparts, but did not recognize the color blue as being distinct from green. The research found that the 11 basic English colors have no basis in the visual system, lending further credence to the linguistic theories of Deutscher, Geiger, Gladstone, and other academics.

Colour Categories as Cultural Constructs
by Jules Davidoff, Artbrain

This is a group of people in Namibia who were asked to do some color matching and similarity judgments for us. It’s a remote part of the world, but not quite so remote that somebody hasn’t got the t-shirt, but it’s pretty remote. That’s the sort of environment they live in, and these are the youngsters that I’m going to show you some particular data on. They are completely monolingual in their own language, which has a tremendous richness in certain types of terms, in cattle terms (I can’t talk about that now), but has a dramatic lack in color terms. They’ve only got five color terms. So all of the particular colors of the world, and this is an illustration which can go from white to black at the top, red to yellow, green, blue, purple, back to red again, if this was shown in terms of the whole colors of the spectrum, but they only have five terms. So they see the world as, perhaps differently than us, perhaps slightly plainer.

So we looked at these young children, and we showed them a navy blue color at the top and we asked them to point to the same color again from another group of colors. And those colors included the correct color, but of course sometimes the children made mistakes. What I want to show was that the English children and the Himba children, these people are the Himba of Northwest Namibia, start out from the same place, they have this undefined color space in which, at the beginning of the testing, T1, they make errors in choosing the navy blue, sometimes they’ll choose the blue, sometimes they’ll choose the black, sometimes they’ll choose the purple. Now the purple one, actually if you did a spectral analysis, the blue and the purple, the one on the right, are the closest. And as you can see, as the children got older, the most common error, both for English children and the Himba children, is the increase (that’s going up on the graph) of the purple mistakes. But, their language, the Himba language, has the same word for blue as for black. We, of course, have the same word for the navy blue as the blue on the left, only as the children get older, three or four, the English children only ever confuse the navy blue to the blue on the left, whereas the Himba children confuse the navy blue with the black.

So, what’s happening? Someone asked yesterday whether the Sapir-Whorf Hypothesis had any currency. Well, if it has a little bit of currency, it has it certainly here, in that what is happening, because the names of colors mean different things in the different cultures, because blue and black are the same in the Himba language, the actual similarity does seem to have been altered in the pictorial register. So, the blues that we call blue, and the claim is that there is no natural category called blue, they were just sensations we want to group together, those natural categories don’t exist. But because we have constructed these categories, blues look more similar to us in the pictorial register, whereas to these people in Northwest Namibia, the blues and the blacks look more similar. So, in brief, I’d like to further add more evidence or more claim that we are constructing the world of colors and in some way at least our memory structures do alter, to a modest extent at least, what we’re seeing.

Hues and views
A cross-cultural study reveals how language shapes color perception.
by Rachel Adelson, APA

Not only has no evidence emerged to link the 11 basic English colors to the visual system, but the English-Himba data support the theory that color terms are learned relative to language and culture.

First, for children who didn’t know color terms at the start of the study, the pattern of memory errors in both languages was very similar. Crucially, their mistakes were based on perceptual distances between colors rather than a given set of predetermined categories, arguing against an innate origin for the 11 basic color terms of English. The authors write that an 11-color organization may have become common because it efficiently serves cultures with a greater need to communicate more precisely. Still, they write, “even if [it] were found to be optimal and eventually adopted by all cultures, it need not be innate.”

Second, the children in both cultures didn’t acquire color terms in any particular, predictable order–such as the universalist idea that the primary colors of red, blue, green and yellow are learned first.

Third, the authors say that as both Himba and English children started learning their cultures’ color terms, the link between color memory and color language increased. Their rapid perceptual divergence once they acquired color terms strongly suggests that cognitive color categories are learned rather than innate, according to the authors.

The study also spotlights the power of psychological research conducted outside the lab, notes Barbara Malt, PhD, a cognitive psychologist who studies language and thought and also chairs the psychology department at Lehigh University.

“To do this kind of cross-cultural work at all requires a rather heroic effort, [which] psychologists have traditionally left to linguists and anthropologists,” says Malt. “I hope that [this study] will inspire more cognitive and developmental psychologists to go into the field and pursue these kinds of comparisons, which are the only way to really find out which aspects of perception and cognition are universal and which are culture or language specific.”

Humans didn’t even see the colour blue until modern times, research suggests
by Fiona MacDonald, Science Alert

Another study by MIT scientists in 2007 showed that native Russian speakers, who don’t have one single word for blue, but instead have a word for light blue (goluboy) and dark blue (siniy), can discriminate between light and dark shades of blue much faster than English speakers.

This all suggests that, until they had a word for it, it’s likely that our ancestors didn’t see blue at all. Or, more accurately, they probably saw it as we do now, but they never really noticed it.

Blue was the Last Color Perceived by Humans
by Nancy Loyan Schuemann, Mysterious Universe

MRI experiments confirm that people who process color through their verbal left brains, where the names of colors are accessed, recognize them more quickly. Language molds us into the image of the culture in which we are born.

Categorical perception of color is lateralized to the right hemisphere in infants, but to the left hemisphere in adults
by A. Franklin, G. V. Drivonikou, L. Bevis, I. R. L. Davies, P. Kay, & T. Regier, PNAS

Both adults and infants are faster at discriminating between two colors from different categories than two colors from the same category, even when between- and within-category chromatic separation sizes are equated. For adults, this categorical perception (CP) is lateralized; the category effect is stronger for the right visual field (RVF)–left hemisphere (LH) than the left visual field (LVF)–right hemisphere (RH). Converging evidence suggests that the LH bias in color CP in adults is caused by the influence of lexical color codes in the LH. The current study investigates whether prelinguistic color CP is also lateralized to the LH by testing 4- to 6-month-old infants. A colored target was shown on a differently colored background, and time to initiate an eye movement to the target was measured. Target background pairs were either from the same or different categories, but with equal target-background chromatic separations. Infants were faster at initiating an eye movement to targets on different-category than same-category backgrounds, but only for targets in the LVF–RH. In contrast, adults showed a greater category effect when targets were presented to the RVF–LH. These results suggest that whereas color CP is stronger in the LH than RH in adults, prelinguistic CP in infants is lateralized to the RH. The findings suggest that language-driven CP in adults may not build on prelinguistic CP, but that language instead imposes its categories on a LH that is not categorically prepartitioned.

Categorical perception of colour in the left and right visual field is verbally mediated: Evidence from Korean
by Debi Roberson, Hyensou Pak, & J. Richard Hanley

In this study we demonstrate that Korean (but not English) speakers show Categorical perception (CP) on a visual search task for a boundary between two Korean colour categories that is not marked in English. These effects were observed regardless of whether target items were presented to the left or right visual field. Because this boundary is unique to Korean, these results are not consistent with a suggestion made by Drivonikou [Drivonikou, G. V., Kay, P., Regier, T., Ivry, R. B., Gilbert, A. L., Franklin, A. et al. (2007) Further evidence that Whorfian effects are stronger in the right visual field than in the left. Proceedings of the National Academy of Sciences 104, 1097–1102] that CP effects in the left visual field provide evidence for the existence of a set of universal colour categories. Dividing Korean participants into fast and slow responders demonstrated that fast responders show CP only in the right visual field while slow responders show CP in both visual fields. We argue that this finding is consistent with the view that CP in both visual fields is verbally mediated by the left hemisphere language system.

Linguistic Fossils of the Mind’s Eye
by Keith, UMMAGUMMA blog

The other, The Unfolding of Language (2005), deals with the actual evolution of language. […]

Yet, while erosion occurs, there is also a creative force in the human development of language. That creativity is revealed in our unique capacity for metaphor. “…metaphor is the indispensable element in the thought-process of every one of us.” (page 117) “It transpired that metaphor is an essential tool of thought, an indispensable conceptual mechanism which allows us to think of abstract concepts in terms of simpler concrete things. It is, in fact, the only way we have of dealing with abstraction.” (page 142) […]

The use of what can be called ‘nouns’ and not just ‘things’ is a fairly recent occurrence in language, reflecting a shift in human experience. This is a ‘fossil’ of linguistics. “The flow from concrete to abstract has created many words for concepts that are no longer physical objects, but nonetheless behave like thing-words in the sentence. The resulting abstract concepts are no longer thing-words, but they inherit their distribution from the thing-words that gave rise to them. A new category of words has thus emerged…which we can now call ‘noun’.” (page 246)

The way language is used, its accepted uses by people through understood rules of grammar, is the residue of collective human experience. “The grammar of a language thus comes to code most compactly and efficiently those constructions that are used most frequently…grammar codes best what it does most often.” (page 261) This is centrally why I hold the grammar of language to be almost a sacred portal into human experience.

In the 2010 work, Deutscher’s emphasis shifts to why different languages reveal that humans actually experience life differently. We do not all feel and act the same way about the things of life. My opinion is that it is a mistake to believe “humanity” thinks, feels and experiences to a high degree of similarity. The fact is language shows that, as it diversified across the earth, humanity has a multitude of diverse ways of experiencing.

First of all, “…a growing body of reliable scientific research provides solid evidence that our mother tongue can affect how we think and perceive the world.” (page 7) […]

The author does not go as far as me, nor is he as blunt; I am interjecting much of my personal beliefs in here. Still, “…fundamental aspects of our thought are influenced by cultural conventions of our society, to a much greater extent than is fashionable to admit today….what we find ‘natural’ depends largely on the conventions we have been brought up on.” (page 233) There are clear echoes of Nietzsche in here.

The conclusion is that “habits of speech can create habits of mind.” So, language affects culture fundamentally. But, this is a reciprocal arrangement. Language changes due to cultural experience yet cultural experience is affected by language.

Guy Deutscher’s Through the Language Glass
Stuart Hindmarsh, Philosophical Overview

In Through the Language Glass, Guy Deutscher addresses the question as to whether the natural language we speak will have an influence on our thought and our perception. He focuses on perceptions, and specifically the perceptions of colours and perceptions of spatial relations. He is very dismissive of the Sapir-Whorf hypothesis and varieties of linguistic relativity which would say that if the natural language we speak is of a certain sort then we cannot have certain types of concepts or experiences. For example, a proponent of this type of linguistic relativity might say that if your language does not have a word for the colour blue then you cannot perceive something as blue. Nonetheless, Deutscher argues that the natural language we speak will have some influence on how we think and see the world, giving several examples, many of which are fascinating. However, I believe that several of his arguments that dismiss views like the Sapir-Whorf hypothesis are based on serious misunderstandings.

The view that language is the medium in which conceptual thought takes place has a long history in philosophy, and this is the tradition out of which the Sapir-Whorf hypothesis was developed. […]

It is important to note that in this tradition the relation between language and conceptual thought is not seen as one in which the ability to speak a language is one capacity and the ability to think conceptually a completely separate faculty, and in which the first merely has a causal influence on the other. It is rather the view that the ability to speak a language makes it possible to think conceptually and that the ability to speak a language makes it possible to have perceptions of certain kinds, such as those in which what is perceived is subsumed under a concept. For example, it might be said that without language it is possible to see a rabbit but not possible to see it as a rabbit (as opposed to a cat, a dog, a squirrel, or any other type of thing). Thus conceptual thinking and perceptions of these types are seen not as separate from language and incidentally influenced by it but dependent on language and taking their general form from language. This does not mean that speech or writing must be taking place every time a person thinks in concepts or has these types of perception, though. To think that it must is a misunderstanding essentially the same as a common misinterpretation of Kant, which I will discuss in more detail in a later post.

While I take this to be the idea behind the Sapir-Whorf hypothesis, Deutscher evidently interprets that hypothesis as a very different kind of view. According to this view, the ability to speak a language is separate from the ability to think conceptually and from the ability to have the kinds of perceptions described above and it merely influences such thought and perception from without. Furthermore, it is not a relation in which language makes these types of thought and perception possible but one in which thought and perception are actually constrained by language. This interpretation runs through all of Deutscher’s criticisms of linguistic relativity. […]

Certainly many questionable assertions have been made based on the premise that language conditions the way that we think. Whorf apparently made spurious claims about Hopi conceptions of time. Today a great deal of dubious material is being written about the supposed influence of the internet and hypertext media on the way that we think. This is mainly inspired by Marshall McLuhan but generally lacking his originality and creativity. Nevertheless, there have been complex and sophisticated versions of the idea that the natural language that we speak conditions our thought and our perceptions, and these deserve serious attention. There are certainly more complex and sophisticated versions of these ideas than the crude caricature that Deutscher sets up and knocks down. Consequently, I don’t believe that he has given convincing reasons for seeing the relations between language and thought as limited to the types of relations in the examples he gives, interesting though they may be. For instance, he notes that the aboriginal tribes in question would have to always keep in mind where the cardinal directions were and consequently in this sense the language would require them to think a certain way.

The History and Science Behind the Color Blue
by staff, Dunn-Edwards Paints

If you think about it, there is not a lot of blue in nature. Most people do not have blue eyes, blue flowers do not occur naturally without human intervention, and blue animals are rare — bluebirds and bluejays only live in isolated areas. The sky is blue — or is it? One theory suggests that before humans had words for the color blue, they actually saw the sky as another color. This theory is supported by the fact that if you never describe the color of the sky to a child, and then ask them what color it is, they often struggle to describe its color. Some describe it as colorless or white. It seems that only after being told that the sky is blue, and after seeing other blue objects over a period of time, does one start seeing the sky as blue. […]

Scientists generally agree that humans began to see blue as a color when they started making blue pigments. Cave paintings from 20,000 years ago lack any blue color, since as previously mentioned, blue is rarely present in nature. About 6,000 years ago, humans began to develop blue colorants. Lapis, a semiprecious stone mined in Afghanistan, became highly prized among the Egyptians. They adored the bright blue color of this mineral. They used chemistry to combine the rare lapis with other ingredients, such as calcium and limestone, and generate other saturated blue pigments. It was at this time that an Egyptian word for “blue” emerged.

Slowly, the Egyptians spread their blue dyes throughout the world, passing them on to the Persians, Mesoamericans and Romans. The dyes were expensive — only royalty could afford them. Thus, blue remained rare for many centuries, though it slowly became popular enough to earn its own name in various languages.

Cognitive Variations:
Reflections on the Unity and Diversity of the Human Mind
by Geoffrey Lloyd
Kindle Locations 178-208

Standard colour charts and Munsell chips were, of course, used in the research in order to ensure comparability and to discount local differences in the colours encountered in the natural environment. But their use carried major risks, chiefly that of circularity. The protocols of the enquiry presupposed the differences that were supposed to be under investigation, and to that extent and in that regard the investigators just got out what they had put in. That is to say, the researchers presented their interviewees with materials that already incorporated the differentiations the researchers themselves were interested in. Asked to identify, name, or group different items, the respondents’ replies were inevitably matched against those differentiations. Of course the terms in which the replies were made – in the natural languages the respondents used – must have borne some relation to the differences perceived, otherwise they would not have been used in replying to the questions (assuming, as we surely may, that the questions were taken seriously and that the respondents were doing their honest best). But it was assumed that what the respondents were using in their replies were essentially colour terminologies, distinguishing hues, and that assumption was unfounded in general, and in certain cases can be shown to be incorrect.

It was unfounded in general because there are plenty of natural languages in which the basic discrimination relates not to hues, but to luminosities. Ancient Greek is one possible example. Greek colour classifications are rich and varied and were, as we shall see, a matter of dispute among the Greeks themselves. They were certainly capable of drawing distinctions between hues. I have already given one example. When Aristotle analyses the rainbow, where it is clearly hue that separates one end of the spectrum from the other, he identifies three colours using terms that correspond, roughly, to ‘red’, ‘green’, and ‘blue’, with a fourth, corresponding to ‘yellow’, which he treats (as noted) as a mere ‘appearance’ between ‘red’ and ‘green’. But the primary contrariety that figures in ancient Greek (including in Aristotle) is between leukon and melan, which usually relate not to hues so much as to luminosity. Leukos, for instance, is used of the sun and of water, where it is clearly not the case that they share, or were thought to share, the same hue. So the more correct translation of that pair is often ‘bright’ or ‘light’ and ‘dark’, rather than ‘white’ and ‘black’. Berlin and Kay (1969: 70) recognized the range of application of leukon, yet still glossed the term as ‘white’. Even more strangely they interpreted glaukon as ‘black’. That term is particularly context-dependent, but when Aristotle (On the Generation of Animals 779a26, b34 ff.) tells us that the eyes of babies are glaukon, that corresponds to ‘blue’, where melan, the usual term for ‘black’ or rather ‘dark’, is represented as its antonym, rather than its synonym, as Berlin and Kay would need it to be.

So one possible source of error in the Berlin and Kay methodology was the privileging of hue over luminosity. But that still does not get to the bottom of the problem, which is that in certain cases the respondents were answering in terms whose primary connotations were not colours at all. The Hanunoo had been studied before Berlin and Kay in a pioneering article by Conklin (1955), and Lyons (1995; 1999) has recently reopened the discussion of this material. First, Conklin observed that the Hanunoo have no word for colour as such. But (as noted) that does not mean, of course, that they are incapable of discriminating between different hues or luminosities. To do so they use four terms, mabiru, malagti, marara, and malatuy, which may be thought to correspond, roughly, to ‘black’, ‘white’, ‘red’, and ‘green’. Hanunoo was then classified as a stage 3 language, in Berlin and Kay’s taxonomy, one that discriminates between four basic colour terms, indeed those very four. (Cf. also Lucy 1992: ch. 5, who similarly criticizes taking purported colour terms out of context.)

Yet, according to Conklin, chromatic variation was not the primary basis for differentiation of those four terms at all. Rather, the two principal dimensions of variation are (1) lightness versus darkness, and (2) wetness versus dryness, or freshness (succulence) versus desiccation. A third differentiating factor is indelibility versus fadedness, referring to permanence or impermanence, rather than to hue as such.

Berlin and Kay only got to their cross-cultural universals by ignoring (they may even sometimes have been unaware of) the primary connotations of the vocabulary in which the respondents expressed their answers to the questions put to them. That is not to say, of course, that the members of the societies concerned are incapable of distinguishing colours, whether as hues or as luminosities. That would be to make the mistake that my first philosophical observation was designed to forestall. You do not need colour terms to register colour differences. Indeed Berlin and Kay never encountered – certainly they never reported – a society where the respondents simply had nothing to say when questioned about how their terms related to what they saw on the Munsell chips. But the methodology was flawed in so far as it was assumed that the replies given always gave access to a classification of colour, when sometimes colours were not the primary connotations of the vocabulary used at all.

The Language Myth:
Why Language Is Not an Instinct
by Vyvyan Evans
pp. 204-206

The neo-Whorfians have made four main criticisms of this research tradition as it relates to linguistic relativity. 33 First off, the theoretical construct of the ‘basic colour term’ is based on English. It is then assumed that basic colour terms – based on English – correspond to an innate biological specification. But the assumption that basic colour terms – based on English – correspond to universal semantic constraints, due to our common biology, biases the findings in advance. The ‘finding’ that other languages also have basic colour terms is a consequence of a self-fulfilling prophecy: as English has been ‘found’ to exhibit basic colour terms, all other languages will too. But this is no way to investigate putative cross-linguistic universals; it assumes, much like Chomsky did, that colour in all of the world’s languages will be, underlyingly, English-like. And as we shall see, other languages often do things in startlingly different ways.

Second, the linguistic analysis Berlin and Kay conducted was not very rigorous – to say the least. For most of the languages they ‘examined’, Berlin and Kay relied on second-hand sources, as they had no first-hand knowledge of the languages they were hoping to find basic colour terms in. To give you a sense of the problem, it is not even clear whether many of the putative basic colour terms Berlin and Kay ‘uncovered’ were from the same lexical class; for instance, in English, the basic colour terms – white, black, red and so on – are all adjectives. Yet, for many of the world’s languages, colour expressions often come from different lexical classes. As we shall see shortly, one language, Yélî Dnye, draws its colour terms from several lexical classes, none of which is adjectives. And the Yélî language is far from exceptional in this regard. The difficulty here is that, without a more detailed linguistic analysis, there is relatively little basis for the assumption that what is being compared involves comparable words. And, that being the case, can we still claim that we are dealing with basic colour terms?

Third, many other languages do not conceptualise colour as an abstract domain independent of the objects that colour happens to be a property of. For instance, some languages do not even have a word corresponding to the English word colour – as we shall see later. This shows that colour is often not conceptualised as a stand-alone property in the way that it is in English. In many languages, colour is treated in combination with other surface properties. For English speakers this might sound a little odd. But think about the English ‘colour’ term roan: this encodes a surface pattern, rather than strictly colour – in this case, brown interspersed with white, as when we describe a horse as ‘roan’. Some languages combine colour with other properties, such as desiccation, as in the Old Germanic word saur, which meant yellow and dry. The problem, then, is that in languages with relatively simple colour technology − arguably the majority of the world’s languages − lexical systems that combine colour with other aspects of an object’s appearance are artificially excluded from being basic colour terms – as English is being used as the reference point. And this, then, distorts the true picture of how colour is represented in language, as the analysis only focuses on those linguistic features that correspond to the ‘norm’ derived from English. 34

And finally, the ‘basic colour term’ project is flawed, in so far as it constitutes a riposte to linguistic relativity; as John Lucy has tellingly observed, linguistic relativity is the thesis that language influences non-linguistic aspects of thought: one cannot demonstrate that it is wrong by investigating the effect of our innate colour sense on language. 35 In fact, one has to demonstrate the reverse: that language doesn’t influence psychophysics (in the domain of colour). Hence, the theory of basic colour terms cannot be said to refute the principle of linguistic relativity as, ironically, it wasn’t in fact investigating it.

The neo-Whorfian critique, led by John Lucy and others, argued that, at its core, the approach taken by Berlin and Kay adopted an unwarranted ethnocentric approach that biased findings in advance. And, in so doing, it failed to rule out the possibility that what other languages and cultures were doing was developing divergent semantic systems – rather than there being a single universal system – in the domain of colour, albeit an adaptation to a common human set of neurobiological constraints. By taking the English language in general, and in particular the culture of the English-speaking peoples – the British Isles, North America and the Antipodes – as its point of reference, it not only failed to establish what different linguistic systems – especially in non-western cultures – were doing, but led, inevitably, to the conclusion that all languages, even when strikingly diverse in terms of their colour systems, were essentially English-like. 36

The Master and His Emissary: The Divided Brain and the Making of the Western World
by Iain McGilchrist
pp. 221-222

Consciousness is not the same as inwardness, although there can be no inwardness without consciousness. To return to Patricia Churchland’s statement that it is reasonable to identify the blueness of an object with its disposition to scatter electromagnetic waves preferentially at about 0.46μm, 52 to see it like this, as though from the outside, excluding the ‘subjective’ experience of the colour blue – as though to get the inwardness of consciousness out of the picture – requires a very high degree of consciousness and self-consciousness. The polarity between the ‘objective’ and ‘subjective’ points of view is a creation of the left hemisphere’s analytic disposition. In reality there can be neither absolutely, only a choice between a betweenness which acknowledges itself, and one which denies its own nature. By identifying blueness solely with the behaviour of electromagnetic particles one is not avoiding value, not avoiding betweenness, not avoiding one’s shadow being cast across the picture. One is using the inwardness of consciousness in a very specialised way to strive to empty itself as much as possible of value, of the self. The paradoxical result is an extremely partial, fragmented version of the colour blue, which is neither value-free nor independent of the self’s disposition towards its object.

p. 63

Another thought-provoking detail about sadness and the right hemisphere involves the perception of colour. Brain regions involved in conscious identification of colour are probably left-sided, perhaps because it involves a process of categorisation and naming; 288 however, it would appear that the perception of colour in mental imagery under normal circumstances activates only the right fusiform area, not the left, 289 and imaging studies, lesion studies and neuropsychological testing all suggest that the right hemisphere is more attuned to colour discrimination and perception. 290 Within this, though, there are hints that the right hemisphere prefers the colour green and the left hemisphere prefers the colour red (as the left hemisphere may prefer horizontal orientation, and the right hemisphere vertical – a point I shall return to in considering the origins of written language in Chapter 8). 291 The colour green has traditionally been associated not just with nature, innocence and jealousy but with – melancholy: ‘She pined in thought, / And with a green and yellow melancholy / She sat like Patience on a monument, / Smiling at grief ‘. 292

Is there some connection between the melancholy tendencies of the right hemisphere and the mediaeval belief that the left side of the body was dominated by black bile? Black bile was, of course, associated with melancholy (literally, Greek melan–, black + chole, bile) and was thought to be produced by the spleen, a left-sided organ. For the same reasons the term spleen itself was, from the fourteenth century to the seventeenth century, applied to melancholy; though, as if intuiting that melancholy, passion, and sense of humour all came from the same place (in fact the right hemisphere, associated with the left side of the body), ‘spleen’ could also refer to each or any of these.

Note 291

‘There are hints from many sources that the left hemisphere may innately prefer red over green, just as it may prefer horizontal over vertical. I have already discussed the language-horizontal connection. The connection between the left hemisphere and red is also indirect, but is supported by a remarkable convergence of observations from comparative neurology, which has shown appropriate asymmetries between both the hemispheres and even between the eyes (cone photoreceptor differences between the eyes of birds are consistent with a greater sensitivity to movement and to red on the part of the right eye (Hart, 2000)) and from introspective studies over the millennia in three great religions that have all converged in the same direction on an association between action, heat, red, horizontal, far etc and the right side of the body (i.e. the left cerebral hemisphere, given the decussation between cerebral hemisphere and output) compared with inaction, cold, green, vertical, near etc and the left side/right hemisphere respectively’ (Pettigrew, 2001, p. 94).

Louder Than Words:
The New Science of How the Mind Makes Meaning
by Benjamin K. Bergen
pp. 57-58

We perceive objects in the real world in large part through their color. Are the embodied simulations we construct while understanding language in black and white, or are they in color? It seems like the answer should be obvious. When you imagine a yellow trucker hat, you feel the subjective experience of yellowness that looks a lot like yellow as you would perceive it in the real world. But it turns out that color is actually a comparatively fickle visual property of both perceived and imagined objects. Children can’t use color to identify objects until about a year of age, much later than they can use shape. And even once they acquire this ability, as adults, people’s memory for color is substantially less accurate than their memory for shape, and they have to pay closer attention to detect changes in the color of objects than in their shape or location.

And yet, with all this going against it, color still seeps into our embodied simulations, at least briefly. One study looking at color used the same sentence-picture matching method we’ve been talking about. People read sentences that implied particular colors for objects. For instance, John looked at the steak on his plate implies a cooked and therefore appropriately brown steak, while John looked at the steak in the butcher’s window implies an uncooked and therefore red steak. In the key trials, participants then saw a picture of the same object, which could either match or mismatch the color implied by the sentence— that is, the steak could be red or brown. Once again, this method produced an interaction. Curiously, though, the result was slower reactions to matching-color images (unlike the faster reactions to matching shape and orientation images in the previous studies). One explanation for why this effect appears in the opposite direction is that perhaps people processing sentences only mentally simulate color briefly and then suppress color to represent shape and orientation. This might lead to slower responses to a matching color when an image is subsequently presented.

pp. 190-192

Another example of how languages make people think differently comes from color perception. Languages have different numbers of color categories, and those categories have different boundaries. For instance, in English, we make a categorical distinction between reds and pinks— we have different names for them, and we judge colors to be one or the other (we don’t think of pinks as a type of red or vice versa— they’re really different categories). And because our language makes this distinction, when we use English and we want to identify something by its color, we have to attend to where in the pink-red range it falls. But other languages don’t make this distinction. For instance, Wobé, a language spoken in Ivory Coast, only has one color category that spans English pinks and reds. So to speak Wobé, you don’t need to pay as close attention to colors in the pink-red range to identify them; all you have to do is recognize that they’re in that range, retrieve the right color term, and you’re set.

We can see this phenomenon in reverse when we look at the blue range. For the purposes of English, light blues and dark blues are all blues; perceptibly different shades, no doubt, but all blues nonetheless. Russian, however, splits blue apart in the way that we separate red and pink. There are two distinct color categories in Russian for our blues: goluboy (light blues) and siniy (dark blues). For the purposes of English, you don’t have to worry about what shade of blue something is to describe it successfully. Of course you can be more specific if you want; you can describe a shade as powder blue or deep blue, or any variety of others. But you don’t have to. In Russian, however, you do. To describe the colors of Cal or UCLA, for example, there would be no way in Russian to say they’re both blue; you’d have to say that Cal is siniy and UCLA is goluboy. And to say that, you’d need to pay attention to the shades of blue that each school wears. The words the language makes available mandate that you pay attention to particular perceptual details in order to speak.

The flip side of thinking for speaking is thinking for understanding. Each time someone describes something as siniy or goluboy in Russian, there’s a little bit more information there than when the same things are described as blue in English. So if you think about it, saying that the sky is blue in English is actually less specific than its equivalent would be in Russian— some languages provide more information about certain things each time you read or hear about them.

The fact that different languages encode different information in everyday words could have a variety of effects on how people understand those languages. For one, when a language systematically encodes something, that might lead people to regularly encode that detail as part of their embodied simulations. Russian comprehenders might construct more detailed representations of the shades of blue things than their English-comprehending counterparts. Pormpuraawans might understand language about locations by mentally representing cardinal directions in space while their English-comprehending counterparts use ego-centered mental representations to do the same thing.

Or an alternative possibility is that people might ultimately understand language about the given domain in the same way, regardless of the language, but, in order to get there, they might have to do more mental gymnastics. To get from the word blue in English to the color of the sky might take longer than to go there directly from goluboy in Russian. Or, to take another example, to construct an egocentric idea of where the bay windows are relative to you might be easier when you hear on your right than to your north.

A third possibility, and one that has caught a lot of people’s interest, is that there may be longer-term and more pervasive effects of linguistic differences on people’s cognition, even outside of language. Perhaps, for instance, Pormpuraawan speakers, by dint of years and years of having to pay attention to the cardinal directions, learn to constantly monitor them, even when they’re not using language; perhaps more so than English speakers. Likewise, perhaps the color categories your language provides affect not merely what you attend to and think about when using color words but also what differences you perceive among colors and how easily you distinguish between colors. This is the idea of linguistic relativism, that the language you speak can affect the way you think. The debate about linguistic relativism is a hot one, but the jury is still out on how and when language affects nonlinguistic thought.

All of this is to say that individual languages are demanding of their speakers. To speak and understand a language, you have to think, and languages, to some extent, dictate what things you ought to think, what things you ought to pay attention to, and how you should break the world up into categories. As a result, the routine patterns of thought that an English speaker engages in will differ from those of a Russian or Wobé or Pormpuraaw speaker. Native speakers of these languages are also native thinkers of these languages.

The First Signs: Unlocking the Mysteries of the World’s Oldest Symbols
by Genevieve von Petzinger
Kindle Locations 479-499

Not long after the people of Sima de los Huesos began placing their dead in their final resting place, another group of Homo heidelbergensis, this time in Zambia, began collecting colored minerals from the landscape around them. They not only preferred the color red, but also collected minerals ranging in hue from yellow and brown to black and even to a purple shade with sparkling flecks in it. Color symbolism— associating specific colors with particular qualities, ideas, or meanings— is widely recognized among modern human groups. The color red, in particular, seems to have almost universal appeal. These pieces of rock show evidence of grinding and scraping, as though they had been turned into a powder.

This powdering of colors took place in a hilltop cave called Twin Rivers in what is present-day Zambia between 260,000 and 300,000 years ago. 10 At that time, the environment in the region was very similar to what we find there today: humid and semitropical with expansive grasslands broken by stands of short bushy trees. Most of the area’s colorful rocks, which are commonly known as ochre, contain iron oxide, which is the mineral pigment later used to make the red paint on the walls of caves across Ice Age Europe and beyond. In later times, ochre is often associated with nonutilitarian activities, but since the people of Twin Rivers lived before the emergence of modern humans (Homo sapiens, at 200,000 years ago), they were not quite us yet. If this site were, say, 30,000 years old, most anthropologists would agree that the collection and preparation of these colorful minerals had a symbolic function, but because this site is at least 230,000 years older, there is room for debate.

Part of this uncertainty is owing to the fact that ground ochre is also useful for utilitarian reasons. It can act as an adhesive, say, for gluing together parts of a tool. It also works as an insect repellent and in the tanning of hides, and may even have been used for medicinal purposes, such as stopping the bleeding of wounds.

If the selection of the shades of ochre found at this site were for some mundane purpose, then the color shouldn’t matter, right? Yet the people from the Twin Rivers ranged out across the landscape to find these minerals, often much farther afield than necessary if they just required something with iron oxide in it. Instead, they returned to very specific mineral deposits, especially ones containing bright-red ochre, then carried the ochre with them back to their home base. This use of ochre, and the preference for certain colors, particularly bright red, may have been part of a much earlier tradition, and it is currently one of the oldest examples we have of potential symbolism in an ancestral human species.

Kindle Locations 669-683

Four pieces of bright-red ochre collected from a nearby mineral source were also found in the cave. 6 Three of the four pieces had been heated to at least 575°F in order to convert them from yellow to red. The inhabitants of Skhul had prospected the landscape specifically for yellowish ochre with the right chemical properties to convert into red pigment. The selective gathering of materials and their probable heat-treatment almost certainly indicates a symbolic aspect to this practice, possibly similar to what we saw with the people at Pinnacle Point about 30,000 years earlier. […]

The combination of the oldest burial with grave goods; the preference for bright-red ochre and the apparent ability to heat-treat pigments to achieve it; and what are likely some of the earliest pieces of personal adornment— all these details make the people from Skhul good candidates for being our cognitive equals. And they appear at least 60,000 years before the traditional timing of the “creative explosion.”

Kindle Locations 1583-1609

There is something about the color red. It can represent happiness, anger, good luck, danger, blood, heat, sun, life, and death. Many cultures around the world attach a special significance to red. Its importance is also reflected in many of the languages spoken today. Not all languages include words for a range of colors, and the simplest systems recognize only white and black, or light and dark, but whenever they do include a third color word in their language, it is always red.

This attachment to red seems to be embedded deep within our collective consciousness. Not only did the earliest humans have a very strong preference for brilliant red ochre (except for the inhabitants of Sai Island, in Sudan, who favored yellow), but even earlier ancestral species were already selecting red ochre over other shades. It may also be significant (although we don’t know how) that the pristine quartzite stone tool found in the Pit of Bones in Spain was of an unusual red hue.

This same preference for red is evident on the walls of caves across Europe during the Ice Age. But by this time, artists had added black to their repertoire and the vast majority of paintings were done in one or both of these colors. I find it intriguing that two of the three most common colors recognized and named across all languages are also the ones most often used to create the earliest art. The third shade, though well represented linguistically, is noticeably absent from Ice Age art. Of all the rock art sites currently known in Europe, only a handful have any white paint in them. Since many of the cave walls are a fairly light gray or a translucent yellowy white, it’s possible that the artists saw the background as representing this shade, or that its absence could have been due to the difficulty in obtaining white pigment: the small number of sites that do have white images all used kaolin clay to create this color. (Since kaolin clay was not as widely available as the materials for making red and black paint, it is certainly possible that scarcity was a factor in color choice.)

While the red pigment was created using ochre, the black paint was made using either ground charcoal or the mineral manganese oxide. The charcoal was usually sourced from burnt wood, though in some instances burnt bone was used instead. Manganese is found in mineral deposits, sometimes in the same vicinity as ochre. Veins of manganese can also occasionally be seen embedded right in the rock at some cave sites. Several other colors do appear on occasion— yellow and brown are the most common— though they appear at only about 10 percent of sites.

There is also a deep purple color that I’ve only ever seen in cave art in northern Spain, and even there it’s rare. La Pasiega (the site where I saw the grinding stone) has a series of paintings in this shade of violet in one section of the cave. Mixed in with more common red paintings, there are several purple signs— dots, stacked lines, rectangular grills— along with a single purple bison that was rendered in great detail (see fig. 4 in insert). Eyes, muzzle, horns— all have been carefully depicted, and yet the purple shade is not an accurate representation of a bison’s coloring. Did the artist use this color simply because it’s what he or she had at hand? Or could it be that the color of the animal was being dictated by something other than a need for this creature to be true to life? We know these artists had access to brown and black pigments, but at many sites they chose to paint animals in shades of red or yellow, or even purple, like the bison here at La Pasiega. These choices are definitely suggestive of there being some type of color symbolism at work, and it could even be that creating accurate replicas of real-life animals was not the main goal of these images.

How did American English become standardized?

Someone asked me about General American (GA) dialect, sometimes called Standard American. This person specifically asked, “In the 30’s to 60, there was the transatlantic accent, but I was wondering when general american became the norm for tv / movies?”

General American is a variant of American Midland dialect. It’s considered to have its most representative form in a small area of the far western Lower Midwest, mostly but not entirely west of the upper Mississippi River: central-to-southern Iowa, northern Missouri, eastern Nebraska, and northwestern Illinois. Major mainstream media figures such as Ronald Reagan and Walter Cronkite came from this part of the country, Illinois and Missouri respectively.

The archetype of GA in broadcasting was Edward Murrow, who was born in North Carolina but early on moved to the rhotic region of the Pacific Northwest, specifically Washington state. According to Thomas Paul Bonfiglio (Race and the Rise of Standard American, pp. 173-4), Murrow’s “nightly radio audience was estimated to be 15,000,000 listeners” and he was widely considered “the foremost American correspondent of that era.” Murrow’s career took off during WWII when America’s image of greatness finally took form (with the help of the destruction of Europe), and the voice that came to be identified with this new great America was that of GA-speaking Edward Murrow. He helped train and inspire an entire generation of broadcasters that followed him. Bonfiglio then states that,

Those who were hired and trained by Murrow in turn hired and trained Walter Cronkite, Mike Wallace, Harry Reasoner, Roger Mudd, Dan Rather, and Chet Huntley (230). Walter Cronkite, who was often characterized as the most trusted man in America, characterized himself as a “direct descendent of the Murrow tradition.”

There are variants of GA found across the Midwest, in the Far West, and along the West Coast. Many people working in radio, television, and movies speak GA—whether or not it was the dialect they spoke growing up. GA became standard for a number of reasons, besides those already mentioned.

Let me begin with a discussion of the Midwest.

The Midwest has long contained the median and mean center of the United States population. With great soil and plentiful water for agriculture and industry, it attracted much of the immigrant population from the 1800s onward. Even most people heading further west passed through this region. For this reason, the early railroads were built heavily in the Midwest. Chicago, in particular, was once the hub of America. The ‘Midwest’ is symbolically quite broad, imaginatively encompassing almost the entirety of the American interior.

The Midwest was increasingly where large audiences could be reached, an important factor in early broadcasting. Another important factor was that the area of GA is most equidistant from all other areas of the country, and so the dialect is the most familiar to most Americans—i.e., it sounds neutral, as if without accent.

Some have gone so far as to argue that GA is inherently a more ‘neutral’ accent in that it is easier for most people to speak or sing; if that were the case, it could have helped it spread more easily. Interestingly, GA is in some ways closer to early British English than is contemporary British English, as rhotic pronunciation of ‘r’ sounds used to be the norm for British English and still is for GA. Rhotic English, in the United States, is also what distinguishes (Mid-)Western dialect from Eastern and Southern dialect.

By the way, Reagan worked in Midwestern broadcast radio before he became a Hollywood actor. Strangely, quite a few cowboy actors came from or near the area of GA dialect, such as John Wayne from southern Iowa (his father having been from Illinois and his mother from Nebraska). Wayne has a way of speaking that is hard to pinpoint regionally, other than it sounding vaguely ‘Western’, definitely not Eastern or Southern.

GA took longer to take hold in entertainment media, as regional dialects remained popular in many television shows. In 1934, there was the first “syndicated programming, including The Lone Ranger and Amos ‘n’ Andy” (Radio in the United States). It was news broadcasters that helped make GA the norm for the country, although even this took a while (Bonfiglio, p. 58): “Even in the late thirties, the idea of a standard American English had not yet been located in a specific region, and a sort of linguistic relativism in the field of pronunciation prevailed.” Besides those named above, there were others such as Clifton Garrick Utley (along with his mother and father, who also worked for NBC) and Vincent Pelletier or, even over in Ohio, someone like Lowell Jackson Thomas. Midwestern broadcasters like this only gained wider national audiences starting in the 1940s, and so they helped to define the emerging perception of a Standard American or General American dialect. The world war era helped fuel the seeking of a national identity and hence a national way of speaking. It helped that Western broadcasters like Edward Murrow similarly spoke rhotic GA.

Plus, the Midwest developed the only thriving regional public radio, partly because of the large number of land grant colleges. It’s not that public radio initially was all that important nationally. But it had great influence in the region. And it probably had some later influence on the eventual establishment of National Public Radio.

Still, early broadcasters do sound different from broadcasters today. Even Cronkite in the beginning of his career had a more clipped style. This had less to do with regional dialect and maybe more to do with the medium itself at the time—as dthrasher explained: “I’d guess that the ‘50’s accent’ you hear had much to do with the technology of AM and shortwave radio. Precise diction and a somewhat clipped style for words and phrases helped to overcome the crackle and hiss of static in radio reception.” He also points out “that many movie and television stars of that era got their start in theater,” a less casual way of speaking, but I’m not sure how much influence that would have had on the field of broadcasting.

What exactly changed, besides technology, in the mid-20th century? Bonfiglio emphasizes that there was a growing desire for standardization in the 1940s. An obvious reason for this was the rise of the public school movement as part of the response to the perceived threat of ethnic immigrants who weren’t assimilating fast enough for many WASPs. As Bonfiglio writes (p. 59):

In 1944, the New York State Department of Education formed a committee to decide on standards of pronunciation to be taught in public schools (C. K. Thomas 1945). The committee was comprised of over a dozen national language experts, who decided that the pupils should all become acquainted with the three types of American pronunciation: “Eastern, Southern and General American.”

So, it wasn’t (Mid-)Westerners declaring themselves as speaking General American. Apparently, even those outside of the (Mid-)West acknowledged that there was this broadly American dialect that was neither Eastern nor Southern. But why did this matter?

The South obviously wouldn’t become the standard because it is the region that started and lost the Civil War. Besides, the South didn’t have a large concentrated population as did the North, a major reason for their having been overwhelmed by the Union army. That still leaves the upper East Coast region, as it did initially dominate early entertainment media. The mid-Atlantic consisted of a massive population, from the 1800s into the early 1900s. The problem was that this massive population was also massively diverse, with a large influx of Southern and Eastern Europeans, including many non-Protestants (Jews, Catholics, and Eastern Orthodox).

This led many to look to the (Mid-)West for a ‘real’ American identity, probably related to the growing popularity of movie Westerns and all that they mythologized in the public mind. Americans early on came to symbolize their aspirational identity with the West, the Midwest being the first American West. A state like Iowa, west of the upper Mississippi River, was a clear demarcation point for where dialect was most distinct from the East and South, a place where there were few Jews and blacks.

The rhotic dialect was quite broadly distributed in the Western United States, even being heard from a Texan like Dan Rather, though it is true his mother and her family came from Indiana—it does make me wonder what dialect he spoke as a child and young adult. It should be noted that Texas received a fair amount of German immigrants, many having passed through the Midwest before settling in Texas. Then there are other broadcasters such as Tom Brokaw from South Dakota and Peter Jennings from Canada, both areas of rhotic accent among other shared linguistic characteristics. Standard Canadian English is closely related to Standard American English and, indeed, there was much early immigration between Canada and (Mid-)Western United States.

Following the Civil War and into the 20th century, the population was simultaneously growing in the Midwest and West Coast. This represented the future of the country, not just major agricultural regions but the emergence of major industries and new centers of media.

The first movie shot in Hollywood happened in 1910. That was a silent movie, and hence accent wasn’t yet an issue. It would be a couple of decades before films with sound became common. I was reading that it was WWI that disrupted film production in other countries. With California becoming an emerging center, the studio system and star system developed there.

The numbers moving westward increased vastly following the Great Depression and the Dust Bowl. Many of those who ended up in California came from the Midwest, the area of the greatest population and the origin of what has come to be called Standard American English.

The far Middle West accent had already established itself as important. The earliest radio broadcasters that reached the largest numbers of listeners often came from the Midwest or otherwise similarly speaking regions. When so many Midwesterners moved to California, they brought their accent with them. Midwestern broadcasters like Ronald Reagan sometimes became movie stars. Consider also the stereotypical California surfer dude made famous through Hollywood movies. Many of the movie stars and movie extras were of German and Scandinavian ancestry, which had been concentrated in the Midwest. Beach movies came to replace Westerns, but I’m not sure how that might have played into changing attitudes about General American.

The boom of the defense industry and population in California after WWII made it a more important center of culture and media. California even became the center of a religious movement that would take the country by storm, the new mega-churches that reached massive television audiences. One of these California preachers was Robert H. Schuller, who was born and raised in Iowa.

I suppose it took decades for the new accent to become more common in mainstream media. By the 1990s, Standard American English definitely had won out as the new dominant accent for the country. It was becoming more common in 1980s tv, such as with Roseanne, which began in 1988. New York City is still a major media center, but it is mostly now known for print media. Even so, there remains a media nostalgia in making movies about New York City, whether or not they are still made there.

The transition to GA dominance wasn’t an accident. There were demographic reasons that made it more probable. But it must be noted that many intentionally promoted it. The Midwest represented a tradition that simultaneously included immigrant diversity and assimilation. This tradition at times was promoted quite forcefully, such as by Klansmen of the Second Klan who hated non-WASP ethnic-Americans (i.e., hyphenated Americans). Mainstream media corporations as gatekeepers were quite self-conscious in their establishing of English standardization. The media companies, as stated by Bonfiglio, went so far as to hire professionals from the early speech correction field to teach their broadcasters to speak this newly emerging mainstream standard of American English.

The person who posed the question to me about General American, followed up with this comment: “Even Rosanne doesn’t sound all GA to me. And John Goodman sounds southernish. Was just wondering. I notice some say that after 60s black and white tv it became standard. But I really don’t see that to be the case at all.”

The Roseanne cast had a diverse group of actors. Roseanne Barr was born in Utah, but when she was still young she moved to Colorado which is partly in the Midlands dialect region—her accent is a mix. Several of the other people on the show were born in the Midwest, specifically three from Illinois and one from Michigan. A few were from California and probably spoke more GA, although it’s been a long time since I’ve watched the show.

John Goodman was born in St. Louis, Missouri—what many would consider as culturally part of the Midwest, although there is a Southern influence in Missouri. I’ve even heard a Southern accent in southeast Iowa, from someone who lived just across the Mississippi River. Western Illinois and northern Missouri are part of the specific subset of Midlands dialect (i.e., pure GA) that has become so well known in the mainstream media.

My mother grew up in the Midlands region, central Indiana to be precise. Even she had a Southern-like accent when she was younger, the Hoosier accent that is akin to what is heard in the Upper (Mountain) South. She lost it early on and now speaks GA. As a speech pathologist, it was part of her job to teach students to speak GA.

I spent many formative years right in the heart of the heart of General American. Even after spending years in the South, it didn’t take long to start speaking GA once I was back in Iowa. It drove my mother wild when I picked up some Southern dialect and she would correct my language, as is her habit. Maybe she was happy when I returned to speaking solid Midwestern dialect.

About early television shows, one to consider is Happy Days. It was set in Milwaukee, Wisconsin. One of the actors was from Wisconsin. Some others were from Minnesota, Oklahoma, Illinois, and California. There were a few New Yorkers in that cast as well.

Oddly, one of its spin-off shows, Laverne & Shirley, was also supposedly set in the same Milwaukee location. But its cast was overwhelmingly from New York. Another spin-off from Happy Days was Mork & Mindy, which was supposed to be set in Boulder, Colorado. The two main actors were from Illinois and Michigan, Robin Williams being from Chicago. Of the rest of the cast, two were from Ohio, two from Texas, and one from New York.

From my childhood and young adulthood, there were popular shows like The Wonder Years. The main actor and the actress playing his mother were both from Illinois. By the time that show was on, it probably didn’t matter where actors/actresses came from. Most of them were learning to speak GA. It was probably in California, not the Midwest, where most people in entertainment media learned to speak GA. A Southerner like Stephen Colbert is a good example of someone losing a distinctly regional accent in order to speak GA, although he probably didn’t need to go to California.

If I had to guess, GA came to dominate news reporting and Hollywood movies before it came to dominate tv shows. I’m not sure why that might be. If that is the case, your guess would be as good as mine. One guess might be that tv shows never drew audiences as large, and so General American was less important. News reporting, once it became national, on the other hand, demanded an accent that was understandable to the greatest number of people. Hollywood movies likewise had larger and more diverse audiences.

According to one theory, General American simply happens to be the accent most Americans can understand the most easily and clearly. Bonfiglio, however, considers that to be an ethnocentric and racist rationalization for the dominance of the (Mid-)Western equivalent of the Aryan race, that perceived superior mix of Anglo-Saxon British and Northern European ancestries. Maybe so or maybe not.

About my mother’s career as a speech pathologist teaching ‘proper’ GA English, my interlocutor then asked the following set of questions, “Just wondering, what era was this? I just find it odd when I watch so much 80s tv and movies, GA isn’t used. What did she teach them for? And was the GA that she taught the one that you mention today? was the accent even remotely similar to what we consider GA today?”

Having been born in the 1940s, my mother started work in the late 1960s and continued until the 2000s. So, she grew up and worked in the precise period of GA dialect fully taking over.

I talked to my mother. We discussed the changes in her own speech.

She doesn’t clearly remember having a Southern-sounding accent or rather a Hoosier accent, but it clearly can be heard on an old audio of her from back in the late 1960s, in the time of her life when she had recently finished college and had begun her career as a speech pathologist.

I asked her if her professors spoke GA. She said that they probably did. She does remember that, when she was younger, she pronounced the words ‘pool’, ‘pull’, and ‘pole’ the same way. And, when she was in college, a professor corrected her for saying ‘bof’ in place of ‘both’. My mother still will occasionally fall into Hoosier dialect by saying ‘feesh’ for ‘fish’ and ‘cooshion’ for ‘cushion’; the latter happens commonly in her everyday speech.

For the most part, my mom speaks GA these days. There is no hint of a Hoosier accent. And, around strangers, she is probably more careful in not using those Hoosier pronunciations. But, even as late as the early 1980s, some people in northern Illinois told my mother that she had what to them sounded like a slight Southern accent. For the time we lived in Illinois and Iowa, we were in the area of GA which probably helped my mom lose what little she had of her childhood dialect.

I also asked my mother about her career as a speech pathologist. She said that early on she thought little about dialect, either in her own speaking or that of students. She did work for a few years in the Deep South before I was born, when my dad was stationed at a military base. She would have corrected both black and white Southern children without any thought about it. Compared to Deep Southern dialect, I’m sure my mother even when young sounded Midwestern, an approximation of the rhotic GA dialect.

It was the late 1980s when our family moved to South Carolina. My mother said that is the first time she was told to not correct the dialect of black students. She still did tell her black students the different ways to pronounce sounds and words and she modeled GA, but she couldn’t technically teach them proper English. At that time, she also wasn’t allowed to work with kids who had English as a second language, for there were separate ESL teachers. Yet, back in the early 1980s, she worked with some Hispanic students in order to teach them proper English.

Until South Carolina, she says she never considered dialect in terms of her speech work. It seems that the language professions were rather informal until later in her career. She spent the longest part of her career in South Carolina where she worked for two decades. Her field had become extremely professionalized at that point and all the language fields were territorial about the students they worked with and the type of language issues they specialized in.

So, my mother’s own way of speaking English changed over her career as the way she taught language changed. By the end of her career, she says even a speech pathologist from the South and working in the South with Southern students would have taught GA, at least to white students and probably informally to black students as well. She said that speech pathologists ended up teaching code switching, in that they taught kids that there were multiple ways of speaking words. She pointed out that many older blacks she worked with, including a principal, didn’t code switch—that makes sense, as they probably were never taught to do so.

My mother’s career wasn’t directly involved in dialect and accent. She was a speech pathologist which means she largely focused on teaching articulation. She never thought of it as teaching kids GA, even if that was the end result.

That field is interesting. When my mother started, it was called speech correction. Then early in her career it was called speech therapy. But now it is speech-language pathology. The change of name correlated to changes in what was being taught in the field.

I don’t know if General American itself changed over time. It’s interesting to note that many of the earliest speech centers and speech corrections/therapy schools in the US were in the Midwest, where many of the pioneers (e.g., Charles Van Riper) in the field came from—such as Michigan, Wisconsin, and Iowa. Right here in the town I live in, Iowa City, was one of the most influential programs and one of the main professors in that program was born in Iowa City, Dean Williams. As my mother audited one of Williams’ classes, she got to know him and he worked with my brother’s stuttering. Interestingly, Williams himself came in contact with the field because of his own childhood stuttering, when Wendell Johnson helped him. My mother heard Williams say that, while he was in the military during WWII, Johnson sent him speech journals as reading material which inspired him to enter the field when he returned after the war.

So, it appears at least some of the speech fields in the US developed in or near the area of General American dialect. Maybe that is because of the large non-English immigrant populations that settled in the Midwest. German-Americans were the largest demographic in the early 20th century and, accordingly, to mainstream WASP culture this was one of the greatest threats. Even in a college town like Iowa City, the Czechs felt compelled to start their own Catholic church because they couldn’t understand the priest at the German Catholic church. Assimilation was slow to take hold within ethnic immigrant communities. Language standardization and speech correction became a priority for the purveyors of the dominant culture.

Let me point out one thing in relation to my mother. She went to Purdue. The head of her department was Max David Steer, having been in that position from 1963 to 1970, the exact years my mother spent at Purdue. He was a New Yorker, but he got his Ph.D. from the University of Iowa here in Iowa City. Like Williams, he probably also learned under Johnson. The field was small at that time and all of these figures would have known each other.

Here is an amusing side note.

My mother began her education when the field was in transition. Speech corrections/therapy had only been a field distinct from psychology since after WWII, although the program at Purdue started the same year my mother started school, 1963. When she got her master’s degree, in 1969-70, they had just begun teaching transformational linguistic theory. She says it was highly theoretical and way over her head. Guess who was one of the major influences on this development: the worldwide-infamous left-winger, Noam Chomsky. So, my mother learned a bit about Chomskyan linguistic theory back in the day.

By the way, listening to Chomsky speak, it definitely is more or less GA. He grew up in Pennsylvania. It was Pennsylvanian culture that some argue was the greatest influence on Midwestern culture. This is because so many early immigrants entered the United States through Pennsylvania and from there settled in the Midwest. But there is a definite accent that can be found among many Pennsylvanian natives. It’s possible that Chomsky picked up the GA dialect later in life. Anyway, he personifies the neutral/objective-sounding intellectuality of GA in its most standardized mainstream form—so straightforward and unimposing, at least in the way Chomsky speaks it.

I get the sense that, going back far enough, few overtly worried about standardized English. It was simply considered proper English, at least by the mid-20th century. I have no idea when it first became considered proper English in the US. If I had to hazard a guess, the world war era probably helped to establish and spread General American since so many soldiers would have come from the (Mid-)West, the greatest proportion of population in the country—larger than the Southern, Mid-Atlantic, and Northeastern populations combined. It might be similar to how a distinct Southern accent didn’t exist until the Civil War when Southern soldiers fought together and came to share a common identity. Edward Murrow, of course, played a role as the manly voice of WWII describing firsthand accounts of fighting and bombings to the American public back at home.

Whether or not it deserves this prominent position, I suspect General American dialect is here to stay. To most people of this country and around the world, this dialect represents American society. It has become not just dominant here but in most places where English is spoken.

GA has even come to be promoted in the non-entertainment media of the British Broadcasting Corporation (BBC), specifically for news shows directed at the non-British, as the BBC reaches an international audience. Hollywood has, of course, spread GA English to other countries. So have video games, as the largest consumers of this product are Americans, which creates a bias in the entire industry. More English-speakers in the world have a GA dialect than any other dialect.

General American has become the unofficial standard of English almost everywhere. It is the English dialect that most people can easily understand and not recognize as being a dialect.

Were cave paintings an early language?

Elizabeth Dodd, from In the Mind’s Eye, discussed Julian Jaynes’ theory of bicameralism. She thinks it falls short, in particular because of new data about early human development.

That is a fair criticism for any older book that is inherently limited to what was available at the time it was written. In the past few decades, a ton of new info has become available, both through archaeology and translated texts.

Rather than Jaynes, she prefers another theorist (Kindle Locations 350-355):

Besides, I’m more convinced by another scholar, Robbins Burling, who points out that the growth of the brain began two million years ago, from 600 cubic centimeters to modern humans’ doubled capacity of at least 1,200 cubic centimeters - in culinary measurement, we have today about two and a half cups of brain in the curved bowl of our skull. As he notes, from its Australopithecine beginnings the hominid brain had very nearly reached our own modern size long before the archaeological record reflects great innovations in tool production. Our changing brains weren’t littering the landscape with evidence of flourishing technological innovation. So what were we doing with those bigger, more complex neural capacities? Talking, he says. Our brains grew as both natural and sexual selection guided our species toward ever increasing capacities for language - both comprehension and speech.

I had a thought about language. Genevieve von Petzinger studied the earliest cave paintings and claims to have found a common set of geometric designs, 32 of them to be precise. She speculates that they were used to communicate basic common meanings.

I’m not sure it would have been quite as complex as something like hobo symbols. It could have been much simpler, along the lines of how prairie dogs give names to things in their immediate environment, including individual people who visit regularly.

What prairie dogs have is a basic set of nouns and that is it, as far as we know. Other animals like whales will call each other by name. Plus, there are animals like dogs that can understand simple commands. Even my cats can comprehend the emotion behind my words and, if you’re persistent enough, cats can be taught to respond to simple commands as well.

Is this complex enough to be called language? Is language more than merely naming a few things or responding to simple commands?

Petzinger points out that the ancient symbols weren’t an alphabet or anything along those lines. She also doesn’t think they were abstract symbols. Most likely, they represented concrete things in the world and maybe were used as basic counting marks. If these people had language, one might expect these symbols to already be developing some of the qualities of an alphabet or of abstraction. But they appear to be extremely concrete, maybe with some limited narrative elements.

These cave paintings are from the ice age and the period following. The oldest are from around 40,000 years ago. That is a far cry from the couple million years ago that Robbins Burling is talking about. If humans were talking that far back, why didn’t they leave any signs of language? As for the rock paintings, Dodd thinks they do demonstrate language mastery (Kindle Locations 363-365):

By the Upper Paleolithic, when we finally see the great painted caves and sculpted figurines of the Aurignacian culture and those that followed, the artwork suggests a level of mythic and symbolic thinking that could not have been possible without language. The images, I feel certain, point to narrative, and one cannot tell stories with only a rudimentary lexicon.

Maybe… or maybe not. It’s highly speculative. But, if so, the narrative would be key. Is narrative the tipping point for the formation of actual language? Narrative would be the foundation for verbs, beyond the mere naming of nouns. It would also indicate incipient complex thought based on awareness of temporality and possibly causality (Kindle Locations 355-359):

“Perhaps language confirms, rather than creates, a view of the world,” he reasons. Syntax often reflects an iconic understanding of the relation among agents and goals (often through grammatical subjects and objects); our ability to perceive patterns and to “read” or “hear” the world precedes our induction into any specific language form. “We seem to understand the world around us as a collection of objects that act on each other in all sorts of ways,” he says. “If our minds were constructed so as to let us interpret the world in this way, that would be quite enough to account for the structure of our sentences.”

What kind of consciousness, mentality, or worldview would that indicate?

Language and Knowledge, Parable and Gesture

“Man exists in language like a fly trapped in a bottle: that which it cannot see is precisely that through which it sees the world.”
~ Ludwig Wittgenstein

“As William Edwards Deming famously demonstrated, no system can understand itself, and why it does what it does, including the American social system. Not knowing shit about why your society does what it [does] makes for a pretty nasty case of existential unease. So we create institutions whose function is to pretend to know, which makes everyone feel better.”
~ Joe Bageant, America: Y Ur Peeps B So Dum?

“One important characteristic of language, according to Agamben, is that it is based on the presupposition of a subject to which it refers. Aristotle argued that language, ‘saying something about something’, necessarily brings about a distinction between a first ‘something’ (a subject) and a second ‘something’ (a predicate). And this is meaningful only if the first ‘something’ is presupposed. This subject, the immediate, the non-linguistic, is a presupposition of language. At the same time, language seems to precede the subject it presupposes; the world can only be known through language. That there is language that gives meaning and facilitates the transmission of this meaning is a presupposition that precedes every communication because communication always begins with language. 2

“Agamben compares our relationship with language with Wittgenstein’s image of a fly in a bottle: ‘Man exists in language like a fly trapped in a bottle: that which it cannot see is precisely that through which it sees the world.’ 3 According to Agamben, contemporary thought has recognized that we are imprisoned within the limits of the presuppositional structure of language, within this glass. But contemporary thought has not seen that it is possible to leave the glass (PO, 46).

Our age does indeed stand in front of language just as the man from the country in the parable stands in front of the door of the Law. What threatens thinking here is the possibility that thinking might find itself condemned to infinite negotiations with the doorkeeper or, even worse, that it might end by itself assuming the role of the doorkeeper who, without really blocking the entry, shelters the Nothing onto which the door opens. (HS, 54)

[ . . . ]

“What is compared in the parable, as for example in the messianic parables in the gospel of Matthew, is not only the kingdom of God with the terms used in the parables (a field in which wheat and weeds are mixed), but the discourse about the kingdom and the kingdom itself. In that sense, the messianic parables in Matthew are parables about language, for what is meant is language itself. And this, according to Agamben, is also the meaning of Kafka’s parable ‘On Parables’. Kafka is looking for a way beyond language that is only possible by becoming language itself; beyond the distinction between sign and what is signified:

If you’d follow the parables, you’d become parables yourselves and with that, free of the everyday struggle. (TR, 43)

“What Kafka indicates here, according to Agamben, is an indistinguishability between being and language. What does this process of becoming language look like? Agamben sees a hint of this in one of Kafka’s journal entries. On 18 October 1921, Kafka wrote in his journal:

Life calls again. It is entirely conceivable that life’s splendor forever lies in wait about each one of us in all its fullness, but veiled from view, deep down, invisible, far off. It is there, though, not hostile, not reluctant, not deaf. If you summon it by the right word, by its right name, it will come. This is the essence of magic, which does not create but summons.

“According to Agamben, this refers to an old tradition followed by the Kabbalists in which magic is, in essence, a science of secret names. Everything has its apparent name and a hidden name, and those who know this hidden name have power over the life and death of those to whom this name belongs. But, Agamben proposes, there is also another tradition that holds that this secret name is not so much the key by which the magician can gain power over a subject as it is a monogram by which things can be liberated from language. The secret name is the name the being received in Eden. If it is spoken aloud, all its apparent names fall away; the whole Babel of names disappears. ‘To have a name is to be guilty. And justice, like magic, is nameless’ (P, 22). The secret name is the gesture that restores the creature to the unexpressed. Thus, Agamben argues, magic is not the secret knowledge of names and their transcendent meaning, but a breaking free from the name. ‘Happy and without a name, the creature knocks at the gates of the land of the magi, who speaks in gestures alone’ (P, 22).”
~ Anke Snoek, Agamben’s Joyful Kafka (pp. 110-118, Kindle Locations 2704-2883)

The Case of the Missing Concepts

“Hypocognition, in cognitive linguistics, means missing and being unable to communicate cognitive and linguistic representations because there are no words for particular concepts.”

* * *

“The enthusiasm for evidence-based medicine (EBM) has not been accompanied by the same success in bridging the gap between theory and practice. This paper advances the hypothesis that the phenomenon psychologists call hypocognition may hinder the development of EBM. People tend to respond to frames rather than to facts. To be accepted, a theory, however robust, must fit into a person’s mental framework. The absence of a simple, consolidated framework is referred to as hypocognition. Hypocognition might limit the application of EBM in three ways. First, it fails to provide an analytical framework by which to orient the physician in the direction of continuous medical development and variability in individual people’s responses. Second, little emphasis is placed on teaching clinical reasoning. Third, there is an imbalance between the enormous mass of available information and the practical possibilities. Possible solutions are described. We not only need more evidence to help clinicians make better decisions, but also need more research on why some clinicians make better decisions than others, how to teach clinical reasoning, and whether computerised supports can promote a higher quality of individualised care.”

* * *

“Americans, especially, suffer from what linguists call hypocognition: the lack of a core concept we need in order to thrive. The missing concept is of democracy as a way of life; democracy not as a set system–something done to us, for us, finished and done–but as a set of system values that usefully apply in all arenas of life. In the dominant, failing idea of democracy, society is a subset of economic life. To make the needed planetary turn to life, we must envision the opposite: economic life re-embedded in society guided by shared human values, including fairness, inclusion, and mutual accountability.”

* * *

“Frances Moore Lappe (Hope’s Edge, 2002) makes the case that often politicians and corporations use terms that leave us suffering from “hypocognition.” Hypocognition results when a term is used to conjure up all-positive images to prevent us from understanding what is really going on. For example, hypocognition makes it hard for the public to believe there can be anything wrong with “globalism” or “free trade,” which sound like the apple pie and motherhood of the 21st century. It is easy for the press to portray those who protest against “free trade” as fringe lunatics.

“Ms. Lappe coined the term “primitive marketism” as a more appropriate name for what has become the accepted standard of world trade over the last 20 years — that the single principle of highest return to existing wealth is the sole driver of the world-wide system of production and exchange. That leaves cultural integrity, human rights, environmental protection, and even the ability of people to feed themselves as inconsequential to multinational corporations reaching around the world for opportunities for the highest return to existing wealth.

“As much as the term “primitive marketism” helps identify problems inherent to the way global trade is structured today, it takes a bit of bending of the mind and tongue to use it. It seems to me that a term that more immediately and clearly identifies where we are headed with world trade — a term which leaves no room for hypocognition — is “corporate colonialism.””

* * *

“This perspective on reason matters to the discussion in this forum about global warming, because many people engaged in environmentalism still have the old, false view of reason and language. Folks trained in public policy, science, economics, and law are often given the old, false view. As a result, they may believe that if you just tell people the facts, they will reason to the right conclusion. What actually happens is that the facts must make sense in terms of their system of frames, or they will be ignored. The facts, to be communicated, must be framed properly. Furthermore, to understand something complex, a person must have a system of frames in place that can make sense of the facts. In the case of global warming, all too many people do not have such a system of frames in the conceptual systems in their brains. Such frame systems have to be built up over a period of time. This has not been done.” (pp. 72-73)

“Have you ever wondered why conservatives can communicate easily in a few words, while liberals take paragraphs? The reason is that conservatives have spent decades, day after day building up frames in people’s brains, and building a better communication system to get their ideas out in public. Progressives have not done that.” (p. 73)

“The right language is absolutely necessary for communicating ‘the real crisis.’” (p. 74)

“‘Hypocognition’ is the lack of ideas we need. We are suffering from massive hypocognition in the case of the environment.” (p. 76)

“An important frame is in the throes of being born: The Regulated Commons – the idea of common, non-transferable ownership of aspects of the natural world, such as the atmosphere, the airwaves, the waterways, the oceans, and so on.” (p. 78)

* * *

“Not all corrections to hypocognition have to be heavy stuff, like grief and scientific advancement. One of my favorite authors tried to give everything a word. Douglas Adams, author of the Hitchhiker’s Guide to the Galaxy series, put out a book with John Lloyd called The Meaning of Liff. It started as a slightly drunken party game, during which Adams and his friends picked out the names of English towns and pretended the names were words that they had to define. As they were coming up with different definitions, they realized that, as humans, they all shared common experiences that don’t have names.

“My favorite word of the book is “shoeburyness,” which is defined as “the vague uncomfortable feeling you get when sitting on a seat which is still warm from somebody else’s bottom.” Everyone has felt that. One author I read went to a strict college at which men were forbidden to sit in a seat directly after a woman vacated it, because he would feel her residual body heat and the dean of women considered that too sexual. But no one came up with a word for it. Once there is a word for it, people can begin to refer to it. What concept do you think needs a word? I nominate “splincing” — when you’re completely in the wrong, and hate it, and you daydream about someone wronging you so you can feel righteously aggrieved about something.”