The Psychology and Anthropology of Consciousness

“There is in my opinion no tenable argument against the hypothesis that psychic functions which today seem conscious to us were once unconscious and yet worked as if they were conscious. We could also say that all the psychic phenomena to be found in man were already present in the natural unconscious state. To this it might be objected that it would then be far from clear why there is such a thing as consciousness at all.”
~ Carl Jung, On the Nature of the Psyche 

An intriguing thought from Jung. Many have considered this possibility, and it leads to questions about what consciousness is and what purpose it serves. A recent exploration is The User Illusion by Tor Nørretranders, in which the author proposes that consciousness doesn't determine what we do so much as choose what we don't do: it casts the final vote before action is taken, but action itself requires no consciousness. As such, consciousness is useful and advantageous, just not absolutely necessary. It keeps you from eating that second cookie or saying something cruel.

Another related perspective is that of Julian Jaynes' bicameral mind theory. I say related because Jaynes influenced Nørretranders. As for Jung, Jaynes was aware of his writings and disagreed with some of his ideas: “Jung had many insights indeed, but the idea of the collective unconscious and of the archetypes has always seemed to me to be based on the inheritance of acquired characteristics, a notion not accepted by biologists or psychologists today.” (Quoted by Philip Ardery in “Ramifications of Julian Jaynes’s theory of consciousness for traditional general semantics.”) What these three thinkers agree about is that the unconscious mind is much more expansive and capable, more primary and important, than is normally assumed. There is so much more to our humanity than the limits of interiorized self-awareness.

What interested me was the anthropological angle. Here is something I wrote earlier:

“Julian Jaynes had written about the comparison of shame and guilt cultures. He was influenced by E. R. Dodds (and Bruno Snell). Dodds in turn based some of his own thinking about the Greeks on the work of Ruth Benedict, who originated the shame and guilt culture comparison in her writings on Japan and the United States. Benedict, like Margaret Mead, had been taught by Franz Boas. Boas developed some of the early anthropological thinking that saw societies as distinct cultures.”

Boas founded a school of thought about the primacy of culture, the first major challenge to race realism and eugenics. He gave the anthropology field new direction and inspired a generation of anthropologists. This was the same era during which Jung was formulating his own views.

As with Jung before him, Jaynes drew upon the work of anthropologists. Both also influenced anthropologists, but Jung’s influence of course came earlier. Even though some of these early anthropologists were wary of Jungian psychology, such as archetypes and collective unconscious, they saw personality typology as a revolutionary framework (those influenced also included the likes of Edward Sapir and Benjamin Lee Whorf). Through personality types, it was possible to begin understanding what fundamentally made one mind different from another, a necessary factor in distinguishing one culture from another.

In Jung and the Making of Modern Psychology, Sonu Shamdasani describes this meeting of minds (Kindle Locations 4706-4718):

“The impact of Jung’s typology on Ruth Benedict may be found in her concept of Apollonian and Dionysian culture patterns which she first put forward in 1928 in “Psychological Types in the cultures of the Southwest,” and subsequently elaborated in Patterns of Culture. Mead recalled that their conversations on this topic had in part been shaped by Sapir and Goldenweiser’s discussion of Jung’s typology in Toronto in 1924 as well as by Seligman’s article cited above (1959, 207). In Patterns of Culture, Benedict discussed Wilhelm Worringer’s typification of empathy and abstraction, Oswald Spengler’s of the Apollonian and the Faustian and Friedrich Nietzsche’s of the Apollonian and the Dionysian. Conspicuously, she failed to cite Jung explicitly, though while criticizing Spengler, she noted that “It is quite as convincing to characterize our cultural type as thoroughly extravert … as it is to characterize it as Faustian” (1934, 54-55). One gets the impression that Benedict was attempting to distance herself from Jung, despite drawing some inspiration from his Psychological Types.

“In her autobiography, Mead recalls that in the period that led up to her Sex and Temperament, she had a great deal of discussion with Gregory Bateson concerning the possibility that aside from sex difference, there were other types of innate differences which “cut across sex lines” (1973, 216). She stated that: “In my own thinking I drew on the work of Jung, especially his fourfold scheme for grouping human beings as psychological types, each related to the others in a complementary way” (217). Yet in her published work, Mead omitted to cite Jung’s work. A possible explanation for the absence of citation of Jung by Benedict and Mead, despite the influence of his typological model, was that they were developing diametrically opposed concepts of culture and its relation to the personality to Jung’s. Ironically, it is arguably through such indirect and half-acknowledged conduits that Jung’s work came to have its greatest impact upon modern anthropology and concepts of culture. This short account of some anthropological responses to Jung may serve to indicate that when Jung’s work was engaged with by the academic community, it was taken to quite different destinations, and underwent a sea change.”

It was Benedict’s Patterns of Culture that was a major source of influence on Jaynes. It created a model for comparing and contrasting different kinds of societies. Benedict was studying two modern societies, but Dodds came to see how it could be applied to different societies across time, even into the ancient world. That was a different way of thinking and opened up new possibilities of understanding. It set the stage for Jaynes’ radical proposal, that consciousness itself was built on culture. From types of personalities to types of cultures.

All of that is just something that caught my attention. Such connections fascinate me: how ideas get passed on and develop. None of that was the original reason for this post, though. I was doing my regular perusing of the web and came across some stuff of interest. This post is simply an excuse to share some of it.

This topic is always on my mind. The human psyche is amazing. It’s easy to forget what a miracle it is to be conscious and the power of the unconscious that underlies it. There is so much more to our humanity than we can begin to comprehend. Such things as dissociation and voice hearing aren’t limited to crazy people or, if they are, then we’re all a bit crazy.

* * *

Other Multiplicity
by Mark and Rana Manning, Legion Theory

When the corpus callosum is severed in adults, we create separate consciousnesses which can act together cooperatively within a single body. In Multiple Personality Disorder (MPD), or Dissociative Identity Disorder (DID), as it is now known, psychological trauma to the developing mind also creates separate consciousnesses which can act together cooperatively within a single body. And in both cases, in most normal social situations, the individual would provide no reason for someone to suspect that they were not dealing with someone with a unitary consciousness.

The Third Man Factor: Surviving the Impossible
by John Geiger
pp. 161-162

For modern humans generally, however, the stress threshold for triggering a bicameral hallucination is much higher, according to Jaynes: “Most of us need to be over our heads in trouble before we would hear voices.” 10 Yet, he said, “contrary to what many an ardent biological psychiatrist wishes to think, they occur in normal individuals also.” 11 Recent studies have supported him, with some finding that a large minority of the general population, between 30 and 40 percent, report having experienced auditory hallucinations. These often involve hearing one’s own name, but also phrases spoken from the rear of a car, and the voices of absent friends or dead relatives. 12 Jaynes added that it is “absolutely certain that such voices do exist and that experiencing them is just like hearing actual sound.” Even today, though they are loath to admit it, completely normal people hear voices, he said, “often in times of stress.”

Jaynes pointed to an example in which normally conscious individuals have experienced vestiges of bicameral mentality, notably, “shipwrecked sailors during the war who conversed with an audible God for hours in the water until they were saved.” 13 In other words, it emerges in normal people confronting high stress and stimulus reduction in extreme environments. A U.S. study of combat veterans with post-traumatic stress disorder found a majority (65 percent) reported hearing voices, sometimes “command hallucinations to which individuals responded with a feeling of automatic obedience.”

Gods, voice-hearing and the bicameral mind
by Jules Evans, Philosophy for Life

Although humans evolved into a higher state of subjective consciousness, vestiges of the bicameral mind still remain, most obviously in voice-hearing. As much as 10% of the population hear voices at some point in their lives, much higher than the clinical incidence of schizophrenia (1%). For many people, voice-hearing is not debilitating and can be positive and encouraging.

Sensing a voice or presence often emerges in stressful situations – anecdotally, it’s relatively common for the dying to see the spirits of dead loved ones, likewise as many as 35% of people who have recently lost a loved one say they have a sense of the departed’s continued presence. Mountaineers in extreme conditions often report a sensed presence guiding them (known as the Third Man Factor).

And around 65% of children say they have had ‘imaginary friends’ or toys that play a sort of guardian-angel role in their lives – Jaynes thought children evolve from bicameral to conscious, much as Piaget thought young children are by nature animist.

Earslips: Of Mishearings and Mondegreens
by Steven Connor, personal blog

The processing of the sounds of the inanimate world as voices may strike us as a marginal or anomalous phenomenon. However, some recent work designed to explain why THC, the active component of cannabis, might sometimes trigger schizophrenia, points in another direction. Zerrin Atakan of London’s Institute of Psychiatry conducted experiments which suggest that subjects who had been given small doses of THC were much less able to inhibit involuntary actions. She suggests that THC may induce psychotic hallucinations, especially the auditory hallucinations which are classically associated with paranoid delusion, by suppressing the response inhibition which would normally prevent us from reacting to nonvocal sounds as though they were voices. The implications of this argument are intriguing; for it seems to imply that, far from only occasionally or accidentally hearing voices in sounds, we have in fact continuously and actively to inhibit this tendency. Perhaps, without this filter, the wind would always and for all of us be whispering ‘Mary’, or ‘Malcolm’.

Hallucinations and Sensory Overrides
by T. M. Luhrmann, Stanford University

Meanwhile, the absence of cultural categories to describe inner experience does limit the kinds of psychotic phenomena people experience. In the West, those who are psychotic sometimes experience symptoms that are technically called “thought insertion” and “thought withdrawal”, the sense that some external force has placed thoughts in one’s mind or taken them out. Thought insertion and withdrawal are standard items in symptom checklists. Yet when Barrett (2004) attempted to translate the item in Borneo, he could not. The Iban do not have an elaborated idea of the mind as a container, and so the idea that someone could experience external thoughts as placed within the mind or removed from it was simply not available to them.

Hallucinatory ‘voices’ shaped by local culture, Stanford anthropologist says
by Clifton B. Parker, Stanford University

Why the difference? Luhrmann offered an explanation: Europeans and Americans tend to see themselves as individuals motivated by a sense of self identity, whereas outside the West, people imagine the mind and self interwoven with others and defined through relationships.

“Actual people do not always follow social norms,” the scholars noted. “Nonetheless, the more independent emphasis of what we typically call the ‘West’ and the more interdependent emphasis of other societies has been demonstrated ethnographically and experimentally in many places.”

As a result, hearing voices in a specific context may differ significantly for the person involved, they wrote. In America, the voices were an intrusion and a threat to one’s private world – the voices could not be controlled.

However, in India and Africa, the subjects were not as troubled by the voices – they seemed on one level to make sense in a more relational world. Still, differences existed between the participants in India and Africa; the former’s voice-hearing experience emphasized playfulness and sex, whereas the latter more often involved the voice of God.

The religiosity or urban nature of the culture did not seem to be a factor in how the voices were viewed, Luhrmann said.

“Instead, the difference seems to be that the Chennai (India) and Accra (Ghana) participants were more comfortable interpreting their voices as relationships and not as the sign of a violated mind,” the researchers wrote.

Tanya Luhrmann, hearing voices in Accra and Chennai
by Greg Downey, Neuroanthropology

local theory of mind—the features of perception, intention, and inference that the community treats as important—and local practices of mental cultivation will affect both the kinds of unusual sensory experiences that individuals report and the frequency of those experiences. Hallucinations feel unwilled. They are experienced as spontaneous and uncontrolled. But hallucinations are not the meaningless biological phenomena they are understood to be in much of the psychiatric literature. They are shaped by explicit and implicit learning around the ways that people pay attention with their senses. This is an important anthropological finding because it demonstrates that cultural ideas and practices can affect mental experience so deeply that they lead to the override of ordinary sense perception.

How Universal Is The Mind?
by Salina Golonka, Notes from Two Scientific Psychologists

To the extent that you agree that the modern conception of “cognition” is strongly related to the Western, English-speaking view of “the mind”, it is worth asking what cognitive psychology would look like if it had developed in Japan or Russia. Would text-books have chapter headings on the ability to connect with other people (kokoro) or feelings or morality (dusa) instead of on decision-making and memory? This possibility highlights the potential arbitrariness of how we’ve carved up the psychological realm – what we take for objective reality is revealed to be shaped by culture and language.

A puppet is a magical object. It is not a toy, is it? Here they see it as puppet theatre, as puppets for kids. But it’s just not like that. These native tribes — in Africa or Oceania, etc. — the shamans use puppets in communication not only with the upper world, with the gods, but even in relation when they treat a sick person. Those shamans, when they dress as some demon or some deity, they incarnate genuinely. They are either the totem animal or the demon. (via Matt Cardin)

The Chomsky Problem

Somehow I’ve ended up reading books on linguistics.

It started years ago with my reading books by such thinkers as E. R. Dodds and Julian Jaynes. Their main focus was on language usage of the ancient world. For entirely different reasons, I ended up interested in Daniel L. Everett, who became famous for his study of the Pirahã, an Amazonian tribe with a unique culture and language. A major figure I have had an interest in for a long time, Noam Chomsky, is also in the linguistics field, but I had never previously been interested in his linguistic writings.

It turns out that Everett and Chomsky are on two sides of the central debate within linguistics. That debate has overshadowed all other issues in the field since what is known as the cognitive revolution. I was peripherally aware of this, but some recent books have forced me to try to make sense of it. Two books I read, though, come at the debate from an entirely different angle.

The first book I read isn’t one I’d recommend. It is The Kingdom of Speech by Tom Wolfe. I’ve never looked at much of his writings, despite having seen his books around for decades. The only prior book I even opened was The Electric Kool-Aid Acid Test, a catchy title if there ever was one. Maybe he is getting old enough that he isn’t as great a writer as he once was. I don’t know. This latest publication wasn’t that impressive, even as I think I understood and agreed with the central conclusion of his argument, posed as a confused angry rant.

It’s possible that such a book might serve a purpose, if reading it led one to read better books on the topic. Tom Wolfe does have a journalistic flair about him that makes the debate seem entertaining to those who might otherwise find it boring — a melodramatic clashing of minds and ideas, sometimes a battle of wills with charisma winning the day. His portrayal of Chomsky definitely gets one thinking, but I wasn’t quite sure what to think of it. Fortunately, another book by an entirely different kind of author, Chris Knight’s Decoding Chomsky, takes on a similar understanding to Chomsky’s linguistics career and does so with more scholarly care.

Both books helped me put my finger on something that has been bothering me about Chomsky. Like Knight, I highly respect Chomsky’s political activism and his being a voice for truth and justice. Yet there was a disconnect I sensed. I remember being disappointed by a video I saw of him being asked what should be done; his response was that he couldn’t tell anyone what to do and that everyone had to figure it out for himself. The problem is that no one has ever figured out any major problem by themselves in all of human existence. Chomsky knows full well the challenges we face and still, when push comes to shove, the best he has to offer is to tell people to vote for the Democratic presidential candidate once again. That is plain depressing.

Knight gives one possible explanation for why that disconnection exists and why it matters. It’s not just a disconnection. After reading Knight’s book, I came to the conclusion that there is a dissociation involved, a near complete divide within Chomsky’s psyche. Because of his career and his activism, he felt compelled to split himself in two. He admits that this is what he has done and states that he has a remarkable talent in being able to do so, but he doesn’t seem to grasp the potentially severe consequences. Knight shows that Chomsky should understand this, as it relates to key social problems Chomsky has written about involving the disconnect of the knowing mind: between what we know, what we think we know, what we don’t know, and what we don’t know we know. It relates to Knight’s discussion of Orwell’s problem and Plato’s problem.

I’m not sure I fully understand what the issue might be. I do sense how this goes far beyond Chomsky and linguistics. Knight points out that this kind of splitting is common in academia. I’d go further. It is common throughout our society. Dissociation is not an unusual response, but when taken to extremes the results can be problematic. An even more extreme example than that of Chomsky, as used by Derrick Jensen, is the Nazi doctors who experimented on children and then went home to play with their own children. The two parts of their lives never crossed. This is something most people learn to do, if never to such a demented degree. Our lives become splintered in endless ways, a near inevitability in such a large complex society as this. Our society maybe couldn’t operate without such dissociation, a possibility that concerns me.

This brings my mind back around to the more basic problem of linguistics itself. What is linguistics a study of and what is the purpose toward what end? That relates to a point Knight makes, arguing that Chomsky has split theory from meaning, science from humanity. Between the Pentagon-funded researcher and the anti-Pentagon anarchist, the twain shall never meet. Two people live in Chomsky’s mind and they are fundamentally opposed, according to Knight. Maybe there is something to this.

Considering the larger-than-life impact Chomsky has had on the linguistics field, what does this mean for our understanding of our own humanity? Why has the Pentagon backed Chomsky’s side and what do they get for their money?

End of Work as Endtimes

Work, a topic that comes up a lot. The US is a society obsessed with work as identity and as a way of life, not just as a means to an end. We idealize work ethic, the greatest praise being that an individual is hard-working and the harshest criticism being that someone is lazy.

We broaden it as an entire cultural ethos, the supposed Protestant work ethic, even though Catholic Americans seem just as obsessed with work. The traditionally Catholic Hispanics used to be stereotyped as lazy, but I doubt that was ever true. The stereotype is now changing and Hispanics are perceived as hard-working, which is their ticket into mainstream American society and their pathway to assimilation not just as Americans but also into potential whiteness.

Black Americans, of course, aren’t given the opportunity to assimilate into whiteness, no matter their real or perceived work ethic. It has been assumed by centuries of whites that blacks are inherently lazy, a justification for slavery and then later forms of prejudice and oppression, including the reinstatement of slavery through chain gangs. The reality, however, is that the whites who complain the most about others being lazy are probably projecting. This country was built with the labor of minorities, along with poor (often ethnic) whites, both supposedly being without the proper work ethic of upper class WASPs. I imagine many of those upper class WASPs wouldn’t know real work if they ever saw it.

There is endless weirdness, besides bigotry, around American notions of work and all that goes with it. In recent years, some have begun to worry about the end of work. It is through work that we have defined our society. The end of work sounds like the end of the world as we know it, which I suppose is true. If machines took over most human work, then what would we do? The fear is the lazy masses, without anyone forcing them to work for survival, would just laze about and do nothing productive at all. We better build work camps to keep the masses occupied or else they might start thinking about creating a free, democratic society.

Even many left-wingers can’t seem to imagine anything genuinely different. Labor has been the pillar of left-wing politics since long before Marx was born. We talk about the lower class as the working class. That is what they are. They are what they do, work. They have no inherent value beyond that. Organizing the masses inevitably means labor organizing, or so it has meant in generations past.

I get the sense that there is something odd about all this. It’s not just the obsession with work, as identity and ethic. It’s one of those issues that seems to be about something else entirely. Most of the time when people talk about work I don’t think they’re actually talking about work. It’s maybe a symbolic conflation, like abortion, pointing toward something else. That something else has to do with the social order and social control.

To give this some contrast, consider hunter-gatherers. They don’t worry about work. In fact, they do as little as possible for survival and they probably never think of it as work. Almost everything hunter-gatherers do is a social activity. It’s the social part, not the work part, that defines who they are. Hunter-gatherers don’t have specialization, as everyone does a little bit of everything. Besides, most of their time is spent doing social things, as the most important part of being human in a tribal society is the fact that you belong to a tribe. That is who you are. Work is only important for what it accomplishes for the tribe and one’s place in the tribe, not as an end in and of itself.

What if modern society ends up back where we all started? Hunter-gatherers don’t work that much in order to maintain their lifestyles. What if in the future we too won’t work much to maintain our lifestyles? Would that be such a horrible thing, that like hunter-gatherers we spent more time with our families, friends, neighbors, and communities?

The hyper-focus on work is one of the most bizarre aspects of modern society. If you can’t imagine life beyond work, the problem is in your mind not in the world. Just because cars will eventually start driving themselves, civilization isn’t going to collapse nor will the moral fiber of humanity be rent asunder. Calm down. I’m sure humanity will somehow survive the end of work.

Americans will probably find other ways to work endlessly, such as mowing their lawns more often. That is the future of the US, Americans mowing their lawns every day because robots took over their jobs. Sure, those future Americans could buy one of the new fancy robot mowers, but then they’d lose all meaning to their existence. To preoccupy themselves, Americans will have mowing contests to prove their human worth and their being part of respectable society.

The Reactionary Mind in a Reactionary Age

The reactionary mind has interested me as much as, if not more than, the bicameral mind. Corey Robin was my introduction to the former, although maybe that credit should be given to Richard Hofstadter. Robin’s book on the topic was enlightening. But soon after reading it, I wished someone had also written a book like it about liberals.

I’m not sure it matters, though. I’ve since come to the conclusion that conservatives and liberals are kin, existing on a continuum and even of the same essence, together forming a shared dynamic. I’ve even gone so far as to argue that we live in an all-encompassing liberal age and that, therefore, conservatism is just another variety of liberalism. Conservatism, for sure, is a particularly reactionary variety of liberalism. That doesn’t let liberalism off the hook. The reactionary mind is inherent within the liberal paradigm, a necessary consequence. Or here is another thought: Maybe the reactionary mind precedes both. That is a much more interesting line of thought.

The impulse to categorize people, according to ideologies or otherwise, goes back to the post-bicameral Axial Age. That era was when reactionary politics, such as among the Greek philosophers, first became apparent—and when rhetoric began to develop. Bicameral societies (and other pre-Axial societies), on the other hand, would have had no place for the reactionary mind.

Just some ideas rolling around in my head. My inspiration came from perusing some articles and blog posts about reactionary politics, specifically in terms of Corey Robin and one of his critics, Mark Lilla. I haven’t yet read any books by the latter.

I might note that Robin is a leftist of some kind who is critical of liberals as well as conservatives while Lilla is a (former?) conservative who dislikes what he perceives as the mob of Tea Party libertarians. So, as Lilla longs for the supposed moderate conservatism of yesteryear, Robin strongly argues that no such thing ever existed. On the other hand, someone noted that Lilla’s views may have shifted in his latest writings, undermining some of his past criticisms of Robin’s theory of reactionary conservatism.

It should be pointed out that Robin is in good company in making his argument. There was a right-winger during the French Revolution who observed that conservatism only comes into existence after traditionalism is on the wane. That is to say conservatism isn’t traditionalism but a response to its loss, but then again liberalism is also a response to the same thing. The issue, in that case, being what is the difference between response and reaction.

It’s interesting to see these learned thinkers grapple with such issues. But my recent preoccupation with Jaynesian theory (and related views) has led me down other pathways. I wonder if the likes of Robin and Lilla aren’t probing deep enough or going back as far as they should (Lilla, though, might be looking at some earlier origins). Also, maybe they are constrained by their focus on political history and their omission of the truly fascinating research done in classical studies and the social sciences. There seems to be a particular worry and wariness about dealing with the messiness of psychology, i.e., the basic level of human nature that precedes and permeates all ideologies.

My basic sense, in reading some of the analyses and responses by and to Robin and Lilla, is that there is much confusion about the reactionary mind. What exactly is it? What causes it? And what purpose does it serve? The main confusion being its relationship to conservatism. Is there anything to conservatism besides reaction? For that matter, does or can conservatism exist outside of the liberal paradigm (and if not what does that say about liberalism in its relationship to the reactionary mind)?

The latter brings me to some thoughts from this past year, in watching the campaign season spiral into standard American psychosis. Why are liberals so prone to falling into reactionary thought, either temporarily or permanently? And when liberals permanently get stuck in a reactionary mindset, why do they so often if not always become conservatives or right-wingers (or else anti-leftists)? Just look back at the Cold War, when liberals were among the harshest critics and most dangerous opponents of left-wingers. Or look at the study done on liberals after 9/11: those who saw repeated video of the attack became more supportive of Bush’s War on Terror. If liberals aren’t liberal when it really matters, then what is liberalism?

I’m also brought to questions about the moral imagination, the social construction of reality, symbolic conflation, and much else. I have no clear conclusions. Just wondering about what it all means and what it says about the world we find ourselves in, how we got here and where we might be heading.

More than anything, I wonder what all the reaction is about. We are dominated by reaction. Why is that? What is being reacted to? Reasons that reactionaries give change over time, from generation to generation, century to century, and yet the basic reactionary mindset remains unchanging, maybe for millennia. Is reaction inevitable? Or have earlier societies found other ways of dealing with change and uncertainty?

* * *

Roads Not Taken: Mark Lilla on Political Reaction
By Daniel McCarthy, The New York Times

LILLA’S FORTHCOMING SHIPWRECK
By Gabriel Sanchez, Opus Publicum

How Does the Mind of the Political Reactionary Work?
By Hans Rollman, PopMatters

The Flight 93 Election
By Publius Decius Mus, Claremont Institute

“What’s it all about, boy? Elucidate!” – or – How To Avoid Huge, Shipwrecked Minds
by John Holbo, Crooked Timber

Here’s the most powerful (and chilling) case for Trump you’ll ever hear
By Damon Linker, The Week

Reactionaries In Our Time
By Rod Dreher, The American Conservative

Republicans for Revolution
By Lilla, The New York Review of Books

‘The Reactionary Mind’: An Exchange
By Corey Robin, reply by Mark Lilla, The New York Review of Books

Contraception and Counterrevolution
By David V. Johnson, interview w/ Corey Robin, Boston Review

Wrong Reaction
By Alex Gourevitch, Jacobin

Lilla v. Robin
by Henry, Crooked Timber

Online Fracas for a Critic of the Right
By Jennifer Schuessler, The New York Times

Mark Lilla’s Truly Awful Review of Corey Robin’s Book
By Andrew Hartman, S-USIH

Redefining the Right Wing
By Daniel Larison, The New Inquiry

Reactionary Minds
By Ari Kohen, blog

Conservatives and reactionaries
By John Quiggin, Crooked Timber

Why Conservatives Are Still Crazy After All These Years
By Rick Perlstein, Rolling Stone

The Reactionary Libertarian
By A. Jay Adler, the sad red earth

Aren’t Irish White?

I think it cannot be maintained by any candid person that the African race have ever occupied or do promise ever to occupy any very high place in the human family. Their present condition is the strongest proof that they cannot. The Irish cannot; the American Indian cannot; the Chinese cannot. Before the energy of the Caucasian race all the other races have quailed and done obeisance.

Ralph Waldo Emerson wrote those words in the late 1820s or early 1830s (Journals and Miscellaneous Notebooks, Volume 12). Someone asked, in response to that quote, “aren’t Irish white?” Well, to the younger Emerson, obviously the Irish weren’t white or rather weren’t Caucasian.

Another great American luminary was Walt Whitman, a close acquaintance of Emerson. From a personal letter, he called Emerson “dear Friend and Master” and the admiration was mutual, Emerson even having penned Whitman a letter of recommendation. In the following decade, writing about Catholics after St. Patrick’s Cathedral was attacked by a Protestant mob, Whitman echoed Emerson’s attitude in describing the Irish faith as a “horrible and beastly superstition . . . dregs of foreign filth” (from The New York Aurora). Beastly! That was once a common way of speaking of the Irish, not just their whiteness but their very humanity under the severest of doubt.

They both were writing at a time when the large waves of Irish immigrants were seen as one of the greatest threats by American WASPs. Think about it. In the decades prior, there had been several Irish rebellions, all of which failed. This led many Irish to seek escape in other countries, most of them ending up in the United States. The English were more than glad to get rid of them. Those of English ancestry in the U.S., however, weren’t so glad to receive them. Just because Americans had fought a revolution against the British a half century before didn’t make them any more sympathetic to the Irish cause, much less the Irish people.

I know it seems strange compared to the world now. But the US once was a far different place. It’s just a fact that the Irish, Scots-Irish, Italians, etc. weren’t always considered white or Caucasian. There are entire books written explaining this history. One such book is The History of White People by Nell Irvin Painter, in which she discusses the above Emerson quote, and a few paragraphs later she writes that,

After the failure of the Hungarian revolution in 1848 and Lajos Kossuth’s triumphant tour as a hero in exile, Emerson found a way to view the Hungarian situation through an Irish lens: “The paddy period lasts long. Hungary, it seems, must take the yoke again, & Austria, & Italy, & Prussia, & France. Only the English race can be trusted with freedom.” Emerson pontificated against Central Europeans as well as the Irish: “Races. Our idea, certainly, of Poles & Hungarians is little better than of horses recently humanized.”

Back in the day, whiteness as an idea was mixed up with nationality, ethnicity, and religion. The Irish (and other immigrant groups) weren’t English, weren’t of Anglo/Germanic stock, and generally weren’t Protestant. Although assimilating better than later immigrants, even the Germans early on were treated differently. Benjamin Franklin was prejudiced against Palatine Germans and perceived them as almost racially other—since they were shorter and darker-skinned, along with speaking a different language, having a different culture, and being of different religions (at the time, many were Pietists or Anabaptists, separate from the Protestant tradition).

All those who weren’t WASPs were perceived as foreigners and they indeed often looked different—different color of skin, different color of hair, different attire, etc. Italians, in the 1800s, were sometimes referred to as ‘niggers’ because of their dark skin and dark, curly hair. The Irish, despite their pale skin and lighter hair, were also compared to Africans and Native Americans, portrayed as ape-like and called gorillas, sometimes referred to as savages and their children in the cities dismissed as Street Arabs (Catholicism was seen as being as foreign as Islam). Painter, in The History of White People, states that,

AMERICAN VISUAL culture testifies to a widespread fondness for likening the Irishman to the Negro. No one supplied better fodder for this parallel than Thomas Nast, the German-born editorial cartoonist for Harper’s Weekly. In 1876, for instance, Nast pictured stereotypical southern freedmen and northern Irishmen as equally unsuited for the vote during Reconstruction after the American Civil War.

As with the Scottish and Scots-Irish, the Irish were seen as a tribal people, not quite civilized. In early America, poor ethnics (i.e., white trash) were associated with Native Americans, sometimes seen as below them—from White Trash by Nancy Isenberg, (pp. 109-110):

“Crackers” first appeared in the records of British officials in the 1760s and described a population with nearly identical traits. In a letter to Lord Dartmouth, one colonial British officer explained that the people called “crackers” were “great boasters,” a “lawless set of rascals on the frontiers of Virginia, Maryland, the Carolinas and Georgia, who often change their places of abode.” As backcountry “banditti,” “villains,” and “horse thieves,” they were dismissed as “idle strag[g]lers” and “a set of vagabonds often worse than the Indians.”

The children of Irish-Americans and other non-English ethnics in Eastern cities were regularly gathered up and put on orphan trains to be sent off West for adoption. In reality, this usually meant a form of indentured servitude, as the children were often used as cheap labor. The practice began in the 19th century and continued into the early 20th century. It played a role in the Irish becoming white, as I explained previously:

WASPs, in their fear of Catholics, intentionally placed Catholic children into Protestant homes. In response, Catholics began to implement their own programs to deal with Catholic children in need of homes. One such case involved nuns bringing a trainload of Irish orphans to Arizona to be adopted by Catholic families. The problem was that the Catholic families in question were Mexican-American. The nuns didn’t understand the local racism/ethnocentrism involved and were taken by surprise by the response of the local WASPs. The “white” population living there took great offense at this challenge to racial purity. Suddenly, when put into that context, the Irish children were deemed to be innocent whites to be protected against an inferior race. This is ironic because where those Irish children came from in the big cities out East they were considered the inferior race.

It still took a long time for the Irish to become fully white.

Consider another example: white flight and ethnic succession. The reality was a lot more complex. Different groups were escaping various other groups over time. Those deemed most inferior, undesirable, and threatening were always shifting. Early on, into the 20th century, the Irish were the focus of fear and derision—Prohibitionists often had the Irish in mind when they sought to enforce social control over the perceived drunken masses. Even other minorities, blacks included, sometimes thought it best to escape the Irish. Certainly, the more well-off whites didn’t want them in their neighborhoods, not until the mid-20th century when the Irish had moved a bit further up the ladder of economic class.

It took centuries of struggle—from political disenfranchisement and economic oppression by the English in Ireland, not unlike slavery and sometimes worse (as during the mass starvation and deportation of the artificially created Potato Famine), to finally being assimilated into American whiteness. That path toward respectability and relative privilege wasn’t inevitable and wouldn’t have been obvious to earlier generations. It wasn’t obvious to 19th century WASPs such as Emerson and Whitman, two white men who thought Irish advancement implausible and Irish aspirations threatening.

It’s sad, of course, that Irish-Americans shoved down African-Americans and Chinese-Americans in their pushing themselves up. They defied the stereotypes of the Irish Paddy and Bridget, even as they promoted the stereotypes of others. This is the story of America. If Emerson and Whitman had lived longer, the Irish might have finally won over some grudging admiration in their joining the ranks of whiteness and defending the racial order. Or maybe those early American WASPs wouldn’t have recognized this broader notion of the white race, the American mutt—it’s not the country they had envisioned as their own.

* * *

Why did the English people previously see the Irish and Scottish Celts as racially inferior?
by samj234, Reddit

The Teen Who Exposed a Professor’s Myth
by Ben Collins, The Daily Beast

The Irish were persecuted in the American job market—and precisely in the overt, literally written-down way that was always believed.

Irish-Americans, Racism and the Pursuit of Whiteness
by Jessie Daniels, Racism Review

Like many immigrant groups in the United States, the Irish were characterized as racial Others when they first arrived in the first half of the 19th century. The Irish had suffered profound injustice in the U.K. at the hands of the British, who widely saw them as “white negroes.” The potato famine that created starvation conditions that cost the lives of millions of Irish and forced the out-migration of millions of survivors was less a natural disaster and more a complex set of social conditions created by British landowners (much like Hurricane Katrina). Forced to flee from their native Ireland and the oppressive British landowners, many Irish came to the U.S.

Once in the U.S., the Irish were subjected to negative stereotyping that was very similar to that of enslaved Africans and African Americans. The comic Irishman – happy, lazy, stupid, with a gift for music and dance – was a stock character in American theater. Drunkenness and criminality were major themes of Irish stereotypes […]

The simian, or ape-like, caricature of the Irish immigrant was also a common one among the mainstream news publications of the day (much like the recent New York Post cartoon). For example, in 1867 American cartoonist Thomas Nast drew “The Day We Celebrate,” a cartoon depicting the Irish on St. Patrick’s Day as violent, drunken apes. And, in 1899, Harper’s Weekly featured a drawing of three men’s heads in profile: Irish, Anglo-Teutonic and Negro, in order to illustrate the similarity between the Irish and the Negro (and the supposed superiority of the Anglo-Teutonic). In northern states, blacks and Irish immigrants were forced into overlapping – often integrated – slum neighborhoods. Although leaders of the Irish liberation struggle (in Ireland) saw slavery as an evil, their Irish-American cousins largely aligned with the slaveholders.

And, following the end of slavery, the Irish and African Americans were forced to compete for the same low-wage, low-status jobs. So, the “white negroes” of the U.K. came to the United States and, though not enslaved, faced a status almost as low as that of recently-freed blacks. While there were moments of solidarity between Irish and African Americans, this was short lived.

IRISH AS SUB-HUMAN
by Michele Walfred, Thomas Nast Cartoons

The Irish-as-ape stereotype frequently surfaces, as a popular trope, with the English in the mid-nineteenth century. But in Nothing But the Same Old Story, researcher Liz Curtis provides plentiful examples that establish anti-Irish sentiment as a centuries-long tradition.

Dehumanizing the Irish by drawing them as beasts or primates served as a convenient technique for any conqueror, and it made perfect sense for an English empire intent on placing Ireland and its people under its jurisdiction and control. The English needed to prove the backwardness of the Irish to justify their colonization (16). When the Irish fought back against English oppression, their violence only perpetuated the “violent beast” prejudice held against them.

English artist James Gillray had drawn the Irish as an ogre – a type of humanoid beast – in reaction to the short-lived Irish rebellion against England in 1798. Even before English scientific circles had begun to distort Darwin’s On the Origin of Species later in the century, the English had favored the monkey and ape as a symbol for Hibernians.

After the Irish had made great social and political gains in the latter part of the nineteenth century, the view that they were of a different race than white people continued to persist…

Nativism
by Michele Walfred, Thomas Nast Cartoons

In America, Higham distills this down to three themes that ran through nativist sentiment in the early nineteenth century: Reformation-era hatred of Roman Catholicism, fear of foreign radicals and political revolutionaries, and racial nativism, which led to the belief that America belonged to people of the Anglo-Saxon race. The United States was their domain. The Irish were viewed as a different race, and this belief continued to permeate long after the initial Protestant-driven nativist sentiment had considerably weakened. […]

“American writers, cartoonists, and so-called scientific experts hammered away at Irish violence, emotional instability, and contentment in squalor” (Meagher 217). In the eyes of Protestants with ancestral ties to England, the Irish were no better than animals. The Irish presented a triple threat. Their growing numbers, allegiance to strong, organized religion ruled by a foreign monarch, and political gains within Tweed’s Democratic Party, all posed a serious concern to the Protestant elite.

Protestant nativists fought for their survival and painted the Irish as “others.” They eagerly adopted and repeated the British trope of the Irish as unsophisticated, violence-prone animals, a lower being on the evolutionary scale. The Irish’s faith, and in particular their blind allegiance to a foreign pontiff, unsettled nativists. Protestant Americans remembered the hard-fought revolutionary history of their young nation. During the peak years of the potato famine migration (1845-1855), nativists portrayed the Irish in invasion terminology. Nativists predicted the American way of life would end.

By 1880, by and large, the Irish had successfully pulled themselves out of their “lowlife” status in a number of ways. They gained respect through their service in the Civil War on behalf of the Union, and in New York City, through political positions awarded by William M. “Boss” Tweed in return for their loyalty and vote. With these gains in respectability and power, the Irish emerged as a sought-after voting bloc. But politics alone was not enough to counter nativist prejudice. Most significantly, the Irish fought hard to define themselves as white. To do so meant practicing their own brand of nativism and aligning with other xenophobes. The Chinese were a convenient target.

In assessing the work of several “whiteness” studies, historian Timothy Meagher asserts that self-identification as “white” went beyond skin color. “It was not clear that the Irish were white” (217).

America’s dark and not-very-distant history of hating Catholics
by Rory Carroll, The Guardian

Demagogues in the nativist movement incited fury and fear about the huge numbers of impoverished German and Irish Catholic immigrants, many barely speaking English, who spilled off ships.

Newspapers and Protestant clergymen, including Lyman Beecher, co-founder of the American Temperance Society, swelled the outcry, warning the influx would take jobs, spread disease and crime and plot a coup to install the Pope in power.

In 1844 mobs burnt Catholic churches and hunted down victims, notably in Philadelphia where, coincidentally or not, Francis will wrap up his week-long visit.

Abuse from Protestant officers partly drove hundreds of Irish soldiers to defect from the US army to the Mexican side before and during the 1846-48 war with Mexico. The deserters obtained revenge, for a while, by forming the San Patricio battalion and targeting their former superiors in battle, only to wind up jailed, branded and hanged after Mexico surrendered.

The growth of the Ku Klux Klan in the early 20th century gave a new impetus to attacks – mostly verbal – on Catholics.

Roman Catholics and Immigration in Nineteenth-Century America
by Julie Byrne, NHC

Many members of other faiths—Jews, Protestants, and even some Muslims, Hindus and Buddhists—arrived in the successive waves of massive immigration to the United States between the 1840s and 1920s. But Catholics from various countries were the most numerous—and the most noticed. In 1850 Catholics made up only five percent of the total U.S. population. By 1906, they made up seventeen percent of the total population (14 million out of 82 million people)—and constituted the single largest religious denomination in the country.

Immigration in the 1920s
Shmoop

The New Immigrants were distinctive from earlier migrants in that most didn’t want to stay. These immigrants, mostly male and mostly young, hoped to earn enough money during a temporary stay in America to be able to afford an increased standard of living upon returning to their homeland. Something between 50% and 80% of the New Immigrants are believed to have eventually returned to their countries of origin. The exceptions were Jews (who mostly came from Russia, and only 4% of whom repatriated) and Irish (9%), two groups that tended to stay in America permanently because they faced religious persecution, political oppression, and economic privation back home.

Free Speech, World War One, and the Problem of Dissent
by Michael O’Malley, RRCHNM

World War One pitted England, France and Russia against Germany and the Austro-Hungarian Empire. It was difficult, at the beginning of the war, to determine who was the worst of the warring parties, and Americans faced the conflict with divided loyalties. For many Americans of English descent, England seemed like our natural ally. Many American political leaders, most prominently Woodrow Wilson, felt a strong sense of “anglophilia,” or love of England. But Germans and Irish were the two largest immigrant groups to the United States in 1917. Irish immigrants carried bitter memories of English oppression, while German Americans, not surprisingly, tended to favor their homeland, or at least not to regard it as an enemy.

Wilson worried about this division and regarded it as dangerous. Regarding Italian-Americans, German-Americans, and Irish-Americans as suspect, he once declared: “Any man who carries a hyphen around with him carries a dagger that he is ready to plunge into the vitals of the republic.”

The Visibility of Whiteness and Immigration Restriction in the United States, 1880-1930
by Robert Júlio Decker, Critical Race and Whiteness Studies

In the second half of the nineteenth century, the Western definition of whiteness underwent several significant changes. Scientific racism, understood here as the “language, concepts, methods and authority of science [which] were used to support the belief that certain human groups were intrinsically inferior to others, as measured by some socially defined criterion” (Stepan 1987: IX), provided the methods to not only construct a black/white racial binary, but also to distinguish between several European races. Scientific racism was often augmented by discourses centred on the supposed cultural traits inherent to racial composition. In Britain and the United States, Irish immigrants were racialised as putatively inferior up to the 1880s (Ignatiev 1995: 34-59; Jacobson 1998: 48-52; Knobel 1996). From the 1860s, however, the definition of Englishness slowly began to include all inhabitants of the British Isles and the term Anglo-Saxon was established as generic racial referent for this group (Young 2008: 140-187).

A “Perverse and Ill-Fated People”:
English Perceptions of the Irish, 1845-52

by Ed Lengel, University of Virginia

…the emerging racialist conception of Irish difference, which became dominant in the second half of the nineteenth century. In a sense, the products of Liberal and racialist interpretations of the Irish problem were the same. Idealistic Liberal dreams of an “intimate” marriage between Hibernia and John Bull did not challenge the essentially paternalistic and colonial Anglo-Irish relationship. Indeed, Liberal faith in the improvability of men contributed to a restrictive famine policy intended to teach the Irish to adopt middle-class standards of thrift and morality. It is worth emphasizing in any case that Liberals and racialists agreed on the basic qualities of Saxon and Celt; but while Liberals explained this difference in a gendered discourse of moral inequality, racialists insisted that the ineradicable boundaries of biology would forever separate the two peoples. In both instances, Britain would forever be the master and Ireland the subject.

Racism and Anti-Irish Prejudice in Victorian England
by Anthony S. Wohl, The Victorian Web

In much of the pseudo-scientific literature of the day the Irish were held to be inferior, an example of a lower evolutionary form, closer to the apes than their “superiors”, the Anglo-Saxons. Cartoons in Punch portrayed the Irish as having bestial, ape-like or demonic features and the Irishman (especially the political radical) was invariably given a long or prognathous jaw, the stigmata to the phrenologists of a lower evolutionary order, degeneracy, or criminality. Thus John Beddoe, who later became the President of the Anthropological Institute (1889-1891), wrote in his Races of Britain (1862) that all men of genius were orthognathous (less prominent jaw bones) while the Irish and the Welsh were prognathous and that the Celt was closely related to Cromagnon man, who, in turn, was linked, according to Beddoe, to the “Africanoid”. The position of the Celt in Beddoe’s “Index of Nigrescence” was very different from that of the Anglo-Saxon. These ideas were not confined to a lunatic fringe of the scientific community, for although they never won over the mainstream of British scientists they were disseminated broadly and it was even hinted that the Irish might be the elusive missing link! Certainly the “ape-like” Celt became something of a malevolent cliché of Victorian racism. Thus Charles Kingsley could write:

I am haunted by the human chimpanzees I saw [in Ireland] . . . I don’t believe they are our fault. . . . But to see white chimpanzees is dreadful; if they were black, one would not feel it so much. . . .” (Charles Kingsley in a letter to his wife, quoted in L.P. Curtis, Anglo-Saxons and Celts, p.84).

Even seemingly complimentary generalizations about the Irish national character could, in the Victorian context, be damaging to the Celt. Thus, following the work of Ernest Renan’s La Poésie des Races Celtiques (1854), it was broadly argued that the Celt was poetic, light-hearted and imaginative, highly emotional, playful, passionate, and sentimental. But these were characteristics the Victorians also associated with children. Thus the Irish were “immature” and in need of guidance by others, more highly developed than themselves. Irish “emotion” was contrasted, unfavorably, with English “reason”, Irish “femininity” with English “masculine” virtues, Irish “poetic” attributes with English “pragmatism”. These were all arguments which conveniently supported British rule in Ireland.

A British Ireland, Or the Limits of Race and Hybridity in Maria Edgeworth’s Novels
by Kimberly Philomen Clarke, Georgetown University

In the seventeenth and eighteenth centuries, as Roxanne Wheeler discusses in The Complexion of Race (2000), race was seen as mutable and had a complex relationship to religion. Racial difference was not only dependent on a fixed categorization of skin color, but also on clothing, religion, and culture.19 Throughout the seventeenth and eighteenth centuries, Britons defined themselves according to their Protestantism, clothing, and climate, among other characteristics, and as the nineteenth century arrived, whiteness finally became a marker of Britishness as “skin color emerg[ed] as the most important component of racial identity in Britain during the third quarter of the eighteenth century” (Wheeler 9).

Race became the determinant of culture and history, a common “principle of academic knowledge” in the nineteenth century (Young 93). The correlation between whiteness and Englishness developed in the 1720s and 1730s with the assumption that racial blackness signified one’s intellectual and spiritual inferiority (Wheeler 98). Historian Winthrop Jordan has argued that in the mid-seventeenth century, colonists in confrontation with the Other went from calling themselves Christian to calling themselves English, free, and “white,” a term that came to symbolize a moral and intellectual superiority against blackness and non-Britishness (Wheeler 74). Against this darker, inferior other among the nonwhite British colonies in Africa, the West Indies, and India, Britishness became emblematic of a white empire that would not be culturally or racially muddied by foreign influences (Colley 312).

[…] for the Irish to be British. Primarily, they have to sacrifice their symbolic blackness, that which symbolizes their peasantry class, cultural otherness, and religious differences, and particularly that which marks their contentious history and centuries long colonization by England. To forfeit this darkness symbolizing the history of suppression and difference, however, is also to surrender a part of a collective Irish identity in Britain. […]

Throughout the nineteenth century, the Irish were seen as a symbolic manifestation of a biracial, Caucasian/African hybridity. There are stereotypes that confirm the outsider status of the Irish both before and after the 1801 Act of Union, some of which continue to paint the British as white and the Irish as nonwhite, or at least not white enough to be British. Richard Lebow’s White Ireland and Black Ireland (1976) mentions the “racist attitudes toward the Irish in Victorian Britain” (14). He argues that “racist expressions were merely the age old anti-Irish prejudice couched in the jargon of the day” (15). In The Times in 1836, Benjamin Disraeli claims the Irish “hate our free and fertile isle. They hate our order, our civilization, our enterprising industry, our sustained courage, our decorous liberty, our pure religion. This wild, reckless, indolent, uncertain, and superstitious race has no sympathy with the English character” (quoted in Lebow 61). Andrew Murphy quotes Charles Kingsley, who visited Ireland in the mid-nineteenth century, writing to his wife that, “I am daunted by the human chimpanzees I saw along that hundred miles of horrible country…to see white chimpanzees is dreadful: if they were black, one would not feel it so much, but their skins, except where tanned by exposure, are as white as ours” (Murphy 12). Furthermore, disgusted at Irish poverty and how it contradicts his British image of whiteness, Kingsley writes, “Can you picture in your mind a race of white men reduced to this condition? White men! Yes the highest and purest blood and breed of men” (Murphy 44). These quotations demonstrate both the racial whiteness and “otherness” or non-whiteness that Irish identity connotes in Edgeworth’s literature. Irish otherness was fueled by stereotypes of racial, cultural, and intellectual differences that “the Irish” as a generalized group endured before and throughout the nineteenth century and onward. […]

Edgeworth associates Irish peasantry with physical blackness in a letter to her Aunt Ruxton in which she expresses her fears of the sort of Irish rebellion that was frequent in the late eighteenth century and which her family twice before had endured.27 Edgeworth confesses, “All I crave for my own part is, that if I am to have my throat cut, it might not be by a man with his face blackened with charcoal” (Egenolf 849-50). She later says that she “shall look at every person that comes here very closely, to see if there be any marks of charcoal upon their visages” (850). This blackness results from working with charcoal and other materials associated with manual labor. However, in these lines, Edgeworth is not commenting on Irish working class life but rather the threatening gaze of those faces blackened with charcoal and the fear that blackness represents for Edgeworth and her family as the Irish rebel, reclaiming his own agency, destabilizes the power of the upper class families in Ireland. Therefore, keeping in mind the Africanist image of the danger associated with Irish blackened faces, one may read Christy’s physical blackness as not a result of work but some inherent racial trait the Irish were thought to have and that reflected anxieties about the power of the native Irish against middle and upper class whiteness (859-60).

Irish Nationalists and the Making of the Irish Race
by Bruce Nelson
pp. 34-35

A month later the Bristol Mirror charged that “the Indians with their tomahawks and scalping knives are a much nobler set of savages than the murderers of Limerick and Tipperary.” 16

The comparison of the Irish with the “savages of America” was familiar enough; it dated from the seventeenth century. But there was a dramatically new development in the second half of the nineteenth century, a time when Darwinian science posited an evolutionary chain of being in which humans were descended directly from African apes. In this context, British commentators created a “simianized,” or apelike, Paddy whose likeness to the “backward” races of Africa was inescapable. Perry Curtis has traced this development in Apes and Angels. He notes that the Rising of 1798 led British cartoonists to develop images of a preternaturally ugly Paddy whose appearance was far more ominous and repellent than that of the bumptious but relatively harmless stage Irishman who had predominated for much of the eighteenth century. Some of these cartoon characters were given porcine features, but until the 1860s the cartoon Irishman remained largely human. It was with the coming of Darwinian evolution, and the reemergence of violent Irish republicanism in the guise of Fenianism, that the transformation of the stereotypical Paddy really took off with the publication of cartoon caricatures such as “The Irish Devil-Fish” (a massive octopus with simian facial features) and the even more notorious “Irish Frankenstein,” with his dagger dripping blood. According to Curtis, “In a biological sense, Paddy had devolved, not evolved, from a primitive peasant to an unruly Caliban, thence to a ‘white Negro,’ and finally he arrived at the lowest conceivable level of the gorilla and the orangutan.”

pp. 38-45

Even in regard to France, the citadel of Celtic achievement, he observed that the country’s “vast Moorish population” was “superior in all respects to the lazy, worthless Celt.” 24

Knox’s elevation of the dark-skinned Moor above the Celt is a vivid example of the slippage that often occurred in racial discourse about the Irish. Even relatively sympathetic observers resorted to characterizations of Irish Celts that linked them to darker races and, sometimes, to apes. […]

But Jackson also ruminated on the “Iberian character” of the Irish peasantry, raising the familiar specter of southern origins, Moorish blood, and intimations of darkness and savagery. Referring specifically to the peasants of the west and south of Ireland, he reported that “an absolutely negroid type has been occasionally detected by keen observers,” which meant that “inferior and non-Aryan racial elements are clearly perceptible in the population of the sister isle.” 26 Jackson’s fellow anthropologist Hector MacLean concurred and identified a racial type, also with Iberian characteristics, that was “very prevalent in the west of Ireland. . . . The stature is generally low,” he claimed, “with dark skin and complexion; the head is long, low, and broad; the hair black, coarse, and shaggy; the eyes black or dark brown, or grey, with fiery lustre; forehead receding, with lower part of face prominent.” 27 To those who were predisposed to believe them, reports of this kind served to reinforce elite and popular perceptions of the Irish as akin to “the negro,” “the savage,” and even “the ape.” […]

To locate the “real” Irish, then, one had to go to the west and southwest of the country, where there had been less immigration and therefore less mixing of blood. To be sure, in fishing villages on Galway Bay and in the Aran Islands, Beddoe found significant examples of intermarriage, and thus of racial hybridity. But for the most part, the west was the home of “swarthy” and “dark-complexioned aborigines,” many of whom had dark eyes and even darker, sometimes “coal-black,” hair. By themselves, hair and eye color did not indicate skin color, and for the most part Beddoe acknowledged that he was dealing with whites, although he did record that in the mountains between Sligo and Roscommon he had encountered “the swarthiest people I have ever seen.” He also created an “Index of Nigrescence” to measure the range of hair and eye color from one racial type to another, and like virtually all of the anthropologists of his generation, he could not help but speculate on the relationship between racial classification and intelligence and temperament. “There is an Irish type . . . which I am disposed to derive from the race of Cro-Magnon,” he reported. “In the West of Ireland I have frequently seen it. Though the head is large, the intelligence is low, and there is a great deal of cunning and suspicion.” He also discovered a tendency toward “prognathism” among people in England, Wales, and Ireland, with Ireland as its “present centre.” Venturing onto very slippery terrain indeed, he speculated that “most of its lineaments are such as to lead us to think of Africa as its possible birthplace, and it may be well, provisionally, to call it Africanoid.” 30

Beddoe did not always follow the apparent logic of his own conclusions. He argued in The Races of Britain that “the points of likeness to the anthropoid apes are distributed variously among the different races of mankind, . . . [and] none of them can be taken in themselves to imply intellectual or moral inferiority.” But by creating an index of nigrescence, and constructing a prognathous physical type in Ireland that he identified as “Africanoid,” he provided openings for others who were far more determined to assert the racial inferiority of the Irish and to see them as a race that had not achieved the salient characteristics commonly associated with “whiteness.” In the early twentieth century, especially in response to the polarization and violence of the Irish War of Independence, a new generation of scholars and pseudoscholars was determined to portray the Irish as a people whose many negative attributes were rooted in a suspect racial past. In 1919 two Harvard geneticists claimed that the Irish were “principally the product of the mingling of two savage Mongolian tribes,” and in 1922 two equally zealous Hibernophobes found a “strain of negro blood” in the Firbolgs, or Attacotti, the ancient race that had invaded Ireland and allegedly waged a war of extermination against its “fair-haired and clean-skinned” rivals on the island. 31

These developments in the realm of science were reflected in a wider, more random discourse through which elite and popular commentators linked the Irish with black Africans and African Americans in a shared stereotype that alleged laziness, irrationality, and an incapacity for self-government as essential characteristics of both races. By the mid-nineteenth century or soon thereafter, the tendency to portray the Irish as apelike creatures who were laughably crude and lamentably violent was becoming a commonplace in the United States as well as Britain. In a meditation on the “Celtic physiognomy,” the American magazine Harper’s Weekly commented on the “small and somewhat upturned nose [and] the black tint of the skin,” while Punch characterized the “Irish Yahoo” who populated “the lowest districts of London and Liverpool” as “a creature manifestly between the Gorilla and the Negro,” a “climbing animal [who] may sometimes be seen ascending a ladder with a hod of bricks.” 32 […]

What comes through in so many of these observations is the racial “in-betweenness” of the Irish in the eye of the beholder. 34 Although Harper’s Weekly did comment on the “black tint of the [Irish] skin,” few observers were willing to argue that the Irish were “black” or “coloured,” no matter how high they registered on Beddoe’s index of nigrescence. Instead, in the age of Darwin, Irishmen and -women were portrayed as “white chimpanzees,” as “creature[s] manifestly between the Gorilla and the Negro,” and as “more like a tribe of squalid apes than human beings.” Charles Kingsley, an Anglican clergyman and regius professor of modern history at Cambridge, was “haunted by the human chimpanzees” he encountered during a holiday in Ireland in 1860. “To see white chimpanzees is dreadful,” he confided to his wife; “if they were black, one would not feel it so much, but their skins, except where tanned by exposure, are as white as ours.” Thomas Carlyle, the Scottish writer and polemicist, did not doubt that the Irish had “a white skin” and even “European features,” but they were “savages” nonetheless. “The Celt[s] of Connemara,” he wrote in the 1840s, “are white and not black; but it is not the colour of the skin that determines the savagery of a man” or of a race. “He is a savage who in his sullen stupidity, in his chronic rage and misery, cannot know the facts of this world when he sees them; [who] . . . brandishes his tomahawk against the laws of Nature.” Carlyle exempted the “Teutonic Irish” of Ulster from his censure, but he charged that the chronic laziness of the Celtic Irish, and their refusal to accept that for the foreseeable future their role must be to labor for others, made them akin to the black ex-slaves of Jamaica, for whom he recommended a return to the “beneficence” of involuntary servitude. As for Kingsley, he informed a friend that the “harsh school of facts” had cured him of any illusions about equality between the races. “I have seen,” he wrote, “that the differences of race are so great, that certain races, e.g., the Irish Celts, seem quite unfit for self-government.” 35

Other observers also believed that the racial characteristics of the Irish made them seem more like blacks and less like bona fide “white men.” When James Bryce wrote of the Negro that “his intelligence is rather quick than solid, and . . . shows the childishness as well as lack of self-control which belongs to primitive peoples,” he could just as easily have been describing the Irish as far as many readers were concerned. 36 During the Great War, it was not uncommon for those who witnessed or worked with Irish recruits in the British army to characterize them as “hardy and brave,” but also as prone to “displays of unnecessary bravado” that resulted in excessive casualties on the battlefield. Even a British officer who had “great sympathy” for the Irish troops he led confided to his wife that “his men came from ‘an extraordinary and inexplicable race’ and that Ireland must be an ‘island of children with the bodies of men.’ ” These are nearly the same terms that French observers applied to the black soldiers who were recruited from France’s West African colonies. They too displayed a “wild impulsiveness” and “fierce ardour for hand-to-hand combat” that made them ideal “shock troops.” But there were also frequent allegations that they lacked discipline and cohesion, that, like the Irish, they were a race of “children,” albeit “wonderful children, with generous hearts.” 37

For the Irish, racial in-betweenness was a condition they could ill afford at a time when European and American conceptions of race were narrowing, from the belief in a “multiplicity of nations, races, and religions” to the fulsome embrace of a simple binary division between “white” and “nonwhite.” […]

Dilke was a graduate of Cambridge, where he studied with Charles Kingsley. He was also a Liberal politician, a widely published author, and a racial imperialist whose main concern was not the supremacy of British capital but the triumph, on a global scale, of English institutions and values. The great impediment to this accomplishment, he believed, was the migration of the “cheaper races” to English-speaking countries such as the United States and Australia. “In America,” he wrote in Greater Britain: A Record of Travel in English-Speaking Countries during 1866 and 1867, “we have seen the struggle of the dear races against the cheap— the endeavors of the English to hold their own against the Irish and the Chinese.” But the threat these races posed was not only to the standard of living of the Saxons and their descendants but to civilization itself. He warned of “the danger to our race and to the world from Irish ascendency.” For if the Celt, his religion, and his “fierce” temperament prevailed, then the Englishman and his way of life would be eclipsed and the “freedom of mankind” would be jeopardized. 40

In tracing the evolution of anti-Irish stereotypes and polemics, then, from the sixteenth century through the nineteenth and into the twentieth, one comes face to face with a process of racialization rooted in conquest, colonization, and Anglicization. It was a process that sometimes engendered violence on a horrific scale and one that by means of the stage Irishman, the cartoon caricature, and the condescension and ridicule inherent in the “Paddy joke” did enormous damage to Irish self-esteem. 41 We have seen how the native Irish were portrayed as heathens, savages, and even wild animals; we have seen, too, how Paddy was constructed as feckless, lazy, riotous, and, sometimes, dangerous to the peace and tranquillity of England as well as Ireland. Perhaps by way of summary it is appropriate to turn to the Kentish Gazette, which in February 1847 sought to identify the essential ingredients of the Irish character and to offer up a solution to the Irish Question. During one of the most devastating months of the Great Famine, the Gazette commented editorially that “the system of agitation, of midnight plunder, of open-day assassination, of Hottentot ignorance, superstition, idolatry, and indolence must be eradicated, removed, abolished by the strong arm of the law.” 42 “Idolatry” and “superstition” were, of course, code words for Catholicism; indolence was, allegedly, the preferred pastime of the Irish people; assassination and midnight plunder were the staples of Irish politics; and Hottentot ignorance linked the Irish to African people who were widely regarded as primitive and backward, thus completing the process of racialization.

Who Built America, Volume II
pp. 146-149

American nativism often took the form of anti-Catholicism. In 1887 the American Protective Association (APA) organized to drive Irish Catholics out of American politics and soon claimed a half-million members, all of whom took an oath never to vote for a Catholic. The APA explicitly blamed the depression on Catholics, asserting that immigrants had taken the jobs of native-born Americans. It endorsed political candidates in 1894, but it broke apart when its members could not agree on establishing a third party or supporting the Republican ticket in 1896.

THE HISTORY OF RELIGIOUS CONFLICT IN THE UNITED STATES:
REVOLUTION TO SEPTEMBER 11TH

by Erik Wong, Stanford University

The early part of the 19th Century was relatively quiet in terms of religious conflict in America. The religious conflict that stands out in this period involves tensions between Catholics and Protestants, culminating in violence directed at Irish Catholic immigrants. The surge in immigration from Europe during the 19th Century coincided with an influx of Catholics and the rise of activist Protestantism in the U.S. As strong Protestant values permeated the country, immigrants who were Catholic also became viewed as outsiders and undemocratic. These views were separate from, but layered on top of, the harsh anti-Irish sentiment that also spread during the period.

In the 1830s and 1840s, anti-Catholic violence broke out in the Northeast and elsewhere. In 1835, one incident was ignited by a speaking tour by Lyman Beecher, who published Plea for the West, a book about a Catholic plot to take over the U.S. and impose Catholic rule. After Beecher’s speaking tour passed through Charlestown, Massachusetts, a mob set fire to the Ursuline convent and school.[3] In Philadelphia in 1844, pitched gun battles broke out between “native” Americans and mostly Irish Catholics. Martial law had to be declared in order to end the violence.[4]

The Divide Between Blacks and the Irish
by Noel Ignatiev, The Root

The Irish who immigrated to America in the 18th and 19th centuries were fleeing caste oppression and a system of landlordism that made the material conditions of the Irish peasant comparable to those of an American slave. The Penal Laws regulated every aspect of Irish life and established Irish Catholics as an oppressed race. Anticipating Judge Roger B. Taney’s famous dictum in the Dred Scott decision, on two occasions officials with judiciary authority in Ireland declared that “the law does not suppose any such person to exist as an Irish Roman Catholic.”

When they first began arriving here in large numbers, the Irish were, in the words of Mr. Dooley (a character created by journalist Finley Peter Dunne), given a shovel and told to start digging up the place as if they owned it. On the rail beds and canals, they labored for low wages under dangerous conditions; in the South they were occasionally employed where it did not make sense to risk the life of a slave. As they came to the cities, they were crowded into districts that became centers of crime, vice and disease.

They commonly found themselves thrown together with free Negroes. Blacks and the Irish fought each other and the police, socialized and occasionally intermarried, and developed a common culture of the lowly. They also both suffered the scorn of those better situated. Along with Jim Crow and Jim Dandy, the drunken, belligerent and foolish Patrick and Bridget were stock characters on the early stage. In antebellum America, it was speculated that if racial amalgamation was ever to take place, it would begin between those two groups. As we know, things turned out otherwise.

How the Irish Became White
by Art McDonald, University of Pittsburgh

Ironically, Irish Catholics came to this country as an oppressed race yet quickly learned that to succeed they had to in turn oppress their closest social class competitors, free Northern blacks. Back home these “native Irish or papists” suffered something very similar to American slavery under English Penal Laws. Yet, despite their revolutionary roots as an oppressed group fighting for freedom and rights, and despite consistent pleas from the great Catholic emancipator, Daniel O’Connell, to support the abolitionists, the newly arrived Irish-Americans judged that the best way of gaining acceptance as good citizens and to counter the Nativist movement was to cooperate in the continued oppression of African Americans. Ironically, at the same time they were collaborating with the dominant culture to block abolition, they were garnering support from among Southern, slaveholding democrats for Repeal of the oppressive English Act of the Union back home. Some even convinced themselves that abolition was an English plot to weaken this country.

Upon hearing of this position on the part of so many of his fellow countrymen now residing in the United States, in 1843 O’Connell wrote: “Over the broad Atlantic I pour forth my voice, saying, come out of such a land, you Irishmen; or, if you remain, and dare countenance the system of slavery that is supported there, we will recognize you as Irishmen no longer.” It’s a tragic story. In a letter published in the Liberator in 1854, it was stated that “passage to the United States seems to produce the same effect upon the exile of Erin as the eating of the forbidden fruit did upon Adam and Eve. In the morning, they were pure, loving, and innocent; in the evening, guilty.”

Irish and African Americans had lots in common and lots of contact during this period; they lived side by side and shared work spaces. In the early years of immigration the poor Irish and blacks were thrown together, very much part of the same class competing for the same jobs. In the census of 1850, the term mulatto appears for the first time, due primarily to intermarriage between Irish and African Americans. The Irish were often referred to as “Negroes turned inside out” and Negroes as “smoked Irish.” A famous quip of the time attributed to a black man went something like this: “My master is a great tyrant, he treats me like a common Irishman.” Free blacks and Irish were viewed by the Nativists as related, somehow similar, performing the same tasks in society. It was felt that if amalgamation between the races was to happen, it would happen between Irish and blacks. But, ultimately, the Irish made the decision to embrace whiteness, thus becoming part of the system which dominated and oppressed blacks. Although it contradicted their experience back home, it meant freedom here since blackness meant slavery.

How Housing Discrimination Created the Idea of Whiteness
by Whet Moser, Chicago Magazine

Note that Irish and Germans are at the top of the list. Had Hoyt’s book been written fifty, or even twenty years before, they likely would have been lower. As Lewinnek described to me, German and Irish immigrants were relegated to the periphery of the city after the Great Fire by the “fire limits,” prohibitions on the construction of inexpensive wooden houses that effectively pushed working-class homeowners out of the city center; Chicago Germans were at the forefront of briefly successful protests against the fire limits.

Not In My Neighborhood: How Bigotry Shaped a Great American City
by Antero Pietila
Excerpt

Harlem exemplifies racial succession, which is the sociologists’ term for ethnic, racial and economic neighborhood transition. In the space of four decades between the 1870s and 1910s, that section of New York City went from a white upper-class community of American-born residents to one populated by recent Irish, Jewish, German, Italian and Scandinavian immigrants.

American Pharaoh
by Adam Cohen and Elizabeth Taylor
Chapter 1

If white Chicago as a whole turned a cold shoulder to the new black arrivals, Daley’s Irish kinsmen were particularly unwelcoming.

The Irish and blacks had much in common. Ireland’s many years of domination at the hands of the British resembled, if not slavery, then certainly southern sharecropping — with Irish farmers working the land and sending rent to absentee landlords in England. The Irish were dominated, like southern blacks, through violence, and lost many of the same civil rights: to vote, to serve on juries, and to marry outside their group. Indeed, after Cromwell’s bloody invasion in the mid-1600s, not only were Irish-Catholics massacred in large numbers, but several thousand were sent in chains to the West Indies, where they were sold into slavery. But these similar histories of oppression did not bring Chicago’s Irish and blacks together. Much of the early difficulty stemmed from rivalry between two groups relegated to the lowest levels of the social order.

Ethnic America: A History
by Thomas Sowell
pp. 277-279

Today’s neighborhood changes have been dramatized by such expressions as “white flight,” but these patterns existed long before black-white neighborhood changes were the issue. When the nineteenth-century Irish immigrants flooded into New York and Boston, the native Americans fled. With the first appearance of an Irish family in a neighborhood, “the exodus of non-Irish residents began.” 2 According to a contemporary, property values “tremble” as “fear spreads,” and panicky flight ensues. 3 As “the old occupants fled to the outskirts of town” 4 in the mid-nineteenth century when immigration increased, New York City grew northward about one mile per decade. The built-up area extended only as far north as Fourteenth Street in 1840, but it grew to Thirty-fourth Street in a decade, and to Forty-second Street by 1860. 5

“White flight” is a misleading term, not only because of its historical narrowness, but also because blacks too have fled when the circumstances were reversed. Blacks fled a whole series of neighborhoods in nineteenth-century New York, “pursued” by new Italian immigrants who moved in. 6 In nineteenth-century Detroit, blacks moved out of neighborhoods as Polish immigrants moved in. 7 The first blacks in Harlem were fleeing from the tough Irish neighborhoods in mid-Manhattan, 8 and avoided going north of 145th Street, for fear of encountering more Irish there. 9

As the relative socioeconomic positions of ethnic groups changed with the passage of time, so did the neighborhood flight. In nineteenth-century neighborhoods where Anglo-Saxons had once fled as the Irish moved in, the middle-class Irish later fled as the Jews and Italians moved in. […]

Ethnic succession did not end with neighborhoods. Early Irish immigrants were often used as strikebreakers and were hated and kept out of unions as a result. Later, the Irish were unionized and Italians, Negroes, and many others were used as strikebreakers, encountering in turn the same hostility and resistance to their admission to unions. Still later, the Irish were union leaders, while Jews or Italians were rank-and-file union members. Today, there are unions where Jews are union leaders and blacks and Puerto Ricans are members. Similarly, in the schools, the Irish immigrant children in the mid-nineteenth century were taught by Protestant Anglo-Saxon teachers. Half a century later, Jewish immigrant children were far more likely to be taught by Irish Catholics than by Jewish teachers. A generation later, Negro children in Harlem were far more likely to be taught by Jewish teachers than by black teachers. Few children of rising ethnic groups have had “role models” of their own ethnicity. Some of the most successful— notably the Chinese and the Japanese— almost never did.

While various ethnic groups succeeded each other in neighborhoods, schools, jobs, etc., the country as a whole was also changing. The installation of underground sewage lines and indoor plumbing in the late nineteenth century meant that no other urban ethnic group had to endure as primitive and dangerous a set of living conditions as the Irish had in the mid-nineteenth century. Subways, trolleys, and eventually bus lines made it feasible for working people to spread out and still get to work in a reasonable time. The incredible overcrowding on New York’s lower east side in the nineteenth century was never to be approached again in modern slums. Blacks, Puerto Ricans, and Mexican Americans today live in crowded housing conditions, compared to their contemporaries, but in no way so crowded as the conditions among Jews, Italians, or the Irish in the nineteenth century. “Overcrowded” schools today may have perhaps half as many students per class as in nineteenth century schools on New York’s lower east side. The problems of today are very real, and sometimes severe, but they are by no means historically unprecedented.

Many of the problems of the poor and powerless remain the same, whatever group fills that role at a given time. The Jewish Daily Forward commented in 1907: “police in the Jewish quarter of New York are the most savage in America.” 19 An Italian immigrant writer complained in the early twentieth century about his experiences with the “rudeness” and “inconsiderateness” of government officials, which he found “disgusting.” 20 Many of the complaints against poor ethnic groups were also similar to those today— that “children are born with reckless regularity” among the Jews and Italians, 21 that murders are a result of “the wanton brutality of the moment,” 22 and that raising the immigrants to a decent level “implies a problem of such magnitude and such distant realization” that it can only be imagined. 23

When the Ancient World Was Still a Living Memory

I often discuss the historical period beginning with the Enlightenment thinkers and ending with the early modern revolutions. There are many obvious reasons for this focus, as in many ways that period is the origin of the world we live in. But for the same reason, it was also the end of the world that came before.

That is what makes it so fascinating to read the words of those who were alive then. They were well aware of what was being lost. It was still within living memory, such as the last remnants of feudalism still holding on even as revolutions were remaking society. The costs of change were clearly understood and many thought it necessary to compensate in some way for what was being lost (e.g., Paine’s citizen’s dividend) or at the very least to acknowledge its passing.

That is different today. We live in a world fully changed. There is little if any living memory of what came before, although isolated traces linger in some remote places. This relates to the disconnection I see among so many people today, across the political spectrum, but it stands out most for me among liberals I observe. Liberalism has embraced modernity and so forgotten its roots, the historical development and radical thought that made it possible. Blindness to the past makes for a lack of vision in the present.

All of this was brought to mind because of something I just read. It is a Jacobin article by Alex Gourevitch, in response to Mark Lilla’s review of Corey Robin’s 2011 book, The Reactionary Mind. Gourevitch writes that,

“[I]f liberalism were really committed to the view that the individual is “metaphysically” prior to society, that would almost single-handedly eliminate the French liberal tradition, from the proto-liberalism of Montesquieu, to the sociological liberalism of Benjamin Constant, to the holist liberalism of Emile Durkheim. Constant’s famous speech in 1819 distinguishing the liberty of the moderns from that of the ancients was explicitly based on an appreciation of the social origins of modern individualism. “Ancient peoples,” wrote Constant, “could neither feel the need for [modern liberty], nor appreciate its advantages. Their social organization led them to desire an entirely different freedom from the one which this system grants to us.” Social organization “leads” and systems “grant.” No “metaphysical” priority of the individual there.”

Benjamin Constant was of French ancestry. His family had fled religious persecution and so he was born in Switzerland, but he returned to France as an adult. He was one of the first people to identify as a liberal and he was involved in the revolutionary fervor of the times, although he sought moderation. What interests me here is that it was the French Revolution that led to the abolition of feudalism in that country. Feudalism was still a major force at the time, although it was on the wane across Europe. When Constant wrote of the ancient world, he surely was speaking from the firsthand experience of the persisting ancient social order in the world around him.

Many thinkers of that era wrote about the past, specifically of Western history. They were literally and experientially closer to the past than we are now. Feudalism, for example, had developed from the landholding tradition of the Roman Empire. The influence of the ancient world was much more apparent at the time and so they could speak of the ancient world with a familiarity that we cannot. For us, that earlier social order is simply gone and at best we could read about it in history books, not that many will ever bother to do so. It’s not a living reality to us and so doesn’t compel our interest, certainly not our moral imaginations.

Piraha and Bicameralism

For the past few months, I’ve been reading about color perception, cognition, and terminology. I finally got around to finishing a post on it. The topic is a lot more complex and confusing than what one might expect. The specific inspiration was the color blue, a word that apparently doesn’t signify a universal human experience. There is no condition of blueness objectively existing in the external world. It’s easy to forget that a distinction always exists between perception and reality or rather between one perception of reality and another.

How do you prove something is real when it feels real in your experience? For example, how would you attempt to prove your consciousness, interior experience, and individuality? What does it mean for your sense of self to be real? You can’t even verify your experience of blue matches that of anyone else, much less show that blueness is a salient hue for all people. All you have is the experience itself. Your experience can motivate, influence, and shape what and how you communicate or try to communicate, but you can’t communicate the experience itself. This inability is a stumbling block of all human interactions. The gap between cultures can be even more vast.

This is why language is so important to us. Language doesn’t only serve the purpose of communication but, more importantly, the purpose of creating a shared worldview. This is the deeply ingrained human impulse to bond with others, no matter how imperfectly it is achieved in practice. When we have a shared language, we can forget about the philosophical dilemmas of experience and to what degree it is shared. We’d rather not have to constantly worry about such perplexing and disturbing issues.

These contemplations were stirred up by one book in particular, Daniel L. Everett’s Don’t Sleep, There Are Snakes. In my post on color, I brought up some of his observations about the Piraha (read pp. 136-141 from that book and have your mind blown). Their experience is far beyond what most people experience in the modern West. They rely on immediacy of experience. If they don’t experience something, or someone they know doesn’t, it has little relevance to their lives and no truth value in their minds. Yet what they consider to be immediate experience can seem bizarre to us outsiders.

Piraha spirituality isn’t otherworldly. Spirits exist, just as humans exist. In fact, there is no certain distinction. When someone is possessed by a spirit, they are that spirit and the Piraha treat them as such. The person who is possessed is simply not there. The spirit is real because they experience the spirit with their physical senses. Sometimes in coming into contact with a spirit, a Piraha individual will lose their old identity and gain a new one, a permanent change that comes with a new name. The previous person is no longer there and, I suppose, never comes back. They aren’t pretending to change personalities. That is their direct experience of reality.

Talk about the power of language. A spirit gives someone a new name and they become a different person. The name has power; it represents an entire way of being, a personality unto itself. The person becomes what they are named. This is why the Piraha don’t automatically assume someone is the same person the next time they meet them, for they live in a fluid world where change is to be expected.

A modern Westerner looks at the Piraha individual and sees the same person, someone visibly, physically unchanged. But another Piraha tribal member doesn’t see the same person. For example, when possessed, the person is apparently not conscious of the experience and won’t remember it later. During possession, they will be in an entirely dissociated state of mind, literally being someone else with different behaviors and a different voice. The Piraha audience watching the possession also won’t remember anything other than a spirit having visited. It isn’t a possession to them. The spirit literally was there. That is their perceived reality, what they know in their direct experience.

What the Piraha consider crazy and absurd is the Western faith in a monotheistic tradition not based on direct experience. If you never met Jesus, they can’t comprehend why you’d believe in him. The very notion of ‘faith’ makes absolutely no sense to them, as it seems like an act of believing what you know not to be real in your own experience. They are sincere Doubting Thomases. Jesus isn’t real, until he physically walks into their village to be seen with their own eyes, touched with their own hands, and heard with their own ears. To them, spirituality is as real as the physical world around them and is proven by the same means, through direct experience or else the direct experience of someone who is personally trusted to speak honestly.

Calling the Piraha experience of spirits a mass hallucination is to miss the point. To the degree that is true, we are all mass hallucinating all the time. It’s just that one culture’s mass hallucinations differ from those of another. We modern Westerners, however, so desperately want to believe there can be only one objective reality to rule them all. The problem is we humans aren’t objective beings. Our perceived reality is unavoidably subjective. We can’t see our own cultural biases because they are the only reality we know.

In reading Everett’s description of the Piraha, I couldn’t help thinking about Julian Jaynes’ theory of the bicameral mind. Jaynes wasn’t primarily focused on hunter-gatherers such as the Piraha. Even so, one could see the Piraha culture as having elements of bicameralism, whether or not they ever were fully bicameral. They don’t hallucinate hearing voices from spirits. They literally hear them. How such voices are spoken is apparently not the issue. What matters is that they are spoken and heard. And those spirit voices will sometimes tell the Piraha important information that will influence, if not determine, their behaviors and actions. These spirit visitations are obviously treated seriously and play a central role in the functioning of their society.

What is strangest of all is that the Piraha are not fundamentally different from you or me. They point to one of the near infinite possibilities that exist within our shared human nature. If a baby from Western society were raised by the Piraha, we have no reason to assume that he or she wouldn’t grow up to be like any other Piraha. It was only a few centuries ago that it was common for Europeans to have regular contact with spirits. The distance between the modern mind and what came before is shorter than it first appears, for what came before still exists within us, as what we will become is a seed already planted.*

I don’t want this point to be missed. What is being discussed here isn’t ultimately about colors or spirits. This is a way of holding up a mirror to ourselves. What we see reflected back isn’t what we expected, isn’t how we appeared in our own imaginings. What if we aren’t what we thought we were? What if we turn out to be a much more amazing kind of creature, one that holds a multitude within?

(*Actually, that isn’t stated quite correctly. It isn’t what came before. The Piraha are still here, as are many other societies far different from the modern West. It’s not just that we carry the past within us. That is as true for the Piraha, considering they too carry a past within them, most of it a past of human evolution shared with the rest of humanity. Modern individuality has existed for only a blip of time, a few hundred years out of the hundreds of thousands of years of hominid existence. The supposed bicameral mind lasted for thousands of years longer than the entire post-bicameral age. What are the chances that our present experience of individuality will last as long? Highly unlikely.)

* * *

Don’t Sleep, There Are Snakes:
Life and Language in the Amazonian Jungle
by Daniel L Everett
pp. 138-139

Pirahãs occasionally talked about me, when I emerged from the river in the evenings after my bath. I heard them ask one another, “Is this the same one who entered the river or is it kapioxiai?”

When I heard them discuss what was the same and what was different about me after I emerged from the river, I was reminded of Heraclitus, who was concerned about the nature of identities through time. Heraclitus posed the question of whether one could step twice into the same river. The water that we stepped into the first time is no longer there. The banks have been altered by the flow so that they are not exactly the same. So apparently we step into a different river. But that is not a satisfying conclusion. Surely it is the same river. So what does it mean to say that something or someone is the same this instant as they were a minute ago? What does it mean to say that I am the same person I was when I was a toddler? None of my cells are the same. Few if any of my thoughts are. To the Pirahãs, people are not the same in each phase of their lives. When you get a new name from a spirit, something anyone can do anytime they see a spirit, you are not exactly the same person as you were before.

Once when I arrived in Posto Novo, I went up to Kóhoibiíihíai and asked him to work with me, as he always did. No answer. So I asked again, “Ko Kóhoi, kapiigakagakaísogoxoihí?” (Hey Kóhoi, do you want to mark paper with me?) Still no answer. So I asked him why he wasn’t talking to me. He responded, “Were you talking to me? My name is Tiáapahai. There is no Kóhoi here. Once I was called Kóhoi, but he is gone now and Tiáapahai is here.”

So, unsurprisingly, they wondered if I had become a different person. But in my case their concern was greater. Because if, in spite of evidence to the contrary, I turned out not to be a xíbiisi, I might really be a different entity altogether and, therefore, a threat to them. I assured them that I was still Dan. I was not kapioxiai.

On many rainless nights, a high falsetto voice can be heard from the jungle near a Pirahã village. This falsetto sounds spiritlike to me. Indeed, it is taken by all the Pirahãs in the village to be a kaoáíbógí, or fast mouth. The voice gives the villagers suggestions and advice, as on how to spend the next day, or on possible night dangers (jaguars, other spirits, attacks by other Indians). This kaoáíbógí also likes sex, and he frequently talks about his desire to copulate with village women, with considerable detail provided.

One night I wanted to see the kaoáíbógí myself. I walked through the brush about a hundred feet to the source of that night’s voice. The man talking in the falsetto was Xagábi, a Pirahã from the village of Pequial and someone known to be very interested in spirits. “Mind if I record you?” I asked, not knowing how he might react, but having a good idea that he would not mind.

“Sure, go ahead,” he answered immediately in his normal voice. I recorded about ten minutes of his kaoáíbógí speech and then returned to my house.

The next day, I went to Xagábi’s place and asked, “Say, Xagábi, why were you talking like a kaoáíbógí last night?”

He acted surprised. “Was there a kaoáíbógí last night? I didn’t hear one. But, then, I wasn’t here.”

pp. 140-141

After some delay, which I could not help but ascribe to the spirits’ sense of theatrical timing, Peter and I simultaneously heard a falsetto voice and saw a man dressed as a woman emerge from the jungle. It was Xisaóoxoi dressed as a recently deceased Pirahã woman. He was using a falsetto to indicate that it was the woman talking. He had a cloth on his head to represent the long hair of a woman, hanging back like a Pirahã woman’s long tresses. “She” was wearing a dress.

Xisaóoxoi’s character talked about how cold and dark it was under the ground where she was buried. She talked about what it felt like to die and about how there were other spirits under the ground. The spirit Xisaóoxoi was “channeling” spoke in a rhythm different from normal Pirahã speech, dividing syllables into groups of two (binary feet) instead of the groups of three (ternary feet) used in everyday talking. I was just thinking how interesting this would be in my eventual analysis of rhythm in Pirahã, when the “woman” rose and left.

Within a few minutes Peter and I heard Xisaóoxoi again, but this time speaking in a low, gruff voice. Those in the “audience” started laughing. A well-known comical spirit was about to appear. Suddenly, out of the jungle, Xisaóoxoi emerged, naked, and pounding the ground with a heavy section of the trunk of a small tree. As he pounded, he talked about how he would hurt people who got in his way, how he was not afraid, and other testosterone-inspired bits of braggadocio.

I had discovered, with Peter, a form of Pirahã theater! But this was of course only my classification of what I was seeing. This was not how the Pirahãs would have described it at all, regardless of the fact that it might have had exactly this function for them. To them they were seeing spirits. They never once addressed Xisaóoxoi by his name, but only by the names of the spirits.

What we had seen was not the same as shamanism, because there was no one man among the Pirahãs who could speak for or to the spirits. Some men did this more frequently than others, but any Pirahã man could, and over the years I was with them most did, speak as a spirit in this way.

The next morning when Peter and I tried to tell Xisaóoxoi how much we enjoyed seeing the spirits, he, like Xagábi, refused to acknowledge knowing anything about it, saying he wasn’t there.

This led me to investigate Pirahã beliefs more aggressively. Did the Pirahãs, including Xisaóoxoi, interpret what we had just seen as fiction or as fact, as real spirits or as theater? Everyone, including Pirahãs who listened to the tape later, Pirahãs from other villages, stated categorically that this was a spirit. And as Peter and I were watching the “spirit show,” I was given a running commentary by a young man sitting next to me, who assured me that this was a spirit, not Xisaóoxoi. Moreover, based on previous episodes in which the Pirahãs doubted that I was the same person and their expressed belief that other white people were spirits, changing forms at will, the only conclusion I could come to was that for the Pirahãs these were encounters with spirits— similar to Western culture’s seances and mediums.

Pirahãs see spirits in their mind, literally. They talk to spirits, literally. Whatever anyone else might think of these claims, all Pirahãs will say that they experience spirits. For this reason, Pirahã spirits exemplify the immediacy of experience principle. And the myths of any other culture must also obey this constraint or there is no appropriate way to talk about them in the Pirahã language.

One might legitimately ask whether something that is not true to Western minds can be experienced. There is reason to believe that it can. When the Pirahãs claim to experience a spirit they have experienced something, and they label this something a spirit. They attribute properties to this experience, as well as the label spirit. Are all the properties, such as existence and lack of blood, correct? I am sure that they are not. But I am equally sure that we attribute properties to many experiences in our daily lives that are incorrect.

* * *

Radical Human Mind: From Animism to Bicameralism and Beyond

On Being Strange

Self, Other, & World

Humanity in All of its Blindness

The World that Inhabits Our Mind

Blue on Blue

“Abstract words are ancient coins whose concrete images in the busy give-and-take of talk have worn away with use.”
~ Julian Jaynes, The Origin of Consciousness in the
Breakdown of the Bicameral Mind

“This blue was the principle that transcended principles. This was the taste, the wish, the Binah that understands, the dainty fingers of personality and the swirling fingerprint lines of individuality, this sigh that returns like a forgotten and indescribable scent that never dies but only you ever knew, this tingle between familiar and strange, this you that never there was word for, this identifiable but untransmittable sensation, this atmosphere without reason, this illicit fairy kiss for which you are more fool than sinner, this only thing that God and Satan mistakenly left you for your own and which both (and everyone else besides) insist to you is worthless— this, your only and invisible, your peculiar— this secret blue.”
~ Quentin S. Crisp, Blue on Blue

Perception is as much cognition as sensation. Colors don’t exist in the world; they are our brain’s way of processing light waves detected by the eyes. Someone unable to see from birth will never be able to see normal colors, even if they gain sight as an adult. The brain has to learn how to see the world, and that is a process that happens primarily in infancy and childhood.

Radical questions follow from this insight. Do we experience blue, forgiveness, individuality, and so on before our culture has the language for them? And, conversely, does the language we use, and how we use it, indicate our actual experience? Or does it filter and shape it? Did the ancients lack not only perceived blueness but also individuated/interiorized consciousness and artistic perspective because they had no way of communicating and expressing them? If they possessed such things as their human birthright, why did they not communicate them in their texts and show them in their art?

The most ancient people would refer to the sky as black. Some isolated people in more recent times have also been observed offering this same description. This apparently isn’t a strange exception. Guy Deutscher mentions that, in an informal color experiment, his young daughter once pointed to the “pitch-black sky late at night” and declared it blue—that was at the age of four, long after having learned the color names for blue and black. She had the language to make the distinction and yet she made a similar ‘mistake’ as some isolated island people. How could that be? Aren’t ‘black’ and ‘blue’ obviously different?

The ancients described physical appearances in some ways that seem bizarre to the modern sensibility. Homer says the sea appears something like wine, and so do sheep. Or else the sea is violet, just as are oxen and iron. Even more strangely, green is the color of honey and the color human faces turn under emotional distress. Yet nowhere in the ancient world is anything blue, for no word for it existed. Things that seem blue to us are either green, black, or simply dark in ancient texts.

It has been argued that Homer’s language, such as his word for ‘bronze’, might not have referred to color at all. But that just adds to the strangeness. We can’t determine what colors he might have been referring to, or even whether he was describing colors at all. There were no abstractly generalized color terms exclusively dedicated to colors; the same words also described other physical features, psychological experiences, and symbolic values. This might imply that synesthesia was once a more common experience, related to the greater capacity preliterate individuals had for memorizing vast amounts of information (see Knowledge and Power in Prehistoric Societies by Lynne Kelly).

The paucity and confusion of ancient color language indicates color wasn’t perceived as all that significant, to the degree it was consciously perceived at all, at least not in the way we moderns think about it. Color hue might not have seemed all that relevant in an ancient world mostly lacking artificially colored objects and entirely lacking bright garden flowers. Besides the ancient Egyptians, no one in the earliest civilizations had developed a blue pigment and hence a word to describe it. Blue is a rare color in nature. Even water and sky are rarely a bright clear blue, when blue at all.

This isn’t just about color. There is something extremely bizarre going on, given what we moderns assume to be the case about the human mind and perception.

Consider the case of the Piraha, as studied by Daniel L. Everett (a man who personally understands the power of their cultural worldview). The Piraha have no color terms, at least not as single words, although they are able to describe colors using multiple words and concrete comparisons, such as red described as being like blood, or green as being like what is not yet ripe. Of course, they’ve been in contact with non-Piraha for a while now, and so no one knows how they would’ve talked about colors before interaction with outsiders.

From a Western perspective, there are many other odd things about the Piraha. Their language does not fit the expectations of what many have thought universal to all human languages. They have no terms for numbers and counting, as well as no “quantifiers like all, each, every, and so on” (Everett, Don’t Sleep, There Are Snakes, p. 119). Originally, they had no pronouns, and the pronouns they borrowed from other languages are used only in limited ways. They say ‘say’ in place of ‘think’, which makes one wonder what this indicates about their experience: is their thought an act of speaking?

They lack ancestor worship; there aren’t even words to refer to family members one never personally knew. There are also no creation stories or myths or fiction, nor any apparent notion of the world having been created or of another supernatural world existing. They don’t think in those terms nor, one might presume, perceive reality in those terms. They are epistemological agnostics about anything they haven’t personally experienced, or that someone they personally know hasn’t experienced, and their language is extremely precise in its knowledge claims, making early Western philosophers seem simpleminded in comparison. Everett was in the unfortunate position of having tried to convert them to Christianity; instead, they converted him to atheism. Yet the Piraha live in a world they perceive as filled with spirits. These aren’t otherworldly spirits. They are very much in this world, and when a Piraha speaks as a spirit, they are that spirit. To put it another way, the world is full of diverse and shifting selves.

Color terms refer to abstract unchanging categories, the very thing that seems least relevant to the Piraha. They favor a subjective mentality, but that doesn’t mean they possess a subjective self similar to Western culture. Like many hunter-gatherers, they have a fluid sense of identity that changes along with their names, their former self treated as no longer existing whatsoever, just gone. There is no evidence of belief in a constant self that would survive death, as there is no belief in gods nor a heaven and hell. Instead of being obsessed with what is beyond, they are endlessly fascinated by what is at the edge of experience, what appears and disappears. In Cultural Constraints on Grammar and Cognition in Piraha, Everett explains this:

“After discussions and checking of many examples of this, it became clearer that the Pirahã are talking about liminality—situations in which an item goes in and out of the boundaries of their experience. This concept is found throughout Pirahã culture. Pirahã’s excitement at seeing a canoe go around a river bend is hard to describe; they see this almost as traveling into another dimension. It is interesting, in light of the postulated cultural constraint on grammar, that there is an important Pirahã term and cultural value for crossing the border between experience and nonexperience.”

To speak of colors is to speak of particular kinds of perceptions and experiences. The Piraha culture is practically incomprehensible to us, as the Piraha represent an alien view of the world. Everett, in making a conclusion, writes that,

“Piraha thus provides striking evidence for the influence of culture on major grammatical structures, contradicting Newmeyer’s (2002:361) assertion (citing “virtually all linguists today”), that “there is no hope of correlating a language’s gross grammatical properties with sociocultural facts about its speakers.” If I am correct, Piraha shows that gross grammatical properties are not only correlated with sociocultural facts but may be determined by them.”

Even so, Everett is not arguing for a strong Whorfian position of linguistic determinism. Then again, Vyvyan Evans states that not even Benjamin Lee Whorf made this argument. In Language, Thought and Reality, Whorf wrote (as quoted by Evans in The Language Myth):

“The tremendous importance of language cannot, in my opinion, be taken to mean necessarily that nothing is back of it of the nature of what has traditionally been called ‘mind’. My own studies suggest, to me, that language, for all its kingly role, is in some sense a superficial embroidery upon deeper processes of consciousness, which are necessary before any communication, signalling, or symbolism whatsoever can occur.”

Anyway, Everett observed that the Piraha demonstrated a pattern in how they linguistically treated certain hues of color. It’s just that they had much diversity and complexity in how they described colors, a dark brown object being described differently than a dark-skinned person, and no consistency across Piraha individuals in which phrases they’d use to describe which colors. Still, like any other humans, they had the capacity for color perception, whether or not their color cognition matches that of other cultures.

To emphasize the point, the following is a similar example, as presented by Vyvyan Evans in The Language Myth (pp. 207-8):

“The colour system in Yélî Dnye has been studied extensively by linguistic anthropologist Stephen Levinson. Levinson argues that the lesson from Rossel Island is that each of the following claims made by Berlin and Kay is demonstrably false:

  • Claim 1: All languages have basic colour terms
  • Claim 2: The colour spectrum is so salient a perceptual field that all cultures must systematically and exhaustively name the colour space
  • Claim 3: For those basic colour terms that exist in any given language, there are corresponding focal colours – there is an ideal hue that is the prototypical shade for a given basic colour term
  • Claim 4: The emergence of colour terms follows a universal evolutionary pattern

“A noteworthy feature of Rossel Island culture is this: there is little interest in colour. For instance, there is no native artwork or handiwork in colour. The exception to this is hand-woven patterned baskets, which are usually uncoloured, or, if coloured, are black or blue. Moreover, the Rossel language doesn’t have a word that corresponds to the English word colour: the domain of colour appears not to be a salient conceptual category independent of objects. For instance, in Yélî, it is not normally possible to ask what colour something is, as one can in English. Levinson reports that the equivalent question would be: U pââ ló nté? This translates as “Its body, what is it like?” Furthermore, colours are not usually associated with objects as a whole, but rather with surfaces.”

Evans goes into greater detail. Suffice it to say, he makes a compelling argument that this example contradicts and falsifies the main claims of the conventional theory, specifically that of Berlin and Kay. This culture defies expectations. It’s one of the many exceptions that appear to disprove the hypothetical rule.

Part of the challenge is that we can’t study other cultures as neutral observers. Researchers end up influencing the cultures they study, or else simply projecting their own cultural biases onto them and interpreting the results accordingly. Even the tests used to analyze color perception across cultures are themselves culturally biased. They don’t just measure how people divide up hues. In the process of being tested, the subjects are taught, by the test’s very design, a particular way of thinking about color perception. The test can’t tell us how people thought about colors prior to the test itself. And obviously, even if the test could accomplish this impossible feat, we have no way of traveling back in time to apply it to ancient people.

We are left with a mystery and no easy way to explore it.

* * *

Here are a few related posts of mine. And below that are other sources of info, including a video at the very bottom.

Radical Human Mind: From Animism to Bicameralism and Beyond

Folk Psychology, Theory of Mind & Narrative

Self, Other, & World

Does Your Language Shape How You Think?
by Guy Deutscher

SINCE THERE IS NO EVIDENCE that any language forbids its speakers to think anything, we must look in an entirely different direction to discover how our mother tongue really does shape our experience of the world. Some 50 years ago, the renowned linguist Roman Jakobson pointed out a crucial fact about differences between languages in a pithy maxim: “Languages differ essentially in what they must convey and not in what they may convey.” This maxim offers us the key to unlocking the real force of the mother tongue: if different languages influence our minds in different ways, this is not because of what our language allows us to think but rather because of what it habitually obliges us to think about. […]

For many years, our mother tongue was claimed to be a “prison house” that constrained our capacity to reason. Once it turned out that there was no evidence for such claims, this was taken as proof that people of all cultures think in fundamentally the same way. But surely it is a mistake to overestimate the importance of abstract reasoning in our lives. After all, how many daily decisions do we make on the basis of deductive logic compared with those guided by gut feeling, intuition, emotions, impulse or practical skills? The habits of mind that our culture has instilled in us from infancy shape our orientation to the world and our emotional responses to the objects we encounter, and their consequences probably go far beyond what has been experimentally demonstrated so far; they may also have a marked impact on our beliefs, values and ideologies. We may not know as yet how to measure these consequences directly or how to assess their contribution to cultural or political misunderstandings. But as a first step toward understanding one another, we can do better than pretending we all think the same.

Why Isn’t the Sky Blue?
by Tim Howard, Radiolab

Is the Sky Blue?
by Lisa Wade, PhD, Sociological Images

Even things that seem objectively true may only seem so if we’ve been given a framework with which to see it; even the idea that a thing is a thing at all, in fact, is partly a cultural construction. There are other examples of this phenomenon. What we call “red onions” in the U.S., for another example, are seen as blue in parts of Germany. Likewise, optical illusions that consistently trick people in some cultures — such as the Müller-Lyer illusion — don’t often trick people in others.

Could our ancestors see blue?
by Ellie Zolfagharifard, Daily Mail

But it’s not just about lighting conditions or optical illusions – evidence is mounting that until we have a way to describe something, we may not see that it’s there.

Fathoming the wine-dark sea
by Christopher Howse, The Spectator

It wasn’t just the ‘wine-dark sea’. That epithet oinops, ‘wine-looking’ (the version ‘wine-dark’ came from Andrew Lang’s later translation) was applied both to the sea and to oxen, and it was accompanied by other colours just as nonsensical. ‘Violet’, ioeis, (from the flower) was used by Homer of the sea too, but also of wool and iron. Chloros, ‘green’, was used of honey, faces and wood. By far the most common colour words in his reticent vocabulary were black (170 times) and white (100), followed distantly by red (13).

What could account for this alien colour-sense? It wasn’t that Homer (if Homer existed) was blind, for there are parallel usages in other Greek authors.

A Winelike Sea
by Caroline Alexander, Lapham’s Quarterly

The image Homer hoped to conjure with his winelike sea greatly depended upon what wine meant to his audience. While the Greeks likely knew of white wine, most ancient wine was red, and in the Homeric epics, red wine is the only wine specifically described. Drunk at feasts, poured onto the earth in sacred rituals, or onto the ashes around funeral pyres, Homeric wine is often mélas, “dark,” or even “black,” a term with broad application, used of a brooding spirit, anger, death, ships, blood, night, and the sea. It is also eruthrós, meaning “red” or the tawny-red hue of bronze; and aíthops, “bright,” “gleaming,” a term also used of bronze and of smoke in firelight. While these terms notably have more to do with light, and the play of light, than with color proper, Homeric wine was clearly dark and red and would have appeared especially so when seen in the terracotta containers in which it was transported. “Winelike sea” cannot mean clear seawater, nor the white splash of sea foam, nor the pale color of a clear sea lapping the shallows of a sandy shore. […]

Homer’s sea, whether háls, thálassa, or póntos, is described as misty, darkly troubled, black-dark, and grayish, as well as bright, deep, clashing, tumultuous, murmuring, and tempestuous—but it is never blue. The Greek word for blue, kuáneos, was not used of the sea until the late sixth or early fifth century BC, in a poem by the lyric poet Simonides—and even here, it is unclear if “blue” is strictly meant, and not, again, “dark”:

the fish straight up from the
dark/blue water leapt
at the beautiful song

After Simonides, the blueness of kuáneos was increasingly asserted, and by the first century, Pliny the Elder was using the Latin form of the word, cyaneus, to describe the cornflower, whose modern scientific name, Centaurea cyanus, still preserves this lineage. But for Homer kuáneos is “dark,” possibly “glossy-dark” with hints of blue, and is used of Hector’s lustrous hair, Zeus’ eyebrows, and the night.

Ancient Greek words for color in general are notoriously baffling: In The Iliad, “chlorós fear” grips the armies at the sound of Zeus’ thunder. The word, according to R. J. Cunliffe’s Homeric lexicon, is “an adjective of color of somewhat indeterminate sense” that is “applied to what we call green”—which is not the same as saying it means “green.” It is also applied “to what we call yellow,” such as honey or sand. The pale green, perhaps, of vulnerable shoots struggling out of soil, the sickly green of men gripped with fear? […]

Rather than being ignorant of color, it seems that the Greeks were less interested in and attentive to hue, or tint, than they were to light. As late as the fourth century BC, Plato named the four primary colors as white, black, red, and bright, and in those cases where a Greek writer lists colors “in order,” they are arranged not by the Newtonian colors of the rainbow—red, orange, yellow, green, blue, indigo, violet—but from lightest to darkest. And The Iliad contains a broad, specialized vocabulary for describing the movement of light: argós meaning “flashing” or “glancing white”; aiólos, “glancing, gleaming, flashing,” or, according to Cunliffe’s Lexicon, “the notion of glancing light passing into that of rapid movement,” and the root of Hector’s most defining epithet, koruthaíolos—great Hector “of the shimmering helm.” Thus, for Homer, the sky is “brazen,” evoking the glare of the Aegean sun and more ambiguously “iron,” perhaps meaning “burnished,” but possibly our sense of a “leaden” sky. Significantly, two of the few unambiguous color terms in The Iliad, and which evoke the sky in accordance with modern sensibilities, are phenomena of light: “Dawn robed in saffron” and dawn shining forth in “rosy fingers of light.”

So too, on close inspection, Homeric terms that appear to describe the color of the sea have more to do with light. The sea is often glaukós or mélas. In Homer, glaukós (whence glaucoma) is color neutral, meaning “shining” or “gleaming,” although in later Greek it comes to mean “gray.” Mélas (whence melancholy) is “dark in hue, dark,” sometimes, perhaps crudely, translated as “black.” It is used of a range of things associated with water—ships, the sea, the rippled surface of the sea, “the dark hue of water as seen by transmitted light with little or no reflection from the surface.” It is also, as we have seen, commonly used of wine.

So what color is the sea? Silver-pewter at dawn; gray, gray-blue, green-blue, or blue depending on the particular day; yellow or red at sunset; silver-black at dusk; black at night. In other words, no color at all, but rather a phenomenon of reflected light. The phrase “winelike,” then, had little to do with color but must have evoked some attribute of dark wine that would resonate with an audience familiar with the sea—with the póntos, the high sea, that perilous path to distant shores—such as the glint of surface light on impenetrable darkness, like wine in a terracotta vessel. Thus, when Achilles, “weeping, quickly slipping away from his companions, sat/on the shore of the gray salt sea,” stretches forth his hands toward the oínopa pónton, he looks not on the enigmatic “wine-dark sea,” but, more explicitly, and possibly with more weight of melancholy, on a “sea as dark as wine.”

Ancient Greek Color Vision
by Ananda Triulzi

In his writings Homer surprises us by his use of color. His color-descriptive palette was limited to metallic colors, black, white, yellowish green, and purplish red, and those colors he often used oddly, leaving us with some questions as to his actual ability to see colors properly (1). He calls the sky “bronze” and describes the sea and sheep as the color of wine; he applies the adjective chloros (meaning green in our understanding) to honey and to a nightingale (2). Chloros is not the only color that Homer uses in this unusual way. He also uses kyanos oddly: “Hector was dragged, his kyanos hair was falling about him” (3). Here it would seem, to our understanding, that Hector’s hair was blue, since we associate the term kyanos with the semi-precious stone lapis lazuli; in our thinking kyanos means cyan (4). But we cannot assume that Hector’s hair was blue. Rather, in light of the way that Homer consistently uses color adjectives, we must think about his meaning: did he indeed see honey as green? Did he not see the ocean as blue? How does his perception of color reflect on himself, his people, and his world?

Homer’s odd use of color description was a cultural phenomenon and not simply color blindness on his part: Pindar describes the dew as chloros, and in Euripides chloros describes blood and tears (5). Empedocles, one of the earliest Ancient Greek color theorists, described color as falling into four areas: light or white, black or dark, red, and yellow; Xenophanes described the rainbow as having three bands of color: purple, green/yellow, and red (6). These colors are fairly consistent with the four colors used by Homer in his descriptions. This leads us to the conclusion that all Ancient Greeks saw color only within the bounds of Empedocles’ categories; in some way they lacked the ability to perceive the whole color spectrum. […]

This inability to perceive something because of linguistic restriction is called linguistic relativity (7). Because the Ancient Greeks were not really conscious of seeing, and did not have the words to describe what they unconsciously saw, they simply did not see the full spectrum of color; they were limited by linguistic relativity.

The color spectrum aside, it remains to explain the loose and unconventional application of Homer’s and others’ limited color descriptions; for an answer we look to the work of Eleanor Irwin. In her work, Irwin suggests that besides perceiving less chromatic distinction, the Ancient Greeks perceived less division between color, texture, and shadow; chroma may have been difficult for them to isolate (8). For the Ancient Greeks, the term chloros has been suggested to mean moistness, fluidity, freshness, and living (9). It also seems likely that Ancient Greek perception of color was influenced by the qualities they associated with colors; for instance, the different temperaments associated with colors probably affected the way they applied color descriptions to things. They didn’t simply see color as a surface; they saw it as a spirited thing, and the word to describe it was often fittingly applied as an adjective meaning something related to the color itself but different from the simplicity of a refined color.

The Wine-Dark Sea: Color and Perception in the Ancient World
by Erin Hoffman

Homer’s descriptions of color in The Iliad and The Odyssey, taken literally, paint an almost psychedelic landscape: in addition to the sea, sheep were also the color of wine; honey was green, as were the fear-filled faces of men; and the sky is often described as bronze. […]

The conspicuous absence of blue is not limited to the Greeks. The color “blue” appears not once in the New Testament, and its appearance in the Torah is questioned (there are two words argued to be types of blue, sappir and tekeleth, but the latter appears to be arguably purple, and neither color is used, for instance, to describe the sky). Ancient Japanese used the same word for blue and green (青 Ao), and even modern Japanese describes, for instance, thriving trees as being “very blue,” retaining this artifact (青々とした: meaning “lush” or “abundant”). […]

Blue certainly existed in the world, even if it was rare, and the Greeks must have stumbled across it occasionally even if they didn’t name it. But the thing is, if we don’t have a word for something, it turns out that to our perception—which becomes our construction of the universe—it might as well not exist. Specifically, neuroscience suggests that it might not just be “good or bad” for which “thinking makes it so,” but quite a lot of what we perceive.

The malleability of our color perception can be demonstrated with a simple diagram, shown here as figure six, “Afterimages”. The more our photoreceptors are exposed to the same color, the more fatigued they become, eventually giving out entirely and creating a reversed “afterimage” (yellow becomes blue, red becomes green). This is really just a parlor trick of sorts, and more purely physical, but it shows how easily shifted our vision is; other famous demonstrations like this selective attention test (its name gives away the trick) emphasize the power our cognitive functions have to suppress what we see. Our brains are pattern-recognizing engines, built around identifying things that are useful to us and discarding the rest of what we perceive as meaningless noise. (And a good thing that they do; deficiencies in this filtering, called sensory gating, are some of what cause neurological dysfunctions such as schizophrenia and autism.)
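The complementary-afterimage effect described above (yellow becomes blue, red becomes green) can be illustrated with a toy numerical sketch. Complementing RGB channels is a simplification standing in for opponent-process fatigue, not the actual physiology:

```python
def afterimage_rgb(color):
    """Approximate the afterimage of an RGB color by complementing each
    channel -- a crude stand-in for opponent-process photoreceptor fatigue."""
    r, g, b = color
    return (255 - r, 255 - g, 255 - b)

# Staring at yellow leaves a bluish afterimage:
print(afterimage_rgb((255, 255, 0)))  # (0, 0, 255)
# Staring at red leaves a green-cyan afterimage:
print(afterimage_rgb((255, 0, 0)))    # (0, 255, 255)
```

The sketch captures only the color reversal the passage describes; real afterimages also depend on adaptation time and luminance.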

This suggests the possibility that not only did Homer lack a word for what we know as “blue”—he might never have perceived the color itself. To him, the sky really was bronze, and the sea really was the same color as wine. And because he lacked the concept “blue”—therefore its perception—to him it was invisible, nonexistent. This notion of concepts and language limiting cognitive perception is called linguistic relativism, and is typically used to describe the ways in which various cultures can have difficulty recalling or retaining information about objects or concepts for which they lack identifying language. Very simply: if we don’t have a word for it, we tend to forget it, or sometimes not perceive it at all. […]

So, if we’re all synesthetes, and our minds are extraordinarily plastic, capable of reorienting our entire perception around the addition of a single new concept (“there is a color between green and violet,” “schizophrenia is much more common than previously believed”), the implications of Homer’s wine-dark sea are rich indeed.

We are all creatures of our own time, our realities framed not by the limits of our knowledge but by what we choose to perceive. Do we yet perceive all the colors there are? What concepts are hidden from us by the convention of our language? When a noblewoman of Syracuse looked out across the Mare Siculum, did she see waves of Bacchanalian indigo beneath a sunset of hammered bronze? If a seagull flew east toward Thapsus, did she think of Venus and the fall of Troy?

The myriad details that define our everyday existence may define also the boundaries of our imagination, and with it our dreams, our ethics. We are lenses moving through time, beings of color and shadow.

Evolution of the Color Blue
by Dov Michaeli MD, PhD, The Doctor Weighs In

Why were black, white, and red the first colors to be perceived by our forefathers? The evolutionary explanation is quite straightforward: ancient humans had to distinguish between night and day. And red is important for recognizing blood and danger. Even today, in us moderns, the color red causes an increase in skin galvanic response, a sign of tension and alarm. Green and yellow entered the vocabulary as the need to distinguish ripe fruit from unripe, grasses that are green from grasses that are wilting, etc. But what is the need for naming the color blue? Blue fruits are not very common, and the color of the sky is not really vital for survival.

The crayola-fication of the world: How we gave colors names, and it messed with our brains (part I)
by Aatish Bhatia, Empirical Zeal

Some languages have just three basic colors; others have four, five, six, and so on. There’s even a debate as to whether the Pirahã tribe of the Amazon have any specialized color words at all! (If you ask a Pirahã tribe member to label something red, they’ll say that it’s blood-like.)

But there’s still a pattern hidden in this diversity. […] You start with a black-and-white world of darks and lights. There are warm colors, and cool colors, but no finer categories. Next, the reds and yellows separate away from white. You can now have a color for fire, or the fiery color of the sunset. There are tribes that have stopped here. Further down, blues and greens break away from black. Forests, skies, and oceans now come of their own in your visual vocabulary. Eventually, these colors separate further. First, red splits from yellow. And finally, blue from green.
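The staged emergence of color terms sketched in the paragraph above can be encoded as a simple data structure. The stage groupings below paraphrase that paragraph (a pattern associated with Berlin and Kay); the term names, including “grue” for an undivided green-blue, are illustrative labels, not data from any particular language:

```python
# Hypothetical encoding of the color-term emergence stages described above.
# Each stage lists the basic distinctions a language at that stage makes.
COLOR_STAGES = [
    {"dark", "light"},                                   # darks and lights only
    {"dark", "light", "red"},                            # red/yellow split from light
    {"dark", "light", "red", "grue"},                    # green-blue splits from dark
    {"dark", "light", "red", "yellow", "grue"},          # red splits from yellow
    {"dark", "light", "red", "yellow", "green", "blue"}, # finally, blue from green
]

def terms_at_stage(stage):
    """Return the set of basic color terms available at a given stage."""
    return COLOR_STAGES[stage]

# A stage-2 language still lumps green and blue together:
print("blue" in terms_at_stage(2))  # False
print("blue" in terms_at_stage(4))  # True
```

Note that blue appearing only at the final stage is exactly the point the surrounding excerpts keep returning to.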

The crayola-fication of the world: How we gave colors names, and it messed with our brains (part II)
by Aatish Bhatia, Empirical Zeal

The researchers found that there is a real, measurable difference in how we perform on these two tasks. In general, it takes less time to identify the odd blue square than the odd green one. This makes sense to anyone who’s ever tried looking for a tennis ball in the grass. It’s not that hard, but I’d rather the ball be blue. In one case you are jumping categories (blue versus green), and in the other, staying within a category (green versus green).

However, and this is where things start to get a bit strange, this result only holds if the differently colored square was in the right half of the circle. If it was in the left half (as in the example images above), then there’s no difference in reaction times – it takes just as long to spot the odd blue as the odd green. It seems that color categories only matter in the right half of your visual field! […]

The crucial point is that everything that we see in the right half of our vision is processed in the left hemisphere of our brain, and everything we see in the left half is processed by the right hemisphere. And for most of us, the left brain is stronger at processing language. So perhaps the language savvy half of our brain is helping us out. […]

But how do we know that language is the key here? Back to the previous study. The researchers repeated the color circle experiment, but this time threw in a verbal distraction. The subjects were asked to memorize a word before each color test. The idea was to keep their language circuits distracted. And at the same time, other subjects were shown an image to memorize, not a word. In this case, it’s a visual distraction, and the language part of the brain needn’t be disturbed.

They found that when you’re verbally distracted, it suddenly becomes harder to separate blue from green (you’re slower at straddling color categories). In fact, the results showed that people found this more difficult than separating two shades of green. However, if the distraction is visual, not verbal, things are different. It becomes easy to spot the blue among green, so you’re faster at straddling categories.
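The interference logic of this experiment can be sketched as a toy model. The millisecond values below are invented purely for illustration; only the ordering of conditions (cross-category faster than within-category, unless a verbal task ties up the language system) reflects the described result:

```python
# Toy model of the visual-search results described above.
# Reaction times (ms) are invented for illustration only.
BASE_RT = {"cross_category": 420, "within_category": 480}

def reaction_time(trial, distraction=None):
    """Cross-category search is faster -- unless a verbal memory task occupies
    the language system, which wipes out (even reverses) the advantage."""
    rt = BASE_RT[trial]
    if distraction == "verbal" and trial == "cross_category":
        rt += 100  # naming advantage lost; now slower than within-category
    return rt

# Undistracted: jumping the blue/green boundary is faster.
assert reaction_time("cross_category") < reaction_time("within_category")
# Verbally distracted: the category advantage disappears, even reverses.
assert reaction_time("cross_category", "verbal") > reaction_time("within_category", "verbal")
```

The point of the model is structural: the category advantage lives in the verbal system, so only a verbal (not visual) secondary task removes it.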

All of this is only true for your left brain. Meanwhile, your right brain is rather oblivious to these categories (until, of course, the left brain bothers to inform it). The conclusion is that language is somehow enhancing your left brain’s ability to discern different colors with different names. Cultural forces alter our perception in ever so subtle a way, by gently tugging our visual leanings in different directions.

Color categories: Confirmation of the relativity hypothesis.
by Debi Roberson, Jules Davidoff, Ian R. L. Davies, & Laura R. Shapiro

In a category-learning paradigm, there was no evidence that Himba participants perceived the blue–green region of color space in a categorical manner. Like Berinmo speakers, they did not find this division easier to learn than an arbitrary one in the center of the green category. There was also a significant advantage for learning the dumbu–burou division over the yellow–green division. It thus appears that categorical perception (CP) for color category boundaries is tightly linked to the linguistic categories of the participant.

Knowing color terms enhances recognition: Further evidence from English and Himba
by Julie Goldstein, Jules B. Davidoff, & Debi Roberson, JECP

Two experiments attempted to reconcile discrepant recent findings relating to children’s color naming and categorization. In a replication of Franklin and colleagues (Journal of Experimental Child Psychology, 90 (2005) 114–141), Experiment 1 tested English toddlers’ naming and memory for blue–green and blue–purple colors. It also found advantages for between-category presentations that could be interpreted as support for universal color categories. However, a different definition of knowing color terms led to quite different conclusions in line with the Whorfian view of Roberson and colleagues (Journal of Experimental Psychology: General, 133 (2004) 554–571). Categorical perception in recognition memory was now found only for children with a fuller understanding of the relevant terms. It was concluded that color naming can both underestimate and overestimate toddlers’ knowledge of color terms. Experiment 2 replicated the between-category recognition superiority found in Himba children by Franklin and colleagues for the blue–purple range. But Himba children, whose language does not have separate terms for green and blue, did not show an across-category advantage for that set; rather, they behaved like English children who did not know their color terms.

The Effects of Color Names on Color Concepts, or Like Lazarus Raised from the Tomb
by Chris, ScienceBlogs

It’s interesting that the Berinmo and Himba tribes have the same number of color terms, as well, because that rules out one possible alternative explanation of their data. It could be that as languages develop, they develop a more sophisticated color vocabulary, which eventually approximates the color categories that are actually innately present in our visual systems. We would expect, then, that two languages at similar levels of development (in other words, with the same number of color categories) would exhibit similar effects; but the speakers of the two languages remembered and perceived the colors differently. Thus it appears that languages do not develop towards any single set of universal color categories. In fact, Roberson et al. (2004) reported a longitudinal study that implies that exactly the opposite may be the case. They found that children in the Himba tribe, and English-speaking children in the U.S., initially categorized color chips in a similar way, but as they grew older and more familiar with the color terms of their languages, their categorizations diverged and became more consistent with their color names. This is particularly strong evidence that color names affect color concepts.

Forget the dress; what color did early Israelites see when they looked up to the sky?
by David Streever, Episcopal Cafe

The children of the Himba were able to differentiate between many more shades of green than their English counterparts, but did not recognize the color blue as being distinct from green. The research found that the 11 basic English colors have no basis in the visual system, lending further credence to the linguistic theories of Deutscher, Geiger, Gladstone, and other academics.

Colour Categories as Cultural Constructs
by Jules Davidoff, Artbrain

This is a group of people in Namibia who were asked to do some color matching and similarity judgments for us. It’s a remote part of the world, but not quite so remote that somebody hasn’t got the t-shirt, but it’s pretty remote. That’s the sort of environment they live in, and these are the youngsters that I’m going to show you some particular data on. They are completely monolingual in their own language, which has a tremendous richness in certain types of terms, in cattle terms (I can’t talk about that now), but has a dramatic lack in color terms. So for all of the particular colors of the world (and this is an illustration which can go from white to black at the top, red to yellow, green, blue, purple, back to red again, if this were shown in terms of the whole colors of the spectrum), they only have five terms. So they see the world, perhaps, differently than us; perhaps slightly plainer.

So we looked at these young children, and we showed them a navy blue color at the top, and we asked them to point to the same color again from another group of colors. Those colors included the correct color, but of course sometimes the children made mistakes. What I want to show is that the English children and the Himba children (these people are the Himba of Northwest Namibia) start out from the same place: they have this undefined color space in which, at the beginning of the testing, T1, they make errors in choosing the navy blue. Sometimes they’ll choose the blue, sometimes they’ll choose the black, sometimes they’ll choose the purple. Now, the purple one: actually, if you did a spectral analysis, the blue and the purple, the one on the right, are the closest. And as you can see, as the children got older, the most common error, both for English children and the Himba children, is the increase (that’s going up on the graph) of the purple mistakes.

But their language, the Himba language, has the same word for blue as for black. We, of course, have the same word for the navy blue as the blue on the left. Only, as the children get older, three or four, the English children only ever confuse the navy blue with the blue on the left, whereas the Himba children confuse the navy blue with the black.

So, what’s happening? Someone asked yesterday whether the Sapir-Whorf Hypothesis had any currency. Well, if it has a little bit of currency, it certainly has it here: because the names of colors mean different things in the different cultures, because blue and black are the same in the Himba language, the actual similarity does seem to have been altered in the pictorial register. So, consider the blues that we call blue: the claim is that there is no natural category called blue; they were just sensations we want to group together; those natural categories don’t exist. But because we have constructed these categories, blues look more similar to us in the pictorial register, whereas to these people in Northwest Namibia, the blues and the blacks look more similar.

So, in brief, I’d like to add further evidence, or a further claim, that we are constructing the world of colors, and that in some way our memory structures do alter, to a modest extent at least, what we’re seeing.

Hues and views
A cross-cultural study reveals how language shapes color perception.
by Rachel Adelson, APA

Not only has no evidence emerged to link the 11 basic English colors to the visual system, but the English-Himba data support the theory that color terms are learned relative to language and culture.

First, for children who didn’t know color terms at the start of the study, the pattern of memory errors in both languages was very similar. Crucially, their mistakes were based on perceptual distances between colors rather than a given set of predetermined categories, arguing against an innate origin for the 11 basic color terms of English. The authors write that an 11-color organization may have become common because it efficiently serves cultures with a greater need to communicate more precisely. Still, they write, “even if [it] were found to be optimal and eventually adopted by all cultures, it need not be innate.”

Second, the children in both cultures didn’t acquire color terms in any particular, predictable order – such as the order predicted by the universalist idea that the primary colors of red, blue, green and yellow are learned first.

Third, the authors say that as both Himba and English children started learning their cultures’ color terms, the link between color memory and color language increased. Their rapid perceptual divergence once they acquired color terms strongly suggests that cognitive color categories are learned rather than innate, according to the authors.

The study also spotlights the power of psychological research conducted outside the lab, notes Barbara Malt, PhD, a cognitive psychologist who studies language and thought and also chairs the psychology department at Lehigh University.

“To do this kind of cross-cultural work at all requires a rather heroic effort, [which] psychologists have traditionally left to linguists and anthropologists,” says Malt. “I hope that [this study] will inspire more cognitive and developmental psychologists to go into the field and pursue these kinds of comparisons, which are the only way to really find out which aspects of perception and cognition are universal and which are culture or language specific.”

Humans didn’t even see the colour blue until modern times, research suggests
by Fiona MacDonald, Science Alert

Another study by MIT scientists in 2007 showed that native Russian speakers, who don’t have one single word for blue, but instead have a word for light blue (goluboy) and dark blue (siniy), can discriminate between light and dark shades of blue much faster than English speakers.

This all suggests that, until they had a word for it, it’s likely that our ancestors didn’t see blue at all. Or, more accurately, they probably saw it as we do now, but they never really noticed it.

Blue was the Last Color Perceived by Humans
by Nancy Loyan Schuemann, Mysterious Universe

MRI experiments confirm that people who process color through their verbal left brains, where the names of colors are accessed, recognize them more quickly. Language molds us into the image of the culture in which we are born.

Categorical perception of color is lateralized to the right hemisphere in infants, but to the left hemisphere in adults
by A. Franklin, G. V. Drivonikou, L. Bevis, I. R. L. Davies, P. Kay, & T. Regier, PNAS

Both adults and infants are faster at discriminating between two colors from different categories than two colors from the same category, even when between- and within-category chromatic separation sizes are equated. For adults, this categorical perception (CP) is lateralized; the category effect is stronger for the right visual field (RVF)–left hemisphere (LH) than the left visual field (LVF)–right hemisphere (RH). Converging evidence suggests that the LH bias in color CP in adults is caused by the influence of lexical color codes in the LH. The current study investigates whether prelinguistic color CP is also lateralized to the LH by testing 4- to 6-month-old infants. A colored target was shown on a differently colored background, and time to initiate an eye movement to the target was measured. Target background pairs were either from the same or different categories, but with equal target-background chromatic separations. Infants were faster at initiating an eye movement to targets on different-category than same-category backgrounds, but only for targets in the LVF–RH. In contrast, adults showed a greater category effect when targets were presented to the RVF–LH. These results suggest that whereas color CP is stronger in the LH than RH in adults, prelinguistic CP in infants is lateralized to the RH. The findings suggest that language-driven CP in adults may not build on prelinguistic CP, but that language instead imposes its categories on a LH that is not categorically prepartitioned.

Categorical perception of colour in the left and right visual field is verbally mediated: Evidence from Korean
by Debi Roberson, Hyensou Pak, & J. Richard Hanley

In this study we demonstrate that Korean (but not English) speakers show categorical perception (CP) on a visual search task for a boundary between two Korean colour categories that is not marked in English. These effects were observed regardless of whether target items were presented to the left or right visual field. Because this boundary is unique to Korean, these results are not consistent with a suggestion made by Drivonikou [Drivonikou, G. V., Kay, P., Regier, T., Ivry, R. B., Gilbert, A. L., Franklin, A. et al. (2007). Further evidence that Whorfian effects are stronger in the right visual field than in the left. Proceedings of the National Academy of Sciences, 104, 1097–1102] that CP effects in the left visual field provide evidence for the existence of a set of universal colour categories. Dividing Korean participants into fast and slow responders demonstrated that fast responders show CP only in the right visual field, while slow responders show CP in both visual fields. We argue that this finding is consistent with the view that CP in both visual fields is verbally mediated by the left hemisphere language system.

Linguistic Fossils of the Mind’s Eye
by Keith, UMMAGUMMA blog

The other, The Unfolding of Language (2005), deals with the actual evolution of language. […]

Yet, while erosion occurs there is also a creative force in the human development of language. That creativity is revealed in our unique capacity for metaphor. “…metaphor is the indispensable element in the thought-process of every one of us.” (page 117) “It transpired that metaphor is an essential tool of thought, an indispensable conceptual mechanism which allows us to think of abstract concepts in terms of simpler concrete things. It is, in fact, the only way we have of dealing with abstraction.” (page 142) […]

The use of what can be called ‘nouns’ and not just ‘things’ is a fairly recent occurrence in language, reflecting a shift in human experience. This is a ‘fossil’ of linguistics. “The flow from concrete to abstract has created many words for concepts that are no longer physical objects, but nonetheless behave like thing-words in the sentence. The resulting abstract concepts are no longer thing-words, but they inherit their distribution from the thing-words that gave rise to them. A new category of words has thus emerged…which we can now call ‘noun’.” (page 246)

The way language is used, its accepted uses by people through understood rules of grammar, is the residue of collective human experience. “The grammar of a language thus comes to code most compactly and efficiently those constructions that are used most frequently…grammar codes best what it does most often.” (page 261) This is centrally why I hold the grammar of language to be almost a sacred portal into human experience.

In the 2010 work, Deutscher’s emphasis shifts to why different languages reveal that humans actually experience life differently. We do not all feel and act the same way about the things of life. My opinion is that it is a mistake to believe “humanity” thinks, feels, and experiences to a high degree of similarity. The fact is that language shows that, as humanity diversified across the earth, it developed a multitude of diverse ways of experiencing.

First of all, “…a growing body of reliable scientific research provides solid evidence that our mother tongue can affect how we think and perceive the world.” (page 7) […]

The author does not go as far as me, nor is he as blunt; I am interjecting much of my personal beliefs in here. Still, “…fundamental aspects of our thought are influenced by cultural conventions of our society, to a much greater extent than is fashionable to admit today….what we find ‘natural’ depends largely on the conventions we have been brought up on.” (page 233) There are clear echoes of Nietzsche in here.

The conclusion is that “habits of speech can create habits of mind.” So, language affects culture fundamentally. But, this is a reciprocal arrangement. Language changes due to cultural experience yet cultural experience is affected by language.

Guy Deutscher’s Through the Language Glass
Stuart Hindmarsh, Philosophical Overview

In Through the Language Glass, Guy Deutscher addresses the question as to whether the natural language we speak will have an influence on our thought and our perception. He focuses on perceptions, and specifically the perceptions of colours and perceptions of spatial relations. He is very dismissive of the Sapir-Whorf hypothesis and varieties of linguistic relativity which would say that if the natural language we speak is of a certain sort then we cannot have certain types of concepts or experiences. For example, a proponent of this type of linguistic relativity might say that if your language does not have a word for the colour blue then you cannot perceive something as blue. Nonetheless, Deutscher argues that the natural language we speak will have some influence on how we think and see the world, giving several examples, many of which are fascinating. However, I believe that several of his arguments that dismiss views like the Sapir-Whorf hypothesis are based on serious misunderstandings.

The view that language is the medium in which conceptual thought takes place has a long history in philosophy, and this is the tradition out of which the Sapir-Whorf hypothesis was developed. […]

It is important to note that in this tradition the relation between language and conceptual thought is not seen as one in which the ability to speak a language is one capacity and the ability to think conceptually a completely separate faculty, and in which the first merely has a causal influence on the other. It is rather the view that the ability to speak a language makes it possible to think conceptually and that the ability to speak a language makes it possible to have perceptions of certain kinds, such as those in which what is perceived is subsumed under a concept. For example, it might be said that without language it is possible to see a rabbit but not possible to see it as a rabbit (as opposed to a cat, a dog, a squirrel, or any other type of thing). Thus conceptual thinking and perceptions of these types are seen not as separate from language and incidentally influenced by it but dependent on language and taking their general form from language. This does not mean that speech or writing must be taking place every time a person thinks in concepts or has these types of perception, though. To think that it must is a misunderstanding essentially the same as a common misinterpretation of Kant, which I will discuss in more detail in a later post.

While I take this to be the idea behind the Sapir-Whorf hypothesis, Deutscher evidently interprets that hypothesis as a very different kind of view. According to this view, the ability to speak a language is separate from the ability to think conceptually and from the ability to have the kinds of perceptions described above and it merely influences such thought and perception from without. Furthermore, it is not a relation in which language makes these types of thought and perception possible but one in which thought and perception are actually constrained by language. This interpretation runs through all of Deutscher’s criticisms of linguistic relativity. […]

Certainly many questionable assertions have been made based on the premise that language conditions the way that we think. Whorf apparently made spurious claims about Hopi conceptions of time. Today a great deal of dubious material is being written about the supposed influence of the internet and hypertext media on the way that we think. This is mainly inspired by Marshall McLuhan but generally lacking his originality and creativity. Nevertheless, there have been complex and sophisticated versions of the idea that the natural language that we speak conditions our thought and our perceptions, and these deserve serious attention. There are certainly more complex and sophisticated versions of these ideas than the crude caricature that Deutscher sets up and knocks down. Consequently, I don’t believe that he has given convincing reasons for seeing the relations between language and thought as limited to the types of relations in the examples he gives, interesting though they may be. For instance, he notes that the aboriginal tribes in question would have to always keep in mind where the cardinal directions were and consequently in this sense the language would require them to think a certain way.

The History and Science Behind the Color Blue
by staff, Dunn-Edwards Paints

If you think about it, there is not a lot of blue in nature. Most people do not have blue eyes, blue flowers do not occur naturally without human intervention, and blue animals are rare — bluebirds and bluejays only live in isolated areas. The sky is blue — or is it? One theory suggests that before humans had words for the color blue, they actually saw the sky as another color. This theory is supported by the fact that if you never describe the color of the sky to a child, and then ask them what color it is, they often struggle to describe its color. Some describe it as colorless or white. It seems that only after being told that the sky is blue, and after seeing other blue objects over a period of time, does one start seeing the sky as blue. […]

Scientists generally agree that humans began to see blue as a color when they started making blue pigments. Cave paintings from 20,000 years ago lack any blue color, since as previously mentioned, blue is rarely present in nature. About 6,000 years ago, humans began to develop blue colorants. Lapis, a semiprecious stone mined in Afghanistan, became highly prized among the Egyptians. They adored the bright blue color of this mineral. They used chemistry to combine the rare lapis with other ingredients, such as calcium and limestone, and generate other saturated blue pigments. It was at this time that an Egyptian word for “blue” emerged.

Slowly, the Egyptians spread their blue dyes throughout the world, passing them on to the Persians, Mesoamericans and Romans. The dyes were expensive – only royalty could afford them. Thus, blue remained rare for many centuries, though it slowly became popular enough to earn its own name in various languages.

Cognitive Variations:
Reflections on the Unity and Diversity of the Human Mind
by Geoffrey Lloyd
Kindle Locations 178-208

Standard colour charts and Munsell chips were, of course, used in the research in order to ensure comparability and to discount local differences in the colours encountered in the natural environment. But their use carried major risks, chiefly that of circularity. The protocols of the enquiry presupposed the differences that were supposed to be under investigation, and to that extent and in that regard the investigators just got out what they had put in. That is to say, the researchers presented their interviewees with materials that already incorporated the differentiations the researchers themselves were interested in. Asked to identify, name, or group different items, the respondents’ replies were inevitably matched against those differentiations. Of course the terms in which the replies were made (in the natural languages the respondents used) must have borne some relation to the differences perceived, otherwise they would not have been used in replying to the questions (assuming, as we surely may, that the questions were taken seriously and that the respondents were doing their honest best). But it was assumed that what the respondents were using in their replies were essentially colour terminologies, distinguishing hues, and that assumption was unfounded in general, and in certain cases can be shown to be incorrect.

It was unfounded in general because there are plenty of natural languages in which the basic discrimination relates not to hues, but to luminosities. Ancient Greek is one possible example. Greek colour classifications are rich and varied and were, as we shall see, a matter of dispute among the Greeks themselves. They were certainly capable of drawing distinctions between hues. I have already given one example. When Aristotle analyses the rainbow, where it is clearly hue that separates one end of the spectrum from the other, he identifies three colours using terms that correspond, roughly, to ‘red’, ‘green’, and ‘blue’, with a fourth, corresponding to ‘yellow’, which he treats (as noted) as a mere ‘appearance’ between ‘red’ and ‘green’. But the primary contrariety that figures in ancient Greek (including in Aristotle) is between leukon and melan, which usually relate not to hues so much as to luminosity. Leukos, for instance, is used of the sun and of water, where it is clearly not the case that they share, or were thought to share, the same hue. So the more correct translation of that pair is often ‘bright’ or ‘light’ and ‘dark’, rather than ‘white’ and ‘black’. Berlin and Kay (1969: 70) recognized the range of application of leukon, yet still glossed the term as ‘white’. Even more strangely they interpreted glaukon as ‘black’. That term is particularly context-dependent, but when Aristotle (On the Generation of Animals 779a26, b34 ff.) tells us that the eyes of babies are glaukon, that corresponds to ‘blue’, where melan, the usual term for ‘black’ or rather ‘dark’, is represented as its antonym, rather than its synonym, as Berlin and Kay would need it to be.

So one possible source of error in the Berlin and Kay methodology was the privileging of hue over luminosity. But that still does not get to the bottom of the problem, which is that in certain cases the respondents were answering in terms whose primary connotations were not colours at all. The Hanunoo had been studied before Berlin and Kay in a pioneering article by Conklin (1955), and Lyons (1995; 1999) has recently reopened the discussion of this material. First Conklin observed that the Hanunoo have no word for colour as such. But (as noted) that does not mean, of course, that they are incapable of discriminating between different hues or luminosities. To do so they use four terms, mabiru, malagti, marara, and malatuy, which may be thought to correspond, roughly, to ‘black’, ‘white’, ‘red’, and ‘green’. Hanunoo was then classified as a stage 3 language, in Berlin and Kay’s taxonomy, one that discriminates between four basic colour terms, indeed those very four. (Cf. also Lucy 1992: ch. 5, who similarly criticizes taking purportedly colour terms out of context.)

Yet, according to Conklin, chromatic variation was not the primary basis for differentiation of those four terms at all. Rather the two principal dimensions of variation are (1) lightness versus darkness, and (2) wetness versus dryness, or freshness (succulence) versus desiccation. A third differentiating factor is indelibility versus fadedness, referring to permanence or impermanence, rather than to hue as such.

Berlin and Kay only got to their cross-cultural universals by ignoring (they may even sometimes have been unaware of) the primary connotations of the vocabulary in which the respondents expressed their answers to the questions put to them. That is not to say, of course, that the members of the societies concerned are incapable of distinguishing colours whether as hues or as luminosities. That would be to make the mistake that my first philosophical observation was designed to forestall. You do not need colour terms to register colour differences. Indeed Berlin and Kay never encountered (certainly they never reported) a society where the respondents simply had nothing to say when questioned about how their terms related to what they saw on the Munsell chips. But the methodology was flawed in so far as it was assumed that the replies given always gave access to a classification of colour, when sometimes colours were not the primary connotations of the vocabulary used at all.

The Language Myth:
Why Language Is Not an Instinct
by Vyvyan Evans
pp. 204-206

The neo-Whorfians have made four main criticisms of this research tradition as it relates to linguistic relativity. 33 First off, the theoretical construct of the ‘basic colour term’ is based on English. It is then assumed that basic colour terms – based on English – correspond to an innate biological specification. But the assumption that basic colour terms – based on English – correspond to universal semantic constraints, due to our common biology, biases the findings in advance. The ‘finding’ that other languages also have basic colour terms is a consequence of a self-fulfilling prophecy: as English has been ‘found’ to exhibit basic colour terms, all other languages will too. But this is no way to investigate putative cross-linguistic universals; it assumes, much like Chomsky did, that colour in all of the world’s languages will be, underlyingly, English-like. And as we shall see, other languages often do things in startlingly different ways.

Second, the linguistic analysis Berlin and Kay conducted was not very rigorous – to say the least. For most of the languages they ‘examined’, Berlin and Kay relied on second-hand sources, as they had no first-hand knowledge of the languages they were hoping to find basic colour terms in. To give you a sense of the problem, it is not even clear whether many of the putative basic colour terms Berlin and Kay ‘uncovered’, were from the same lexical class; for instance, in English, the basic colour terms – white, black, red and so on – are all adjectives. Yet, for many of the world’s languages, colour expressions often come from different lexical classes. As we shall see shortly, one language, Yélî Dnye, draws its colour terms from several lexical classes, none of which is adjectives. And the Yélî language is far from exceptional in this regard. The difficulty here is that, without a more detailed linguistic analysis, there is relatively little basis for the assumption that what is being compared involves comparable words. And, that being the case, can we still claim that we are dealing with basic colour terms?

Third, many other languages do not conceptualise colour as an abstract domain independent of the objects that colour happens to be a property of. For instance, some languages do not even have a word corresponding to the English word colour – as we shall see later. This shows that colour is often not conceptualised as a stand-alone property in the way that it is in English. In many languages, colour is treated in combination with other surface properties. For English speakers this might sound a little odd. But think about the English ‘colour’ term roan: this encodes a surface pattern, rather than strictly colour – in this case, brown interspersed with white, as when we describe a horse as ‘roan’. Some languages combine colour with other properties, such as desiccation, as in the Old Germanic word saur, which meant yellow and dry. The problem, then, is that in languages with relatively simple colour technology – arguably the majority of the world’s languages – lexical systems that combine colour with other aspects of an object’s appearance are artificially excluded from being basic colour terms – as English is being used as the reference point. And this, then, distorts the true picture of how colour is represented in language, as the analysis only focuses on those linguistic features that correspond to the ‘norm’ derived from English. 34

And finally, the ‘basic colour term’ project is flawed, in so far as it constitutes a riposte to linguistic relativity; as John Lucy has tellingly observed, linguistic relativity is the thesis that language influences non-linguistic aspects of thought: one cannot demonstrate that it is wrong by investigating the effect of our innate colour sense on language. 35 In fact, one has to demonstrate the reverse: that language doesn’t influence psychophysics (in the domain of colour). Hence, the theory of basic colour terms cannot be said to refute the principle of linguistic relativity since, ironically, it wasn’t in fact investigating it.

The neo-Whorfian critique, led by John Lucy and others, argued that, at its core, the approach taken by Berlin and Kay adopted an unwarranted ethnocentric approach that biased findings in advance. And, in so doing, it failed to rule out the possibility that what other languages and cultures were doing was developing divergent semantic systems – rather than there being a single universal system – in the domain of colour, albeit an adaptation to a common human set of neurobiological constraints. By taking the English language in general, and in particular the culture of the English-speaking peoples – the British Isles, North America and the Antipodes – as its point of reference, it not only failed to establish what different linguistic systems – especially in non-western cultures – were doing, but led, inevitably, to the conclusion that all languages, even when strikingly diverse in terms of their colour systems, were essentially English-like. 36

The Master and His Emissary: The Divided Brain and the Making of the Western World
by Iain McGilchrist
pp. 221-222

Consciousness is not the same as inwardness, although there can be no inwardness without consciousness. To return to Patricia Churchland’s statement that it is reasonable to identify the blueness of an object with its disposition to scatter electromagnetic waves preferentially at about 0.46μm, 52 to see it like this, as though from the outside, excluding the ‘subjective’ experience of the colour blue – as though to get the inwardness of consciousness out of the picture – requires a very high degree of consciousness and self-consciousness. The polarity between the ‘objective’ and ‘subjective’ points of view is a creation of the left hemisphere’s analytic disposition. In reality there can be neither absolutely, only a choice between a betweenness which acknowledges itself, and one which denies its own nature. By identifying blueness solely with the behaviour of electromagnetic particles one is not avoiding value, not avoiding betweenness, not avoiding one’s shadow being cast across the picture. One is using the inwardness of consciousness in a very specialised way to strive to empty itself as much as possible of value, of the self. The paradoxical result is an extremely partial, fragmented version of the colour blue, which is neither value-free nor independent of the self ‘s disposition towards its object.

p. 63

Another thought-provoking detail about sadness and the right hemisphere involves the perception of colour. Brain regions involved in conscious identification of colour are probably left-sided, perhaps because it involves a process of categorisation and naming; 288 however, it would appear that the perception of colour in mental imagery under normal circumstances activates only the right fusiform area, not the left, 289 and imaging studies, lesion studies and neuropsychological testing all suggest that the right hemisphere is more attuned to colour discrimination and perception. 290 Within this, though, there are hints that the right hemisphere prefers the colour green and the left hemisphere prefers the colour red (as the left hemisphere may prefer horizontal orientation, and the right hemisphere vertical – a point I shall return to in considering the origins of written language in Chapter 8). 291 The colour green has traditionally been associated not just with nature, innocence and jealousy but with – melancholy: ‘She pined in thought, / And with a green and yellow melancholy / She sat like Patience on a monument, / Smiling at grief ‘. 292

Is there some connection between the melancholy tendencies of the right hemisphere and the mediaeval belief that the left side of the body was dominated by black bile? Black bile was, of course, associated with melancholy (literally, Greek melan-, black + chole, bile) and was thought to be produced by the spleen, a left-sided organ. For the same reasons the term spleen itself was, from the fourteenth century to the seventeenth century, applied to melancholy; though, as if intuiting that melancholy, passion, and sense of humour all came from the same place (in fact the right hemisphere, associated with the left side of the body), ‘spleen’ could also refer to each or any of these.

Note 291

‘There are hints from many sources that the left hemisphere may innately prefer red over green, just as it may prefer horizontal over vertical. I have already discussed the language-horizontal connection. The connection between the left hemisphere and red is also indirect, but is supported by a remarkable convergence of observations from comparative neurology, which has shown appropriate asymmetries between both the hemispheres and even between the eyes (cone photoreceptor differences between the eyes of birds are consistent with a greater sensitivity to movement and to red on the part of the right eye (Hart, 2000)) and from introspective studies over the millennia in three great religions that have all converged in the same direction on an association between action, heat, red, horizontal, far etc and the right side of the body (i.e. the left cerebral hemisphere, given the decussation between cerebral hemisphere and output) compared with inaction, cold, green, vertical, near etc and the left side/right hemisphere respectively’ (Pettigrew, 2001, p. 94).

Louder Than Words:
The New Science of How the Mind Makes Meaning
by Benjamin K. Bergen
pp. 57-58

We perceive objects in the real world in large part through their color. Are the embodied simulations we construct while understanding language in black and white, or are they in color? It seems like the answer should be obvious. When you imagine a yellow trucker hat, you feel the subjective experience of yellowness that looks a lot like yellow as you would perceive it in the real world. But it turns out that color is actually a comparatively fickle visual property of both perceived and imagined objects. Children can’t use color to identify objects until about a year of age, much later than they can use shape. And even once they acquire this ability, as adults, people’s memory for color is substantially less accurate than their memory for shape, and they have to pay closer attention to detect changes in the color of objects than in their shape or location.

And yet, with all this going against it, color still seeps into our embodied simulations, at least briefly. One study looking at color used the same sentence-picture matching method we’ve been talking about. People read sentences that implied particular colors for objects. For instance, John looked at the steak on his plate implies a cooked and therefore appropriately brown steak, while John looked at the steak in the butcher’s window implies an uncooked and therefore red steak. In the key trials, participants then saw a picture of the same object, which could either match or mismatch the color implied by the sentence— that is, the steak could be red or brown. Once again, this method produced an interaction. Curiously, though, the result was slower reactions to matching-color images (unlike the faster reactions to matching shape and orientation images in the previous studies). One explanation for why this effect appears in the opposite direction is that perhaps people processing sentences only mentally simulate color briefly and then suppress color to represent shape and orientation. This might lead to slower responses to a matching color when an image is subsequently presented.

pp. 190-192

Another example of how languages make people think differently comes from color perception. Languages have different numbers of color categories, and those categories have different boundaries. For instance, in English, we make a categorical distinction between reds and pinks— we have different names for them, and we judge colors to be one or the other (we don’t think of pinks as a type of red or vice versa— they’re really different categories). And because our language makes this distinction, when we use English and we want to identify something by its color, we have to attend to where in the pink-red range it falls. But other languages don’t make this distinction. For instance, Wobé, a language spoken in Ivory Coast, only has one color category that spans English pinks and reds. So to speak Wobé, you don’t need to pay as close attention to colors in the pink-red range to identify them; all you have to do is recognize that they’re in that range, retrieve the right color term, and you’re set.

We can see this phenomenon in reverse when we look at the blue range. For the purposes of English, light blues and dark blues are all blues; perceptibly different shades, no doubt, but all blues nonetheless. Russian, however, splits blue apart in the way that we separate red and pink. There are two distinct color categories in Russian for our blues: goluboy (light blues) and siniy (dark blues). For the purposes of English, you don’t have to worry about what shade of blue something is to describe it successfully. Of course you can be more specific if you want; you can describe a shade as powder blue or deep blue, or any variety of others. But you don’t have to. In Russian, however, you do. To describe the colors of Cal or UCLA, for example, there would be no way in Russian to say they’re both blue; you’d have to say that Cal is siniy and UCLA is goluboy. And to say that, you’d need to pay attention to the shades of blue that each school wears. The words the language makes available mandate that you pay attention to particular perceptual details in order to speak.

The flip side of thinking for speaking is thinking for understanding. Each time someone describes something as siniy or goluboy in Russian, there’s a little bit more information there than when the same things are described as blue in English. So if you think about it, saying that the sky is blue in English is actually less specific than its equivalent would be in Russian— some languages provide more information about certain things each time you read or hear about them.

The fact that different languages encode different information in everyday words could have a variety of effects on how people understand those languages. For one, when a language systematically encodes something, that might lead people to regularly encode that detail as part of their embodied simulations. Russian comprehenders might construct more detailed representations of the shades of blue things than their English-comprehending counterparts. Pormpuraawans might understand language about locations by mentally representing cardinal directions in space while their English-comprehending counterparts use ego-centered mental representations to do the same thing.

Or an alternative possibility is that people might ultimately understand language about the given domain in the same way, regardless of the language, but, in order to get there, they might have to do more mental gymnastics. To get from the word blue in English to the color of the sky might take longer than to go there directly from goluboy in Russian. Or, to take another example, to construct an egocentric idea of where the bay windows are relative to you might be easier when you hear on your right than to your north.

A third possibility, and one that has caught a lot of people’s interest, is that there may be longer-term and more pervasive effects of linguistic differences on people’s cognition, even outside of language. Perhaps, for instance, Pormpuraawan speakers, by dint of years and years of having to pay attention to the cardinal directions, learn to constantly monitor them, even when they’re not using language; perhaps more so than English speakers. Likewise, perhaps the color categories your language provides affect not merely what you attend to and think about when using color words but also what differences you perceive among colors and how easily you distinguish between colors. This is the idea of linguistic relativism, that the language you speak can affect the way you think. The debate about linguistic relativism is a hot one, but the jury is still out on how and when language affects nonlinguistic thought.

All of this is to say that individual languages are demanding of their speakers. To speak and understand a language, you have to think, and languages, to some extent, dictate what things you ought to think, what things you ought to pay attention to, and how you should break the world up into categories. As a result, the routine patterns of thought that an English speaker engages in will differ from those of a Russian or Wobé or Pormpuraaw speaker. Native speakers of these languages are also native thinkers of these languages.

The First Signs: Unlocking the Mysteries of the World’s Oldest Symbols
by Genevieve von Petzinger
Kindle Locations 479-499

Not long after the people of Sima de los Huesos began placing their dead in their final resting place, another group of Homo heidelbergensis, this time in Zambia, began collecting colored minerals from the landscape around them. They not only preferred the color red, but also collected minerals ranging in hue from yellow and brown to black and even to a purple shade with sparkling flecks in it. Color symbolism— associating specific colors with particular qualities, ideas, or meanings— is widely recognized among modern human groups. The color red, in particular, seems to have almost universal appeal. These pieces of rock show evidence of grinding and scraping, as though they had been turned into a powder.

This powdering of colors took place in a hilltop cave called Twin Rivers in what is present-day Zambia between 260,000 and 300,000 years ago. 10 At that time, the environment in the region was very similar to what we find there today: humid and semitropical with expansive grasslands broken by stands of short bushy trees. Most of the area’s colorful rocks, which are commonly known as ochre, contain iron oxide, which is the mineral pigment later used to make the red paint on the walls of caves across Ice Age Europe and beyond. In later times, ochre is often associated with nonutilitarian activities, but since the people of Twin Rivers lived before the emergence of modern humans (Homo sapiens, at 200,000 years ago), they were not quite us yet. If this site were, say, 30,000 years old, most anthropologists would agree that the collection and preparation of these colorful minerals had a symbolic function, but because this site is at least 230,000 years older, there is room for debate.

Part of this uncertainty is owing to the fact that ground ochre is also useful for utilitarian reasons. It can act as an adhesive, say, for gluing together parts of a tool. It also works as an insect repellent and in the tanning of hides, and may even have been used for medicinal purposes, such as stopping the bleeding of wounds.

If the selection of the shades of ochre found at this site were for some mundane purpose, then the color shouldn’t matter, right? Yet the people from the Twin Rivers ranged out across the landscape to find these minerals, often much farther afield than necessary if they just required something with iron oxide in it. Instead, they returned to very specific mineral deposits, especially ones containing bright-red ochre, then carried the ochre with them back to their home base. This use of ochre, and the preference for certain colors, particularly bright red, may have been part of a much earlier tradition, and it is currently one of the oldest examples we have of potential symbolism in an ancestral human species.

Kindle Locations 669-683

Four pieces of bright-red ochre collected from a nearby mineral source were also found in the cave. 6 Three of the four pieces had been heated to at least 575 ° F in order to convert them from yellow to red. The inhabitants of Skhul had prospected the landscape specifically for yellowish ochre with the right chemical properties to convert into red pigment. The selective gathering of materials and their probable heat-treatment almost certainly indicates a symbolic aspect to this practice, possibly similar to what we saw with the people at Pinnacle Point about 30,000 years earlier. […]

The combination of the oldest burial with grave goods; the preference for bright-red ochre and the apparent ability to heat-treat pigments to achieve it; and what are likely some of the earliest pieces of personal adornment— all these details make the people from Skhul good candidates for being our cognitive equals. And they appear at least 60,000 years before the traditional timing of the “creative explosion.”

Kindle Locations 1583-1609

There is something about the color red. It can represent happiness, anger, good luck, danger, blood, heat, sun, life, and death. Many cultures around the world attach a special significance to red. Its importance is also reflected in many of the languages spoken today. Not all languages include words for a range of colors, and the simplest systems recognize only white and black, or light and dark, but whenever they do include a third color word in their language, it is always red.

This attachment to red seems to be embedded deep within our collective consciousness. Not only did the earliest humans have a very strong preference for brilliant red ochre (except for the inhabitants of Sai Island, in Sudan, who favored yellow), but even earlier ancestral species were already selecting red ochre over other shades. It may also be significant (although we don’t know how) that the pristine quartzite stone tool found in the Pit of Bones in Spain was of an unusual red hue.

This same preference for red is evident on the walls of caves across Europe during the Ice Age. But by this time, artists had added black to their repertoire and the vast majority of paintings were done in one or both of these colors. I find it intriguing that two of the three most common colors recognized and named across all languages are also the ones most often used to create the earliest art. The third shade, though well represented linguistically, is noticeably absent from Ice Age art. Of all the rock art sites currently known in Europe, only a handful have any white paint in them. Since many of the cave walls are a fairly light gray or a translucent yellowy white, it’s possible that the artists saw the background as representing this shade, or that its absence could have been due to the difficulty in obtaining white pigment: the small number of sites that do have white images all used kaolin clay to create this color. (Since kaolin clay was not as widely available as the materials for making red and black paint, it is certainly possible that scarcity was a factor in color choice.)

While the red pigment was created using ochre, the black paint was made using either ground charcoal or the mineral manganese oxide. The charcoal was usually sourced from burnt wood, though in some instances burnt bone was used instead. Manganese is found in mineral deposits, sometimes in the same vicinity as ochre. Veins of manganese can also occasionally be seen embedded right in the rock at some cave sites. Several other colors do appear on occasion—yellow and brown are the most common—though they appear at only about 10 percent of sites.

There is also a deep purple color that I’ve only ever seen in cave art in northern Spain, and even there it’s rare. La Pasiega (the site where I saw the grinding stone) has a series of paintings in this shade of violet in one section of the cave. Mixed in with more common red paintings, there are several purple signs—dots, stacked lines, rectangular grills—along with a single purple bison that was rendered in great detail (see fig. 4 in insert). Eyes, muzzle, horns—all have been carefully depicted, and yet the purple shade is not an accurate representation of a bison’s coloring. Did the artist use this color simply because it’s what he or she had at hand? Or could it be that the color of the animal was being dictated by something other than a need for this creature to be true to life? We know these artists had access to brown and black pigments, but at many sites they chose to paint animals in shades of red or yellow, or even purple, like the bison here at La Pasiega. These choices are definitely suggestive of some type of color symbolism at work, and it could even be that creating accurate replicas of real-life animals was not the main goal of these images.

Spiritualism and Bicameralism

In Spirit of Equality, Steve A. Wiggins discusses the recent Ghostbusters movie. His focus is on spiritualism and gender. He writes:

“A thoughtful piece by Colin Dickey in New Republic points out some of the unusual dynamics at play here. Looking at the history of Spiritualism as the basis for the modern interest in ghosts, Dickey suggests that women have been involved in the long-term fascination with the dead from the beginning. Their motive, however, was generally communication. Women wanted to relate with ghosts to make a connection. The original Ghostbusters movie represented a male, rationalistic approach to ghosts. As Dickey points out, instead of communicating, the men hunt and trap rather than trance and rap.”

I’m familiar with the spiritualist tradition. It’s part of the milieu that formed the kind of religion I was raised in, Science of Mind and New Thought Christianity.

The main church I grew up in, Unity Church, was heavily influenced by women from when it began in the late 1800s. Its founding was inspired by Myrtle Fillmore’s spiritual healing, women were leaders early on in the church, and ministers officiated same-sex marriage ceremonies at least as far back as when I was a kid. It is not a patriarchal religion, and it emphasizes the idea of having a direct and personal connection to the divine, such that you can manifest it in your life.

The gender differences mentioned by Wiggins are the type of thing that always interests me. There are clear differences, whatever their causes. Psychological research has found variations in cognition and behavior, on average, between the genders. This is seen in personality research. And brain research shows that at least some of these differences have a basis in biology, e.g., women having, on average, a larger corpus callosum.

I’m not sure how these kinds of differences relate to something like spiritualism and the fictional portrayal of human interaction with ghosts/spirits. The two Ghostbusters movies do offer a fun way to think about it.

Reading Wiggins’s piece, I thought about an essay I read just this past week. It offers a different perspective on a related topic, that of hearing voice commands and the traditions that go along with it. The essay is “Evolution and Inspiration” by Judith Weissman (from Gods, Voices, and the Bicameral Mind, ed. Marcel Kuijsten).

She notes “that all over the world, throughout history, most of the poets who hear voices have been male, and their poems are usually about the laws of the fathers.” She considers this likely relevant, although she doesn’t offer any definite conclusions about what it might mean.

In the context of what Wiggins brings up, it makes one wonder what separates the tradition of voice-hearing poets from that of spiritualists. I can think of one thing, drawn from that same essay.

Weissman mentioned that command voices often tell people what to do. A famous example was Daniel Paul Schreber who, upon hearing a voice telling him to defend his manhood, punched an attendant at the psychiatric institute in the face. Interestingly, Schreber was a well-educated, rationalistic, and non-religious man before he began hearing voices.

Command voices tell people, often men, what to do. It leads to action, sometimes rather masculine action. Few people hear such voices these days and, when they do, they are considered schizophrenic—crazier than even a spiritualist.

In the New Republic article “The Spiritualist Origins of Ghostbusters,” Colin Dickey offers an explanation about spiritualism in a gender-biased society.

“Spiritualism also celebrated precisely those aspects of femininity that the rest of culture was busy pathologizing. Nervousness, erratic behavior, uncontrolled outbursts, flagrant sexuality—doctors and psychiatrists saw these all as symptoms of hysteria, that ever-elusive disease that mostly boiled down to women acting out. But these same unruly behaviors were qualities prized in an excellent medium, and women who exhibited these traits were routinely praised for their psychic sensitivity. Women who might have otherwise been institutionalized found celebrity through Spiritualism instead.”

That makes me wonder. Which is cause and which is effect? How do spiritualism and other forms of spirituality get expressed in other kinds of societies?

I’m reminded of two other things. First, there was an interesting passage on hysteria in a book on Galen, The Prince of Medicine by Susan P. Mattern. In bicameral fashion, the woman’s uterus (Greek hystera) literally had a mind of its own and was presumed to move around, causing problems. The second thing is another part of the Weissman essay:

“The last priests died shortly after the Ik were forcibly moved, and only one person was left who could still hear commanding voices, Nagoli, the daughter of a priest. Because she was not allowed to become a priest herself, she was called mad.”

Spirituality, when it was part of the social order, was respectable. But when that male-oriented society broke down, the spiritual ability of that woman was then seen as madness. The men (and the other women) couldn’t hear the voices she heard. The voices that once were guidance had become a threat. If that voice-hearing daughter of a priest had lived in 19th century United States, she might have been diagnosed with hysteria or else have become a popular spiritualist. Or if her culture hadn’t fallen into disarray, she would have been no one special at all, perfectly normal.

To Give Voice

I wish to continue my thoughts from Outpost of Humanity. I didn’t explicitly state the inspiration for that post. It formed out of my response to a comment at another post, The Stories We Tell.

That comment was odd. The person who wrote it mixed praise with criticism, a backhanded compliment. Looking at it again, I still don’t get the point of it. I’ve already responded to it with several thorough comments. But I wanted to emphasize a central point that brings it back to where my mind is at the moment.

I’ve been in a process of rethinking my life, both in terms of how I spend my time and how I view the world. I wonder about what useful role I can play or simply what is of genuine value to me. I don’t mind criticisms, as I’m perfectly capable of criticizing myself. And I can promise you that my self-criticisms will be far more scathing than anything offered by anyone else.

The funny thing is that this particular critic was attacking me based on my previous admissions about depression, as if that disqualified my opinion from having any validity. I’m fairly open about my personal problems, and some people see this as a weakness, a chink in the armor. They don’t understand that I’ve never thought of honesty as a weakness; if anything, it’s a strength. I don’t hide who I am.

I find it telling that this particular critic was writing under a pseudonym. It’s easy to attack others while hidden behind a computer screen and anonymity. I’ve had trolls threaten me, even my family, telling me that they knew where I lived. My response was to tell them to come right on over and I’d invite them in to chat over beer or coffee. That tends to make trolls go away, the threat of meeting in person.

We live in a dysfunctional society. To live in such a society is to be part of that dysfunction. Depression and other mental conditions might not be common in some other societies, but they are common in this one, affecting even the successful and the supposedly well-adjusted. There are many things that make me abnormal. Depression, however, is not one of them. If you’re looking for a reason to dismiss me, look elsewhere and be more thoughtful about it.

In my response to this particular critic, I pointed out that many of the most influential people throughout history were seen as problematic by the defenders of the status quo. They were called malcontents, rabble-rousers, troublemakers, blasphemers, and worse. Their personal character, lifestyle, and so on were often targeted. These people were deemed a threat to society and treated accordingly, victimized in numerous ways: ridiculed, fired, fined, ostracized, banished, imprisoned, tortured, attacked by mobs, and sometimes killed.

This is true of figures from religious prophets such as Jesus to political revolutionaries such as Thomas Paine. I might note that both of these men, when they attracted the attention of the powerful, lacked worldly success or even employment. Each of them spent time wandering homeless and ended their lives as poor bachelors with few loyal friends remaining. They were hated and despised, and for that very reason they also inspired. Yet upon their deaths, they were forgotten by society until later generations resurrected their deeds and brought them back into public memory.

There is another aspect to this. Consider either of these figures. If Jesus had remained a carpenter or found some other kind of respectable work, if he had been successful in his career and been a good citizen, if he had married and raised a family, would that have been a better use of his life? If Paine had continued his father’s trade as a corsetmaker, if his first wife and child hadn’t died and his second marriage hadn’t ended, if he hadn’t lost his civil servant job for the sake of a petition to the government, if he hadn’t become a privateer, which paid for his self-education (mainly buying books to read), if he hadn’t known poverty and desperation before meeting Benjamin Franklin, would he have been a better person and the world a better place?

Let me focus on Paine. He lived a rough life. By the standards of his society, his life kept ending in failure. He found himself in middle age with no prospects or hope, his only merits being his intelligence and self-education. He had no way of proving his worth to others. Like Jesus, he had given up the trade he was taught when younger. And now he had nothing. When Franklin first saw him, he was probably dirty and smelly, likely with rotten teeth. He was a poor nobody. That didn’t stop Franklin, also of a working-class background, from seeing the potential in Paine. Following Franklin’s advice, he headed to America, where he was carried ashore sick and close to death.

It was precisely Paine’s rough life that gave him insight and perspective. He saw the world for what it was. And with his own understanding, he sensed other possibilities. But why should anyone have listened to him? He came to the American colonies, upon Franklin’s invitation. He found employment at a printing press where he began his serious writing career. All he knew to do was write what was in him to write. He had no college degree or anything else to demonstrate his opinion was any more meaningful than anyone else’s. If anything, his uncouth ways, bad attitude and drinking habits led people to dismiss him. Yet most of those who dismissed him are now forgotten to history.

Paine was not well-adjusted to the society he found himself in. He knew that as well as anyone else. But why should he shut up simply because some others found him disagreeable? He had the audacity to suggest that maybe the problem was in society and not to be blamed on the victims of society. In a perfect world, his life would have turned out better. Becoming a pamphleteer and then a revolutionary wasn’t his first choice of profession, nor his second, third, or fourth.

No one knows for certain what their life will become. And no one knows how they will be remembered later on or even if they will be remembered at all. We each have potential within us, and most of that potential will remain hidden. We don’t know what we’re capable of until conditions bring out what we didn’t realize existed within us. Everyone is trying the best that they can in the situation they find themselves. There are probably millions of people in the world right now with talents equal to or greater than those of Thomas Paine, but few of them will ever get the opportunity to develop their potential even slightly. Paine was lucky to find someone like Benjamin Franklin who helped him out of a tough spot; otherwise, Paine would likely have died forgotten like so many others.

We are products of our environments, the results of luck, good or bad. Life is a crapshoot. None of us chooses how and where we were born and under what circumstances. We are all forced to take life as it comes. Our place and time either amplifies or mutes our possibilities, opens or closes doors, clears or blocks our path. An individual of average intelligence and ability might do great things because of her situation, as her particular set of potentials happens to be precisely what was needed in that context to take advantage of an opportunity or solve a problem. But an individual of immense intelligence and ability might do nothing at all, no matter how hard she tries, if she happened to be born into unfortunate conditions, such as having been a peasant in a feudal society or a housewife in early 20th-century America.

Sometimes it is the failures of society, the least well-adjusted, who have the most understanding. They are those who have struggled the most and have seen the underbelly of society. This often gives them probing insight and a unique perspective. This is because those low in society tend to give more thought to those above them than the other way around. Privilege and comfort can lead to thoughtlessness and complacency, neither of which is conducive to depth of understanding. It was the lowly position of the likes of Jesus and Paine that allowed them to so powerfully criticize the social order. The clearest vantage point is from the bottom.

Even so, none of us can escape the limitations of where we find ourselves, no matter how clearly we see those limitations. Jesus was an axial age prophet. He was one of many teachers, philosophers, and leaders who arose over a period of centuries. It was a point of transition and transformation. That is what those axial age prophets gave voice to. Still, for all their insight and vision, none of them could foresee the long term consequences of the new social world that was coming into being. And none of them could know what part they might or might not play beyond their own lifetime.

Consider Jesus again. Assuming he actually existed, he was just some guy wandering about and preaching, no different than thousands of others doing the same in the area, thousands of others claiming to speak for God, to be divine saviors, or even to be godmen. Even his name, Jesus, and appellation, Christ, were common at the time—and so he quite likely wasn’t even the only Jesus Christ in the first century. There was nothing that made him stand out from the crowd, not his telling parables nor his miracles/magic-tricks. Then he was crucified, no more special than a common criminal.

Upon his death, his prophecies didn’t come true and people almost two thousand years later are still waiting. There is no evidence that Jesus ever intended to start a religious movement, much less found a new religion. It wasn’t Jesus but later generations of Christians who built up his reputation. In his lifetime, he was almost entirely unknown, the Romans apparently not noticing his existence as he was never recorded in any official records and not written about even by the most famous Jewish historian of the time. The stories about him weren’t put down on paper until generations after his supposed death.

So, why do so many people care about and feel inspired by a poor homeless guy in the ancient world who liked to tell stories while hanging out with the trash of Roman society such as prostitutes, unemployed fishermen, the sickly, etc? According to both Jewish and Roman social norms, Jesus was an utter failure and a clear example of how not to live one’s life. As such, what did he accomplish that makes him so important? He did one thing and did it well, and the other axial age prophets did the same thing. What he was able to do was simply express a new vision and, by doing this, helped people understand the significance of the changes in society and worldview. No matter how simple, it was powerful. The axial age prophets helped transform all of civilization.

Those changes followed the collapse of the late bronze age civilizations. There was a thousand years of social chaos and reordering before stability began to fully set in again. The axial age prophets heralded the new age. But at any given moment in that transition, during any particular generation, the larger view would have been impossible to see in the way we can now, with the benefit of historical and archaeological hindsight. Even today, we still don’t know the full consequences of those changes. Those like Paine were struggling with a new order that was continually evolving. From the end of bicameralism to the rise of modernity, individuality has been taking hold and taking different forms.

It makes one wonder how far that individualism can be pushed before finally breaking. The bicameral societies, some argue, were the victims of their own success. They developed into such immense and complicated civilizations that bicameralism could no longer operate and maintain social order. What if we are now coming to the point where we too will become the victims of our own success? Who are the prophets of our age standing outside the system? Will many people alive right now listen to them or will they only be taken seriously later on by future generations, after the changes have already occurred? Will the prophets of the present be dismissed and ignored as were the prophets of the past?

I would point out that most prophets likely never think of themselves as prophets. They are simply people living their lives. Finding themselves amidst greater events, they try to make sense of it all. And in doing so, they give voice to what so many others are feeling. The prophets of our age are at this very moment unknown, to be found among homeless beggars, low-level workers, college dropouts, and in so many other places. One is unlikely to come across them among the successful and well-adjusted. Some, God forbid, might be suffering from depression. Oh, the horror!

The prophets of the present probably wouldn’t even recognize themselves as such. Only a crazy person or religious nut would think of themselves as a prophet these days (or even during the era of Paine’s lifetime). Having spent their lives being told they are worthless, their main struggle could be in taking themselves seriously and in sensing their own potential. Most people who have something worthy to say never get the chance to be heard, amidst all the noise. The fact of the matter is none of us can ultimately judge the value of our own understandings. History, as always, will be the judge. All that we can do is speak our truth as best we can.

Anyway, a prophet in this sense isn’t necessarily an individual who says something unique and original, but someone who simply speaks what others are afraid to say or don’t know how to articulate. Playing this role of giving voice might be more an act of moral courage than of visionary genius. Speaking truth to power shouldn’t be underestimated, even when the powerful pretend they don’t hear.

I make no grand claims of myself. Nor do I expect to be praised for my efforts. All I seek to do is give voice to what matters most, to give voice to the otherwise voiceless. I do this for no other reason than I feel compelled to do so. It is what is in me to do. If some others see me as an opinionated fool or a self-righteous malcontent, then so be it. I’d like to think that what I express has meaning and value, but I can’t be my own judge. Either my words stand on their own merit or they don’t.

The truths that need to be spoken are greater than any of us. But each of us has hold of some small part. I wish others well in seeking their own truth and giving voice to it. This is no easy task, but worth the effort. Truth-speaking shouldn’t be taken lightly.