The Power of Language Learning

“I feel that American as against British English, and English of any major dialect as against Russian, and both languages as against the Tarascan language of Mexico constitute different worlds. I note that it is persons with experience of foreign languages and poetry who feel most acutely that a natural language is a different way not only of talking but of thinking and imaging and of emotional life.”
~Paul Friedrich, The Language Parallax, Kindle Locations 356-359

“Marketing professor David Luna has performed tests on people who are not just bilingual but bicultural—those who have internalized two different cultures—which lend support to this model of cultural frames. Working with people immersed equally in both American and Hispanic cultures, he examined their responses to various advertisements and newspaper articles in both languages and compared them to those of bilinguals who were only immersed in one culture. He reports that biculturals, more than monoculturals, would feel “like a different person” when they spoke different languages, and they accessed different mental frames depending on the cultural context, resulting in shifts in their sense of self.”
~Jeremy Lent, The Patterning Instinct, p. 204

Like Daniel Everett, though centuries earlier, Roger Williams went to convert the natives, and in the process he was deconverted, at least to the extent of losing his righteous Puritanism. And as with Everett, he studied the native languages and wrote about them. That could be an example of the power of linguistic relativity, in that studying another language can draw you into another cultural worldview.

On a related note, Baruch Spinoza did textual analysis, Thomas Paine did Biblical criticism, Friedrich Nietzsche did philology, etc. It makes one wonder how studying language might help shape the thought and redirect the life trajectory of certain thinkers. Many radicals have a history of studying languages and texts. The same thing is seen with a high number of academics, ministers, and apologists turning into agnostics and atheists through an originally faithful study of the Bible (e.g., Robert M. Price).

There is a trickster quality to language, something observed by many others. To closely study language and the products of language is to risk having one’s mind unsettled and then to risk being scorned by those locked into a single linguistic worldview. What Everett found was that, in trying to translate the Bible for the Pirahã, he was destabilizing his place within the religious order and also, in discovering the lack of linguistic recursion, destabilizing his place within the academic order. Both organized religion and organized academia are institutions of power that maintain the proper order. For the same reason of power, governments have often enforced a single language for the entire population, as thought control and social control, as enforced assimilation.

Monolingualism goes hand in hand with monoculturalism. And so simply learning a foreign language can be one of the most radical acts that one can commit. The more foreign the language, the more radical the effect. But sometimes simply scrutinizing one’s own native language can shift one’s mind, suggesting a possible connection between writing and a greater potential for independent thought. Then again, knowledge of language can also make one a better rhetorician and propagandist. Language as trickster phenomenon does have two faces.

* * *

The Bilingual Mind
by Aneta Pavlenko
pp. 25-27

Like Humboldt and Sapir before him, Whorf, too, believed in the plasticity of the human mind and its ability to go beyond the categories of the mother tongue. This belief permeates the poignant plea for ‘multilingual awareness’ made by the terminally ill Whorf to the world on the brink of World War II:

I believe that those who envision a world speaking only one tongue, whether English, German, Russian, or any other, hold a misguided ideal and would do the evolution of the human mind the greatest disservice. Western culture has made, through language, a provisional analysis of reality and, without correctives, holds resolutely to that analysis as final. The only correctives lie in all those other tongues which by aeons of independent evolution have arrived at different, but equally logical, provisional analyses. ([ 1941b ] 2012 : 313)

Whorf’s arguments fell on deaf ears, because they were made in a climate significantly less tolerant of linguistic diversity than that of the late imperial Russia and the USSR. In the nineteenth century, large immigrant communities in the US (in particular German speakers) enjoyed access to native-language education, press and theater. The situation began to change during the period often termed the Great Migration (1880–1924), when approximately 24 million new immigrants entered the country (US Bureau of the Census, 1975 ). The overwhelming influx raised concerns about national unity and the capacity of American society to assimilate such a large body of newcomers. In 1917, when the US entered the European conflict declaring war on Germany, the anti-immigrant sentiments found an outlet in a strong movement against ‘the language of the enemy’: German books were removed from libraries and destroyed, German-language theaters and publications closed, and German speakers became subject to intimidation and threats (Luebke , 1980 ; Pavlenko, 2002a ; Wiley , 1998 ).

The advisability of German – and other foreign-language-medium – instruction also came into question, in a truly Humboldtian fashion that linked the learning of foreign languages with adoption of ‘foreign’ worldviews (e.g., Gordy , 1918 ). The National Education Association went as far as to declare “the practice of giving instruction … in a foreign tongue to be un-American and unpatriotic” (Fitz-Gerald , 1918 : 62). And while many prominent intellectuals stood up in defense of foreign languages (e.g., Barnes, 1918 ), bilingual education gave way and so did foreign-language instruction at the elementary level, where children were judged most vulnerable and where 80% of them ended their education. Between 1917 and 1922, Alabama, Colorado, Delaware, Iowa, Nebraska, Oklahoma, and South Dakota issued laws that prohibited foreign-language instruction in grades I through VIII, while Wisconsin and Minnesota restricted it to one hour a day. Louisiana, Indiana, and Ohio made the teaching of German illegal at the elementary level, and so did several cities with large German-speaking populations, including Baltimore, New York City, and Philadelphia (Luebke , 1980 ; Pavlenko, 2002a ). The double standard that made bilingualism an upper-class privilege reserved for ‘real’ Americans is seen in the address given by Vassar College professor Marian Whitney at the Modern Language Teachers conference in 1918:

In so far as teaching foreign languages in our elementary schools has been a means of keeping a child of foreign birth in the language and ideals of his family and tradition, I think it a bad thing; but to teach young Americans French, German, or Spanish at an age when their oral and verbal memory is keen and when languages come easily, is a good thing. (Whitney , 1918 : 11–12)

The intolerance reached its apogee in Roosevelt’s 1919 address to the American Defense Society that equated English monolingualism with loyalty to the US:

We have room for but one language here, and that is the English language, for we intend to see that the crucible turns our people out as Americans, of American nationality, and not as dwellers in a polyglot boardinghouse; and we have room for but one sole loyalty, and that is the loyalty to the American people. (cited in Brumberg, 1986 : 7)

Reprinted in countless Board of Education brochures, this speech fortified the pressure not only to learn English but to abandon native languages. This pressure precipitated a rapid shift to English in many immigrant communities, further facilitated by the drastic reduction in immigrant influx, due to the quotas established by the 1924 National Origins Act (Pavlenko , 2002a ). Assimilation efforts also extended to Native Americans, who were no longer treated as sovereign nations – many Native American children were sent to English-language boarding schools, where they lost their native languages (Morgan, 2009 ; Spack , 2002 ).

The endangerment of Native American languages was of great concern to Boas, Sapir , and Whorf , yet their support for linguistic diversity and multilingualism never translated into reforms and policies: in the world outside of academia, Americanization laws and efforts were making US citizenry unapologetically monolingual and the disappearance of ‘multilingual awareness’ was applauded by academics who viewed bilingualism as detrimental to children’s cognitive, linguistic and emotional development (Anastasi & Cordova , 1953 ; Bossard, 1945 ; Smith, 1931 , 1939 ; Spoerl , 1943 ; Yoshioka , 1929 ; for discussion, see Weinreich, 1953 : 115–118). It was only in the 1950s that Arsenian ( 1945 ), Haugen ( 1953 , 1956 ), and Weinreich ( 1953 ) succeeded in promoting a more positive view of bilingualism, yet part of their success resided in the fact that by then bilingualism no longer mattered – it was regarded, as we will see, as an “unusual” characteristic, pervasive at the margins but hardly relevant for the society at large.

In the USSR, on the other hand, linguists’ romantic belief in linguistic rights and politicians’ desire to institutionalize nations as fundamental constituents of the state gave rise to the policy of korenizatsia [nativization] and a unique educational system that promoted the development of multilingual competence (Hirsch, 2005 ; Pavlenko , 2013 ; Smith , 1998 ). It is a little-known and under-appreciated irony that throughout the twentieth century, language policies in the ‘totalitarian’ Soviet Union were significantly more liberal – even during the period of the so-called ‘russification’– than those in the ‘liberal’ United States.

Kavanaugh and the Authoritarians

I don’t care too much about the Brett Kavanaugh hearings, one way or another. There doesn’t appear to be any hope of salvation in our present quandary, not for anyone involved (or uninvolved), far beyond who ends up on the Supreme Court.

But from a detached perspective of depressive realism, the GOP is in clear decline, to a far greater degree than the Democrats, which is saying a lot. Back during the presidential campaign, I stated that neither main political party should want to win. That is because we are getting so close to serious problems in our society, or rather getting closer to the results of problems that have long been with us. Whichever party is in power will be blamed, not that I care either way, considering both parties deserve blame.

Republicans don’t seem to be able to help themselves. They’ve been playing right into the narrative of their own decline. At the very moment they needed to appeal to minorities because of looming demographic changes, they doubled down on bigotry. Now, the same people who supported and voted for a president who admitted to grabbing women by the pussy (with multiple allegations of sexual misconduct against him and multiple known cases of cheating on his wife) are defending Kavanaugh against allegations of sexual wrongdoing.

This is not exactly a surprise, as Trump brazenly and proudly declared that he could shoot a person in full public view and his supporters would be fine with it. And certainly his publicly declaring his authoritarianism in this manner didn’t faze many Republican voters and politicians. He was elected and the GOP rallied behind him. Nor did it bother Kavanaugh, whose acceptance of the Republican nomination implies that he too supports authoritarianism and, if possible, plans on enacting it on the Supreme Court. Whether or not it is true that Trump could get away with murder, it is an amazing statement to make in public and still get elected president; in any functioning democracy, that would immediately disqualify a candidate.

It almost doesn’t matter what the facts of the situation are, guilt or innocence. Everyone knows that, even if Kavanaugh were a proven rapist, the same right-wing authoritarians who love Trump would defend him to the bitter end. Loyalty is everything to these people. Not so much for the political left, where individuals are more easily thrown under the bus (or, like Al Franken, throw themselves under it, in his case over a rather minor accusation of an inappropriate joke, not even involving any inappropriate touching). Sexual allegations demoralize Democrats, consider the hard hit the party took with Anthony Weiner, in a way that never happens with Republicans, who treat a sexual allegation as a call to battle.

The official narrative now is that the GOP is the party of old school bigots and chauvinistic pigs. They always had that hanging over their heads. And in the past, they sometimes held it up high with pride as if it were a banner of their strength. But now they find themselves on the defensive. It turns out that the narrative they embraced probably doesn’t have much of a future. Yet Republicans can’t find it in themselves to seek a new script. For some odd reason, they are heavily attached to being heartless assholes.

This is even true for many Republican women. My conservative mother, who didn’t vote for Trump, has been pulled back into partisanship by the present conflict and has explicitly told me that she doesn’t believe men should be held accountable for past sexual transgressions because that is just the way the world was back then. Some conservative women go even further, arguing that men can’t help themselves and that even now we shouldn’t hold them accountable, as Toyin Owoseje reported:

Groping women is “no big deal”, a Donald Trump supporting mother told her daughters on national television when asked about the sexual misconduct allegations levelled against Supreme Court nominee Brett Kavanaugh.

Among Republicans, we’ve been hearing such immoral defenses for a long time. There is another variety of depravity to be found among Democrats, but they at least have the common sense not to embrace it openly, showing instead a talent for soft-pedalling their authoritarian tendencies. Yet as full-blown authoritarian extremists disconnected from the average American, Republicans don’t understand why the non-authoritarian majority of the population might find their morally debased views unappealing. To them, loyalty to the group is everything, and the opinions of those outside the group don’t matter.

The possibility that Kavanaugh might have raped a woman, to right-wing authoritarians, simply makes him seem all the more a strong male to be revered. It doesn’t matter what he did, at least not to his defenders. This doesn’t bode well for the Republican Party. Given the decline they are in, the only hope they have is for Trump to start World War III and seize total control of the government. They’ve lost the competition of rhetoric. All that is left for them is to force their way to the extent they can, which at the moment means trying to push Kavanaugh onto the Supreme Court. Of course, they theoretically could simply pick a different conservative nominee without all the baggage, but they can’t back down now no matter what. Consequences be damned!

Just wait to see what they’ll be willing to do when the situation gets worse. Imagine what would happen with a Trump-caused constitutional crisis and Kavanaugh on the Supreme Court. However it ends, the trajectory is not pointing upward. The decline of the GOP might be the (further) decline of the United States.

Straw Men in the Linguistic Imaginary

“For many of us, the idea that the language we speak affects how we think might seem self-evident, hardly requiring a great deal of scientific proof. However, for decades, the orthodoxy of academia has held categorically that the language a person speaks has no effect on the way they think. To suggest otherwise could land a linguist in such trouble that she risked her career. How did mainstream academic thinking get itself in such a straitjacket?”
~ Jeremy Lent, The Patterning Instinct

Portraying the Sapir-Whorf hypothesis as linguistic determinism is a straw man fallacy. It’s false to speak of a strong Sapir-Whorf hypothesis at all, as no such hypothesis was ever proposed by Edward Sapir or Benjamin Lee Whorf. Interestingly, researchers have since found examples of what could be called linguistic determinism, or at least very strong linguistic relativity, although such cases still appear rare (much as examples of genetic determinism are rare). But that is neither here nor there, considering Sapir and Whorf didn’t argue for linguistic determinism, no matter how one might quote-mine their texts. The position of relativity, similar to social constructivism, is the wholesale opposite of rigid determinism; besides, linguistic relativity wasn’t even a major focus of Sapir’s work, even as he influenced Whorf.

Turning their view into a caricature of determinism was an act of projection. It was the anti-relativists who were arguing for biology determining language, from Noam Chomsky’s language module in the brain to Brent Berlin and Paul Kay’s supposedly universal color categories. It was masterful rhetoric to turn the charge onto those holding the moderate position in order to dress them up as ideological extremists and charlatans. And with Sapir and Whorf gone from early deaths, they weren’t around to defend themselves and to deny what was claimed on their behalf.

Even Whorf’s sometimes strongly worded view of relativity, by today’s standards and knowledge in the field, doesn’t sound particularly extreme. If anything, to those informed of the most up-to-date research, denying such obvious claims would now sound absurd. How did so many become disconnected from simple truths of human experience, such that anyone who dared speak these truths could be ridiculed and dismissed out of hand? For generations, relativists stating common sense criticisms of race realism were dismissed in a similar way, and they were often the same people (cultural relativity and linguistic relativity in American scholarship were both influenced by Franz Boas). The argument tying them together is that relativity in the expression and embodiment of our shared humanity (think of it more in terms of Daniel Everett’s dark matter of the mind) is based on a complex and flexible set of universal potentials, such that universalism doesn’t require nor indicate essentialism. Yet why do we go on clinging to so many forms of determinism, essentialism, and nativism, including those ideas advocated by many of Sapir and Whorf’s opponents?

We are in a near impossible situation. Essentialism has been a cornerstone of modern civilization, most of all in its WEIRD varieties. Relativity simply can’t be fully comprehended, much less tolerated, within the dominant paradigm, although, as Leavitt argues, it resonates with the emphasis on language found in Romanticism, itself an earlier response to essentialism. As for linguistic determinism, even if it were true beyond a few exceptional cases, it is by and large an untestable hypothesis at present and so scientifically meaningless within WEIRD science. WEIRD researchers exist in a civilization that has become dominated by WEIRD societies, with nearly all alternatives destroyed or altered beyond their original form. There is nowhere to stand outside of the WEIRD paradigm, especially not for the WEIRDest of the WEIRD researchers who do most of the research.

If certain thoughts are unthinkable within WEIRD culture and language, we have no completely alien mode of thought by which to objectively assess the WEIRD, as imperialism and globalization have left no society untouched. There is no way for us to even think about what might be unthinkable, much less research it. This double bind goes right over the heads of most people, even over the heads of some relativists who fear being disparaged if they don’t outright deny any possibility of the so-called strong Sapir-Whorf hypothesis. That such a hypothesis potentially could describe reality to a greater extent than we’d prefer is, for most people infected with the WEIRD mind virus and living within the WEIRD monocultural reality tunnel, itself an unthinkable thought.

It is unthinkable and, in its fullest form, fundamentally untestable. And so it is terra incognita within the collective mind. The response is typically either uncomfortable irritation or nervous laughter. Still, the limited evidence in support of linguistic determinism points to the possibility of it being found in other as-yet unexplored areas; maybe a fair amount of evidence already exists that will later be reinterpreted when a new frame of understanding becomes established or when someone, maybe generations later, looks at it with fresh eyes. History is filled with moments when something shifted, allowing the incomprehensible and unspeakable to become a serious public debate, sometimes a new social reality. Determinism in all of its varieties seems a generally unfruitful path of research, although in its linguistic form it is compelling as a thought experiment, showing how little we know and can know, how severely constrained our imaginative capacities are.

We don’t look in the darkness where we lost what we are looking for because the light is better elsewhere. But what would we find if we did search the shadows? Whether or not we discovered proof for linguistic determinism, we might stumble across all kinds of other inconvenient evidence pointing toward ever more radical and heretical thoughts. Linguistic relativity and determinism might end up playing a central role less because of the bold answers offered than because of the questions that were dared to be asked. Maybe, in thinking about determinism, we could come to a more profound insight of relativity. After all, a complex enough interplay of seemingly deterministic factors would for all appearances be relativistic; that is to say, what seems to be linear causation could, when lines of causation are interwoven, lead to emergent properties. The relativistic whole, in that case, presumably would be greater than the deterministic parts.

Besides, it always depends on perspective. Consider Whorf, who “has been rejected both by cognitivists as a relativist and by symbolic and postmodern anthropologists as a determinist and essentialist” (John Leavitt, Linguistic Relativities, p. 193; Leavitt’s book goes into immense detail about all of the misunderstanding and misinterpretation, much of it because of intellectual laziness or hubris but some of it motivated by ideological agendas; the continuing and consistent wrongheadedness makes it difficult not to take much of it as arguing in bad faith). It’s not always clear what the debate is supposed to be about. Ironically, such terms as ‘determinism’ and ‘relativity’ are relativistic in their use while, in how we use them, determining how we think about the issues and how we interpret the evidence. There is no way to take ourselves out of the debate, for our own humanity is what we are trying to place under the microscope, causing us tremendous psychological contortions in maintaining whatever worldview we latch onto.

There is less distance between linguistic relativity and linguistic determinism than is typically assumed. The former says we are only limited by habit of thought and all it entails within culture and relationships. Yet habits of thought can be so powerful as to essentially determine social orders for centuries and millennia. Calling this mere ‘habit’ hardly does it justice. In theory, a society isn’t absolutely determined to be the way it is, nor are those within it determined to behave the way they do, but in practice extremely few individuals ever escape the gravitational pull of habitual conformity and groupthink (i.e., Jaynesian self-authorization is more a story we tell ourselves than an actual description of behavior).

So, yes, in terms of genetic potential and neuroplasticity, there was nothing directly stopping Bronze Age Egyptians from starting an industrial revolution, and there is nothing stopping a present-day Pirahã from becoming a Harvard professor of mathematics; still, the probability of such things happening is next to zero. Consider the rare individuals who break free of the collective habits of our society, as they usually end up either homeless or institutionalized, typically with severely shortened lives. To not go along with the habits of your society is to be deemed insane, incompetent, and/or dangerous. Collective habits within a social order involve systematic enculturation, indoctrination, and enforcement. The power of language, even if only relativistic, over our minds is one small part of the cultural system, albeit an important part.

We don’t need to go that far with our argument, though. However you want to slice it, there is plenty of evidence that remains to be explained. And the evidence has become overwhelming and, to many, disconcerting. The debate over the validity of the theory of linguistic relativity is over. But the opponents of the theory have had two basic strategies to contain their loss and keep the debate on life support. They conflate linguistic relativity with linguistic determinism and dismiss it as laughably false. Or they concede that linguistic relativity is partly correct but argue that it’s insignificant in influence, as if they never denied it and simply were unimpressed.

“This is characteristic: one defines linguistic relativity in such an extreme way as to make it seem obviously untrue; one is then free to acknowledge the reality of the data at the heart of the idea of linguistic relativity – without, until quite recently, proposing to do any serious research on these data.” (John Leavitt, Linguistic Relativities, p. 166)

Either way, essentialists maintain their position as if no serious challenge was posed. The evidence gets lost in the rhetoric, as the evidence keeps growing.

Still, there is something more challenging that also gets lost in debate, even when evidence is acknowledged. What motivated someone like Whorf wasn’t intellectual victory and academic prestige. There was a sense of human potential locked behind habit. That is why it was so important to study foreign cultures with their diverse languages, not only for the sake of knowledge but to be confronted by entirely different worldviews. Essentialists are on the old imperial path of Whiggish Enlightenment, denying differences by proclaiming that all things Western are the norm of humanity and reality, sometimes taken as a universal ideal state or the primary example by which to measure all else… an ideology that easily morphs into yet darker specters:

“Any attempt to speak of language in general is illusory; the (no doubt French or English) philosopher who does so is merely elevating his own mother tongue to the status of a universal standard (p. 3). See how the discourse of diversity can be turned to defend racism and fascism! I suppose by now this shouldn’t surprise us – we’ve seen so many examples of it at the end of the twentieth and beginning of the twenty-first century.” (John Leavitt, Linguistic Relativities, p. 161)

In this light, it should be unsurprising that the essentialist program presented in Chomskyan linguistics was supported and funded by the Pentagon (their specific interest in this case being the human-computer interface and the elimination of messy human error; in studying the brain as a computer, it was expected that the individual human mind could be made more amenable to a computerized system of military action and its accompanying chain-of-command). Essentialism makes promises that are useful for systems of effective control as part of a larger technocratic worldview of social control.

The essentialist path we’ve been on has left centuries of destruction in its wake. But from the humbling vista opening onto further possibilities, the relativists offer not a mere scientific theory but a new path for humanity, or rather they throw light onto the multiple paths before us. In offering respect and openness toward the otherness of others, we open ourselves toward the otherness within our own humanity. The point is that, though we are trapped in linguistic cultures, the key to our release is also to be found in the same place. But this requires courage and curiosity, a broadening of the moral imagination.

Let me end on a note of irony. In comparing linguistic cultures, Joseph Needham wrote that, “Where Western minds asked ‘what essentially is it?’, Chinese minds asked ‘how is it related in its beginnings, functions, and endings with everything else, and how ought we to react to it?’” This was quoted by Jeremy Lent in The Patterning Instinct (p. 206; quote originally from: Science and Civilization in China, vol. 2, History of Scientific Thought, pp. 199-200). Lent makes clear that this has everything to do with language. The Chinese language embodies ambiguity and demands contextual understanding, whereas Western, or more broadly Indo-European, language elicits abstract essentialism.

So, it is a specific linguistic culture of essentialism that influences, if not entirely determines, Westerners’ predisposition to see language as essentialist rather than relative. And it is this very essentialism that causes many Westerners, especially abstract-minded intellectuals, to be blind to essentialism as linguistically cultural rather than essential to human nature and neurocognitive functioning. That is the irony. This essentialist belief system is further proof of linguistic relativism. Even so, the direct study of other languages has strengthened a countervailing cultural understanding that Sapir and Whorf built upon; the essentialist project buried this school of thought, and it is only now being unearthed within mainstream thought.

* * *

Recursion and Human Thought: Why the Pirahã Don’t Have Numbers
by Daniel Everett

Scientists—linguists and anthropologists in particular—are very reticent to say that one group is somehow more special than another group because if that’s the case, then you’ve made discoveries that they haven’t made. I really think that’s probably right. I don’t think the Pirahã are special in some deep sense. They’re certainly very unusual, and they have characteristics that need to be explained, but all of the groups in the Amazon have different but equally interesting characteristics. I think that one reason we fail to notice, when we do field research, the fundamental differences between languages is because linguistic theory over the last 50 years—maybe even longer—has been primarily directed towards understanding how languages are alike, as opposed to how they are different.

If we look at the differences between languages—not exclusively, because what makes them alike is also very very important—the differences can be just as important as the similarities. We have no place in modern linguistic theory for really incorporating the differences and having interesting things to say about the differences. So when you say that this language lacks X, we will say, well, that’s just an exotic fact: so they lack it, no big deal. But when you begin to accumulate differences across languages around the world, maybe some of these things that we thought were so unusual aren’t as unusual and could in fact turn out to be similarities. Or, the differences could be correlated with different components that we didn’t expect before. Maybe there’s something about the geography, or something about the culture, or something about other aspects of these people that account for these differences. Looking at differences doesn’t mean you throw your hands up and say there’s no explanation and that you have nothing more than a catalog of what exists in the world. But it does develop a very different way of looking at culture and looking at language. […]

I know Noam fairly well, I’ve known his work most of my career and I’ve read everything he’s ever written in linguistics—I could have written his responses myself. I don’t mean to be flippant, but they were re-statements of things that everybody knows that he believes. I think that it’s difficult for him to see that there is any alternative to what he’s saying. He said to me there is no alternative to universal grammar; it just means the biology of humans that underlies language. But that’s not right, because there are a lot of people who believe that the biology of humans underlies language but that there is no specific language instinct. In fact at the Max Planck, Mike Tomasello has an entire research lab and one of the best primate zoos in the world, where he studies the evolution of communication, and human language, without believing in a language instinct or a universal grammar.

I’ve mainly followed Mike’s research there because we talk more or less the same language, and he’s more interested in directly linguistic questions than just primatology, but there’s a lot of really interesting work in primatology—looking at the acquisition of communication and finding similarities that we might not have thought were there if we believe in a universal grammar.

“Recursion and Lexicon” by Jan Koster
from Recursion and Human Language ed. by Harry van der Hulst

Current theorizing about the human language faculty, particularly about recursion, is dominated by the biolinguistics perspective. This perspective has been part of the generative enterprise since its inception and can be summarized as follows: The core of language is individual-psychological and may ultimately be explained in terms of human biology. A classical formulation of this program was Lenneberg (1967) and it was revitalized recently by Jenkins (2000) and particularly by Hauser, Chomsky and Fitch (2002) (henceforth: HCF). According to HCF, recursion (in the form of Merge) is the core of the human language faculty biologically conceived.

The biological perspective is far from self-evidently correct and, in fact, goes against a long tradition that emphasized the cultural, conventional nature of language. This tradition goes back at least to Aristotle’s De Interpretatione and became the core idea about language since the late Enlightenment and Romanticism, thanks to the influence of Herder, Von Humboldt and others. Most early 20th-century views were offshoots of the great conceptions formulated around the turn of the 18th century. Thus, Ferdinand de Saussure followed German Romanticism in this respect, as did the great American structuralists Franz Boas and Edward Sapir. Saussure was also influenced by one of the founding fathers of sociology, Emile Durkheim, who argued that certain social facts could not be reduced to individual psychology or biology. Also philosophers like Wittgenstein and Popper followed the European tradition, the former with his emphasis on the public and language game-dependent nature of linguistic rules, the latter by stipulating that language belongs to his (pseudo-technical) conception of supra-individual human culture known as “world 3” (Popper 1972).

None of these conceptions excludes a biological basis for language, for the trivial reason that all human culture and activity has a biological basis. Sapir (1921: 3), for instance, adheres to the cultural view of language: “[…] walking is an inherent, biological function of man” but “[…] speech is a non-instinctive, acquired, “cultural” function” (1921: 4). Clearly, however, this does not exclude biology for Sapir (1921: 9):

Physiologically, speech is an overlaid function, or to be more precise, a group of overlaid functions. It gets what service it can out of organs and functions, nervous and muscular, that have come into being and are maintained for very different ends than its own.

Biological structures with a new, “overlaid” function are like what biologists Gould and Vrba (1982) call “exaptation.”

The Patterning Instinct
by Jeremy Lent
pp. 197-205

The ability of these speakers to locate themselves in a way that is impossible for the rest of us is only the most dramatic in an array of discoveries that are causing a revolution in the world of linguistics. Researchers point to the Guugu Yimithirr as prima facie evidence supporting the argument that the language you speak affects how your cognition develops. As soon as they learn their first words, Guugu Yimithirr infants begin to structure their orientation around the cardinal directions. In time, their neural connections get wired accordingly until this form of orientation becomes second nature, and they no longer even have to think about where north, south, east, and west are.3 […]

For many of us, the idea that the language we speak affects how we think might seem self-evident, hardly requiring a great deal of scientific proof. However, for decades, the orthodoxy of academia has held categorically that the language a person speaks has no effect on the way they think. To suggest otherwise could land a linguist in such trouble that she risked her career. How did mainstream academic thinking get itself in such a straitjacket?4

The answer can be found in the remarkable story of one charismatic individual, Benjamin Whorf. In the early twentieth century, Whorf was a student of anthropologist-linguist Edward Sapir, whose detailed study of Native American languages had caused him to propose that a language’s grammatical structure corresponds to patterns of thought in its culture. “We see and hear and otherwise experience very largely as we do,” Sapir suggested, “because the language habits of our community predispose certain choices of interpretation.”5

Whorf took this idea, which became known as the Sapir-Whorf hypothesis, to new heights of rhetoric. The grammar of our language, he claimed, affects how we pattern meaning into the natural world. “We cut up and organize the spread and flow of events as we do,” he wrote, “largely because, through our mother tongue, we are parties to an agreement to do so, not because nature itself is segmented in exactly that way for all to see.”6 […]

Whorf was brilliant but highly controversial. He had a tendency to use sweeping generalizations and dramatic statements to drive home his point. “As goes our segmentation of the face of nature,” he wrote, “so goes our physics of the Cosmos.” Sometimes he went beyond the idea that language affects how we think to a more strident assertion that language literally forces us to think in a certain way. “The forms of a person’s thoughts,” he proclaimed, “are controlled by inexorable laws of pattern of which he is unconscious.” This rhetoric led people to interpret the Sapir-Whorf hypothesis as a theory of linguistic determinism, claiming that people’s thoughts are inevitably determined by the structure of their language.8

A theory of rigid linguistic determinism is easy to discredit. All you need to do is show a Hopi Indian capable of thinking in terms of past, present, and future, and you’ve proven that her language didn’t ordain how she was able to think. The more popular the Sapir-Whorf theory became, the more status could be gained by any researcher who poked holes in it. In time, attacking Sapir-Whorf became a favorite path to academic tenure, until the entire theory became completely discredited.9

In place of the Sapir-Whorf hypothesis arose what is known as the nativist view, which argues that the grammar of language is innate to humankind. As discussed earlier, the theory of universal grammar, proposed by Noam Chomsky in the 1950s and popularized more recently by Steven Pinker, posits that humans have a “language instinct” with grammatical rules coded into our DNA. This theory has dominated the field of linguistics for decades. “There is no scientific evidence,” writes Pinker, “that languages dramatically shape their speakers’ ways of thinking.” Pinker and other adherents to this theory, however, are increasingly having to turn a blind eye—not just to the Guugu Yimithirr but to the accumulating evidence of a number of studies showing the actual effects of language on people’s patterns of thought.10 […]

Psychologist Peter Gordon saw an opportunity to test the most extreme version of the Sapir-Whorf hypothesis with the Pirahã. If language predetermined patterns of thought, then the Pirahã should be unable to count, in spite of the fact that they show rich intelligence in other forms of their daily life. He performed a number of tests with the Pirahã over a two-year period, and his results were convincing: as soon as the Pirahã had to deal with a set of objects beyond three, their counting performance disintegrated. His study, he concludes, “represents a rare and perhaps unique case for strong linguistic determinism.”12

The Guugu Yimithirr, at one end of the spectrum, show the extraordinary skills a language can give its speakers; the Pirahã, at the other end, show how necessary language is for basic skills we take for granted. In between these two extremes, an increasing number of researchers are demonstrating a wide variety of more subtle ways the language we speak can influence how we think.

One set of researchers illustrated how language affects perception. They used the fact that the Greek language has two color terms—ghalazio and ble—that distinguish light and dark blue. They tested the speed with which Greek speakers and English speakers could distinguish between these two different colors, even when they weren’t being asked to name them, and discovered the Greeks were significantly faster.13

Another study demonstrates how language helps structure memory. When bilingual Mandarin-English speakers were asked in English to name a statue of someone with a raised arm looking into the distance, they were more likely to name the Statue of Liberty. When they were asked the same question in Mandarin, they named an equally famous Chinese statue of Mao with his arm raised.14

One intriguing study shows English and Spanish speakers remembering accidental events differently. In English, an accident is usually described in the standard subject-verb-object format of “I broke the bottle.” In Spanish, a reflexive verb is often used without an agent, such as “La botella se rompió”—“the bottle broke.” The researchers took advantage of this difference, asking English and Spanish speakers to watch videos of different intentional and accidental events and later having them remember what happened. Both groups had similar recall for the agents involved in intentional events. However, when remembering the accidental events, English speakers recalled the agents better than the Spanish speakers did.15

Language can also have a significant effect in channeling emotions. One researcher read the same story to Greek-English bilinguals in one language and, then, months later, in the other. Each time, he interviewed them about their feelings in response to the story. The subjects responded differently to the story depending on its language, and many of these differences could be attributed to specific emotion words available in one language but not the other. The English story elicited a sense of frustration in readers, but there is no Greek word for frustration, and this emotion was absent in responses to the Greek story. The Greek version, however, inspired a sense of stenahoria in several readers, an emotion loosely translated as “sadness/discomfort/suffocation.” When one subject was asked why he hadn’t mentioned stenahoria after his English reading of the story, he answered that he cannot feel stenahoria in English, “not just because the word doesn’t exist but because that kind of situation would never arise.”16 […]

Marketing professor David Luna has performed tests on people who are not just bilingual but bicultural—those who have internalized two different cultures—which lend support to this model of cultural frames. Working with people immersed equally in both American and Hispanic cultures, he examined their responses to various advertisements and newspaper articles in both languages and compared them to those of bilinguals who were only immersed in one culture. He reports that biculturals, more than monoculturals, would feel “like a different person” when they spoke different languages, and they accessed different mental frames depending on the cultural context, resulting in shifts in their sense of self.25

In particular, the use of root metaphors, embedded so deeply in our consciousness that we don’t even notice them, influences how we define our sense of self and apply meaning to the world around us. “Metaphor plays a very significant role in determining what is real for us,” writes cognitive linguist George Lakoff. “Metaphorical concepts…structure our present reality. New metaphors have the power to create a new reality.”26

These metaphors enter our minds as infants, as soon as we begin to talk. They establish neural pathways that are continually reinforced until, just like the cardinal directions of the Guugu Yimithirr, we use our metaphorical constructs without even recognizing them as metaphors. When a parent, for example, tells a child to “put that out of your mind,” she is implicitly communicating a metaphor of the MIND AS A CONTAINER that should hold some things and not others.27

When these metaphors are used to make sense of humanity’s place in the cosmos, they become the root metaphors that structure a culture’s approach to meaning. Hunter-gatherers, as we’ve seen, viewed the natural world through the root metaphor of GIVING PARENT, which gave way to the agrarian metaphor of ANCESTOR TO BE PROPITIATED. Both the Vedic and Greek traditions used the root metaphor of HIGH IS GOOD to characterize the source of ultimate meaning as transcendent, while the Chinese used the metaphor of PATH in their conceptualization of the Tao. These metaphors become hidden in plain sight, since they are used so extensively that people begin to accept them as fundamental structures of reality. This, ultimately, is how culture and language reinforce each other, leading to a deep persistence of underlying structures of thought from one generation to the next.28

Linguistic Relativities
by John Leavitt
pp. 138-142

Probably the most famous statement of Sapir’s supposed linguistic determinism comes from “The Status of Linguistics as a Science,” a talk published in 1929:

Human beings do not live in the objective world alone, nor alone in the world of social activity as ordinarily understood, but are very much at the mercy of a particular language which has become the medium of expression for their society. It is quite an illusion to imagine that one adjusts to reality essentially without the use of language, and that language is merely an incidental means of solving specific problems of communication or reflection. The fact of the matter is that the “real world” is to a large extent unconsciously built up on the language habits of the group. No two languages are ever sufficiently similar to be considered as representing the same social reality. The worlds in which different societies live are different worlds, not merely the same world with different labels attached … We see and hear and otherwise experience very largely as we do because the language habits of our community predispose certain choices of interpretation. (Sapir 1949: 162)

This is the passage that is most commonly quoted to demonstrate the putative linguistic determinism of Sapir and of his student Whorf, who cites some of it (1956: 134) at the beginning of “The Relation of Habitual Thought and Behavior to Language,” a paper published in a Sapir Festschrift in 1941. But is this linguistic determinism? Or is it the statement of an observed reality that must be dealt with? Note that the passage does not say that it is impossible to translate between different languages, nor to convey the same referential content in both. Note also that there is a piece missing here, between “labels attached” and “We see and hear.” In fact, the way I have presented it, with the three dots, is how this passage is almost always presented (e.g., Lucy 1992a: 22); otherwise, the quote usually ends at “labels attached.” If we look at what has been elided, we find two examples, coming in a new paragraph immediately after “attached.” In a typically Sapirian way, one is poetic, the other perceptual. He begins:

The understanding of a simple poem, for instance, involves not merely an understanding of the single words in their average significance, but a full comprehension of the whole life of the community as it is mirrored in the words, or as it is suggested by the overtones.

So the apparent claim of linguistic determinism is to be illustrated by – a poem (Friedrich 1979: 479–80), and a simple one at that! In light of this missing piece of the passage, what Sapir seems to be saying is not that language determines thought, but that language is part of social reality, and so is thought, and to understand either a thought or “a green thought in a green shade” you need to consider the whole.

The second example is one of the relationship of terminology to classification:

Even comparatively simple acts of perception are very much more at the mercy of the social patterns called words than we might suppose. If one draws some dozen lines, for instance, of different shapes, one perceives them as divisible into such categories as “straight,” “crooked,” “curved,” “zigzag” because of the classificatory suggestiveness of the linguistic terms themselves. We see and hear …

Again, is Sapir here arguing for a determination of thought by language or simply observing that in cases of sorting out complex data, one will tend to use the categories that are available? In the latter case, he would be suggesting to his audience of professionals (the source is a talk given to a joint meeting of the Linguistic Society of America and the American Anthropological Association) that such phenomena may extend beyond simple classification tasks.

Here it is important to distinguish between claims of linguistic determinism and the observation of the utility of available categories, an observation that in itself in no way questions the likely importance of the non-linguistic salience of input or the physiological component of perception. Taken in the context of the overall Boasian approach to language and thought, this is clearly the thrust of Sapir’s comments here. Remember that this was the same man who did the famous “Study on Phonetic Symbolism,” which showed that there are what appear to be universal psychological reactions to certain speech sounds (his term is “symbolic feeling-significance”), regardless of the language or the meaning of the word in which these sounds are found (in Sapir 1949). This evidence against linguistic determinism, as it happens, was published the same year as “The Status of Linguistics as a Science,” but in the Journal of Experimental Psychology.3

The metaphor Sapir uses most regularly for the relation of language patterning to thought is not that of a constraint, but of a road or groove that is relatively easy or hard to follow. In Language, he proposed that languages are “invisible garments” for our spirits; but at the beginning of the book he had already questioned this analogy: “But what if language is not so much a garment as a prepared road or groove?” (p. 15); grammatical patterning provides “grooves of expression, (which) have come to be felt as inevitable” (p. 89; cf. Erickson et al. 1997: 298). One important thing about a road is that you can get off it; of a groove, that you can get out of it. We will see that this kind of wording permeates Whorf’s formulations as well. […]

Since the early 1950s, Sapir’s student Benjamin Lee Whorf (1897–1941) has most often been presented as the very epitome of extreme cognitive relativism and linguistic determinism. Indeed, as the name attached to the “linguistic determinism hypothesis,” a hypothesis almost never evoked but to be denied, Whorf has become both the best-known ethnolinguist outside the field itself and one of the great straw men of the century. This fate is undeserved; he was not a self-made straw man, as Marshall Sahlins once called another well-known anthropologist. While Whorf certainly maintained what he called a principle of linguistic relativity, it is clear from reading Language, Thought, and Reality, the only generally available source of his writings, published posthumously in 1956, and even clearer from still largely unpublished manuscripts, that he was also a strong universalist who accepted the general validity of modern science. With some re-evaluations since the early 1990s (Lucy 1992a; P. Lee 1996), we now have a clearer idea of what Whorf was about.

In spite of sometimes deterministic phraseology, Whorf presumed that much of human thinking and perception was non-linguistic and universal across languages. In particular, he admired Gestalt psychology (P. Lee 1996) as a science giving access to general characteristics of human perception across cultures and languages, including the lived experiences that lie behind the forms that we label time and space. He puts this most clearly in discussions of the presumably universal perception of visual space:

A discovery made by modern configurative or Gestalt psychology gives us a canon of reference, irrespective of their languages or scientific jargons, by which to break down and describe all visually observable situations, and many other situations, also. This is the discovery that visual perception is basically the same for all normal persons past infancy and conforms to definite laws. (Whorf 1956: 165)

Whorf clearly believed there was a real world out there, although, enchanted by quantum mechanics and relativity theory, he also believed that this was not the world as we conceive it, nor that every human being conceives it habitually in the same way.

Whorf also sought and proposed general descriptive principles for the analysis of languages of the most varied type. And along with Sapir, he worked on sound symbolism, proposing the universality of feeling-associations to certain speech sounds (1956: 267). Insofar as he was a good disciple of Sapir and Boas, Whorf believed, like them, in the universality of cognitive abilities and of some fundamental cognitive processes. And far from assuming that language determines thought and culture, Whorf wrote in the paper for the Sapir volume that

I should be the last to pretend that there is anything so definite as “a correlation” between culture and language, and especially between ethnological rubrics such as “agricultural, hunting,” etc., and linguistic ones like “inflected,” “synthetic,” or “isolating.” (pp. 138–9)

p. 146

For Whorf, certain scientific disciplines – elsewhere he names “relativity, quantum theory, electronics, catalysis, colloid chemistry, theory of the gene, Gestalt psychology, psychoanalysis, unbiased cultural anthropology, and so on” (1956: 220), as well as non-Euclidean geometry and, of course, descriptive linguistics – were exemplary in that they revealed aspects of the world profoundly at variance with the world as modern Westerners habitually assume it to be, indeed as the members of any human language and social group habitually assume it to be.

Since Whorf was concerned with linguistic and/or conceptual patterns that people almost always follow in everyday life, he has often been read as a determinist. But as John Lucy pointed out (1992a), Whorf’s critiques clearly bore on habitual thinking, what it is easy to think; his ethical goal was to force us, through learning about other languages, other ways of foregrounding and linking aspects of experience, to think in ways that are not so easy, to follow paths that are not so familiar. Whorf’s argument is not fundamentally about constraint, but about the seductive force of habit, of what is “easily expressible by the type of symbolic means that language employs” (“Model,” 1956: 55) and so easy to think. It is not about the limits of a given language or the limits of thought, since Whorf presumes, Boasian that he is, that any language can convey any referential content.

Whorf’s favorite analogy for the relation of language to thought is the same as Sapir’s: that of tracks, paths, roads, ruts, or grooves. Even Whorf’s most determinist-sounding passages, which are also the ones most cited, sound very different if we take the implications of this analogy seriously: “Thinking … follows a network of tracks laid down in the given language, an organization which may concentrate systematically upon certain phases of reality … and may systematically discard others featured by other languages. The individual is utterly unaware of this organization and is constrained completely within its unbreakable bonds” (1956: 256); “we dissect nature along lines laid down by our native languages” (p. 213). But this is from the same essay in which Whorf asserted the universality of “ways of linking experiences … basically alike for all persons”; and this completely constrained individual is evidently the unreflective (utterly unaware) Mr. Everyman (Schultz 1990), and the very choice of the analogy of traced lines or tracks, assuming that they are not railway tracks – that they are not is suggested by all the other road and path metaphors – leaves open the possibility of getting off the path, if only we had the imagination and the gumption to do it. We can cut cross-country. In the study of an exotic language, he wrote, “we are at long last pushed willy-nilly out of our ruts. Then we find that the exotic language is a mirror held up to our own” (1956: 138). How can Whorf be a determinist, how can he see us as forever trapped in these ruts, if the study of another language is sufficient to push us, kicking and screaming perhaps, out of them?

The total picture, then, is not one of constraint or determinism. It is, on the other hand, a model of powerful seduction: the seduction of what is familiar and easy to think, of what is intellectually restful, of what makes common sense.7 The seduction of the habitual pathway, based largely on laziness and fear of the unknown, can, with work, be resisted and broken. Somewhere in the back of Whorf’s mind may have been the allegory of the broad, fair road to Hell and the narrow, difficult path to Heaven beloved of his Puritan forebears. It makes us think of another New England Protestant: “Two roads diverged in a wood, and I, / I took the one less travelled by, / and that has made all the difference.”

The recognition of the seduction of the familiar implies a real ethical program:

It is the “plainest” English which contains the greatest number of unconscious assumptions about nature … Western culture has made, through language, a provisional analysis of reality and, without correctives, holds resolutely to that analysis as final. The only correctives lie in all those other tongues which by aeons of independent evolution have arrived at different, but equally logical, provisional analyses. (1956: 244)

Learning non-Western languages offers a lesson in humility and awe in an enormous multilingual world:

We shall no longer be able to see a few recent dialects of the Indo-European family, and the rationalizing techniques elaborated from their patterns, as the apex of the evolution of the human mind, nor their present wide spread as due to any survival from fitness or to anything but a few events of history – events that could be called fortunate only from the parochial point of view of the favored parties. They, and our own thought processes with them, can no longer be envisioned as spanning the gamut of reason and knowledge but only as one constellation in a galactic expanse. (p. 218)

The breathtaking sense of sudden vaster possibility, of the sky opening up to reveal a bigger sky beyond, may be what provokes such strong reactions to Whorf. For some, he is simply enraging or ridiculous. For others, reading Whorf is a transformative experience, and there are many stories of students coming to anthropology or linguistics largely because of their reading of Whorf (personal communications; Alford 2002).

pp. 167-168

[T]he rise of cognitive science was accompanied by a restating of what came to be called the “Sapir–Whorf hypothesis” in the most extreme terms. Three arguments came to the fore repeatedly:

Determinism. The Sapir–Whorf hypothesis says that the language you speak, and nothing else, determines how you think and perceive. We have already seen how false a characterization this is: the model the Boasians were working from was only deterministic in cases of no effort, of habitual thought or speaking. With enough effort, it is always possible to change your accent or your ideas.

Hermeticism. The Sapir–Whorf hypothesis maintains that each language is a sealed universe, expressing things that are inexpressible in another language. In such a view, translation would be impossible and Whorf’s attempt to render Hopi concepts in English an absurdity. In fact, the Boasians presumed, rather, that languages were not sealed worlds, but that they were to some degree comparable to worlds, and that passing between them required effort and alertness.

Both of these characterizations are used to set up a now classic article on linguistic relativity by the psychologist Eleanor Rosch (1974):

Are we “trapped” by our language into holding a particular “world view”? Can we never really understand or communicate with speakers of a language quite different from our own because each language has molded the thought of its people into mutually incomprehensible world views? Can we never get “beyond” language to experience the world “directly”? Such issues develop from an extreme form of a position sometimes known as “the Whorfian hypothesis” … and called, more generally, the hypothesis of “linguistic relativity.” (Rosch 1974: 95)

Rosch begins the article noting how intuitively right the importance of language differences first seemed to her, then spends much of the rest of it attacking this initial intuition.

Infinite variability. A third common characterization is that Boasian linguistics holds that, in Martin Joos’s words, “languages can differ from each other without limit and in unpredictable ways” (Joos 1966: 96). This would mean that the identification of any language universal would disprove the approach. In fact, the Boasians worked with the universals that were available to them – these were mainly derived from psychology – but opposed what they saw as the unfounded imposition of false universals that in fact reflected only modern Western prejudices. Joos’s hostile formulation has been cited repeatedly as if it were official Boasian doctrine (see Hymes and Fought 1981: 57).

For over fifty years, these three assertions have largely defined the received understanding of linguistic relativity. Anyone who has participated in discussions and/or arguments about the “Whorfian hypothesis” has heard them over and over again.

pp. 169-173

In the 1950s, anthropologists and psychologists were interested in experimentation and the testing of hypotheses on what was taken to be the model of the natural sciences. At a conference on language in culture, Harry Hoijer (1954) first named a Sapir–Whorf hypothesis that language influences thought.

To call something a hypothesis is to propose to test it, presumably using experimental methods. This task was taken on primarily by psychologists. A number of attempts were made to prove or disprove experimentally that language influences thought (see Lucy 1992a: 127–78; P. Brown 2006). Both “language” and “thought” were narrowed down to make them more amenable to experiment: the aspect of language chosen was usually the lexicon, presumably the easiest aspect to control in an experimental setting; thought was interpreted to mean perceptual discrimination and cognitive processing, aspects of thinking that psychologists were comfortable testing for. Eric Lenneberg defined the problem posed by the “Sapir–Whorf hypothesis” as that of “the relationship that a particular language may have to its speakers’ cognitive processes … Does the structure of a given language affect the thoughts (or thought potential), the memory, the perception, the learning ability of those who speak that language?” (1953: 463). Need I recall that Boas, Sapir, and Whorf went out of their way to deny that different languages were likely to be correlated with strengths and weaknesses in cognitive processes, i.e., in what someone is capable of thinking, as opposed to the contents of habitual cognition? […]

Berlin and Kay started by rephrasing Sapir and Whorf as saying that the search for semantic universals was “fruitless in principle” because “each language is semantically arbitrary relative to every other language” (1969: 2; cf. Lucy 1992a: 177–81). If this is what we are calling linguistic relativity, then if any domain of experience, such as color, is identified in recognizably the same way in different languages, linguistic relativity must be wrong. As we have seen, this fits the arguments of Weisgerber and Bloomfield, but not of Sapir or Whorf. […]

A characteristic study was reported recently in my own university’s in-house newspaper under the title “Language and Perception Are Not Connected” (Baril 2004). The article starts by saying that according to the “Whorf–Sapir hypothesis … language determines perception,” and therefore that “we should not be able to distinguish differences among similar tastes if we do not possess words for expressing their nuances, since it is language that constructs the mode of thought and its concepts … According to this hypothesis, every language projects onto its speakers a system of categories through which they see and interpret the world.” The hypothesis, we are told, has been “disconfirmed since the 1970s” by research on color. The article reports on the research of Dominic Charbonneau, a graduate student in psychology. Intrigued by recent French tests in which professional sommeliers, with their elaborate vocabulary, did no better than regular ignoramuses in distinguishing among wines, Charbonneau carried out his own experiment on coffee – this is, after all, a French-speaking university, and we take coffee seriously. Francophone students were asked to distinguish among different coffees; like most of us, they had a minimal vocabulary for distinguishing them (words like “strong,” “smooth,” “dishwater”). The participants made quite fine distinctions among the eighteen coffees served, well above the possible results of chance, showing that taste discrimination does not depend on vocabulary. Conclusion: “Concepts must be independent of language, which once again disconfirms the Sapir–Whorf hypothesis” (my italics). And this of course would be true if there were such a hypothesis, if it was primarily about vocabulary, and if it said that vocabulary determines perception.

We have seen that Bloomfield and his successors in linguistics maintained the unlimited arbitrariness of color classifications, and so could have served as easy straw men for the cognitivist return to universals. But what did Boas, Sapir, Whorf, or Lee actually have to say about color? Did they in fact claim that color perception or recognition or memory was determined by vocabulary? Sapir and Lee are easy: as far as I have been able to ascertain, neither one of them talked about color at all. Steven Pinker attributes a relativist and determinist view of color classifications to Whorf:

Among Whorf’s “kaleidoscopic flux of impressions,” color is surely the most eye-catching. He noted that we see objects in different hues, depending on the wavelengths of the light they reflect, but that the wavelength is a continuous dimension with nothing delineating red, yellow, green, blue, and so on. Languages differ in their inventory of color words … You can fill in the rest of the argument. It is language that puts the frets in the spectrum. (Pinker 1994: 61–2)

No he didn’t. Whorf never noted anything like this in any of his published work, and Pinker gives no indication of having gone through Whorf’s unpublished papers. As far as I can ascertain, Whorf talks about color in two places; in both he is saying the opposite of what Pinker says he is saying.

pp. 187-188

The 1950s through the 1980s saw the progressive triumph of universalist cognitive science. From the 1980s, one saw the concomitant rise of relativistic postmodernism. By the end of the 1980s there had been a massive return to the old split between universalizing natural sciences and their ancillary social sciences on the one hand, particularizing humanities and their ancillary cultural studies on the other. Some things, in the prevailing view, were universal, others so particular as to call for treatment as fiction or anecdote. Nothing in between was of very much interest, and North American anthropology, the discipline that had been founded upon and achieved a sort of identity in crossing the natural-science/humanities divide, faced an identity crisis. Symptomatically, one noticed many scholarly bookstores disappearing their linguistics sections into “cognitive science,” their anthropology sections into “cultural studies.”

In this climate, linguistic relativity was heresy, Whorf, in particular, a kind of incompetent Antichrist. The “Whorfian hypothesis” of linguistic relativism or determinism became a topos of any anthropology textbook, almost inevitably to be shown to be silly. Otherwise serious linguists and psychologists (e.g., Pinker 1994: 59–64) continued to dismiss the idea of linguistic relativity with an alacrity suggesting alarm and felt free to heap posthumous personal vilification on Whorf, the favorite target, for his lack of official credentials, in some really surprising displays of academic snobbery. Geoffrey Pullum, to take only one example, calls him a “Connecticut fire prevention inspector and weekend language-fancier” and “our man from the Hartford Fire Insurance Company” (Pullum 1989 [1991]: 163). This comes from a book with the subtitle Irreverent Essays on the Study of Language. But how irreverent is it to make fun of somebody almost everybody has been attacking for thirty years?

The Language Myth: Why Language Is Not an Instinct
by Vyvyan Evans
pp. 195-198

Who’s afraid of the Big Bad Whorf?

Psychologist Daniel Casasanto has noted, in an article whose title gives this section its heading, that some researchers find Whorf’s principle of linguistic relativity to be threatening. 6 But why is Whorf such a bogeyman for some? And what makes his notion of linguistic relativity such a dangerous idea?

The rationalists fear linguistic relativity – the very idea of it – and they hate it, with a passion: it directly contradicts everything they stand for – if relativism is anywhere near right, then the rationalist house burns down, or collapses, like a tower of cards without a foundation. And this fear and loathing in parts of the Academy can often, paradoxically, be highly irrational indeed. Relativity is often criticised without argumentative support, or ridiculed, just for the audacity of existing as an intellectual idea to begin with. Jerry Fodor, more candid than most about his irrational fear, just hates it. He says: “The thing is: I hate relativism. I hate relativism more than I hate anything else, excepting, maybe, fiberglass powerboats.” 7 Fodor continues, illustrating further his irrational contempt: “surely, surely, no one but a relativist would drive a fiberglass powerboat”. 8

Fodor’s objection is that relativism overlooks what he deems to be “the fixed structure of human nature”. 9 Mentalese provides the fixed structure – as we saw in the previous chapter. If language could interfere with this innate set of concepts, then the fixed structure would no longer be fixed – anathema to a rationalist.

Others are more coy, but no less damning. Pinker’s strategy is to set up straw men, which he then eloquently – but mercilessly – ridicules. 10 But don’t be fooled, there is no serious argument presented – not on this occasion. Pinker takes an untenable and extreme version of what he claims Whorf said, and then pokes fun at it – a common modus operandi employed by those who are afraid. Pinker argues that Whorf was wrong because he equated language with thought: that Whorf assumes that language causes or determines thought in the first place. This is the “conventional absurdity” that Pinker refers to in the first of his quotations above. For Pinker, Whorf was either romantically naïve about the effects of language, or, worse, like the poorly read and ill-educated, credulous.

But this argument is a classic straw man: it is set up to fail, being made of straw. Whorf never claimed that language determined thought. As we shall see, the thesis of linguistic determinism, which nobody believes, and which Whorf explicitly rejected, was attributed to him long after his death. But Pinker has bought into the very myths peddled by the rationalist tradition for which he is cheerleader-in-chief, and which lives in fear of linguistic relativity. In the final analysis, the language-as-instinct crowd should be afraid, very afraid: linguistic relativity, once and for all, explodes the myth of the language-as-instinct thesis.

The rise of the Sapir–Whorf hypothesis

Benjamin Lee Whorf became interested in linguistics in 1924, and studied it, as a hobby, alongside his full-time job as an engineer. In 1931, Whorf began to attend university classes on a part-time basis, studying with one of the leading linguists of the time, Edward Sapir. 11 Amongst other things covered in his teaching, Sapir touched on what he referred to as “relativity of concepts … [and] the relativity of the form of thought which results from linguistic study”. 12 The notion of the relativistic effect of different languages on thought captured Whorf’s imagination; and so he became captivated by the idea that he was to develop and become famous for. Because Whorf’s claims have often been disputed and misrepresented since his death, let’s see exactly what his formulation of his principle of linguistic relativity was:

Users of markedly different grammars are pointed by their grammars toward different types of observations and different evaluations of externally similar acts of observation, and hence are not equivalent as observers but must arrive at somewhat different views of the world. 13

Indeed, as pointed out by the Whorf scholar, Penny Lee, post-war research rarely ever took Whorf’s principle, or his statements, as their starting point. 14 Rather, his writings were, on the contrary, ignored, and his ideas largely distorted. 15

For one thing, the so-called ‘Sapir–Whorf hypothesis’ was not due to either Sapir or Whorf. Sapir – whose research was not primarily concerned with relativity – and Whorf were lumped together: the term ‘Sapir–Whorf hypothesis’ was coined in the 1950s, over ten years after both men had been dead – Sapir died in 1939, and Whorf in 1941. 16 Moreover, Whorf’s principle emanated from an anthropological research tradition; it was not, strictly speaking, a hypothesis. But, in the 1950s, psychologists Eric Lenneberg and Roger Brown sought to test empirically the notion of linguistic relativity. And to do so, they reformulated it in such a way that it could be tested, producing two testable formulations. 17 One, the so-called ‘strong version’ of relativity, holds that language causes a cognitive restructuring: language causes or determines thought. This is otherwise known as linguistic determinism, Pinker’s “conventional absurdity”. The second hypothesis, which came to be known as the ‘weak version’, claims instead that language influences a cognitive restructuring, rather than causing it. But neither formulation of the so-called ‘Sapir–Whorf hypothesis’ was due to Whorf, or Sapir. Indeed, on the issue of linguistic determinism, Whorf was explicit in arguing against it, saying the following:

The tremendous importance of language cannot, in my opinion, be taken to mean necessarily that nothing is back of it of the nature of what has traditionally been called ‘mind’. My own studies suggest, to me, that language, for all its kingly role, is in some sense a superficial embroidery upon deeper processes of consciousness, which are necessary before any communication, signalling, or symbolism whatsoever can occur. 18

This demonstrates that, in point of fact, Whorf actually believed in something like the ‘fixed structure’ that Fodor claims is lacking in relativity. The delicious irony arising from it all is that Pinker derides Whorf on the basis of the ‘strong version’ of the Sapir–Whorf hypothesis: linguistic determinism – language causes thought. But this strong version was a hypothesis not created by Whorf, but imagined by rationalist psychologists who were dead set against Whorf and linguistic relativity anyway. Moreover, Whorf explicitly disagreed with the thesis that was posthumously attributed to him. The issue of linguistic determinism became, incorrectly and disingenuously, associated with Whorf, growing in the rationalist sub-conscious like a cancer – Whorf was clearly wrong, they reasoned.

In more general terms, defenders of the language-as-instinct thesis have taken a leaf out of the casebook of Noam Chomsky. If you thought that academics play nicely, and fight fair, think again. Successful ideas are the currency, and they guarantee tenure, promotion, influence and fame; and they allow the successful academic to attract Ph.D. students who go out and evangelise, and so help to build intellectual empires. The best defence against ideas that threaten is ridicule. And, since the 1950s, until the intervention of John Lucy in the 1990s – whom I discuss below – relativity was largely dismissed; the study of linguistic relativity was, in effect, off-limits to several generations of researchers.

The Bilingual Mind, And What it Tells Us about Language and Thought
by Aneta Pavlenko
pp. 27-32

1.1.2.4 The real authors of the Sapir-Whorf hypothesis and the invisibility of scientific revolutions

The invisibility of bilingualism in the United States also accounts for the disappearance of multilingual awareness from discussions of Sapir’s and Whorf’s work, which occurred when the two scholars passed away – both at a relatively young age – and their ideas landed in the hands of others. The posthumous collections brought Sapir’s (1949) and Whorf’s (1956) insights to the attention of the wider public (including, inter alia, young Thomas Kuhn) and inspired the emergence of the field of psycholinguistics. But the newly minted psycholinguists faced a major problem: it had never occurred to Sapir and Whorf to put forth testable hypotheses. Whorf showed how linguistic patterns could be systematically investigated through the use of overt categories marked systematically (e.g., number in English or gender in Russian) and covert categories marked only in certain contexts (e.g., gender in English), yet neither he nor Sapir ever elaborated the meaning of ‘different observations’ or ‘psychological correlates’.

Throughout the 1950s and 1960s, scholarly debates at conferences, summer seminars and in academic journals attempted to correct this ‘oversight’ and to ‘systematize’ their ideas (Black, 1959; Brown & Lenneberg, 1954; Fishman, 1960; Hoijer, 1954a; Lenneberg, 1953; Osgood & Sebeok, 1954; Trager, 1959). The term ‘the Sapir-Whorf hypothesis’ was first used by linguistic anthropologist Harry Hoijer (1954b) to refer to the idea “that language functions, not simply as a device for reporting experience, but also, and more significantly, as a way of defining experience for its speakers” (p. 93). The study of SWH, in Hoijer’s view, was supposed to focus on structural and semantic patterns active in a given language. This version, probably closest to Whorf’s own interest in linguistic classification, was soon replaced by an alternative, developed by psychologists Roger Brown and Eric Lenneberg, who translated Sapir’s and Whorf’s ideas into two ‘testable’ hypotheses (Brown & Lenneberg, 1954; Lenneberg, 1953). The definitive form of the dichotomy was articulated in Brown’s (1958) book Words and Things:

linguistic relativity holds that where there are differences of language there will also be differences of thought, that language and thought covary. Determinism goes beyond this to require that the prior existence of some language pattern is either necessary or sufficient to produce some thought pattern. (p. 260)

In what follows, I will draw on Kuhn’s ([1962] 2012) insights to discuss four aspects of this radical transformation of Sapir’s and Whorf’s ideas into the SWH: (a) it was a major change of paradigm, that is, of shared assumptions, research foci, and methods, (b) it erased multilingual awareness, (c) it created a false dichotomy, and (d) it proceeded unacknowledged.

The change of paradigm was necessitated by the desire to make complex notions, articulated by linguistic anthropologists, fit experimental paradigms in psychology. Yet ideas don’t travel easily across disciplines: Kuhn ([1962] 2012) compares a dialog between scientific communities to intercultural communication, which requires skillful translation if it is to avoid communication breakdowns. Brown and Lenneberg’s translation was not skillful and while their ideas moved the study of language and cognition forward, they departed from the original arguments in several ways (for discussion, see also Levinson, 2012; Lucy, 1992a; Lee, 1996).

First, they shifted the focus of the inquiry from the effects of obligatory grammatical categories, such as tense, to lexical domains, such as color, that had a rather tenuous relationship to linguistic thought (color differentiation was, in fact, discussed by Boas and Whorf as an ability not influenced by language). Secondly, they shifted from concepts as interpretive categories to cognitive processes, such as perception or memory, that were of little interest to Sapir and Whorf, and proposed to investigate them with artificial stimuli, such as Munsell chips, that hardly reflect habitual thought. Third, they privileged the idea of thought potential (and, by implication, what can be said) over Sapir’s and Whorf’s concerns with obligatory categories and habitual thought (and, by definition, with what is said). Fourth, they missed the insights about the illusory objectivity of one’s own language and replaced the interest in linguistic thought with independent ‘language’ and ‘cognition’. Last, they substituted Humboldt’s, Sapir’s and Whorf’s interest in multilingual awareness with a hypothesis articulated in monolingual terms.

A closer look at Brown’s (1958) book shows that he was fully aware of the existence of bilingualism and of the claims made by bilingual speakers of Native American languages that “thinking is different in the Indian language” (p. 232). His recommendation in this case was to distrust those who have the “unusual” characteristic of being bilingual:

There are few bilinguals, after all, and the testimony of those few cannot be uncritically accepted. There is a familiar inclination on the part of those who possess unusual and arduously obtained experience to exaggerate its remoteness from anything the rest of us know. This must be taken into account when evaluating the impressions of students of Indian languages. In fact, it might be best to translate freely with the Indian languages, assimilating their minds to our own. (Brown, 1958: 233)

The testimony of German–English bilinguals – akin to his own collaborator Eric Heinz Lenneberg – was apparently another matter: the existence of “numerous bilingual persons and countless translated documents” was, for Brown (1958: 232), compelling evidence that the German mind is “very like our own”. Alas, Brown’s (1958) contradictory treatment of bilingualism and the monolingual arrogance of the recommendations ‘to translate freely’ and ‘to assimilate Indian minds to our own’ went unnoticed by his colleagues. The result was the transformation of a fluid and dynamic account of language into a rigid, static false dichotomy.

When we look back, the attribution of the idea of linguistic determinism to multilinguals interested in language evolution and the evolution of the human mind makes little sense. Yet the replacement of the open-ended questions about implications of linguistic diversity with two ‘testable’ hypotheses had a major advantage – it was easier to argue about and to digest. And it was welcomed by scholars who, like Kay and Kempton (1984), applauded the translation of Sapir’s and Whorf’s convoluted passages into direct prose and felt that Brown and Lenneberg “really said all that was necessary” (p. 66) and that the question of what Sapir and Whorf actually thought was interesting but “after all less important than the issue of what is the case” (p. 77). In fact, by the 1980s, Kay and Kempton were among the few who could still trace the transformation to the two psychologists. Their colleagues were largely unaware of it because Brown and Lenneberg concealed the radical nature of their reformulation by giving Sapir and Whorf ‘credit’ for what should have been the Brown-Lenneberg hypothesis.

We might never know what prompted this unusual scholarly modesty – a sincere belief that they were simply ‘improving’ Sapir and Whorf or the desire to distance themselves from the hypothesis articulated only to be ‘disproved’. For Kuhn ([1962] 2012), this is science as usual: “it is just this sort of change in the formulation of questions and answers that accounts, far more than novel empirical discoveries, for the transition from Aristotelian to Galilean and from Galilean to Newtonian dynamics” (p. 139). He also points to the hidden nature of many scientific revolutions concealed by textbooks that provide the substitute for what they had eliminated and make scientific development look linear, truncating the scientists’ knowledge of the history of their discipline. This is precisely what happened with the SWH: the newly minted hypothesis took on a life of its own, multiplying and reproducing itself in myriads of textbooks, articles, lectures, and popular media, and moving the discussion further and further away from Sapir’s primary interest in ‘social reality’ and Whorf’s central concern with ‘habitual thought’.

The transformation was facilitated by four common academic practices that allow us to manage the ever-increasing amount of literature in the ever-decreasing amount of time: (a) simplification of complex arguments (which often results in misinterpretation); (b) reduction of original texts to standard quotes; (c) reliance on other people’s exegeses; and (d) uncritical reproduction of received knowledge. The very frequency of this reproduction made the SWH a ‘fact on the ground’, accepted as a valid substitution for the original ideas. The new terms of engagement became part of habitual thought in the Ivory Tower and to this day are considered obligatory by many academics who begin their disquisitions on linguistic relativity with a nod towards the sound-bite version of the ‘strong’ determinism and ‘weak’ relativity. In Kuhn’s ([1962] 2012) view, this perpetuation of a new set of shared assumptions is a key marker of a successful paradigm change: “When the individual scientist can take a paradigm for granted, he need no longer, in his major works, attempt to build his field anew, starting from first principles and justifying the use of each concept introduced” (p. 20).

Yet the false dichotomy reified in the SWH – and the affective framing of one hypothesis as strong and the other as weak – moved the goalposts and reset the target and the standards needed to achieve it, giving scholars a clear indication of which hypothesis they should address. This preference, too, was perpetuated by countless researchers who, like Langacker (1976: 308), dismissed the ‘weak’ version as obviously true but uninteresting and extolled ‘the strongest’ as “the most interesting version of the LRH” but also as “obviously false”. And indeed, the research conducted on Brown’s and Lenneberg’s terms failed to ‘prove’ linguistic determinism and instead revealed ‘minor’ language effects on cognition (e.g., Brown & Lenneberg, 1954; Lenneberg, 1953) or no effects at all (Heider, 1972). The studies by Gipper (1976) 4 and Malotki (1983) showed that even Whorf’s core claims, about the concept of time in Hopi, may have been misguided. 5 This ‘failure’ too became part of the SWH lore, with textbooks firmly stating that “a strong version of the Whorfian hypothesis cannot be true” (Foss & Hakes, 1978: 393).

By the 1980s, there emerged an implicit consensus in US academia that Whorfianism was “a bête noire, identified with scholarly irresponsibility, fuzzy thinking, lack of rigor, and even immorality” (Lakoff, 1987: 304). This consensus was shaped by the political climate supportive of the notion of ‘free thought’ yet hostile to linguistic diversity, by educational policies that reinforced monolingualism, and by the rise of cognitive science and meaning-free linguistics that replaced the study of meaning with the focus on structures and universals. Yet the implications of Sapir’s and Whorf’s ideas continued to be debated (e.g., Fishman, 1980, 1982; Kay & Kempton, 1984; Lakoff, 1987; Lucy & Shweder, 1979; McCormack & Wurm, 1977; Pinxten, 1976) and in the early 1990s the inimitable Pinker decided to put the specter of the SWH to bed once and for all. Performing a feat reminiscent of Humpty Dumpty, Pinker (1994) made the SWH ‘mean’ what he wanted it to mean, namely “the idea that thought is the same thing as language” (p. 57). Leaving behind Brown’s (1958) articulation with its modest co-variation, he replaced it in the minds of countless undergraduates with

the famous Sapir-Whorf hypothesis of linguistic determinism, stating that people’s thoughts are determined by the categories made available by their language, and its weaker version, linguistic relativity, stating that differences among languages cause differences in the thoughts of their speakers. (Pinker, 1994: 57)

And lest they still thought that there is something to it, Pinker (1994) told them that it is “an example of what can be called a conventional absurdity” (p. 57) and “it is wrong, all wrong” (p. 57). Ironically, this ‘obituary’ for the SWH coincided with the neo-Whorfian revival, through the efforts of several linguists, psychologists, and anthropologists – most notably Gumperz and Levinson (1996), Lakoff (1987), Lee (1996), Lucy (1992a, b), and Slobin (1991, 1996a) – who were willing to buck the tide, to engage with the original texts, and to devise new methods of inquiry. This work will form the core of the chapters to come but for now I want to emphasize that the received belief in the validity of the terms of engagement articulated by Brown and Lenneberg and their attribution to Sapir and Whorf is still pervasive in many academic circles and evident in the numerous books and articles that regurgitate the SWH as the strong/weak dichotomy. The vulgarization of Whorf’s views bemoaned by Fishman (1982) also continues in popular accounts, and I fully agree with Pullum (1991) who, in his own critique of Whorf, noted:

Once the public has decided to accept something as an interesting fact, it becomes almost impossible to get the acceptance rescinded. The persistent interestingness and symbolic usefulness overrides any lack of factuality. (p. 159)

Popularizers of academic work continue to stigmatize Whorf through comments such as “anyone can estimate the time of day, even the Hopi Indians; these people were once attributed with a lack of any conception of time by a book-bound scholar, who had never met them” (Richards, 1998: 44). Even respectable linguists perpetuate the strawman version of “extreme relativism – the idea that there are no facts common to all cultures and languages” (Everett, 2012: 201) or make cheap shots at “the most notorious of the con men, Benjamin Lee Whorf, who seduced a whole generation into believing, without a shred of evidence, that American Indian languages lead their speakers to an entirely different conception of reality from ours” (Deutscher, 2010: 21). This assertion is then followed by a statement that while the link between language, culture, and cognition “seems perfectly kosher in theory, in practice the mere whiff of the subject today makes most linguists, psychologists, and anthropologists recoil” because the topic “carries with it a baggage of intellectual history which is so disgraceful that the mere suspicion of association with it can immediately brand anyone a fraud” (Deutscher, 2010: 21).

Such comments are not just an innocent rhetorical strategy aimed at selling more copies: the uses of hyperbole (most linguists, psychologists, and anthropologists; mere suspicion of association), affect (disgraceful, fraud, recoil, embarrassment), misrepresentation (disgraceful baggage of intellectual history), strawman arguments and reductio ad absurdum as a means of persuasion have played a major role in manufacturing the false consent in the history of ideas that Deutscher (2010) finds so ‘disgraceful’ (readers interested in the dirty tricks used by scholars should read the expert description by Pinker, 2007: 89–90). What is particularly interesting is that both Deutscher (2010) and Everett (2012) actually marshal evidence in support of Whorf’s original arguments. Their attempt to do so while distancing themselves from Whorf would have fascinated Whorf, for it reveals two patterns of habitual thought common in English-language academia: the uncritical adoption of the received version of the SWH and the reliance on the metaphor of ‘argument as war’ (Tannen, 1998), i.e., an assumption that each argument has ‘two sides’ (not one or three), that these sides should be polarized in either/or terms, and that in order to present oneself as a ‘reasonable’ author, one should exaggerate the alternatives and then occupy the ‘rational’ position in between. Add to this the reductionism common for trade books and the knowledge that criticism sells better than praise, and you get Whorf as a ‘con man’.

Dark Matter of the Mind
by Daniel L. Everett
Kindle Locations 352-373

I am here particularly concerned with difference, however, rather than sameness among the members of our species— with variation rather than homeostasis. This is because the variability in dark matter from one society to another is fundamental to human survival, arising from and sustaining our species’ ecological diversity. The range of possibilities produces a variety of “human natures” (cf. Ehrlich 2001). Crucial to the perspective here is the concept-apperception continuum. Concepts can always be made explicit; apperceptions less so. The latter result from a culturally guided experiential memory (whether conscious or unconscious or bodily). Such memories can be not only difficult to talk about but often ineffable (see Majid and Levinson 2011; Levinson and Majid 2014). Yet both apperception and conceptual knowledge are uniquely determined by culture, personal history, and physiology, contributing vitally to the formation of the individual psyche and body.

Dark matter emerges from individuals living in cultures and thereby underscores the flexibility of the human brain. Instincts are incompatible with flexibility. Thus special care must be given to evaluating arguments in support of them (see Blumberg 2006 for cogent criticisms of many purported examples of instincts, as well as the abuse of the term in the literature). If we have an instinct to do something one way, this would impede learning to do it another way. For this reason it would surprise me if creatures higher on the mental and cerebral evolutionary scale— you and I, for example— did not have fewer rather than more instincts. Humans, unlike cockroaches and rats— two other highly successful members of the animal kingdom— adapt holistically to the world in which they live, in the sense that they can learn to solve problems across environmental niches, then teach their solutions and reflect on these solutions. Cultures turn out to be vital to this human adaptational flexibility— so much so that the most important cognitive question becomes not “What is in the brain?” but “What is the brain in?” (That is, in what individual, residing in what culture does this particular brain reside?)

The brain, by this view, was designed to be as close to a blank slate as was possible for survival. In other words, the views of Aristotle, Sapir, Locke, Hume, and others better fit what we know about the nature of the brain and human evolution than the views of Plato, Bastian, Freud, Chomsky, Tooby, Pinker, and others. Aristotle’s tabula rasa seems closer to being right than is currently fashionable to suppose, especially when we answer the pointed question, what is left in the mind/ brain when culture is removed?

Most of the lessons of this book derive from the idea that our brains (including our emotions) and our cultures are related symbiotically through the individual, and that neither supervenes on the other. In this framework, nativist ideas often are superfluous.

Kindle Locations 3117-3212

Science, we might say, ought to be exempt from dark matter. Yet that is much harder to claim than to demonstrate. […] To take a concrete example of a science, we focus on linguistics, because this discipline straddles the borders between the sciences, humanities, and social sciences. The basic idea to be explored is this: because counterexamples and exceptions are culturally determined in linguistics, as in all sciences, scientific progress is the output of cultural values. These values differ even within the same discipline (e.g., linguistics), however, and can lead to different notions of progress in science. To mitigate this problem, therefore, to return to linguistics research as our primary example, our inquiry should be informed by multiple theories, with a focus on languageS rather than Language. To generalize, this would mean a focus on the particular rather than the general in many cases. Such a focus (in spite of the contrast between this and many scientists’ view that generalizations are the goal of science) develops a robust empirical basis while helping to distinguish local theoretical culture from broader, transculturally agreed-upon desiderata of science— an issue that theories of language, in a way arguably more extreme than in other disciplines, struggle to tease apart.

The reason that a discussion of science and dark matter is important here is to probe the significance and meaning of dark matter, culture, and psychology in the more comfortable, familiar territory of the reader, to understand that what we are contemplating here is not limited to cultures unlike our own, but affects every person, every endeavor of Homo sapiens, even the hallowed enterprise of science. This is not to say that science is merely a cultural illusion. This chapter has nothing to do with postmodernist epistemological relativity. But it does aim to show that science is not “pure rationality,” autonomous from its cultural matrix. […]

Whether we classify an anomaly as counterexample or exception depends on our dark matter— our personal history plus cultural values, roles, and knowledge structures. And the consequences of our classification are also determined by culture and dark matter. Thus, by social consensus, exceptions fall outside the scope of the statements of a theory or are explicitly acknowledged by the theory to be “problems” or “mysteries.” They are not immediate problems for the theory. Counterexamples, on the other hand, by social consensus render a statement false. They are immediately acknowledged as (at least potential) problems for any theory. Once again, counterexamples and exceptions are the same etically, though they are nearly polar opposites emically. Each is defined relative to a specific theoretical tradition, a specific set of values, knowledge structures, and roles— that is, a particular culture.

One bias that operates in theories, the confirmation bias, is the cultural value that a theory is true and therefore that experiments are going to strengthen it, confirm it, but not falsify it. Anomalies appearing in experiments conducted by adherents of a particular theory are much more likely to be interpreted as exceptions that might require some adjustments of the instruments, but nothing serious in terms of the foundational assumptions of the theory. On the other hand, when anomalies turn up in experiments by opponents of a theory, there will be a natural bias to interpret these as counterexamples that should lead to the abandonment of the theory. Other values that can come into play for the cultural/theoretical classification of an anomaly as a counterexample or an exception include “tolerance for cognitive dissonance,” a value of the theory that says “maintain that the theory is right and, at least temporarily, set aside problematic facts,” assuming that they will find a solution after the passage of a bit of time. Some theoreticians call this tolerance “Galilean science”— the willingness to set aside all problematic data because a theory seems right. Fair enough. But when, why, and for how long a theory seems right in the face of counterexamples is a cultural decision, not one that is based on facts alone. We have seen that the facts of a counterexample and an exception can be exactly the same. Part of the issue of course is that data, like their interpretations, are subject to emicization. We decide to see data with a meaning, ignoring the particular variations that some other theory might seize on as crucial. In linguistics, for example, if a theory (e.g., Chomskyan theory) says that all relevant grammatical facts stop at the boundary of the sentence, then related facts at the level of paragraphs, stories, and so on, are overlooked.

The cultural and dark matter forces determining the interpretation of anomalies in the data that lead one to abandon a theory and another to maintain it themselves create new social situations that confound the intellect and the sense of morality that often is associated with the practice of a particular theory. William James (1907, 198) summed up some of the reactions to his own work, as evidence of these reactions to the larger field of intellectual endeavors: “I fully expect to see the pragmatist view of truth run through the classic stages of a theory’s career. First, you know, a new theory is attacked as absurd; then it is admitted to be true, but obvious and insignificant; finally it is seen to be so important that its adversaries claim that they themselves discovered it.”

In recent years, due to my research and claims regarding the grammar of the Amazonian Pirahã— that this language lacks recursion— I have been called a charlatan and a dull wit who has misunderstood. It has been (somewhat inconsistently) further claimed that my results are predicted (Chomsky 2010, 2014); it has been claimed that an alternative notion of recursion, Merge, was what the authors had in mind in saying that recursion is the foundation of human languages; and so on. And my results have been claimed to be irrelevant.

* * *

Beyond Our Present Knowledge
Useful Fictions Becoming Less Useful
Essentialism On the Decline
Is the Tide Starting to Turn on Genetics and Culture?
Blue on Blue
The Chomsky Problem
Dark Matter of the Mind
What is the Blank Slate of the Mind?
Cultural Body-Mind
How Universal Is The Mind?
The Psychology and Anthropology of Consciousness
On Truth and Bullshit

The Mind in the Body

“[In the Old Testament], human faculties and bodily organs enjoy a measure of independence that is simply difficult to grasp today without dismissing it as merely poetic speech or, even worse, ‘primitive thinking.’ […] In short, the biblical character presents itself to us more as parts than as a whole”
(Robert A. Di Vito, “Old Testament Anthropology and the Construction of Personal Identity”, pp. 227-228)

The Axial Age was a transitional stage following the collapse of the Bronze Age civilizations. And in that transition, new mindsets mixed with old: what came before tried to contain the rupture, while what was forming was not yet fully born. Writing, texts, and laws were replacing voices gone quiet and silent. Ancient forms of authorization were no longer as viscerally real and psychologically compelling. But the transition period was long and slow, and in many ways continues to this day (e.g., authoritarianism as vestigial bicameralism).

One aspect was the changing sense of identity, as experienced within the body and the world. But let me take a step back. In hunter-gatherer societies, there is the common attribute of animism, in which the world is alive with voices, and along with this a sense of identity that, involving sensory immersion not limited to the body, extends into the surrounding environment. The bicameral mind seems to have been a reworking of this mentality for the emerging agricultural villages and city-states. Instead of the body as part of the natural environment, there was the body politic, with the community as a coherent whole, a living organism. Without the metaphorical framing of inside and outside as the crux of identity that would later develop, self and other were defined by permeable collectivism rather than rigid individualism (the bundle theory of mind taken to the extreme of a bundle theory of society).

In the late Bronze Age, large and expansive theocratic hierarchies formed. Writing took on an increasingly important role. All of this combined to make the bicameral order precarious. The act of writing and reading texts was still integrated with voice-hearing traditions, a text being the literal ‘word’ of a god, spirit, or ancestor. But the voices being written down began the process of creating psychological distance, the text itself beginning to take on authority of its own. This became a competing metaphorical framing, that of truth and reality as text.

This transformed the perception of the body. The voices became harder to decipher. Hearing a voice of authority speak to you required little interpretation, but a text emphasized the need for interpretation. Reading became a way of thinking about the world and about one’s way of being in the world. Divination and similar practices were attempts to read the world. Clouds or lightning, the flight of birds or the organs of a sacrificial animal — these were texts to be read.

Likewise, the body became a repository of voices, although initially not quite a unitary whole. Different aspects of self and spirits, different energies and forces were located and contained in various organs and body parts — to the extent that they had minds of their own, a potentially distressing condition sometimes interpreted as possession. As the bicameral community was a body politic, the post-bicameral body initiated the internalization of community. But this body as community didn’t at first have a clear egoic ruler — a need that grew stronger as external authorization further weakened. Eventually, it became necessary to locate the ruling self in a particular place within, such as the heart or throat or head. This was a forceful suppression of the many voices and hence a disallowing of the perception of self as community. The narrative of individuality began to be told.

Even today, we go on looking for a voice in some particular location. Noam Chomsky’s theory of a language organ is an example of this. We struggle for authorization within consciousness, as the ancient grounding of authorization in the world and in community has been lost, cast into the shadows.

Still, dissociation having taken hold, the voices never disappear and they continue to demand being heard, if only as symptoms of physical and psychological disease. Or else we let the thousand voices of media tell us how to think and what to do. Ultimately, trying to contain authorization within us is impossible and so authorization spills back out into the world, the return of the repressed. Our sense of individualism is much more of a superficial rationalization than we’d like to admit. The social nature of our humanity can’t be denied.

As with post-bicameral humanity, we are still trying to navigate this complex and confounding social reality. Maybe that is why Axial Age religions, in first articulating the dilemma of conscious individuality, remain compelling in what they taught. The Axial Age prophets gave voice to our own ambivalence, and maybe that is what gives the ego such power over us. We moderns haven’t become disconnected and dissociated merely because of some recent affliction — such a state of mind is what we inherited, as the foundation of our civilization.

* * *

“Therefore when thou doest thine alms, do not sound a trumpet before thee, as the hypocrites do in the synagogues and in the streets, that they may have glory of men. Verily I say unto you, They have their reward. But when thou doest alms, let not thy left hand know what thy right hand doeth: That thine alms may be in secret: and thy Father which seeth in secret himself shall reward thee openly.” (Matthew 6:2-4)

“Wherefore if thy hand or thy foot offend thee, cut them off, and cast them from thee: it is better for thee to enter into life halt or maimed, rather than having two hands or two feet to be cast into everlasting fire. And if thine eye offend thee, pluck it out, and cast it from thee: it is better for thee to enter into life with one eye, rather than having two eyes to be cast into hell fire.” (Matthew 18:8-9)

The Prince of Medicine
by Susan P. Mattern
pp. 232-233

He mentions speaking with many women who described themselves as “hysterical,” that is, having an illness caused, as they believed, by a condition of the uterus (hystera in Greek) whose symptoms varied from muscle contractions to lethargy to nearly complete asphyxia (Loc. Affect. 6.5, 8.414K). Galen, very aware of Herophilus’s discovery of the broad ligaments anchoring the uterus to the pelvis, denied that the uterus wandered around the body like an animal wreaking havoc (the Hippocratics imagined a very actively mobile womb). But the uterus could, in his view, become withdrawn in some direction or inflamed; and in one passage he recommends the ancient practice of fumigating the vagina with sweet-smelling odors to attract the uterus, endowed in this view with senses and desires of its own, to its proper place; this technique is described in the Hippocratic Corpus but also evokes folk or shamanistic medicine.

“Between the Dream and Reality”:
Divination in the Novels of Cormac McCarthy

by Robert A. Kottage
pp. 50-52

A definition of haruspicy is in order. Known to the ancient Romans as the Etrusca disciplina or “Etruscan art” (P.B. Ellis 221), haruspicy originally included all three types of divination practiced by the Etruscan hierophant: interpretation of fulgura (lightnings), of monstra (birth defects and unusual meteorological occurrences), and of exta (internal organs) (Hammond). Of these, the practice still commonly associated with the term is the examination of organs, as evidenced by its OED definition: “The practice or function of a haruspex; divination by inspection of the entrails of victims” (“haruspicy”). A detailed science of liver divination developed in the ancient world, and instructional bronze liver models formed by the Etruscans—as well as those made by their predecessors the Hittites and Babylonians—have survived (Hammond). Any unusual features were noted and interpreted by those trained in the esoteric art: “Significant for the exta were the size, shape, colour, and markings of the vital organs, especially the livers and gall bladders of sheep, changes in which were believed by many races to arise supernaturally… and to be susceptible of interpretation by established rules” (Hammond). Julian Jaynes, in his book The Origin of Consciousness in the Breakdown of the Bicameral Mind, comments on the unique quality of haruspicy as a form of divination, arriving as it did at the dawn of written language: “Extispicy [divining through exta] differs from other methods in that the metaphrand is explicitly not the speech or actions of the gods, but their writing. The baru [Babylonian priest] first addressed the gods… with requests that they ‘write’ their message upon the entrails of the animal” (Jaynes 243). Jaynes also remarks that organs found to contain messages of import would sometimes be sent to kings, like letters from the gods (Jaynes 244). Primitive man sought (and found) meaning everywhere.

The logic behind the belief was simple: the whole universe is a single, harmonious organism, with the thoughts and intentions of the intangible gods reflected in the tangible world. For those illiterate to such portents, a lightning bolt or the birth of a hermaphrodite would have been untranslatable; but for those with proper training, the cosmos was as alive with signs as any language:

The Babylonians believed that the decisions of their gods, like those of their kings, were arbitrary, but that mankind could at least guess their will. Any event on earth, even a trivial one, could reflect or foreshadow the intentions of the gods because the universe is a living organism, a whole, and what happens in one part of it might be caused by a happening in some distant part. Here we see a germ of the theory of cosmic sympathy formulated by Posidonius. (Luck 230)

This view of the capricious gods behaving like human kings is reminiscent of the evil archons of gnosticism; however, unlike gnosticism, the notion of cosmic sympathy implies an illuminated and vastly “readable” world, even in the darkness of matter. The Greeks viewed pneuma as “the substance that penetrates and unifies all things. In fact, this tension holds bodies together, and every coherent thing would collapse without it” (Lawrence)—a notion that diverges from the gnostic idea of pneuma as spiritual light temporarily trapped in the pall of physicality.

Proper vision, then, is central to all the offices of the haruspex. The world cooperates with the seer by being illuminated, readable.

p. 160

Jaynes establishes the important distinction between the modern notion of chance commonly associated with coin flipping and the attitude of the ancient Mesopotamians toward sortilege:

We are so used to the huge variety of games of chance, of throwing dice, roulette wheels, etc., all of them vestiges of this ancient practice of divination by lots, that we find it difficult to really appreciate the significance of this practice historically. It is a help here to realize that there was no concept of chance whatever until very recent times…. [B]ecause there was no chance, the result had to be caused by the gods whose intentions were being divined. (Jaynes 240)

In a world devoid of luck, proper divination is simply a matter of decoding the signs—bad readings are never the fault of the gods, but can only stem from the reader.

The Consciousness of John’s Gospel
A Prolegomenon to a Jaynesian-Jamesonian Approach

by Jonathan Bernier

When reading the prologue’s historical passages, one notes a central theme: the Baptist witnesses to the light coming into the world. Put otherwise, the historical witnesses to the cosmological. This, I suggest, can be understood as an example of what Jaynes (1976: 317–338) calls ‘the quest for authorization.’ As the bicameral mind broke down, as exteriorised thought ascribed to other-worldly agents gave way to interiorised thought ascribed to oneself, as the voices of the gods spoke less frequently, people sought out new means, extrinsic to themselves, by which to authorise belief and practice; they quite literally did not trust themselves. They turned to oracles and prophets, to auguries and haruspices, to ecstatics and ecstasy. Proclamatory prophecy of the sort practiced by John the Baptist should be understood in terms of the bicameral mind: the Lord God of Israel, external to the Baptist, issued imperatives to the Baptist, and then the Baptist, external to his audience, relayed those divine imperatives to his listeners. Those who chose to follow the Baptist’s imperatives operated according to the logic of the bicameral mind, as described by Jaynes (1976: 84–99): the divine voice speaks, therefore I act. That voice just happens now to be mediated through the prophet, and not apprehended directly in the way that the bicameral mind apprehended the voices and visions. The Baptist as witness to God’s words and Word is the Baptist as bicameral vestige.

By way of contrast, the Word-become-flesh can be articulated in terms of the bicameral mind giving way to consciousness. The Jesus of the prologue represents the apogee of interiorised consciousness: the Word is not just inside him, but he in fact is the Word. 1:17 draws attention to an implication consequent to this indwelling of the Word: with the divine Word – and thus also the divine words – dwelling fully within oneself, what need is there for that set of exteriorised thoughts known as the Mosaic Law? […]

[O]ne notes Jaynes’ (1976: 301, 318) suggestion that the Mosaic Law represents a sort of half-way house between bicameral exteriority and conscious interiority: no longer able to hear the voices, the ancient Israelites sought external authorisation in the written word; eventually, however, as the Jewish people became increasingly acclimated to conscious interiority, they became increasingly ambivalent towards the need for and role of such exteriorised authorisation. Jaynes (1976: 318) highlights Jesus’ place in this emerging ambivalence; however, in 1:17 it is not so much that exteriorised authorisation is displaced by interiorised consciousness but that Torah as exteriorised authority is replaced by Jesus as exteriorised authority. Jesus, the fully conscious Word-made-flesh, might displace the Law, but it is not altogether clear that he offers his followers a full turn towards interiorised consciousness; one might, rather, read 1:17 as a bicameral attempt to re-contain the cognitive revolution of which Jaynes considers Jesus to be a flag-bearer.

The Discovery of the Mind
by Bruno Snell
pp. 6-8

We find it difficult to conceive of a mentality which made no provision for the body as such. Among the early expressions designating what was later rendered as soma or ‘body’, only the plurals γυῖα, μέλεα, etc. refer to the physical nature of the body; for chros is merely the limit of the body, and demas represents the frame, the structure, and occurs only in the accusative of specification. As it is, early Greek art actually corroborates our impression that the physical body of man was comprehended, not as a unit but as an aggregate. Not until the classical art of the fifth century do we find attempts to depict the body as an organic unit whose parts are mutually correlated. In the preceding period the body is a mere construct of independent parts variously put together.6 It must not be thought, however, that the pictures of human beings from the time of Homer are like the primitive drawings to which our children have accustomed us, though they too simply add limb to limb.

Our children usually represent the human shape as shown in fig. 1, whereas fig. 2 reproduces the Greek concept as found on the vases of the geometric period. Our children first draw a body as the central and most important part of their design; then they add the head, the arms and the legs. The geometric figures, on the other hand, lack this central part; they are nothing but μέλεα καὶ γυῖα, i.e. limbs with strong muscles, separated from each other by means of exaggerated joints. This difference is of course partially dependent upon the clothes they wore, but even after we have made due allowance for this the fact remains that the Greeks of this early period seem to have seen in a strangely ‘articulated’ way. In their eyes the individual limbs are clearly distinguished from each other, and the joints are, for the sake of emphasis, presented as extraordinarily thin, while the fleshy parts are made to bulge just as unrealistically. The early Greek drawing seeks to demonstrate the agility of the human figure, the drawing of the modern child its compactness and unity.

Thus the early Greeks did not, either in their language or in the visual arts, grasp the body as a unit. The phenomenon is the same as with the verbs denoting sight; in the latter, the activity is at first understood in terms of its conspicuous modes, of the various attitudes and sentiments connected with it, and it is a long time before speech begins to address itself to the essential function of this activity. It seems, then, as if language aims progressively to express the essence of an act, but is at first unable to comprehend it because it is a function, and as such neither tangibly apparent nor associated with certain unambiguous emotions. As soon, however, as it is recognized and has received a name, it has come into existence, and the knowledge of its existence quickly becomes common property. Concerning the body, the chain of events may have been somewhat like this: in the early period a speaker, when faced by another person, was apparently satisfied to call out his name: this is Achilles, or to say: this is a man. As a next step, the most conspicuous elements of his appearance are described, namely his limbs as existing side by side; their functional correlation is not apprehended in its full importance until somewhat later. True enough, the function is a concrete fact, but its objective existence does not manifest itself so clearly as the presence of the individual corporeal limbs, and its prior significance escapes even the owner of the limbs himself. With the discovery of this hidden unity, of course, it is at once appreciated as an immediate and self-explanatory truth.

This objective truth, it must be admitted, does not exist for man until it is seen and known and designated by a word; until, thereby, it has become an object of thought. Of course the Homeric man had a body exactly like the later Greeks, but he did not know it qua body, but merely as the sum total of his limbs. This is another way of saying that the Homeric Greeks did not yet have a body in the modern sense of the word; body, soma, is a later interpretation of what was originally comprehended as μέλη or γυῖα, i.e. as limbs. Again and again Homer speaks of fleet legs, of knees in speedy motion, of sinewy arms; it is in these limbs, immediately evident as they are to his eyes, that he locates the secret of life.7

Hebrew and Buddhist Selves:
A Constructive Postmodern Study

by Nicholas F. Gier

Finally, at least two biblical scholars–in response to the question “What good is this pre-modern self?”–have suggested that the Hebrew view (we add the Buddhist and the Chinese) can be used to counterbalance the dysfunctional elements of modern selfhood. Both Robert Di Vito and Jacqueline Lapsley have called this move “postmodern,” based, as they contend, on the concept of intersubjectivity.[3] In his interpretation of Charles S. Peirce as a constructive postmodern thinker, Peter Ochs observes that Peirce reaffirms the Hebraic view that relationality is knowledge at its most basic level.  As Ochs states: “Peirce did not read Hebrew, but the ancient Israelite term for ‘knowledge’–yidiah–may convey Peirce’s claim better than any term he used.  For the biblical authors, ‘to know’ is ‘to have intercourse with’–with the world, with one’s spouse, with God.”[4]

The view that the self is self-sufficient and self-contained is a seductive abstraction that contradicts the very facts of our interdependent existence.  Modern social atomism was most likely the result of modeling the self on an immutable transcendent deity (more Greek than biblical) and/or the inert isolated atom of modern science. […]

It is surprising to discover that the Buddhist skandhas are more mental in character, while the Hebrew self is more material in very concrete ways.  For example, the Psalmist says that “all my inner parts (=heart-mind) bless God’s holy name” (103.1); his kidneys (=conscience) chastise him (16.7); and broken bones rejoice (16:7).  Hebrew bones offer us the most dramatic example of a view of human essence most contrary to Christian theology.  One’s essential core is not immaterial and invisible; rather, it is one’s bones, the most enduring remnant of a person’s being.  When the nepeš “rejoices in the Lord” at Ps. 35.9, the poet, in typical parallel fashion, then has the bones speak for her in v. 10.  Jeremiah describes his passion for Yahweh as a “fire” in his heart (lēb) that is also in his bones (20.9), just as we say that a great orator has “fire in his belly.” The bones of the exiles will form the foundation of those who will be restored by Yahweh’s rûaḥ in Ezekiel 37, and later Pharisaic Judaism speaks of the bones of the deceased “sprouting” with new life in their resurrected bodies.[7]  The bones of the prophet Elisha have special healing powers (2 Kgs. 13.21).  Therefore, the cult of relic bones does indeed have scriptural basis, and we also note the obvious parallel to the worship of the Buddha’s bones.

With all these body parts functioning in various ways, it is hard to find, as Robert A. Di Vito suggests, “a true ‘center’ for the [Hebrew] person . . . a ‘consciousness’ or a self-contained ‘self.’”[8] Di Vito also observes that the Hebrew word for face (pānîm) is plural, reflecting all the ways in which a person appears in multifarious social interactions.  The plurality of faces in Chinese culture is similar, including the “loss of face” when a younger brother fails to defer to his elder brother, who would have a different “face” with respect to his father.  One may be tempted to say that the jīva is the center of the Buddhist self, but that would not be accurate because this term simply designates the functioning of all the skandhas together.

Both David Kalupahana and Peter Harvey demonstrate how much influence material form (rūpa) has on Buddhist personality, even at the highest stage of spiritual development.[9]  It is Zen Buddhists, however, who match the earthy Hebrew rhetoric about the human person. When Bodhidharma (d. 534 CE) prepared to depart from his body, he asked four of his disciples what they had learned from him.  As each of them answered they were offered a part of his body: his skin, his flesh, his bones, and his marrow.  The Zen monk Nangaku also compared the achievements of his six disciples to six parts of his body. Deliberately inverting the usual priority of mind over body, the Zen monk Dogen (1200-1253) declared that “The Buddha Way is therefore to be attained above all through the body.”[10]  Interestingly enough, the Hebrews rank the flesh, skin, bones, and sinews as the most essential parts of the body-soul.[11]  The great Buddhist dialectician Nagarjuna (2nd century CE) appears to be the source of Bodhidharma’s body correlates, but it is clear that Nagarjuna meant them as metaphors.[12]  In contrast it seems clear that, although dead bones rejoicing is most likely a figure of speech, the Hebrews were convinced that we think, feel, and perceive through and with all parts of our bodies.

In Search of a Christian Identity
by Robert Hamilton

The essential points here are the “social disengagement” of the modern self, away from identifying solely with roles defined by the family group, and the development of a “personal unity” within the individual. Morally speaking, we are no longer empty vessels to be filled up by some god, or servant of god; we are now responsible for our own actions and decisions, in light of our own moral compass. I would like to mention Julian Jaynes’s seminal work, The Origin of Consciousness in the Breakdown of the Bicameral Mind, as a pertinent hypothesis for an attempt to understand the enormous distance between the modern sense of self and that of the ancient mind, with its largely absent subjective state.[13]

“The preposterous hypothesis we have come to in the previous chapter is that at one time human nature was split in two, an executive part called a god, and a follower part called a man.”[14]

This hypothesis sits very well with Di Vito’s description of the permeable personal identity of Old Testament characters, who are “taken over,” or possessed, by Yahweh.[15] The evidence of the Old Testament stories points in this direction, where we have patriarchal family leaders, like Abraham and Noah, going around making morally contentious decisions (in today’s terms) based on their internal dialogue with a god – Jehovah.[16] As Jaynes postulates later in his book, today we would call this behaviour schizophrenia. Di Vito, later in the article, confirms that:

“Of course, this relative disregard for autonomy in no way limits one’s responsibility for conduct–not even when Yhwh has given ‘statutes that were not good’ in order to destroy Israel” (Ezek 20:25-26).[17]

Cognitive Perspectives on Early Christology
by Daniel McClellan

The insights of CSR [cognitive science of religion] also better inform our reconstruction of early Jewish concepts of agency, identity, and divinity. Almost twenty years ago, Robert A. Di Vito argued from an anthropological perspective that the “person” in the Hebrew Bible “is more radically decentered, ‘dividual,’ and undefined with respect to personal boundaries … [and] in sharp contrast to modernity, it is identified more closely with, and by, its social roles.”40 Personhood was divisible and permeable in the Hebrew Bible, and while there was diachronic and synchronic variation in certain details, the same is evident in the literature of Second Temple Judaism and early Christianity. This is most clear in the widespread understanding of the spirit (רוח) and the soul (נפש) – often used interchangeably – as the primary loci of a person’s agency or capacity to act.41 Both entities were usually considered primarily constitutive of a person’s identity, but also distinct from their physical body and capable of existence apart from it.42 The physical body could also be penetrated or overcome by external “spirits,” and such possession imposed the agency and capacities of the possessor.43 The God of Israel was largely patterned after this concept of personhood,44 and was similarly partible, with God’s glory (Hebrew: כבוד; Greek: δόξα), wisdom (חכמה/σοφία), spirit (רוח/πνεῦµα), word (דבר/λόγος), presence (שכינה), and name (שם/ὄνοµα) operating as autonomous and sometimes personified loci of agency that could presence the deity and also possess persons (or cultic objects45) and/or endow them with special status or powers.46

Did Christianity lead to schizophrenia?
Psychosis, psychology and self reference

by Roland Littlewood

This new deity could be encountered anywhere—“Wherever two are gathered in my name” (Matthew 18.20)—for Christianity was universal and individual (“neither Jew nor Greek… bond nor free… male or female, for you are all one man in Christ Jesus” says St. Paul). And ultimate control rested with Him, Creator and Master of the whole universe, throughout the whole universe. No longer was there any point in threatening your recalcitrant (Egyptian) idol for not coming up with the goods (Cumont, 1911/1958, p. 93): as similarly in colonial Africa, at least according to the missionaries (Peel, 2000). If God was independent of social context and place, then so was the individual self at least in its conversations with God (as Dilthey argues). Religious status was no longer signalled by external signs (circumcision), or social position (the higher stages of the Roman priesthood had been occupied by aspiring politicians in the course of their career: “The internal status of the officiating person was a matter of… indifference to the celestial spirits” [Cumont, 1911/1958, p. 91]). “Now it is not our flesh that we must circumcise, we must crucify ourselves, exterminate and mortify our unreasonable desires” (John Chrysostom, 1979), “circumcise your heart” says “St. Barnabas” (2003, p. 45) for religion became internal and private. Like the African or Roman self (Mauss, 1938/1979), the Jewish self had been embedded in a functioning society, individually decentred and socially contextualised (Di Vito, 1999); it survived death only through its bodily descendants: “But Abram cried, what can you give me, seeing I shall die childless” (Genesis 15.2). To die without issue was extinction in both religious systems (Madigan & Levenson, 2008). But now an enduring part of the self, or an associate of it—the soul—had some sort of ill-defined association with what might be called body and consciousness. In its earthly body it was in potential communication with God. Like God it was immaterial and immortal. (The associated resurrection of the physical body, though an essential part of Christian dogma, has played an increasingly less important part in the Church [cf. Stroumsa, 1990].) For 19th-century pagan Yoruba who already accepted some idea of a hereafter, each village had its separate afterlife, which had to be fused by the missionaries into a more universal schema (Peel, 2000, p. 175). If the conversation with God was one to one, then each self-aware individual had then to make up their own mind on adherence—and thus the detached observer became the surveyor of the whole world (Dumont, 1985). Sacral and secular became distinct (separate “functions” as Dumont calls them), further presaging a split between psychological faculties. The idea of the self/soul as an autonomous unit facing God became the basis, via the stages Mauss (1938/1979) briefly outlines, for a political philosophy of individualism (MacFarlane, 1978). The missionaries in Africa constantly attempted to reach the inside of their converts, but bemoaned that the Yoruba did not seem to have any inward core to the self (Peel, 2000, Chapter 9).

Embodying the Gospel:
Two Exemplary Practices

by Joel B. Green
pp. 12-16

Philosopher Charles Taylor’s magisterial account of the development of personal identity in the West provides a useful point of entry into this discussion. He shows how modern assumptions about personhood in the West developed from Augustine in the fourth and fifth centuries, through major European philosophers in the seventeenth and eighteenth centuries (e.g., Descartes, Locke, Kant), and into the present. The result is a modern human “self defined by the powers of disengaged reason—with its associated ideals of self-responsible freedom and dignity—of self-exploration, and of personal commitment.”2 These emphases provide a launching point for our modern conception of “inwardness,” that is, the widespread view that people have an inner self, which is the authentic self.

Given this baseline understanding of the human person, it would seem only natural to understand conversion in terms of interiority, and this is precisely what William James has done for the modern West. In his enormously influential 1901–02 Gifford Lectures at Edinburgh University, published in 1902 under the title The Varieties of Religious Experience, James identifies salvation as the resolution of a person’s inner, subjective crisis. Salvation for James is thus an individual, instantaneous, feeling-based, interior experience.3 Following James, A.D. Nock’s celebrated study of conversion in antiquity reached a similar conclusion: “By conversion we mean the reorientation of the soul of an individual, his [sic] deliberate turning from indifference or from an earlier form of piety to another, a turning which involves a consciousness that a great change is involved, that the old was wrong and the new is right.” Nock goes on to write of “a passion of willingness and acquiescence, which removes the feeling of anxiety, a sense of perceiving truths not known before, a sense of clean and beautiful newness within and without and an ecstasy of happiness . . .”4 In short, what is needed is a “change of heart.”

However pervasive they may be in the contemporary West, whether inside or outside the church, such assumptions actually sit uneasily with Old and New Testament portraits of humanity. Let me mention two studies that press our thinking in an alternative direction. Writing with reference to Old Testament anthropology, Robert Di Vito finds that the human “(1) is deeply embedded, or engaged, in its social identity, (2) is comparatively decentered and undefined with respect to personal boundaries, (3) is relatively transparent, socialized, and embodied (in other words, is altogether lacking in a sense of ‘inner depths’), and (4) is ‘authentic’ precisely in its heteronomy, in its obedience to another and dependence upon another.”5 Two aspects of Di Vito’s summary are of special interest: first, his emphasis on a more communitarian experience of personhood; and second, his emphasis on embodiment. Were we to take seriously what these assumptions might mean for embracing and living out the Gospel, we might reflect more on what it means to be saved within the community of God’s people and, indeed, what it means to be saved in relation to the whole of God’s creation. We might also reflect less on conversion as decision-making and more on conversion as pattern-of-life.

The second study, by Klaus Berger, concerns the New Testament. Here, Berger investigates the New Testament’s “historical psychology,” repeatedly highlighting both the ease with which we read New Testament texts against modern understandings of humanity and the problems resident in our doing so.6 His list of troublesome assumptions—troublesome because they are more at home in the contemporary West than in the ancient Mediterranean world—includes these dualities, sometimes even dichotomies: doing and being, identity and behavior, internal and external. A more integrated understanding of people, the sort we find in the New Testament world, he insists, would emphasize life patterns that hold together believing, thinking, feeling, and behaving, and allow for a clear understanding that human behavior in the world is both simply and profoundly embodied belief. Perspectives on human transformation that take their point of departure from this “psychology” would emphasize humans in relationship with other humans, the bodily nature of human allegiances and commitments, and the fully integrated character of human faith and life. […]

Given how John’s message is framed in an agricultural context, it is not a surprise that his point turns on an organic metaphor rather than a mechanical one. The resulting frame has no room for prioritizing inner (e.g., “mind” or “heart”) over outer (e.g., “body” or “behavior”), nor for fitting disparate pieces together to manufacture a “product,” nor for correlating status and activity as cause and effect. Organic metaphors neither depend on nor provoke images of hierarchical systems but invite images of integration, interrelation, and interdependence. Consistent with this organic metaphor, practices do not occupy a space outside the system of change, but are themselves part and parcel of the system. In short, John’s agricultural metaphor inseparably binds “is” and “does” together.

Resurrection and the Restoration of Israel:
The Ultimate Victory of the God of Life
by Jon Douglas Levenson
pp. 108-114

In our second chapter, we discussed one of the prime warrants often adduced either for the rejection of resurrection (by better-informed individuals) or for its alleged absence, and the alleged absence of any notion of the afterlife, in Judaism (by less informed individuals). That warrant is the finality of death in the Hebrew Bible, or at least in most of it, and certainly in what is from a Jewish point of view its most important subsection, the first five books. For no resurrections take place therein, and predictions of a general resurrection at the end of time can be found in the written Torah only through ingenious derash of the sort that the rabbinic tradition itself does not univocally endorse or replicate in its translations. In the same chapter, we also identified one difficulty with this notion that the Pentateuch exhibits no possibility of an afterlife but supports, instead, the absolute finality of death, and to this point we must now return. I am speaking of the difficulty of separating individuals from their families (including the extended family that is the nation). If, in fact, individuals are fundamentally and inextricably embedded within their families, then their own deaths, however terrifying in prospect, will lack the finality that death carries with it in a culture with a more individualistic, atomistic understanding of the self. What I am saying here is something more radical than the truism that in the Hebrew Bible, parents draw consolation from the thought that their descendants will survive them (e.g., Gen 48:11), just as, conversely, the parents are plunged into a paralyzing grief at the thought that their progeny have perished (e.g., Gen 37:33–35; Jer 31:15). This is, of course, the case, and probably more so in the ancient world, where children were the support of one’s old age, than in modern societies, where the state and the pension fund fill many roles previously concentrated in the family. That to which I am pointing, rather, is that the self of an individual in ancient Israel was entwined with the self of his or her family in ways that are foreign to the modern West, and became foreign to some degree already long ago.

Let us take as an example the passage in which Jacob is granted ‘‘the blessing of Abraham,’’ his grandfather, according to the prayer of Isaac, his father, to ‘‘possess the land where you are sojourning, which God assigned to Abraham’’ (Gen 28:1–4). The blessing on Abraham, as we have seen, can be altogether and satisfactorily fulfilled in Abraham’s descendants. Thus, too, can Ezekiel envision the appointment of ‘‘a single shepherd over [Israel] to tend them—My servant David,’’ who had passed away many generations before (Ezek 34:23). Can we, without derash, see in this a prediction that David, king of Judah and Israel, will be raised from the dead? To do so is to move outside the language of the text and the culture of Israel at the time of Ezekiel, which does not speak of the resurrections of individuals at all. But to say, as the School of Rabbi Ishmael said about ‘‘to Aaron’’ in Num 18:28,1 that Ezekiel means only one who is ‘‘like David’’—a humble shepherd boy who comes to triumph in battle and rises to royal estate, vindicating his nation and making it secure and just—is not quite the whole truth, either. For biblical Hebrew is quite capable of saying that one person is ‘‘like’’ another or descends from another’s lineage (e.g., Deut 18:15; 2 Kgs 22:2; Isa 11:1) without implying identity of some sort. The more likely interpretation, rather, is that Ezekiel here predicts the miraculous appearance of a royal figure who is not only like David but also of David, a person of Davidic lineage, that is, who functions as David redivivus. This is not the resurrection of a dead man, to be sure, but neither is it the appearance of some unrelated person who only acts like David, or of a descendant who is ‘‘a chip off the old block.’’ David is, in one obvious sense, dead and buried (1 Kgs 2:10), and his death is final and irreversible. In another sense, harder for us to grasp, however, his identity survives him and can be manifested again in a descendant who acts as he did (or, to be more precise, as Ezekiel thought he acted) and in whom the promise to David is at long last fulfilled. For David’s identity was not restricted to the one man of that name but can reappear to a large measure in kin who share it.

This is obviously not reincarnation. For that term implies that the ancient Israelites believed in something like the later Jewish and Christian ‘‘soul’’ or like the notion (such as one finds in some religions) of a disembodied consciousness that can reappear in another person after its last incarnation has died. In the Hebrew Bible, however, there is nothing of the kind. The best approximation is the nepes, the part of the person that manifests his or her life force or vitality most directly. James Barr defines the nepes as ‘‘a superior controlling centre which accompanies, exposes and directs the existence of that totality [of the personality] and one which, especially, provides the life to the whole.’’2 Although the nepes does exhibit a special relationship to the life of the whole person, it is doubtful that it constitutes ‘‘a superior controlling center.’’ As Robert Di Vito points out, ‘‘in the OT, human faculties and bodily organs enjoy a measure of independence that is simply difficult to grasp today without dismissing it as merely poetic speech or, even worse, ‘primitive thinking.’’’ Thus, the eye talks or thinks (Job 24:15) and even mocks (Prov 30:17), the ear commends or pronounces blessed (Job 29:11), blood cries out (Gen 4:10), the nepes (perhaps in the sense of gullet or appetite) labors (Prov 16:26) or pines (Ps 84:3), kidneys rejoice and lips speak (Prov 23:16), hands shed blood (Deut 21:7), the heart and flesh sing (Ps 84:3), all the psalmist’s bones say, ‘‘Lord, who is like you?’’ (Ps 35:10), tongue and lips lie or speak the truth (Prov 12:19, 22), hearts are faithful (Neh 9:8) or wayward (Jer 5:23), and so forth.3 The point is not that the individual is simply an agglomeration of distinct parts. It is, rather, that the nepes is one part of the self among many and does not control the entirety, as the old translation ‘‘soul’’ might lead us to expect.4 A similar point might be made about the modern usage of the term person.

[4. It is less clear to me that this is also Di Vito’s point. He writes, for example: ‘‘The biblical character presents itself to us more as parts than as a whole . . . accordingly, in the OT one searches in vain for anything really corresponding to the Platonic localization of desire and emotion in a central ‘locale,’ like the ‘soul’ under the hegemony of reason, a unified and self-contained center from which the individual’s activities might flow, a ‘self’ that might finally assert its control’’ (‘‘Old Testament Anthropology,’’ 228).]

All of the organs listed above, Di Vito points out, are ‘‘susceptible to moral judgment and evaluation.’’5 Not only that, parts of the body besides the nepes can actually experience emotional states. As Aubrey R. Johnson notes, ‘‘Despondency, for example, is felt to have a shriveling effect upon the bones . . . just as they are said to decay or become soft with fear or distress, and so may be referred to as being themselves troubled or afraid’’ (e.g., Ezek 37:11; Hab 3:16; Jer 23:9; Ps 31:11). In other words, ‘‘the various members and secretions of the body . . . can all be thought of as revealing psychical properties,’’6 and this is another way of saying that the nepes does not really correspond to Barr’s ‘‘superior controlling centre’’ at all. For many of the functions here attributed to the nepes are actually distributed across a number of parts of the body. The heart, too, often functions as the ‘‘controlling centre,’’ determining, for example, whether Israel will follow God’s laws or not (e.g., Ezek 11:19). The nepes in the sense of the life force of the body is sometimes identified with the blood, rather than with an insensible spiritual essence of the sort that words like ‘‘soul’’ or ‘‘person’’ imply. It is in light of this that we can best understand the Pentateuchal laws that forbid the eating of blood on the grounds that it is the equivalent of eating life itself, eating, that is, an animal that is not altogether dead (Lev 17:11, 14; Deut 12:23; cf. Gen 9:4–5). If the nepes ‘‘provides the life to the whole,’’7 so does the blood, with which laws like these, in fact, equate it. The bones, which, as we have just noted, can experience emotional states, function likewise on occasion. When a dead man is hurriedly thrown into Elisha’s grave in 2 Kgs 13:21, it is contact with the wonder-working prophet’s bones that brings about his resurrection. And when the primal man at long last finds his soul mate, he exclaims not that she (unlike the animals who have just been presented to him) shares a nepes with him but rather that she ‘‘is bone of my bones / And flesh of my flesh’’ (Gen 2:23).

In sum, even if the nepes does occasionally function as a ‘‘controlling centre’’ or a provider of life, it does not do so uniquely. The ancient Israelite self is more dynamic and internally complex than such a formulation allows. It should also be noticed that unlike the ‘‘soul’’ in most Western philosophy, the biblical nepes can die. When the non-Israelite prophet Balaam expresses his wish to ‘‘die the death of the upright,’’ it is his nepes that he hopes will share their fate (Num 23:10), and the same applies to Samson when he voices his desire to die with the Philistines whose temple he then topples upon all (Judg 16:30). Indeed, ‘‘to kill the nepes’’ functions as a term for homicide in biblical Hebrew, in which context, as elsewhere, it indeed has a meaning like that of the English ‘‘person’’ (e.g., Num 31:19; Ezek 13:19).8 As Hans Walter Wolff puts it, nepes ‘‘is never given the meaning of an indestructible core of being, in contradistinction to the physical life . . . capable of living when cut off from that life.’’9 Like heart, blood, and bones, the nepes can cease to function. It is not quite correct to say, however, that this is because it is ‘‘physical’’ rather than ‘‘spiritual,’’ for the other parts of the self that we consider physical—heart, blood, bones, or whatever—are ‘‘spiritual’’ as well—registering emotions, reacting to situations, prompting behavior, expressing ideas, each in its own way. A more accurate summary statement would be Johnson’s: ‘‘The Israelite conception of man [is] as a psycho-physical organism.’’10 ‘‘For some time at least [after a person’s death] he may live on as an individual (apart from his possible survival within the social unit),’’ observes Johnson, ‘‘in such scattered elements of his personality as the bones, the blood and the name.’’11 It would seem to follow that if ever he is to return ‘‘as a psycho-physical organism,’’ it will have to be not through reincarnation of his soul in some new person but through the resurrection of the body, with all its parts reassembled and revitalized. For in the understanding of the Hebrew Bible, a human being is not a spirit, soul, or consciousness that happens to inhabit this body or that—or none at all. Rather, the unity of body and soul (to phrase the point in the unhappy dualistic vocabulary that is still quite removed from the way the Hebrew Bible thought about such things) is basic to the person. It thus follows that however distant the resurrection of the dead may be from the understanding of death and life in ancient Israel, the concept of immortality in the sense of a soul that survives death is even more distant. And whatever the biblical problems with the doctrine of resurrection—and they are formidable—the biblical problems with the immortality that modern Jewish prayer books prefer (as we saw in our first chapter) are even greater.

Di Vito points, however, to an aspect of the construction of the self in ancient Israel that does have some affinities with immortality. This is the thorough embeddedness of that individual within the family and the corollary difficulty in the context of this culture of isolating a self apart from the kin group. Drawing upon Charles Taylor’s highly suggestive study The Sources of the Self,12 Di Vito points out that ‘‘salient features of modern identity, such as its pronounced individualism, are grounded in modernity’s location of the self in the ‘inner depths’ of one’s interiority rather than in one’s social role or public relations.’’13 Cautioning against the naïve assumption that ancient Israel adhered to the same conception of the self, Di Vito develops four points of contrast between modern Western and ancient Israelite thinking on this point. In the Hebrew Bible,

the subject (1) is deeply embedded, or engaged, in its social identity, (2) is comparatively decentered and undefined with respect to personal boundaries, (3) is relatively transparent, socialized, and embodied (in other words, is altogether lacking in a sense of ‘‘inner depths’’), and (4) is ‘‘authentic’’ precisely in its heteronomy, in its obedience to another and dependence upon another.14

Although Di Vito’s formulation is overstated and too simple—is every biblical figure, even David, presented as ‘‘altogether lacking in a sense of ‘inner depths’’’?—his first and last points are highly instructive and suggest that the familial and social understanding of ‘‘life’’ in the Hebrew Bible is congruent with larger issues in ancient Israelite culture. ‘‘Life’’ and ‘‘death’’ mean different things in a culture like ours, in which the subject is not so ‘‘deeply embedded . . . in its social identity’’ and in which authenticity tends to be associated with cultivation of individual traits at the expense of conformity, and with the attainment of personal autonomy and independence.

The contrast between the biblical and the modern Western constructions of personal identity is glaring when one considers the structure of what Di Vito calls ‘‘the patriarchal family.’’ This ‘‘system,’’ he tells us, ‘‘with strict subordination of individual goals to those of the extended lineal group, is designed to ensure the continuity and survival of the family.’’15 In this, of course, such a system stands in marked contrast to liberal political theory that has developed over the past three and a half centuries, which, in fact, virtually assures that people committed to that theory above all else will find the Israelite system oppressive. For the liberal political theory is one that has increasingly envisioned a system in which society is composed of only two entities, the state and individual citizens, all of whom have equal rights quite apart from their familial identities and roles. Whether or not one affirms such an identity or plays the role that comes with it (or any role different from that of other citizens) is thus relegated to the domain of private choice. Individuals are guaranteed the freedom to renounce the goals of ‘‘the extended lineal group’’ and ignore ‘‘the continuity and survival of the family,’’ or, increasingly, to redefine ‘‘family’’ according to their own private preferences. In this particular modern type of society, individuals may draw consolation from the thought that their group (however defined) will survive their own deaths. As we have had occasion to remark, there is no reason to doubt that ancient Israelites did so, too. But in a society like ancient Israel, in which ‘‘the subject . . . is deeply embedded, or engaged, in its social identity,’’ ‘‘with strict subordination of individual goals to those of the extended lineal group,’’ the loss of the subject’s own life and the survival of the familial group cannot but have a very different resonance from the one most familiar to us. For even though the subject’s death is irreversible—his or her nepes having died just like the rest of his or her body/soul—his or her fulfillment may yet occur, for identity survives death. God can keep his promise to Abraham or his promise to Israel associated with the gift of David even after Abraham or David, as an individual subject, has died. Indeed, in light of Di Vito’s point that ‘‘the subject . . . is comparatively decentered and undefined with respect to personal boundaries,’’ the very distinction between Abraham and the nation whose covenant came through him (Genesis 15; 17), or between David and the Judean dynasty whom the Lord has pledged never to abandon (2 Sam 7:8–16; Ps 89:20–38), is too facile.

Our examination of personal identity in the earlier literature of the Hebrew Bible thus suggests that the conventional view is too simple: death was not final and irreversible after all, at least not in the way in which we are inclined to think of these matters. This is not, however, because individuals were believed to possess an indestructible essence that survived their bodies. On the one hand, the body itself was thought to be animated in ways foreign to modern materialistic and biologistic thinking, but, on the other, even its most spiritual part, its nepeš (life force) or its nĕšāmâ (breath), was mortal. Rather, the boundary between individual subjects and the familial/ethnic/national group in which they dwelt, to which they were subordinate, and on which they depended was so fluid as to rob death of some of the horror it has in more individualistic cultures, influenced by some version of social atomism. In more theological texts, one sees this in the notion that subjects can die a good death, ‘‘old and contented . . . and gathered to [their] kin,’’ like Abraham, who lived to see a partial—though only a partial—fulfillment of God’s promise of land, progeny, and blessing upon him, or like Job, also ‘‘old and contented’’ after his adversity came to an end and his fortunes—including progeny—were restored (Gen 25:8; Job 42:17). If either of these patriarchal figures still felt terror in the face of his death, even after his afflictions had been reversed, the Bible gives us no hint of it.16 Death in situations like these is not a punishment, a cause for complaint against God, or the provocation of an existential crisis. But neither is it death as later cultures, including our own, conceive it.

Given this embeddedness in family, there is in Israelite culture, however, a threat that is the functional equivalent to death as we think of it. This is the absence or loss of descendants.

The Master and His Emissary
by Iain McGilchrist
pp. 263-264

Whoever it was that composed or wrote them [the Homeric epics], they are notable for being the earliest works of Western civilisation that exemplify a number of characteristics that are of interest to us. For in their most notable qualities – their ability to sustain a unified theme and produce a single, whole coherent narrative over a considerable length, in their degree of empathy, and insight into character, and in their strong sense of noble values (Scheler’s Lebenswerte and above) – they suggest a more highly evolved right hemisphere.

That might make one think of the importance to the right hemisphere of the human face. Yet, despite this, there are in Homeric epic few descriptions of faces. There is no doubt about the reality of the emotions experienced by the figures caught up in the drama of the Iliad or the Odyssey: their feelings of pride, hate, envy, anger, shame, pity and love are the stuff of which the drama is made. But for the most part these emotions are conveyed as relating to the body and to bodily gesture, rather than the face – though there are moments, such as at the reunion of Penelope and Odysseus at the end of the Odyssey, when we seem to see the faces of the characters, Penelope’s eyes full of tears, those of Odysseus betraying the ‘ache of longing rising from his breast’. The lack of emphasis on the face might seem puzzling at a time of increasing empathic engagement, but I think there is a reason for this.

In Homer, as I mentioned in Part I, there was no word for the body as such, nor for the soul or the mind, for that matter, in the living person. The sōma was what was left on the battlefield, and the psuchē was what took flight from the lips of the dying warrior. In the living person, when Homer wants to speak of someone’s mind or thoughts, he refers to what is effectively a physical organ – Achilles, for example, ‘consulting his thumos’. Although the thumos is a source of vital energy within that leads us to certain actions, the thumos has fleshly characteristics such as requiring food and drink, and a bodily situation, though this varies. According to Michael Clarke’s Flesh and Spirit in the Songs of Homer, Homeric man does not have a body or a mind: ‘rather this thought and consciousness are as inseparable a part of his bodily life as are movement and metabolism’. 15 The body is indistinguishable from the whole person. 16 ‘Thinking, emotion, awareness, reflection, will’ are undertaken in the breast, not the head: ‘the ongoing process of thought is conceived of as if it were precisely identified with the palpable inhalation of the breath, and the half-imagined mingling of breath with blood and bodily fluids in the soft, warm, flowing substances that make up what is behind the chest wall.’ 17 He stresses the importance of flow, of melting and of coagulation. The common ground of meaning is not in a particular static thing but in the ongoing process of living, which ‘can be seen and encapsulated in different contexts by a length of time or an oozing liquid’. These are all images of transition between different states of flux, different degrees of permanence, and allowing the possibility of ambiguity: ‘The relationship between the bodily and mental identity of these entities is subtle and elusive.’ 18 Here there is no necessity for the question ‘is this mind or is it body?’ to have a definitive answer. Such forbearance, however, had become impossible by the time of Plato, and remains, according to current trends in neurophilosophy, impossible today.

Words suggestive of the mind, the thumos ‘family’, for example, range fluidly and continuously between actor and activity, between the entity that thinks and the thoughts or emotions that are its products. 19 Here Clarke is speaking of terms such as is, aiōn, menos. ‘The life of Homeric man is defined in terms of processes more precisely than of things.’ 20 Menos, for example, refers to force or strength, and can also mean semen, despite being often located in the chest. But it also refers to ‘the force of violent self-propelled motion in something non-human’, perhaps like Scheler’s Drang: again more an activity than a thing. 21

This profound embodiment of thought and emotion, this emphasis on processes that are always in flux, rather than on single, static entities, this refusal of the ‘either/or’ distinction between mind and body, all perhaps again suggest a right-hemisphere-dependent version of the world. But what is equally obvious to the modern mind is the relative closeness of the point of view. And that, I believe, helps to explain why there is little description of the face: to attend to the face requires a degree of detached observation. That there is here a work of art at all, a capacity to frame human existence in this way, suggests, it is true, a degree of distance, as well as a degree of co-operation of the hemispheres in achieving it. But it is the gradual evolution of greater distance in post-Homeric Greek culture that causes the efflorescence, the ‘unpacking’, of both right and left hemisphere capacities in the service of both art and science.

With that distance comes the term closest to the modern, more disembodied, idea of mind, nous (or noos), which is rare in Homer. When nous does occur in Homer, it remains distinct, almost always intellectual, not part of the body in any straightforward sense: according to Clarke it ‘may be virtually identified with a plan or stratagem’. 22 In conformation to the processes of the left hemisphere, it is like the flight of an arrow, directional. 23

By the late fifth and fourth centuries, separate ‘concepts of body and soul were firmly fixed in Greek culture’. 24 In Plato, and thence for the next two thousand years, the soul is a prisoner in the body, as he describes it in the Phaedo, awaiting the liberation of death.

The Great Shift
by James L. Kugel
pp. 163-165

A related belief is attested in the story of Hannah (1 Sam 1). Hannah is, to her great distress, childless, and on one occasion she goes to the great temple at Shiloh to seek God’s help:

The priest Eli was sitting on a seat near the doorpost of the temple of the LORD. In the bitterness of her heart, she prayed to the LORD and wept. She made a vow and said: “O LORD of Hosts, if You take note of Your maidservant’s distress, and if You keep me in mind and do not neglect Your maidservant and grant Your maidservant a male offspring, I will give him to the LORD for all the days of his life; and no razor shall ever touch his head.” * Now as she was speaking her prayer before the LORD, Eli was watching her mouth. Hannah was praying in her heart [i.e., silently]; her lips were moving, but her voice could not be heard, so Eli thought she was drunk. Eli said to her: “How long are you going to keep up this drunkenness? Cut out the boozing!” But Hannah answered: “Oh no, sir, I am a woman of saddened spirit. I have drunk no wine or strong drink, but I have been pouring out my heart to the LORD. Don’t take your maidservant for an ill-behaved woman! I have been praying this long because of my great distress.” Eli answered her: “Then go in peace, and may the God of Israel grant you what you have asked of Him.” (1 Sam 1:9–17)

If Eli couldn’t hear her, how did Hannah ever expect God to hear her? But she did. Somehow, even though no sound was coming out of her mouth, she apparently believed that God would hear her vow and, she hoped, act accordingly. (Which He did; “at the turn of the year she bore a son,” 1 Sam 1:20.) This too seemed to defy the laws of physics, just as much as Jonah’s prayer from the belly of the fish, or any prayer uttered at some distance from God’s presumed locale, a temple or other sacred spot.

Many other things could be said about the Psalms, or about biblical prayers in general, but the foregoing three points have been chosen for what they imply for the overall theme of this book. We have already seen a great deal of evidence indicating that people in biblical times believed the mind to be semipermeable, capable of being infiltrated from the outside. This is attested not only in the biblical narratives examined earlier, but it is the very premise on which all of Israel’s prophetic corpus stands. The semipermeable mind is prominent in the Psalms as well; in a telling phrase, God is repeatedly said to penetrate people’s “kidneys and heart” (Pss 7:10, 26:2, 139:13; also Jer 11:20, 17:10, 20:12), entering these messy internal organs 28 where thoughts were believed to dwell and reading—as if from a book—all of people’s hidden ideas and intentions. God just enters and looks around:

You have examined my heart, visited [me] at night;
You have tested me and found no wickedness; my mouth has not transgressed. (Ps 17:3)
Examine me, O LORD, and test me; try my kidneys and my heart. (26:2)

[28. Robert North rightly explained references to a person’s “heart” alone (leb in biblical Hebrew) not as a precise reference to that particular organ, but as “a vaguely known or confused jumble of organs, somewhere in the area of the heart or stomach”: see North (1993), 596.]

Indeed God is so close that inside and outside are sometimes fused:

Let me bless the LORD who has given me counsel; my kidneys have been instructing me at night.
I keep the LORD before me at all times, just at my right hand, so I will not stumble. (Ps 16:7–8)

(Who’s giving this person advice, an external God or an internal organ?)

Such is God’s passage into a person’s semipermeable mind. But the flip side of all this is prayer, when a person’s words, devised on the inside, in the human mind, leave his or her lips in order to reach—somehow—God on the outside. As we have seen, those words were indeed believed to make their way to God; in fact, it was the cry of the victim that in some sense made the world work, causing God to notice and take up the cause of justice and right. Now, the God who did so was also, we have seen, a mighty King, who presumably ranged over all of heaven and earth:

He mounted on a cherub and flew off, gliding on the wings of the wind. (Ps 18:11)

He makes the clouds His chariot, He goes about on the wings of the wind. (Ps 104:3)

Yet somehow, no matter where His travels might take Him, God is also right there, just on the other side of the curtain that separates ordinary from extraordinary reality, allowing Him to hear the sometimes geographically distant cry of the victim or even to hear an inaudible, silent prayer like Hannah’s. The doctrine of divine omnipresence was still centuries away and was in fact implicitly denied in many biblical texts, 29 yet something akin to omnipresence seems to be implied in God’s ability to hear and answer prayers uttered from anywhere, no matter where He is. In fact, this seems implied as well in the impatient, recurrent question seen above, “How long, O LORD?”; the psalmist seems to be saying, “I know You’ve heard me, so when will You answer?”

Perhaps the most striking thing suggested by all this is the extent to which the Psalms’ depiction of God seems to conform to the general contours of the great Outside as described in an earlier chapter. God is huge and powerful, but also all-enfolding and, hence, just a whisper away. Somehow, people in biblical times seem to have just assumed that God, on the other side of that curtain, could hear their prayers, no matter where they were. All this again suggests a sense of self quite different from our own—a self that could not only be permeated by a great, external God, but whose thoughts and prayers could float outward and reach a God who was somehow never far, His domain beginning precisely where the humans’ left off.

One might thus say that, in this and in other ways, the psalmists’ underlying assumptions constitute a kind of biblical translation of a basic way of perceiving that had started many, many millennia earlier, a rephrasing of that fundamental reality in the particular terms of the religion of Israel. That other, primeval sense of reality and this later, more specific version of it found in these psalms present the same basic outline, which is ultimately a way of fitting into the world: the little human (more specifically in the Psalms, the little supplicant) faced with a huge enfolding Outside (in the Psalms, the mighty King) who overshadows everything and has all the power: sometimes kind and sometimes cruel (in the Psalms, sometimes heeding one’s request, but at other times oddly inattentive or sluggish), the Outside is so close as to move in and out of the little human (in the Psalms as elsewhere, penetrating a person’s insides, but also, able to pick up the supplicant’s request no matter where or how uttered). 30

pp. 205-207

The biblical “soul” was not originally thought to be immortal; in fact, the whole idea that human beings have some sort of sacred or holy entity inside them did not exist in early biblical times. But the soul as we conceive of it did eventually come into existence, and how this transformation came about is an important part of the history that we are tracing.

The biblical book of Proverbs is one of the least favorites of ordinary readers. To put the matter bluntly, Proverbs can be pretty monotonous: verse after verse tells you how much better the “righteous” are than the “wicked”: that the righteous tread the strait and narrow, control their appetites, avoid the company of loose women, save their money for a rainy day, and so forth, while the “wicked” always do quite the opposite. In spite of the way the book hammers away at these basic themes, a careful look at specific verses sometimes reveals something quite striking. 1 Here, for example, is what one verse has to say about the overall subject of the present study:

A person’s soul is the lamp of the LORD, who searches out all the innermost chambers. (Prov 20:27)

At first glance, this looks like the old theme of the semipermeable mind, whose innermost chambers are accessible to an inquisitive God. But in this verse, God does not just enter as we have seen Him do so often in previous chapters, when He appeared (apparently in some kind of waking dream) to Abraham or Moses, or put His words in the mouth of Amos or Jeremiah, or in general was held to “inspect the kidneys and heart” (that is, the innermost thoughts) of people. Here, suddenly, God seems to have an ally on the inside: the person’s own soul.

This point was put forward in rather pungent form by an ancient Jewish commentator, Rabbi Aḥa (fourth century CE). He cited this verse to suggest that the human soul is actually a kind of secret agent, a mole planted by God inside all human beings. The soul’s job is to report to God (who is apparently at some remove) on everything that a person does or thinks:

“A person’s soul is the lamp of the LORD, who searches out all the innermost chambers”: Just as kings have their secret agents * who report to the king on each and every thing, so does the Holy One have secret agents who report on everything that a person does in secret . . . The matter may be compared to a man who married the daughter of a king. The man gets up early each morning to greet the king, and the king says, “You did such-and-such a thing in your house [yesterday], then you got angry and you beat your slave . . .” and so on for each and every thing that occurred. The man leaves and says to the people of the palace, “Which of you told the king that I did such-and-so? How does he know?” They reply to him, “Don’t be foolish! You’re married to his daughter and you want to know how he finds out? His own daughter tells him!” So likewise, a person can do whatever he wants, but his soul reports everything back to God. 2

The soul, in other words, is like God’s own “daughter”: she dwells inside a human body, but she reports regularly to her divine “father.” Or, to put this in somewhat more schematic terms: God, who is on the outside, has something that is related or connected to Him on the inside, namely, “a person’s soul.” But wasn’t it always that way?

Before getting to an answer, it will be worthwhile to review in brief something basic that was seen in the preceding chapters. Over a period of centuries, the basic model of God’s interaction with human beings came to be reconceived. After a time, He no longer stepped across the curtain separating ordinary from extraordinary reality. Now He was not seen at all—at first because any sort of visual sighting was held to be lethal, and later because it was difficult to conceive of. God’s voice was still heard, but He Himself was an increasingly immense being, filling the heavens; and then finally (moving ahead to post-biblical times), He was just axiomatically everywhere all at once. This of course clashed with the old idea of the sanctuary (a notion amply demonstrated in ancient Mesopotamian religion as well), according to which wherever else He was, God was physically present in his earthly “house,” that is, His temple. But this ancient notion as well came to be reconfigured in Israel; perched like a divine hologram above the outstretched wings of the cherubim in the Holy of Holies, God was virtually bodiless, issuing orders (like “Let there be light”) that were mysteriously carried out. 3

If conceiving of such a God’s being was difficult, His continued ability to penetrate the minds of humans ought to have been, if anything, somewhat easier to account for. He was incorporeal and omnipresent; 4 what could stand in the way of His penetrating a person’s mind, or being there already? Yet precisely for this reason, Proverbs 20:27 is interesting. It suggests that God does not manage this search unaided: there is something inside the human being that plays an active role in this process, the person’s own self or soul.

p. 390

It is striking that the authors of this study went on specifically to single out the very different sense of self prevailing in the three locales as responsible for the different ways in which voice hearing was treated: “Outside Western culture people are more likely to imagine [a person’s] mind and self as interwoven with others. These are, of course, social expectations, or cultural ‘invitations’—ways in which other people expect people like themselves to behave. Actual people do not always follow social norms. Nonetheless, the more ‘independent’ emphasis of what we typically call the ‘West’ and the more interdependent emphasis of other societies has been demonstrated ethnographically and experimentally many times in many places—among them India and Africa . . .” The passage continues: “For instance, the anthropologist McKim Marriott wanted to be so clear about how much Hindus conceive themselves to be made through relationships, compared with Westerners, that he called the Hindu person a ‘dividual’. His observations have been supported by other anthropologists of South Asia and certainly in south India, and his term ‘dividual’ was picked up to describe other forms of non-Western personhood. The psychologist Glenn Adams has shown experimentally that Ghanaians understand themselves as intrinsically connected through relationships. The African philosopher John Mbiti remarks: ‘only in terms of other people does the [African] individual become conscious of his own being.’” Further, see Markus and Mullally (1997); Nisbett (2004); Marriot (1976); Miller (2007); Trawick (1992); Strathern (1988); Ma and Schoeneman (1997); Mbiti (1969).

The “Other” Psychology of Julian Jaynes
by Brian J. McVeigh
p. 74

The Heart is the Ruler of the Body

We can begin with the word xin1, or heart, though given its broader denotations related to both emotions and thought, a better translation is “heart-mind” (Yu 2003). Xin1 is a pictographic representation of a physical heart, and as we will see below, it forms the most primary and elemental building block for Chinese linguo-concepts having to do with the psychological. The xin1 oversaw the activities of an individual’s psychophysiological existence and was regarded as the ruler of the body — indeed, the person — in the same way a king ruled his people. If individuals cultivate and control their hearts, then the family, state, and world could be properly governed (Yu 2007, 2009b).

Psycho-Physio-Spiritual Aspects of the Person

Under the control of the heart were the wu3shen2 or “five spirits” (shen2, hun2, po4, yi4, zhi4), which dwelt respectively in the heart, liver, lungs, spleen, and kidneys. The five shen2 were implicated in the operations of thinking, perception, and bodily systems and substances. A phonosemantic, shen2 has been variously translated as mind, spirit, supernatural being, consciousness, vitality, expression, soul, energy, god, or numen/numinous. The left side element of this logograph means manifest, show, demonstrate; we can speculate that whatever was manifested came from a supernatural source; it may have meant “ancestral spirit” (Keightley 1978: 17). The right side provides sound but also the additional meaning of “to state” or “report to a superior”; again we can speculate that it meant communing with a supernatural superior.

Introspective Illusion

On split brain research, Susan Blackmore observed that, “In this way, the verbal left brain covered up its ignorance by confabulating.” This relates to the theory of introspective illusion (see also change blindness, choice blindness, and bias blind spot). In both cases, the conscious mind turns to confabulation to explain what it has no access to and so what it doesn’t understand.

This is how we maintain a sense of being in control. Our egoic minds have an immense talent for rationalization, and it can happen instantly, with total confidence in the reason(s) given. That indicates that consciousness is a lot less conscious than it seems… or rather that consciousness isn’t what we think it is.

Our theory of mind, as such, is highly theoretical in the speculative sense. That is to say it isn’t particularly reliable in most cases. First and foremost, what matters is that the story told is compelling, to both us and others (self-justification, in its role within consciousness, is close to Jaynesian self-authorization). We are ruled by our need for meaning, even as our body-minds don’t require meaning to enact behaviors and take actions. We get through our lives just fine mostly on automatic.

According to Julian Jaynes’s theory of the bicameral mind, the purpose of consciousness is to create an internal stage upon which we play out narratives. As this interiorized and narratized space is itself confabulated, that is to say psychologically and socially constructed, it allows all further confabulations of consciousness. We imaginatively bootstrap our individuality into existence, and that requires a lot of explaining.

* * *

Introspection illusion
Wikipedia

A 1977 paper by psychologists Richard Nisbett and Timothy D. Wilson challenged the directness and reliability of introspection, thereby becoming one of the most cited papers in the science of consciousness.[8][9] Nisbett and Wilson reported on experiments in which subjects verbally explained why they had a particular preference, or how they arrived at a particular idea. On the basis of these studies and existing attribution research, they concluded that reports on mental processes are confabulated. They wrote that subjects had, “little or no introspective access to higher order cognitive processes”.[10] They distinguished between mental contents (such as feelings) and mental processes, arguing that while introspection gives us access to contents, processes remain hidden.[8]

Although some other experimental work followed from the Nisbett and Wilson paper, difficulties with testing the hypothesis of introspective access meant that research on the topic generally stagnated.[9] A ten-year-anniversary review of the paper raised several objections, questioning the idea of “process” they had used and arguing that unambiguous tests of introspective access are hard to achieve.[3]

Updating the theory in 2002, Wilson admitted that the 1977 claims had been too far-reaching.[10] He instead relied on the theory that the adaptive unconscious does much of the moment-to-moment work of perception and behaviour. When people are asked to report on their mental processes, they cannot access this unconscious activity.[7] However, rather than acknowledge their lack of insight, they confabulate a plausible explanation, and “seem” to be “unaware of their unawareness”.[11]

The idea that people can be mistaken about their inner functioning is one applied by eliminative materialists. These philosophers suggest that some concepts, including “belief” or “pain” will turn out to be quite different from what is commonly expected as science advances.

The faulty guesses that people make to explain their thought processes have been called “causal theories”.[1] The causal theories provided after an action will often serve only to justify the person’s behaviour in order to relieve cognitive dissonance. That is, a person may not have noticed the real reasons for their behaviour, even when trying to provide explanations. The result is an explanation that mostly just makes themselves feel better. An example might be a man who discriminates against homosexuals because he is embarrassed that he himself is attracted to other men. He may not admit this to himself, instead claiming his prejudice is because he believes that homosexuality is unnatural.

2017 Report on Consciousness and Moral Patienthood
Open Philanthropy Project

Physicalism and functionalism are fairly widely held among consciousness researchers, but are often debated and far from universal.58 Illusionism seems to be an uncommon position.59 I don’t know how widespread or controversial “fuzziness” is.

I’m not sure what to make of the fact that illusionism seems to be endorsed by a small number of theorists, given that illusionism seems to me to be “the obvious default theory of consciousness,” as Daniel Dennett argues.60 In any case, the debates about the fundamental nature of consciousness are well-covered elsewhere,61 and I won’t repeat them here.

A quick note about “eliminativism”: the physical processes which instantiate consciousness could turn out to be so different from our naive guesses about their nature that, for pragmatic reasons, we might choose to stop using the concept of “consciousness,” just as we stopped using the concept of “phlogiston.” Or, we might find a collection of processes that are similar enough to those presumed by our naive concept of consciousness that we choose to preserve the concept of “consciousness” and simply revise our definition of it, as happened when we eventually decided to identify “life” with a particular set of low-level biological features (homeostasis, cellular organization, metabolism, reproduction, etc.) even though life turned out not to be explained by any Élan vital or supernatural soul, as many people throughout history62 had assumed.63 But I consider this only a possibility, not an inevitability.

59. I’m not aware of surveys indicating how common illusionist approaches are, though Frankish (2016a) remarks that:

The topic of this special issue is the view that phenomenal consciousness (in the philosophers’ sense) is an illusion — a view I call illusionism. This view is not a new one: the first wave of identity theorists favoured it, and it currently has powerful and eloquent defenders, including Daniel Dennett, Nicholas Humphrey, Derk Pereboom, and Georges Rey. However, it is widely regarded as a marginal position, and there is no sustained interdisciplinary research programme devoted to developing, testing, and applying illusionist ideas. I think the time is ripe for such a programme. For a quarter of a century at least, the dominant physicalist approach to consciousness has been a realist one. Phenomenal properties, it is said, are physical, or physically realized, but their physical nature is not revealed to us by the concepts we apply to them in introspection. This strategy is looking tired, however. Its weaknesses are becoming evident…, and some of its leading advocates have now abandoned it. It is doubtful that phenomenal realism can be bought so cheaply, and physicalists may have to accept that it is out of their price range. Perhaps phenomenal concepts don’t simply fail to represent their objects as physical but misrepresent them as phenomenal, and phenomenality is an introspective illusion…

[Keith Frankish, Editorial Introduction, Journal of Consciousness Studies, Volume 23, Numbers 11-12, 2016, pp. 9-10(2)]

The Round-Based Community

Yet there’s an even deeper point to be made here, which is that flatness may actually be closer to how we think about the people around us, or even about ourselves.

This is a useful observation from Alec Nevala-Lee (The flat earth society).

I’m willing to bet that perceiving others and oneself as round characters has to do with the capacity for cognitive complexity and tolerance for cognitive dissonance. These are tendencies of the liberal-minded, although research shows that with cognitive overload, from stress to drunkenness, even the liberal-minded will become conservative-minded (e.g., liberals who watched repeated video of the 9/11 terrorist attacks were more likely to support Bush’s war on terror; by the way, identifying a conflict by a single emotion is a rather flat way of looking at the world).

Bacon concludes: “Increasingly, the political party you belong to represents a big part of your identity and is not just a reflection of your political views. It may even be your most important identity.” And this strikes me as only a specific case of the way in which we flatten ourselves out to make our inner lives more manageable. We pick and choose what else we emphasize to better fit with the overall story that we’re telling. It’s just more obvious these days.

So, it’s not only about characters but entire attitudes and worldviews. The ego theory of self itself encourages flatness, as opposed to the (Humean and Buddhist) bundle theory of self. It’s interesting to note how much more complex identity has become in the modern world and how much more accepting we are of allowing people to have multiple identities than in the past. This has happened at the very same time that fluid intelligence has drastically increased, and of course fluid intelligence correlates with liberal-mindedness (correlating as well with FFM openness, MBTI perceiving, Hartmann’s thin boundary type, etc.).

Cultures have a way of taking psychological cues from their heads of state. As Forster says of one critical objection to flat characters: “Queen Victoria, they argue, cannot be summed up in a single sentence, so what excuse remains for Mrs. Micawber?” When the president himself is flat—which is another way of saying that he can no longer surprise us on the downside—it has implications both for our literature and for our private lives.

At the moment, the entire society is under extreme duress. This at least temporarily rigidifies the ego boundaries. Complexity of identity becomes less attractive to the average person at such times. Still, the most liberal-minded (typically radical leftists in the US) will be better at maintaining their psychological openness in the face of conflict, fear, and anxiety. As Trump is the ultimate flat character, look to the far left for those who will represent the ultimate round character. Mainstream liberals, as usual, will attempt to play to the middle and shift with the winds, taking up flat and round in turn. It’s a battle of not only ideological but psychological worldviews. And which comes to define our collective identity will dominate our society for the coming generation.

The process is already happening. And it shouldn’t astonish us if we all wake up one day to discover that the world is flat.

It’s an interesting moment. Our entire society is becoming more complex — in terms of identity, demographics, technology, media, and on and on. This requires that we develop a capacity for roundedness or else fall back on the simplifying rhetoric and reaction of conservative-mindedness, with the rigid absolutes of authoritarianism being the furthest reaches of flatness… and, yes, such flatness tends to be memorable (the reason it is so easy to make comparisons to someone like Hitler, who has become an extreme caricature of flatness). This is all the more reason for the liberal-minded to gain awareness of, and intellectual defenses against, the easy attraction of flat identities and worldviews, since in a battle of opposing flat characters the most conservative-minded will always win.

 

Dickinson’s Purse and Sword

A lesser known founding father is John Dickinson, but he should be better known considering how important he was at the time. His politics could today be described as moderate conservatism or maybe status quo liberalism. During the conflict with the British Empire, he hoped the colonial leaders would seek reconciliation. Yet even as he refused to sign the Declaration of Independence, not on principle but out of prudence, he didn’t stand in the way of those who supported it. And once war was under way, he served in the revolutionary armed forces. After that, he was a key figure in developing the Articles of Confederation and the Constitution.

Although a Federalist, he was highly suspicious of nationalism, the two being distinguished at the time. It might be noted that, if not for losing the war of rhetoric, the Anti-Federalists would be known as Federalists for they actually wanted a functioning federation. Indeed, Dickinson made arguments that are more Anti-Federalist in spirit. An example of this is his warning against a centralized government possessing both purse and sword, that is to say a powerful government that has both a standing army and the means of taxation to fund it without any need of consent of the governed. That is what the Articles protected against and the Constitution failed to do.

That warning remains unheeded to this day. And so the underlying issue remains silenced, and the conflict and tension remain unresolved. A lack of political foresight and moral courage was what caused the American Revolution; the problems (e.g., the division of power) that arose in the English Civil War and Glorious Revolution were still problems generations later. The class war and radical ideologies of the 17th century led to the decades of political strife and public outrage prior to the official start of the American Revolution. But the British leadership hoped to keep suppressing the growing unrest, much as the present American leadership hopes to do, and probably with the same eventual result.

What is interesting is how such things never go away and how non-radicals like Dickinson can end up giving voice to radical ideas. The idea of the purse strings being held by a free people, i.e., those taxed having the power of self-governance to determine their own taxation, is not that far off from Karl Marx speaking of workers controlling the means of production — both implying that a society is only free to the degree its people are free. Considering Dickinson freed the slaves he inherited, even a reluctant revolutionary such as himself could envision the radicalism of a free people.

* * *

On a related thought, one of the most radical documents, of course, was Thomas Jefferson’s strongly worded Declaration of Independence. It certainly was radical when it was written and, as with much else from that revolutionary era, maintains its radicalism to this day.

The Articles of Confederation, originally drafted by Dickinson, adhered closely to the guiding vision of the Declaration. Even though Dickinson was against declaring independence until all alternatives had been exhausted, once independence had been declared he was committed to following the course of moral principle set down by that initial revolutionary document.

Yet the Constitution, that is the second constitution after the Articles, was directly unconstitutional and downright authoritarian according to the Articles. The men of the Constitutional Convention blatantly disregarded their constitutional mandate in replacing the Articles without constitutional consensus and consent; that is to say, it was a coup (many of the revolutionary soldiers didn’t take this coup lightly and continued the revolutionary war through such acts as Shays’ Rebellion, which was violently put down by the very Federal military that the Anti-Federalists warned about).

But worse still, the Constitution ended up being a complete betrayal of the Declaration, which set out the principles that justified a revolution in the first place. As Howard Schwartz put it:

“The Declaration itself, by contrast, never envisioned a Federal government at all. Ironically, then, if one wants to see the political philosophy of the United States in the Declaration of Independence, one should theoretically be against any form of federal government and not just for a particular interpretation of its limited powers.”
(Liberty In America’s Founding Moment, Kindle Locations 5375-5378)

It does seem that the contradiction bothered Dickinson. But he wasn’t a contrarian by nature, much less a rabblerouser. Once it was determined a new constitution was going to be passed, he sought the best compromise he saw as possible, although on principle he still refused to show consent by being a signatory. As for Jefferson, whether or not he ever thought the Constitution was a betrayal of the Declaration, he assumed any constitution was an imperfect document and that no constitution would or should last beyond his own generation.

* * *

Letters from a Farmer
Letter IX

No free people ever existed, or can ever exist, without keeping, to use a common, but strong expression, “the purse strings,” in their own hands. Where this is the case, they have a constitutional check upon the administration, which may thereby be brought into order without violence: But where such a power is not lodged in the people, oppression proceeds uncontrolled in its career, till the governed, transported into rage, seek redress in the midst of blood and confusion.

Letter II

Nevertheless I acknowledge the proceedings of the convention furnish my mind with many new and strong reasons, against a complete consolidation of the states. They tend to convince me, that it cannot be carried with propriety very far—that the convention have gone much farther in one respect than they found it practicable to go in another; that is, they propose to lodge in the general government very extensive powers—powers nearly, if not altogether, complete and unlimited, over the purse and the sword. But, in its organization, they furnish the strongest proof that the proper limbs, or parts of a government, to support and execute those powers on proper principles (or in which they can be safely lodged) cannot be formed. These powers must be lodged somewhere in every society; but then they should be lodged where the strength and guardians of the people are collected. They can be wielded, or safely used, in a free country only by an able executive and judiciary, a respectable senate, and a secure, full, and equal representation of the people. I think the principles I have premised or brought into view, are well founded—I think they will not be denied by any fair reasoner. It is in connection with these, and other solid principles, we are to examine the constitution. It is not a few democratic phrases, or a few well formed features, that will prove its merits; or a few small omissions that will produce its rejection among men of sense; they will inquire what are the essential powers in a community, and what are nominal ones; where and how the essential powers shall be lodged to secure government, and to secure true liberty.

Letter III

When I recollect how lately congress, conventions, legislatures, and people contended in the cause of liberty, and carefully weighed the importance of taxation, I can scarcely believe we are serious in proposing to vest the powers of laying and collecting internal taxes in a government so imperfectly organized for such purposes. Should the United States be taxed by a house of representatives of two hundred members, which would be about fifteen members for Connecticut, twenty-five for Massachusetts, etc., still the middle and lower classes of people could have no great share, in fact, in taxation. I am aware it is said, that the representation proposed by the new constitution is sufficiently numerous; it may be for many purposes; but to suppose that this branch is sufficiently numerous to guard the rights of the people in the administration of the government, in which the purse and sword are placed, seems to argue that we have forgotten what the true meaning of representation is. I am sensible also, that it is said that congress will not attempt to lay and collect internal taxes; that it is necessary for them to have the power, though it cannot probably be exercised. I admit that it is not probable that any prudent congress will attempt to lay and collect internal taxes, especially direct taxes: but this only proves that the power would be improperly lodged in congress, and that it might be abused by imprudent and designing men.

Letter XVII

It is said, that as the federal head must make peace and war, and provide for the common defense, it ought to possess all powers necessary to that end: that powers unlimited, as to the purse and sword, to raise men and monies, and form the militia, are necessary[168] to that end; and, therefore, the federal head ought to possess them. This reasoning is far more specious than solid: it is necessary that these powers so exist in the body politic, as to be called into exercise whenever necessary for the public safety; but it is by no means true, that the man, or congress of men, whose duty it more immediately is to provide for the common defense, ought to possess them without limitation. But clear it is, that if such men, or congress, be not in a situation to hold them without danger to liberty, he or they ought not to possess them. It has long been thought to be a well-founded position, that the purse and sword ought not to be placed in the same hands in a free government. Our wise ancestors have carefully separated them—placed the sword in the hands of their king, even under considerable limitations, and the purse in the hands of the commons alone: yet the king makes peace and war, and it is his duty to provide for the common defense of the nation. This authority at least goes thus far—that a nation, well versed in the science of government, does not conceive it to be necessary or expedient for the man entrusted with the common defense and general tranquility, to possess unlimitedly the powers in question, or even in any considerable degree.

The Spell of Inner Speech

Inner speech is not a universal trait of humanity, according to Russell T. Hurlburt. That is unsurprising. Others go much further in arguing that inner speech was once non-existent for entire civilizations.

My favorite version of this argument is Julian Jaynes’s theory of the bicameral mind. Jaynes noted how bicameralism can be used as an interpretative frame to understand many of the psychological oddities still found in modern society. His theory goes a long way in explaining hypnosis, for instance. From that perspective, I’ve long suspected that post-bicameral consciousness isn’t as well established as is generally assumed. David Abrams observes (see at end of post for full context):

“It is important to realize that the now common experience of “silent” reading is a late development in the story of the alphabet, emerging only during the Middle Ages, when spaces were first inserted between the words in a written manuscript (along with various forms of punctuation), enabling readers to distinguish the words of a written sentence without necessarily sounding them out audibly. Before this innovation, to read was necessarily to read aloud, or at the very least to mumble quietly; after the twelfth century it became increasingly possible to internalize the sounds, to listen inwardly to phantom words (or the inward echo of words once uttered).”

Internal experience took a long time to take hold. During the Enlightenment, there was still contentious debate about whether or not all humans shared a common capacity for inner experience — that is, did peasants, slaves and savages (and women) have minds basically the same as rich white men, presumably as rational actors with independent-mindedness and abstract thought. The rigid boundaries of the hyper-individualistic ego-mind required millennia to be built up within the human psyche, initially considered the sole province of the educated elite that, from Plato onward, was portrayed as a patriarchal and paternalistic enlightened aristocracy.

The greatest of radical ideals was to challenge this self-serving claim of the privileged mind by demanding that all be treated as equals before God and government, in that through the ability to read all could have a personal relationship with God and through natural rights all could self-govern. Maybe it wasn’t merely a change in the perception of common humanity but a change within common humanity itself. As modernity came into dominance, the inner sense of self with the accompanying inner speech became an evermore prevalent experience. Something rare among the elite not too many centuries earlier had suddenly become common among the commoners.

With minds of their own, quite literally, the rabble became rabblerousers who no longer mindlessly bowed down to their betters. The external commands and demands of the ancien regime lost their grip as individuality became the norm. What replaced it was what Jaynes referred to as self-authorization, very much dependent on an inner voice. But it is interesting to speculate that it might have required such a long incubation period considering this new mindset had first taken root back in the Axial Age. It sometimes can be a slow process for new memes to filter across vast geographic populations and seep down into the masses.

So what might the premodern mentality have been like? At Hurlburt’s piece, I noticed some comments about personal experience. One anonymous person mentioned, after brain trauma, “LOSING my inner voice. It is a totally different sensation/experience of reality. […] It is totally unlike anything I had ever known, I felt “simple” my day to day routines where driven only by images related to my goals (example: seeing Toothbrush and knowing my goals is to brush my teeth) and whenever I needed to recite something or create thoughts for communication, it seemed I could only conjure up the first thoughts to come to my mind without any sort of filter. And I would mumble and whisper to myself in Lue of the inner voice. But even when mumbling and whispering there was NO VOICE in my head. Images, occasionally. Other than that I found myself being almost hyper-aware of my surroundings with my incoming visual stimuli as the primary focus throughout my day.”

This person said a close comparison was being in the zone, sometimes referred to as runner’s high. That got me thinking about various factors that can shut down the normal functioning of the egoic mind. Extreme physical activity forces the mind into a mode that isn’t experienced often or extensively by people in the modern world, a state of mind combining exhaustion, endorphins, and ketosis — a state of mind, on the other hand, that would have been far from uncommon before modernity, with some arguing ketosis was once the normal mode of neurocognitive functioning. Related to this, it has been argued that the abstractions of Enlightenment thought were fueled by the imperial sugar trade, maybe the first time a permanent non-ketogenic mindset was possible in the Western world. What sugar (i.e., glucose), especially when mixed with the other popular trade items of tea and coffee, makes possible is thinking and reading (i.e., inner experience) for long periods of time without mental tiredness. During the Enlightenment, the modern mind was born out of a drugged-up buzz. That is one interpretation. Whatever the cause, something changed.

Also, in the comment section of that article, I came across a perfect description of self-authorization. Carla said that, “There are almost always words inside my head. In fact, I’ve asked people I live with to not turn on the radio in the morning. When they asked why, they thought my answer was weird: because it’s louder than the voice in my head and I can’t perform my morning routine without that voice.” We are all like that to some extent. But for most of us, self-authorization has become so natural as to largely go unnoticed. Unlike Carla, the average person learns to hear their own inner voice despite external sounds. I’m willing to bet that, if tested, Carla would show results of having thin mental boundaries and probably an accordingly weaker egoic will to force her self-authorization onto situations. Some turn to sugar and caffeine (or else nicotine and other drugs) to help shore up rigid thick boundaries and maintain focus in this modern world filled with distractions — likely a contributing factor to drug addiction.

In Abrams’s book, The Spell of the Sensuous, he emphasizes the connection between sight and sound. By way of reading, seeing words becomes hearing words in one’s own mind. This is made possible by the perceptual tendency to associate sight and sound, the two main indicators of movement, with a living other such as an animal moving through the underbrush. Maybe this is what creates the sense of a living other within, a Jaynesian consciousness as interiorized metaphorical space. The magic of hearing words inside puts a spell on the mind, invoking a sense of inner being separate from the outer world. This is how reading can conjure forth an entire visuospatial experience of a narratized world, sometimes as compellingly real as our mundane lives, or more so. To hear and see, even if only imagined inwardly, is to make real.

Yet many lose the ability to visualize as they age. I wonder if that has to do with how the modern world until recently has been almost exclusively focused on text. It’s only now that a new generation has been so fully raised on the visual potency of 24/7 cable and the online world, and unlike past generations they might remain more visually-oriented into old age. The loss of visual imagination might have been more of a quirk of printed text, the visual not so much disappearing as being subverted into sound as the ego’s own voice became insular. But even when we are unaware of it, maybe the visual remains as the light in the background that makes interior space visible, like a lamp in a sonorous cave, the lamp barely offering enough light to allow us to follow the sound further into the darkness. Bertrand Russell went so far as to argue that “mental imagery is the essence of the meaning of words in most cases” (Bertrand Russell: Unconscious Terrors; Murder, Rage and Mental Imagery). It is the visual that makes the aural come alive with meaning — as Russell put it:

“it is nevertheless the possibility of a memory image in the child and an imagination image in the hearer that makes the essence of the ‘meaning’ of the words. In so far as this is absent, the words are mere counters, capable of meaning, but not at the moment possessing it.”

Jaynes resolves the seeming dilemma by proposing the visuospatial as a metaphorical frame in which the mind operates, rather than itself being the direct focus of thought. And to combine this with Russell’s view, as the visual recedes from awareness, abstract thought recedes from the visceral sense of meaning of the outer world. This is how modern humanity, ever more lost in thought, has lost contact with the larger world of nature and universe, with a shrinking number of people still regularly experiencing a wilderness vista or the full starry sky. Our entire world turns inward and loses its vividness, becomes smaller, the dividing boundaries growing thicker. Our minds become ruled by Russell’s counters of meaning (i.e., symbolic proxies), rather than by meaning directly. That may be changing, though, in this new era of visually-saturated media. Even books, as audiobooks, can now be heard outwardly in the voice of another. The rigid walls of the ego, so carefully constructed over centuries, are being cracked open again. If so, we might see a merging back together of the separated senses, which could manifest as a return of synaesthesia as a common experience and with it a resurgence of metaphorical thought that hews close to the sensory world, the fertile ground of meaning. About a talk by Vilayanur S. Ramachandran, Maureen Seaberg writes (The Sea of Similitude):

“The refined son of an Indian diplomat explains that synesthesia was discovered by Sir Francis Galton, cousin of Charles Darwin, and that its name is derived from the Greek words for joined sensations. Next, he says something that really gets me to thinking – that there is greater cross wiring in the brains of synesthetes. This has enormous implications. “Now, if you assume that this greater cross wiring and concepts are also in different parts of the brain [than just where the synesthesia occurs], then it’s going to create a greater propensity towards metaphorical thinking and creativity in people with synesthesia. And, hence, the eight times more common incidence of synesthesia among poets, artists and novelists,” he says.

“In 2005, Dr. Ramachandran and his colleagues at the University of California at San Diego identified where metaphors are likely generated in the brain by studying people who could no longer understand metaphor because of brain damage. Proving once again the maxim that nature speaks through exceptions, they tested four patients who had experienced injuries to the left angular gyrus region. In May 2005, Scientific American reported on this and pointed out that although the subjects were bright and good communicators, when the researchers presented them with common proverbs and metaphors such as “the grass is always greener on the other side” and “reaching for the stars,” the subjects interpreted the sayings literally almost all the time. Their metaphor centers – now identified – had been compromised by the damage and the people just didn’t get the symbolism. Interestingly, synesthesia has also been found to occur mostly in the fusiform and angular gyrus – it’s in the same neighborhood. […]

“Facility with metaphor is a “thing” in synesthesia. Not only do Rama’s brain studies prove it, but I’ve noticed synesthetes seldom choose the expected, clichéd options when forming the figures of speech that describe a thing in a way that is symbolic to explain an idea or make comparisons. It would be more enviable were it not completely involuntary and automatic. In our brains without borders, it just works that way. Our neuronal nets are more interwoven”

The meeting of synaesthesia and metaphor opens up to our greater, if largely forgotten, humanity. As Jaynes and many others have made clear, those in the distant past and those still living in isolated tribes experience the world far differently than we do. This can be seen in the odd use of language in ancient texts, which we may take as odd turns of phrase, as mere metaphor. But what if these people, so foreign to us, took their own metaphors quite literally, so to speak? In another post by Maureen Seaberg (The Shamanic Synesthesia of the Kalahari Bushmen), there are clear examples of this:

“The oldest cultures found that ecstatic experience expands our awareness and in its most special form, the world is experienced through more sensory involvement and presence, he says. “The shaman’s transition into ecstasy brought about what we call synesthesia today. But there was more involved than just passively experiencing it. The ecstatic shaman also performed sound, movement, and made reference to vision, smell, and taste in ways that helped evoke extraordinary experiences in others. They were both recipients and performers of multi-sensory theatres. Of course this is nothing like the weekend workshop shamans of the new age who are day dreaming rather than shaking wildly…. Rhythm, especially syncopated African drumming, excites the whole body to feel more intensely. Hence, it is valued as a means of ‘getting there’. A shaman (an ecstatic performer) played all the senses.” If this seems far afield from Western experience, consider that in Exodus 20:18, as Moses ascended Mt. Sinai to retrieve the tablets, the people present were said to have experienced synesthesia. “And all the people saw the voices” of heaven, it says. And we know synesthesia happens even in non-synesthetes during meditation — a heightened state.”

The metaphorical ground of synaesthesia is immersive and participatory. It is a world alive with meaning. It was a costly trade, sacrificing this to create our separate and sensory-deprived egoic consciousness, despite all that we gained in wielding power over the world. During the Bronze Age, when written language still had metaphorical mud on its living roots, what Jaynes calls the bicameral mind would have been closer to this animistic mindset. A metaphor in that experiential reality was far more than what we now know of as metaphor. The world was alive with beings and voices. This isn’t only the origin of our humanity, for it remains the very ground of our being, the source of what we have become — language most of all (“First came the temple, then the city.”):

“Looking at an even more basic level, I was reading Mark Changizi’s Harnessed. He argues that (p. 11), “Speech and music culturally evolved over time to be simulacra of nature.” That reminded me of Lynne Kelly’s description of how indigenous people would use vocal techniques and musical instruments to mimic natural sounds, as a way of communicating and passing on complex knowledge of the world. Changizi’s argument is based on the observation that “human speech sounds like solid-object physical events” and that “music sounds like humans moving and behaving (usually expressively)” (p. 19). Certain sounds give information about what is going on in the immediate environment, specifically sounds related to action and movement. This sound-based information processing would make for an optimal basis of language formation. This is given support from evidence that Kelly describes in her own books.

“This also touches upon the intimate relationship language has to music, dance, and gesture. Language is inseparable from our experience of being in the world, involving multiple senses or even synaesthesia. The overlapping of sensory experience may have been more common to earlier societies. Research has shown that synaesthetes have better capacity for memory: “spatial sequence synesthetes have a built-in and automatic mnemonic reference” (Wikipedia). That is relevant considering that memory is central to oral societies, as Kelly demonstrates. And the preliterate memory systems are immensely vast, potentially incorporating the equivalent of thousands of pages of info. Knowledge and memory isn’t just in the mind but within the entire sense of self, sense of community, and sense of place.”

We remain haunted by the past (“Beyond that, there is only awe.”):

“Through authority and authorization, immense power and persuasion can be wielded. Jaynes argues that it is central to the human mind, but that in developing consciousness we learned how to partly internalize the process. Even so, Jaynesian self-consciousness is never a permanent, continuous state and the power of individual self-authorization easily morphs back into external forms. This is far from idle speculation, considering authoritarianism still haunts the modern mind. I might add that the ultimate power of authoritarianism, as Jaynes makes clear, isn’t overt force and brute violence. Outward forms of power are only necessary to the degree that external authorization is relatively weak, as is typically the case in modern societies.

If you are one of those who clearly hears a voice in your head, appreciate all that went into creating and constructing it. This is an achievement of our entire civilization. But also realize how precarious is this modern mind. It’s a strange thing to contemplate. What is that voice that speaks? And who is it that is listening? Now imagine what it would be like if, as with the bicameral gods going silent, your own god-like ego went silent. And imagine this silence spreading across all of society, an entire people suddenly having lost their self-authorization to act, their very sense of identity and social reality. Don’t take for granted that voice within.

* * *

Below is a passage from a book I read long ago, maybe back when it was first published in 1996. The description of cognitive change could almost have been lifted straight out of Julian Jaynes’s book from twenty years earlier (e.g., the observation of the gods becoming silent). Abrams doesn’t mention Jaynes, and it’s possible he was unfamiliar with it, whether or not there was an indirect influence. The kinds of ideas Jaynes was entertaining had been floating around for a long while before him as well. The unique angle that Abrams brings in this passage is framing it all within synaesthesia.

The Spell of the Sensuous
by David Abrams
p. 69

Although contemporary neuroscientists study “synaesthesia”—the overlap and blending of the senses—as though it were a rare or pathological experience to which only certain persons are prone (those who report “seeing sounds,” “hearing colors,” and the like), our primordial, preconceptual experience, as Merleau-Ponty makes evident, is inherently synaesthetic. The intertwining of sensory modalities seems unusual to us only to the extent that we have become estranged from our direct experience (and hence from our primordial contact with the entities and elements that surround us):

…Synaesthetic perception is the rule, and we are unaware of it only because scientific knowledge shifts the center of gravity of experience, so that we have unlearned how to see, hear, and generally speaking, feel, in order to deduce, from our bodily organization and the world as the physicist conceives it, what we are to see, hear, and feel. 20

pp. 131-144

It is remarkable that none of the major twentieth-century scholars who have directed their attention to the changes wrought by literacy have seriously considered the impact of writing—and, in particular, phonetic writing—upon the human experience of the wider natural world. Their focus has generally centered upon the influence of phonetic writing on the structure and deployment of human language, 53 on patterns of cognition and thought, 54 or upon the internal organization of human societies. 55 Most of the major research, in other words, has focused upon the alphabet’s impact on processes either internal to human society or presumably “internal” to the human mind. Yet the limitation of such research—its restriction within the bounds of human social interaction and personal interiority—itself reflects an anthropocentric bias wholly endemic to alphabetic culture. In the absence of phonetic literacy, neither society, nor language, nor even the experience of “thought” or consciousness, can be pondered in isolation from the multiple nonhuman shapes and powers that lend their influence to all our activities (we need think only of our ceaseless involvement with the ground underfoot, with the air that swirls around us, with the plants and animals that we consume, with the daily warmth of the sun and the cyclic pull of the moon). Indeed, in the absence of formal writing systems, human communities come to know themselves primarily as they are reflected back by the animals and the animate landscapes with which they are directly engaged. This epistemological dependence is readily evidenced, on every continent, by the diverse modes of identification commonly categorized under the single term “totemism.”

It is exceedingly difficult for us literates to experience anything approaching the vividness and intensity with which surrounding nature spontaneously presents itself to the members of an indigenous, oral community. Yet as we saw in the previous chapters, Merleau-Ponty’s careful phenomenology of perceptual experience had begun to disclose, underneath all of our literate abstractions, a deeply participatory relation to things and to the earth, a felt reciprocity curiously analogous to the animistic awareness of indigenous, oral persons. If we wish to better comprehend the remarkable shift in the human experience of nature that was occasioned by the advent and spread of phonetic literacy, we would do well to return to the intimate analysis of sensory perception inaugurated by Merleau-Ponty. For without a clear awareness of what reading and writing amounts to when considered at the level of our most immediate, bodily experience, any “theory” regarding the impact of literacy can only be provisional and speculative.

Although Merleau-Ponty himself never attempted a phenomenology of reading or writing, his recognition of the importance of synaesthesia—the overlap and intertwining of the senses—resulted in a number of experiential analyses directly pertinent to the phenomenon of reading. For reading, as soon as we attend to its sensorial texture, discloses itself as a profoundly synaesthetic encounter. Our eyes converge upon a visible mark, or a series of marks, yet what they find there is a sequence not of images but of sounds, something heard; the visible letters, as we have said, trade our eyes for our ears. Or, rather, the eye and the ear are brought together at the surface of the text—a new linkage has been forged between seeing and hearing which ensures that a phenomenon apprehended by one sense is instantly transposed into the other. Further, we should note that this sensory transposition is mediated by the human mouth and tongue; it is not just any kind of sound that is experienced in the act of reading, but specifically human, vocal sounds—those which issue from the human mouth. It is important to realize that the now common experience of “silent” reading is a late development in the story of the alphabet, emerging only during the Middle Ages, when spaces were first inserted between the words in a written manuscript (along with various forms of punctuation), enabling readers to distinguish the words of a written sentence without necessarily sounding them out audibly. Before this innovation, to read was necessarily to read aloud, or at the very least to mumble quietly; after the twelfth century it became increasingly possible to internalize the sounds, to listen inwardly to phantom words (or the inward echo of words once uttered). 56

Alphabetic reading, then, proceeds by way of a new synaesthetic collaboration between the eye and the ear, between seeing and hearing. To discern the consequences of this new synaesthesia, we need to examine the centrality of synaesthesia in our perception of others and of the earth.

The experiencing body (as we saw in Chapter 2) is not a self-enclosed object, but an open, incomplete entity. This openness is evident in the arrangement of the senses: I have these multiple ways of encountering and exploring the world—listening with my ears, touching with my skin, seeing with my eyes, tasting with my tongue, smelling with my nose—and all of these various powers or pathways continually open outward from the perceiving body, like different paths diverging from a forest. Yet my experience of the world is not fragmented; I do not commonly experience the visible appearance of the world as in any way separable from its audible aspect, or from the myriad textures that offer themselves to my touch. When the local tomcat comes to visit, I do not have distinctive experiences of a visible cat, an audible cat, and an olfactory cat; rather, the tomcat is precisely the place where these separate sensory modalities join and dissolve into one another, blending as well with a certain furry tactility. Thus, my divergent senses meet up with each other in the surrounding world, converging and commingling in the things I perceive. We may think of the sensing body as a kind of open circuit that completes itself only in things, and in the world. The differentiation of my senses, as well as their spontaneous convergence in the world at large, ensures that I am a being destined for relationship: it is primarily through my engagement with what is not me that I effect the integration of my senses, and thereby experience my own unity and coherence. 57 […]

The diversity of my sensory systems, and their spontaneous convergence in the things that I encounter, ensures this interpenetration or interweaving between my body and other bodies—this magical participation that permits me, at times, to feel what others feel. The gestures of another being, the rhythm of its voice, and the stiffness or bounce in its spine all gradually draw my senses into a unique relation with one another, into a coherent, if shifting, organization. And the more I linger with this other entity, the more coherent the relation becomes, and hence the more completely I find myself face-to-face with another intelligence, another center of experience.

In the encounter with the cyclist, as in my experience of the blackbird, the visual focus induced and made possible the participation of the other senses. In different situations, other senses may initiate the synaesthesia: our ears, when we are at an orchestral concert; or our nostrils, when a faint whiff of burning leaves suddenly brings images of childhood autumns; our skin, when we are touching or being touched by a lover. Nonetheless, the dynamic conjunction of the eyes has a particularly ubiquitous magic, opening a quivering depth in whatever we focus upon, ceaselessly inviting the other senses into a concentrated exchange with stones, squirrels, parked cars, persons, snow-capped peaks, clouds, and termite-ridden logs. This power—the synaesthetic magnetism of the visual focus—will prove crucial for our understanding of literacy and its perceptual effects.

The most important chapter of Merleau-Ponty’s last, unfinished work is entitled “The Intertwining—The Chiasm.” The word “chiasm,” derived from an ancient Greek term meaning “crisscross,” is in common use today only in the field of neurobiology: the “optic chiasm” is that anatomical region, between the right and left hemispheres of the brain, where neuronal fibers from the right eye and the left eye cross and interweave. As there is a chiasm between the two eyes, whose different perspectives continually conjoin into a single vision, so—according to Merleau-Ponty—there is a chiasm between the various sense modalities, such that they continually couple and collaborate with one another. Finally, this interplay of the different senses is what enables the chiasm between the body and the earth, the reciprocal participation—between one’s own flesh and the encompassing flesh of the world—that we commonly call perception. 59

Phonetic reading, of course, makes use of a particular sensory conjunction—that between seeing and hearing. And indeed, among the various synaesthesias that are common to the human body, the confluence (or chiasm) between seeing and hearing is particularly acute. For vision and hearing are the two “distance” senses of the human organism. In contrast to touch and proprioception (inner-body sensations), and unlike the chemical senses of taste and smell, seeing and hearing regularly place us in contact with things and events unfolding at a substantial distance from our own visible, audible body.

My visual gaze explores the reflective surfaces of things, their outward color and contour. By following the play of light and shadow, the dance of colors, and the gradients of repetitive patterns, the eyes—themselves gleaming surfaces—keep me in contact with the multiple outward facets, or faces, of the things arrayed about me. The ears, meanwhile, are more inward organs; they emerge from the depths of my skull like blossoms or funnels, and their participation tells me less about the outer surface than the interior substance of things. For the audible resonance of beings varies with their material makeup, as the vocal calls of different animals vary with the size and shape of their interior cavities and hollows. I feel their expressive cries resound in my skull or my chest, echoing their sonorous qualities with my own materiality, and thus learn of their inward difference from myself. Looking and listening bring me into contact, respectively, with the outward surfaces and with the interior voluminosity of things, and hence where these senses come together, I experience, over there, the complex interplay of inside and outside that is characteristic of my own self-experience. It is thus at those junctures in the surrounding landscape where my eyes and my ears are drawn together that I most readily feel myself confronted by another power like myself, another life. […]

Yet our ears and our eyes are drawn together not only by animals, but by numerous other phenomena within the landscape. And, strangely, wherever these two senses converge, we may suddenly feel ourselves in relation with another expressive power, another center of experience. Trees, for instance, can seem to speak to us when they are jostled by the wind. Different forms of foliage lend each tree a distinctive voice, and a person who has lived among them will easily distinguish the various dialects of pine trees from the speech of spruce needles or Douglas fir. Anyone who has walked through cornfields knows the uncanny experience of being scrutinized and spoken to by whispering stalks. Certain rock faces and boulders request from us a kind of auditory attentiveness, and so draw our ears into relation with our eyes as we gaze at them, or with our hands as we touch them—for it is only through a mode of listening that we can begin to sense the interior voluminosity of the boulder, its particular density and depth. There is an expectancy to the ears, a kind of patient receptivity that they lend to the other senses whenever we place ourselves in a mode of listening—whether to a stone, or a river, or an abandoned house. That so many indigenous people allude to the articulate speech of trees or of mountains suggests the ease with which, in an oral culture, one’s auditory attention may be joined with the visual focus in order to enter into a living relation with the expressive character of things.

Far from presenting a distortion of their factual relation to the world, the animistic discourse of indigenous, oral peoples is an inevitable counterpart of their immediate, synaesthetic engagement with the land that they inhabit. The animistic proclivity to perceive the angular shape of a boulder (while shadows shift across its surface) as a kind of meaningful gesture, or to enter into felt conversations with clouds and owls—all of this could be brushed aside as imaginary distortion or hallucinatory fantasy if such active participation were not the very structure of perception, if the creative interplay of the senses in the things they encounter was not our sole way of linking ourselves to those things and letting the things weave themselves into our experience. Direct, prereflective perception is inherently synaesthetic, participatory, and animistic, disclosing the things and elements that surround us not as inert objects but as expressive subjects, entities, powers, potencies.

And yet most of us seem, today, very far from such experience. Trees rarely, if ever, speak to us; animals no longer approach us as emissaries from alien zones of intelligence; the sun and the moon no longer draw prayers from us but seem to arc blindly across the sky. How is it that these phenomena no longer address us, no longer compel our involvement or reciprocate our attention? If participation is the very structure of perception, how could it ever have been brought to a halt? To freeze the ongoing animation, to block the wild exchange between the senses and the things that engage them, would be tantamount to freezing the body itself, stopping it short in its tracks. And yet our bodies still move, still live, still breathe. If we no longer experience the enveloping earth as expressive and alive, this can only mean that the animating interplay of the senses has been transferred to another medium, another locus of participation.

It is the written text that provides this new locus. For to read is to enter into a profound participation, or chiasm, with the inked marks upon the page. In learning to read we must break the spontaneous participation of our eyes and our ears in the surrounding terrain (where they had ceaselessly converged in the synaesthetic encounter with animals, plants, and streams) in order to recouple those senses upon the flat surface of the page. As a Zuñi elder focuses her eyes upon a cactus and hears the cactus begin to speak, so we focus our eyes upon these printed marks and immediately hear voices. We hear spoken words, witness strange scenes or visions, even experience other lives. As nonhuman animals, plants, and even “inanimate” rivers once spoke to our tribal ancestors, so the “inert” letters on the page now speak to us! This is a form of animism that we take for granted, but it is animism nonetheless—as mysterious as a talking stone.

And indeed, it is only when a culture shifts its participation to these printed letters that the stones fall silent. Only as our senses transfer their animating magic to the written word do the trees become mute, the other animals dumb.

But let us be more precise, recalling the distinction between different forms of writing discussed at the start of this chapter. As we saw there, pictographic, ideographic, and even rebuslike writing still makes use of, or depends upon, our sensorial participation with the natural world. As the tracks of moose and bear refer beyond themselves to those entities of whom they are the trace, so the images in early writing systems draw their significance not just from ourselves but from sun, moon, vulture, jaguar, serpent, lightning—from all those sensorial, never strictly human powers, of which the written images were a kind of track or tracing. To be sure, these signs were now inscribed by human hands, not by the hooves of deer or the clawed paws of bear; yet as long as they presented images of paw prints and of clouds, of sun and of serpent, these characters still held us in relation to a more-than-human field of discourse. Only when the written characters lost all explicit reference to visible, natural phenomena did we move into a new order of participation. Only when those images came to be associated, alphabetically, with purely human-made sounds, and even the names of the letters lost all worldly, extrahuman significance, could speech or language come to be experienced as an exclusively human power. For only then did civilization enter into the wholly self-reflexive mode of animism, or magic, that still holds us in its spell:

We know what the animals do, what are the needs of the beaver, the bear, the salmon, and other creatures, because long ago men married them and acquired this knowledge from their animal wives. Today the priests say we lie, but we know better. The white man has been only a short time in this country and knows very little about the animals; we have lived here thousands of years and were taught long ago by the animals themselves. The white man writes everything down in a book so that it will not be forgotten; but our ancestors married animals, learned all their ways, and passed on this knowledge from one generation to another. 60

That alphabetic reading and writing was itself experienced as a form of magic is evident from the reactions of cultures suddenly coming into contact with phonetic writing. Anthropological accounts from entirely different continents report that members of indigenous, oral tribes, after seeing the European reading from a book or from his own notes, came to speak of the written pages as “talking leaves,” for the black marks on the flat, leaflike pages seemed to talk directly to the one who knew their secret.

The Hebrew scribes never lost this sense of the letters as living, animate powers. Much of the Kabbalah, the esoteric body of Jewish mysticism, is centered around the conviction that each of the twenty-two letters of the Hebrew aleph-beth is a magic gateway or guide into an entire sphere of existence. Indeed, according to some kabbalistic accounts, it was by combining the letters that the Holy One, Blessed Be He, created the ongoing universe. The Jewish kabbalists found that the letters, when meditated upon, would continually reveal new secrets; through the process of tzeruf, the magical permutation of the letters, the Jewish scribe could bring himself into successively greater states of ecstatic union with the divine. Here, in other words, was an intensely concentrated form of animism—a participation conducted no longer with the sculpted idols and images worshiped by other tribes but solely with the visible letters of the aleph-beth.

Perhaps the most succinct evidence for the potent magic of written letters is to be found in the ambiguous meaning of our common English word “spell.” As the Roman alphabet spread through oral Europe, the Old English word “spell,” which had meant simply to recite a story or tale, took on the new double meaning: on the one hand, it now meant to arrange, in the proper order, the written letters that constitute the name of a thing or a person; on the other, it signified a magic formula or charm. Yet these two meanings were not nearly as distinct as they have come to seem to us today. For to assemble the letters that make up the name of a thing, in the correct order, was precisely to effect a magic, to establish a new kind of influence over that entity, to summon it forth! To spell, to correctly arrange the letters to form a name or a phrase, seemed thus at the same time to cast a spell, to exert a new and lasting power over the things spelled. Yet we can now realize that to learn to spell was also, and more profoundly, to step under the influence of the written letters ourselves, to cast a spell upon our own senses. It was to exchange the wild and multiplicitous magic of an intelligent natural world for the more concentrated and refined magic of the written word.

The Bulgarian scholar Tzvetan Todorov has written an illuminating study of the Spanish conquest of the Americas, based on extensive study of documents from the first months and years of contact between European culture and the native cultures of the American continent. 61 The lightning-swift conquest of Mexico by Cortéz has remained a puzzle for historians, since Cortéz, leading only a few hundred men, managed to seize the entire kingdom of Montezuma, who commanded several hundred thousand. Todorov concludes that Cortéz’s astonishing and rapid success was largely a result of the discrepancy between the different forms of participation engaged in by the two societies. The Aztecs, whose writing was highly pictorial, necessarily felt themselves in direct communication with an animate, more-than-human environment. “Everything happens as if, for the Aztecs, [written] signs automatically and necessarily proceed from the world they designate…”; the Aztecs are unable to use their spoken words, or their written characters, to hide their true intentions, since these signs belong to the world around them as much as to themselves. 62 To be duplicitous with signs would be, for the Aztecs, to go against the order of nature, against the encompassing speech or logos of an animate world, in which their own tribal discourse was embedded.

The Spaniards, however, suffer no such limitation. Possessed of an alphabetic writing system, they experience themselves not in communication with the sensuous forms of the world, but solely with one another. The Aztecs must answer, in their actions as in their speech, to the whole sensuous, natural world that surrounds them; the Spanish need answer only to themselves.

In contact with this potent new magic, with these men who participate solely with their own self-generated signs, whose speech thus seems to float free of the surrounding landscape, and who could therefore be duplicitous and lie even in the presence of the sun, the moon, and the forest, the Indians felt their own rapport with those sensuous powers, or gods, beginning to falter:

The testimony of the Indian accounts, which is a description rather than an explanation, asserts that everything happened because the Mayas and the Aztecs lost control of communication. The language of the gods has become unintelligible, or else these gods fell silent. “Understanding is lost, wisdom is lost” [from the Mayan account of the Spanish invasion]…. As for the Aztecs, they describe the beginning of their own end as a silence that falls: the gods no longer speak to them. 63

In the face of aggression from this new, entirely self-reflexive form of magic, the native peoples of the Americas—like those of Africa and, later, of Australia—felt their own magics wither and become useless, unable to protect them.

Inequality in the Anthropocene

This post was inspired by an article on the possibility of increasing suicides because of climate change. What occurred to me is that all the social and psychological problems seen with climate change are also seen with inequality (as shown in decades of research) and, to a lesser extent, with extreme poverty — although high poverty combined with low inequality isn’t necessarily problematic at all (e.g., hunter-gatherers who are physically and psychologically healthy despite being poor in terms of material wealth and private property).

Related to this, one article mentioned a study finding that the chances of war increase when detrimental weather events are combined with ethnic diversity. And that reminded me of the research showing that diversity only leads to lowered trust when combined with segregation. A major problem with climate-related refugee crises is that they increase segregation, in the form of refugee camps and immigrant ghettoization. That segregation will lead to further conflict and destruction of the social fabric, which in turn will promote further segregation — a vicious cycle that will be hard to pull out of before the crash, especially as environmental conditions lead to droughts, famines, and plagues.

As economic and environmental conditions worsen, there are some symptoms that will become increasingly apparent and problematic. Based on the inequality and climatology research, we should expect increased stress, anxiety, fear, xenophobia, bigotry, suicide, homicide, aggressive behavior, short-term thinking, reactionary politics, and generally crazy and bizarre behavior. This will likely result in civil unrest, violent conflict, race wars, genocides, terrorism, militarization, civil wars, revolutions, international conflict, resource-based wars, world wars, authoritarianism, ethno-nationalism, right-wing populism, etc.

The only defense against this will be a strong, courageous left-wing response. That would require eliminating not only the derangement of the GOP but also the corruption of the DNC by replacing both with a genuinely democratic and socialist movement. Otherwise, our society will descend into collective madness and our entire civilization will be under existential threat. There is no other option.

* * *

The Great Acceleration and the Great Divergence: Vulnerability in the Anthropocene
by Rob Nixon

Most Anthropocene scholars date the new epoch to the late-eighteenth-century beginnings of industrialization. But there is a second phase to the Anthropocene, the so-called great acceleration, beginning circa 1950: an exponential increase in human-induced changes to the carbon cycle and nitrogen cycle and in ocean acidification, global trade, and consumerism, as well as the rise of international forms of governance like the World Bank and the IMF.

However, most accounts of the great acceleration fail to position it in relation to neoliberalism’s recent ascent, although most of the great acceleration has occurred during the neoliberal era. One marker of neoliberalism has been a widening chasm of inequality between the superrich and the ultrapoor: since the late 1970s, we have been living through what Timothy Noah calls “the great divergence.” Noah’s subject is the economic fracturing of America, the new American gilded age, but the great divergence has scarred most societies, from China and India to Indonesia, South Africa, Nigeria, Italy, Spain, Ireland, Costa Rica, Jamaica, Australia, and Bangladesh.

My central problem with the dominant mode of Anthropocene storytelling is its failure to articulate the great acceleration to the great divergence. We need to acknowledge that the grand species narrative of the Anthropocene—this geomorphic “age of the human”—is gaining credence at a time when, in society after society, the idea of the human is breaking apart economically, as the distance between affluence and abandonment is increasing. It is time to remold the Anthropocene as a shared story about unshared resources. When we examine the geology of the human, let us also pay attention to the geopolitics of the new stratigraphy’s layered assumptions.

Neoliberalism loves watery metaphors: the trickle-down effect, global flows, how a rising tide lifts all boats. But talk of a rising tide raises other specters: the coastal poor, who will never get storm-surge barriers; Pacific Islanders in the front lines of inundation; Arctic peoples, whose livelihoods are melting away—all of them exposed to the fallout from Anthropocene histories of carbon extraction and consumption in which they played virtually no part.

We are not all in this together
by Ian Angus

So the 21st century is being defined by a combination of record-breaking inequality with record-breaking climate change. That combination is already having disastrous impacts on the majority of the world’s people. The line is not only between rich and poor, or comfort and poverty: it is a line between survival and death.

Climate change and extreme weather events are not devastating a random selection of human beings from all walks of life. There are no billionaires among the dead, no corporate executives living in shelters, no stockbrokers watching their children die of malnutrition. Overwhelmingly, the victims are poor and disadvantaged. Globally, 99 percent of weather disaster casualties are in developing countries, and 75 percent of them are women.

The pattern repeats at every scale. Globally, the South suffers far more than the North. Within the South, the very poorest countries, mostly in Africa south of the Sahara, are hit hardest. Within each country, the poorest people—women, children, and the elderly—are most likely to lose their homes and livelihoods from climate change, and most likely to die.

The same pattern occurs in the North. Despite the rich countries’ overall wealth, when hurricanes and heatwaves hit, the poorest neighborhoods are hit hardest, and within those neighborhoods the primary victims are the poorest people.

Chronic hunger, already a severe problem in much of the world, will be made worse by climate change. As Oxfam reports: “The world’s most food-insecure regions will be hit hardest of all.”

Unchecked climate change will lock the world’s poorest people in a downward spiral, leaving hundreds of millions facing malnutrition, water scarcity, ecological threats, and loss of livelihood. Children will be among the primary victims, and the effects will last for lifetimes: studies in Ethiopia, Kenya, and Niger show that being born in a drought year increases a child’s chances of being irreversibly stunted by 41 to 72 percent.

Environmental racism has left black Americans three times more likely to die from pollution
By Bartees Cox

Without a touch of irony, the EPA celebrated Black History Month by publishing a report that finds black communities face dangerously high levels of pollution. African Americans are more likely to live near landfills and industrial plants that pollute water and air and erode quality of life. Because of this, more than half of the 9 million people living near hazardous waste sites are people of color, and black Americans are three times more likely to die from exposure to air pollutants than their white counterparts.

The statistics provide evidence for what advocates call “environmental racism.” Communities of color aren’t suffering by chance, they say. Rather, these conditions are the result of decades of indifference from people in power.

Environmental racism is dangerous. Trump’s EPA doesn’t seem to care.
by P.R. Lockhart

Studies have shown that black and Hispanic children are more likely to develop asthma than their white peers, as are poor children, with research suggesting that higher levels of smog and air pollution in communities of color are a factor. A 2014 study found that people of color live in communities that have more nitrogen dioxide, a pollutant that exacerbates asthma.

The EPA’s own research further supported this. Earlier this year, a paper from the EPA’s National Center for Environmental Assessment found that when it comes to air pollutants that contribute to issues like heart and lung disease, black people are exposed to 1.5 times more of the pollutant than white people, while Hispanic people were exposed to about 1.2 times the amount of non-Hispanic whites. People in poverty had 1.3 times the exposure of those not in poverty.

Trump’s EPA Concludes Environmental Racism Is Real
by Vann R. Newkirk II

Late last week, even as the Environmental Protection Agency and the Trump administration continued a plan to dismantle many of the institutions built to address those disproportionate risks, researchers embedded in the EPA’s National Center for Environmental Assessment released a study indicating that people of color are much more likely to live near polluters and breathe polluted air. Specifically, the study finds that people in poverty are exposed to more fine particulate matter than people living above poverty. According to the study’s authors, “results at national, state, and county scales all indicate that non-Whites tend to be burdened disproportionately to Whites.”

The study focuses on particulate matter, a group of both natural and manmade microscopic suspensions of solids and liquids in the air that serve as air pollutants. Anthropogenic particulates include automobile fumes, smog, soot, oil smoke, ash, and construction dust, all of which have been linked to serious health problems. Particulate matter was named a known definite carcinogen by the International Agency for Research on Cancer, and it’s been named by the EPA as a contributor to several lung conditions, heart attacks, and possible premature deaths. The pollutant has been implicated in asthma prevalence and severity, low birth weights, and high blood pressure.

As the study details, previous works have also linked disproportionate exposure to particulate matter and America’s racial geography. A 2016 study in Environment International found that long-term exposure to the pollutant is associated with racial segregation, with more highly segregated areas suffering higher levels of exposure. A 2012 article in Environmental Health Perspectives found that overall levels of particulate matter exposure for people of color were higher than those for white people. That article also provided a breakdown of just what kinds of particulate matter counts in the exposures. It found that while differences in overall particulate matter by race were significant, differences for some key particles were immense. For example, Hispanics faced rates of chlorine exposure that are more than double those of whites. Chronic chlorine inhalation is known for degrading cardiac function.

The conclusions from scientists at the National Center for Environmental Assessment not only confirm that body of research, but advance it in a top-rate public-health journal. They find that black people are exposed to about 1.5 times more particulate matter than white people, and that Hispanics had about 1.2 times the exposure of non-Hispanic whites. The study found that people in poverty had about 1.3 times more exposure than people above poverty. Interestingly, it also finds that for black people, the proportion of exposure is only partly explained by the disproportionate geographic burden of polluting facilities, meaning the magnitude of emissions from individual factories appears to be higher in minority neighborhoods.

These findings join an ever-growing body of literature that has found that both polluters and pollution are often disproportionately located in communities of color. In some places, hydraulic-fracturing oil wells are more likely to be sited in those neighborhoods. Researchers have found the presence of benzene and other dangerous aromatic chemicals to be linked to race. Strong racial disparities are suspected in the prevalence of lead poisoning.

It seems that almost anywhere researchers look, there is more evidence of deep racial disparities in exposure to environmental hazards. In fact, the idea of environmental justice—or the degree to which people are treated equally and meaningfully involved in the creation of the human environment—was crystallized in the 1980s with the aid of a landmark study illustrating wide disparities in the siting of facilities for the disposal of hazardous waste. Leaders in the environmental-justice movement have posited—in places as prestigious and rigorous as United Nations publications and numerous peer-reviewed journals—that environmental racism exists as the inverse of environmental justice, when environmental risks are allocated disproportionately along the lines of race, often without the input of the affected communities of color.

The idea of environmental racism is, like all mentions of racism in America, controversial. Even in the age of climate change, many people still view the environment mostly as a set of forces of nature, one that cannot favor or disfavor one group or another. And even those who recognize that the human sphere of influence shapes almost every molecule of the places in which humans live, from the climate to the weather to the air they breathe, are often loath to concede that racism is a factor. To many people, racism often connotes purposeful decisions by a master hand, and many see existing segregation as a self-sorting or poverty problem. Couldn’t the presence of landfills and factories in disproportionately black neighborhoods have more to do with the fact that black people tend to be disproportionately poor and thus live in less desirable neighborhoods?

But last week’s study throws more water on that increasingly tenuous line of thinking. While it lacks the kind of complex multivariate design that can really disentangle the exact effects of poverty and race, the finding that race has a stronger effect on exposure to pollutants than poverty indicates that something beyond just the concentration of poverty among black people and Latinos is at play. As the study’s authors write: “A focus on poverty to the exclusion of race may be insufficient to meet the needs of all burdened populations.” Their finding that the magnitude of pollution seems to be higher in communities of color than the number of polluters would suggest indicates that regulations and business decisions are strongly dependent on whether people of color are around. In other words, they might be discriminatory.

This is a remarkable finding, and not only because it could provide one more policy linkage to any number of health disparities, from heart disease to asthma rates in black children that are double those of white children. But the study also stands as an implicit rebuke to the very administration that allowed its release.

Violence: Categories & Data, Causes & Demographics

Most violent crime correlates to social problems in general. Most social problems in general correlate to economic factors such as poverty, but even more so inequality. And in a country like the US, most economic factors correlate to social disadvantage and racial oppression, from economic segregation (redlining, sundown towns, etc.) to environmental racism (ghettos located in polluted urban areas, high toxicity rates among minorities, etc.) — consider how areas with historically high rates of slavery have higher levels of poverty and inequality today, impacting not just blacks but also whites living in those communities.

Socialized Medicine & Externalized Costs

About 40 percent of deaths worldwide are caused by water, air and soil pollution, concludes a Cornell researcher. Such environmental degradation, coupled with the growth in world population, are major causes behind the rapid increase in human diseases, which the World Health Organization has recently reported. Both factors contribute to the malnourishment and disease susceptibility of 3.7 billion people, he says.

Percentages of Suffering and Death

Even accepting the data that Pinker uses, it must be noted that he isn’t including all violent deaths. Consider economic sanctions and neoliberal exploitation, vast poverty and inequality forcing people to work long hours in unsafe and unhealthy conditions, covert operations to overthrow governments and destabilize regions, anthropogenic climate change with its disasters, environmental destruction and ecosystem collapse, loss of arable land and food sources, pollution and toxic dumps, etc. All of this would involve food scarcity, malnutrition, starvation, droughts, rampant disease, refugee crises, diseases related to toxicity and stress, etc., along with all kinds of other consequences to people living in desperation and squalor.

This has all been intentionally caused by governments, corporations, and other organizations seeking power and profit while externalizing costs and harm. In my lifetime, the fatalities from this large-scale, often slow violence and intergenerational trauma could add up to hundreds of millions or maybe billions of lives cut short. Plus, as neoliberal globalization worsens inequality, there is a direct link to higher rates of homicides, suicides, and stress-related diseases among the most impacted populations. Yet none of these deaths would be counted as violent, no matter how horrific they were for the victims. And those like Pinker adding up the numbers would never have to acknowledge this overwhelming reality of suffering. It can’t be seen in the official data on violence, as the causes are disconnected from the effects. But why should only a small part of the harm and suffering get counted as violence?

Learning to Die in the Anthropocene: Reflections on the End of a Civilization
by Roy Scranton
Kindle Locations 860-888 (see here)

Consider: Once among the most modern, Westernized nations in the Middle East, with a robust, highly educated middle class, Iraq has been blighted for decades by imperialist aggression, criminal gangs, interference in its domestic politics, economic liberalization, and sectarian feuding. Today it is being torn apart between a corrupt petrocracy, a breakaway Kurdish enclave, and a self-declared Islamic fundamentalist caliphate, while a civil war in neighboring Syria spills across its borders. These conflicts have likely been caused in part and exacerbated by the worst drought the Middle East has seen in modern history. Since 2006, Syria has been suffering crippling water shortages that have, in some areas, caused 75 percent crop failure and wiped out 85 percent of livestock, left more than 800,000 Syrians without a livelihood, and sent hundreds of thousands of impoverished young men streaming into Syria’s cities. 90 This drought is part of long-term warming and drying trends that are transforming the Middle East. 91 Not just water but oil, too, is elemental to these conflicts. Iraq sits on the fifth-largest proven oil reserves in the world. Meanwhile, the Islamic State has been able to survive only because it has taken control of most of Syria’s oil and gas production. We tend to think of climate change and violent religious fundamentalism as isolated phenomena, but as Retired Navy Rear Admiral David Titley argues, “you can draw a very credible climate connection to this disaster we call ISIS right now.” 92

A few hundred miles away, Israeli soldiers spent the summer of 2014 killing Palestinians in Gaza. Israel has also been suffering drought, while Gaza has been in the midst of a critical water crisis exacerbated by Israel’s military aggression. The International Committee for the Red Cross reported that during summer 2014, Israeli bombers targeted Palestinian wells and water infrastructure. 93 It’s not water and oil this time, but water and gas: some observers argue that Israel’s “Operation Protective Edge” was intended to establish firmer control over the massive Leviathan natural gas field, discovered off the coast of Gaza in the eastern Mediterranean in 2010. 94

Meanwhile, thousands of miles to the north, Russian-backed separatists fought fascist paramilitary forces defending the elected government of Ukraine, which was also suffering drought. 95 Russia’s role as an oil and gas exporter in the region and the natural gas pipelines running through Ukraine from Russia to Europe cannot but be key issues in the conflict. Elsewhere, droughts in 2014 sent refugees from Guatemala and Honduras north to the US border, devastated crops in California and Australia, and threatened millions of lives in Eritrea, Somalia, Ethiopia, Sudan, Uganda, Afghanistan, India, Morocco, Pakistan, and parts of China. Across the world, massive protests and riots have swept Bosnia and Herzegovina, Venezuela, Brazil, Turkey, Egypt, and Thailand, while conflicts rage on in Colombia, Libya, the Central African Republic, Sudan, Nigeria, Yemen, and India. And while the world burns, the United States has been playing chicken with Russia over control of Eastern Europe and the melting Arctic, and with China over control of Southeast Asia and the South China Sea, threatening global war on a scale not seen in seventy years. This is our present and future: droughts and hurricanes, refugees and border guards, war for oil, water, gas, and food.

Donald Trump Is the First Demagogue of the Anthropocene
by Robinson Meyer

First, climate change could easily worsen the inequality that has already hollowed out the Western middle class. A recent analysis in Nature projected that the effects of climate change will reduce the average person’s income by 23 percent by the end of the century. The U.S. Environmental Protection Agency predicts that unmitigated global warming could cost the American economy $200 billion this century. (Some climate researchers think the EPA undercounts these estimates.)

Future consumers will not register these costs so cleanly, though—there will not be a single climate-change debit exacted on everyone’s budgets at year’s end. Instead, the costs will seep in through many sources: storm damage, higher power rates, real-estate depreciation, unreliable and expensive food. Climate change could get laundered, in other words, becoming just one more symptom of a stagnant and unequal economy. As quality of life declines, and insurance premiums rise, people could feel that they’re being robbed by an aloof elite.

They won’t even be wrong. It’s just that due to the chemistry of climate change, many members of that elite will have died 30 or 50 years prior. […]

Malin Mobjörk, a senior researcher at the Stockholm International Peace Research Institute, recently described a “growing consensus” in the literature that climate change can raise the risk of violence. And the U.S. Department of Defense already considers global warming a “threat multiplier” for national security. It expects hotter temperatures and acidified oceans to destabilize governments and worsen infectious pandemics.

Indeed, climate change may already be driving mass migrations. Last year, the Democratic presidential candidate Martin O’Malley was mocked for suggesting that a climate-change-intensified drought in the Levant—the worst drought in 900 years—helped incite the Syrian Civil War, thus kickstarting the Islamic State. The evidence tentatively supports him. Since the outbreak of the conflict, some scholars have recognized that this drought pushed once-prosperous farmers into Syria’s cities. Many became unemployed and destitute, aggravating internal divisions in the run-up to the war. […]

They were not disappointed. Heatwaves, droughts, and other climate-related exogenous shocks do correlate to conflict outbreak—but only in countries primed for conflict by ethnic division. In the 30-year period, nearly a quarter of all ethnic-fueled armed conflict coincided with a climate-related calamity. By contrast, in the set of all countries, war only correlated to climatic disaster about 9 percent of the time.

“We cannot find any evidence for a generalizable trigger relationship, but we do find evidence for some risk enhancement,” Schleussner told me. In other words, climate disaster will not cause a war, but it can influence whether one begins.

Why climate change is very bad for your health
by Geordan Dickinson Shannon

Ecosystems

We don’t live in isolation from other ecosystems. From large-scale weather events, through to the food we eat daily, right down to the minute organisms colonising our skin and digestive systems, we live and breathe in co-dependency with our environment.

A change in the delicate balance of micro-organisms has the potential to lead to disastrous effects. For example, microbial proliferation – which is predicted in warmer temperatures driven by climate change – may lead to more enteric infections (caused by viruses and bacteria that enter the body through the gastrointestinal tract), such as salmonella food poisoning and increased cholera outbreaks related to flooding and warmer coastal and estuarine water.

Changes in temperature, humidity, rainfall, soil moisture and sea-level rise, caused by climate change, are also affecting the transmission of dangerous insect-borne infectious diseases. These include malaria, dengue, Japanese encephalitis, chikungunya, West Nile virus, lymphatic filariasis, plague, tick-borne encephalitis, Lyme disease, rickettsioses, and schistosomiasis.

Through climate change, the pattern of human interaction will likely change and so will our interactions with disease-spreading insects, especially mosquitoes. The World Health Organisation has also stressed the impact of climate change on the reproductive, survival and bite rates of insects, as well as their geographic spread.

Climate refugees

Perhaps the most disastrous effect of climate change on human health is the emergence of large-scale forced migration from the loss of local livelihoods and weather events – something that is recognised by the United Nations High Commission on Human Rights. Sea-level rise, decreased crop yield, and extreme weather events will force many people from their lands and livelihoods, while refugees in vulnerable areas also face amplified conditions such as fewer food supplies and more insect-borne diseases. And those who are displaced put a significant health and economic burden on surrounding communities.

The International Red Cross estimates that there are more environmental refugees than political ones. Around 36m people were displaced by natural disasters in 2009, a figure that is predicted to rise to more than 50m by 2050. In one worst-case scenario, as many as 200m people could become environmental refugees.

Not a level playing field

Climate change has emerged as a major driver of global health inequalities. As J. Timmons Roberts, professor of Environmental Studies and Sociology at Brown University, put it:

Global warming is all about inequality, both in who will suffer most its effects and in who created the problem in the first place.

Global climate change further polarises the haves and the have-nots. The Intergovernmental Panel on Climate Change predicts that climate change will hit poor countries hardest. For example, the loss of healthy life years in low-income African countries is predicted to be 500 times that in Europe. The number of people in the poorest countries most vulnerable to hunger is predicted by Oxfam International to increase by 20% in 2050. And many of the major killers affecting developing countries, such as malaria, diarrhoeal illnesses, malnutrition and dengue, are highly sensitive to climate change, which would place a further disproportionate burden on poorer nations.

Most disturbingly, countries with weaker health infrastructure – generally situated in the developing world – will be the least able to cope with the effects of climate change. The world’s poorest regions don’t yet have the technical, economic, or scientific capacity to prepare or adapt.

Predictably, those most vulnerable to climate change are not those who contribute most to it. China, the US, and the European Union combined have contributed more than half the world’s total carbon dioxide emissions in the last few centuries. By contrast, and unfairly, countries that contributed the least carbon emissions (measured in per capita emissions of carbon dioxide) include many African nations and small Pacific islands – exactly those countries which will be least prepared and most affected by climate change.

Here’s Why Climate Change Will Increase Deaths by Suicide
by Francis Vergunst, Helen Louise Berry & Massimiliano Orri

Suicide is already among the leading causes of death worldwide. For people aged 15-55 years, it is among the top five causes of death. Worldwide nearly one million people die by suicide each year — more than all deaths from war and murder combined.

Using historical temperature records from the United States and Mexico, the researchers showed that suicide rates increased by 0.7 per cent in the U.S. and by 2.1 per cent in Mexico when the average monthly temperatures rose by 1 C.

The researchers calculated that if global temperatures continue to rise at these rates, between now and 2050 there could be 9,000 to 40,000 additional suicides in the U.S. and Mexico alone. This is roughly equivalent to the number of additional suicides that follow an economic recession.

Spikes during heat waves

It has been known for a long time that suicide rates spike during heat waves. Hotter weather has been linked with higher rates of hospital admissions for self-harm, suicide and violent suicides, as well as increases in population-level psychological distress, particularly in combination with high humidity.

Another recent study, which combined the results of previous research on heat and suicide, concluded there is “a significant and positive association between temperature rises and incidence of suicide.”

Why this is so remains unclear. There is a well-documented link between rising temperatures and interpersonal violence, and suicide could be understood as an act of violence directed at oneself. Lisa Page, a researcher in psychology at King’s College London, notes:

“While speculative, perhaps the most promising mechanism to link suicide with high temperatures is a psychological one. High temperatures have been found to lead individuals to behave in a more disinhibited, aggressive and violent manner, which might in turn result in an increased propensity for suicidal acts.”

Hotter temperatures are taxing on the body. They cause an increase in the stress hormone cortisol, reduce sleep quality and disrupt people’s physical activity routines. These changes can reduce well-being and increase psychological distress.

Disease, water shortages, conflict and war

The effects of hotter temperatures on suicides are symptomatic of a much broader and more expansive problem: the impact of climate change on mental health.

Climate change will increase the frequency and severity of heat waves, droughts, storms, floods and wildfires. It will extend the range of infectious diseases such as Zika virus, malaria and Lyme disease. It will contribute to food and water shortages and fuel forced migration, conflict and war.

These events can have devastating effects on people’s health, homes and livelihoods and directly impact psychological health and well-being.

But effects are not limited to people who suffer direct losses — for example, it has been estimated that up to half of Hurricane Katrina survivors developed post-traumatic stress disorder even when they had suffered no direct physical losses.

The feelings of loss that follow catastrophic events, including a sense of loss of safety, can erode community well-being and further undermine mental health resilience.

The Broken Ladder
by Keith Payne
pp. 3-4 (see here)

[W]hen the level of inequality becomes too large to ignore, everyone starts acting strange.

But they do not act strange in just any old way. Inequality affects our actions and our feelings in the same systematic, predictable fashion again and again. It makes us shortsighted and prone to risky behavior, willing to sacrifice a secure future for immediate gratification. It makes us more inclined to make self-defeating decisions. It makes us believe weird things, superstitiously clinging to the world as we want it to be rather than as it is. Inequality divides us, cleaving us into camps not only of income but also of ideology and race, eroding our trust in one another. It generates stress and makes us all less healthy and less happy.

Picture a neighborhood full of people like the ones I’ve described above: shortsighted, irresponsible people making bad choices; mistrustful people segregated by race and by ideology; superstitious people who won’t listen to reason; people who turn to self-destructive habits as they cope with the stress and anxieties of their daily lives. These are the classic tropes of poverty and could serve as a stereotypical description of the population of any poor inner-city neighborhood or depressed rural trailer park. But as we will see in the chapters ahead, inequality can produce these tendencies even among the middle class and wealthy individuals.

pp. 119-120 (see here)

But how can something as abstract as inequality or social comparisons cause something as physical as health? Our emergency rooms are not filled with people dropping dead from acute cases of inequality. No, the pathways linking inequality to health can be traced through specific maladies, especially heart disease, cancer, diabetes, and health problems stemming from obesity. Abstract ideas that start as macroeconomic policies and social relationships somehow get expressed in the functioning of our cells.

To understand how that expression happens, we have to first realize that people from different walks of life die different kinds of deaths, in part because they live different kinds of lives. We saw in Chapter 2 that people in more unequal states and countries have poor outcomes on many health measures, including violence, infant mortality, obesity and diabetes, mental illness, and more. In Chapter 3 we learned that inequality leads people to take greater risks, and uncertain futures lead people to take an impulsive, live fast, die young approach to life. There are clear connections between the temptation to enjoy immediate pleasures versus denying oneself for the benefit of long-term health. We saw, for example, that inequality was linked to risky behaviors. In places with extreme inequality, people are more likely to abuse drugs and alcohol, more likely to have unsafe sex, and so on. Other research suggests that living in a high-inequality state increases people’s likelihood of smoking, eating too much, and exercising too little.

Essentialism On the Decline

Before getting to the topic of essentialism, let me take an indirect approach. In reading about paleolithic diets and traditional foods, a recurring theme is inflammation, specifically as it relates to the health of the gut-brain network and immune system.

The paradigm change this signifies is that seemingly separate diseases with different diagnostic labels often have underlying commonalities. They share overlapping sets of causal and contributing factors, biological processes and symptoms. This is why simple dietary changes can have a profound effect on numerous health conditions. For some, the diseased state expresses as mood disorders and for others as autoimmune disorders and for still others something entirely else, but there are immense commonalities between them all. The differences have more to do with how dysbiosis and dysfunction happens to develop, where it takes hold in the body, and so what symptoms are experienced.

From a paleo diet perspective in treating both patients and her own multiple sclerosis, Terry Wahls gets at this point in a straightforward manner (The Wahls Protocol, p. 47): “In a very real sense, we all have the same disease because all disease begins with broken, incorrect biochemistry and disordered communication within and between our cells. […] Inside, the distinction between these autoimmune diseases is, frankly, fairly arbitrary”. In How Emotions Are Made, Lisa Feldman Barrett wrote (Kindle Locations 3834-3850):

“Inflammation has been a game-changer for our understanding of mental illness. For many years, scientists and clinicians held a classical view of mental illnesses like chronic stress, chronic pain, anxiety, and depression. Each ailment was believed to have a biological fingerprint that distinguished it from all others. Researchers would ask essentialist questions that assume each disorder is distinct: “How does depression impact your body? How does emotion influence pain? Why do anxiety and depression frequently co-occur?” 9

“More recently, the dividing lines between these illnesses have been evaporating. People who are diagnosed with the same-named disorder may have greatly diverse symptoms— variation is the norm. At the same time, different disorders overlap: they share symptoms, they cause atrophy in the same brain regions, their sufferers exhibit low emotional granularity, and some of the same medications are prescribed as effective.

“As a result of these findings, researchers are moving away from a classical view of different illnesses with distinct essences. They instead focus on a set of common ingredients that leave people vulnerable to these various disorders, such as genetic factors, insomnia, and damage to the interoceptive network or key hubs in the brain (chapter 6). If these areas become damaged, the brain is in big trouble: depression, panic disorder, schizophrenia, autism, dyslexia, chronic pain, dementia, Parkinson’s disease, and attention deficit hyperactivity disorder are all associated with hub damage. 10

“My view is that some major illnesses considered distinct and “mental” are all rooted in a chronically unbalanced body budget and unbridled inflammation. We categorize and name them as different disorders, based on context, much like we categorize and name the same bodily changes as different emotions. If I’m correct, then questions like, “Why do anxiety and depression frequently co-occur?” are no longer mysteries because, like emotions, these illnesses do not have firm boundaries in nature.”

What jumped out at me was the conventional view of disease as essentialist, and hence the related essentialism in biology and psychology. This is exemplified by genetic determinism, such as when it informs race realism. It’s easy for most well-informed people to dismiss race realists, but essentialism takes on much more insidious forms that are harder to detect and root out. When scientists claimed to find a gay gene, some gay men quickly took this genetic determinism as a defense against the fundamentalist view that homosexuality is a choice and a sin. It turned out that there was no gay gene (by the way, this incident demonstrated how, in reacting to reactionaries, even leftist activists can be drawn into the reactionary mind). Not only is there no gay gene, but there are no simple and absolute gender divisions at all — as I previously explained (Is the Tide Starting to Turn on Genetics and Culture?):

“Recent research has taken this even further in showing that neither sex nor gender is binary (1, 2, 3, 4, & 5), as genetics and its relationship to environment, epigenetics, and culture is more complex than was previously realized. It’s far from uncommon for people to carry genetics of both sexes, even multiple DNA. It has to do with diverse interlinking and overlapping causal relationships. We aren’t all that certain at this point what ultimately determines the precise process of conditions, factors, and influences in how and why any given gene expresses or not and how and why it expresses in a particular way.”

The attraction of essentialism is powerful. And as shown in numerous cases, the attraction can be found across the political spectrum, as it offers a seemingly strong defense in diverting attention away from other factors. Similar to the gay gene, many people defend neurodiversity as if some people are simply born a particular way and therefore we can’t and shouldn’t seek to do anything to change or improve their condition, much less cure it or prevent it in future generations.

For example, those on the high-functioning end of the autism spectrum will occasionally defend their condition as a gift, an ability to think and perceive differently. That is fine as far as it goes, but from a scientific perspective we should still find it concerning that conditions like this are on a drastic rise and that this can’t be explained by greater rates of diagnosis alone. Whether or not one believes the world would be a better place with more people with autism, this shouldn’t be left as a fatalistic vision of an evolutionary leap, especially considering most on the autism spectrum aren’t high functioning — instead, we should try to understand why it is happening and what it means.

Researchers have found that there are prospective causes to be studied. Consider propionate, a substance discussed by Alanna Collen (10% Human, p. 83): “although propionate was an important compound in the body, it was also used as a preservative in bread products – the very foods many autistic children crave. To top it all off, clostridia species are known to produce propionate. In itself, propionate is not ‘bad’, but MacFabe began to wonder whether autistic children were getting an overdose.” This might explain why antibiotics helped many with autism, as they would have been knocking down the clostridia population that was boosting propionate. To emphasize this point, when rodents were injected with propionate, they exhibited the precise behaviors of autism and they too showed inflammation in the brain. The fact that autistics often have brain inflammation, an unhealthy condition, is strong evidence that autism shouldn’t be taken as mere neurodiversity (and, among autistics, the commonality of inflammation-related gut issues emphasizes this point).

There is no doubt that genetic determinism, like the belief in an eternal soul, can be comforting. We identify with our genes, as we inherit them and are born with them. But to speak of inflammation or propionate or whatever makes it seem like we are victims of externalities. And it means we aren’t isolated individuals to be blamed or to take credit for who we are. To return to Collen (pp. 88-89):

“In health, we like to think we are the products of our genes and experiences. Most of us credit our virtues to the hurdles we have jumped, the pits we have climbed out of, and the triumphs we have fought for. We see our underlying personalities as fixed entities – ‘I am just not a risk-taker’, or ‘I like things to be organised’ – as if these are a result of something intrinsic to us. Our achievements are down to determination, and our relationships reflect the strength of our characters. Or so we like to think.

“But what does it mean for free will and accomplishment, if we are not our own masters? What does it mean for human nature, and for our sense of self? The idea that Toxoplasma, or any other microbe inhabiting your body, might contribute to your feelings, decisions and actions, is quite bewildering. But if that’s not mind-bending enough for you, consider this: microbes are transmissible. Just as a cold virus or a bacterial throat infection can be passed from one person to another, so can the microbiota. The idea that the make-up of your microbial community might be influenced by the people you meet and the places you go lends new meaning to the idea of cultural mind-expansion. At its simplest, sharing food and toilets with other people could provide opportunity for microbial exchange, for better or worse. Whether it might be possible to pick up microbes that encourage entrepreneurship at a business school, or a thrill-seeking love of motorbiking at a race track, is anyone’s guess for now, but the idea of personality traits being passed from person to person truly is mind-expanding.”

This goes beyond the personal level, which makes the proposal all the more threatening. Our respective societies, communities, etc. might be heavily influenced by environmental factors that we can’t see. A ton of research shows the tremendous impact of parasites, heavy metal toxins, food additives, farm chemicals, hormones, hormone mimics, hormone disruptors, etc. Entire regions might be shaped by even a single species of parasite, such as how higher rates of Toxoplasma gondii infection in New England are directly correlated with higher rates of neuroticism (see What do we inherit? And from whom? & Uncomfortable Questions About Ideology).

Essentialism, though still popular, has taken numerous major hits in recent years. It once was the dominant paradigm and went largely unquestioned. Consider how, early last century, respectable fields and schools of thought such as anthropology, linguistic relativism, and behaviorism suggested that humans were largely products of environmental and cultural factors. This was the original basis of the attack on racism and race realism. In linguistics, Noam Chomsky overturned this view by positing the essentialist belief that there must exist within the human brain a language module with a universal grammar, though such a module has never been observed, much less proven. His theory was able to defeat and replace the non-essentialist theories because it was more satisfying to the WEIRD ideologies that were becoming a greater force in an increasingly WEIRD society.

Ever since Plato, Western civilization has been drawn toward the extremes of essentialism (as part of the larger Axial Age shift toward abstraction and idealism). Yet there has also long been a countervailing force; even among the ancients, non-essentialist interpretations of group identity were common. It wasn’t predetermined that essentialism would be so victorious as to have nearly obliterated the memory of all alternatives. It fit the spirit of the times for this past century, but now the public mood is shifting again. It’s no accident that, as social democracy and socialism regain favor, environmentalist explanations are making a comeback. But this is merely the revival of a particular Western tradition of thought, a tradition that is centuries old.

I was reminded of this in reading Liberty in America’s Founding Moment by Howard Schwartz. It’s an interesting shift of gears, since Schwartz doesn’t write about anything related to biology, health, or science. But he does indirectly get at an environmentalist critique, which comes out in his analysis of David Hume (1711-1776). I’ve mostly thought of Hume in terms of his bundle theory of the self, possibly borrowed from Buddhism, which he might have learned about from Christian missionaries returning from the East. However he came to it, the bundle theory argued that there is no singular, coherent self, though such a self was a central tenet of traditional Christian theology. Still, heretical views of the self were hardly new; some detect a possible Western precursor of Humean bundle theory in the ideas of Baruch Spinoza (1632-1677).

Whatever its origins in Western thought, environmentalism has been challenging essentialism since the Enlightenment. And in the case of Hume, we find an early social constructionist view of society and politics, one in which what motivates people isn’t some fixed essence. This puts a different spin on things, as Hume’s writings were widely read during the revolutionary era when the United States was founded. Thomas Jefferson, among others, was familiar with Hume and highly recommended his work. Hume represented the opposite position to John Locke. We are now returning to this old battle of ideas.