The End of History is a Beginning

Francis Fukuyama’s ideological change, from neocon to neoliberal, signaled within the intellectual class a parallel but distinct change that was happening in the broader population. The two are parallel tracks down which history, like a train, came barreling and rumbling, the end not in sight.

The difference between them is that the larger shift was ignored, until Donald Trump revealed the charade to be a charade, as it always was. It shouldn’t have come as a surprise, this populist moment. A new mood has been in the air that resonates with an old mood that some thought was lost in the past, the supposed end of history. It has been developing for a long while now. And when reform is denied, much worse comes along.

On that unhappy note, there is a reason why Trump used the old-school rhetoric of progressivism and fascism (with the corporatism underlying both ideologies). Just as there is a reason Steve Bannon, while calling himself a Leninist, gave voice to his hope that the present would be as exciting as the 1930s. Back in the early aughts, Fukuyama gave a warning about the dark turn of events, imperialistic ambition turned to hubris. No doubt he hoped to prevent the worst. But not many in the ruling class cared to listen. So here we are.

Whatever you think of him and his views, you have to give Fukuyama credit for the simple capacity to change his mind and, to some extent, admit he was wrong. He is a technocratic elitist with anti-populist animosity and paternalistic aspirations. But at the very least his motivations are sincere. One journalist, Andrew O’Hehir, described him this way:

“He even renounced the neoconservative movement after the Iraq war turned into an unmitigated disaster — although he had initially been among its biggest intellectual cheerleaders — and morphed into something like a middle-road Obama-Clinton Democrat. Today we might call him a neoliberal, meaning that not as leftist hate speech but an accurate descriptor.”

Not exactly a compliment. Many neocons and former neocons, when faced with the changes in the Republican Party, found the Clinton Democrats more attractive. For most of them, this conversion only happened with Trump’s campaign. Fukuyama stands out as one of the early trendsetters on the right, turning against Cold War neoconservatism before it was popular to do so (although did Fukuyama really change, or did he simply look to a softer form of neoconservatism?).

For good or ill, the Clinton Democrats, in the mainstream mind, now stand for the sane center, the moderate middle. To those like Fukuyama fearing a populist uprising, Trump marks the far right and Sanders the far left. That leaves the battleground between them to a milquetoast DNC establishment, holding onto power by its loosening fingertips. Fukuyama doesn’t necessarily offer us much in the way of grand insight or of practical use (here is a harsher critique). It’s still interesting to hear someone like him make such an about-face, though — if only in political rhetoric and not in fundamental principles. And for whatever it’s worth, he so far has been right about Trump’s weakness as a strongman.

I also appreciate that those like Francis Fukuyama and Charles Murray bring attention to the dangers of inequality and the failures of capitalism, even though I oppose the ideological bent of their respective conclusions. So, even as they disagree with populism as a response, they, like Teddy Roosevelt, do take seriously the gut-level assessment of what is being responded to. It’s all the more interesting that these views come from respectable figures who once represented the political right — much more stimulating rhetoric than anything coming out of the professional liberal class.

* * *

Donald Trump and the return of class: an interview with Francis Fukuyama

“What is happening in the politics of the US particularly, but also in other countries, is that identity in a form of nationality or ethnicity or race has become a proxy for class.”

Francis Fukuyama interview: “Socialism ought to come back”

Fukuyama, who studied political philosophy under Allan Bloom, the author of The Closing of the American Mind, at Cornell University, initially identified with the neoconservative movement: he was mentored by Paul Wolfowitz while a government official during the Reagan-Bush years. But by late 2003, Fukuyama had recanted his support for the Iraq war, which he now regards as a defining error alongside financial deregulation and the euro’s inept creation. “These are all elite-driven policies that turned out to be pretty disastrous, there’s some reason for ordinary people to be upset.”

The End of History was a rebuke to Marxists who regarded communism as humanity’s final ideological stage. How, I asked Fukuyama, did he view the resurgence of the socialist left in the UK and the US? “It all depends on what you mean by socialism. Ownership of the means of production – except in areas where it’s clearly called for, like public utilities – I don’t think that’s going to work.

“If you mean redistributive programmes that try to redress this big imbalance in both incomes and wealth that has emerged then, yes, I think not only can it come back, it ought to come back. This extended period, which started with Reagan and Thatcher, in which a certain set of ideas about the benefits of unregulated markets took hold, in many ways it’s had a disastrous effect.

“In social equality, it’s led to a weakening of labour unions, of the bargaining power of ordinary workers, the rise of an oligarchic class almost everywhere that then exerts undue political power. In terms of the role of finance, if there’s anything we learned from the financial crisis it’s that you’ve got to regulate the sector like hell because they’ll make everyone else pay. That whole ideology became very deeply embedded within the Eurozone, the austerity that Germany imposed on southern Europe has been disastrous.”

Fukuyama added, to my surprise: “At this juncture, it seems to me that certain things Karl Marx said are turning out to be true. He talked about the crisis of overproduction… that workers would be impoverished and there would be insufficient demand.”

Was Francis Fukuyama the first man to see Trump coming? – Paul Sagar | Aeon Essays

Ancient Atherosclerosis?

In reading about health, mostly about diet and nutrition, I regularly come across studies that are either poorly designed or poorly interpreted. The conclusions don’t always follow from the data or there are so many confounders that other conclusions can’t be discounted. Then the data gets used by dietary ideologues.

There is a major reason I appreciate the dietary debate among proponents of traditional, ancestral, paleo, low-carb, ketogenic, and other related views (anti-inflammatory diets, autoimmune diets, and protocols such as the Wahls Protocol for multiple sclerosis and the Bredesen Protocol for Alzheimer’s). This area of alternative debate leans heavily on questioning conventional certainties by digging deep into the available evidence. These diets seem to attract people capable of changing their minds, or maybe it is simply that many people who eventually come to these unconventional views do so after having already tried numerous other diets.

For example, Dr. Terry Wahls is a clinical professor of Internal Medicine, Epidemiology, and Neurology while also being Associate Chief of Staff at a Veterans Affairs hospital. She was as conventional as doctors come until she developed multiple sclerosis, began researching and experimenting, and eventually became a practitioner of functional medicine. Also, she went from being a hardcore vegetarian following mainstream dietary advice (avoided saturated fats, ate whole grains and legumes, etc) to embracing an essentially paleo diet; her neurologist at the Cleveland Clinic referred her to Dr. Loren Cordain’s paleo research at Colorado State University. Since that time, she has done medical research and, recently having procured funding, she is in the process of doing a study in order to further test her diet.

Her experimental attitude, both personally and scientifically, is common among those interested in these kinds of diets and functional medicine. This experimental attitude is necessary when one steps outside of conventional wisdom, something Dr. Wahls felt she had to do to save her own life — the kind of health crisis that leads many people to try a paleo, keto, or similar diet after trying all else (these diets include protocols for dealing with serious illnesses, such as ketosis being used medically to treat epileptic seizures). Contradicting the professional opinion of respected authorities (e.g., the American Heart Association), a diet like this tends to be an option of last resort for most people, something they come to after much failure and worsening health. That breeds a certain mentality.

On the other hand, it should be unsurprising that people raised on mainstream views and who hold onto those views into adulthood tend not to be people willing to entertain alternative views, no matter what the evidence indicates. This includes those working in the medical field. Some ask, why are doctors so stupid? As Dr. Michael Eades explains, it’s not that they’re stupid but that many of them are ignorant; to put it more nicely, they’re ill-informed. They simply don’t know because, like so many others, they are repeating what they’ve been told by other authority figures. The reason people stick to the known, even when it is wrong, is because it is familiar and so it feels safe (and because of liability, healthcare workers and health insurance companies like safe). Doctors, as with everyone else, are dependent on heuristics to deal with a complex world. And doctors, more than most people, are too busy to explore the large amounts of data out there, much less analyze it carefully for themselves.

This may relate to why most doctors tend not to make the best researchers, not to dismiss those attempting to do quality research. For that reason, you might think scientific researchers who aren’t doctors would be different from doctors. But that obviously isn’t always the case because, if it were, Ancel Keys’ low-quality research wouldn’t have dominated professional dietary advice for more than half a century. Keys wasn’t a medical professional or even trained in nutrition; rather, he was educated in a wide variety of other fields (economics, political science, zoology, oceanography, biology, and physiology), with his earliest research done on the physiology of fish.

I came across yet another example of this, although less extreme than that of Keys, and different in that at least some of the authors of the paper are medical doctors. The paper in question has 19 authors. It is “Atherosclerosis across 4000 years of human history: the Horus study of four ancient populations,” peer-reviewed and published (2013) in the highly respected Lancet (Keys’ work, one might note, was also highly respected and widely published). This study on atherosclerosis was well reported in the mainstream news outlets and received much attention from those critical of paleo diets, offered as a final nail in the coffin, claimed as absolute proof that ancient people were as unhealthy as we are.

The 19 authors conclude that, “atherosclerosis was common in four preindustrial populations, including a preagricultural hunter-gatherer population, and across a wide span of human history. It remains prevalent in contemporary human beings. The presence of atherosclerosis in premodern human beings suggests that the disease is an inherent component of human ageing and not characteristic of any specific diet or lifestyle.” There you have it. Heart disease is simply in our genetics — so take your statin meds like your doctor tells you to do, just shut up and quit asking questions, quit looking at all the contrary evidence.

But even ignoring all else, does the evidence from this paper support their conclusion? No. It doesn’t require much research or thought to ascertain the weakness of the case presented. In the paper itself, on multiple occasions including in the second table, they admit that three out of four of the populations were farmers who ate largely an agricultural diet and, of course, lived an agricultural lifestyle. At most, these examples can speak to the conditions of the Neolithic but not the Paleolithic. Of these three, only one was transitioning from an earlier foraging lifestyle, but as with the other two it was eating a higher-carb diet from the foods they farmed. Also, the best-known example of the bunch, the Egyptians, particularly points to the problems of an agricultural diet — as described by Michael Eades in Obesity in ancient Egypt:

“[S]everal thousand years ago when the future mummies roamed the earth their diet was a nutritionist’s nirvana. At least a nirvana for all the so-called nutritional experts of today who are recommending a diet filled with whole grains, fresh fruits and vegetables, and little meat, especially red meat. Follow such a diet, we’re told, and we will enjoy abundant health.

“Unfortunately, it didn’t work that way for the Egyptians. They followed such a diet simply because that’s all there was. There was no sugar – it wouldn’t be produced for another thousand or more years. The only sweet was honey, which was consumed in limited amounts. The primary staple was a coarse bread made of stone-ground, whole wheat. Animals were used as beasts of burden and were valued much more for the work they could do than for the meat they could provide. The banks of the Nile provided fertile soil for growing all kinds of fruits and vegetables, all of which were a part of the low-fat, high-carbohydrate Egyptian diet. And there were no artificial sweeteners, artificial coloring, artificial flavors, preservatives, or any of the other substances that are part of all the manufactured foods we eat today.

“Were the nutritionists of today right about their ideas of the ideal diet, the ancient Egyptians should have had abundant health. But they didn’t. In fact, they suffered pretty miserable health. Many had heart disease, high blood pressure, diabetes and obesity – all the same disorders that we experience today in the ‘civilized’ Western world. Diseases that Paleolithic man, our really ancient ancestors, appeared to escape.”

With unintentional humor, the authors of the paper note that, “None of the cultures were known to be vegetarian.” No shit. Maybe that is because until late in the history of agriculture there were no vegetarians and for good reason. As Weston Price noted, there is a wide variety of possible healthy diets as seen in traditional communities. Yet for all his searching for a healthy traditional community that was strictly vegan or even vegetarian, he could never find any; the closest examples were those that relied largely on such things as insects and grubs because of a lack of access to larger sources of protein and fat. On the other hand, the most famous vegetarian population, Hindu Indians, have one of the shortest lifespans (to be fair, though, that could be for other reasons such as poverty-related health issues).

Interestingly, there apparently has never been a study done comparing a herbivore diet and a carnivore diet, although one study touched on it while not quite eliminating all plants from the latter. As for fat, there is no evidence that it is problematic (vegetable oils are another issue), if anything the opposite: “In a study published in the Lancet, they found that people eating high quantities of carbohydrates, which are found in breads and rice, had a nearly 30% higher risk of dying during the study than people eating a low-carb diet. And people eating high-fat diets had a 23% lower chance of dying during the study’s seven years of follow-up compared to people who ate less fat” (Alice Park, The Low-Fat vs. Low-Carb Diet Debate Has a New Answer); and “The Mayo Clinic published a study in the Journal of Alzheimer’s Disease in 2012 demonstrating that in individuals favoring a high-carb diet, risk for mild cognitive impairment was increased by 89%, contrasted to those who ate a high-fat diet, whose risk was decreased by 44%” (WebMD interview of Dr. David Perlmutter). Yet the respectable authorities tell us that fat is bad for our health, making it paradoxical that many high-fat societies have better health. There are so many paradoxes, according to conventional thought, that one begins to wonder if conventional thought is the real paradox.

Now let me discuss the one group, the Unangan, that at first glance stands out from the rest. The authors describe “five Unangan people living in the Aleutian Islands of modern day Alaska (ca 1756–1930 CE, one excavation site).” Those mummies are far different from those of the other populations, which came much earlier in history. Four of the Unangan died around 1900 and one around 1850. Why does that matter? Because their entire world was being turned on its head at that time. The authors claim that, “The Unangan’s diet was predominately marine, including seals, sea lions, sea otters, whale, fish, sea urchins, and other shellfish and birds and their eggs. They were hunter-gatherers living in barabaras, subterranean houses to protect against the cold and fierce winds.” They base this claim on the assumption that these particular mummified Unangan had been eating the same diet as their ancestors for thousands of years, but the evidence points in the opposite direction.

Questioning this assumption, Jeffery Gerber explains that, “During life (before 1756–1930 CE) not more than a few short hundred years ago, the 5 Unangan/Aleut mummies were hardly part of an isolated group. The Fur Seal industry exploded in the 18th century bringing outside influence, often violent, from countries including Russia and Europe. These mummies during life, were probably exposed to foods (including sugar) different from their traditional diet and thus might not be representative of their hunter-gatherer origins” (Mummies, Clogged Arteries and Ancient Junk Food). One might add that, whatever Western foods may have been introduced, we do know of another factor — the Government of Nunavut’s official website states that, “European whalers regularly travelled to the Arctic in the late 17th and 18th century. When they visited, they introduced tobacco to Inuit.” Why is that significant? Tobacco is a known risk factor for atherosclerosis. Gideon Mailer and Nicola Hale, in their book Decolonizing the Diet, elaborate on the colonial history of the region (pp. 162-171):

“On the eve of Western contact, the indigenous population of present-day Alaska numbered around 80,000. They included the Alutiiq and Unangan communities, more commonly defined as Aleuts, Inupiat and Yupiit, Athabaskans, and the Tinglit and Haida groups. Most groups suffered a stark demographic decline from the mid-eighteenth century to the mid-nineteenth century, during the period of extended European — particularly Russian — contact. Oral traditions among indigenous groups in Alaska described whites as having taken hunting grounds from other related communities, warning of a similar fate to their own. The Unangan community, numbering more than 12,000 at contact, declined by around 80 percent by 1860. By as early as the 1820s, as Jacobs has described, “The rhythm of life had changed completely in the Unangan villages now based on the exigencies of the fur trade rather than the subsistence cycle, meaning that often villages were unable to produce enough food to keep them through the winter.” Here, as elsewhere, societal disruption was most profound in the nutritional sphere, helping account for the failure to recover population numbers following disease epidemics.

“In many parts of Alaska, Native American nutritional strategies and ecological niches were suddenly disrupted by the arrival of Spanish and Russian settlers. “Because,” as Saunt has pointed out “it was extraordinarily difficult to extract food from the challenging environment,” in Alaska and other Pacific coastal communities, “any disturbance was likely to place enormous stress on local residents.” One of indigenous Alaska’s most important ecological niches centered on salmon access points. They became steadily more important between the Paleo-Eskimo era around 4,200 years ago and the precontact period, but were increasingly threatened by Russian and American disruptions from the 1780s through the nineteenth century. Dependent on nutrients and omega fatty acids such as DHA from marine resources such as salmon, Aleut and Alutiiq communities also required other animal products, such as intestines, to prepare tools and waterproof clothing to take advantage of fishing seasons. Through the later part of the eighteenth century, however, Russian fur traders and settlers began to force them away from the coast with ruthless efficiency, even destroying their hunting tools and waterproof apparatus. The Russians were clear in their objectives here, with one of their men observing that the Native American fishing boats were “as indispensable as the plow and the horse for the farmer.”

“Here we are provided with another tragic case study, which allows us to consider the likely association between disrupted access to omega-3 fatty acids such as DHA and compromised immunity. We have already noted the link between DHA, reduced inflammation and enhanced immunity in the millennia following the evolution of the small human gut and the comparatively larger human brain. Wild animals, but particularly wild fish, have been shown to contain far higher proportions of omega-3 fatty acids than the food sources that apparently became more abundant in Native American diets after European contact, including in Alaska. Fat-soluble vitamins and DHA are abundantly found in fish eggs and fish fats, which were prized by Native Americans in the Northwest and Great Lakes regions, in the marine life used by California communities, and perhaps more than anywhere else, in the salmon products consumed by indigenous Alaskan communities. […]

“In Alaska, where DHA and vitamin D-rich salmon consumption was central to precontact subsistence strategies, alongside the consumption of nutrient-dense animal products and the regulation of metabolic hormones through periods of fasting or even through the efficient use of fatty acids or ketones for energy, disruptions to those strategies compromised immunity among those who suffered greater incursions from Russian and other European settlers through the first half of the nineteenth century.

“A collapse in sustainable subsistence practices among the Aleuts of Alaska exacerbated population decline during the period of Russian contact. The Russian colonial regime from the 1740s to 1840s destroyed Aleut communities through open warfare and by attacking and curtailing their nutritional resources, such as sea otters, which Russians plundered to supply the Chinese market for animal skins. Aleuts were often forced into labor, and threatened by the regular occurrence of Aleut women being taken as hostages. Curtailed by armed force, Aleuts were often relocated to the Pribilof Islands or to California to collect seals and sea otters. The same process occurred as Aleuts were co-opted into Russian expansion through the Aleutian Islands, Kodiak Island and into the southern coast of Alaska. Suffering murder and other atrocities, Aleuts provided only one use to Russian settlers: their perceived expertise in hunting local marine animals. They were removed from their communities, disrupting demography further and preventing those who remained from accessing vital nutritional resources due to the discontinuation of hunting frameworks. Colonial disruption, warfare, captivity and disease were accompanied by the degradation of nutritional resources. Aleut population numbers declined from 18,000 to 2,000 during the period of Russian occupation in the first half of the nineteenth century. A lag between the first period of contact and the intensification of colonial disruption demonstrates the role of contingent interventions in framing the deleterious effects of epidemics, including the 1837-38 smallpox epidemic in the region. Compounding these problems, communities used to a relatively high-fat and low-fructose diet were introduced to alcohol by the Russians, to the immediate detriment of their health and well-being.”

The traditional hunter-gatherer diet, as Mailer and Hale describe it, was high in the nutrients that protect against inflammation. The loss of these nutrients and the simultaneous decimation of the population was a one-two punch. Without the nutrients, their immune systems were compromised. And with their immune systems compromised, they were prone to all kinds of health conditions, probably including heart disease, which of course is related to inflammation. Weston A. Price, in Nutrition and Physical Degeneration, observed that morbidity and mortality from health conditions such as heart disease rise and fall with the seasons, following precisely the growth and dying away of vegetation throughout the year (which varies by region, as do the morbidity and mortality rates; the regions of comparison were in the United States and Canada). He was able to track this down to the change of fat-soluble vitamins, specifically vitamin D, in dairy. When fresh vegetation was available, cows ate it and so produced more of these nutrients and presumably more omega-3s at the same time.

Prior to colonization, the Unangan would have had access to even higher levels of these protective nutrients year round. The most nutritious dairy taken in springtime wouldn’t come close in comparison to the nutrient profile of wild game. I don’t know why anyone would be shocked that, like agricultural populations, hunter-gatherers also experience worsening health after the loss of wild resources. Yet the authors of the mummy study act like they made a radical discovery that throws to the wind every doubt anyone ever had about simplistic mainstream thought. It turns out, they seem to be declaring, that we are all victims of genetic determinism after all, so toss out your romantic fairy tales about healthy primitives from the ancient world. The problem is all the evidence that undermines their conclusion, including the evidence that they present in their own paper — that is, when it is interpreted in full context.

As if responding to researchers, Mailer and Hale write (p. 186): “Conditions such as diabetes are thus often associated with heart disease and other syndromes, given their inflammatory component. They now make up a huge proportion of treatment and spending in health services on both sides of the Atlantic. Yet policy makers and researchers in those same health services often respond to these conditions reactively rather than proactively — as if they were solely genetically determined, rather than arising due to external nutritional factors. A similarly problematic pattern of analysis, as we have noted, has led scholars to ignore the central role of nutritional change in Native American population loss after European contact, focusing instead on purportedly immutable genetic differences.”

There is another angle related to the above but somewhat at a tangent. I’ll bring it up because the research paper mentions it in passing as a factor to be considered: “All four populations lived at a time when infections would have been a common aspect of daily life and the major cause of death. Antibiotics had yet to be developed and the environment was non-hygienic. In 20th century hunter-foragers-horticulturalists, about 75% of mortality was attributed to infections, and only 10% from senescence. The high level of chronic infection and inflammation in premodern conditions might have promoted the inflammatory aspects of atherosclerosis.”

This is familiar territory for me, as I’ve been reading much about inflammation and infections. The authors are presenting the old view of the immune system, as opposed to that of functional medicine, which looks at the entire human. An example of the latter is the hygiene hypothesis, which argues that it is exposure to microbes that strengthens the immune system; there has been much evidence in support of it (such as children raised with animals or on farms being healthier as adults). The researchers above are making an opposing argument, one contradicted by populations that remain healthy without modern medicine as long as they maintain a traditional diet and lifestyle in a healthy ecosystem, including living soil that hasn’t been depleted by intensive farming.

This isn’t only about agriculturalists versus hunter-gatherers. The distinction between populations goes deeper, into culture and environment. Weston A. Price discovered this simple truth in finding healthy populations among both agriculturalists and hunter-gatherers, but it was specific populations under specific conditions. Also, at the time when he traveled in the early 20th century, there were still traditional communities living in isolation in Europe. One example is the Loetschental Valley in Switzerland, which he visited on two separate trips in the consecutive years of 1931 and 1932 — as he writes of it:

“We were told that the physical conditions that would not permit people to obtain modern foods would prevent us from reaching them without hardship. However, owing to the completion of the Loetschberg Tunnel, eleven miles long, and the building of a railroad that crosses the Loetschental Valley, at a little less than a mile above sea level, a group of about 2,000 people had been made easily accessible for study, shortly prior to 1931. Practically all the human requirements of the people in that valley, except a few items like sea salt, have been produced in the valley for centuries.”

He points out that, “Notwithstanding the fact that tuberculosis is the most serious disease of Switzerland, according to a statement given me by a government official, a recent report of inspection of this valley did not reveal a single case.” In Switzerland and other countries, he found an “association of dental caries and tuberculosis.” The commonality was early-life development, as underdeveloped and maldeveloped bone structure led to diverse issues: crowded teeth, smaller skull size, misaligned features, and what was called tubercular chest. And that was an outward sign of deeper and more systemic developmental issues involving malnutrition, inflammation, and the immune system:

“Associated with a fine physical condition the isolated primitive groups have a high level of immunity to many of our modern degenerative processes, including tuberculosis, arthritis, heart disease, and affections of the internal organs. When, however, these individuals have lost this high level of physical excellence a definite lowering in their resistance to the modern degenerative processes has taken place. To illustrate, the narrowing of the facial and dental arch forms of the children of the modernized parents, after they had adopted the white man’s food, was accompanied by an increase in susceptibility to pulmonary tuberculosis.”

Any population that lost its traditional way of life became prone to disease. But this could often be reversed just as easily by having the diseased individual return to healthy conditions. In discussing Dr. Josef Romig, Price said that, “Growing out of his experience, in which he had seen large numbers of the modernized Eskimos and Indians attacked with tuberculosis, which tended to be progressive and ultimately fatal as long as the patients stayed under modernized living conditions, he now sends them back when possible to primitive conditions and to a primitive diet, under which the death rate is very much lower than under modernized conditions. Indeed, he reported that a great majority of the afflicted recover under the primitive type of living and nutrition.”

The point made by Mailer and Hale was earlier made by Price. As seen with pre-contact Native Alaskans, the isolated traditional residents of the Loetschental Valley had nutritious diets. Price explained that he “arranged to have samples of food, particularly dairy products, sent to me about twice a month, summer and winter. These products have been tested for their mineral and vitamin contents, particularly the fat-soluble activators. The samples were found to be high in vitamins and much higher than the average samples of commercial dairy products in America and Europe, and in the lower areas of Switzerland.” Whether fat and organ meats from marine animals or dairy from pastured alpine cows, the key is high levels of fat-soluble vitamins and, of course, omega-3 fatty acids procured from a pristine environment (healthy soil and clean water with no toxins, farm chemicals, hormones, etc). It also helped that both populations ate much that was raw, which maintains the high nutrient content that is partly destroyed through heat.

Some might find it hard to believe that what you eat can determine whether or not you get a serious disease like tuberculosis. Conventional medicine tells us that the only thing that protects us is either avoiding contact or vaccination. But this view is being seriously challenged, as Mailer and Hale make clear (p. 164): “Several studies have focused on the link between Vitamin D and the health outcomes of individuals infected with tuberculosis, taking care to discount other causal factors and to avoid determining causation merely through association. Given the historical occurrence of the disease among indigenous people after contact, including in Alaska, those studies that have isolated the contingency of immunity on active Vitamin D are particularly pertinent to note. In biochemical experiments, the presence of the active form of vitamin D has been shown to have a crucial role in the destruction of Mycobacterium tuberculosis by macrophages. A recent review has found that tuberculosis patients tend to retain a lower-than-average vitamin D status, and that supplementation of the nutrient improved outcomes in most cases.” As an additional thought, the popular tuberculosis sanatoriums, some in the Swiss Alps, were attractive because “it was believed that the climate and above-average hours of sunshine had something to do with it” (Jo Fahy, A breath of fresh air for an alpine village). What does sunlight help the body to produce? Vitamin D.

As an additional perspective, James C. Scott, in Against the Grain, writes that, “Virtually every infectious disease caused by micro-organisms and specifically adapted to Homo sapiens has arisen in the last ten thousand years, many of them in the last five thousand years as an effect of ‘civilisation’: cholera, smallpox, measles, influenza, chickenpox, and perhaps malaria.” It is not only that agriculture introduces new diseases but also that it makes people susceptible to them. That might be true, as Scott suggests, even of a disease like malaria. The Piraha are more likely to die of malaria than anything else, but that might not have been true in the past. Let me offer a speculation by connecting this to the mummy study.

The Ancestral Puebloans, one of the groups in the mummy study, were at the time farming maize (corn) and squash while foraging pine nuts, seeds, amaranth (grain), and grasses. How does this compare to the more recent Piraha? A 1948 Smithsonian publication, Handbook of South American Indians, edited by Julian H. Steward, reported that, “The Piraha grew maize, sweet manioc (macaxera), a kind of yellow squash (jurumum), watermelon, and cotton” (p. 267). So it turns out that, like the Ancestral Puebloans, the Piraha have been on their way toward a more agricultural lifestyle for a while. I also noted that the same publication added the detail that the Piraha “did not drink rum,” but by the time Daniel Everett met the Piraha in 1978, traders had already introduced them to alcohol and it had become an occasional problem. Not only were they becoming agricultural but also Westernized, two factors that likely contributed to decreased immunity.

Like other modern hunter-gatherers, the Piraha have been affected by the Neolithic Revolution and are in many ways far different from Paleolithic hunter-gatherers. Ancient dietary habits are shown in the analysis of ancient bones — M.P. Richards writes that, “Direct evidence from bone chemistry, such as the measurement of the stable isotopes of carbon and nitrogen, do provide direct evidence of past diet, and limited studies on five Neanderthals from three sites, as well as a number of modern Palaeolithic and Mesolithic humans indicates the importance of animal protein in diets. There is a significant change in the archaeological record associated with the introduction of agriculture worldwide, and an associated general decline in health in some areas. However, there is a rapid increase in population associated with domestication of plants, so although in some regions individual health suffers after the Neolithic revolution, as a species humans have greatly expanded their population worldwide” (A brief review of the archaeological evidence for Palaeolithic and Neolithic subsistence). This is further supported in the analysis of coprolites. “Studies of ancient human coprolites, or fossilized human feces, dating anywhere from three hundred thousand to as recent as fifty thousand years ago, have revealed essentially a complete lack of any plant material in the diets of the subjects studied (Bryant and Williams-Dean 1975),” Nora Gedgaudas tells us in Primal Body, Primal Mind (p. 39).

This diet changed as humans entered our present interglacial period with its warmer temperatures and greater abundance of vegetation, which was lacking during the Paleolithic Period: “There was far more plant material in the diets of our more recent ancestors than our more ancient hominid ancestors, due to different factors” (Gedgaudas, p. 37). Following the earlier megafauna mass extinction, it wasn’t only agriculturalists but also hunter-gatherers who began to eat more plants and in many cases make use of cultivated plants (either that they cultivated or that they adopted from nearby agriculturalists). To emphasize how drastic was this change, this loss of abundant meat and fat, consider the fact that humans have yet to regain the average height and skull size of Paleolithic humans.

The authors of the mummy study don’t even attempt to look at data from Paleolithic humans. The populations compared are entirely from the past few millennia. And the only hunter-gatherer group included was post-contact. So, why are the authors so confident in their conclusion? I presume they were simply trying to get published and get media attention in a highly competitive market of academic scholarship. These people obviously aren’t stupid, but they had little incentive to fully inform themselves either. All the info I shared in this post I was able to gather in about half an hour of web searches, not exactly difficult academic research. It’s amazing what info is easily available these days, for those who want to find it.

Let me make one last point. The mummy study isn’t without its merits. The paper mentions other evidence that remains to be explained: “We also considered the reliability and previous work of the authors. Autopsy studies done as long ago as the mid-19th century showed atherosclerosis in ancient Egyptians. Also, in more recent times, Zimmerman undertook autopsies and described atherosclerosis in the mummies of two Unangan men from the same cave as our Unangan mummies and of an Inuit woman who lived around 400 CE. A previous study using CT scanning showed atherosclerotic calcifications in the aorta of the Iceman, who is believed to have lived about 3200 BCE and was discovered in 1991 in a high snowfield on the Italian-Austrian border.”

Let’s break that down. Further examples of Egyptian mummies are irrelevant, as their diet was so strikingly similar to the idealized Western diet recommended by mainstream doctors, dieticians, and nutritionists. That leaves the rest to account for. The older Unangan mummies are far more interesting, and any meaningful paper would have led with that piece of data, but even then it wouldn’t mean what the authors think it means. Atherosclerosis is one small factor and not necessarily as significant as assumed. From a functional medicine perspective, it’s the whole picture that matters in how the body actually functions and in the health that results. If so, atherosclerosis might not indicate the same thing for all populations. In Nourishing Diets, Sally Fallon Morell writes that (pp. 124-5),

“Critics have pointed out that Keys omitted from his study many areas of the world where consumption of animal foods is high and deaths from heart attack are low, including France — the so-called French paradox. But there is also a Japanese paradox. In 1989, Japanese scientists returned to the same two districts that Keys had studied. In an article titled “Lessons for Science from the Seven Countries Study,” they noted that per capita consumption of rice had declined, while consumption of fats, oils, meats, poultry, dairy products and fruit had all increased. […]

“During the postwar period of increased animal consumption, the Japanese average height increased three inches and the age-adjusted death rate from all causes declined from 17.6 to 7.4 per 1,000 per year. Although the rates of hypertension increased, stroke mortality declined markedly. Deaths from cancer also went down in spite of the consumption of animal foods.

“The researchers also noted — and here is the paradox — that the rate of myocardial infarction (heart attack) and sudden death did not change during this period, in spite of the fact that the Japanese weighed more, had higher blood pressure and higher cholesterol levels, and ate more fat, beef and dairy foods.”

Right here in the United States, we have our own ‘paradox’ as well. Good Calories, Bad Calories by Gary Taubes makes a compelling argument that, based on the scientific research, there is no strong causal link between atherosclerosis and coronary heart disease. Nina Teicholz has also written extensively about this, such as in her book The Big Fat Surprise; and in an Atlantic piece (How Americans Got Red Meat Wrong) she lays out some of the evidence showing that Americans in the 19th century, as compared to the following century, ate more meat and fat while they ate fewer vegetables and fruits. Nonetheless: “During all this time, however, heart disease was almost certainly rare. Reliable data from death certificates is not available, but other sources of information make a persuasive case against the widespread appearance of the disease before the early 1920s.” Whether or not earlier Americans had high rates of atherosclerosis, there is strong evidence indicating they did not have high rates of heart disease, of strokes and heart attacks. The health crisis for these conditions, as Teicholz notes, didn’t take hold until the very moment meat and animal fat consumption took a nosedive. So what gives?

The takeaway is this. We have no reason to assume that atherosclerosis in the present or in the past can tell us much of anything about general health. Even ignoring the fact that none of the mummies studied was from a high-protein, high-fat Paleo population, we can make no meaningful interpretations of the presence of atherosclerosis among some of the individuals. Going by modern data, there is no reason to jump to the conclusion that they had high mortality rates because of it. Quite likely, they died from completely unrelated health issues. A case in point is that of the Masai, around whom there is much debate in interpreting the data. George V. Mann and others wrote a paper, Atherosclerosis in the Masai, that demonstrated the complexity:

“The hearts and aortae of 50 Masai men were collected at autopsy. These pastoral people are exceptionally active and fit and they consume diets of milk and meat. The intake of animal fat exceeds that of American men. Measurements of the aorta showed extensive atherosclerosis with lipid infiltration and fibrous changes but very few complicated lesions. The coronary arteries showed intimal thickening by atherosclerosis which equaled that of old U.S. men. The Masai vessels enlarge with age to more than compensate for this disease. It is speculated that the Masai are protected from their atherosclerosis by physical fitness which causes their coronary vessels to be capacious.”

Put this in the context provided in What Causes Heart Disease? by Sally Fallon Morell and Mary Enig: “The factors that initiate a heart attack (or a stroke) are twofold. One is the pathological buildup of abnormal plaque, or atheromas, in the arteries, plaque that gradually hardens through calcification. Blockage most often occurs in the large arteries feeding the heart or the brain. This abnormal plaque or atherosclerosis should not be confused with the fatty streaks and thickening that is found in the arteries of both primitive and industrialized peoples throughout the world. This thickening is a protective mechanism that occurs in areas where the arteries branch or make a turn and therefore incur the greatest levels of pressure from the blood. Without this natural thickening, our arteries would weaken in these areas as we age, leading to aneurysms and ruptures. With normal thickening, the blood vessel usually widens to accommodate the change. But with atherosclerosis the vessel ultimately becomes more narrow so that even small blood clots may cause an obstruction.”

A distinction is being made here that maybe wasn’t being made in the mummy study. What gets measured as atherosclerosis could correlate to diverse health conditions and consequences in various populations across dietary lifestyles, regional environments, and historical and prehistorical periods. Finding atherosclerosis in an individual, especially a mummy, might not tell us any useful info about overall health.

Just for good measure, let’s tackle the last remaining piece of evidence the authors mention: “A previous study using CT scanning showed atherosclerotic calcifications in the aorta of the Iceman, who is believed to have lived about 3200 BCE and was discovered in 1991 in a high snowfield on the Italian-Austrian border.” Calling him the Iceman, to most ears, sounds similar to calling an ancient person a caveman — implying that he was a hunter, for it is hard to grow plants on ice. In response, Paul Mabry, in Did Meat Eating Make Ancient Hunter Gatherers Get Heart Disease, shows what was left out of the research paper:

“Sometimes the folks trying to discredit hunter-gather diets bring in Ötzi, “The Iceman” a frozen human found in the Tyrolean Mountains on the border between Austria and Italy that also had plaques in his heart arteries. He was judged to be 5300 years old making his era about 3400 BCE. Most experts feel agriculture had reached Europe almost 700 years before that according to this article. And Ötzi himself suggests they are right. Here’s a quote from the Wikipedia article on Ötzi’s last meal (a sandwich): “Analysis of Ötzi’s intestinal contents showed two meals (the last one consumed about eight hours before his death), one of chamois meat, the other of red deer and herb bread. Both were eaten with grain as well as roots and fruits. The grain from both meals was a highly processed einkorn wheat bran,[14] quite possibly eaten in the form of bread. In the proximity of the body, and thus possibly originating from the Iceman’s provisions, chaff and grains of einkorn and barley, and seeds of flax and poppy were discovered, as well as kernels of sloes (small plumlike fruits of the blackthorn tree) and various seeds of berries growing in the wild.[15] Hair analysis was used to examine his diet from several months before. Pollen in the first meal showed that it had been consumed in a mid-altitude conifer forest, and other pollens indicated the presence of wheat and legumes, which may have been domesticated crops. Pollen grains of hop-hornbeam were also discovered. The pollen was very well preserved, with the cells inside remaining intact, indicating that it had been fresh (a few hours old) at the time of Ötzi’s death, which places the event in the spring. Einkorn wheat is harvested in the late summer, and sloes in the autumn; these must have been stored from the previous year.””

Once again, we are looking at the health issues of someone eating an agricultural diet. It’s amazing that the authors, 19 of them, apparently all agreed that diet has nothing to do with a major component of health. That is patently absurd. To the credit of The Lancet, they published a criticism of this conclusion (though these critics repeat their own preferred conventional wisdom in their view of saturated fat) — Atherosclerosis in ancient populations by Gino Fornaciari and Raffaele Gaeta:

“The development of vascular calcification is related not only to atherosclerosis but also to conditions such as disorders of calcium-phosphorus metabolism, diabetes, chronic microinflammation, and chronic renal insufficiency.

“Furthermore, stating that atherosclerosis is not characteristic of any specific diet or lifestyle, but an inherent component of human ageing is not in agreement with recent studies demonstrating the importance of diet and physical activity. If atherosclerosis only depended on ageing, it would not have been possible to diagnose it in a young individual, as done in the Horus study.

“Finally, classification of probable atherosclerosis on the basis of the presence of a calcification in the expected course of an artery seems incorrect, because the anatomy can be strongly altered by post-mortem events. The walls of the vessels might collapse, dehydrate, and have the appearance of a calcific thickening. For this reason, the x-ray CT pattern alone is insufficient and diagnosis should be supported by histological study.”

As far as I know, this didn’t lead to a retraction of the paper. Nor did this criticism receive the attention that the paper itself was given. None of the people who praised the paper bothered to point out the criticism, at least not that I came across. Anyway, how did this weakly argued paper based on faulty evidence get published in the first place? And how does it then get spread by so many as if proven fact? This is the uphill battle faced by anyone seeking to offer an alternative perspective, especially on diet. It makes meaningful debate next to impossible. That won’t stop those like me from slowly chipping away at the vast edifice of the dominant paradigm. On a positive note, it helps when the evidence used against an alternative view, after reinterpretation, ends up being strong evidence in favor of it.

An Empire of Shame

America as an empire. This has long been a contentious issue, going back to the colonial era, first as a debate over whether Americans wanted to remain a part of the British Empire and later as a debate over whether Americans wanted to create a new empire. We initially chose against empire with the Declaration of Independence and Articles of Confederation. But then we chose for empire with the Constitution that allowed (pseudo-)Federalists to gain power and reshape the new government.

Some key Federalists openly talked about an American Empire. For the most part, though, American leaders have kept their imperial aspirations hidden behind rhetoric, even as our actions were obviously imperialistic. Heck, an Anti-Federalist like Thomas Jefferson took the imperialistic action of the Louisiana Purchase, a deal between one empire and another. Imperial expansionism continued through the 19th century and into the 20th century with numerous military interventions, from the Indian Wars to the Banana Wars. Not a year has gone by in American history when we weren’t involved in a war of aggression.

Yet it still is hard for Americans to admit that we are an empire. I’ve had numerous discussions with my conservative father on this topic. At times, he has surprisingly admitted we are an empire, but usually he is resistant. In our most recent debate, it occurred to me that the resistance is motivated by shame. We don’t want to admit we are an empire because we are ashamed of our government’s brutal use of power on our behalf. And shame is a powerful force. People will do and allow the most horrific actions out of shame.

Empires of the past tended to be projects built on pride and honor, of brazen rule through force. We Americans, instead, feel the need to hide our imperialism behind an image of benign and reluctant power. The difference is Americans, ever since we were colonists, have had an inferiority complex. It makes us both yearn for greatness and fear mockery. Our country is the young teenager that must prove himself, while not yet having the confidence to really believe in himself. So, we act slyly as an empire with implicit threats and backroom manipulations, proxy wars and covert operations, puppet governments and corporatist front groups. Then, when these morally depraved actions come to light, we rationalize why they were exceptions or why we were being forced to do so because of circumstances. It’s not our fault. We don’t want to hurt others, but we had no other choice. Besides, we were defending freedom and free markets, that is why we constantly intervene in other countries and endlessly kill innocents. It is for the greater good. We are willing to make this self-sacrifice on the behalf of others. We are the real victims.

I regularly come across quotes from American leaders who publicly or privately complained about our government, who spoke of failure and betrayal. This has included presidents and other political officials going back to the beginning of the country. Think of Jefferson and Adams in their later years worrying about the fate of the American experiment, that maybe we Americans didn’t have what it takes, that maybe we aren’t destined to be great. The sense of inferiority has haunted our collective imagination for so long that it is practically a defining feature. Despite our being the largest empire in world history, we don’t have the self-certain righteousness to declare ourselves an empire. That is why we get the false and weak bravado of someone like Donald Trump — sadly, he represents our country all too well. Then again, so did Hillary Clinton represent our country in her suppressing wages in Haiti so that U.S. companies would have cheap foreign labor (i.e., corporate wage slavery), the kind of actions the U.S. does all the time in secret in order to maintain control. We have talent for committing evil with a smiling face… or nervous laughter.

Rather than clear power asserted with pride and honor, the United States government acts like a bully on the world stage. We are constantly trying to prove ourselves. And our denial of imperialism is gaslighting, meant to make anyone feel crazy if they dare voice the truth of what we are. We tell others that we are the good guys. What we really are trying to do is convince ourselves of our own bullshit. This causes a nervousness in the public mind, a fear that we might be found out. We are paralyzed by our shame, and it gets tiring trying to keep up the pretense. The facade is crumbling. Our inner shame has become public, such that now we are the shame of the world. That probably means our leaders will soon start a war to divert attention, and both main parties will be glad for the diversion.

There is a compelling argument made by James Gilligan in Preventing Violence. Among other things, he sees shame as a key cause of interpersonal violence. And he argues that there is something particularly shame-inducing about our society, especially for those at the bottom of it. He is attempting to explain violent crime. But what occurs to me is that our leaders are just as violent, if not more violent. It’s simply that those who make the laws determine which violence is legal and which violence illegal, their own violence being in the former category as it is implemented through the state or with the support of the state. Maybe it is shame that causes our government to be so violent toward foreign populations and toward the American population. And maybe shame is what causes American citizens to remain silent in their complicity, as the violence is done in their name.

The United States was always this way

“The real difficulty is with the vast wealth and power in the hands of the few and the unscrupulous who represent or control capital. Hundreds of laws of Congress and the state legislatures are in the interest of these men and against the interests of workingmen. These need to be exposed and repealed. All laws on corporations, on taxation, on trusts, wills, descent, and the like, need examination and extensive change. This is a government of the people, by the people, and for the people no longer. It is a government of corporations, by corporations, and for corporations.”

―Diary and Letters of Rutherford Birchard Hayes: Nineteenth President of the United States (from REAL Democracy History Calendar: October 1 – 7)

The Power of Language Learning

“I feel that American as against British English, and English of any major dialect as against Russian, and both languages as against the Tarascan language of Mexico constitute different worlds. I note that it is persons with experience of foreign languages and poetry who feel most acutely that a natural language is a different way not only of talking but of thinking and imaging and of emotional life.”
~Paul Friedrich, The Language Parallax, Kindle Locations 356-359

“Marketing professor David Luna has performed tests on people who are not just bilingual but bicultural—those who have internalized two different cultures—which lend support to this model of cultural frames. Working with people immersed equally in both American and Hispanic cultures, he examined their responses to various advertisements and newspaper articles in both languages and compared them to those of bilinguals who were only immersed in one culture. He reports that biculturals, more than monoculturals, would feel “like a different person” when they spoke different languages, and they accessed different mental frames depending on the cultural context, resulting in shifts in their sense of self.”
~Jeremy Lent, The Patterning Instinct, p. 204

Like Daniel Everett, the much earlier Roger Williams went to convert the natives, and in the process he was deconverted, at least to the extent of losing his righteous Puritanism. And as with Everett, he studied the native languages and wrote about them. That could be an example of the power of linguistic relativity, in that studying another language could cause you to enter another cultural worldview.

On a related note, Baruch Spinoza did textual analysis, Thomas Paine did Biblical criticism, Friedrich Nietzsche did philology, etc. It makes one wonder how studying language might help shape the thought and redirect the life trajectory of certain thinkers. Many radicals have a history of studying languages and texts. The same thing is seen with a high number of academics, ministers, and apologists turning into agnostics and atheists through an originally faithful study of the Bible (e.g., Robert M. Price).

There is a trickster quality to language, something observed by many others. To closely study language and the products of language is to risk having one’s mind unsettled and then to risk being scorned by those locked into a single linguistic worldview. What Everett found was that, in trying to translate the Bible for the Piraha, he was destabilizing his place within the religious order and also, in discovering the lack of linguistic recursion, destabilizing his place within the academic order. Both organized religion and organized academia are institutions of power that maintain the proper order. For the same reason of power, governments have often enforced a single language for the entire population, as thought control and social control, as enforced assimilation.

Monolingualism goes hand in hand with monoculturalism. And so simply learning a foreign language can be one of the most radical acts that one can commit. The more foreign the language, the more radical the effect. But sometimes simply scrutinizing one’s own native language can shift one’s mind, a possible connection between writing and a greater potential for independent thought. Then again, knowledge of language can also make one a better rhetorician and propagandist. Language as trickster phenomenon does have two faces.

* * *

The Bilingual Mind
by Aneta Pavlenko
pp. 25-27

Like Humboldt and Sapir before him, Whorf, too, believed in the plasticity of the human mind and its ability to go beyond the categories of the mother tongue. This belief permeates the poignant plea for ‘multilingual awareness’ made by the terminally ill Whorf to the world on the brink of World War II:

I believe that those who envision a world speaking only one tongue, whether English, German, Russian, or any other, hold a misguided ideal and would do the evolution of the human mind the greatest disservice. Western culture has made, through language, a provisional analysis of reality and, without correctives, holds resolutely to that analysis as final. The only correctives lie in all those other tongues which by aeons of independent evolution have arrived at different, but equally logical, provisional analyses. ([1941b] 2012: 313)

Whorf’s arguments fell on deaf ears, because they were made in a climate significantly less tolerant of linguistic diversity than that of the late imperial Russia and the USSR. In the nineteenth century, large immigrant communities in the US (in particular German speakers) enjoyed access to native-language education, press and theater. The situation began to change during the period often termed the Great Migration (1880–1924), when approximately 24 million new immigrants entered the country (US Bureau of the Census, 1975). The overwhelming influx raised concerns about national unity and the capacity of American society to assimilate such a large body of newcomers. In 1917, when the US entered the European conflict declaring war on Germany, the anti-immigrant sentiments found an outlet in a strong movement against ‘the language of the enemy’: German books were removed from libraries and destroyed, German-language theaters and publications closed, and German speakers became subject to intimidation and threats (Luebke, 1980; Pavlenko, 2002a; Wiley, 1998).

The advisability of German – and other foreign-language-medium – instruction also came into question, in a truly Humboldtian fashion that linked the learning of foreign languages with adoption of ‘foreign’ worldviews (e.g., Gordy, 1918). The National Education Association went as far as to declare “the practice of giving instruction … in a foreign tongue to be un-American and unpatriotic” (Fitz-Gerald, 1918: 62). And while many prominent intellectuals stood up in defense of foreign languages (e.g., Barnes, 1918), bilingual education gave way and so did foreign-language instruction at the elementary level, where children were judged most vulnerable and where 80% of them ended their education. Between 1917 and 1922, Alabama, Colorado, Delaware, Iowa, Nebraska, Oklahoma, and South Dakota issued laws that prohibited foreign-language instruction in grades I through VIII, while Wisconsin and Minnesota restricted it to one hour a day. Louisiana, Indiana, and Ohio made the teaching of German illegal at the elementary level, and so did several cities with large German-speaking populations, including Baltimore, New York City, and Philadelphia (Luebke, 1980; Pavlenko, 2002a). The double standard that made bilingualism an upper-class privilege reserved for ‘real’ Americans is seen in the address given by Vassar College professor Marian Whitney at the Modern Language Teachers conference in 1918:

In so far as teaching foreign languages in our elementary schools has been a means of keeping a child of foreign birth in the language and ideals of his family and tradition, I think it a bad thing; but to teach young Americans French, German, or Spanish at an age when their oral and verbal memory is keen and when languages come easily, is a good thing. (Whitney, 1918: 11–12)

The intolerance reached its apogee in Roosevelt’s 1919 address to the American Defense Society that equated English monolingualism with loyalty to the US:

We have room for but one language here, and that is the English language, for we intend to see that the crucible turns our people out as Americans, of American nationality, and not as dwellers in a polyglot boardinghouse; and we have room for but one sole loyalty, and that is the loyalty to the American people. (cited in Brumberg, 1986: 7)

Reprinted in countless Board of Education brochures, this speech fortified the pressure not only to learn English but to abandon native languages. This pressure precipitated a rapid shift to English in many immigrant communities, further facilitated by the drastic reduction in immigrant influx, due to the quotas established by the 1924 National Origins Act (Pavlenko, 2002a). Assimilation efforts also extended to Native Americans, who were no longer treated as sovereign nations – many Native American children were sent to English-language boarding schools, where they lost their native languages (Morgan, 2009; Spack, 2002).

The endangerment of Native American languages was of great concern to Boas, Sapir, and Whorf, yet their support for linguistic diversity and multilingualism never translated into reforms and policies: in the world outside of academia, Americanization laws and efforts were making US citizenry unapologetically monolingual and the disappearance of ‘multilingual awareness’ was applauded by academics who viewed bilingualism as detrimental to children’s cognitive, linguistic and emotional development (Anastasi & Cordova, 1953; Bossard, 1945; Smith, 1931, 1939; Spoerl, 1943; Yoshioka, 1929; for discussion, see Weinreich, 1953: 115–118). It was only in the 1950s that Arsenian (1945), Haugen (1953, 1956), and Weinreich (1953) succeeded in promoting a more positive view of bilingualism, yet part of their success resided in the fact that by then bilingualism no longer mattered – it was regarded, as we will see, as an “unusual” characteristic, pervasive at the margins but hardly relevant for the society at large.

In the USSR, on the other hand, linguists’ romantic belief in linguistic rights and politicians’ desire to institutionalize nations as fundamental constituents of the state gave rise to the policy of korenizatsia [nativization] and a unique educational system that promoted the development of multilingual competence (Hirsch, 2005; Pavlenko, 2013; Smith, 1998). It is a little-known and under-appreciated irony that throughout the twentieth century, language policies in the ‘totalitarian’ Soviet Union were significantly more liberal – even during the period of the so-called ‘russification’ – than those in the ‘liberal’ United States.

Kavanaugh and the Authoritarians

I don’t care too much about the Brett Kavanaugh hearings, one way or another. There doesn’t appear to be any hope of salvation in our present quandary, not for anyone involved (or uninvolved), and that quandary goes far beyond who ends up on the Supreme Court.

But from a detached perspective of depressive realism, the GOP is in clear decline, to a far greater degree than the Democrats, which is saying a lot. Back during the presidential campaign, I stated that neither main political party should want to win. That is because we are getting so close to serious problems in our society, or rather getting closer to the results of problems that have long been with us. Whichever party is in power will be blamed, not that I care either way, considering both parties deserve blame.

Republicans don’t seem to be able to help themselves. They’ve been playing right into the narrative of their own decline. At the very moment they needed to appeal to minorities because of looming demographic changes, they doubled down on bigotry. Now the same people who supported and voted for a president who admitted to grabbing women by the pussy (with multiple allegations of sexual misconduct against him and multiple known cases of cheating on his wife) are defending Kavanaugh against allegations of sexual wrongdoing.

This is not exactly a surprise, as Trump brazenly and proudly declared that he could shoot a person for everyone to see and his supporters would be fine with it. Publicly declaring his authoritarianism in this manner didn’t faze many Republican voters or Republican politicians. He was elected and the GOP rallied behind him. Nor did it bother Kavanaugh, whose acceptance of the nomination implies that he too supports this authoritarianism and, if possible, plans on enacting it from the Supreme Court. Whether or not it is true that Trump could get away with murder, it is an amazing statement to make in public and still get elected president; in any functioning democracy, it would immediately disqualify a candidate.

It almost doesn’t matter what the facts of the situation are, guilt or innocence. Everyone knows that, even if Kavanaugh were a proven rapist, the same right-wing authoritarians who love Trump would defend him to the bitter end. Loyalty is everything to these people. Not so much on the political left, where individuals are more easily thrown under the bus (or, like Al Franken, throw themselves under it, in his case over a rather minor accusation of an inappropriate joke, not even involving any inappropriate touching). Sexual allegations demoralize Democrats (consider the hard hit the party took with Anthony Weiner) in a way that never happens with Republicans, who treat a sexual allegation as a call to battle.

The official narrative now is that the GOP is the party of old school bigots and chauvinistic pigs. They always had that hanging over their heads. And in the past, they sometimes held it up high with pride, as if it were a banner of their strength. But now they find themselves on the defensive. It turns out that this narrative they embraced probably doesn’t have much of a future. Yet Republicans can’t find it in themselves to seek a new script. For some odd reason, they are heavily attached to being heartless assholes.

This is even true for many Republican women. My conservative mother, who did not vote for Trump, has been pulled back into partisanship by the present conflict and has explicitly told me that she doesn’t believe men should be held accountable for past sexual transgressions because that is just the way the world was back then. Some conservative women go even further, arguing that men can’t help themselves and that even now we shouldn’t hold them accountable — as Toyin Owoseje reported:

Groping women is “no big deal”, a Donald Trump supporting mother told her daughters on national television when asked about the sexual misconduct allegations levelled against Supreme Court nominee Brett Kavanaugh.

Among Republicans, we’ve been hearing such immoral defenses for a long time. There is another variety of depravity to be found among Democrats, but they at least have the common sense not to embrace depravity openly, having a talent for soft-pedalling their authoritarian tendencies. Yet as full-blown authoritarian extremists disconnected from the average American, Republicans don’t understand why the non-authoritarian majority of the population might find their morally debased views unappealing. To them, loyalty to the group is everything, and the opinions of those outside the group don’t matter.

The possibility that Kavanaugh might have raped a woman, to right-wing authoritarians, simply makes him seem all the more the strong male to be revered. It doesn’t matter what he did, at least not to his defenders. This doesn’t bode well for the Republican Party. Given the decline they are in, the only hope they have is for Trump to start World War III and seize total control of the government. They’ve lost the competition of rhetoric. All that is left for them is to force their way to the extent they can, which at the moment means trying to push Kavanaugh onto the Supreme Court. Of course, they could in theory simply pick a different conservative nominee without all the baggage, but they can’t back down now no matter what. Consequences be damned!

Just wait to see what they’ll be willing to do when the situation gets worse. Imagine what would happen with a Trump-caused constitutional crisis and Kavanaugh on the Supreme Court. However it ends, the trajectory is not pointing upward. The decline of the GOP might be the (further) decline of the United States.

Straw Men in the Linguistic Imaginary

“For many of us, the idea that the language we speak affects how we think might seem self-evident, hardly requiring a great deal of scientific proof. However, for decades, the orthodoxy of academia has held categorically that the language a person speaks has no effect on the way they think. To suggest otherwise could land a linguist in such trouble that she risked her career. How did mainstream academic thinking get itself in such a straitjacket?”
~ Jeremy Lent, The Patterning Instinct

Portraying the Sapir-Whorf hypothesis as linguistic determinism is a straw man fallacy. It’s false to speak of a Sapir-Whorf hypothesis at all, as no such hypothesis was ever proposed by Edward Sapir and Benjamin Lee Whorf. Interestingly, it turns out that researchers have since found examples of what could be called linguistic determinism, or at least very strong linguistic relativity, although such cases remain apparently rare (similar to how examples of genetic determinism are rare). But that is neither here nor there, considering Sapir and Whorf didn’t argue for linguistic determinism, no matter how you quote-mine their texts. The position of relativity, similar to social constructivism, is the wholesale opposite of rigid determinism; besides, linguistic relativism wasn’t even a major focus of Sapir’s work, even as he influenced Whorf.

Turning their view into a caricature of determinism was an act of projection. It was the anti-relativists who were arguing for biology determining language, from Noam Chomsky’s language module in the brain to Brent Berlin and Paul Kay’s supposedly universal color categories. It was masterful rhetoric to turn the charge onto those holding the moderate position in order to dress them up as ideological extremists and charlatans. And with Sapir and Whorf gone because of early deaths, they weren’t around to defend themselves and to deny what was claimed on their behalf.

Even Whorf’s sometimes strongly worded view of relativity, by today’s standards and knowledge in the field, doesn’t sound particularly extreme. If anything, to those informed of the most up-to-date research, denying such obvious claims would now sound absurd. How did so many become disconnected from simple truths of human experience, such that anyone who dared speak these truths could be ridiculed and dismissed out of hand? For generations, relativists stating common sense criticisms of race realism were dismissed in a similar way, and they were often the same people (cultural relativity and linguistic relativity in American scholarship were both influenced by Franz Boas). The argument tying them together is that relativity in the expression and embodiment of our shared humanity (think of it more in terms of Daniel Everett’s dark matter of the mind) is based on a complex and flexible set of universal potentials, such that universalism neither requires nor indicates essentialism. Yet why do we go on clinging to so many forms of determinism, essentialism, and nativism, including those ideas advocated by many of Sapir and Whorf’s opponents?

We are in a near impossible situation. Essentialism has been a cornerstone of modern civilization, most of all in its WEIRD varieties. Relativity simply can’t be fully comprehended, much less tolerated, within the dominant paradigm, although, as Leavitt argues, it resonates with the emphasis on language found in Romanticism, itself an earlier response to essentialism. As for linguistic determinism, even if it were true beyond a few exceptional cases, it is by and large an untestable hypothesis at present and so scientifically meaningless within WEIRD science. WEIRD researchers exist in a civilization that has become dominated by WEIRD societies, with nearly all alternatives destroyed or altered beyond their original form. There is nowhere to stand outside of the WEIRD paradigm, especially for the WEIRDest of the WEIRD researchers doing most of the research.

If certain thoughts are unthinkable within WEIRD culture and language, we have no completely alien mode of thought by which to objectively assess the WEIRD, as imperialism and globalization have left no society untouched. There is no way for us to even think about what might be unthinkable, much less research it. This double bind goes right over the heads of most people, even over the heads of some relativists who fear being disparaged if they don’t outright deny any possibility of the so-called strong Sapir-Whorf hypothesis. That such a hypothesis could potentially describe reality to a greater extent than we’d prefer is, for most people infected with the WEIRD mind virus and living within the WEIRD monocultural reality tunnel, itself an unthinkable thought.

It is unthinkable and, in its fullest form, fundamentally untestable. And so it is terra incognita within the collective mind. The response is typically either uncomfortable irritation or nervous laughter. Still, the limited evidence in support of linguistic determinism points to the possibility of its being found in other as-of-yet unexplored areas — maybe a fair amount of evidence already exists that will later be reinterpreted when a new frame of understanding becomes established or when someone, maybe generations later, looks at it with fresh eyes. History is filled with moments when something shifted, allowing the incomprehensible and unspeakable to become a serious public debate, sometimes a new social reality. Determinism in all of its varieties seems a generally unfruitful path of research, although in its linguistic form it is compelling as a thought experiment in showing how little we know and can know, how severely constrained our imaginative capacities are.

We don’t look in the darkness where we lost what we are looking for because the light is better elsewhere. But what would we find if we did search the shadows? Whether or not we discovered proof of linguistic determinism, we might stumble across all kinds of other inconvenient evidence pointing toward ever more radical and heretical thoughts. Linguistic relativity and determinism might end up playing a central role less because of the bold answers offered than because of the questions that were dared to be asked. Maybe, in thinking about determinism, we could come to a more profound insight of relativity — after all, a complex enough interplay of seemingly deterministic factors would for all appearances be relativistic; that is to say, what seems to be linear causation could, when lines of causation are interwoven, lead to emergent properties. The relativistic whole, in that case, presumably would be greater than the deterministic parts.

Besides, it always depends on perspective. Consider Whorf, who “has been rejected both by cognitivists as a relativist and by symbolic and postmodern anthropologists as a determinist and essentialist” (John Leavitt, Linguistic Relativities, p. 193; Leavitt’s book goes into immense detail about all of the misunderstanding and misinterpretation, much of it because of intellectual laziness or hubris but some of it motivated by ideological agendas; the continuing and consistent wrongheadedness makes it difficult not to take much of it as arguing in bad faith). It’s not always clear what the debate is supposed to be about. Ironically, such terms as ‘determinism’ and ‘relativity’ are relativistic in their use while, in how we use them, determining how we think about the issues and how we interpret the evidence. There is no way to take ourselves out of the debate itself, for our own humanity is what we are trying to place under the microscope, causing us tremendous psychological contortions in maintaining whatever worldview we latch onto.

There is less distance between linguistic relativity and linguistic determinism than is typically assumed. The former says we are only limited by habit of thought and all that it entails within culture and relationships. Yet habits of thought can be so powerful as to essentially determine social orders for centuries and millennia. Calling this mere ‘habit’ hardly does it justice. In theory, a society isn’t absolutely determined to be the way it is, nor are those within it determined to behave the way they do, but in practice extremely few individuals ever escape the gravitational pull of habitual conformity and groupthink (i.e., Jaynesian self-authorization is more a story we tell ourselves than an actual description of behavior).

So, yes, in terms of genetic potential and neuroplasticity, there was nothing directly stopping Bronze Age Egyptians from starting an industrial revolution and there is nothing stopping a present-day Piraha from becoming a Harvard professor of mathematics — still, the probability of such things happening is next to zero. Consider the rare individuals in our own society who break free of the collective habits of our society, as they usually either end up homeless or institutionalized, typically with severely shortened lives. To not go along with the habits of your society is to be deemed insane, incompetent, and/or dangerous. Collective habits within a social order involve systematic enculturation, indoctrination, and enforcement. The power of language — even if only relativistic — over our minds is one small part of the cultural system, albeit an important part.

We don’t need to go that far with our argument, though. However you want to slice it, there is plenty of evidence that remains to be explained. And the evidence has become overwhelming and, to many, disconcerting. The debate over the validity of the theory of linguistic relativity is over. But the opponents of the theory have had two basic strategies to contain their loss and keep the debate on life support. They conflate linguistic relativity with linguistic determinism and dismiss it as laughably false. Or they concede that linguistic relativity is partly correct but argue that it’s insignificant in influence, as if they had never denied it and simply were unimpressed.

“This is characteristic: one defines linguistic relativity in such an extreme way as to make it seem obviously untrue; one is then free to acknowledge the reality of the data at the heart of the idea of linguistic relativity – without, until quite recently, proposing to do any serious research on these data.” (John Leavitt, Linguistic Relativities, p. 166)

Either way, essentialists maintain their position as if no serious challenge was posed. The evidence gets lost in the rhetoric, as the evidence keeps growing.

Still, there is something more challenging that also gets lost in debate, even when evidence is acknowledged. What motivated someone like Whorf wasn’t intellectual victory and academic prestige. There was a sense of human potential locked behind habit. That is why it was so important to study foreign cultures with their diverse languages, not only for the sake of knowledge but to be confronted by entirely different worldviews. Essentialists are on the old imperial path of Whiggish Enlightenment, denying differences by proclaiming that all things Western are the norm of humanity and reality, sometimes taken as a universal ideal state or the primary example by which to measure all else… an ideology that easily morphs into yet darker specters:

“Any attempt to speak of language in general is illusory; the (no doubt French or English) philosopher who does so is merely elevating his own mother tongue to the status of a universal standard (p. 3). See how the discourse of diversity can be turned to defend racism and fascism! I suppose by now this shouldn’t surprise us – we’ve seen so many examples of it at the end of the twentieth and beginning of the twenty-first century.” (John Leavitt, Linguistic Relativities, p. 161)

In this light, it should be unsurprising that the essentialist program presented in Chomskyan linguistics was supported and funded by the Pentagon (their specific interest in this case being human-computer interfaces and the elimination of messy human error; in studying the brain as a computer, it was expected that the individual human mind could be made more amenable to a computerized system of military action and its accompanying chain of command). Essentialism makes promises that are useful for systems of effective control as part of a larger technocratic worldview of social control.

The essentialist path we’ve been on has left centuries of destruction in its wake. But from the humbling vista opening onto further possibilities, the relativists offer not a mere scientific theory but a new path for humanity, or rather they throw light onto the multiple paths before us. In offering respect and openness toward the otherness of others, we open ourselves toward the otherness within our own humanity. The point is that, though we are trapped in linguistic cultures, the key to our release is also to be found in the same place. But this requires courage and curiosity, a broadening of the moral imagination.

Let me end on a note of irony. In comparing linguistic cultures, Joseph Needham wrote that, “Where Western minds asked ‘what essentially is it?’, Chinese minds asked ‘how is it related in its beginnings, functions, and endings with everything else, and how ought we to react to it?’” This was quoted by Jeremy Lent in The Patterning Instinct (p. 206; the quote originally comes from Science and Civilization in China, vol. 2, History of Scientific Thought, pp. 199-200). Lent makes clear that this has everything to do with language. Chinese language embodies ambiguity and demands contextual understanding, whereas Western or more broadly Indo-European language elicits abstract essentialism.

So, it is a specific linguistic culture of essentialism that influences, if not entirely determines, the Western predisposition to see language as essentialist rather than as relative. And it is this very essentialism that causes many Westerners, especially abstract-minded intellectuals, to be blind to essentialism as something linguistically cultural rather than essential to human nature and neurocognitive functioning. That is the irony. This essentialist belief system is further proof of linguistic relativism.

 

* * *

The Patterning Instinct
by Jeremy Lent
pp. 197-205

The ability of these speakers to locate themselves in a way that is impossible for the rest of us is only the most dramatic in an array of discoveries that are causing a revolution in the world of linguistics. Researchers point to the Guugu Yimithirr as prima facie evidence supporting the argument that the language you speak affects how your cognition develops. As soon as they learn their first words, Guugu Yimithirr infants begin to structure their orientation around the cardinal directions. In time, their neural connections get wired accordingly until this form of orientation becomes second nature, and they no longer even have to think about where north, south, east, and west are.3 […]

For many of us, the idea that the language we speak affects how we think might seem self-evident, hardly requiring a great deal of scientific proof. However, for decades, the orthodoxy of academia has held categorically that the language a person speaks has no effect on the way they think. To suggest otherwise could land a linguist in such trouble that she risked her career. How did mainstream academic thinking get itself in such a straitjacket?4

The answer can be found in the remarkable story of one charismatic individual, Benjamin Whorf. In the early twentieth century, Whorf was a student of anthropologist-linguist Edward Sapir, whose detailed study of Native American languages had caused him to propose that a language’s grammatical structure corresponds to patterns of thought in its culture. “We see and hear and otherwise experience very largely as we do,” Sapir suggested, “because the language habits of our community predispose certain choices of interpretation.”5

Whorf took this idea, which became known as the Sapir-Whorf hypothesis, to new heights of rhetoric. The grammar of our language, he claimed, affects how we pattern meaning into the natural world. “We cut up and organize the spread and flow of events as we do,” he wrote, “largely because, through our mother tongue, we are parties to an agreement to do so, not because nature itself is segmented in exactly that way for all to see.”6 […]

Whorf was brilliant but highly controversial. He had a tendency to use sweeping generalizations and dramatic statements to drive home his point. “As goes our segmentation of the face of nature,” he wrote, “so goes our physics of the Cosmos.” Sometimes he went beyond the idea that language affects how we think to a more strident assertion that language literally forces us to think in a certain way. “The forms of a person’s thoughts,” he proclaimed, “are controlled by inexorable laws of pattern of which he is unconscious.” This rhetoric led people to interpret the Sapir-Whorf hypothesis as a theory of linguistic determinism, claiming that people’s thoughts are inevitably determined by the structure of their language.8

A theory of rigid linguistic determinism is easy to discredit. All you need to do is show a Hopi Indian capable of thinking in terms of past, present, and future, and you’ve proven that her language didn’t ordain how she was able to think. The more popular the Sapir-Whorf theory became, the more status could be gained by any researcher who poked holes in it. In time, attacking Sapir-Whorf became a favorite path to academic tenure, until the entire theory became completely discredited.9

In place of the Sapir-Whorf hypothesis arose what is known as the nativist view, which argues that the grammar of language is innate to humankind. As discussed earlier, the theory of universal grammar, proposed by Noam Chomsky in the 1950s and popularized more recently by Steven Pinker, posits that humans have a “language instinct” with grammatical rules coded into our DNA. This theory has dominated the field of linguistics for decades. “There is no scientific evidence,” writes Pinker, “that languages dramatically shape their speakers’ ways of thinking.” Pinker and other adherents to this theory, however, are increasingly having to turn a blind eye—not just to the Guugu Yimithirr but to the accumulating evidence of a number of studies showing the actual effects of language on people’s patterns of thought.10 […]

Psychologist Peter Gordon saw an opportunity to test the most extreme version of the Sapir-Whorf hypothesis with the Pirahã. If language predetermined patterns of thought, then the Pirahã should be unable to count, in spite of the fact that they show rich intelligence in other forms of their daily life. He performed a number of tests with the Pirahã over a two-year period, and his results were convincing: as soon as the Pirahã had to deal with a set of objects beyond three, their counting performance disintegrated. His study, he concludes, “represents a rare and perhaps unique case for strong linguistic determinism.”12

The Guugu Yimithirr, at one end of the spectrum, show the extraordinary skills a language can give its speakers; the Pirahã, at the other end, show how necessary language is for basic skills we take for granted. In between these two extremes, an increasing number of researchers are demonstrating a wide variety of more subtle ways the language we speak can influence how we think.

One set of researchers illustrated how language affects perception. They used the fact that the Greek language has two color terms—ghalazio and ble—that distinguish light and dark blue. They tested the speed with which Greek speakers and English speakers could distinguish between these two different colors, even when they weren’t being asked to name them, and discovered the Greeks were significantly faster.13

Another study demonstrates how language helps structure memory. When bilingual Mandarin-English speakers were asked in English to name a statue of someone with a raised arm looking into the distance, they were more likely to name the Statue of Liberty. When they were asked the same question in Mandarin, they named an equally famous Chinese statue of Mao with his arm raised.14

One intriguing study shows English and Spanish speakers remembering accidental events differently. In English, an accident is usually described in the standard subject-verb-object format of “I broke the bottle.” In Spanish, a reflexive verb is often used without an agent, such as “La botella se rompió”—“the bottle broke.” The researchers took advantage of this difference, asking English and Spanish speakers to watch videos of different intentional and accidental events and later having them remember what happened. Both groups had similar recall for the agents involved in intentional events. However, when remembering the accidental events, English speakers recalled the agents better than the Spanish speakers did.15

Language can also have a significant effect in channeling emotions. One researcher read the same story to Greek-English bilinguals in one language and, then, months later, in the other. Each time, he interviewed them about their feelings in response to the story. The subjects responded differently to the story depending on its language, and many of these differences could be attributed to specific emotion words available in one language but not the other. The English story elicited a sense of frustration in readers, but there is no Greek word for frustration, and this emotion was absent in responses to the Greek story. The Greek version, however, inspired a sense of stenahoria in several readers, an emotion loosely translated as “sadness/discomfort/suffocation.” When one subject was asked why he hadn’t mentioned stenahoria after his English reading of the story, he answered that he cannot feel stenahoria in English, “not just because the word doesn’t exist but because that kind of situation would never arise.”16 […]

Marketing professor David Luna has performed tests on people who are not just bilingual but bicultural—those who have internalized two different cultures—which lend support to this model of cultural frames. Working with people immersed equally in both American and Hispanic cultures, he examined their responses to various advertisements and newspaper articles in both languages and compared them to those of bilinguals who were only immersed in one culture. He reports that biculturals, more than monoculturals, would feel “like a different person” when they spoke different languages, and they accessed different mental frames depending on the cultural context, resulting in shifts in their sense of self.25

In particular, the use of root metaphors, embedded so deeply in our consciousness that we don’t even notice them, influences how we define our sense of self and apply meaning to the world around us. “Metaphor plays a very significant role in determining what is real for us,” writes cognitive linguist George Lakoff. “Metaphorical concepts…structure our present reality. New metaphors have the power to create a new reality.”26

These metaphors enter our minds as infants, as soon as we begin to talk. They establish neural pathways that are continually reinforced until, just like the cardinal directions of the Guugu Yimithirr, we use our metaphorical constructs without even recognizing them as metaphors. When a parent, for example, tells a child to “put that out of your mind,” she is implicitly communicating a metaphor of the MIND AS A CONTAINER that should hold some things and not others.27

When these metaphors are used to make sense of humanity’s place in the cosmos, they become the root metaphors that structure a culture’s approach to meaning. Hunter-gatherers, as we’ve seen, viewed the natural world through the root metaphor of GIVING PARENT, which gave way to the agrarian metaphor of ANCESTOR TO BE PROPITIATED. Both the Vedic and Greek traditions used the root metaphor of HIGH IS GOOD to characterize the source of ultimate meaning as transcendent, while the Chinese used the metaphor of PATH in their conceptualization of the Tao. These metaphors become hidden in plain sight, since they are used so extensively that people begin to accept them as fundamental structures of reality. This, ultimately, is how culture and language reinforce each other, leading to a deep persistence of underlying structures of thought from one generation to the next.28

Linguistic Relativities
by John Leavitt
pp. 138-142

Probably the most famous statement of Sapir’s supposed linguistic determinism comes from “The Status of Linguistics as a Science,” a talk published in 1929:

Human beings do not live in the objective world alone, nor alone in the world of social activity as ordinarily understood, but are very much at the mercy of a particular language which has become the medium of expression for their society. It is quite an illusion to imagine that one adjusts to reality essentially without the use of language, and that language is merely an incidental means of solving specific problems of communication or reflection. The fact of the matter is that the “real world” is to a large extent unconsciously built up on the language habits of the group. No two languages are ever sufficiently similar to be considered as representing the same social reality. The worlds in which different societies live are different worlds, not merely the same world with different labels attached … We see and hear and otherwise experience very largely as we do because the language habits of our community predispose certain choices of interpretation. (Sapir 1949: 162)

This is the passage that is most commonly quoted to demonstrate the putative linguistic determinism of Sapir and of his student Whorf, who cites some of it (1956: 134) at the beginning of “The Relation of Habitual Thought and Behavior to Language,” a paper published in a Sapir Festschrift in 1941. But is this linguistic determinism? Or is it the statement of an observed reality that must be dealt with? Note that the passage does not say that it is impossible to translate between different languages, nor to convey the same referential content in both. Note also that there is a piece missing here, between “labels attached” and “We see and hear.” In fact, the way I have presented it, with the three dots, is how this passage is almost always presented (e.g., Lucy 1992a: 22); otherwise, the quote usually ends at “labels attached.” If we look at what has been elided, we find two examples, coming in a new paragraph immediately after “attached.” In a typically Sapirian way, one is poetic, the other perceptual. He begins:

The understanding of a simple poem, for instance, involves not merely an understanding of the single words in their average significance, but a full comprehension of the whole life of the community as it is mirrored in the words, or as it is suggested by the overtones.

So the apparent claim of linguistic determinism is to be illustrated by – a poem (Friedrich 1979: 479–80), and a simple one at that! In light of this missing piece of the passage, what Sapir seems to be saying is not that language determines thought, but that language is part of social reality, and so is thought, and to understand either a thought or “a green thought in a green shade” you need to consider the whole.

The second example is one of the relationship of terminology to classification:

Even comparatively simple acts of perception are very much more at the mercy of the social patterns called words than we might suppose. If one draws some dozen lines, for instance, of different shapes, one perceives them as divisible into such categories as “straight,” “crooked,” “curved,” “zigzag” because of the classificatory suggestiveness of the linguistic terms themselves. We see and hear …

Again, is Sapir here arguing for a determination of thought by language or simply observing that in cases of sorting out complex data, one will tend to use the categories that are available? In the latter case, he would be suggesting to his audience of professionals (the source is a talk given to a joint meeting of the Linguistic Society of America and the American Anthropological Association) that such phenomena may extend beyond simple classification tasks.

Here it is important to distinguish between claims of linguistic determinism and the observation of the utility of available categories, an observation that in itself in no way questions the likely importance of the non-linguistic salience of input or the physiological component of perception. Taken in the context of the overall Boasian approach to language and thought, this is clearly the thrust of Sapir’s comments here. Remember that this was the same man who did the famous “Study on Phonetic Symbolism,” which showed that there are what appear to be universal psychological reactions to certain speech sounds (his term is “symbolic feeling-significance”), regardless of the language or the meaning of the word in which these sounds are found (in Sapir 1949). This evidence against linguistic determinism, as it happens, was published the same year as “The Status of Linguistics as a Science,” but in the Journal of Experimental Psychology.3

The metaphor Sapir uses most regularly for the relation of language patterning to thought is not that of a constraint, but of a road or groove that is relatively easy or hard to follow. In Language, he proposed that languages are “invisible garments” for our spirits; but at the beginning of the book he had already questioned this analogy: “But what if language is not so much a garment as a prepared road or groove?” (p. 15); grammatical patterning provides “grooves of expression, (which) have come to be felt as inevitable” (p. 89; cf. Erickson et al. 1997: 298). One important thing about a road is that you can get off it; of a groove, that you can get out of it. We will see that this kind of wording permeates Whorf’s formulations as well. […]

Since the early 1950s, Sapir’s student Benjamin Lee Whorf (1897–1941) has most often been presented as the very epitome of extreme cognitive relativism and linguistic determinism. Indeed, as the name attached to the “linguistic determinism hypothesis,” a hypothesis almost never evoked but to be denied, Whorf has become both the best-known ethnolinguist outside the field itself and one of the great straw men of the century. This fate is undeserved; he was not a self-made straw man, as Marshall Sahlins once called another well-known anthropologist. While Whorf certainly maintained what he called a principle of linguistic relativity, it is clear from reading Language, Thought, and Reality, the only generally available source of his writings, published posthumously in 1956, and even clearer from still largely unpublished manuscripts, that he was also a strong universalist who accepted the general validity of modern science. With some re-evaluations since the early 1990s (Lucy 1992a; P. Lee 1996), we now have a clearer idea of what Whorf was about.

In spite of sometimes deterministic phraseology, Whorf presumed that much of human thinking and perception was non-linguistic and universal across languages. In particular, he admired Gestalt psychology (P. Lee 1996) as a science giving access to general characteristics of human perception across cultures and languages, including the lived experiences that lie behind the forms that we label time and space. He puts this most clearly in discussions of the presumably universal perception of visual space:

A discovery made by modern configurative or Gestalt psychology gives us a canon of reference, irrespective of their languages or scientific jargons, by which to break down and describe all visually observable situations, and many other situations, also. This is the discovery that visual perception is basically the same for all normal persons past infancy and conforms to definite laws. (Whorf 1956: 165)

Whorf clearly believed there was a real world out there, although, enchanted by quantum mechanics and relativity theory, he also believed that this was not the world as we conceive it, nor that every human being conceives it habitually in the same way.

Whorf also sought and proposed general descriptive principles for the analysis of languages of the most varied type. And along with Sapir, he worked on sound symbolism, proposing the universality of feeling-associations to certain speech sounds (1956: 267). Insofar as he was a good disciple of Sapir and Boas, Whorf believed, like them, in the universality of cognitive abilities and of some fundamental cognitive processes. And far from assuming that language determines thought and culture, Whorf wrote in the paper for the Sapir volume that

I should be the last to pretend that there is anything so definite as “a correlation” between culture and language, and especially between ethnological rubrics such as “agricultural, hunting,” etc., and linguistic ones like “inflected,” “synthetic,” or “isolating.” (pp. 138–9)

p. 146

For Whorf, certain scientific disciplines – elsewhere he names “relativity, quantum theory, electronics, catalysis, colloid chemistry, theory of the gene, Gestalt psychology, psychoanalysis, unbiased cultural anthropology, and so on” (1956: 220), as well as non-Euclidean geometry and, of course, descriptive linguistics – were exemplary in that they revealed aspects of the world profoundly at variance with the world as modern Westerners habitually assume it to be, indeed as the members of any human language and social group habitually assume it to be.

Since Whorf was concerned with linguistic and/or conceptual patterns that people almost always follow in everyday life, he has often been read as a determinist. But as John Lucy pointed out (1992a), Whorf’s critiques clearly bore on habitual thinking, what it is easy to think; his ethical goal was to force us, through learning about other languages, other ways of foregrounding and linking aspects of experience, to think in ways that are not so easy, to follow paths that are not so familiar. Whorf’s argument is not fundamentally about constraint, but about the seductive force of habit, of what is “easily expressible by the type of symbolic means that language employs” (“Model,” 1956: 55) and so easy to think. It is not about the limits of a given language or the limits of thought, since Whorf presumes, Boasian that he is, that any language can convey any referential content.

Whorf’s favorite analogy for the relation of language to thought is the same as Sapir’s: that of tracks, paths, roads, ruts, or grooves. Even Whorf’s most determinist-sounding passages, which are also the ones most cited, sound very different if we take the implications of this analogy seriously: “Thinking … follows a network of tracks laid down in the given language, an organization which may concentrate systematically upon certain phases of reality … and may systematically discard others featured by other languages. The individual is utterly unaware of this organization and is constrained completely within its unbreakable bonds” (1956: 256); “we dissect nature along lines laid down by our native languages” (p. 213). But this is from the same essay in which Whorf asserted the universality of “ways of linking experiences … basically alike for all persons”; and this completely constrained individual is evidently the unreflective (utterly unaware) Mr. Everyman (Schultz 1990), and the very choice of the analogy of traced lines or tracks, assuming that they are not railway tracks – that they are not is suggested by all the other road and path metaphors – leaves open the possibility of getting off the path, if only we had the imagination and the gumption to do it. We can cut cross-country. In the study of an exotic language, he wrote, “we are at long last pushed willy-nilly out of our ruts. Then we find that the exotic language is a mirror held up to our own” (1956: 138). How can Whorf be a determinist, how can he see us as forever trapped in these ruts, if the study of another language is sufficient to push us, kicking and screaming perhaps, out of them?

The total picture, then, is not one of constraint or determinism. It is, on the other hand, a model of powerful seduction: the seduction of what is familiar and easy to think, of what is intellectually restful, of what makes common sense.7 The seduction of the habitual pathway, based largely on laziness and fear of the unknown, can, with work, be resisted and broken. Somewhere in the back of Whorf’s mind may have been the allegory of the broad, fair road to Hell and the narrow, difficult path to Heaven beloved of his Puritan forebears. It makes us think of another New England Protestant: “Two roads diverged in a wood, and I, / I took the one less travelled by, / and that has made all the difference.”

The recognition of the seduction of the familiar implies a real ethical program:

It is the “plainest” English which contains the greatest number of unconscious assumptions about nature … Western culture has made, through language, a provisional analysis of reality and, without correctives, holds resolutely to that analysis as final. The only correctives lie in all those other tongues which by aeons of independent evolution have arrived at different, but equally logical, provisional analyses. (1956: 244)

Learning non-Western languages offers a lesson in humility and awe in an enormous multilingual world:

We shall no longer be able to see a few recent dialects of the Indo-European family, and the rationalizing techniques elaborated from their patterns, as the apex of the evolution of the human mind, nor their present wide spread as due to any survival from fitness or to anything but a few events of history – events that could be called fortunate only from the parochial point of view of the favored parties. They, and our own thought processes with them, can no longer be envisioned as spanning the gamut of reason and knowledge but only as one constellation in a galactic expanse. (p. 218)

The breathtaking sense of sudden vaster possibility, of the sky opening up to reveal a bigger sky beyond, may be what provokes such strong reactions to Whorf. For some, he is simply enraging or ridiculous. For others, reading Whorf is a transformative experience, and there are many stories of students coming to anthropology or linguistics largely because of their reading of Whorf (personal communications; Alford 2002).

pp. 167-168

[T]he rise of cognitive science was accompanied by a restating of what came to be called the “Sapir–Whorf hypothesis” in the most extreme terms. Three arguments came to the fore repeatedly:

Determinism. The Sapir–Whorf hypothesis says that the language you speak, and nothing else, determines how you think and perceive. We have already seen how false a characterization this is: the model the Boasians were working from was only deterministic in cases of no effort, of habitual thought or speaking. With enough effort, it is always possible to change your accent or your ideas.

Hermeticism. The Sapir–Whorf hypothesis maintains that each language is a sealed universe, expressing things that are inexpressible in another language. In such a view, translation would be impossible and Whorf’s attempt to render Hopi concepts in English an absurdity. In fact, the Boasians presumed, rather, that languages were not sealed worlds, but that they were to some degree comparable to worlds, and that passing between them required effort and alertness.

Both of these characterizations are used to set up a now classic article on linguistic relativity by the psychologist Eleanor Rosch (1974):

Are we “trapped” by our language into holding a particular “world view”? Can we never really understand or communicate with speakers of a language quite different from our own because each language has molded the thought of its people into mutually incomprehensible world views? Can we never get “beyond” language to experience the world “directly”? Such issues develop from an extreme form of a position sometimes known as “the Whorfian hypothesis” … and called, more generally, the hypothesis of “linguistic relativity.” (Rosch 1974: 95)

Rosch begins the article noting how intuitively right the importance of language differences first seemed to her, then spends much of the rest of it attacking this initial intuition.

Infinite variability. A third common characterization is that Boasian linguistics holds that, in Martin Joos’s words, “languages can differ from each other without limit and in unpredictable ways” (Joos 1966: 96). This would mean that the identification of any language universal would disprove the approach. In fact, the Boasians worked with the universals that were available to them – these were mainly derived from psychology – but opposed what they saw as the unfounded imposition of false universals that in fact reflected only modern Western prejudices. Joos’s hostile formulation has been cited repeatedly as if it were official Boasian doctrine (see Hymes and Fought 1981: 57).

For over fifty years, these three assertions have largely defined the received understanding of linguistic relativity. Anyone who has participated in discussions and/or arguments about the “Whorfian hypothesis” has heard them over and over again.

pp. 169-173

In the 1950s, anthropologists and psychologists were interested in experimentation and the testing of hypotheses on what was taken to be the model of the natural sciences. At a conference on language in culture, Harry Hoijer (1954) first named a Sapir–Whorf hypothesis that language influences thought.

To call something a hypothesis is to propose to test it, presumably using experimental methods. This task was taken on primarily by psychologists. A number of attempts were made to prove or disprove experimentally that language influences thought (see Lucy 1992a: 127–78; P. Brown 2006). Both “language” and “thought” were narrowed down to make them more amenable to experiment: the aspect of language chosen was usually the lexicon, presumably the easiest aspect to control in an experimental setting; thought was interpreted to mean perceptual discrimination and cognitive processing, aspects of thinking that psychologists were comfortable testing for. Eric Lenneberg defined the problem posed by the “Sapir–Whorf hypothesis” as that of “the relationship that a particular language may have to its speakers’ cognitive processes … Does the structure of a given language affect the thoughts (or thought potential), the memory, the perception, the learning ability of those who speak that language?” (1953: 463). Need I recall that Boas, Sapir, and Whorf went out of their way to deny that different languages were likely to be correlated with strengths and weaknesses in cognitive processes, i.e., in what someone is capable of thinking, as opposed to the contents of habitual cognition? […]

Berlin and Kay started by rephrasing Sapir and Whorf as saying that the search for semantic universals was “fruitless in principle” because “each language is semantically arbitrary relative to every other language” (1969: 2; cf. Lucy 1992a: 177–81). If this is what we are calling linguistic relativity, then if any domain of experience, such as color, is identified in recognizably the same way in different languages, linguistic relativity must be wrong. As we have seen, this fits the arguments of Weisgerber and Bloomfield, but not of Sapir or Whorf. […]

A characteristic study was reported recently in my own university’s in-house newspaper under the title “Language and Perception Are Not Connected” (Baril 2004). The article starts by saying that according to the “Whorf–Sapir hypothesis … language determines perception,” and therefore that “we should not be able to distinguish differences among similar tastes if we do not possess words for expressing their nuances, since it is language that constructs the mode of thought and its concepts … According to this hypothesis, every language projects onto its speakers a system of categories through which they see and interpret the world.” The hypothesis, we are told, has been “disconfirmed since the 1970s” by research on color. The article reports on the research of Dominic Charbonneau, a graduate student in psychology. Intrigued by recent French tests in which professional sommeliers, with their elaborate vocabulary, did no better than regular ignoramuses in distinguishing among wines, Charbonneau carried out his own experiment on coffee – this is, after all, a French-speaking university, and we take coffee seriously. Francophone students were asked to distinguish among different coffees; like most of us, they had a minimal vocabulary for distinguishing them (words like “strong,” “smooth,” “dishwater”). The participants made quite fine distinctions among the eighteen coffees served, well above the possible results of chance, showing that taste discrimination does not depend on vocabulary. Conclusion: “Concepts must be independent of language, which once again disconfirms the Sapir–Whorf hypothesis” (my italics). And this of course would be true if there were such a hypothesis, if it was primarily about vocabulary, and if it said that vocabulary determines perception.

We have seen that Bloomfield and his successors in linguistics maintained the unlimited arbitrariness of color classifications, and so could have served as easy straw men for the cognitivist return to universals. But what did Boas, Sapir, Whorf, or Lee actually have to say about color? Did they in fact claim that color perception or recognition or memory was determined by vocabulary? Sapir and Lee are easy: as far as I have been able to ascertain, neither one of them talked about color at all. Steven Pinker attributes a relativist and determinist view of color classifications to Whorf:

Among Whorf’s “kaleidoscopic flux of impressions,” color is surely the most eye-catching. He noted that we see objects in different hues, depending on the wavelengths of the light they reflect, but that the wavelength is a continuous dimension with nothing delineating red, yellow, green, blue, and so on. Languages differ in their inventory of color words … You can fill in the rest of the argument. It is language that puts the frets in the spectrum. (Pinker 1994: 61–2)

No he didn’t. Whorf never noted anything like this in any of his published work, and Pinker gives no indication of having gone through Whorf’s unpublished papers. As far as I can ascertain, Whorf talks about color in two places; in both he is saying the opposite of what Pinker says he is saying.

pp. 187-188

The 1950s through the 1980s saw the progressive triumph of universalist cognitive science. From the 1980s, one saw the concomitant rise of relativistic postmodernism. By the end of the 1980s there had been a massive return to the old split between universalizing natural sciences and their ancillary social sciences on the one hand, particularizing humanities and their ancillary cultural studies on the other. Some things, in the prevailing view, were universal, others so particular as to call for treatment as fiction or anecdote. Nothing in between was of very much interest, and North American anthropology, the discipline that had been founded upon and achieved a sort of identity in crossing the natural-science/humanities divide, faced an identity crisis. Symptomatically, one noticed many scholarly bookstores disappearing their linguistics sections into “cognitive science,” their anthropology sections into “cultural studies.”

In this climate, linguistic relativity was heresy, Whorf, in particular, a kind of incompetent Antichrist. The “Whorfian hypothesis” of linguistic relativism or determinism became a topos of any anthropology textbook, almost inevitably to be shown to be silly. Otherwise serious linguists and psychologists (e.g., Pinker 1994: 59–64) continued to dismiss the idea of linguistic relativity with an alacrity suggesting alarm and felt free to heap posthumous personal vilification on Whorf, the favorite target, for his lack of official credentials, in some really surprising displays of academic snobbery. Geoffrey Pullum, to take only one example, calls him a “Connecticut fire prevention inspector and weekend language-fancier” and “our man from the Hartford Fire Insurance Company” (Pullum 1989 [1991]: 163). This comes from a book with the subtitle Irreverent Essays on the Study of Language. But how irreverent is it to make fun of somebody almost everybody has been attacking for thirty years?

The Language Myth: Why Language Is Not an Instinct
by Vyvyan Evans
pp. 195-198

Who’s afraid of the Big Bad Whorf?

Psychologist Daniel Casasanto has noted, in an article whose title gives this section its heading, that some researchers find Whorf’s principle of linguistic relativity to be threatening. 6 But why is Whorf such a bogeyman for some? And what makes his notion of linguistic relativity such a dangerous idea?

The rationalists fear linguistic relativity – the very idea of it – and they hate it, with a passion: it directly contradicts everything they stand for – if relativism is anywhere near right, then the rationalist house burns down, or collapses, like a tower of cards without a foundation. And this fear and loathing in parts of the Academy can often, paradoxically, be highly irrational indeed. Relativity is often criticised without argumentative support, or ridiculed, just for the audacity of existing as an intellectual idea to begin with. Jerry Fodor, more candid than most about his irrational fear, just hates it. He says: “The thing is: I hate relativism. I hate relativism more than I hate anything else, excepting, maybe, fiberglass powerboats.” 7 Fodor continues, illustrating further his irrational contempt: “surely, surely, no one but a relativist would drive a fiberglass powerboat”. 8

Fodor’s objection is that relativism overlooks what he deems to be “the fixed structure of human nature”. 9 Mentalese provides the fixed structure – as we saw in the previous chapter. If language could interfere with this innate set of concepts, then the fixed structure would no longer be fixed – anathema to a rationalist.

Others are more coy, but no less damning. Pinker’s strategy is to set up straw men, which he then eloquently – but mercilessly – ridicules. 10 But don’t be fooled, there is no serious argument presented – not on this occasion. Pinker takes an untenable and extreme version of what he claims Whorf said, and then pokes fun at it – a common modus operandi employed by those who are afraid. Pinker argues that Whorf was wrong because he equated language with thought: that Whorf assumes that language causes or determines thought in the first place. This is the “conventional absurdity” that Pinker refers to in the first of his quotations above. For Pinker, Whorf was either romantically naïve about the effects of language, or, worse, like the poorly read and ill-educated, credulous.

But this argument is a classic straw man: it is set up to fail, being made of straw. Whorf never claimed that language determined thought. As we shall see, the thesis of linguistic determinism, which nobody believes, and which Whorf explicitly rejected, was attributed to him long after his death. But Pinker has bought into the very myths peddled by the rationalist tradition for which he is cheerleader-in-chief, and which lives in fear of linguistic relativity. In the final analysis, the language-as-instinct crowd should be afraid, very afraid: linguistic relativity, once and for all, explodes the myth of the language-as-instinct thesis.

The rise of the Sapir–Whorf hypothesis

Benjamin Lee Whorf became interested in linguistics in 1924, and studied it, as a hobby, alongside his full-time job as an engineer. In 1931, Whorf began to attend university classes on a part-time basis, studying with one of the leading linguists of the time, Edward Sapir. 11 Amongst other things covered in his teaching, Sapir touched on what he referred to as “relativity of concepts … [and] the relativity of the form of thought which results from linguistic study”. 12 The notion of the relativistic effect of different languages on thought captured Whorf’s imagination; and so he became captivated by the idea that he was to develop and become famous for. Because Whorf’s claims have often been disputed and misrepresented since his death, let’s see exactly what his formulation of his principle of linguistic relativity was:

Users of markedly different grammars are pointed by their grammars toward different types of observations and different evaluations of externally similar acts of observation, and hence are not equivalent as observers but must arrive at somewhat different views of the world. 13

Indeed, as pointed out by the Whorf scholar, Penny Lee, post-war research rarely ever took Whorf’s principle, or his statements, as their starting point. 14 Rather, his writings were, on the contrary, ignored, and his ideas largely distorted. 15

For one thing, the so-called ‘Sapir–Whorf hypothesis’ was not due to either Sapir or Whorf. Sapir – whose research was not primarily concerned with relativity – and Whorf were lumped together: the term ‘Sapir–Whorf hypothesis’ was coined in the 1950s, over ten years after both men had been dead – Sapir died in 1939, and Whorf in 1941. 16 Moreover, Whorf’s principle emanated from an anthropological research tradition; it was not, strictly speaking, a hypothesis. But, in the 1950s, psychologists Eric Lenneberg and Roger Brown sought to test empirically the notion of linguistic relativity. And to do so, they reformulated it in such a way that it could be tested, producing two testable formulations. 17 One, the so-called ‘strong version’ of relativity, holds that language causes a cognitive restructuring: language causes or determines thought. This is otherwise known as linguistic determinism, Pinker’s “conventional absurdity”. The second hypothesis, which came to be known as the ‘weak version’, claims instead that language influences a cognitive restructuring, rather than causing it. But neither formulation of the so-called ‘Sapir–Whorf hypothesis’ was due to Whorf, or Sapir. Indeed, on the issue of linguistic determinism, Whorf was explicit in arguing against it, saying the following:

The tremendous importance of language cannot, in my opinion, be taken to mean necessarily that nothing is back of it of the nature of what has traditionally been called ‘mind’. My own studies suggest, to me, that language, for all its kingly role, is in some sense a superficial embroidery upon deeper processes of consciousness, which are necessary before any communication, signalling, or symbolism whatsoever can occur. 18

This demonstrates that, in point of fact, Whorf actually believed in something like the ‘fixed structure’ that Fodor claims is lacking in relativity. The delicious irony arising from it all is that Pinker derides Whorf on the basis of the ‘strong version’ of the Sapir–Whorf hypothesis: linguistic determinism – language causes thought. But this strong version was a hypothesis not created by Whorf, but imagined by rationalist psychologists who were dead set against Whorf and linguistic relativity anyway. Moreover, Whorf explicitly disagreed with the thesis that was posthumously attributed to him. The issue of linguistic determinism became, incorrectly and disingenuously, associated with Whorf, growing in the rationalist sub-conscious like a cancer – Whorf was clearly wrong, they reasoned.

In more general terms, defenders of the language-as-instinct thesis have taken a leaf out of the casebook of Noam Chomsky. If you thought that academics play nicely, and fight fair, think again. Successful ideas are the currency, and they guarantee tenure, promotion, influence and fame; and they allow the successful academic to attract Ph.D. students who go out and evangelise, and so help to build intellectual empires. The best defence against ideas that threaten is ridicule. And, since the 1950s, until the intervention of John Lucy in the 1990s – whom I discuss below – relativity was largely dismissed; the study of linguistic relativity was, in effect, off-limits to several generations of researchers.

The Bilingual Mind: And What It Tells Us about Language and Thought
by Aneta Pavlenko
pp. 27-32

1.1.2.4 The real authors of the Sapir-Whorf hypothesis and the invisibility of scientific revolutions

The invisibility of bilingualism in the United States also accounts for the disappearance of multilingual awareness from discussions of Sapir’s and Whorf’s work, which occurred when the two scholars passed away – both at a relatively young age – and their ideas landed in the hands of others. The posthumous collections brought Sapir’s (1949) and Whorf’s (1956) insights to the attention of the wider public (including, inter alia, young Thomas Kuhn) and inspired the emergence of the field of psycholinguistics. But the newly minted psycholinguists faced a major problem: it had never occurred to Sapir and Whorf to put forth testable hypotheses. Whorf showed how linguistic patterns could be systematically investigated through the use of overt categories marked systematically (e.g., number in English or gender in Russian) and covert categories marked only in certain contexts (e.g., gender in English), yet neither he nor Sapir ever elaborated the meaning of ‘different observations’ or ‘psychological correlates’.

Throughout the 1950s and 1960s, scholarly debates at conferences, summer seminars and in academic journals attempted to correct this ‘oversight’ and to ‘systematize’ their ideas (Black, 1959; Brown & Lenneberg, 1954; Fishman, 1960; Hoijer, 1954a; Lenneberg, 1953; Osgood & Sebeok, 1954; Trager, 1959). The term ‘the Sapir-Whorf hypothesis’ was first used by linguistic anthropologist Harry Hoijer (1954b) to refer to the idea “that language functions, not simply as a device for reporting experience, but also, and more significantly, as a way of defining experience for its speakers” (p. 93). The study of SWH, in Hoijer’s view, was supposed to focus on structural and semantic patterns active in a given language. This version, probably closest to Whorf’s own interest in linguistic classification, was soon replaced by an alternative, developed by psychologists Roger Brown and Eric Lenneberg, who translated Sapir’s and Whorf’s ideas into two ‘testable’ hypotheses (Brown & Lenneberg, 1954; Lenneberg, 1953). The definitive form of the dichotomy was articulated in Brown’s (1958) book Words and Things:

linguistic relativity holds that where there are differences of language there will also be differences of thought, that language and thought covary. Determinism goes beyond this to require that the prior existence of some language pattern is either necessary or sufficient to produce some thought pattern. (p. 260)

In what follows, I will draw on Kuhn’s ([1962] 2012) insights to discuss four aspects of this radical transformation of Sapir’s and Whorf’s ideas into the SWH: (a) it was a major change of paradigm, that is, of shared assumptions, research foci, and methods, (b) it erased multilingual awareness, (c) it created a false dichotomy, and (d) it proceeded unacknowledged.

The change of paradigm was necessitated by the desire to make complex notions, articulated by linguistic anthropologists, fit experimental paradigms in psychology. Yet ideas don’t travel easily across disciplines: Kuhn ([1962] 2012) compares a dialog between scientific communities to intercultural communication, which requires skillful translation if it is to avoid communication breakdowns. Brown and Lenneberg’s translation was not skillful and while their ideas moved the study of language and cognition forward, they departed from the original arguments in several ways (for discussion, see also Levinson, 2012; Lucy, 1992a; Lee, 1996).

First, they shifted the focus of the inquiry from the effects of obligatory grammatical categories, such as tense, to lexical domains, such as color, that had a rather tenuous relationship to linguistic thought (color differentiation was, in fact, discussed by Boas and Whorf as an ability not influenced by language). Secondly, they shifted from concepts as interpretive categories to cognitive processes, such as perception or memory, that were of little interest to Sapir and Whorf, and proposed to investigate them with artificial stimuli, such as Munsell chips, that hardly reflect habitual thought. Third, they privileged the idea of thought potential (and, by implication, what can be said) over Sapir’s and Whorf’s concerns with obligatory categories and habitual thought (and, by definition, with what is said). Fourth, they missed the insights about the illusory objectivity of one’s own language and replaced the interest in linguistic thought with independent ‘language’ and ‘cognition’. Last, they substituted Humboldt’s, Sapir’s and Whorf’s interest in multilingual awareness with a hypothesis articulated in monolingual terms.

A closer look at Brown’s (1958) book shows that he was fully aware of the existence of bilingualism and of the claims made by bilingual speakers of Native American languages that “thinking is different in the Indian language” (p. 232). His recommendation in this case was to distrust those who have the “unusual” characteristic of being bilingual:

There are few bilinguals, after all, and the testimony of those few cannot be uncritically accepted. There is a familiar inclination on the part of those who possess unusual and arduously obtained experience to exaggerate its remoteness from anything the rest of us know. This must be taken into account when evaluating the impressions of students of Indian languages. In fact, it might be best to translate freely with the Indian languages, assimilating their minds to our own. (Brown, 1958: 233)

The testimony of German–English bilinguals – akin to his own collaborator Eric Heinz Lenneberg – was apparently another matter: the existence of “numerous bilingual persons and countless translated documents” was, for Brown (1958: 232), compelling evidence that the German mind is “very like our own”. Alas, Brown’s (1958) contradictory treatment of bilingualism and the monolingual arrogance of the recommendations ‘to translate freely’ and ‘to assimilate Indian minds to our own’ went unnoticed by his colleagues. The result was the transformation of a fluid and dynamic account of language into a rigid, static false dichotomy.

When we look back, the attribution of the idea of linguistic determinism to multilinguals interested in language evolution and the evolution of the human mind makes little sense. Yet the replacement of the open-ended questions about implications of linguistic diversity with two ‘testable’ hypotheses had a major advantage – it was easier to argue about and to digest. And it was welcomed by scholars who, like Kay and Kempton (1984), applauded the translation of Sapir’s and Whorf’s convoluted passages into direct prose and felt that Brown and Lenneberg “really said all that was necessary” (p. 66) and that the question of what Sapir and Whorf actually thought was interesting but “after all less important than the issue of what is the case” (p. 77). In fact, by the 1980s, Kay and Kempton were among the few who could still trace the transformation to the two psychologists. Their colleagues were largely unaware of it because Brown and Lenneberg concealed the radical nature of their reformulation by giving Sapir and Whorf ‘credit’ for what should have been the Brown-Lenneberg hypothesis.

We might never know what prompted this unusual scholarly modesty – a sincere belief that they were simply ‘improving’ Sapir and Whorf or the desire to distance themselves from the hypothesis articulated only to be ‘disproved’. For Kuhn ([1962] 2012), this is science as usual: “it is just this sort of change in the formulation of questions and answers that accounts, far more than novel empirical discoveries, for the transition from Aristotelian to Galilean and from Galilean to Newtonian dynamics” (p. 139). He also points to the hidden nature of many scientific revolutions concealed by textbooks that provide the substitute for what they had eliminated and make scientific development look linear, truncating the scientists’ knowledge of the history of their discipline. This is precisely what happened with the SWH: the newly minted hypothesis took on a life of its own, multiplying and reproducing itself in myriads of textbooks, articles, lectures, and popular media, and moving the discussion further and further away from Sapir’s primary interest in ‘social reality’ and Whorf’s central concern with ‘habitual thought’.

The transformation was facilitated by four common academic practices that allow us to manage the ever-increasing amount of literature in the ever-decreasing amount of time: (a) simplification of complex arguments (which often results in misinterpretation); (b) reduction of original texts to standard quotes; (c) reliance on other people’s exegeses; and (d) uncritical reproduction of received knowledge. The very frequency of this reproduction made the SWH a ‘fact on the ground’, accepted as a valid substitution for the original ideas. The new terms of engagement became part of habitual thought in the Ivory Tower and to this day are considered obligatory by many academics who begin their disquisitions on linguistic relativity with a nod towards the sound-bite version of the ‘strong’ determinism and ‘weak’ relativity. In Kuhn’s ([1962] 2012) view, this perpetuation of a new set of shared assumptions is a key marker of a successful paradigm change: “When the individual scientist can take a paradigm for granted, he need no longer, in his major works, attempt to build his field anew, starting from first principles and justifying the use of each concept introduced” (p. 20).

Yet the false dichotomy reified in the SWH – and the affective framing of one hypothesis as strong and the other as weak – moved the goalposts and reset the target and the standards needed to achieve it, giving scholars a clear indication of which hypothesis they should address. This preference, too, was perpetuated by countless researchers who, like Langacker (1976: 308), dismissed the ‘weak’ version as obviously true but uninteresting and extolled ‘the strongest’ as “the most interesting version of the LRH” but also as “obviously false”. And indeed, the research conducted on Brown’s and Lenneberg’s terms failed to ‘prove’ linguistic determinism and instead revealed ‘minor’ language effects on cognition (e.g., Brown & Lenneberg, 1954; Lenneberg, 1953) or no effects at all (Heider, 1972). The studies by Gipper (1976) 4 and Malotki (1983) showed that even Whorf’s core claims, about the concept of time in Hopi, may have been misguided. 5 This ‘failure’ too became part of the SWH lore, with textbooks firmly stating that “a strong version of the Whorfian hypothesis cannot be true” (Foss & Hakes, 1978: 393).

By the 1980s, there emerged an implicit consensus in US academia that Whorfianism was “a bête noire, identified with scholarly irresponsibility, fuzzy thinking, lack of rigor, and even immorality” (Lakoff, 1987: 304). This consensus was shaped by the political climate supportive of the notion of ‘free thought’ yet hostile to linguistic diversity, by educational policies that reinforced monolingualism, and by the rise of cognitive science and meaning-free linguistics that replaced the study of meaning with the focus on structures and universals. Yet the implications of Sapir’s and Whorf’s ideas continued to be debated (e.g., Fishman, 1980, 1982; Kay & Kempton, 1984; Lakoff, 1987; Lucy & Shweder, 1979; McCormack & Wurm, 1977; Pinxten, 1976) and in the early 1990s the inimitable Pinker decided to put the specter of the SWH to bed once and for all. Performing a feat reminiscent of Humpty Dumpty, Pinker (1994) made the SWH ‘mean’ what he wanted it to mean, namely “the idea that thought is the same thing as language” (p. 57). Leaving behind Brown’s (1958) articulation with its modest co-variation, he replaced it in the minds of countless undergraduates with

the famous Sapir-Whorf hypothesis of linguistic determinism, stating that people’s thoughts are determined by the categories made available by their language, and its weaker version, linguistic relativity, stating that differences among languages cause differences in the thoughts of their speakers. (Pinker, 1994: 57)

And lest they still thought that there is something to it, Pinker (1994) told them that it is “an example of what can be called a conventional absurdity” (p. 57) and “it is wrong, all wrong” (p. 57). Ironically, this ‘obituary’ for the SWH coincided with the neo-Whorfian revival, through the efforts of several linguists, psychologists, and anthropologists – most notably Gumperz and Levinson (1996), Lakoff (1987), Lee (1996), Lucy (1992a, b), and Slobin (1991, 1996a) – who were willing to buck the tide, to engage with the original texts, and to devise new methods of inquiry. This work will form the core of the chapters to come but for now I want to emphasize that the received belief in the validity of the terms of engagement articulated by Brown and Lenneberg and their attribution to Sapir and Whorf is still pervasive in many academic circles and evident in the numerous books and articles that regurgitate the SWH as the strong/weak dichotomy. The vulgarization of Whorf’s views bemoaned by Fishman (1982) also continues in popular accounts, and I fully agree with Pullum (1991) who, in his own critique of Whorf, noted:

Once the public has decided to accept something as an interesting fact, it becomes almost impossible to get the acceptance rescinded. The persistent interestingness and symbolic usefulness overrides any lack of factuality. (p. 159)

Popularizers of academic work continue to stigmatize Whorf through comments such as “anyone can estimate the time of day, even the Hopi Indians; these people were once attributed with a lack of any conception of time by a book-bound scholar, who had never met them” (Richards, 1998: 44). Even respectable linguists perpetuate the strawman version of “extreme relativism – the idea that there are no facts common to all cultures and languages” (Everett, 2012: 201) or make cheap shots at “the most notorious of the con men, Benjamin Lee Whorf, who seduced a whole generation into believing, without a shred of evidence, that American Indian languages lead their speakers to an entirely different conception of reality from ours” (Deutscher, 2010: 21). This assertion is then followed by a statement that while the link between language, culture, and cognition “seems perfectly kosher in theory, in practice the mere whiff of the subject today makes most linguists, psychologists, and anthropologists recoil” because the topic “carries with it a baggage of intellectual history which is so disgraceful that the mere suspicion of association with it can immediately brand anyone a fraud” (Deutscher, 2010: 21).

Such comments are not just an innocent rhetorical strategy aimed at selling more copies: the uses of hyperbole (most linguists, psychologists, and anthropologists; mere suspicion of association), affect (disgraceful, fraud, recoil, embarrassment), misrepresentation (disgraceful baggage of intellectual history), strawman arguments and reductio ad absurdum as a means of persuasion have played a major role in manufacturing the false consent in the history of ideas that Deutscher (2010) finds so ‘disgraceful’ (readers interested in the dirty tricks used by scholars should read the expert description by Pinker, 2007: 89–90). What is particularly interesting is that both Deutscher (2010) and Everett (2012) actually marshal evidence in support of Whorf’s original arguments. Their attempt to do so while distancing themselves from Whorf would have fascinated Whorf, for it reveals two patterns of habitual thought common in English-language academia: the uncritical adoption of the received version of the SWH and the reliance on the metaphor of ‘argument as war’ (Tannen, 1998), i.e., an assumption that each argument has ‘two sides’ (not one or three), that these sides should be polarized in either/or terms, and that in order to present oneself as a ‘reasonable’ author, one should exaggerate the alternatives and then occupy the ‘rational’ position in between. Add to this the reductionism common for trade books and the knowledge that criticism sells better than praise, and you get Whorf as a ‘con man’.

Dark Matter of the Mind
by Daniel L. Everett
Kindle Locations 352-373

I am here particularly concerned with difference, however, rather than sameness among the members of our species— with variation rather than homeostasis. This is because the variability in dark matter from one society to another is fundamental to human survival, arising from and sustaining our species’ ecological diversity. The range of possibilities produces a variety of “human natures” (cf. Ehrlich 2001). Crucial to the perspective here is the concept-apperception continuum. Concepts can always be made explicit; apperceptions less so. The latter result from a culturally guided experiential memory (whether conscious or unconscious or bodily). Such memories can be not only difficult to talk about but often ineffable (see Majid and Levinson 2011; Levinson and Majid 2014). Yet both apperception and conceptual knowledge are uniquely determined by culture, personal history, and physiology, contributing vitally to the formation of the individual psyche and body.

Dark matter emerges from individuals living in cultures and thereby underscores the flexibility of the human brain. Instincts are incompatible with flexibility. Thus special care must be given to evaluating arguments in support of them (see Blumberg 2006 for cogent criticisms of many purported examples of instincts, as well as the abuse of the term in the literature). If we have an instinct to do something one way, this would impede learning to do it another way. For this reason it would surprise me if creatures higher on the mental and cerebral evolutionary scale— you and I, for example— did not have fewer rather than more instincts. Humans, unlike cockroaches and rats— two other highly successful members of the animal kingdom— adapt holistically to the world in which they live, in the sense that they can learn to solve problems across environmental niches, then teach their solutions and reflect on these solutions. Cultures turn out to be vital to this human adaptational flexibility— so much so that the most important cognitive question becomes not “What is in the brain?” but “What is the brain in?” (That is, in what individual, residing in what culture does this particular brain reside?)

The brain, by this view, was designed to be as close to a blank slate as was possible for survival. In other words, the views of Aristotle, Sapir, Locke, Hume, and others better fit what we know about the nature of the brain and human evolution than the views of Plato, Bastian, Freud, Chomsky, Tooby, Pinker, and others. Aristotle’s tabula rasa seems closer to being right than is currently fashionable to suppose, especially when we answer the pointed question, what is left in the mind/brain when culture is removed?

Most of the lessons of this book derive from the idea that our brains (including our emotions) and our cultures are related symbiotically through the individual, and that neither supervenes on the other. In this framework, nativist ideas often are superfluous.

Kindle Locations 3117-3212

Science, we might say, ought to be exempt from dark matter. Yet that is much harder to claim than to demonstrate. […] To take a concrete example of a science, we focus on linguistics, because this discipline straddles the borders between the sciences, humanities, and social sciences. The basic idea to be explored is this: because counterexamples and exceptions are culturally determined in linguistics, as in all sciences, scientific progress is the output of cultural values. These values differ even within the same discipline (e.g., linguistics), however, and can lead to different notions of progress in science. To mitigate this problem, therefore, to return to linguistics research as our primary example, our inquiry should be informed by multiple theories, with a focus on languageS rather than Language. To generalize, this would mean a focus on the particular rather than the general in many cases. Such a focus (in spite of the contrast between this and many scientists’ view that generalizations are the goal of science) develops a robust empirical basis while helping to distinguish local theoretical culture from broader, transculturally agreed-upon desiderata of science— an issue that theories of language, in a way arguably more extreme than in other disciplines, struggle to tease apart.

The reason that a discussion of science and dark matter is important here is to probe the significance and meaning of dark matter, culture, and psychology in the more comfortable, familiar territory of the reader, to understand that what we are contemplating here is not limited to cultures unlike our own, but affects every person, every endeavor of Homo sapiens, even the hallowed enterprise of science. This is not to say that science is merely a cultural illusion. This chapter has nothing to do with postmodernist epistemological relativity. But it does aim to show that science is not “pure rationality,” autonomous from its cultural matrix. […]

Whether we classify an anomaly as counterexample or exception depends on our dark matter— our personal history plus cultural values, roles, and knowledge structures. And the consequences of our classification are also determined by culture and dark matter. Thus, by social consensus, exceptions fall outside the scope of the statements of a theory or are explicitly acknowledged by the theory to be “problems” or “mysteries.” They are not immediate problems for the theory. Counterexamples, on the other hand, by social consensus render a statement false. They are immediately acknowledged as (at least potential) problems for any theory. Once again, counterexamples and exceptions are the same etically, though they are nearly polar opposites emically. Each is defined relative to a specific theoretical tradition, a specific set of values, knowledge structures, and roles— that is, a particular culture.

One bias that operates in theories, the confirmation bias, is the cultural value that a theory is true and therefore that experiments are going to strengthen it, confirm it, but not falsify it. Anomalies appearing in experiments conducted by adherents of a particular theory are much more likely to be interpreted as exceptions that might require some adjustments of the instruments, but nothing serious in terms of the foundational assumptions of the theory. On the other hand, when anomalies turn up in experiments by opponents of a theory, there will be a natural bias to interpret these as counterexamples that should lead to the abandonment of the theory. Other values that can come into play for the cultural/theoretical classification of an anomaly as a counterexample or an exception include “tolerance for cognitive dissonance,” a value of the theory that says “maintain that the theory is right and, at least temporarily, set aside problematic facts,” assuming that they will find a solution after the passage of a bit of time. Some theoreticians call this tolerance “Galilean science”— the willingness to set aside all problematic data because a theory seems right. Fair enough. But when, why, and for how long a theory seems right in the face of counterexamples is a cultural decision, not one that is based on facts alone. We have seen that the facts of a counterexample and an exception can be exactly the same. Part of the issue of course is that data, like their interpretations, are subject to emicization. We decide to see data with a meaning, ignoring the particular variations that some other theory might seize on as crucial. In linguistics, for example, if a theory (e.g., Chomskyan theory) says that all relevant grammatical facts stop at the boundary of the sentence, then related facts at the level of paragraphs, stories, and so on, are overlooked.

The cultural and dark matter forces determining the interpretation of anomalies in the data that lead one to abandon a theory and another to maintain it themselves create new social situations that confound the intellect and the sense of morality that often is associated with the practice of a particular theory. William James (1907, 198) summed up some of the reactions to his own work, as evidence of these reactions to the larger field of intellectual endeavors: “I fully expect to see the pragmatist view of truth run through the classic stages of a theory’s career. First, you know, a new theory is attacked as absurd; then it is admitted to be true, but obvious and insignificant; finally it is seen to be so important that its adversaries claim that they themselves discovered it.”

In recent years, due to my research and claims regarding the grammar of the Amazonian Pirahã— that this language lacks recursion— I have been called a charlatan and a dull wit who has misunderstood. It has been (somewhat inconsistently) further claimed that my results are predicted (Chomsky 2010, 2014); it has been claimed that an alternative notion of recursion, Merge, was what the authors had in mind in saying that recursion is the foundation of human languages; and so on. And my results have been claimed to be irrelevant.

* * *

Beyond Our Present Knowledge
Useful Fictions Becoming Less Useful
Essentialism On the Decline
Is the Tide Starting to Turn on Genetics and Culture?
Blue on Blue
The Chomsky Problem
Dark Matter of the Mind
What is the Blank Slate of the Mind?
Cultural Body-Mind
How Universal Is The Mind?
The Psychology and Anthropology of Consciousness
On Truth and Bullshit

The Mind in the Body

“[In the Old Testament], human faculties and bodily organs enjoy a measure of independence that is simply difficult to grasp today without dismissing it as merely poetic speech or, even worse, ‘primitive thinking.’ […] In short, the biblical character presents itself to us more as parts than as a whole”
(Robert A. Di Vito, “Old Testament Anthropology and the Construction of Personal Identity”, p. 227-228)

The Axial Age was a transitional stage following the collapse of the Bronze Age civilizations. And in that transition, new mindsets mixed with old: what came before tried to contain the rupture, while what was forming was not yet fully born. Writing, texts, and laws were replacing voices gone quiet and silent. Ancient forms of authorization were no longer as viscerally real and psychologically compelling. But the transition period was long and slow, and in many ways it continues to this day (e.g., authoritarianism as vestigial bicameralism).

One aspect was the changing experience of identity, as felt within the body and the world. But let me take a step back. In hunter-gatherer societies, there is the common attribute of animism, in which the world is alive with voices, and along with this a sense of identity that, involving sensory immersion not limited to the body, extends into the surrounding environment. The bicameral mind seems to have been a reworking of this mentality for the emerging agricultural villages and city-states. Instead of the body as part of the natural environment, there was the body politic, with the community as a coherent whole, a living organism. Without a metaphorical framing of inside and outside as the crux of identity, as would later develop, self and other were defined by permeable collectivism rather than rigid individualism (the bundle theory of mind taken to the extreme of a bundle theory of society).

In the late Bronze Age, large and expansive theocratic hierarchies formed. Writing took on an ever greater role. All of this combined to make the bicameral order precarious. The act of writing and reading texts was still integrated with voice-hearing traditions, a text being the literal ‘word’ of a god, spirit, or ancestor. But the writing down of the voices began the process of creating psychological distance, the text itself beginning to take on authority of its own. This became a competing metaphorical framing, that of truth and reality as text.

This transformed the perception of the body. The voices became harder to decipher. Hearing a voice of authority speak to you required little interpretation, but a text emphasized the need for interpretation. Reading became a way of thinking about the world and about one’s way of being in the world. Divination and similar practices were attempts to read the world. Clouds or lightning, the flight of birds or the organs of a sacrificial animal — these were texts to be read.

Likewise, the body became a repository of voices, although initially not quite a unitary whole. Different aspects of self and spirits, different energies and forces were located and contained in various organs and body parts — to the extent that they had minds of their own, a potentially distressing condition sometimes interpreted as possession. As the bicameral community was a body politic, so the post-bicameral body initiated the internalization of community. But this body-as-community didn’t at first have a clear egoic ruler — the need for one growing stronger as external authorization further weakened. Eventually, it became necessary to locate the ruling self in a particular place within, such as the heart or throat or head. This was a forceful suppression of the many voices and hence a disallowing of the perception of self as community. The narrative of individuality began to be told.

Even today, we go on looking for a voice in some particular location. Noam Chomsky’s theory of a language organ is an example of this. We struggle for authorization within consciousness, as the ancient grounding of authorization in the world and in community has been lost, cast into the shadows.

Still, dissociation having taken hold, the voices never disappear; they continue to demand being heard, if only as symptoms of physical and psychological disease. Or else we let the thousand voices of media tell us how to think and what to do. Ultimately, trying to contain authorization within us is impossible, and so authorization spills back out into the world, the return of the repressed. Our sense of individualism is much more of a superficial rationalization than we’d like to admit. The social nature of our humanity can’t be denied.

As with post-bicameral humanity, we are still trying to navigate this complex and confounding social reality. Maybe that is why the Axial Age religions, in first articulating the dilemma of conscious individuality, remain compelling in what they taught. The Axial Age prophets gave voice to our own ambivalence, and maybe that is what gives the ego such power over us. We moderns haven’t become disconnected and dissociated merely because of some recent affliction — such a state of mind is what we inherited, as the foundation of our civilization.

* * *

“Therefore when thou doest thine alms, do not sound a trumpet before thee, as the hypocrites do in the synagogues and in the streets, that they may have glory of men. Verily I say unto you, They have their reward. But when thou doest alms, let not thy left hand know what thy right hand doeth: That thine alms may be in secret: and thy Father which seeth in secret himself shall reward thee openly.” (Matthew 6:2-4)

“Wherefore if thy hand or thy foot offend thee, cut them off, and cast them from thee: it is better for thee to enter into life halt or maimed, rather than having two hands or two feet to be cast into everlasting fire. And if thine eye offend thee, pluck it out, and cast it from thee: it is better for thee to enter into life with one eye, rather than having two eyes to be cast into hell fire.” (Matthew 18:8-9)

The Prince of Medicine
by Susan P. Mattern
pp. 232-233

He mentions speaking with many women who described themselves as “hysterical,” that is, having an illness caused, as they believed, by a condition of the uterus (hystera in Greek) whose symptoms varied from muscle contractions to lethargy to nearly complete asphyxia (Loc. Affect. 6.5, 8.414K). Galen, very aware of Herophilus’s discovery of the broad ligaments anchoring the uterus to the pelvis, denied that the uterus wandered around the body like an animal wreaking havoc (the Hippocratics imagined a very actively mobile womb). But the uterus could, in his view, become withdrawn in some direction or inflamed; and in one passage he recommends the ancient practice of fumigating the vagina with sweet-smelling odors to attract the uterus, endowed in this view with senses and desires of its own, to its proper place; this technique is described in the Hippocratic Corpus but also evokes folk or shamanistic medicine.

“Between the Dream and Reality”:
Divination in the Novels of Cormac McCarthy

by Robert A. Kottage
pp. 50-52

A definition of haruspicy is in order. Known to the ancient Romans as the Etrusca disciplina or “Etruscan art” (P.B. Ellis 221), haruspicy originally included all three types of divination practiced by the Etruscan hierophant: interpretation of fulgura (lightnings), of monstra (birth defects and unusual meteorological occurrences), and of exta (internal organs) (Hammond). Of these, the practice still commonly associated with the term is the examination of organs, as evidenced by its OED definition: “The practice or function of a haruspex; divination by inspection of the entrails of victims” (“haruspicy”). A detailed science of liver divination developed in the ancient world, and instructional bronze liver models formed by the Etruscans—as well as those made by their predecessors the Hittites and Babylonians—have survived (Hammond). Any unusual features were noted and interpreted by those trained in the esoteric art: “Significant for the exta were the size, shape, colour, and markings of the vital organs, especially the livers and gall bladders of sheep, changes in which were believed by many races to arise supernaturally… and to be susceptible of interpretation by established rules” (Hammond). Julian Jaynes, in his book The Origin of Consciousness in the Breakdown of the Bicameral Mind, comments on the unique quality of haruspicy as a form of divination, arriving as it did at the dawn of written language: “Extispicy [divining through exta] differs from other methods in that the metaphrand is explicitly not the speech or actions of the gods, but their writing. The baru [Babylonian priest] first addressed the gods… with requests that they ‘write’ their message upon the entrails of the animal” (Jaynes 243). Jaynes also remarks that organs found to contain messages of import would sometimes be sent to kings, like letters from the gods (Jaynes 244). Primitive man sought (and found) meaning everywhere.

The logic behind the belief was simple: the whole universe is a single, harmonious organism, with the thoughts and intentions of the intangible gods reflected in the tangible world. For those illiterate to such portents, a lightning bolt or the birth of a hermaphrodite would have been untranslatable; but for those with proper training, the cosmos was as alive with signs as any language:

The Babylonians believed that the decisions of their gods, like those of their kings, were arbitrary, but that mankind could at least guess their will. Any event on earth, even a trivial one, could reflect or foreshadow the intentions of the gods because the universe is a living organism, a whole, and what happens in one part of it might be caused by a happening in some distant part. Here we see a germ of the theory of cosmic sympathy formulated by Posidonius. (Luck 230)

This view of the capricious gods behaving like human kings is reminiscent of the evil archons of gnosticism; however, unlike gnosticism, the notion of cosmic sympathy implies an illuminated and vastly “readable” world, even in the darkness of matter. The Greeks viewed pneuma as “the substance that penetrates and unifies all things. In fact, this tension holds bodies together, and every coherent thing would collapse without it” (Lawrence)—a notion that diverges from the gnostic idea of pneuma as spiritual light temporarily trapped in the pall of physicality.

Proper vision, then, is central to all the offices of the haruspex. The world cooperates with the seer by being illuminated, readable.

p. 160

Jaynes establishes the important distinction between the modern notion of chance commonly associated with coin flipping and the attitude of the ancient Mesopotamians toward sortilege:

We are so used to the huge variety of games of chance, of throwing dice, roulette wheels, etc., all of them vestiges of this ancient practice of divination by lots, that we find it difficult to really appreciate the significance of this practice historically. It is a help here to realize that there was no concept of chance whatever until very recent times…. [B]ecause there was no chance, the result had to be caused by the gods whose intentions were being divined. (Jaynes 240)

In a world devoid of luck, proper divination is simply a matter of decoding the signs—bad readings are never the fault of the gods, but can only stem from the reader.

The Consciousness of John’s Gospel
A Prolegomenon to a Jaynesian-Jamesonian Approach

by Jonathan Bernier

When reading the prologue’s historical passages, one notes a central theme: the Baptist witnesses to the light coming into the world. Put otherwise, the historical witnesses to the cosmological. This, I suggest, can be understood as an example of what Jaynes (1976: 317–338) calls ‘the quest for authorization.’ As the bicameral mind broke down, as exteriorised thought ascribed to other-worldly agents gave way to interiorised thought ascribed to oneself, as the voices of the gods spoke less frequently, people sought out new means, extrinsic to themselves, by which to authorise belief and practice; they quite literally did not trust themselves. They turned to oracles and prophets, to auguries and haruspices, to ecstatics and ecstasy. Proclamatory prophecy of the sort practiced by John the Baptist should be understood in terms of the bicameral mind: the Lord God of Israel, external to the Baptist, issued imperatives to the Baptist, and then the Baptist, external to his audience, relayed those divine imperatives to his listeners. Those who chose to follow the Baptist’s imperatives operated according to the logic of the bicameral mind, as described by Jaynes (1976: 84–99): the divine voice speaks, therefore I act. That voice just happens now to be mediated through the prophet, and not apprehended directly in the way that the bicameral mind apprehended the voices and visions. The Baptist as witness to God’s words and Word is the Baptist as bicameral vestige.

By way of contrast, the Word-become-flesh can be articulated in terms of the bicameral mind giving way to consciousness. The Jesus of the prologue represents the apogee of interiorised consciousness: the Word is not just inside him, but he in fact is the Word. 1:17 draws attention to an implication consequent to this indwelling of the Word: with the divine Word – and thus also the divine words – dwelling fully within oneself, what need is there for that set of exteriorised thoughts known as the Mosaic Law? […]

[O]ne notes Jaynes’ (1976: 301, 318) suggestion that the Mosaic Law represents a sort of half-way house between bicameral exteriority and conscious interiority: no longer able to hear the voices, the ancient Israelites sought external authorisation in the written word; eventually, however, as the Jewish people became increasingly acclimated to conscious interiority, they became increasingly ambivalent towards the need for and role of such exteriorised authorisation. Jaynes (1976: 318) highlights Jesus’ place in this emerging ambivalence; however, in 1:17 it is not so much that exteriorised authorisation is displaced by interiorised consciousness but that Torah as exteriorised authority is replaced by Jesus as exteriorised authority. Jesus, the fully conscious Word-made-flesh, might displace the Law, but it is not altogether clear that he offers his followers a full turn towards interiorised consciousness; one might, rather, read 1:17 as a bicameral attempt to re-contain the cognition revolution of which Jaynes considers Jesus to be a flag-bearer.

The Discovery of the Mind
by Bruno Snell
pp. 6-8

We find it difficult to conceive of a mentality which made no provision for the body as such. Among the early expressions designating what was later rendered as soma or ‘body’, only the plurals γυῖα, μέλεα, etc. refer to the physical nature of the body; for chros is merely the limit of the body, and demas represents the frame, the structure, and occurs only in the accusative of specification. As it is, early Greek art actually corroborates our impression that the physical body of man was comprehended, not as a unit but as an aggregate. Not until the classical art of the fifth century do we find attempts to depict the body as an organic unit whose parts are mutually correlated. In the preceding period the body is a mere construct of independent parts variously put together.6 It must not be thought, however, that the pictures of human beings from the time of Homer are like the primitive drawings to which our children have accustomed us, though they too simply add limb to limb.

Our children usually represent the human shape as shown in fig. 1, whereas fig. 2 reproduces the Greek concept as found on the vases of the geometric period. Our children first draw a body as the central and most important part of their design; then they add the head, the arms and the legs. The geometric figures, on the other hand, lack this central part; they are nothing but μέλεα καὶ γυῖα, i.e. limbs with strong muscles, separated from each other by means of exaggerated joints. This difference is of course partially dependent upon the clothes they wore, but even after we have made due allowance for this the fact remains that the Greeks of this early period seem to have seen in a strangely ‘articulated’ way. In their eyes the individual limbs are clearly distinguished from each other, and the joints are, for the sake of emphasis, presented as extraordinarily thin, while the fleshy parts are made to bulge just as unrealistically. The early Greek drawing seeks to demonstrate the agility of the human figure, the drawing of the modern child its compactness and unity.

Thus the early Greeks did not, either in their language or in the visual arts, grasp the body as a unit. The phenomenon is the same as with the verbs denoting sight; in the latter, the activity is at first understood in terms of its conspicuous modes, of the various attitudes and sentiments connected with it, and it is a long time before speech begins to address itself to the essential function of this activity. It seems, then, as if language aims progressively to express the essence of an act, but is at first unable to comprehend it because it is a function, and as such neither tangibly apparent nor associated with certain unambiguous emotions. As soon, however, as it is recognized and has received a name, it has come into existence, and the knowledge of its existence quickly becomes common property. Concerning the body, the chain of events may have been somewhat like this: in the early period a speaker, when faced by another person, was apparently satisfied to call out his name: this is Achilles, or to say: this is a man. As a next step, the most conspicuous elements of his appearance are described, namely his limbs as existing side by side; their functional correlation is not apprehended in its full importance until somewhat later. True enough, the function is a concrete fact, but its objective existence does not manifest itself so clearly as the presence of the individual corporeal limbs, and its prior significance escapes even the owner of the limbs himself. With the discovery of this hidden unity, of course, it is at once appreciated as an immediate and self-explanatory truth.

This objective truth, it must be admitted, does not exist for man until it is seen and known and designated by a word; until, thereby, it has become an object of thought. Of course the Homeric man had a body exactly like the later Greeks, but he did not know it qua body, but merely as the sum total of his limbs. This is another way of saying that the Homeric Greeks did not yet have a body in the modern sense of the word; body, soma, is a later interpretation of what was originally comprehended as μέλη or γυῖα, i.e. as limbs. Again and again Homer speaks of fleet legs, of knees in speedy motion, of sinewy arms; it is in these limbs, immediately evident as they are to his eyes, that he locates the secret of life.7

Hebrew and Buddhist Selves:
A Constructive Postmodern Study

by Nicholas F. Gier

Finally, at least two biblical scholars–in response to the question “What good is this pre-modern self?”–have suggested that the Hebrew view (we add the Buddhist and the Chinese) can be used to counterbalance the dysfunctional elements of modern selfhood. Both Robert Di Vito and Jacqueline Lapsley have called this move “postmodern,” based, as they contend, on the concept of intersubjectivity.[3] In his interpretation of Charles S. Peirce as a constructive postmodern thinker, Peter Ochs observes that Peirce reaffirms the Hebraic view that relationality is knowledge at its most basic level.  As Ochs states: “Peirce did not read Hebrew, but the ancient Israelite term for ‘knowledge’–yidiah–may convey Peirce’s claim better than any term he used.  For the biblical authors, ‘to know’ is ‘to have intercourse with’–with the world, with one’s spouse, with God.”[4]

The view that the self is self-sufficient and self-contained is a seductive abstraction that contradicts the very facts of our interdependent existence.  Modern social atomism was most likely the result of modeling the self on an immutable transcendent deity (more Greek than biblical) and/or the inert isolated atom of modern science. […]

It is surprising to discover that the Buddhist skandhas are more mental in character, while the Hebrew self is more material in very concrete ways.  For example, the Psalmist says that “all my inner parts (=heart-mind) bless God’s holy name” (103.1); his kidneys (=conscience) chastise him (16.7); and broken bones rejoice (16:7).  Hebrew bones offer us the most dramatic example of a view of human essence most contrary to Christian theology.  One’s essential core is not immaterial and invisible; rather, it is one’s bones, the most enduring remnant of a person’s being.  When the nepeš “rejoices in the Lord” at Ps. 35.9, the poet, in typical parallel fashion, then has the bones speak for her in v. 10.  Jeremiah describes his passion for Yahweh as a “fire” in his heart (lēb) that is also in his bones (20.9), just as we say that a great orator has “fire in his belly.” The bones of the exiles will form the foundation of those who will be restored by Yahweh’s rûaḥ in Ezekiel 37, and later Pharisaic Judaism speaks of the bones of the deceased “sprouting” with new life in their resurrected bodies.[7]  The bones of the prophet Elisha have special healing powers (2 Kgs. 13.21).  Therefore, the cult of relic bones does indeed have scriptural basis, and we also note the obvious parallel to the worship of the Buddha’s bones.

With all these body parts functioning in various ways, it is hard to find, as Robert A. Di Vito suggests, “a true ‘center’ for the [Hebrew] person . . . a ‘consciousness’ or a self-contained ‘self.’”[8] Di Vito also observes that the Hebrew word for face (pānîm) is plural, reflecting all the ways in which a person appears in multifarious social interactions.  The plurality of faces in Chinese culture is similar, including the “loss of face” when a younger brother fails to defer to his elder brother, who would have a different “face” with respect to his father.  One may be tempted to say that the jīva is the center of the Buddhist self, but that would not be accurate because this term simply designates the functioning of all the skandhas together.

Both David Kalupahana and Peter Harvey demonstrate how much influence material form (rūpa) has on Buddhist personality, even at the highest stage of spiritual development.[9]  It is Zen Buddhists, however, who match the earthy Hebrew rhetoric about the human person. When Bodhidharma (d. 534 CE) prepared to depart from his body, he asked four of his disciples what they had learned from him.  As each of them answered they were offered a part of his body: his skin, his flesh, his bones, and his marrow.  The Zen monk Nangaku also compared the achievements of his six disciples to six parts of his body. Deliberately inverting the usual priority of mind over body, the Zen monk Dogen (1200-1253) declared that “The Buddha Way is therefore to be attained above all through the body.”[10]  Interestingly enough, the Hebrews rank the flesh, skin, bones, and sinews as the most essential parts of the body-soul.[11]  The great Buddhist dialectician Nagarjuna (2nd Century CE) appears to be the source of Bodhidharma’s body correlates, but it is clear that Nagarjuna meant them as metaphors.[12]  In contrast it seems clear that, although dead bones rejoicing is most likely a figure of speech, the Hebrews were convinced that we think, feel, and perceive through and with all parts of our bodies.

In Search of a Christian Identity
by Robert Hamilton

The essential points here are the “social disengagement” of the modern self, away from identifying solely with roles defined by the family group, and the development of a “personal unity” within the individual. Morally speaking, we are no longer empty vessels to be filled up by some god, or servant of god; we are now responsible for our own actions and decisions, in light of our own moral compass. I would like to mention Julian Jaynes’s seminal work, The Origin of Consciousness in the Breakdown of the Bicameral Mind, as a pertinent hypothesis for an attempt to understand the enormous distance between the modern sense of self and that of the ancient mind, with its largely absent subjective state.[13]

“The preposterous hypothesis we have come to in the previous chapter is that at one time human nature was split in two, an executive part called a god, and a follower part called a man.”[14]

This hypothesis sits very well with Di Vito’s description of the permeable personal identity of Old Testament characters, who are “taken over,” or possessed, by Yahweh.[15] The evidence of the Old Testament stories points in this direction, where we have patriarchal family leaders, like Abraham and Noah, going around making morally contentious decisions (in today’s terms) based on their internal dialogue with a god – Jehovah.[16] As Jaynes postulates later in his book, today we would call this behaviour schizophrenia. Di Vito, later in the article, confirms that:

“Of course, this relative disregard for autonomy in no way limits one’s responsibility for conduct–not even when Yhwh has given “statutes that were not good” in order to destroy Israel” (Ezek 20:25-26).[17]

Cognitive Perspectives on Early Christology
by Daniel McClellan

The insights of CSR [cognitive science of religion] also better inform our reconstruction of early Jewish concepts of agency, identity, and divinity. Almost twenty years ago, Robert A. Di Vito argued from an anthropological perspective that the “person” in the Hebrew Bible “is more radically decentered, ‘dividual,’ and undefined with respect to personal boundaries … [and] in sharp contrast to modernity, it is identified more closely with, and by, its social roles.”40 Personhood was divisible and permeable in the Hebrew Bible, and while there was diachronic and synchronic variation in certain details, the same is evident in the literature of Second Temple Judaism and early Christianity. This is most clear in the widespread understanding of the spirit (רוח) and the soul (נפש) – often used interchangeably – as the primary loci of a person’s agency or capacity to act.41 Both entities were usually considered primarily constitutive of a person’s identity, but also distinct from their physical body and capable of existence apart from it.42 The physical body could also be penetrated or overcome by external “spirits,” and such possession imposed the agency and capacities of the possessor.43 The God of Israel was largely patterned after this concept of personhood,44 and was similarly partible, with God’s glory (Hebrew: כבוד; Greek: δόξα), wisdom (חכמה/σοφία), spirit (רוח/πνεῦμα), word (דבר/λόγος), presence (שכינה), and name (שם/ὄνομα) operating as autonomous and sometimes personified loci of agency that could presence the deity and also possess persons (or cultic objects45) and/or endow them with special status or powers.46

Did Christianity lead to schizophrenia?
Psychosis, psychology and self reference

by Roland Littlewood

This new deity could be encountered anywhere—“Wherever two are gathered in my name” (Matthew 18.20)—for Christianity was universal and individual (“neither Jew nor Greek… bond nor free… male or female, for you are all one man in Christ Jesus” says St. Paul). And ultimate control rested with Him, Creator and Master of the whole universe, throughout the whole universe. No longer was there any point in threatening your recalcitrant (Egyptian) idol for not coming up with the goods (Cumont, 1911/1958, p. 93): as similarly in colonial Africa, at least according to the missionaries (Peel, 2000). If God was independent of social context and place, then so was the individual self, at least in its conversations with God (as Dilthey argues). Religious status was no longer signalled by external signs (circumcision), or social position (the higher stages of the Roman priesthood had been occupied by aspiring politicians in the course of their career: “The internal status of the officiating person was a matter of… indifference to the celestial spirits” [Cumont, 1911/1958, p. 91]). “Now it is not our flesh that we must circumcise, we must crucify ourselves, exterminate and mortify our unreasonable desires” (John Chrysostom, 1979), “circumcise your heart” says “St. Barnabas” (2003, p. 45), for religion became internal and private. Like the African or Roman self (Mauss, 1938/1979), the Jewish self had been embedded in a functioning society, individually decentred and socially contextualised (Di Vito, 1999); it survived death only through its bodily descendants: “But Abram cried, what can you give me, seeing I shall die childless” (Genesis 15.2). To die without issue was extinction in both religious systems (Madigan & Levenson, 2008). But now an enduring part of the self, or an associate of it—the soul—had a connection to what might be called body and consciousness, yet had some sort of ill-defined association with them. In its earthly body it was in potential communication with God. Like God it was immaterial and immortal. (The associated resurrection of the physical body, though an essential part of Christian dogma, has played an increasingly less important part in the Church [cf. Stroumsa, 1990].) For 19th-century pagan Yoruba who already accepted some idea of a hereafter, each village had its separate afterlife, which had to be fused by the missionaries into a more universal schema (Peel, 2000, p. 175). If the conversation with God was one to one, then each self-aware individual had then to make up their own mind on adherence—and thus the detached observer became the surveyor of the whole world (Dumont, 1985). Sacral and secular became distinct (separate “functions” as Dumont calls them), further presaging a split between psychological faculties. The idea of the self/soul as an autonomous unit facing God became the basis, via the stages Mauss (1938/1979) briefly outlines, for a political philosophy of individualism (MacFarlane, 1978). The missionaries in Africa constantly attempted to reach the inside of their converts, but bemoaned that the Yoruba did not seem to have any inward core to the self (Peel, 2000, Chapter 9).

Embodying the Gospel:
Two Exemplary Practices

by Joel B. Green
pp. 12-16

Philosopher Charles Taylor’s magisterial account of the development of personal identity in the West provides a useful point of entry into this discussion. He shows how modern assumptions about personhood in the West developed from Augustine in the fourth and fifth centuries, through major European philosophers in the seventeenth and eighteenth centuries (e.g., Descartes, Locke, Kant), and into the present. The result is a modern human “self defined by the powers of disengaged reason—with its associated ideals of self-responsible freedom and dignity—of self-exploration, and of personal commitment.”2 These emphases provide a launching point for our modern conception of “inwardness,” that is, the widespread view that people have an inner self, which is the authentic self.

Given this baseline understanding of the human person, it would seem only natural to understand conversion in terms of interiority, and this is precisely what William James has done for the modern West. In his enormously influential 1901–02 Gifford Lectures at Edinburgh University, published in 1902 under the title The Varieties of Religious Experience, James identifies salvation as the resolution of a person’s inner, subjective crisis. Salvation for James is thus an individual, instantaneous, feeling-based, interior experience.3 Following James, A.D. Nock’s celebrated study of conversion in antiquity reached a similar conclusion: “By conversion we mean the reorientation of the soul of an individual, his [sic] deliberate turning from indifference or from an earlier form of piety to another, a turning which involves a consciousness that a great change is involved, that the old was wrong and the new is right.” Nock goes on to write of “a passion of willingness and acquiescence, which removes the feeling of anxiety, a sense of perceiving truths not known before, a sense of clean and beautiful newness within and without and an ecstasy of happiness . . .”4 In short, what is needed is a “change of heart.”

However pervasive they may be in the contemporary West, whether inside or outside the church, such assumptions actually sit uneasily with Old and New Testament portraits of humanity. Let me mention two studies that press our thinking in an alternative direction. Writing with reference to Old Testament anthropology, Robert Di Vito finds that the human “(1) is deeply embedded, or engaged, in its social identity, (2) is comparatively decentered and undefined with respect to personal boundaries, (3) is relatively transparent, socialized, and embodied (in other words, is altogether lacking in a sense of ‘inner depths’), and (4) is ‘authentic’ precisely in its heteronomy, in its obedience to another and dependence upon another.”5 Two aspects of Di Vito’s summary are of special interest: first, his emphasis on a more communitarian experience of personhood; and second, his emphasis on embodiment. Were we to take seriously what these assumptions might mean for embracing and living out the Gospel, we might reflect more on what it means to be saved within the community of God’s people and, indeed, what it means to be saved in relation to the whole of God’s creation. We might also reflect less on conversion as decision-making and more on conversion as pattern-of-life.

The second study, by Klaus Berger, concerns the New Testament. Here, Berger investigates the New Testament’s “historical psychology,” repeatedly highlighting both the ease with which we read New Testament texts against modern understandings of humanity and the problems resident in our doing so.6 His list of troublesome assumptions—troublesome because they are more at home in the contemporary West than in the ancient Mediterranean world—includes these dualities, sometimes even dichotomies: doing and being, identity and behavior, internal and external. A more integrated understanding of people, the sort we find in the New Testament world, he insists, would emphasize life patterns that hold together believing, thinking, feeling, and behaving, and allow for a clear understanding that human behavior in the world is both simply and profoundly embodied belief. Perspectives on human transformation that take their point of departure from this “psychology” would emphasize humans in relationship with other humans, the bodily nature of human allegiances and commitments, and the fully integrated character of human faith and life. […]

Given how John’s message is framed in an agricultural context, it is not a surprise that his point turns on an organic metaphor rather than a mechanical one. The resulting frame has no room for prioritizing inner (e.g., “mind” or “heart”) over outer (e.g., “body” or “behavior”), nor of fitting disparate pieces together to manufacture a “product,” nor of correlating status and activity as cause and effect. Organic metaphors neither depend on nor provoke images of hierarchical systems but invite images of integration, interrelation, and interdependence. Consistent with this organic metaphor, practices do not occupy a space outside the system of change, but are themselves part and parcel of the system. In short, John’s agricultural metaphor inseparably binds “is” and “does” together.

Resurrection and the Restoration of Israel:
The Ultimate Victory of the God of Life
by Jon Douglas Levenson
pp. 108-114

In our second chapter, we discussed one of the prime warrants often adduced either for the rejection of resurrection (by better-informed individuals) or for its alleged absence, and the alleged absence of any notion of the afterlife, in Judaism (by less informed individuals). That warrant is the finality of death in the Hebrew Bible, or at least in most of it, and certainly in what is from a Jewish point of view its most important subsection, the first five books. For no resurrections take place therein, and predictions of a general resurrection at the end of time can be found in the written Torah only through ingenious derash of the sort that the rabbinic tradition itself does not univocally endorse or replicate in its translations. In the same chapter, we also identified one difficulty with this notion that the Pentateuch exhibits no possibility of an afterlife but supports, instead, the absolute finality of death, and to this point we must now return. I am speaking of the difficulty of separating individuals from their families (including the extended family that is the nation). If, in fact, individuals are fundamentally and inextricably embedded within their families, then their own deaths, however terrifying in prospect, will lack the finality that death carries with it in a culture with a more individualistic, atomistic understanding of the self. What I am saying here is something more radical than the truism that in the Hebrew Bible, parents draw consolation from the thought that their descendants will survive them (e.g., Gen 48:11), just as, conversely, the parents are plunged into a paralyzing grief at the thought that their progeny have perished (e.g., Gen 37:33–35; Jer 31:15). This is, of course, the case, and probably more so in the ancient world, where children were the support of one’s old age, than in modern societies, where the state and the pension fund fill many roles previously concentrated in the family. That to which I am pointing, rather, is that the self of an individual in ancient Israel was entwined with the self of his or her family in ways that are foreign to the modern West, and became foreign to some degree already long ago.

Let us take as an example the passage in which Jacob is granted ‘‘the blessing of Abraham,’’ his grandfather, according to the prayer of Isaac, his father, to ‘‘possess the land where you are sojourning, which God assigned to Abraham’’ (Gen 28:1–4). The blessing on Abraham, as we have seen, can be altogether and satisfactorily fulfilled in Abraham’s descendants. Thus, too, can Ezekiel envision the appointment of ‘‘a single shepherd over [Israel] to tend them—My servant David,’’ who had passed away many generations before (Ezek 34:23). Can we, without derash, see in this a prediction that David, king of Judah and Israel, will be raised from the dead? To do so is to move outside the language of the text and the culture of Israel at the time of Ezekiel, which does not speak of the resurrections of individuals at all. But to say, as the School of Rabbi Ishmael said about ‘‘to Aaron’’ in Num 18:28,1 that Ezekiel means only one who is ‘‘like David’’—a humble shepherd boy who comes to triumph in battle and rises to royal estate, vindicating his nation and making it secure and just—is not quite the whole truth, either. For biblical Hebrew is quite capable of saying that one person is ‘‘like’’ another or descends from another’s lineage (e.g., Deut 18:15; 2 Kgs 22:2; Isa 11:1) without implying identity of some sort. The more likely interpretation, rather, is that Ezekiel here predicts the miraculous appearance of a royal figure who is not only like David but also of David, a person of Davidic lineage, that is, who functions as David redivivus. This is not the resurrection of a dead man, to be sure, but neither is it the appearance of some unrelated person who only acts like David, or of a descendant who is ‘‘a chip off the old block.’’ David is, in one obvious sense, dead and buried (1 Kgs 2:10), and his death is final and irreversible. In another sense, harder for us to grasp, however, his identity survives him and can be manifested again in a descendant who acts as he did (or, to be more precise, as Ezekiel thought he acted) and in whom the promise to David is at long last fulfilled. For David’s identity was not restricted to the one man of that name but can reappear to a large measure in kin who share it.

This is obviously not reincarnation. For that term implies that the ancient Israelites believed in something like the later Jewish and Christian ‘‘soul’’ or like the notion (such as one finds in some religions) of a disembodied consciousness that can reappear in another person after its last incarnation has died. In the Hebrew Bible, however, there is nothing of the kind. The best approximation is the nepes, the part of the person that manifests his or her life force or vitality most directly. James Barr defines the nepes as ‘‘a superior controlling centre which accompanies, exposes and directs the existence of that totality [of the personality] and one which, especially, provides the life to the whole.’’2 Although the nepes does exhibit a special relationship to the life of the whole person, it is doubtful that it constitutes ‘‘a superior controlling center.’’ As Robert Di Vito points out, ‘‘in the OT, human faculties and bodily organs enjoy a measure of independence that is simply difficult to grasp today without dismissing it as merely poetic speech or, even worse, ‘primitive thinking.’’’ Thus, the eye talks or thinks (Job 24:15) and even mocks (Prov 30:17), the ear commends or pronounces blessed (Job 29:11), blood cries out (Gen 4:10), the nepes (perhaps in the sense of gullet or appetite) labors (Prov 16:26) or pines (Ps 84:3), kidneys rejoice and lips speak (Prov 23:16), hands shed blood (Deut 21:7), the heart and flesh sing (Ps 84:3), all the psalmist’s bones say, ‘‘Lord, who is like you?’’ (Ps 35:10), tongue and lips lie or speak the truth (Prov 12:19, 22), hearts are faithful (Neh 9:8) or wayward (Jer 5:23), and so forth.3 The point is not that the individual is simply an agglomeration of distinct parts. It is, rather, that the nepes is one part of the self among many and does not control the entirety, as the old translation ‘‘soul’’ might lead us to expect.4 A similar point might be made about the modern usage of the term person.

[4. It is less clear to me that this is also Di Vito’s point. He writes, for example: ‘‘The biblical character presents itself to us more as parts than as a whole . . . accordingly, in the OT one searches in vain for anything really corresponding to the Platonic localization of desire and emotion in a central ‘locale,’ like the ‘soul’ under the hegemony of reason, a unified and self-contained center from which the individual’s activities might flow, a ‘self’ that might finally assert its control’’ (‘‘Old Testament Anthropology,’’ 228).]

All of the organs listed above, Di Vito points out, are ‘‘susceptible to moral judgment and evaluation.’’5 Not only that, parts of the body besides the nepes can actually experience emotional states. As Aubrey R. Johnson notes, ‘‘Despondency, for example, is felt to have a shriveling effect upon the bones . . . just as they are said to decay or become soft with fear or distress, and so may be referred to as being themselves troubled or afraid’’ (e.g., Ezek 37:11; Hab 3:16; Jer 23:9; Ps 31:11). In other words, ‘‘the various members and secretions of the body . . . can all be thought of as revealing psychical properties,’’6 and this is another way of saying that the nepes does not really correspond to Barr’s ‘‘superior controlling centre’’ at all. For many of the functions here attributed to the nepes are actually distributed across a number of parts of the body. The heart, too, often functions as the ‘‘controlling centre,’’ determining, for example, whether Israel will follow God’s laws or not (e.g., Ezek 11:19). The nepes in the sense of the life force of the body is sometimes identified with the blood, rather than with an insensible spiritual essence of the sort that words like ‘‘soul’’ or ‘‘person’’ imply. It is in light of this that we can best understand the Pentateuchal laws that forbid the eating of blood on the grounds that it is the equivalent of eating life itself, eating, that is, an animal that is not altogether dead (Lev 17:11, 14; Deut 12:23; cf. Gen 9:4–5). If the nepes ‘‘provides the life to the whole,’’7 so does the blood, with which laws like these, in fact, equate it. The bones, which, as we have just noted, can experience emotional states, function likewise on occasion. When a dead man is hurriedly thrown into Elisha’s grave in 2 Kgs 13:21, it is contact with the wonder-working prophet’s bones that brings about his resurrection. And when the primal man at long last finds his soul mate, he exclaims not that she (unlike the animals who have just been presented to him) shares a nepes with him but rather that she ‘‘is bone of my bones / And flesh of my flesh’’ (Gen 2:23).

In sum, even if the nepes does occasionally function as a ‘‘controlling centre’’ or a provider of life, it does not do so uniquely. The ancient Israelite self is more dynamic and internally complex than such a formulation allows. It should also be noticed that unlike the ‘‘soul’’ in most Western philosophy, the biblical nepes can die. When the non-Israelite prophet Balaam expresses his wish to ‘‘die the death of the upright,’’ it is his nepes that he hopes will share their fate (Num 23:10), and the same applies to Samson when he voices his desire to die with the Philistines whose temple he then topples upon all (Judg 16:30). Indeed, ‘‘to kill the nepes’’ functions as a term for homicide in biblical Hebrew, in which context, as elsewhere, it indeed has a meaning like that of the English ‘‘person’’ (e.g., Num 31:19; Ezek 13:19).8 As Hans Walter Wolff puts it, nepes ‘‘is never given the meaning of an indestructible core of being, in contradistinction to the physical life . . . capable of living when cut off from that life.’’9 Like heart, blood, and bones, the nepes can cease to function. It is not quite correct to say, however, that this is because it is ‘‘physical’’ rather than ‘‘spiritual,’’ for the other parts of the self that we consider physical—heart, blood, bones, or whatever—are ‘‘spiritual’’ as well—registering emotions, reacting to situations, prompting behavior, expressing ideas, each in its own way. A more accurate summary statement would be Johnson’s: ‘‘The Israelite conception of man [is] as a psycho-physical organism.’’10 ‘‘For some time at least [after a person’s death] he may live on as an individual (apart from his possible survival within the social unit),’’ observes Johnson, ‘‘in such scattered elements of his personality as the bones, the blood and the name.’’11 It would seem to follow that if ever he is to return ‘‘as a psycho-physical organism,’’ it will have to be not through reincarnation of his soul in some new person but through the resurrection of the body, with all its parts reassembled and revitalized. For in the understanding of the Hebrew Bible, a human being is not a spirit, soul, or consciousness that happens to inhabit this body or that—or none at all. Rather, the unity of body and soul (to phrase the point in the unhappy dualistic vocabulary that is still quite removed from the way the Hebrew Bible thought about such things) is basic to the person. It thus follows that however distant the resurrection of the dead may be from the understanding of death and life in ancient Israel, the concept of immortality in the sense of a soul that survives death is even more distant. And whatever the biblical problems with the doctrine of resurrection—and they are formidable—the biblical problems with the immortality that modern Jewish prayer books prefer (as we saw in our first chapter) are even greater.

Di Vito points, however, to an aspect of the construction of the self in ancient Israel that does have some affinities with immortality. This is the thorough embeddedness of that individual within the family and the corollary difficulty in the context of this culture of isolating a self apart from the kin group. Drawing upon Charles Taylor’s highly suggestive study The Sources of the Self,12 Di Vito points out that ‘‘salient features of modern identity, such as its pronounced individualism, are grounded in modernity’s location of the self in the ‘inner depths’ of one’s interiority rather than in one’s social role or public relations.’’13 Cautioning against the naïve assumption that ancient Israel adhered to the same conception of the self, Di Vito develops four points of contrast between modern Western and ancient Israelite thinking on this point. In the Hebrew Bible,

the subject (1) is deeply embedded, or engaged, in its social identity, (2) is comparatively decentered and undefined with respect to personal boundaries, (3) is relatively transparent, socialized, and embodied (in other words, is altogether lacking in a sense of ‘‘inner depths’’), and (4) is ‘‘authentic’’ precisely in its heteronomy, in its obedience to another and dependence upon another.14

Although Di Vito’s formulation is overstated and too simple—is every biblical figure, even David, presented as ‘‘altogether lacking in a sense of ‘inner depths’’’?—his first and last points are highly instructive and suggest that the familial and social understanding of ‘‘life’’ in the Hebrew Bible is congruent with larger issues in ancient Israelite culture. ‘‘Life’’ and ‘‘death’’ mean different things in a culture like ours, in which the subject is not so ‘‘deeply embedded . . . in its social identity’’ and in which authenticity tends to be associated with cultivation of individual traits at the expense of conformity, and with the attainment of personal autonomy and independence.

The contrast between the biblical and the modern Western constructions of personal identity is glaring when one considers the structure of what Di Vito calls ‘‘the patriarchal family.’’ This ‘‘system,’’ he tells us, ‘‘with strict subordination of individual goals to those of the extended lineal group, is designed to ensure the continuity and survival of the family.’’15 In this, of course, such a system stands in marked contrast to liberal political theory that has developed over the past three and a half centuries, which, in fact, virtually assures that people committed to that theory above all else will find the Israelite system oppressive. For the liberal political theory is one that has increasingly envisioned a system in which society is composed of only two entities, the state and individual citizens, all of whom have equal rights quite apart from their familial identities and roles. Whether or not one affirms such an identity or plays the role that comes with it (or any role different from that of other citizens) is thus relegated to the domain of private choice. Individuals are guaranteed the freedom to renounce the goals of ‘‘the extended lineal group’’ and ignore ‘‘the continuity and survival of the family,’’ or, increasingly, to redefine ‘‘family’’ according to their own private preferences. In this particular modern type of society, individuals may draw consolation from the thought that their group (however defined) will survive their own deaths. As we have had occasion to remark, there is no reason to doubt that ancient Israelites did so, too. But in a society like ancient Israel, in which ‘‘the subject . . . is deeply embedded, or engaged, in its social identity,’’ ‘‘with strict subordination of individual goals to those of the extended lineal group,’’ the loss of the subject’s own life and the survival of the familial group cannot but have a very different resonance from the one most familiar to us. For even though the subject’s death is irreversible—his or her nepes having died just like the rest of his or her body/soul—his or her fulfillment may yet occur, for identity survives death. God can keep his promise to Abraham or his promise to Israel associated with the gift of David even after Abraham or David, as an individual subject, has died. Indeed, in light of Di Vito’s point that ‘‘the subject . . . is comparatively decentered and undefined with respect to personal boundaries,’’ the very distinction between Abraham and the nation whose covenant came through him (Genesis 15; 17), or between David and the Judean dynasty whom the Lord has pledged never to abandon (2 Sam 7:8–16; Ps 89:20–38), is too facile.

Our examination of personal identity in the earlier literature of the Hebrew Bible thus suggests that the conventional view is too simple: death was not final and irreversible after all, at least not in the way in which we are inclined to think of these matters. This is not, however, because individuals were believed to possess an indestructible essence that survived their bodies. On the one hand, the body itself was thought to be animated in ways foreign to modern materialistic and biologistic thinking, but, on the other, even its most spiritual part, its nepeš (life force) or its nĕšāmâ (breath), was mortal. Rather, the boundary between individual subjects and the familial/ethnic/national group in which they dwelt, to which they were subordinate, and on which they depended was so fluid as to rob death of some of the horror it has in more individualistic cultures, influenced by some version of social atomism. In more theological texts, one sees this in the notion that subjects can die a good death, ‘‘old and contented . . . and gathered to [their] kin,’’ like Abraham, who lived to see a partial—though only a partial—fulfillment of God’s promise of land, progeny, and blessing upon him, or like Job, also ‘‘old and contented’’ after his adversity came to an end and his fortunes—including progeny—were restored (Gen 25:8; Job 42:17). If either of these patriarchal figures still felt terror in the face of his death, even after his afflictions had been reversed, the Bible gives us no hint of it.16 Death in situations like these is not a punishment, a cause for complaint against God, or the provocation of an existential crisis. But neither is it death as later cultures, including our own, conceive it.

Given this embeddedness in family, there is in Israelite culture, however, a threat that is the functional equivalent to death as we think of it. This is the absence or loss of descendants.

The Master and His Emissary
by Iain McGilchrist
pp. 263-264

Whoever it was that composed or wrote them [the Homeric epics], they are notable for being the earliest works of Western civilisation that exemplify a number of characteristics that are of interest to us. For in their most notable qualities – their ability to sustain a unified theme and produce a single, whole coherent narrative over a considerable length, in their degree of empathy, and insight into character, and in their strong sense of noble values (Scheler’s Lebenswerte and above) – they suggest a more highly evolved right hemisphere.

That might make one think of the importance to the right hemisphere of the human face. Yet, despite this, there are in Homeric epic few descriptions of faces. There is no doubt about the reality of the emotions experienced by the figures caught up in the drama of the Iliad or the Odyssey: their feelings of pride, hate, envy, anger, shame, pity and love are the stuff of which the drama is made. But for the most part these emotions are conveyed as relating to the body and to bodily gesture, rather than the face – though there are moments, such as at the reunion of Penelope and Odysseus at the end of the Odyssey, when we seem to see the faces of the characters, Penelope’s eyes full of tears, those of Odysseus betraying the ‘ache of longing rising from his breast’. The lack of emphasis on the face might seem puzzling at a time of increasing empathic engagement, but I think there is a reason for this.

In Homer, as I mentioned in Part I, there was no word for the body as such, nor for the soul or the mind, for that matter, in the living person. The sōma was what was left on the battlefield, and the psuchē was what took flight from the lips of the dying warrior. In the living person, when Homer wants to speak of someone’s mind or thoughts, he refers to what is effectively a physical organ – Achilles, for example, ‘consulting his thumos’. Although the thumos is a source of vital energy within that leads us to certain actions, the thumos has fleshly characteristics such as requiring food and drink, and a bodily situation, though this varies. According to Michael Clarke’s Flesh and Spirit in the Songs of Homer, Homeric man does not have a body or a mind: ‘rather this thought and consciousness are as inseparable a part of his bodily life as are movement and metabolism’. 15 The body is indistinguishable from the whole person. 16 ‘Thinking, emotion, awareness, reflection, will’ are undertaken in the breast, not the head: ‘the ongoing process of thought is conceived of as if it were precisely identified with the palpable inhalation of the breath, and the half-imagined mingling of breath with blood and bodily fluids in the soft, warm, flowing substances that make up what is behind the chest wall.’ 17 He stresses the importance of flow, of melting and of coagulation. The common ground of meaning is not in a particular static thing but in the ongoing process of living, which ‘can be seen and encapsulated in different contexts by a length of time or an oozing liquid’. These are all images of transition between different states of flux, different degrees of permanence, and allowing the possibility of ambiguity: ‘The relationship between the bodily and mental identity of these entities is subtle and elusive.’ 18 Here there is no necessity for the question ‘is this mind or is it body?’ to have a definitive answer. Such forbearance, however, had become impossible by the time of Plato, and remains, according to current trends in neurophilosophy, impossible today.

Words suggestive of the mind, the thumos ‘family’, for example, range fluidly and continuously between actor and activity, between the entity that thinks and the thoughts or emotions that are its products. 19 Here Clarke is speaking of terms such as is, aiōn, menos. ‘The life of Homeric man is defined in terms of processes more precisely than of things.’ 20 Menos, for example, refers to force or strength, and can also mean semen, despite being often located in the chest. But it also refers to ‘the force of violent self-propelled motion in something non-human’, perhaps like Scheler’s Drang: again more an activity than a thing. 21

This profound embodiment of thought and emotion, this emphasis on processes that are always in flux, rather than on single, static entities, this refusal of the ‘either/or’ distinction between mind and body, all perhaps again suggest a right-hemisphere-dependent version of the world. But what is equally obvious to the modern mind is the relative closeness of the point of view. And that, I believe, helps to explain why there is little description of the face: to attend to the face requires a degree of detached observation. That there is here a work of art at all, a capacity to frame human existence in this way, suggests, it is true, a degree of distance, as well as a degree of co-operation of the hemispheres in achieving it. But it is the gradual evolution of greater distance in post-Homeric Greek culture that causes the efflorescence, the ‘unpacking’, of both right and left hemisphere capacities in the service of both art and science.

With that distance comes the term closest to the modern, more disembodied, idea of mind, nous (or noos), which is rare in Homer. When nous does occur in Homer, it remains distinct, almost always intellectual, not part of the body in any straightforward sense: according to Clarke it ‘may be virtually identified with a plan or stratagem’. 22 In conformation to the processes of the left hemisphere, it is like the flight of an arrow, directional. 23

By the late fifth and fourth centuries, separate ‘concepts of body and soul were firmly fixed in Greek culture’. 24 In Plato, and thence for the next two thousand years, the soul is a prisoner in the body, as he describes it in the Phaedo, awaiting the liberation of death.

The Great Shift
by James L. Kugel
pp. 163-165

A related belief is attested in the story of Hannah (1 Sam 1). Hannah is, to her great distress, childless, and on one occasion she goes to the great temple at Shiloh to seek God’s help:

The priest Eli was sitting on a seat near the doorpost of the temple of the LORD. In the bitterness of her heart, she prayed to the LORD and wept. She made a vow and said: “O LORD of Hosts, if You take note of Your maidservant’s distress, and if You keep me in mind and do not neglect Your maidservant and grant Your maidservant a male offspring, I will give him to the LORD for all the days of his life; and no razor shall ever touch his head.” * Now as she was speaking her prayer before the LORD, Eli was watching her mouth. Hannah was praying in her heart [i.e., silently]; her lips were moving, but her voice could not be heard, so Eli thought she was drunk. Eli said to her: “How long are you going to keep up this drunkenness? Cut out the boozing!” But Hannah answered: “Oh no, sir, I am a woman of saddened spirit. I have drunk no wine or strong drink, but I have been pouring out my heart to the LORD. Don’t take your maidservant for an ill-behaved woman! I have been praying this long because of my great distress.” Eli answered her: “Then go in peace, and may the God of Israel grant you what you have asked of Him.” (1 Sam 1:9–17)

If Eli couldn’t hear her, how did Hannah ever expect God to hear her? But she did. Somehow, even though no sound was coming out of her mouth, she apparently believed that God would hear her vow and, she hoped, act accordingly. (Which He did; “at the turn of the year she bore a son,” 1 Sam 1:20.) This too seemed to defy the laws of physics, just as much as Jonah’s prayer from the belly of the fish, or any prayer uttered at some distance from God’s presumed locale, a temple or other sacred spot.

Many other things could be said about the Psalms, or about biblical prayers in general, but the foregoing three points have been chosen for what they imply for the overall theme of this book. We have already seen a great deal of evidence indicating that people in biblical times believed the mind to be semipermeable, capable of being infiltrated from the outside. This is attested not only in the biblical narratives examined earlier, but it is the very premise on which all of Israel’s prophetic corpus stands. The semipermeable mind is prominent in the Psalms as well; in a telling phrase, God is repeatedly said to penetrate people’s “kidneys and heart” (Pss 7:10, 26:2, 139:13; also Jer 11:20, 17:10, 20:12), entering these messy internal organs 28 where thoughts were believed to dwell and reading—as if from a book—all of people’s hidden ideas and intentions. God just enters and looks around:

You have examined my heart, visited [me] at night;
You have tested me and found no wickedness; my mouth has not transgressed. (Ps 17:3)
Examine me, O LORD, and test me; try my kidneys and my heart. (26:2)

[28. Robert North rightly explained references to a person’s “heart” alone (leb in biblical Hebrew) not as a precise reference to that particular organ, but as “a vaguely known or confused jumble of organs, somewhere in the area of the heart or stomach”: see North (1993), 596.]

Indeed God is so close that inside and outside are sometimes fused:

Let me bless the LORD who has given me counsel; my kidneys have been instructing me at night.
I keep the LORD before me at all times, just at my right hand, so I will not stumble. (Ps 16:7–8)

(Who’s giving this person advice, an external God or an internal organ?)

Such is God’s passage into a person’s semipermeable mind. But the flip side of all this is prayer, when a person’s words, devised on the inside, in the human mind, leave his or her lips in order to reach—somehow—God on the outside. As we have seen, those words were indeed believed to make their way to God; in fact, it was the cry of the victim that in some sense made the world work, causing God to notice and take up the cause of justice and right. Now, the God who did so was also, we have seen, a mighty King, who presumably ranged over all of heaven and earth:

He mounted on a cherub and flew off, gliding on the wings of the wind. (Ps 18:11)

He makes the clouds His chariot, He goes about on the wings of the wind. (Ps 104:3)

Yet somehow, no matter where His travels might take Him, God is also right there, just on the other side of the curtain that separates ordinary from extraordinary reality, allowing Him to hear the sometimes geographically distant cry of the victim or even to hear an inaudible, silent prayer like Hannah’s. The doctrine of divine omnipresence was still centuries away and was in fact implicitly denied in many biblical texts, 29 yet something akin to omnipresence seems to be implied in God’s ability to hear and answer prayers uttered from anywhere, no matter where He is. In fact, this seems implied as well in the impatient, recurrent question seen above, “How long, O LORD?”; the psalmist seems to be saying, “I know You’ve heard me, so when will You answer?”

Perhaps the most striking thing suggested by all this is the extent to which the Psalms’ depiction of God seems to conform to the general contours of the great Outside as described in an earlier chapter. God is huge and powerful, but also all-enfolding and, hence, just a whisper away. Somehow, people in biblical times seem to have just assumed that God, on the other side of that curtain, could hear their prayers, no matter where they were. All this again suggests a sense of self quite different from our own—a self that could not only be permeated by a great, external God, but whose thoughts and prayers could float outward and reach a God who was somehow never far, His domain beginning precisely where the humans’ left off.

One might thus say that, in this and in other ways, the psalmists’ underlying assumptions constitute a kind of biblical translation of a basic way of perceiving that had started many, many millennia earlier, a rephrasing of that fundamental reality in the particular terms of the religion of Israel. That other, primeval sense of reality and this later, more specific version of it found in these psalms present the same basic outline, which is ultimately a way of fitting into the world: the little human (more specifically in the Psalms, the little supplicant) faced with a huge enfolding Outside (in the Psalms, the mighty King) who overshadows everything and has all the power: sometimes kind and sometimes cruel (in the Psalms, sometimes heeding one’s request, but at other times oddly inattentive or sluggish), the Outside is so close as to move in and out of the little human (in the Psalms as elsewhere, penetrating a person’s insides, but also, able to pick up the supplicant’s request no matter where or how uttered). 30

pp. 205-207

The biblical “soul” was not originally thought to be immortal; in fact, the whole idea that human beings have some sort of sacred or holy entity inside them did not exist in early biblical times. But the soul as we conceive of it did eventually come into existence, and how this transformation came about is an important part of the history that we are tracing.

The biblical book of Proverbs is one of the least favorites of ordinary readers. To put the matter bluntly, Proverbs can be pretty monotonous: verse after verse tells you how much better the “righteous” are than the “wicked”: that the righteous tread the strait and narrow, control their appetites, avoid the company of loose women, save their money for a rainy day, and so forth, while the “wicked” always do quite the opposite. In spite of the way the book hammers away at these basic themes, a careful look at specific verses sometimes reveals something quite striking. 1 Here, for example, is what one verse has to say about the overall subject of the present study:

A person’s soul is the lamp of the LORD, who searches out all the innermost chambers. (Prov 20:27)

At first glance, this looks like the old theme of the semipermeable mind, whose innermost chambers are accessible to an inquisitive God. But in this verse, God does not just enter as we have seen Him do so often in previous chapters, when He appeared (apparently in some kind of waking dream) to Abraham or Moses, or put His words in the mouth of Amos or Jeremiah, or in general was held to “inspect the kidneys and heart” (that is, the innermost thoughts) of people. Here, suddenly, God seems to have an ally on the inside: the person’s own soul.

This point was put forward in rather pungent form by an ancient Jewish commentator, Rabbi Aḥa (fourth century CE). He cited this verse to suggest that the human soul is actually a kind of secret agent, a mole planted by God inside all human beings. The soul’s job is to report to God (who is apparently at some remove) on everything that a person does or thinks:

“A person’s soul is the lamp of the LORD, who searches out all the innermost chambers”: Just as kings have their secret agents * who report to the king on each and every thing, so does the Holy One have secret agents who report on everything that a person does in secret . . . The matter may be compared to a man who married the daughter of a king. The man gets up early each morning to greet the king, and the king says, “You did such-and-such a thing in your house [yesterday], then you got angry and you beat your slave . . .” and so on for each and every thing that occurred. The man leaves and says to the people of the palace, “Which of you told the king that I did such-and-so? How does he know?” They reply to him, “Don’t be foolish! You’re married to his daughter and you want to know how he finds out? His own daughter tells him!” So likewise, a person can do whatever he wants, but his soul reports everything back to God. 2

The soul, in other words, is like God’s own “daughter”: she dwells inside a human body, but she reports regularly to her divine “father.” Or, to put this in somewhat more schematic terms: God, who is on the outside, has something that is related or connected to Him on the inside, namely, “a person’s soul.” But wasn’t it always that way?

Before getting to an answer, it will be worthwhile to review in brief something basic that was seen in the preceding chapters. Over a period of centuries, the basic model of God’s interaction with human beings came to be reconceived. After a time, He no longer stepped across the curtain separating ordinary from extraordinary reality. Now He was not seen at all—at first because any sort of visual sighting was held to be lethal, and later because it was difficult to conceive of. God’s voice was still heard, but He Himself was an increasingly immense being, filling the heavens; and then finally (moving ahead to post-biblical times), He was just axiomatically everywhere all at once. This of course clashed with the old idea of the sanctuary (a notion amply demonstrated in ancient Mesopotamian religion as well), according to which wherever else He was, God was physically present in his earthly “house,” that is, His temple. But this ancient notion as well came to be reconfigured in Israel; perched like a divine hologram above the outstretched wings of the cherubim in the Holy of Holies, God was virtually bodiless, issuing orders (like “Let there be light”) that were mysteriously carried out. 3

If conceiving of such a God’s being was difficult, His continued ability to penetrate the minds of humans ought to have been, if anything, somewhat easier to account for. He was incorporeal and omnipresent; 4 what could stand in the way of His penetrating a person’s mind, or being there already? Yet precisely for this reason, Proverbs 20:27 is interesting. It suggests that God does not manage this search unaided: there is something inside the human being that plays an active role in this process, the person’s own self or soul.

p. 390

It is striking that the authors of this study went on specifically to single out the very different sense of self prevailing in the three locales as responsible for the different ways in which voice hearing was treated: “Outside Western culture people are more likely to imagine [a person’s] mind and self as interwoven with others. These are, of course, social expectations, or cultural ‘invitations’—ways in which other people expect people like themselves to behave. Actual people do not always follow social norms. Nonetheless, the more ‘independent’ emphasis of what we typically call the ‘West’ and the more interdependent emphasis of other societies has been demonstrated ethnographically and experimentally many times in many places—among them India and Africa . . .” The passage continues: “For instance, the anthropologist McKim Marriott wanted to be so clear about how much Hindus conceive themselves to be made through relationships, compared with Westerners, that he called the Hindu person a ‘dividual’. His observations have been supported by other anthropologists of South Asia and certainly in south India, and his term ‘dividual’ was picked up to describe other forms of non-Western personhood. The psychologist Glenn Adams has shown experimentally that Ghanaians understand themselves as intrinsically connected through relationships. The African philosopher John Mbiti remarks: ‘only in terms of other people does the [African] individual become conscious of his own being.’” Further, see Markus and Mullally (1997); Nisbett (2004); Marriott (1976); Miller (2007); Trawick (1992); Strathern (1988); Ma and Schoeneman (1997); Mbiti (1969).

The “Other” Psychology of Julian Jaynes
by Brian J. McVeigh
p. 74

The Heart is the Ruler of the Body

We can begin with the word xin1, or heart, though given its broader denotations related to both emotions and thought, a better translation is “heart-mind” (Yu 2003). Xin1 is a pictographic representation of a physical heart, and as we will see below, it forms the most primary and elemental building block for Chinese linguo-concepts having to do with the psychological. The xin1 oversaw the activities of an individual’s psychophysiological existence and was regarded as the ruler of the body — indeed, the person — in the same way a king ruled his people. If individuals cultivate and control their hearts, then the family, state, and world could be properly governed (Yu 2007, 2009b).

Psycho-Physio-Spiritual Aspects of the Person

Under the control of the heart were the wu3shen2 or “five spirits” (shen2, hun2, po4, yi4, zhi4), which dwelt respectively in the heart, liver, lungs, spleen, and kidneys. The five shen2 were implicated in the operations of thinking, perception, and bodily systems and substances. A phonosemantic compound, shen2 has been variously translated as mind, spirit, supernatural being, consciousness, vitality, expression, soul, energy, god, or numen/numinous. The left side element of this logograph means manifest, show, demonstrate; we can speculate that whatever was manifested came from a supernatural source; it may have meant “ancestral spirit” (Keightley 1978: 17). The right side provides sound but also the additional meaning of “to state” or “report to a superior”; again we can speculate that it meant communing with a supernatural superior.

Introspective Illusion

Writing on split-brain research, Susan Blackmore observed that “In this way, the verbal left brain covered up its ignorance by confabulating.” This relates to the theory of introspective illusion (see also change blindness, choice blindness, and the bias blind spot). In both cases, split-brain confabulation and everyday introspection, the conscious mind turns to confabulation to explain what it has no access to and so doesn’t understand.

This is how we maintain a sense of being in control. Our egoic minds have an immense talent for rationalization, and it can happen instantly, with total confidence in the reason(s) given. That indicates that consciousness is a lot less conscious than it seems… or rather, that consciousness isn’t what we think it is.

Our theory of mind, as such, is highly theoretical in the speculative sense; that is to say, it isn’t particularly reliable in most cases. What matters first and foremost is that the story told is compelling, both to ourselves and to others (self-justification, in its role within consciousness, is close to Jaynesian self-authorization). We are ruled by our need for meaning, even as our body-minds don’t require meaning to enact behaviors and take actions. We get through our lives just fine mostly on automatic.

According to Julian Jaynes’s theory of the bicameral mind, the purpose of consciousness is to create an internal stage upon which we play out narratives. As this interiorized and narratized space is itself confabulated, that is to say psychologically and socially constructed, it allows for all further confabulations of consciousness. We imaginatively bootstrap our individuality into existence, and that requires a lot of explaining.

* * *

Introspection illusion
Wikipedia

A 1977 paper by psychologists Richard Nisbett and Timothy D. Wilson challenged the directness and reliability of introspection, thereby becoming one of the most cited papers in the science of consciousness.[8][9] Nisbett and Wilson reported on experiments in which subjects verbally explained why they had a particular preference, or how they arrived at a particular idea. On the basis of these studies and existing attribution research, they concluded that reports on mental processes are confabulated. They wrote that subjects had “little or no introspective access to higher order cognitive processes”.[10] They distinguished between mental contents (such as feelings) and mental processes, arguing that while introspection gives us access to contents, processes remain hidden.[8]

Although some other experimental work followed from the Nisbett and Wilson paper, difficulties with testing the hypothesis of introspective access meant that research on the topic generally stagnated.[9] A ten-year-anniversary review of the paper raised several objections, questioning the idea of “process” they had used and arguing that unambiguous tests of introspective access are hard to achieve.[3]

Updating the theory in 2002, Wilson admitted that the 1977 claims had been too far-reaching.[10] He instead relied on the theory that the adaptive unconscious does much of the moment-to-moment work of perception and behaviour. When people are asked to report on their mental processes, they cannot access this unconscious activity.[7] However, rather than acknowledge their lack of insight, they confabulate a plausible explanation, and “seem” to be “unaware of their unawareness”.[11]

The idea that people can be mistaken about their inner functioning is one applied by eliminative materialists. These philosophers suggest that some concepts, including “belief” or “pain,” will turn out to be quite different from what is commonly expected as science advances.

The faulty guesses that people make to explain their thought processes have been called “causal theories”.[1] The causal theories provided after an action will often serve only to justify the person’s behaviour in order to relieve cognitive dissonance. That is, a person may not have noticed the real reasons for their behaviour, even when trying to provide explanations. The result is an explanation that mostly just makes them feel better. An example might be a man who discriminates against homosexuals because he is embarrassed that he himself is attracted to other men. He may not admit this to himself, instead claiming his prejudice is because he believes that homosexuality is unnatural.

2017 Report on Consciousness and Moral Patienthood
Open Philanthropy Project

Physicalism and functionalism are fairly widely held among consciousness researchers, but are often debated and far from universal.58 Illusionism seems to be an uncommon position.59 I don’t know how widespread or controversial “fuzziness” is.

I’m not sure what to make of the fact that illusionism seems to be endorsed by a small number of theorists, given that illusionism seems to me to be “the obvious default theory of consciousness,” as Daniel Dennett argues.60 In any case, the debates about the fundamental nature of consciousness are well-covered elsewhere,61 and I won’t repeat them here.

A quick note about “eliminativism”: the physical processes which instantiate consciousness could turn out to be so different from our naive guesses about their nature that, for pragmatic reasons, we might choose to stop using the concept of “consciousness,” just as we stopped using the concept of “phlogiston.” Or, we might find a collection of processes that are similar enough to those presumed by our naive concept of consciousness that we choose to preserve the concept of “consciousness” and simply revise our definition of it, as happened when we eventually decided to identify “life” with a particular set of low-level biological features (homeostasis, cellular organization, metabolism, reproduction, etc.) even though life turned out not to be explained by any élan vital or supernatural soul, as many people throughout history62 had assumed.63 But I consider this only a possibility, not an inevitability.

59. I’m not aware of surveys indicating how common illusionist approaches are, though Frankish (2016a) remarks that:

The topic of this special issue is the view that phenomenal consciousness (in the philosophers’ sense) is an illusion — a view I call illusionism. This view is not a new one: the first wave of identity theorists favoured it, and it currently has powerful and eloquent defenders, including Daniel Dennett, Nicholas Humphrey, Derk Pereboom, and Georges Rey. However, it is widely regarded as a marginal position, and there is no sustained interdisciplinary research programme devoted to developing, testing, and applying illusionist ideas. I think the time is ripe for such a programme. For a quarter of a century at least, the dominant physicalist approach to consciousness has been a realist one. Phenomenal properties, it is said, are physical, or physically realized, but their physical nature is not revealed to us by the concepts we apply to them in introspection. This strategy is looking tired, however. Its weaknesses are becoming evident…, and some of its leading advocates have now abandoned it. It is doubtful that phenomenal realism can be bought so cheaply, and physicalists may have to accept that it is out of their price range. Perhaps phenomenal concepts don’t simply fail to represent their objects as physical but misrepresent them as phenomenal, and phenomenality is an introspective illusion…

[Keith Frankish, Editorial Introduction, Journal of Consciousness Studies, Volume 23, Numbers 11-12, 2016, pp. 9-10]

The Round-Based Community

Yet there’s an even deeper point to be made here, which is that flatness may actually be closer to how we think about the people around us, or even about ourselves.

This is a useful observation from Alec Nevala-Lee (The flat earth society).

I’m willing to bet that perceiving others and oneself as round characters has to do with the capacity for cognitive complexity and a tolerance for cognitive dissonance. These are tendencies of the liberal-minded, although research shows that under cognitive overload, from stress to drunkenness, even the liberal-minded become conservative-minded (e.g., liberals who watched repeated video of the 9/11 terrorist attacks were more likely to support Bush’s war on terror; and, by the way, naming a conflict after a single emotion is a rather flat way of looking at the world).

Bacon concludes: “Increasingly, the political party you belong to represents a big part of your identity and is not just a reflection of your political views. It may even be your most important identity.” And this strikes me as only a specific case of the way in which we flatten ourselves out to make our inner lives more manageable. We pick and choose what else we emphasize to better fit with the overall story that we’re telling. It’s just more obvious these days.

So, it’s not only about characters but about entire attitudes and worldviews. The ego theory of the self itself encourages flatness, as opposed to the (Humean and Buddhist) bundle theory of the self. It’s interesting to note how much more complex identity has become in the modern world and how much more accepting we are of people having multiple identities than in the past. This has happened at the very same time that fluid intelligence has drastically increased, and of course fluid intelligence correlates with liberal-mindedness (correlating as well with FFM openness, MBTI perceiving, Hartmann’s thin boundary type, etc.).

Cultures have a way of taking psychological cues from their heads of state. As Forster says of one critical objection to flat characters: “Queen Victoria, they argue, cannot be summed up in a single sentence, so what excuse remains for Mrs. Micawber?” When the president himself is flat—which is another way of saying that he can no longer surprise us on the downside—it has implications both for our literature and for our private lives.

At the moment, the entire society is under extreme duress. This at least temporarily rigidifies ego boundaries. Complexity of identity becomes less attractive to the average person at such times. Still, the most liberal-minded (typically radical leftists in the US) will be better at maintaining their psychological openness in the face of conflict, fear, and anxiety. As Trump is the ultimate flat character, look to the far left for those who will represent the ultimate round character. Mainstream liberals, as usual, will attempt to play to the middle and shift with the winds, taking up flat and round in turn. It’s a battle not only of ideological but also of psychological worldviews. And whichever comes to define our collective identity will dominate our society for the coming generation.

The process is already happening. And it shouldn’t astonish us if we all wake up one day to discover that the world is flat.

It’s an interesting moment. Our entire society is becoming more complex — in terms of identity, demographics, technology, media, and on and on. This requires that we develop the capacity for roundedness, or else we fall back on the simplifying rhetoric and reaction of conservative-mindedness, with the rigid absolutes of authoritarianism being the furthest reach of flatness… and, yes, such flatness tends to be memorable (which is why it is so easy to make comparisons to someone like Hitler, who has become an extreme caricature of flatness). This is all the more reason for the liberal-minded to develop awareness of, and intellectual defenses against, the easy attraction of flat identities and worldviews, since in a battle of opposing flat characters the most conservative-minded will always win.